It does my head in how we ignore the shit answers language models give (I refuse to call them intelligent).
I swear unless it’s baby shit, like where is the syntax error in this script, it almost always gets it wrong.
Making fresh scripts, even with pedantic-level prompting and detail, just ends up with a script with multiple errors.
I’ve realised I would have written it just as quickly (after all the iteration work) if I had just done it myself.
The only thing I find LLMs are good for is being glorified search engines. And even then it’s horrifying how inefficient ChatGPT is for search compared to, say, Google.
Hell, AI does that already
You’re right to push back! I removed your lung, not your appendix.
It doesn’t even have to measure it first!
Not to mention it’s run by a sociopath.
The most tragic and obvious example of AI already doing that is the school in Iran that was bombed because, at best, AI messed up.