> the conversational interface, for some reason, seems to turn off the natural skepticism that people have
n=1, but after having ChatGPT "lie" to me more than once, I'm very skeptical of it and always double-check it, whereas with something like TV or YouTube videos I still find myself getting click-baited or grifted (i.e. being less skeptical) much more easily. Any large studies about this would be very interesting.

I get irrationally frustrated when ChatGPT hallucinates npm packages / libraries that simply do not exist.
This happens… weekly for me.
"Hey chatgpt I want to integrate a slidepot into this project"
>from PiicoDev_SlidePot import PiicoDev_SlidePot
Weird how it used exactly my terminology, when the vendor usually says "Potentiometer".
Went and looked it up, and found a resource outlining that the slide pot uses the same class as the dial potentiometer.
"Hey chatgpt, I just looked it up and the slidepots actually use the same Potentiometer class as the dialpots."
*scurries to fix its stupid mistake*
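For what it's worth, the fix is just the shared class. A minimal sketch, assuming the PiicoDev module and class are both named `PiicoDev_Potentiometer` (per the resource above, it covers both form factors) and that the reading is exposed as `pot.value`:

```python
# Hedged sketch: module/class names follow the shared-class detail above;
# the .value attribute and no-arg constructor are assumptions about the API.
from PiicoDev_Potentiometer import PiicoDev_Potentiometer
from PiicoDev_Unified import sleep_ms

pot = PiicoDev_Potentiometer()  # same class for slide and dial pots

while True:
    print(pot.value)  # read the pot position
    sleep_ms(100)
```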
Weird. I used to have that happen when it first came out, but I haven't experienced anything like that in a long time. Worst case, it's out of date rather than making stuff up.
My experience with this is that it's vital to have a setup where the model can iterate on its own.
Ideally by having a test or endpoint you can call to actually run the code you want to build.
Then you ask the model to implement the function and run the test. If it hallucinates anything, it will catch the failure and fix it.
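A minimal sketch of the test side of that loop. Everything concrete here is hypothetical (`candidate.py`, `slugify`, the test body); the point is that a hallucinated package fails at import time, and the error text becomes feedback for the next attempt:

```python
import pathlib
import subprocess
import tempfile

# Hypothetical fixed test; a hallucinated import inside candidate.py
# blows up right here, at collection time.
TEST = """\
from candidate import slugify

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
"""

def run_candidate(code: str) -> tuple[bool, str]:
    """Drop the model's code plus the fixed test into a temp dir and run pytest.
    Import errors and wrong behavior both come back as text the model can read
    on its next iteration."""
    with tempfile.TemporaryDirectory() as tmp:
        tmp_path = pathlib.Path(tmp)
        (tmp_path / "candidate.py").write_text(code)
        (tmp_path / "test_candidate.py").write_text(TEST)
        result = subprocess.run(
            ["python", "-m", "pytest", str(tmp_path), "-q"],
            capture_output=True, text=True,
        )
        return result.returncode == 0, result.stdout + result.stderr
```

The feedback loop is then just: call the model, run `run_candidate` on its output, and paste the failure text into the next prompt until the test passes.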
IME, OpenAI is behind Claude and Gemini for code.
Tell it that you won't accept any newly installed packages: language features only. I have that in the coding prompt I made.
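For illustration, a sketch of such a standing instruction (the wording is mine; the commenter's actual prompt isn't shown):

```python
# Hypothetical system-prompt snippet pinning the "no new packages" rule.
SYSTEM_PROMPT = (
    "Do not install or import any packages that are not already in the "
    "project. Solve problems with language features and the standard "
    "library only. If a third-party package seems necessary, stop and "
    "say so instead of importing it."
)
```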