grues-dinner 1 day ago

I constantly see people reply to questions with "I asked ChatGPT for you and this is what it says" without a hint of the shame they should feel. The willingness to just accept plausible-sounding AI spew uncritically and without further investigation seems to be baked into some people.

cogman10 1 day ago

I've seen this as well, and I've seen pushback when pointing out that it's a hallucination machine that sometimes gets good results, but not always.

Way too many people think that LLMs understand the content in their dataset.

technothrasher 1 day ago

At least those folks are acknowledging the source. It's the ones who ask ChatGPT and then give the answer as if it were their own that are likely to cause more of a problem.

Ruphin 1 day ago

That sort of response seems not too different from the classic "let me google that for you". It seems to me that it is a way to express that the answer to the question can be "trivially" obtained by doing the research yourself. Alternatively, it can be interpreted as "I don't know anything more than Google/ChatGPT does".

What annoys me more about this type of response is that I feel there's a less rude way to express the same thing.

dghlsakjg 21 hours ago

"Let me google that for you" is typically a sarcastic response pointing out that someone was too lazy to look up something exceptionally easy to answer.

The ChatGPT responses seem to generally come in reply to a harder question that requires a human (not googleable), and the laziness is in the answer, not the question.

In my view, the roles are reversed: it's the person answering, not the one asking, who is wasting others' time with laziness.

rsynnott 1 day ago

It's worse, because the magic robot's output is often _wrong_.

michaelcampbell 1 day ago

Well, wrong more often. It's not like Google et al. have a monopoly on truth.

rsynnott 15 hours ago

The thing is, the magic robot's output can be wrong in very surprising/misleading/superficially-convincing ways. For instance, see the article we are commenting on; you're unlikely to find _completely imaginary court cases to cite_ by googling (and in that particular case you're likely using a specialist search engine where the data _is_ somewhat dependable, anyway).

_Everything_ that the magic robot spits out needs to be fact checked. At which point, well, really, why bother? Most people who depend upon the magic robot are, of course, not fact checking, because that would usually be slower than just doing the job properly from the start.

You also see people using magic robot output for things that you _couldn't_ Google for. I recently saw, on a financial forum, someone asking about ETFs vs investment trusts vs individual stocks, with a specific example of how much they wanted to invest (the context is that ETFs are taxed weirdly in Ireland; they're allowed to accumulate dividends without taxation, but as compensation they're subject to a special gains tax which is higher than normal CGT, and that tax is assessed as if you had sold and re-bought every eight years, even if you haven't). Someone posted a ChatGPT case study of their example (without disclosing it, tsk; they owned up to it when people pointed out that it was totally wrong).

ChatGPT, in its infinite wisdom, provided what looked like a detailed comparison with worked examples... only the timescale for the individual stocks was 20 years, while the ETFs got 8 years (it also screwed up some of the calculations and got the marginal income tax rate a few points wrong). It _looked_ like something that someone had put some work into, if you weren't attuned to that characteristic awful LLM writing style, but it made a mistake that it's hard to imagine a human ever making. Unless you worked through it yourself, you'd come out of it thinking that individual stocks were clearly a _way_ better option; the truth is considerably less clear.
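To make the timescale problem concrete, here's a rough sketch of the comparison in Python. The numbers are assumptions for illustration, not advice: a 7% annual growth rate, the commonly cited Irish rates of 41% ETF exit tax with a deemed disposal every 8 years, and 33% CGT on stocks paid once at sale. The real rules have more moving parts (credit for prior deemed-disposal payments, dividend income tax, loss relief, etc.); the point is only that both options have to be run over the _same_ horizon.

```python
# Sketch only: simplified Irish ETF deemed-disposal vs stock CGT comparison.
# All rates and the growth figure below are illustrative assumptions.

GROWTH = 0.07   # assumed annual growth rate
ETF_TAX = 0.41  # assumed ETF exit-tax rate
CGT = 0.33      # assumed capital gains tax rate on stocks
YEARS = 24      # same horizon for both; a multiple of 8 for simplicity

def etf_after_tax(principal: float, years: int) -> float:
    """Tax the gain every 8 years (deemed disposal), compounding in between."""
    value = principal
    for _ in range(years // 8):
        grown = value * (1 + GROWTH) ** 8
        value = grown - (grown - value) * ETF_TAX  # tax only the 8-year gain
    return value

def stocks_after_tax(principal: float, years: int) -> float:
    """Compound untaxed, then pay CGT once on the total gain at sale."""
    grown = principal * (1 + GROWTH) ** years
    return grown - (grown - principal) * CGT

p = 10_000
print(f"ETF after {YEARS}y:    {etf_after_tax(p, YEARS):,.0f}")
print(f"Stocks after {YEARS}y: {stocks_after_tax(p, YEARS):,.0f}")
# Comparing stocks over 20 years against an ETF over 8, as the LLM did,
# makes stocks look far better simply because they compound for longer.
```

Run over equal horizons, the gap is real but much narrower than the mismatched-timescale version suggests, which is exactly the "superficially convincing" failure mode.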

cratermoon 22 hours ago

The issue is not truth, though. It's the difference between completely fabricated but plausible text generated through a stochastic process, versus a result pointing towards writing that at least exists somewhere on the internet and can be referenced. Said source may have completely unhinged and bonkers content (Time Cube, anyone?), but it at least exists prior to the query.

AnimalMuppet 1 day ago

Go look at "The Credit Card Song" from 1974. It's intended to be humorous, but the idea of uncritically accepting anything a computer said was prevalent enough then to give the song an underlying basis.

rokkamokka 1 day ago

Shame? It's often constructive! Just treat it for what it is: imperfect information.

Macha 19 hours ago

If I wanted ChatGPT's opinion, I'd have asked ChatGPT. If I'm asking others, it's because it's too important to be left to ChatGPT's inaccuracies and I'm hoping someone has specific knowledge. If they don't, then they don't have to contribute.

distances 1 day ago

It's not constructive to copy-paste LLM slop to discussions. I've yet to see a context where that is welcome, and people should feel shame for doing that.

Gracana 22 hours ago

I see your frustration that there are people who don't share your values, but their comments already get downvoted. Take the win and move on.

BlueTemplar 19 hours ago

'member the moral panic when students started (often uncritically) using Wikipedia?

Ah, we didn't know just how good we had it...

(At least it is (was?) real humans doing the writing, you can look at the modification history, well-made articles have sources, and you can debate issues with the article on the Talk page and maybe even contribute directly to it...)

LocalH 21 hours ago

I downvote comments like that, regardless of platform, in almost all situations. They don't really contribute much to the majority of discussions.