victorbjorklund 1 day ago

I don't understand how a lawyer can use AI like this and not just spend the little time required to check that the citations actually exist.

3
grues-dinner 1 day ago

I constantly see people reply to questions with "I asked ChatGPT for you and this is what it says" without a hint of the shame they should feel. The willingness to accept plausible-sounding AI spew uncritically, without further investigation, seems to be baked into some people.

cogman10 1 day ago

I've seen this as well, and I've seen pushback when pointing out that it's a hallucination machine that sometimes gets good results, but not always.

Way too many people think that LLMs understand the content in their dataset.

technothrasher 1 day ago

At least those folks are acknowledging the source. It's the ones who ask ChatGPT and then give the answer as if it were their own that are likely to cause more of a problem.

Ruphin 1 day ago

That sort of response seems not too different from the classic "let me google that for you". It seems to me that it is a way to express that the answer to the question can be "trivially" obtained yourself by doing research on your own. Alternatively it can be interpreted as "I don't know anything more than Google/ChatGPT does".

What annoys me more about this type of response is that I feel there's a less rude way to express the same thing.

dghlsakjg 20 hours ago

"Let me google that for you" is typically a sarcastic response pointing out someone's laziness in not verifying something exceptionally easy to answer.

The ChatGPT responses seem to generally come in reply to harder questions that require a human (not googleable), and the laziness is in the answer, not the question.

In my view, the roles are reversed: here it's the answerer, not the asker, who is lazily wasting other people's time.

rsynnott 1 day ago

It's worse, because the magic robot's output is often _wrong_.

michaelcampbell 1 day ago

Well, wrong more often. It's not like Google et al. have a monopoly on truth.

rsynnott 14 hours ago

The thing is, the magic robot's output can be wrong in very surprising/misleading/superficially-convincing ways. For instance, see the article we are commenting on; you're unlikely to find _completely imaginary court cases to cite_ by googling (and in that particular case you're likely using a specialist search engine where the data _is_ somewhat dependable, anyway).

_Everything_ that the magic robot spits out needs to be fact checked. At which point, well, really, why bother? Most people who depend upon the magic robot are, of course, not fact checking, because that would usually be slower than just doing the job properly from the start.

You also see people using magic robot output for things that you _couldn't_ Google for. I recently saw, on a financial forum, someone asking about ETFs vs investment trusts vs individual stocks with a specific example of how much they wanted to invest (the context is that ETFs are taxed weirdly in Ireland; they're allowed to accumulate dividends without taxation, but as compensation they're subject to a special gains tax which is higher than normal CGT, and that tax is assessed as if you had sold and re-bought every eight years, even if you haven't). Someone posted a ChatGPT case study of their example (without disclosing it, tsk; they owned up when people pointed out that it was totally wrong).

ChatGPT, in its infinite wisdom, provided what looked like a detailed comparison with worked examples... only the timescale for the individual stocks was 20 years, the ETFs 8 years (also it screwed up some of the calculations and got the marginal income tax rate a few points wrong). It _looked_ like something that someone had put some work into, if you weren't attuned to that characteristic awful LLM writing style, but it made a mistake that it's hard to imagine a human ever making. Unless you worked through it yourself, you'd come out of it thinking that individual stocks were clearly a _way_ better option; the truth is considerably less clear.
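For readers unfamiliar with the Irish regime being described, a rough sketch of the comparison ChatGPT botched might look like this. The deemed disposal every eight years is the rule the comment above describes; the specific figures (41% exit tax, 33% CGT, 6% annual growth, a €10,000 stake, 20 years) are illustrative assumptions, and crediting deemed-disposal tax against the final bill is simplified away:

```python
# Simplified model: at each deemed disposal the unrealised gain is
# taxed and the cost basis resets to the post-tax value. (In reality
# tax already paid is credited against later liabilities; this sketch
# ignores that.) All rates here are assumptions for illustration.
def etf_after_tax(principal, rate=0.06, years=20, exit_tax=0.41):
    value, cost_basis = principal, principal
    for year in range(1, years + 1):
        value *= 1 + rate
        if year % 8 == 0:  # deemed disposal: tax the unrealised gain now
            value -= exit_tax * (value - cost_basis)
            cost_basis = value
    # final sale: tax whatever gain accrued since the last deemed disposal
    value -= exit_tax * max(value - cost_basis, 0)
    return value

def shares_after_tax(principal, rate=0.06, years=20, cgt=0.33):
    # Directly held shares: gains compound untaxed until one sale at the end
    value = principal * (1 + rate) ** years
    return value - cgt * (value - principal)

print(f"ETF after tax:    {etf_after_tax(10_000):,.2f}")
print(f"Shares after tax: {shares_after_tax(10_000):,.2f}")
```

The point of running both over the _same_ 20-year horizon is exactly the step the LLM fumbled: compare like with like, and the gap between the two options is real but much narrower than a 20-years-vs-8-years comparison would suggest.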

cratermoon 22 hours ago

The issue is not truth, though. It's the difference between completely fabricated but plausible text generated through a stochastic process and a search result pointing towards writing that at least exists somewhere on the internet and can be referenced. Said source may have completely unhinged and bonkers content (Time Cube, anyone?), but it at least existed prior to the query.

AnimalMuppet 1 day ago

Go look at "The Credit Card Song" from 1974. It's intended to be humorous, but the idea of uncritically accepting anything a computer said was prevalent enough then to give the song an underlying basis.

rokkamokka 1 day ago

Shame? It's often constructive! Just treat it for what it is, imperfect information.

Macha 19 hours ago

If I wanted ChatGPT's opinion, I'd have asked ChatGPT. If I'm asking others, it's because it's too important to be left to ChatGPT's inaccuracies and I'm hoping someone has specific knowledge. If they don't, then they don't have to contribute.

distances 1 day ago

It's not constructive to copy-paste LLM slop to discussions. I've yet to see a context where that is welcome, and people should feel shame for doing that.

Gracana 22 hours ago

I see you're frustrated that people exist who don't share your values, but their comments already get downvoted. Take the win and move on.

BlueTemplar 19 hours ago

'member the moral panic when students started (often uncritically) using Wikipedia ?

Ah, we didn't know just how good we had it...

(At least it is (was?) real humans doing the writing, you can look at the modification history, well-made articles have sources, and you can debate issues with the article on the Talk page and maybe even contribute to it directly...)

LocalH 21 hours ago

I downvote comments like that, regardless of platform, in almost all situations. They don't really contribute much to the majority of discussions.

daymanstep 1 day ago

You could probably use AI to check that the citations exist.

insin 1 day ago

The multiplying of numbers less than 1 together will continue until 1 is reached.
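The quip can be made literal: if each model pass is independently right only some fraction of the time, chaining a second model to "check" the first multiplies those fractions, so the combined reliability only falls. (The 0.9 per-pass figure below is an assumed number purely for illustration.)

```python
# Assumed per-pass reliability of 0.9 (illustrative, not measured).
# Each added "checker" pass multiplies the reliabilities together;
# a product of numbers less than 1 only shrinks, never reaching 1.
reliability = 1.0
for n_passes in range(1, 6):
    reliability *= 0.9
    print(f"{n_passes} pass(es): {reliability:.4f}")
```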

cobbal 21 hours ago

Clearly we just need to invent a "-2" AI

whatever1 1 day ago

And if they don't exist, the AI will make up some for you.

3036e4 1 day ago

Maybe someone can make a browser extension that does not take 404 for an answer but just silently makes up something plausible?

eviks 1 day ago

It's not "a little time"

blululu 1 day ago

The Judge spent the time to do exactly this. Judges are busy. Their time is valuable. The lawyer used AI to make the judge do work. The lawyer was too lazy to do the verification work that they expected the judge to perform. This speaks to a profound level of disrespect.

cbfrench 21 hours ago

I highly doubt the judge was tracking down citations or reading those cited cases herself to verify what was in them. They have law clerks for that. It doesn’t make it any less an egregious waste of the court’s time and resources, but I would be surprised if a district court judge is personally doing much, if any, of that sort of spadework.

victorbjorklund 18 hours ago

Checking if a case exists or not is little time in the context of legal research.

eviks 9 hours ago

Ok, now do this for every other mistake type mentioned in the article, and you've got yourself a case!

dwattttt 1 day ago

Perhaps not, but it is the time required to discharge their obligation under Rule 11 of the Federal Rules of Civil Procedure (IANAL).

bombcar 1 day ago

It’s “paralegal time” which is nearly free …

dghlsakjg 20 hours ago

Courts are not allocated an unlimited budget for clerks.

Outside of the literal dollar cost, the opportunity cost here is further delays on the docket because the clerk was unable to do something else, and the court time that must now be spent dealing with the issue.

eviks 1 day ago

First, you're confusing time with money.

Second, the mistakes weren't just incorrect citations that any paralegal could check.

rsynnott 1 day ago

> Second, the mistakes weren't just incorrect citations any paralegal could check

... Some of the 'mistakes' (strictly speaking they are not mistakes, of course) are _citations of cases which do not exist_.

eviks 1 day ago

... just ...