As an attorney, I’ve found that this isn’t the issue it was a year ago.
1. Use reasoning models, and include an instruction in the prompt to check the cited cases and verify the holdings.
2. Take the draft, run it through ChatGPT deep research, Gemini deep research, and Claude, and tell each to verify the holdings (a rough sketch of this cross-check step is included below).
I still double check, for now, but this is catching every hallucination.
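For anyone curious what that cross-check pass looks like outside the web apps, here is a minimal sketch using the standard openai, anthropic, and google-generativeai Python SDKs. It calls the ordinary chat endpoints as a stand-in for the "deep research" products, and the model names, prompt wording, and file name are placeholders, not my exact setup.

    # Minimal sketch: ask several models to independently re-verify every
    # citation in a draft brief. Assumes OPENAI_API_KEY, ANTHROPIC_API_KEY,
    # and GOOGLE_API_KEY are set; model names are placeholders.
    import os

    import anthropic
    import google.generativeai as genai
    from openai import OpenAI

    VERIFY_PROMPT = (
        "You are checking a draft legal brief for citation errors. List every "
        "case cited, the proposition it is cited for, and whether you can "
        "verify that the case exists and actually supports that holding. "
        "Flag anything you cannot verify as SUSPECT.\n\nDRAFT:\n{draft}"
    )

    def check_with_openai(prompt: str) -> str:
        resp = OpenAI().chat.completions.create(
            model="o3",  # placeholder reasoning model
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    def check_with_claude(prompt: str) -> str:
        msg = anthropic.Anthropic().messages.create(
            model="claude-sonnet-4-20250514",  # placeholder
            max_tokens=4000,
            messages=[{"role": "user", "content": prompt}],
        )
        return msg.content[0].text

    def check_with_gemini(prompt: str) -> str:
        genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
        model = genai.GenerativeModel("gemini-1.5-pro")  # placeholder
        return model.generate_content(prompt).text

    if __name__ == "__main__":
        with open("draft_brief.txt") as f:  # hypothetical file name
            prompt = VERIFY_PROMPT.format(draft=f.read())
        # Read the three reports side by side; anything any model flags as
        # SUSPECT still gets pulled and read by a human.
        for name, check in [("openai", check_with_openai),
                            ("claude", check_with_claude),
                            ("gemini", check_with_gemini)]:
            print(f"--- {name} ---")
            print(check(prompt))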
Thanks for giving us the reality check. Distasteful as some here find it, squarely presenting the facts of what is surely becoming common practice is a service to the public.
With the Court's reply to Lindell, you now have an independent test case against which to run your verification process and compare results with a "rival implementation" -- the Court's. One wonders if it may be AI-assisted as well. I'd be quite interested in hearing how the two stack up.
> this isn’t the issue it was a year ago
From the article, it looks like this brief was dated Feb 25 this year.
> still double check, for now
Whew, that's 4 LLM inference requests and still requires manual checking. Criminal levels of waste and inefficiency. Learn how to use LexisNexis, spend some time in a law library handling actual physical casebooks. Learn to do your job.
Even with checking, it turns a 3-day brief into a 4-hour brief.
And part of the process is to do some research first: find the key cases and the briefs of better lawyers on the same issue, and include them in the context.
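For concreteness, here is a minimal sketch of that context-building step, assuming the key cases and reference briefs are saved as local text files; the function name, file layout, and prompt wording are illustrative only, not my actual process.

    # Sketch: prepend researched material to the drafting prompt so the model
    # works from real cases and real briefs rather than from memory.
    from pathlib import Path

    def build_context(case_files: list[str], brief_files: list[str], issue: str) -> str:
        cases = "\n\n".join(Path(p).read_text() for p in case_files)
        briefs = "\n\n".join(Path(p).read_text() for p in brief_files)
        return (
            f"ISSUE:\n{issue}\n\n"
            f"KEY CASES (full text):\n{cases}\n\n"
            f"BRIEFS BY OTHER LAWYERS ON THE SAME ISSUE:\n{briefs}\n\n"
            "Draft a brief on the issue above. Cite only the cases provided, "
            "and quote holdings rather than paraphrasing from memory."
        )

    # Example usage (file names are hypothetical):
    # prompt = build_context(["smith_v_jones.txt"], ["model_brief.txt"],
    #                        "Whether the district court erred in ...")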
And the time savings are passed on to the clients?
Doubtful, because GP has to pay the subscription fees for all the LLMs he's employing. I know ChatGPT Pro for deep research is $200/month, Gemini's deep research is (I think) $20/month for now, and Claude Pro is $20/month. Cheap compared to lawyer rates, but I doubt they'll stay cheap.
LexisNexis rates vary quite a lot, but $200/month for a small law firm is in the ballpark.