> It can help us, just like a calculator can help us solve an equation.
A calculator is consistent and doesn’t “hallucinate” answers to equations. An LLM puts an untrustworthy filter between the truth and the person. Google was revolutionary because it increased access to information. LLMs only obscure that access, while pretending to be something more.
I'm not a native English speaker, so I've used it for an essay where we were told to target a certain word count. I was close, but the verbiage needed to reach that word count doesn't come naturally to me. So I gave Gemini the text and told it to rewrite it targeting that word count (my only prompt). Then I reviewed the answer, rewriting wherever it strayed from the points I was making.
I've also used it for a few programming tasks I was pretty sure were well represented in the training data (how to draw charts with Python and how to manipulate pandas DataFrames). I know the domain, but wasn't in the mood to dig through the docs for the implementation details. The information I was seeking was just a few lines of sample code; in my experience, anything longer is pretty inconsistent, with worthless explanations.
Word count targets are a rough guideline for how much detail is expected; adding more useless filler is the last thing you want.