Thanks for the feedback. Optimizing for speed meant we had fewer LLMs to choose from. OpenAI had surprisingly high variance in latency, which made it unusable for this demo. I think we could probably do a better job with prompting for some of the characters.
Do you know the trick of having Gemini or Mistral re-jigger the prompt?
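The idea is a two-step call: one model rewrites the character prompt, then the main model runs it. A rough sketch, assuming Mistral's OpenAI-compatible chat endpoint; the model names, keys, and character prompt here are just placeholders, not whatever the demo actually uses:

```python
import os
from openai import OpenAI

# Mistral exposes an OpenAI-compatible chat API, so the same client class
# works for both the "rewriter" and the model that serves the demo.
rewriter = OpenAI(api_key=os.environ["MISTRAL_API_KEY"],
                  base_url="https://api.mistral.ai/v1")
runner = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

def rejigger(prompt: str) -> str:
    """Have the rewriter model tighten up a character prompt."""
    resp = rewriter.chat.completions.create(
        model="mistral-large-latest",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Rewrite the following character prompt to be clearer "
                        "and more specific. Return only the rewritten prompt."},
            {"role": "user", "content": prompt},
        ],
    )
    return resp.choices[0].message.content

# Rewrite the character prompt once, then use it as the system prompt.
character_prompt = rejigger("You are a grumpy medieval innkeeper...")
reply = runner.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "system", "content": character_prompt},
              {"role": "user", "content": "Got any rooms free tonight?"}],
)
print(reply.choices[0].message.content)
```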
Also, you do realize that this will be used to defraud people of money and/or property, right?
All about coulda, not shoulda.
> Also, you do realize that this will be used to defraud people of money and/or property, right?
Sadly, no one cares. LLM-driven fraud is already happening, and it is about to become even more profitable.