mooreds 6 days ago

Is the output as good?

I'd love the ability to run the LLM locally, as that would make it easier to use on non-public code.

fforflo 6 days ago

It's decent enough. But you'd probably have to use a model like llama2, which may set your GPU on fire.
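
For reference, a minimal sketch of what local inference could look like, assuming the llama-cpp-python package and a quantized Llama 2 GGUF checkpoint already downloaded to disk (the model path and prompt below are hypothetical placeholders):

    # pip install llama-cpp-python
    from llama_cpp import Llama

    # Load a locally stored, quantized Llama 2 chat model (hypothetical path).
    llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

    # Ask about a private code snippet without it ever leaving the machine.
    output = llm(
        "Q: What does this SQL query do? SELECT id FROM users WHERE active = 1; A:",
        max_tokens=128,
        stop=["Q:"],
    )
    print(output["choices"][0]["text"])

A 7B quantized model like this will run on a single consumer GPU (or slowly on CPU), which is also part of why the output tends to lag the hosted models.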