exhilaration 6 days ago

Local LLMs are almost here, no Internet needed!

mystraline 6 days ago

Almost?

I've been running a programming LLM locally with a 200k context length, using system RAM.
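
For anyone curious, talking to it is unremarkable. Here's a minimal sketch, assuming an OpenAI-compatible endpoint like the one llama.cpp's llama-server or Ollama exposes (the port and model name are placeholders for my setup, not something you can copy verbatim):

    import requests

    # Query the locally served model; nothing leaves the machine.
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # local server port (assumed)
        json={
            "model": "local-coder",  # placeholder model name
            "messages": [
                {"role": "user", "content": "Write a binary search in C."}
            ],
            "max_tokens": 512,
        },
        timeout=300,
    )
    print(resp.json()["choices"][0]["message"]["content"])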

It's also an abliterated model, so I get none of the moralizing or forced ethics either. I ask, and it answers.

I even have it hooked up to my HomeAssistant, and can trigger complex actions from there.
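
The HomeAssistant side is just its REST API with a long-lived access token. A rough sketch of firing a service call from the model's output (URL, token, and entity ID here are placeholders):

    import requests

    HA_URL = "http://homeassistant.local:8123"  # common default; yours may differ
    TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # created under your HA user profile

    # Trigger an action the model decided on, e.g. turning on a light.
    requests.post(
        f"{HA_URL}/api/services/light/turn_on",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"entity_id": "light.living_room"},  # placeholder entity
        timeout=10,
    )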

robotnikman 5 days ago

What model are you using and what kind of hardware are you running it on?