It’d be better if it were written in C, or at least Vala. With Python you have to wait a couple hundred milliseconds for the interpreter to start, which makes it feel less native than it could be. That said, the latency of the LLM responses is much higher than that of the UI, so I guess the slowness of Python doesn’t matter.
Yeah, I agree; I've been thinking about using Rust. But ultimately it's also a GTK3 vs GTK4 problem: if we could reuse the Python interpreter already running for the applet, that would speed things up, but GTK4 doesn't support AppIndicator icons(!).
I've been pondering whether to backport to GTK3 for this sole purpose. I find that after the initial delay to start the app, its speed is okay...
Porting to Rust is not really planned because I'd lose the llm-python base - but it's still something that triggers my curiosity.
What's the startup time now with the 9950X3D, after a prior start so the .pyc files are cached in RAM?
On a laptop 7735HS under WSL2, I get 15ms for the interpreter to start and exit without any imports.
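For reference, this is roughly how I measure that number - a minimal sketch that times a bare `python3 -c pass` from a parent process (assumes `python3` is on PATH; the subprocess machinery adds a little overhead on top of the interpreter itself):

```python
# Time how long a bare Python interpreter takes to start and exit.
# Run it twice so the second run reflects warm filesystem caches.
import subprocess
import time

runs = 20
samples = []
for _ in range(runs):
    start = time.perf_counter()
    subprocess.run(["python3", "-c", "pass"], check=True)
    samples.append(time.perf_counter() - start)

print(f"min {min(samples) * 1000:.1f} ms, "
      f"mean {sum(samples) / runs * 1000:.1f} ms over {runs} runs")
```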
I've got an i5-10210U CPU @ 1.60GHz.
You've triggered my curiosity. The chat window consistently takes 2.28s to start, while the Python interpreter alone takes roughly 30ms. I'll be doing some profiling.
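My guess is most of the gap is import time rather than the UI itself. A rough sketch of how I'd check (the module names below are just the likely heavy imports here, not a confirmed list; `python3 -X importtime` gives a similar per-import breakdown with no code changes):

```python
# Profile the imports that run before the chat window can even appear.
import cProfile
import pstats

profiler = cProfile.Profile()
profiler.enable()

# Assumed suspects: PyGObject/GTK bindings and the llm library.
import gi
gi.require_version("Gtk", "4.0")
from gi.repository import Gtk  # noqa: F401
import llm  # noqa: F401

profiler.disable()
stats = pstats.Stats(profiler).sort_stats("cumulative")
stats.print_stats(15)  # show the 15 entries with the largest cumulative time
```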
I wonder! On my more modest setup it takes perhaps a couple of seconds. After that it's quite usable.