Can you open up the options to use other models/versions, especially the Gemini 2.5 Pro Experimental models available through AI Studio? I would love to try this, but Gemini Flash fails even on simple tasks. For example, I asked it to extract all the links from the comment section of a Hacker News post, and it just scrolled all the way to the end and then did nothing. Maybe the Pro models can do better.
"Gemini Flash fails even for simple tasks." On the Gemini Flash page (https://deepmind.google/technologies/gemini/flash/), it claims to be 'best for fast performance on complex tasks.'. I always use Gemini Flash in my project for demos and testing, and it performs very well, if a project requires a large, expensive model to handle simple tasks, that could be an issue to users.
Gemini 2.5 Pro is available. Is it missing on your side? Do you run index via the CLI?
Yes, it is. However, API keys from AI Studio only allow the Pro Experimental model. So if I select gemini-pro, I see this: "Gemini 2.5 Pro Preview doesn't have a free quota tier. Please use Gemini 2.5 Pro Experimental (models/gemini-2.5-pro-exp-03-25) instead". Can I choose the exact model somewhere in the CLI?
Oh I see, I didn't know about that. The fastest and easiest thing you can do is play around with Pro via our chat UI at https://lmnr.ai/chat - it's free for up to 10 messages.
For the CLI and custom models, you can clone the repo, open cli.py, and manually add your model there (a rough sketch is below). I will work on proper support for custom models.
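For illustration only, here is a minimal sketch of that manual edit. It assumes cli.py keeps the selectable models in a simple mapping; the name MODEL_CHOICES and the existing entry are placeholders and the real file will be organized differently, but the idea is to register the experimental model ID from the quota error message above:

```python
# Hypothetical sketch only -- the actual cli.py in the repo is likely structured differently.
# Assumes the CLI keeps a dict mapping selectable model names to API model IDs.
MODEL_CHOICES = {
    "gemini-flash": "models/gemini-2.0-flash",  # placeholder for an existing entry
    # Manually added: the free-tier experimental model from AI Studio,
    # using the ID quoted in the error message above.
    "gemini-pro-exp": "models/gemini-2.5-pro-exp-03-25",
}
```

Whatever option the CLI uses to pick a model should then accept the new key, assuming the rest of the code reads from that mapping.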
extremely slow