Is this running a custom LLM under the hood, or something else?
No, you can bring your own LLM. In the cloud, we're querying gpt-4o. Further down the roadmap, we're looking to add some fine-tuned VLMs for document parsing and extraction, but that will depend heavily on the use case.