Roadmap

Run a local open-source LLM from Orchestrator - like Jan.ai

I want the ability to run a local LLM (with no internet access) and leverage it in Orchestrator, for end-user impact.

Think Jan.ai, which allows the end user to run Mistral, Llama 2, TinyLlama, and the like.
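Jan.ai (like several local runners) exposes an OpenAI-compatible HTTP API on the local machine, so an Orchestrator-side integration could talk to it without any internet access. A minimal sketch of what such a call might look like, assuming Jan's default endpoint at `http://localhost:1337/v1/chat/completions` and a locally downloaded model id such as `tinyllama-1.1b` (both are assumptions, not confirmed values):

```python
import json
import urllib.request

# Assumed default Jan.ai local server endpoint; adjust host/port as configured.
JAN_URL = "http://localhost:1337/v1/chat/completions"


def build_chat_request(prompt: str, model: str = "tinyllama-1.1b") -> dict:
    """Build an OpenAI-compatible chat-completion payload for a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }


def query_local_llm(prompt: str, model: str = "tinyllama-1.1b") -> str:
    """POST the prompt to the local server and return the first completion text."""
    req = urllib.request.Request(
        JAN_URL,
        data=json.dumps(build_chat_request(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Because the API shape matches OpenAI's, the same Orchestrator activity could target either a cloud model or a fully offline local one just by swapping the base URL.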