Subject: Re: Ollama Configuration & Model Selection

Thank you for your thoughtful feedback and suggestions regarding the Graphite script's Ollama integration. I'm pleased to report that I'm actively implementing model selection functionality, and the upcoming update will include it.

This feature has been highly requested by the community, and I've dedicated significant development time to making sure it meets user needs. The update will be released tomorrow along with a newly refactored repository structure. Thank you for taking the time to submit this request and to discuss improving Graphite; feedback like yours is invaluable in guiding the project's development.

Best regards,
Hi @dovvnloading,
I've been exploring your Graphite script and noticed that the Ollama connection is hard-coded to a local instance with a single model. It worked great in my local tests, but I'm curious about a more flexible setup:
- Could you add a configuration layer (env vars, a simple config file, or CLI flags) that lets users specify the Ollama endpoint and model, e.g. to point at another Ollama instance on a machine with a bigger GPU?
- Would it be possible to expose this directly in the Graphite interface, so users can choose the model or switch to a remote Ollama instance without editing the source?
- What would be the best way to handle multiple models or different providers in the future?
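To make the first point concrete, here's a minimal sketch of what such a configuration layer could look like, assuming the script is Python. The defaults, env var names, and flag names below are just illustrative, not taken from the actual Graphite code (though `OLLAMA_HOST` is the variable the Ollama tooling itself conventionally honors):

```python
import argparse
import os

# Hypothetical defaults standing in for the currently hard-coded values;
# the real script's endpoint and model name may differ.
DEFAULT_HOST = "http://localhost:11434"
DEFAULT_MODEL = "llama3"

def load_config(argv=None):
    """Resolve Ollama settings with a simple precedence:
    CLI flags override env vars, which override built-in defaults."""
    parser = argparse.ArgumentParser(description="Graphite Ollama settings (sketch)")
    parser.add_argument(
        "--ollama-host",
        default=os.environ.get("OLLAMA_HOST", DEFAULT_HOST),
        help="Base URL of the Ollama instance (local or remote)",
    )
    parser.add_argument(
        "--model",
        default=os.environ.get("OLLAMA_MODEL", DEFAULT_MODEL),
        help="Name of the model to run",
    )
    return parser.parse_args(argv)

# Example: point the script at a remote box with a bigger GPU.
cfg = load_config(["--ollama-host", "http://gpu-box:11434", "--model", "mistral"])
print(cfg.ollama_host, cfg.model)
```

Dropping something like this in would keep the current behavior unchanged for anyone running with no flags and no env vars, while letting others retarget the script without touching the source.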
I’d love to hear your thoughts on these ideas and whether you’d be open to adding this feature. Let’s discuss!