Q: A way to switch local ollama models and not break everything?
One thing I can’t find a workaround for, and that makes me sad, is that whenever you switch Ollama models, all of your assistants and snippets break because the default model changes.
Ideally I would want to be able to assign different models to different assistants, such as Codestral for my coder and llama3.2 for my general purpose one, but it seems you have to pick a single model for everything.
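For what it’s worth, Ollama’s own `/api/chat` endpoint already takes the model per request, so the backend itself doesn’t force one global default. A minimal sketch of the mapping I mean (assistant names and models here are just my own made-up examples):

```python
import json

# Hypothetical per-assistant model map -- these are just my local models.
ASSISTANT_MODELS = {
    "coder": "codestral",
    "general": "llama3.2",
}

def chat_payload(assistant: str, prompt: str) -> str:
    """Build the JSON body for a POST to Ollama's /api/chat,
    picking the model from the assistant's own mapping instead
    of one shared global default."""
    return json.dumps({
        "model": ASSISTANT_MODELS[assistant],
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    })

# Each assistant keeps its own model; switching one never touches the other.
print(json.loads(chat_payload("coder", "refactor this"))["model"])    # codestral
print(json.loads(chat_payload("general", "plan my day"))["model"])    # llama3.2
```

If assistants stored their model like this, changing the model for one assistant wouldn’t erase the defaults anywhere else.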
And if you just do it manually, changing the model as needed for coding or general use, then as soon as you switch, all your assistants and snippets get messed up and all the defaults get erased, just because you switched your local model. I would assume your power users are running local models, so I really hope this limitation is something that can be worked around, or that you have a plan for it?
Amazing vision, and I’ll still use it, but this would make it 3x better.