Is this what gets downloaded in IntelliJ? Why is it so much slower?

#9
by deusaquilus - opened

Is this the same thing as the model that gets downloaded in IntelliJ when I go to Tools -> AI Assistant and download the local model?
Why is it so much slower when I run it in Ollama?

JetBrains org

Hi @deusaquilus , not sure I understood your question correctly, but if you’re referring to inline completion local models (full line), then no, they’re not the same. The local models we currently use are much smaller than Mellum - around 100-400M parameters.
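
If you want to confirm what you're actually running in Ollama, a quick sanity check is to query the local Ollama server's /api/show endpoint, which reports the parameter count and quantization level. This is just a minimal sketch; the model tag "mellum" below is a placeholder for whatever tag you pulled.

```python
import json
import urllib.request

# Ask the local Ollama server (default port 11434) for details of a model.
# "mellum" is a placeholder -- substitute whatever tag you pulled.
req = urllib.request.Request(
    "http://localhost:11434/api/show",
    data=json.dumps({"model": "mellum"}).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    info = json.load(resp)

# The "details" block includes parameter_size and quantization_level,
# enough to see how much larger this model is than the ~100-400M
# full-line completion models bundled with the IDE.
print(info["details"]["parameter_size"], info["details"]["quantization_level"])
```

The size difference alone would explain the slowdown: a model roughly 10-40x larger than the bundled full-line models will naturally be much slower on the same hardware.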
