Embed model role
An “embeddings model” is trained to convert a piece of text into a vector, which can later be rapidly compared to other vectors to determine similarity between the pieces of text. Embeddings models are typically much smaller than LLMs, making them extremely fast and cheap in comparison.
In Continue, embeddings are generated during indexing and then used by @Codebase to perform similarity search over your codebase.
You can add embed to a model’s roles to specify that it can be used to embed.
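For example, a model entry in config.yaml can declare the embed role like this (a minimal sketch; the name, provider, and model values are placeholders to replace with your own):

```yaml
models:
  - name: My Embeddings Model # placeholder display name
    provider: <provider> # e.g. ollama, openai, etc.
    model: <model> # the embeddings model to use
    roles:
      - embed # allows this model to be used for indexing/embeddings
```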
Built-in model (VS Code only)
transformers.js is used as a built-in embeddings model in VS Code. In JetBrains, there is currently no built-in embedder.
If you have the ability to use any model, we recommend voyage-code-3, which is listed below along with the rest of the options for embeddings models.
If you want to generate embeddings locally, we recommend using nomic-embed-text with Ollama.
After obtaining a Voyage AI API key here, you can configure it like this:
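A minimal sketch of such an entry, assuming the provider slug voyage (verify the exact value against the provider reference) and a placeholder API key:

```yaml
models:
  - name: Voyage Code 3
    provider: voyage # assumed provider slug; check the provider docs
    model: voyage-code-3
    apiKey: <YOUR_VOYAGE_API_KEY> # replace with your key
    roles:
      - embed
```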
See here for instructions on how to use Ollama for embeddings.
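As a sketch, a typical Ollama embeddings entry might look like the following (this assumes the model has already been pulled locally, e.g. with ollama pull nomic-embed-text):

```yaml
models:
  - name: Nomic Embed Text
    provider: ollama
    model: nomic-embed-text # must be available in your local Ollama instance
    roles:
      - embed
```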
Transformers.js is a JavaScript port of the popular Transformers library. It allows embeddings to be calculated entirely locally. The model used is all-MiniLM-L6-v2, which is shipped alongside the Continue extension.
Hugging Face Text Embeddings Inference enables you to host your own embeddings endpoint. You can configure embeddings to use your endpoint as follows:
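A minimal sketch, assuming the provider slug huggingface-tei and a TEI server listening on its default local port (adjust apiBase to wherever your endpoint is hosted):

```yaml
models:
  - name: Huggingface TEI Embedder
    provider: huggingface-tei # assumed provider slug; check the provider docs
    apiBase: http://localhost:8080 # your self-hosted TEI endpoint
    roles:
      - embed
```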
See here for instructions on how to use OpenAI for embeddings.
See here for instructions on how to use Cohere for embeddings.
See here for instructions on how to use Gemini for embeddings.
See here for instructions on how to use Vertex for embeddings.
See here for instructions on how to use Mistral for embeddings.
See here for instructions on how to use NVIDIA for embeddings.
See here for instructions on how to use Bedrock for embeddings.
See here for instructions on how to use WatsonX for embeddings.
See here for instructions on how to use LM Studio for embeddings.