Azure AI Foundry is a cloud-based service that provides access to models from OpenAI, Mistral AI, and others, integrated with the security and enterprise features of the Microsoft Azure platform. To get started, create an Azure AI Foundry resource in the Azure portal.
For details on OpenAI model setup, see Azure OpenAI Service configuration.
We recommend configuring GPT-4o as your chat model.
We recommend configuring Codestral as your autocomplete model. Codestral can be deployed through Azure AI Foundry as a Mistral AI model.
We recommend configuring text-embedding-3-large as your embeddings model.
Azure OpenAI currently does not offer any reranking models. See the models reference for a list of reranking models available from other providers.
If you’d like to use OpenAI models but are concerned about privacy, you can use the Azure OpenAI service, which is GDPR and HIPAA compliant.
Getting access: Apply for access to the Azure OpenAI Service through the Azure portal. Response times are typically within a few days.
Azure OpenAI Service requires a handful of additional parameters to be configured, such as a deployment name and API base URL.
To find this information in Azure AI Foundry, first select the model that you would like to connect to, then visit Endpoint > Target URI.
For example, a Target URI of https://just-an-example.openai.azure.com/openai/deployments/gpt-4o-july/chat/completions?api-version=2023-03-15-preview would map to the following:

- API base URL: https://just-an-example.openai.azure.com
- Deployment name: gpt-4o-july
- API version: 2023-03-15-preview
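If you are scripting your setup, the same mapping can be extracted from a Target URI programmatically. The sketch below uses only Python's standard library; the function name `parse_target_uri` is illustrative and not part of any Azure SDK.

```python
from urllib.parse import urlparse, parse_qs

def parse_target_uri(target_uri: str) -> dict:
    """Split an Azure OpenAI Target URI into the parameters
    typically needed for client configuration."""
    parsed = urlparse(target_uri)
    # The API base URL is just the scheme and host.
    api_base = f"{parsed.scheme}://{parsed.netloc}"
    # The deployment name is the path segment after "deployments".
    parts = parsed.path.strip("/").split("/")
    deployment = parts[parts.index("deployments") + 1]
    # The API version is passed as a query parameter.
    api_version = parse_qs(parsed.query)["api-version"][0]
    return {
        "api_base": api_base,
        "deployment": deployment,
        "api_version": api_version,
    }

uri = ("https://just-an-example.openai.azure.com/openai/deployments/"
       "gpt-4o-july/chat/completions?api-version=2023-03-15-preview")
print(parse_target_uri(uri))
```

Running this on the example Target URI above prints the three values you would copy into your configuration.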