Get started with the Azure AI Inference integrations
Azure AI Inference provides serverless API endpoints for deploying and using AI models. The Aspire Azure AI Inference integration enables you to connect to Azure AI Inference services from your applications, making it easy to call models for chat, completions, embeddings, and more.
In this introduction, you'll see how to install and use the Aspire Azure AI Inference integrations in a simple configuration. If you already have this knowledge, see Azure AI Inference Hosting integration for full reference details.
Note
To follow this guide, you must have created an Aspire solution to work with. To learn how to do that, see Build your first Aspire app.
Set up hosting integration
Although the Azure AI Inference library doesn't currently offer a direct hosting integration, you can still use it from your AppHost project. If you already have a Microsoft Foundry resource, connect to it by adding a connection string to your AppHost:
var builder = DistributedApplication.CreateBuilder(args);
var aiFoundry = builder.AddConnectionString("ai-foundry");
builder.AddProject&lt;Projects.ExampleProject&gt;()
    .WithReference(aiFoundry);
// After adding all resources, run the app...
builder.Build().Run();
The connection string is read from the AppHost's configuration, typically from User Secrets, under the ConnectionStrings section:
{
  "ConnectionStrings": {
    "ai-foundry": "Endpoint=https://{endpoint}/;DeploymentId={deploymentName}"
  }
}
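If you store the value in User Secrets, one way to set it is with the dotnet user-secrets CLI. The commands below are a sketch; run them from the AppHost project directory and substitute your resource's actual endpoint and deployment name for the placeholders:

```shell
# Enable User Secrets for the project (one time), then set the connection string.
# {endpoint} and {deploymentName} are placeholders for your Foundry resource's values.
dotnet user-secrets init
dotnet user-secrets set "ConnectionStrings:ai-foundry" "Endpoint=https://{endpoint}/;DeploymentId={deploymentName}"
```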
Tip
This is the simplest implementation of Azure AI Inference resources in the AppHost. For full details, see Azure AI Inference Hosting integration.
Set up client integration
To use Azure AI Inference from your client applications, install the 📦 Aspire.Azure.AI.Inference NuGet package in the client-consuming project:
dotnet add package Aspire.Azure.AI.Inference
In the Program.cs file of your client-consuming project, add the Azure AI Inference chat completions client:
builder.AddAzureChatCompletionsClient(connectionName: "ai-foundry")
.AddChatClient("deploymentName");
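For context, these two calls register an Azure ChatCompletionsClient and then layer the Microsoft.Extensions.AI IChatClient abstraction over it. A rough hand-wired equivalent, without Aspire, might look like the following sketch (the endpoint, key, and deployment name are illustrative placeholders, and it assumes the Azure.AI.Inference and Microsoft.Extensions.AI.AzureAIInference packages):

```csharp
// Rough, non-Aspire equivalent of the registration above; values are placeholders.
using Azure;
using Azure.AI.Inference;
using Microsoft.Extensions.AI;

var chatCompletionsClient = new ChatCompletionsClient(
    new Uri("https://{endpoint}/"),
    new AzureKeyCredential("{key}"));

// Adapt the Azure client to the Microsoft.Extensions.AI abstraction,
// defaulting requests to the given deployment.
IChatClient chatClient = chatCompletionsClient.AsIChatClient("deploymentName");
```

With Aspire, you don't write this yourself; the integration also wires up configuration, logging, and health checks for you.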
After registering the IChatClient, you can retrieve the client instance using dependency injection:
public class ExampleService(IChatClient chatClient)
{
    public async Task&lt;string&gt; GetResponseAsync(string userMessage)
    {
        var response = await chatClient.GetResponseAsync(userMessage);
        return response.Text;
    }
}
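To exercise ExampleService, you might register it in the service container and expose it from a minimal API endpoint. This is a sketch; the route and lifetime are illustrative choices, not part of the integration:

```csharp
// Illustrative wiring in the client project's Program.cs, after the
// AddAzureChatCompletionsClient registration shown earlier.
builder.Services.AddSingleton<ExampleService>();

var app = builder.Build();

// Forward a user message from the query string to the chat model.
app.MapGet("/chat", async (ExampleService example, string message) =>
    await example.GetResponseAsync(message));

app.Run();
```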
For more information on using the client integration, see Azure AI Inference Client integration.