The Promise of .NET Aspire: A Deep Dive into Orchestrating AI Solutions

Published 2/28/2024 9:30:50 AM
Filed under .NET

For the last few months, I've been working on a generative AI tool for my colleagues to use for a wide range of use cases. It's a chatbot with an integrated semantic search engine that can answer questions based on internal data sources.

I started this journey alone as an experiment to see if a digital assistant was something I could build, but it's grown quite big by now. Over the past few months, a few people helped me expand the chatbot and prepare it for more use cases.

One of the challenges we ran into was that deploying the digital assistant was hard. We have Python components, a vector database, a JavaScript frontend, and .NET code to deploy. We spent a lot of time getting all of these components into Docker containers and safely into Azure.

Wouldn't it be nicer if we could focus on improving the product instead of spending time managing Azure resources? There are quite a few options out there that can help. One of them is .NET Aspire, which is arguably the newest and least mature of the bunch. But it looked promising, so I ran an experiment.

In this blog post, we'll explore what benefits .NET Aspire currently provides and if it's something that you should try for your next project. Let's get started.

What is .NET Aspire?

Building distributed applications can be challenging because of the number of moving parts. .NET Aspire aims to make this easier through several features: orchestration support, automatic service discovery, and sensible service defaults. Let's take a look at these features and discuss their benefits.

Orchestration

One of the aspects of distributed applications that is hard to manage is how services and resources fit together to form one cohesive solution. In many cases, you're juggling several scripts spread across your source control.

In .NET Aspire, you write orchestration logic in C# as part of an AppHost project. You describe the various services and components of your distributed application as resources. This moves all the work of composing the solution into a single location, as code.
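To give a feel for what that looks like before we dive into my experiment, here's a minimal sketch of an AppHost Program.cs. The Redis cache and frontend project are purely illustrative; the actual setup for our assistant follows later in this post.

var builder = DistributedApplication.CreateBuilder(args);

// Infrastructure and applications are both described as resources in one place.
// The cache and Frontend project are illustrative examples, not part of the assistant.
var cache = builder.AddRedisContainer("cache");

builder.AddProject<Frontend>("frontend")
    .WithReference(cache);

builder.Build().Run();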

Service discovery

Another challenge in distributed applications is service discovery. Often, you'll find yourself registering service endpoints in various configuration files. When the location of a single service changes, you'll need to run around the environment to fix the configuration files.

In .NET Aspire, you can use service discovery to locate services automatically based on how they are deployed. The implementation varies depending on where you've deployed your solution, but you won't notice this in your application code.

Service defaults for monitoring and telemetry

Monitoring services in a distributed application can make a big difference when managing the solution. There are several possible solutions for monitoring. For example, we're using Application Insights for our digital assistant. It provides alerting, tracing, logging, and performance information.

Setting up something like Application Insights isn't complicated, but having a ready-made solution is nice. When you use .NET Aspire, you can use a service defaults feature that provides a shared code base with settings for monitoring, health endpoints, and configuration.
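For completeness, this is roughly how the service defaults are wired into a service, assuming the shared ServiceDefaults project that the Aspire templates generate:

var builder = WebApplication.CreateBuilder(args);

// Pulls in the shared defaults: OpenTelemetry, default health checks, service discovery,
// and resilience settings for outgoing HTTP calls.
builder.AddServiceDefaults();

var app = builder.Build();

// Exposes the default health check endpoints configured by the shared project.
app.MapDefaultEndpoints();

app.Run();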

Default components

A final but no less important aspect of .NET Aspire is the availability of ready-made components that work with how .NET Aspire handles configuration, service discovery, and orchestration.

Aspire has pre-configured components for databases, messaging, caching, security, and storage that plug into your project with very little configuration. They are available for C# only, so this feature isn't as useful if you have Python components or use JavaScript in parts of your solution.
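As an illustration, if the conversations API you'll see later had used the PostgreSQL component instead of registering its DbContext by hand, the registration could look roughly like this, assuming the Aspire.Npgsql.EntityFrameworkCore.PostgreSQL component package:

// "conversationsDb" matches the connection name that the orchestrator injects into the API.
builder.AddNpgsqlDbContext<ApplicationDbContext>("conversationsDb");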

Using .NET Aspire to deploy an AI-infused solution

.NET Aspire is an opinionated framework that promises much value if you follow the rules. Let's look at how things played out in my experiment.

Setting up a basic application host

The main feature I'm interested in is the orchestration in .NET Aspire. If you set the AI part aside, connecting all the components in the solution is the hardest part.

To set up an application host, run dotnet new aspire-apphost -n <name> in a terminal. Or you can create a new project in Visual Studio if you prefer that.

You'll end up with a project that has the following content in Program.cs:

var builder = DistributedApplication.CreateBuilder(args);

builder.Build().Run();

The DistributedApplicationBuilder is the entry point that we'll use to orchestrate the different components in the application.

To run the solution, you execute dotnet run in the terminal or start the project in Visual Studio. You'll need Docker running for any container resources.

Let's look at how to run our solution in the orchestrator.

Running .NET projects

Since .NET Aspire is focused on .NET, let's start by looking at running a .NET project in Aspire. The digital assistant has a conversations API responsible for managing conversations and user profile information.

To run the conversations API, I added a project reference from the application host project to the conversations API. Next, I added the following code to the orchestration project:

var databaseServer = builder.AddPostgresContainer("postgresDatabase");
var conversationsDatabase = databaseServer.AddDatabase("conversationsDb");

var conversationsApi = builder
    .AddProject<SmartAssist_Conversations>("conversationsApi")
    .WithReference(conversationsDatabase);

The first two lines configure a Postgres server and a database on it. The remaining lines add the conversations API as a project resource to the orchestrator and reference the database. The conversations API automatically receives a connection string named conversationsDb that I can reference inside the API. The following code demonstrates how to connect to the database from the conversations API:

builder.Services.AddDbContext<ApplicationDbContext>(options =>
{
    var connectionString = builder.Configuration.GetConnectionString("conversationsDb");
    options.UseNpgsql(connectionString, server => server.EnableRetryOnFailure());
});

Setting up .NET projects in Aspire is straightforward. The orchestrator logic proves valuable here because I keep forgetting about user secrets for database connections. The orchestrator automatically injects the right passwords into the app and the database so I can't forget.

So far, it looks promising, but what about the more complicated parts of the solution? Let's explore how I handled the integration between the conversations API and the LLM pipeline of the chat assistant.

Using custom resources

Parts of our digital assistant run on Python. We split the LLM pipeline into a separate application so we could version the pipeline independently of the rest of the solution. The LLM pipeline orchestrates how we call the Azure OpenAI Service to get a response to a user's prompt (request). It has a set of instructions on how the assistant should behave and what topics are off-limits.

Every time we deploy a new version of the LLM pipeline, we run various evaluations to check that everything is responding as expected. Running the pipeline as a separate component makes quickly replacing it easier if we find problems.

We use BentoML to host the LLM pipeline because it provides a neat streaming interface. It also generates an OpenAPI specification based on our code. It saves a lot of time and scales really well.

You can't run BentoML-based applications by default in .NET Aspire. You need to wrap them as executable resources. I started out with the following code in my orchestrator:

var llmPipeline = builder.AddExecutable("llmPipeline", "poetry", args: new string[] { "run", "bentoml", "serve" }, workingDirectory: "../apps/chat");

This code runs the LLM pipeline as a BentoML service, but I can't reference it from the conversations API resource because it's not recognized as a resource with a callable endpoint. To reference one resource from another, the referenced resource needs to implement IResourceWithEndpoints, IResourceWithServiceDiscovery, or IResourceWithConnectionString.

To create a resource with an endpoint, service discovery, or a connection string, you'll need to provide a custom resource type. In my case, I created a BentoMLResource:

public class BentoMLResource : ExecutableResource, IResourceWithServiceDiscovery
{
    public BentoMLResource(string name, string command, string workingDirectory, string[]? args) : base(name, command, workingDirectory, args)
    {
    }
}

In addition to the resource definition, I also had to write an extension method to construct the resource. The following code shows what the extension method looks like:

public static IResourceBuilder<BentoMLResource> AddBentoML(this IDistributedApplicationBuilder builder, string name, string workingDirectory, int hostPort, bool usePoetry = false)
{
    var portVariableReference = RuntimeInformation.IsOSPlatform(OSPlatform.Windows) ? "%PORT%" : "${PORT}";

    // We use poetry to run our Python projects. So, we included this flag to change the command based on how the project is set up.
    // The use of poetry is pretty much mandatory in our projects because we have virtual environments.
    var command = usePoetry ? "poetry" : "bentoml";

    var commandLineArgs = usePoetry
        ? new string[] { "run", "bentoml", "serve", "--port", portVariableReference }
        : new string[] { "serve", "--port", portVariableReference };

    return builder
        .AddResource(new BentoMLResource(name, command, workingDirectory, commandLineArgs))
        .WithEndpoint(scheme: "http", hostPort: hostPort, env: "PORT");
}

A new BentoMLResource is created in this extension method with a specific command line that varies based on whether we use the Poetry package manager in Python. We attach an endpoint to the resource so it becomes discoverable.

Once I made the custom resource, I was able to use it in the orchestrator logic like so:

var llmPipeline = builder
    .AddBentoML(
        name: "llmPipeline",
        workingDirectory: "../apps/chat",
        hostPort: 5001,
        usePoetry: true);

Thanks to my custom resource, I can now reference llmPipeline from the conversations API:

var conversationsApi = builder
    .AddProject<SmartAssist_Conversations>("conversationsApi")
    .WithReference(conversationsDatabase)
    .WithReference(llmPipeline);

To call the LLM pipeline, I need to run the following code in the conversations API:

builder.Services.AddHttpClient<LLMPipelineClient>(static client => client.BaseAddress = new("http://llmPipeline"));

This code registers a typed HTTP client for the LLM pipeline in the application. The llmPipeline host name is resolved to the actual endpoint URL at runtime through service discovery.
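For reference, here's a hypothetical sketch of what such a typed client could look like; the method name, endpoint path, and payload shape are assumptions for illustration, not our actual implementation:

public class LLMPipelineClient
{
    private readonly HttpClient _httpClient;

    public LLMPipelineClient(HttpClient httpClient)
    {
        _httpClient = httpClient;
    }

    public async Task<string> GenerateReplyAsync(string prompt, CancellationToken cancellationToken = default)
    {
        // The relative path and payload are illustrative; the request resolves against
        // the service-discovered base address http://llmPipeline.
        var response = await _httpClient.PostAsJsonAsync("/generate", new { prompt }, cancellationToken);
        response.EnsureSuccessStatusCode();

        return await response.Content.ReadAsStringAsync(cancellationToken);
    }
}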

As you can imagine, Aspire isn't very mature as a product at this point; it has only had three preview versions. You'll find that you have to create a lot of custom resources if you have a more exotic combination of components in your distributed application.

Most custom resources you'll write are based on container or executable resources, which are already available in the base libraries. Writing the custom logic for your resources takes some code, but it's not an impossible effort.
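For example, a container-based custom resource follows the same pattern as the executable-based BentoMLResource above. This QdrantResource is a hypothetical sketch, not something we use in the assistant:

// Hypothetical example: wrapping a container image as a discoverable resource.
public class QdrantResource : ContainerResource, IResourceWithServiceDiscovery
{
    public QdrantResource(string name) : base(name)
    {
    }
}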

Deploying the solution

I figured it would be nice to try deploying the solution to Azure to see how that would work out. However, it was the end of a long evening, and I ran into several stability issues, so I'll save that part for another time. After this experiment, it was time to draw a few conclusions.

Is this the future for distributed applications?

My number one question was: is this useful when you build hybrid solutions that use other technologies besides .NET? At this point, it isn't, unless you have a greenfield situation and are prepared to invest in building custom logic in your Python applications and in the orchestrator.

If you're building with 100% .NET, then Aspire is a great tool to have. It saves a considerable amount of time if you look at orchestration alone.

The deployment options are limited at this point. .NET Aspire supports Azure Container Apps natively, and that's it. Other options are being developed, though. Aspirational Manifests, for example, lets you deploy your solution to Kubernetes. I hope more options will follow soon!

For now, we're returning to good old Bicep, Dockerfiles, and elbow grease. But it will certainly be interesting to see where .NET Aspire is headed.