Willem's Fizzy Logic

Get more out of Copilot Workspace with custom instructions

Copilot Workspace has been my intern since August 2024 and we’re having fun building agents and our prompt testing tool. I’ve learned a lot about how to get the most out of this tool, and I hope that soon I can start giving workshops to colleagues to help them understand how to get started with Copilot Workspace once it’s available.

One of the things I learned is how one seemingly simple trick, adding a .github/copilot-workspace/CONTRIBUTING.md file to your repository, can make Copilot Workspace understand your project at least twice as well.

In this post I’ll explain how it works and show you some of the tricks I used to write the contribution guide for Copilot Workspace.

Why use custom instructions?

I’ve been using Copilot Workspace for a couple of months, and while it has been great so far, there are moments where I feel like it’s not helping me. For example, I kept having to explain to Copilot Workspace what my architecture looks like. It just wouldn’t pick it up as well as I wanted it to.

Custom instructions reduce my cognitive load: I write down once the instructions I want Copilot Workspace to follow all the time, regardless of the task we’re working on.

Let me show you what I’m talking about.

Creating instructions

But first, let’s create the instructions file. Add a new file called .github/copilot-workspace/CONTRIBUTING.md to your repository. In this file you write down the instructions. A fragment from my custom instructions looks like this:

# Technical architecture

This section covers the technical layout of the application and how to work with various parts
of the code.

## Working with use-cases

This project follows vertical slice architecture. Each use-case gets its own class under the feature
folder. The associated request and response objects are contained as nested classes within the use-case
class. The use-case class is named after the use case, for example:

- Creating a new conversation is implemented in Conversations/CreateConversation.cs
- Sending a message is implemented in Conversations/SendMessage.cs
- Uploading attachments is implemented in Attachments/UploadAttachment.cs

Use cases are implemented as a MediatR `IRequestHandler`. The associated request object
must implement `IRequest<T>` when sending a response or `IRequest` when no response is needed.
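
To make that concrete, the skeleton of a use-case in this style might look like this (a sketch; the namespace and field names are illustrative, not taken from the actual project):

```csharp
using MediatR;

namespace Agent.Conversations;

// Conversations/CreateConversation.cs: one use case, one class.
public class CreateConversation
{
    // Request and response objects are nested inside the use-case class.
    public record Request(string Title) : IRequest<Response>;
    public record Response(Guid ConversationId);

    public class Handler : IRequestHandler<Request, Response>
    {
        public Task<Response> Handle(Request request, CancellationToken cancellationToken)
        {
            // Create the conversation and return its identifier.
            var conversationId = Guid.NewGuid();
            return Task.FromResult(new Response(conversationId));
        }
    }
}
```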

## Validating input to the use-cases

Each use-case request must be validated. We use [FluentValidation](https://docs.fluentvalidation.net/en/latest/).
You are only allowed to use the information in the request object to validate the contents.
If you need to validate something that involves a database then this is part of the logic in the use case.
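
A validator in this style might look like the following sketch, assuming a `CreateConversation` use-case class with a nested `Request` record that has a `Title` property (illustrative names, not from the actual project):

```csharp
using FluentValidation;

// Validates the CreateConversation request. It only inspects the request
// object itself; anything involving the database belongs in the use case.
public class CreateConversationValidator : AbstractValidator<CreateConversation.Request>
{
    public CreateConversationValidator()
    {
        RuleFor(x => x.Title)
            .NotEmpty()
            .MaximumLength(200);
    }
}
```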

## Real-time communication

We use real-time communication when handling prompts coming from the user. This makes the application
more responsive. To enable communicating the response as a stream we use SignalR.

All other communication around attachments, content, starting conversations, or removing conversations
from the history should be done through minimal API endpoints in ASP.NET Core.
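
As an illustration, a SignalR hub that streams a response back to the caller could be sketched like this (the hub name is hypothetical; in the real handler the tokens would come from the language model):

```csharp
using System.Runtime.CompilerServices;
using Microsoft.AspNetCore.SignalR;

// Streams the answer back to the caller token by token.
public class ConversationHub : Hub
{
    public async IAsyncEnumerable<string> SendPrompt(
        string prompt,
        [EnumeratorCancellation] CancellationToken cancellationToken)
    {
        // Placeholder: echo the prompt word by word to show the streaming shape.
        foreach (var token in prompt.Split(' '))
        {
            cancellationToken.ThrowIfCancellationRequested();
            yield return token;
            await Task.Delay(50, cancellationToken);
        }
    }
}
```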

## Handling API requests

The frontend is implemented as a single page application that talks to either the SignalR hub
or the minimal API endpoints. The minimal API endpoints must send requests through the `IMediator` object.

The whole request lifecycle happens inside the request handlers. This makes debugging and testing the code easier.
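
A minimal API endpoint in this style stays a thin shim over `IMediator`. Here is a sketch, assuming a `CreateConversation` use-case class with a nested `Request` record (an assumption for illustration):

```csharp
using MediatR;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMediatR(cfg => cfg.RegisterServicesFromAssemblyContaining<Program>());

var app = builder.Build();

// The endpoint only forwards the request to the use-case handler
// through IMediator and returns the response.
app.MapPost("/conversations", async (CreateConversation.Request request, IMediator mediator) =>
{
    var response = await mediator.Send(request);
    return Results.Ok(response);
});

app.Run();
```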

## Testing the components in the application

We require unit-tests written in XUnit in the `InfoSupport.Agents.Ricardo.Tests` project.
The directory structure for the test project mirrors the structure of the agent project. This makes it easier
for us to find the test cases later.

We need tests that validate individual components and tests that combine every piece of the request lifecycle
in the request handlers of the application.

You can run tests with `dotnet test` from the root of the repository.

We use `FakeItEasy` for mocking dependencies in the application. You should use mocks for unit-tests to isolate
the unit under test. For integration tests you should only use mocks for dependencies that fall outside the application scope.
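
A unit-test following these rules might look like this sketch (the repository interface, the handler constructor, and the method names are hypothetical, invented for illustration):

```csharp
using FakeItEasy;
using Xunit;

public class CreateConversationTests
{
    [Fact]
    public async Task Handle_ReturnsConversationId()
    {
        // Dependencies of the unit under test are faked with FakeItEasy.
        // IConversationRepository is a hypothetical dependency.
        var repository = A.Fake<IConversationRepository>();

        var handler = new CreateConversation.Handler(repository);
        var response = await handler.Handle(
            new CreateConversation.Request("Hello"), CancellationToken.None);

        Assert.NotEqual(Guid.Empty, response.ConversationId);
        A.CallTo(() => repository.AddAsync(A<Conversation>._, A<CancellationToken>._))
            .MustHaveHappenedOnceExactly();
    }
}
```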

## Database access

We use Entity Framework Core to store information in a PostgreSQL database. We use the pgvector extension
to allow for finding information using cosine distance measures between embedding vectors.

The database context is stored in `Data/ApplicationDbContext.cs`. You should model the entities for the
various features in the corresponding folder. For example:

- Conversation is stored in the `Conversations/Models` folder.
- Attachment is stored in the `Attachments/Models` folder.
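
A sketch of what that looks like, assuming the Npgsql and Pgvector.EntityFrameworkCore packages (the entity properties are illustrative):

```csharp
using Microsoft.EntityFrameworkCore;
using Pgvector;

// Data/ApplicationDbContext.cs: the single context for the application.
public class ApplicationDbContext : DbContext
{
    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options) : base(options) { }

    public DbSet<Conversation> Conversations => Set<Conversation>();
    public DbSet<Attachment> Attachments => Set<Attachment>();

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Enable the pgvector extension so embedding columns can be stored.
        modelBuilder.HasPostgresExtension("vector");
    }
}

// Conversations/Models/Conversation.cs
public class Conversation
{
    public Guid Id { get; set; }
    public string Title { get; set; } = "";
}

// Attachments/Models/Attachment.cs: entity with an embedding vector.
public class Attachment
{
    public Guid Id { get; set; }
    public string FileName { get; set; } = "";
    public Vector? Embedding { get; set; }
}
```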

## Language model interactions

We use Semantic Kernel to interact with GPT-4o on Azure OpenAI service. We don't have a dedicated class
to place all the logic used to interact with GPT-4o. Instead, we create a new kernel instance with
the necessary prompts and plugins in the use case request handlers.

By moving the kernel interactions into the use case handlers we are fully transparent about where the
language model is used. It allows us to remain flexible in what the language model is allowed to use.
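
Inside a handler, that pattern might look roughly like this (a sketch against the Semantic Kernel 1.x API; the deployment name, endpoint, and local variables are placeholders):

```csharp
using Microsoft.SemanticKernel;

// azureOpenAiKey and conversationText come from configuration and the request.
// Build a kernel scoped to this request, so it's explicit which prompts
// and plugins this use case can reach.
var kernel = Kernel.CreateBuilder()
    .AddAzureOpenAIChatCompletion(
        deploymentName: "gpt-4o",
        endpoint: "https://example.openai.azure.com/",
        apiKey: azureOpenAiKey)
    .Build();

var result = await kernel.InvokePromptAsync(
    "Summarize the following conversation: {{$conversation}}",
    new KernelArguments { ["conversation"] = conversationText });
```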

That’s a lot of information to take in, but don’t worry too much about the actual content. Let’s talk about what you typically add to the contributing guide.

What can you do with custom instructions?

The custom instructions in the contributing guide let you explain the structure of your solution to Copilot Workspace. This extra context shapes how specifications, plans, and code are generated.

For example, if you explain that you’re using vertical slice architecture, the plans for your tasks improve because Copilot Workspace will produce a class structure that follows this principle. You can also control which frameworks to use. I explained to Copilot Workspace that I want FakeItEasy instead of the (more popular?) Moq, which it tried to force on me. I was also able to control how unit-tests should be generated.

But what if you have a framework that’s new?

References to external content

Wouldn’t it be nice if Copilot Workspace could read a manual? You may be tempted to copy in chunks from manuals to help Copilot Workspace understand how to use a new library that wasn’t in the training data. But that’s unnecessary.

You can add links in the contributing guide to external websites or GitHub repository content to give Copilot Workspace extra information on how to use a specific library. I added links to FluentValidation and Semantic Kernel to help me generate agent code more quickly.

Copilot Workspace uses a RAG (retrieval-augmented generation) approach to generating code, so it can load up the manual and use the relevant chunks in the output.

Overall, a great tool to help Copilot Workspace mess up your code a whole lot less.

There are things that don’t work yet

As good as the instructions are, there are a few areas where I wish Copilot Workspace would use them but it doesn’t yet.

When you create a pull request in the Copilot Workspace environment, you can let the tool generate a nice summary of the changes for you. Sadly, Copilot Workspace doesn’t follow my instructions for this part. I tried, and it did nothing with them.

I had a similar experience generating messages for regular commits, but I guess this is due to the same limitation as with generating PR summaries.

Finally, I must note that you can’t have a very long set of instructions yet: only the first 30 URLs in the contributing guide get picked up by Copilot Workspace. I didn’t find that a limitation during testing, though.

Summary

In this post we explored using custom instructions for Copilot Workspace using the contributing guide. Overall, it’s a powerful mechanism that can potentially save you even more time when developing with Copilot Workspace.

I know that not many people have access to Copilot Workspace yet. Consider this article a nice piece of documentation to explore once you gain access!

About

Willem Meints is a Chief AI Architect at Aigency/Info Support, bringing a wealth of experience in artificial intelligence and software development. As a Microsoft AI Most Valuable Professional (MVP), he has been recognized for his significant contributions to the AI community.

In addition to his professional work, Willem is an author dedicated to advancing the field of AI. His latest work focuses on effective applications of large language models, providing valuable guidance for professionals and enthusiasts alike.

Willem is a fan of good BBQ, reflecting his appreciation for craftsmanship both in and out of the digital realm.

Contact