Integrating Ollama Semantic Kernel Connector with C#: Console App & Minimal API

In this post, we'll explore how to integrate the Ollama Semantic Kernel Connector with C#, demonstrating two practical implementations: a Console Application and a Minimal API using ASP.NET Core. This integration allows developers to leverage the power of large language models (LLMs) in their C# applications with ease.

⚠️ Note: The Ollama Semantic Kernel Connector is currently in alpha. It’s not recommended for production use yet.

Prerequisites

Before we get started, ensure you have the following installed on your machine:

  • Ollama – A tool for running and interacting with open-source LLMs locally.
  • .NET SDK – To build and run the C# applications.
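
Once Ollama is installed, pull the model that the examples below assume (llama3.1) and verify that it's available locally. By default, the Ollama server listens on http://localhost:11434:

ollama pull llama3.1
ollama list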

What is Ollama?

Ollama simplifies interactions with open-source large language models (LLMs) like Llama, Mistral, and Gemma. It provides a user-friendly interface and an OpenAI-like API, making it easy to integrate LLMs into your applications. Ollama also offers official libraries for multiple languages, such as Python and JavaScript. You can learn more on the official Ollama website.
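
To give a feel for that API, Ollama exposes an OpenAI-compatible chat endpoint on its local server. Assuming the llama3.1 model has been pulled, a request looks like this:

POST http://localhost:11434/v1/chat/completions
Content-Type: application/json

{
  "model": "llama3.1",
  "messages": [
    { "role": "user", "content": "Say hello" }
  ]
}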

What is Semantic Kernel?

Semantic Kernel (SK) is an SDK that bridges the gap between traditional programming languages (like C#, Python, and Java) and powerful LLMs from platforms like OpenAI, Azure OpenAI, and Hugging Face. With Semantic Kernel, developers can define and chain plugins, enabling them to integrate advanced AI capabilities into their apps with just a few lines of code. More details can be found in the official Semantic Kernel documentation.
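
To make the plugin idea concrete, here is a minimal sketch of a native C# plugin. The ClockPlugin name and its function are hypothetical examples for illustration, not part of any SDK:

using System.ComponentModel;
using Microsoft.SemanticKernel;

// Any annotated C# method can be exposed to the kernel as a callable function.
public class ClockPlugin
{
    [KernelFunction, Description("Returns the current UTC time.")]
    public string GetUtcTime() => DateTime.UtcNow.ToString("R");
}

Registered with builder.Plugins.AddFromType<ClockPlugin>(), such functions can then be invoked from prompts or chained with other plugins.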

Why Combine Ollama and Semantic Kernel?

By running Ollama locally, you can work with a variety of LLMs, while Semantic Kernel simplifies integrating those models into traditional programming environments. Together, these tools let you build powerful AI applications that interact with LLMs in real time. The official Semantic Kernel connector for Ollama is still in alpha, so expect the integration to become even more streamlined as it matures.


Building a Console Application with Ollama Semantic Kernel Connector

Step 1: Create a New Console Application

First, create a new C# console application:

dotnet new console -n OllamaSemanticKernelConnector
cd OllamaSemanticKernelConnector

Step 2: Add the Required NuGet Packages

Next, install the Semantic Kernel and Ollama connector packages:

dotnet add package Microsoft.SemanticKernel --version 1.24.1
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --version 1.24.1-alpha

Note: The version numbers may vary; I have pinned them to the latest versions available at the time of writing.

Step 3: Suppress the Alpha Warning

Since the connector is in alpha, its APIs are marked as experimental, and the compiler flags their use with diagnostic SKEXP0070. Suppress it by adding the following line inside a <PropertyGroup> in your .csproj file:

<NoWarn>$(NoWarn);SKEXP0070</NoWarn>
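
For context, in a freshly created console project the entry sits alongside the existing properties (net8.0 is an assumption here; use whatever target framework your project was created with):

<PropertyGroup>
  <OutputType>Exe</OutputType>
  <TargetFramework>net8.0</TargetFramework>
  <NoWarn>$(NoWarn);SKEXP0070</NoWarn>
</PropertyGroup>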

Step 4: Update the Program.cs File

Now, let’s update the Program.cs file with the code to build a simple chatbot using Ollama and Semantic Kernel.

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

// Point Semantic Kernel at the Ollama server running locally on its default port.
var builder = Kernel.CreateBuilder();
builder.AddOllamaChatCompletion("llama3.1:latest", new Uri("http://localhost:11434"));

var kernel = builder.Build();
var chatService = kernel.GetRequiredService<IChatCompletionService>();

// The chat history carries the full conversation; seed it with a system prompt.
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant.");

while (true)
{
    Console.Write("You: ");
    var userMessage = Console.ReadLine();

    if (string.IsNullOrWhiteSpace(userMessage))
    {
        break;
    }

    history.AddUserMessage(userMessage);

    // Send the entire history so the model sees the system prompt and prior turns.
    var response = await chatService.GetChatMessageContentAsync(history);

    Console.WriteLine($"Bot: {response.Content}");

    // Record the assistant's reply so it becomes context for the next turn.
    history.AddMessage(response.Role, response.Content ?? string.Empty);
}
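
If you would rather print tokens as they are generated instead of waiting for the complete reply, IChatCompletionService also exposes a streaming variant. Here is a minimal sketch of an alternative loop body using GetStreamingChatMessageContentsAsync:

// Stream the reply token by token, then record the full text in the history.
Console.Write("Bot: ");
var buffer = new System.Text.StringBuilder();
await foreach (var chunk in chatService.GetStreamingChatMessageContentsAsync(history))
{
    Console.Write(chunk.Content);
    buffer.Append(chunk.Content);
}
Console.WriteLine();
history.AddAssistantMessage(buffer.ToString());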

Step 5: Run the Application

Now, you can run the application and interact with the LLM through the console:

dotnet run

You can ask questions in the console, and the LLM will respond. The system keeps track of the conversation history to provide context in responses.


Building a Minimal API with Ollama Semantic Kernel Connector

In this section, we’ll create a Minimal API that interacts with the Ollama Semantic Kernel Connector. This API will expose an HTTP endpoint where users can send a message and receive responses from the LLM.

Step 1: Create a New Minimal API Project

To start, create a new Minimal API project using .NET:

dotnet new web -n OllamaSemanticKernelConnectorAPI
cd OllamaSemanticKernelConnectorAPI

Step 2: Add the Required NuGet Packages

Add the same Semantic Kernel and Ollama connector packages to your API project:

dotnet add package Microsoft.SemanticKernel --version 1.24.1
dotnet add package Microsoft.SemanticKernel.Connectors.Ollama --version 1.24.1-alpha

Note: The version numbers may vary; I have pinned them to the latest versions available at the time of writing.

Step 3: Suppress the Alpha Warning

Just as with the console application, you need to suppress the alpha warning:

<NoWarn>$(NoWarn);SKEXP0070</NoWarn>

Step 4: Update the Program.cs File

Here’s the updated Program.cs file for creating a simple chat API:

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var builder = WebApplication.CreateBuilder(args);

// Register the Ollama chat completion service with the DI container.
builder.Services.AddOllamaChatCompletion("llama3.1:latest", new Uri("http://localhost:11434"));

var app = builder.Build();

// A single shared history: fine for a demo, but note that every caller of the
// /chat endpoint below contributes to (and sees) the same conversation.
var history = new ChatHistory();
history.AddSystemMessage("You are a helpful assistant.");

app.MapGet("/", () => "Hello World!");

app.MapPost("/chat", async (ChatRequest chatRequest, IChatCompletionService chatCompletionService) =>
{
    history.AddUserMessage(chatRequest.Message);

    // Pass the full history, not just the latest message, so the system prompt
    // and earlier turns are included in the request to the model.
    var response = await chatCompletionService.GetChatMessageContentAsync(history);

    history.AddMessage(response.Role, response.Content ?? string.Empty);
    return response.Content;
});

app.Run();

public class ChatRequest
{
    public string Message { get; set; } = string.Empty;
}

Step 5: Run the API

Run the Minimal API project:

dotnet run

The API is now running, and you can interact with it via HTTP requests. For example, you can send a POST request to the /chat endpoint to get responses from the LLM.

Example API Request

Here’s an example of how to test the API using an HTTP POST request:

POST http://localhost:5119/chat
Content-Type: application/json

{
  "message": "Why did the chicken cross the road?"
}
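
Or, equivalently, with curl (the port number comes from your launch profile and may differ):

curl -X POST http://localhost:5119/chat \
  -H "Content-Type: application/json" \
  -d '{"message": "Why did the chicken cross the road?"}'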

The LLM will respond with an appropriate answer.


Conclusion

In this post, we explored how to integrate the Ollama Semantic Kernel Connector with C# through both a Console Application and a Minimal API. With the help of the Ollama connector (built on the OllamaSharp library) and Semantic Kernel, developers can leverage powerful LLMs to build conversational AI applications with ease. While the connector is still in alpha, it offers a promising glimpse into the future of AI development.

Stay tuned for the official release, and feel free to ask questions in the comments section below!