Memobase supports any OpenAI-compatible LLM provider as its backend. This tutorial demonstrates how to use Ollama to run a local LLM for both the Memobase server and your chat application.
Setup
1. Configure Ollama
- Install Ollama on your local machine.
- Verify the installation by running `ollama -v`.
- Pull a model to use. For this example, we’ll use `qwen2.5:7b` (commands shown below).
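For reference, the two commands look like this; `qwen2.5:7b` is just the model used in this tutorial, so substitute any model you prefer:

```bash
# Check that the Ollama CLI is installed and prints a version
ollama -v

# Download the model used in this example
ollama pull qwen2.5:7b
```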
2. Configure Memobase
To use a local LLM provider with the Memobase server, you need to modify your `config.yaml` file.
Set the following fields to point to your local Ollama instance:
config.yaml
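The snippet below is a minimal sketch, assuming the standard Memobase config keys `llm_api_key`, `llm_base_url`, and `best_llm_model`; check your own `config.yaml` for the exact key names:

```yaml
# Point the Memobase server's LLM backend at the local Ollama instance.
llm_api_key: ollama                                   # Ollama ignores the key, but the field must be set
llm_base_url: http://host.docker.internal:11434/v1    # reach the host machine's Ollama from inside Docker
best_llm_model: qwen2.5:7b                            # the model pulled in step 1
```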
If the Memobase server runs in Docker, set the base URL host to `host.docker.internal` to allow it to access the Ollama server running on your local machine at port 11434.
Code Breakdown
This example uses Memobase’s OpenAI Memory Patch for a clear demonstration.
Client Initialization
First, we set up the OpenAI client to point to our local Ollama server and then apply the Memobase memory patch.
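A minimal sketch of that setup, assuming Ollama’s OpenAI-compatible endpoint at `http://localhost:11434/v1` and a Memobase server at `http://localhost:8019`; the `openai_memory` import path, project URL, and API key are assumptions to adapt to your installation:

```python
from openai import OpenAI
from memobase import MemoBaseClient
from memobase.patch.openai import openai_memory  # assumed import path for the memory patch

# Point the OpenAI client at the local Ollama server (Ollama ignores the API key value).
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# Connect to the Memobase server and apply the memory patch to the OpenAI client.
mb_client = MemoBaseClient(project_url="http://localhost:8019", api_key="secret")
client = openai_memory(client, mb_client)
```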
Chat Function
Next, we create a chat function that uses the patched client. The key is to pass a `user_id` to trigger the memory functionality.
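A sketch of such a function, assuming the patched client accepts a `user_id` keyword on `chat.completions.create` as described above; the function name and example user are illustrative:

```python
def chat(message: str, user_id: str | None = None) -> str:
    # With the Memobase patch applied, passing user_id attaches that user's memory
    # to the request and records the exchange; omitting it gives a plain completion.
    response = client.chat.completions.create(
        model="qwen2.5:7b",
        messages=[{"role": "user", "content": message}],
        user_id=user_id,
    )
    return response.choices[0].message.content

# Example usage: later calls for the same user can draw on stored memory.
print(chat("Remember that I prefer concise answers.", user_id="alice"))
print(chat("What did I ask you to remember?", user_id="alice"))
```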