Azure Agent

This recipe shows how to configure an agent to use an LLM deployed on Azure.

Core Concepts

How to Configure Built-in Components

Prerequisites

Azure account with OpenAI access

If you do not have one already, sign up for an Azure account and get access to the OpenAI API.

Cloned Azure LLM Example Repository

Clone the Azure LLM example repo locally and work from the project’s root directory

Terminal window
git clone https://github.com/eidolon-ai/azure-llm.git
cd azure-llm

Setup

1. Create an Azure OpenAI Resource

Head over to the Azure OpenAI Resource page to deploy a new resource.

Get Key and add it to .env as AZURE_OPENAI_API_KEY

Terminal window
# make .env will prompt you for AZURE_OPENAI_API_KEY and write it to .env
make .env
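When the command finishes, `.env` should contain your key. A sketch of what the file might look like (the value below is a placeholder, not a real credential):

```
# .env
AZURE_OPENAI_API_KEY=<your-azure-openai-key>
```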

🚨 Also note the Endpoint value. We will use this to update the azure_endpoint field in azure_agent.yaml later.

2. Deploy a Model

Create an Azure deployment of a model for your resource (in this demo we named our deployment custom-azure-deployment).
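To see how the resource endpoint and deployment name fit together: Azure routes requests by endpoint plus deployment. A minimal sketch of the URL the LLM client ultimately calls (the `api-version` shown is just an example value; the endpoint and deployment names are the demo values from this recipe):

```python
# Sketch: how the Azure OpenAI endpoint and deployment name combine into
# the chat-completions URL. The api-version is an example value; check the
# Azure documentation for currently supported versions.

def azure_chat_url(endpoint: str, deployment: str, api_version: str = "2024-02-01") -> str:
    """Build the Azure OpenAI chat-completions URL for a deployment."""
    return (
        f"{endpoint.rstrip('/')}/openai/deployments/"
        f"{deployment}/chat/completions?api-version={api_version}"
    )

print(azure_chat_url("https://eidolon-azure.openai.azure.com", "custom-azure-deployment"))
```

This is why both values must be copied from the Azure portal: the endpoint identifies your resource, and the deployment name selects the model behind it.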

3. Create an APU for your model.

Create a new APU resource in the resources directory. This APU will point to your Azure LLM deployment and will be used by your agent to interact with the LLM.

resources/azure_4o_apu.yaml
apiVersion: server.eidolonai.com/v1alpha1
kind: Reference
metadata:
  name: azure-gpt4o
spec:
  implementation: GPT4o
  llm_unit:
    implementation: "AzureLLMUnit"
    azure_endpoint: https://eidolon-azure.openai.azure.com # resource azure endpoint
    model:
      name: custom-azure-deployment # your custom deployment name

👉 Note: both the spec.llm_unit.azure_endpoint and spec.llm_unit.model.name fields should be updated with the values you got from the Azure portal.

The example agent already points to this APU.
resources/azure_agent.yaml
apiVersion: server.eidolonai.com/v1alpha1
kind: Agent
metadata:
  name: hello-world
spec:
  implementation: SimpleAgent
  apu:
    implementation: azure-gpt4o # points to your apu resource

Testing your Azure Agent

To verify your deployment is working, run the tests in “record-mode” and see if they pass:

Terminal window
make test ARGS="--vcr-record=all"
What's up with the `--vcr-record=all` flag? 🤔

Eidolon is designed so that you can write cheap, fast, and deterministic tests by leveraging vcrpy.

This records HTTP requests/responses between test runs so that subsequent calls never actually need to go to your LLM. These recordings are stored as cassette files.

This is normally great, but it does mean that when you change your config these cassettes are no longer valid. --vcr-record=all tells vcrpy to ignore existing recordings and re-record them using real HTTP requests.
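The record/replay idea behind cassettes can be sketched in a few lines. This is an illustration of the concept only, not vcrpy's actual API:

```python
# Conceptual sketch of cassette-style record/replay: the first call for a
# given request goes over the "network" and is saved; later identical calls
# are served from the recording. Not vcrpy's real implementation -- just the idea.

calls_made = []  # tracks how often the "network" was actually hit

def real_http_get(url: str) -> str:
    calls_made.append(url)          # pretend this is a real network request
    return f"response for {url}"

cassette: dict[str, str] = {}       # the "cassette file" (in memory here)

def replayed_get(url: str, record_all: bool = False) -> str:
    # record_all mimics --vcr-record=all: ignore the recording and re-record
    if record_all or url not in cassette:
        cassette[url] = real_http_get(url)
    return cassette[url]

replayed_get("https://example.com/llm")   # first run: hits the network, records
replayed_get("https://example.com/llm")   # second run: replayed from the cassette
print(len(calls_made))                    # → 1: only one real request was made
```

This is why re-recording is needed after a config change: the cassette still holds responses for the old requests, which no longer match what your agent sends.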

Try it out!

If your tests are passing, try chatting with your agent using the Eidolon webui:

First start the backend server + webui

Terminal window
make docker-serve

Then visit the webui and start experimenting!