
Chatbot

This Recipe shows an example of a multi-LLM, multimedia-enabled chatbot.

Not all LLMs support multimedia, let alone mid-conversation brain-boosts (switching to a more capable model partway through a chat). This can cause issues when swapping out components.

Eidolon’s AgentProcessingUnit abstracts away those concerns so you can enable multimedia, JSON output, and function calling on even the smallest LLM.
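
Concretely, the LLM behind an agent is just a named APU reference in its configuration, so changing providers is a one-line config edit rather than a code change. The snippet below is a minimal sketch, assuming a SimpleAgent-based agent and that a ClaudeSonnet APU is available in your Eidolon installation; both names are illustrative.

spec:
  implementation: SimpleAgent
  apu: ClaudeSonnet # point the same agent at a different LLM; multimedia and tool calling keep working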

Core Concepts

How to Configure Built-in Components

Agents

Conversational Agent

This agent uses the SimpleAgent template, but it needs some customization to enable file uploads and to support multiple LLMs.

You will notice that we enabled file uploads on our AgentProcessingUnit’s primary action:

actions:
  - name: "converse"
    description: "A copilot that engages with the user."
    allow_file_upload: true
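
For context, that actions block sits under the agent’s spec in its resource file, alongside the implementation it customizes. The outline below is a sketch based on the standard Eidolon resource format; the apiVersion, metadata name, and surrounding values are assumptions, so check the repository’s resources directory for the real file.

apiVersion: server.eidolonai.com/v1alpha1 # confirm the apiVersion against the files in the repo
kind: Agent
metadata:
  name: conversational-agent # illustrative name
spec:
  implementation: SimpleAgent
  actions:
    - name: "converse"
      description: "A copilot that engages with the user."
      allow_file_upload: true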

We also have a list of available APUs in resources/apus.eidolon.yaml.

apus:
  - apu: MistralSmall
    title: Mistral Small
  - apu: MistralMedium
    title: Mistral Medium
  - apu: MistralLarge
...
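
Adding another model to the picker is just another entry in this list, as long as the corresponding APU is configured in your Eidolon install. For example (GPT4Turbo is an assumed APU name here; substitute whatever your installation provides):

apus:
  ...
  - apu: GPT4Turbo
    title: GPT-4 Turbo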

We did not need any customization to support multimedia within the APU; it is turned on by default 🚀.

Try it out!

First, let’s clone Eidolon’s chatbot repository to your local machine and start the server.

git clone https://github.com/eidolon-ai/eidolon-chatbot.git
cd eidolon-chatbot
make docker-serve # launches agent server and webui
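
Before opening the UI, you can sanity-check that everything came up. The commands below assume the compose setup started by make docker-serve and the agent server’s default port of 8080; adjust if your setup differs.

docker compose ps                  # both the agent server and webui containers should be running
curl -I http://localhost:8080/docs # the agent server's OpenAPI docs should return HTTP 200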

Now head over to the chatbot UI in your favorite browser and start chatting with your new agent.