Quickstart
Welcome to the Eidolon Quickstart! This page covers a few prerequisites, cloning the Quickstart repository from GitHub, starting the agent server, and conversing with your agent.
Compared to other multi-agent genAI systems, you may notice something really different with Eidolon’s Quickstart.
You’re starting a server that hosts pre-built and custom agents.
If you have tried other multi-agent frameworks, you know these Eidolon capabilities are also a big deal: deployment is automatic, and communication with and between agents is built in.
You’ll get to that soon enough. Let’s get started by deploying one agent and saying “hello world!”
Prerequisites
You don’t need much to get started with Eidolon.
OpenAI API Key: The Quickstart connects to ChatGPT from OpenAI by default. Before continuing, obtain an API key from OpenAI and fund your account; otherwise you will not be able to authenticate and will receive an error. See LLM Prerequisites for links to OpenAI and other popular LLM providers.
Docker: The Quickstart uses Docker to run your agent server. See Install Docker Engine for installation instructions for Docker Engine and links to Docker Desktop.
Run Eidolon Quickstart
Running the Eidolon Quickstart requires only that you clone the repository to your machine and run a script.
- Use `git` to clone the quickstart to your local machine:
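A minimal sketch of the clone step, assuming the repository lives under the eidolon-ai organization on GitHub (verify the exact URL on the project's GitHub page):

```bash
# Clone the Quickstart repository and move into it
# (repository URL is an assumption; confirm it on GitHub)
git clone https://github.com/eidolon-ai/eidolon-quickstart.git
cd eidolon-quickstart
```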
- Run the Eidolon multi-agent server in dev mode:
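From the root of the cloned repository, dev mode is started with the `make docker-serve` target described later on this page:

```bash
# Start the Eidolon multi-agent server in dev mode (Docker must be running)
make docker-serve
```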
🔎 The first time you run this command, you will be prompted to enter the OpenAI API Key. Quickstart API keys and other credentials are stored in `eidolon-quickstart/.env`.
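If you prefer to add the key yourself before the first run, the stored entry is an ordinary environment-variable line. The exact variable name below is an assumption (`OPENAI_API_KEY` is OpenAI's usual convention); the prompt on first run writes this file for you either way:

```bash
# eidolon-quickstart/.env (illustrative; created or updated automatically on first run)
OPENAI_API_KEY=sk-...
```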
If the Eidolon multi-agent server starts successfully, you should see output similar to this:
🤔 Running into problems? Get help from us on Discord 📞 or submit an issue on GitHub. We want you to love ❤️ working with Eidolon. If there’s a problem, we want to fix it 🛠.
Try it out!
Now that the Eidolon multi-agent server is running, let’s use it! You can use the Eidolon WebUI or command line interface (CLI) in a terminal.
WebUI Developer Tool
A Developer Tool web application is deployed to your server on port 3000. You can use the Developer Tool with any of your Eidolon applications.
To converse with your Eidolon agent for the first time…
- Open a web browser to http://localhost:3000 (or replace localhost with your server URL).
- Click the Eidolon Developer Tool app card.
- Click Execute on your agent’s “converse” action to begin a new conversation.
- Enter `Hello world! Write a haiku admiring the earth.` in the text box.
Did your agent respond to you? 🍾 Congratulations! You successfully deployed your first of many genAI agents.
🚨 A common error is not having a funded account with the LLM provider. OpenAI is the default LLM provider, so if you run into this error head over to OpenAI Billing to sort this out.
Command Line Interface (CLI)
If you prefer to use the CLI, open a new terminal window to interact with your agent.
- Download the Eidolon CLI.
- Create an AgentProcess.
🔬 A process defines the boundaries of an agent’s memory.
- Converse with your agent.
What’s Happening?
The repository you cloned includes everything you need to deploy multi-agent genAI applications. It includes an agent server, memory, web server, and more.
The agent you interacted with is defined in a YAML file located at `resources/hello_world_agent.eidolon.yaml`.
The agent’s YAML file defines:
- how to instantiate your agent named `hello-world` from Eidolon’s built-in `SimpleAgent` template 🏭
- the `system_prompt`, which contains your instructions to the LLM
- any customization you might want, such as swapping out the LLM, custom tools, etc.
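For orientation, here is a rough sketch of what such a resource file can look like. The `apiVersion` value, the `implementation` field, and the prompt text below are assumptions, not the actual file contents; open `resources/hello_world_agent.eidolon.yaml` in your clone to see the real definition.

```yaml
# Illustrative sketch of an Eidolon agent resource (field values are assumptions;
# see resources/hello_world_agent.eidolon.yaml in your clone for the real file)
apiVersion: server.eidolonai.com/v1alpha1   # assumed API version
kind: Agent
metadata:
  name: hello-world                          # the agent name you converse with
spec:
  implementation: SimpleAgent                # assumed: built-in SimpleAgent template
  system_prompt: >
    You are a friendly assistant. Greet the user warmly and answer briefly.
```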
The `make docker-serve` command:
- downloads dependencies required to run the multi-agent server
- starts the Eidolon multi-agent server in “dev-mode”
Dev mode provides a local http server and local memory, making it easy to focus on and get comfortable with Eidolon functionality.
Have Fun! Change the System Prompt
To see how quickly you can iterate to build powerful agentic applications, make a simple change.
Using a text editor, change the system prompt in `resources/hello_world_agent.eidolon.yaml`:
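For example, you might swap in a new personality by editing only the `system_prompt` value. The snippet below is illustrative; everything else in your file stays as it is, and your file’s original prompt text will differ:

```yaml
# Only the system_prompt is shown here (illustrative change)
spec:
  system_prompt: >
    You are a cheerful tour guide. Work a fun travel fact into every reply.
```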
You do not need to restart. Simply return to the WebUI or CLI and say hello again.
Try different user prompts (messages you send to an agent in the WebUI or CLI) and system prompts (instructions in the resource file to tell the agent what to do with user messages). Have fun!
Next Steps
You can adapt this simple agent to do a lot! Here are some things to try next.
- ⭐ Eidolon on GitHub. Eidolon is a fully open source project, and we love your support!
- Swap out the LLM
- Configure multi-agent communication
- Configure built-in components — why code when you can configure?
- Use structured inputs for prompt templating
- Leverage your agent’s state machine
- Add new capabilities via LogicUnits (tools)