- [May 2025] 🎉 EvoAgentX has been officially released!
- 🔥 Latest News
- ⚡ Get Started
- Installation
- LLM Configuration
- Automatic WorkFlow Generation
- Demo Video
- Evolution Algorithms
- Applications
- Tutorial and Use Cases
- 🎯 Roadmap
- 🙋 Support
- 🙌 Contributing to EvoAgentX
- 📚 Acknowledgements
- 📄 License
We recommend installing EvoAgentX using pip:
pip install git+https://github.com/EvoAgentX/EvoAgentX.git
For local development or detailed setup (e.g., using conda), refer to the Installation Guide for EvoAgentX.
Example (optional, for local development):
git clone https://github.com/EvoAgentX/EvoAgentX.git
cd EvoAgentX
# Create a new conda environment
conda create -n evoagentx python=3.10
# Activate the environment
conda activate evoagentx
# Install the package
pip install -r requirements.txt
# OR install in development mode
pip install -e .
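To confirm the installation, a quick import check in the active environment is enough (the import name matches the `from evoagentx...` statements used later in this README):

```python
# Sanity check: verify that EvoAgentX is importable in the current environment
import evoagentx
print("EvoAgentX loaded from:", evoagentx.__file__)
```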
To use LLMs with EvoAgentX (e.g., OpenAI), you must set up your API key.
Option 1: Set API Key via Environment Variable
- Linux/macOS:
export OPENAI_API_KEY=<your-openai-api-key>
- Windows Command Prompt:
set OPENAI_API_KEY=<your-openai-api-key>
- Windows PowerShell:
$env:OPENAI_API_KEY="<your-openai-api-key>"  # The quotation marks are required in PowerShell
Once set, you can access the key in your Python code with:
import os
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
Option 2: Use .env File
- Create a .env file in your project root and add the following:
OPENAI_API_KEY=<your-openai-api-key>
Then load it in Python:
from dotenv import load_dotenv
import os
load_dotenv() # Loads environment variables from .env file
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
Once the API key is set, initialise the LLM with:
import os

from evoagentx.models import OpenAILLMConfig, OpenAILLM

# Load the API key from the environment
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")

# Define the LLM configuration
openai_config = OpenAILLMConfig(
    model="gpt-4o-mini",        # Specify the model name
    openai_key=OPENAI_API_KEY,  # Pass the key directly
    stream=True,                # Enable streaming responses
    output_response=True        # Print the response to stdout
)

# Initialize the language model
llm = OpenAILLM(config=openai_config)

# Generate a response from the LLM
response = llm.generate(prompt="What is Agentic Workflow?")
📖 More details on supported models and config options: LLM module guide.
Once your API key and language model are configured, you can automatically generate and execute multi-agent workflows in EvoAgentX.
🧩 Core Steps:
- Define a natural language goal
- Generate the workflow with `WorkFlowGenerator`
- Instantiate agents using `AgentManager`
- Execute the workflow via `WorkFlow`
💡 Minimal Example:
from evoagentx.workflow import WorkFlowGenerator, WorkFlowGraph, WorkFlow
from evoagentx.agents import AgentManager
goal = "Generate html code for the Tetris game"
workflow_graph = WorkFlowGenerator(llm=llm).generate_workflow(goal)
agent_manager = AgentManager()
agent_manager.add_agents_from_workflow(workflow_graph, llm_config=openai_config)
workflow = WorkFlow(graph=workflow_graph, agent_manager=agent_manager, llm=llm)
output = workflow.execute()
print(output)
You can also:
- 📊 Visualise the workflow: `workflow_graph.display()`
- 💾 Save/load workflows with `save_module()` / `from_file()` (see the sketch below)
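A minimal sketch of the save/load round trip, assuming `save_module()` takes an output path and `from_file()` is available on `WorkFlowGraph` as referenced above (the file path is illustrative):

```python
# Persist the generated workflow graph so it can be reused later
# (the output path below is illustrative)
workflow_graph.save_module("output/tetris_workflow.json")

# Later, rebuild the graph from the saved file instead of calling WorkFlowGenerator again
loaded_graph = WorkFlowGraph.from_file("output/tetris_workflow.json")
```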
📂 For a complete working example, check out workflow_demo.py.
EvoAgentX_demo.mp4
In this demo, we showcase the workflow generation and execution capabilities of EvoAgentX through two examples:
- Application 1: Intelligent Job Recommendation from Resume
- Application 2: Visual Analysis of A-Share Stocks
We have integrated some existing agent/workflow evolution algorithms into EvoAgentX, including TextGrad, MIPRO and AFlow.
To evaluate their performance, we use them to optimize the same agent system on three different tasks: multi-hop QA (HotPotQA), code generation (MBPP), and reasoning (MATH). We randomly sample 50 examples for validation and another 100 examples for testing.
Tip: These benchmarks and the evaluation code are integrated into EvoAgentX. Please refer to the benchmark and evaluation tutorial for more details.
| Method | HotPotQA (F1%) | MBPP (Pass@1 %) | MATH (Solve Rate %) |
|---|---|---|---|
| Original | 63.58 | 69.00 | 66.00 |
| TextGrad | 71.02 | 71.00 | 76.00 |
| AFlow | 65.09 | 79.00 | 71.00 |
| MIPRO | 69.16 | 68.00 | 72.30 |
Please refer to the examples/optimization folder for more details.
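For orientation, here is a minimal, illustrative sketch of the evaluation protocol described above: draw disjoint validation and test splits (50 and 100 examples), let the optimizer tune against the validation split, and report averaged metrics on the held-out test split. The dataset, agent system, and metric function below are stand-ins, not EvoAgentX APIs:

```python
import random

def split_benchmark(examples, num_val=50, num_test=100, seed=42):
    """Draw disjoint validation and test subsets from a list of benchmark examples."""
    rng = random.Random(seed)
    sampled = rng.sample(examples, num_val + num_test)
    return sampled[:num_val], sampled[num_val:]

def evaluate(agent_system, examples, metric):
    """Average a per-example metric (e.g., F1, Pass@1, solve rate) over a split."""
    scores = [metric(agent_system(example), example) for example in examples]
    return sum(scores) / len(scores)

# Illustrative usage (names are placeholders): the optimizer sees only val_set,
# and the reported numbers correspond to the held-out test split.
# val_set, test_set = split_benchmark(hotpotqa_examples)
# print(evaluate(optimized_system, test_set, f1_metric))
```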
We use our framework to optimize existing multi-agent systems on the GAIA benchmark. We select Open Deep Research and OWL, two representative open-source and runnable multi-agent frameworks from the GAIA leaderboard.
We apply EvoAgentX to optimize their prompts. The performance of the optimized agents on the GAIA benchmark validation set is shown in the figure below.
(Figure panels: Open Deep Research | OWL Agent)
Full Optimization Reports: Open Deep Research and OWL.
💡 New to EvoAgentX? Start with the Quickstart Guide for a step-by-step introduction.
Explore how to effectively use EvoAgentX with the following resources:
| Cookbook | Description |
|---|---|
| Build Your First Agent | Quickly create and manage agents with multi-action capabilities. |
| Build Your First Workflow | Learn to build collaborative workflows with multiple agents. |
| Working with Tools | Master EvoAgentX's tool ecosystem for agent interactions. |
| Automatic Workflow Generation | Automatically generate workflows from natural language goals. |
| Benchmark and Evaluation Tutorial | Evaluate agent performance using benchmark datasets. |
| TextGrad Optimizer Tutorial | Automatically optimise the prompts within a multi-agent workflow with TextGrad. |
| AFlow Optimizer Tutorial | Automatically optimise both the prompts and the structure of a multi-agent workflow with AFlow. |
🛠️ Follow the tutorials to build and optimize your EvoAgentX workflows.
🚀 We're actively working on expanding our library of use cases and optimization strategies. More coming soon — stay tuned!
- Modularize Evolution Algorithms: Abstract optimization algorithms into plug-and-play modules that can be easily integrated into custom workflows.
- Develop Task Templates and Agent Modules: Build reusable templates for typical tasks and standardized agent components to streamline application development.
- Integrate Self-Evolving Agent Algorithms: Incorporate more recent and advanced agent self-evolution across multiple dimensions, including prompt tuning, workflow structures, and memory modules.
- Enable Visual Workflow Editing Interface: Provide a visual interface for workflow structure display and editing to improve usability and debugging.
📢 Stay connected and be part of the EvoAgentX journey!
🚩 Join our community to get the latest updates, share your ideas, and collaborate with AI enthusiasts worldwide.
- Discord — Chat, discuss, and collaborate in real-time.
- X (formerly Twitter) — Follow us for news, updates, and insights.
- WeChat — Connect with our Chinese community.
If you have any questions or feedback about this project, please feel free to contact us. We highly appreciate your suggestions!
- Email: [email protected]
We will respond to all questions within 2-3 business days.
Thanks go to these awesome contributors
We appreciate your interest in contributing to our open-source initiative. Our contributing guidelines outline the steps for contributing to EvoAgentX; please refer to this guide to ensure smooth collaboration and successful contributions. 🤝🚀
This project builds upon several outstanding open-source projects: AFlow, TextGrad, DSPy, LiveCodeBench, and more. We would like to thank the developers and maintainers of these frameworks for their valuable contributions to the open-source community.
Source code in this repository is made available under the MIT License.