# Evo AI - AI Agents Platform

Evo AI is a free platform for creating and managing AI agents, enabling integration with different AI models and services.
## 🚀 Overview

The Evo AI platform allows:
- Creation and management of AI agents
- Integration with different language models
- Client and contact management
- MCP server configuration
- Custom tools management
- JWT authentication with email verification
- Agent 2 Agent (A2A) Protocol Support: Interoperability between AI agents following Google's A2A specification
## 🤖 Agent Types and Creation

Evo AI supports different types of agents that can be flexibly combined to create complex solutions:

### 1. LLM Agent (Language Model)

Agent based on language models such as GPT-4, Claude, etc. It can be configured with tools, MCP servers, and sub-agents.
```json
{
  "client_id": "{{client_id}}",
  "name": "personal_assistant",
  "description": "Specialized personal assistant",
  "type": "llm",
  "model": "gpt-4",
  "api_key": "your-api-key",
  "instruction": "Detailed instructions for agent behavior",
  "config": {
    "tools": [
      {
        "id": "tool-uuid",
        "envs": {
          "API_KEY": "tool-api-key",
          "ENDPOINT": "http://localhost:8000"
        }
      }
    ],
    "mcp_servers": [
      {
        "id": "server-uuid",
        "envs": {
          "API_KEY": "server-api-key",
          "ENDPOINT": "http://localhost:8001"
        },
        "tools": ["tool_name1", "tool_name2"]
      }
    ],
    "custom_tools": {
      "http_tools": []
    },
    "sub_agents": ["sub-agent-uuid"]
  }
}
```
### 2. A2A Agent (Agent-to-Agent)

Agent that implements Google's A2A protocol for agent interoperability.
```json
{
  "client_id": "{{client_id}}",
  "type": "a2a",
  "agent_card_url": "http://localhost:8001/api/v1/a2a/your-agent/.well-known/agent.json",
  "config": {
    "sub_agents": ["sub-agent-uuid"]
  }
}
```
### 3. Sequential Agent

Executes a sequence of sub-agents in a specific order.
```json
{
  "client_id": "{{client_id}}",
  "name": "processing_flow",
  "type": "sequential",
  "config": {
    "sub_agents": ["agent-uuid-1", "agent-uuid-2", "agent-uuid-3"]
  }
}
```
### 4. Parallel Agent

Executes multiple sub-agents simultaneously.
```json
{
  "client_id": "{{client_id}}",
  "name": "parallel_processing",
  "type": "parallel",
  "config": {
    "sub_agents": ["agent-uuid-1", "agent-uuid-2"]
  }
}
```
### 5. Loop Agent

Executes sub-agents in a loop, up to a defined maximum number of iterations.
```json
{
  "client_id": "{{client_id}}",
  "name": "loop_processing",
  "type": "loop",
  "config": {
    "sub_agents": ["sub-agent-uuid"],
    "max_iterations": 5
  }
}
```
### Common Characteristics
- All agent types can have sub-agents
- Sub-agents can be of any type
- Agents can be flexibly combined
- Type-specific configurations
- Support for custom tools and MCP servers
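The composition rules above can be sketched in a few lines of Python. This is an illustrative model, not the Evo AI engine: inline dicts stand in for the UUID references the API uses, and a string-transforming stub stands in for real LLM calls.

```python
# Illustrative sketch of how sequential, parallel, and loop agents compose.
from concurrent.futures import ThreadPoolExecutor

def run_agent(agent: dict, text: str) -> str:
    kind = agent["type"]
    if kind == "llm":
        # Stand-in for a real model call
        return f"{agent['name']}({text})"
    if kind == "sequential":
        # Each sub-agent consumes the previous one's output
        for sub in agent["config"]["sub_agents"]:
            text = run_agent(sub, text)
        return text
    if kind == "parallel":
        # Sub-agents run simultaneously on the same input
        with ThreadPoolExecutor() as pool:
            results = pool.map(lambda a: run_agent(a, text),
                               agent["config"]["sub_agents"])
        return " | ".join(results)
    if kind == "loop":
        # Repeatedly feed the output back in, up to max_iterations
        for _ in range(agent["config"]["max_iterations"]):
            text = run_agent(agent["config"]["sub_agents"][0], text)
        return text
    raise ValueError(f"unknown agent type: {kind}")

pipeline = {
    "type": "sequential",
    "config": {"sub_agents": [
        {"type": "llm", "name": "analyze"},
        {"type": "loop", "config": {
            "max_iterations": 2,
            "sub_agents": [{"type": "llm", "name": "refine"}],
        }},
    ]},
}
print(run_agent(pipeline, "input"))  # refine(refine(analyze(input)))
```

Because every agent type accepts sub-agents of any type, arbitrary trees like this one can be built from the same five building blocks.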
### MCP Server Configuration

Agents can be integrated with MCP (Model Context Protocol) servers for distributed processing:
```json
{
  "config": {
    "mcp_servers": [
      {
        "id": "server-uuid",
        "envs": {
          "API_KEY": "server-api-key",
          "ENDPOINT": "http://localhost:8001",
          "MODEL_NAME": "gpt-4",
          "TEMPERATURE": 0.7,
          "MAX_TOKENS": 2000
        },
        "tools": ["tool_name1", "tool_name2"]
      }
    ]
  }
}
```
Available configurations for MCP servers:

- `id`: Unique MCP server identifier
- `envs`: Environment variables for configuration
  - `API_KEY`: Server authentication key
  - `ENDPOINT`: MCP server URL
  - `MODEL_NAME`: Model name to be used
  - `TEMPERATURE`: Text generation temperature (0.0 to 1.0)
  - `MAX_TOKENS`: Maximum token limit per request
  - Other server-specific variables
- `tools`: MCP server tool names for agent use
### Agent Composition Examples

Different types of agents can be combined to create complex processing flows:

#### 1. Sequential Processing Pipeline
```json
{
  "client_id": "{{client_id}}",
  "name": "processing_pipeline",
  "type": "sequential",
  "config": {
    "sub_agents": [
      "llm-analysis-agent-uuid",      // LLM Agent for initial analysis
      "a2a-translation-agent-uuid",   // A2A Agent for translation
      "llm-formatting-agent-uuid"     // LLM Agent for final formatting
    ]
  }
}
```
#### 2. Parallel Processing with Aggregation
```json
{
  "client_id": "{{client_id}}",
  "name": "parallel_analysis",
  "type": "sequential",
  "config": {
    "sub_agents": [
      {
        "type": "parallel",
        "config": {
          "sub_agents": [
            "analysis-agent-uuid-1",
            "analysis-agent-uuid-2",
            "analysis-agent-uuid-3"
          ]
        }
      },
      "aggregation-agent-uuid"        // Agent for aggregating results
    ]
  }
}
```
#### 3. Multi-Agent Conversation System
```json
{
  "client_id": "{{client_id}}",
  "name": "conversation_system",
  "type": "parallel",
  "config": {
    "sub_agents": [
      {
        "type": "llm",
        "name": "context_agent",
        "model": "gpt-4",
        "instruction": "Maintain conversation context"
      },
      {
        "type": "a2a",
        "agent_card_url": "expert-agent-url"
      },
      {
        "type": "loop",
        "config": {
          "sub_agents": ["memory-agent-uuid"],
          "max_iterations": 1
        }
      }
    ]
  }
}
```
### API Creation

To create a new agent, use the endpoint:

```http
POST /api/v1/agents
Content-Type: application/json
Authorization: Bearer your-jwt-token

{
  // Agent configuration as per the examples above
}
```
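The request above can be assembled client-side with nothing but the standard library. This is a hypothetical sketch: the base URL, token, and payload values are placeholders, and the request is built but deliberately left unsent.

```python
# Hypothetical client-side sketch: building the agent-creation request.
import json
import urllib.request

payload = {
    "client_id": "your-client-id",
    "name": "personal_assistant",
    "type": "llm",
    "model": "gpt-4",
    "instruction": "Detailed instructions for agent behavior",
    "config": {"sub_agents": []},
}

req = urllib.request.Request(
    "http://localhost:8000/api/v1/agents",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer your-jwt-token",
    },
    method="POST",
)
# urllib.request.urlopen(req) would send it; shown unsent here.
print(req.get_method(), req.full_url)
```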
## 🛠️ Technologies
- FastAPI: Web framework for building the API
- SQLAlchemy: ORM for database interaction
- PostgreSQL: Main database
- Alembic: Migration system
- Pydantic: Data validation and serialization
- Uvicorn: ASGI server
- Redis: Cache and session management
- JWT: Secure token authentication
- SendGrid: Email service for notifications
- Jinja2: Template engine for email rendering
- Bcrypt: Password hashing and security
## 🤖 Agent 2 Agent (A2A) Protocol Support

Evo AI implements Google's Agent 2 Agent (A2A) protocol, enabling seamless communication and interoperability between AI agents. This implementation includes:
### Key Features
- Standardized Communication: Agents can communicate using a common protocol regardless of their underlying implementation
- Interoperability: Support for agents built with different frameworks and technologies
- Well-Known Endpoints: Standardized endpoints for agent discovery and interaction
- Task Management: Support for task-based interactions between agents
- State Management: Tracking of agent states and conversation history
- Authentication: Secure API key-based authentication for agent interactions
### Implementation Details

- Agent Card: Each agent exposes a `.well-known/agent.json` endpoint with its capabilities and configuration
- Task Handling: Support for task creation, execution, and status tracking
- Message Format: Standardized message format for agent communication
- History Tracking: Maintains conversation history between agents
- Artifact Management: Support for handling different types of artifacts (text, files, etc.)
### Example Usage

```json
// Agent Card Example
{
  "name": "My Agent",
  "description": "A helpful AI assistant",
  "url": "https://api.example.com/agents/123",
  "capabilities": {
    "streaming": false,
    "pushNotifications": false,
    "stateTransitionHistory": true
  },
  "authentication": {
    "schemes": ["apiKey"],
    "credentials": {
      "in": "header",
      "name": "x-api-key"
    }
  },
  "skills": [
    {
      "id": "search",
      "name": "Web Search",
      "description": "Search the web for information"
    }
  ]
}
```
For more information about the A2A protocol, visit Google's A2A Protocol Documentation.
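A consumer of an agent card, such as one fetched from the `.well-known/agent.json` endpoint above, would typically validate it before use. A minimal sketch, assuming a required-field set chosen for illustration (the A2A specification defines the authoritative schema):

```python
# Minimal sketch: validating an A2A agent card's required fields.
import json

REQUIRED_FIELDS = {"name", "url", "capabilities"}  # illustrative subset

def validate_agent_card(raw: str) -> dict:
    """Parse the JSON card and fail loudly if core fields are missing."""
    card = json.loads(raw)
    missing = REQUIRED_FIELDS - card.keys()
    if missing:
        raise ValueError(f"agent card missing fields: {sorted(missing)}")
    return card

# Mirrors the example card shown earlier
raw_card = json.dumps({
    "name": "My Agent",
    "url": "https://api.example.com/agents/123",
    "capabilities": {"streaming": False},
    "skills": [{"id": "search", "name": "Web Search"}],
})
card = validate_agent_card(raw_card)
print(card["name"])  # My Agent
```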
## 📁 Project Structure

```
src/
├── api/           # API endpoints
├── core/          # Core business logic
├── models/        # Data models
├── schemas/       # Pydantic schemas for validation
├── services/      # Business services
├── templates/     # Email templates
│   └── emails/    # Jinja2 email templates
├── utils/         # Utilities
└── config/        # Configurations
```
## 📋 Requirements
- Python 3.8+
- PostgreSQL
- Redis
- OpenAI API Key (or other AI provider)
- SendGrid Account (for email sending)
## 🔧 Installation

1. Clone the repository:

```bash
git clone https://github.com/your-username/evo-ai.git
cd evo-ai
```

2. Create a virtual environment:

```bash
make venv
source venv/bin/activate  # Linux/Mac
# or
venv\Scripts\activate     # Windows
```

3. Install dependencies:

```bash
make install      # For basic installation
# or
make install-dev  # For development dependencies
```

4. Set up environment variables:

```bash
cp .env.example .env
# Edit the .env file with your settings
```

5. Run migrations:

```bash
make alembic-upgrade
```
## 🔐 Authentication

The API uses JWT (JSON Web Token) authentication. To access the endpoints, you need to:

1. Register a user or log in to obtain a JWT token
2. Include the JWT token in the `Authorization` header of all requests, in the format `Bearer <token>`
3. Tokens expire after a configured period (default: 30 minutes)
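For intuition, here is how an HS256 JWT of the kind described above is formed. This is purely illustrative: the platform's actual secret, claims, and signing live server-side, and `demo-secret` plus the 30-minute window are stand-ins.

```python
# Illustrative only: constructing an HS256 JWT by hand with the stdlib.
import base64, hashlib, hmac, json, time

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_jwt(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input,
                          hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

# Token valid for 30 minutes (the documented default)
token = make_jwt({"sub": "user@example.com", "exp": int(time.time()) + 1800},
                 "demo-secret")
print("Authorization: Bearer " + token[:20] + "...")
```

The `exp` claim is what makes the expiry in step 3 enforceable: the server rejects any token whose `exp` is in the past.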
### Authentication Flow

1. User Registration:

```http
POST /api/v1/auth/register
```

2. Email Verification: an email containing a verification link will be sent.

3. Login:

```http
POST /api/v1/auth/login
```

Returns a JWT token to be used in requests.

4. Password Recovery (if needed):

```http
POST /api/v1/auth/forgot-password
POST /api/v1/auth/reset-password
```

5. Retrieve logged-in user data:

```http
POST /api/v1/auth/me
```
Example usage with curl:

```bash
# Login
curl -X POST "http://localhost:8000/api/v1/auth/login" \
  -H "Content-Type: application/json" \
  -d '{"email": "your-email@example.com", "password": "your-password"}'

# Use the received token
curl -X GET "http://localhost:8000/api/v1/clients/" \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9..."
```
### Access Control
- Regular users (associated with a client) only have access to their client's resources
- Admin users have access to all resources
- Certain operations (such as creating MCP servers) are restricted to administrators only
- Account lockout mechanism after multiple failed login attempts for enhanced security
## 📧 Email Templates

The platform uses Jinja2 templates for email rendering with a unified design system:
- Base Template: All emails extend a common base template for consistent styling
- Verification Email: Sent when users register to verify their email address
- Password Reset: Sent when users request a password reset
- Welcome Email: Sent after email verification to guide new users
- Account Locked: Security alert when an account is locked due to multiple failed login attempts
All email templates feature responsive design, clear call-to-action buttons, and fallback mechanisms.
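The base-template pattern described above can be sketched with Jinja2's inheritance. The template names and markup here are invented for illustration; they are not the platform's actual files under `src/templates/emails/`.

```python
# Sketch of a shared base template with per-email content blocks.
from jinja2 import Environment, DictLoader

templates = {
    # Common chrome every email shares
    "base.html": (
        "<html><body><h1>{{ title }}</h1>"
        "{% block content %}{% endblock %}"
        "<footer>Evo AI</footer></body></html>"
    ),
    # The verification email only fills in its own content block
    "verification.html": (
        '{% extends "base.html" %}'
        "{% block content %}"
        '<a href="{{ verify_url }}">Verify your email</a>'
        "{% endblock %}"
    ),
}

env = Environment(loader=DictLoader(templates))
html = env.get_template("verification.html").render(
    title="Welcome!", verify_url="https://example.com/verify?token=abc"
)
print(html)
```

Each of the listed emails (verification, password reset, welcome, account locked) would be one such child template, inheriting the shared styling and footer from the base.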
## 🚀 Running the Project

```bash
make run       # For development with automatic reload
# or
make run-prod  # For production with multiple workers
```

The API will be available at http://localhost:8000
## 📚 API Documentation

The interactive API documentation is available at:

- Swagger UI: http://localhost:8000/docs
- ReDoc: http://localhost:8000/redoc
## 📊 Logs and Audit

- Logs are stored in the `logs/` directory with the format `{logger_name}_{date}.log`
- The system maintains audit logs for important administrative actions
- Each action is recorded with information such as user, IP, date/time, and details
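The naming scheme above can be sketched with the standard `logging` module. This is illustrative, not the platform's actual logging setup, and the `%Y-%m-%d` date format is an assumption.

```python
# Sketch of the documented {logger_name}_{date}.log naming scheme.
import logging
from datetime import date
from pathlib import Path

def get_file_logger(name: str, log_dir: str = "logs") -> logging.Logger:
    Path(log_dir).mkdir(exist_ok=True)
    filename = Path(log_dir) / f"{name}_{date.today():%Y-%m-%d}.log"
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid stacking handlers on repeat calls
        handler = logging.FileHandler(filename)
        handler.setFormatter(
            logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# An audit entry records who did what, from where, and when
audit = get_file_logger("audit")
audit.info("user=admin ip=127.0.0.1 action=create_agent")
```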
## 🤝 Contributing

1. Fork the project
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## 📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
## 🙏 Acknowledgments

## 👨‍💻 Development Commands
```bash
# Database migrations
make init                                    # Initialize Alembic
make alembic-revision message="description"  # Create a new migration
make alembic-upgrade                         # Upgrade database to the latest version
make alembic-downgrade                       # Revert the latest migration
make alembic-migrate message="description"   # Create and apply a migration
make alembic-reset                           # Reset the database

# Seeders
make seed-admin        # Create default admin
make seed-client       # Create default client
make seed-agents       # Create example agents
make seed-mcp-servers  # Create example MCP servers
make seed-tools        # Create example tools
make seed-contacts     # Create example contacts
make seed-all          # Run all seeders

# Code verification
make lint         # Check code with flake8
make format       # Format code with black
make clear-cache  # Clear project cache
```
## 🐳 Running with Docker

To facilitate deployment and execution of the application, we provide Docker and Docker Compose configurations.

### Prerequisites

- Docker installed
- Docker Compose installed

### Configuration

1. Configure the necessary environment variables in the `.env` file at the root of the project (or use system environment variables)

2. Build the Docker image:

```bash
make docker-build
```

3. Start the services (API, PostgreSQL, and Redis):

```bash
make docker-up
```

4. Populate the database with initial data:

```bash
make docker-seed
```

5. To check application logs:

```bash
make docker-logs
```

6. To stop the services:

```bash
make docker-down
```
### Available Services
- API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- PostgreSQL: localhost:5432
- Redis: localhost:6379
### Persistent Volumes

Docker Compose sets up persistent volumes for:
- PostgreSQL data
- Redis data
- Application logs directory
### Environment Variables

The main environment variables used by the API container:

- `POSTGRES_CONNECTION_STRING`: PostgreSQL connection string
- `REDIS_HOST`: Redis host
- `JWT_SECRET_KEY`: Secret key for JWT token generation
- `SENDGRID_API_KEY`: SendGrid API key for sending emails
- `EMAIL_FROM`: Email address used as sender
- `APP_URL`: Base URL of the application
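A minimal sketch of how the container might read these variables, using only the standard library. The real project presumably centralizes this in `src/config/`; the defaults and demo values below are invented for illustration.

```python
# Illustrative settings loader for the container environment variables.
import os
from dataclasses import dataclass

@dataclass(frozen=True)
class Settings:
    postgres_connection_string: str
    redis_host: str
    jwt_secret_key: str
    app_url: str

def load_settings() -> Settings:
    # Required variables raise KeyError if absent; others get fallbacks
    return Settings(
        postgres_connection_string=os.environ["POSTGRES_CONNECTION_STRING"],
        redis_host=os.environ.get("REDIS_HOST", "localhost"),
        jwt_secret_key=os.environ["JWT_SECRET_KEY"],
        app_url=os.environ.get("APP_URL", "http://localhost:8000"),
    )

# Demo values so the sketch runs standalone
os.environ.setdefault("POSTGRES_CONNECTION_STRING",
                      "postgresql://user:pass@localhost:5432/evo_ai")
os.environ.setdefault("JWT_SECRET_KEY", "demo-secret")
settings = load_settings()
print(settings.redis_host)
```

Failing fast on missing required variables (the `KeyError` above) surfaces misconfiguration at container start rather than on the first request.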