
Evo AI - AI Agents Platform

Evo AI is an open-source platform for creating and managing AI agents, enabling integration with different AI models and services.

🚀 Overview

The Evo AI platform allows:

  • Creation and management of AI agents
  • Integration with different language models
  • Client management and MCP server configuration
  • Custom tools management
  • Google Agent Development Kit (ADK): Base framework for agent development
  • CrewAI Support: Alternative framework for agent development (in development)
  • JWT authentication with email verification
  • Agent 2 Agent (A2A) Protocol Support: Interoperability between AI agents
  • Workflow Agent with LangGraph: Building complex agent workflows
  • Secure API Key Management: Encrypted storage of API keys
  • Agent Organization: Folder structure for organizing agents by categories

🤖 Agent Types

Evo AI supports different types of agents that can be flexibly combined:

1. LLM Agent (Language Model)

An agent based on language models such as GPT-4, Claude, etc. It can be configured with tools, MCP servers, and sub-agents.

2. A2A Agent (Agent-to-Agent)

Agent that implements Google's A2A protocol for agent interoperability.

3. Sequential Agent

Executes a sequence of sub-agents in a specific order.

4. Parallel Agent

Executes multiple sub-agents simultaneously.

5. Loop Agent

Executes sub-agents in a loop with a defined maximum number of iterations.

6. Workflow Agent

Executes sub-agents in a custom workflow defined by a graph structure using LangGraph.

7. Task Agent

Executes a specific task using a target agent with structured task instructions.

🛠️ Technologies

Backend

  • FastAPI: Web framework for building the API
  • SQLAlchemy: ORM for database interaction
  • PostgreSQL: Main database
  • Alembic: Migration system
  • Pydantic: Data validation and serialization
  • Uvicorn: ASGI server
  • Redis: Cache and session management
  • JWT: Secure token authentication
  • SendGrid/SMTP: Email service for notifications (configurable)
  • Jinja2: Template engine for email rendering
  • Bcrypt: Password hashing and security
  • LangGraph: Framework for building stateful, multi-agent workflows

Frontend

  • Next.js 15: React framework with App Router
  • React 18: User interface library
  • TypeScript: Type-safe JavaScript
  • Tailwind CSS: Utility-first CSS framework
  • shadcn/ui: Modern component library
  • React Hook Form: Form management
  • Zod: Schema validation
  • ReactFlow: Node-based visual workflows
  • React Query: Server state management

📊 Langfuse Integration (Tracing & Observability)

The Evo AI platform natively supports integration with Langfuse for detailed tracing of agent executions, prompts, model responses, and tool calls, using the OpenTelemetry (OTel) standard.

How to configure

  1. Set environment variables in your .env:

    LANGFUSE_PUBLIC_KEY="pk-lf-..."
    LANGFUSE_SECRET_KEY="sk-lf-..."
    OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"
    
  2. View in the Langfuse dashboard

    • Access your Langfuse dashboard to see real-time traces.
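As a minimal sketch, assuming you run the backend locally from the project root, the same variables can be appended to the backend .env and picked up on the next restart:

# Append the Langfuse settings to the backend .env (use the keys from your Langfuse project)
echo 'LANGFUSE_PUBLIC_KEY="pk-lf-..."' >> .env
echo 'LANGFUSE_SECRET_KEY="sk-lf-..."' >> .env
echo 'OTEL_EXPORTER_OTLP_ENDPOINT="https://cloud.langfuse.com/api/public/otel"' >> .env

# Restart the backend so the new settings are loaded
make run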

🤖 Agent 2 Agent (A2A) Protocol Support

Evo AI implements Google's Agent 2 Agent (A2A) protocol, enabling seamless communication and interoperability between AI agents.

For more information about the A2A protocol, visit Google's A2A Protocol Documentation.
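As a rough illustration, an A2A-compatible agent publishes an Agent Card describing its skills and endpoint, which a client can fetch before sending tasks. The base URL placeholder below is hypothetical, so check the Swagger UI for the actual path exposed by your instance:

# Fetch an agent's public Agent Card (defined by the A2A spec).
# <AGENT_A2A_BASE_URL> is a hypothetical placeholder - see the API docs for the real route.
curl <AGENT_A2A_BASE_URL>/.well-known/agent.json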

📋 Prerequisites

Backend

  • Python: 3.10 or higher
  • PostgreSQL: 13.0 or higher
  • Redis: 6.0 or higher
  • Git: For version control
  • Make: For running Makefile commands

Frontend

  • Node.js: 18.0 or higher
  • pnpm: Package manager (recommended) or npm/yarn
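A quick way to confirm the required versions before installing (assuming the tools are already on your PATH):

# Check installed versions against the requirements above
python3 --version        # >= 3.10
psql --version           # PostgreSQL >= 13.0
redis-server --version   # >= 6.0
git --version
make --version
node --version           # >= 18.0
pnpm --version           # or: npm --version / yarn --version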

🔧 Installation

1. Clone the Repository

git clone https://github.com/EvolutionAPI/evo-ai.git
cd evo-ai

2. Backend Setup

Virtual Environment and Dependencies

# Create and activate virtual environment
make venv
source venv/bin/activate  # Linux/Mac
# or on Windows: venv\Scripts\activate

# Install development dependencies
make install-dev

Environment Configuration

# Copy and configure backend environment
cp .env.example .env
# Edit the .env file with your database, Redis, and other settings

Database Setup

# Initialize database and apply migrations
make alembic-upgrade

# Seed initial data (admin user, sample clients, etc.)
make seed-all

3. Frontend Setup

Install Dependencies

# Navigate to frontend directory
cd frontend

# Install dependencies using pnpm (recommended)
pnpm install

# Or using npm
# npm install

# Or using yarn
# yarn install

Frontend Environment Configuration

# Copy and configure frontend environment
cp .env.example .env
# Edit .env with your API URL (default: http://localhost:8000)

The frontend .env should contain:

NEXT_PUBLIC_API_URL=http://localhost:8000

🚀 Running the Application

Development Mode

Start Backend (Terminal 1)

# From project root
make run
# Backend will be available at http://localhost:8000

Start Frontend (Terminal 2)

# From frontend directory
cd frontend
pnpm dev

# Or using npm/yarn
# npm run dev
# yarn dev

# Frontend will be available at http://localhost:3000

Production Mode

Backend

make run-prod    # Production with multiple workers

Frontend

cd frontend
pnpm build && pnpm start

# Or using npm/yarn
# npm run build && npm start
# yarn build && yarn start

🐳 Docker Installation

Full Stack with Docker Compose

# Build and start all services (backend + database + redis)
make docker-build
make docker-up

# Initialize database with seed data
make docker-seed

Frontend with Docker

# From frontend directory
cd frontend

# Build frontend image
docker build -t evo-ai-frontend .

# Run frontend container
docker run -p 3000:3000 -e NEXT_PUBLIC_API_URL=http://localhost:8000 evo-ai-frontend

Or using the provided docker-compose:

# From frontend directory
cd frontend
docker-compose up -d

🎯 Getting Started

After installation, follow these steps:

  1. Access the Frontend: Open http://localhost:3000
  2. Create Admin Account: Use the seeded admin credentials or register a new account
  3. Configure MCP Server: Set up your first MCP server connection
  4. Create Client: Add a client to organize your agents
  5. Build Your First Agent: Create and configure your AI agent
  6. Test Agent: Use the chat interface to interact with your agent

Default Admin Credentials

After running the seeders, you can login with:

  • Email: Check the seeder output for the generated admin email
  • Password: Check the seeder output for the generated password

🖥️ API Documentation

The interactive API documentation is available at:

  • Swagger UI: http://localhost:8000/docs
  • ReDoc: http://localhost:8000/redoc
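Since the backend is built with FastAPI, the raw OpenAPI schema is also served by default and can be fetched directly (assuming the default documentation routes were not changed):

# Download the OpenAPI schema exposed by the backend
curl http://localhost:8000/openapi.json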

👨‍💻 Development Commands

Backend Commands

# Database migrations
make alembic-upgrade            # Update database to latest version
make alembic-revision message="description"  # Create new migration

# Seeders
make seed-all                   # Run all seeders

# Code verification
make lint                       # Verify code with flake8
make format                     # Format code with black

Frontend Commands

# From frontend directory
cd frontend

# Development
pnpm dev                        # Start development server
pnpm build                      # Build for production
pnpm start                      # Start production server
pnpm lint                       # Run ESLint

🚀 Configuration

Backend Configuration (.env file)

Key settings include:

# Database settings
POSTGRES_CONNECTION_STRING="postgresql://postgres:root@localhost:5432/evo_ai"

# Redis settings
REDIS_HOST="localhost"
REDIS_PORT=6379

# AI Engine configuration
AI_ENGINE="adk"  # Options: "adk" (Google Agent Development Kit) or "crewai" (CrewAI framework)

# JWT settings
JWT_SECRET_KEY="your-jwt-secret-key"

# Email provider configuration
EMAIL_PROVIDER="sendgrid"  # Options: "sendgrid" or "smtp"

# Encryption for API keys
ENCRYPTION_KEY="your-encryption-key"
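JWT_SECRET_KEY and ENCRYPTION_KEY should be strong random values. One simple way to generate them is shown below; treat it as a sketch, since the exact format expected by ENCRYPTION_KEY may differ:

# Generate random secrets for the .env file (assumes openssl is installed)
openssl rand -hex 32     # e.g. for JWT_SECRET_KEY
openssl rand -base64 32  # e.g. for ENCRYPTION_KEY (verify the expected format)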

Frontend Configuration (.env file)

# API Configuration
NEXT_PUBLIC_API_URL="http://localhost:8000"  # Backend API URL

Note: While Google ADK is fully supported, the CrewAI engine option is still under active development. For production environments, it is recommended to use the default "adk" engine.

🔐 Authentication

The API uses JWT (JSON Web Token) authentication with:

  • User registration and email verification
  • Login to obtain JWT tokens
  • Password recovery flow
  • Account lockout after multiple failed login attempts
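As an illustrative sketch, logging in returns a JWT that is then sent as a Bearer token on subsequent requests. The route and payload below are assumptions, not documented here, so confirm the real path and fields in the Swagger UI:

# Hypothetical login request - confirm the actual route at http://localhost:8000/docs
curl -X POST http://localhost:8000/api/v1/auth/login \
  -H "Content-Type: application/json" \
  -d '{"email": "admin@example.com", "password": "your-password"}'

# Use the returned token on subsequent requests
curl http://localhost:8000/api/v1/... -H "Authorization: Bearer <ACCESS_TOKEN>"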

🚀 Star Us on GitHub

If you find Evo AI useful, please consider giving us a star! Your support helps us grow our community and continue improving the product.

Star History Chart

🤝 Contributing

We welcome contributions from the community! Please read our Contributing Guidelines for more details.

📄 License

This project is licensed under the Apache License 2.0.