chore(docs): remove outdated documentation files

Davidson Gomes 2025-05-07 06:44:19 -03:00
parent 422639a629
commit e54e1039e1
7 changed files with 0 additions and 1474 deletions

View File

@@ -1,19 +0,0 @@
# Evo AI Documentation
This directory contains comprehensive documentation for the Evo AI platform.
## Structure
- **swagger/** - OpenAPI/Swagger documentation for the REST API
- **technical/** - Technical documentation for developers, including architecture diagrams, data models, and workflows
- **contributing/** - Guidelines and information for contributors
## Purpose
These documents aim to provide clear and detailed information for:
1. Developers who want to contribute to the Evo AI codebase
2. Developers who want to integrate with the Evo AI API
3. Technical teams who want to understand the architecture and implementation details
All documentation is maintained in English to ensure accessibility for a global developer community.

View File

@@ -1,65 +0,0 @@
# Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, religion, or sexual identity
and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the
overall community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or
advances of any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email
address, without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Project maintainers are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is representing the community in public spaces.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the project maintainers responsible for enforcement at
[INSERT CONTACT EMAIL]. All complaints will be reviewed and investigated
promptly and fairly.
All project maintainers are obligated to respect the privacy and security of the
reporter of any incident.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant](https://www.contributor-covenant.org),
version 2.0, available at
https://www.contributor-covenant.org/version/2/0/code_of_conduct.html.

View File

@@ -1,129 +0,0 @@
# Contributing to Evo AI
Thank you for your interest in contributing to Evo AI! This document provides guidelines and instructions for contributing to the project.
## Getting Started
### Prerequisites
- Python 3.8+
- PostgreSQL
- Redis
- Git
### Setup Development Environment
1. Fork the repository
2. Clone your fork:
```bash
git clone https://github.com/YOUR-USERNAME/evo-ai.git
cd evo-ai
```
3. Create a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate # Linux/Mac
# or
.venv\Scripts\activate # Windows
```
4. Install dependencies:
```bash
pip install -r requirements.txt
pip install -r requirements-dev.txt
```
5. Set up environment variables:
```bash
cp .env.example .env
# Edit .env with your configuration
```
6. Run database migrations:
```bash
make alembic-upgrade
```
## Development Workflow
### Branching Strategy
- `main` - Main branch, contains stable code
- `feature/*` - For new features
- `bugfix/*` - For bug fixes
- `docs/*` - For documentation changes
### Creating a New Feature
1. Create a new branch from `main`:
```bash
git checkout -b feature/your-feature-name
```
2. Make your changes
3. Run tests:
```bash
make test
```
4. Commit your changes:
```bash
git commit -m "Add feature: description of your changes"
```
5. Push to your fork:
```bash
git push origin feature/your-feature-name
```
6. Create a Pull Request to the main repository
## Coding Standards
### Python Code Style
- Follow PEP 8
- Use 4 spaces for indentation
- Maximum line length of 79 characters
- Use descriptive variable names
- Write docstrings for all functions, classes, and modules
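A minimal sketch of these conventions applied to a hypothetical helper function (not part of the codebase):
```python
from typing import Optional


def agent_display_name(name: str, model: Optional[str] = None) -> str:
    """Build a human-readable label for an agent.

    Args:
        name: The agent name as stored in the database.
        model: Optional LLM model identifier to append.

    Returns:
        The agent name, with the model in parentheses when given.
    """
    if model is None:
        return name
    return f"{name} ({model})"
```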
### Commit Messages
- Use the present tense ("Add feature" not "Added feature")
- Use the imperative mood ("Move cursor to..." not "Moves cursor to...")
- First line should be a summary under 50 characters
- Reference issues and pull requests where appropriate
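For example, a commit message following these rules:
```
Add rate limiting to chat endpoint

Limit each contact to 30 requests per minute using Redis.
Refs #42.
```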
## Testing
- All new features should include tests
- All bug fixes should include tests that reproduce the bug
- Run the full test suite before submitting a PR
## Documentation
- Update documentation for any new features or API changes
- Documentation should be written in English
- Use Markdown for formatting
## Pull Request Process
1. Ensure your code follows the coding standards
2. Update the documentation as needed
3. Include tests for new functionality
4. Ensure the test suite passes
5. Update the CHANGELOG.md if applicable
6. The PR will be reviewed by maintainers
7. Once approved, it will be merged into the main branch
## Code Review Process
All submissions require review. We use GitHub pull requests for this purpose.
Reviewers will check for:
- Code quality and style
- Test coverage
- Documentation
- Appropriateness of the change
## Community
- Be respectful and considerate of others
- Help others who have questions
- Follow the code of conduct
Thank you for contributing to Evo AI!

View File

@@ -1,213 +0,0 @@
# Evo AI - API Flows
This document describes common API flows and usage patterns for the Evo AI platform.
## Authentication Flow
### User Registration and Verification
```mermaid
sequenceDiagram
Client->>API: POST /api/v1/auth/register
API->>Database: Create user (inactive)
API->>Email Service: Send verification email
API-->>Client: Return user details
Client->>API: GET /api/v1/auth/verify-email/{token}
API->>Database: Activate user
API-->>Client: Return success message
```
### Login Flow
```mermaid
sequenceDiagram
Client->>API: POST /api/v1/auth/login
API->>Database: Validate credentials
API->>Auth Service: Generate JWT token
API-->>Client: Return JWT token
Client->>API: Request with Authorization header
API->>Auth Middleware: Validate token
API-->>Client: Return protected resource
```
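A client-side sketch of this flow using Python's `requests`. The endpoint paths come from the diagram above; the payload and response field names (`email`, `password`, `access_token`) are assumptions about the schema, so check the Swagger documentation:
```python
import requests

BASE_URL = "https://your-domain.com"

# Authenticate and obtain a JWT (field names are illustrative).
resp = requests.post(
    f"{BASE_URL}/api/v1/auth/login",
    json={"email": "user@example.com", "password": "your-password"},
    timeout=10,
)
resp.raise_for_status()
token = resp.json()["access_token"]  # assumed response key

# Call a protected endpoint with the JWT in the Authorization header.
agents = requests.get(
    f"{BASE_URL}/api/v1/agents/",
    headers={"Authorization": f"Bearer {token}"},
    timeout=10,
)
print(agents.status_code, agents.json())
```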
### Password Recovery
```mermaid
sequenceDiagram
Client->>API: POST /api/v1/auth/forgot-password
API->>Database: Find user by email
API->>Email Service: Send password reset email
API-->>Client: Return success message
Client->>API: POST /api/v1/auth/reset-password
API->>Auth Service: Validate reset token
API->>Database: Update password
API-->>Client: Return success message
```
## Agent Management
### Creating and Using an Agent
```mermaid
sequenceDiagram
Client->>API: POST /api/v1/agents/
API->>Database: Create agent
API-->>Client: Return agent details
Client->>API: POST /api/v1/chat
API->>Agent Service: Process message
Agent Service->>External LLM: Send prompt
External LLM-->>Agent Service: Return response
Agent Service->>Database: Store conversation
API-->>Client: Return agent response
```
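A sketch of the same flow from the client side. The request body mirrors the columns of the `agent` table in `DATA_MODEL.md`, but the exact request and response schemas are assumptions:
```python
import requests

BASE_URL = "https://your-domain.com"
HEADERS = {"Authorization": "Bearer <your-jwt-token>"}

# Create an agent (body fields are illustrative).
agent = requests.post(
    f"{BASE_URL}/api/v1/agents/",
    headers=HEADERS,
    json={
        "client_id": "<client-uuid>",
        "name": "support-bot",
        "type": "llm",
        "model": "gpt-4",
        "instruction": "You are a helpful support assistant.",
    },
    timeout=10,
).json()

# Send a message to the new agent (assumes the response includes an id).
reply = requests.post(
    f"{BASE_URL}/api/v1/chat",
    headers=HEADERS,
    json={"agent_id": agent["id"], "message": "Hello!"},
    timeout=30,
).json()
print(reply)
```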
### Sequential Agent Flow
```mermaid
sequenceDiagram
Client->>API: POST /api/v1/chat (sequential agent)
API->>Agent Service: Process message
Agent Service->>Sub-Agent 1: Process first step
Sub-Agent 1-->>Agent Service: Return intermediate result
Agent Service->>Sub-Agent 2: Process with previous result
Sub-Agent 2-->>Agent Service: Return intermediate result
Agent Service->>Sub-Agent 3: Process final step
Sub-Agent 3-->>Agent Service: Return final result
Agent Service->>Database: Store conversation
API-->>Client: Return final response
```
## Client and Contact Management
### Client Creation and Management
```mermaid
sequenceDiagram
Admin->>API: POST /api/v1/clients/
API->>Database: Create client
API-->>Admin: Return client details
Admin->>API: PUT /api/v1/clients/{client_id}
API->>Database: Update client
API-->>Admin: Return updated client
Client User->>API: GET /api/v1/clients/
API->>Auth Service: Check permissions
API->>Database: Fetch client(s)
API-->>Client User: Return client details
```
### Contact Management
```mermaid
sequenceDiagram
Client User->>API: POST /api/v1/contacts/
API->>Auth Service: Check permissions
API->>Database: Create contact
API-->>Client User: Return contact details
Client User->>API: GET /api/v1/contacts/{client_id}
API->>Auth Service: Check permissions
API->>Database: Fetch contacts
API-->>Client User: Return contact list
Client User->>API: POST /api/v1/chat
API->>Database: Validate contact belongs to client
API->>Agent Service: Process message
API-->>Client User: Return agent response
```
## MCP Server and Tool Management
### MCP Server Configuration
```mermaid
sequenceDiagram
Admin->>API: POST /api/v1/mcp-servers/
API->>Auth Service: Verify admin permissions
API->>Database: Create MCP server
API-->>Admin: Return server details
Admin->>API: PUT /api/v1/mcp-servers/{server_id}
API->>Auth Service: Verify admin permissions
API->>Database: Update server configuration
API-->>Admin: Return updated server
```
### Tool Configuration and Usage
```mermaid
sequenceDiagram
Admin->>API: POST /api/v1/tools/
API->>Auth Service: Verify admin permissions
API->>Database: Create tool
API-->>Admin: Return tool details
Client User->>API: POST /api/v1/chat (with tool)
API->>Agent Service: Process message
Agent Service->>Tool Service: Execute tool
Tool Service->>External API: Make external call
External API-->>Tool Service: Return result
Tool Service-->>Agent Service: Return tool result
Agent Service-->>API: Return final response
API-->>Client User: Return agent response
```
## Audit and Monitoring
### Audit Log Flow
```mermaid
sequenceDiagram
User->>API: Perform administrative action
API->>Auth Service: Verify permissions
API->>Audit Service: Log action
Audit Service->>Database: Store audit record
API->>Database: Perform action
API-->>User: Return action result
Admin->>API: GET /api/v1/admin/audit-logs
API->>Auth Service: Verify admin permissions
API->>Database: Fetch audit logs
API-->>Admin: Return audit history
```
## Error Handling
### Common Error Flows
```mermaid
sequenceDiagram
Client->>API: Invalid request
API->>Middleware: Process request
Middleware->>Exception Handler: Handle validation error
Exception Handler-->>Client: Return 422 Validation Error
Client->>API: Request protected resource
API->>Auth Middleware: Validate JWT
Auth Middleware->>Exception Handler: Handle authentication error
Exception Handler-->>Client: Return 401 Unauthorized
Client->>API: Request resource without permission
API->>Auth Service: Check resource permissions
Auth Service->>Exception Handler: Handle permission error
Exception Handler-->>Client: Return 403 Forbidden
```
## API Integration Best Practices
1. **Authentication**:
- Store JWT tokens securely
- Implement token refresh mechanism
- Handle token expiration gracefully
2. **Error Handling**:
- Implement proper error handling for all API calls
- Pay attention to HTTP status codes
- Log detailed error information for debugging
3. **Resource Management**:
- Use pagination for listing resources
- Use filters to fetch only the data you need
- Consider implementing client-side caching for frequently accessed data
4. **Agent Configuration**:
- Start with preset agent templates
- Test agent configurations with sample data
- Monitor and adjust agent parameters based on performance
5. **Security**:
- Never expose API keys or tokens in client-side code
- Validate all user input before sending to the API
- Implement proper permission checks in your application
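Putting several of these practices together, a sketch of a client that checks status codes and pages through a listing (the `page`/`page_size` parameters are assumptions about the API's pagination scheme):
```python
import requests


def list_all_contacts(base_url: str, token: str, client_id: str) -> list:
    """Fetch every contact for a client, one page at a time."""
    headers = {"Authorization": f"Bearer {token}"}
    contacts, page = [], 1
    while True:
        resp = requests.get(
            f"{base_url}/api/v1/contacts/{client_id}",
            headers=headers,
            params={"page": page, "page_size": 100},  # assumed parameters
            timeout=10,
        )
        if resp.status_code == 401:
            raise RuntimeError("Token expired or invalid; refresh and retry")
        if resp.status_code == 403:
            raise RuntimeError("Not allowed to read this client's contacts")
        resp.raise_for_status()
        batch = resp.json()  # assumed to be a plain list per page
        if not batch:
            return contacts
        contacts.extend(batch)
        page += 1
```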

View File

@@ -1,222 +0,0 @@
# Evo AI - System Architecture
This document provides an overview of the Evo AI system architecture, explaining how different components interact and the design decisions behind the implementation.
## High-Level Architecture
Evo AI follows a layered architecture pattern with clear separation of concerns:
```
┌─────────────────────────────────────────────────────────────┐
│                           Client                            │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│                   FastAPI REST API Layer                    │
│    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    │
│    │ API Routes  │    │ Middleware  │    │  Exception  │    │
│    │             │    │             │    │  Handlers   │    │
│    └─────────────┘    └─────────────┘    └─────────────┘    │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│                        Service Layer                        │
│    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    │
│    │    Agent    │    │    User     │    │     MCP     │    │
│    │  Services   │    │  Services   │    │  Services   │    │
│    └─────────────┘    └─────────────┘    └─────────────┘    │
│                                                             │
│    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    │
│    │   Client    │    │   Contact   │    │    Tool     │    │
│    │  Services   │    │  Services   │    │  Services   │    │
│    └─────────────┘    └─────────────┘    └─────────────┘    │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│                      Data Access Layer                      │
│    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐    │
│    │ SQLAlchemy  │    │   Alembic   │    │    Redis    │    │
│    │     ORM     │    │ Migrations  │    │    Cache    │    │
│    └─────────────┘    └─────────────┘    └─────────────┘    │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│                  External Storage Systems                   │
│         ┌─────────────┐         ┌─────────────┐             │
│         │ PostgreSQL  │         │    Redis    │             │
│         └─────────────┘         └─────────────┘             │
└─────────────────────────────────────────────────────────────┘
```
## Component Details
### API Layer
The API Layer is implemented using FastAPI and handles all HTTP requests and responses. Key components include:
1. **API Routes** (`src/api/`):
- Defines all endpoints for the REST API
- Handles request validation using Pydantic models
- Manages authentication and authorization
- Delegates business logic to the Service Layer
2. **Middleware** (`src/core/`):
- JWT Authentication middleware
- Error handling middleware
- Request logging middleware
3. **Exception Handling**:
- Centralized error handling with appropriate HTTP status codes
- Standardized error responses
### Service Layer
The Service Layer contains the core business logic of the application. It includes:
1. **Agent Service** (`src/services/agent_service.py`):
- Agent creation, configuration, and management
- Integration with LLM providers
2. **Client Service** (`src/services/client_service.py`):
- Client management functionality
- Client resource access control
3. **MCP Server Service** (`src/services/mcp_server_service.py`):
- Management of Multi-provider Cognitive Processing (MCP) servers
- Configuration of server environments and tools
4. **User Service** (`src/services/user_service.py`):
- User management and authentication
- Email verification
5. **Additional Services**:
- Contact Service
- Tool Service
- Email Service
- Audit Service
### Data Access Layer
The Data Access Layer manages all interactions with the database and caching systems:
1. **SQLAlchemy ORM** (`src/models/`):
- Defines database models and relationships
- Provides methods for CRUD operations
- Implements transactions and error handling
2. **Alembic Migrations**:
- Manages database schema changes
- Handles version control for database schema
3. **Redis Cache**:
- Stores session data
- Caches frequently accessed data
- Manages JWT token blacklisting
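As an illustration, here is a sketch of how the `agent` table from `DATA_MODEL.md` might be mapped in `src/models/`; the project's actual model classes may differ:
```python
import uuid

from sqlalchemy import Column, DateTime, ForeignKey, String, Text, func, text
from sqlalchemy.dialects.postgresql import JSONB, UUID
from sqlalchemy.orm import declarative_base

Base = declarative_base()


class Agent(Base):
    """ORM mapping for the agent table described in DATA_MODEL.md."""

    __tablename__ = "agent"

    id = Column(UUID(as_uuid=True), primary_key=True, default=uuid.uuid4)
    client_id = Column(
        UUID(as_uuid=True),
        ForeignKey("client.id", ondelete="CASCADE"),
        nullable=False,
    )
    name = Column(String(255), nullable=False)
    description = Column(Text)
    type = Column(String(50), nullable=False)
    model = Column(String(255))
    api_key = Column(Text)  # stored encrypted at the application level
    instruction = Column(Text)
    config_json = Column(JSONB, nullable=False, server_default=text("'{}'"))
    created_at = Column(DateTime(timezone=True), server_default=func.now())
    updated_at = Column(DateTime(timezone=True), server_default=func.now())
```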
### External Systems
1. **PostgreSQL**:
- Primary relational database
- Stores all persistent data
- Manages relationships between entities
2. **Redis**:
- Secondary database for caching
- Session management
- Rate limiting support
3. **Email System** (SendGrid):
- Handles email notifications
- Manages email templates
- Provides delivery tracking
## Authentication Flow
```
┌─────────┐      ┌────────────┐      ┌──────────────┐      ┌─────────────┐
│  User   │      │ API Layer  │      │ Auth Service │      │ User Service│
└────┬────┘      └──────┬─────┘      └──────┬───────┘      └──────┬──────┘
     │  Login Request   │                   │                     │
     │─────────────────>│                   │                     │
     │                  │ Authenticate User │                     │
     │                  │──────────────────>│                     │
     │                  │                   │ Validate Credentials│
     │                  │                   │────────────────────>│
     │                  │                   │                     │
     │                  │                   │       Result        │
     │                  │                   │<────────────────────│
     │                  │                   │                     │
     │                  │ Generate JWT Token│                     │
     │                  │<──────────────────│                     │
     │    JWT Token     │                   │                     │
     │<─────────────────│                   │                     │
     │                  │                   │                     │
```
## Data Model
The core entities in the system are:
1. **Users**: Application users with authentication information
2. **Clients**: Organizations or accounts using the system
3. **Agents**: AI agents with configurations and capabilities
4. **Contacts**: End-users interacting with agents
5. **MCP Servers**: Server configurations for different AI providers
6. **Tools**: Tools that can be used by agents
The relationships between these entities are described in detail in the `DATA_MODEL.md` document.
## Security Considerations
1. **Authentication**:
- JWT-based authentication with short-lived tokens
- Secure password hashing with bcrypt
- Email verification for new accounts
- Account lockout after multiple failed attempts
2. **Authorization**:
- Role-based access control (admin vs regular users)
- Resource-based access control (client-specific resources)
- JWT payload containing essential user data for quick authorization checks
3. **Data Protection**:
- Environment variables for sensitive data
- Encrypted connections to databases
- No storage of plaintext passwords or API keys
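A minimal sketch of the password hashing and short-lived token issuance described above, assuming the `bcrypt` package and PyJWT; the project's actual auth code may differ:
```python
from datetime import datetime, timedelta, timezone

import bcrypt
import jwt  # PyJWT

SECRET_KEY = "your-secret-key-at-least-32-characters"


def hash_password(password: str) -> str:
    """Hash a password with bcrypt before it is stored."""
    return bcrypt.hashpw(password.encode(), bcrypt.gensalt()).decode()


def create_access_token(user_id: str, is_admin: bool) -> str:
    """Issue a short-lived JWT carrying the claims used for authorization."""
    payload = {
        "sub": user_id,
        "is_admin": is_admin,
        "exp": datetime.now(timezone.utc) + timedelta(minutes=30),
    }
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")
```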
## Deployment Architecture
Evo AI can be deployed using Docker containers for easier scaling and management:
```
┌─────────────────────────────────────────────────────────────┐
│                        Load Balancer                        │
└─────────────────────────────────────────────────────────────┘
                     │                   │
         ┌───────────┘                   └───────────┐
         ▼                                           ▼
    ┌──────────┐                                ┌──────────┐
    │   API    │                                │   API    │
    │ Container│                                │ Container│
    └──────────┘                                └──────────┘
         │                                           │
         └───────────┐                   ┌───────────┘
                     ▼                   ▼
┌─────────────────────────────────────────────────────────────┐
│                     PostgreSQL Cluster                      │
└─────────────────────────────────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│                        Redis Cluster                        │
└─────────────────────────────────────────────────────────────┘
```
## Further Reading
- See `DATA_MODEL.md` for detailed database schema information
- See `API_FLOW.md` for common API interaction patterns
- See `DEPLOYMENT.md` for deployment instructions and configurations

View File

@@ -1,317 +0,0 @@
# Evo AI - Data Model
This document describes the database schema and entity relationships in the Evo AI platform.
## Database Schema
The Evo AI platform uses PostgreSQL as its primary database. Below is a detailed description of each table and its relationships.
## Entity Relationship Diagram
```
┌───────────┐          ┌───────────┐          ┌───────────┐
│           │          │           │          │           │
│   User    │─────────►│  Client   │◄─────────│   Agent   │
│           │          │           │          │           │
└───────────┘          └───────────┘          └───────────┘
                             ▲                      │
                             │                      │
                       ┌─────┴─────┐                │
                       │           │                │
                       │  Contact  │                │
                       │           │                │
                       └───────────┘                │
                                                    │
                       ┌───────────┐                │
                       │           │◄───────────────┤
                       │   Tool    │                │
                       │           │                │
                       └───────────┘                │
                                                    │
                       ┌───────────┐                │
                       │           │◄───────────────┘
                       │MCP Server │
                       │           │
                       └───────────┘
```
## Tables
### User
The User table stores information about system users.
```sql
CREATE TABLE "user" (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
email VARCHAR(255) NOT NULL UNIQUE,
password_hash VARCHAR(255) NOT NULL,
client_id UUID REFERENCES client(id) ON DELETE CASCADE,
is_active BOOLEAN DEFAULT false,
email_verified BOOLEAN DEFAULT false,
is_admin BOOLEAN DEFAULT false,
failed_login_attempts INTEGER DEFAULT 0,
locked_until TIMESTAMP,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **email**: User's email address (unique)
- **password_hash**: Bcrypt-hashed password
- **client_id**: Reference to the client organization (null for admin users)
- **is_active**: Whether the user is active
- **email_verified**: Whether the email has been verified
- **is_admin**: Whether the user has admin privileges
- **failed_login_attempts**: Counter for failed login attempts
- **locked_until**: Timestamp until when the account is locked
- **created_at**: Creation timestamp
- **updated_at**: Last update timestamp
### Client
The Client table stores information about client organizations.
```sql
CREATE TABLE client (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name VARCHAR(255) NOT NULL,
email VARCHAR(255) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **name**: Client name
- **email**: Client email contact
- **created_at**: Creation timestamp
- **updated_at**: Last update timestamp
### Agent
The Agent table stores information about AI agents.
```sql
CREATE TABLE agent (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
client_id UUID NOT NULL REFERENCES client(id) ON DELETE CASCADE,
name VARCHAR(255) NOT NULL,
description TEXT,
type VARCHAR(50) NOT NULL,
model VARCHAR(255),
api_key TEXT,
instruction TEXT,
config_json JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **client_id**: Reference to the client that owns this agent
- **name**: Agent name
- **description**: Agent description
- **type**: Agent type (e.g., "llm", "sequential", "parallel", "loop")
- **model**: LLM model name (for "llm" type agents)
- **api_key**: API key for the model provider (encrypted)
- **instruction**: System instructions for the agent
- **config_json**: JSON configuration specific to the agent type
- **created_at**: Creation timestamp
- **updated_at**: Last update timestamp
### Contact
The Contact table stores information about end-users that interact with agents.
```sql
CREATE TABLE contact (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
client_id UUID NOT NULL REFERENCES client(id) ON DELETE CASCADE,
ext_id VARCHAR(255),
name VARCHAR(255) NOT NULL,
meta JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **client_id**: Reference to the client that owns this contact
- **ext_id**: Optional external ID for integration
- **name**: Contact name
- **meta**: Additional metadata in JSON format
- **created_at**: Creation timestamp
- **updated_at**: Last update timestamp
### MCP Server
The MCP Server table stores information about Multi-provider Cognitive Processing servers.
```sql
CREATE TABLE mcp_server (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name VARCHAR(255) NOT NULL UNIQUE,
description TEXT,
config_json JSONB NOT NULL DEFAULT '{}',
environments JSONB NOT NULL DEFAULT '{}',
tools JSONB NOT NULL DEFAULT '[]',
type VARCHAR(50) NOT NULL,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **name**: Server name
- **description**: Server description
- **config_json**: JSON configuration for the server
- **environments**: Environment variables as JSON
- **tools**: List of tools supported by this server
- **type**: Server type (e.g., "official", "custom")
- **created_at**: Creation timestamp
- **updated_at**: Last update timestamp
### Tool
The Tool table stores information about tools that can be used by agents.
```sql
CREATE TABLE tool (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
name VARCHAR(255) NOT NULL UNIQUE,
description TEXT,
config_json JSONB NOT NULL DEFAULT '{}',
environments JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW(),
updated_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **name**: Tool name
- **description**: Tool description
- **config_json**: JSON configuration for the tool
- **environments**: Environment variables as JSON
- **created_at**: Creation timestamp
- **updated_at**: Last update timestamp
### Conversation
The Conversation table stores chat history between contacts and agents.
```sql
CREATE TABLE conversation (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
agent_id UUID NOT NULL REFERENCES agent(id) ON DELETE CASCADE,
contact_id UUID NOT NULL REFERENCES contact(id) ON DELETE CASCADE,
message TEXT NOT NULL,
response TEXT NOT NULL,
meta JSONB NOT NULL DEFAULT '{}',
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **agent_id**: Reference to the agent
- **contact_id**: Reference to the contact
- **message**: Message sent by the contact
- **response**: Response generated by the agent
- **meta**: Additional metadata (e.g., tokens used, tools called)
- **created_at**: Creation timestamp
### Audit Log
The Audit Log table stores records of administrative actions.
```sql
CREATE TABLE audit_log (
id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
user_id UUID REFERENCES "user"(id) ON DELETE SET NULL,
action VARCHAR(50) NOT NULL,
resource_type VARCHAR(50) NOT NULL,
resource_id UUID,
details JSONB NOT NULL DEFAULT '{}',
ip_address VARCHAR(45),
user_agent TEXT,
created_at TIMESTAMP WITH TIME ZONE DEFAULT NOW()
);
```
- **id**: Unique identifier (UUID)
- **user_id**: Reference to the user who performed the action
- **action**: Type of action (e.g., "CREATE", "UPDATE", "DELETE")
- **resource_type**: Type of resource affected (e.g., "AGENT", "CLIENT")
- **resource_id**: Identifier of the affected resource
- **details**: JSON with before/after state
- **ip_address**: IP address of the user
- **user_agent**: User-agent string
- **created_at**: Creation timestamp
## Indexes
To optimize performance, the following indexes are created:
```sql
-- User indexes
CREATE INDEX idx_user_email ON "user" (email);
CREATE INDEX idx_user_client_id ON "user" (client_id);
-- Agent indexes
CREATE INDEX idx_agent_client_id ON agent (client_id);
CREATE INDEX idx_agent_name ON agent (name);
-- Contact indexes
CREATE INDEX idx_contact_client_id ON contact (client_id);
CREATE INDEX idx_contact_ext_id ON contact (ext_id);
-- Conversation indexes
CREATE INDEX idx_conversation_agent_id ON conversation (agent_id);
CREATE INDEX idx_conversation_contact_id ON conversation (contact_id);
CREATE INDEX idx_conversation_created_at ON conversation (created_at);
-- Audit log indexes
CREATE INDEX idx_audit_log_user_id ON audit_log (user_id);
CREATE INDEX idx_audit_log_resource_type ON audit_log (resource_type);
CREATE INDEX idx_audit_log_resource_id ON audit_log (resource_id);
CREATE INDEX idx_audit_log_created_at ON audit_log (created_at);
```
## Relationships
1. **User to Client**: Many-to-one relationship. Each user belongs to at most one client (except for admin users).
2. **Client to Agent**: One-to-many relationship. Each client can have multiple agents.
3. **Client to Contact**: One-to-many relationship. Each client can have multiple contacts.
4. **Agent to Conversation**: One-to-many relationship. Each agent can have multiple conversations.
5. **Contact to Conversation**: One-to-many relationship. Each contact can have multiple conversations.
6. **User to Audit Log**: One-to-many relationship. Each user can have multiple audit logs.
## Data Security
1. **Passwords**: All passwords are hashed using bcrypt before storage.
2. **API Keys**: API keys are stored with encryption.
3. **Sensitive Data**: Sensitive data in JSON fields is encrypted where appropriate.
4. **Cascading Deletes**: When a parent record is deleted, related records are automatically deleted to maintain referential integrity.
## Notes on JSONB Fields
PostgreSQL's JSONB fields provide flexibility for storing semi-structured data:
1. **config_json**: Used to store configuration parameters that may vary by agent type or tool.
2. **meta**: Used to store additional attributes that don't warrant their own columns.
3. **environments**: Used to store environment variables needed for tools and MCP servers.
This approach allows for extensibility without requiring database schema changes.
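For example, agents can be filtered on keys inside `config_json` with the JSONB containment operator, and single fields can be read with `->>`. A sketch using `psycopg2` (the `temperature` and `channel` keys are illustrative, not documented fields):
```python
import psycopg2
from psycopg2.extras import Json

conn = psycopg2.connect("postgresql://username:password@postgres:5432/evo_ai")
with conn, conn.cursor() as cur:
    # Containment (@>): agents whose config_json includes a key/value pair.
    cur.execute(
        "SELECT id, name FROM agent WHERE config_json @> %s",
        (Json({"temperature": 0.7}),),
    )
    for agent_id, name in cur.fetchall():
        print(agent_id, name)
    # Extraction (->>): read a single text field out of a JSON document.
    cur.execute(
        "SELECT meta->>'channel' FROM contact WHERE ext_id = %s",
        ("external-id-123",),
    )
    print(cur.fetchone())
```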

View File

@@ -1,509 +0,0 @@
# Evo AI - Deployment Guide
This document provides detailed instructions for deploying the Evo AI platform in different environments.
## Prerequisites
- Docker and Docker Compose
- PostgreSQL database
- Redis instance
- SendGrid account for email services
- Domain name (for production deployments)
- SSL certificate (for production deployments)
## Environment Configuration
The Evo AI platform uses environment variables for configuration. Create a `.env` file based on the example below:
```
# Database Configuration
POSTGRES_CONNECTION_STRING=postgresql://username:password@postgres:5432/evo_ai
POSTGRES_USER=username
POSTGRES_PASSWORD=password
POSTGRES_DB=evo_ai
# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD=
# JWT Configuration
JWT_SECRET_KEY=your-secret-key-at-least-32-characters
JWT_ALGORITHM=HS256
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=30
# Email Configuration
SENDGRID_API_KEY=your-sendgrid-api-key
EMAIL_FROM=noreply@your-domain.com
EMAIL_FROM_NAME=Evo AI Platform
# Application Configuration
APP_URL=https://your-domain.com
ENVIRONMENT=production # development, testing, or production
DEBUG=false
```
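As an illustration, these variables could be loaded into a typed settings object with Pydantic v1-style `BaseSettings` (whether Evo AI does exactly this is an assumption):
```python
from pydantic import BaseSettings  # in Pydantic v2, use pydantic-settings


class Settings(BaseSettings):
    """Typed view of a subset of the environment variables above."""

    postgres_connection_string: str
    redis_host: str = "redis"
    redis_port: int = 6379
    jwt_secret_key: str
    jwt_access_token_expire_minutes: int = 30
    app_url: str = "http://localhost:8000"
    debug: bool = False

    class Config:
        env_file = ".env"  # also falls back to process environment


settings = Settings()
print(settings.redis_host)
```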
## Development Deployment
### Using Docker Compose
1. Clone the repository:
```bash
git clone https://github.com/your-username/evo-ai.git
cd evo-ai
```
2. Create a `.env` file:
```bash
cp .env.example .env
# Edit .env with your configuration
```
3. Start the development environment:
```bash
make docker-up
```
4. Apply database migrations:
```bash
make docker-migrate
```
5. Access the API at `http://localhost:8000`
### Local Development (without Docker)
1. Clone the repository:
```bash
git clone https://github.com/your-username/evo-ai.git
cd evo-ai
```
2. Create a virtual environment:
```bash
python -m venv .venv
source .venv/bin/activate # Linux/Mac
# or
.venv\Scripts\activate # Windows
```
3. Install dependencies:
```bash
pip install -r requirements.txt
```
4. Create a `.env` file:
```bash
cp .env.example .env
# Edit .env with your configuration
```
5. Apply database migrations:
```bash
make alembic-upgrade
```
6. Start the development server:
```bash
make run
```
7. Access the API at `http://localhost:8000`
## Production Deployment
### Docker Swarm
1. Initialize Docker Swarm (if not already done):
```bash
docker swarm init
```
2. Create a `.env` file for production:
```bash
cp .env.example .env.prod
# Edit .env.prod with your production configuration
```
3. Deploy the stack:
```bash
docker stack deploy -c docker-compose.prod.yml evo-ai
```
4. Verify the deployment:
```bash
docker stack ps evo-ai
```
### Kubernetes
1. Create Kubernetes configuration files:
**postgres-deployment.yaml**:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: evo-ai-secrets
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: evo-ai-secrets
                  key: POSTGRES_PASSWORD
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: evo-ai-secrets
                  key: POSTGRES_DB
          volumeMounts:
            - name: postgres-data
              mountPath: /var/lib/postgresql/data
      volumes:
        - name: postgres-data
          persistentVolumeClaim:
            claimName: postgres-pvc
```
**redis-deployment.yaml**:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis:6
          ports:
            - containerPort: 6379
          volumeMounts:
            - name: redis-data
              mountPath: /data
      volumes:
        - name: redis-data
          persistentVolumeClaim:
            claimName: redis-pvc
```
**api-deployment.yaml**:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: evo-ai-api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: evo-ai-api
  template:
    metadata:
      labels:
        app: evo-ai-api
    spec:
      containers:
        - name: evo-ai-api
          image: your-registry/evo-ai-api:latest
          ports:
            - containerPort: 8000
          envFrom:
            - secretRef:
                name: evo-ai-secrets
            - configMapRef:
                name: evo-ai-config
          readinessProbe:
            httpGet:
              path: /api/v1/health
              port: 8000
            initialDelaySeconds: 5
            periodSeconds: 10
          livenessProbe:
            httpGet:
              path: /api/v1/health
              port: 8000
            initialDelaySeconds: 15
            periodSeconds: 20
```
2. Create Kubernetes secrets:
```bash
kubectl create secret generic evo-ai-secrets \
--from-literal=POSTGRES_USER=username \
--from-literal=POSTGRES_PASSWORD=password \
--from-literal=POSTGRES_DB=evo_ai \
--from-literal=JWT_SECRET_KEY=your-secret-key \
--from-literal=SENDGRID_API_KEY=your-sendgrid-api-key
```
3. Create ConfigMap:
```bash
kubectl create configmap evo-ai-config \
--from-literal=POSTGRES_CONNECTION_STRING=postgresql://username:password@postgres:5432/evo_ai \
--from-literal=REDIS_HOST=redis \
--from-literal=REDIS_PORT=6379 \
--from-literal=JWT_ALGORITHM=HS256 \
--from-literal=JWT_ACCESS_TOKEN_EXPIRE_MINUTES=30 \
--from-literal=EMAIL_FROM=noreply@your-domain.com \
--from-literal=EMAIL_FROM_NAME="Evo AI Platform" \
--from-literal=APP_URL=https://your-domain.com \
--from-literal=ENVIRONMENT=production \
--from-literal=DEBUG=false
```
4. Apply the configurations:
```bash
kubectl apply -f postgres-deployment.yaml
kubectl apply -f redis-deployment.yaml
kubectl apply -f api-deployment.yaml
```
5. Create services:
```bash
kubectl expose deployment postgres --port=5432 --type=ClusterIP
kubectl expose deployment redis --port=6379 --type=ClusterIP
kubectl expose deployment evo-ai-api --port=80 --target-port=8000 --type=LoadBalancer
```
## Scaling Considerations
### Database Scaling
For production environments with high load, consider:
1. **PostgreSQL Replication**:
- Set up master-slave replication
- Use read replicas for read-heavy operations
- Consider using a managed PostgreSQL service (AWS RDS, Azure Database, etc.)
2. **Redis Cluster**:
- Implement Redis Sentinel for high availability
- Use Redis Cluster for horizontal scaling
- Consider using a managed Redis service (AWS ElastiCache, Azure Cache, etc.)
### API Scaling
1. **Horizontal Scaling**:
- Increase the number of API containers/pods
- Use a load balancer to distribute traffic
2. **Vertical Scaling**:
- Increase resources (CPU, memory) for API containers
3. **Caching Strategy**:
- Implement response caching for frequent requests
- Use Redis for distributed caching
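A sketch of the read-through caching pattern with `redis-py`; `load_agent_from_db` is a hypothetical placeholder for the real data-access call:
```python
import json

import redis

r = redis.Redis(host="redis", port=6379, db=0)


def load_agent_from_db(agent_id: str) -> dict:
    """Placeholder for the real database lookup."""
    return {"id": agent_id, "name": "example"}


def get_agent_cached(agent_id: str, ttl_seconds: int = 60) -> dict:
    """Read-through cache: try Redis first, fall back to the database."""
    key = f"agent:{agent_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    agent = load_agent_from_db(agent_id)
    r.setex(key, ttl_seconds, json.dumps(agent))  # TTL limits staleness
    return agent
```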
## Monitoring and Logging
### Monitoring
1. **Prometheus and Grafana**:
- Set up Prometheus for metrics collection
- Configure Grafana dashboards for visualization
- Monitor API response times, error rates, and system resources
2. **Health Checks**:
- Use the `/api/v1/health` endpoint to check system health
- Set up alerts for when services are down
### Logging
1. **Centralized Logging**:
- Configure ELK Stack (Elasticsearch, Logstash, Kibana)
- Or use a managed logging service (AWS CloudWatch, Datadog, etc.)
2. **Log Levels**:
- In production, set log level to INFO or WARNING
- In development, set log level to DEBUG for more details
## Backup and Recovery
1. **Database Backups**:
- Schedule regular PostgreSQL backups
- Store backups in a secure location (e.g., AWS S3, Azure Blob Storage)
- Test restoration procedures regularly
2. **Application State**:
- Store configuration in version control
- Document environment setup and dependencies
## SSL Configuration
For production deployments, SSL is required:
1. **Using Nginx**:
```nginx
server {
    listen 80;
    server_name your-domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name your-domain.com;

    ssl_certificate /path/to/certificate.crt;
    ssl_certificate_key /path/to/private.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    location / {
        proxy_pass http://evo-ai-api:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```
2. **Using Let's Encrypt and Certbot**:
```bash
certbot --nginx -d your-domain.com
```
## Troubleshooting
### Common Issues
1. **Database Connection Errors**:
- Check PostgreSQL connection string
- Verify network connectivity between API and database
- Check database credentials
2. **Redis Connection Issues**:
- Verify Redis host and port
- Check network connectivity to Redis
- Ensure Redis service is running
3. **Email Sending Failures**:
- Verify SendGrid API key
- Check email templates
- Test email sending with SendGrid debugging tools
### Debugging
1. **Container Logs**:
```bash
# Docker
docker logs <container_id>
# Kubernetes
kubectl logs <pod_name>
```
2. **API Logs**:
- Check `/logs` directory
- Set DEBUG=true in development to get more detailed logs
3. **Database Connection Testing**:
```bash
psql postgresql://username:password@postgres:5432/evo_ai
```
4. **Health Check**:
```bash
curl http://localhost:8000/api/v1/health
```
## Security Considerations
1. **API Security**:
- Keep JWT_SECRET_KEY secure and random
- Rotate JWT secrets periodically
- Set appropriate token expiration times
2. **Network Security**:
- Use internal networks for database and Redis
- Expose only the API through a load balancer
- Implement a Web Application Firewall (WAF)
3. **Data Protection**:
- Encrypt sensitive data in database
- Implement proper access controls
- Regularly audit system access
## Continuous Integration/Deployment
### GitHub Actions Example
```yaml
name: Deploy Evo AI

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest
      - name: Run tests
        run: |
          pytest
      - name: Build Docker image
        run: |
          docker build -t your-registry/evo-ai-api:latest .
      - name: Push to registry
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker push your-registry/evo-ai-api:latest
      - name: Deploy to production
        run: |
          # Deployment commands depending on your environment
          # For example, if using Kubernetes:
          kubectl set image deployment/evo-ai-api evo-ai-api=your-registry/evo-ai-api:latest
```
## Conclusion
This deployment guide covers the basics of deploying the Evo AI platform in different environments. For specific needs or custom deployments, additional configuration may be required. Always follow security best practices and ensure proper monitoring and backup procedures are in place.