# Evo AI - Deployment Guide
This document provides detailed instructions for deploying the Evo AI platform in different environments.
## Prerequisites
- Docker and Docker Compose
- PostgreSQL database
- Redis instance
- SendGrid account for email services
- Domain name (for production deployments)
- SSL certificate (for production deployments)
## Environment Configuration

The Evo AI platform uses environment variables for configuration. Create a `.env` file based on the example below:
```env
# Database Configuration
POSTGRES_CONNECTION_STRING=postgresql://username:password@postgres:5432/evo_ai
POSTGRES_USER=username
POSTGRES_PASSWORD=password
POSTGRES_DB=evo_ai

# Redis Configuration
REDIS_HOST=redis
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD=

# JWT Configuration
JWT_SECRET_KEY=your-secret-key-at-least-32-characters
JWT_ALGORITHM=HS256
JWT_ACCESS_TOKEN_EXPIRE_MINUTES=30

# Email Configuration
SENDGRID_API_KEY=your-sendgrid-api-key
EMAIL_FROM=noreply@your-domain.com
EMAIL_FROM_NAME=Evo AI Platform

# Application Configuration
APP_URL=https://your-domain.com
ENVIRONMENT=production  # development, testing, or production
DEBUG=false
```
## Development Deployment

### Using Docker Compose
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/evo-ai.git
   cd evo-ai
   ```

2. Create a `.env` file:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

3. Start the development environment:

   ```bash
   make docker-up
   ```

4. Apply database migrations:

   ```bash
   make docker-migrate
   ```

5. Access the API at `http://localhost:8000`.
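Once the containers are up, a quick way to confirm the API is responding is to call the health endpoint used elsewhere in this guide:

```bash
curl http://localhost:8000/api/v1/health
```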
### Local Development (without Docker)
1. Clone the repository:

   ```bash
   git clone https://github.com/your-username/evo-ai.git
   cd evo-ai
   ```

2. Create a virtual environment:

   ```bash
   python -m venv .venv
   source .venv/bin/activate  # Linux/Mac
   # or
   .venv\Scripts\activate     # Windows
   ```

3. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

4. Create a `.env` file:

   ```bash
   cp .env.example .env
   # Edit .env with your configuration
   ```

5. Apply database migrations:

   ```bash
   make alembic-upgrade
   ```

6. Start the development server:

   ```bash
   make run
   ```

7. Access the API at `http://localhost:8000`.
## Production Deployment

### Docker Swarm
1. Initialize Docker Swarm (if not already done):

   ```bash
   docker swarm init
   ```

2. Create a `.env` file for production:

   ```bash
   cp .env.example .env.prod
   # Edit .env.prod with your production configuration
   ```

3. Deploy the stack:

   ```bash
   docker stack deploy -c docker-compose.prod.yml evo-ai
   ```

4. Verify the deployment:

   ```bash
   docker stack ps evo-ai
   ```
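If the stack does not come up cleanly, the service logs are usually the quickest diagnostic. The exact service name depends on how the API service is named in `docker-compose.prod.yml`; `evo-ai_api` below is an assumption (stack name plus a service called `api`), so adjust it to match your stack file:

```bash
# List the services in the stack, then follow the API service logs
docker stack services evo-ai
docker service logs -f evo-ai_api
```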
### Kubernetes
1. Create Kubernetes configuration files:

   `postgres-deployment.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: postgres
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: postgres
     template:
       metadata:
         labels:
           app: postgres
       spec:
         containers:
           - name: postgres
             image: postgres:13
             ports:
               - containerPort: 5432
             env:
               - name: POSTGRES_USER
                 valueFrom:
                   secretKeyRef:
                     name: evo-ai-secrets
                     key: POSTGRES_USER
               - name: POSTGRES_PASSWORD
                 valueFrom:
                   secretKeyRef:
                     name: evo-ai-secrets
                     key: POSTGRES_PASSWORD
               - name: POSTGRES_DB
                 valueFrom:
                   secretKeyRef:
                     name: evo-ai-secrets
                     key: POSTGRES_DB
             volumeMounts:
               - name: postgres-data
                 mountPath: /var/lib/postgresql/data
         volumes:
           - name: postgres-data
             persistentVolumeClaim:
               claimName: postgres-pvc
   ```

   `redis-deployment.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: redis
   spec:
     replicas: 1
     selector:
       matchLabels:
         app: redis
     template:
       metadata:
         labels:
           app: redis
       spec:
         containers:
           - name: redis
             image: redis:6
             ports:
               - containerPort: 6379
             volumeMounts:
               - name: redis-data
                 mountPath: /data
         volumes:
           - name: redis-data
             persistentVolumeClaim:
               claimName: redis-pvc
   ```

   `api-deployment.yaml`:

   ```yaml
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: evo-ai-api
   spec:
     replicas: 3
     selector:
       matchLabels:
         app: evo-ai-api
     template:
       metadata:
         labels:
           app: evo-ai-api
       spec:
         containers:
           - name: evo-ai-api
             image: your-registry/evo-ai-api:latest
             ports:
               - containerPort: 8000
             envFrom:
               - secretRef:
                   name: evo-ai-secrets
               - configMapRef:
                   name: evo-ai-config
             readinessProbe:
               httpGet:
                 path: /api/v1/health
                 port: 8000
               initialDelaySeconds: 5
               periodSeconds: 10
             livenessProbe:
               httpGet:
                 path: /api/v1/health
                 port: 8000
               initialDelaySeconds: 15
               periodSeconds: 20
   ```
2. Create Kubernetes secrets:

   ```bash
   kubectl create secret generic evo-ai-secrets \
     --from-literal=POSTGRES_USER=username \
     --from-literal=POSTGRES_PASSWORD=password \
     --from-literal=POSTGRES_DB=evo_ai \
     --from-literal=JWT_SECRET_KEY=your-secret-key \
     --from-literal=SENDGRID_API_KEY=your-sendgrid-api-key
   ```

3. Create a ConfigMap:

   ```bash
   kubectl create configmap evo-ai-config \
     --from-literal=POSTGRES_CONNECTION_STRING=postgresql://username:password@postgres:5432/evo_ai \
     --from-literal=REDIS_HOST=redis \
     --from-literal=REDIS_PORT=6379 \
     --from-literal=JWT_ALGORITHM=HS256 \
     --from-literal=JWT_ACCESS_TOKEN_EXPIRE_MINUTES=30 \
     --from-literal=EMAIL_FROM=noreply@your-domain.com \
     --from-literal=EMAIL_FROM_NAME="Evo AI Platform" \
     --from-literal=APP_URL=https://your-domain.com \
     --from-literal=ENVIRONMENT=production \
     --from-literal=DEBUG=false
   ```

4. Apply the configurations:

   ```bash
   kubectl apply -f postgres-deployment.yaml
   kubectl apply -f redis-deployment.yaml
   kubectl apply -f api-deployment.yaml
   ```

5. Create services:

   ```bash
   kubectl expose deployment postgres --port=5432 --type=ClusterIP
   kubectl expose deployment redis --port=6379 --type=ClusterIP
   kubectl expose deployment evo-ai-api --port=80 --target-port=8000 --type=LoadBalancer
   ```
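Once everything is applied, a quick status check with standard kubectl commands (the `app=evo-ai-api` label matches the deployment manifest above):

```bash
# Confirm the API pods are running and the service received an external address
kubectl get pods -l app=evo-ai-api
kubectl get svc evo-ai-api
```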
## Scaling Considerations

### Database Scaling
For production environments with high load, consider:
1. **PostgreSQL Replication:**
   - Set up master-slave replication (see the check after this list)
   - Use read replicas for read-heavy operations
   - Consider using a managed PostgreSQL service (AWS RDS, Azure Database, etc.)

2. **Redis Cluster:**
   - Implement Redis Sentinel for high availability
   - Use Redis Cluster for horizontal scaling
   - Consider using a managed Redis service (AWS ElastiCache, Azure Cache, etc.)
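Once replication is configured, the standard `pg_stat_replication` view on the primary shows whether replicas are attached and how far behind they are:

```bash
# Run against the primary: lists connected replicas and their replay lag
psql "$POSTGRES_CONNECTION_STRING" -c "SELECT client_addr, state, replay_lag FROM pg_stat_replication;"
```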
### API Scaling
1. **Horizontal Scaling:**
   - Increase the number of API containers/pods (see the commands after this list)
   - Use a load balancer to distribute traffic

2. **Vertical Scaling:**
   - Increase resources (CPU, memory) for API containers

3. **Caching Strategy:**
   - Implement response caching for frequent requests
   - Use Redis for distributed caching
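The exact scaling command depends on the orchestrator. Both of the following are standard forms; the Swarm service name `evo-ai_api` is an assumption based on the stack name used earlier:

```bash
# Kubernetes: scale the API deployment to 5 replicas
kubectl scale deployment/evo-ai-api --replicas=5

# Docker Swarm: scale the API service (service name assumed)
docker service scale evo-ai_api=5
```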
## Monitoring and Logging

### Monitoring
1. **Prometheus and Grafana:**
   - Set up Prometheus for metrics collection
   - Configure Grafana dashboards for visualization
   - Monitor API response times, error rates, and system resources

2. **Health Checks:**
   - Use the `/api/v1/health` endpoint to check system health (see the probe sketch after this list)
   - Set up alerts for when services are down
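A minimal probe sketch that a cron job or external monitor could run; it assumes the health endpoint returns HTTP 200 when the system is healthy and that `APP_URL` is set as in the environment configuration:

```bash
#!/usr/bin/env bash
# Exit non-zero so cron/monitoring can alert when the API is unhealthy
APP_URL="${APP_URL:-http://localhost:8000}"

if ! curl -fsS "$APP_URL/api/v1/health" > /dev/null; then
  echo "Evo AI health check failed" >&2
  exit 1
fi
echo "Evo AI is healthy"
```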
### Logging
1. **Centralized Logging:**
   - Configure the ELK Stack (Elasticsearch, Logstash, Kibana)
   - Or use a managed logging service (AWS CloudWatch, Datadog, etc.)

2. **Log Levels:**
   - In production, set the log level to INFO or WARNING
   - In development, set the log level to DEBUG for more details
## Backup and Recovery
1. **Database Backups:**
   - Schedule regular PostgreSQL backups (see the sketch after this list)
   - Store backups in a secure location (e.g., AWS S3, Azure Blob Storage)
   - Test restoration procedures regularly

2. **Application State:**
   - Store configuration in version control
   - Document environment setup and dependencies
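A minimal backup sketch using standard tooling (`pg_dump`, `gzip`, and the AWS CLI); the bucket name is a placeholder and the AWS CLI is assumed to be installed and configured:

```bash
#!/usr/bin/env bash
# Dump the database, compress it, and upload it to S3 (placeholder bucket name)
set -euo pipefail

BACKUP_FILE="evo_ai_$(date +%Y%m%d_%H%M%S).sql.gz"
pg_dump "$POSTGRES_CONNECTION_STRING" | gzip > "/tmp/$BACKUP_FILE"
aws s3 cp "/tmp/$BACKUP_FILE" "s3://your-backup-bucket/evo-ai/$BACKUP_FILE"
rm "/tmp/$BACKUP_FILE"
```

Run it from cron on a schedule that matches your recovery objectives, and periodically restore a backup into a scratch database to confirm the procedure works.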
## SSL Configuration
For production deployments, SSL is required:
1. **Using Nginx:**

   ```nginx
   server {
       listen 80;
       server_name your-domain.com;
       return 301 https://$host$request_uri;
   }

   server {
       listen 443 ssl;
       server_name your-domain.com;

       ssl_certificate /path/to/certificate.crt;
       ssl_certificate_key /path/to/private.key;
       ssl_protocols TLSv1.2 TLSv1.3;
       ssl_ciphers HIGH:!aNULL:!MD5;

       location / {
           proxy_pass http://evo-ai-api:8000;
           proxy_set_header Host $host;
           proxy_set_header X-Real-IP $remote_addr;
           proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
           proxy_set_header X-Forwarded-Proto $scheme;
       }
   }
   ```
2. **Using Let's Encrypt and Certbot:**

   ```bash
   certbot --nginx -d your-domain.com
   ```
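Let's Encrypt certificates expire after 90 days, so renewal should be automated. `certbot renew` with `--dry-run` and `--deploy-hook` are standard Certbot options; the cron schedule below is only an example:

```bash
# Confirm renewal works, then schedule the real command (e.g. via cron)
certbot renew --dry-run

# Example crontab entry: try renewal twice a day and reload nginx after a successful renewal
0 3,15 * * * certbot renew --quiet --deploy-hook "systemctl reload nginx"
```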
## Troubleshooting

### Common Issues
1. **Database Connection Errors:**
   - Check the PostgreSQL connection string
   - Verify network connectivity between the API and the database
   - Check database credentials

2. **Redis Connection Issues:**
   - Verify the Redis host and port
   - Check network connectivity to Redis
   - Ensure the Redis service is running (see the checks after this list)

3. **Email Sending Failures:**
   - Verify the SendGrid API key (see the checks after this list)
   - Check email templates
   - Test email sending with SendGrid debugging tools
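Two quick checks for the Redis and SendGrid items above. `redis-cli ping` is standard Redis tooling; the SendGrid call uses the public `/v3/scopes` endpoint, which returns the permissions attached to an API key:

```bash
# Redis: prints PONG if the instance is reachable
redis-cli -h "$REDIS_HOST" -p "$REDIS_PORT" ping

# SendGrid: a 200 status code means the API key is valid
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "Authorization: Bearer $SENDGRID_API_KEY" \
  https://api.sendgrid.com/v3/scopes
```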
### Debugging
1. **Container Logs:**

   ```bash
   # Docker
   docker logs <container_id>

   # Kubernetes
   kubectl logs <pod_name>
   ```

2. **API Logs:**
   - Check the `/logs` directory
   - Set `DEBUG=true` in development to get more detailed logs

3. **Database Connection Testing:**

   ```bash
   psql postgresql://username:password@postgres:5432/evo_ai
   ```

4. **Health Check:**

   ```bash
   curl http://localhost:8000/api/v1/health
   ```
## Security Considerations
1. **API Security:**
   - Keep `JWT_SECRET_KEY` secure and random (see the commands after this list)
   - Rotate JWT secrets periodically
   - Set appropriate token expiration times

2. **Network Security:**
   - Use internal networks for the database and Redis
   - Expose only the API through a load balancer
   - Implement a Web Application Firewall (WAF)

3. **Data Protection:**
   - Encrypt sensitive data in the database
   - Implement proper access controls
   - Regularly audit system access
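One common way to generate a sufficiently long random value for `JWT_SECRET_KEY` (either command works; both are standard tools):

```bash
# 32 random bytes, hex-encoded (64 characters)
openssl rand -hex 32

# or, using Python's standard library
python -c "import secrets; print(secrets.token_urlsafe(32))"
```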
## Continuous Integration/Deployment

### GitHub Actions Example
```yaml
name: Deploy Evo AI

on:
  push:
    branches: [ main ]

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Set up Python
        uses: actions/setup-python@v2
        with:
          python-version: '3.9'

      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest

      - name: Run tests
        run: |
          pytest

      - name: Build Docker image
        run: |
          docker build -t your-registry/evo-ai-api:latest .

      - name: Push to registry
        run: |
          echo "${{ secrets.DOCKER_PASSWORD }}" | docker login -u "${{ secrets.DOCKER_USERNAME }}" --password-stdin
          docker push your-registry/evo-ai-api:latest

      - name: Deploy to production
        run: |
          # Deployment commands depending on your environment
          # For example, if using Kubernetes:
          kubectl set image deployment/evo-ai-api evo-ai-api=your-registry/evo-ai-api:latest
```
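After `kubectl set image` runs, it is worth waiting for the rollout to complete and knowing the rollback command; both are standard kubectl subcommands:

```bash
# Wait for the new pods to become ready
kubectl rollout status deployment/evo-ai-api

# Roll back to the previous image if the release misbehaves
kubectl rollout undo deployment/evo-ai-api
```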
## Conclusion
This deployment guide covers the basics of deploying the Evo AI platform in different environments. For specific needs or custom deployments, additional configuration may be required. Always follow security best practices and ensure proper monitoring and backup procedures are in place.