chore(cleanup): remove unused files and update documentation for clarity

Davidson Gomes 2025-05-07 06:43:33 -03:00
parent bbc18371dd
commit 422639a629
19 changed files with 333 additions and 1094 deletions

50
.env

@ -1,50 +0,0 @@
API_TITLE="Evo API"
API_DESCRIPTION="API para execução de agentes de IA"
API_VERSION="1.0.0"
API_URL="http://localhost:8000"
ORGANIZATION_NAME="Evo AI"
ORGANIZATION_URL="https://evoai.evoapicloud.com"
# Database settings
POSTGRES_CONNECTION_STRING="postgresql://postgres:root@localhost:5432/evo_ai"
# Logging settings
LOG_LEVEL="INFO"
LOG_DIR="logs"
# Redis settings
REDIS_HOST="localhost"
REDIS_PORT=6379
REDIS_DB=8
REDIS_PASSWORD=""
REDIS_SSL=false
REDIS_KEY_PREFIX="a2a:"
REDIS_TTL=3600
# Tools cache TTL in seconds (1 hour)
TOOLS_CACHE_TTL=3600
# JWT settings
JWT_SECRET_KEY="f6884ef5be4c279686ff90f0ed9d4656685eef9807245019ac94a3fbe32b0938"
JWT_ALGORITHM="HS256"
JWT_EXPIRATION_TIME=3600
# SendGrid
SENDGRID_API_KEY="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"
EMAIL_FROM="noreply@evolution-api.com"
APP_URL="https://evoai.evoapicloud.com"
# Server settings
HOST="0.0.0.0"
PORT=8000
DEBUG=false
# Seeders settings
ADMIN_EMAIL="admin@evoai.com"
ADMIN_INITIAL_PASSWORD="senhaforte123"
DEMO_EMAIL="demo@exemplo.com"
DEMO_PASSWORD="demo123"
DEMO_CLIENT_NAME="Cliente Demo"
# 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

2
.gitignore vendored

@ -130,3 +130,5 @@ celerybeat-schedule
Thumbs.db
uv.lock
migrations/versions/*

174
README.md

@ -1,6 +1,6 @@
# Evo AI - AI Agents Platform
Evo AI is an free platform for creating and managing AI agents, enabling integration with different AI models and services.
Evo AI is an open-source platform for creating and managing AI agents, enabling integration with different AI models and services.
## 🚀 Overview
@ -13,6 +13,7 @@ The Evo AI platform allows:
- Custom tools management
- JWT authentication with email verification
- **Agent 2 Agent (A2A) Protocol Support**: Interoperability between AI agents following Google's A2A specification
- **Workflow Agent with LangGraph**: Building complex agent workflows with LangGraph and ReactFlow
## 🤖 Agent Types and Creation
@ -120,6 +121,27 @@ Executes sub-agents in a loop with a defined maximum number of iterations.
}
```
### 6. Workflow Agent
Executes sub-agents in a custom workflow defined by a graph structure. This agent type uses LangGraph for implementing complex agent workflows with conditional execution paths.
```json
{
"client_id": "{{client_id}}",
"name": "workflow_agent",
"type": "workflow",
"config": {
"sub_agents": ["agent-uuid-1", "agent-uuid-2", "agent-uuid-3"],
"workflow": {
"nodes": [],
"edges": []
}
}
}
```
The workflow structure is built using ReactFlow in the frontend, allowing visual creation and editing of complex agent workflows with nodes (representing agents or decision points) and edges (representing flow connections).
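For intuition, the routing over this nodes/edges structure can be pictured with a small stand-alone sketch. The flow below is hypothetical (the real implementation builds a LangGraph `StateGraph` from the same data, and real edges also carry source handles for conditional branches):

```python
# Hypothetical sketch: following a ReactFlow-style graph of nodes and edges.
# Node ids and the flow itself are illustrative, not taken from the API.
flow = {
    "nodes": [
        {"id": "start", "type": "start-node"},
        {"id": "agent-1", "type": "agent-node"},
        {"id": "end", "type": "end-node"},
    ],
    "edges": [
        {"source": "start", "target": "agent-1"},
        {"source": "agent-1", "target": "end"},
    ],
}


def walk(flow: dict, start: str = "start") -> list[str]:
    """Follow edges from the start node and return the visited node ids."""
    # Map each source node to its single target (ignores handles for brevity).
    edges_map = {e["source"]: e["target"] for e in flow["edges"]}
    visited = [start]
    while visited[-1] in edges_map:
        visited.append(edges_map[visited[-1]])
    return visited


print(walk(flow))  # ['start', 'agent-1', 'end']
```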
### Common Characteristics
- All agent types can have sub-agents
@ -268,6 +290,8 @@ Authorization: Bearer your-token-jwt
- **SendGrid**: Email service for notifications
- **Jinja2**: Template engine for email rendering
- **Bcrypt**: Password hashing and security
- **LangGraph**: Framework for building stateful, multi-agent workflows
- **ReactFlow**: Library for building node-based visual workflows
## 🤖 Agent 2 Agent (A2A) Protocol Support
@ -337,9 +361,25 @@ src/
└── config/ # Configurations
```
## 📋 Prerequisites
Before starting, make sure you have the following installed:
- **Python**: 3.10 or higher
- **PostgreSQL**: 13.0 or higher
- **Redis**: 6.0 or higher
- **Git**: For version control
- **Make**: For running Makefile commands (usually pre-installed on Linux/Mac, for Windows use WSL or install via chocolatey)
You'll also need the following accounts/API keys:
- **OpenAI API Key**: Or API key from another AI provider
- **SendGrid Account**: For email functionality
- **Google API Key**: If using Google's A2A protocol implementation
## 📋 Requirements
- Python 3.8+
- Python 3.10+
- PostgreSQL
- Redis
- OpenAI API Key (or other AI provider)
@ -365,6 +405,14 @@ venv\Scripts\activate # Windows
3. Install dependencies:
```bash
pip install -e . # For basic installation
# or
pip install -e ".[dev]" # For development dependencies
```
Or using the Makefile:
```bash
make install # For basic installation
# or
@ -378,12 +426,81 @@ cp .env.example .env
# Edit the .env file with your settings
```
5. Run migrations:
5. Initialize the database and run migrations:
```bash
make alembic-upgrade
make alembic-migrate message="init migrations"
```
6. Seed the database with initial data:
```bash
make seed-all
```
## 🚀 Getting Started
After installation, follow these steps to set up your first agent:
1. **Configure MCP Server**: Set up your Model Control Protocol server configuration first
2. **Create Client or Register**: Create a new client or register a user account
3. **Create Agents**: Set up the agents according to your needs (LLM, A2A, Sequential, Parallel, Loop, or Workflow)
### Configuration (.env file)
Configure your environment using the following key settings:
```bash
# Database settings
POSTGRES_CONNECTION_STRING="postgresql://postgres:root@localhost:5432/evo_ai"
# Redis settings
REDIS_HOST="localhost"
REDIS_PORT=6379
REDIS_DB=0
REDIS_PASSWORD="your-redis-password"
# JWT settings
JWT_SECRET_KEY="your-jwt-secret-key"
JWT_ALGORITHM="HS256"
JWT_EXPIRATION_TIME=30 # In minutes
# SendGrid for emails
SENDGRID_API_KEY="your-sendgrid-api-key"
EMAIL_FROM="noreply@yourdomain.com"
APP_URL="https://yourdomain.com"
# A2A settings
A2A_TASK_TTL=3600
A2A_HISTORY_TTL=86400
```
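The application reads these values with pydantic-settings and python-dotenv (see `src/config`). As a rough stand-alone sketch of the same pattern using only the standard library — the field subset and defaults below are illustrative, not the full settings class:

```python
import os
from dataclasses import dataclass


@dataclass
class Settings:
    """Illustrative subset of the settings above; names mirror the .env keys."""

    postgres_connection_string: str
    redis_host: str
    redis_port: int
    jwt_expiration_time: int  # in minutes

    @classmethod
    def from_env(cls) -> "Settings":
        # Fall back to development defaults when a variable is not set.
        return cls(
            postgres_connection_string=os.getenv(
                "POSTGRES_CONNECTION_STRING",
                "postgresql://postgres:root@localhost:5432/evo_ai",
            ),
            redis_host=os.getenv("REDIS_HOST", "localhost"),
            redis_port=int(os.getenv("REDIS_PORT", "6379")),
            jwt_expiration_time=int(os.getenv("JWT_EXPIRATION_TIME", "30")),
        )


settings = Settings.from_env()
print(settings.redis_host, settings.redis_port)
```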
### Project Dependencies
The project uses modern Python packaging standards with `pyproject.toml`. Key dependencies include:
```toml
dependencies = [
"fastapi==0.115.12",
"uvicorn==0.34.2",
"pydantic==2.11.3",
"sqlalchemy==2.0.40",
"psycopg2==2.9.10",
"alembic==1.15.2",
"redis==5.3.0",
"langgraph==0.4.1",
# ... other dependencies
]
```
For development, additional packages can be installed with:
```bash
pip install -e ".[dev]"
```
This includes development tools like black, flake8, pytest, and more.
## 🔐 Authentication
The API uses JWT (JSON Web Token) authentication. To access the endpoints, you need to:
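For intuition, an HS256 JWT is just `base64url(header).base64url(payload)` signed with HMAC-SHA256 under `JWT_SECRET_KEY`. A minimal standard-library sketch of that mechanism (the API itself uses a JWT library; the secret and claims below are placeholders):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = "your-jwt-secret-key"  # stands in for JWT_SECRET_KEY


def _b64(data: bytes) -> str:
    """Base64url-encode without padding, as the JWT spec requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_hs256(payload: dict, secret: str) -> str:
    """Build a token the way HS256 defines it: header.payload.signature."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = _b64(json.dumps(payload).encode())
    sig = hmac.new(
        secret.encode(), f"{header}.{body}".encode(), hashlib.sha256
    ).digest()
    return f"{header}.{body}.{_b64(sig)}"


def verify_hs256(token: str, secret: str) -> bool:
    """Recompute the signature and compare in constant time."""
    header, body, sig = token.split(".")
    expected = hmac.new(
        secret.encode(), f"{header}.{body}".encode(), hashlib.sha256
    ).digest()
    return hmac.compare_digest(_b64(expected), sig)


token = sign_hs256({"sub": "user-id", "exp": int(time.time()) + 1800}, SECRET)
print(verify_hs256(token, SECRET))  # True
```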
@ -482,21 +599,37 @@ The interactive API documentation is available at:
## 🤝 Contributing
We welcome contributions from the community! Here's how you can help:
1. Fork the project
2. Create a feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
3. Make your changes and add tests if possible
4. Run tests and make sure they pass
5. Commit your changes following conventional commits format (`feat: add amazing feature`)
6. Push to the branch (`git push origin feature/AmazingFeature`)
7. Open a Pull Request
Please read our [Contributing Guidelines](CONTRIBUTING.md) for more details.
## 📄 License
This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
## 📊 Stargazers
[![Stargazers repo roster for @your-username/evo-ai](https://reporoster.com/stars/your-username/evo-ai)](https://github.com/your-username/evo-ai/stargazers)
## 🔄 Forks
[![Forkers repo roster for @your-username/evo-ai](https://reporoster.com/forks/your-username/evo-ai)](https://github.com/your-username/evo-ai/network/members)
## 🙏 Acknowledgments
- [FastAPI](https://fastapi.tiangolo.com/)
- [SQLAlchemy](https://www.sqlalchemy.org/)
- [Google ADK](https://github.com/google/adk)
- [LangGraph](https://github.com/langchain-ai/langgraph)
- [ReactFlow](https://reactflow.dev/)
## 👨‍💻 Development Commands
@ -526,7 +659,7 @@ make clear-cache # Clear project cache
## 🐳 Running with Docker
To facilitate deployment and execution of the application, we provide Docker and Docker Compose configurations.
For quick setup and deployment, we provide Docker and Docker Compose configurations.
### Prerequisites
@ -535,7 +668,16 @@ To facilitate deployment and execution of the application, we provide Docker and
### Configuration
1. Configure the necessary environment variables in the `.env` file at the root of the project (or use system environment variables)
1. Create and configure the `.env` file:
```bash
cp .env.example .env
# Edit the .env file with your settings, especially:
# - POSTGRES_CONNECTION_STRING
# - REDIS_HOST (should be "redis" when using Docker)
# - JWT_SECRET_KEY
# - SENDGRID_API_KEY
```
2. Build the Docker image:
@ -549,19 +691,25 @@ make docker-build
make docker-up
```
4. Populate the database with initial data:
4. Apply migrations (first time only):
```bash
docker-compose exec api python -m alembic upgrade head
```
5. Populate the database with initial data:
```bash
make docker-seed
```
5. To check application logs:
6. To check application logs:
```bash
make docker-logs
```
6. To stop the services:
7. To stop the services:
```bash
make docker-down
@ -587,7 +735,7 @@ Docker Compose sets up persistent volumes for:
The main environment variables used by the API container:
- `POSTGRES_CONNECTION_STRING`: PostgreSQL connection string
- `REDIS_HOST`: Redis host
- `REDIS_HOST`: Redis host (use "redis" when running with Docker)
- `JWT_SECRET_KEY`: Secret key for JWT token generation
- `SENDGRID_API_KEY`: SendGrid API key for sending emails
- `EMAIL_FROM`: Email used as sender


@ -1,132 +0,0 @@
"""init migrations
Revision ID: c107446e38aa
Revises:
Create Date: 2025-05-02 08:01:10.713496
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = 'c107446e38aa'
down_revision: Union[str, None] = None
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
"""Upgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('clients',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('email', sa.String(), nullable=False),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_clients_email'), 'clients', ['email'], unique=True)
op.create_table('mcp_servers',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('config_type', sa.String(), nullable=False),
sa.Column('config_json', sa.JSON(), nullable=False),
sa.Column('environments', sa.JSON(), nullable=False),
sa.Column('tools', sa.JSON(), nullable=False),
sa.Column('type', sa.String(), nullable=False),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), nullable=True),
sa.CheckConstraint("config_type IN ('studio', 'sse')", name='check_mcp_server_config_type'),
sa.CheckConstraint("type IN ('official', 'community')", name='check_mcp_server_type'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('tools',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('config_json', sa.JSON(), nullable=False),
sa.Column('environments', sa.JSON(), nullable=False),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), nullable=True),
sa.PrimaryKeyConstraint('id')
)
op.create_table('agents',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('client_id', sa.UUID(), nullable=True),
sa.Column('name', sa.String(), nullable=False),
sa.Column('description', sa.Text(), nullable=True),
sa.Column('type', sa.String(), nullable=False),
sa.Column('model', sa.String(), nullable=True),
sa.Column('api_key', sa.String(), nullable=True),
sa.Column('instruction', sa.Text(), nullable=True),
sa.Column('agent_card_url', sa.String(), nullable=True),
sa.Column('config', sa.JSON(), nullable=True),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), nullable=True),
sa.CheckConstraint("type IN ('llm', 'sequential', 'parallel', 'loop', 'a2a')", name='check_agent_type'),
sa.ForeignKeyConstraint(['client_id'], ['clients.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('contacts',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('client_id', sa.UUID(), nullable=True),
sa.Column('ext_id', sa.String(), nullable=True),
sa.Column('name', sa.String(), nullable=True),
sa.Column('meta', sa.JSON(), nullable=True),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), nullable=True),
sa.ForeignKeyConstraint(['client_id'], ['clients.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_table('users',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('email', sa.String(), nullable=False),
sa.Column('password_hash', sa.String(), nullable=False),
sa.Column('client_id', sa.UUID(), nullable=True),
sa.Column('is_active', sa.Boolean(), nullable=True),
sa.Column('is_admin', sa.Boolean(), nullable=True),
sa.Column('email_verified', sa.Boolean(), nullable=True),
sa.Column('verification_token', sa.String(), nullable=True),
sa.Column('verification_token_expiry', sa.DateTime(timezone=True), nullable=True),
sa.Column('password_reset_token', sa.String(), nullable=True),
sa.Column('password_reset_expiry', sa.DateTime(timezone=True), nullable=True),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.Column('updated_at', sa.DateTime(timezone=True), nullable=True),
sa.ForeignKeyConstraint(['client_id'], ['clients.id'], ondelete='CASCADE'),
sa.PrimaryKeyConstraint('id')
)
op.create_index(op.f('ix_users_email'), 'users', ['email'], unique=True)
op.create_table('audit_logs',
sa.Column('id', sa.UUID(), nullable=False),
sa.Column('user_id', sa.UUID(), nullable=True),
sa.Column('action', sa.String(), nullable=False),
sa.Column('resource_type', sa.String(), nullable=False),
sa.Column('resource_id', sa.String(), nullable=True),
sa.Column('details', sa.JSON(), nullable=True),
sa.Column('ip_address', sa.String(), nullable=True),
sa.Column('user_agent', sa.String(), nullable=True),
sa.Column('created_at', sa.DateTime(timezone=True), server_default=sa.text('now()'), nullable=True),
sa.ForeignKeyConstraint(['user_id'], ['users.id'], ondelete='SET NULL'),
sa.PrimaryKeyConstraint('id')
)
# ### end Alembic commands ###
def downgrade() -> None:
"""Downgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
op.drop_table('audit_logs')
op.drop_index(op.f('ix_users_email'), table_name='users')
op.drop_table('users')
op.drop_table('contacts')
op.drop_table('agents')
op.drop_table('tools')
op.drop_table('mcp_servers')
op.drop_index(op.f('ix_clients_email'), table_name='clients')
op.drop_table('clients')
# ### end Alembic commands ###


@ -1,49 +0,0 @@
"""worflow_agent
Revision ID: ebac70616dab
Revises: c107446e38aa
Create Date: 2025-05-06 17:05:26.884902
"""
from typing import Sequence, Union
from alembic import op
import sqlalchemy as sa
# revision identifiers, used by Alembic.
revision: str = "ebac70616dab"
down_revision: Union[str, None] = "c107446e38aa"
branch_labels: Union[str, Sequence[str], None] = None
depends_on: Union[str, Sequence[str], None] = None
def upgrade() -> None:
"""Upgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
# Remove the old constraint
op.drop_constraint("check_agent_type", "agents", type_="check")
# Add the new constraint that includes the 'workflow' type
op.create_check_constraint(
"check_agent_type",
"agents",
"type IN ('llm', 'sequential', 'parallel', 'loop', 'a2a', 'workflow')",
)
# ### end Alembic commands ###
def downgrade() -> None:
"""Downgrade schema."""
# ### commands auto generated by Alembic - please adjust! ###
# Remove the new constraint
op.drop_constraint("check_agent_type", "agents", type_="check")
# Restore the previous constraint without the 'workflow' type
op.create_check_constraint(
"check_agent_type",
"agents",
"type IN ('llm', 'sequential', 'parallel', 'loop', 'a2a')",
)
# ### end Alembic commands ###


@ -4,7 +4,7 @@ from pydantic_settings import BaseSettings
import secrets
from dotenv import load_dotenv
# Carrega as variáveis do .env
# Load environment variables
load_dotenv()


@ -120,7 +120,7 @@ class AgentBase(BaseModel):
if "type" not in values:
return v
# Para agentes workflow, não fazemos nenhuma validação
# For workflow agents, we do not perform any validation
if "type" in values and values["type"] == "workflow":
return v


@ -14,7 +14,7 @@ class Message(BaseModel):
class TaskStatusUpdateEvent(BaseModel):
state: str = Field(..., description="Estado da tarefa (working, completed, failed)")
state: str = Field(..., description="Task state (working, completed, failed)")
timestamp: datetime = Field(default_factory=datetime.utcnow)
message: Optional[Message] = None
error: Optional[Dict[str, Any]] = None


@ -147,13 +147,10 @@ async def create_agent(db: Session, agent: AgentCreate) -> Agent:
detail=f"Failed to process agent card: {str(e)}",
)
# For workflow agents, we do not perform any specific validation;
# we only ensure that config is a dictionary
elif agent.type == "workflow":
if not isinstance(agent.config, dict):
agent.config = {}
# Ensure the API key
if "api_key" not in agent.config or not agent.config["api_key"]:
agent.config["api_key"] = generate_api_key()
@ -199,11 +196,9 @@ async def create_agent(db: Session, agent: AgentCreate) -> Agent:
logger.info("Generating automatic API key for new agent")
config["api_key"] = generate_api_key()
# Preserve all original fields
processed_config = {}
processed_config["api_key"] = config.get("api_key", "")
# Copy original fields
if "tools" in config:
processed_config["tools"] = config["tools"]
@ -216,7 +211,6 @@ async def create_agent(db: Session, agent: AgentCreate) -> Agent:
if "custom_mcp_servers" in config:
processed_config["custom_mcp_servers"] = config["custom_mcp_servers"]
# Preserve other fields not specifically processed
for key, value in config.items():
if key not in [
"api_key",
@ -228,7 +222,6 @@ async def create_agent(db: Session, agent: AgentCreate) -> Agent:
]:
processed_config[key] = value
# Process only fields that need processing
# Process MCP servers
if "mcp_servers" in config and config["mcp_servers"] is not None:
processed_servers = []
@ -298,7 +291,6 @@ async def create_agent(db: Session, agent: AgentCreate) -> Agent:
# Convert tool id to string
tool_id = tool["id"]
# Validate envs to ensure it is not None
envs = tool.get("envs", {})
if envs is None:
envs = {}
@ -439,11 +431,9 @@ async def update_agent(
if "config" in agent_data:
config = agent_data["config"]
# Preserve all original fields
processed_config = {}
processed_config["api_key"] = config.get("api_key", "")
# Copy original fields
if "tools" in config:
processed_config["tools"] = config["tools"]
@ -456,7 +446,6 @@ async def update_agent(
if "custom_mcp_servers" in config:
processed_config["custom_mcp_servers"] = config["custom_mcp_servers"]
# Preserve other fields not specifically processed
for key, value in config.items():
if key not in [
"api_key",
@ -468,7 +457,6 @@ async def update_agent(
]:
processed_config[key] = value
# Process only fields that need processing
# Process MCP servers
if "mcp_servers" in config and config["mcp_servers"] is not None:
processed_servers = []
@ -541,7 +529,6 @@ async def update_agent(
# Convert tool id to string
tool_id = tool["id"]
# Validate envs to ensure it is not None
envs = tool.get("envs", {})
if envs is None:
envs = {}


@ -59,7 +59,6 @@ class CustomToolBuilder:
):
query_params[param] = value
# Process body parameters
body_data = {}
for param, param_config in parameters.get("body_params", {}).items():
if param in all_values:


@ -4,19 +4,12 @@ from google.adk.agents.invocation_context import InvocationContext
from google.adk.events import Event
from google.genai.types import Content, Part
from typing import AsyncGenerator, Dict, Any, List, TypedDict, Annotated
import json
from typing import AsyncGenerator, Dict, Any, List, TypedDict
import uuid
import asyncio
import httpx
import threading
from google.adk.runners import Runner
from src.services.agent_service import get_agent
# Remove circular import
# from src.services.agent_builder import AgentBuilder
from sqlalchemy.orm import Session
from langgraph.graph import StateGraph, END
@ -35,13 +28,13 @@ class State(TypedDict):
class WorkflowAgent(BaseAgent):
"""
Agente que implementa fluxos de trabalho usando LangGraph.
Agent that implements workflow flows using LangGraph.
Este agente permite definir e executar fluxos complexos entre vários agentes
utilizando o LangGraph para orquestração.
This agent allows defining and executing complex workflows between multiple agents
using LangGraph for orchestration.
"""
# Declarações de campo para Pydantic
# Field declarations for Pydantic
flow_json: Dict[str, Any]
timeout: int
db: Session
@ -56,16 +49,16 @@ class WorkflowAgent(BaseAgent):
**kwargs,
):
"""
Inicializa o agente de workflow.
Initializes the workflow agent.
Args:
name: Nome do agente
flow_json: Definição do fluxo em formato JSON
timeout: Tempo máximo de execução (segundos)
sub_agents: Lista de sub-agentes a serem executados após o agente de workflow
name: Agent name
flow_json: Workflow definition in JSON format
timeout: Maximum execution time (seconds)
sub_agents: List of sub-agents to be executed after the workflow agent
db: Session
"""
# Inicializar classe base
# Initialize base class
super().__init__(
name=name,
flow_json=flow_json,
@ -76,19 +69,19 @@ class WorkflowAgent(BaseAgent):
)
print(
f"Agente de workflow inicializado com {len(flow_json.get('nodes', []))} nós"
f"Workflow agent initialized with {len(flow_json.get('nodes', []))} nodes"
)
async def _create_node_functions(self, ctx: InvocationContext):
"""Cria as funções para cada tipo de nó no fluxo."""
"""Creates functions for each type of node in the flow."""
# Função para o nó inicial
# Function for the initial node
async def start_node_function(
state: State,
node_id: str,
node_data: Dict[str, Any],
) -> AsyncGenerator[State, None]:
print("\n🏁 NÓ INICIAL")
print("\n🏁 INITIAL NODE")
content = state.get("content", [])
@ -109,7 +102,7 @@ class WorkflowAgent(BaseAgent):
return
session_id = state.get("session_id", "")
# Armazenar resultados específicos para este nó
# Store specific results for this node
node_outputs = state.get("node_outputs", {})
node_outputs[node_id] = {"started_at": datetime.now().isoformat()}
@ -122,7 +115,7 @@ class WorkflowAgent(BaseAgent):
"conversation_history": ctx.session.events,
}
# Função genérica para nós de agente
# Generic function for agent nodes
async def agent_node_function(
state: State, node_id: str, node_data: Dict[str, Any]
) -> AsyncGenerator[State, None]:
@ -131,14 +124,14 @@ class WorkflowAgent(BaseAgent):
agent_name = agent_config.get("name", "")
agent_id = agent_config.get("id", "")
# Incrementar contador de ciclos
# Increment cycle counter
cycle_count = state.get("cycle_count", 0) + 1
print(f"\n👤 AGENTE: {agent_name} (Ciclo {cycle_count})")
print(f"\n👤 AGENT: {agent_name} (Cycle {cycle_count})")
content = state.get("content", [])
session_id = state.get("session_id", "")
# Obter o histórico de conversa
# Get conversation history
conversation_history = state.get("conversation_history", [])
agent = get_agent(self.db, agent_id)
@ -159,7 +152,7 @@ class WorkflowAgent(BaseAgent):
}
return
# Importação movida para dentro da função para evitar circular import
# Import moved to inside the function to avoid circular import
from src.services.agent_builder import AgentBuilder
agent_builder = AgentBuilder(self.db)
@ -179,8 +172,10 @@ class WorkflowAgent(BaseAgent):
"cycle": cycle_count,
}
content = content + new_content
yield {
"content": new_content,
"content": content,
"status": "processed_by_agent",
"node_outputs": node_outputs,
"cycle_count": cycle_count,
@ -191,23 +186,23 @@ class WorkflowAgent(BaseAgent):
if exit_stack:
await exit_stack.aclose()
# Função para nós de condição
# Function for condition nodes
async def condition_node_function(
state: State, node_id: str, node_data: Dict[str, Any]
) -> AsyncGenerator[State, None]:
label = node_data.get("label", "Condição Sem Nome")
label = node_data.get("label", "No name condition")
conditions = node_data.get("conditions", [])
cycle_count = state.get("cycle_count", 0)
print(f"\n🔄 CONDIÇÃO: {label} (Ciclo {cycle_count})")
print(f"\n🔄 CONDITION: {label} (Cycle {cycle_count})")
content = state.get("content", [])
print(f"Avaliando condição para conteúdo: '{content}'")
print(f"Evaluating condition for content: '{content}'")
session_id = state.get("session_id", "")
conversation_history = state.get("conversation_history", [])
# Verificar todas as condições
# Check all conditions
conditions_met = []
for condition in conditions:
condition_id = condition.get("id")
@ -217,18 +212,27 @@ class WorkflowAgent(BaseAgent):
expected_value = condition_data.get("value")
print(
f" Verificando se {field} {operator} '{expected_value}' (valor atual: '{state.get(field, '')}')"
f" Checking if {field} {operator} '{expected_value}' (current value: '{state.get(field, '')}')"
)
if self._evaluate_condition(condition, state):
conditions_met.append(condition_id)
print(f" ✅ Condição {condition_id} atendida!")
print(f" ✅ Condition {condition_id} met!")
# Verificar se o ciclo atingiu o limite (segurança extra)
# Check if the cycle reached the limit (extra security)
if cycle_count >= 10:
print(
f"⚠️ ATENÇÃO: Limite de ciclos atingido ({cycle_count}). Forçando término."
f"⚠️ ATTENTION: Cycle limit reached ({cycle_count}). Forcing termination."
)
condition_content = [
Event(
author="agent",
content=Content(parts=[Part(text="Cycle limit reached")]),
)
]
content = content + condition_content
yield {
"content": content,
"status": "cycle_limit_reached",
"node_outputs": state.get("node_outputs", {}),
"cycle_count": cycle_count,
@ -237,7 +241,7 @@ class WorkflowAgent(BaseAgent):
}
return
# Armazenar resultados específicos para este nó
# Store specific results for this node
node_outputs = state.get("node_outputs", {})
node_outputs[node_id] = {
"condition_evaluated": label,
@ -246,7 +250,22 @@ class WorkflowAgent(BaseAgent):
"cycle": cycle_count,
}
condition_content = [
Event(
author="agent",
content=Content(
parts=[
Part(
text=f"Condition evaluated: {label} with {str(conditions_met)}"
)
]
),
)
]
content = content + condition_content
yield {
"content": content,
"status": "condition_evaluated",
"node_outputs": node_outputs,
"cycle_count": cycle_count,
@ -261,7 +280,7 @@ class WorkflowAgent(BaseAgent):
}
def _evaluate_condition(self, condition: Dict[str, Any], state: State) -> bool:
"""Avalia uma condição contra o estado atual."""
"""Evaluates a condition against the current state."""
condition_type = condition.get("type")
condition_data = condition.get("data", {})
@ -272,9 +291,9 @@ class WorkflowAgent(BaseAgent):
actual_value = state.get(field, "")
# Tratamento especial para quando content é uma lista de Event
# Special treatment for when content is a list of Events
if field == "content" and isinstance(actual_value, list) and actual_value:
# Extrai o texto de cada evento para comparação
# Extract text from each event for comparison
extracted_texts = []
for event in actual_value:
if hasattr(event, "content") and hasattr(event.content, "parts"):
@ -284,9 +303,9 @@ class WorkflowAgent(BaseAgent):
if extracted_texts:
actual_value = " ".join(extracted_texts)
print(f" Texto extraído dos eventos: '{actual_value[:100]}...'")
print(f" Extracted text from events: '{actual_value[:100]}...'")
# Converter valores para string para facilitar comparações
# Convert values to string for easier comparisons
if actual_value is not None:
actual_str = str(actual_value)
else:
@ -297,58 +316,58 @@ class WorkflowAgent(BaseAgent):
else:
expected_str = ""
# Verificações de definição
# Checks for definition
if operator == "is_defined":
result = actual_value is not None and actual_value != ""
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
elif operator == "is_not_defined":
result = actual_value is None or actual_value == ""
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
# Verificações de igualdade
# Checks for equality
elif operator == "equals":
result = actual_str == expected_str
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
elif operator == "not_equals":
result = actual_str != expected_str
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
# Verificações de conteúdo
# Checks for content
elif operator == "contains":
# Converter ambos para minúsculas para comparação sem diferenciação
# Convert both to lowercase for case-insensitive comparison
expected_lower = expected_str.lower()
actual_lower = actual_str.lower()
print(
f" Comparação 'contains' sem distinção de maiúsculas/minúsculas: '{expected_lower}' em '{actual_lower[:100]}...'"
f" Comparison 'contains' without case distinction: '{expected_lower}' in '{actual_lower[:100]}...'"
)
result = expected_lower in actual_lower
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
elif operator == "not_contains":
expected_lower = expected_str.lower()
actual_lower = actual_str.lower()
print(
f" Comparação 'not_contains' sem distinção de maiúsculas/minúsculas: '{expected_lower}' em '{actual_lower[:100]}...'"
f" Comparison 'not_contains' without case distinction: '{expected_lower}' in '{actual_lower[:100]}...'"
)
result = expected_lower not in actual_lower
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
# Verificações de início e fim
# Checks for start and end
elif operator == "starts_with":
result = actual_str.lower().startswith(expected_str.lower())
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
elif operator == "ends_with":
result = actual_str.lower().endswith(expected_str.lower())
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
# Verificações numéricas (tentando converter para número)
# Numeric checks (attempting to convert to number)
elif operator in [
"greater_than",
"greater_than_or_equal",
@ -367,26 +386,26 @@ class WorkflowAgent(BaseAgent):
result = actual_num < expected_num
elif operator == "less_than_or_equal":
result = actual_num <= expected_num
print(f" Verificação numérica '{operator}': {result}")
print(f" Numeric check '{operator}': {result}")
return result
except (ValueError, TypeError):
# Se não for possível converter para número, retorna falso
# If it's not possible to convert to number, return false
print(
f" Erro ao converter valores para comparação numérica: '{actual_str[:100]}...' e '{expected_str}'"
f" Error converting values for numeric comparison: '{actual_str[:100]}...' and '{expected_str}'"
)
return False
# Verificações com expressões regulares
# Checks with regular expressions
elif operator == "matches":
import re
try:
pattern = re.compile(expected_str, re.IGNORECASE)
result = bool(pattern.search(actual_str))
print(f" Verificação '{operator}': {result}")
print(f" Check '{operator}': {result}")
return result
except re.error:
print(f" Error in regular expression: '{expected_str}'")
return False
elif operator == "not_matches":
import re
@@ -394,17 +413,17 @@ class WorkflowAgent(BaseAgent):
try:
pattern = re.compile(expected_str, re.IGNORECASE)
result = not bool(pattern.search(actual_str))
print(f" Check '{operator}': {result}")
return result
except re.error:
print(f" Error in regular expression: '{expected_str}'")
return True # If the regex is invalid, we consider that there was no match
return False
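The operator branch above is easier to read outside the diff; here is a standalone sketch of the same comparison semantics — a simplified, hypothetical `check` helper covering only a subset of the operators, not the class method itself:

```python
import re

def check(operator: str, actual, expected) -> bool:
    # Simplified sketch of the comparison semantics described above.
    actual_str, expected_str = str(actual), str(expected)
    if operator == "contains":
        return expected_str.lower() in actual_str.lower()
    if operator == "not_contains":
        return expected_str.lower() not in actual_str.lower()
    if operator == "starts_with":
        return actual_str.lower().startswith(expected_str.lower())
    if operator == "ends_with":
        return actual_str.lower().endswith(expected_str.lower())
    if operator in ("greater_than", "less_than"):
        try:
            a, e = float(actual_str), float(expected_str)
        except (ValueError, TypeError):
            return False  # non-numeric values fail numeric checks
        return a > e if operator == "greater_than" else a < e
    if operator == "matches":
        try:
            return bool(re.search(expected_str, actual_str, re.IGNORECASE))
        except re.error:
            return False
    if operator == "not_matches":
        try:
            return not re.search(expected_str, actual_str, re.IGNORECASE)
        except re.error:
            return True  # invalid regex counts as "no match"
    return False
```

Note the asymmetry on invalid regexes: `matches` fails closed (`False`), while `not_matches` fails open (`True`), mirroring the behavior in the diff.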
def _create_flow_router(self, flow_data: Dict[str, Any]):
"""Cria um roteador baseado nas conexões no flow.json."""
# Mapear conexões para entender como os nós se conectam
"""Creates a router based on the connections in flow.json."""
# Map connections to understand how nodes are connected
edges_map = {}
for edge in flow_data.get("edges", []):
@ -415,10 +434,10 @@ class WorkflowAgent(BaseAgent):
if source not in edges_map:
edges_map[source] = {}
# Store the destination for each specific handle
edges_map[source][source_handle] = target
# Map condition nodes and their conditions
condition_nodes = {}
for node in flow_data.get("nodes", []):
if node.get("type") == "condition-node":
@ -426,35 +445,35 @@ class WorkflowAgent(BaseAgent):
conditions = node.get("data", {}).get("conditions", [])
condition_nodes[node_id] = conditions
# Routing function for each specific node
def create_router_for_node(node_id: str):
def router(state: State) -> str:
print(f"Routing from node: {node_id}")
# Check if the cycle limit has been reached
cycle_count = state.get("cycle_count", 0)
if cycle_count >= 10:
print(
f"⚠️ Cycle limit ({cycle_count}) reached. Finalizing the flow."
)
return END
# If it's a condition node, evaluate the conditions
if node_id in condition_nodes:
conditions = condition_nodes[node_id]
for condition in conditions:
condition_id = condition.get("id")
# Check if the condition is met
is_condition_met = self._evaluate_condition(condition, state)
if is_condition_met:
print(
f"Condition {condition_id} met. Moving to the next node."
)
# Find the connection that uses this condition_id as a handle
if (
node_id in edges_map
and condition_id in edges_map[node_id]
@@ -462,37 +481,31 @@ class WorkflowAgent(BaseAgent):
return edges_map[node_id][condition_id]
else:
print(
f"Condition {condition_id} not met. Continuing evaluation or using default path."
)
# If no condition is met, use the bottom-handle if available
if node_id in edges_map and "bottom-handle" in edges_map[node_id]:
print("No condition met. Using default path (bottom-handle).")
return edges_map[node_id]["bottom-handle"]
else:
print("No condition met and no default path. Closing the flow.")
return END
# For regular nodes, simply follow the first available connection
if node_id in edges_map:
# Try to use the default handle or bottom-handle first
for handle in ["default", "bottom-handle"]:
if handle in edges_map[node_id]:
return edges_map[node_id][handle]
# If no specific handle is found, use the first available
if edges_map[node_id]:
first_handle = list(edges_map[node_id].keys())[0]
return edges_map[node_id][first_handle]
# If there is no output connection, close the flow
print(f"No output connection from node {node_id}. Closing the flow.")
return END
return router
@@ -502,30 +515,30 @@ class WorkflowAgent(BaseAgent):
async def _create_graph(
self, ctx: InvocationContext, flow_data: Dict[str, Any]
) -> StateGraph:
"""Cria um StateGraph a partir dos dados do fluxo."""
# Extrair nós do fluxo
"""Creates a StateGraph from the flow data."""
# Extract nodes from the flow
nodes = flow_data.get("nodes", [])
# Initialize StateGraph
graph_builder = StateGraph(State)
# Create functions for each node type
node_functions = await self._create_node_functions(ctx)
# Dictionary to store specific functions for each node
node_specific_functions = {}
# Add nodes to the graph
for node in nodes:
node_id = node.get("id")
node_type = node.get("type")
node_data = node.get("data", {})
if node_type in node_functions:
# Create a specific function for this node
def create_node_function(node_type, node_id, node_data):
async def node_function(state):
# Consume the asynchronous generator and return the last result
result = None
async for item in node_functions[node_type](
state, node_id, node_data
@@ -535,106 +548,106 @@ class WorkflowAgent(BaseAgent):
return node_function
# Add specific function to the dictionary
node_specific_functions[node_id] = create_node_function(
node_type, node_id, node_data
)
# Add node to the graph
print(f"Adding node {node_id} of type {node_type}")
graph_builder.add_node(node_id, node_specific_functions[node_id])
# Create function to generate specific routers
create_router = self._create_flow_router(flow_data)
# Add conditional connections for each node
for node in nodes:
node_id = node.get("id")
if node_id in node_specific_functions:
# Create dictionary of possible destinations
edge_destinations = {}
# Map all possible destinations
for edge in flow_data.get("edges", []):
if edge.get("source") == node_id:
target = edge.get("target")
if target in node_specific_functions:
edge_destinations[target] = target
# Add END as a possible destination
edge_destinations[END] = END
# Create specific router for this node
node_router = create_router(node_id)
# Add conditional connections
print(f"Adding conditional connections for node {node_id}")
print(f"Possible destinations: {edge_destinations}")
graph_builder.add_conditional_edges(
node_id, node_router, edge_destinations
)
# Find the initial node (usually the start-node)
entry_point = None
for node in nodes:
if node.get("type") == "start-node":
entry_point = node.get("id")
break
# If there is no start-node, use the first node found
if not entry_point and nodes:
entry_point = nodes[0].get("id")
# Define the entry point
if entry_point:
print(f"Defining entry point: {entry_point}")
graph_builder.set_entry_point(entry_point)
# Compile the graph
return graph_builder.compile()
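For orientation, the builder above consumes a flow.json-style payload of nodes and edges. Below is a minimal, hypothetical example of that shape — node ids, types, and the `sourceHandle` key are illustrative assumptions, not the platform's documented schema — together with the `{source: {handle: target}}` edge map the router setup folds it into:

```python
# Hypothetical minimal flow payload: ids, types, and field names are
# assumptions for illustration only.
flow_data = {
    "nodes": [
        {"id": "start", "type": "start-node", "data": {}},
        {"id": "cond", "type": "condition-node",
         "data": {"conditions": [{"id": "cond-1"}]}},
        {"id": "agent", "type": "agent-node", "data": {}},
    ],
    "edges": [
        {"source": "start", "target": "cond", "sourceHandle": "default"},
        {"source": "cond", "target": "agent", "sourceHandle": "cond-1"},
    ],
}

# Fold edges into {source: {handle: target}}, as the router setup does:
# a condition node routes on the handle matching the met condition's id.
edges_map = {}
for edge in flow_data["edges"]:
    edges_map.setdefault(edge["source"], {})[edge["sourceHandle"]] = edge["target"]

print(edges_map["cond"]["cond-1"])  # prints: agent
```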
async def _run_async_impl(
self, ctx: InvocationContext
) -> AsyncGenerator[Event, None]:
"""
Implementation of the workflow agent.
This method follows the pattern of custom agent implementation,
executing the defined workflow and returning the results.
"""
try:
# 1. Extract the user message from the context
user_message = None
# Search for the user message in the session events
if ctx.session and hasattr(ctx.session, "events") and ctx.session.events:
for event in reversed(ctx.session.events):
if event.author == "user" and event.content and event.content.parts:
user_message = event.content.parts[0].text
print("Message found in session events")
break
# Check in the session state if the message was not found in the events
if not user_message and ctx.session and ctx.session.state:
if "user_message" in ctx.session.state:
user_message = ctx.session.state["user_message"]
elif "message" in ctx.session.state:
user_message = ctx.session.state["message"]
# 2. Use the session ID as a stable identifier
session_id = (
str(ctx.session.id)
if ctx.session and hasattr(ctx.session, "id")
else str(uuid.uuid4())
)
# 3. Create the workflow graph from the provided JSON
graph = await self._create_graph(ctx, self.flow_json)
# 4. Prepare the initial state
initial_state = State(
content=[
Event(
@ -649,28 +662,28 @@ class WorkflowAgent(BaseAgent):
conversation_history=ctx.session.events,
)
# 5. Execute the graph
print("\n🚀 Starting workflow execution:")
print(f"Initial content: {user_message[:100]}...")
# Execute the graph with a recursion limit to avoid infinite loops
result = await graph.ainvoke(initial_state, {"recursion_limit": 20})
# 6. Process and return the result
final_content = result.get("content", [])
print(f"\nFINAL RESULT: {final_content[:100]}...")
for content in final_content:
yield content
# Execute sub-agents
for sub_agent in self.sub_agents:
async for event in sub_agent.run_async(ctx):
yield event
except Exception as e:
# Handle any uncaught errors
error_msg = f"Error executing the workflow agent: {str(e)}"
print(error_msg)
yield Event(
author=self.name,

View File

@@ -9,15 +9,15 @@ class SSEUtils:
generator: AsyncGenerator, timeout: int = 30, retry_attempts: int = 3
) -> AsyncGenerator:
"""
Adds timeout and retry to an SSE event generator.
Args:
generator: Event generator
timeout: Maximum wait time in seconds
retry_attempts: Number of reconnection attempts
Yields:
Events from the generator
"""
attempts = 0
while attempts < retry_attempts:
@@ -29,33 +29,33 @@ class SSEUtils:
attempts += 1
if attempts >= retry_attempts:
raise HTTPException(
status_code=408, detail="Timeout after multiple attempts"
)
await asyncio.sleep(1) # Wait before trying again
@staticmethod
def format_error_event(error: Exception) -> str:
"""
Formats an SSE error event.
Args:
error: Occurred exception
Returns:
Formatted SSE error event
"""
return f"event: error\ndata: {str(error)}\n\n"
@staticmethod
def validate_sse_headers(headers: dict) -> None:
"""
Validates required headers for SSE.
Args:
headers: Dictionary of headers
Raises:
HTTPException if invalid headers
"""
required_headers = {
"Accept": "text/event-stream",
@@ -66,5 +66,5 @@ class SSEUtils:
for header, value in required_headers.items():
if headers.get(header) != value:
raise HTTPException(
status_code=400, detail=f"Invalid or missing header: {header}"
)
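A minimal sketch of how the header validation above behaves, with a plain `ValueError` standing in for FastAPI's `HTTPException` and only the `Accept` header shown (the full required-header set is truncated in the diff):

```python
# Only "Accept" is shown in the diff; the real class may require more headers.
REQUIRED_SSE_HEADERS = {"Accept": "text/event-stream"}

def validate_sse_headers(headers: dict) -> None:
    # Reject the request if any required SSE header is missing or wrong.
    for header, value in REQUIRED_SSE_HEADERS.items():
        if headers.get(header) != value:
            raise ValueError(f"Invalid or missing header: {header}")

validate_sse_headers({"Accept": "text/event-stream"})  # passes silently
```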

View File

@@ -1,295 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>A2A Streaming Test</title>
<link rel="icon" href="data:,">
<style>
:root {
--primary-color: #2563eb;
--secondary-color: #1e40af;
--background-color: #f8fafc;
--text-color: #1e293b;
--border-color: #e2e8f0;
}
* {
margin: 0;
padding: 0;
box-sizing: border-box;
}
body {
font-family: 'Segoe UI', Tahoma, Geneva, Verdana, sans-serif;
background-color: var(--background-color);
color: var(--text-color);
line-height: 1.6;
padding: 20px;
}
.container {
max-width: 800px;
margin: 0 auto;
background: white;
border-radius: 8px;
box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
padding: 20px;
}
h1 {
color: var(--primary-color);
margin-bottom: 20px;
text-align: center;
}
.form-group {
margin-bottom: 20px;
}
label {
display: block;
margin-bottom: 5px;
font-weight: 600;
}
input[type="text"] {
width: 100%;
padding: 10px;
border: 1px solid var(--border-color);
border-radius: 4px;
font-size: 16px;
}
button {
background-color: var(--primary-color);
color: white;
border: none;
padding: 10px 20px;
border-radius: 4px;
cursor: pointer;
font-size: 16px;
transition: background-color 0.3s;
}
button:hover {
background-color: var(--secondary-color);
}
.chat-container {
margin-top: 20px;
border: 1px solid var(--border-color);
border-radius: 4px;
height: 400px;
overflow-y: auto;
padding: 10px;
}
.message {
margin-bottom: 10px;
padding: 10px;
border-radius: 4px;
max-width: 80%;
}
.user-message {
background-color: #e3f2fd;
margin-left: auto;
}
.agent-message {
background-color: #f3f4f6;
}
.status-message {
background-color: #fef3c7;
text-align: center;
font-style: italic;
}
.error-message {
background-color: #fee2e2;
color: #dc2626;
}
.controls {
display: flex;
gap: 10px;
margin-top: 20px;
}
.status {
margin-top: 10px;
padding: 10px;
border-radius: 4px;
background-color: #f3f4f6;
font-size: 14px;
}
</style>
</head>
<body>
<div class="container">
<h1>A2A Streaming Test</h1>
<div class="form-group">
<label for="agentId">Agent ID:</label>
<input type="text" id="agentId" placeholder="Enter the agent ID">
</div>
<div class="form-group">
<label for="apiKey">API Key:</label>
<input type="text" id="apiKey" placeholder="Enter your API Key">
</div>
<div class="form-group">
<label for="message">Message:</label>
<input type="text" id="message" placeholder="Enter your message">
</div>
<div class="controls">
<button onclick="startStreaming()">Start Streaming</button>
<button onclick="stopStreaming()">Stop Streaming</button>
</div>
<div class="status" id="connectionStatus">
Status: Not connected
</div>
<div class="chat-container" id="chatContainer">
<!-- Messages will be added here -->
</div>
</div>
<script>
let controller = null;
const chatContainer = document.getElementById('chatContainer');
const statusElement = document.getElementById('connectionStatus');
function addMessage(content, type = 'agent') {
const messageDiv = document.createElement('div');
messageDiv.className = `message ${type}-message`;
messageDiv.textContent = content;
chatContainer.appendChild(messageDiv);
chatContainer.scrollTop = chatContainer.scrollHeight;
}
function updateStatus(status) {
statusElement.textContent = `Status: ${status}`;
}
async function startStreaming() {
const agentId = document.getElementById('agentId').value;
const apiKey = document.getElementById('apiKey').value;
const message = document.getElementById('message').value;
if (!agentId || !apiKey || !message) {
alert('Please fill in all fields');
return;
}
// Clear the chat
chatContainer.innerHTML = '';
addMessage('Starting connection...', 'status');
// Set up the payload
const payload = {
jsonrpc: "2.0",
id: "1",
method: "tasks/sendSubscribe",
params: {
id: "test-task",
message: {
role: "user",
parts: [{
type: "text",
text: message
}]
}
}
};
try {
// Create a new AbortController to control the streaming
controller = new AbortController();
const signal = controller.signal;
// Make the streaming POST request
const response = await fetch(
`http://${window.location.host}/api/v1/agents/${agentId}/tasks/sendSubscribe`, {
method: 'POST',
headers: {
'Content-Type': 'application/json',
'Accept': 'text/event-stream',
'X-API-Key': apiKey
},
body: JSON.stringify(payload),
signal
});
if (!response.ok) {
throw new Error(`HTTP error! status: ${response.status}`);
}
updateStatus('Connected');
addMessage('Connection established', 'status');
// Read the data stream
const reader = response.body.getReader();
const decoder = new TextDecoder();
while (true) {
const {
done,
value
} = await reader.read();
if (done) break;
// Decode the data chunk
const chunk = decoder.decode(value);
// Process each line of the chunk
const lines = chunk.split('\n');
for (const line of lines) {
if (line.startsWith('data:')) {
try {
const data = JSON.parse(line.slice(5));
if (data.state) {
// Status event
addMessage(`State: ${data.state}`, 'status');
} else if (data.content) {
// Content event
addMessage(data.content, 'agent');
}
} catch (error) {
addMessage('Error processing message: ' + error, 'error');
}
}
}
}
} catch (error) {
if (error.name === 'AbortError') {
updateStatus('Disconnected');
addMessage('Connection closed by the user', 'status');
} else {
updateStatus('Error');
addMessage('Error starting streaming: ' + error.message, 'error');
}
}
}
function stopStreaming() {
if (controller) {
controller.abort();
controller = null;
updateStatus('Disconnected');
addMessage('Connection closed', 'status');
}
}
</script>
</body>
</html>

View File

@@ -1,343 +0,0 @@
<!DOCTYPE html>
<html>
<head>
<title>ADK Streaming Test</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<style>
:root {
--primary-color: #2563eb;
--secondary-color: #f3f4f6;
--text-color: #1f2937;
--border-color: #e5e7eb;
--success-color: #10b981;
--error-color: #ef4444;
--user-message-bg: #dbeafe;
--agent-message-bg: #f3f4f6;
}
* {
box-sizing: border-box;
margin: 0;
padding: 0;
}
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, Oxygen, Ubuntu, Cantarell, sans-serif;
line-height: 1.6;
color: var(--text-color);
background-color: #f9fafb;
padding: 20px;
max-width: 1200px;
margin: 0 auto;
}
h1 {
color: var(--primary-color);
margin-bottom: 20px;
text-align: center;
font-size: 2rem;
}
#messages {
height: 60vh;
overflow-y: auto;
border: 1px solid var(--border-color);
border-radius: 8px;
padding: 20px;
margin-bottom: 20px;
background-color: white;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
}
.message {
margin: 10px 0;
padding: 12px 16px;
border-radius: 12px;
max-width: 80%;
word-wrap: break-word;
animation: fadeIn 0.3s ease-in-out;
}
@keyframes fadeIn {
from {
opacity: 0;
transform: translateY(10px);
}
to {
opacity: 1;
transform: translateY(0);
}
}
.user-message {
background-color: var(--user-message-bg);
margin-left: auto;
border-bottom-right-radius: 4px;
}
.agent-message {
background-color: var(--agent-message-bg);
margin-right: auto;
border-bottom-left-radius: 4px;
}
#connection-status {
padding: 10px;
border-radius: 6px;
text-align: center;
font-weight: 500;
margin-bottom: 15px;
}
.connected {
background-color: rgba(16, 185, 129, 0.1);
color: var(--success-color);
}
.disconnected {
background-color: rgba(239, 68, 68, 0.1);
color: var(--error-color);
}
#messageForm {
background-color: white;
padding: 20px;
border-radius: 8px;
box-shadow: 0 1px 3px rgba(0, 0, 0, 0.1);
}
input[type="text"] {
width: 100%;
padding: 12px;
margin-bottom: 15px;
border: 1px solid var(--border-color);
border-radius: 6px;
font-size: 1rem;
transition: border-color 0.3s;
}
input[type="text"]:focus {
outline: none;
border-color: var(--primary-color);
box-shadow: 0 0 0 3px rgba(37, 99, 235, 0.1);
}
button {
padding: 12px 24px;
border: none;
border-radius: 6px;
font-size: 1rem;
font-weight: 500;
cursor: pointer;
transition: all 0.3s;
}
#connectButton {
background-color: var(--primary-color);
color: white;
width: 100%;
}
#connectButton:hover {
background-color: #1d4ed8;
}
#connectButton:disabled {
background-color: #93c5fd;
cursor: not-allowed;
}
#sendButton {
background-color: var(--primary-color);
color: white;
margin-left: 10px;
}
#sendButton:hover {
background-color: #1d4ed8;
}
#sendButton:disabled {
background-color: #93c5fd;
cursor: not-allowed;
}
#debug {
margin-top: 20px;
padding: 15px;
background-color: #1f2937;
color: #e5e7eb;
border-radius: 8px;
font-family: 'Courier New', Courier, monospace;
font-size: 0.9rem;
line-height: 1.5;
max-height: 200px;
overflow-y: auto;
}
@media (max-width: 768px) {
body {
padding: 10px;
}
#messages {
height: 50vh;
}
.message {
max-width: 90%;
}
button {
width: 100%;
margin-bottom: 10px;
}
#sendButton {
margin-left: 0;
}
}
</style>
</head>
<body>
<h1>ADK Streaming Test</h1>
<div id="messages"></div>
<div id="connection-status" class="disconnected">Disconnected</div>
<form id="messageForm">
<input type="text" id="agentId" placeholder="Agent ID">
<input type="text" id="contactId" placeholder="Contact ID">
<input type="text" id="token" placeholder="JWT Token">
<button type="button" id="connectButton">Connect</button>
<div style="display: flex; margin-top: 15px;">
<input type="text" id="message" placeholder="Type your message...">
<button type="submit" id="sendButton" disabled>Send</button>
</div>
</form>
<div id="debug"></div>
<script>
let ws = null;
let currentMessageId = null;
const messagesDiv = document.getElementById('messages');
const messageForm = document.getElementById('messageForm');
const connectButton = document.getElementById('connectButton');
const sendButton = document.getElementById('sendButton');
const statusDiv = document.getElementById('connection-status');
const debugDiv = document.getElementById('debug');
function log(message, data = null) {
const timestamp = new Date().toISOString();
let logMessage = `${timestamp} - ${message}`;
if (data) {
logMessage += '\n' + JSON.stringify(data, null, 2);
}
debugDiv.textContent = logMessage + '\n\n' + debugDiv.textContent;
console.log(message, data);
}
connectButton.onclick = function () {
const agentId = document.getElementById('agentId').value;
const contactId = document.getElementById('contactId').value;
const token = document.getElementById('token').value;
if (!agentId || !contactId || !token) {
alert('Please fill in all fields');
return;
}
if (ws) {
ws.close();
}
const ws_url = `ws://${window.location.host}/api/v1/chat/ws/${agentId}/${contactId}`;
log('Connecting to WebSocket', {
url: ws_url
});
ws = new WebSocket(ws_url);
ws.onopen = function () {
log('WebSocket connected, sending authentication');
const authMessage = {
type: 'authorization',
token: token
};
ws.send(JSON.stringify(authMessage));
log('Authentication sent', authMessage);
statusDiv.textContent = 'Connected';
statusDiv.className = 'connected';
sendButton.disabled = false;
connectButton.disabled = true;
};
ws.onmessage = function (event) {
const packet = JSON.parse(event.data);
log('Received message', packet);
if (packet.turn_complete) {
currentMessageId = null;
return;
}
if (currentMessageId == null) {
currentMessageId = Math.random().toString(36).substring(7);
const messageDiv = document.createElement('div');
messageDiv.id = currentMessageId;
messageDiv.className = 'message agent-message';
messagesDiv.appendChild(messageDiv);
}
const messageDiv = document.getElementById(currentMessageId);
messageDiv.textContent = (messageDiv.textContent || '') + packet.message;
messagesDiv.scrollTop = messagesDiv.scrollHeight;
};
ws.onclose = function (event) {
log('WebSocket closed', {
code: event.code,
reason: event.reason,
wasClean: event.wasClean
});
statusDiv.textContent = 'Disconnected';
statusDiv.className = 'disconnected';
sendButton.disabled = true;
connectButton.disabled = false;
ws = null;
};
ws.onerror = function (error) {
log('WebSocket error', error);
statusDiv.textContent = 'Connection error';
statusDiv.className = 'disconnected';
};
};
messageForm.onsubmit = function (e) {
e.preventDefault();
const messageInput = document.getElementById('message');
const message = messageInput.value;
if (message && ws) {
const userMessageDiv = document.createElement('div');
userMessageDiv.className = 'message user-message';
userMessageDiv.textContent = message;
messagesDiv.appendChild(userMessageDiv);
const messagePacket = {
message: message
};
log('Sending message', messagePacket);
ws.send(JSON.stringify(messagePacket));
messageInput.value = '';
messagesDiv.scrollTop = messagesDiv.scrollHeight;
}
return false;
};
</script>
</body>
</html>

View File

@@ -1 +0,0 @@
# Package initialization for tests

View File

@@ -1 +0,0 @@
# API tests package

View File

@@ -1,11 +0,0 @@
def test_read_root(client):
"""
Test that the root endpoint returns the correct response.
"""
response = client.get("/")
assert response.status_code == 200
data = response.json()
assert "message" in data
assert "documentation" in data
assert "version" in data
assert "auth" in data

View File

@@ -1 +0,0 @@
# Services tests package

View File

@@ -1,27 +0,0 @@
from src.services.auth_service import create_access_token
from src.models.models import User
import uuid
def test_create_access_token():
"""
Test that an access token is created with the correct data.
"""
# Create a mock user
user = User(
id=uuid.uuid4(),
email="test@example.com",
hashed_password="hashed_password",
is_active=True,
is_admin=False,
name="Test User",
client_id=uuid.uuid4(),
)
# Create token
token = create_access_token(user)
# Simple validation
assert token is not None
assert isinstance(token, str)
assert len(token) > 0