Fix typos in docstrings of evaluation_generator.py and event.py (#101) (#121)

* Fix typos in docstrings of evaluation_generator.py and event.py (#101)

Corrected 'resposnes' to 'responses', 'uncertainity' to 'uncertainty', 'conversaction' to 'conversation', and 'exeuction' to 'execution' in relevant docstrings for clarity.

* Fix typos in docstrings and comments across multiple files

Corrected 'detla' to 'delta', 'buil-in' to 'built-in', 'walkaround' to 'workaround', and 'conversaction' to 'conversation' for clarity in relevant files. Updated comments for consistency.

---------

Co-authored-by: Hangfei Lin <hangfei@google.com>
Authored by fangyh20 on 2025-04-12 09:55:22 -07:00; committed by GitHub
parent d810ff7b28
commit 089c1e6428
9 changed files with 21 additions and 21 deletions

@@ -42,10 +42,10 @@ class EvaluationGenerator:
"""Returns evaluation responses for the given dataset and agent.
Args:
-    eval_dataset: The dataset that needs to be scraped for resposnes.
+    eval_dataset: The dataset that needs to be scraped for responses.
agent_module_path: Path to the module that contains the root agent.
repeat_num: Number of time the eval dataset should be repeated. This is
-        usually done to remove uncertainity that a single run may bring.
+        usually done to remove uncertainty that a single run may bring.
agent_name: The name of the agent that should be evaluated. This is
usually the sub-agent.
initial_session: Initial session for the eval data.
@@ -253,8 +253,8 @@ class EvaluationGenerator:
all_mock_tools: set[str],
):
"""Recursively apply the before_tool_callback to the root agent and all its subagents."""
-  # check if the agent has tools that defined by evalset
-  # We use function name to check if tools match
+  # Check if the agent has tools that are defined by evalset.
+  # We use function names to check if tools match
if not isinstance(agent, Agent) and not isinstance(agent, LlmAgent):
return
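The second hunk touches a method that recursively walks the agent tree and attaches a `before_tool_callback` wherever an agent's tools match the evalset's mock-tool names. A minimal sketch of that recursion is below; the `Agent` dataclass and field names here are simplified stand-ins for illustration, not the library's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class Agent:
    """Simplified stand-in for the real agent type (hypothetical fields)."""
    name: str
    tools: list = field(default_factory=list)
    sub_agents: list = field(default_factory=list)
    before_tool_callback: object = None


def apply_before_tool_callback(agent, callback, all_mock_tools):
    """Recursively apply the callback to the agent and all its sub-agents."""
    if not isinstance(agent, Agent):
        return
    # Check if the agent has tools that are defined by the evalset.
    # We use function names to check if tools match.
    tool_names = {getattr(t, "__name__", str(t)) for t in agent.tools}
    if tool_names & all_mock_tools:
        agent.before_tool_callback = callback
    for sub in agent.sub_agents:
        apply_before_tool_callback(sub, callback, all_mock_tools)


def my_tool():
    pass


root = Agent("root", tools=[my_tool], sub_agents=[Agent("child", tools=[my_tool])])
apply_before_tool_callback(root, callback=lambda *a: None, all_mock_tools={"my_tool"})
```

After the call, both `root` and its sub-agent carry the callback, because each has a tool whose function name appears in `all_mock_tools`.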