sherpa_ai.agents package#

Submodules#

sherpa_ai.agents.agent_pool module#

class sherpa_ai.agents.agent_pool.AgentPool[source]#

Bases: object

add_agent(agent: BaseAgent)[source]#

Add agent to agent pool

add_agents(agents: List[BaseAgent])[source]#

Add agents to agent pool

get_agent(agent_name: str) BaseAgent | None[source]#

Get agent by name

get_agent_pool_description() str[source]#

Create a description (prompt) of the AgentPool for agent planning
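
A minimal usage sketch of AgentPool, based only on the methods listed above. QAAgent and UserAgent are documented further down this page; constructing them without a real language model (llm=None) is purely for illustration.

from sherpa_ai.agents import AgentPool, QAAgent, UserAgent

pool = AgentPool()

# Register agents one at a time or in bulk.
pool.add_agent(QAAgent(name="QA Agent", llm=None))
pool.add_agents([UserAgent()])

# Look up a registered agent by name; returns None if no agent matches.
qa_agent = pool.get_agent("QA Agent")

# Build a textual description of the pool, e.g. for a planner prompt.
print(pool.get_agent_pool_description())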

sherpa_ai.agents.base module#

class sherpa_ai.agents.base.BaseAgent(*, name: str, description: str, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 1, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **extra_data: ~typing.Any)[source]#

Bases: ABC, BaseModel

act(action: BaseAction, inputs: dict) str | None | Exception[source]#
actions: List[BaseAction]#
agent_finished(result: str) str[source]#
agent_preparation()[source]#
async async_act(action: BaseAction, inputs: dict) str | None[source]#
async async_run() TaskResult[source]#
async async_select_action() PolicyOutput | None[source]#
async async_send_event(event: str, args: dict)[source]#

Send an event to the state machine in the belief

Parameters:
  • event (str) – The event name

  • args (dict) – The arguments for the event

belief: Belief#
abstractmethod create_actions() List[BaseAction][source]#
description: str#
feedback_agent_name: str#
global_regen_max: int#
llm: Any#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
observe()[source]#
policy: BasePolicy#
prompt_template: PromptTemplate#
run() TaskResult[source]#
select_action() PolicyOutput | None[source]#
send_event(event: str, args: dict)[source]#

Send an event to the state machine in the belief

Parameters:
  • event (str) – The event name

  • args (dict) – The arguments for the event

shared_memory: SharedMemory#
stop_checker: Callable[[Belief], bool]#
abstractmethod synthesize_output() str[source]#
validate_output()[source]#

Validate the synthesized output through a series of validation steps.

This method iterates through each validation in the ‘validations’ list, and for each validation, it performs ‘validation_steps’ attempts to synthesize output using ‘synthesize_output’ method. If the output doesn’t pass validation, feedback is incorporated into the belief system.

If a validation fails after all attempts, the error messages from the last failed validation are appended to the final result.

Returns:

The synthesized output after validation.

Return type:

str

validation_iterator(validations, global_regen_count, all_pass, validation_is_scaped, result)[source]#
validation_steps: int#
validations: List[BaseOutputProcessor]#
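
Since BaseAgent is abstract, a subclass only has to implement create_actions and synthesize_output; the run loop, policy handling, and validation are inherited. The sketch below is illustrative only: the EchoAgent class and its use of a current_task attribute on the belief are assumptions, not part of the documented API.

from typing import List

from sherpa_ai.actions.base import BaseAction
from sherpa_ai.agents.base import BaseAgent


class EchoAgent(BaseAgent):
    """Hypothetical agent used only to show the required overrides."""

    name: str = "Echo Agent"
    description: str = "Repeats the current task back to the caller"

    def create_actions(self) -> List[BaseAction]:
        # The actions this agent is allowed to take; an empty list is a valid
        # (if inert) choice for a sketch.
        return []

    def synthesize_output(self) -> str:
        # A real agent would call self.llm with context from self.belief;
        # current_task is an assumed Belief attribute used for illustration.
        return str(getattr(self.belief, "current_task", ""))

An agent can also be driven through its belief state machine with send_event(event, args) or async_send_event; the available event names depend on the state machine configured in the belief, so none are assumed here.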

sherpa_ai.agents.critic module#

sherpa_ai.agents.ml_engineer module#

class sherpa_ai.agents.ml_engineer.MLEngineer(*args, name: str = 'ML Engineer', description: str = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 3, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **kwargs)[source]#

Bases: BaseAgent

The machine learning agent answers questions and conducts research on ML-related topics.

create_actions() List[BaseAction][source]#
description: str#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
synthesize_output() str[source]#

sherpa_ai.agents.physicist module#

class sherpa_ai.agents.physicist.Physicist(*args, name: str = 'Physicist', description: str = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 3, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **kwargs)[source]#

Bases: BaseAgent

The physicist agent answers questions and conducts research on physics-related topics.

create_actions() List[BaseAction][source]#
description: str#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
synthesize_output() str[source]#
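
MLEngineer and Physicist share the same constructor; they differ only in their default name and description and in the actions returned by create_actions. A hedged instantiation sketch (the llm value is a placeholder for a real language model instance):

from sherpa_ai.agents import MLEngineer, Physicist

llm = None  # placeholder: supply a real language model instance here

ml_engineer = MLEngineer(llm=llm, num_runs=3)
physicist = Physicist(llm=llm, num_runs=3)

# Both inherit the BaseAgent lifecycle, so a task is executed with run(),
# which returns a TaskResult (see BaseAgent.run above):
# result = physicist.run()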

sherpa_ai.agents.planner module#

sherpa_ai.agents.qa_agent module#

class sherpa_ai.agents.qa_agent.QAAgent(*args, name: str = 'QA Agent', description: str = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 3, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 5, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, config: ~sherpa_ai.config.task_config.AgentConfig = None, citation_enabled: bool = False, **kwargs)[source]#

Bases: BaseAgent

The task agent handles a single task.

llm#

The language model used to generate text

Type:

BaseLanguageModel

name#

The name of the agent. Defaults to “QA Agent”.

Type:

str, optional

description#

The description of the agent. Defaults to TASK_AGENT_DESCRIPTION.

Type:

str, optional

shared_memory#

The shared memory used to store information shared with other agents. Defaults to None.

Type:

SharedMemory, optional

belief#

The belief of the agent. Defaults to None.

Type:

Optional[Belief], optional

agent_config#

The agent configuration. Defaults to AgentConfig.

Type:

AgentConfig, optional

num_runs#

The number of runs the agent will perform. Defaults to 3.

Type:

int, optional

verbose_logger#

The verbose logger used to log information. Defaults to DummyVerboseLogger().

Type:

BaseVerboseLogger, optional

actions#

The list of actions the agent can perform. Defaults to [].

Type:

List[BaseAction], optional

validation_steps#

The number of validation steps the agent will perform. Defaults to 1.

Type:

int, optional

validations#

The list of validations the agent will perform. Defaults to [].

Type:

List[BaseOutputProcessor], optional

citation_enabled: bool#
config: AgentConfig#
create_actions() List[BaseAction][source]#
description: str#
global_regen_max: int#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
synthesize_output() str[source]#
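
A usage sketch for QAAgent based on the signature above. How a task reaches the agent (typically via shared memory) is not covered on this page, and building AgentConfig with no arguments is an assumption.

from sherpa_ai.agents import QAAgent
from sherpa_ai.config.task_config import AgentConfig  # import path taken from the signature above

qa_agent = QAAgent(
    llm=None,               # placeholder for a language model instance
    config=AgentConfig(),   # assumes AgentConfig can be constructed with defaults
    citation_enabled=True,  # enable citation handling in the synthesized output
    num_runs=3,
    validation_steps=1,
)

# qa_agent.run() returns a TaskResult once an LLM and a task are wired in.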

sherpa_ai.agents.user module#

class sherpa_ai.agents.user.UserAgent(*, name: str = 'User', description: str = 'A user agent that redirects the task to an expert', shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 1, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **extra_data: ~typing.Any)[source]#

Bases: BaseAgent

A wrapper class for redirecting the task to a real person

create_actions() List[BaseAction][source]#
description: str#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
run() str[source]#

Redirect the task to a real person

synthesize_output() str[source]#
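
UserAgent overrides run() to hand the task to a human rather than an LLM. A minimal sketch; how the human reply is actually collected (for example through a chat front end) is not specified on this page.

from sherpa_ai.agents import UserAgent

user_agent = UserAgent()  # defaults: name="User", redirects the task to an expert

# run() forwards the current task to a real person and returns their reply as a string:
# reply = user_agent.run()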

Module contents#

class sherpa_ai.agents.AgentPool[source]#

Bases: object

add_agent(agent: BaseAgent)[source]#

Add agent to agent pool

add_agents(agents: List[BaseAgent])[source]#

Add agents to agent pool

get_agent(agent_name: str) BaseAgent | None[source]#

Get agent by name

get_agent_pool_description() str[source]#

Create a description (prompt) of the AgentPool for agent planning

class sherpa_ai.agents.MLEngineer(*args, name: str = 'ML Engineer', description: str = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 3, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **kwargs)[source]#

Bases: BaseAgent

The machine learning agent answers questions and conducts research on ML-related topics.

actions: List[BaseAction]#
belief: Belief#
create_actions() List[BaseAction][source]#
description: str#
feedback_agent_name: str#
global_regen_max: int#
llm: Any#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
policy: BasePolicy#
prompt_template: PromptTemplate#
shared_memory: SharedMemory#
stop_checker: Callable[[Belief], bool]#
synthesize_output() str[source]#
validation_steps: int#
validations: List[BaseOutputProcessor]#
class sherpa_ai.agents.Physicist(*args, name: str = 'Physicist', description: str = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 3, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **kwargs)[source]#

Bases: BaseAgent

The physicist agent answers questions and conducts research on physics-related topics.

actions: List[BaseAction]#
belief: Belief#
create_actions() List[BaseAction][source]#
description: str#
feedback_agent_name: str#
global_regen_max: int#
llm: Any#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
policy: BasePolicy#
prompt_template: PromptTemplate#
shared_memory: SharedMemory#
stop_checker: Callable[[Belief], bool]#
synthesize_output() str[source]#
validation_steps: int#
validations: List[BaseOutputProcessor]#
class sherpa_ai.agents.QAAgent(*args, name: str = 'QA Agent', description: str = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 3, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 5, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, config: ~sherpa_ai.config.task_config.AgentConfig = None, citation_enabled: bool = False, **kwargs)[source]#

Bases: BaseAgent

The task agent handles a single task.

llm#

The language model used to generate text

Type:

BaseLanguageModel

name#

The name of the agent. Defaults to “QA Agent”.

Type:

str, optional

description#

The description of the agent. Defaults to TASK_AGENT_DESCRIPTION.

Type:

str, optional

shared_memory#

The shared memory used to store information shared with other agents. Defaults to None.

Type:

SharedMemory, optional

belief#

The belief of the agent. Defaults to None.

Type:

Optional[Belief], optional

agent_config#

The agent configuration. Defaults to AgentConfig.

Type:

AgentConfig, optional

num_runs#

The number of runs the agent will perform. Defaults to 3.

Type:

int, optional

verbose_logger#

The verbose logger used to log information. Defaults to DummyVerboseLogger().

Type:

BaseVerboseLogger, optional

actions#

The list of actions the agent can perform. Defaults to [].

Type:

List[BaseAction], optional

validation_steps#

The number of validation steps the agent will perform. Defaults to 1.

Type:

int, optional

validations#

The list of validations the agent will perform. Defaults to [].

Type:

List[BaseOutputProcessor], optional

actions: List[BaseAction]#
belief: Belief#
citation_enabled: bool#
config: AgentConfig#
create_actions() List[BaseAction][source]#
description: str#
feedback_agent_name: str#
global_regen_max: int#
llm: Any#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
policy: BasePolicy#
prompt_template: PromptTemplate#
shared_memory: SharedMemory#
stop_checker: Callable[[Belief], bool]#
synthesize_output() str[source]#
validation_steps: int#
validations: List[BaseOutputProcessor]#
class sherpa_ai.agents.UserAgent(*, name: str = 'User', description: str = 'A user agent that redirects the task to an expert', shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, belief: ~sherpa_ai.memory.belief.Belief = None, policy: ~sherpa_ai.policies.base.BasePolicy = None, num_runs: int = 1, actions: ~typing.List[~sherpa_ai.actions.base.BaseAction] = [], validation_steps: int = 1, validations: ~typing.List[~sherpa_ai.output_parsers.base.BaseOutputProcessor] = [], feedback_agent_name: str = 'critic', global_regen_max: int = 12, llm: ~typing.Any = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, stop_checker: ~typing.Callable[[~sherpa_ai.memory.belief.Belief], bool] = <function BaseAgent.<lambda>>, **extra_data: ~typing.Any)[source]#

Bases: BaseAgent

A wrapper class for redirecting the task to a real person

actions: List[BaseAction]#
belief: Belief#
create_actions() List[BaseAction][source]#
description: str#
feedback_agent_name: str#
global_regen_max: int#
llm: Any#
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}#

Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].

name: str#
num_runs: int#
policy: BasePolicy#
prompt_template: PromptTemplate#
run() str[source]#

Redirect the task to a real person

shared_memory: SharedMemory#
stop_checker: Callable[[Belief], bool]#
synthesize_output() str[source]#
validation_steps: int#
validations: List[BaseOutputProcessor]#