sherpa_ai.actions package#
Overview#
The actions package contains a collection of specialized actions that Sherpa AI agents can perform to accomplish tasks. These actions range from web searches to mathematical operations and content synthesis.
Key Components#
Web Interactions: Google and arXiv search capabilities
Reasoning Actions: Deliberation and planning mechanisms
Content Processing: Context search and synthesis operations
Mathematical Tools: Arithmetic problem-solving actions
Example Usage#
from sherpa_ai.actions import GoogleSearch, SynthesizeOutput

# my_llm is your configured language model instance
# Perform a Google search (role_description and task guide result refinement)
search_action = GoogleSearch(
    role_description="Research assistant",
    task="Find the latest developments in quantum computing",
    llm=my_llm,
)
search_results = search_action.execute(query="latest developments in quantum computing")

# Synthesize an answer from the search results
synthesize_action = SynthesizeOutput(role_description="Research assistant", llm=my_llm)
summary = synthesize_action.execute(
    task="Summarize the latest developments in quantum computing",
    context=search_results,
    history="",
)
print(summary)
Submodules#
Module | Description
---|---
arxiv_search | Implements search capabilities for academic papers and research on arXiv.
base | Contains the abstract base classes that define the action interface.
context_search | Offers tools for searching within provided context or documents.
deliberation | Provides reasoning and reflection capabilities for decision-making.
google_search | Implements web search functionality using Google search engine.
planning | Contains actions for creating plans and strategic action sequences.
synthesize | Offers capabilities for generating summaries and synthesizing information.
sherpa_ai.actions.arxiv_search module#
- class sherpa_ai.actions.arxiv_search.ArxivSearch(*, name: str = 'ArxivSearch', args: dict = {'query': 'string'}, usage: str = 'Search paper on the Arxiv website', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, resources: list[~sherpa_ai.actions.base.ActionResource] = <factory>, num_documents: int = 5, reranker: ~sherpa_ai.actions.utils.reranking.BaseReranking = None, refiner: ~sherpa_ai.actions.utils.refinement.BaseRefinement = None, current_task: str = '', perform_reranking: bool = False, perform_refinement: bool = True, role_description: str, task: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask: {task}\n\nRelevant Paper Title and Summary:\n{paper_title_summary}\n\n\nReview and analyze the provided paper summary with respect to the task. Craft a concise and short, unified summary that distills key information that is most relevant to the task, incorporating reference links within the summary.\nOnly use the information given. Do not add any additional information. The summary should be less than {n} setences\n')[source]#
Bases: BaseRetrievalAction
A class for searching and retrieving papers from the Arxiv website.
This class provides functionality to search for academic papers on Arxiv based on a query, retrieve relevant information, and refine the results using an LLM to create concise summaries.
This class inherits from BaseRetrievalAction and provides methods to:
- Search for papers on Arxiv using a query
- Refine search results into concise summaries relevant to a specific task
- role_description#
Description of the role context for refining results.
- Type:
str
- task#
The specific task or question to focus on when refining results.
- Type:
str
- llm#
Language model used for refining search results.
- Type:
Any
- description#
Template for generating refinement prompts.
- Type:
str
- _search_tool#
Internal tool for performing Arxiv searches.
- Type:
Any
- name#
Name of the action, set to “ArxivSearch”.
- Type:
str
- args#
Arguments accepted by the action, including “query”.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
- perform_refinement#
Whether to refine search results, default is True.
- Type:
bool
Example
>>> from sherpa_ai.actions import ArxivSearch
>>> search = ArxivSearch(role_description="AI researcher", task="Find papers on transformer architecture")
>>> results = search.search("transformer architecture")
>>> summary = search.refine(results)
>>> print(summary)
- role_description: str#
- task: str#
- llm: Any#
- description: str#
- name: str#
- args: dict#
- usage: str#
- perform_refinement: bool#
- search(query)[source]#
Search for papers on Arxiv based on the provided query.
This method uses the SearchArxivTool to find papers matching the query, adds the found resources to the action’s resource collection, and returns them.
- Parameters:
query (str) – The search query to find relevant papers.
- Returns:
A list of dictionaries containing information about found papers.
- Return type:
list[dict]
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context, /)#
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self (BaseModel) – The BaseModel instance.
context (Any) – The context.
- Return type:
None
- refine(result)[source]#
Refine the search results into a concise summary relevant to the specified task.
This method formats a prompt using the action’s description template and the provided result, then uses the LLM to generate a refined summary that focuses on information relevant to the task.
- Parameters:
result (str) – The search results to be refined into a summary.
- Returns:
A refined summary of the search results, focused on the specified task.
- Return type:
str
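The refinement step described above amounts to filling a prompt template with the search results and passing it to the language model. A minimal self-contained sketch of that flow, using a simplified template and a hypothetical `fake_llm` stand-in for the real LLM:

```python
# Simplified template in the spirit of ArxivSearch's default description
# (the real template has more fields and instructions).
TEMPLATE = (
    "Role Description: {role_description}\n"
    "Task: {task}\n\n"
    "Relevant Paper Title and Summary:\n{paper_title_summary}\n\n"
    "The summary should be less than {n} sentences.\n"
)

def fake_llm(prompt: str) -> str:
    # Stand-in for the language model call; returns a canned summary.
    return "One-sentence summary of the papers."

def refine(result: str, role_description: str, task: str, n: int = 3) -> str:
    # Fill the prompt template with the search results, then ask the LLM.
    prompt = TEMPLATE.format(
        role_description=role_description,
        task=task,
        paper_title_summary=result,
        n=n,
    )
    return fake_llm(prompt)

print(refine("Paper A: attention is all you need...", "AI researcher", "transformer survey"))
```

In the real class, the template comes from the `description` attribute and the LLM from the `llm` attribute; everything else here is illustrative.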
- resources: list[ActionResource]#
- num_documents: int#
- reranker: BaseReranking#
- refiner: BaseRefinement#
- current_task: str#
- perform_reranking: bool#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
sherpa_ai.actions.base module#
- class sherpa_ai.actions.base.ActionResource(**data)[source]#
Bases: BaseModel
A model representing a resource used by an action.
This class defines the structure for resources that can be used by actions, such as documents, URLs, or other content sources.
- source#
Source identifier of the resource, such as document ID or URL.
- Type:
str
- content#
The actual content of the resource.
- Type:
str
Example
>>> resource = ActionResource(source="doc123", content="This is the document content")
>>> print(resource.source)
doc123
- source: str#
- content: str#
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class sherpa_ai.actions.base.ActionArgument(*args, name: str, type: str = 'str', description: str = '', source: str = 'agent', key: str | None = None)[source]#
Bases: BaseModel
A model representing an argument used by an action.
This class defines the structure for arguments that can be passed to actions, including their type, description, and source.
- name#
Name of the argument.
- Type:
str
- type#
Data type of the argument. Defaults to “str”.
- Type:
str
- description#
Description of what the argument represents. Defaults to “”.
- Type:
str
- source#
Source of the argument value, either “agent” or “belief”. If “agent”, the argument is provided by the agent (LLM). If “belief”, the value is retrieved from the belief dictionary. Defaults to “agent”.
- Type:
str
- key#
Key in the belief dictionary if source is “belief”. Defaults to the argument name if not specified.
- Type:
Optional[str]
Example
>>> arg = ActionArgument(name="query", type="str", description="Search query")
>>> print(arg.name)
query
- name: str#
- type: str#
- description: str#
- source: str#
- key: Optional[str]#
- model_config: ClassVar[ConfigDict] = {}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class sherpa_ai.actions.base.BaseAction(*args, name: str, usage: str, belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>)[source]#
Bases: ABC, BaseModel
Base class for all actions in the Sherpa AI system.
This abstract class provides the foundation for all actions, defining the common interface and functionality that all actions must implement. It handles argument processing, validation, execution, and result management.
This class inherits from ABC and BaseModel and provides methods to:
- Process and validate action arguments
- Execute actions with proper error handling
- Manage action lifecycle (start, execution, end)
- Store and retrieve results in the belief system
- name#
Unique identifier for the action.
- Type:
str
- args#
Arguments required to run the action.
- Type:
Union[dict, list[ActionArgument]]
- usage#
Description of how to use the action.
- Type:
str
- output_key#
Key used to store the action result in the belief system.
- Type:
Optional[str]
- prompt_template#
Template for generating prompts.
- Type:
Optional[PromptTemplate]
Example
>>> class MyAction(BaseAction):
...     name = "my_action"
...     args = {"input": "string"}
...     usage = "Performs a specific task"
...     def execute(self, **kwargs):
...         return f"Processed: {kwargs['input']}"
>>> action = MyAction()
>>> result = action(input="test")
>>> print(result)
Processed: test
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- name: str#
- args: Union[dict, list[ActionArgument]]#
- usage: str#
- prompt_template: Optional[PromptTemplate]#
- output_key: Optional[str]#
- abstractmethod execute(**kwargs)[source]#
Execute the action with the provided arguments.
This method must be implemented by all subclasses to define the specific behavior of the action.
- Parameters:
**kwargs – Keyword arguments required by the action.
- Returns:
The result of the action execution.
- Return type:
Any
- input_validation(**kwargs)[source]#
Validate and filter the input arguments for the action.
This method checks that all required arguments are provided and retrieves values from the belief system when needed.
- Parameters:
**kwargs – Keyword arguments to validate.
- Returns:
Filtered dictionary containing only the valid arguments.
- Return type:
dict
- Raises:
ValueError – If a required argument is missing or has an invalid source.
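The resolution rules described above (arguments with source `"agent"` come from the caller's keyword arguments, while source `"belief"` arguments are looked up in the belief dictionary under `key`) can be sketched as follows. This is an illustrative, self-contained approximation of the documented behavior, not the actual `input_validation` implementation; the `Arg` dataclass stands in for `ActionArgument` and a plain dict stands in for the belief system:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Arg:
    name: str
    source: str = "agent"        # "agent" or "belief"
    key: Optional[str] = None    # belief key; defaults to the argument name

def input_validation(args: list[Arg], belief: dict, **kwargs) -> dict:
    filtered = {}
    for arg in args:
        if arg.source == "agent":
            # Agent-sourced arguments must be supplied by the caller (LLM).
            if arg.name not in kwargs:
                raise ValueError(f"Missing required argument: {arg.name}")
            filtered[arg.name] = kwargs[arg.name]
        elif arg.source == "belief":
            # Belief-sourced arguments are read from the belief dictionary.
            filtered[arg.name] = belief[arg.key or arg.name]
        else:
            raise ValueError(f"Invalid source for argument: {arg.name}")
    return filtered

args = [Arg("query"), Arg("history", source="belief", key="chat_history")]
belief = {"chat_history": "previous conversation..."}
print(input_validation(args, belief, query="quantum computing"))
```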
- class sherpa_ai.actions.base.AsyncBaseAction(*args, name: str, usage: str, belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>)[source]#
Bases: BaseAction, ABC
Base class for asynchronous actions in the Sherpa AI system.
This class extends BaseAction to provide asynchronous execution capabilities, allowing actions to be executed without blocking the main thread.
This class inherits from BaseAction and provides methods to:
- Execute actions asynchronously
- Handle asynchronous action lifecycle
Example
>>> class MyAsyncAction(AsyncBaseAction):
...     name = "my_async_action"
...     args = {"input": "string"}
...     usage = "Performs an asynchronous task"
...     async def execute(self, **kwargs):
...         # Simulate async work
...         await asyncio.sleep(1)
...         return f"Processed: {kwargs['input']}"
>>> action = MyAsyncAction()
>>> result = await action(input="test")
>>> print(result)
Processed: test
- abstractmethod async execute(**kwargs)[source]#
Execute the action asynchronously with the provided arguments.
This method must be implemented by all subclasses to define the specific behavior of the asynchronous action.
- Parameters:
**kwargs – Keyword arguments required by the action.
- Returns:
The result of the action execution.
- Return type:
Any
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- name: str#
- args: Union[dict, list[ActionArgument]]#
- usage: str#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
- class sherpa_ai.actions.base.BaseRetrievalAction(*args, name: str, usage: str, belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, resources: list[~sherpa_ai.actions.base.ActionResource] = <factory>, num_documents: int = 5, reranker: ~sherpa_ai.actions.utils.reranking.BaseReranking = None, refiner: ~sherpa_ai.actions.utils.refinement.BaseRefinement = None, current_task: str = '', perform_reranking: bool = False, perform_refinement: bool = False)[source]#
Bases: BaseAction, ABC
Base class for retrieval-based actions in the Sherpa AI system.
This class extends BaseAction to provide functionality for retrieving and processing documents or resources based on a query. It supports reranking and refinement of search results.
This class inherits from BaseAction and provides methods to:
- Search for relevant documents
- Rerank search results
- Refine search results
- Manage document resources
- resources#
List of resources retrieved by the action.
- Type:
list[ActionResource]
- num_documents#
Number of documents to retrieve. Defaults to 5.
- Type:
int
- reranker#
Component for reranking search results.
- Type:
BaseReranking
- refiner#
Component for refining search results.
- Type:
BaseRefinement
- current_task#
Current task context for reranking and refinement.
- Type:
str
- perform_reranking#
Whether to perform reranking on search results.
- Type:
bool
- perform_refinement#
Whether to perform refinement on search results.
- Type:
bool
Example
>>> class MySearchAction(BaseRetrievalAction):
...     name = "my_search"
...     args = {"query": "string"}
...     usage = "Searches for documents"
...     def search(self, query):
...         # Implement search logic
...         return [{"Source": "doc1", "Document": "content1"}]
>>> action = MySearchAction()
>>> result = action(query="test")
>>> print(result)
content1
- resources: list[ActionResource]#
- num_documents: int#
- reranker: BaseReranking#
- refiner: BaseRefinement#
- current_task: str#
- perform_reranking: bool#
- perform_refinement: bool#
- add_resources(resources)[source]#
Add resources to the action’s resource collection.
This method clears the existing resources and adds the new ones.
- Parameters:
resources (list[dict]) – List of resource dictionaries with “Source” and “Document” keys.
- execute(query)[source]#
Execute the retrieval action with the provided query.
This method performs the search, optionally reranks and refines the results, and returns the final processed results.
- Parameters:
query (str) – The search query.
- Returns:
The processed search results as a string.
- Return type:
str
- Raises:
SherpaActionExecutionException – If the query is empty.
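The execute flow described above (search, then optional reranking, then optional refinement, with an error on an empty query) can be sketched as a small self-contained pipeline. The `rerank` and `refine` callables below are hypothetical stand-ins for the `BaseReranking` and `BaseRefinement` components, and a plain `ValueError` stands in for `SherpaActionExecutionException`:

```python
def run_retrieval(
    query: str,
    search,                       # callable returning [{"Source", "Document"}, ...]
    rerank=None,
    refine=None,
    perform_reranking: bool = False,
    perform_refinement: bool = False,
) -> str:
    if not query:
        # The real class raises SherpaActionExecutionException here.
        raise ValueError("Query must not be empty")
    results = search(query)
    if perform_reranking and rerank is not None:
        results = rerank(results)
    # Join the retrieved documents into a single string result.
    text = "\n".join(doc["Document"] for doc in results)
    if perform_refinement and refine is not None:
        text = refine(text)
    return text

def docs(query):
    # Toy search backend returning fixed documents.
    return [{"Source": "doc1", "Document": "content1"},
            {"Source": "doc2", "Document": "content2"}]

print(run_retrieval("test", docs, perform_refinement=True,
                    refine=lambda t: t.upper()))
```

Subclasses only supply `search` (and optionally reranking/refinement components); the base class owns this orchestration.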
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- abstractmethod search(query)[source]#
Search for relevant documents based on the query.
This method must be implemented by all subclasses to define the specific search behavior.
- Parameters:
query (str) – The search query.
- Returns:
List of dictionaries containing search results with “Source” and “Document” keys.
- Return type:
list[dict]
- name: str#
- args: Union[dict, list[ActionArgument]]#
- usage: str#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
sherpa_ai.actions.context_search module#
- class sherpa_ai.actions.context_search.ContextSearch(*, name: str = 'Context Search', args: dict = {'query': 'string'}, usage: str = 'Search the conversation history with the user', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, resources: list[~sherpa_ai.actions.base.ActionResource] = <factory>, num_documents: int = 5, reranker: ~sherpa_ai.actions.utils.reranking.BaseReranking = None, refiner: ~sherpa_ai.actions.utils.refinement.BaseRefinement = None, current_task: str = '', perform_reranking: bool = False, perform_refinement: bool = True, role_description: str, task: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask: {task}\n\nRelevant Documents:\n{documents}\n\n\nReview and analyze the provided documents with respect to the task. Craft a concise and short, unified summary that distills key information that is most relevant to the task, incorporating reference links within the summary.\nOnly use the information given. Do not add any additional information. The summary should be less than {n} setences\n')[source]#
Bases: BaseRetrievalAction
An action for searching and retrieving information from conversation context.
This class provides functionality to search through conversation history and retrieve relevant information based on a query, with optional refinement of results.
This class inherits from BaseRetrievalAction and provides methods to:
- Search conversation history for relevant information
- Refine search results into concise summaries
- Process and structure context information
- role_description#
Description of the role context for refinement.
- Type:
str
- task#
The specific task or question to focus on.
- Type:
str
- llm#
Language model used for refining search results.
- Type:
Any
- description#
Template for generating refinement prompts.
- Type:
str
- _context#
Internal tool for accessing conversation context.
- Type:
Any
- name#
Name of the action, set to “Context Search”.
- Type:
str
- args#
Arguments required by the action.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
- perform_refinement#
Whether to refine search results. Defaults to True.
- Type:
bool
Example
>>> search = ContextSearch(
...     role_description="AI assistant",
...     task="Find information about previous discussions",
...     llm=my_llm
... )
>>> results = search.search("quantum computing")
>>> summary = search.refine(results)
>>> print(summary)
Based on our previous discussions, quantum computing uses quantum bits...
- role_description: str#
- task: str#
- llm: Any#
- description: str#
- name: str#
- args: dict#
- usage: str#
- perform_refinement: bool#
- search(query)[source]#
Search conversation history for information relevant to the query.
This method uses the ContextTool to search through conversation history and retrieve relevant information.
- Parameters:
query (str) – The search query.
- Returns:
A string containing the search results.
- Return type:
str
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context, /)#
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self (BaseModel) – The BaseModel instance.
context (Any) – The context.
- Return type:
None
- refine(result)[source]#
Refine the search results into a concise summary.
This method formats a prompt using the action’s description template and the provided result, then uses the LLM to generate a refined summary.
- Parameters:
result (str) – The search results to be refined into a summary.
- Returns:
A refined summary of the search results, focused on the specified task.
- Return type:
str
- resources: list[ActionResource]#
- num_documents: int#
- reranker: BaseReranking#
- refiner: BaseRefinement#
- current_task: str#
- perform_reranking: bool#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
sherpa_ai.actions.deliberation module#
- class sherpa_ai.actions.deliberation.Deliberation(*args, name: str = 'Deliberation', usage: str = 'Directly come up with a solution', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, role_description: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask Description: {task}\n\nPlease deliberate on the task and generate a solution that is:\n\nHighly Detailed: Break down components and elements clearly.\nQuality-Oriented: Ensure top-notch performance and longevity.\nPrecision-Focused: Specific measures, materials, or methods to be used.\n\nKeep the result concise and short. No more than one paragraph.\n\n')[source]#
Bases: BaseAction
A class for generating detailed and well-thought-out solutions to tasks.
This class provides functionality to analyze tasks and generate comprehensive solutions that are detailed, quality-oriented, and precision-focused. It uses an LLM to deliberate on the task and produce a concise yet thorough response.
This class inherits from BaseAction and provides methods to:
- Analyze and break down task components
- Generate detailed solutions with specific measures and methods
- Ensure quality and precision in the output
- role_description#
Description of the role context for deliberation.
- Type:
str
- llm#
Language model used for generating solutions.
- Type:
Any
- description#
Template for generating deliberation prompts.
- Type:
str
- name#
Name of the action, set to “Deliberation”.
- Type:
str
- args#
Arguments accepted by the action, including “task”.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
Example
>>> from sherpa_ai.actions import Deliberation
>>> deliberation = Deliberation(
...     role_description="Expert problem solver",
...     llm=my_llm
... )
>>> solution = deliberation.execute(
...     task="Design a robust error handling system"
... )
>>> print(solution)
- role_description: str#
- llm: Any#
- description: str#
- name: str#
- args: dict#
- usage: str#
- execute(task)[source]#
Execute the Deliberation action.
- Parameters:
task (str) – The task to deliberate on.
- Returns:
The solution to the task.
- Return type:
str
- Raises:
SherpaActionExecutionException – If the action fails to execute.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
sherpa_ai.actions.google_search module#
- class sherpa_ai.actions.google_search.GoogleSearch(*, name: str = 'Google Search', args: dict = {'query': 'string'}, usage: str = 'Get answers from Google Search', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, resources: list[~sherpa_ai.actions.base.ActionResource] = <factory>, num_documents: int = 5, reranker: ~sherpa_ai.actions.utils.reranking.BaseReranking = None, refiner: ~sherpa_ai.actions.utils.refinement.BaseRefinement = None, current_task: str = '', perform_reranking: bool = False, perform_refinement: bool = False, role_description: str, task: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask: {task}\n\nRelevant Documents:\n{documents}\n\n\nReview and analyze the provided documents with respect to the task. Craft a concise and short, unified summary that distills key information that is most relevant to the task, incorporating reference links within the summary.\nOnly use the information given. Do not add any additional information. The summary should be less than {n} setences\n', config: ~sherpa_ai.config.task_config.AgentConfig = AgentConfig(verbose=True, gsite=[], do_reflect=False, use_task_agent=False, search_domains=[], invalid_domains=[]))[source]#
Bases: BaseRetrievalAction
A class for searching and retrieving information from Google Search.
This class provides functionality to search for information on Google based on a query, retrieve relevant results, and refine them using an LLM to create concise summaries.
This class inherits from BaseRetrievalAction and provides methods to:
- Search for information on Google using a query
- Refine search results into concise summaries relevant to a specific task
- Extract original sentences from search results when needed
- role_description#
Description of the role context for refining results.
- Type:
str
- task#
The specific task or question to focus on when refining results.
- Type:
str
- llm#
Language model used for refining search results.
- Type:
Any
- description#
Template for generating refinement prompts.
- Type:
str
- config#
Configuration for the search agent.
- Type:
AgentConfig
- _search_tool#
Internal tool for performing Google searches.
- Type:
Any
- name#
Name of the action, set to “Google Search”.
- Type:
str
- args#
Arguments accepted by the action, including “query”.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
Example
>>> from sherpa_ai.actions import GoogleSearch
>>> search = GoogleSearch(
...     role_description="Research assistant",
...     task="Find information about quantum computing"
... )
>>> results = search.search("quantum computing applications")
>>> summary = search.refine(results)
>>> print(summary)
- role_description: str#
- task: str#
- llm: Any#
- description: str#
- config: AgentConfig#
- name: str#
- args: dict#
- usage: str#
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context, /)#
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self (BaseModel) – The BaseModel instance.
context (Any) – The context.
- Return type:
None
- search(query)[source]#
Search for relevant documents based on the query.
This method performs a Google search and returns the top results.
- Parameters:
query (str) – The search query.
- Returns:
List of dictionaries containing search results with “Source” and “Document” keys.
- Return type:
list[dict]
- resources: list[ActionResource]#
- num_documents: int#
- reranker: BaseReranking#
- refiner: BaseRefinement#
- current_task: str#
- perform_reranking: bool#
- perform_refinement: bool#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
sherpa_ai.actions.planning module#
- class sherpa_ai.actions.planning.Step(agent_name, task)[source]#
Bases: object
A single step in a task execution plan.
This class represents a single step in a plan, consisting of an agent assigned to perform a specific task.
- agent_name#
The name of the agent assigned to execute this step.
- Type:
str
- task#
The detailed description of the task to be executed.
- Type:
str
Example
>>> step = Step(agent_name="Researcher", task="Find information about quantum computing")
>>> print(step)
Agent: Researcher
Task: Find information about quantum computing
- class sherpa_ai.actions.planning.Plan[source]#
Bases: object
A collection of steps forming a complete task execution plan.
This class represents a complete plan for executing a task, consisting of multiple steps, each assigned to a specific agent.
Example
>>> plan = Plan()
>>> plan.add_step(Step("Researcher", "Find information"))
>>> plan.add_step(Step("Writer", "Summarize findings"))
>>> print(plan)
Step 1:
Agent: Researcher
Task: Find information
Step 2:
Agent: Writer
Task: Summarize findings
- class sherpa_ai.actions.planning.TaskPlanning(*args, name: str = 'TaskPlanning', usage: str = 'Come up with a plan to solve the task', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, llm: ~typing.Any = None, num_steps: int = 5, prompt: str = 'You are a **task decomposition assistant** who simplifies complex tasks into sequential steps, assigning roles or agents to each.\nBy analyzing user-defined tasks and agent capabilities, you provides structured plans, enhancing project clarity and efficiency.\nYour adaptability ensures customized solutions for diverse needs.\n\nA good plan is concise, detailed, feasible and efficient.\n\nTask: **{task}**\n\nAgents:\n{agent_pool_description}\n\nPlease break down the task into maximum {num_steps} individual, detailed steps and designate an appropriate agent for each step. The result should be in the following format:\nStep 1:\n Agent: <AgentName>\n Task: <detailed task description>\n...\nStep N:\n Agent: <AgentName>\n Task: <detailed task description>\n\nDo not answer anything else, and do not add any other information in your answer. Only select agents from the the list and only select one agent at a time.\n', revision_prompt: str = 'You are a **task decomposition assistant** who simplifies complex tasks into sequential steps, assigning roles or agents to each.\nBy analyzing user-defined tasks and agent capabilities, you provide structured plans, enhancing project clarity and efficiency.\nYour adaptability ensures customized solutions for diverse needs.\n\nA good plan is concise, detailed, feasible and efficient. 
It should be broken down into individual steps, with each step assigned to an appropriate agent.\n\nTask: **{task}**\n\nAgents:\n{agent_pool_description}\n\nHere is your previous plan:\n{previous_plan}\n\nHere is the feedback from the last run:\n{feedback}\n\nPlease revise the plan based on the feedback to maximum {num_steps} steps. The result should be in the following format:\nStep 1:\n Agent: <AgentName>\n Task: <detailed task description>\n...\nStep N:\n Agent: <AgentName>\n Task: <detailed task description>\n\nDo not answer anything else, and do not add any other information in your answer. Only select agents from the the list and only select one agent at a time.\n')[source]#
Bases: BaseAction
An action for creating and revising task execution plans.
This class provides functionality to decompose complex tasks into sequential steps, assigning appropriate agents to each step. It can create new plans or revise existing plans based on feedback.
This class inherits from BaseAction and provides methods to:
- Create new task execution plans
- Revise existing plans based on feedback
- Process and structure plan outputs
- Attributes:
llm (Any): Language model used for generating plans. num_steps (int): Maximum number of steps in a plan. Defaults to 5. prompt (str): Template for generating new plans. revision_prompt (str): Template for revising existing plans. name (str): Name of the action, set to “TaskPlanning”. args (dict): Arguments required by the action. usage (str): Description of the action’s usage.
- Example:
>>> planning = TaskPlanning(llm=my_llm)
>>> plan = planning.execute(
...     task="Research quantum computing and write a summary",
...     agent_pool_description="Researcher: Finds information\nWriter: Creates summaries"
... )
>>> print(plan)
Step 1:
  Agent: Researcher
  Task: Find information about quantum computing
Step 2:
  Agent: Writer
  Task: Summarize the findings about quantum computing
-
llm:
Any
#
-
num_steps:
int
#
-
prompt:
str
#
-
revision_prompt:
str
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- execute(task, agent_pool_description, last_plan=None, feedback=None)[source]#
Execute the task planning action.
This method generates a new plan or revises an existing plan based on the provided task, agent pool, and optional feedback.
- Parameters:
task (str) – The task to be planned.
agent_pool_description (str) – Description of available agents and their capabilities.
last_plan (Optional[str]) – Previous plan to revise, if any.
feedback (Optional[str]) – Feedback on the previous plan, if any.
- Returns:
A structured plan for executing the task.
- Return type:
Plan
- post_process(action_output)[source]#
Process the raw output from the LLM into a structured Plan.
This method parses the text output from the language model and converts it into a structured Plan object with Step objects.
- Parameters:
action_output (str) – Raw text output from the language model.
- Returns:
A structured plan containing steps with assigned agents and tasks.
- Return type:
Plan
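The step format that post_process consumes is the one the prompt mandates (“Step N: / Agent: / Task:”). As a minimal sketch of how such output can be parsed into structured steps — the actual post_process implementation and the library’s Plan/Step classes may differ; plain dicts stand in for them here:

```python
import re

# Hedged sketch: parse the documented "Step N: / Agent: / Task:" plan text
# into a list of step dicts. Illustrative only; sherpa_ai's real
# post_process returns Plan/Step objects, not dicts.
def parse_plan(action_output: str) -> list[dict]:
    """Extract (agent, task) pairs from the plan text format."""
    pattern = re.compile(
        r"Step \d+:\s*Agent:\s*(?P<agent>.+?)\s*Task:\s*(?P<task>.+?)(?=Step \d+:|$)",
        re.DOTALL,
    )
    return [
        {"agent": m.group("agent").strip(), "task": m.group("task").strip()}
        for m in pattern.finditer(action_output)
    ]

raw = (
    "Step 1:\n  Agent: Researcher\n  Task: Find information about quantum computing\n"
    "Step 2:\n  Agent: Writer\n  Task: Summarize the findings\n"
)
print(parse_plan(raw))
```

The lazy quantifiers plus the lookahead let each task run until the next “Step N:” header, so multi-line task descriptions are captured whole.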
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
sherpa_ai.actions.synthesize module#
- class sherpa_ai.actions.synthesize.SynthesizeOutput(*, name: str = 'SynthesizeOutput', args: dict = {'context': 'string', 'history': 'string', 'task': 'string'}, usage: str = 'Answer the question using conversation history with the user', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, role_description: str, llm: ~typing.Any = None, description: str = None, add_citation: bool = False)[source]#
Bases:
BaseAction
An action for synthesizing information into a coherent response.
This class provides functionality to generate responses by combining task requirements, context, and conversation history, with optional citation support.
- This class inherits from
BaseAction
and provides methods to: Generate synthesized responses based on multiple inputs
Format responses with or without citations
Process and structure output using templates
- role_description#
Description of the role context for response generation.
- Type:
str
- llm#
Language model used for generating responses.
- Type:
Any
- description#
Custom description template for response generation.
- Type:
str
- add_citation#
Whether to include citations in the response.
- Type:
bool
- name#
Name of the action, set to “SynthesizeOutput”.
- Type:
str
- args#
Arguments required by the action.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
Example
>>> synthesizer = SynthesizeOutput(
...     role_description="AI assistant",
...     llm=my_llm,
...     add_citation=True
... )
>>> response = synthesizer.execute(
...     task="Summarize the benefits of exercise",
...     context="Exercise improves cardiovascular health and mental well-being",
...     history="User: Tell me about exercise benefits"
... )
>>> print(response)
Exercise provides numerous health benefits, including improved cardiovascular health and mental well-being [1].
-
role_description:
str
#
-
llm:
Any
#
-
description:
str
#
-
add_citation:
bool
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- execute(task, context, history)[source]#
Generate a synthesized response based on the provided inputs.
This method combines task requirements, context, and conversation history to generate a coherent response, with optional citation support.
- Parameters:
task (str) – The task or question to address.
context (str) – Relevant context information for the response.
history (str) – Conversation history for context.
- Returns:
The generated response text.
- Return type:
str
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
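When add_citation is enabled, the response carries numbered markers like “[1]” that map to sources, as in the example above. A minimal self-contained sketch of that citation pattern — the helper name and exact formatting are assumptions, not SynthesizeOutput’s actual internals:

```python
# Hedged sketch of the numbered-citation pattern: [n] markers in the
# response text map to an ordered source list appended below it.
# Illustrative only; SynthesizeOutput's real citation logic may differ.
def attach_citations(response: str, sources: list[str]) -> str:
    """Append a numbered reference list matching [n] markers in the text."""
    lines = [response, ""]  # blank line separates text from references
    for i, src in enumerate(sources, start=1):
        lines.append(f"[{i}] {src}")
    return "\n".join(lines)

text = "Exercise improves cardiovascular health [1] and mental well-being [2]."
print(attach_citations(text, ["https://example.org/cardio", "https://example.org/mental"]))
```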
Module contents#
Sherpa AI Actions Package.
This package provides a collection of action classes that implement various functionalities for the Sherpa AI system. Each action represents a specific capability or operation that can be performed by the system.
- Available Actions:
ArxivSearch: Search and retrieve information from arXiv papers
Deliberation: Process and analyze information for decision making
EmptyAction: A placeholder action with no functionality
GoogleSearch: Search and retrieve information from Google
MockAction: A mock implementation for testing purposes
TaskPlanning: Generate and manage task execution plans
SynthesizeOutput: Generate synthesized responses from multiple inputs
Example
>>> from sherpa_ai.actions import ArxivSearch, TaskPlanning
>>> arxiv = ArxivSearch(role_description="Research assistant")
>>> planner = TaskPlanning(role_description="Task planner")
>>> # Use the actions as needed
- class sherpa_ai.actions.Deliberation(*args, name: str = 'Deliberation', usage: str = 'Directly come up with a solution', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, role_description: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask Description: {task}\n\nPlease deliberate on the task and generate a solution that is:\n\nHighly Detailed: Break down components and elements clearly.\nQuality-Oriented: Ensure top-notch performance and longevity.\nPrecision-Focused: Specific measures, materials, or methods to be used.\n\nKeep the result concise and short. No more than one paragraph.\n\n')[source]#
Bases:
BaseAction
A class for generating detailed and well-thought-out solutions to tasks.
This class provides functionality to analyze tasks and generate comprehensive solutions that are detailed, quality-oriented, and precision-focused. It uses an LLM to deliberate on the task and produce a concise yet thorough response.
- This class inherits from
BaseAction
and provides methods to: Analyze and break down task components
Generate detailed solutions with specific measures and methods
Ensure quality and precision in the output
- role_description#
Description of the role context for deliberation.
- Type:
str
- llm#
Language model used for generating solutions.
- Type:
Any
- description#
Template for generating deliberation prompts.
- Type:
str
- name#
Name of the action, set to “Deliberation”.
- Type:
str
- args#
Arguments accepted by the action, including “task”.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
Example
>>> from sherpa_ai.actions import Deliberation
>>> deliberation = Deliberation(
...     role_description="Expert problem solver",
...     llm=my_llm
... )
>>> solution = deliberation.execute(
...     task="Design a robust error handling system"
... )
>>> print(solution)
- execute(task)[source]#
Execute the Deliberation action.
- Parameters:
task (str) – The task to deliberate on.
- Returns:
The solution to the task.
- Return type:
str
- Raises:
SherpaActionExecutionException – If the action fails to execute.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
-
role_description:
str
#
-
llm:
Any
#
-
description:
str
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
- class sherpa_ai.actions.GoogleSearch(*, name: str = 'Google Search', args: dict = {'query': 'string'}, usage: str = 'Get answers from Google Search', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, resources: list[~sherpa_ai.actions.base.ActionResource] = <factory>, num_documents: int = 5, reranker: ~sherpa_ai.actions.utils.reranking.BaseReranking = None, refiner: ~sherpa_ai.actions.utils.refinement.BaseRefinement = None, current_task: str = '', perform_reranking: bool = False, perform_refinement: bool = False, role_description: str, task: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask: {task}\n\nRelevant Documents:\n{documents}\n\n\nReview and analyze the provided documents with respect to the task. Craft a concise and short, unified summary that distills key information that is most relevant to the task, incorporating reference links within the summary.\nOnly use the information given. Do not add any additional information. The summary should be less than {n} sentences\n', config: ~sherpa_ai.config.task_config.AgentConfig = AgentConfig(verbose=True, gsite=[], do_reflect=False, use_task_agent=False, search_domains=[], invalid_domains=[]))[source]#
Bases:
BaseRetrievalAction
A class for searching and retrieving information from Google Search.
This class provides functionality to search for information on Google based on a query, retrieve relevant results, and refine them using an LLM to create concise summaries.
- This class inherits from
BaseRetrievalAction
and provides methods to: Search for information on Google using a query
Refine search results into concise summaries relevant to a specific task
Extract original sentences from search results when needed
- role_description#
Description of the role context for refining results.
- Type:
str
- task#
The specific task or question to focus on when refining results.
- Type:
str
- llm#
Language model used for refining search results.
- Type:
Any
- description#
Template for generating refinement prompts.
- Type:
str
- config#
Configuration for the search agent.
- Type:
AgentConfig
- _search_tool#
Internal tool for performing Google searches.
- Type:
Any
- name#
Name of the action, set to “Google Search”.
- Type:
str
- args#
Arguments accepted by the action, including “query”.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
Example
>>> from sherpa_ai.actions import GoogleSearch
>>> search = GoogleSearch(
...     role_description="Research assistant",
...     task="Find information about quantum computing"
... )
>>> results = search.search("quantum computing applications")
>>> summary = search.refine(results)
>>> print(summary)
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context, /)#
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self (BaseModel) – The BaseModel instance.
context (Any) – The context.
- Return type:
None
- search(query)[source]#
Search for relevant documents based on the query.
This method performs a Google search and returns the top results.
- Parameters:
query (str) – The search query.
- Returns:
List of dictionaries containing search results with “Source” and “Document” keys.
- Return type:
list[dict]
-
role_description:
str
#
-
task:
str
#
-
llm:
Any
#
-
description:
str
#
-
config:
AgentConfig
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- resources: list[ActionResource]#
- num_documents: int#
- reranker: BaseReranking#
- refiner: BaseRefinement#
- current_task: str#
- perform_reranking: bool#
- perform_refinement: bool#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
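search() is documented as returning a list of dicts with “Source” and “Document” keys. As a minimal sketch of how downstream code might fold results of that shape into the {documents} slot of the description template — the helper and its formatting are assumptions, not library code:

```python
# Hedged sketch: format search hits of the documented shape
# [{"Source": ..., "Document": ...}, ...] into a numbered text block
# suitable for the {documents} placeholder. Illustrative only.
def format_documents(results: list[dict]) -> str:
    """Join search hits into a numbered block with source links."""
    return "\n\n".join(
        f"{i}. {hit['Document']}\n   (source: {hit['Source']})"
        for i, hit in enumerate(results, start=1)
    )

hits = [
    {"Source": "https://example.org/qc", "Document": "Quantum computing uses qubits."},
    {"Source": "https://example.org/apps", "Document": "Applications include cryptography."},
]
print(format_documents(hits))
```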
- class sherpa_ai.actions.TaskPlanning(*args, name: str = 'TaskPlanning', usage: str = 'Come up with a plan to solve the task', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, llm: ~typing.Any = None, num_steps: int = 5, prompt: str = 'You are a **task decomposition assistant** who simplifies complex tasks into sequential steps, assigning roles or agents to each.\nBy analyzing user-defined tasks and agent capabilities, you provide structured plans, enhancing project clarity and efficiency.\nYour adaptability ensures customized solutions for diverse needs.\n\nA good plan is concise, detailed, feasible and efficient.\n\nTask: **{task}**\n\nAgents:\n{agent_pool_description}\n\nPlease break down the task into maximum {num_steps} individual, detailed steps and designate an appropriate agent for each step. The result should be in the following format:\nStep 1:\n Agent: <AgentName>\n Task: <detailed task description>\n...\nStep N:\n Agent: <AgentName>\n Task: <detailed task description>\n\nDo not answer anything else, and do not add any other information in your answer. Only select agents from the list and only select one agent at a time.\n', revision_prompt: str = 'You are a **task decomposition assistant** who simplifies complex tasks into sequential steps, assigning roles or agents to each.\nBy analyzing user-defined tasks and agent capabilities, you provide structured plans, enhancing project clarity and efficiency.\nYour adaptability ensures customized solutions for diverse needs.\n\nA good plan is concise, detailed, feasible and efficient. It should be broken down into individual steps, with each step assigned to an appropriate agent.\n\nTask: **{task}**\n\nAgents:\n{agent_pool_description}\n\nHere is your previous plan:\n{previous_plan}\n\nHere is the feedback from the last run:\n{feedback}\n\nPlease revise the plan based on the feedback to maximum {num_steps} steps. The result should be in the following format:\nStep 1:\n Agent: <AgentName>\n Task: <detailed task description>\n...\nStep N:\n Agent: <AgentName>\n Task: <detailed task description>\n\nDo not answer anything else, and do not add any other information in your answer. Only select agents from the list and only select one agent at a time.\n')[source]#
Bases:
BaseAction
An action for creating and revising task execution plans.
This class provides functionality to decompose complex tasks into sequential steps, assigning appropriate agents to each step. It can create new plans or revise existing plans based on feedback.
- This class inherits from
BaseAction
and provides methods to: Create new task execution plans
Revise existing plans based on feedback
Process and structure plan outputs
- Attributes:
llm (Any): Language model used for generating plans.
num_steps (int): Maximum number of steps in a plan. Defaults to 5.
prompt (str): Template for generating new plans.
revision_prompt (str): Template for revising existing plans.
name (str): Name of the action, set to “TaskPlanning”.
args (dict): Arguments required by the action.
usage (str): Description of the action’s usage.
- Example:
>>> planning = TaskPlanning(llm=my_llm)
>>> plan = planning.execute(
...     task="Research quantum computing and write a summary",
...     agent_pool_description="Researcher: Finds information\nWriter: Creates summaries"
... )
>>> print(plan)
Step 1:
  Agent: Researcher
  Task: Find information about quantum computing
Step 2:
  Agent: Writer
  Task: Summarize the findings about quantum computing
- execute(task, agent_pool_description, last_plan=None, feedback=None)[source]#
Execute the task planning action.
This method generates a new plan or revises an existing plan based on the provided task, agent pool, and optional feedback.
- Parameters:
task (str) – The task to be planned.
agent_pool_description (str) – Description of available agents and their capabilities.
last_plan (Optional[str]) – Previous plan to revise, if any.
feedback (Optional[str]) – Feedback on the previous plan, if any.
- Returns:
A structured plan for executing the task.
- Return type:
Plan
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- post_process(action_output)[source]#
Process the raw output from the LLM into a structured Plan.
This method parses the text output from the language model and converts it into a structured Plan object with Step objects.
- Parameters:
action_output (str) – Raw text output from the language model.
- Returns:
A structured plan containing steps with assigned agents and tasks.
- Return type:
Plan
-
llm:
Any
#
-
num_steps:
int
#
-
prompt:
str
#
-
revision_prompt:
str
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
- class sherpa_ai.actions.SynthesizeOutput(*, name: str = 'SynthesizeOutput', args: dict = {'context': 'string', 'history': 'string', 'task': 'string'}, usage: str = 'Answer the question using conversation history with the user', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, role_description: str, llm: ~typing.Any = None, description: str = None, add_citation: bool = False)[source]#
Bases:
BaseAction
An action for synthesizing information into a coherent response.
This class provides functionality to generate responses by combining task requirements, context, and conversation history, with optional citation support.
- This class inherits from
BaseAction
and provides methods to: Generate synthesized responses based on multiple inputs
Format responses with or without citations
Process and structure output using templates
- role_description#
Description of the role context for response generation.
- Type:
str
- llm#
Language model used for generating responses.
- Type:
Any
- description#
Custom description template for response generation.
- Type:
str
- add_citation#
Whether to include citations in the response.
- Type:
bool
- name#
Name of the action, set to “SynthesizeOutput”.
- Type:
str
- args#
Arguments required by the action.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
Example
>>> synthesizer = SynthesizeOutput(
...     role_description="AI assistant",
...     llm=my_llm,
...     add_citation=True
... )
>>> response = synthesizer.execute(
...     task="Summarize the benefits of exercise",
...     context="Exercise improves cardiovascular health and mental well-being",
...     history="User: Tell me about exercise benefits"
... )
>>> print(response)
Exercise provides numerous health benefits, including improved cardiovascular health and mental well-being [1].
- execute(task, context, history)[source]#
Generate a synthesized response based on the provided inputs.
This method combines task requirements, context, and conversation history to generate a coherent response, with optional citation support.
- Parameters:
task (str) – The task or question to address.
context (str) – Relevant context information for the response.
history (str) – Conversation history for context.
- Returns:
The generated response text.
- Return type:
str
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
-
role_description:
str
#
-
llm:
Any
#
-
description:
str
#
-
add_citation:
bool
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
- class sherpa_ai.actions.ArxivSearch(*, name: str = 'ArxivSearch', args: dict = {'query': 'string'}, usage: str = 'Search paper on the Arxiv website', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>, resources: list[~sherpa_ai.actions.base.ActionResource] = <factory>, num_documents: int = 5, reranker: ~sherpa_ai.actions.utils.reranking.BaseReranking = None, refiner: ~sherpa_ai.actions.utils.refinement.BaseRefinement = None, current_task: str = '', perform_reranking: bool = False, perform_refinement: bool = True, role_description: str, task: str, llm: ~typing.Any = None, description: str = 'Role Description: {role_description}\nTask: {task}\n\nRelevant Paper Title and Summary:\n{paper_title_summary}\n\n\nReview and analyze the provided paper summary with respect to the task. Craft a concise and short, unified summary that distills key information that is most relevant to the task, incorporating reference links within the summary.\nOnly use the information given. Do not add any additional information. The summary should be less than {n} sentences\n')[source]#
Bases:
BaseRetrievalAction
A class for searching and retrieving papers from the Arxiv website.
This class provides functionality to search for academic papers on Arxiv based on a query, retrieve relevant information, and refine the results using an LLM to create concise summaries.
- This class inherits from
BaseRetrievalAction
and provides methods to: Search for papers on Arxiv using a query
Refine search results into concise summaries relevant to a specific task
- role_description#
Description of the role context for refining results.
- Type:
str
- task#
The specific task or question to focus on when refining results.
- Type:
str
- llm#
Language model used for refining search results.
- Type:
Any
- description#
Template for generating refinement prompts.
- Type:
str
- _search_tool#
Internal tool for performing Arxiv searches.
- Type:
Any
- name#
Name of the action, set to “ArxivSearch”.
- Type:
str
- args#
Arguments accepted by the action, including “query”.
- Type:
dict
- usage#
Description of the action’s usage.
- Type:
str
- perform_refinement#
Whether to refine search results, default is True.
- Type:
bool
Example
>>> from sherpa_ai.actions import ArxivSearch
>>> search = ArxivSearch(
...     role_description="AI researcher",
...     task="Find papers on transformer architecture"
... )
>>> results = search.search("transformer architecture")
>>> summary = search.refine(results)
>>> print(summary)
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context, /)#
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self (BaseModel) – The BaseModel instance.
context (Any) – The context.
- Return type:
None
- refine(result)[source]#
Refine the search results into a concise summary relevant to the specified task.
This method formats a prompt using the action’s description template and the provided result, then uses the LLM to generate a refined summary that focuses on information relevant to the task.
- Parameters:
result (str) – The search results to be refined into a summary.
- Returns:
A refined summary of the search results, focused on the specified task.
- Return type:
str
- search(query)[source]#
Search for papers on Arxiv based on the provided query.
This method uses the SearchArxivTool to find papers matching the query, adds the found resources to the action’s resource collection, and returns them.
- Parameters:
query (str) – The search query to find relevant papers.
- Returns:
A list of dictionaries containing information about found papers.
- Return type:
list[dict]
-
role_description:
str
#
-
task:
str
#
-
llm:
Any
#
-
description:
str
#
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
-
perform_refinement:
bool
#
- resources: list[ActionResource]#
- num_documents: int#
- reranker: BaseReranking#
- refiner: BaseRefinement#
- current_task: str#
- perform_reranking: bool#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
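refine() is documented as formatting the action’s description template with the search results before calling the LLM. A minimal sketch of that formatting step using Python’s str.format on the template’s documented placeholders — the exact call the library makes may differ, and the template text here is abbreviated:

```python
# Hedged sketch: fill the ArxivSearch description template's documented
# placeholders ({role_description}, {task}, {paper_title_summary}, {n}).
# The template below is a shortened stand-in for the real default.
template = (
    "Role Description: {role_description}\n"
    "Task: {task}\n\n"
    "Relevant Paper Title and Summary:\n{paper_title_summary}\n\n"
    "The summary should be less than {n} sentences\n"
)
prompt = template.format(
    role_description="AI researcher",
    task="Find papers on transformer architecture",
    paper_title_summary="Attention Is All You Need: introduces the Transformer.",
    n=5,
)
print(prompt)
```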
- class sherpa_ai.actions.EmptyAction(*args, name: str = '', usage: str = 'Make a decision', belief: ~sherpa_ai.memory.belief.Belief = None, shared_memory: ~sherpa_ai.memory.shared_memory.SharedMemory = None, output_key: str | None = None, prompt_template: ~sherpa_ai.prompts.prompt_template_loader.PromptTemplate | None = <sherpa_ai.prompts.prompt_template_loader.PromptTemplate object>)[source]#
Bases:
BaseAction
A placeholder action class with no functionality.
This class serves as a base template for creating new actions. It inherits from BaseAction but provides no actual implementation, making it useful for:
Testing action inheritance
Creating new action templates
Placeholder actions in development
- This class inherits from
BaseAction
and provides: Empty name and arguments
No-op execute method
Basic usage description
- name#
Empty string as this is a template class.
- Type:
str
- args#
Empty dictionary as no arguments are needed.
- Type:
dict
- usage#
Basic usage description.
- Type:
str
Example
>>> from sherpa_ai.actions import EmptyAction
>>> empty = EmptyAction()
>>> result = empty.execute()  # Returns None
- execute(**kwargs)[source]#
Execute the action with the provided arguments.
This method must be implemented by all subclasses to define the specific behavior of the action.
- Parameters:
**kwargs – Keyword arguments required by the action.
- Returns:
The result of the action execution.
- Return type:
Any
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
-
name:
str
#
-
args:
dict
#
-
usage:
str
#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
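EmptyAction is described above as a base template for creating new actions. To keep the example self-contained without sherpa_ai installed, the sketch below reimplements a minimal BaseAction-like interface locally; it mirrors the documented surface (name, args, usage, execute) but is not the real sherpa_ai.actions.base.BaseAction:

```python
from abc import ABC, abstractmethod

# Hedged sketch: a local stand-in for the action interface, showing how a
# new action fills in name/args/usage and implements execute. Illustrative
# only; real actions subclass sherpa_ai.actions.base.BaseAction.
class ActionTemplate(ABC):
    name: str = ""
    args: dict = {}
    usage: str = "Make a decision"

    @abstractmethod
    def execute(self, **kwargs):
        """Subclasses define the action's behavior here."""

class GreetAction(ActionTemplate):
    name = "Greet"
    args = {"who": "string"}
    usage = "Say hello to someone"

    def execute(self, **kwargs):
        return f"Hello, {kwargs['who']}!"

print(GreetAction().execute(who="world"))  # → Hello, world!
```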
- class sherpa_ai.actions.MockAction(name, usage='Mock usage', args={}, belief=None, output_key=None, return_value='Mock result')[source]#
Bases:
BaseAction
A mock implementation of BaseAction for testing purposes.
This class provides a simple implementation of BaseAction that returns a predefined value when executed, allowing for testing of agents and workflows without making real API calls or requiring external dependencies.
- This class inherits from
BaseAction
and provides methods to: Initialize a mock action with customizable parameters
Execute the action and return a predefined result
- name#
Name of the action.
- Type:
str
- args#
Arguments required to run the action.
- Type:
Union[dict, list[ActionArgument]]
- usage#
Usage description of the action.
- Type:
str
- belief#
Belief used for the action. Optional.
- Type:
Any
- output_key#
Output key for storing the result. Defaults to the action name.
- Type:
Optional[str]
- _return_value#
The value returned when the action is executed.
- Type:
str
Example
>>> mock = MockAction(name="test_action", return_value="success")
>>> result = mock.execute()
>>> print(result)
success
- execute(**kwargs)[source]#
Execute the mock action and return a predefined result.
This method simply returns the predefined return value, ignoring any input arguments.
- Parameters:
**kwargs – Keyword arguments (ignored).
- Returns:
The predefined mock result.
- Return type:
str
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}#
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- model_post_init(context, /)#
This function is meant to behave like a BaseModel method to initialise private attributes.
It takes context as an argument since that’s what pydantic-core passes when calling it.
- Parameters:
self (BaseModel) – The BaseModel instance.
context (Any) – The context.
- Return type:
None
- name: str#
- args: Union[dict, list[ActionArgument]]#
- usage: str#
- belief: Belief#
- output_key: Optional[str]#
- prompt_template: Optional[PromptTemplate]#
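The mock pattern above can be reproduced without sherpa_ai installed. The sketch below is a local stand-in mirroring MockAction’s documented behavior (return a predefined value, ignore all inputs, default output_key to the action name); the real MockAction constructor and internals may differ:

```python
# Hedged sketch: a self-contained fake mirroring MockAction's documented
# behavior, useful for exercising agent workflows in tests. Illustrative
# only; not the real sherpa_ai.actions.MockAction.
class FakeMockAction:
    def __init__(self, name, usage="Mock usage", return_value="Mock result"):
        self.name = name
        self.usage = usage
        self.output_key = name          # documented default: the action name
        self._return_value = return_value

    def execute(self, **kwargs):
        # All keyword arguments are ignored, as documented for MockAction.
        return self._return_value

mock = FakeMockAction(name="test_action", return_value="success")
print(mock.execute(query="anything"))  # → success
```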