CrewAI Documentation
Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI
empowers agents to work together seamlessly, tackling complex tasks.
Agents
What is an Agent?
An agent is an autonomous unit programmed to:
Perform tasks
Make decisions
Communicate with other agents
Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.
Agent Attributes
Role: Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for.
Goal: The individual objective that the agent aims to achieve. It guides the agent's decision-making process.
Backstory: Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics.
LLM (optional): The language model used by the agent to process and generate text. It dynamically fetches the model name from the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4" if not specified.
Tools (optional): Set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. This attribute can be set during the initialization of an agent and defaults to an empty list.
Function Calling LLM (optional): If passed, the agent will use this LLM to execute function calling for tools instead of relying on the main LLM output.
Max Iter (optional): The maximum number of iterations the agent can perform before being forced to give its best answer. Default is 15.
Max RPM (optional): The maximum number of requests per minute the agent can perform to avoid rate limits. It can be left unspecified, with a default value of None.
Verbose (optional): Enables detailed logging of the agent's execution for debugging or monitoring purposes when set to True. Default is False.
Allow Delegation (optional): Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is True.
Step Callback (optional): A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew step_callback.
Memory (optional): Indicates whether the agent should have memory, which impacts its ability to remember past interactions. Default is False.
Agent Interaction
Agents can interact with each other using crewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.
Creating an Agent
To create an agent, you would typically initialize an instance of the Agent class with the desired properties. Here's a conceptual example including all attributes:
from crewai import Agent

agent = Agent(
role='Data Analyst',
goal='Extract actionable insights',
backstory="""You're a data analyst at a large company.
You're responsible for analyzing data and providing insights
to the business.
You're currently working on a project to analyze the
performance of our marketing campaigns.""",
tools=[my_tool1, my_tool2], # Optional, defaults to an empty list
llm=my_llm, # Optional
function_calling_llm=my_llm, # Optional
max_iter=15, # Optional
max_rpm=None, # Optional
verbose=True, # Optional
allow_delegation=True, # Optional
step_callback=my_intermediate_step_callback, # Optional
memory=True # Optional
)
Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create sophisticated AI systems that leverage the power of collaborative intelligence.
Tasks
Overview of a Task
What is a Task?
In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate the information necessary for execution, including a description, an assigned agent, and required tools, offering flexibility for various levels of action complexity.
Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.
Task Attributes
Description: A clear, concise statement of what the task entails.
Agent (optional): The agent responsible for the task. If not specified, the crew's process will determine who takes it on.
Expected Output: A clear and detailed definition of the expected output for the task.
Tools (optional): The functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs.
Async Execution (optional): Indicates whether the task should be executed asynchronously, allowing the crew to continue with the next task without waiting for completion.
Context (optional): Other tasks whose output will be used as context for this task. If a task is asynchronous, the system will wait for it to finish before using its output as context.
Output JSON (optional): Takes a pydantic model and returns the output as a JSON object. The agent's LLM needs to use an OpenAI-compatible client (it could be Ollama, for example, but via the OpenAI wrapper).
Output Pydantic (optional): Takes a pydantic model and returns the output as a pydantic object. The agent's LLM needs to use an OpenAI-compatible client (it could be Ollama, for example, but via the OpenAI wrapper).
Output File (optional): Takes a file path and saves the output of the task to it.
Creating a Task
This is the simplest example of creating a task; it involves defining its scope and agent, but there are optional attributes that can provide a lot of flexibility:
from crewai import Task

# sales_agent is assumed to be an Agent defined elsewhere
task = Task(
description='Find and summarize the latest and most relevant news on AI',
agent=sales_agent
)
Task Assignment
Tasks can be assigned directly by specifying an agent, or they can be assigned at runtime through CrewAI's hierarchical process, which considers roles, availability, and other criteria.
Creating a Task with Tools
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

research_agent = Agent(
role='Researcher',
goal='Find and summarize the latest AI news',
backstory="""You're a researcher at a large company.
You're responsible for analyzing data and providing insights
to the business.""",
verbose=True
)
search_tool = SerperDevTool()
task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
crew = Crew(
agents=[research_agent],
tasks=[task],
verbose=2
)
result = crew.kickoff()
print(result)
This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.
Referring to Other Tasks
The context attribute lets a task use the output of other tasks as its context, as in the example below:
# ...
research_ai_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
research_ops_task = Task(
description='Find and summarize the latest AI Ops news',
expected_output='A bullet list summary of the top 5 most important AI Ops news',
async_execution=True,
agent=research_agent,
tools=[search_tool]
)
write_blog_task = Task(
description="Write a full blog post about the importance of AI and its latest news",
expected_output='Full blog post that is 4 paragraphs long',
agent=writer_agent,
context=[research_ai_task, research_ops_task]
)
#...
Asynchronous Execution
You can define a task to be executed asynchronously. This means that the crew will not wait for it to be completed before continuing with the next task. This is useful for tasks that take a long time to be completed, or that are not crucial for the next tasks to be performed.
You can then use the context attribute to define, in a future task, that it should wait for the output of the asynchronous task to be completed.
#...
list_ideas = Task(
description="List of 5 interesting ideas to explore for an article about AI.",
expected_output="Bullet point list of 5 ideas for an article.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
list_important_history = Task(
description="Research the history of AI and give me the 5 most important events.",
expected_output="Bullet point list of 5 important events.",
agent=researcher,
async_execution=True # Will be executed asynchronously
)
write_article = Task(
description="Write an article about AI, its history, and interesting ideas.",
expected_output="A 4 paragraph article about AI.",
agent=writer,
context=[list_ideas, list_important_history] # Will wait for the output of the two tasks to be completed
)
#...
Callback Mechanism
The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.
# ...
def callback_function(output):
    # Called with the task's TaskOutput once the task finishes
    print(f"Task completed! Output: {output.raw_output}")

research_task = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool],
callback=callback_function
)
#...
Accessing a Specific Task Output
Once a crew finishes running, you can access the output of a specific task through its output attribute:
# ...
task1 = Task(
description='Find and summarize the latest AI news',
expected_output='A bullet list summary of the top 5 most important AI news',
agent=research_agent,
tools=[search_tool]
)
#...
# task2 and task3 are assumed to be defined similarly to task1
crew = Crew(
agents=[research_agent],
tasks=[task1, task2, task3],
verbose=2
)
result = crew.kickoff()
# Returns a TaskOutput object with the description and results of the task
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
""")
Error Handling and Validation Mechanisms
When creating and executing tasks, certain validation mechanisms are in place to ensure the robustness and reliability of task attributes. These include, but are not limited to:
Ensuring only one output type is set per task, to maintain clear output expectations.
Preventing the manual assignment of the id attribute, to uphold the integrity of the unique identifier system.
These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools, understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential, ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.
Processes
Understanding Processes
In CrewAI, processes orchestrate the execution of tasks by agents, akin to project management in human teams. These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.
Process Implementations
Sequential: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.
Hierarchical: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. Note: A manager language model (manager_llm) must be specified in the crew to enable the hierarchical process, allowing for the creation and management of tasks by the manager.
Consensual Process (Planned): Currently under consideration for future development, this process type aims for collaborative decision-making among agents on task execution, introducing a more democratic approach to task management within CrewAI. As of now, it is not implemented in the codebase.
The Role of Processes in Teamwork
Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.
Assigning Processes to a Crew
To assign a process to a crew, specify it when creating the crew, as in the sketch below. Note: Ensure my_agents and my_tasks are defined prior to creating a Crew object, and for the hierarchical process, manager_llm is also required.
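A minimal sketch of assigning a process when creating a crew, assuming my_agents and my_tasks are already defined as noted above:
from crewai import Crew, Process

crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.sequential)
# For the hierarchical process, a manager LLM must also be provided:
# crew = Crew(agents=my_agents, tasks=my_tasks, process=Process.hierarchical, manager_llm=my_manager_llm)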
Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
To customize task context, utilize the context parameter in the Task class to specify outputs that should be used as context for subsequent tasks.
Hierarchical Process
Emulating a corporate hierarchy, crewAI automatically creates a manager agent for you, requiring the specification of a manager language model (manager_llm) for that agent. The manager oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.
Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents. Documentation will be regularly updated to reflect new processes and enhancements, ensuring users have access to the most current and comprehensive information.
Tools
Introduction
CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers. This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.
Key Characteristics of Tools
Utility: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
Integration: Boosts agent capabilities by seamlessly integrating tools into their workflow.
Customizability: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.
The following example demonstrates how agents can be equipped with crewAI tools:
import os
from crewai import Agent, Task, Crew
# Importing crewAI tools
from crewai_tools import (
DirectoryReadTool,
FileReadTool,
SerperDevTool,
WebsiteSearchTool
)
# Instantiate tools
docs_tool = DirectoryReadTool(directory='./blog-posts')
file_tool = FileReadTool()
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()
# Create agents
researcher = Agent(
role='Market Research Analyst',
goal='Provide up-to-date market analysis of the AI industry',
backstory='An expert analyst with a keen eye for market trends.',
tools=[search_tool, web_rag_tool],
verbose=True
)
writer = Agent(
role='Content Writer',
goal='Craft engaging blog posts about the AI industry',
backstory='A skilled writer with a passion for technology.',
tools=[docs_tool, file_tool],
verbose=True
)
# Define tasks
research = Task(
description='Research the latest trends in the AI industry and provide a summary.',
expected_output='A summary of the top 3 trending developments in the AI industry with a unique perspective on their significance.',
agent=researcher
)
write = Task(
description='Write an engaging blog post about the AI industry, based on the research analyst’s summary. Draw inspiration from the latest blog posts in the directory.',
expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
agent=writer,
output_file='blog-posts/new_post.md' # The final blog post will be saved here
)
# Assemble a crew
crew = Crew(
agents=[researcher, writer],
tasks=[research, write],
verbose=2
)
# Execute tasks
crew.kickoff()
Tools can also be scoped when they are instantiated:
# This will allow the agent with this tool to read any directory it wants during its execution
tool = DirectoryReadTool()
# OR
# This will allow the agent with this tool to read only the directory specified during its execution
tool = DirectoryReadTool(directory='./directory')
Specific per-tool docs are coming soon. Here is a list of the available tools and their descriptions:
CodeDocsSearchTool: A RAG tool optimized for searching through code documentation and related technical documents.
CSVSearchTool: A RAG tool designed for searching within CSV files, tailored to handle structured data.
DirectorySearchTool: A RAG tool for searching within directories, useful for navigating through file systems.
DOCXSearchTool: A RAG tool aimed at searching within DOCX documents, ideal for processing Word files.
DirectoryReadTool: Facilitates reading and processing of directory structures and their contents.
FileReadTool: Enables reading and extracting data from files, supporting various file formats.
GithubSearchTool: A RAG tool for searching within GitHub repositories, useful for code and documentation search.
SerperDevTool: A search tool that performs internet searches through the serper.dev API (see the SerperDevTool documentation below).
TXTSearchTool: A RAG tool focused on searching within text (.txt) files, suitable for unstructured data.
JSONSearchTool: A RAG tool designed for searching within JSON files, catering to structured data handling.
MDXSearchTool: A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation.
PDFSearchTool: A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents.
PGSearchTool: A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries.
RagTool: A general-purpose RAG tool capable of handling various data sources and types.
ScrapeElementFromWebsiteTool: Enables scraping specific elements from websites, useful for targeted data extraction.
ScrapeWebsiteTool: Facilitates scraping entire websites, ideal for comprehensive data collection.
WebsiteSearchTool: A RAG tool for searching website content, optimized for web data extraction.
XMLSearchTool: A RAG tool designed for searching within XML files, suitable for structured data formats.
YoutubeChannelSearchTool: A RAG tool for searching within YouTube channels, useful for video content analysis.
YoutubeVideoSearchTool: A RAG tool aimed at searching within YouTube videos, ideal for video data extraction.
Creating your own Tools
Developers can craft custom tools tailored for their agent's needs or utilize pre-built options.
To create your own crewAI tools you will need to install our extra tools package:
pip install 'crewai[tools]'
Once you do that there are two main ways for one to create a crewAI tool:
Subclassing BaseTool
Define a new class inheriting from BaseTool, specifying name, description, and the _run method for operational logic:
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for; your agent will need this information to use it."

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "Result from custom tool"
Utilizing the tool Decorator
For a simpler approach, decorate a plain function with the tool decorator, providing the required attributes and the functional logic, as in the sketch after the imports below.
import json
import requests
from crewai import Agent
from crewai.tools import tool
from unstructured.partition.html import partition_html
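A minimal sketch of the decorator approach, built on the imports above (the function name and URL handling are illustrative assumptions):
@tool("Get JSON from a URL")
def get_json(url: str) -> str:
    """Fetch a URL and return its JSON body as a string."""
    response = requests.get(url, timeout=30)
    return json.dumps(response.json())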
LangChain Integration
CrewAI seamlessly integrates with LangChain's comprehensive toolkit for search-based queries and more; LangChain's built-in tools can be passed to crewAI agents just like native crewAI tools.
from langchain.agents import Tool
from langchain_community.utilities import GoogleSerperAPIWrapper  # older LangChain versions expose this from langchain.utilities

# Illustrative wiring: wrap the LangChain search utility as a Tool the agent can use
search = GoogleSerperAPIWrapper()
serper_tool = Tool(name="Search", func=search.run, description="Useful for search-based queries")
agent = Agent(
role='Research Analyst',
goal='Provide up-to-date market analysis',
backstory='An expert analyst with a keen eye for market trends.',
tools=[serper_tool]
)
Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and
collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and
enhance the AI ecosystem.
Crews
What is a Crew?
A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.
Crew Attributes
Process (optional): The process flow (e.g., sequential, hierarchical) the crew follows.
Verbose (optional): The verbosity level for logging during execution.
Manager LLM (optional): The language model used by the manager agent in a hierarchical process. Required when using a hierarchical process.
Function Calling LLM (optional): If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling.
Config (optional): Optional configuration settings for the crew, in Json or Dict[str, Any] format.
Max RPM (optional): Maximum requests per minute the crew adheres to during execution.
Full Output (optional): Whether the crew should return the full output with all tasks' outputs or just the final output.
Step Callback (optional): A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific step_callback.
Share Crew (optional): Whether you want to share the complete crew information and execution with the crewAI team to make the library better and allow us to train models.
The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if you set it.
Creating a Crew
Crew Composition
When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and interaction.
writer = Agent(
role='Content Writer',
goal='Write engaging articles on AI discoveries',
backstory='A seasoned technology writer.', # required by the Agent model; illustrative value
verbose=True
)
Hierarchical Process: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. Note: A manager_llm is required for this process and is essential for validating the process flow.
Once your crew is assembled, initiate the workflow with the kickoff() method, as sketched below. This starts the execution process according to the defined process flow.
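A minimal sketch of assembling and kicking off a crew, assuming a researcher agent, the writer agent above, and two tasks task1 and task2 are already defined:
from crewai import Crew, Process

my_crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    process=Process.sequential,  # or Process.hierarchical (requires manager_llm)
    verbose=2
)

result = my_crew.kickoff()
print(result)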
Collaboration
Core of Agent Interaction
Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.
Collaboration Fundamentals
Information Sharing: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
Task Assistance: Allows agents to seek help from peers with the required expertise for specific tasks.
Resource Allocation: Optimizes task execution through the efficient distribution and sharing of resources among agents.
Enhanced Attributes for Improved Collaboration
The Crew class has been enriched with several attributes to support advanced functionalities:
Language Model Management (manager_llm, function_calling_llm): Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions. It's important to note that manager_llm is mandatory when using a hierarchical process, ensuring proper execution flow.
Process Flow (process): Defines the execution logic (e.g., sequential, hierarchical) to streamline task distribution and execution.
Verbose Logging (verbose): Offers detailed logging capabilities for monitoring and debugging purposes. It supports both integer and boolean types to indicate the verbosity level.
Configuration (config): Allows extensive customization to tailor the crew's behavior according to specific requirements.
Rate Limiting (max_rpm): Ensures efficient utilization of resources by limiting requests per minute.
Internationalization Support (language): Facilitates operation in multiple languages, enhancing global usability.
Execution and Output Handling (full_output): Distinguishes between full and final outputs for nuanced control over task results.
Callback and Telemetry (step_callback): Integrates callbacks for step-wise execution monitoring and telemetry for performance analytics.
Crew Sharing (share_crew): Enables sharing of crew information with CrewAI for continuous improvement and model training.
Usage Metrics (usage_metrics): Stores all metrics for language model (LLM) usage during all tasks' execution, providing insights into operational efficiency and areas for improvement; you can check it after the crew execution.
Example Scenario
Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.
Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.
Create Custom Tools
Creating your own Tools
Developers can craft custom tools tailored for their agent's needs or utilize pre-built options.
To create your own crewAI tools you will need to install our extra tools package:
pip install 'crewai[tools]'
Once you do that there are two main ways for one to create a crewAI tool:
Subclassing BaseTool
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
    name: str = "Name of my tool"
    description: str = "Clear description for what this tool is useful for; your agent will need this information to use it."

    def _run(self, argument: str) -> str:
        # Implementation goes here
        return "Result from custom tool"
Define a new class inheriting from BaseTool, specifying name, description, and the _run method for operational logic.
Utilizing the tool Decorator
For a simpler approach, decorate a plain function with the tool decorator, providing the required attributes and the functional logic, as in the sketch after the imports below.
import json
import requests
from crewai import Agent
from crewai.tools import tool
from unstructured.partition.html import partition_html
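A minimal sketch of a decorated tool, built on the imports above (the function is an illustrative assumption):
@tool("Fetch page text")
def fetch_page_text(url: str) -> str:
    """Download a web page and return its visible text content."""
    elements = partition_html(url=url)
    return "\n".join(str(element) for element in elements)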
Using the Tool function from LangChain
For another simple approach, write a plain Python function with the required logic and pass it to a Tool object via the func argument.
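A minimal sketch of such a function (the name add_numbers and the single comma-separated string argument are illustrative assumptions, matching LangChain's single-input Tool convention):
from langchain.tools import Tool

def add_numbers(numbers: str) -> str:
    """Add two comma-separated numbers, e.g. "2,3" gives "5.0"."""
    a, b = (float(n) for n in numbers.split(","))
    return str(a + b)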
math_tool = Tool(
name="Math tool",
func=add_numbers, # the plain function defined above supplies the tool's logic
description="Useful for adding two numbers together, in other words combining them."
)
Getting Started
Introduction
Embark on your CrewAI journey by setting up your environment and initiating your AI crew with enhanced features. This guide ensures a seamless start, incorporating the latest updates.
Step 0: Installation
Install CrewAI and any necessary packages for your project.
pip install crewai
pip install 'crewai[tools]'
Step 1: Assemble Your Agents
Define your agents with distinct roles, backstories, and now, enhanced capabilities such as verbose mode and memory usage. These elements add depth and guide their task execution and interaction within the crew.
import os
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"
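The guide's full agent definitions are not reproduced in this extract. A minimal sketch of the researcher agent and search tool assumed by the task below (attribute values are illustrative):
from crewai import Agent, Task, Crew, Process
from crewai_tools import SerperDevTool

search_tool = SerperDevTool()

researcher = Agent(
    role='Senior Researcher',
    goal='Uncover groundbreaking technologies in {topic}',
    backstory='Driven by curiosity, you explore and share the latest innovations.',
    verbose=True,
    memory=True,
    tools=[search_tool]
)
Step 2: Define the Tasks
Describe what each task should accomplish, the expected output, and the agent responsible for it.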
# Research task
research_task = Task(
description=(
"Identify the next big trend in {topic}."
"Focus on identifying pros and cons and the overall narrative."
"Your final report should clearly articulate the key points"
"its market opportunities, and potential risks."
),
expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
tools=[search_tool],
agent=researcher,
)
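The remaining steps of the guide, sketched minimally (assuming the agent and task defined above, and a crewAI version that interpolates the {topic} placeholder from kickoff inputs):
Step 3: Form the Crew
crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    process=Process.sequential
)
Step 4: Kick It Off
result = crew.kickoff(inputs={'topic': 'AI in healthcare'})
print(result)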
Conclusion
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, and language model configuration, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes.
Using Hierarchical Process
Introduction
The hierarchical process in CrewAI introduces a structured approach to task management, simulating traditional organizational hierarchies for efficient task delegation and execution. This systematic workflow enhances project outcomes by ensuring tasks are handled with optimal efficiency and accuracy.
Complexity and Efficiency
The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
Hierarchical Process Overview
By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent is created automatically by crewAI, so you don't need to worry about defining it.
Key Features
Task Delegation: A manager agent allocates tasks among crew members based on their roles and capabilities.
Result Validation: The manager evaluates outcomes to ensure they meet the required standards.
Efficient Workflow: Emulates corporate structures, providing an organized approach to task management.
Implementing the Hierarchical Process
Assign tools at the agent level to facilitate task delegation and execution by the designated agents under the manager's guidance. Configuring the manager_llm parameter is crucial for the hierarchical process; the system requires a manager LLM to be set up for proper function, ensuring tailored decision-making. A minimal configuration sketch follows below.
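A minimal sketch, assuming ChatOpenAI from langchain_openai as the manager model and agents/tasks defined elsewhere:
from crewai import Crew, Process
from langchain_openai import ChatOpenAI

crew = Crew(
    agents=my_agents,
    tasks=my_tasks,
    process=Process.hierarchical,
    manager_llm=ChatOpenAI(model="gpt-4")  # required for the hierarchical process
)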
Workflow in Action
Task Assignment: The manager assigns tasks strategically, considering each agent's capabilities.
Execution and Review: Agents complete their tasks, with the manager ensuring quality standards.
Sequential Task Progression: Despite being a hierarchical process, tasks follow a logical order for smooth progression, facilitated by the manager's oversight.
Conclusion
Adopting the hierarchical process in crewAI, with the correct configurations and an understanding of the system's capabilities, facilitates an organized and efficient approach to project management.
Using Sequential Process
Introduction
CrewAI offers a flexible framework for executing tasks in a structured manner, supporting both sequential and hierarchical processes. This guide outlines how to effectively implement these processes to ensure efficient task execution and project completion.
Workflow in Action
Initial Task: In a sequential process, the first agent completes their task and signals completion.
Subsequent Tasks: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.
Completion: The process concludes once the final task is executed, leading to project completion.
Conclusion
The sequential process in CrewAI provides a clear, straightforward path for task execution. It's particularly suited for projects requiring a logical progression of tasks, ensuring each step is completed before the next begins, thereby facilitating a cohesive final product.
Connecting to any LLM
Connect CrewAI to LLMs
Default LLM
By default, CrewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs through environment variables and direct instantiation.
CrewAI offers flexibility in connecting to various LLMs, including local models via Ollama and different APIs like Azure. It's compatible with all LangChain LLM components, enabling diverse integrations for tailored AI solutions.
CrewAI Agent Overview
The Agent class is the cornerstone for implementing AI solutions in CrewAI. Here's an updated overview reflecting the latest codebase changes:
Attributes:
role: Defines the agent's role within the solution.
goal: Specifies the agent's objective.
backstory: Provides a background story to the agent.
llm: Indicates the large language model the agent uses.
function_calling_llm (optional): Turns the ReAct crewAI agent into a function-calling agent.
max_iter: Maximum number of iterations for an agent to execute a task; default is 15.
memory: Enables the agent to retain information during execution.
max_rpm: Sets the maximum number of requests per minute.
allow_delegation: Allows the agent to delegate tasks to other agents; default is True.
tools: Specifies the tools available to the agent for task execution.
import os
from crewai import Agent

# Required
os.environ["OPENAI_MODEL_NAME"] = "gpt-4-0125-preview"
# Agent will automatically use the model defined in the environment variable
example_agent = Agent(
role='Local Expert',
goal='Provide insights about the city',
backstory="A knowledgeable local guide.",
verbose=True
)
Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below. Note: Detailed Ollama setup is beyond this document's scope, but general guidance is provided.
Setting Up Ollama
Environment Variables Configuration: To integrate Ollama, set the following environment variables:
OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_KEY=''
Configuration Examples
FastChat
OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_MODEL_NAME='oh-2.5m7b-q51'
OPENAI_API_KEY=NA
LM Studio
OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA
Mistral API
OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"
Azure Open AI Configuration
For Azure OpenAI API integration, set the following environment variables:
AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""
Example Agent with Azure LLM
import os
from dotenv import load_dotenv
from langchain_openai import AzureChatOpenAI

load_dotenv()
azure_llm = AzureChatOpenAI(
azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
api_key=os.environ.get("AZURE_OPENAI_KEY")
)
azure_agent = Agent(
role='Example Agent',
goal='Demonstrate custom LLM configuration',
backstory='A diligent explorer of GitHub docs.',
llm=azure_llm
)
Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.
Customizing Agents
Customizable Attributes
Crafting an efficient CrewAI team hinges on the ability to tailor your AI agents dynamically to meet the unique requirements of any project. This section covers the foundational attributes you can customize.
Key Attributes for Customization
Role: Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
Goal: Defines what the agent aims to achieve, in alignment with its role and the overarching objectives of the crew.
Backstory: Provides depth to the agent's persona, enriching its motivations and engagements within the crew.
Tools: Represents the capabilities or methods the agent uses to perform tasks, from simple functions to intricate integrations.
Advanced Customization Options
Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.
Language Model Customization
Agents can be customized with specific language models (llm) and function-calling language models (function_calling_llm), offering advanced control over their processing and decision-making abilities. By default, crewAI agents are ReAct agents, but by setting the function_calling_llm you can turn them into function-calling agents, as sketched below.
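A minimal sketch (the model choices are illustrative assumptions):
from crewai import Agent
from langchain_openai import ChatOpenAI

agent = Agent(
    role='Data Analyst',
    goal='Extract actionable insights',
    backstory='An analyst who turns raw data into recommendations.',
    llm=ChatOpenAI(model="gpt-4"),                          # main reasoning model
    function_calling_llm=ChatOpenAI(model="gpt-3.5-turbo")  # used to execute tool calls
)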
Enabling Memory for Agents
CrewAI supports memory for agents, enabling them to remember past interactions. This feature is critical for tasks requiring awareness of previous contexts or decisions.
Performance and Debugging Settings
Verbose Mode: Enables detailed logging of an agent's actions, useful for debugging and optimization. Specifically, it provides insights into agent execution processes, aiding in the optimization of performance.
RPM Limit: Sets the maximum number of requests per minute (max_rpm), controlling the agent's query frequency to external services.
Maximum Iterations for Task Execution
The max_iter attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is 15, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.
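A minimal sketch combining these settings (the specific values are illustrative):
agent = Agent(
    role='Researcher',
    goal='Track emerging AI tooling',
    backstory='A meticulous researcher.',
    verbose=True,   # detailed logging of the agent's execution
    max_rpm=10,     # at most 10 requests per minute to external services
    max_iter=15,    # force a best answer after 15 iterations
    memory=True     # remember past interactions
)
Delegation and Autonomy
Agents can delegate tasks or questions to one another (allow_delegation defaults to True). To keep an agent focused solely on its own work, delegation can be disabled, as in the following example.
Example: Disabling Delegation for an Agent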
import os
from crewai import Agent
from crewai_tools import SerperDevTool
agent = Agent(
role='Content Writer',
goal='Write engaging content on market trends',
backstory='A seasoned writer with expertise in market analysis.',
allow_delegation=False
)
Conclusion
Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, performance settings, and delegation preferences, equips you with a nuanced and capable AI team ready for complex challenges.
Human Input in Agent Execution
Human input plays a pivotal role in several agent execution scenarios, enabling agents to seek additional information or clarification when necessary. This capability is invaluable in complex decision-making processes or when agents need more details to complete a task effectively.
Using Human Input with CrewAI
Incorporating human input with CrewAI is straightforward, enhancing the agent's ability to make informed decisions. While the documentation previously mentioned using a "LangChain Tool" and a specific "DuckDuckGoSearchRun" tool from langchain_community.tools, it's important to clarify that the integration of such tools should align with the actual capabilities and configurations defined within your Agent class setup.
Example:
import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool
task2 = Task(
description=(
"Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
"Your post should be informative yet accessible, catering to a tech-savvy audience."
"Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
),
expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
agent=writer
)
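This extract omits the human-input flag and the crew assembly that produces the result printed below. A minimal sketch, assuming researcher, writer, and search_tool are defined as in earlier examples and that your crewAI version supports the human_input flag on tasks:
task1 = Task(
    description='Research the latest AI advancements of 2024 and summarize the most significant ones.',
    expected_output='A bullet list of the most significant AI advancements of 2024',
    tools=[search_tool],
    agent=researcher,
    human_input=True  # pause so a human can review and refine the agent's answer
)

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2
)

result = crew.kickoff()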
print("######################")
print(result)
SerperDevTool Documentation
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the serper.dev API to fetch and display the most relevant search results based on the query provided by the user.
Installation
To incorporate this tool into your project, follow the installation instructions below:
pip install 'crewai[tools]'
Example
The following example demonstrates how to initialize the tool and execute a search with a given query:
from crewai_tools import SerperDevTool

# Initialize the tool for internet searching capabilities
tool = SerperDevTool()
Steps to Get Started
To effectively use the SerperDevTool, follow these steps:
Package Installation: Confirm that the crewai[tools] package is installed in your Python environment.
API Key Acquisition: Acquire a serper.dev API key by registering for a free account at serper.dev.
Environment Configuration: Store your obtained API key in an environment variable named SERPER_API_KEY to facilitate its use by the tool.
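Putting the steps together, a minimal sketch of wiring the tool into an agent (the agent definition is illustrative):
import os
from crewai import Agent
from crewai_tools import SerperDevTool

os.environ["SERPER_API_KEY"] = "Your Key"  # obtained from serper.dev

search_tool = SerperDevTool()
researcher = Agent(
    role='Market Research Analyst',
    goal='Provide up-to-date market analysis of the AI industry',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[search_tool]
)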
Conclusion
By integrating the SerperDevTool into Python projects, users gain the ability to conduct real-time, relevant searches across the internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects is streamlined and straightforward.
ScrapeWebsiteTool
Depend on OpenAI
All RAG tools at the moment can only use openAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Example
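The example body is missing from this extract; a minimal sketch based on the website_url argument documented below:
from crewai_tools import ScrapeWebsiteTool

# Initialize the tool with the page to scrape and read its text content
tool = ScrapeWebsiteTool(website_url='https://www.example.com')
text = tool.run()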
Arguments
website_url: Mandatory. The URL of the website to read. This is the primary input for the tool, specifying which website's content should be scraped and read.
DirectoryReadTool
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The DirectoryReadTool is a highly efficient utility designed for the comprehensive listing of directory contents. It recursively navigates through the specified directory, providing users with a detailed enumeration of all files, including those nested within subdirectories. This tool is indispensable for tasks requiring a thorough inventory of directory structures or for validating the organization of files within directories.
Installation
Install the crewai_tools package to use the DirectoryReadTool in your project. If you haven't added this package to your environment, you can easily install it with pip using the following command:
pip install 'crewai[tools]'
Example
from crewai_tools import DirectoryReadTool

# Initialize the tool so the agent can read any directory's content it learns about during execution
tool = DirectoryReadTool()
# OR
# Initialize the tool with a specific directory, so the agent can only read the content of the specified directory
tool = DirectoryReadTool(directory='/path/to/your/directory')
Arguments
The DirectoryReadTool requires minimal configuration for use. The essential argument for this tool is as follows:
directory: An optional argument that specifies the path to the directory whose contents you wish to list. It accepts both absolute and relative paths, guiding the tool to the desired directory for content listing.
FileReadTool
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The FileReadTool is a versatile component of the crewai_tools package, designed to streamline the process of reading and retrieving content from files. It is particularly useful in scenarios such as batch text file processing, runtime configuration file reading, and data importation for analytics. This tool supports various text-based file formats including .txt, .csv, .json and more, and adapts its functionality based on the file type, for instance, converting JSON content into a Python dictionary for easy use.
Installation
Install the crewai_tools package to use the FileReadTool in your projects:
pip install 'crewai[tools]'
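The example section is missing from this extract; a minimal sketch based on the file_path argument documented below:
from crewai_tools import FileReadTool

# Initialize the tool so the agent can read any file it learns about during execution
tool = FileReadTool()
# OR
# Initialize the tool with a specific file path, restricting reads to that file
tool = FileReadTool(file_path='path/to/your/file.txt')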
Arguments
file_path: The path to the file you want to read. It accepts both absolute and relative paths. Ensure the file exists and you have the necessary permissions to access it.
SeleniumScrapingTool
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is designed for efficient web scraping, enabling users to extract content from web pages. It supports targeted scraping by allowing the specification of a CSS selector for desired elements. The flexibility of the tool enables it to be used on any website URL provided by the user, making it a versatile tool for various web scraping needs.
Example
from crewai_tools import SeleniumScrapingTool

# Example 1: Scrape any website it finds during its execution
tool = SeleniumScrapingTool()
# Example 2: Scrape the entire content of a specific webpage
tool = SeleniumScrapingTool(website_url='https://example.com')
# Example 3: Scrape a specific CSS element from the webpage
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content')
Arguments
website_url: Mandatory. The URL of the website to scrape.
css_element: Mandatory. The CSS selector for a specific element to scrape from the website.
cookie: Optional. A dictionary containing cookie information. This parameter allows the tool to simulate a session with cookie information, providing access to content that may be restricted to logged-in users.
wait_time: Optional. The number of seconds the tool waits after loading the website and after setting a cookie, before scraping the content. This allows dynamic content to load properly.
DirectorySearchTool
Depend on OpenAI
All RAG tools at the moment can only use openAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is designed to perform a semantic search for queries within the content of a specified directory. Utilizing the RAG (Retrieval-Augmented Generation) methodology, it offers a powerful means to semantically navigate through the files of a given directory. The tool can be dynamically set to search any directory specified at runtime or can be pre-configured to search within a specific directory upon initialization.
Installation
To start using the DirectorySearchTool, you need to install the crewai_tools package. Execute the following command in your terminal:
pip install 'crewai[tools]'
Example
The following examples demonstrate how to initialize the DirectorySearchTool for different use cases and how to perform a search:
from crewai_tools import DirectorySearchTool
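# The initialization lines of this example are missing from the extract; a minimal
# sketch based on the description and the directory argument below:
tool = DirectorySearchTool()  # search any directory provided at runtime
# OR
tool = DirectorySearchTool(directory='/path/to/directory')  # restrict the search to one directory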
Arguments
directory: This string argument specifies the directory within which to search. It is mandatory if the tool has not been initialized with a directory; otherwise, the tool will only search within the initialized directory.
PDFSearchTool
Depend on OpenAI
All RAG tools at the moment can only use openAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The PDFSearchTool is a RAG tool designed for semantic searches within PDF content. It allows for inputting a search query and a PDF document, leveraging advanced search techniques to find relevant content efficiently. This capability makes it especially useful for extracting specific information from large PDF files quickly.
Example
from crewai_tools import PDFSearchTool

# Initialize the tool so the agent can search any PDF it learns about during execution
tool = PDFSearchTool()
# OR
# Initialize the tool with a specific PDF path for exclusive search within that document
tool = PDFSearchTool(pdf='path/to/your/document.pdf')
Arguments
pdf: Optional. The PDF path for the search. Can be provided at initialization or within the run method's arguments. If provided at initialization, the tool confines its search to the specified document.
TXTSearchTool
Depend on OpenAI
All RAG tools at the moment can only use openAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within the content of a text file. It allows for semantic searching of a query within a specified text file's content, making it an invaluable resource for quickly extracting information or finding specific sections of text based on the query provided.
Example
The following example demonstrates the initialization of the tool with a specific text file and the subsequent search within that file's content.
from crewai_tools import TXTSearchTool

# Initialize the tool to search within any text file's content the agent learns about during its execution
tool = TXTSearchTool()
# OR
# Initialize the tool with a specific text file, so the agent can search within the given text file's content
tool = TXTSearchTool(txt='path/to/text/file.txt')
Arguments
txt (str): Optional. The path to the text file you want to search. This argument is only required if the tool was not initialized with a specific text file; otherwise, the search will be conducted within the initially provided text file.
CSVSearchTool
Depend on OpenAI
All RAG tools at the moment can only use openAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is used to perform a RAG (Retrieval-Augmented Generation) search within a CSV file's content. It allows users to semantically search for queries in the content of a specified CSV file. This feature is particularly useful for extracting information from large CSV datasets where traditional search methods might be inefficient. All tools with "Search" in their name, including CSVSearchTool, are RAG tools designed for searching different sources of data.
Example
from crewai_tools import CSVSearchTool

# Initialize the tool without a specific CSV file. The agent will need to provide the CSV path at runtime.
tool = CSVSearchTool()
# OR
# Initialize the tool with a specific CSV file, restricting the search to that file's content
tool = CSVSearchTool(csv='path/to/your/csvfile.csv')
Arguments
csv: The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional.
XMLSearchTool
Depend on OpenAI
All RAG tools at the moment can only use openAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The XMLSearchTool is a cutting-edge RAG tool engineered for conducting semantic searches within XML files. Ideal for users needing to parse and extract information from XML content efficiently, this tool supports inputting a search query and an optional XML file path. By specifying an XML path, users can target their search more precisely to the content of that file, thereby obtaining more relevant search outcomes.
Installation
To start using the XMLSearchTool, you must first install the crewai_tools package. This can be easily done with the following command:
pip install 'crewai[tools]'
Example
Here are two examples demonstrating how to use the XMLSearchTool. The first example shows searching within a specific XML file, while the second illustrates initiating a search without predefining an XML path, providing flexibility in search scope.
from crewai_tools.tools.xml_search_tool import XMLSearchTool
# Allow agents to search within any XML file's content as it learns about their paths during execution
tool = XMLSearchTool()
# OR
# Initialize the tool with a specific XML file path for exclusive search within that document
tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')
Arguments
xml: This is the path to the XML file you wish to search. It is an optional parameter during the tool's initialization, but it must be provided either at initialization or as part of the run method's arguments to execute a search.
JSONSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is used to perform a RAG search within a JSON file's content. It allows users to initiate a search with a specific JSON path, focusing the search operation within that particular JSON file. If the path is provided at initialization, the tool restricts its search scope to the specified JSON file, thereby enhancing the precision of search results.
Installation
Install the crewai_tools package by executing the following command in your terminal:
pip install 'crewai[tools]'
Example
Below are examples demonstrating how to use the JSONSearchTool for searching within JSON files. You can either search any JSON content or restrict the search to a specific JSON file.
from crewai_tools import JSONSearchTool
# Example 1: Initialize the tool for a general search across any JSON content. This is useful when the path is known or can be discovered during execution.
tool = JSONSearchTool()
# Example 2: Initialize the tool with a specific JSON path, limiting the search to a particular JSON file.
tool = JSONSearchTool(json_path='./path/to/your/file.json')
Arguments
json_path (str): An optional argument that defines the path to the JSON file to be searched. If the tool is initialized without a specific JSON path, the path must be provided at search time; providing this argument restricts the search to the specified JSON file.
DOCXSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The DOCXSearchTool is a RAG tool designed for semantic searching within DOCX documents. It enables users to effectively search and extract relevant information from DOCX files using query-based searches. This tool is invaluable for data analysis, information management, and research tasks, streamlining the process of finding specific information within large document collections.
Installation
Install the crewai_tools package by running the following command in your terminal:
pip install 'crewai[tools]'
Example
The following example demonstrates initializing the DOCXSearchTool to search within any DOCX file's content or with a specific DOCX file path.
from crewai_tools import DOCXSearchTool
# Initialize the tool to search within any DOCX file's content
tool = DOCXSearchTool()
# OR
# Initialize the tool with a specific DOCX file, so the agent can only search the content of the specified DOCX file
tool = DOCXSearchTool(docx='path/to/your/document.docx')
Arguments
docx : An optional file path to a specific DOCX document you wish to search. If not provided during initialization, the tool allows for later specification of any DOCX file's content path for searching.
MDXSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The MDX Search Tool, a key component of the crewai_tools package, is designed for semantic searches within MDX file content, offering invaluable support to researchers and analysts who need to extract insights from markdown-based sources. With its ability to interface with various data sources and tools, it streamlines the process of acquiring, reading, and organizing that content efficiently.
Installation
To utilize the MDX Search Tool, ensure the crewai_tools package is installed. If not already present, install it using the following command:
pip install 'crewai[tools]'
Example
Configuring and using the MDX Search Tool involves setting up any required environment variables and utilizing the tool within a crewAI project. Here's a simple example:
from crewai_tools import MDXSearchTool
# Initialize the tool so the agent can search any MDX content it learns about during its execution
tool = MDXSearchTool()
# OR
# Initialize the tool with a specific MDX file path for exclusive search within that document
tool = MDXSearchTool(mdx='path/to/your/document.mdx')
Arguments
mdx : An optional MDX file path for the search. It can be provided at initialization to restrict the search to that file.
PGSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is designed to facilitate semantic searches within PostgreSQL database tables. Leveraging Retrieval-Augmented Generation (RAG) technology, the PGSearchTool provides users with an efficient means of querying database table content, specifically tailored for PostgreSQL databases. It simplifies the process of finding relevant data through semantic search queries, making it an invaluable resource for users needing to perform advanced queries on extensive datasets within a PostgreSQL database.
Installation
To install the crewai_tools package and utilize the PGSearchTool, execute the following command in your terminal:
pip install 'crewai[tools]'
Example
from crewai_tools import PGSearchTool
# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')
Arguments
The PGSearchTool requires the following arguments for its operation:
db_uri : A string representing the URI of the PostgreSQL database to be queried. This argument is mandatory and must include
the necessary authentication details and the location of the database.
table_name : A string specifying the name of the table within the database on which the semantic search will be performed.
This argument is mandatory.
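As a rough sketch of how the PGSearchTool might be used end to end, the example below attaches it to an agent and task; the agent, task, and database details are illustrative placeholders rather than part of the tool's documentation.
from crewai import Agent, Task, Crew
from crewai_tools import PGSearchTool

# Let an analyst agent query the employees table semantically
pg_tool = PGSearchTool(
    db_uri='postgresql://user:password@localhost:5432/mydatabase',
    table_name='employees'
)
analyst = Agent(
    role='Database Analyst',
    goal='Answer questions about employee records',
    backstory='You retrieve relevant rows from the employees table to support the crew.',
    tools=[pg_tool]
)
task = Task(
    description='Identify which departments appear most often in the employees table.',
    expected_output='A short summary of the most frequent departments.',
    agent=analyst
)
crew = Crew(agents=[analyst], tasks=[task])
result = crew.kickoff()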
WebsiteSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is specifically crafted for conducting semantic searches within the content of a particular website. Leveraging a Retrieval-Augmented Generation (RAG) model, it navigates through the information provided on a given URL. Users have the flexibility to either initiate a search across any website known or discovered during its usage or to concentrate the search on a predefined, specific website.
Installation
Install the crewai_tools package by executing the following command in your terminal:
pip install 'crewai[tools]'
Example
from crewai_tools import WebsiteSearchTool
# To enable the tool to search any website the agent comes across or learns about during its operation
tool = WebsiteSearchTool()
# OR
# To restrict the tool to only search within the content of a specific website
tool = WebsiteSearchTool(website='https://example.com')
Arguments
website : An optional argument that specifies the valid website URL to perform the search on. This becomes necessary if the tool is initialized without a specific website. In the WebsiteSearchToolSchema , this argument is mandatory. However, in the FixedWebsiteSearchToolSchema , it becomes optional if a website is provided during the tool's initialization, as it will then only search within the predefined website's content.
GitHubSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
The GitHubSearchTool is a Retrieval-Augmented Generation (RAG) tool specifically designed for conducting semantic searches within GitHub repositories. Utilizing advanced semantic search capabilities, it sifts through code, pull requests, issues, and repositories, making it an essential tool for developers, researchers, or anyone in need of precise information from GitHub.
Installation
To use the GitHubSearchTool, first ensure the crewai_tools package is installed in your Python environment:
pip install 'crewai[tools]'
This command installs the necessary package to run the GitHubSearchTool along with any other tools included in the crewai_tools package.
Example
from crewai_tools import GitHubSearchTool
# Initialize the tool for semantic searches within a specific GitHub repository
tool = GitHubSearchTool(
github_repo='https://github.com/example/repo',
content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
# OR
# Initialize the tool without a specific GitHub repository, so the agent can search any repository it learns about during its execution
tool = GitHubSearchTool(
content_types=['code', 'issue'] # Options: code, repo, pr, issue
)
Arguments
github_repo : The URL of the GitHub repository where the search will be conducted. Provide it at initialization to target a specific repository; if it is omitted, the agent must supply the repository during execution.
content_types : Specifies the types of content to include in your search. You must provide a list of content types from the following options: code for searching within the code, repo for searching within the repository's general information, pr for searching within pull requests, and issue for searching within issues. This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.
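Since content_types accepts any subset of the listed options, the search scope can be narrowed further. A small sketch limited to pull requests and issues (the repository URL is a placeholder):
from crewai_tools import GitHubSearchTool
# Narrow the semantic search to pull requests and issues only
tool = GitHubSearchTool(
    github_repo='https://github.com/example/repo',
    content_types=['pr', 'issue'] # Options: code, repo, pr, issue
)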
Code Docs RAG Search (CodeDocsSearchTool)
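Following the naming convention of the other RAG search tools, this entry corresponds to the CodeDocsSearchTool in the crewai_tools package, which targets code documentation content. The sketch below assumes a docs_url parameter pointing at the documentation site to search; that parameter name and the URL are assumptions, not confirmed by this page.
from crewai_tools import CodeDocsSearchTool
# Initialize the tool to search any code docs content the agent learns about during its execution
tool = CodeDocsSearchTool()
# OR
# Initialize the tool with a specific documentation URL (parameter name assumed) for exclusive search within that site
tool = CodeDocsSearchTool(docs_url='https://docs.example.com')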
YoutubeVideoSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is part of the crewai_tools package and is designed to perform semantic searches within Youtube video content, utilizing Retrieval-Augmented Generation (RAG) techniques. It is one of several "Search" tools in the package that leverage RAG for different sources. The YoutubeVideoSearchTool allows for flexibility in searches; users can search across any Youtube video content without specifying a video URL, or they can target their search to a specific Youtube video by providing its URL.
Installation
To utilize the YoutubeVideoSearchTool, you must first install the crewai_tools package. This package contains the YoutubeVideoSearchTool among other utilities designed to enhance your data analysis and processing tasks. Install the package by executing the following command in your terminal:
pip install 'crewai[tools]'
Example
To integrate the YoutubeVideoSearchTool into your Python projects, follow the example below. This demonstrates how to use the tool both for general Youtube content searches and for targeted searches within a specific video's content.
from crewai_tools import YoutubeVideoSearchTool
# General search across Youtube content without specifying a video URL, so the agent can search within any Youtube video content it learns about during its execution
tool = YoutubeVideoSearchTool()
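Continuing the example above, the targeted variant follows the same pattern; youtube_video_url matches the argument documented below and the URL is a placeholder.
# Targeted search within a specific Youtube video's content
tool = YoutubeVideoSearchTool(youtube_video_url='https://youtube.com/watch?v=example')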
Arguments
The YoutubeVideoSearchTool accepts the following initialization arguments:
youtube_video_url : An optional argument at initialization but required if targeting a specific Youtube video. It specifies the Youtube video URL path you want to search within.
YoutubeChannelSearchTool
Depend on OpenAI
All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.
Experimental
We are still working on improving tools, so there might be unexpected behavior or changes in the future.
Description
This tool is designed to perform semantic searches within a specific Youtube channel's content. Leveraging the RAG (Retrieval-Augmented Generation) methodology, it provides relevant search results, making it invaluable for extracting information or finding specific content without the need to manually sift through videos. It streamlines the search process within Youtube channels, catering to researchers, content creators, and viewers seeking specific information or topics.
Installation
To utilize the YoutubeChannelSearchTool, the crewai_tools package must be installed. Execute the following command in your shell to install:
pip install 'crewai[tools]'
Example
To begin using the YoutubeChannelSearchTool, follow the example below. This demonstrates initializing the tool both for searching any channel's content and with a specific Youtube channel handle to search within that channel's content.
from crewai_tools import YoutubeChannelSearchTool
# Initialize the tool to search within any Youtube channel's content the agent learns about during its execution
tool = YoutubeChannelSearchTool()
# OR
# Initialize the tool with a specific Youtube channel handle to target your search
tool = YoutubeChannelSearchTool(youtube_channel_handle='@exampleChannel')
Arguments
youtube_channel_handle : A string representing the Youtube channel handle (for example, '@exampleChannel'). Providing it at initialization restricts the tool to searching only within the content of that channel; if it is omitted, the agent must supply a channel handle during execution.
Telemetry
CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.
It's pivotal to understand that NO data is collected concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, or any data processed by the agents, unless users explicitly opt in to sharing (as described below). The data collected includes:
Version of CrewAI: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.
Python Version: Identifying the Python versions our users operate with assists in prioritizing our support efforts for these
versions.
General OS Information: Details like the number of CPUs and the operating system type (macOS, Windows, Linux) enable us to
focus our development on the most used operating systems and explore the potential for OS-specific features.
Number of Agents and Tasks in a Crew: Ensures our internal testing mirrors real-world scenarios, helping us guide users
towards best practices.
Crew Process Utilization: Understanding how crews are utilized aids in directing our development focus.
Memory and Delegation Use by Agents: Insights into how these features are used help evaluate their effectiveness and future.
Task Execution Mode: Knowing whether tasks are executed in parallel or sequentially influences our emphasis on enhancing
parallel execution capabilities.
Language Model Utilization: Supports our goal to improve support for the most popular language models among our users.
Roles of Agents within a Crew: Understanding the various roles agents play aids in crafting better tools, integrations, and
examples.
Tool Usage: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.
Users can choose to share their complete telemetry data by setting the share_crew attribute to True in their crew configurations.
This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data
sharing preferences. Enabling share_crew results in the collection of detailed crew and task execution data, including goal ,
backstory , context , and output of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to
share.
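As a rough sketch of what that opt-in looks like in code, share_crew is passed when the crew is assembled; the agent and task below are placeholders, and the share_crew flag is the only point of the example.
from crewai import Agent, Task, Crew

agent = Agent(
    role='Researcher',
    goal='Collect background material',
    backstory='You gather supporting information for the crew.'
)
task = Task(
    description='Summarize the latest findings on the topic.',
    expected_output='A short summary.',
    agent=agent
)
# Opt in to sharing detailed crew and task execution telemetry
crew = Crew(agents=[agent], tasks=[task], share_crew=True)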
We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed
to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are
encouraged to review this section for the most current information on our data collection practices and how they contribute to the
improvement of CrewAI.