
crewAI Documentation

Cutting-edge framework for orchestrating role-playing, autonomous AI agents. By fostering collaborative intelligence, CrewAI
empowers agents to work together seamlessly, tackling complex tasks.

Core Concepts: Agents, Tasks, Tools, Processes, Crews

How-To Guides: Getting Started, Create Custom Tools, Using Sequential Process, Using Hierarchical Process, Connecting to LLMs, Customizing Agents, Human Input on Execution

Examples: Prepare for meetings, Trip Planner Crew, Create Instagram Post, Stock Analysis, Game Generator, Drafting emails with LangGraph, Landing Page Generator

Agents

What is an Agent?

An agent is an autonomous unit programmed to:

Perform tasks
Make decisions
Communicate with other agents

Think of an agent as a member of a team, with specific skills and a particular job to do. Agents can have different roles like 'Researcher', 'Writer', or 'Customer Support', each contributing to the overall goal of the crew.

Agent Attributes

Role: Defines the agent's function within the crew. It determines the kind of tasks the agent is best suited for.

Goal: The individual objective that the agent aims to achieve. It guides the agent's decision-making process.

Backstory: Provides context to the agent's role and goal, enriching the interaction and collaboration dynamics.

LLM (optional): The language model used by the agent to process and generate text. It dynamically fetches the model name from the OPENAI_MODEL_NAME environment variable, defaulting to "gpt-4" if not specified.

Tools (optional): Set of capabilities or functions that the agent can use to perform tasks. Tools can be shared or exclusive to specific agents. It's an attribute that can be set during the initialization of an agent, with a default value of an empty list.

Function Calling LLM (optional): If passed, the agent will use this LLM to execute function calling for tools instead of relying on the main LLM output.

Max Iter (optional): The maximum number of iterations the agent can perform before being forced to give its best answer. Default is 15.

Max RPM (optional): The maximum number of requests per minute the agent can perform to avoid rate limits. It's optional and can be left unspecified, with a default value of None.

Verbose (optional): Enables detailed logging of the agent's execution for debugging or monitoring purposes when set to True. Default is False.

Allow Delegation (optional): Agents can delegate tasks or questions to one another, ensuring that each task is handled by the most suitable agent. Default is True.

Step Callback (optional): A function that is called after each step of the agent. This can be used to log the agent's actions or to perform other operations. It will overwrite the crew step_callback.

Memory (optional): Indicates whether the agent should have memory, impacting its ability to remember past interactions. Default is False.

Creating an Agent

Agent Interaction

Agents can interact with each other using crewAI's built-in delegation and communication mechanisms. This allows for dynamic task management and problem-solving within the crew.

To create an agent, you would typically initialize an instance of the Agent class with the desired properties. Here's a conceptual
example including all attributes:

# Example: Creating an agent with all attributes


from crewai import Agent

agent = Agent(
  role='Data Analyst',
  goal='Extract actionable insights',
  backstory="""You're a data analyst at a large company.
  You're responsible for analyzing data and providing insights
  to the business.
  You're currently working on a project to analyze the
  performance of our marketing campaigns.""",
  tools=[my_tool1, my_tool2],  # Optional, defaults to an empty list
  llm=my_llm,  # Optional
  function_calling_llm=my_llm,  # Optional
  max_iter=15,  # Optional
  max_rpm=None,  # Optional
  verbose=True,  # Optional
  allow_delegation=True,  # Optional
  step_callback=my_intermediate_step_callback,  # Optional
  memory=True  # Optional
)
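Building on the Allow Delegation attribute described above, the following is a minimal hedged sketch of two agents in a crew where work can be delegated; the agent roles, goals, and the task text are illustrative and not taken from the original docs:

from crewai import Agent, Task, Crew

# Illustrative agents: either one can hand questions or subtasks to the other
researcher = Agent(
  role='Researcher',
  goal='Answer factual questions about AI',
  backstory='A meticulous researcher.',
  allow_delegation=True  # default value, shown here for clarity
)

writer = Agent(
  role='Writer',
  goal='Write a short summary about AI',
  backstory='A concise technical writer.',
  allow_delegation=True  # lets the writer delegate follow-up research back to the researcher
)

summary_task = Task(
  description='Write a short summary of recent AI progress',
  agent=writer
)

crew = Crew(agents=[researcher, writer], tasks=[summary_task])
result = crew.kickoff()
print(result)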

Conclusion
Agents are the building blocks of the CrewAI framework. By understanding how to define and interact with agents, you can create
sophisticated AI systems that leverage the power of collaborative intelligence.

Next

Tasks

Copyright © 2024 crewAI, Inc


Made with Material for MkDocs
Tasks

Overview of a Task

What is a Task?

In the CrewAI framework, tasks are individual assignments that agents complete. They encapsulate necessary information for execution, including a description, assigned agent, and required tools, offering flexibility for various action complexities.

Tasks in CrewAI can be designed to require collaboration between agents. For example, one agent might gather data while another analyzes it. This collaborative approach can be defined within the task properties and managed by the Crew's process.

Task Attributes
Description: A clear, concise statement of what the task entails.

Agent: Optionally, you can specify which agent is responsible for the task. If not, the crew's process will determine who takes it on.

Expected Output: A clear and detailed definition of the expected output for the task.

Tools (optional): The functions or capabilities the agent can utilize to perform the task. They can be anything from simple actions like 'search' to more complex interactions with other agents or APIs.

Async Execution (optional): Indicates whether the task should be executed asynchronously, allowing the crew to continue with the next task without waiting for completion.

Context (optional): Other tasks whose output will be used as context for this task. If a task is asynchronous, the system will wait for it to finish before using its output as context.

Output JSON (optional): Takes a pydantic model and returns the output as a JSON object. The agent's LLM needs to use an OpenAI client (it could be Ollama, for example, but accessed through the OpenAI wrapper).

Output Pydantic (optional): Takes a pydantic model and returns the output as a pydantic object. The agent's LLM needs to use an OpenAI client (it could be Ollama, for example, but accessed through the OpenAI wrapper).

Output File (optional): Takes a file path and saves the output of the task to it.

Callback (optional): A function to be executed after the task is completed.
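As a hedged sketch of the structured-output attributes above, the pydantic model, its field names, and the task text are illustrative assumptions; research_agent is assumed to be an agent like the one defined later on this page:

from pydantic import BaseModel
from crewai import Task

# Illustrative pydantic model describing the structured output we expect
class NewsSummary(BaseModel):
  title: str
  summary: str

structured_task = Task(
  description='Summarize the most important AI news item of the week',
  expected_output='A title and a one-paragraph summary',
  agent=research_agent,          # assumed agent, defined as in the examples below
  output_pydantic=NewsSummary    # alternatively: output_json=NewsSummary, or output_file='summary.txt'
)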

Creating a Task
This is the simplest example of creating a task: it involves defining its scope and agent, but there are optional attributes that can provide a lot of flexibility:

from crewai import Task

task = Task(
  description='Find and summarize the latest and most relevant news on AI',
  agent=sales_agent
)

Task Assignment

Tasks can be assigned directly by specifying an agent, or they can be assigned at run time by CrewAI's hierarchical process, which considers roles, availability, and other criteria.

Integrating Tools with Tasks


Tools from the crewAI Toolkit and LangChain Tools enhance task performance, allowing agents to interact more effectively with their environment. Assigning specific tools to tasks can tailor agent capabilities to particular needs.

Creating a Task with Tools


import os
os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key

from crewai import Agent, Task, Crew


from crewai_tools import SerperDevTool

research_agent = Agent(
  role='Researcher',
  goal='Find and summarize the latest AI news',
  backstory="""You're a researcher at a large company.
  You're responsible for analyzing data and providing insights
  to the business.""",
  verbose=True
)

search_tool = SerperDevTool()

task = Task(
  description='Find and summarize the latest AI news',
  expected_output='A bullet list summary of the top 5 most important AI news',
  agent=research_agent,
  tools=[search_tool]
)

crew = Crew(
  agents=[research_agent],
  tasks=[task],
  verbose=2
)

result = crew.kickoff()
print(result)

This demonstrates how tasks with specific tools can override an agent's default set for tailored task execution.

Referring to Other Tasks


In crewAI, the output of one task is automatically relayed into the next one, but you can specifically define which tasks' outputs (including multiple tasks) should be used as context for another task.
This is useful when you have a task that depends on the output of another task that is not performed immediately after it. This is done through the context attribute of the task:

# ...

research_ai_task = Task(
  description='Find and summarize the latest AI news',
  expected_output='A bullet list summary of the top 5 most important AI news',
  async_execution=True,
  agent=research_agent,
  tools=[search_tool]
)

research_ops_task = Task(
  description='Find and summarize the latest AI Ops news',
  expected_output='A bullet list summary of the top 5 most important AI Ops news',
  async_execution=True,
  agent=research_agent,
  tools=[search_tool]
)

write_blog_task = Task(
  description="Write a full blog post about the importance of AI and its latest news",
  expected_output='Full blog post that is 4 paragraphs long',
  agent=writer_agent,
  context=[research_ai_task, research_ops_task]
)

#...

Asynchronous Execution
You can define a task to be executed asynchronously. This means that the crew will not wait for it to be completed to continue with the next task. This is useful for tasks that take a long time to be completed, or that are not crucial for the next tasks to be performed.
You can then use the context attribute to define in a future task that it should wait for the output of the asynchronous task to be completed.

#...

list_ideas = Task(
  description="List of 5 interesting ideas to explore for an article about AI.",
  expected_output="Bullet point list of 5 ideas for an article.",
  agent=researcher,
  async_execution=True  # Will be executed asynchronously
)

list_important_history = Task(
  description="Research the history of AI and give me the 5 most important events.",
  expected_output="Bullet point list of 5 important events.",
  agent=researcher,
  async_execution=True  # Will be executed asynchronously
)

write_article = Task(
  description="Write an article about AI, its history, and interesting ideas.",
  expected_output="A 4 paragraph article about AI.",
  agent=writer,
  context=[list_ideas, list_important_history]  # Will wait for the output of the two tasks to be completed
)

#...

Callback Mechanism
The callback function is executed after the task is completed, allowing for actions or notifications to be triggered based on the task's outcome.

# ...

def callback_function(output: TaskOutput):
  # Do something after the task is completed
  # Example: Send an email to the manager
  print(f"""
Task completed!
Task: {output.description}
Output: {output.raw_output}
""")

research_task = Task(
  description='Find and summarize the latest AI news',
  expected_output='A bullet list summary of the top 5 most important AI news',
  agent=research_agent,
  tools=[search_tool],
  callback=callback_function
)

#...

Accessing a Specific Task Output

Once a crew finishes running, you can access the output of a specific task by using the output attribute of the task object:

# ...
task1 = Task(
  description='Find and summarize the latest AI news',
  expected_output='A bullet list summary of the top 5 most important AI news',
  agent=research_agent,
  tools=[search_tool]
)

#...

crew = Crew(
  agents=[research_agent],
  tasks=[task1, task2, task3],
  verbose=2
)

result = crew.kickoff()

# Returns a TaskOutput object with the description and results of the task
print(f"""
Task completed!
Task: {task1.output.description}
Output: {task1.output.raw_output}
""")

Tool Override Mechanism


Specifying tools in a task allows for dynamic adaptation of agent capabilities, emphasizing CrewAI's flexibility.

Error Handling and Validation Mechanisms


While creating and executing tasks, certain validation mechanisms are in place to ensure the robustness and reliability of task
attributes. These include but are not limited to:

Ensuring only one output type is set per task to maintain clear output expectations.

Preventing the manual assignment of the id attribute to uphold the integrity of the unique identifier system.

These validations help in maintaining the consistency and reliability of task executions within the crewAI framework.
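As a hedged illustration of the first validation above, the exact exception type the framework raises is not specified here, so a broad except is used purely for demonstration:

from pydantic import BaseModel
from crewai import Task

class Summary(BaseModel):
  text: str

try:
  # Setting more than one output type on the same task is expected to be rejected
  bad_task = Task(
    description='Summarize the latest AI news',
    expected_output='A short summary',
    output_json=Summary,
    output_pydantic=Summary
  )
except Exception as error:
  print(f"Task validation failed: {error}")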

Conclusion
Tasks are the driving force behind the actions of agents in crewAI. By properly defining tasks and their outcomes, you set the stage
for your AI agents to work effectively, either independently or as a collaborative unit. Equipping tasks with appropriate tools,
understanding the execution process, and following robust validation practices are crucial for maximizing CrewAI's potential,
ensuring agents are effectively prepared for their assignments and that tasks are executed as intended.

Processes

Understanding Processes

Core Concept

In CrewAI, processes orchestrate the execution of tasks by agents, akin to project management in human teams. These processes ensure tasks are distributed and executed efficiently, in alignment with a predefined strategy.

Process Implementations
Sequential: Executes tasks sequentially, ensuring tasks are completed in an orderly progression.

Hierarchical: Organizes tasks in a managerial hierarchy, where tasks are delegated and executed based on a structured chain of command. Note: A manager language model ( manager_llm ) must be specified in the crew to enable the hierarchical process, allowing for the creation and management of tasks by the manager.

Consensual (Planned): Currently under consideration for future development, this process type aims for collaborative decision-making among agents on task execution, introducing a more democratic approach to task management within CrewAI. As of now, it is not implemented in the codebase.
The Role of Processes in Teamwork

Processes enable individual agents to operate as a cohesive unit, streamlining their efforts to achieve common objectives with efficiency and coherence.

Assigning Processes to a Crew


To assign a process to a crew, specify the process type upon crew creation to set the execution strategy. Note: For a hierarchical process, be sure to define manager_llm for the manager agent.

from crewai import Crew


from crewai.process import Process
from langchain_openai import ChatOpenAI

# Example: Creating a crew with a sequential process


crew = Crew(
  agents=my_agents,
  tasks=my_tasks,
  process=Process.sequential
)

# Example: Creating a crew with a hierarchical process
# Be sure to provide a manager_llm
crew = Crew(
  agents=my_agents,
  tasks=my_tasks,
  process=Process.hierarchical,
  manager_llm=ChatOpenAI(model="gpt-4")
)

Note: Ensure my_agents and my_tasks are defined prior to creating a Crew object, and for the hierarchical process, manager_llm is also required.

Sequential Process
This method mirrors dynamic team workflows, progressing through tasks in a thoughtful and systematic manner. Task execution follows the predefined order in the task list, with the output of one task serving as context for the next.
To customize task context, utilize the context parameter in the Task class to specify outputs that should be used as context for subsequent tasks.

Hierarchical Process
Emulating a corporate hierarchy, crewAI automatically creates a manager agent for you, requiring the specification of a manager language model ( manager_llm ) for that agent. This agent oversees task execution, including planning, delegation, and validation. Tasks are not pre-assigned; the manager allocates tasks to agents based on their capabilities, reviews outputs, and assesses task completion.

Process Class: Detailed Overview


The Process class is implemented as an enumeration ( Enum ), ensuring type safety and restricting process values to the defined types ( sequential , hierarchical , and future consensual ). This design choice guarantees that only valid processes are utilized within the CrewAI framework.
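A minimal hedged sketch of working with the enumeration; only the members used elsewhere in these docs are assumed:

from crewai import Process

# Process is an Enum, so only its defined members are valid values for a crew's process
for member in Process:
  print(member.name)

chosen_process = Process.sequential  # or Process.hierarchical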

Planned Future Processes


Consensual Process: This collaborative decision-making process among agents on task execution is under consideration but
not currently implemented. This future enhancement aims to introduce a more democratic approach to task management within
CrewAI.

Conclusion
The structured collaboration facilitated by processes within CrewAI is crucial for enabling systematic teamwork among agents.
Documentation will be regularly updated to reflect new processes and enhancements, ensuring users have access to the most
current and comprehensive information.

Tools

Introduction

CrewAI tools empower agents with capabilities ranging from web searching and data analysis to collaboration and delegating tasks among coworkers. This documentation outlines how to create, integrate, and leverage these tools within the CrewAI framework, including a new focus on collaboration tools.

What is a Tool?

Definition

A tool in CrewAI is a skill or function that agents can utilize to perform various actions. This includes tools from the crewAI Toolkit and LangChain Tools, enabling everything from simple searches to complex interactions and effective teamwork among agents.

Key Characteristics of Tools

Utility: Crafted for tasks such as web searching, data analysis, content generation, and agent collaboration.
Integration: Boosts agent capabilities by seamlessly integrating tools into their workflow.
Customizability: Provides the flexibility to develop custom tools or utilize existing ones, catering to the specific needs of agents.

Using crewAI Tools


To enhance your agents' capabilities with crewAI tools, begin by installing our extra tools package:

pip install 'crewai[tools]'

Here's an example demonstrating their use:

import os
from crewai import Agent, Task, Crew
# Importing crewAI tools
from crewai_tools import (
DirectoryReadTool,
FileReadTool,
SerperDevTool,
WebsiteSearchTool
)

# Set up API keys


os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"

# Instantiate tools
docs_tool = DirectoryReadTool(directory='./blog-posts')
file_tool = FileReadTool()
search_tool = SerperDevTool()
web_rag_tool = WebsiteSearchTool()

# Create agents
researcher = Agent(
  role='Market Research Analyst',
  goal='Provide up-to-date market analysis of the AI industry',
  backstory='An expert analyst with a keen eye for market trends.',
  tools=[search_tool, web_rag_tool],
  verbose=True
)

writer = Agent(
  role='Content Writer',
  goal='Craft engaging blog posts about the AI industry',
  backstory='A skilled writer with a passion for technology.',
  tools=[docs_tool, file_tool],
  verbose=True
)

# Define tasks
research = Task(
  description='Research the latest trends in the AI industry and provide a summary.',
  expected_output='A summary of the top 3 trending developments in the AI industry with a unique perspective on their significance.',
  agent=researcher
)

write = Task(
  description='Write an engaging blog post about the AI industry, based on the research analyst’s summary. Draw inspiration from the latest blog posts in the directory.',
  expected_output='A 4-paragraph blog post formatted in markdown with engaging, informative, and accessible content, avoiding complex jargon.',
  agent=writer,
  output_file='blog-posts/new_post.md'  # The final blog post will be saved here
)

# Assemble a crew
crew = Crew(
  agents=[researcher, writer],
  tasks=[research, write],
  verbose=2
)

# Execute tasks
crew.kickoff()

Available crewAI Tools


Most of the tools in the crewAI toolkit let you either set specific arguments or leave them more open-ended. For example:

from crewai_tools import DirectoryReadTool

# This will allow the agent with this tool to read any directory it wants during its execution
tool = DirectoryReadTool()

# OR

# This will allow the agent with this tool to read only the directory specified during its execution
tool = DirectoryReadTool(directory='./directory')

Specific per-tool docs are coming soon. Here is a list of the available tools and their descriptions:

CodeDocsSearchTool: A RAG tool optimized for searching through code documentation and related technical documents.
CSVSearchTool: A RAG tool designed for searching within CSV files, tailored to handle structured data.
DirectorySearchTool: A RAG tool for searching within directories, useful for navigating through file systems.
DOCXSearchTool: A RAG tool aimed at searching within DOCX documents, ideal for processing Word files.
DirectoryReadTool: Facilitates reading and processing of directory structures and their contents.
FileReadTool: Enables reading and extracting data from files, supporting various file formats.
GithubSearchTool: A RAG tool for searching within GitHub repositories, useful for code and documentation search.
SerperDevTool: A specialized tool for development purposes, with specific functionalities under development.
TXTSearchTool: A RAG tool focused on searching within text (.txt) files, suitable for unstructured data.
JSONSearchTool: A RAG tool designed for searching within JSON files, catering to structured data handling.
MDXSearchTool: A RAG tool tailored for searching within Markdown (MDX) files, useful for documentation.
PDFSearchTool: A RAG tool aimed at searching within PDF documents, ideal for processing scanned documents.
PGSearchTool: A RAG tool optimized for searching within PostgreSQL databases, suitable for database queries.
RagTool: A general-purpose RAG tool capable of handling various data sources and types.
ScrapeElementFromWebsiteTool: Enables scraping specific elements from websites, useful for targeted data extraction.
ScrapeWebsiteTool: Facilitates scraping entire websites, ideal for comprehensive data collection.
WebsiteSearchTool: A RAG tool for searching website content, optimized for web data extraction.
XMLSearchTool: A RAG tool designed for searching within XML files, suitable for structured data formats.
YoutubeChannelSearchTool: A RAG tool for searching within YouTube channels, useful for video content analysis.
YoutubeVideoSearchTool: A RAG tool aimed at searching within YouTube videos, ideal for video data extraction.

Creating your own Tools

Custom Tool Creation

Developers can craft custom tools tailored for their agent’s needs or utilize pre-built options:

To create your own crewAI tools you will need to install our extra tools package:

pip install 'crewai[tools]'

Once you do that, there are two main ways to create a crewAI tool:

Subclassing BaseTool

from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
  name: str = "Name of my tool"
  description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."

  def _run(self, argument: str) -> str:
    # Implementation goes here
    return "Result from custom tool"

Define a new class inheriting from BaseTool , specifying name , description , and the _run method for operational logic.
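A brief hedged usage sketch (the agent definition is illustrative): once instantiated, the custom tool is passed to an agent like any other tool.

from crewai import Agent

my_tool = MyCustomTool()

analyst = Agent(
  role='Research Analyst',
  goal='Provide up-to-date market analysis',
  backstory='An expert analyst with a keen eye for market trends.',
  tools=[my_tool]
)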

Utilizing the tool Decorator

For a simpler approach, create a tool directly from a function with the required attributes and its functional logic.

from crewai_tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
  """Clear description for what this tool is useful for, your agent will need this information to use it."""
  # Function logic here

import json
import requests
from crewai import Agent
from crewai.tools import tool
from unstructured.partition.html import partition_html

# Annotate the function with the tool decorator from crewAI
@tool("Integration with a given API")
def integration_tool(argument: str) -> str:
  """Integration with a given API"""
  # Code here
  results = "..."  # placeholder for the API response
  return results  # string to be sent back to the agent

# Assign the integration tool to an agent
agent = Agent(
  role='Research Analyst',
  goal='Provide up-to-date market analysis',
  backstory='An expert analyst with a keen eye for market trends.',
  tools=[integration_tool]
)

Using LangChain Tools

LangChain Integration

CrewAI seamlessly integrates with LangChain's comprehensive toolkit for search-based queries and more. See the built-in tools offered by LangChain in the LangChain Toolkit documentation.

import os

from crewai import Agent
from langchain.agents import Tool
from langchain.utilities import GoogleSerperAPIWrapper

# Setup API keys
os.environ["SERPER_API_KEY"] = "Your Key"

search = GoogleSerperAPIWrapper()

# Create and assign the search tool to an agent
serper_tool = Tool(
  name="Intermediate Answer",
  func=search.run,
  description="Useful for search-based queries",
)

agent = Agent(
  role='Research Analyst',
  goal='Provide up-to-date market analysis',
  backstory='An expert analyst with a keen eye for market trends.',
  tools=[serper_tool]
)

# rest of the code ...

Conclusion
Tools are pivotal in extending the capabilities of CrewAI agents, enabling them to undertake a broad spectrum of tasks and
collaborate effectively. When building solutions with CrewAI, leverage both custom and existing tools to empower your agents and
enhance the AI ecosystem.

Crews

What is a Crew?

Definition of a Crew

A crew in crewAI represents a collaborative group of agents working together to achieve a set of tasks. Each crew defines the strategy for task execution, agent collaboration, and the overall workflow.

Crew Attributes

Tasks: A list of tasks assigned to the crew.

Agents: A list of agents that are part of the crew.

Process (optional): The process flow (e.g., sequential, hierarchical) the crew follows.

Verbose (optional): The verbosity level for logging during execution.

Manager LLM (optional): The language model used by the manager agent in a hierarchical process. Required when using a hierarchical process.

Function Calling LLM (optional): If passed, the crew will use this LLM to do function calling for tools for all agents in the crew. Each agent can have its own LLM, which overrides the crew's LLM for function calling.

Config (optional): Optional configuration settings for the crew, in Json or Dict[str, Any] format.

Max RPM (optional): Maximum requests per minute the crew adheres to during execution.

Language (optional): Language used for the crew; defaults to English.

Full Output (optional): Whether the crew should return the full output with all task outputs or just the final output.

Step Callback (optional): A function that is called after each step of every agent. This can be used to log the agent's actions or to perform other operations; it won't override the agent-specific step_callback.

Share Crew (optional): Whether you want to share the complete crew information and execution with the crewAI team to make the library better and allow us to train models.

Crew Max RPM

The `max_rpm` attribute sets the maximum number of requests per minute the crew can perform to avoid rate limits, and it will override individual agents' `max_rpm` settings if set.
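A minimal hedged sketch (my_agents and my_tasks are placeholders assumed to be defined elsewhere, as in the Processes examples):

from crewai import Crew

# Cap the whole crew at 30 requests per minute; per the note above, this takes precedence over per-agent max_rpm
rate_limited_crew = Crew(
  agents=my_agents,
  tasks=my_tasks,
  max_rpm=30
)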

Creating a Crew

Crew Composition

When assembling a crew, you combine agents with complementary roles and tools, assign tasks, and select a process that dictates their execution order and
interaction.

Example: Assembling a Crew

from crewai import Crew, Agent, Task, Process


from langchain_community.tools import DuckDuckGoSearchRun

# Define agents with specific roles and tools


researcher = Agent(
  role='Senior Research Analyst',
  goal='Discover innovative AI technologies',
  tools=[DuckDuckGoSearchRun()]
)

writer = Agent(
  role='Content Writer',
  goal='Write engaging articles on AI discoveries',
  verbose=True
)

# Create tasks for the agents
research_task = Task(
  description='Identify breakthrough AI technologies',
  agent=researcher
)
write_article_task = Task(
  description='Draft an article on the latest AI technologies',
  agent=writer
)

# Assemble the crew with a sequential process
my_crew = Crew(
  agents=[researcher, writer],
  tasks=[research_task, write_article_task],
  process=Process.sequential,
  full_output=True,
  verbose=True,
)

Crew Usage Metrics


After the crew execution, you can access the usage_metrics attribute to view the language model (LLM) usage metrics for all tasks executed by the crew. This provides insights into operational efficiency and areas for improvement.

# Access the crew's usage metrics


crew = Crew(agents=[agent1, agent2], tasks=[task1, task2])
crew.kickoff()
print(crew.usage_metrics)

Crew Execution Process


Sequential Process: Tasks are executed one after another, allowing for a linear flow of work.

Hierarchical Process: A manager agent coordinates the crew, delegating tasks and validating outcomes before proceeding. Note: A manager_llm is required for this process, and it's essential for validating the process flow.

Kicking Off a Crew

Once your crew is assembled, initiate the workflow with the kickoff() method. This starts the execution process according to the defined process flow.

# Start the crew's task execution


result = my_crew.kickoff()
print(result)

Collaboration

Collaboration Fundamentals

Core of Agent Interaction

Collaboration in CrewAI is fundamental, enabling agents to combine their skills, share information, and assist each other in task execution, embodying a truly cooperative ecosystem.

Information Sharing: Ensures all agents are well-informed and can contribute effectively by sharing data and findings.
Task Assistance: Allows agents to seek help from peers with the required expertise for specific tasks.
Resource Allocation: Optimizes task execution through the efficient distribution and sharing of resources among agents.
Enhanced Attributes for Improved Collaboration

The Crew class has been enriched with several attributes to support advanced functionalities, as shown in the sketch after this list:

Language Model Management ( manager_llm , function_calling_llm ): Manages language models for executing tasks and tools, facilitating sophisticated agent-tool interactions. Note that manager_llm is mandatory when using a hierarchical process, ensuring proper execution flow.

Process Flow ( process ): Defines the execution logic (e.g., sequential, hierarchical) to streamline task distribution and execution.

Verbose Logging ( verbose ): Offers detailed logging capabilities for monitoring and debugging purposes. It supports both integer and boolean types to indicate the verbosity level.

Configuration ( config ): Allows extensive customization to tailor the crew's behavior according to specific requirements.

Rate Limiting ( max_rpm ): Ensures efficient utilization of resources by limiting requests per minute.

Internationalization Support ( language ): Facilitates operation in multiple languages, enhancing global usability.

Execution and Output Handling ( full_output ): Distinguishes between full and final outputs for nuanced control over task results.

Callback and Telemetry ( step_callback ): Integrates callbacks for step-wise execution monitoring and telemetry for performance analytics.

Crew Sharing ( share_crew ): Enables sharing of crew information with CrewAI for continuous improvement and model training.

Usage Metrics ( usage_metrics ): Stores all metrics for language model (LLM) usage across all tasks' execution, providing insights into operational efficiency and areas for improvement; you can check it after the crew execution.
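Drawing on the attributes above, here is a minimal hedged sketch; the agents, tasks, and callback are illustrative placeholders, and the value format for language is an assumption:

from crewai import Crew, Process
from langchain_openai import ChatOpenAI

def log_step(step_output):
  # Illustrative placeholder: inspect or persist each intermediate step
  print(step_output)

collaborative_crew = Crew(
  agents=[researcher, writer],          # illustrative agents, as in the example scenario below
  tasks=[research_task, writing_task],  # illustrative tasks
  process=Process.hierarchical,
  manager_llm=ChatOpenAI(model="gpt-4"),  # mandatory for the hierarchical process
  max_rpm=30,
  language="en",        # assumed format for the language setting
  full_output=True,
  step_callback=log_step,
  share_crew=False
)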

Delegation: Dividing to Conquer


Delegation enhances functionality by allowing agents to intelligently assign tasks or seek help, thereby amplifying the crew's overall
capability.

Implementing Collaboration and Delegation


Setting up a crew involves defining the roles and capabilities of each agent. CrewAI seamlessly manages their interactions, ensuring efficient collaboration and delegation, with enhanced customization and monitoring features to adapt to various operational needs.

Example Scenario
Consider a crew with a researcher agent tasked with data gathering and a writer agent responsible for compiling reports. The integration of advanced language model management and process flow attributes allows for more sophisticated interactions, such as the writer delegating complex research tasks to the researcher or querying specific information, thereby facilitating a seamless workflow.

Conclusion
The integration of advanced attributes and functionalities into the CrewAI framework significantly enriches the agent collaboration ecosystem. These enhancements not only simplify interactions but also offer unprecedented flexibility and control, paving the way for sophisticated AI-driven solutions capable of tackling complex tasks through intelligent collaboration and delegation.

Create Custom Tools

Creating your own Tools

Custom Tool Creation

Developers can craft custom tools tailored for their agent's needs or utilize pre-built options.

To create your own crewAI tools, you will need to install our extra tools package:

pip install 'crewai[tools]'

Once you do that, there are two main ways to create a crewAI tool:

Subclassing BaseTool
from crewai_tools import BaseTool

class MyCustomTool(BaseTool):
  name: str = "Name of my tool"
  description: str = "Clear description for what this tool is useful for, your agent will need this information to use it."

  def _run(self, argument: str) -> str:
    # Implementation goes here
    return "Result from custom tool"

Define a new class inheriting from BaseTool , specifying name , description , and the _run method for operational logic.

Utilizing the tool Decorator

For a simpler approach, create a tool directly from a function with the required attributes and its functional logic.

from crewai_tools import tool

@tool("Name of my tool")
def my_tool(question: str) -> str:
  """Clear description for what this tool is useful for, your agent will need this information to use it."""
  # Function logic here

import json
import requests
from crewai import Agent
from crewai.tools import tool
from unstructured.partition.html import partition_html

# Annotate the function with the tool decorator from crewAI
@tool("Integration with a given API")
def integration_tool(argument: str) -> str:
  """Integration with a given API"""
  # Code here
  results = "..."  # placeholder for the API response
  return results  # string to be sent back to the agent

# Assign the integration tool to an agent
agent = Agent(
  role='Research Analyst',
  goal='Provide up-to-date market analysis',
  backstory='An expert analyst with a keen eye for market trends.',
  tools=[integration_tool]
)

Using the Tool function from langchain

For another simple approach, create a plain Python function with the required logic.

def combine(a, b):
  return a + b

Then you can add that function to your tool by passing it to the 'func' parameter of the Tool class.

from langchain.agents import Tool

math_tool = Tool(
  name="Math tool",
  func=combine,
  description="Useful for adding two numbers together, in other words combining them."
)
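A short hedged usage sketch (the agent definition is illustrative) showing the wrapped function attached to an agent:

from crewai import Agent

calculator_agent = Agent(
  role='Calculator Assistant',
  goal='Add numbers on request',
  backstory='A precise assistant for simple arithmetic.',
  tools=[math_tool]
)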

Getting Started

Introduction

Embark on your CrewAI journey by setting up your environment and initiating your AI crew with enhanced features. This guide ensures a seamless start, incorporating the latest updates.

Step 0: Installation

Install CrewAI and any necessary packages for your project.

pip install crewai
pip install 'crewai[tools]'

Step 1: Assemble Your Agents

Define your agents with distinct roles, backstories, and now, enhanced capabilities such as verbose mode and memory usage. These elements add depth and guide their task execution and interaction within the crew.
import os
os.environ["SERPER_API_KEY"] = "Your Key"  # serper.dev API key
os.environ["OPENAI_API_KEY"] = "Your Key"

from crewai import Agent


from crewai_tools import SerperDevTool
search_tool = SerperDevTool()

# Creating a senior researcher agent with memory and verbose mode


researcher = Agent(
  role='Senior Researcher',
  goal='Uncover groundbreaking technologies in {topic}',
  verbose=True,
  memory=True,
  backstory=(
    "Driven by curiosity, you're at the forefront of "
    "innovation, eager to explore and share knowledge that could change "
    "the world."
  ),
  tools=[search_tool],
  allow_delegation=True
)

# Creating a writer agent with custom tools and delegation capability
writer = Agent(
  role='Writer',
  goal='Narrate compelling tech stories about {topic}',
  verbose=True,
  memory=True,
  backstory=(
    "With a flair for simplifying complex topics, you craft "
    "engaging narratives that captivate and educate, bringing new "
    "discoveries to light in an accessible manner."
  ),
  tools=[search_tool],
  allow_delegation=False
)

Step 2: Define the Tasks

Detail the specific objectives for your agents, including new features for asynchronous execution and output customization. These tasks ensure a targeted approach to their roles.

from crewai import Task

# Research task
research_task = Task(
  description=(
    "Identify the next big trend in {topic}. "
    "Focus on identifying pros and cons and the overall narrative. "
    "Your final report should clearly articulate the key points, "
    "its market opportunities, and potential risks."
  ),
  expected_output='A comprehensive 3 paragraphs long report on the latest AI trends.',
  tools=[search_tool],
  agent=researcher,
)

# Writing task with language model configuration
write_task = Task(
  description=(
    "Compose an insightful article on {topic}. "
    "Focus on the latest trends and how it's impacting the industry. "
    "This article should be easy to understand, engaging, and positive."
  ),
  expected_output='A 4 paragraph article on {topic} advancements formatted as markdown.',
  tools=[search_tool],
  agent=writer,
  async_execution=False,
  output_file='new-blog-post.md'  # Example of output customization
)

Step 3: Form the Crew


Combine your agents into a crew, setting the workflow process they'll follow to accomplish the tasks, now with the option to configure language models for enhanced interaction.

from crewai import Crew, Process

# Forming the tech-focused crew with enhanced configurations


crew = Crew(
  agents=[researcher, writer],
  tasks=[research_task, write_task],
  process=Process.sequential  # Optional: Sequential task execution is default
)

Step 4: Kick It Off


Initiate the process with your enhanced crew ready. Observe as your agents collaborate, leveraging their new capabilities for a
successful project outcome. You can also pass the inputs that will be interpolated into the agents and tasks.

# Starting the task execution process with enhanced feedback


result = crew.kickoff(inputs={'topic': 'AI in healthcare'})
print(result)

Conclusion
Building and activating a crew in CrewAI has evolved with new functionalities. By incorporating verbose mode, memory capabilities, asynchronous task execution, output customization, and language model configuration, your AI team is more equipped than ever to tackle challenges efficiently. The depth of agent backstories and the precision of their objectives enrich collaboration, leading to successful project outcomes.

Using Hierarchical Process

Introduction

The hierarchical process in CrewAI introduces a structured approach to task management, simulating traditional organizational hierarchies for efficient task delegation and execution. This systematic workflow enhances project outcomes by ensuring tasks are handled with optimal efficiency and accuracy.

Complexity and Efficiency

The hierarchical process is designed to leverage advanced models like GPT-4, optimizing token usage while handling complex tasks with greater efficiency.
Hierarchical Process Overview

By default, tasks in CrewAI are managed through a sequential process. However, adopting a hierarchical approach allows for a clear hierarchy in task management, where a 'manager' agent coordinates the workflow, delegates tasks, and validates outcomes for streamlined and effective execution. This manager agent is created automatically by crewAI, so you don't need to worry about it.
Key Features

Task Delegation: A manager agent allocates tasks among crew members based on their roles and capabilities.

Result Validation: The manager evaluates outcomes to ensure they meet the required standards.

Efficient Workflow: Emulates corporate structures, providing an organized approach to task management.

Implementing the Hierarchical Process


To utilize the hierarchical process, it's essential to explicitly set the process attribute to Process.hierarchical , as the default behavior is Process.sequential . Define a crew with a designated manager and establish a clear chain of command.

Tools and Agent Assignment

Assign tools at the agent level to facilitate task delegation and execution by the designated agents under the manager's guidance.

Manager LLM Requirement

Configuring the manager_llm parameter is crucial for the hierarchical process. The system requires a manager LLM to be set up for proper function, ensuring tailored decision-making.

from langchain_openai import ChatOpenAI


from crewai import Crew, Process, Agent

# Agents are defined with an optional tools parameter


researcher = Agent(
  role='Researcher',
  goal='Conduct in-depth analysis',
  # tools=[]  # This can be optionally specified; defaults to an empty list
)
writer = Agent(
  role='Writer',
  goal='Create engaging content',
  # tools=[]  # Optionally specify tools; defaults to an empty list
)

# Establishing the crew with a hierarchical process
project_crew = Crew(
  tasks=[...],  # Tasks to be delegated and executed under the manager's supervision
  agents=[researcher, writer],
  manager_llm=ChatOpenAI(temperature=0, model="gpt-4"),  # Mandatory for hierarchical process
  process=Process.hierarchical  # Specifies the hierarchical management approach
)
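Kicking off a hierarchical crew then works the same way as any other crew (a brief sketch; the tasks list above is assumed to be filled in):

# Start the hierarchical crew; the manager agent plans, delegates, and validates the work
result = project_crew.kickoff()
print(result)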

Workflow in Action

Task Assignment: The manager assigns tasks strategically, considering each agent's capabilities.

Execution and Review: Agents complete their tasks, with the manager ensuring quality standards.

Sequential Task Progression: Despite being a hierarchical process, tasks follow a logical order for smooth progression,
facilitated by the manager's oversight.

Conclusion
Adopting the hierarchical process in crewAI, with the correct configurations and understanding of the system's capabilities, facilitates an organized and efficient approach to project management.

Using Sequential Process

Introduction

CrewAI offers a flexible framework for executing tasks in a structured manner, supporting both sequential and hierarchical processes. This guide outlines how to effectively implement these processes to ensure efficient task execution and project completion.

Sequential Process Overview

The sequential process ensures tasks are executed one after the other, following a linear progression. This approach is ideal for projects requiring tasks to be completed in a specific order.

Key Features

Linear Task Flow: Ensures orderly progression by handling tasks in a predetermined sequence.
Simplicity: Best suited for projects with clear, step-by-step tasks.
Easy Monitoring: Facilitates easy tracking of task completion and project progress.

Implementing the Sequential Process


Assemble your crew and define tasks in the order they need to be executed.

from crewai import Crew, Process, Agent, Task

# Define your agents


researcher = Agent(
  role='Researcher',
  goal='Conduct foundational research',
  backstory='An experienced researcher with a passion for uncovering insights'
)
analyst = Agent(
  role='Data Analyst',
  goal='Analyze research findings',
  backstory='A meticulous analyst with a knack for uncovering patterns'
)
writer = Agent(
  role='Writer',
  goal='Draft the final report',
  backstory='A skilled writer with a talent for crafting compelling narratives'
)

# Define the tasks in sequence
research_task = Task(description='Gather relevant data...', agent=researcher)
analysis_task = Task(description='Analyze the data...', agent=analyst)
writing_task = Task(description='Compose the report...', agent=writer)

# Form the crew with a sequential process
report_crew = Crew(
  agents=[researcher, analyst, writer],
  tasks=[research_task, analysis_task, writing_task],
  process=Process.sequential
)

Workflow in Action

Initial Task: In a sequential process, the first agent completes their task and signals completion.

Subsequent Tasks: Agents pick up their tasks based on the process type, with outcomes of preceding tasks or manager directives guiding their execution.

Completion: The process concludes once the final task is executed, leading to project completion.

Conclusion
The sequential process in CrewAI provides a clear, straightforward path for task execution. It's particularly suited for projects requiring a logical progression of tasks, ensuring each step is completed before the next begins, thereby facilitating a cohesive final product.

Connecting to any LLM

Connect CrewAI to LLMs

Default LLM

By default, CrewAI uses OpenAI's GPT-4 model for language processing. However, you can configure your agents to use a different model or API. This guide will show you how to connect your agents to different LLMs through environment variables and direct instantiation.

CrewAI offers flexibility in connecting to various LLMs, including local models via Ollama and different APIs like Azure. It's compatible with all LangChain LLM components, enabling diverse integrations for tailored AI solutions.

CrewAI Agent Overview

The Agent class is the cornerstone for implementing AI solutions in CrewAI. Here's an updated overview reflecting the latest codebase changes:

Attributes:

role : Defines the agent's role within the solution.
goal : Specifies the agent's objective.
backstory : Provides a background story to the agent.
llm : Indicates the Large Language Model the agent uses.
function_calling_llm (optional): Will turn the ReAct crewAI agent into a function-calling agent.
max_iter : Maximum number of iterations for an agent to execute a task; default is 15.
memory : Enables the agent to retain information during execution.
max_rpm : Sets the maximum number of requests per minute.
verbose : Enables detailed logging of the agent's execution.
allow_delegation : Allows the agent to delegate tasks to other agents; default is True.
tools : Specifies the tools available to the agent for task execution.
step_callback : Provides a callback function to be executed after each step.

import os
from crewai import Agent

# Required
os.environ["OPENAI_MODEL_NAME"] = "gpt-4-0125-preview"

# Agent will automatically use the model defined in the environment variable
example_agent = Agent(
  role='Local Expert',
  goal='Provide insights about the city',
  backstory="A knowledgeable local guide.",
  verbose=True
)

Ollama Integration
Ollama is preferred for local LLM integration, offering customization and privacy benefits. To integrate Ollama with CrewAI, set the appropriate environment variables as shown below. Note: Detailed Ollama setup is beyond this document's scope, but general guidance is provided.

Setting Up Ollama

Environment Variables Configuration: To integrate Ollama, set the following environment variables:

OPENAI_API_BASE='http://localhost:11434/v1'
OPENAI_MODEL_NAME='openhermes' # Adjust based on available model
OPENAI_API_KEY=''
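Alternatively, because CrewAI is compatible with LangChain LLM components, an Ollama model object can be passed directly to an agent. A hedged sketch, assuming the langchain_community package is installed and the 'openhermes' model has been pulled locally:

from crewai import Agent
from langchain_community.llms import Ollama

ollama_llm = Ollama(model="openhermes")

local_agent = Agent(
  role='Local Expert',
  goal='Provide insights about the city',
  backstory='A knowledgeable local guide.',
  llm=ollama_llm
)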

OpenAI Compatible API Endpoints


Switch between APIs and models seamlessly using environment variables, supporting platforms like FastChat, LM Studio, and
Mistral AI.

Configuration Examples

FastChat

OPENAI_API_BASE="http://localhost:8001/v1"
OPENAI_MODEL_NAME='oh-2.5m7b-q51'
OPENAI_API_KEY=NA

LM Studio

OPENAI_API_BASE="http://localhost:8000/v1"
OPENAI_MODEL_NAME=NA
OPENAI_API_KEY=NA

Mistral API

OPENAI_API_KEY=your-mistral-api-key
OPENAI_API_BASE=https://api.mistral.ai/v1
OPENAI_MODEL_NAME="mistral-small"

Azure Open AI Configuration

For Azure OpenAI API integration, set the following environment variables:

AZURE_OPENAI_VERSION="2022-12-01"
AZURE_OPENAI_DEPLOYMENT=""
AZURE_OPENAI_ENDPOINT=""
AZURE_OPENAI_KEY=""

Example Agent with Azure LLM

import os

from dotenv import load_dotenv
from crewai import Agent
from langchain_openai import AzureChatOpenAI

load_dotenv()

azure_llm = AzureChatOpenAI(
  azure_endpoint=os.environ.get("AZURE_OPENAI_ENDPOINT"),
  api_key=os.environ.get("AZURE_OPENAI_KEY")
)

azure_agent = Agent(
  role='Example Agent',
  goal='Demonstrate custom LLM configuration',
  backstory='A diligent explorer of GitHub docs.',
  llm=azure_llm
)

Conclusion
Integrating CrewAI with different LLMs expands the framework's versatility, allowing for customized, efficient AI solutions across various domains and platforms.

Customizing Agents

Customizable Attributes

Crafting an efficient CrewAI team hinges on the ability to tailor your AI agents dynamically to meet the unique requirements of any project. This section covers the foundational attributes you can customize.

Key Attributes for Customization

Role: Specifies the agent's job within the crew, such as 'Analyst' or 'Customer Service Rep'.
Goal: Defines what the agent aims to achieve, in alignment with its role and the overarching objectives of the crew.
Backstory: Provides depth to the agent's persona, enriching its motivations and engagements within the crew.
Tools: Represents the capabilities or methods the agent uses to perform tasks, from simple functions to intricate integrations.

Advanced Customization Options

Beyond the basic attributes, CrewAI allows for deeper customization to enhance an agent's behavior and capabilities significantly.

Language Model Customization

Agents can be customized with specific language models ( llm ) and function-calling language models ( function_calling_llm ), offering advanced control over their processing and decision-making abilities. By default, crewAI agents are ReAct agents, but by setting the function_calling_llm you can turn them into function-calling agents.
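A minimal hedged sketch of these two attributes (the model names are illustrative choices, not recommendations from the original docs):

from crewai import Agent
from langchain_openai import ChatOpenAI

custom_agent = Agent(
  role='Research Analyst',
  goal='Provide up-to-date market analysis',
  backstory='An expert analyst with a keen eye for market trends.',
  llm=ChatOpenAI(model="gpt-4"),                           # main model used for reasoning
  function_calling_llm=ChatOpenAI(model="gpt-3.5-turbo")   # turns tool use into function calling
)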
Enabling Memory for Agents

CrewAI supports memory for agents, enabling them to remember past interactions. This feature is critical for tasks requiring awareness of previous contexts or decisions.

Performance and Debugging Settings

Adjusting an agent's performance and monitoring its operations are crucial for efficient task execution.

Verbose Mode and RPM Limit

Verbose Mode: Enables detailed logging of an agent's actions, useful for debugging and optimization. SpeciCcally, it provides
insights into agent execution processes, aiding in the optimization of performance.

RPM Limit: Sets the maximum number of requests per minute ( max_rpm ), controlling the agent's query frequency to external
services.

Maximum Iterations for Task Execution

The max_iter attribute allows users to define the maximum number of iterations an agent can perform for a single task, preventing infinite loops or excessively long executions. The default value is set to 15, providing a balance between thoroughness and efficiency. Once the agent approaches this number, it will try its best to give a good answer.

Customizing Agents and Tools


Agents are customized by defining their attributes and tools during initialization. Tools are critical for an agent's functionality, enabling them to perform specialized tasks. In this example, we will use the crewAI tools package to create a tool for a research analyst agent.

pip install 'crewai[tools]'

Example: Assigning Tools to an Agent

import os
from crewai import Agent
from crewai_tools import SerperDevTool

# Set API keys for tool initialization


os.environ["OPENAI_API_KEY"] = "Your Key"
os.environ["SERPER_API_KEY"] = "Your Key"

# Initialize a search tool


search_tool = SerperDevTool()

# Initialize the agent with advanced options


agent = Agent(
    role='Research Analyst',
    goal='Provide up-to-date market analysis',
    backstory='An expert analyst with a keen eye for market trends.',
    tools=[search_tool],
    memory=True,
    verbose=True,
    max_rpm=10,  # Optional: Limit requests to 10 per minute, preventing API abuse
    max_iter=5,  # Optional: Limit task iterations to 5 before the agent tries to give its best answer
    allow_delegation=False
)

Delegation and Autonomy


Controlling an agent's ability to delegate tasks or ask questions is vital for tailoring its autonomy and collaborative dynamics within the crewAI framework. By default, the allow_delegation attribute is set to True , enabling agents to seek assistance or delegate tasks as needed. This default behavior promotes collaborative problem-solving and efficiency within the crewAI ecosystem.

Example: Disabling Delegation for an Agent

agent = Agent(
    role='Content Writer',
    goal='Write engaging content on market trends',
    backstory='A seasoned writer with expertise in market analysis.',
    allow_delegation=False
)

Conclusion
Customizing agents in CrewAI by setting their roles, goals, backstories, and tools, alongside advanced options like language model customization, memory, performance settings, and delegation preferences, yields a nuanced and capable AI team ready for complex challenges.

Human Input in Agent Execution

Human input plays a pivotal role in several agent execution scenarios, enabling agents to seek additional information or clarification when necessary. This capability is invaluable in complex decision-making processes or when agents need more details to complete a task effectively.

Using Human Input with CrewAI

Incorporating human input with CrewAI is straightforward, enhancing the agent's ability to make informed decisions. While the documentation previously mentioned using a "LangChain Tool" and a specific "DuckDuckGoSearchRun" tool from langchain_community.tools , it's important to clarify that the integration of such tools should align with the actual capabilities and configurations defined within your Agent class setup.

Example:

pip install crewai
pip install 'crewai[tools]'

import os
from crewai import Agent, Task, Crew
from crewai_tools import SerperDevTool

from langchain.agents import load_tools

os.environ["SERPER_API_KEY"] = "Your Key" # serper.dev API key


os.environ["OPENAI_API_KEY"] = "Your Key"

# Loading Human Tools


human_tools = load_tools(["human"])
search_tool = SerperDevTool()

# Define your agents with roles, goals, and tools


researcher = Agent(
    role='Senior Research Analyst',
    goal='Uncover cutting-edge developments in AI and data science',
    backstory=(
        "You are a Senior Research Analyst at a leading tech think tank."
        "Your expertise lies in identifying emerging trends and technologies in AI and data science."
        "You have a knack for dissecting complex data and presenting actionable insights."
    ),
    verbose=True,
    allow_delegation=False,
    tools=[search_tool] + human_tools  # Passing human tools to the agent
)
writer = Agent(
    role='Tech Content Strategist',
    goal='Craft compelling content on tech advancements',
    backstory=(
        "You are a renowned Tech Content Strategist, known for your insightful and engaging articles on technology and innovation."
        "With a deep understanding of the tech industry, you transform complex concepts into compelling narratives."
    ),
    verbose=True,
    allow_delegation=True
)

# Create tasks for your agents


task1 = Task(
    description=(
        "Conduct a comprehensive analysis of the latest advancements in AI in 2024."
        "Identify key trends, breakthrough technologies, and potential industry impacts."
        "Compile your findings in a detailed report."
        "Make sure to check with a human if the draft is good before finalizing your answer."
    ),
    expected_output='A comprehensive full report on the latest AI advancements in 2024, leave nothing out',
    agent=researcher,
)

task2 = Task(
    description=(
        "Using the insights from the researcher's report, develop an engaging blog post that highlights the most significant AI advancements."
        "Your post should be informative yet accessible, catering to a tech-savvy audience."
        "Aim for a narrative that captures the essence of these breakthroughs and their implications for the future."
    ),
    expected_output='A compelling 3 paragraphs blog post formatted as markdown about the latest AI advancements in 2024',
    agent=writer
)

# Instantiate your crew with a sequential process


crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2
)

# Get your crew to work!


result = crew.kickoff()

print("######################")
print(result)

SerperDevTool Documentation

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is designed to perform a semantic search for a specified query from a text's content across the internet. It utilizes the serper.dev API to fetch and display the most relevant search results based on the query provided by the user.
Installation

To incorporate this tool into your project, follow the installation instructions below:

pip install 'crewai[tools]'

Example

The following example demonstrates how to initialize the tool and execute a search with a given query:

from crewai_tools import SerperDevTool

# Initialize the tool for internet searching capabilities
tool = SerperDevTool()

Steps to Get Started

To effectively use the SerperDevTool , follow these steps:

Package Installation: Confirm that the crewai[tools] package is installed in your Python environment.

API Key Acquisition: Acquire a serper.dev API key by registering for a free account at serper.dev .

Environment Configuration: Store your obtained API key in an environment variable named SERPER_API_KEY to facilitate its use by the tool.

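Once the key is configured, the tool is typically handed to an agent rather than called directly. A minimal sketch, assuming a valid serper.dev key (the value below is a placeholder):

import os
from crewai import Agent
from crewai_tools import SerperDevTool

os.environ["SERPER_API_KEY"] = "Your Key"  # placeholder; use your real serper.dev API key

search_agent = Agent(
    role='News Researcher',
    goal='Find recent articles on a given topic',
    backstory='A researcher who relies on web search for up-to-date facts.',
    tools=[SerperDevTool()]
)
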
Conclusion
By integrating the SerperDevTool into Python projects, users gain the ability to conduct real-time, relevant searches across the
internet directly from their applications. By adhering to the setup and usage guidelines provided, incorporating this tool into projects
is streamlined and straightforward.

ScrapeWebsiteTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

A tool designed to extract and read the content of a specified website. It is capable of handling various types of web pages by making HTTP requests and parsing the received HTML content. This tool can be particularly useful for web scraping tasks, data collection, or extracting specific information from websites.
Installation

Install the crewai_tools package

pip install 'crewai[tools]'

Example

from crewai_tools import ScrapeWebsiteTool

# To enable scraping any website it finds during its execution
tool = ScrapeWebsiteTool()

# Initialize the tool with the website URL, so the agent can only scrape the content of the specified website
tool = ScrapeWebsiteTool(website_url='https://www.example.com')

Arguments

website_url : Mandatory. The URL of the website to read. This is the primary input for the tool, specifying which website's content should be scraped and read.

DirectoryReadTool

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The DirectoryReadTool is a highly efficient utility designed for the comprehensive listing of directory contents. It recursively navigates through the specified directory, providing users with a detailed enumeration of all files, including those nested within subdirectories. This tool is indispensable for tasks requiring a thorough inventory of directory structures or for validating the organization of files within directories.

Installation

Install the crewai_tools package to use the DirectoryReadTool in your project. If you haven't added this package to your environment, you can easily install it with pip using the following command:

pip install 'crewai[tools]'

This installs the latest version of the crewai_tools package, allowing access to the DirectoryReadTool and other utilities.

Example

The DirectoryReadTool is simple to use. The code snippet below shows how to set up and use the tool to list the contents of a specified directory:

from crewai_tools import DirectoryReadTool

# Initialize the tool so the agent can read any directory's content it learns about during execution
tool = DirectoryReadTool()

# OR

# Initialize the tool with a specific directory, so the agent can only read the content of the specified directory
tool = DirectoryReadTool(directory='/path/to/your/directory')

Arguments

The DirectoryReadTool requires minimal configuration for use. The essential argument for this tool is as follows:

directory : Optional. Specifies the path to the directory whose contents you wish to list. It accepts both absolute and relative paths, guiding the tool to the desired directory for content listing.

FileReadTool

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The FileReadTool is a versatile component of the crewai_tools package, designed to streamline the process of reading and retrieving content from files. It is particularly useful in scenarios such as batch text file processing, runtime configuration file reading, and data importation for analytics. This tool supports various text-based file formats including .txt , .csv , .json and more, and adapts its functionality based on the file type, for instance, converting JSON content into a Python dictionary for easy use.

Installation

Install the crewai_tools package to use the FileReadTool in your projects:

pip install 'crewai[tools]'

Example

To get started with the FileReadTool:

from crewai_tools import FileReadTool

# Initialize the tool to read any file the agent knows about or learns the path for
file_read_tool = FileReadTool()

# OR

# Initialize the tool with a specific file path, so the agent can only read the content of the specified file
file_read_tool = FileReadTool(file_path='path/to/your/file.txt')

Arguments

file_path : The path to the file you want to read. It accepts both absolute and relative paths. Ensure the file exists and you have the necessary permissions to access it.

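Both file tools can be combined on a single agent, for example one that first inventories a directory and then reads individual files from it. A minimal sketch, assuming an illustrative ./reports directory (the path, role, and goal are placeholders):

from crewai import Agent
from crewai_tools import DirectoryReadTool, FileReadTool

docs_agent = Agent(
    role='Report Auditor',
    goal='Summarize the documents stored in the reports directory',
    backstory='A meticulous reviewer of internal documents.',
    tools=[DirectoryReadTool(directory='./reports'), FileReadTool()],
    verbose=True
)
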
SeleniumScrapingTool

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is designed for efficient web scraping, enabling users to extract content from web pages. It supports targeted scraping by allowing the specification of a CSS selector for desired elements. The flexibility of the tool enables it to be used on any website URL provided by the user, making it a versatile tool for various web scraping needs.

Installation

Install the crewai_tools package

pip install 'crewai[tools]'

Example

from crewai_tools import SeleniumScrapingTool

# Example 1: Scrape any website it finds during its execution
tool = SeleniumScrapingTool()

# Example 2: Scrape the entire webpage
tool = SeleniumScrapingTool(website_url='https://example.com')

# Example 3: Scrape a specific CSS element from the webpage
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content')

# Example 4: Scrape using optional parameters for customized scraping (cookie values here are illustrative)
tool = SeleniumScrapingTool(website_url='https://example.com', css_element='.main-content', cookie={'name': 'user', 'value': 'example'}, wait_time=10)

Arguments
website_url : Mandatory. The URL of the website to scrape.

css_element : Mandatory. The CSS selector for a specific element to scrape from the website.

cookie : Optional. A dictionary containing cookie information. This parameter allows the tool to simulate a session with cookie
information, providing access to content that may be restricted to logged-in users.

wait_time : Optional. The number of seconds the tool waits after loading the website and after setting a cookie, before
scraping the content. This allows for dynamic content to load properly.

DirectorySearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is designed to perform a semantic search for queries within the content of a specified directory. Utilizing the RAG (Retrieval-Augmented Generation) methodology, it offers a powerful means to semantically navigate through the files of a given directory. The tool can be dynamically set to search any directory specified at runtime or can be pre-configured to search within a specific directory upon initialization.

Installation

To start using the DirectorySearchTool, you need to install the crewai_tools package. Execute the following command in your terminal:

pip install 'crewai[tools]'

Example

The following examples demonstrate how to initialize the DirectorySearchTool for different use cases and how to perform a search:

from crewai_tools import DirectorySearchTool

# To enable searching within any specified directory at runtime
tool = DirectorySearchTool()

# Alternatively, to restrict searches to a specific directory
tool = DirectorySearchTool(directory='/path/to/directory')

Arguments

directory : This string argument specifies the directory within which to search. It is mandatory if the tool has not been initialized with a directory; otherwise, the tool will only search within the initialized directory.

PDFSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The PDFSearchTool is a RAG tool designed for semantic searches within PDF content. It allows for inputting a search query and a PDF document, leveraging advanced search techniques to find relevant content efficiently. This capability makes it especially useful for extracting specific information from large PDF files quickly.

Installation

To get started with the PDFSearchTool, first ensure the crewai_tools package is installed with the following command:

pip install 'crewai[tools]'

Example

Here's how to use the PDFSearchTool to search within a PDF document:

from crewai_tools import PDFSearchTool

# Initialize the tool allowing for any PDF content search if the path is provided during execution
tool = PDFSearchTool()

# OR

# Initialize the tool with a specific PDF path for exclusive search within that document
tool = PDFSearchTool(pdf='path/to/your/document.pdf')

Arguments
pdf : Optional. The PDF path for the search. Can be provided at initialization or within the run method's arguments. If provided at initialization, the tool confines its search to the specified document.

TXTSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is used to perform a RAG (Retrieval-Augmented Generation) search within the content of a text file. It allows for semantic searching of a query within a specified text file's content, making it an invaluable resource for quickly extracting information or finding specific sections of text based on the query provided.

Installation

To use the TXTSearchTool, you first need to install the crewai_tools package. This can be done using pip, a package manager for Python. Open your terminal or command prompt and enter the following command:

pip install 'crewai[tools]'

This command will download and install the TXTSearchTool along with any necessary dependencies.

Example

The following example demonstrates how to use the TXTSearchTool to search within a text file. This example shows both the initialization of the tool with a specific text file and the subsequent search within that file's content.

from crewai_tools import TXTSearchTool

# Initialize the tool to search within any text file's content the agent learns about during its execution
tool = TXTSearchTool()

# OR

# Initialize the tool with a specific text file, so the agent can search within the given text file's content
tool = TXTSearchTool(txt='path/to/text/file.txt')

Arguments
txt (str): Optional. The path to the text file you want to search. This argument is only required if the tool was not initialized with a specific text file; otherwise, the search will be conducted within the initially provided text file.

CSVSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is used to perform a RAG (Retrieval-Augmented Generation) search within a CSV file's content. It allows users to semantically search for queries in the content of a specified CSV file. This feature is particularly useful for extracting information from large CSV datasets where traditional search methods might be inefficient. All tools with "Search" in their name, including CSVSearchTool, are RAG tools designed for searching different sources of data.

Installation

Install the crewai_tools package

pip install 'crewai[tools]'

Example

from crewai_tools import CSVSearchTool

# Initialize the tool with a specific CSV file. This setup allows the agent to only search the given CSV file.
tool = CSVSearchTool(csv='path/to/your/csvfile.csv')

# OR

# Initialize the tool without a specific CSV file. Agent will need to provide the CSV path at runtime.
tool = CSVSearchTool()

Arguments
csv : The path to the CSV file you want to search. This is a mandatory argument if the tool was initialized without a specific CSV file; otherwise, it is optional.

XMLSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The XMLSearchTool is a cutting-edge RAG tool engineered for conducting semantic searches within XML files. Ideal for users needing to parse and extract information from XML content efficiently, this tool supports inputting a search query and an optional XML file path. By specifying an XML path, users can target their search more precisely to the content of that file, thereby obtaining more relevant search outcomes.

Installation

To start using the XMLSearchTool, you must first install the crewai_tools package. This can be easily done with the following command:

pip install 'crewai[tools]'

Example

Here are two examples demonstrating how to use the XMLSearchTool. The first example shows searching within a specific XML file, while the second example illustrates initiating a search without predefining an XML path, providing flexibility in search scope.
from crewai_tools.tools.xml_search_tool import XMLSearchTool

# Allow agents to search within any XML file's content as it learns about their paths during execution
tool = XMLSearchTool()

# OR

# Initialize the tool with a specific XML file path for exclusive search within that document
tool = XMLSearchTool(xml='path/to/your/xmlfile.xml')

Arguments
xml : This is the path to the XML file you wish to search. It is an optional parameter during the tool's initialization but must be provided either at initialization or as part of the run method's arguments to execute a search.

JSONSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is used to perform a RAG search within a JSON file's content. It allows users to initiate a search with a specific JSON path, focusing the search operation within that particular JSON file. If the path is provided at initialization, the tool restricts its search scope to the specified JSON file, thereby enhancing the precision of search results.

Installation

Install the crewai_tools package by executing the following command in your terminal:

pip install 'crewai[tools]'

Example

Below are examples demonstrating how to use the JSONSearchTool for searching within JSON files. You can either search any JSON content or restrict the search to a specific JSON file.

from crewai_tools import JSONSearchTool

# Example 1: Initialize the tool for a general search across any JSON content. This is useful when the path is known or can be discovered during execution.
tool = JSONSearchTool()

# Example 2: Initialize the tool with a specific JSON path, limiting the search to a particular JSON file.
tool = JSONSearchTool(json_path='./path/to/your/file.json')

Arguments
json_path (str): An optional argument that defines the path to the JSON file to be searched. This parameter is only necessary if the tool is initialized without a specific JSON path. Providing this argument restricts the search to the specified JSON file.

DOCXSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The DOCXSearchTool is a RAG tool designed for semantic searching within DOCX documents. It enables users to effectively search and extract relevant information from DOCX files using query-based searches. This tool is invaluable for data analysis, information management, and research tasks, streamlining the process of finding specific information within large document collections.

Installation

Install the crewai_tools package by running the following command in your terminal:

pip install 'crewai[tools]'

Example

The following example demonstrates initializing the DOCXSearchTool to search within any DOCX file's content or with a specific DOCX file path.

from crewai_tools import DOCXSearchTool

# Initialize the tool to search within any DOCX file's content
tool = DOCXSearchTool()

# OR

# Initialize the tool with a specific DOCX file, so the agent can only search the content of the specified DOCX file
tool = DOCXSearchTool(docx='path/to/your/document.docx')

Arguments
docx : An optional file path to a specific DOCX document you wish to search. If not provided during initialization, the tool allows for later specification of any DOCX file's content path for searching.

MDXSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The MDX Search Tool, a key component of the crewai_tools package, is designed for advanced market data extraction, offering invaluable support to researchers and analysts requiring immediate market insights in the AI sector. With its ability to interface with various data sources and tools, it streamlines the process of acquiring, reading, and organizing market data efficiently.

Installation

To utilize the MDX Search Tool, ensure the crewai_tools package is installed. If not already present, install it using the following command:

pip install 'crewai[tools]'

Example

Configuring and using the MDX Search Tool involves setting up environment variables and utilizing the tool within a crewAI project for market research. Here's a simple example:

from crewai_tools import MDXSearchTool

# Initialize the tool so the agent can search any MDX content it learns about during its execution
tool = MDXSearchTool()

# OR

# Initialize the tool with a specific MDX file path for exclusive search within that document
tool = MDXSearchTool(mdx='path/to/your/document.mdx')

Arguments
mdx : Optional. The MDX file path for the search. Can be provided at initialization.

PGSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is designed to facilitate semantic searches within PostgreSQL database tables. Leveraging RAG (Retrieval-Augmented Generation) technology, the PGSearchTool provides users with an efficient means of querying database table content, specifically tailored for PostgreSQL databases. It simplifies the process of finding relevant data through semantic search queries, making it an invaluable resource for users needing to perform advanced queries on extensive datasets within a PostgreSQL database.

Installation

To install the crewai_tools package and utilize the PGSearchTool, execute the following command in your terminal:

pip install 'crewai[tools]'

Example

Below is an example showcasing how to use the PGSearchTool to conduct a semantic search on a table within a PostgreSQL database:

from crewai_tools import PGSearchTool

# Initialize the tool with the database URI and the target table name
tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')

Arguments
The PGSearchTool requires the following arguments for its operation:

db_uri : A string representing the URI of the PostgreSQL database to be queried. This argument is mandatory and must include
the necessary authentication details and the location of the database.

table_name : A string specifying the name of the table within the database on which the semantic search will be performed.
This argument is mandatory.

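As with the other RAG tools, the configured PGSearchTool is usually attached to an agent rather than invoked directly. A minimal sketch reusing the connection details from the example above (the credentials, role, and goal are placeholders):

from crewai import Agent
from crewai_tools import PGSearchTool

pg_tool = PGSearchTool(db_uri='postgresql://user:password@localhost:5432/mydatabase', table_name='employees')

db_analyst = Agent(
    role='Database Analyst',
    goal='Answer questions about employee records',
    backstory='An analyst who queries the employees table semantically.',
    tools=[pg_tool]
)
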
WebsiteSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is specifically crafted for conducting semantic searches within the content of a particular website. Leveraging a Retrieval-Augmented Generation (RAG) model, it navigates through the information provided on a given URL. Users have the flexibility to either initiate a search across any website known or discovered during its usage or to concentrate the search on a predefined, specific website.

Installation

Install the crewai_tools package by executing the following command in your terminal:

pip install 'crewai[tools]'

Example

To utilize the WebsiteSearchTool for different use cases, follow these examples:

from crewai_tools import WebsiteSearchTool

# To enable the tool to search any website the agent comes across or learns about during its operation
tool = WebsiteSearchTool()

# OR

# To restrict the tool to only search within the content of a specific website.
tool = WebsiteSearchTool(website='https://example.com')

Arguments
website : An optional argument that specifies the valid website URL to perform the search on. This becomes necessary if the tool is initialized without a specific website. In the WebsiteSearchToolSchema , this argument is mandatory. However, in the FixedWebsiteSearchToolSchema , it becomes optional if a website is provided during the tool's initialization, as it will then only search within the predefined website's content.

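The two website tools are complementary: ScrapeWebsiteTool pulls a page's full content, while WebsiteSearchTool answers semantic queries over it. A minimal sketch of an agent that carries both, assuming the example.com URL used above (the role and goal are illustrative):

from crewai import Agent
from crewai_tools import ScrapeWebsiteTool, WebsiteSearchTool

docs_site = 'https://example.com'

web_researcher = Agent(
    role='Web Researcher',
    goal='Answer questions using the content of the documentation site',
    backstory='A researcher who reads and semantically searches a single website.',
    tools=[ScrapeWebsiteTool(website_url=docs_site), WebsiteSearchTool(website=docs_site)]
)
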
GitHubSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

The GitHubSearchTool is a Retrieval-Augmented Generation (RAG) tool specifically designed for conducting semantic searches within GitHub repositories. Utilizing advanced semantic search capabilities, it sifts through code, pull requests, issues, and repositories, making it an essential tool for developers, researchers, or anyone in need of precise information from GitHub.

Installation

To use the GitHubSearchTool, first ensure the crewai_tools package is installed in your Python environment:

pip install 'crewai[tools]'

This command installs the necessary package to run the GitHubSearchTool along with any other tools included in the crewai_tools package.

Example

Here's how you can use the GitHubSearchTool to perform semantic searches within a GitHub repository:

from crewai_tools import GitHubSearchTool

# Initialize the tool for semantic searches within a specific GitHub repository
tool = GitHubSearchTool(
    github_repo='https://github.com/example/repo',
    content_types=['code', 'issue']  # Options: code, repo, pr, issue
)

# OR

# Initialize the tool without a specific repository, so the agent can search any repository it learns about during its execution
tool = GitHubSearchTool(
    content_types=['code', 'issue']  # Options: code, repo, pr, issue
)

Arguments
github_repo : The URL of the GitHub repository where the search will be conducted. This is a mandatory field and specifies the target repository for your search.

content_types : Specifies the types of content to include in your search. You must provide a list of content types from the following options: code for searching within the code, repo for searching within the repository's general information, pr for searching within pull requests, and issue for searching within issues. This field is mandatory and allows tailoring the search to specific content types within the GitHub repository.

Code Docs RAG Search
YoutubeVideoSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is part of the crewai_tools package and is designed to perform semantic searches within Youtube video content, utilizing Retrieval-Augmented Generation (RAG) techniques. It is one of several "Search" tools in the package that leverage RAG for different sources. The YoutubeVideoSearchTool allows for flexibility in searches; users can search across any Youtube video content without specifying a video URL, or they can target their search to a specific Youtube video by providing its URL.

Installation

To utilize the YoutubeVideoSearchTool, you must first install the crewai_tools package. This package contains the YoutubeVideoSearchTool among other utilities designed to enhance your data analysis and processing tasks. Install the package by executing the following command in your terminal:

pip install 'crewai[tools]'

Example

To integrate the YoutubeVideoSearchTool into your Python projects, follow the example below. This demonstrates how to use the tool both for general Youtube content searches and for targeted searches within a specific video's content.

from crewai_tools import YoutubeVideoSearchTool

# General search across Youtube content without specifying a video URL, so the agent can search within any Youtube video content it learns about during its execution
tool = YoutubeVideoSearchTool()

# Targeted search within a specific Youtube video's content


tool = YoutubeVideoSearchTool(youtube_video_url='https://youtube.com/watch?v=example')

Arguments
The YoutubeVideoSearchTool accepts the following initialization arguments:

youtube_video_url : An optional argument at initialization but required if targeting a specific Youtube video. It specifies the Youtube video URL path you want to search within.

YoutubeChannelSearchTool

Depend on OpenAI

All RAG tools at the moment can only use OpenAI to generate embeddings; we are working on adding support for other providers.

Experimental

We are still working on improving tools, so there might be unexpected behavior or changes in the future.

Description

This tool is designed to perform semantic searches within a specific Youtube channel's content. Leveraging the RAG (Retrieval-Augmented Generation) methodology, it provides relevant search results, making it invaluable for extracting information or finding specific content without the need to manually sift through videos. It streamlines the search process within Youtube channels, catering to researchers, content creators, and viewers seeking specific information or topics.

Installation

To utilize the YoutubeChannelSearchTool, the crewai_tools package must be installed. Execute the following command in your shell to install:

pip install 'crewai[tools]'

Example

To begin using the YoutubeChannelSearchTool, follow the example below. This demonstrates initializing the tool with a specific Youtube channel handle and conducting a search within that channel's content.

from crewai_tools import YoutubeChannelSearchTool

# Initialize the tool to search within any Youtube channel's content the agent learns about during its execution
tool = YoutubeChannelSearchTool()

# OR

# Initialize the tool with a specific Youtube channel handle to target your search
tool = YoutubeChannelSearchTool(youtube_channel_handle='@exampleChannel')

Arguments
youtube_channel_handle : A mandatory string representing the Youtube channel handle. This parameter is crucial for
initializing the tool to specify the channel you want to search within. The tool is designed to only search within the content of the
provided channel handle.

Telemetry

CrewAI utilizes anonymous telemetry to gather usage statistics with the primary goal of enhancing the library. Our focus is on improving and developing the features, integrations, and tools most utilized by our users.

It's pivotal to understand that NO data is collected concerning prompts, task descriptions, agents' backstories or goals, usage of tools, API calls, responses, any data processed by the agents, or secrets and environment variables, with the exception of the conditions mentioned. When the share_crew feature is enabled, detailed data including task descriptions, agents' backstories or goals, and other specific attributes are collected to provide deeper insights while respecting user privacy.

Data Collected Includes:

Version of CrewAI: Assessing the adoption rate of our latest version helps us understand user needs and guide our updates.

Python Version: Identifying the Python versions our users operate with assists in prioritizing our support efforts for these
versions.

General OS Information: Details like the number of CPUs and the operating system type (macOS, Windows, Linux) enable us to focus our development on the most used operating systems and explore the potential for OS-specific features.

Number of Agents and Tasks in a Crew: Ensures our internal testing mirrors real-world scenarios, helping us guide users
towards best practices.

Crew Process Utilization: Understanding how crews are utilized aids in directing our development focus.

Memory and Delegation Use by Agents: Insights into how these features are used help evaluate their effectiveness and future.

Task Execution Mode: Knowing whether tasks are executed in parallel or sequentially influences our emphasis on enhancing parallel execution capabilities.

Language Model Utilization: Supports our goal to improve support for the most popular languages among our users.

Roles of Agents within a Crew: Understanding the various roles agents play aids in crafting better tools, integrations, and
examples.

Tool Usage: Identifying which tools are most frequently used allows us to prioritize improvements in those areas.

Opt-In Further Telemetry Sharing

Users can choose to share their complete telemetry data by setting the share_crew attribute to True in their crew configurations.
This opt-in approach respects user privacy and aligns with data protection standards by ensuring users have control over their data
sharing preferences. Enabling share_crew results in the collection of detailed crew and task execution data, including goal ,
backstory , context , and output of tasks. This enables a deeper insight into usage patterns while respecting the user's choice to
share.
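
A minimal sketch of opting in, reusing the researcher and writer agents and their tasks from the human input example earlier in this documentation:

from crewai import Crew

crew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    share_crew=True  # opt in to sharing detailed telemetry
)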

Updates and Revisions

We are committed to maintaining the accuracy and transparency of our documentation. Regular reviews and updates are performed to ensure our documentation accurately reflects the latest developments of our codebase and telemetry practices. Users are encouraged to review this section for the most current information on our data collection practices and how they contribute to the improvement of CrewAI.
