This bot provides information about AllSee products and services, but it can be adapted to your own needs (see the instructions below).
- The bot replies to users according to the provided system prompt.
- The bot can send hyperlinks.
- The bot can send arbitrary files.
- The bot can call retrieval to get information from the provided documents.
- The bot uses python-telegram-bot to interact with the Telegram Bot API.
- The bot uses pydantic-settings for configuration management. All settings are stored in the `.env` file.
- The bot uses LangGraph for managing conversations, function calling, and so on.
- The bot uses ChatOpenAI from langchain_openai as the LLM wrapper. It can easily be replaced with another LLM by updating the `llm.py` file.
- The bot uses Chroma from langchain_chroma as the vector database (for retrieval). Currently, all retrieval-related code lives in `retrieval.py`.
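Since all settings come from the `.env` file via pydantic-settings, a hypothetical `.env` might look like the following. The variable names here are illustrative guesses only; the authoritative list, with instructions for each variable, is in `.env.example`:

```
# Hypothetical variable names -- check .env.example for the real ones
TELEGRAM_BOT_TOKEN=123456:ABC-your-telegram-token
OPENAI_API_KEY=sk-your-openai-key
```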
- Clone the repository

  ```bash
  git clone https://github.com/allseeteam/allsee-info-bot
  cd allsee-info-bot
  ```

- Create a virtual environment, activate it, and install the dependencies. Tested on Python 3.11.11

  ```bash
  python -m venv venv
  source venv/bin/activate
  pip install -r requirements.txt
  ```

- Create a `.env` file from `.env.example` and fill in the required variables (instructions for each variable are in the file):

  ```bash
  cp ./env/.env.example ./env/.env
  nano ./env/.env
  ```
- Add your data to the `data` folder and update the paths in `retrieval.py` and `manager.py`. Files used in `retrieval.py` should be in `.md` format, with each part separated by a "#" heading. For example:

  ```md
  # Part 1
  This is part 1 of the document.

  # Part 2
  This is part 2 of the document.

  # Part 3
  This is part 3 of the document.
  ```

  You can use any number of parts, but they should be in `.md` format. You can also use other formats like `.txt`, but then you will need to update the code in `retrieval.py`.

- Start the bot

  ```bash
  python -m src.bot
  ```
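The `#`-splitting convention for retrieval documents can be sketched in plain Python. This is a hypothetical helper for illustration, not the actual code from `retrieval.py`:

```python
def split_md_parts(text: str) -> list[str]:
    """Split a markdown document into parts on top-level '#' headings."""
    parts: list[list[str]] = []
    for line in text.splitlines():
        # A new "#" heading starts a new part (the first line always does).
        if line.startswith("# ") or not parts:
            parts.append([])
        parts[-1].append(line)
    return ["\n".join(p).strip() for p in parts]

doc = """# Part 1
This is part 1 of the document.

# Part 2
This is part 2 of the document."""

print(split_md_parts(doc))  # two parts, one per "#" heading
```

Each returned part would then be embedded and stored in the vector database as a separate retrieval chunk.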
You can also run the bot using Docker:
- Make sure you have Docker and Docker Compose installed:

  ```bash
  # Arch Linux
  sudo pacman -S docker docker-compose
  sudo systemctl enable docker
  sudo systemctl start docker
  sudo usermod -aG docker $USER  # Log out and back in for this to take effect
  ```

- Build and start the container:

  ```bash
  sudo docker compose --env-file ./env/.env up --build
  ```
The bot will use the environment variables from the `./env/.env` file.
- Some parts of the code use global variables, such as `llm` from `llm.py`. In the future we need to make sure that all global variables are safe from modification by other threads, and that methods on globally shared objects like `llm` and `graph` are non-blocking (e.g. by using async methods).
- For now the Chroma setup is hardcoded in `retrieval.py`: Chroma runs with default settings, and the documents (as well as other configuration) are hardcoded. In the future we need a config for specifying which documents to use for retrieval.
- We need to make our agents more configurable: create separate config files for storing system prompts and other settings.
- We need to add an agent for processing structured data such as tables.
- We need to properly handle all runtime errors and exceptions.
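The non-blocking concern above can be addressed with `asyncio.to_thread`, which off-loads a synchronous call to a worker thread so the event loop keeps serving other chats. Here `blocking_llm_call` is a hypothetical stand-in for a synchronous call such as `llm.invoke(...)`:

```python
import asyncio


def blocking_llm_call(prompt: str) -> str:
    # Hypothetical stand-in for a synchronous LLM call, e.g. llm.invoke(prompt).
    return f"answer to: {prompt}"


async def handle_message(prompt: str) -> str:
    # Run the blocking call in a worker thread instead of blocking the event loop.
    return await asyncio.to_thread(blocking_llm_call, prompt)


result = asyncio.run(handle_message("hello"))
print(result)  # -> answer to: hello
```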
- You can change the LLM used in `llm.py`.
- You can update the manager agent's system prompt in `manager.py`.
- You can add new manager-agent tools, or update existing ones, in `tools`.
- You can add new agents by creating similar modules and setting up handshakes between them, as shown in this doc.
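A new manager-agent tool is, at its core, a plain function with a docstring that the LLM can decide to call. Below is a hypothetical sketch with made-up example data; in the real project you would wrap such a function (e.g. with LangChain's `@tool` decorator) and register it alongside the existing tools in `tools`:

```python
def get_support_hours(day: str) -> str:
    """Return the support desk hours for a given weekday (hypothetical example data)."""
    hours = {
        "saturday": "closed",
        "sunday": "closed",
    }
    # Weekdays fall through to the default office hours (made-up values).
    return hours.get(day.lower(), "9:00-18:00")


print(get_support_hours("monday"))  # -> 9:00-18:00
print(get_support_hours("Sunday"))  # -> closed
```

The docstring matters: it is what the LLM reads when deciding whether (and how) to call the tool.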