FileChat is an AI assistant designed to help users understand and improve their local projects. It allows you to chat about files in your local folder while maintaining full control over your code.
FileChat is still quite new and under active development. Expect bugs! If you find one, or if you have a feature suggestion, please create an issue.
Here is a short video:
filechat.webm
## Features

- Project Indexing: Creates a searchable index of your project files
- Contextual Chat: Ask questions about your project with AI that understands your codebase
- Real-time Updates: Automatically detects and indexes file changes
- Chat History: ChatGPT-like chat history for each directory
- Configurable: Customize which files to index and choose your own LLM provider. We currently support models from:
    - Mistral AI
    - OpenAI
    - Self-hosted servers with an OpenAI-compatible API, such as Ollama or llama.cpp. We recommend a context window of at least 16384 tokens.
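If you go the self-hosted route, here is a minimal sketch of starting llama.cpp's `llama-server` with the recommended context window; the model path and port are placeholders for your own setup:

```
# Serve a local model over an OpenAI-compatible API with a
# 16384-token context window (model path and port are placeholders)
llama-server -m ./models/your-model.gguf -c 16384 --port 8080
```

The server then exposes an OpenAI-compatible endpoint at http://localhost:8080/v1, which you can use as the base URL during FileChat's setup.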
## Requirements

- Python 3.12 or higher
- An API key for the LLM provider you want to use, or access to a self-hosted LLM server with an OpenAI-compatible API
- On Windows, you need the Visual C++ Redistributable. It is most likely already installed on your machine.
## Installation

You can use any package manager you like. Here is an example using pip:

```
pip install filechat
```

And here is an example of installing FileChat as a uv tool:

```
uv tool install filechat
```

On Linux, you should also specify a hardware accelerator as an optional dependency.
This accelerator will be used to run the local embedding model.
We support xpu (Intel Arc) and cuda (NVIDIA). If you don't specify a hardware accelerator, the embedding model will run on the CPU.
Here is an example of installing FileChat with xpu support:

pip:

```
pip install filechat[xpu]
```

uv tool:

```
uv tool install filechat[xpu]
```
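To check whether the accelerator is actually visible on your machine, you can run a quick probe. This is a sketch assuming the local embedding model runs on PyTorch, which the cuda/xpu extras suggest but which is an assumption about FileChat's internals:

```
# Assumption: the embedding model runs on PyTorch.
# torch.xpu requires a recent PyTorch build with XPU support.
python -c "import torch; print('cuda:', torch.cuda.is_available()); print('xpu:', torch.xpu.is_available())"
```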
## Installation from source

- Clone the repository:

```
git clone https://github.com/msvana/filechat
cd filechat
```

- Install dependencies using uv:

```
uv sync
```

- (Optional) Install GPU support:

```
# CUDA (NVIDIA)
uv sync --extra cuda

# XPU (Intel Arc)
uv sync --extra xpu
```

## Usage

```
filechat /path/to/your/project
```

On the first run, FileChat guides you through an initial setup where you will choose your LLM provider, select a model, and set an API key.
These settings are then stored in ~/.config/filechat.json. Feel free to edit this file as needed.
You can invoke the initial setup at any time by running FileChat with the `--setup` or `-s` flag. You can point FileChat at a different config file with the `--config` or `-c` argument.
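For example (a sketch; combining these flags with the project path in this order is an assumption, so check `filechat --help` for the exact syntax):

```
# Re-run the interactive setup for a project
filechat --setup /path/to/your/project

# Point FileChat at a config file stored elsewhere
filechat --config /path/to/custom-config.json /path/to/your/project
```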
Here is an example of a valid config file:
```
{
    "max_file_size_kb": 25,
    "ignored_dirs": [".git", "__pycache__", ".venv", ".pytest_cache", "node_modules", "dist"],
    "allowed_suffixes": [".md", ".txt", ".json", ".toml", ".html", ".css", ...],
    "index_store_path": "/home/milos/.cache/filechat",
    "model": {
        "provider": "openai",
        "model": "gpt-5-mini",
        "api_key": "[VALID_OPENAI_API_KEY]",
        "base_url": null
    }
}
```
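For a self-hosted server, point `base_url` at the server's OpenAI-compatible endpoint. Here is a sketch of the `model` block assuming an Ollama instance on its default port; the model name is a placeholder, and whether self-hosted servers reuse the `"openai"` provider string is an assumption:

```
"model": {
    "provider": "openai",
    "model": "qwen2.5-coder",
    "api_key": "not-needed",
    "base_url": "http://localhost:11434/v1"
}
```

Ollama ignores the API key, but the field may still need a non-empty value.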