This project demonstrates why a hybrid chat agent (classifier + LLM) can be more robust and secure than a pure LLM approach: an intent classifier handles core tasks (like "check balance"), ensuring accuracy and keeping the LLM's actions under explicit control.
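As a rough illustration of that routing idea, the sketch below shows a classifier-first dispatch with an LLM fallback. All names, thresholds, and handlers here are hypothetical stand-ins, not this project's actual code:

```python
# Hypothetical sketch of the hybrid routing pattern: the classifier owns the
# core intents; anything it is not confident about falls through to the LLM.
from typing import Callable, Dict, Tuple


def classify_intent(text: str) -> Tuple[str, float]:
    """Stand-in for the fine-tuned intent classifier (returns intent, confidence)."""
    if "balance" in text.lower():
        return "check_balance", 0.95
    return "other", 0.30


def check_balance(_: str) -> str:
    # Deterministic, auditable handler for a core task
    return "Your balance is $1,234.56."


CORE_HANDLERS: Dict[str, Callable[[str], str]] = {"check_balance": check_balance}


def llm_generate(text: str) -> str:
    # Placeholder for the call to the LLM (Ollama in this project)
    return f"(free-form LLM answer to: {text!r})"


def handle_query(text: str, threshold: float = 0.8) -> str:
    intent, confidence = classify_intent(text)
    if confidence >= threshold and intent in CORE_HANDLERS:
        return CORE_HANDLERS[intent](text)  # classifier-controlled path
    return llm_generate(text)               # LLM fallback path


if __name__ == "__main__":
    print(handle_query("Can you check my balance?"))
    print(handle_query("Tell me a joke about banks."))
```

The key design point is that the LLM never decides on its own whether a core action runs; the classifier gates that path.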
- Install Ollama from ollama.com
- Pull and run the llama3.2 model:
  ```bash
  ollama pull llama3.2:latest
  ollama run llama3.2:latest
  ```

  The Python API server will connect to Ollama running on `localhost:11434` by default (a quick connectivity check is shown after this list).
- Install Python dependencies:
  ```bash
  cd classify_query
  pip install -r requirements.txt
  ```

- Train the classifier:

  ```bash
  python fine_tune.py
  ```

  This will train the classifier on the financial intents dataset and save the model for use by the API server (an illustrative stand-in for this step is sketched below).
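For a rough picture of what the training step produces, here is a minimal stand-in: a TF-IDF + logistic-regression intent classifier trained on a toy dataset and saved to disk. The real `fine_tune.py` may use a different model, dataset, and output path, so treat every name below as illustrative only:

```python
# Illustrative stand-in for the training step, NOT the real fine_tune.py:
# a tiny TF-IDF + logistic-regression intent classifier saved to disk.
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "what is my account balance",
    "show me my balance please",
    "transfer 50 dollars to savings",
    "send money to my checking account",
]
labels = ["check_balance", "check_balance", "transfer_funds", "transfer_funds"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, labels)
joblib.dump(model, "intent_classifier.joblib")  # hypothetical output path

print(model.predict(["can you check my balance"]))  # expected: ['check_balance']
```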
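Once Ollama is running, you can confirm that the API server will be able to reach it on the default port with a quick request to Ollama's generate endpoint. This check assumes the `requests` package is available (it is not necessarily in `requirements.txt`):

```python
# Quick sanity check that Ollama is reachable on localhost:11434.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2:latest",
        "prompt": "Reply with the single word: ready",
        "stream": False,          # return one JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])    # the model's generated text
```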