A simple, open-source example demonstrating how to build a personalized AI chat application using Fastino's Pioneer Personalization API with OpenAI's GPT-4o.
This project showcases how to:
- Build a personalized AI assistant that learns from conversations
- Integrate Pioneer's personalization API to provide context-aware responses
- Use GPT-4o for natural language understanding and generation
- Create a clean, modern UI for chat interactions
- Automatically ingest conversations for continuous learning
Features:
- Contextual Memory - Remembers user preferences and past conversations
- Personalized Responses - Adapts to user communication style
- Relevant Context Retrieval - Shows what context was used for each response
- User Profile Summaries - Displays learned information about the user
Before you begin, you'll need:
- Python 3.9+ installed
- Node.js 18+ and npm installed
- Pioneer API Key - Get one at https://fastino.ai
- OpenAI API Key - Get one at https://platform.openai.com/api-keys
Clone the repository:

git clone <your-repo-url>
cd pioneer-example

Copy the example environment file:

cp env.example .env

Edit .env and add your API keys:
# Required
PIONEER_API_KEY=pio_sk_your_api_key_here
OPENAI_API_KEY=sk-your_openai_key_here
# Optional
BACKEND_PORT=8000

On macOS/Linux:
chmod +x run.sh
./run.sh

On Windows:

run.bat

The script will:
- Create a virtual environment
- Install all dependencies
- Start both backend and frontend
- Open the app at http://localhost:3000
Alternatively, you can set everything up manually.

Install Backend Dependencies:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
pip install -r requirements.txt

Install Frontend Dependencies:
cd frontend
npm install
cd ..

Start Backend (Terminal 1):
source venv/bin/activate
python backend/main.py

Start Frontend (Terminal 2):
cd frontend
npm run dev

Navigate to http://localhost:3000
You'll see a registration screen. Enter your email (use the same email as your Pioneer account) and click "Start Chatting"!
How it works:

When you first open the app, you'll see a registration screen where you enter your email. This calls Pioneer's /register endpoint:
POST /register
{
  "email": "user@example.com",
  "purpose": "A personalized AI chat assistant..."
}
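If you want to exercise this step outside the app, a minimal sketch of the registration call might look like the following. The base URL and Bearer-token header are assumptions (this README does not document them); check the Pioneer documentation for the real values:

```python
# Hedged sketch of registering a user with Pioneer (not the app's actual code).
import os
import requests

PIONEER_BASE_URL = "https://api.fastino.ai"  # assumed base URL; use the one from the Pioneer docs

resp = requests.post(
    f"{PIONEER_BASE_URL}/register",
    headers={"Authorization": f"Bearer {os.environ['PIONEER_API_KEY']}"},  # assumed auth scheme
    json={
        "email": "user@example.com",
        "purpose": "A personalized AI chat assistant...",
    },
    timeout=30,
)
resp.raise_for_status()
```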
Pioneer generates a profile summary from available data:

GET /summary?user_id=user@example.com

This summary is added to the system prompt for personalization.
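Here is a sketch of fetching the summary and folding it into the system prompt, again assuming a Bearer-token header and that the response JSON contains a summary field (both are assumptions):

```python
# Hedged sketch: fetch the profile summary and add it to the system prompt.
import os
import requests

PIONEER_BASE_URL = "https://api.fastino.ai"  # assumed

resp = requests.get(
    f"{PIONEER_BASE_URL}/summary",
    params={"user_id": "user@example.com"},
    headers={"Authorization": f"Bearer {os.environ['PIONEER_API_KEY']}"},  # assumed
    timeout=30,
)
resp.raise_for_status()
summary = resp.json().get("summary", "")  # assumed response field

system_prompt = (
    "You are a personalized AI assistant.\n"
    f"What you know about this user:\n{summary}"
)
```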
For each message, relevant context is retrieved:
POST /chunks
{
  "user_id": "user@example.com",
  "history": [...conversation...],
  "k": 5
}

The user's message is enhanced with:
- User profile summary (in system prompt)
- Relevant context chunks (appended to message)
OpenAI generates a response with full context awareness.
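Putting retrieval and generation together, one chat turn looks roughly like the sketch below. The /chunks response shape (a chunks list), base URL, and auth header are assumptions; the OpenAI call mirrors the parameters used in backend/main.py:

```python
# Hedged sketch of a single turn: retrieve context, enhance the message, call GPT-4o.
import os
import requests
from openai import OpenAI

PIONEER_BASE_URL = "https://api.fastino.ai"  # assumed
openai_client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_message = "What should I eat for dinner?"
history = [{"role": "user", "content": user_message}]

chunk_resp = requests.post(
    f"{PIONEER_BASE_URL}/chunks",
    headers={"Authorization": f"Bearer {os.environ['PIONEER_API_KEY']}"},  # assumed
    json={"user_id": "user@example.com", "history": history, "k": 5},
    timeout=30,
)
chunk_resp.raise_for_status()
chunks = chunk_resp.json().get("chunks", [])  # assumed response field

# Append the retrieved context to the user's message.
context = "\n".join(str(c) for c in chunks)
enhanced_message = f"{user_message}\n\nRelevant context about me:\n{context}" if context else user_message

system_prompt = "You are a personalized AI assistant."  # plus the /summary text from the previous sketch

completion = openai_client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": enhanced_message},
    ],
    temperature=0.7,
    max_tokens=1000,
)
print(completion.choices[0].message.content)
```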
The conversation is automatically ingested back to Pioneer:
POST /ingest
{
  "user_id": "user@example.com",
  "message_history": [...]
}

This creates a continuous learning loop!
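A corresponding sketch of the ingest call (same base-URL and header assumptions as above):

```python
# Hedged sketch: send the finished exchange back to Pioneer for continuous learning.
import os
import requests

PIONEER_BASE_URL = "https://api.fastino.ai"  # assumed

message_history = [
    {"role": "user", "content": "What should I eat for dinner?"},
    {"role": "assistant", "content": "...model reply..."},
]

resp = requests.post(
    f"{PIONEER_BASE_URL}/ingest",
    headers={"Authorization": f"Bearer {os.environ['PIONEER_API_KEY']}"},  # assumed
    json={"user_id": "user@example.com", "message_history": message_history},
    timeout=30,
)
resp.raise_for_status()
```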
The backend exposes the following endpoints:

- POST /chat - Send a message and get a personalized response
  { "message": "What should I eat for dinner?", "conversation_history": [], "user_email": "user@example.com" }
- POST /register - Register a new user
  { "email": "user@example.com", "name": "John Doe", "timezone": "America/Los_Angeles" }
- GET /health - Health check
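For example, once the backend is running you can hit these endpoints from Python (assuming the default BACKEND_PORT=8000; the response shapes are not documented here, so the sketch just prints whatever comes back):

```python
# Exercise the example backend locally.
import requests

BASE = "http://localhost:8000"  # default BACKEND_PORT

# Health check
print(requests.get(f"{BASE}/health", timeout=10).status_code)

# Register a user
requests.post(
    f"{BASE}/register",
    json={"email": "user@example.com", "name": "John Doe", "timezone": "America/Los_Angeles"},
    timeout=30,
).raise_for_status()

# Send a chat message
chat = requests.post(
    f"{BASE}/chat",
    json={
        "message": "What should I eat for dinner?",
        "conversation_history": [],
        "user_email": "user@example.com",
    },
    timeout=60,
)
print(chat.json())
```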
Edit backend/main.py and change the model parameter:
completion = openai_client.chat.completions.create(
    model="gpt-4o",  # Change to "gpt-3.5-turbo", "gpt-4-turbo", etc.
    messages=messages,
    temperature=0.7,
    max_tokens=1000
)

Modify the chunk retrieval parameters in backend/main.py:
chunks = await get_relevant_chunks(
    user_email,
    conversation,
    k=5,  # Number of chunks (increase for more context)
)

And update the similarity threshold:
json={
    "user_id": user_email,
    "history": history,
    "k": k,
    "similarity_threshold": 0.25  # Lower = more permissive (0.15-0.50)
}

The frontend is built with React and uses CSS for styling. Edit:
- frontend/src/App.jsx - Main component logic
- frontend/src/App.css - Styling
Personalization Patterns:
- System Prompt Enhancement - Add user summary to every session
- Contextual Grounding - Retrieve relevant chunks at every turn
- Tool-Augmented Agent - Use /query for complex questions (see the sketch after this list)
- Continuous Learning - Ingest conversations for improvement
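A hypothetical sketch of the tool-augmented pattern is shown below. The /query endpoint name comes from the list above, but the request body, response shape, base URL, and auth header are all assumptions, so treat this purely as a starting point and consult the Pioneer documentation:

```python
# Hypothetical /query call; the payload schema is an assumption.
import os
import requests

PIONEER_BASE_URL = "https://api.fastino.ai"  # assumed

resp = requests.post(
    f"{PIONEER_BASE_URL}/query",
    headers={"Authorization": f"Bearer {os.environ['PIONEER_API_KEY']}"},  # assumed
    json={"user_id": "user@example.com", "query": "What restaurants have I mentioned liking?"},  # assumed fields
    timeout=30,
)
print(resp.json())
```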
Privacy and security:
- All API keys should be stored in .env (never commit to git)
- Pioneer automatically anonymizes PII using GLiNER-2
- User data is isolated by user_id
- Frontend runs on your local machine
This is an open-source example project. Feel free to:
- Fork and modify for your use case
- Submit issues for bugs or improvements
- Share your own implementations
See CONTRIBUTING.md for details.
Apache-2.0 License - see LICENSE file for details
- Fastino Pioneer API - Personalization infrastructure
- OpenAI - GPT-4o language model
Questions? Check out the Pioneer Documentation or open an issue!