An intelligent chatbot application designed to help developers with OpenTelemetry integration and instrumentation. Built with Node.js, React, and powered by multiple LLM providers through LangChain with RAG (Retrieval Augmented Generation) capabilities.
- Multi-LLM Support: Compatible with OpenAI, Anthropic Claude, and AWS Bedrock
- RAG-Powered Responses: Uses vector search to provide contextually relevant answers
- OpenTelemetry Expertise: Pre-loaded with comprehensive OpenTelemetry documentation
- Modern Web Interface: Clean, responsive React-based chat interface
- Real-time Streaming: Fast response generation with typing indicators
- Source Attribution: Shows which documents were used to generate responses
- Provider Switching: Easily switch between different AI providers
```
┌─────────────────┐     ┌─────────────────┐     ┌──────────────────┐
│  React Client   │◄───►│   Express API   │◄───►│   LLM Provider   │
│                 │     │                 │     │  (OpenAI/Claude/ │
│  - Chat UI      │     │  - Chat Routes  │     │   Bedrock)       │
│  - Provider     │     │  - Admin Routes │     └──────────────────┘
│    Selection    │     │  - Middleware   │
└─────────────────┘     └─────────────────┘
                                │
                        ┌───────────────────┐
                        │   Vector Store    │
                        │   (ChromaDB)      │
                        │                   │
                        │  - OTel Docs      │
                        │  - Code Examples  │
                        │  - Best Practices │
                        └───────────────────┘
```
- Node.js 18+
- npm or yarn
- One or more LLM provider API keys:
  - OpenAI API key (required)
  - Anthropic API key (optional)
  - AWS credentials (for Bedrock, optional)
- **Clone the repository**

  ```bash
  git clone <repository-url>
  cd hny-ai-workshop
  ```
- **Run the quick start script**

  ```bash
  scripts/quick-start.sh
  ```
The quick start script is an all-in-one script that checks for and installs the required dependencies and starts the whole application. The steps below lay out what it does, so you can also perform them individually:
NOTE: When running `quick-start.sh` for the first time, the script may time out and exit while ChromaDB is starting up. If that happens, there is no need to worry: re-run the script and it will continue from where it left off.
OPTIONAL: the following manual steps are only needed if you are not using the quick start script.
- **Install dependencies**

  ```bash
  # Install concurrently
  npm install --save-dev concurrently

  # Install server dependencies
  npm install

  # Install client dependencies
  cd client && npm install && cd ..

  # Or use the convenience script
  npm run install-all
  ```
- **Configure environment variables**

  ```bash
  cp env.example .env
  ```

  Edit `.env` with your configuration:

  ```bash
  # Choose your default LLM provider
  DEFAULT_LLM_PROVIDER=openai

  # Add your API keys
  OPENAI_API_KEY=your_openai_api_key_here
  ANTHROPIC_API_KEY=your_anthropic_api_key_here

  # AWS Bedrock (optional)
  AWS_ACCESS_KEY_ID=your_aws_access_key
  AWS_SECRET_ACCESS_KEY=your_aws_secret_key
  AWS_REGION=us-east-1
  ```
- **Set up ChromaDB (vector database)**

  Install and start ChromaDB:

  ```bash
  pip install chromadb
  chroma run --host localhost --port 8000
  ```
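Before ingesting documents, you can confirm that ChromaDB is actually reachable. A minimal Node.js (18+) sketch, assuming ChromaDB's v1 HTTP heartbeat endpoint (`GET /api/v1/heartbeat`) on the host and port used above:

```javascript
// Minimal liveness probe for a local ChromaDB instance.
// Assumes the ChromaDB v1 heartbeat route; adjust if your ChromaDB version differs.
const CHROMA_URL = 'http://localhost:8000';

async function chromaIsUp(baseUrl = CHROMA_URL) {
  try {
    const res = await fetch(`${baseUrl}/api/v1/heartbeat`); // Node 18+ global fetch
    return res.ok;
  } catch {
    return false; // connection refused or other network error
  }
}

chromaIsUp().then((up) =>
  console.log(up ? 'ChromaDB is up' : 'ChromaDB is not reachable')
);
```

If this reports that ChromaDB is not reachable, re-check the `chroma run` command and the port before running the ingestion step.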
- **Ingest OpenTelemetry documentation**

  ```bash
  npm run setup-data
  ```
- **Start the application**

  ```bash
  # Development mode (starts both server and client)
  npm run dev

  # Or start server only
  npm start

  # Or start everything (client, server, and ChromaDB)
  npm run start:all
  # or
  scripts/quick-start.sh
  ```
- **Access the application**
  - Frontend: http://localhost:3000
  - API: http://localhost:3001/api
  - Health check: http://localhost:3001/api/health
- **Stop the application**

  ```bash
  npm run stop:all
  ```
- Open the web application in your browser
- Select your preferred LLM provider from the dropdown
- Ask questions about OpenTelemetry integration, for example:
  - "How do I set up auto-instrumentation for Express?"
  - "What's the difference between manual and automatic instrumentation?"
  - "How can I create custom spans?"
  - "How can I instrument a React web app?"
- `POST /api/chat` - Send a chat message
- `GET /api/chat/context` - Get context for a question
- `GET /api/chat/providers` - List available providers
- `POST /api/chat/test-provider` - Test a provider
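The request schema for the chat route isn't spelled out here; assuming it accepts a JSON body with `message` and an optional `provider` field (both field names are assumptions, not confirmed by this README), a client call might look like:

```javascript
// Hypothetical client for POST /api/chat.
// The body field names (message, provider) are assumptions; check the
// handlers in server/routes/ for the actual schema.
const API_BASE = 'http://localhost:3001/api';

function buildChatRequest(message, provider = 'openai') {
  return {
    url: `${API_BASE}/chat`,
    options: {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ message, provider }),
    },
  };
}

async function sendChat(message, provider) {
  const { url, options } = buildChatRequest(message, provider);
  const res = await fetch(url, options); // Node 18+ global fetch
  if (!res.ok) throw new Error(`chat request failed: ${res.status}`);
  return res.json();
}
```

Separating request construction from the network call keeps the payload shape easy to inspect and test.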
- `POST /api/admin/ingest` - Add documents to the knowledge base
- `GET /api/admin/vector-store/info` - Get vector store info
- `POST /api/admin/search` - Search documents
- `DELETE /api/admin/vector-store` - Reset the knowledge base
| Variable | Description | Default |
|----------|-------------|---------|
| `PORT` | Server port | `3001` |
| `NODE_ENV` | Environment | `development` |
| `DEFAULT_LLM_PROVIDER` | Default AI provider | `openai` |
| `OPENAI_API_KEY` | OpenAI API key | - |
| `ANTHROPIC_API_KEY` | Anthropic API key | - |
| `AWS_ACCESS_KEY_ID` | AWS access key | - |
| `AWS_SECRET_ACCESS_KEY` | AWS secret key | - |
| `CHROMA_DB_PATH` | Vector DB path | `./data/chroma_db` |
| `MAX_CONTEXT_LENGTH` | Max context tokens | `4000` |
| `TEMPERATURE` | LLM temperature | `0.7` |
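The defaults in the table above can be resolved with a small helper. This is an illustrative sketch, not necessarily how `server/config/index.js` does it (the `envOrDefault` name is ours):

```javascript
// Sketch: resolve the environment variables above, falling back to the
// documented defaults. envOrDefault is an illustrative helper name.
function envOrDefault(name, fallback) {
  const value = process.env[name];
  return value === undefined || value === '' ? fallback : value;
}

const config = {
  port: Number(envOrDefault('PORT', 3001)),
  nodeEnv: envOrDefault('NODE_ENV', 'development'),
  defaultLlmProvider: envOrDefault('DEFAULT_LLM_PROVIDER', 'openai'),
  chromaDbPath: envOrDefault('CHROMA_DB_PATH', './data/chroma_db'),
  maxContextLength: Number(envOrDefault('MAX_CONTEXT_LENGTH', 4000)),
  temperature: Number(envOrDefault('TEMPERATURE', 0.7)),
};
```

Note the numeric variables are passed through `Number()` so a value set in `.env` (always a string) ends up with the type the rest of the code expects.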
You can add your own documentation to the knowledge base:
```bash
curl -X POST http://localhost:3001/api/admin/ingest \
  -H "Content-Type: application/json" \
  -d '{
    "title": "Custom OTel Guide",
    "content": "Your documentation content here...",
    "source": "internal-docs",
    "metadata": {
      "type": "guide",
      "version": "1.0"
    }
  }'
```
```
hny-ai-workshop/
├── server/                 # Backend Express.js application
│   ├── config/             # Configuration and logging
│   ├── middleware/         # Express middleware
│   ├── routes/             # API route handlers
│   ├── services/           # Business logic services
│   └── index.js            # Server entry point
├── client/                 # React frontend application
│   ├── src/
│   │   ├── components/     # React components
│   │   ├── services/       # API service layer
│   │   └── App.js          # Main app component
│   └── public/             # Static assets
├── scripts/                # Utility scripts
│   └── ingest-data.js      # Data ingestion script
├── data/                   # Data storage directory
└── docs/                   # Documentation
```
- Install the LangChain integration package
- Add provider configuration to `server/config/index.js`
- Initialize the provider in `server/services/llmProvider.js`
- Update the environment variable documentation
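The registration pattern these steps describe can be sketched as a factory map keyed by provider name. This is a hedged illustration only; the actual wiring in `server/services/llmProvider.js` may differ, and the factory return values here are placeholders rather than real LangChain model instances:

```javascript
// Illustrative provider-registry sketch for the steps above.
// Factories would normally construct LangChain chat models; here they
// return plain placeholder objects so the lookup/error logic is clear.
const providerFactories = {
  openai: (cfg = {}) => ({ name: 'openai', model: cfg.model ?? 'default-openai-model' }),
  anthropic: (cfg = {}) => ({ name: 'anthropic', model: cfg.model ?? 'default-claude-model' }),
  // A new provider would be registered here, e.g.:
  // mistral: (cfg) => new ChatMistralAI({ apiKey: cfg.apiKey, model: cfg.model }),
};

function createProvider(name, cfg = {}) {
  const factory = providerFactories[name];
  if (!factory) {
    const known = Object.keys(providerFactories).join(', ');
    throw new Error(`Unknown LLM provider: ${name}. Known providers: ${known}`);
  }
  return factory(cfg);
}
```

Failing fast with the list of known providers makes misconfigured `DEFAULT_LLM_PROVIDER` values easy to diagnose in the server logs.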
```bash
npm test
```
```bash
# Build the React client
npm run build

# Start in production mode
NODE_ENV=production npm start
```
- **No providers available**
  - Check your API keys in `.env`
  - Verify the provider is properly configured
  - Check server logs for initialization errors
- **ChromaDB connection issues**
  - Ensure ChromaDB is running on localhost:8000
  - Check if the collection exists
  - Try resetting the vector store
- **Frontend can't reach API**
  - Verify the proxy configuration in `client/package.json`
  - Check that the backend is running on port 3001
  - Look for CORS issues in the browser console
Enable debug logging:

```bash
NODE_ENV=development npm start
```
- Fork the repository
- Create a feature branch
- Make your changes
- Add tests if applicable
- Submit a pull request
MIT License - see LICENSE file for details
- OpenTelemetry community for excellent documentation
- LangChain for RAG capabilities
- All the AI providers for making this possible
For more information, check out the OpenTelemetry documentation or ask the chatbot!