A full-stack AI assistant that helps job seekers with resume feedback, mock interview preparation, and career advice using Retrieval-Augmented Generation (RAG) and Model Context Protocol (MCP).
🎥 Watch the Full Demo on YouTube - a 15-20 minute walkthrough covering design decisions, architecture, and a live demonstration of all features.
This project focuses on Career Coaching and Job Search Assistance, a domain that is particularly well-suited for RAG + tools because:
- Rich Knowledge Base: Career guidance requires access to extensive, up-to-date information about resume writing, interview techniques, industry trends, and career transitions
- Practical Tools: Job seekers need actionable tools like resume analysis and mock interview generation that can provide personalized feedback
- Dynamic Content: Career advice needs to be current and relevant, making RAG essential for retrieving the most recent and applicable guidance
- Personalization: Different career stages and industries require tailored advice, which tools can provide based on user input
- Measurable Impact: Career coaching has clear success metrics (job placement, interview success, salary negotiation outcomes)
- Resume Analysis: Get detailed feedback on your resume with specific improvement suggestions using LLM-enhanced analysis
- Mock Interview Preparation: Receive role-specific interview questions and preparation tips
- Career Advice: Access curated career guidance from indexed documents using RAG
- Enhanced Chat Interface: Intelligent chat with automatic tool selection and multiple formatting options
- Streaming Chat: Real-time streaming responses with Server-Sent Events for better user experience
- RAG Pipeline: Semantic search across 20+ career documents for relevant advice
- MCP Server: FastMCP-based tools for seamless AI interactions
- Modern Web UI: Beautiful React frontend with Material UI
- Multi-LLM Support: Gemini (default) with OpenAI fallback
- ReAct Pattern: Transparent reasoning and tool usage
- Docker and Docker Compose installed
- Google API key for Gemini (recommended) or OpenAI API key
```bash
cp env.example .env
# Add your Google API key to .env for Gemini (recommended)
# Or add OpenAI API key as fallback

# Production mode
docker-compose up -d

# Development mode (with hot reload)
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
```

- Frontend: http://localhost:3000
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
- Health Check: http://localhost:8000/health
```bash
docker compose down
```

Backend:

- Install Dependencies:
  ```bash
  pip install -r requirements.txt
  ```
- Index Documents:
  ```bash
  python rag/indexing.py
  ```
- Start MCP Server:
  ```bash
  python mcp_server/server.py
  ```

Frontend:

- Install Dependencies:
  ```bash
  cd frontend
  npm install
  ```
- Start Development Server:
  ```bash
  npm run dev
  ```
- Document Indexing: 20+ PDFs/markdown files on resume writing, interview strategies, and career advice
- Vector Storage: FAISS for efficient similarity search
- Chunking Strategy: Semantic chunking with overlap for better context preservation
- Multi-query Reformulation: Query expansion with paraphrasing models to broaden document retrieval
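The multi-query idea can be sketched as follows: retrieve for the original query plus each reformulation, then merge the ranked lists with de-duplication. This is an illustrative stand-in, not the project's actual retriever; `retrieve_fn` substitutes for the FAISS-backed search, and the reformulations are hard-coded where the real pipeline would use a paraphrasing model.

```python
def multi_query_retrieve(query, reformulations, retrieve_fn, top_k=5):
    """Retrieve for the query and each reformulation, then round-robin
    merge the ranked chunk lists, dropping duplicates."""
    ranked_lists = [retrieve_fn(q) for q in [query, *reformulations]]
    merged, seen = [], set()
    # Round-robin over ranks so each query variant contributes early hits.
    for rank in range(max(map(len, ranked_lists), default=0)):
        for chunks in ranked_lists:
            if rank < len(chunks) and chunks[rank] not in seen:
                seen.add(chunks[rank])
                merged.append(chunks[rank])
    return merged[:top_k]

# Toy retriever: maps a query string to a fixed ranked list of chunk ids.
corpus = {
    "switch careers": ["c1", "c2", "c3"],
    "change jobs": ["c2", "c4"],
}
result = multi_query_retrieve(
    "switch careers", ["change jobs"], lambda q: corpus.get(q, [])
)
print(result)  # ['c1', 'c2', 'c4', 'c3']
```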
- Tool 1: `analyze_resume(resume_text)` - Comprehensive resume analysis with LLM enhancement
- Tool 2: `mock_interview(position)` - Role-specific interview questions and preparation tips
- Resource: `career_guides://featured` - Curated career tips using RAG
- Chat Endpoint: `/chat` - Main interface with ReAct pattern implementation
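The tool layer boils down to a name-to-function registry that the chat endpoint dispatches against. The sketch below shows that idea with plain Python (the real server registers these via FastMCP decorators; the tool bodies here are placeholders):

```python
# Illustrative tool registry; the actual server uses FastMCP, and the
# real tools run LLM-enhanced analysis rather than these placeholders.
TOOLS = {}

def tool(fn):
    """Register a function as a callable tool under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def analyze_resume(resume_text: str) -> str:
    return f"Analyzed {len(resume_text)} characters of resume text."

@tool
def mock_interview(position: str) -> str:
    return f"Generated interview questions for: {position}"

def call_tool(name: str, **kwargs) -> str:
    """Dispatch a tool call by name, as the chat endpoint would."""
    if name not in TOOLS:
        raise KeyError(f"Unknown tool: {name}")
    return TOOLS[name](**kwargs)

print(call_tool("mock_interview", position="data scientist"))
# Generated interview questions for: data scientist
```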
- Default: Google Gemini 1.5 Flash
- Fallback: OpenAI GPT-4o-mini
- Prompt Engineering: ReAct pattern with structured reasoning
- Error Handling: Graceful degradation with rule-based fallbacks
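The fallback chain can be sketched as: try the primary LLM, fall back to the secondary, and finally degrade to a rule-based response. The provider callables below are stand-ins for the real Gemini and OpenAI clients, and the failure is simulated:

```python
def generate_with_fallback(prompt, providers, rule_based):
    """providers: ordered list of (name, callable); each callable may raise.
    Returns (provider_name, response), degrading gracefully at each step."""
    for name, call in providers:
        try:
            return name, call(prompt)
        except Exception:
            continue  # try the next provider in the chain
    return "rule_based", rule_based(prompt)

def gemini_down(prompt):
    # Simulated outage of the primary provider.
    raise ConnectionError("Gemini unavailable")

def openai_ok(prompt):
    return f"(gpt-4o-mini) answer to: {prompt}"

name, answer = generate_with_fallback(
    "How do I improve my resume?",
    [("gemini", gemini_down), ("openai", openai_ok)],
    rule_based=lambda p: "Here are general resume tips...",
)
print(name)  # openai
```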
- React 18: Modern React with hooks
- Material UI: Professional UI components
- Vite: Fast build tool and dev server
- Responsive Design: Works on all devices
User: "Can you analyze my resume?"
Assistant:
Thought: User needs resume analysis. I should call the analyze_resume tool.
Action: analyze_resume(resume_text="[user's resume]")
Observation: Tool returns detailed feedback on structure, content, and improvements
Final Answer: Here's your comprehensive resume analysis with specific improvements...
User: "I have an interview for a data scientist position"
Assistant:
Thought: User needs interview preparation for a specific role. I should call the mock_interview tool.
Action: mock_interview(position="data scientist")
Observation: Tool returns role-specific technical and behavioral questions
Final Answer: Here are likely interview questions for a data scientist role...
User: "How do I transition from marketing to data science?"
Assistant:
Thought: User needs career transition advice. I should retrieve relevant information.
Action: retrieve(query="transition from marketing to data science")
Observation: RAG returns career transition and skills development guides
Final Answer: Here's a step-by-step plan to transition from marketing to data science...
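Driving a loop like the transcripts above requires parsing the model's `Action:` line into a tool name and arguments. The exact format the project's prompts enforce may differ; this sketch assumes the single-keyword-argument shape shown in the examples:

```python
import re

# Assumes actions look like: Action: tool_name(arg="value")
ACTION_RE = re.compile(r'Action:\s*(\w+)\((\w+)="(.*)"\)')

def parse_action(llm_output: str):
    """Extract (tool_name, kwargs) from a ReAct step, or None if absent."""
    match = ACTION_RE.search(llm_output)
    if not match:
        return None
    tool, arg_name, arg_value = match.groups()
    return tool, {arg_name: arg_value}

step = ('Thought: User needs interview prep.\n'
        'Action: mock_interview(position="data scientist")')
print(parse_action(step))  # ('mock_interview', {'position': 'data scientist'})
```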
User: "Can you help me with my resume?"
Assistant: I'd be happy to analyze your resume! Please paste your resume text below, and I'll provide detailed feedback on structure, content, and improvements.
User: [pastes resume text]
Assistant:
## Resume Analysis Results
**Analysis Type:** llm_enhanced
**Resume Length:** 1,247 characters
### Feedback:
- Strong use of action verbs: developed, managed, created
- Consider adding more quantifiable achievements
- Good structure with clear sections
- Skills section could be more specific to your target role
---
*Analysis completed using AI-enhanced tools*
The enhanced chat interface automatically:
- Detects user intent and selects appropriate tools
- Supports markdown, plain text, and code formatting
- Maintains conversation context
- Shows which tools were used for transparency
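A keyword-based router illustrates the spirit of the rule-based fallback path for intent detection (the LLM normally handles this; the keyword lists here are assumptions, with intent labels mirroring the streaming example below):

```python
# Ordered mapping: first matching intent wins.
INTENT_KEYWORDS = {
    "RESUME_ANALYSIS": ["resume", "cv"],
    "INTERVIEW_PREP": ["interview"],
    "CAREER_ADVICE": ["transition", "career", "switch"],
}

def detect_intent(message: str) -> str:
    """Return the first intent whose keywords appear in the message."""
    text = message.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return intent
    return "GENERAL_CHAT"

print(detect_intent("Can you help me prepare for a data scientist interview?"))
# INTERVIEW_PREP
```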
User: "Can you help me prepare for a data scientist interview?"
Assistant: [Streaming response appears in real-time]
🤔 Analyzing your request...
🎯 Detected intent: INTERVIEW_PREP
⚙️ Generating interview questions for data scientist...
💬 I can help you prepare for interviews! What position are you interviewing for? I'll generate relevant questions and preparation tips.
🔧 Tools used: ['mock_interview']
🎯 Action: request_interview_position
✅ Streaming completed!
The streaming chat interface provides:
- Real-time responses: See responses as they're generated
- Progress indicators: Visual feedback during processing
- Stream cancellation: Stop responses mid-stream
- Smooth UX: No waiting for complete responses
- Server-Sent Events: Efficient streaming protocol
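At the wire level, Server-Sent Events are just `data: ...` lines terminated by a blank line, which a FastAPI `StreamingResponse` can yield one frame at a time. The payload schema below is illustrative, not the project's actual event format:

```python
import json

def sse_frame(payload: dict) -> str:
    """Serialize one payload as an SSE event frame."""
    return f"data: {json.dumps(payload)}\n\n"

def stream_tokens(tokens):
    """Yield each token as an SSE frame, then a done marker."""
    for token in tokens:
        yield sse_frame({"type": "token", "content": token})
    yield sse_frame({"type": "done"})

frames = list(stream_tokens(["Here", " are", " tips"]))
print(frames[0])  # data: {"type": "token", "content": "Here"}
```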
```
project/
├── README.md
├── requirements.txt
├── env.example
├── docker-compose.yml
├── Dockerfile.backend
├── data/                    # 20+ career documents (markdown)
├── rag/
│   ├── indexing.py          # Document indexing and vector storage
│   └── retrieval.py         # RAG retrieval with multi-query reformulation
├── mcp_server/
│   ├── server.py            # FastMCP server with tools and resources
│   └── llm_client.py        # Multi-LLM client (Gemini + OpenAI)
├── prompts/
│   └── rag_prompt.py        # ReAct pattern prompt engineering
├── frontend/                # React frontend
│   ├── src/
│   ├── package.json
│   └── Dockerfile.frontend
├── evaluation/
│   ├── test_queries.json    # 10 test queries for evaluation
│   └── results.md           # Comprehensive evaluation results
└── examples/
    └── demo.md              # Example interactions with ReAct pattern
```
The system has been comprehensively evaluated with:
- Average Relevance Score: 4.4/5
- Tool Accuracy: 93.5%
- RAG Precision: 78%
- Average Response Time: 2.3 seconds
- 10 Realistic Test Queries: Covering resume analysis, interview preparation, career transitions
- Manual Scoring: 1-5 scale for relevance and quality
- Precision Metrics: Precision@3, Precision@5, Recall, F1 Score
- Failure Mode Analysis: 4 identified failure modes with mitigation strategies
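For reference, the precision metrics listed above can be computed as follows (the document ids in the example are illustrative, not from the project's test set):

```python
def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved docs that are relevant."""
    return sum(1 for doc in retrieved[:k] if doc in relevant) / k

def recall(retrieved, relevant):
    """Fraction of all relevant docs that were retrieved."""
    return sum(1 for doc in retrieved if doc in relevant) / len(relevant)

def f1(p, r):
    """Harmonic mean of precision and recall."""
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

retrieved = ["d1", "d3", "d5", "d2", "d9"]
relevant = {"d1", "d2", "d4"}
p3 = precision_at_k(retrieved, relevant, 3)  # 1/3
r = recall(retrieved, relevant)              # 2/3
print(round(f1(p3, r), 3))  # 0.444
```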
- Strong performance in core capabilities
- Good tool selection accuracy
- Effective RAG retrieval for career advice
- Robust fallback mechanisms when LLM unavailable
- Backend: Python, FastAPI, FAISS, SentenceTransformers
- Frontend: React 18, Material UI, Vite
- LLM: Google Gemini 1.5 Flash (default), OpenAI GPT-4o-mini (fallback)
- Vector Database: FAISS
- MCP Framework: FastMCP
- Prompt Engineering: ReAct pattern with structured reasoning
- Containerization: Docker, Docker Compose
- Add markdown files to the `data/` directory
- Rebuild the backend container: `docker-compose build backend`
- Restart: `docker-compose up -d`

- Make changes in `frontend/src/`
- Changes will auto-reload in development mode
- For production, rebuild: `docker-compose build frontend`
- Modify `mcp_server/server.py`
- Changes will auto-reload in development mode
- Check API docs at http://localhost:8000/docs
- Set `GOOGLE_API_KEY` in `.env` for Gemini (recommended)
- Set `OPENAI_API_KEY` as fallback
- Change `DEFAULT_LLM` in `.env` to switch providers
- Port conflicts: Ensure ports 3000 and 8000 are available
- API connection: Check that the backend is running and accessible
- Document indexing: Verify that markdown files are in the `data/` directory
- Environment variables: Ensure the `.env` file is properly configured
- LLM availability: Check API keys and service status
```bash
# View all logs
docker-compose logs

# View specific service logs
docker-compose logs backend
docker-compose logs frontend
```

```bash
curl http://localhost:8000/health
```

MIT License - see LICENSE file for details.