Generate Ready-to-Send, ATS-Friendly Resumes Instantly
- About the Project
- Features
- Architecture Overview
- Tech Stack
- Installation
- Usage Guide
- MCP Tools Overview
- Security & Data Handling
- Contributing
- Roadmap
- License
- Support
- Credits
## About the Project

Resume Generator MCP Server is an AI-driven backend server that automatically generates ATS-friendly, professional resumes from any input: raw text, an uploaded file, or a LinkedIn profile.

It uses FastMCP for the Model Context Protocol layer and integrates OpenAI, Meta Llama, LangChain, and other cloud tools to produce high-quality Word and PDF resumes, instantly uploaded to AWS with secure download links.
## Features

- Generate resumes from raw text, an existing resume, or a LinkedIn profile
- Enhance and tailor resumes for specific job descriptions
- Uses LLM intelligence (OpenAI + Llama) for resume optimization
- Cloud-powered (AWS S3, Textract, CloudConvert, Neon DB)
- Secure resume storage (auto-deletes after 7 days)
- Supports multiple templates, dynamically rendered via docxtpl + Jinja2
- Modular, scalable, and fully documented MCP server
- Deployed with CI/CD via GitHub Actions on Render & Vercel
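Templates are rendered with docxtpl, which uses Jinja2 syntax inside a `.docx` file. As a minimal illustration of the substitution involved (a plain-text stand-in; the field names here are hypothetical, not the server's actual schema):

```python
from jinja2 import Template

# Hypothetical resume fields; a real template would live inside a .docx
# file and be rendered with docxtpl's DocxTemplate instead.
template = Template("{{ name }} | Skills: {{ skills | join(', ') }}")
rendered = template.render(name="Jane Doe", skills=["Python", "AI"])
print(rendered)  # Jane Doe | Skills: Python, AI
```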
## Architecture Overview

(Add your architecture diagram here)
High-Level Flow:
- Choose a predefined resume template
- Provide your input (LinkedIn URL, text, or file)
- AI extracts, structures, and enhances your information
- MCP Server generates Word + PDF resumes
- You instantly receive secure download links
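The steps above can be sketched as a simple pipeline. Every function below is a hypothetical stand-in for a real stage (Textract/LLM extraction, LLM enhancement, docxtpl + CloudConvert rendering), not the server's actual code:

```python
def extract(raw_input: str) -> dict:
    # Stage 1: extract structured fields (in the real server: Textract / LLM parsing)
    return {"name": "Jane Doe", "experience": ["5 years Python"]}

def enhance(profile: dict) -> dict:
    # Stage 2: tailor and polish content (in the real server: OpenAI / Llama)
    profile["summary"] = "Software engineer with 5 years of Python experience."
    return profile

def render(profile: dict) -> dict:
    # Stage 3: render Word + PDF, upload to S3, return download links
    return {"docx_link": "resume.docx", "pdf_link": "resume.pdf"}

links = render(enhance(extract("I am a software engineer...")))
print(links["pdf_link"])  # resume.pdf
```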
## Tech Stack

Core Technologies:
- FastMCP
- Python (LangChain, docxtpl, Jinja2)
- AWS (S3, Textract, IAM)
- PostgreSQL (Neon)
- CloudConvert, ScrapingDog APIs
- OpenAI + Meta Llama
- ReactJS (Documentation & Demo UI)
- Alembic (DB Migration)
- Stytch Authentication
- CI/CD with GitHub Actions + Render Deployment
## Installation

Ensure you have the following available:
- Python 3.10+
- PostgreSQL (Neon DB connection string)
- AWS Account with S3, IAM, and Textract access
- CloudConvert & ScrapingDog API keys
- Stytch Authentication credentials
- OpenAI and Meta API keys
```bash
# Clone the repo
git clone https://github.com/1abhi6/Resume-Generator-MCP-Server.git

# Navigate to the project directory
cd Resume-Generator-MCP-Server

# Set up the environment with uv
uv init
uv venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Install dependencies
uv sync

# Set environment variables
cp .env.example .env
# Update .env with your credentials

# Run the MCP server
uv run main.py
```

We recommend using ngrok to expose the locally running server to remote MCP clients.
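The `.env` file collects the credentials listed under the prerequisites. A sketch of what it might contain; the variable names below are assumptions, so check `.env.example` for the authoritative list:

```
OPENAI_API_KEY=...
META_LLAMA_API_KEY=...
DATABASE_URL=postgresql://...
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
S3_BUCKET_NAME=...
CLOUDCONVERT_API_KEY=...
SCRAPINGDOG_API_KEY=...
STYTCH_PROJECT_ID=...
STYTCH_SECRET=...
```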
## Usage Guide

Once the server is running locally or deployed:

- Choose an input mode:
  - Raw text
  - Resume file (PDF/DOCX/image)
  - LinkedIn profile URL
- Call the corresponding API endpoint or MCP client tool.
- Receive instant Word & PDF download links.
Example API usage:

```http
POST /generate-resume
```

Request body:

```json
{
  "input": "I am a software engineer with 5 years of experience in Python and AI..."
}
```

Response:

```json
{
  "pdf_link": "https://s3.amazonaws.com/xyz/resume.pdf",
  "docx_link": "https://s3.amazonaws.com/xyz/resume.docx"
}
```

## MCP Tools Overview

### Primary Tools
- Generate Resume from Raw Text
- Enhance Existing Resume
- Generate Resume based on Job Description
- Generate Resume from LinkedIn Profile
### Utility Tools

- Check Server Health
- Upload Resume File
- Check Uploaded File
Each tool returns Word + PDF download links of the generated resume.
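As an illustration, a client script could build the request shown in the API example above. The endpoint path and JSON fields follow that example; the helper below is a hypothetical sketch using only the standard library:

```python
import json
import urllib.request

def build_generate_request(base_url: str, text: str) -> urllib.request.Request:
    """Build a POST request for the /generate-resume endpoint."""
    return urllib.request.Request(
        base_url + "/generate-resume",
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("http://localhost:8000", "I am a software engineer...")
print(req.full_url, req.get_method())
# Sending it with urllib.request.urlopen(req) would return the JSON body
# containing pdf_link and docx_link.
```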
## Security & Data Handling

- All resumes are stored in AWS S3 for 7 days only (auto-expiry).
- Input/output guardrails prevent processing of harmful or sensitive data.
- OAuth implemented via Stytch Authentication.
- Secure role-based IAM policies for AWS resources.
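The 7-day auto-expiry can be enforced with an S3 lifecycle rule. A sketch of such a configuration; the rule ID and bucket prefix are assumptions, not taken from this project:

```json
{
  "Rules": [
    {
      "ID": "expire-generated-resumes",
      "Filter": { "Prefix": "resumes/" },
      "Status": "Enabled",
      "Expiration": { "Days": 7 }
    }
  ]
}
```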
## Roadmap

- Add more customizable templates
- Introduce multi-language support
- Add analytics dashboard for resume performance
- Enable user accounts with history tracking
- Integrate with more LLM providers
## Contributing

Contributions are welcome! Please follow these steps:

- Fork the repository
- Create your feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request

Ensure your code follows proper linting and modular design.
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Support

For support, questions, or feedback:
## Credits

Special thanks to:
- OpenAI & Meta for LLMs
- AWS, Stytch, and Neon for cloud infrastructure
- FastMCP for protocol design
- All open-source contributors helping make this project better!
If you like this project, give it a star! It helps others find it and motivates future development.