Resume Analyser
A PROJECT REPORT
Submitted by
NAME : M.SURIYA
REGNO : 0121127046
of
BACHELOR
OF
COMPUTER APPLICATIONS
(APRIL 2024)
ALAGAPPA GOVERNMENT ARTS COLLEGE, KARAIKUDI-630 003.
(Grade- I college and Re-accredited with “B” Grade by NAAC)
BONAFIDE CERTIFICATE
This is to certify that the project entitled "AI RESUME ANALYZER" is the
bonafide work done by M.SURIYA (0121127046), who carried out the project work under my
supervision and submitted it during the academic year 2021-2024.
ACKNOWLEDGMENT
I thank God Almighty for His mercy towards me in concluding this
project in a very successful way.
I express my sincere thanks to my guide, M.Phil., Ph.D., of Alagappa Government Arts
College, Karaikudi, who provided all the facilities to carry out the project successfully.
I would also like to thank all the staff members and lab assistants of the Department of
Computer Applications, who helped us during the project.
I wish to express my heartfelt thanks to my parents, who always encourage me on the path to
success.
Finally, I wish to thank our friends for their constant encouragement and valuable suggestions.
ABSTRACT
The Resume Analyzer is an advanced tool designed to streamline and enhance the recruitment
process by automating the analysis of job applicants' resumes. This innovative system
leverages natural language processing (NLP) and machine learning (ML) techniques to extract
valuable insights from resumes, providing recruiters with a comprehensive overview of
candidates' qualifications and suitability for specific roles.
Resume Parsing:
Utilizes advanced NLP algorithms to parse and understand the content of resumes.
Extracts key information such as education, work experience, skills, and certifications.
Skill and Keyword Matching:
Applies machine learning models to match candidates' skills with job requirements.
Improves the accuracy of identifying relevant keywords and industry-specific terminology.
TABLE OF CONTENTS
ACKNOWLEDGMENT
ABSTRACT
1. INTRODUCTION
1.1 Description 1
1.2 Existing system 2
1.3 Proposed system 5
2. SYSTEM DESIGN
3. SYSTEM IMPLEMENTATION 12
3.1 Software 15
4. SYSTEM TESTING 16
5. FUTURE ENHANCEMENT 21
6. CONCLUSION 23
REFERENCES 24
APPENDIX I
i) Hardware Requirements 25
ii) Software Requirements 25
APPENDIX II
1. INTRODUCTION
1.1 DESCRIPTION
The Resume Analyzer is an intelligent and efficient tool designed to revolutionize the
recruitment process. Leveraging state-of-the-art natural language processing (NLP) and
machine learning (ML) technologies, this platform is dedicated to automating the
meticulous task of resume analysis, enabling recruiters to make well-informed decisions
with speed and precision.
Our system employs advanced NLP algorithms to meticulously extract key information
from resumes, including education, work experience, skills, and certifications.
Utilizing machine learning models, the Resume Analyzer intelligently matches candidate
skills with job requirements, ensuring a more accurate and efficient screening process.
Through sophisticated semantic analysis, the platform goes beyond keyword matching to
understand the contextual meaning of sentences, uncovering nuanced details about
candidates' achievements and responsibilities.
1.2 EXISTING SYSTEM
1. Talentsoft Resume Parser: Talentsoft offers a resume parsing tool that uses natural
language processing to extract information from resumes and CVs. It focuses on
capturing essential details like skills, education, work experience, and contact
information.
2. Jobscan: Jobscan is a resume optimization tool that allows job seekers to tailor their
resumes for specific job descriptions. While it is primarily focused on helping
candidates improve their resumes, it indirectly contributes to the analysis process by
optimizing content for applicant tracking systems.
3. Zoho Recruit: Zoho Recruit is an ATS that integrates resume parsing functionality. It
extracts data from resumes, such as skills, education, and work experience, making it
easier for recruiters to manage candidate information.
4. LinkedIn Talent Hub: LinkedIn offers a suite of tools for talent acquisition, and
LinkedIn Talent Hub includes features for parsing and analyzing resumes. It allows
recruiters to review candidate profiles, gather insights, and make data-driven decisions.
DISADVANTAGES
1. Bias and Fairness Concerns: Resume analyzers can inadvertently perpetuate biases
present in the training data. If the training data is biased, the system may favor certain
demographics over others, leading to potential discrimination in the hiring process.
2. Inaccuracy in Parsing: Despite advancements in natural language processing, resume
analyzers may still encounter challenges in accurately parsing complex or poorly
formatted resumes, leading to errors in data extraction.
1.3 PROPOSED SYSTEM
1. Time Efficiency: Reduces manual effort in resume evaluation, allowing recruiters to
focus on strategic decision-making. Accelerates the screening process, enabling
quicker identification of top candidates.
2. Fair and Inclusive Hiring: Mitigates biases in the hiring process through continuous
monitoring and adjustment of algorithms. Promotes diversity by ensuring a fair and
unbiased evaluation of all candidates.
3. Adaptability and Customization: Adapts to evolving job market trends and
industry requirements. Provides customizable features that align with the unique hiring
needs of organizations.
ADVANTAGES
1. Quick Screening: Enables recruiters to quickly identify relevant candidates and prioritize
their efforts on the most promising profiles.
2. SYSTEM DESIGN
1. System Components: Define the main components of the system, including the front
end (user interface), back end (server-side processing), and database for storing
resume data and analysis results.
2. NLP Pipeline: Design a robust NLP pipeline for parsing and understanding resume
content. This pipeline should include modules for text preprocessing, entity
recognition (e.g., extracting education, work experience, skills), and sentiment
analysis (if necessary).
3. Machine Learning Models: Implement machine learning models for skill and
keyword matching. Train these models using labeled data to accurately match
candidates' skills with job requirements and improve the identification of relevant
keywords.
4. API Integration: Create APIs to facilitate seamless integration with external systems
such as job boards or applicant tracking systems (ATS). These APIs should enable
data exchange and trigger analysis processes.
5. Scalability and Performance: Ensure the system is designed to handle large volumes
of resume data efficiently. Employ techniques such as load balancing, caching, and
horizontal scaling to optimize performance and scalability.
6. Monitoring and Logging: Set up monitoring and logging mechanisms to track
system performance, identify issues proactively, and troubleshoot errors. Utilize tools
like Prometheus for monitoring and the ELK stack (Elasticsearch, Logstash, Kibana) for
logging and log analysis.
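As a minimal sketch of the NLP pipeline component described above, the following illustrates the parsing stage with plain regular expressions. The helper names, section headings, and sample resume are hypothetical; a production pipeline would use an NLP library such as spaCy for entity recognition.

```python
import re

# Section headings the toy parser recognizes (illustrative list).
SECTION_HEADS = ("education", "experience", "skills", "certifications")

def parse_resume(text):
    """Split a plain-text resume into labelled sections and extract
    simple contact entities (emails, phone numbers)."""
    entities = {
        "emails": re.findall(r"[\w.+-]+@[\w-]+\.[\w.]+", text),
        "phones": re.findall(r"\+?\d[\d\s-]{8,}\d", text),
    }
    sections, current = {}, None
    for line in text.splitlines():
        head = line.strip().lower().rstrip(":")
        if head in SECTION_HEADS:
            current = head          # start collecting a new section
            sections[current] = []
        elif current and line.strip():
            sections[current].append(line.strip())
    return entities, sections

sample = """John Doe
john@example.com
Skills:
Python, SQL
Education:
BCA, Alagappa University"""
entities, sections = parse_resume(sample)
print(entities["emails"], sections["skills"])
```

In the real design, this stage would feed its extracted fields into the skill-matching models described in the next item.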
2.2 MODULE DESCRIPTION
1. User Interface Module: This module is responsible for providing an intuitive and
user-friendly interface for recruiters to interact with the system. It includes features
such as uploading resumes, viewing analysis results, and managing candidate profiles.
2. Database Module: The database module manages the storage and retrieval of resume
data and analysis results. It includes functionalities for CRUD operations on candidate
profiles, storing metadata, and maintaining data integrity.
3. Machine Learning Module: This module contains machine learning models for skill
and keyword matching. It leverages labeled data to train models that accurately match
candidates' skills with job requirements and improve keyword identification.
4. API Integration Module: Responsible for integrating with external systems such as
job boards or applicant tracking systems (ATS). It provides APIs for data exchange,
enabling seamless integration with third-party platforms.
5. Scalability Module: Ensures the system can scale to handle large volumes of resume
data efficiently. It includes features such as load balancing, caching, and horizontal
scaling to optimize performance and accommodate increased demand.
6. Monitoring and Logging Module: Monitors system performance and logs events for
analysis and troubleshooting. It integrates with monitoring tools for real-time
performance monitoring and the ELK stack for centralized logging and log analysis.
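A minimal sketch of the Database Module's CRUD responsibilities follows, using SQLite in place of the project's MySQL database so the example is self-contained; the table and column names are illustrative simplifications of the project's schema.

```python
import sqlite3

# In-memory database for illustration; the project itself targets MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE user_data (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL,
    email TEXT NOT NULL,
    resume_score INTEGER NOT NULL)""")

def create_profile(name, email, score):
    """Create: insert a candidate profile and return its new row id."""
    cur = conn.execute(
        "INSERT INTO user_data (name, email, resume_score) VALUES (?, ?, ?)",
        (name, email, score))
    conn.commit()
    return cur.lastrowid

def read_profile(profile_id):
    """Read: fetch a profile by id, or None if it does not exist."""
    return conn.execute(
        "SELECT name, email, resume_score FROM user_data WHERE id = ?",
        (profile_id,)).fetchone()

pid = create_profile("Suriya", "suriya@example.com", 80)
print(read_profile(pid))
```

Update and delete follow the same parameterized-query pattern, which also guards against SQL injection.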
3. SYSTEM IMPLEMENTATION
To bridge the gap between human-readable commands and binary code, computer
programming languages have been created. These languages, such as Python and SQL
for the back end, provide a medium for programmers to articulate essential commands in
a more understandable syntax.
In the specific case of our "AI RESUME ANALYZER" project, Python serves as the
backbone for the system's logic and functionality. Front-end technologies are used for
crafting the user interface, ensuring an interactive and visually appealing experience,
and for adding dynamic elements and client-side interactivity.
The combination of these programming languages results in a cohesive and
intelligent system capable of handling documents. Through this implementation, the
system provides users with a conversational interface, efficient information extraction
from PDFs, and additional features. The diversity of programming languages
used in this project showcases their respective strengths, offering a comprehensive
solution for the dynamic requirements of the "AI RESUME ANALYZER" system.
1. NLP Processing: Python offers numerous libraries and frameworks for NLP tasks,
such as NLTK (Natural Language Toolkit), spaCy, and Gensim. These libraries
provide functionalities for text preprocessing, entity recognition, and sentiment
analysis, essential for parsing and understanding resume content.
2. Machine Learning Model Development: Python is widely used in ML and data science
due to libraries like scikit-learn, TensorFlow, and PyTorch. These libraries enable the
development and training of machine learning models for tasks such as skill and
keyword matching, leveraging algorithms like linear regression, decision trees, or
neural networks.
3. Monitoring and Logging: Python offers logging facilities like the built-in logging
module or third-party libraries like Loguru for configuring logging and capturing
system events. Additionally, Python can be used to integrate with monitoring
platforms like Prometheus for real-time monitoring and alerting.
4. CI/CD Pipeline Implementation: Python can be utilized in CI/CD pipelines for tasks
such as automated testing, code quality checks, and deployment scripts. Testing
frameworks like pytest or unittest can be used for automated testing, while CI/CD
tools like Jenkins or GitHub Actions can automate the build, testing, and deployment
processes.
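The monitoring-and-logging point above can be sketched with the standard-library logging module alone; the logger name and the analyze function below are hypothetical, chosen only to show where structured log events would be emitted.

```python
import logging

# Configure a named logger for the analyzer (name is illustrative).
logger = logging.getLogger("resume_analyzer")
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s"))
logger.addHandler(handler)

def analyze(resume_path):
    """Hypothetical analysis entry point that logs start, success, and failure."""
    logger.info("analysis started for %s", resume_path)
    try:
        if not resume_path.endswith(".pdf"):
            raise ValueError("unsupported file type")
        logger.info("analysis finished for %s", resume_path)
        return True
    except ValueError:
        # logger.exception records the traceback alongside the message.
        logger.exception("analysis failed for %s", resume_path)
        return False

analyze("resume.pdf")
analyze("resume.docx")
```

The same log records could be shipped to the ELK stack, while counters derived from them (resumes processed, failures) would be exported to Prometheus.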
3.1 SOFTWARE
PYTHON
4. SYSTEM TESTING
Testing is a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all work should verify
that all system elements have been properly integrated and perform their allocated functions.
Testing is the process of checking whether the developed system works according to the
actual requirement and objectives of the system. The philosophy behind testing is to find the
errors. A good test is one that has a high probability of finding an undiscovered error. A
successful test is one that uncovers the undiscovered error. Test cases are devised with this
purpose in mind. A test case is a set of data that the system will process as an input.
Types of Testing
System testing
After a system has been verified, it needs to be thoroughly tested to ensure that every
component of the system is performing in accordance with the specific requirements and that
it is operating as it should including when the wrong functions are requested or the wrong
data is introduced.
Testing measures consist of developing a set of test criteria either for the entire
system or for specific hardware, software and communications components. For an important
and sensitive system such as an electronic voting system, a structured system testing program
may be established to ensure that all aspects of the system are thoroughly tested.
• Applying functional tests to determine whether the test criteria have been met
• Applying qualitative assessments to determine whether the test criteria have been met.
• Conducting tests over an extended period of time to ensure systems can perform
consistently.
• Conducting “load tests”, simulating as close as possible likely conditions while using
or exceeding the amounts of data that can be expected to be handled in an actual
situation.
Test measures for hardware may include:
• Testing “hard wired” code in hardware (firmware) to ensure its logical correctness
and that appropriate standards are followed.
• Testing all programs to ensure their logical correctness and that appropriate design,
development and implementation standards have been followed.
Unit Testing:
Unit testing is a software testing technique that involves evaluating individual units or
components of a software application in isolation to ensure they function as intended.
The primary goal of unit testing is to validate that each unit of the software performs
correctly and as expected. Here are some key points to elaborate on unit testing:
Automation: Unit tests are typically automated, allowing for frequent and
consistent execution throughout the development process. Automation ensures
that tests can be run quickly and effortlessly, facilitating continuous integration
and delivery practices.
Early Detection of Bugs: Unit testing helps in the early detection of defects or
bugs in the code. By identifying issues at the unit level, developers can address
problems before they escalate to more complex and costly integration or system-
level issues.
Regression Testing: Unit tests act as a form of regression testing, ensuring that
changes or enhancements to the codebase do not introduce new issues or break
existing functionality. This is particularly important in agile development
environments where frequent changes are made.
Framework and Tools: Various unit testing frameworks and tools are available
for different programming languages. Examples include JUnit for Java, NUnit
for .NET, and pytest for Python. These frameworks provide a structure for
organizing and executing tests.
Test Cases: Unit tests consist of individual test cases that cover different aspects
of the unit's behavior. Test cases are designed to validate specific inputs, outputs,
and edge cases, ensuring comprehensive coverage.
Continuous Integration (CI): Unit testing is often integrated into the CI/CD
(Continuous Integration/Continuous Delivery) pipeline. This integration allows
for automated testing whenever changes are committed to the version control
system, providing rapid feedback to developers.
By employing unit testing, developers can enhance the reliability, maintainability, and
overall quality of software applications, contributing to a more robust and efficient
development process.
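As a concrete illustration of the points above, here is a small unittest example for a hypothetical skill-extraction helper; the function itself is illustrative and not part of the project's code.

```python
import unittest

def extract_skills(resume_text, known_skills):
    """Return the known skills that appear in the resume text (case-insensitive)."""
    text = resume_text.lower()
    return [s for s in known_skills if s.lower() in text]

class TestExtractSkills(unittest.TestCase):
    def test_finds_known_skills(self):
        skills = extract_skills("Experienced in Python and SQL",
                                ["python", "sql", "java"])
        self.assertEqual(skills, ["python", "sql"])

    def test_empty_resume(self):
        # Edge case: an empty resume should yield no skills.
        self.assertEqual(extract_skills("", ["python"]), [])

# Run the test case programmatically so the result can be inspected.
result = unittest.TextTestRunner().run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestExtractSkills))
```

Under pytest the same checks would be plain functions with bare assert statements; either framework slots into the CI pipeline described earlier.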
Integration Testing:
During integration testing, the interactions between various modules are examined to detect
any potential issues arising from the combination of these individual elements. The goal is to
identify and address integration-related problems, such as data flow issues, communication
errors, or inconsistencies in how different modules collaborate.
Integration testing is a crucial step in the testing process as it verifies the correct functioning
of the software when different components come together. This helps in uncovering defects
that may not be apparent during unit testing, where components are tested in isolation.
The testing process generally progresses from unit testing (where individual modules are
tested independently) to integration testing, ensuring that each step in the development cycle
is validated before moving on to broader system testing. Additionally, integration testing
plays a pivotal role in preparing for system testing by validating that the integrated
components form a cohesive and functional unit.
Integration testing can be performed using various approaches, including top-down, bottom-
up, or a combination of both. In a top-down approach, testing starts from the higher-level
modules down to the lower-level ones, while in a bottom-up approach, lower-level modules
are tested first and gradually integrated to form larger units.
In the context of software releases, integration testing is conducted after the product is code
complete and has undergone unit testing. It is a critical step in ensuring the stability and
reliability of the overall software system before progressing to more extensive system testing.
Integration testing helps developers identify and fix issues early in the development process,
contributing to the creation of a robust and well-functioning software application.
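A toy illustration of the idea: two simplified modules (a parser and a scorer, both hypothetical) are exercised together through their integration point, rather than each in isolation as in unit testing.

```python
def parse_skills(text):
    """Toy parser module: split a comma-separated skills string."""
    return [s.strip().lower() for s in text.split(",") if s.strip()]

def score_against_job(skills, required):
    """Toy scorer module: fraction of required skills the candidate has."""
    matched = [s for s in skills if s in required]
    return len(matched) / len(required) if required else 0.0

def analyze_resume(text, required):
    """Integration point: the parser's output flows into the scorer."""
    return score_against_job(parse_skills(text), required)

# Exercising the combined data flow is what an integration test checks:
# e.g. that the parser's lowercasing matches the scorer's expectations.
print(analyze_resume("Python, SQL, Excel", ["python", "sql", "java", "git"]))
```

A defect such as the parser emitting mixed-case skills while the scorer compares lowercase would pass both unit tests yet surface only in this combined check.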
5. FUTURE ENHANCEMENT
For future enhancements to the AI Resume Analyzer project, several avenues for
improvement and expansion can be explored. Here are some ideas:
1. Multimodal Analysis: Integrate support for analyzing not only textual resumes but
also other formats such as PDFs, images, or videos. This could involve implementing
optical character recognition (OCR) for extracting text from images and videos and
incorporating image recognition techniques for analyzing visual content.
2. Bias Detection and Mitigation: Develop algorithms to detect and mitigate biases in
the recruitment process, ensuring fair and equitable treatment of candidates. This
could involve analyzing historical hiring data for patterns of bias and implementing
measures to mitigate bias in resume analysis and candidate selection.
3. Integration with External Datasets: Integrate with external datasets such as job
market trends, salary data, and industry-specific benchmarks to provide additional
context and insights to recruiters. This could involve scraping data from job boards,
salary websites, and industry reports and incorporating this data into the analysis
process.
4. Natural Language Generation (NLG): Implement NLG techniques to automatically
generate personalized feedback and recommendations for candidates based on the
analysis results. This could involve generating customized feedback reports
highlighting strengths, areas for improvement, and suggested career paths for
candidates.
5. Real-time Analysis and Alerts: Enhance the system to provide real-time analysis of
incoming resumes and generate alerts for recruiters based on predefined criteria. This
could involve implementing streaming data processing techniques and integrating
with communication platforms like email or messaging apps to deliver timely alerts to
recruiters.
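The NLG enhancement above could start as simple templating. The hypothetical sketch below only shows the input/output shape of such a feedback generator; a real system would produce richer text with a trained language model.

```python
def generate_feedback(name, strengths, gaps):
    """Assemble a short personalized feedback message from analysis results.

    `strengths` and `gaps` are lists produced by the (hypothetical)
    analysis stage; empty lists simply drop that part of the message."""
    parts = [f"Hi {name}, thanks for submitting your resume."]
    if strengths:
        parts.append("Strengths we noticed: " + ", ".join(strengths) + ".")
    if gaps:
        parts.append("Consider adding: " + ", ".join(gaps) + ".")
    return " ".join(parts)

print(generate_feedback("Suriya", ["Python", "SQL"], ["a Projects section"]))
```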
6. CONCLUSION
At its core, the AI Resume Analyzer embodies efficiency, accuracy, and objectivity. By
automating the analysis of job applicants' resumes, it liberates recruiters from the arduous
task of manual screening, empowering them to allocate their time and resources more
strategically. Through advanced NLP algorithms and ML models, the system delivers
comprehensive insights into candidates' qualifications, skills, and suitability for specific
roles, thereby minimizing the risk of human bias and subjective decision-making.
REFERENCES
APPENDIX I
i) HARDWARE REQUIREMENTS
• Processor : 223 MHz
• RAM : 128 MB SDRAM
• Hard Disk : 2-4 GB
• Compact Disk : 4x
• Monitor : 640x480 display
APPENDIX II
iii) SAMPLE SCREENSHOT
iv)SAMPLE SOURCE CODING
import streamlit as st
import nltk
import spacy
nltk.download('stopwords')
spacy.load('en_core_web_sm')
import pandas as pd
import base64, random
import time, datetime
from pyresparser import ResumeParser
import pdfminer3.layout
from pdfminer3.pdfpage import PDFPage
from pdfminer3.pdfinterp import PDFResourceManager
from pdfminer3.pdfinterp import PDFPageInterpreter
from pdfminer3.converter import TextConverter
import io
from streamlit_tags import st_tags
from PIL import Image
import pymysql
from Courses import ds_course, web_course, android_course, ios_course, uiux_course, resume_videos, interview_videos
import pafy
import plotly.express as px
import youtube_dl
def fetch_yt_video(link):
video = pafy.new(link)
return video.title
def pdf_reader(file):
    resource_manager = PDFResourceManager()
    fake_file_handle = io.StringIO()
    converter = TextConverter(resource_manager, fake_file_handle,
                              laparams=pdfminer3.layout.LAParams())
    page_interpreter = PDFPageInterpreter(resource_manager, converter)
    with open(file, 'rb') as fh:
        for page in PDFPage.get_pages(fh, caching=True, check_extractable=True):
            page_interpreter.process_page(page)
    text = fake_file_handle.getvalue()
    converter.close()
    return text
def show_pdf(file_path):
    with open(file_path, "rb") as f:
        base64_pdf = base64.b64encode(f.read()).decode('utf-8')
    # pdf_display = f'<embed src="data:application/pdf;base64,{base64_pdf}" width="700" height="1000" type="application/pdf">'
    pdf_display = f'<iframe src="data:application/pdf;base64,{base64_pdf}" width="700" height="1000" type="application/pdf"></iframe>'
    st.markdown(pdf_display, unsafe_allow_html=True)
def course_recommender(course_list):
return rec_course
st.set_page_config(
page_title="Smart Resume Analyzer",
page_icon='./Logo/SRA_Logo.ico',
)
def run():
    st.title("Smart Resume Analyser")
    st.sidebar.markdown("# Choose User")
    activities = ["Normal User", "Admin"]
    choice = st.sidebar.selectbox("Choose among the given options:", activities)
    # link = '[©Developed by Spidy20](http://github.com/spidy20)'
    # st.sidebar.markdown(link, unsafe_allow_html=True)
    img = Image.open('./Logo/SRA_Logo.jpg')
    img = img.resize((250, 250))
    st.image(img)
    # Create the DB (the connection credentials below are assumed placeholders;
    # the original listing does not show where `connection`/`cursor` are created)
    connection = pymysql.connect(host='localhost', user='root', password='')
    cursor = connection.cursor()
    db_sql = """CREATE DATABASE IF NOT EXISTS SRA;"""
    cursor.execute(db_sql)
    connection.select_db("sra")
    # Create table
    DB_table_name = 'user_data'
table_sql = "CREATE TABLE IF NOT EXISTS " + DB_table_name + """
(ID INT NOT NULL AUTO_INCREMENT,
Name varchar(100) NOT NULL,
Email_ID VARCHAR(50) NOT NULL,
resume_score VARCHAR(8) NOT NULL,
Timestamp VARCHAR(50) NOT NULL,
Page_no VARCHAR(5) NOT NULL,
Predicted_Field VARCHAR(25) NOT NULL,
User_level VARCHAR(30) NOT NULL,
Actual_skills VARCHAR(300) NOT NULL,
Recommended_skills VARCHAR(300) NOT NULL,
Recommended_courses VARCHAR(600) NOT NULL,
PRIMARY KEY (ID));
"""
cursor.execute(table_sql)
    if choice == 'Normal User':
        # st.markdown('''<h4 style='text-align: left; color: #d73b5c;'>* Upload your resume, and get smart recommendation based on it.</h4>''', unsafe_allow_html=True)
        pdf_file = st.file_uploader("Choose your Resume", type=["pdf"])
        if pdf_file is not None:
            # with st.spinner('Uploading your Resume....'):
            #     time.sleep(4)
            save_image_path = './Uploaded_Resumes/' + pdf_file.name
            with open(save_image_path, "wb") as f:
                f.write(pdf_file.getbuffer())
            show_pdf(save_image_path)
resume_data = ResumeParser(save_image_path).get_extracted_data()
if resume_data:
## Get the whole resume data
resume_text = pdf_reader(save_image_path)
st.header("**Resume Analysis**")
st.success("Hello " + resume_data['name'])
st.subheader("**Your Basic info**")
try:
st.text('Name: ' + resume_data['name'])
st.text('Email: ' + resume_data['email'])
st.text('Contact: ' + resume_data['mobile_number'])
st.text('Resume pages: ' + str(resume_data['no_of_pages']))
except:
pass
            cand_level = ''
            if resume_data['no_of_pages'] == 1:
                cand_level = "Fresher"
                st.markdown('''<h4 style='text-align: left; color: #d73b5c;'>You are looking Fresher.</h4>''', unsafe_allow_html=True)
            elif resume_data['no_of_pages'] == 2:
                cand_level = "Intermediate"
                st.markdown('''<h4 style='text-align: left; color: #1ed760;'>You are at intermediate level!</h4>''', unsafe_allow_html=True)
            elif resume_data['no_of_pages'] >= 3:
                cand_level = "Experienced"
                st.markdown('''<h4 style='text-align: left; color: #fba171;'>You are at experience level!</h4>''', unsafe_allow_html=True)
st.subheader("**Skills Recommendation💡**")
## Skill shows
keywords = st_tags(label='### Skills that you have',
text='See our skills recommendation',
value=resume_data['skills'], key='1')
## recommendation
            ds_keyword = ['tensorflow', 'keras', 'pytorch', 'machine learning', 'deep learning', 'flask', 'streamlit']
            web_keyword = ['react', 'django', 'node js', 'react js', 'php', 'laravel', 'magento', 'wordpress',
                           'javascript', 'angular js', 'c#', 'flask']
            android_keyword = ['android', 'android development', 'flutter', 'kotlin', 'xml', 'kivy']
            ios_keyword = ['ios', 'ios development', 'swift', 'cocoa', 'cocoa touch', 'xcode']
            uiux_keyword = ['ux', 'adobe xd', 'figma', 'zeplin', 'balsamiq', 'ui', 'prototyping', 'wireframes',
                            'storyframes', 'adobe photoshop', 'photoshop', 'editing', 'adobe illustrator',
                            'illustrator', 'adobe after effects', 'after effects', 'adobe premier pro',
                            'premier pro', 'adobe indesign', 'indesign', 'wireframe', 'solid', 'grasp',
                            'user research', 'user experience']
recommended_skills = []
reco_field = ''
rec_course = ''
## Courses recommendation
            for i in resume_data['skills']:
                ## Data science recommendation
                if i.lower() in ds_keyword:
                    print(i.lower())
                    reco_field = 'Data Science'
                    st.success("** Our analysis says you are looking for Data Science Jobs.**")
                    recommended_skills = ['Data Visualization', 'Predictive Analysis', 'Statistical Modeling',
                                          'Data Mining', 'Clustering & Classification', 'Data Analytics',
                                          'Quantitative Analysis', 'Web Scraping', 'ML Algorithms', 'Keras',
                                          'Pytorch', 'Probability', 'Scikit-learn', 'Tensorflow', 'Flask',
                                          'Streamlit']
                    recommended_keywords = st_tags(label='### Recommended skills for you.',
                                                   text='Recommended skills generated from System',
                                                   value=recommended_skills, key='2')
                    st.markdown('''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume will boost🚀 the chances of getting a Job💼</h4>''', unsafe_allow_html=True)
                ## Android App Development
                elif i.lower() in android_keyword:
                    print(i.lower())
                    reco_field = 'Android Development'
                    st.success("** Our analysis says you are looking for Android App Development Jobs **")
                    recommended_skills = ['Android', 'Android development', 'Flutter', 'Kotlin', 'XML', 'Java',
                                          'Kivy', 'GIT', 'SDK', 'SQLite']
                    recommended_keywords = st_tags(label='### Recommended skills for you.',
                                                   text='Recommended skills generated from System',
                                                   value=recommended_skills, key='4')
                    st.markdown('''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume will boost🚀 the chances of getting a Job💼</h4>''', unsafe_allow_html=True)
                ## Ui-UX Recommendation
                elif i.lower() in uiux_keyword:
                    print(i.lower())
                    reco_field = 'UI-UX Development'
                    st.success("** Our analysis says you are looking for UI-UX Development Jobs **")
                    recommended_skills = ['UI', 'User Experience', 'Adobe XD', 'Figma', 'Zeplin', 'Balsamiq',
                                          'Prototyping', 'Wireframes', 'Storyframes', 'Adobe Photoshop',
                                          'Editing', 'Illustrator', 'After Effects', 'Premier Pro', 'Indesign',
                                          'Wireframe', 'Solid', 'Grasp', 'User Research']
                    recommended_keywords = st_tags(label='### Recommended skills for you.',
                                                   text='Recommended skills generated from System',
                                                   value=recommended_skills, key='6')
                    st.markdown('''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume will boost🚀 the chances of getting a Job💼</h4>''', unsafe_allow_html=True)
            ## Insert into table
            ts = time.time()
            cur_date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
            cur_time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
            timestamp = str(cur_date + '_' + cur_time)
            ## Resume writing score: each section found adds 20 points
            resume_score = 0
            if 'Declaration' in resume_text:
                resume_score = resume_score + 20
                st.markdown('''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added Declaration✍</h4>''', unsafe_allow_html=True)
            else:
                st.markdown('''<h4 style='text-align: left; color: #fabc10;'>[-] According to our recommendation please add Declaration✍. It will give the assurance that everything written on your resume is true and fully acknowledged by you</h4>''', unsafe_allow_html=True)
            if 'Hobbies' in resume_text:
                resume_score = resume_score + 20
                st.markdown('''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added your Hobbies⚽</h4>''', unsafe_allow_html=True)
            else:
                st.markdown('''<h4 style='text-align: left; color: #fabc10;'>[-] According to our recommendation please add Hobbies⚽. It will show your personality to the Recruiters and give the assurance that you are fit for this role or not.</h4>''', unsafe_allow_html=True)
            if 'Achievements' in resume_text:
                resume_score = resume_score + 20
                st.markdown('''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added Achievements🏅</h4>''', unsafe_allow_html=True)
            else:
                st.markdown('''<h4 style='text-align: left; color: #fabc10;'>[-] According to our recommendation please add Achievements🏅. It will show that you are capable for the required position.</h4>''', unsafe_allow_html=True)
            if 'Projects' in resume_text:
                resume_score = resume_score + 20
                st.markdown('''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added Projects👨‍💻</h4>''', unsafe_allow_html=True)
            else:
                st.markdown('''<h4 style='text-align: left; color: #fabc10;'>[-] According to our recommendation please add Projects👨‍💻. It will show that you have done work related to the required position or not.</h4>''', unsafe_allow_html=True)
st.subheader("**Resume Score📝**")
st.markdown(
"""
<style>
.stProgress > div > div > div > div {
background-color: #d73b5c;
}
</style>""",
unsafe_allow_html=True,
)
my_bar = st.progress(0)
score = 0
for percent_complete in range(resume_score):
score += 1
time.sleep(0.1)
my_bar.progress(percent_complete + 1)
st.success('** Your Resume Writing Score: ' + str(score) + '**')
            st.warning("** Note: This score is calculated based on the content that you have added in your Resume. **")
st.balloons()
resume_vid = random.choice(resume_videos)
res_vid_title = fetch_yt_video(resume_vid)
connection.commit()
else:
st.error('Something went wrong..')
else:
## Admin Side
st.success('Welcome to Admin Side')
# st.sidebar.subheader('**ID / Password Required!**')
ad_user = st.text_input("Username")
ad_password = st.text_input("Password", type='password')
if st.button('Login'):
if ad_user == 'machine_learning_hub' and ad_password == 'mlhub123':
st.success("Welcome Kushal")
# Display Data
cursor.execute('''SELECT*FROM user_data''')
data = cursor.fetchall()
st.header("**User's👨💻 Data**")
            df = pd.DataFrame(data, columns=['ID', 'Name', 'Email', 'Resume Score', 'Timestamp',
                                             'Total Page', 'Predicted Field', 'User Level',
                                             'Actual Skills', 'Recommended Skills',
                                             'Recommended Course'])
            st.dataframe(df)
            st.markdown(get_table_download_link(df, 'User_Data.csv', 'Download Report'), unsafe_allow_html=True)
## Admin Side Data
query = 'select * from user_data;'
plot_data = pd.read_sql(query, connection)
## Pie chart for predicted field recommendations
labels = plot_data.Predicted_Field.unique()
print(labels)
values = plot_data.Predicted_Field.value_counts()
print(values)
else:
st.error("Wrong ID & Password Provided")
run()
Courses.py
    ['Programming for Data Science with Python', 'https://www.udacity.com/course/programming-for-data-science-nanodegree--nd104'],
    ['Programming for Data Science with R', 'https://www.udacity.com/course/programming-for-data-science-nanodegree-with-R--nd118'],
    ['Introduction to Data Science', 'https://www.udacity.com/course/introduction-to-data-science--cd0017'],
    ['Intro to Machine Learning with TensorFlow', 'https://www.udacity.com/course/intro-to-machine-learning-with-tensorflow-nanodegree--nd230']]
    ['Building an Android App with Architecture Components', 'https://www.linkedin.com/learning/building-an-android-app-with-architecture-components'],
    ['Android App Development Masterclass using Kotlin', 'https://www.udemy.com/course/android-oreo-kotlin-app-masterclass/'],
    ['Flutter & Dart - The Complete Flutter App Development Course', 'https://www.udemy.com/course/flutter-dart-the-complete-flutter-app-development-course/'],
    ['Flutter App Development Course [Free]', 'https://youtu.be/rZLR5olMR64']]
    ['Adobe XD Tutorial: User Experience Design Course [Free]', 'https://youtu.be/68w2VwalD5w'],
    ['Adobe XD for Beginners [Free]', 'https://youtu.be/WEljsc2jorI'],
    ['Adobe XD in Simple Way', 'https://learnux.io/course/adobe-xd']]
resume_videos = ['https://youtu.be/y8YH0Qbu5h4', 'https://youtu.be/J-4Fv8nq1iA',
                 'https://youtu.be/yp693O87GmM', 'https://youtu.be/UeMmCex9uTU',
                 'https://youtu.be/dQ7Q8ZdnuN0', 'https://youtu.be/HQqqQx5BCFY',
                 'https://youtu.be/CLUsplI4xMU', 'https://youtu.be/pbczsLkv7Cc']
interview_videos = ['https://youtu.be/Ji46s5BHdr0', 'https://youtu.be/seVxXHi2YMs',
                    'https://youtu.be/9FgfsLa_SmY', 'https://youtu.be/2HQmjLu-6RQ',
                    'https://youtu.be/DQd_AlIvHUw', 'https://youtu.be/oVVdezJ0e7w',
                    'https://youtu.be/JZK1MZwUyUU', 'https://youtu.be/CyXLhHQS3KY']