
AI RESUME ANALYZER

A PROJECT REPORT

Submitted by

NAME : M.SURIYA

REGNO : 0121127046

in partial fulfilment of the requirements

for the award of the degree

of

BACHELOR

OF

COMPUTER APPLICATIONS

Under the Guidance of

Mrs. S.VIGNESHNARTHI M.Sc.,B.Ed.,M.Phil

DEPARTMENT OF COMPUTER APPLICATIONS


ALAGAPPA GOVERNMENT ARTS COLLEGE
KARAIKUDI-630 003.

(Grade- I college and Re-accredited with “B” Grade by NAAC)

(APRIL 2024)
ALAGAPPA GOVERNMENT ARTS COLLEGE, KARAIKUDI-630 003.
(Grade- I college and Re-accredited with “B” Grade by NAAC)

DEPARTMENT OF COMPUTER APPLICATIONS

BONAFIDE CERTIFICATE

This is to certify that the project entitled “AI RESUME ANALYZER” is the
bonafide work done by M. SURIYA (0121127046), who carried out the project work under my
supervision and submitted it during the academic year 2021-2024.

The Viva-Voce was held on ……………………

INTERNAL EXAMINER EXTERNAL EXAMINER

HEAD OF THE DEPARTMENT


ACKNOWLEDGEMENT

I thank God Almighty for His mercy towards me in completing this project successfully.

I wish to extend my thanks to our Principal, Dr. A. PETHALAKSHMI, M.Sc., M.Phil., Ph.D.,
Alagappa Government Arts College, Karaikudi, who provided all the facilities needed to carry
out the project successfully.

I wish to thank Dr. K.C. CHANDRASEKARAN, M.Sc., M.Phil., PGDCA., Ph.D., Head of the
Department, Department of Computer Applications, Alagappa Government Arts College,
Karaikudi, for approving and allowing me to carry out this project work.

I express my sincere thanks to my respected guide, Mrs. S. VIGNESHNARTHI, M.Sc., B.Ed.,
M.Phil., Department of Computer Applications, Alagappa Government Arts College, Karaikudi,
for her valuable suggestions and encouragement given throughout the project.

I would also like to thank all the staff members and lab assistants of the Department of
Computer Applications, who helped me during the project.

I wish to express my heartfelt thanks to my parents, who always encourage me on the path to
success.

Finally, I wish to thank my friends for their constant encouragement and valuable suggestions.
ABSTRACT

The Resume Analyzer is an advanced tool designed to streamline and enhance the recruitment
process by automating the analysis of job applicants' resumes. This innovative system
leverages natural language processing (NLP) and machine learning (ML) techniques to extract
valuable insights from resumes, providing recruiters with a comprehensive overview of
candidates' qualifications and suitability for specific roles.

The system utilizes advanced NLP algorithms to parse and understand the content of resumes,
extracting key information such as education, work experience, skills, and certifications. It
also applies machine learning models to match candidates' skills with job requirements,
improving the accuracy of identifying relevant keywords and industry-specific terminology.
TABLE OF CONTENTS

ACKNOWLEDGMENT

ABSTRACT PAGE

1. INTRODUCTION

1.1 Description 1
1.2 Existing system 2
1.3 Proposed system 5

2. SYSTEM DESIGN

2.1 Architectural design 8


2.2 Module description 10

3. SYSTEM IMPLEMENTATION 12

3.1 Software 15

4. SYSTEM TESTING 16

5. FUTURE ENHANCEMENT 21

6. CONCLUSION 23

REFERENCES 24

APPENDIX I
i) Hardware Requirements 25
ii) Software Requirements 25

APPENDIX II

iii) Sample screen shots 26


iv) Sample source coding 35
1. INTRODUCTION
1.1 DESCRIPTION

The Resume Analyzer is an intelligent and efficient tool designed to revolutionize the
recruitment process. Leveraging state-of-the-art natural language processing (NLP) and
machine learning (ML) technologies, this platform is dedicated to automating the
meticulous task of resume analysis, enabling recruiters to make well-informed decisions
with speed and precision.

Our system employs advanced NLP algorithms to meticulously extract key information
from resumes, including education, work experience, skills, and certifications.

Utilizing machine learning models, the Resume Analyzer intelligently matches candidate
skills with job requirements, ensuring a more accurate and efficient screening process.

Through sophisticated semantic analysis, the platform goes beyond keyword matching to
understand the contextual meaning of sentences, uncovering nuanced details about
candidates' achievements and responsibilities.

The customizable scoring system allows recruiters to prioritize candidates based on specific
criteria, ensuring a tailored approach that aligns with the unique requirements of each role.

1.2 EXISTING SYSTEM

1. Talentsoft Resume Parser: Talentsoft offers a resume parsing tool that uses natural
language processing to extract information from resumes and CVs. It focuses on
capturing essential details like skills, education, work experience, and contact
information.

2. Textkernel: Textkernel provides a suite of HR and recruitment tools, including a resume
parsing solution. Their technology uses machine learning to extract and categorize
information from resumes, offering a detailed analysis of candidates' qualifications.

3. SmartRecruiters: SmartRecruiters is an applicant tracking system (ATS) that includes
resume parsing capabilities. It helps recruiters and hiring managers automate the extraction
of candidate information, making the screening process more efficient.

4. Jobscan: Jobscan is a resume optimization tool that allows job seekers to tailor their
resumes for specific job descriptions. While it is primarily focused on helping
candidates improve their resumes, it indirectly contributes to the analysis process by
optimizing content for applicant tracking systems.

5. Workday Recruiting: Workday, an enterprise cloud suite, includes a recruiting module with
features for parsing and analyzing resumes. It streamlines the hiring process by automating
various tasks, including resume analysis and candidate evaluation.

6. Zoho Recruit: Zoho Recruit is an ATS that integrates resume parsing functionality. It
extracts data from resumes, such as skills, education, and work experience, making it
easier for recruiters to manage candidate information.

7. LinkedIn Talent Hub: LinkedIn offers a suite of tools for talent acquisition, and LinkedIn
Talent Hub includes features for parsing and analyzing resumes. It allows recruiters to
review candidate profiles, gather insights, and make data-driven decisions.

DISADVANTAGES

1. Bias and Fairness Concerns: Resume analyzers can inadvertently perpetuate biases
present in the training data. If the training data is biased, the system may favor certain
demographics over others, leading to potential discrimination in the hiring process.

2. Inability to Assess Soft Skills: Many resume analyzers focus on extracting quantitative
data such as skills and experiences, but they may struggle to accurately assess soft skills,
interpersonal abilities, and other qualitative attributes that are crucial for certain roles.

3. Over-Reliance on Keywords: Some resume analyzers heavily rely on keyword matching,
potentially overlooking candidates who express their skills and experiences using different
terminology or in a more context-specific manner.

4. Lack of Contextual Understanding: Resume analyzers may struggle with understanding the
contextual nuances of certain industries or job roles. They might misinterpret the
significance of certain achievements or responsibilities due to a lack of contextual
understanding.

5. Limited Adaptability to Industry Jargon: Industries often have unique jargon or
terminology. Resume analyzers may not be sufficiently adaptable to understand and accurately
interpret industry-specific language or terms.

6. Data Privacy and Security Concerns: Handling sensitive personal information in resumes
raises concerns about data privacy and security. Ensuring compliance with privacy regulations
and safeguarding candidate data is crucial to avoid legal and ethical issues.

7. Inability to Gauge Cultural Fit: Assessing cultural fit within an organization is a
complex task that goes beyond the information present in a resume. Resume analyzers may not
be equipped to evaluate whether a candidate aligns with the company's culture and values.

8. Inaccuracy in Parsing: Despite advancements in natural language processing, resume
analyzers may still encounter challenges in accurately parsing complex or poorly formatted
resumes, leading to errors in data extraction.

9. Exclusion of Non-Traditional Resumes: Individuals with non-traditional career paths or
experiences may not fit neatly into the templates recognized by resume analyzers, potentially
excluding qualified candidates with diverse backgrounds.

10. Overlooking Career Progression: Resume analyzers may not always effectively capture the
progression and growth of a candidate's career, particularly if they have changed industries
or roles.

1.3 PROPOSED SYSTEM

1. Intelligent Resume Parsing: Utilizes NLP techniques for precise extraction of relevant
information, including education, work experience, skills, and certifications. Ensures
accurate and comprehensive data capture from various resume formats.

2. Dynamic Skill Matching: Implements a dynamic skill matching system that adapts to evolving
job requirements and industry trends. Utilizes machine learning models to continuously
enhance the accuracy of skill-to-job matching.

3. Contextual Semantic Analysis: Employs advanced semantic analysis to comprehend the
contextual meaning of sentences and phrases. Enhances understanding of candidates'
achievements, responsibilities, and the impact of their contributions.

4. Personalized Scoring Mechanism: Introduces a personalized scoring mechanism that allows
recruiters to define and prioritize specific criteria based on the unique needs of each role,
and provides flexibility to adjust scoring weights according to the organization's hiring
priorities.

5. Bias Detection and Mitigation: Incorporates proactive measures to detect and mitigate
biases in the analysis process. Regularly updated bias detection algorithms ensure fairness
and promote diversity in the hiring process.

6. Interactive User Interface: Features an intuitive and user-friendly interface for
recruiters to navigate through analyzed data. Presents actionable insights in a visually
appealing format, facilitating quick and informed decision-making.

7. Seamless ATS Integration: Integrates seamlessly with popular Applicant Tracking Systems,
ensuring a cohesive and streamlined recruitment workflow. Maintains compatibility with
existing tools to minimize disruption during implementation.

8. Time Efficiency: Reduces manual effort in resume evaluation, allowing recruiters to focus
on strategic decision-making. Accelerates the screening process, enabling quicker
identification of top candidates.

9. Enhanced Accuracy: Improves the accuracy of candidate evaluation through advanced NLP and
ML algorithms. Minimizes the risk of overlooking qualified candidates by providing a holistic
view of their qualifications.

10. Fair and Inclusive Hiring: Mitigates biases in the hiring process through continuous
monitoring and adjustment of algorithms. Promotes diversity by ensuring a fair and unbiased
evaluation of all candidates.

11. Adaptability and Customization: Adapts to evolving job market trends and industry
requirements. Provides customizable features that align with the unique hiring needs of
organizations.

ADVANTAGES

1. Time Efficiency: Automation reduces the time spent manually reviewing resumes by
automating the parsing and analysis of candidate documents. Quick screening enables
recruiters to rapidly identify relevant candidates and prioritize their efforts on the most
promising profiles.

2. Improved Accuracy: Natural language processing (NLP) algorithms accurately extract and
understand information from resumes, minimizing the risk of human error in data
interpretation. Consistency ensures a standardized evaluation process for all applicants,
reducing the likelihood of overlooking crucial details.

3. Enhanced Objectivity: Bias-reduction algorithms are designed to detect and mitigate
biases, promoting fair and unbiased evaluations. Data-driven decisions provide objective
insights based on predefined criteria, reducing the influence of subjective biases in the
screening process.

4. Customization and Flexibility: Tailored criteria allow organizations to define and
customize evaluation rules, ensuring alignment with specific job requirements and company
values. Scalability lets the system adapt to different industries, job roles, and evolving
hiring needs, making it a versatile solution for diverse organizations.

5. Increased Productivity: By freeing up recruiters' time, the system allows them to
concentrate on more strategic and value-added aspects of the recruitment process, such as
candidate interviews and relationship building.

2. SYSTEM DESIGN

2.1 ARCHITECTURAL DESIGN:

1. System Components: Define the main components of the system, including the front
end (user interface), back end (server-side processing), and database for storing
resume data and analysis results.

2. Microservices Architecture: Adopt a microservices architecture to ensure scalability,
flexibility, and maintainability. Decompose the system into smaller, loosely coupled services
that can be independently deployed and managed.

3. NLP Pipeline: Design a robust NLP pipeline for parsing and understanding resume
content. This pipeline should include modules for text preprocessing, entity
recognition (e.g., extracting education, work experience, skills), and sentiment
analysis (if necessary).

4. Machine Learning Models: Implement machine learning models for skill and
keyword matching. Train these models using labeled data to accurately match
candidates' skills with job requirements and improve the identification of relevant
keywords.

5. API Integration: Create APIs to facilitate seamless integration with external systems
such as job boards or applicant tracking systems (ATS). These APIs should enable
data exchange and trigger analysis processes.

6. Scalability and Performance: Ensure the system is designed to handle large volumes
of resume data efficiently. Employ techniques such as load balancing, caching, and
horizontal scaling to optimize performance and scalability.

7. Security Measures: Implement robust security measures to protect sensitive data contained
within resumes and ensure compliance with data privacy regulations. This includes encryption
of data at rest and in transit, access controls, and regular security audits.

8. Monitoring and Logging: Set up monitoring and logging mechanisms to track
system performance, identify issues proactively, and troubleshoot errors. Utilize tools
like Prometheus for monitoring and ELK stack (Elasticsearch, Logstash, Kibana) for
logging and log analysis.

9. Continuous Integration/Continuous Deployment (CI/CD): Establish a CI/CD pipeline to
automate the build, testing, and deployment processes. This ensures rapid and reliable
delivery of updates and new features while maintaining system stability.

2.2 MODULE DESCRIPTION

1. User Interface Module: This module is responsible for providing an intuitive and
user-friendly interface for recruiters to interact with the system. It includes features
such as uploading resumes, viewing analysis results, and managing candidate profiles.

2. NLP Processing Module: This module incorporates advanced natural language processing
(NLP) algorithms to parse and understand the content of resumes. It handles text
preprocessing, entity recognition (e.g., education, work experience, skills), and sentiment
analysis to extract key information accurately.

3. Database Module: The database module manages the storage and retrieval of resume
data and analysis results. It includes functionalities for CRUD operations on candidate
profiles, storing metadata, and maintaining data integrity.

4. Machine Learning Module: This module contains machine learning models for skill
and keyword matching. It leverages labeled data to train models that accurately match
candidates' skills with job requirements and improve keyword identification.

5. API Integration Module: Responsible for integrating with external systems such as
job boards or applicant tracking systems (ATS). It provides APIs for data exchange,
enabling seamless integration with third-party platforms.

6. Scalability Module: Ensures the system can scale to handle large volumes of resume
data efficiently. It includes features such as load balancing, caching, and horizontal
scaling to optimize performance and accommodate increased demand.

7. Security Module: This module implements security measures to protect sensitive data
contained within resumes and ensure compliance with data privacy regulations. It includes
encryption, access controls, and auditing functionalities.

8. Monitoring and Logging Module: Monitors system performance and logs events for
analysis and troubleshooting. It integrates with monitoring tools for real-time
performance monitoring and the ELK stack for centralized logging and log analysis.

9. CI/CD Module: Implements continuous integration/continuous deployment (CI/CD) pipelines
for automating the build, testing, and deployment processes. It ensures rapid and reliable
delivery of updates and new features while maintaining system stability.

3. SYSTEM IMPLEMENTATION

In the context of the "AI RESUME ANALYZER" project, system implementation involves the
development of code to instruct the computer on how to perform specific tasks. At its core, a
computer comprehends only the binary language of on and off switches, or transistors,
represented by the digits 1 and 0. The intricate capabilities of a computer emerge from
countless combinations of these binary codes.

To bridge the gap between human-readable commands and binary code, computer programming
languages have been created. These languages, such as Python and SQL for the back end,
provide a medium for programmers to articulate essential commands in a more understandable
syntax.

In the specific case of our "AI RESUME ANALYZER" project, Python serves as the backbone for
the system's logic and functionality, while Streamlit is used for crafting the user
interface, ensuring an interactive and visually appealing experience with dynamic elements.

Additionally, the integration of NLP libraries such as NLTK and spaCy brings advanced natural
language processing capabilities to the system, allowing the application to comprehend resume
content and respond to user queries.

The amalgamation of these languages and libraries results in a cohesive and intelligent
system capable of handling resume documents. Through this implementation, the system provides
users with an interactive interface, efficient information extraction from PDFs, and
additional features such as skill and course recommendations. The diversity of tools used in
this project showcases their respective strengths, offering a comprehensive solution for the
dynamic requirements of the "AI RESUME ANALYZER" system.

1. NLP Processing: Python offers numerous libraries and frameworks for NLP tasks,
such as NLTK (Natural Language Toolkit), spaCy, and Gensim. These libraries
provide functionalities for text preprocessing, entity recognition, and sentiment
analysis, essential for parsing and understanding resume content.
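
As an illustrative sketch of this idea (assuming the en_core_web_sm model is installed; the
skill dictionary and sample text below are hypothetical, not the project's actual data),
resume entities and skills could be extracted with spaCy as follows:

import spacy

nlp = spacy.load("en_core_web_sm")  # small English model, assumed to be installed

# Hypothetical skill dictionary used only for this sketch
SKILLS = {"python", "sql", "machine learning", "flask", "streamlit"}

def extract_resume_info(resume_text):
    """Extract person/organisation entities via NER and skills via dictionary matching."""
    doc = nlp(resume_text)
    entities = {"PERSON": [], "ORG": []}
    for ent in doc.ents:
        if ent.label_ in entities:
            entities[ent.label_].append(ent.text)
    lowered = resume_text.lower()
    skills_found = sorted(s for s in SKILLS if s in lowered)
    return {"entities": entities, "skills": skills_found}

print(extract_resume_info("M. Suriya worked at Example Corp using Python, SQL and Flask."))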

2. Machine Learning Model Development: Python is widely used in ML and data science
due to libraries like scikit-learn, TensorFlow, and PyTorch. These libraries enable the
development and training of machine learning models for tasks such as skill and
keyword matching, leveraging algorithms like linear regression, decision trees, or
neural networks.
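
As a hedged sketch of one common approach (TF-IDF vectors plus cosine similarity from
scikit-learn; the resume and job texts are made-up examples, not project data), skill-to-job
matching might be scored like this:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Made-up texts for illustration only
resume = "Experienced in Python, machine learning, Flask APIs and SQL databases."
job_description = "Looking for a Python developer with machine learning and SQL skills."

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform([resume, job_description])

# Cosine similarity between the resume vector and the job-description vector
score = cosine_similarity(tfidf[0], tfidf[1])[0][0]
print(f"Match score: {score:.2f}")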

3. API Development: Python's simplicity and readability make it an ideal choice for
developing APIs. Frameworks like Flask or Django can be used to create RESTful APIs for
communication between different modules of the system, allowing for seamless integration and
data exchange.
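
A minimal Flask sketch of such an API is shown below; the /analyze endpoint name and the JSON
fields are hypothetical choices made only for illustration:

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])  # hypothetical endpoint name
def analyze():
    """Accept resume text as JSON and return extracted fields."""
    data = request.get_json(force=True)
    resume_text = data.get("resume_text", "")
    # Placeholder analysis; a real service would call the NLP/ML modules here
    result = {"length": len(resume_text), "skills": []}
    return jsonify(result)

if __name__ == "__main__":
    app.run(debug=True)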

4. Database Integration: Python provides libraries for interacting with databases, such as
SQLAlchemy for relational databases like PostgreSQL, as well as drivers for NoSQL databases
like MongoDB. These libraries facilitate database setup, data retrieval, and management of
candidate profiles within the system.
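
The following sketch shows how a candidate table might be defined and written with
SQLAlchemy; the Candidate model, its columns, and the use of a local SQLite file (instead of
the production SQL database) are assumptions made only so the example is self-contained:

from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, sessionmaker

Base = declarative_base()

class Candidate(Base):            # hypothetical table for illustration
    __tablename__ = "candidates"
    id = Column(Integer, primary_key=True)
    name = Column(String(100))
    email = Column(String(100))
    skills = Column(String(500))  # comma-separated skills for simplicity

# SQLite is used here only so the sketch runs without a database server
engine = create_engine("sqlite:///resumes.db")
Base.metadata.create_all(engine)
Session = sessionmaker(bind=engine)

with Session() as session:
    session.add(Candidate(name="Example User", email="user@example.com", skills="python,sql"))
    session.commit()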

5. Scalability and Performance Optimization: Python's extensive ecosystem includes tools and
libraries for optimizing system performance, such as asyncio for asynchronous processing and
Celery for task queuing. These tools can enhance scalability and efficiency, enabling the
system to handle large volumes of resume data effectively.
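
As a small illustration of asynchronous processing with asyncio (the analysis step here is a
placeholder sleep, not the real parsing logic), several resumes could be handled
concurrently:

import asyncio

async def analyze_resume(name):
    # Placeholder for an I/O-bound analysis step (e.g., calling a parsing service)
    await asyncio.sleep(0.1)
    return f"{name}: analysed"

async def main():
    resumes = ["resume_1.pdf", "resume_2.pdf", "resume_3.pdf"]
    results = await asyncio.gather(*(analyze_resume(r) for r in resumes))
    print(results)

asyncio.run(main())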

6. Security Implementation: Python supports various security measures, including encryption
libraries like cryptography for data encryption and authentication frameworks like
Flask-Security for API security. These tools can be used to implement robust security
measures to protect sensitive data stored within the system.
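
A minimal sketch of encrypting resume data at rest with the cryptography package's Fernet
recipe is shown below; key management is simplified here and would be handled by a secrets
store in practice:

from cryptography.fernet import Fernet

# In practice the key would come from a secrets manager, not be generated per run
key = Fernet.generate_key()
fernet = Fernet(key)

resume_snippet = "Name: Example User, Email: user@example.com"
token = fernet.encrypt(resume_snippet.encode())   # ciphertext safe to store
original = fernet.decrypt(token).decode()         # decrypt when needed

print(original)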

7. Monitoring and Logging: Python offers logging facilities such as the built-in logging
module and third-party libraries like Loguru for configuring logging and capturing system
events. Additionally, Python can be used to integrate with monitoring platforms like
Prometheus for real-time monitoring and alerting.
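
A basic configuration of Python's built-in logging module, as one possible setup for
capturing system events, might look like this:

import logging

# Basic configuration; a real deployment might ship these logs to the ELK stack
logging.basicConfig(
    filename="resume_analyzer.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)
logger = logging.getLogger("resume_analyzer")

logger.info("Resume upload received: %s", "sample_resume.pdf")
logger.warning("Could not extract email address from resume")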

8. CI/CD Pipeline Implementation: Python can be utilized in CI/CD pipelines for tasks such as
automated testing, code quality checks, and deployment scripts. Testing frameworks like
pytest or unittest can be used for automated testing, while CI/CD tools like Jenkins or
GitHub Actions can automate the build, testing, and deployment processes.

3.1 SOFTWARE

PYTHON

Python is a high-level, interpreted programming language known for its


simplicity, readability, and versatility. It was created by Guido van Rossum
and first released in 1991. Python has gained widespread popularity and is
widely used in various domains, including web development, data science,
artificial intelligence, machine learning, automation, and scientific computing.

4. SYSTEM TESTING

Testing is a series of different tests whose primary purpose is to fully exercise the
computer-based system. Although each test has a different purpose, all of them should verify
that every system element has been properly integrated and performs its allocated function.
Testing is the process of checking whether the developed system works according to the actual
requirements and objectives of the system. The philosophy behind testing is to find errors. A
good test is one that has a high probability of finding an undiscovered error. A successful
test is one that uncovers an undiscovered error. Test cases are devised with this purpose in
mind. A test case is a set of data that the system will process as an input.

Types of Testing

System testing

After a system has been verified, it needs to be thoroughly tested to ensure that every
component of the system is performing in accordance with the specified requirements and that
it is operating as it should, including when the wrong functions are requested or the wrong
data is introduced.

Testing measures consist of developing a set of test criteria either for the entire system or
for specific hardware, software and communications components. For an important and sensitive
system, a structured system testing program may be established to ensure that all aspects of
the system are thoroughly tested.

Testing measures that could be followed include:

• Applying functional tests to determine whether the test criteria have been met.

• Applying qualitative assessments to determine whether the test criteria have been met.

• Conducting tests in “laboratory” conditions and conducting tests in a variety of “real
life” conditions.

• Conducting tests over an extended period of time to ensure systems can perform
consistently.

• Conducting “load tests”, simulating as closely as possible the likely conditions, using or
exceeding the amounts of data that can be expected to be handled in an actual situation.

Test measures for hardware may include:

• Applying “non-operating” tests to ensure that equipment can stand up to expected levels of
physical handling.

• Testing “hard wired” code in hardware (firmware) to ensure its logical correctness and that
appropriate standards are followed.

Tests for software components include:

• Testing all programs to ensure their logical correctness and that appropriate design,
development and implementation standards have been followed.

• Conducting “load tests”, simulating as closely as possible a variety of “real life”
conditions, using or exceeding the amounts of data that could be expected in an actual
situation.

• Verifying that the integrity of data is maintained throughout its required manipulation.

Unit Testing:

Unit testing is a software testing technique that involves evaluating individual units or
components of a software application in isolation to ensure they function as intended.
The primary goal of unit testing is to validate that each unit of the software performs
correctly and as expected. Here are some key points to elaborate on unit testing:

 Scope: Unit testing focuses on testing the smallest testable parts of a software
application, known as "units" or "components." These units can be functions, methods, or
classes.

 Isolation: Each unit is tested in isolation, meaning that dependencies external to the
unit are either eliminated or replaced with simulated objects or "mocks." This isolation
ensures that the test results accurately reflect the behavior of the specific unit being
tested.

 Automation: Unit tests are typically automated, allowing for frequent and
consistent execution throughout the development process. Automation ensures
that tests can be run quickly and effortlessly, facilitating continuous integration
and delivery practices.

 Early Detection of Bugs: Unit testing helps in the early detection of defects or
bugs in the code. By identifying issues at the unit level, developers can address
problems before they escalate to more complex and costly integration or system-
level issues.

 Maintainability: Unit tests contribute to code maintainability by serving as a form of
documentation. New developers can understand the expected behavior of units by reviewing unit
tests. Additionally, as code evolves, unit tests can be rerun to verify that existing
functionality remains intact.

 Regression Testing: Unit tests act as a form of regression testing, ensuring that changes
or enhancements to the codebase do not introduce new issues or break existing functionality.
This is particularly important in agile development environments where frequent changes are
made.

 Framework and Tools: Various unit testing frameworks and tools are available
for different programming languages. Examples include JUnit for Java, NUnit
for .NET, and pytest for Python. These frameworks provide a structure for
organizing and executing tests.

 Test Cases: Unit tests consist of individual test cases that cover different aspects
of the unit's behavior. Test cases are designed to validate specific inputs, outputs,
and edge cases, ensuring comprehensive coverage.

 Continuous Integration (CI): Unit testing is often integrated into the CI/CD
(Continuous Integration/Continuous Delivery) pipeline. This integration allows
for automated testing whenever changes are committed to the version control
system, providing rapid feedback to developers.

 Red-Green-Refactor: Unit testing follows the Red-Green-Refactor cycle. Initially, a test
is written and executed (Red), then code is written to make the test pass (Green), and
finally, the code and tests are refactored for clarity and efficiency.

By employing unit testing, developers can enhance the reliability, maintainability, and
overall quality of software applications, contributing to a more robust and efficient
development process.
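
As a small illustration of this style using pytest (the extract_skills helper below is
hypothetical, written only for the example), a unit test might look like:

def extract_skills(text):
    """Hypothetical helper: return the known skills mentioned in the text."""
    known = {"python", "sql", "flask"}
    return {skill for skill in known if skill in text.lower()}

def test_extract_skills_finds_known_skills():
    assert extract_skills("Worked with Python and SQL daily") == {"python", "sql"}

def test_extract_skills_handles_empty_text():
    assert extract_skills("") == set()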

Integration Testing:

Integration Testing is a phase in software testing where individual modules or components of
a software application are combined and tested as a group. These modules can include code
modules, individual applications, or even source and destination applications on a network.
The primary objective of integration testing is to ensure that these combined components work
seamlessly together and function as intended when integrated. This phase follows unit testing
and precedes system testing in the software development life cycle.

During integration testing, the interactions between various modules are examined to detect
any potential issues arising from the combination of these individual elements. The goal is to
identify and address integration-related problems, such as data flow issues, communication
errors, or inconsistencies in how different modules collaborate.

Integration testing is a crucial step in the testing process as it verifies the correct functioning
of the software when different components come together. This helps in uncovering defects
that may not be apparent during unit testing, where components are tested in isolation.

The testing process generally progresses from unit testing (where individual modules are
tested independently) to integration testing, ensuring that each step in the development cycle
is validated before moving on to broader system testing. Additionally, integration testing
plays a pivotal role in preparing for system testing by validating that the integrated
components form a cohesive and functional unit.

Integration testing can be performed using various approaches, including top-down, bottom-
up, or a combination of both. In a top-down approach, testing starts from the higher-level
modules down to the lower-level ones, while in a bottom-up approach, lower-level modules
are tested first and gradually integrated to form larger units.

In the context of software releases, integration testing is conducted after the product is code
complete and has undergone unit testing. It is a critical step in ensuring the stability and
reliability of the overall software system before progressing to more extensive system testing.
Integration testing helps developers identify and fix issues early in the development process,
contributing to the creation of a robust and well-functioning software application.

5. FUTURE ENHANCEMENT

For future enhancements to the AI Resume Analyzer project, several avenues for
improvement and expansion can be explored. Here are some ideas:

1. Semantic Analysis: Enhance the NLP capabilities by incorporating semantic analysis
techniques to better understand the context and meaning of resume content. This could involve
utilizing pre-trained language models like BERT or GPT to extract deeper insights from text
data.

2. Multimodal Analysis: Integrate support for analyzing not only textual resumes but
also other formats such as PDFs, images, or videos. This could involve implementing
optical character recognition (OCR) for extracting text from images and videos and
incorporating image recognition techniques for analyzing visual content.

3. Personalized Recommendations: Implement a recommendation system that provides personalized
recommendations to recruiters based on their preferences, past hiring decisions, and the
success of previous hires. This could involve analyzing historical hiring data and candidate
performance metrics to identify patterns and make informed recommendations.

4. Bias Detection and Mitigation: Develop algorithms to detect and mitigate biases in
the recruitment process, ensuring fair and equitable treatment of candidates. This
could involve analyzing historical hiring data for patterns of bias and implementing
measures to mitigate bias in resume analysis and candidate selection.

5. Integration with External Datasets: Integrate with external datasets such as job
market trends, salary data, and industry-specific benchmarks to provide additional
context and insights to recruiters. This could involve scraping data from job boards,
salary websites, and industry reports and incorporating this data into the analysis
process.

6. Natural Language Generation (NLG): Implement NLG techniques to automatically
generate personalized feedback and recommendations for candidates based on the
analysis results. This could involve generating customized feedback reports
highlighting strengths, areas for improvement, and suggested career paths for
candidates.

7. Real-time Analysis and Alerts: Enhance the system to provide real-time analysis of
incoming resumes and generate alerts for recruiters based on predefined criteria. This
could involve implementing streaming data processing techniques and integrating
with communication platforms like email or messaging apps to deliver timely alerts to
recruiters.

8. Advanced Analytics and Reporting: Develop advanced analytics and reporting features that
provide insights into recruitment metrics such as time-to-hire, candidate conversion rates,
and source effectiveness. This could involve visualizing data using dashboards and
implementing analytics algorithms to identify trends and patterns in recruitment data.
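
As a hedged sketch of the semantic analysis idea in point 1 above (assuming the
sentence-transformers package and a small pre-trained embedding model are available; the
sentences are made-up examples), semantic similarity between a resume line and a job
requirement could be scored as follows:

from sentence_transformers import SentenceTransformer, util

# Small pre-trained embedding model; downloads on first use
model = SentenceTransformer("all-MiniLM-L6-v2")

resume_line = "Built REST services in Python and deployed ML models to production."
job_line = "Seeking a backend engineer with Python and machine learning experience."

embeddings = model.encode([resume_line, job_line], convert_to_tensor=True)
similarity = util.cos_sim(embeddings[0], embeddings[1]).item()
print(f"Semantic similarity: {similarity:.2f}")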

6. CONCLUSION

In the fast-paced landscape of modern recruitment, the AI Resume Analyzer emerges as a beacon
of innovation, revolutionizing traditional hiring practices through the seamless integration
of cutting-edge technologies. This project, built upon the foundation of natural language
processing (NLP) and machine learning (ML), signifies a paradigm shift in how organizations
approach candidate evaluation and selection.

At its core, the AI Resume Analyzer embodies efficiency, accuracy, and objectivity. By
automating the analysis of job applicants' resumes, it liberates recruiters from the arduous
task of manual screening, empowering them to allocate their time and resources more
strategically. Through advanced NLP algorithms and ML models, the system delivers
comprehensive insights into candidates' qualifications, skills, and suitability for specific
roles, thereby minimizing the risk of human bias and subjective decision-making.

Moreover, the AI Resume Analyzer transcends mere functionality, offering a transformative
candidate experience. Job applicants benefit from timely feedback and transparent
communication, fostering a sense of engagement and trust throughout the recruitment journey.
With its modular architecture and scalability features, the system adapts seamlessly to the
needs of organizations of all sizes and industries, ensuring flexibility and customization to
suit diverse recruitment requirements.

As a catalyst for continuous improvement, the AI Resume Analyzer empowers recruiters to
iterate, refine, and optimize recruitment processes. Feedback mechanisms and real-time
analytics facilitate data-driven decision-making, enabling recruiters to fine-tune analysis
criteria, algorithms, and workflows based on actionable insights and performance metrics.
This iterative approach not only enhances the efficacy of recruitment efforts but also drives
organizational success by attracting top talent and fostering a culture of innovation and
excellence.

REFERENCES

[1] GitHub repository: https://github.com/deepakpadhi986/AI-Resume-Analyzer
[2] Reference video: https://youtu.be/CbpsDMwFG2g?si=z9DmKn7rhKcgmwH_
[3] Reference video: https://youtu.be/hqu5EYMLCUw?si=y9m0s6ZT894k0orW
[4] Python documentation: https://docs.python.org/3/

APPENDIX I

The following will be needed in order to complete this project

i) HARDWARE REQUIREMENTS

• Processor : 223 MHz
• RAM : 128 MB SD
• Hard Disk : 2-4 GB
• Compact Disk : 4x
• Monitor : 640x480 display

ii) SOFTWARE REQUIREMENTS


• Front End : PYTHON
• Back End : SQL

APPENDIX II
iii) SAMPLE SCREENSHOTS

Run the application using the command: streamlit run app.py

iv) SAMPLE SOURCE CODING

Under the project folder, create a Python file named app.py

import streamlit as st
import nltk
import spacy
nltk.download('stopwords')
spacy.load('en_core_web_sm')

import pandas as pd
import base64, random
import time, datetime
from pyresparser import ResumeParser
import pdfminer3.layout
from pdfminer3.pdfpage import PDFPage
from pdfminer3.pdfinterp import PDFResourceManager
from pdfminer3.pdfinterp import PDFPageInterpreter
from pdfminer3.converter import TextConverter
import io, random
from streamlit_tags import st_tags
from PIL import Image
import pymysql
from Courses import ds_course, web_course, android_course, ios_course, uiux_course, resume_videos, interview_videos
import pafy
import plotly.express as px
import youtube_dl

def fetch_yt_video(link):
    video = pafy.new(link)
    return video.title

def get_table_download_link(df, filename, text):
    """Generates a link allowing the data in a given pandas dataframe to be downloaded
    in: dataframe
    out: href string
    """
    csv = df.to_csv(index=False)
    b64 = base64.b64encode(csv.encode()).decode()  # some strings <-> bytes conversions necessary here
    # href = f'<a href="data:file/csv;base64,{b64}">Download Report</a>'
    href = f'<a href="data:file/csv;base64,{b64}" download="{filename}">{text}</a>'
    return href

def pdf_reader(file):
    resource_manager = PDFResourceManager()
    fake_file_handle = io.StringIO()
    converter = TextConverter(resource_manager, fake_file_handle, laparams=pdfminer3.layout.LAParams())
    page_interpreter = PDFPageInterpreter(resource_manager, converter)
    with open(file, 'rb') as fh:
        for page in PDFPage.get_pages(fh,
                                      caching=True,
                                      check_extractable=True):
            page_interpreter.process_page(page)
            print(page)
        text = fake_file_handle.getvalue()

    # close open handles
    converter.close()
    fake_file_handle.close()
    return text

def show_pdf(file_path):
    with open(file_path, "rb") as f:
        base64_pdf = base64.b64encode(f.read()).decode('utf-8')
    # pdf_display = f'<embed src="data:application/pdf;base64,{base64_pdf}" width="700" height="1000" type="application/pdf">'
    pdf_display = F'<iframe src="data:application/pdf;base64,{base64_pdf}" width="700" height="1000" type="application/pdf"></iframe>'
    st.markdown(pdf_display, unsafe_allow_html=True)

def course_recommender(course_list):
    st.subheader("**Courses & Certificates🎓 Recommendations**")
    c = 0
    rec_course = []
    no_of_reco = st.slider('Choose Number of Course Recommendations:', 1, 10, 4)
    random.shuffle(course_list)
    for c_name, c_link in course_list:
        c += 1
        st.markdown(f"({c}) [{c_name}]({c_link})")
        rec_course.append(c_name)
        if c == no_of_reco:
            break
    return rec_course

connection = pymysql.connect(host='localhost', user='root', password='')
cursor = connection.cursor()


def insert_data(name, email, res_score, timestamp, no_of_pages, reco_field, cand_level,
                skills, recommended_skills, courses):
    DB_table_name = 'user_data'
    insert_sql = "insert into " + DB_table_name + """
    values (0,%s,%s,%s,%s,%s,%s,%s,%s,%s,%s)"""
    rec_values = (name, email, str(res_score), timestamp, str(no_of_pages), reco_field, cand_level,
                  skills, recommended_skills, courses)
    cursor.execute(insert_sql, rec_values)
    connection.commit()

st.set_page_config(
page_title="Smart Resume Analyzer",
page_icon='./Logo/SRA_Logo.ico',
)

def run():
st.title("Smart Resume Analyser")
st.sidebar.markdown("# Choose User")
activities = ["Normal User", "Admin"]
choice = st.sidebar.selectbox("Choose among the given options:", activities)
# link = '[©Developed by Spidy20](http://github.com/spidy20)'
# st.sidebar.markdown(link, unsafe_allow_html=True)
img = Image.open('./Logo/SRA_Logo.jpg')
img = img.resize((250, 250))
st.image(img)

# Create the DB
db_sql = """CREATE DATABASE IF NOT EXISTS SRA;"""
cursor.execute(db_sql)
connection.select_db("sra")

# Create table
DB_table_name = 'user_data'

table_sql = "CREATE TABLE IF NOT EXISTS " + DB_table_name + """
(ID INT NOT NULL AUTO_INCREMENT,
Name varchar(100) NOT NULL,
Email_ID VARCHAR(50) NOT NULL,
resume_score VARCHAR(8) NOT NULL,
Timestamp VARCHAR(50) NOT NULL,
Page_no VARCHAR(5) NOT NULL,
Predicted_Field VARCHAR(25) NOT NULL,
User_level VARCHAR(30) NOT NULL,
Actual_skills VARCHAR(300) NOT NULL,
Recommended_skills VARCHAR(300) NOT NULL,
Recommended_courses VARCHAR(600) NOT NULL,
PRIMARY KEY (ID));
"""
cursor.execute(table_sql)
if choice == 'Normal User':
# st.markdown('''<h4 style='text-align: left; color: #d73b5c;'>* Upload your resume, and
get smart recommendation based on it."</h4>''',
# unsafe_allow_html=True)
pdf_file = st.file_uploader("Choose your Resume", type=["pdf"])
if pdf_file is not None:
# with st.spinner('Uploading your Resume....'):
# time.sleep(4)
save_image_path = './Uploaded_Resumes/' + pdf_file.name
with open(save_image_path, "wb") as f:
f.write(pdf_file.getbuffer())
show_pdf(save_image_path)
resume_data = ResumeParser(save_image_path).get_extracted_data()
if resume_data:
## Get the whole resume data
resume_text = pdf_reader(save_image_path)

st.header("**Resume Analysis**")
st.success("Hello " + resume_data['name'])
st.subheader("**Your Basic info**")
try:
st.text('Name: ' + resume_data['name'])
st.text('Email: ' + resume_data['email'])
st.text('Contact: ' + resume_data['mobile_number'])
st.text('Resume pages: ' + str(resume_data['no_of_pages']))
except:
pass
cand_level = ''
if resume_data['no_of_pages'] == 1:
cand_level = "Fresher"

st.markdown('''<h4 style='text-align: left; color: #d73b5c;'>You are looking
Fresher.</h4>''',
unsafe_allow_html=True)
elif resume_data['no_of_pages'] == 2:
cand_level = "Intermediate"
st.markdown('''<h4 style='text-align: left; color: #1ed760;'>You are at
intermediate level!</h4>''',
unsafe_allow_html=True)
elif resume_data['no_of_pages'] >= 3:
cand_level = "Experienced"
st.markdown('''<h4 style='text-align: left; color: #fba171;'>You are at experience
level!''',
unsafe_allow_html=True)

st.subheader("**Skills Recommendation💡**")
## Skill shows
keywords = st_tags(label='### Skills that you have',
text='See our skills recommendation',
value=resume_data['skills'], key='1')

## recommendation
ds_keyword = ['tensorflow', 'keras', 'pytorch', 'machine learning', 'deep Learning',
'flask',
'streamlit']
web_keyword = ['react', 'django', 'node jS', 'react js', 'php', 'laravel', 'magento',
'wordpress',
'javascript', 'angular js', 'c#', 'flask']
android_keyword = ['android', 'android development', 'flutter', 'kotlin', 'xml', 'kivy']
ios_keyword = ['ios', 'ios development', 'swift', 'cocoa', 'cocoa touch', 'xcode']
uiux_keyword = ['ux', 'adobe xd', 'figma', 'zeplin', 'balsamiq', 'ui', 'prototyping',
'wireframes',
'storyframes', 'adobe photoshop', 'photoshop', 'editing', 'adobe illustrator',
'illustrator', 'adobe after effects', 'after effects', 'adobe premier pro',
'premier pro', 'adobe indesign', 'indesign', 'wireframe', 'solid', 'grasp',
'user research', 'user experience']

recommended_skills = []
reco_field = ''
rec_course = ''
## Courses recommendation
for i in resume_data['skills']:
## Data science recommendation
if i.lower() in ds_keyword:
print(i.lower())

reco_field = 'Data Science'
st.success("** Our analysis says you are looking for Data Science Jobs.**")
recommended_skills = ['Data Visualization', 'Predictive Analysis', 'Statistical
Modeling',
'Data Mining', 'Clustering & Classification', 'Data Analytics',
'Quantitative Analysis', 'Web Scraping', 'ML Algorithms',
'Keras',
'Pytorch', 'Probability', 'Scikit-learn', 'Tensorflow', "Flask",
'Streamlit']
recommended_keywords = st_tags(label='### Recommended skills for you.',
text='Recommended skills generated from System',
value=recommended_skills, key='2')
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume

will boost🚀 the chances of getting a Job💼</h4>''',


unsafe_allow_html=True)
rec_course = course_recommender(ds_course)
break

## Web development recommendation


elif i.lower() in web_keyword:
print(i.lower())
reco_field = 'Web Development'
st.success("** Our analysis says you are looking for Web Development Jobs
**")
recommended_skills = ['React', 'Django', 'Node JS', 'React JS', 'php', 'laravel',
'Magento',
'wordpress', 'Javascript', 'Angular JS', 'c#', 'Flask', 'SDK']
recommended_keywords = st_tags(label='### Recommended skills for you.',
text='Recommended skills generated from System',
value=recommended_skills, key='3')
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume

will boost🚀 the chances of getting a Job💼</h4>''',


unsafe_allow_html=True)
rec_course = course_recommender(web_course)
break

## Android App Development


elif i.lower() in android_keyword:
print(i.lower())
reco_field = 'Android Development'

st.success("** Our analysis says you are looking for Android App
Development Jobs **")
recommended_skills = ['Android', 'Android development', 'Flutter', 'Kotlin',
'XML', 'Java',
'Kivy', 'GIT', 'SDK', 'SQLite']
recommended_keywords = st_tags(label='### Recommended skills for you.',
text='Recommended skills generated from System',
value=recommended_skills, key='4')
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume

will boost🚀 the chances of getting a Job💼</h4>''',


unsafe_allow_html=True)
rec_course = course_recommender(android_course)
break

## IOS App Development


elif i.lower() in ios_keyword:
print(i.lower())
reco_field = 'IOS Development'
st.success("** Our analysis says you are looking for IOS App Development
Jobs **")
recommended_skills = ['IOS', 'IOS Development', 'Swift', 'Cocoa', 'Cocoa
Touch', 'Xcode',
'Objective-C', 'SQLite', 'Plist', 'StoreKit', "UI-Kit", 'AV
Foundation',
'Auto-Layout']
recommended_keywords = st_tags(label='### Recommended skills for you.',
text='Recommended skills generated from System',
value=recommended_skills, key='5')
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume

will boost🚀 the chances of getting a Job💼</h4>''',


unsafe_allow_html=True)
rec_course = course_recommender(ios_course)
break

## Ui-UX Recommendation
elif i.lower() in uiux_keyword:
print(i.lower())
reco_field = 'UI-UX Development'
st.success("** Our analysis says you are looking for UI-UX Development Jobs
**")

recommended_skills = ['UI', 'User Experience', 'Adobe XD', 'Figma', 'Zeplin',
'Balsamiq',
'Prototyping', 'Wireframes', 'Storyframes', 'Adobe Photoshop',
'Editing',
'Illustrator', 'After Effects', 'Premier Pro', 'Indesign', 'Wireframe',
'Solid', 'Grasp', 'User Research']
recommended_keywords = st_tags(label='### Recommended skills for you.',
text='Recommended skills generated from System',
value=recommended_skills, key='6')
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>Adding this skills to resume

will boost🚀 the chances of getting a Job💼</h4>''',


unsafe_allow_html=True)
rec_course = course_recommender(uiux_course)
break

#
## Insert into table
ts = time.time()
cur_date = datetime.datetime.fromtimestamp(ts).strftime('%Y-%m-%d')
cur_time = datetime.datetime.fromtimestamp(ts).strftime('%H:%M:%S')
timestamp = str(cur_date + '_' + cur_time)

### Resume writing recommendation

st.subheader("**Resume Tips & Ideas💡**")


resume_score = 0
if 'Objective' in resume_text:
resume_score = resume_score + 20
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added
Objective</h4>''',
unsafe_allow_html=True)
else:
st.markdown(
'''<h4 style='text-align: left; color: #fabc10;'>[-] According to our
recommendation please add your career objective, it will give your career intension to the
Recruiters.</h4>''',
unsafe_allow_html=True)

if 'Declaration' in resume_text:
resume_score = resume_score + 20
st.markdown(
    '''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added Declaration✍</h4>''',
    unsafe_allow_html=True)
else:
st.markdown(
'''<h4 style='text-align: left; color: #fabc10;'>[-] According to our

recommendation please add Declaration✍. It will give the assurance that everything written
on your resume is true and fully acknowledged by you</h4>''',
unsafe_allow_html=True)

if 'Hobbies' in resume_text or 'Interests' in resume_text:


resume_score = resume_score + 20
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added

your Hobbies⚽</h4>''',
unsafe_allow_html=True)
else:
st.markdown(
'''<h4 style='text-align: left; color: #fabc10;'>[-] According to our

recommendation please add Hobbies⚽. It will show your persnality to the Recruiters and give
the assurance that you are fit for this role or not.</h4>''',
unsafe_allow_html=True)

if 'Achievements' in resume_text:
resume_score = resume_score + 20
st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added

your Achievements🏅 </h4>''',


unsafe_allow_html=True)
else:
st.markdown(
'''<h4 style='text-align: left; color: #fabc10;'>[-] According to our

recommendation please add Achievements🏅. It will show that you are capable for the required
position.</h4>''',
unsafe_allow_html=True)

if 'Projects' in resume_text:
resume_score = resume_score + 20

st.markdown(
'''<h4 style='text-align: left; color: #1ed760;'>[+] Awesome! You have added

your Projects👨‍💻 </h4>''',


unsafe_allow_html=True)
else:
st.markdown(
'''<h4 style='text-align: left; color: #fabc10;'>[-] According to our

recommendation please add Projects👨‍💻. It will show that you have done work related the
required position or not.</h4>''',
unsafe_allow_html=True)

st.subheader("**Resume Score📝**")
st.markdown(
"""
<style>
.stProgress > div > div > div > div {
background-color: #d73b5c;
}
</style>""",
unsafe_allow_html=True,
)
my_bar = st.progress(0)
score = 0
for percent_complete in range(resume_score):
score += 1
time.sleep(0.1)
my_bar.progress(percent_complete + 1)
st.success('** Your Resume Writing Score: ' + str(score) + '**')
st.warning(
"** Note: This score is calculated based on the content that you have added in
your Resume. **")
st.balloons()

insert_data(resume_data['name'], resume_data['email'], str(resume_score),


timestamp,
str(resume_data['no_of_pages']), reco_field, cand_level,
str(resume_data['skills']),
str(recommended_skills), str(rec_course))

## Resume writing video

st.header("**Bonus Video for Resume Writing Tips💡**")

resume_vid = random.choice(resume_videos)
res_vid_title = fetch_yt_video(resume_vid)

st.subheader("✅ **" + res_vid_title + "**")


st.video(resume_vid)

## Interview Preparation Video

st.header("**Bonus Video for Interview👨‍💼 Tips💡**")


interview_vid = random.choice(interview_videos)
int_vid_title = fetch_yt_video(interview_vid)

st.subheader("✅ **" + int_vid_title + "**")


st.video(interview_vid)

connection.commit()
else:
st.error('Something went wrong..')
else:
## Admin Side
st.success('Welcome to Admin Side')
# st.sidebar.subheader('**ID / Password Required!**')

ad_user = st.text_input("Username")
ad_password = st.text_input("Password", type='password')
if st.button('Login'):
if ad_user == 'machine_learning_hub' and ad_password == 'mlhub123':
st.success("Welcome Kushal")
# Display Data
cursor.execute('''SELECT*FROM user_data''')
data = cursor.fetchall()

st.header("**User's👨‍💻 Data**")
df = pd.DataFrame(data, columns=['ID', 'Name', 'Email', 'Resume Score',
'Timestamp', 'Total Page',
'Predicted Field', 'User Level', 'Actual Skills', 'Recommended
Skills',
'Recommended Course'])
st.dataframe(df)
st.markdown(get_table_download_link(df, 'User_Data.csv', 'Download Report'),
unsafe_allow_html=True)
## Admin Side Data
query = 'select * from user_data;'
plot_data = pd.read_sql(query, connection)

## Pie chart for predicted field recommendations
labels = plot_data.Predicted_Field.unique()
print(labels)
values = plot_data.Predicted_Field.value_counts()
print(values)

st.subheader("📈 **Pie-Chart for Predicted Field Recommendations**")


fig = px.pie(df, values=values, names=labels, title='Predicted Field according to the
Skills')
st.plotly_chart(fig)

### Pie chart for User's👨‍💻 Experienced Level


labels = plot_data.User_level.unique()
values = plot_data.User_level.value_counts()

st.subheader("📈 ** Pie-Chart for User's👨‍💻 Experienced Level**")

fig = px.pie(df, values=values, names=labels, title="Pie-Chart📈 for User's👨‍💻


Experienced Level")
st.plotly_chart(fig)

else:
st.error("Wrong ID & Password Provided")

run()

Courses.py

ds_course = [['Machine Learning Crash Course by Google [Free]',


'https://developers.google.com/machine-learning/crash-course'],
['Machine Learning A-Z by
Udemy','https://www.udemy.com/course/machinelearning/'],
['Machine Learning by Andrew NG','https://www.coursera.org/learn/machine-
learning'],
['Data Scientist Master Program of Simplilearn
(IBM)','https://www.simplilearn.com/big-data-and-analytics/senior-data-scientist-masters-
program-training'],
['Data Science Foundations: Fundamentals by
LinkedIn','https://www.linkedin.com/learning/data-science-foundations-fundamentals-5'],
['Data Scientist with Python','https://www.datacamp.com/tracks/data-scientist-with-
python'],

['Programming for Data Science with
Python','https://www.udacity.com/course/programming-for-data-science-nanodegree--
nd104'],
['Programming for Data Science with
R','https://www.udacity.com/course/programming-for-data-science-nanodegree-with-R--
nd118'],
['Introduction to Data Science','https://www.udacity.com/course/introduction-to-data-
science--cd0017'],
['Intro to Machine Learning with TensorFlow','https://www.udacity.com/course/intro-
to-machine-learning-with-tensorflow-nanodegree--nd230']]

web_course = [['Django Crash course [Free]','https://youtu.be/e1IyzVyrLSU'],


['Python and Django Full Stack Web Developer
Bootcamp','https://www.udemy.com/course/python-and-django-full-stack-web-developer-
bootcamp'],
['React Crash Course [Free]','https://youtu.be/Dorf8i6lCuk'],
['ReactJS Project Development
Training','https://www.dotnettricks.com/training/masters-program/reactjs-certification-
training'],
['Full Stack Web Developer - MEAN Stack','https://www.simplilearn.com/full-stack-
web-developer-mean-stack-certification-training'],
['Node.js and Express.js [Free]','https://youtu.be/Oe421EPjeBE'],
['Flask: Develop Web Applications in
Python','https://www.educative.io/courses/flask-develop-web-applications-in-python'],
['Full Stack Web Developer by Udacity','https://www.udacity.com/course/full-stack-
web-developer-nanodegree--nd0044'],
['Front End Web Developer by Udacity','https://www.udacity.com/course/front-end-
web-developer-nanodegree--nd0011'],
['Become a React Developer by Udacity','https://www.udacity.com/course/react-
nanodegree--nd019']]

android_course = [['Android Development for Beginners


[Free]','https://youtu.be/fis26HvvDII'],
['Android App Development
Specialization','https://www.coursera.org/specializations/android-app-development'],
['Associate Android Developer Certification','https://grow.google/androiddev/#?
modal_active=none'],
['Become an Android Kotlin Developer by
Udacity','https://www.udacity.com/course/android-kotlin-developer-nanodegree--nd940'],
['Android Basics by Google','https://www.udacity.com/course/android-basics-
nanodegree-by-google--nd803'],
['The Complete Android Developer
Course','https://www.udemy.com/course/complete-android-n-developer-course/'],

['Building an Android App with Architecture
Components','https://www.linkedin.com/learning/building-an-android-app-with-architecture-
components'],
['Android App Development Masterclass using
Kotlin','https://www.udemy.com/course/android-oreo-kotlin-app-masterclass/'],
['Flutter & Dart - The Complete Flutter App Development
Course','https://www.udemy.com/course/flutter-dart-the-complete-flutter-app-development-
course/'],
['Flutter App Development Course [Free]','https://youtu.be/rZLR5olMR64']]

ios_course = [['IOS App Development by


LinkedIn','https://www.linkedin.com/learning/subscription/topics/ios'],
['iOS & Swift - The Complete iOS App Development
Bootcamp','https://www.udemy.com/course/ios-13-app-development-bootcamp/'],
['Become an iOS Developer','https://www.udacity.com/course/ios-developer-
nanodegree--nd003'],
['iOS App Development with Swift
Specialization','https://www.coursera.org/specializations/app-development'],
['Mobile App Development with Swift','https://www.edx.org/professional-
certificate/curtinx-mobile-app-development-with-swift'],
['Swift Course by
LinkedIn','https://www.linkedin.com/learning/subscription/topics/swift-2'],
['Objective-C Crash Course for Swift
Developers','https://www.udemy.com/course/objectivec/'],
['Learn Swift by Codecademy','https://www.codecademy.com/learn/learn-swift'],
['Swift Tutorial - Full Course for Beginners [Free]','https://youtu.be/comQ1-x2a1Q'],
['Learn Swift Fast - [Free]','https://youtu.be/FcsY1YPBwzQ']]
uiux_course = [['Google UX Design Professional
Certificate','https://www.coursera.org/professional-certificates/google-ux-design'],
['UI / UX Design Specialization','https://www.coursera.org/specializations/ui-ux-
design'],
['The Complete App Design Course - UX, UI and Design
Thinking','https://www.udemy.com/course/the-complete-app-design-course-ux-and-ui-
design/'],
['UX & Web Design Master Course: Strategy, Design,
Development','https://www.udemy.com/course/ux-web-design-master-course-strategy-
design-development/'],
['The Complete App Design Course - UX, UI and Design
Thinking','https://www.udemy.com/course/the-complete-app-design-course-ux-and-ui-
design/'],
['DESIGN RULES: Principles + Practices for Great UI
Design','https://www.udemy.com/course/design-rules/'],
['Become a UX Designer by Udacity','https://www.udacity.com/course/ux-designer-
nanodegree--nd578'],

48
['Adobe XD Tutorial: User Experience Design Course
[Free]','https://youtu.be/68w2VwalD5w'],
['Adobe XD for Beginners [Free]','https://youtu.be/WEljsc2jorI'],
['Adobe XD in Simple Way','https://learnux.io/course/adobe-xd']]

resume_videos = ['https://youtu.be/y8YH0Qbu5h4','https://youtu.be/J-4Fv8nq1iA',
'https://youtu.be/yp693O87GmM','https://youtu.be/UeMmCex9uTU',
'https://youtu.be/dQ7Q8ZdnuN0','https://youtu.be/HQqqQx5BCFY',
'https://youtu.be/CLUsplI4xMU','https://youtu.be/pbczsLkv7Cc']

interview_videos = ['https://youtu.be/Ji46s5BHdr0', 'https://youtu.be/seVxXHi2YMs',
                    'https://youtu.be/9FgfsLa_SmY', 'https://youtu.be/2HQmjLu-6RQ',
                    'https://youtu.be/DQd_AlIvHUw', 'https://youtu.be/oVVdezJ0e7w',
                    'https://youtu.be/JZK1MZwUyUU', 'https://youtu.be/CyXLhHQS3KY']
