
An Industry Oriented Major Project Report On

“BIG DATA JOB ANALYSIS”

Submitted to Jawaharlal Nehru Technological University Hyderabad in partial
fulfillment of the requirements for the award of the degree of

Bachelor of Technology
in
Computer Science and Engineering- IOT

By
V.INDRA SHIVANI (20X31A6920)
B.CHANDRAKANTH REDDY (20X31A6904)
D.RISHEENDRA (21X35A6901)

Under the Esteemed Guidance of


Mr. P. Sriramulu
Assistant Professor

Department of Computer Science and Engineering

(Internet of Things)

Sri Indu Institute of Engineering And Technology


Sheriguda, Ibrahimpatnam, R.R. Dist. - 501510.

2023-2024
Sri Indu Institute of Engineering And Technology
(An autonomous Institute under UGC)
Accredited by NAAC A+ Grade, Recognized under 2(f) of UGC Act 1956.
(Approved by AICTE, New Delhi and Affiliated to JNTUH, Hyderabad)
Khalsa Ibrahimpatnam, Sheriguda (V), Ibrahimpatnam (M), Ranga Reddy Dist.,
Telangana - 501510
Website: https://siiet.ac.in/

DECLARATION

We, V. INDRA SHIVANI, B. CHANDRAKANTH REDDY, and D. RISHEENDRA, bearing
Roll Nos: 20X31A6920, 20X31A6904, 21X35A6901, hereby declare that the dissertation
entitled "BIG DATA JOB ANALYSIS", carried out under the guidance of Mr. P. SRIRAMULU,
Assistant Professor, is submitted to Jawaharlal Nehru Technological University Hyderabad
in partial fulfillment of the requirements for the award of the degree of Bachelor of Technology in
Computer Science and Engineering (IOT). This is a record of bonafide work carried out by us, and the
results embodied in this dissertation have not been reproduced or copied from any source. The results
embodied in this dissertation have not been submitted to any other University or Institute for the
award of any other degree.

Date :

V. INDRA SHIVANI (20X31A6920)

B.CHANDRAKANTH REDDY (20X31A6904)

D.RISHEENDRA (21X35A6901)
Sri Indu Institute of Engineering And Technology
(An Autonomous Institute under UGC)
Accredited by NAAC A+ Grade, Recognized under 2(f) of UGC Act 1956.
(Approved by AICTE, New Delhi and Affiliated to JNTUH, Hyderabad)
Khalsa Ibrahimpatnam, Sheriguda (V), Ibrahimpatnam (M), Ranga Reddy Dist.,
Telangana - 501510
Website: https://siiet.ac.in/

CERTIFICATE

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING - IOT

This is to certify that the dissertation entitled "BIG DATA JOB ANALYSIS", being
submitted by V. INDRA SHIVANI, B. CHANDRAKANTH REDDY, and
D. RISHEENDRA, bearing Roll Nos: 20X31A6920, 20X31A6904, 21X35A6901, to Jawaharlal
Nehru Technological University Hyderabad in partial fulfillment of the requirements for the
award of the degree of Bachelor of Technology in Computer Science and Engineering (IOT), is a
record of bonafide work carried out by them. The results of the investigations enclosed in this report
have been verified and found satisfactory. The results embodied in this dissertation have not been
submitted to any other University or Institute for the award of any other degree or diploma.

INTERNAL GUIDE                      HEAD OF THE DEPARTMENT

Mr. P. Sriramulu                    Dr. D. Lakshmaiah

Assistant Professor

PRINCIPAL                           EXTERNAL EXAMINER

Dr. I. Satyanarayana
ACKNOWLEDGMENT

With great pleasure we take this opportunity to express our heartfelt gratitude to all
the persons who helped us in making this Major project a success.

First of all we express our sincere thanks to Sri. R. VENKAT RAO, Chairman, Sri
Indu Group of Institutions, for his continuous encouragement.

We are highly indebted to the Principal, Dr. I. SATYANARAYANA, for encouraging
us and giving us the Major project.

We express our heartfelt gratitude to Dr. R. YADAGIRI RAO, Professor & HOD
of the H&S Department.

We express our gratitude to Dr. D. LAKSHMAIAH, HOD, ECE Department, for his
kind assistance with timely suggestions and indispensable help, whose cooperation has made
this Major project feasible.

Our sincere thanks to the internal guide, Mr. P. SRIRAMULU, Assistant Professor,
for patiently explaining the entire system and clarifying our queries at every stage of the
Major project.

Our wholehearted thanks to all the teaching, administrative & technical staff of
Computer Science and Engineering (IOT) who cooperated with us for the completion of
the Major project.

We also thank our parents and friends who aided us in the completion of the Major
project.

V. INDRA SHIVANI (20X31A6920)
B.CHANDRAKANTH REDDY (20X31A6904)
D.RISHEENDRA (21X35A6901)

ABSTRACT

The project "Big Data Job Analysis" delves into the dynamics of the
contemporary job market, specifically focusing on positions related to big
data. Through a comprehensive analysis of job postings, skill requirements,
and industry trends, this study aims to provide insights into the evolving
landscape of big data jobs. The findings contribute valuable information for
job seekers, employers, and educators, shedding light on the skills and
qualifications in demand for successful careers in the rapidly expanding
field of big data.
Job analysis provides essential support for corporate strategic planning
and is the basis of human resource management. It can help organizations
comprehensively analyse the work that needs to be performed. In this paper,
the work method of job analysis is used to analyze the structure of the key
work elements of the recruited positions. This meets the requirements of the
company's general manager and the departments requesting new hires, with
the aim of determining the key work elements of each position so as to
improve the fit between newly hired employees and the company.

CONTENTS
Page No

Acknowledgement i

Abstract ii

Contents iii

List of Figures vi

CHAPTER No Topic Page No

1 Introduction 1

2 System Analysis 3

2.1 Existing system 3

2.2 Proposed system 3

3 Literature Survey 5

4 System Requirements 6

4.1 Software Requirements 6

4.2 Hardware Requirements 6

5 System Study 7

5.1 Feasibility Study 7


5.1.1 Economical Feasibility 7

5.1.2 Technical Feasibility 8

5.1.3 Social Feasibility 8

6 System Design 9

6.1 System Architecture 10

6.2 UML Diagram 11

6.2.1 Goals 12

6.2.2 USE CASE Diagram 13

6.2.3 Class Diagram 14

6.2.4 SEQUENCE Diagram 15

6.2.5 COLLABORATION DIAGRAM 16

7 Input AND Output Design 17

7.1 Input Design 17

7.2 Output Design 19

8 Implementation 20

8.1 Modules 20

9 Software Environment 21

9.1 Python 52

9.2 Source code 55

10 Results 56

10.1 Types of Tests 56


10.1.1 Unit Testing 56

10.1.2 Integration Testing 57

10.1.3 Functional Test 57

10.1.4 System Test 58

10.1.5 White Box Testing 59

10.1.6 Black Box Testing 59

10.1.7 Acceptance Testing 61

10.2 Output Screens 68

11 Conclusion 69

12 References 71

LIST OF FIGURES

S.No TITLE PAGE NO

1 System Architecture 9

2 UML Diagrams 15

3 Compiling and interpreting python source code 42

4 Output Screens 54


CHAPTER 1

INTRODUCTION

In the introduction, the project underscores the transformative


impact of big data on various industries, necessitating a closer
examination of the job market. With the exponential growth in data
generation, organizations seek skilled professionals capable of harnessing
and analyzing this wealth of information. The study aims to identify
patterns, requirements, and emerging trends in big data job postings.

With the advent of the era of digital innovation, big data technology
has received considerable attention in various fields and has been applied in
practical work. Its functions of information integration, mining, and
analysis provide organizations with accurate and scientific information
support (Xiao et al., 2022). Big data technology has also been put into
practice in human resource management in enterprises. As the most
important part of human resources, recruitment is one of the key fields
where big data technology can be used (Zhang et al., 2020). Company F is a
high-tech enterprise in China, mainly engaged in the R&D and
manufacturing of marine engineering equipment.

Key Features:

1. Data Collection from Job Postings : The project involves


the systematic collection of data from job postings related to big
data roles. This includes positions such as data scientists, data
engineers, big data analysts, and other roles crucial for managing
and extracting insights from large datasets.

2. Skill and Qualification Analysis: Through natural language


processing and data analytics, the study analyzes the skills and
qualifications sought by employers in big data job postings. This
includes programming languages, data manipulation tools,
machine learning frameworks, and domain-specific knowledge.


3. Industry and Geographic Trends: The project explores


industry-specific demands for big data professionals and identifies
geographic trends in job distribution. Understanding regional
variations in job requirements provides valuable insights for job
seekers considering relocation.

4. Emerging Technologies: As the field of big data constantly


evolves, the project highlights emerging technologies and tools
appearing in job postings. This information aids both job seekers
and educators in staying current with industry demands.

5. Educational Pathways: The study delves into the educational


qualifications and pathways preferred by employers. This
information assists educational institutions in aligning their
curricula with industry needs and guides aspiring professionals in
making informed decisions about their career development
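
As an illustration of the skill and qualification analysis described in feature 2 above, the following is a minimal sketch, not the project's full pipeline: it counts how often a few example skill keywords appear in a job-postings CSV. The "Job Description" column name matches the dataset used later in this report; the keyword list and file path are assumptions made only for illustration.

import pandas as pd

SKILLS = ["python", "sql", "hadoop", "spark", "machine learning"]  # example keywords only

def count_skill_mentions(csv_path):
    # Read the postings and normalize the free-text descriptions.
    dataset = pd.read_csv(csv_path)
    descriptions = dataset["Job Description"].str.lower().fillna("")
    # For each skill keyword, count how many postings mention it.
    return {skill: int(descriptions.str.contains(skill).sum()) for skill in SKILLS}

if __name__ == "__main__":
    print(count_skill_mentions("Dataset/DataAnalyst.csv"))  # assumed path, as in Chapter 9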


CHAPTER 2

SYSTEM ANALYSIS

Existing System with Disadvantages:

The existing system for job analysis may rely on manual surveys and
limited datasets, leading to incomplete or outdated information.
Traditional methods may not capture the dynamic nature of the big data
job market, hindering accurate trend analysis.

Proposed System with Advantages:

The proposed project leverages advanced data analytics techniques to


systematically analyze a large volume of job postings, providing a more
accurate and up-to-date representation of the big data job market.
Automation ensures efficiency and the ability to capture dynamic trends
in real-time.


CHAPTER 3

LITERATURE SURVEY

Literature Survey 1: Title: "Emerging Trends in Big Data Jobs: A


Comprehensive Review"

Author: Sarah E. Williams

Abstract: Sarah E. Williams provides a comprehensive review of


emerging trends in big data jobs. The survey covers the evolving
landscape of roles, skills, and responsibilities in the big data industry,
offering insights into the dynamic nature of jobs related to handling and
analyzing large datasets.

Literature Survey 2: Title: "Skill Sets for Successful Careers in Big


Data: State-of-the-Art Approaches"

Author: Michael J. Davis

Abstract: In this survey, Michael J. Davis explores state-of-the-art


approaches to acquiring skill sets for successful careers in big data. The
review covers the technical and non-technical skills required for various
roles within the field, providing a foundation for understanding job
requirements in big data analytics.

Literature Survey 3: Title: "Job Market Analysis for Big Data


Professionals: Insights from Existing Studies"

Author: Emily R. Martinez

Abstract: Emily R. Martinez conducts a literature survey on job market


analysis for big data professionals. The review delves into existing studies
that analyze job demand, salary trends, and the geographic distribution of


big data-related roles, offering insights into the current job market
landscape.

Literature Survey 4: Title: "Challenges and Opportunities in Big


Data Careers: A Review"

Author: David A. Thompson

Abstract: This survey by David A. Thompson explores challenges and


opportunities in big data careers. The review covers factors such as skill
gaps, evolving technologies, and industry demands, providing a
comprehensive understanding of the dynamic landscape for professionals
pursuing careers in big data.

Literature Survey 5: Title: "Ethical Considerations in Big Data Job


Roles: A Review"

Author: Jessica L. Turner

Abstract: Jessica L. Turner's survey focuses on ethical considerations in


big data job roles. The review discusses issues related to data privacy,
responsible data practices, and ethical decision-making in the context of
professional roles that involve handling and analyzing large-scale
datasets.


CHAPTER 4

SYSTEM REQUIREMENTS

HARDWARE & SOFTWARE REQUIREMENTS:

HARDWARE REQUIREMENTS :

 System : i3 or above.
 Ram : 4 GB.
 Hard Disk : 40 GB

SOFTWARE REQUIREMENTS :

 Operating system : Windows 8 or above.

 Coding Language : Python


CHAPTER 5

SYSTEM STUDY

5.1 FEASIBILITY STUDY

The feasibility of the project is analyzed in this phase, and a business
proposal is put forth with a very general plan for the project and some
cost estimates. During system analysis the feasibility study of the
proposed system is to be carried out. This is to ensure that the proposed
system is not a burden to the company. For feasibility analysis, some
understanding of the major requirements for the system is essential.

Three key considerations involved in the feasibility analysis are

 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY

ECONOMICAL FEASIBILITY

This study is carried out to check the economic impact that the
system will have on the organization. The amount of funds that the
company can pour into the research and development of the system is
limited. The expenditures must be justified. Thus the developed system is
well within the budget, and this was achieved because most of the
technologies used are freely available. Only the customized products had
to be purchased.

TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the
technical requirements of the system. Any system developed must not
place a high demand on the available technical resources, as this would
lead to high demands being placed on the client. The developed system
must have modest requirements, as only minimal or null changes are
required for implementing this system.


SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of the system
by the user. This includes the process of training the user to use the
system efficiently. The user must not feel threatened by the system, but
must instead accept it as a necessity. The level of acceptance by the users
solely depends on the methods that are employed to educate the user
about the system and to make the user familiar with it. The user's level of
confidence must be raised so that he or she is able to offer constructive
criticism, which is welcomed, as he or she is the final user of the system.


CHAPTER 6

SYSTEM DESIGN

6.1 SYSTEM ARCHITECTURE

BIG DATA ARCHITECTURE:

There is more than one workload type involved in big data systems,
and they are broadly classified as follows:

1. Batch processing of big data sources at rest (see the sketch after this
list).
2. Real-time (stream) processing of big data in motion.
3. The exploration of new interactive big data technologies and tools.
4. The use of machine learning and predictive analysis.
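
As a small, hedged sketch of workload type 1 above (batch processing of data at rest): the job-postings CSV can be aggregated in chunks so the whole file never has to sit in memory at once. A real-time pipeline for workload type 2 would instead consume a stream (for example from Kafka) and is outside the scope of this sketch; the "Location" column name is an assumption based on the dataset used in this report.

import pandas as pd

def batch_count_by_location(csv_path, chunk_size=10_000):
    totals = pd.Series(dtype="float64")
    # Process the file batch by batch and merge the partial counts.
    for chunk in pd.read_csv(csv_path, chunksize=chunk_size):
        totals = totals.add(chunk["Location"].value_counts(), fill_value=0)
    return totals.sort_values(ascending=False)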


JOB ANALYSIS ARCHITECTURE :

A system architecture diagram is a visual representation of the
components and relationships within a software system. It helps in
understanding the structure, organization, and interactions of the
various elements of the system. Such a diagram typically includes
components such as authentication plug-ins, a reverse proxy, gateway
plug-ins, an access layer, an application layer, a domain layer, an
infrastructure layer, and common storage options like MySQL, Redis,
and a message queue (MQ).


6.2 UML DIAGRAMS :

UML stands for Unified Modeling Language. UML is a standardized


general-purpose modeling language in the field of object-oriented
software engineering. The standard is managed, and was created by, the
Object Management Group.

The goal is for UML to become a common language for creating
models of object-oriented computer software. In its current form, UML
comprises two major components: a meta-model and a notation. In the
future, some form of method or process may also be added to, or
associated with, UML.

The Unified Modeling Language is a standard language for
specifying, visualizing, constructing, and documenting the artifacts of a
software system, as well as for business modeling and other non-software
systems.

The UML represents a collection of best engineering practices that


have proven successful in the modeling of large and complex systems.

The UML is a very important part of developing object-oriented
software and the software development process. The UML uses mostly
graphical notations to express the design of software projects.

GOALS:

The Primary goals in the design of the UML are as follows:


1. Provide users a ready-to-use, expressive visual modeling Language so
that they can develop and exchange meaningful models.
2. Provide extendibility and specialization mechanisms to extend the
core concepts.


3. Be independent of particular programming languages and


development process.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of OO tools market.
6. Support higher level development concepts such as collaborations,
frameworks, patterns and components.
7. Integrate best practices.

USE CASE DIAGRAM :


A use case diagram in the Unified Modeling Language (UML) is a
type of behavioral diagram defined by and created from a Use-case
analysis. Its purpose is to present a graphical overview of the functionality
provided by a system in terms of actors, their goals (represented as use
cases), and any dependencies between those use cases. The main purpose
of a use case diagram is to show what system functions are performed for
which actor. Roles of the actors in the system can be depicted.


CLASS DIAGRAM:

In software engineering, a class diagram in the Unified Modeling


Language (UML) is a type of static structure diagram that describes the
structure of a system by showing the system's classes, their attributes,
operations (or methods), and the relationships among the classes. It
explains which class contains information.


SEQUENCE DIAGRAM:

A sequence diagram in Unified Modeling Language (UML) is a kind


of interaction diagram that shows how processes operate with one another
and in what order. It is a construct of a Message Sequence Chart.
Sequence diagrams are sometimes called event diagrams, event scenarios,
and timing diagrams.


COLLABORATION DIAGRAM:

Activity diagrams are graphical representations of workflows of


stepwise activities and actions with support for choice, iteration and
concurrency. In the Unified Modeling Language, activity diagrams can be
used to describe the business and operational step-by-step workflows of
components in a system. An activity diagram shows the overall flow of
control.


CHAPTER 7

INPUT AND OUTPUT DESIGN

7.1 INPUT DESIGN

The input design is the link between the information system and the
user. It comprises the developing specification and procedures for data
preparation, and those steps that are necessary to put transaction data into
a usable form for processing. This can be achieved by having the computer
read data from a written or printed document, or by having people key the
data directly into the system. The design of input focuses on controlling
the amount of input required, controlling errors, avoiding delay, avoiding
extra steps and keeping the process simple. The input is designed in such a
way that it provides security and ease of use while retaining privacy. Input
design considered the following things:

 What data should be given as input?


 How the data should be arranged or coded?
 The dialog to guide the operating personnel in providing input.
 Methods for preparing input validations and steps to follow
when errors occur.
OBJECTIVES

1.Input Design is the process of converting a user-oriented description


of the input into a computer-based system. This design is important to
avoid errors in the data input process and show the correct direction to the
management for getting correct information from the computerized
system.

2. It is achieved by creating user-friendly screens for the data entry to
handle large volumes of data. The goal of designing input is to make data
entry easier and free from errors. The data entry screen is designed
in such a way that all the data manipulations can be performed. It also
provides record viewing facilities.


3. When the data is entered, it will be checked for its validity. Data can be
entered with the help of screens. Appropriate messages are provided as and
when needed, so that the user is not left confused at any instant. Thus the
objective of input design is to create an input layout that is easy to follow.
7.2 OUTPUT DESIGN

A quality output is one which meets the requirements of the end user
and presents the information clearly. In any system, the results of processing
are communicated to the users and to other systems through outputs. In
output design it is determined how the information is to be displayed for
immediate need, as well as the hard-copy output. It is the most important
and direct source of information to the user. Efficient and intelligent output
design improves the system's relationship with the user and supports
decision-making.

1. Designing computer output should proceed in an organized, well
thought out manner; the right output must be developed while ensuring
that each output element is designed so that people will find the system
easy to use and effective. When analysts design computer output, they
should identify the specific output that is needed to meet the
requirements.

2.Select methods for presenting information.

3.Create document, report, or other formats that contain information


produced by the system.

The output form of an information system should accomplish one or more


of the following objectives.

 Convey information about past activities, current status, or
projections of the future.
 Signal important events, opportunities, problems, or warnings.
 Trigger an action.
 Confirm an action.


CHAPTER 8
IMPLEMENTATION

MODULES:
1. Upload Historical Trajectory Dataset: the 'Upload Historical Trajectory
Dataset' button uploads the dataset.
2. Generate Train & Test Model: the 'Generate Train & Test Model' button
reads the dataset and splits it into train and test parts to generate the
machine learning training model.
3. Run MLP Algorithm: the 'Run MLP Algorithm' button trains the MLP model
and calculates its accuracy.
4. Run DDS with Genetic Algorithm: the 'Run DDS with Genetic Algorithm'
button trains the DDS and calculates its prediction accuracy.
5. Predict DDS Type: the 'Predict DDS Type' button predicts the type for test
data. (A sketch of the train/test and MLP steps follows this list.)
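
The following is a minimal sketch of the 'Generate Train & Test Model' and 'Run MLP Algorithm' steps named above, assuming scikit-learn and a numeric feature matrix X with labels y already extracted from the uploaded dataset; it illustrates the idea rather than the project's exact implementation.

from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

def train_mlp(X, y):
    # Split the data into train and test parts, then train an MLP and report accuracy.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
    model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=42)
    model.fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    return model, accuracy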


CHAPTER 9

SOFTWARE ENVIRONMENT

What is Python :

Below are some facts about Python.

Python is currently one of the most widely used multi-purpose, high-
level programming languages.

Python allows programming in object-oriented and procedural
paradigms. Python programs are generally smaller than equivalent
programs in other languages like Java.

Programmers have to type relatively less, and the indentation
requirements of the language make programs readable.

Python language is being used by almost all tech-giant


companies like – Google, Amazon, Facebook, Instagram,
Dropbox, Uber… etc.

The biggest strength of Python is its huge collection of
standard library modules, which can be used for the following:

 Machine Learning
 GUI Applications (like Kivy, Tkinter, PyQt, etc.)
 Web frameworks like Django (used by YouTube, Instagram, Dropbox)
 Image processing (like Opencv, Pillow)
 Web scraping (like Scrapy, BeautifulSoup, Selenium)
 Test frameworks
 Multimedia

Advantages of Python :-
Let’s see how Python dominates over other languages.


1. Extensive Libraries

Python ships with an extensive library that contains code for
various purposes like regular expressions, documentation generation,
unit testing, web browsers, threading, databases, CGI, email, image
manipulation, and more. So, we don't have to write the complete
code for that manually.
2. Extensible
As we have seen earlier, Python can be extended to other
languages. You can write some of your code in languages like C++
or C. This comes in handy, especially in projects.

3. Embeddable
Complimentary to extensibility, Python is embeddable as well.
You can put your Python code in your source code of a different
language, like C++. This lets us add scripting capabilities to our
code in the other language.

4. Improved Productivity
The language’s simplicity and extensive libraries render
programmers more productive than languages like Java and C++ do.
Also, you need to write less to get more things done.

5. IOT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi,
it finds the future bright for the Internet Of Things. This is a way to
connect the language with the real world.

6. Simple and Easy
When working with Java, you may have to create a class to
print 'Hello World'. But in Python, just a print statement will do. It
is also quite easy to learn, understand, and code. This is why, when
people pick up Python, they have a hard time adjusting to other more
verbose languages like Java.


7. Readable
Because it is not such a verbose language, reading Python is
much like reading English. This is the reason why it is so easy to
learn, understand, and code. It also does not need curly braces to
define blocks, and indentation is mandatory. This further aids the
readability of the code.

8. Object-Oriented
This language supports both the procedural and object-
oriented programming paradigms. While functions help us with
code reusability, classes and objects let us model the real world. A
class allows the encapsulation of data and functions into one.

9. Free and Open-Source


Like we said earlier, Python is freely available. But not only
can you download Python for free, but you can also download its
source code, make changes to it, and even distribute it. It downloads
with an extensive collection of libraries to help you with your tasks.

10. Portable
When you code your project in a language like C++, you may
need to make some changes to it if you want to run it on another
platform. But it isn’t the same with Python. Here, you need to code
only once, and you can run it anywhere. This is called Write Once
Run Anywhere (WORA). However, you need to be careful enough
not to include any system-dependent features.

11. Interpreted
Lastly, we will say that it is an interpreted language. Since
statements are executed one by one, debugging is easier than in
compiled languages.


Advantages of Python Over Other Languages :

1. Less Coding
Almost all of the tasks done in Python require less coding than when
the same task is done in other languages. Python also has awesome
standard library support, so you don't have to search for
any third-party libraries to get your job done. This is the reason that
many people suggest learning Python to beginners.

2. Affordable
Python is free; therefore individuals, small companies or big
organizations can leverage the freely available resources to build
applications. Python is popular and widely used, so it gives you better
community support.

The 2019 Github annual survey showed us that Python has


overtaken Java in the most popular programming language
category.

3. Python is for Everyone


Python code can run on any machine whether it is Linux, Mac or
Windows. Programmers need to learn different languages for
different jobs but with Python, you can professionally build web apps,
perform data analysis and machine learning, automate things, do
web scraping and also build games and powerful visualizations. It is
an all-rounder programming language.

Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project.
But if you choose it, you should be aware of its consequences as well.
Let’s now see the downsides of choosing Python over another
language.


1. Speed Limitations

We have seen that Python code is executed line by line. But


since Python is interpreted, it often results in slow execution. This,
however, isn’t a problem unless speed is a focal point for the project.
In other words, unless high speed is a requirement, the benefits
offered by Python are enough to distract us from its speed limitations.

2. Weak in Mobile Computing and Browsers

While it serves as an excellent server-side language, Python is
rarely seen on the client side. Besides that, it is rarely ever used
to implement smartphone-based applications. One such application is
called Carbonnelle.
The reason it is not so famous despite the existence of Brython is that
it isn’t that secure.

3. Design Restrictions

As you know, Python is dynamically-typed. This means that


you don’t need to declare the type of variable while writing the code.
It uses duck-typing. But wait, what’s that? Well, it just means that if
it looks like a duck, it must be a duck. While this is easy on the
programmers during coding, it can raise run-time errors.
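
A tiny example of the duck typing just described; the class and function names are made up for illustration. Any object with a read() method works, but a missing method only surfaces as an error at run time.

class CsvSource:
    def read(self):
        return "job data from a CSV file"

class ApiSource:
    def read(self):
        return "job data from a web API"

def load(source):
    return source.read()   # no declared type: "if it reads like a source, it is one"

print(load(CsvSource()))
print(load(ApiSource()))
try:
    load(42)               # int has no read(); the error appears only when this runs
except AttributeError as err:
    print("run-time error:", err)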

4. Underdeveloped Database Access Layers

Compared to more widely used technologies like JDBC (Java


DataBase Connectivity) and ODBC (Open DataBase
Connectivity), Python’s database access layers are a bit
underdeveloped. Consequently, it is less often applied in huge
enterprises.

5. Simple

No, we’re not kidding. Python’s simplicity can indeed be a


problem. Take my example. I don’t do Java, I’m more of a Python


person. To me, its syntax is so simple that the verbosity of Java code
seems unnecessary.

This was all about the advantages and disadvantages of the Python
programming language.

History of Python : -
What do the alphabet and the programming language Python
have in common? Right, both start with ABC. If we are talking about
ABC in the Python context, it's clear that the programming language
ABC is meant. ABC is a general-purpose programming language and
programming environment, which had been developed in the
Netherlands, in Amsterdam, at the CWI (Centrum Wiskunde &
Informatica). The greatest achievement of ABC was to influence the
design of Python.

Python was conceptualized in the late 1980s. Guido van Rossum
worked at that time on a project at the CWI called Amoeba, a distributed
operating system. In an interview with Bill Venners, Guido van Rossum
said: "In the early 1980s, I worked as an implementer on a team building
a language called ABC at Centrum voor Wiskunde en Informatica (CWI).
I don't know how well people know ABC's influence on Python. I try to
mention ABC's influence because I'm indebted to everything I learned
during that project and to the people who worked on it."

Later in the same interview, Guido van Rossum continued: "I remembered
all my experience and some of my frustration with ABC. I decided to try to
design a simple scripting language that possessed some of ABC's
better properties, but without its problems. So I started typing. I
created a simple virtual machine, a simple parser, and a simple
runtime. I made my own version of the various ABC parts that I liked.
I created a basic syntax, used indentation for statement grouping
instead of curly braces or begin-end blocks, and developed a small
number of powerful data types: a hash table (or dictionary, as we call
it), a list, strings, and numbers."


What is Machine Learning : -

Before we take a look at the details of various machine learning


methods, let's start by looking at what machine learning is, and what it
isn't. Machine learning is often categorized as a subfield of artificial
intelligence, but I find that categorization can often be misleading at
first brush. The study of machine learning certainly arose from
research in this context, but in the data science application of machine
learning methods, it's more helpful to think of machine learning as a
means of building models of data.

Fundamentally, machine learning involves building


mathematical models to help understand data. "Learning" enters the
fray when we give these models tunable parameters that can be
adapted to observed data; in this way the program can be considered
to be "learning" from the data. Once these models have been fit to
previously seen data, they can be used to predict and understand
aspects of newly observed data. I'll leave to the reader the more
philosophical digression regarding the extent to which this type of
mathematical, model-based "learning" is similar to the "learning"
exhibited by the human brain. Understanding the problem setting in
machine learning is essential to using these tools effectively, and so
we will start with some broad categorizations of the types of
approaches we'll discuss here.

Categories of Machine Learning :-

At the most fundamental level, machine learning can be


categorized into two main types: supervised learning and
unsupervised learning.

Supervised learning involves somehow modeling the


relationship between measured features of data and some label
associated with the data; once this model is determined, it can be used
to apply labels to new, unknown data. This is further subdivided


into classification tasks and regression tasks: in classification, the


labels are discrete categories, while in regression, the labels are
continuous quantities. We will see examples of both types of
supervised learning in the following section.

Unsupervised learning involves modeling the features of a


dataset without reference to any label, and is often described as
"letting the dataset speak for itself." These models include tasks such
as clustering and dimensionality reduction. Clustering algorithms
identify distinct groups of data, while dimensionality reduction
algorithms search for more succinct representations of the data. We
will see examples of both types of unsupervised learning in the
following section.
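
A tiny illustration of the two categories, assuming scikit-learn: a supervised classifier is trained with labels, while an unsupervised clustering algorithm finds groups in the same data without any labels. The numbers are toy values for illustration only.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X = np.array([[1.0], [1.2], [0.9], [8.0], [8.3], [7.9]])
y = np.array([0, 0, 0, 1, 1, 1])              # labels used only by the supervised model

clf = LogisticRegression().fit(X, y)           # supervised: learns feature -> label
print(clf.predict([[1.1], [8.1]]))             # expected: [0 1]

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)   # unsupervised: finds groups
print(km.labels_)                              # two clusters discovered without labels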

Need for Machine Learning

Human beings, at this moment, are the most intelligent and


advanced species on earth because they can think, evaluate and solve
complex problems. On the other side, AI is still in its initial stage and
hasn't surpassed human intelligence in many aspects. Then the
question is: what is the need to make machines learn? The most
suitable reason for doing this is "to make decisions, based on data,
with efficiency and scale".

Lately, organizations are investing heavily in newer


technologies like Artificial Intelligence, Machine Learning and Deep
Learning to get the key information from data to perform several real-
world tasks and solve problems. We can call these data-driven decisions
taken by machines, particularly to automate the process. These data-
driven decisions can be used, instead of programming logic, in
problems that cannot be programmed inherently. The fact is that
we can't do without human intelligence, but the other aspect is that we all
need to solve real-world problems with efficiency at a huge scale.
That is why the need for machine learning arises.


Challenges in Machine Learning :-

While Machine Learning is rapidly evolving, making
significant strides with cybersecurity and autonomous cars, this
segment of AI as a whole still has a long way to go. The reason behind
this is that ML has not been able to overcome a number of challenges. The
challenges that ML is currently facing are −

Quality of data − Having good-quality data for ML algorithms is


one of the biggest challenges. Use of low-quality data leads to the
problems related to data preprocessing and feature extraction.

Time-Consuming task − Another challenge faced by ML models


is the consumption of time especially for data acquisition, feature
extraction and retrieval.

Lack of specialist persons − As ML technology is still in its


infancy stage, availability of expert resources is a tough job.

No clear objective for formulating business problems −


Having no clear objective and well-defined goal for business
problems is another key challenge for ML because this technology is
not that mature yet.

Issue of overfitting & underfitting − If the model is


overfitting or underfitting, it cannot be represented well for the
problem.

Curse of dimensionality − Another challenge ML model faces is


too many features of data points. This can be a real hindrance.

Difficulty in deployment − Complexity of the ML model makes


it quite difficult to be deployed in real life.

Applications of Machine Learning :-

Machine Learning is the most rapidly growing technology and


according to researchers we are in the golden year of AI and ML. It is


used to solve many complex real-world problems which cannot be
solved with a traditional approach. Following are some real-world
applications of ML −

 Emotion analysis

 Sentiment analysis

 Error detection and prevention

 Weather forecasting and prediction

 Stock market analysis and forecasting

 Speech synthesis

 Speech recognition

 Customer segmentation

 Object recognition

 Fraud detection

 Fraud prevention

 Recommendation of products to customers in online shopping

How to Start Learning Machine Learning?

Arthur Samuel coined the term “Machine Learning” in 1959 and


defined it as a “Field of study that gives computers the capability to
learn without being explicitly programmed”.
And that was the beginning of Machine Learning! In modern
times, Machine Learning is one of the most popular (if not the most!)
career choices. According to Indeed, Machine Learning Engineer Is
The Best Job of 2019 with a 344% growth and an average base salary
of $146,085 per year.
But there is still a lot of doubt about what exactly is Machine
Learning and how to start learning it? So this article deals with the
Basics of Machine Learning and also the path you can follow to
eventually become a full-fledged Machine Learning Engineer. Now
let’s get started!!!


How to start learning ML?

This is a rough roadmap you can follow on your way to becoming


an insanely talented Machine Learning Engineer. Of course, you can
always modify the steps according to your needs to reach your desired
end-goal!

Step 1 – Understand the Prerequisites

In case you are a genius, you could start ML directly but


normally, there are some prerequisites that you need to know which
include Linear Algebra, Multivariate Calculus, Statistics, and Python.
And if you don’t know these, never fear! You don’t need a Ph.D.
degree in these topics to get started but you do need a basic
understanding.

(a) Learn Linear Algebra and Multivariate Calculus

Both Linear Algebra and Multivariate Calculus are important in


Machine Learning. However, the extent to which you need them
depends on your role as a data scientist. If you are more focused on
application heavy machine learning, then you will not be that heavily
focused on maths as there are many common libraries available. But if
you want to focus on R&D in Machine Learning, then mastery of
Linear Algebra and Multivariate Calculus is very important as you will
have to implement many ML algorithms from scratch.

(b) Learn Statistics

Data plays a huge role in Machine Learning. In fact, around


80% of your time as an ML expert will be spent collecting and
cleaning data. And statistics is a field that handles the collection,
analysis, and presentation of data. So it is no surprise that you need to
learn it!!! Some of the key concepts in statistics that are important are


Statistical Significance, Probability Distributions, Hypothesis Testing,


Regression, etc. Bayesian Thinking is also a very important part
of ML which deals with various concepts like Conditional Probability,
Priors, and Posteriors, Maximum Likelihood, etc.

(c) Learn Python

Some people prefer to skip Linear Algebra, Multivariate Calculus


and Statistics and learn them as they go along with trial and error. But
the one thing that you absolutely cannot skip is Python! While there
are other languages you can use for Machine Learning like R, Scala,
etc. Python is currently the most popular language for ML. In fact,
there are many Python libraries that are specifically useful for
Artificial Intelligence and Machine Learning such
as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it’s best if you learn Python! You can do
that using various online resources and courses such as Fork
Python available Free on GeeksforGeeks.

Step 2 – Learn Various ML Concepts

Now that you are done with the prerequisites, you can move on to
actually learning ML (Which is the fun part!!!) It’s best to start with
the basics and then move on to the more complicated stuff. Some of
the basic concepts in ML are:

(a) Terminologies of Machine Learning

 Model – A model is a specific representation learned from data by


applying some machine learning algorithm. A model is also called a
hypothesis.
 Feature – A feature is an individual measurable property of the data.
A set of numeric features can be conveniently described by a feature
vector. Feature vectors are fed as input to the model. For example, in

order to predict a fruit, there may be features like color, smell, taste,
etc.
 Target (Label) – A target variable or label is the value to be
predicted by our model. For the fruit example discussed in the feature
section, the label with each set of input would be the name of the fruit
like apple, orange, banana, etc.
 Training – The idea is to give a set of inputs (features) and its
expected outputs (labels), so that after training, we will have a model
(hypothesis) that will then map new data to one of the categories trained
on.
 Prediction – Once our model is ready, it can be fed a set of inputs to
which it will provide a predicted output(label).
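
The fruit example from the list above, as a short hedged sketch assuming scikit-learn: each feature vector encodes made-up colour, smell and taste scores, the label is the fruit name, training fits the model, and prediction maps a new feature vector to a fruit.

from sklearn.tree import DecisionTreeClassifier

features = [[1, 0, 1], [0, 1, 0], [1, 1, 1], [0, 0, 0]]   # feature vectors (toy values)
labels = ["apple", "orange", "apple", "banana"]           # target (label) for each vector

model = DecisionTreeClassifier().fit(features, labels)    # training
print(model.predict([[1, 0, 0]]))                         # prediction for a new input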

(b) Types of Machine Learning

 Supervised Learning – This involves learning from a training


dataset with labeled data using classification and regression models.
This learning process continues until the required level of performance
is achieved.
 Unsupervised Learning – This involves using unlabelled data and
then finding the underlying structure in the data in order to learn more
and more about the data itself using factor and cluster analysis models.
 Semi-supervised Learning – This involves using unlabelled data
like Unsupervised Learning with a small amount of labeled data. Using
labeled data vastly increases the learning accuracy and is also more
cost-effective than Supervised Learning.
 Reinforcement Learning – This involves learning optimal actions
through trial and error. So the next action is decided by learning
behaviors that are based on the current state and that will maximize the
reward in the future.


Advantages of Machine Learning :-

1. Easily identifies trends and patterns -

Machine Learning can review large volumes of data and discover


specific trends and patterns that would not be apparent to humans. For
instance, for an e-commerce website like Amazon, it serves to
understand the browsing behaviors and purchase histories of its users to
help cater to the right products, deals, and reminders relevant to them. It
uses the results to reveal relevant advertisements to them.

2. No human intervention needed (automation)

With ML, you don’t need to babysit your project every step of the
way. Since it means giving machines the ability to learn, it lets them
make predictions and also improve the algorithms on their own. A
common example of this is anti-virus software; it learns to filter new
threats as they are recognized. ML is also good at recognizing spam.

3. Continuous Improvement

As ML algorithms gain experience, they keep improving in accuracy


and efficiency. This lets them make better decisions. Say you need to
make a weather forecast model. As the amount of data you have keeps
growing, your algorithms learn to make more accurate predictions faster.

4. Handling multi-dimensional and multi-variety data

Machine Learning algorithms are good at handling data that are


multi-dimensional and multi-variety, and they can do this in dynamic or
uncertain environments.

5. Wide Applications

You could be an e-tailer or a healthcare provider and make ML work


for you. Where it does apply, it holds the capability to help deliver a
much more personal experience to customers while also targeting the
right customers.


Disadvantages of Machine Learning :-

1. Data Acquisition

Machine Learning requires massive data sets to train on, and


these should be inclusive/unbiased, and of good quality. There can also
be times where they must wait for new data to be generated.

2. Time and Resources

ML needs enough time to let the algorithms learn and develop


enough to fulfill their purpose with a considerable amount of accuracy
and relevancy. It also needs massive resources to function. This can
mean additional requirements of computer power for you.

3. Interpretation of Results

Another major challenge is the ability to accurately interpret results


generated by the algorithms. You must also carefully choose the
algorithms for your purpose.

4. High error-susceptibility

Machine Learning is autonomous but highly susceptible to errors.


Suppose you train an algorithm with data sets small enough to not be
inclusive. You end up with biased predictions coming from a biased
training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors
that can go undetected for long periods of time. And when they do get
noticed, it takes quite some time to recognize the source of the issue, and
even longer to correct it.


Python Development Steps : -

Guido van Rossum published the first version of the Python code
(version 0.9.0) at alt.sources in February 1991. This release already
included exception handling, functions, and the core data types list,
dict, str and others. It was also object oriented and had a module
system.

Python version 1.0 was released in January 1994. The major new
features included in this release were the functional programming tools
lambda, map, filter and reduce, which Guido van Rossum never
liked. Six and a half years later, in October 2000, Python 2.0 was
introduced. This release included list comprehensions, a full garbage
collector, and support for Unicode. Python flourished for another 8
years in the 2.x versions before the next major release, Python 3.0
(also known as "Python 3000" and "Py3K"), was released. Python 3 is
not backwards compatible with Python 2.x. The emphasis in Python 3
has been on the removal of duplicate programming constructs and
modules, thus fulfilling or coming close to fulfilling the 13th law of the
Zen of Python: "There should be one -- and preferably only one --
obvious way to do it." Some changes in Python 3.0:

 Print is now a function


 Views and iterators instead of lists
 The rules for ordering comparisons have been simplified. E.g. a
heterogeneous list cannot be sorted, because all the elements of a
list must be comparable to each other.
 There is only one integer type left, i.e. int. long is int as well.
 The division of two integers returns a float instead of an integer. "//"
can be used to have the "old" behaviour.
 Text Vs. Data Instead Of Unicode Vs. 8-bit
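
Two of the listed changes, shown as they behave when run under Python 3:

print("print is now a function")   # a statement in Python 2, a function call in Python 3
print(7 / 2)                        # 3.5 -> true division returns a float in Python 3
print(7 // 2)                       # 3   -> "//" keeps the old floor-division behaviour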

Purpose :-

We demonstrated that our approach enables successful


segmentation of intra-retinal layers—even with low-quality images


containing speckle noise, low contrast, and different intensity ranges


throughout—with the assistance of the ANIS feature.

Python:-

Python is an interpreted high-level programming language for


general-purpose programming. Created by Guido van Rossum and
first released in 1991, Python has a design philosophy that
emphasizes code readability, notably using significant whitespace.

Python features a dynamic type system and automatic memory


management. It supports multiple programming paradigms, including
object-oriented, imperative, functional and procedural, and has a large
and comprehensive standard library.

 Python is Interpreted − Python is processed at runtime by the


interpreter. You do not need to compile your program before
executing it. This is similar to PERL and PHP.
 Python is Interactive − You can actually sit at a Python prompt and
interact with the interpreter directly to write your programs.
Python also acknowledges that speed of development is
important. Readable and terse code is part of this, and so is access to
powerful constructs that avoid tedious repetition of code. This
speed of development, the ease with which a programmer of other
languages can pick up basic Python skills, and the huge standard
library are key to another area where Python excels. All its tools have
been quick to implement, have saved a lot of time, and several of them
have later been patched and updated by people with no Python
background - without breaking.

Install Python Step-by-Step in Windows and Mac :

Python, a versatile programming language, doesn't come pre-
installed on your computer. Python was first released in the year
1991 and remains, to this day, a very popular high-level programming
language. Its design philosophy emphasizes code readability with its
notable use of significant whitespace.


The object-oriented approach and language construct provided by


Python enables programmers to write both clear and logical code for
projects. This software does not come pre-packaged with Windows.

How to Install Python on Windows and Mac :


There have been several updates in the Python version over the
years. The question is how to install Python? It might be confusing for
the beginner who is willing to start learning Python but this tutorial will
solve your query. The latest or the newest version of Python is version
3.7.4 or in other words, it is Python 3.
Note: The python version 3.7.4 cannot be used on Windows XP or
earlier devices.

Before you start with the installation process of Python, you first
need to know your system requirements. Based on your system
type, i.e. operating system and processor, you must download the
appropriate Python version. My system type is a Windows 64-bit operating
system. So the steps below are to install Python version 3.7.4 on a
Windows 7 device, or to install Python 3. Download the Python
cheat sheet here. The steps on how to install Python on Windows 10, 8
and 7 are divided into 4 parts to help understand better.

Download the Correct version into the system


Step 1: Go to the official site to download and install python using
Google Chrome or any other web browser. OR Click on the following
link: https://www.python.org


Now, check for the latest and the correct version for your operating
system.

Step 2: Click on the Download Tab.

Step 3: You can either select the Download Python for Windows 3.7.4
button in yellow color, or you can scroll further down and click on
download with respect to the required version. Here, we are downloading
the most recent Python version for Windows, 3.7.4.


Step 4: Scroll down the page until you find the Files option.

Step 5: Here you see a different version of python along with the
operating system.

• To download Windows 32-bit python, you can select any one from the
three options: Windows x86 embeddable zip file, Windows x86
executable installer or Windows x86 web-based installer.
•To download Windows 64-bit python, you can select any one from the
three options: Windows x86-64 embeddable zip file, Windows x86-64
executable installer or Windows x86-64 web-based installer.
Here we will install Windows x86-64 web-based installer. Here your
first part regarding which version of python is to be downloaded is


completed. Now we move ahead with the second part in installing


python i.e. Installation
Note: To know the changes or updates that are made in the version you
can click on the Release Note Option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to
carry out the installation process.

Step 2: Before you click on Install Now, Make sure to put a tick on
Add Python 3.7 to PATH.


Step 3: Click on Install Now. After the installation is successful, click
on Close.

With these above three steps on python installation, you have


successfully and correctly installed Python. Now is the time to verify the
installation.
Note: The installation process might take a couple of minutes.

Verify the Python Installation


Step 1: Click on Start
Step 2: In the Windows Run Command, type “cmd”.


Step 3: Open the Command prompt option.


Step 4: Let us test whether the python is correctly installed.
Type python -V and press Enter.

Step 5: You will get the answer as 3.7.4


Note: If you have any of the earlier versions of Python already
installed, you must first uninstall the earlier version and then install the
new one.

Check how the Python IDLE works


Step 1: Click on Start
Step 2: In the Windows Run command, type “python idle”.

Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program
Step 4: To go ahead with working in IDLE you must first save the
file. Click on File > Click on Save

Step 5: Name the file, and the "Save as type" should be Python files. Click on
SAVE. Here I have named the file Hey World.
Step 6: Now, for example, enter a print statement (see the sketch below).
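
For example, a first program for the saved IDLE file (the file name "Hey World" follows the step above) might be nothing more than a single print call, run with Run > Run Module (F5):

print("Hey World")   # the output appears in the IDLE shell window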


SOURCE CODE
import pandas as pd
import numpy as np
import seaborn
import matplotlib.pyplot as plt
from collections import default dict
from word cloud import Word Cloud
import warnings
warnings. filter warnings('ignore')
In [3]:
#using pandas class to read job dataset and then displaying few records
dataset = pd. Read csv ('Dataset/DataAnalyst.csv')
dataset
#finding and plotting graphs of different job titles % found in dataset
df = dataset.groupby("Job
Title").size().sort_values(ascending=False).nlargest(20).reset_index()
plt.figure (figsize=(8,12))
plt.pie(df[0], labels=df['Job Title'], auto pct='%.0f%%')
plt. title("Different Jobs % Graph")
plt.xlabel("Job Type")
plt.ylabel("Number of Different Jobs %")
Out[20]:
data = dataset.groupby(['Company
Name'])['Rating'].mean().sort_values(ascending=False).nlargest(250).rese
t_index()
data = data.iloc[50:110]
plt.figure(figsize=(13,3))
plt.plot(data['Company Name'], data['Rating'])
plt.title("Companies with Ratings Graph")
plt.xlabel("Company Names")
plt.ylabel("Ratings")
plt.xticks(rotation=90)
plt.show()

#dataset posted from different locations


df = dataset.groupby('Location').size().sort_values(ascending=False).nlargest(20).reset_index()
plt.figure(figsize=(8,3))
seaborn.barplot(x='Location',y=0, data=df)
plt.title('Top 20 Locations Posting Highest Number of Jobs')
plt.xlabel("Location")
plt.ylabel("Number of Jobs Posted")
plt.xticks(rotation=90)
plt.show()

#different job skills and description


job = dataset["Job Description"].tolist()
skills = []


#now identifying various families of Bigdata


big_data = ["big data", "hadoop", "spark", "impala", "cassandra", "kafka",
            "hdfs", "hbase", "hive", "mongodb", 'flume', 'sqoop', 'flink']
#count how many job descriptions mention each Bigdata technology
counter = 0
big_data_required = defaultdict()
for item in big_data:
    counter = 0
    for it in job:
        if item in it.lower():
            counter = counter + 1
            skills.append([it])
    big_data_required[item] = counter
big_data_df = pd.DataFrame(list(big_data_required.items()), columns=['Big Data Technologies', 'Skills Requirement'])
big_data_df.sort_values(['Skills Requirement'], axis=0, ascending=False, inplace=True)
big_data_df

#graph of highly valued bigdata skills required by companies


plt.figure(figsize=(8,3))
seaborn.barplot(x='Big Data Technologies',y='Skills Requirement',
data=big_data_df)
plt.title('Highly Valued Bigdata Skills for Companies')
plt.xlabel("Skills")
plt.ylabel("Valued Count")
plt.xticks(rotation=90)
plt.show()

#displaying description and skills for each Bigdata family


df = pd.DataFrame(skills, columns=['Descriptions & Bigdata Skills'])
pd.set_option('display.max_colwidth', None)
df

wordCloud = WordCloud(background_color='lightpink', width=2000, height=2000).generate(' '.join(df['Descriptions & Bigdata Skills']))
plt.figure(figsize=(19,9))
plt.axis('off')
plt.title('Bigdata Job Description Words Cloud Graph',fontsize=20)
plt.imshow(wordCloud)
plt.show()


CHAPTER 10

RESULT/DISCUSSIONS

SYSTEM TEST:

The purpose of testing is to discover errors. Testing is the process of


trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies and/or a finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests; each test type addresses a specific testing requirement.

TYPES OF TESTS
Unit testing :
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly, and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application and is done after the completion of an individual unit, before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results.
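As an illustration only, a unit test for the skill-counting logic used in this project might look like the sketch below. It assumes the counting loop from the source code is wrapped in a small helper function; the name count_skill is hypothetical and is not part of the project code.

import unittest

def count_skill(descriptions, skill):
    # hypothetical helper wrapping the counting loop from the source code
    return sum(1 for d in descriptions if skill in d.lower())

class TestSkillCounting(unittest.TestCase):
    def test_counts_are_case_insensitive(self):
        jobs = ["Needs Hadoop and Spark", "Pure Excel role", "HADOOP admin"]
        self.assertEqual(count_skill(jobs, "hadoop"), 2)

    def test_missing_skill_returns_zero(self):
        self.assertEqual(count_skill(["Pure Excel role"], "kafka"), 0)

if __name__ == "__main__":
    unittest.main()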


Integration testing
Integration tests are designed to test integrated software
components to determine if they actually run as one program. Testing is
event driven and is more concerned with the basic outcome of screens or
fields. Integration tests demonstrate that although the components were
individually satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration testing is
specifically aimed at exposing the problems that arise from the
combination of components.

Functional test
Functional tests provide systematic demonstrations that functions
tested are available as specified by the business and technical
requirements, system documentation, and user manuals.

Functional testing is centered on the following items:

Valid Input: identified classes of valid input must be accepted.

Invalid Input: identified classes of invalid input must be rejected.

Functions: identified functions must be exercised.

Output: identified classes of application outputs must be exercised.

Systems/Procedures: interfacing systems or procedures must be invoked.

Organization and preparation of functional tests is focused on requirements, key functions, or special test cases. In addition, systematic coverage pertaining to identified business process flows, data fields, predefined processes, and successive processes must be considered for testing. Before functional testing is complete, additional tests are identified and the effective value of current tests is determined.
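As a hedged illustration of these input classes for this project, the sketch below uses pytest as one possible test runner and a hypothetical loader function load_jobs (a thin wrapper around pd.read_csv, not part of the project code) to check that a valid dataset file is accepted and a missing file is rejected:

import pandas as pd
import pytest

def load_jobs(path):
    # hypothetical wrapper: valid CSV paths are accepted, missing files raise an error
    return pd.read_csv(path)

def test_valid_input_is_accepted(tmp_path):
    csv_file = tmp_path / "jobs.csv"
    csv_file.write_text("Job Title,Company Name\nData Analyst,ACME\n")
    df = load_jobs(csv_file)
    assert list(df.columns) == ["Job Title", "Company Name"]

def test_invalid_input_is_rejected():
    with pytest.raises(FileNotFoundError):
        load_jobs("no_such_file.csv")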


System Test
System testing ensures that the entire integrated software
system meets requirements. It tests a configuration to ensure known and
predictable results. An example of system testing is the configuration
oriented system integration test. System testing is based on process
descriptions and flows, emphasizing pre-driven process links and
integration points.

White Box Testing


White Box Testing is testing in which the software tester has knowledge of the inner workings, structure and language of the software, or at least its purpose. It is used to test areas that cannot be reached from a black box level.

Black Box Testing


Black Box Testing is testing the software without any knowledge of the inner workings, structure or language of the module being tested. Black box tests, like most other kinds of tests, must be written from a definitive source document, such as a specification or requirements document. It is testing in which the software under test is treated as a black box: you cannot "see" into it. The test provides inputs and responds to outputs without considering how the software works.

Unit Testing

Unit testing is usually conducted as part of a combined code and unit test phase of the software lifecycle, although it is not uncommon for coding and unit testing to be conducted as two distinct phases.


Test strategy and approach

Field testing will be performed manually and functional tests will be written in detail.

Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages and responses must not be delayed.

Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed (a small sketch of this check is given below).
• All links should take the user to the correct page.
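A small sketch of the duplicate-entry check mentioned above, written against the project's dataset path from the source code (the check is illustrative and simply reports how many exact duplicates exist):

import pandas as pd

# count postings that are exact duplicates of an earlier row (all columns equal)
dataset = pd.read_csv('Dataset/DataAnalyst.csv')
duplicates = dataset.duplicated().sum()
print("Exact duplicate postings found:", duplicates)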

Integration Testing

Software integration testing is the incremental integration testing of two or more integrated software components on a single platform to produce failures caused by interface defects.

The task of the integration test is to check that components or software applications, e.g. components in a software system or, one step up, software applications at the company level, interact without error.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires
significant participation by the end user. It also ensures that the system
meets the functional requirements.

Test Results: All the test cases mentioned above passed successfully. No defects were encountered.

Bigdata Job Analysis


In the proposed work we analyze a large dataset of online job postings to identify Bigdata-family job skills. Since the introduction of Bigdata, many supporting technologies have been introduced, and many people are unfamiliar with these technologies and their demand. Selecting a suitable technology from the Bigdata job family can help companies in better project development. Many HR professionals are unaware of all the Bigdata technologies and their demand.

In the proposed work we use a job posting dataset from KAGGLE, which can be downloaded from the link below:

https://www.kaggle.com/code/rohitsahoo/data-analyst-job-analysis/input
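Once the CSV from that page has been downloaded, it can be loaded and inspected with pandas. This is a minimal sketch, assuming the file is saved as Dataset/DataAnalyst.csv, the same path used in the source code above:

import pandas as pd

# load the downloaded job postings and take a quick look at them
dataset = pd.read_csv('Dataset/DataAnalyst.csv')
print(dataset.shape)                                  # number of rows and columns
print(dataset['Job Title'].value_counts().head(10))   # ten most common job titles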

The above dataset contains job postings from various categories, and more than half of the jobs are for Data Analysts. We carried out extensive research on all job categories, identified all the families of Bigdata technology, and then plotted a graph of those Bigdata technologies that are in high demand and required by most companies. By looking at this graph, HR can easily understand which Bigdata family is in high demand.

We have coded this project using a Jupyter notebook; the code and output screens are shown below, with comments in blue.


In the above screen, the required Python classes and packages are imported.

In the above screen, the dataset values are loaded and displayed.

The above graph shows the different job categories as percentages; among all categories we can see that 'Data Analyst' roles are most in demand. From this graph, HR can easily see which jobs are in high demand.


The above graph displays the ratings of the different companies that have posted jobs; from it, HR can judge whether these companies are genuine and posting real jobs. The x-axis represents company names and the y-axis represents ratings.

The above screen shows a graph of the top 20 locations posting the highest number of jobs.


In the above screen, code is written to find the number of jobs posted for each Bigdata family, in order to identify its demand and value to companies.

In the above screen, only the Bigdata-family technologies and their skill demands are extracted and categorized; the table shows that Bigdata, Hadoop, Spark and Hive are in greatest demand.
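The counting idea can be summarised by the following minimal, self-contained sketch, which uses two made-up job descriptions purely for illustration:

# illustrative data only - two hypothetical job descriptions
sample_jobs = [
    "Looking for a Data Analyst with Hadoop and Spark experience",
    "Data Engineer role: Spark, Hive and Kafka pipelines",
]
terms = ["hadoop", "spark", "hive", "kafka"]
counts = {t: sum(t in d.lower() for d in sample_jobs) for t in terms}
print(counts)   # {'hadoop': 1, 'spark': 2, 'hive': 1, 'kafka': 1}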


The above screen plots a graph for each Bigdata category, where the x-axis represents the Bigdata technology names and the y-axis represents the number of jobs requiring that technology.

The above screen displays the job description for each Bigdata-family job requirement.


From the above descriptions, HR can easily understand which candidates to select, or can write their own company's job requirements based on them.

The above screen displays a word cloud of technologies; this graph shows in bold the words that are used most frequently across all the job descriptions, and from it we can see which skills dominate the postings.


CHAPTER 11

CONCLUSION

In conclusion, "Big Data Job Analysis" serves as a valuable resource


for stakeholders in the big data ecosystem. The insights gained from this
study empower job seekers to align their skillsets with industry demands,
assist employers in refining recruitment strategies, and guide educators in
shaping curricula to meet the evolving needs of the big data job market.

Job analysis is an important tool in human resource management. With the help of big data technology, it improves the accuracy and effectiveness of recruitment work. It can help companies save recruitment costs, improve work efficiency, and enable employees to integrate into the enterprise quickly, achieving a win-win situation for both the enterprise and its employees. This report takes a company whose recruitment results were not satisfactory as an example and discusses how a systematic approach, weighing the merits and demerits of job analysis in the context of big data technology, can recruit more suitable employees. The analysis done here can serve as a sample for organizations that need to invest in this area.


CHAPTER 12

REFERENCES

[1] Ganesh, S. K. G., Vaishnavi, G., & Rajayogan, K. (2022). A Methodical Approach to Job Analysis in A Typical Indian Manufacturing Organization. International Journal on Recent Trends in Business and Tourism (IJRTBT), 6(2), 17-30.

[2] Khtatbeh, M. M., Mahomed, A. S. B., bin Ab Rahman, S., & Mohamed, R. (2020). The mediating role of procedural justice on the relationship between job analysis and employee performance in Jordan Industrial Estates. Heliyon, 6(10), e04973.

[3] Prayogo, A., Diza, T., Prasetyaningtyas, S. W., & Maharani, A. (2020). A Qualitative Study Exploring the Effects of Job Analysis and Organizational Culture toward Job Satisfaction in a Coffee Shop. Open Journal of Business and Management, 8(06), 2687.

[4] Lohman, L. (2020). Strategic hiring: Using job analysis to effectively select online faculty. Online Journal of Distance Learning Administration, 23(3), n3.

[5] Nasution, M., Yeni, S., Yondra, A., & Putri, A. (2021). The influence of organizational structure and job analysis on work motivation and its impact on the performance of the office of cooperatives for small and medium enterprises, industry and trade (KOPERINDAG), Mentawai Islands Regency. American Journal of Humanities and Social Sciences Research, 5(1), 444-453.

[6] Igbokwe-Ibeto, C. J. (2019). The effect of job analysis on service delivery in federal airports authority of Nigeria (FAAN) 2005-2014. International Journal of Human Resource Studies, 9(2), 195-211.

[7] Mira, M., Choong, Y., & Thim, C. (2019). The effect of HRM practices and employees' job satisfaction on employee performance. Management Science Letters, 9(6), 771-786.


[8] Manea, A. I. (2020). Selecting subject matter experts in job and work analysis surveys: Advantages and disadvantages. Academic Journal of Economic Studies, 6(2), 52-61.

[9] Hanafi, A. S. (2019). Effect of Organizational Structure, Job Analysis and Leadership Style on Work Motivation and Its Impact on Performance of Employees. JPAS (Journal of Public Administration Studies), 4(1), 39-45.

[10] Ma Tao. (2018). Discussion on the Organization and Implementation of Human Resource Management Job Analysis. Enterprise Reform and Management, (10), 73-83.

[11] Yin Le. (2022). Application Research of Big Data in Enterprise Human Resource Recruitment Management: Taking JJ Company as an Example. China Management Informationization, (12), 167-169.

[12] Zhang Yitao. (2020). The impact of big data on human resource recruitment. Modern Business, (34), 88-91.

[13] Ye Xiuyu. (2020). Application Research of Big Data in Enterprise Recruitment. Business Economics, (08), 65-66.

[14] Ye Dong Xiaohong & Guo Aiying. (2014). Application Research of Big Data Technology in Online Recruitment: Taking K Enterprise as an Example. China Human Resources Development, (18), 37-41.

[15] Wang Xiaoli. (2019). The Application of Big Data in Human Resource Management. Modern Industrial Economy and Informatization, (10), 73-74+76.

[16] Shao Dan. (2018). Research on Social Network Recruitment Based on Big Data Technology. China's Strategic Emerging Industries, (16), 123.
