
UNIT - I : INTRODUCTION TO BIG DATA

Syllabus:
Introduction to BigData Platform – Challenges of Conventional Systems - Intelligent data
analysis – Nature of Data - Analytic Processes and Tools - Analysis vs Reporting
Table of Contents

1. Introduction to BigData Platform
2. Challenges of Conventional Systems
3. Intelligent Data Analysis
4. Nature of Data
5. Analytic Processes and Tools
6. Analysis vs Reporting

1. INTRODUCTION TO BIGDATA PLATFORM


1.1 Introduction to Data
1.1.1 Data and Information

Data are plain facts.

The word "data" is the plural of "datum."

Data are facts and statistics, stored or flowing freely over a network; in general they are raw and unprocessed.

When data are processed, organized, structured or presented in a given context so as to make them useful, they are called Information.

It is not enough to have data (such as statistics on the economy).

Data by themselves are fairly useless, but when they are interpreted and processed to determine their true meaning, they become useful and can be called Information.
• What is Data?
– The quantities, characters, or symbols on which operations are performed by a
computer,
– which may be stored and transmitted in the form of electrical signals and
– recorded on magnetic, optical, or mechanical recording media.
• 3 Actions on Data
– Capture
– Transform
– Store
BigData
• Big Data may well be the Next Big Thing in the IT world.
• Big data burst upon the scene in the first decade of the 21st century.
• The first organizations to embrace it were online and startup firms.
• Firms like Google, eBay, LinkedIn, and Facebook were built around big data from the
beginning.
• Like many new information technologies,
• big data can bring about dramatic cost reductions,
• substantial improvements in the time required to perform a computing task, or new
product and service offerings.
• Walmart handles more than 1 million customer transactions every hour.
• Facebook handles 40 billion photos from its user base.
• Decoding the human genome originally took 10 years; now it can be achieved in one week.
• What is Big Data?
– Big Data is also data but with a huge size.
– Big Data is a term used to describe a collection of data that is huge in size and yet
growing exponentially with time.
– In short such data is so large and complex that none of the traditional data
management tools are able to store it or process it efficiently.
No single definition; here is from Wikipedia:
• Big data is the term for
– a collection of data sets so large and complex that it becomes difficult to process using
on-hand database management tools or traditional data processing applications.
Examples of Bigdata
• Following are some examples of Big Data:
– The New York Stock Exchange generates about one terabyte of new trade data per day.
– Other examples of Big Data generation include
• stock exchanges,
• social media sites,
• jet engines,
• etc.
Types Of Big Data
• BigData could be found in three forms:
1. Structured
2. Unstructured
3. Semi-structured
What is Structured Data?
• Any data that can be stored, accessed and processed in the form of a fixed format is termed 'structured' data.
• Over time, techniques have been developed for working with such data (where the format is well known in advance) and for deriving value out of it.
• Foreseeing issues of today:
– when the size of such data grows to a huge extent, typical sizes being in the range of multiple zettabytes.
• Do you know?
• 10²¹ bytes (one billion terabytes) equal one zettabyte.
– That is why the name Big Data is given; imagine the challenges involved in its storage and processing.
• Do you know?
– Data stored in a relational database management system is one example of 'structured' data.

• An 'Employee' table in a database is an example of Structured Data:

Employee_ID Employee_Name Gender Department Salary_In_lacs

2365 Rajesh Kulkarni Male Finance 650000

3398 Pratibha Joshi Female Admin 650000

7465 Shushil Roy Male Admin 500000

7500 Shubhojit Das Male Finance 500000

7699 Priya Sane Female Finance 550000


Unstructured Data
• Any data whose form or structure is unknown is classified as unstructured data.
• In addition to the size being huge,
– un-structured data poses multiple challenges in terms of its processing for
deriving value out of it.
– A typical example of unstructured data is
• a heterogeneous data source containing a combination of simple text files,
images, videos etc.
• Nowadays organizations have a wealth of data available with them but, unfortunately,
– they don't know how to derive value out of it, since this data is in its raw form or unstructured format.
• Example of Unstructured data
– The output returned by 'Google Search'
Semi-structured Data
• Semi-structured data can contain both forms of data.
• Semi-structured data appears structured in form,
– but it is not actually defined with, e.g., a table definition as in a relational DBMS.
• An example of semi-structured data is
– data represented in an XML file.
• Personal data stored in an XML file:
<rec>
<name>Prashant Rao</name>
<sex>Male</sex>
<age>35</age>
</rec>
<rec>
<name>Seema R.</name>
<sex>Female</sex>
<age>41</age>
</rec>
<rec>
<name>Satish Mane</name>
<sex>Male</sex>
<age>29</age>
</rec>
<rec>
<name>Subrato Roy</name>
<sex>Male</sex>
<age>26</age>
</rec>
<rec>
<name>Jeremiah J.</name>
<sex>Male</sex>
<age>35</age></rec>
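Since such XML has structure but no fixed schema, here is a hedged sketch of how records like these might be read programmatically with Python's standard library; the wrapping <recs> root element is added here only so the fragment is well-formed:

import xml.etree.ElementTree as ET

# The records above lack a single root element, so we wrap them in <recs>
# purely for illustration; the field names match the example.
xml_data = """<recs>
<rec><name>Prashant Rao</name><sex>Male</sex><age>35</age></rec>
<rec><name>Seema R.</name><sex>Female</sex><age>41</age></rec>
</recs>"""

root = ET.fromstring(xml_data)
for rec in root.findall("rec"):
    print(rec.findtext("name"), rec.findtext("sex"), rec.findtext("age"))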
Characteristics of Big Data OR the 3Vs of Big Data
• Three characteristics of Big Data (the 3 Vs):
1) Volume – data quantity
2) Velocity – data speed
3) Variety – data types

Growth of Big Data

Storing Big Data


• Analyzing your data characteristics
– Selecting data sources for analysis
– Eliminating redundant data
– Establishing the role of NoSQL
• Overview of Big Data stores
– Data models: key value, graph, document, column-family
– Hadoop Distributed File System (HDFS)
– Hbase
– Hive
Processing Big Data
• Integrating disparate data stores
– Mapping data to the programming framework
– Connecting and extracting data from storage
– Transforming data for processing
– Subdividing data in preparation for Hadoop MapReduce
• Employing Hadoop MapReduce
– Creating the components of Hadoop MapReduce jobs
– Distributing data processing across server farms
– Executing Hadoop MapReduce jobs
– Monitoring the progress of job flows
Why Big Data?
• The growth of Big Data is driven by:
– increases in storage capacity
– increases in processing power
– availability of data (different data types)
– Every day we create 2.5 quintillion bytes of data; 90% of the data in the world today has been created in the last two years alone.
 Huge storage needs in real-time applications:
– FB generates 10TB daily
– Twitter generates 7TB of data Daily
– IBM claims 90% of today’s stored data was generated in just the last two years.
How Is Big Data Different?
1) Automatically generated by a machine (e.g. a sensor embedded in an engine)
2) Typically an entirely new source of data (e.g. use of the internet)
3) Not designed to be friendly (e.g. text streams)
4) May not have much value
– Need to focus on the important part
Big Data sources
• Users
• Application
• Systems
• Sensors
Risks of Big Data
• Being overwhelmed by the data
– Need the right people solving the right problems
• Costs escalating too fast
– It isn't necessary to capture 100% of the data
• Privacy concerns around many sources of big data
– Self-regulation
– Legal regulation
Leading Technology Vendors

Example Vendors      Commonality
IBM – Netezza        • MPP architectures
EMC – Greenplum      • Commodity hardware
Oracle – Exadata     • RDBMS based
                     • Full SQL compliance
1.1.2 Basics of Big Data Platform
 A Big Data platform is an IT solution which combines several Big Data tools and utilities into one packaged solution for managing and analyzing Big Data.
 A Big Data platform is a type of IT solution that combines the features and capabilities of several big data applications and utilities within a single solution.
 It is an enterprise-class IT platform that enables an organization to develop, deploy, operate and manage a big data infrastructure/environment.
What is a Big Data Platform?
 A Big Data Platform is an integrated IT solution for Big Data management which combines several software systems, software tools and hardware to provide an easy-to-use system to enterprises.
 It is a single one-stop solution for all the Big Data needs of an enterprise, irrespective of its size and data volume. A Big Data Platform is an enterprise-class IT solution for developing, deploying and managing Big Data.
 There are several open source and commercial Big Data Platforms on the market with varied features which can be used in a Big Data environment.
 A Big Data platform generally consists of big data storage, servers, databases, big data management, business intelligence and other big data management utilities.
 It also supports custom development, querying and integration with other systems.
 The primary benefit of a big data platform is to reduce the complexity of multiple vendors/solutions into one cohesive solution.
 Big data platforms are also delivered through the cloud, where the provider offers an all-inclusive big data solution and services.
1.1.2.2 Features of Big Data Platform
Here are the most important features of any good Big Data Analytics Platform:
a) A Big Data platform should be able to accommodate new platforms and tools based on the business requirement, because business needs can change due to new technologies or due to changes in business processes.
b) It should support linear scale-out
c) It should have capability for rapid deployment
d) It should support a variety of data formats
e) Platform should provide data analysis and reporting tools
f) It should provide real-time data analysis software
g) It should have tools for searching through large data sets
Big data is a term for data sets that are so large or complex that traditional data processing
applications are inadequate.
Challenges include
 Analysis,
 Capture,
 Data Curation,(Integration)
 Search,
 Sharing,
 Storage,
 Transfer,
 Visualization,
 Querying,
 Updating
 Information privacy
 The term often refers simply to the use of predictive analytics or certain other advanced methods to extract value from data, and seldom to a particular size of data set.
 Accuracy in big data may lead to more confident decision making, and better decisions can result in greater operational efficiency, cost reduction and reduced risk.
 Big data usually includes data sets with sizes beyond the ability of commonly used software tools to capture, curate, manage, and process the data within a tolerable elapsed time. Big data "size" is a constantly moving target.
 Big data requires a set of techniques and technologies with new forms of integration to reveal insights from datasets that are diverse, complex, and of a massive scale.
1.1.2.3 List of BigData Platforms
a) Hadoop
b) Cloudera
c) Amazon Web Services
d) Hortonworks
e) MapR
f) IBM Open Platform
g) Microsoft HDInsight
h) Intel Distribution for Apache Hadoop
i) Datastax Enterprise Analytics
j) Teradata Enterprise Access for Hadoop
k) Pivotal HD
a) Hadoop
What is Hadoop?
 Hadoop is an open-source, Java-based programming framework and server software which is used to store and analyze data with the help of hundreds or even thousands of commodity servers in a clustered environment.
 Hadoop is designed to store and process large datasets extremely fast and in a fault-tolerant way.
 Hadoop uses HDFS (Hadoop Distributed File System) for storing data on a cluster of commodity computers. If any server goes down, Hadoop knows how to replicate the data, so there is no loss of data even on hardware failure (the toy sketch below illustrates the idea).
 Hadoop is an Apache-sponsored project, and it consists of many software packages which run on top of the Apache Hadoop system.
 Hadoop provides a set of tools and software for making the backbone of a Big Data analytics system.
 The Hadoop ecosystem provides the necessary tools and software for handling and analyzing Big Data.
 On top of the Hadoop system, many applications can be developed and plugged in to provide an ideal solution for Big Data needs.
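The fault tolerance described above comes from block replication. The following is a toy Python simulation of the idea only, not the real HDFS API: every block is written to several nodes, so losing any single node loses no data.

import random

REPLICATION = 3                       # HDFS's default replication factor
nodes = {f"node{i}": set() for i in range(5)}

def store_block(block_id):
    # Place each block on REPLICATION distinct nodes, as HDFS would.
    for node in random.sample(sorted(nodes), REPLICATION):
        nodes[node].add(block_id)

for block in range(10):
    store_block(block)

# Simulate one server going down: every block is still held elsewhere.
dead = "node2"
surviving = set().union(*(blocks for name, blocks in nodes.items() if name != dead))
print(sorted(surviving))              # all 10 blocks remain recoverable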
b) Cloudera
 Cloudera is one of the first commercial Hadoop-based Big Data Analytics Platforms offering Big Data solutions.
 Its product range includes Cloudera Analytic DB, Cloudera Operational DB, Cloudera Data Science & Engineering and Cloudera Essentials.
 All these products are based on Apache Hadoop and provide real-time processing and analytics of massive data sets.
Website: https://www.cloudera.com
c) Amazon Web Services
 Amazon offers a Hadoop environment in the cloud as part of its Amazon Web Services package.
 The AWS Hadoop solution is a hosted solution which runs on Amazon's Elastic Compute Cloud (EC2) and Simple Storage Service (S3).
 Enterprises can use the Amazon AWS to run their Big Data processing analytics in the
cloud environment.
 Amazon EMR (Elastic MapReduce) allows companies to set up and easily scale Apache Hadoop, Spark, HBase, Presto, Hive, and other Big Data frameworks using its cloud hosting environment (see the sketch after the service list below).
Website: https://aws.amazon.com/emr/
o 1. Amazon EC2 (Elastic Compute Cloud)
o 2. Amazon RDS (Relational Database Services)
o 3. Amazon S3 (Simple Storage Service)
o 4. Amazon IAM (Identity and Access Management)
o 5. Amazon EBS (Elastic Block Store)
o 6. Amazon Lambda
o 7. Amazon EFS (Elastic File System)
o 8. Amazon CloudFront
o 9. Amazon SNS (Simple Notification Service)
o 10. Amazon VPC (Virtual Private Cloud)
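As a hedged sketch, a small EMR cluster can be launched programmatically with the boto3 SDK; the release label, instance types and IAM role names below are illustrative assumptions, and valid AWS credentials and roles are required to actually run this:

import boto3

emr = boto3.client("emr", region_name="us-east-1")   # assumed region

response = emr.run_job_flow(
    Name="demo-analytics-cluster",                   # hypothetical name
    ReleaseLabel="emr-6.10.0",                       # assumed EMR release
    Applications=[{"Name": "Spark"}, {"Name": "Hive"}],
    Instances={
        "MasterInstanceType": "m5.xlarge",
        "SlaveInstanceType": "m5.xlarge",
        "InstanceCount": 3,
        "KeepJobFlowAliveWhenNoSteps": False,
    },
    JobFlowRole="EMR_EC2_DefaultRole",               # default EMR roles
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])                         # the new cluster's id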
d) Hortonworks
 Hortonworks uses 100% open-source software without any proprietary software. Hortonworks was the first to integrate support for Apache HCatalog.
 Hortonworks is a Big Data company based in California.
 The company develops and supports applications for Apache Hadoop.
The Hortonworks Hadoop distribution is 100% open source and enterprise-ready, with the following features:
 Centralized management and configuration of clusters
 Security and data governance are built-in features of the system
 Centralized security administration across the system
Website: https://hortonworks.com/
e) MapR
 MapR is another Big Data platform, one which uses the Unix file system for handling data.
 It does not use HDFS, and the system is easy to learn for anyone familiar with the Unix system.
 This solution integrates Hadoop, Spark, and Apache Drill with a real-time data
processing feature.
 Website: https://mapr.com
f) IBM Open Platform
 IBM also offers a Big Data Platform which is based on the Hadoop ecosystem software.
 IBM is a well-known company in software and data computing.
It uses the latest Hadoop software and provides the following features (IBM Open Platform Features):
 Based on 100% open source software
 Native support for rolling Hadoop upgrades
 Support for long-running applications within YARN
 Support for heterogeneous storage which includes HDFS for in-memory and SSD in
addition to HDD
 Native support for Spark; developers can use Java, Python and Scala to write programs
 The platform includes Ambari, a tool for provisioning, managing and monitoring Apache Hadoop clusters
 IBM Open Platform includes all the software of Hadoop ecosystem e.g. HDFS, YARN,
MapReduce, Ambari, Hbase, Hive, Oozie, Parquet, Parquet Format, Pig, Snappy, Solr,
Spark, Sqoop, Zookeeper, Open JDK, Knox, Slider
 Developers can download a trial Docker image or native installer for testing and learning the system
 The platform is well supported by the IBM technology team
Website: https://www.ibm.com/analytics/us/en/technology/hadoop/
g) Microsoft HDInsight
 The Microsoft HDInsight is also based on the Hadoop distribution and it’s a commercial Big
Data platform from Microsoft.
 Microsoft is a software giant which develops the Windows operating system for desktop users and server users.
 This is a major Hadoop distribution offering which runs on Windows and Azure environments.
 It offers customized, optimized open-source Hadoop-based analytics clusters using Spark, Hive, MapReduce, HBase, Storm, Kafka and R Server, running on the Hadoop system in a Windows/Azure environment.
Website: https://azure.microsoft.com/en-in/services/hdinsight/
h) Intel Distribution for Apache Hadoop
 Intel also offers its own packaged distribution of Hadoop software, which includes the company's Graph Builder and Analytics Toolkit.
 This distribution can be purchased through various channel partners and comes with support and a yearly subscription.
Website: http://www.intel.com/content/www/us/en/software/intel-distribution-for-apache-
hadoop-software-solutions.html
i) Datastax Enterprise Analytics
 Datastax Enterprise Analytics is another player in the Big Data Analytics platform space, offering its own distribution based on the Apache Cassandra database management system, which runs on top of an Apache Hadoop installation.
 It also includes a proprietary system with a dashboard used for security management, searching data, viewing various details, and a visualization engine.
 It can handle analysis of 10 million data points every second, so it is a powerful system.
Features:
 It provides powerful indexing, search, analytics and graph functionality into the Big
Data system
 It supports advanced indexing and searching features
 It comes with powerful integrated analytics system
 It provides multi-model support in the platform: it supports key-value, tabular, JSON/document and graph data formats. Powerful search features enable users to get the required data in real time.
Website: http://www.datastax.com/
j) Teradata Enterprise Access for Hadoop
 Teradata Enterprise Access for Hadoop is another player in the Big Data Platform space; it offers a packaged Hadoop distribution which, again, is based on the Hortonworks distribution.
 Teradata Enterprise Access for Hadoop offers hardware and software in its Big Data solution, which can be used by enterprises to process their data sets.
Company offers:
 Teradata
 Teradata Aster and
 Hadoop
as part of its package solution.
Website: http://www.teradata.com
k) Pivotal HD
Pivotal HD is another Hadoop distribution which includes the database tool Greenplum and the analytics platform GemFire.
Features:
 It can be installed on-premise and in public clouds
 This system is based on the open source software
 It supports data evolution within the 3-year subscription period.
Indian Railways, BMW, China CITIC Bank and many other big players are using this distribution of the Big Data Platform.
Website: https://pivotal.io/
1.1.3 Open Source Big Data Platform
There are various open-source Big Data Platforms which can be used for Big Data handling and data analytics in real-time environments.
Both small and big enterprises can use these tools for managing their enterprise data and getting the best value from it.
i) Apache Hadoop
 Apache Hadoop is a Big Data platform and software package which is an Apache-sponsored project.
 Under the Apache Hadoop project, various other software packages are being developed which run on top of the Hadoop system to provide enterprise-grade data management and analytics solutions to enterprises.
 Apache Hadoop is open source and provides a distributed file system together with a data processing and analysis engine for analyzing large sets of data.
 Hadoop can run on Windows, Linux and OS X operating systems, but it is mostly used on Ubuntu and other Linux variants.
ii) MapReduce
 The MapReduce engine was originally described by Google, and it is the system which enables developers to write programs that run in parallel on hundreds or even thousands of computer nodes to process vast data sets.
 After processing the job on the different nodes, it combines the results and returns them to the program which executed the MapReduce job.
 This software is platform independent and runs on top of the Hadoop ecosystem. It can process tremendous amounts of data at very high speed in a Big Data environment (see the word-count sketch below).
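To make the model concrete, here is a minimal word-count sketch in plain Python that imitates the map, shuffle and reduce phases; it illustrates the programming model only, not the Hadoop Java API:

from collections import defaultdict

def map_phase(document):
    # The mapper emits a (word, 1) pair for every word it sees.
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    # The framework groups all values that share a key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # The reducer sums the counts for one word.
    return key, sum(values)

documents = ["big data is big", "data about data"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts)   # {'big': 2, 'data': 3, 'is': 1, 'about': 1}

In the real framework, the mappers and reducers run on different nodes and the shuffle moves data across the network; the logic per record, however, is exactly this simple.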
iii) GridGain
 GridGain is another software system for parallel processing of data, just like MapReduce.
 GridGain is an alternative to Apache Hadoop's MapReduce.
 GridGain is used for the processing of in-memory data and is based on the Apache Ignite framework.
 GridGain is compatible with the Hadoop HDFS and runs on top of the Hadoop ecosystem.

 The enterprise version of GridGain can be purchased from the official GridGain website, while the free version can be downloaded from its GitHub repository.
Website: https://www.gridgain.com/
iv) HPCC Systems
 HPCC Systems stands for "high performance computing cluster"; this system is developed by LexisNexis Risk Solutions.
 According to the company this software is much faster than Hadoop and can be used in the
cloud environment.
 HPCC Systems is developed in C++ and compiled into binary code for distribution.
 HPCC Systems is an open-source, massively parallel processing system which is installed on a cluster to process data in real time.
 It requires Linux operating system and runs on the commodity servers connected with
high-speed network.
 It is scalable from one node to 1000s of nodes to provide performance and scalability.
 Website: https://hpccsystems.com/
v) Apache Storm
 Apache Storm is software for real-time computation and distributed processing.
 It is free and open-source software developed at the Apache Software Foundation. It is a real-time, parallel processing engine.
 Apache Storm is highly scalable and fault-tolerant, and it supports almost any programming language.
Apache Storm can be used in:
 Realtime analytics
 Online machine learning
 Continuous computation
 Distributed RPC
 ETL
 And all other places where real-time processing is required.
Apache Storm is used by Yahoo, Twitter, Spotify, Yelp, Flipboard and many other data giants.
Website: http://storm.apache.org/
vi) Apache Spark
 Apache Spark is software that runs on top of Hadoop and provides an API for real-time, in-memory processing and analysis of large data sets stored in HDFS.
 It stores the data in memory for faster processing.
 Apache Spark runs programs up to 100 times faster in memory and 10 times faster on disk compared to MapReduce.
 Apache Spark exists to speed up the processing and analysis of big data sets in a Big Data environment.
 Apache Spark is being adopted very fast by businesses to analyze their data sets and get real value from their data.
 Website: http://spark.apache.org/
vii) SAMOA
 SAMOA stands for Scalable Advanced Massive Online Analysis.
 It is a system for mining Big Data streams.
 SAMOA is open-source software distributed via GitHub, and it can also be used as a distributed machine learning framework.
 Website: https://github.com/yahoo/samoa
Thus, as of 2017 the Big Data industry is growing very fast, and companies are rapidly moving their data to Big Data Platforms. There is a huge requirement for Big Data skills in the job market, and many companies are providing training and certifications in Big Data technologies.
1.2. CHALLENGES OF CONVENTIONAL SYSTEMS
1.2.1 Introduction to Conventional Systems
What is a Conventional System?
 In the context of data management, a conventional system is a traditional system, such as a relational database management system, designed to store and process structured data sets of controlled size.
 Big data is a huge amount of data which is beyond the processing capacity of conventional database systems to manage and analyze within a specific time interval.
Difference between conventional computing and intelligent computing
 Conventional computing functions logically with a set of rules and calculations, while neural computing can function via images, pictures, and concepts.
 Conventional computing is often unable to manage the variability of data obtained in the real world.
 On the other hand, neural computing, like our own brains, is well suited to situations that have no clear algorithmic solution and is able to manage noisy, imprecise data. This allows it to excel in those areas that conventional computing often finds difficult.
1.2.2 Comparison of Big Data with Conventional Data

Big Data: Huge data sets.
Conventional Data: Data set size in control.

Big Data: Unstructured data such as text, video, and audio.
Conventional Data: Normally structured data such as numbers and categories, but it can take other forms as well.

Big Data: Hard-to-perform queries and analysis.
Conventional Data: Relatively easy-to-perform queries and analysis.

Big Data: Needs a new methodology for analysis.
Conventional Data: Data analysis can be achieved by using conventional methods.

Big Data: Needs tools such as Hadoop, Hive, HBase, Pig, Sqoop, and so on.
Conventional Data: Tools such as SQL, SAS, R, and Excel alone may be sufficient.

Big Data: The aggregated or sampled or filtered data.
Conventional Data: Raw transactional data.

Big Data: Used for reporting, basic analysis, and text mining; advanced analytics is only at a starting stage in big data.
Conventional Data: Used for reporting, advanced analysis, and predictive modeling.

Big Data: Analysis needs both programming skills (such as Java) and analytical skills.
Conventional Data: Analytical skills are sufficient; advanced analysis tools don't require expert programming skills.

Big Data: Petabytes/exabytes of data.
Conventional Data: Megabytes/gigabytes of data.

Big Data: Millions/billions of accounts.
Conventional Data: Thousands/millions of accounts.

Big Data: Billions/trillions of transactions.
Conventional Data: Millions of transactions.

Big Data: Generated by big financial institutions, Facebook, Google, Amazon, eBay, Walmart, and so on.
Conventional Data: Generated by small enterprises and small banks.
1.2.3 List of Challenges of Conventional Systems
The following challenges have been dominant for conventional systems in real-time scenarios:
1) Uncertainty of the data management landscape
2) The Big Data talent gap
3) Getting data into the big data platform
4) Need for synchronization across data sources
5) Getting important insights through the use of Big Data analytics
1) Uncertainty of Data Management Landscape:
 Because big data is continuously expanding, there are new companies and technologies being developed every day.
 A big challenge for companies is to find out which technology works best for them without the introduction of new risks and problems.
2) The Big Data Talent Gap:
 While Big Data is a growing field, there are very few experts available in this field.
 This is because Big Data is a complex field, and people who understand the complexity and intricate nature of this field are few and far between.
3) Getting data into the big data platform:
 Data is increasing every single day. This means that companies have to tackle a limitless amount of data on a regular basis.
 The scale and variety of data available today can overwhelm any data practitioner, and that is why it is important to make data accessibility simple and convenient for managers and owners.
4) Need for synchronization across data sources:
 As data sets become more diverse, there is a need to incorporate them into an analytical
platform.
 If this is ignored, it can create gaps and lead to wrong insights and messages.
5) Getting important insights (understanding a situation) through the use of Big Data analytics:
 It is important that companies gain proper insights from big data analytics and it is important
that the correct department has access to this information.
 A major challenge in the big data analytics is bridging this gap in an effective fashion.
Other Challenges of Conventional Systems
Three further challenges that big data faces:
1. Data
2. Process
3. Management
1. Data Challenges
Volume
1. The volume of data, especially machine-generated data, is exploding.
2. That data is growing fast every year, with new sources of data emerging.
3. For example, in the year 2000, 800,000 petabytes (PB) of data were stored in the world, and it was expected to reach 35 zettabytes (ZB) by 2020 (according to IBM).
Social media plays a key role: Twitter generates 7+ terabytes (TB) of data every day; Facebook, 10 TB.
• Mobile devices play a key role as well, as there were an estimated 6 billion mobile phones in 2011.
• The challenge is how to deal with the size of Big Data.
Variety: Combining Multiple Data Sets
• More than 80% of today's information is unstructured, and it is typically too big to manage effectively.
• Today, companies are looking to leverage a lot more data from a wider variety of sources, both inside and outside the organization.
• Think of documents, contracts, machine data, sensor data, social media, health records, emails, etc. The list is really endless.
• A lot of this data is unstructured, or has a complex structure that is hard to represent in rows and columns.
2. Process Challenges
 Processing such unstructured, fast-growing data from a wide variety of internal and external sources is difficult: it must be captured, transformed and analyzed within a tolerable elapsed time.
3. Management Challenges
 Managing this data raises issues of governance, security and privacy, since much of it is unstructured or has a complex structure that is hard to represent in rows and columns.
Big Data Challenges
– The challenges include capture, curation, storage, search, sharing, transfer, analysis, and visualization.
• Big Data is a trend toward larger data sets
• due to the additional information derivable from analysis of a single large set of related data,
– as compared to separate smaller sets with the same total amount of data, allowing correlations to be found to
• "spot business trends, determine quality of research, prevent diseases, link legal citations, combat crime, and determine real-time roadway traffic conditions."
Challenges of Big Data
The following are among the most important challenges of Big Data:
a) Meeting the need for speed
 In today's hypercompetitive business environment, companies not only have to find and analyze the relevant data they need, they must find it quickly.
 Visualization helps organizations perform analyses and make decisions much more rapidly, but the challenge is going through the sheer volumes of data and accessing the level of detail needed, all at high speed.
 The challenge only grows as the degree of granularity (the level of detail) increases. One possible solution is hardware: some vendors are using increased memory and powerful parallel processing to crunch large volumes of data extremely quickly.
b) Understanding the data
 It takes a lot of understanding to get data into the right shape so that you can use visualization as part of data analysis.
c) Addressing data quality
 Even if you can find and analyze data quickly and put it in the proper context for the audience that will be consuming the information, the value of the data for decision making is compromised if the data is not accurate or timely.
d) Displaying meaningful results
 Plotting points on a graph for analysis becomes difficult when dealing with extremely large amounts of information or a variety of categories of information.
 For example, imagine you have 10 billion rows of retail SKU data that you are trying to compare. A user trying to view 10 billion plots on the screen will have a hard time seeing so many data points.
 By grouping the data together, or "binning," you can visualize the data more effectively (see the sketch below).
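A minimal Python sketch of binning, with made-up prices and an arbitrary bin width of 5, showing how many raw values collapse into a few plottable buckets:

from collections import Counter

prices = [3.2, 7.8, 12.5, 4.1, 9.9, 15.0, 2.2, 11.3]   # made-up values

def bin_label(value, width=5):
    # Map a raw value to the label of the bucket that contains it.
    low = int(value // width) * width
    return f"{low}-{low + width}"

histogram = Counter(bin_label(p) for p in prices)
print(histogram)   # Counter({'0-5': 3, '5-10': 2, '10-15': 2, '15-20': 1})

Eight individual points become four bars, and the same idea scales to billions of rows: the chart size depends on the number of bins, not the number of records.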
1.3. INTELLIGENT DATA ANALYSIS
1.3.1 INTRODUCTION TO INTELLIGENT DATA ANALYSIS (IDA)
Intelligent Data Analysis (IDA) is one of the hot issues in the fields of artificial intelligence and information systems.
Intelligent data analysis reveals implicit, previously unknown and potentially valuable information or knowledge from large amounts of data.
Intelligent data analysis is also a kind of decision support process.
Based mainly on artificial intelligence, machine learning, pattern recognition, statistics, databases and visualization technology, IDA automatically extracts useful information, necessary knowledge and interesting models from large amounts of online data in order to help decision makers make the right choices.
The process of IDA generally consists of the following three stages:
(1) data preparation
(2) rule finding or data mining
(3) result validation and explanation.
Data preparation involves selecting the required data from the relevant data sources and integrating it into a data set to be used for data mining.
Rule finding is working out the rules contained in the data set by means of certain methods or algorithms.
Result validation requires examining these rules, and result explanation means giving intuitive, reasonable and understandable descriptions using logical reasoning.
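A toy end-to-end sketch of the three stages in Python; all records, the train/test split, and the simple age-threshold rule are invented purely for illustration:

# (1) Data preparation: select relevant (age, risk) records and split them.
records = [(35, "low"), (52, "high"), (61, "high"), (28, "low"),
           (47, "high"), (40, "low"), (58, "high"), (33, "low")]
train, test = records[:6], records[6:]

# (2) Rule finding: choose the age threshold that best separates classes.
def predict(age, threshold):
    return "high" if age > threshold else "low"

best = max(range(20, 70),
           key=lambda t: sum(predict(age, t) == label for age, label in train))

# (3) Result validation: check the rule's accuracy on held-out records.
accuracy = sum(predict(age, best) == label for age, label in test) / len(test)
print(f"rule: age > {best}; test accuracy: {accuracy:.0%}")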
As the goal of intelligent data analysis is to extract useful knowledge, the process
demands a combination of extraction, analysis, conversion, classification, organization,
reasoning, and so on.
It is challenging and fun to work out how to choose appropriate methods to resolve the difficulties encountered in the process.
Intelligent data analysis methods and tools, as well as the authenticity of the obtained results, continue to pose challenges.
1.3.2 Uses / Benefits of IDA
Intelligent Data Analysis provides a forum for examining issues related to the research and applications of Artificial Intelligence techniques in data analysis across a variety of disciplines. The techniques include (but are not limited to) the following benefit areas:
 Data Visualization
 Data pre-processing (fusion, editing, transformation, filtering, sampling)
 Data Engineering
 Database mining techniques, tools and applications
 Use of domain knowledge in data analysis
 Big Data applications
 Evolutionary algorithms
 Machine Learning(ML)
 Neural nets
 Fuzzy logic
 Statistical pattern recognition
 Knowledge Filtering and
 Post-processing
1.3.3 Intelligent Data Analysis Examples
Example of IDA:
 An epidemiological study (1970-1990): a sample of examinees who died from cardiovascular diseases during the period.
Evaluation of IDA results:
 Absolute and relative accuracy
 Sensitivity and specificity
 False positives and false negatives
 Error rate
 Reliability of rules
1. 4 NATURE OF DATA
1.4.1 INTRODUCTION
Data
 Data is a set of values of qualitative or quantitative variables; restated, pieces of
data are individual pieces of information.
 Data is measured, collected, reported, and analyzed, whereupon it can be visualized using graphs or images.
Properties of Data
To examine the properties of data, refer to the various definitions of data.
These definitions reveal that the following are the properties of data:
a) Amenability of use (willing to conform)
b) Clarity
c) Accuracy
d) Essence
e) Aggregation
f) Compression
g) Refinement
a) Amenability of use: From the dictionary meaning of data it is learnt that data are facts used
in deciding something. In short, data are meant to be used as a base for arriving at definitive
conclusions.
b) Clarity: Data are a crystallized presentation. Without clarity, the meaning desired to be
communicated will remain hidden.
c) Accuracy: Data should be real, complete and accurate. Accuracy is thus, an essential
property of data.
d) Essence: Large quantities of data are collected and have to be compressed and refined. Data so refined can present the essence, or derived qualitative value, of the matter.
e) Aggregation: Aggregation is cumulating or adding up.

f) Compression: Large amounts of data are always compressed to make them more meaningful, i.e., compressed to a manageable size. Graphs and charts are some examples of compressed data.
g) Refinement: Data require processing or refinement. When refined, they are capable of
leading to conclusions or even generalizations. Conclusions can be drawn only when data
are processed or refined.
1.4.2 TYPES OF DATA
 In order to understand the nature of data it is necessary to categorize them into various
types.
 Different categorizations of data are possible.
 The first such categorization may be on the basis of disciplines, e.g., Sciences, Social
Sciences, etc. in which they are generated.
 Within each of these fields, there may be several ways in which data can be categorized into
types.
There are four types of data:
 Nominal
 Ordinal
 Interval
 Ratio
Each offers a unique set of characteristics, which impacts the type of analysis that can be
performed.

The distinction between the four types of scales centers on three different characteristics:
1. The order of responses – whether it matters or not
2. The distance between observations – whether it matters or is interpretable (explainable)
3. The presence or inclusion of a true zero – a zero that means the absence of the thing being measured (e.g., zero objects; negative values are not meaningful)
1.4.2.1 Nominal Scales
Nominal scales measure categories and have the following characteristics:
 Order: The order of the responses or observations does not matter.
 Distance: Nominal scales do not convey distance. The difference between a 1 and a 2 is not meaningful, nor is that between a 2 and a 3.
 True Zero: There is no true or real zero. On a nominal scale, zero is uninterpretable.
Appropriate statistics for nominal scales: mode, count, frequencies
Displays: histograms or bar charts
1.4.2.2 Ordinal Scales
At the risk of providing a tautological definition, ordinal scales measure, well, order. So, our
characteristics for ordinal scales are:
 Order: The order of the responses or observations matters.
 Distance: Ordinal scales do not convey distance. The distance between first and second is unknown, as is the distance between first and third, and so on for all observations.
 True Zero: There is no true or real zero. An item, observation, or category cannot be ranked zero.
Appropriate statistics for ordinal scales: count, frequencies, mode
Displays: histograms or bar charts
1.4.2.3 Interval Scales
Interval scales provide insight into the variability of the observations or data.
Classic interval scales are Likert scales (e.g., 1 - strongly agree and 9 - strongly disagree) and
Semantic Differential scales (e.g., 1 - dark and 9 - light).
The characteristics of interval scales are:
 Order: The order of the responses or observations does matter.
 Distance: Interval scales do convey distance. That is, the distance from 1 to 2 is the same as the distance from 4 to 5, so differences can meaningfully be added and subtracted. Ratios, however, are not meaningful on an interval scale: six is not "twice as much as" three, because the zero point is arbitrary.
 True Zero: There is no true zero with interval scales. However, data can be rescaled in a manner that contains zero. An interval scale measured from 1 to 9 remains the same as 11 to 19, because we added 10 to all values; similarly, a 1 to 9 interval scale is the same as a -4 to 4 scale, because we subtracted 5 from all values (see the demonstration below).
Appropriate statistics for interval scales: count, frequencies, mode, median, mean, standard deviation (and variance), skewness, and kurtosis.
Displays: histograms or bar charts, line charts, and scatter plots.
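A two-line Python demonstration of the rescaling point:

# Shifting every value on an interval scale preserves distances but not ratios.
a, b = 3, 6
print((b - a) == ((b + 10) - (a + 10)))   # True: differences are unchanged
print((b / a) == ((b + 10) / (a + 10)))   # False: ratios change with the shift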
1.4.2.4 Ratio Scales
Ratio scales are like interval scales but with a true zero.
They have the following characteristics:
 Order: The order of the responses or observations matters.
 Distance: Ratio scales do have an interpretable distance.
 True Zero: There is a true zero. Income is a classic example of a ratio scale:
 Order is established: we would all prefer $100 to $1!
 Zero dollars means we have no income (or, in accounting terms, our revenue exactly equals our expenses!).
 Distance is interpretable, in that $20 is twice $10 and $50 is half of $100.
For the web analyst, the statistics for ratio scales are the same as for interval scales.
Appropriate statistics for ratio scales: count, frequencies, mode, median, mean, standard
deviation (and variance), skewness, and kurtosis.
Displays: histograms or bar charts, line charts, and scatter plots.
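The appropriate statistics per scale can be illustrated with Python's standard library; the small data sets below are invented for the example:

from statistics import mean, median, mode, stdev

colors = ["red", "blue", "red", "green"]    # nominal: mode/counts only
ranks  = [1, 2, 2, 3, 5]                    # ordinal: adds median
likert = [1, 5, 7, 7, 9]                    # interval: adds mean, stdev
income = [30000, 45000, 45000, 120000]      # ratio: all of these, plus ratios

print(mode(colors))                  # 'red'
print(median(ranks))                 # 2
print(mean(likert), stdev(likert))   # 5.8 and the sample standard deviation
print(max(income) / min(income))     # 4.0: meaningful only on a ratio scale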
The table below summarizes the characteristics of all four types of scales.

Scale      Order matters   Distance interpretable   True zero
Nominal    No              No                       No
Ordinal    Yes             No                       No
Interval   Yes             Yes                      No
Ratio      Yes             Yes                      Yes
1.5. ANALYTIC PROCESSES AND TOOLS
• There are six phases in the analytic process:
1. Deployment
2. Business Understanding
3. Data Exploration
4. Data Preparation
5. Data Modeling
6. Data Evaluation

Step 1: Deployment
• Here we need to:
– plan the deployment, monitoring and maintenance, and
– produce a final report and review the project.
• In this phase,
– we deploy the results of the analysis.
– This is also known as reviewing the project.
Step 2: Business Understanding
• Business Understanding
– This phase consists of business understanding.
– Whenever any requirement occurs, firstly we need to determine the business
objective,
– assess the situation,
– determine data mining goals and then
– produce the project plan as per the requirement.
• Business objectives are defined in this phase.
Step 3: Data Exploration
• This step consists of data understanding.
– For the further process, we need to gather initial data, describe and explore the data, and verify data quality to ensure it contains the data we require.

– Data collected from the various sources is described in terms of its application and
the need for the project in this phase.
– This is also known as data exploration.
• This is necessary to verify the quality of data collected.
Data exploration helps identify:
 Patterns and relationships
 Anomalies(irregularities)
 Trends
 Errors or outliers
Step 4: Data Preparation
• From the data collected in the last step,
– we need to select data as per the need, clean it, and construct it to get useful information,
– and then integrate it all.
• Finally, we need to format the data to get the appropriate data.
• Data is selected, cleaned, and integrated into the format finalized for the analysis in this phase (a small sketch follows below).
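A minimal sketch of this phase with pandas, covering select, clean, integrate and format; the column names and values are invented:

import pandas as pd

sales = pd.DataFrame({"id": [1, 2, 3], "amount": ["10", "20", None]})
customers = pd.DataFrame({"id": [1, 2, 3], "region": ["N", "S", "N"]})

sales = sales.dropna(subset=["amount"])           # clean: drop incomplete rows
sales["amount"] = sales["amount"].astype(float)   # format: fix the data type
prepared = sales.merge(customers, on="id")        # integrate: join the sources
print(prepared)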
Step 5: Data Modeling
• We need to
– select a modeling technique, generate a test design, build a model and assess the model built.
• The data model is built to
– analyze relationships between the various selected objects in the data;
– test cases are built for assessing the model, and the model is tested and implemented on the data in this phase.
• Where processing is hosted?
– Distributed Servers / Cloud (e.g. Amazon EC2)
• Where data is stored?
– Distributed Storage (e.g. Amazon S3)
• What is the programming model?
– Distributed Processing (e.g. MapReduce)
• How data is stored & indexed?
– High-performance schema-free databases (e.g. MongoDB)
• What operations are performed on data?
– Analytic / Semantic Processing
Step 6: Data Evaluation
Here, we evaluate the results from the last step, review the scope of error, and determine the next steps to perform. We evaluate the results of the test cases and review the scope of errors in this phase.
Understanding these processes is essential, but the right tools can make or break the big data analytics journey: they simplify and enhance the whole process. The top tools that are shaping the world of big data analytics are:
1. Hadoop - an open source software framework that is a powerhouse for storing and processing large data sets across clusters of computers. It is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
2. Spark - an open source, distributed computing system that excels at real-time processing. It is lightning fast, can handle both batch and streaming workloads, and is compatible with Hadoop, making it a versatile tool in big data analytics.
3. Flink - a stream processing framework that provides high throughput, low latency and exactly-once semantics, making it ideal for event-driven applications. It is excellent for real-time analytics and complex event processing.
4. Hive - data warehouse software that facilitates reading, writing and managing large datasets residing in distributed storage. It is fantastic for ad hoc querying and analysis of structured and semi-structured data.
5. Tableau - business intelligence software that excels at data visualization. It helps turn raw data into easily understandable visuals, making the analysis process more intuitive and accessible.
1. Hadoop
Apache Hadoop is the most prominent and widely used tool in the big data industry, with its enormous capability for large-scale data processing. It is a 100% open source framework and runs on commodity hardware in an existing data center. Furthermore, it can run on cloud infrastructure.
Hadoop consists of four parts:
 Hadoop Distributed File System: commonly known as HDFS, it is a distributed file system designed for very high aggregate bandwidth.
 MapReduce: A programming model for processing big data.
 YARN: It is a platform used for managing and scheduling Hadoop’s resources in
Hadoop infrastructure.
 Libraries: To help other modules to work with Hadoop.
2. Apache Spark
Apache Spark is the next big thing in the industry among big data tools. The key point of this open source big data tool is that it fills the gaps of Apache Hadoop concerning data processing. Interestingly, Spark can handle both batch data and real-time data. As Spark does in-memory data processing, it processes data much faster than traditional disk processing. This is indeed a plus point for data analysts handling certain types of data to achieve a faster outcome.
Apache Spark is flexible to work with HDFS as well as with other data stores, for
example with OpenStack Swift or Apache Cassandra. It’s also quite easy to run Spark on a single
local system to make development and testing easier. Spark Core is the heart of the project,
and it facilitates many things like
 distributed task transmission
 scheduling
 I/O functionality
Spark is an alternative to Hadoop's MapReduce. Spark can run jobs up to 100 times faster than Hadoop's MapReduce (a minimal PySpark example follows below).
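A minimal PySpark word count, assuming a local Spark installation and a hypothetical input file input.txt:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("WordCount").getOrCreate()
lines = spark.sparkContext.textFile("input.txt")       # hypothetical input
counts = (lines.flatMap(lambda line: line.split())     # split lines into words
               .map(lambda word: (word, 1))            # emit (word, 1) pairs
               .reduceByKey(lambda a, b: a + b))       # sum counts per word
print(counts.take(5))                                  # peek at five results
spark.stop()

Note how the same map and reduce ideas as Hadoop MapReduce appear here, but as chained in-memory transformations rather than separate disk-bound jobs.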
3. Apache Storm
Apache Storm is a distributed real-time framework for reliably processing unbounded data streams. The framework supports any programming language. The unique features of Apache Storm are:
 Massive scalability
 Fault-tolerance

 “fail fast, auto restart” approach
 The guaranteed process of every tuple
 Written in Clojure
 Runs on the JVM
 Supports multiple languages
 Supports protocols like JSON
Storm topologies can be considered similar to a MapReduce job. However, in the case of Storm, it performs real-time stream data processing instead of batch data processing. Based on the topology configuration, the Storm scheduler distributes the workloads to nodes. Storm can interoperate with Hadoop's HDFS through adapters if needed, which is another point that makes it useful as an open source big data tool.
4. Cassandra
Apache Cassandra is a distributed database for managing large sets of data across servers. It is one of the best big data tools and mainly processes structured data sets. It provides a highly available service with no single point of failure. Additionally, it has certain capabilities which few other relational or NoSQL databases provide. These capabilities are:
 Continuous availability as a data source
 Linear scalable performance
 Simple operations
 Across the data centers easy distribution of data
 Cloud availability points
 Scalability
 Performance
The Apache Cassandra architecture does not follow a master-slave model; all nodes play the same role. It can handle numerous concurrent users across data centers. Hence, adding a new node to an existing cluster is straightforward, even while the cluster is up (the sketch below shows a minimal client session).
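A hedged sketch of a client session with the DataStax cassandra-driver package; the contact point, keyspace and table names are illustrative assumptions, and a running Cassandra node is required:

from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])     # any node can take requests: no master
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS shop
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS shop.orders (
        order_id int PRIMARY KEY, customer text, total double)
""")
session.execute(
    "INSERT INTO shop.orders (order_id, customer, total) VALUES (%s, %s, %s)",
    (1, "Priya", 99.5))

for row in session.execute("SELECT * FROM shop.orders"):
    print(row.order_id, row.customer, row.total)
cluster.shutdown()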
5. MongoDB
MongoDB is an open source NoSQL database which is cross-platform compatible, with many built-in features. It is ideal for businesses that need fast, real-time data for instant decisions, and for users who want data-driven experiences. It is used in the MEAN software stack, .NET applications and the Java platform.
Some notable features of MongoDB are:
 It can store any type of data like integer, string, array, object, Boolean, date etc.
 It provides flexibility in cloud-based infrastructure.
 It is flexible and easily partitions data across the servers in a cloud structure.
 MongoDB uses dynamic schemas. Hence, you can prepare data on the fly and quickly; this is another way of saving cost (see the sketch below).
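A minimal pymongo sketch of the dynamic-schema point; it assumes a MongoDB server on the default local port, and the database and collection names are invented:

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
events = client["analytics"]["events"]    # hypothetical db and collection

# Dynamic schema: documents in one collection need not share fields.
events.insert_one({"user": "u1", "action": "click", "value": 3})
events.insert_one({"user": "u2", "tags": ["mobile", "retail"]})

for doc in events.find({"user": "u1"}):
    print(doc)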
6. R Programming Tool
This is one of the most widely used open source big data tools in the big data industry for statistical analysis of data. The most positive part of this big data tool is that, although used for statistical analysis, as a user you don't have to be a statistical expert. R has its own public library, CRAN (Comprehensive R Archive Network), which consists of more than 9,000 packages of modules and algorithms for statistical analysis of data.
R can run on Windows and Linux servers, as well as inside SQL Server. It also supports Hadoop and Spark. Using the R tool one can work on discrete data and try out a new analytical algorithm for analysis. It is a portable language; hence, an R model built and tested on a local data source can easily be implemented on other servers or even against a Hadoop data lake.
1.6 ANALYSIS AND REPORTING
1.6.1 INTRODUCTION TO ANALYSIS AND REPORTING
What is Analysis?
• The process of exploring data and reports
– in order to extract meaningful insights,
– which can be used to better understand and improve business performance.
• What is Reporting ?
• Reporting is
– “the process of organizing data
– into informational summaries
– in order to monitor how different areas of a business are performing.”
1.6.2 COMPARING ANALYSIS WITH REPORTING
• Reporting is “the process of organizing data in to informational summaries in order to
monitor how different areas of a business are performing.”
• Measuring core metrics and presenting them — whether in an email, a slidedeck, or
online dashboard — falls under this category.
• Analytics is “the process of exploring data and reports in order to extract meaningful
insights, which can be used to better understand and improve business performance.”
• Reporting helps companies monitor their online business and be alerted when data falls outside of expected ranges.
• Good reporting
• should raise questions about the business from its end users.
• The goal of analysis is
• to answer questions by interpreting the data at a deeper level and providing
actionable recommendations.
• A firm may be focused on the general area of analytics (strategy, implementation,
reporting, etc.)
– but not necessarily on the specific aspect of analysis.
• It’s almost like some organizations run out of gas after the initial set-up-related
activities and don’t make it to the analysis stage

In short, a reporting activity naturally prompts an analysis activity.
