
Design and Implementation of Low-Cost
Hemoglobin Level Prediction Using
Machine Learning
ABSTRACT:
There is worldwide demand for an affordable hemoglobin
measurement solution, a particularly urgent need in developing
countries. The smartphone, the most widely penetrated device in
both rich and resource-constrained areas, is a suitable platform
on which to build such a solution. This project proposes a
noninvasive hemoglobin level measurement process. It also
compares the variation in data collection sites, biosignal
processing techniques, theoretical foundations,
photoplethysmogram (PPG) signal and feature extraction
processes, machine-learning algorithms, and prediction models
used to calculate hemoglobin levels. This analysis was then used
to recommend realistic approaches to building a smartphone-based
point-of-care tool for noninvasive hemoglobin measurement.
Keywords: photoplethysmogram, machine-learning algorithms,
hemoglobin, biosignal processing, prediction
INTRODUCTION:
Blood is the essential life-maintaining fluid that circulates
throughout the body and is responsible for carrying the heat,
hormones, antibodies, immune cells, etc. necessary for every
cell. Everything related to the life cycle depends on blood
directly or indirectly. As a result, the measurement and analysis
of blood component levels can indicate the possibility of
numerous diseases. An elevated glucose level in the blood
indicates the possibility of diabetes. Diabetes is the most
common chronic metabolic disorder and a leading cause of death
worldwide. Patients with long-standing diabetes can be affected
by several complications, such as heart disease, kidney damage,
and blindness. Complications that occur due to diabetes can be
minimized by regular monitoring of the blood glucose level.
Similarly, hemoglobin is the protein molecule in red blood cells,
and a low hemoglobin count is the cause of anemia. Continuous
measurement of the blood hemoglobin level is therefore essential
for the treatment of anemic patients, premature babies, and
dengue fever. Likewise, an elevated level of creatinine indicates
weakened kidneys. So, for evaluating one's medical condition, it
is imperative to measure blood component levels regularly.

Blood component measurement techniques can be grouped into
invasive, minimally invasive, and noninvasive. Conventional
measurement techniques require a blood sample from the patient,
collected using a finger prick or venepuncture. These methods
are painful, inconvenient, and costly for the patient due to
frequent blood collection, and do not allow real-time
monitoring. Invasive techniques are more precise and reliable
but need a well-resourced laboratory with trained personnel,
both of which are often inaccessible in remote areas. On the
contrary, a non-invasive technique needs only a bio-signal
(image or spectrum) to estimate blood component levels.
Non-invasive technology can overcome the above shortcomings and
has become a popular topic in smart health care research.
Several techniques already exist for this purpose, but they have
limitations such as poor portability, relatively high cost, and
poor penetration of light. This motivated us to develop a system
that can measure blood component levels conveniently and
painlessly. Currently, near-infrared (NIR) spectroscopy and
photoplethysmography (PPG) are the most widely used approaches
for measuring blood component levels non-invasively. PPG is an
optical measurement technique that has been applied to measure
changes in blood volume in certain parts of the body; it
reflects the movement of blood from the heart to the fingertip.
A PPG system consists of a light source and a photodetector,
where the light source illuminates the tissue region (e.g., a
finger) and the detector senses the reflected light. The amount
of light absorbed varies periodically with the variations of
blood volume in the circulatory system and can be used to
acquire the PPG signal. Much research has been done based on the
PPG signal to monitor different physiological parameters because
of its simple, low-cost, and comfortable setup: for example,
heart-rate monitoring, anemia detection, heart-rate validation,
blood pressure estimation, and blood glucose level estimation.
Conventionally, PPG signals are acquired using optical
techniques such as sensor-based devices, chips, or pulse
oximeters. Recently, several smartphones have included built-in
sensor systems for instantaneous measurement of heart rate and
oxygen saturation based on PPG signals. These non-invasive
techniques are useful not only for patients who require regular
health monitoring but also for healthcare professionals. Several
studies in the literature concern the non-invasive measurement
of different blood component levels based on smartphones and RGB
analysis. Among these, Wang et al. developed a smartphone-based
application called HemaApp using a Nexus 5 to predict the
hemoglobin concentration in blood in 2016. In 2017, the same
authors used a Nexus 6P and updated the hardware configuration
for the same purpose. In 2018, Hasan et al. developed a
smartphone application called SmartHeLP using a Nexus 4 to
measure the hemoglobin value. In 2019, Zhang et al. proposed a
non-invasive technique to estimate the blood glucose level based
on the smartphone PPG signal. A few PPG signal acquisition
methods have been described in the literature, but most of them
use professional equipment. Therefore, we aim to use a
smartphone camera and an external lighting source to capture a
video of the fingertip from which the PPG signal can be
extracted.
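As a sketch of the idea, the per-frame mean of the red channel already yields a usable PPG waveform. The snippet below simulates a fingertip video as uniformly lit frames (a stand-in for real camera data, not this project's acquisition code) and recovers the pulse rate from the dominant frequency of the extracted signal.

```python
import numpy as np

def extract_ppg(frames):
    """Crude PPG: mean red-channel intensity per frame, mean-removed."""
    signal = np.array([frame[..., 0].mean() for frame in frames])
    return signal - signal.mean()

# Simulate 150 frames (5 s at 30 fps) whose brightness pulses at 1.2 Hz (72 bpm).
fps, seconds, pulse_hz = 30, 5, 1.2
t = np.arange(fps * seconds) / fps
brightness = 120 + 5 * np.sin(2 * np.pi * pulse_hz * t)
frames = [np.full((64, 64, 3), b) for b in brightness]

ppg = extract_ppg(frames)
# Estimate heart rate from the dominant frequency of the PPG spectrum.
freqs = np.fft.rfftfreq(len(ppg), d=1 / fps)
dominant_hz = freqs[np.abs(np.fft.rfft(ppg)).argmax()]
print(f"Estimated heart rate: {dominant_hz * 60:.0f} bpm")  # 72 bpm
```

On real fingertip video the raw signal would additionally need detrending and band-pass filtering before feature extraction.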
LITERATURE SURVEY:
1. TITLE: A Novel Technique for Non-Invasive Measurement of
Human Blood Component Levels From Fingertip Video Using DNN
Based Model.
AUTHOR NAME: M. Taslim Uddin Raju / 2021
DESCRIPTION: Measuring blood components such as hemoglobin,
glucose, and creatinine is essential for monitoring one's health
condition. Current blood component measurement approaches still
depend on invasive techniques that are painful and uncomfortable
for patients. To facilitate measurement at home, we proposed a
novel non-invasive technique to measure blood hemoglobin,
glucose, and creatinine levels based on the PPG signal using
Deep Neural Networks (DNN). Fingertip videos from 93 subjects
were collected using a smartphone. The PPG signal is generated
from each video, and 46 characteristic features are then
extracted from the PPG signal, its derivatives (1st and 2nd),
and from Fourier analysis. Additionally, age and gender are
included as features because of their significant effects on
hemoglobin, glucose, and creatinine. A correlation-based feature
selection (CFS) using a genetic algorithm (GA) was used to
select the optimal features and avoid redundancy and
over-fitting.
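The CFS criterion can be illustrated without the genetic-algorithm search: the sketch below scores a feature subset by its merit (high feature-target correlation, low inter-feature correlation) and uses a simple greedy forward search as a stand-in for the GA; the data, feature names, and search strategy are synthetic illustrations, not the paper's implementation.

```python
import numpy as np

def cfs_merit(X, y, subset):
    """CFS merit: k*r_cf / sqrt(k + k(k-1)*r_ff), where r_cf is the mean
    |feature-target| correlation and r_ff the mean |feature-feature| one."""
    k = len(subset)
    r_cf = np.mean([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([abs(np.corrcoef(X[:, a], X[:, b])[0, 1])
                    for i, a in enumerate(subset) for b in subset[i + 1:]])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def greedy_cfs(X, y, max_features=3):
    """Greedy forward search (a simple stand-in for the paper's GA search)."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < max_features:
        best_score, best_j = max((cfs_merit(X, y, selected + [j]), j)
                                 for j in remaining)
        if selected and best_score <= cfs_merit(X, y, selected):
            break  # no subset extension improves the merit
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

rng = np.random.default_rng(0)
n = 200
f0 = rng.normal(size=n)               # informative feature
f1 = f0 + 0.01 * rng.normal(size=n)   # redundant near-copy of f0
f2 = rng.normal(size=n)               # second informative feature
f3 = rng.normal(size=n)               # pure noise
X = np.column_stack([f0, f1, f2, f3])
y = f0 + f2
print(greedy_cfs(X, y))
```

The search keeps one of the redundant pair plus the independent informative feature, and drops the noise column, which is exactly the redundancy-avoidance behavior the description mentions.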
2. TITLE: Prediction of anemia and estimation of
hemoglobin concentration using a smartphone camera.
AUTHOR NAME: James Rayner, Alison B. Chambers,
Janette Baird / 2021
DESCRIPTION:
Anemia, defined as a low hemoglobin
concentration, has a large impact on the health of the world’s
population. We describe the use of a ubiquitous device, the
smartphone, to predict hemoglobin concentration and screen for
anemia. This was a prospective convenience sample study
conducted in Emergency Department (ED) patients of an
academic teaching hospital. In an algorithm derivation phase,
images of both conjunctiva were obtained from 142 patients in
Phase 1 using a smartphone. A region of interest targeting the
palpebral conjunctiva was selected from each image. Image-
based parameters were extracted and used in stepwise regression
analyses to develop a prediction model of estimated hemoglobin
(HBc). In Phase 2, a validation model was constructed using
data from 202 new ED patients. The final model based on all
344 patients was tested for accuracy in anemia and transfusion
thresholds. Hemoglobin concentration ranged from 4.7 to 19.6
g/dL (mean 12.5). In Phase 1, there was a significant association
between HBc and laboratory-predicted hemoglobin (HBl) slope
= 1.07 (CI = 0.98–1.15), p<0.001. Accuracy, sensitivity, and
specificity of HBc for predicting anemia were 82.9 [79.3, 86.4],
90.7 [87.0, 94.4], and 73.3 [67.1, 79.5], respectively. In Phase 2,
accuracy, sensitivity and specificity decreased to 72.6 [71.4,
73.8], 72.8 [71, 74.6], and 72.5 [70.8, 74.1]. Accuracy for low
(<7 g/dL) and high (<9 g/dL) transfusion thresholds was 94.4
[93.7, 95] and 86 [85, 86.9] respectively.

3. TITLE: Machine learning in predicting hemoglobin
variants.
AUTHOR NAME: Monalisha Saikia Borah, Dr. Mauchumi
Saikia Pathak / 2018
DESCRIPTION:
Disease diagnosis is of the utmost importance
in providing appropriate medical treatment. Genetic diseases,
such as hemoglobinopathies and thalassemia, need to be
diagnosed accurately and on time. Though Hb variants are
diagnosed using an HPLC-based hemoglobin typing machine,
appropriate interpretation of the data obtained is still
necessary, and this requires trained professionals. Machine learning helps
to interpret the obtained data and in predicting the type of Hb
variants, thus reducing the workload of health professionals. In
this study, the obtained data are classified using the following
classifiers, namely logistic regression, support vector classifier
(SVC), k-nearest neighbor (KNN), Gaussian naïve bayes,
perceptron classifier, linear SVC, stochastic gradient descent,
decision tree, random forest, and multi-layer perceptron. The
pre-processing, visualization and the classification steps were
implemented using Python 2.7 on an Intel Core i5 computer.
The performance of each classifier was then tested by initially
creating a confusion matrix. Indices including “precision,”
“recall,” and “f1-score” were used to quantify the quality of
each model. KNN, decision tree, and random forest show better
classification results in comparison to the other classifiers. With
a precision of 93.89%, recall of 92.78%, and f1-score of
93.33%, the decision tree and random forest classifiers prove to
be better classifiers in predicting the Hb variants with a higher
accuracy rate.
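The evaluation pipeline this study describes can be illustrated with a hand-rolled k-nearest-neighbor classifier and precision/recall/f1 computed from confusion counts; the two-class synthetic clusters below stand in for the authors' HPLC dataset, which is not reproduced here.

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """Classify each test point by majority vote among its k nearest neighbors."""
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)

def precision_recall_f1(y_true, y_pred, positive=1):
    """Quality indices from confusion counts for the positive class."""
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Two well-separated synthetic clusters standing in for two Hb variant classes.
rng = np.random.default_rng(1)
X0 = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
X1 = rng.normal(loc=3.0, scale=0.5, size=(50, 2))
X_train = np.vstack([X0[:40], X1[:40]])
y_train = np.array([0] * 40 + [1] * 40)
X_test = np.vstack([X0[40:], X1[40:]])
y_test = np.array([0] * 10 + [1] * 10)

y_pred = knn_predict(X_train, y_train, X_test)
print(precision_recall_f1(y_test, y_pred))
```

On real, overlapping variant data the scores would fall below 1.0, which is where the comparison of classifiers in the study becomes meaningful.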
4. TITLE: A noninvasive, image-based smartphone app for
diagnosing anemia.
AUTHOR NAME: Robert G. Mannino / 2018
DESCRIPTION:
Smartphone-based telehealth is steadily
transforming the delivery of medical care worldwide, moving
diagnosis of disease from the clinic to the home to potentially
anywhere in the globe. Smartphone images alone have recently
been used by physicians to remotely diagnose a myriad of
diseases. However, smartphone telehealth approaches have yet
to non-invasively replace blood-based testing, which remains a
major cornerstone of disease diagnosis in modern medicine.
While the addition of specialized smartphone attachments and
supplemental calibration tools may enable point-of-care
diagnosis and analysis of tissue and bodily fluid samples, the
additional burden of blood and/or tissue sample collections
combined with the additional cost and inconvenience associated
with this equipment, prevents worldwide use of these potentially
disruptive approaches. Therefore, a smartphone-based system,
requiring nothing other than the smartphone's native technology
and capable of non-invasively replacing blood-based
diagnostics, would transform the very nature of telehealth and
the delivery of healthcare worldwide. Towards that end, I
specifically focused on anemia, a potentially life-threatening
disorder characterized by low blood hemoglobin (Hgb) levels
that affects approximately 2 billion people worldwide. Despite
the high prevalence of anemia, all existing diagnostic
approaches to measure Hgb require specialized equipment and
represent tradeoffs between invasiveness, accuracy,
infrastructure needs, and expense. Aside from being cost-
prohibitive, the necessary invasive blood sampling to measure
Hgb levels causes discomfort and trauma in younger pediatric
patients.
5. TITLE: Analyzing the existing noninvasive hemoglobin
measurement techniques.
AUTHOR NAME: M. K. Hasan, N. Sakib, R. R. Love, and S. I.
Ahamed.
DESCRIPTION:
To meet potential research challenges such as cost,
accuracy, and portability, a number of studies have been
conducted around the world in the domain of noninvasive
measurement techniques. In this paper, we explored, analyzed,
and critiqued concurrent research approaches that endeavored to
meet the research challenges in noninvasive hemoglobin (Hb)
measurement. Here, besides the conventional invasive Hb
measurement methods, we covered the Hb measurement
techniques based on various scientific contributions and
advancements. We studied smartphone-image, IR-LED, and
spectra-based solutions as well as noninvasive point-of-care
(POC) devices. From that insight, in this paper, we delineated
the novel smartphone-based approaches to measure hemoglobin
level noninvasively that can leverage the computation,
portability, and affordability of smartphone devices, and assure
the desired accuracy and consistency. Apart from that, this paper
describes how these novel proposals can overcome the
limitations of the data collection and analysis in the state-of-the-
art approaches in this domain.
EXISTING SYSTEM:
The existing system is a non-invasive hemoglobin, glucose, and
creatinine measurement system based on the PPG signal obtained
from fingertip video using a DNN model. A correlation-based
feature selection method using a genetic algorithm is used to
select appropriate features. Finally, DNN-based models are
developed, and the 10-fold cross-validation method is applied to
validate the models.
PROPOSED SYSTEM:
We propose to design an Hb level measurement system. Hb
measurement is a blood diagnosis process used to determine the
concentration of Hb in the blood. Clinicians measure Hb in
several ways, although the invasive (blood sample collection)
approach remains the most common. Invasive processes involve the
addition of various chemicals to a blood sample, after which
optical variations are calculated using spectroscopic data to
measure the Hb level. By contrast, a noninvasive (without blood
sample collection) approach uses data obtained from image
sensors, spectroscopic information, and the output of a
photoplethysmographic (PPG) sensor to calculate the Hb level. In
addition, a minimally invasive process requires only a couple of
drops of blood to calculate Hb, collecting image- and
spectra-based information from the blood sample for an
estimation. Such minimally invasive techniques are comparatively
less painful and have fewer complications in collecting sample
data.
BLOCK DIAGRAM:
1D CNN:
Human activity recognition is the problem of classifying
sequences of accelerometer data recorded by specialized
harnesses or smartphones into known, well-defined movements.
Classical approaches to the problem involve hand-crafting
features from the time series data based on fixed-sized windows
and training machine learning models, such as ensembles of
decision trees. The difficulty is that this feature engineering
requires deep expertise in the field.
Recently, deep learning methods such as recurrent neural
networks and one-dimensional convolutional neural networks,
or CNNs, have been shown to provide state-of-the-art results on
challenging activity recognition tasks with little or no
feature engineering, instead using feature learning on raw data.
In this tutorial, you will discover how to develop
one-dimensional convolutional neural networks for time series
classification on the problem of human activity recognition.
After completing this tutorial, you will know:
 How to load and prepare the data for a standard human activity
recognition dataset and develop a single 1D CNN model that
achieves excellent performance on the raw data.
 How to further tune the performance of the model, including
data transformation, filter maps, and kernel sizes.
 How to develop a sophisticated multi-headed one-dimensional
convolutional neural network model that provides an ensemble-
like result.
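The core operation of a 1D CNN layer, sliding a kernel along a signal window and applying a nonlinearity, can be shown in a few lines of NumPy; the kernel here is a fixed smoothing filter for illustration, whereas a real CNN learns its kernel weights during training.

```python
import numpy as np

def conv1d(signal, kernel, stride=1):
    """Valid 1D convolution (cross-correlation, as CNN layers compute it)."""
    out_len = (len(signal) - len(kernel)) // stride + 1
    return np.array([np.dot(signal[i * stride:i * stride + len(kernel)], kernel)
                     for i in range(out_len)])

def relu(x):
    """Rectified linear activation, applied elementwise."""
    return np.maximum(0, x)

# A 3-tap averaging kernel over a 10-sample window, the operation a 1D CNN
# layer would apply to each channel of an accelerometer (or PPG) window.
window = np.array([0., 1., 2., 3., 4., 5., 4., 3., 2., 1.])
kernel = np.array([1 / 3, 1 / 3, 1 / 3])
feature_map = relu(conv1d(window, kernel))
print(feature_map.shape)  # (8,)
```

A full layer stacks many such kernels to produce multiple feature maps, which pooling layers then downsample before classification.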
MODULES:
 COLOUR IMAGE COLLECTION
 IMAGE FILTER
 SEGMENTATION
 CLASSIFY THE BLOOD DATA
COLOUR IMAGE COLLECTION:

IMAGE FILTER:
Typically an image can be represented with
the help of three colour components. Images generated by the
digital microscope are usually in RGB colour space, which is
visually difficult to segment. For better colour-based
segmentation we map the RGB image to the L*a*b* (LAB) colour
space. The LAB space consists of a luminosity layer L* and
chromaticity layers a* and b*. Since all the colour information
is in the chromaticity layers, we use these two components for
colour-based red blood cell segmentation.
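A self-contained sketch of the RGB-to-LAB mapping described above, implementing the standard sRGB → XYZ → L*a*b* conversion (D65 white point) in NumPy rather than calling an image-processing library:

```python
import numpy as np

def rgb_to_lab(rgb):
    """Convert an sRGB image (floats in [0, 1]) to CIE L*a*b* (D65 white)."""
    # Linearize sRGB (inverse gamma).
    rgb = np.where(rgb > 0.04045, ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    # Linear sRGB -> XYZ matrix for the D65 illuminant.
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ m.T
    # Normalize by the D65 reference white, then apply the Lab companding.
    xyz /= np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

# Pure red should land at strongly positive a* (the red-green chromaticity
# axis used to separate red blood cells from background).
red = np.array([[[1.0, 0.0, 0.0]]])
print(rgb_to_lab(red))  # approximately [53.2, 80.1, 67.2]
```

Clustering the a* and b* channels (e.g. with k-means) then yields the colour-based segmentation the text describes.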
SEGMENTATION:
Segmentation refers to the process of
partitioning a digital image into multiple segments (sets of
pixels, also known as superpixels). The goal of segmentation is
to simplify and/or change the representation of an image into
something that is more meaningful and easier to analyze. Image
segmentation is typically used to locate objects and boundaries
(lines, curves, etc.) in images. More precisely, image
segmentation is the process of assigning a label to every pixel in
an image such that pixels with the same label share certain
visual characteristics. The anemia blood smear will be
segmented by appropriate segmentation methods.
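As one concrete segmentation method that suits bimodal smear images, the sketch below implements Otsu's global threshold, which picks the gray level maximizing the between-class variance; the choice of Otsu here is illustrative, since the text leaves the exact segmentation method open.

```python
import numpy as np

def otsu_threshold(gray):
    """Return the gray level that maximizes between-class variance (Otsu)."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    total = gray.size
    total_sum = float(np.dot(np.arange(256), hist))
    w0, sum0 = 0, 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += hist[t]          # pixels at or below threshold t
        if w0 == 0:
            continue
        w1 = total - w0        # pixels above threshold t
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0, mu1 = sum0 / w0, (total_sum - sum0) / w1
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

# A toy bimodal "smear": dark cells (value 30) on a bright background (200).
img = np.concatenate([np.full(100, 30), np.full(100, 200)])
t = otsu_threshold(img)
mask = img > t  # foreground/background split
print(t, mask.sum())
```

On real smears, morphological cleanup (opening, hole filling) would follow the thresholding step before cells are counted.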
CLASSIFY THE BLOOD DATA:
The various features generated in the previous step are now used
for classifying the different kinds of red blood cells (RBCs)
present in the blood smear image. After applying a proper
grouping technique, we find that the form factor values
extracted from the geometrical features are of particular
interest. For different values of form factor, cells are sorted
as normal cells, sickle cells, ovalocytes, and
anisopoikilocytes. Their numbers and the total number of blood
cells present in the smear are also calculated.
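The form-factor rule above can be sketched as follows; the circularity formula is standard (4πA/P², equal to 1 for a perfect circle), but the class cutoffs below are hypothetical placeholders that would need calibration on annotated smears.

```python
import math

def form_factor(area, perimeter):
    """Form factor (circularity): 4*pi*A / P^2; exactly 1.0 for a circle."""
    return 4 * math.pi * area / perimeter ** 2

def classify_cell(ff):
    # Hypothetical thresholds for illustration only; real cutoffs must be
    # calibrated against annotated blood-smear data.
    if ff >= 0.85:
        return "normal"
    elif ff >= 0.60:
        return "ovalocyte"
    else:
        return "sickle/anisopoikilocyte"

# A circle of radius 10: area = pi*r^2, perimeter = 2*pi*r -> form factor 1.0.
ff = form_factor(math.pi * 100, 2 * math.pi * 10)
print(classify_cell(ff))  # normal
```

Tallying the labels over all segmented cells gives the per-class counts and the total cell count mentioned above.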
SYSTEM REQUIREMENTS:

HARDWARE REQUIREMENTS:
 System : Pentium IV 2.4 GHz
 Hard Disk : 40 GB
 Floppy Drive : 1.44 MB
 Monitor : 15" VGA Colour
 Mouse : Logitech
 RAM : 512 MB

SOFTWARE REQUIREMENTS:
 Operating System : Windows XP
 Coding Language : PYTHON
 Coding Tool : ANACONDA
SOFTWARE ENVIRONMENT:

Python is a general-purpose interpreted, interactive,
object-oriented, high-level programming language. It was created
by Guido van Rossum during 1985-1990. Like Perl, Python source
code is also available under the GNU General Public License
(GPL). This tutorial gives enough understanding of the Python
programming language.

Python is a must for students and working professionals who want
to become great software engineers, especially when they are
working in the web development domain. Some of the key
advantages of learning Python are listed below:

 Python is Interpreted − Python is processed at runtime
by the interpreter. You do not need to compile your
program before executing it. This is similar to PERL and
PHP.
 Python is Interactive − You can actually sit at a Python
prompt and interact with the interpreter directly to write
your programs.
 Python is Object-Oriented − Python supports Object-
Oriented style or technique of programming that
encapsulates code within objects.
 Python is a Beginner's Language − Python is a great
language for the beginner-level programmers and supports
the development of a wide range of applications from
simple text processing to WWW browsers to games.

Characteristics of Python
Following are important characteristics of Python
Programming −

 It supports functional and structured programming
methods as well as OOP.
 It can be used as a scripting language or can be compiled
to byte-code for building large applications.
 It provides very high-level dynamic data types and
supports dynamic type checking.
 It supports automatic garbage collection.
 It can be easily integrated with C, C++, COM, ActiveX,
CORBA, and Java.
Hello World using Python
Just to give you a little excitement about Python, here is a
small conventional Python Hello World program.
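A minimal version of that program:

```python
# The classic first program: print a greeting to standard output.
message = "Hello, World!"
print(message)
```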


Python is a high-level, interpreted, interactive and
object-oriented scripting language. Python is designed to be
highly readable. It uses English keywords frequently whereas
other languages use punctuation, and it has fewer syntactical
constructions than other languages.


History of Python
Python was developed by Guido van Rossum in the late eighties
and early nineties at the National Research Institute for
Mathematics and Computer Science in the Netherlands.
Python is derived from many other languages, including ABC,
Modula-3, C, C++, Algol-68, SmallTalk, and Unix shell and
other scripting languages.
Python is copyrighted. Like Perl, Python source code is now
available under the GNU General Public License (GPL).
Python is now maintained by a core development team at the
institute, although Guido van Rossum still holds a vital role in
directing its progress.
Python Features

Python's features include −

 Easy-to-learn − Python has few keywords, simple
structure, and a clearly defined syntax. This allows the
student to pick up the language quickly.
 Easy-to-read − Python code is more clearly defined and
visible to the eyes.
 Easy-to-maintain − Python's source code is fairly easy-to-
maintain.
 A broad standard library − Python's bulk of the library
is very portable and cross-platform compatible on UNIX,
Windows, and Macintosh.
 Interactive Mode − Python has support for an interactive
mode which allows interactive testing and debugging of
snippets of code.
 Portable − Python can run on a wide variety of hardware
platforms and has the same interface on all platforms.
 Extendable − You can add low-level modules to the
Python interpreter. These modules enable programmers to
add to or customize their tools to be more efficient.
 Databases − Python provides interfaces to all major
commercial databases.
 GUI Programming − Python supports GUI applications
that can be created and ported to many system calls,
libraries and windows systems, such as Windows MFC,
Macintosh, and the X Window system of Unix.
 Scalable − Python provides a better structure and support
for large programs than shell scripting.

Variables are nothing but reserved memory locations to store
values. This means that when you create a variable you reserve
some space in memory. Based on the data type of a variable, the
interpreter allocates memory and decides what can be stored in
the reserved memory. Therefore, by assigning different data
types to variables, you can store integers, decimals or
characters in these variables.

Assigning Values to Variables
Python variables do not need explicit declaration to reserve
memory space. The declaration happens automatically when you
assign a value to a variable. The equal sign (=) is used to
assign values to variables. The operand to the left of the =
operator is the name of the variable, and the operand to the
right of the = operator is the value stored in the variable.
For example −
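A small illustrative snippet of such assignments:

```python
counter = 100      # an integer assignment
miles = 1000.0     # a floating-point assignment
name = "John"      # a string assignment
print(counter, miles, name)

# Reassignment is allowed, and the type may change dynamically.
counter = "one hundred"
print(type(counter).__name__)  # str
```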
A module allows you to logically organize your Python code.
Grouping related code into a module makes the code easier to
understand and use. A module is a Python object with arbitrarily
named attributes that you can bind and reference. Simply put, a
module is a file consisting of Python code. A module can define
functions, classes and variables. A module can also include
runnable code.
Example
The Python code for a module named aname normally resides in a
file named aname.py. Here's an example of a simple module,
support.py.
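A minimal support.py might contain a single function; the snippet below is self-contained, writing the module to a temporary directory and importing it so the example runs as-is (the module and function names follow the classic tutorial example).

```python
# Contents one would put in support.py:
#     def print_func(par):
#         print("Hello :", par)
# Any script on the same path can then do:
#     import support
#     support.print_func("Zara")

# Self-contained demonstration: create the module on disk, then import it.
import importlib
import pathlib
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "support.py").write_text(
        'def print_func(par):\n    print("Hello :", par)\n')
    sys.path.insert(0, d)
    support = importlib.import_module("support")
    support.print_func("Zara")  # prints: Hello : Zara
    sys.path.remove(d)
```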

Python has been an object-oriented language since it existed.
Because of this, creating and using classes and objects are
downright easy. This chapter helps you become an expert in
using Python's object-oriented programming support.

INPUT DESIGN:
The input design is the link between the information system and
the user. It comprises developing the specification and
procedures for data preparation, the steps necessary to put
transaction data into a usable form for processing. This can be
achieved by programming the computer to read data from a written
or printed document, or by having people key the data directly
into the system. The design of input focuses on controlling the
amount of input required, controlling errors, avoiding delay,
avoiding extra steps, and keeping the process simple. The input
is designed in such a way that it provides security and ease of
use while retaining privacy. Input design considered the
following things:
 What data should be given as input?
 How should the data be arranged or coded?
 The dialog to guide the operating personnel in providing
input.
 Methods for preparing input validations and steps to follow
when errors occur.
OBJECTIVES:
1. Input design is the process of converting a user-oriented
description of the input into a computer-based system. This
design is important to avoid errors in the data input process
and to show the correct direction to the management for getting
correct information from the computerized system.
2. It is achieved by creating user-friendly screens for data
entry to handle large volumes of data. The goal of designing
input is to make data entry easier and free from errors. The
data entry screen is designed in such a way that all data
manipulations can be performed. It also provides record viewing
facilities.
3. When the data is entered, it is checked for validity. Data
can be entered with the help of screens. Appropriate messages
are provided as and when needed so that the user is not caught
in a maze of instructions. Thus the objective of input design is
to create an input layout that is easy to follow.
OUTPUT DESIGN
A quality output is one which meets the requirements of the end
user and presents the information clearly. In any system,
results of processing are communicated to the users and to other
systems through outputs. In output design it is determined how
the information is to be displayed for immediate need, as well
as the hard copy output. It is the most important and direct
source of information to the user. Efficient and intelligent
output design improves the system's relationship with the user
and helps user decision-making.
1. Designing computer output should proceed in an organized,
well-thought-out manner; the right output must be developed
while ensuring that each output element is designed so that
people will find the system easy and effective to use. When
analysts design computer output, they should identify the
specific output that is needed to meet the requirements.
2. Select methods for presenting information.
3. Create documents, reports, or other formats that contain
information produced by the system.
The output form of an information system should accomplish one
or more of the following objectives:
 Convey information about past activities, current status, or
projections of the future.
 Signal important events, opportunities, problems, or
warnings.
 Trigger an action.
 Confirm an action.
SYSTEM STUDY:
FEASIBILITY STUDY
The feasibility of the project is analyzed in this phase, and a
business proposal is put forth with a very general plan for the
project and some cost estimates. During system analysis, the
feasibility study of the proposed system is to be carried out.
This is to ensure that the proposed system is not a burden to
the company. For feasibility analysis, some understanding of the
major requirements for the system is essential.
Three key considerations involved in the feasibility analysis
are:
 ECONOMICAL FEASIBILITY
 TECHNICAL FEASIBILITY
 SOCIAL FEASIBILITY
ECONOMICAL FEASIBILITY
This study is carried out to check the economic impact that the
system will have on the organization. The amount of funds that
the company can pour into the research and development of the
system is limited. The expenditures must be justified. The
developed system is well within the budget, and this was
achieved because most of the technologies used are freely
available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility,
that is, the technical requirements of the system. Any system
developed must not place a high demand on the available
technical resources, as this would lead to high demands being
placed on the client. The developed system must have modest
requirements, as only minimal or no changes are required for
implementing this system.
SOCIAL FEASIBILITY
This aspect of the study is to check the level of acceptance of
the system by the user. This includes the process of training
the user to use the system efficiently. The user must not feel
threatened by the system, but must instead accept it as a
necessity. The level of acceptance by the users solely depends
on the methods that are employed to educate the user about the
system and to make him familiar with it. His level of confidence
must be raised so that he is also able to make some constructive
criticism, which is welcomed, as he is the final user of the
system.
SYSTEM TESTING
The purpose of testing is to discover errors. Testing is the
process of trying to discover every conceivable fault or
weakness in a work product. It provides a way to check the
functionality of components, subassemblies, assemblies, and/or
a finished product. It is the process of exercising software with
the intent of ensuring that the software system meets its
requirements and user expectations and does not fail in an
unacceptable manner. There are various types of tests, and each
test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate
that the internal program logic is functioning properly, and that
program inputs produce valid outputs. All decision branches and
internal code flow should be validated. It is the testing of
individual software units of the application, done after the
completion of an individual unit and before integration. This is
structural testing that relies on knowledge of the unit's
construction and is invasive. Unit tests perform basic tests at the
component level and test a specific business process, application,
and/or system configuration. Unit tests ensure that each unique
path of a business process performs accurately to the documented
specifications and contains clearly defined inputs and expected
results.
Integration testing
Integration tests are designed to test integrated
software components to determine whether they actually run as
one program. Testing is event driven and is more concerned with
the basic outcome of screens or fields. Integration tests
demonstrate that although the components were individually
satisfactory, as shown by successful unit testing, the
combination of components is correct and consistent. Integration
testing is specifically aimed at exposing the problems that arise
from the combination of components.
Functional test
Functional tests provide systematic demonstrations that
functions tested are available as specified by the business and
technical requirements, system documentation, and user
manuals.
Functional testing is centered on the following items:
Valid Input: identified classes of valid input must be accepted.
Invalid Input: identified classes of invalid input must be rejected.
Functions: identified functions must be exercised.
Output: identified classes of application outputs must be exercised.
Systems/Procedures: interfacing systems or procedures must be invoked.
Organization and preparation of functional tests is focused
on requirements, key functions, or special test cases. In addition,
systematic coverage pertaining to identifying business process
flows, data fields, predefined processes, and successive
processes must be considered for testing. Before functional
testing is complete, additional tests are identified and the
effective value of current tests is determined.
System Test
System testing ensures that the entire integrated software
system meets requirements. It tests a configuration to ensure
known and predictable results. An example of system testing is
the configuration oriented system integration test. System
testing is based on process descriptions and flows, emphasizing
pre-driven process links and integration points.
White Box Testing
White box testing is testing in which the software tester
has knowledge of the inner workings, structure, and language of
the software, or at least its purpose. It is used to test areas that
cannot be reached from a black box level.
Black Box Testing
Black box testing is testing the software without any
knowledge of the inner workings, structure, or language of the
module being tested. Black box tests, like most other kinds of
tests, must be written from a definitive source document, such as
a specification or requirements document. It is testing in which
the software under test is treated as a black box: you cannot
"see" into it. The test provides inputs and responds to outputs
without considering how the software works.
6.1 Unit Testing:
Unit testing is usually conducted as part of a combined
code and unit test phase of the software lifecycle, although it is
not uncommon for coding and unit testing to be conducted as
two distinct phases.
Test strategy and approach
Field testing will be performed manually and functional
tests will be written in detail.
Test objectives
• All field entries must work properly.
• Pages must be activated from the identified link.
• The entry screen, messages, and responses must not be
delayed.
Features to be tested
• Verify that the entries are of the correct format.
• No duplicate entries should be allowed.
• All links should take the user to the correct page.
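As a minimal sketch, a unit test for a single component of this kind of system might validate the extracted feature vector before it reaches the prediction model. The helper below is a hypothetical stand-in, not the project's actual code; the feature count of 46 follows the project description.

```python
import unittest

def is_valid_feature_vector(features, expected_length=46):
    """Hypothetical validator: a feature vector must have the expected
    length and contain only finite numeric values (NaN is rejected)."""
    if len(features) != expected_length:
        return False
    # x == x is False only for NaN, so this filters non-finite garbage.
    return all(isinstance(x, (int, float)) and x == x for x in features)

class FeatureVectorTests(unittest.TestCase):
    def test_correct_length_accepted(self):
        self.assertTrue(is_valid_feature_vector([0.5] * 46))

    def test_wrong_length_rejected(self):
        self.assertFalse(is_valid_feature_vector([0.5] * 10))

    def test_nan_rejected(self):
        self.assertFalse(is_valid_feature_vector([float("nan")] * 46))
```

Such tests can be run with `python -m unittest` against the module containing them.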
6.2 Integration Testing
Software integration testing is the incremental
integration testing of two or more integrated software
components on a single platform, intended to expose failures
caused by interface defects.
The task of the integration test is to check that components
or software applications (e.g., components in a software system
or, one step up, software applications at the company level)
interact without error.
Test Results: All the test cases mentioned above passed
successfully. No defects encountered.
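An integration check for a pipeline like this one can chain the stages end to end and assert that the combined result is plausible. All function bodies below are illustrative stand-ins for the project's actual video, PPG, and prediction modules, wired together only to show the shape of such a test.

```python
def extract_ppg(frames):
    # Stand-in: the mean intensity of each frame acts as one PPG sample.
    return [sum(f) / len(f) for f in frames]

def extract_features(ppg):
    # Stand-in: a fixed-length summary of the signal.
    return [min(ppg), max(ppg), sum(ppg) / len(ppg)]

def predict_hb(features):
    # Stand-in model: a constant offset plus a scaled mean intensity.
    return 10.0 + 0.01 * features[2]

def run_pipeline(frames):
    # Integration point: each stage's output feeds the next stage's input.
    return predict_hb(extract_features(extract_ppg(frames)))

# The stages interact without error and the result falls in a
# physiologically plausible hemoglobin range (g/dL).
frames = [[100 + i % 5] * 10 for i in range(60)]
assert 5.0 <= run_pipeline(frames) <= 20.0
```

The value of the check lies in exercising the interfaces between stages, not in the stub arithmetic itself.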
6.3 Acceptance Testing
User Acceptance Testing is a critical phase of any project
and requires significant participation by the end user. It also
ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed
successfully. No defects encountered.
CONCLUSION:
The proposed system provides a good basis for observing
hemoglobin, glucose, and creatinine in real time from home.
First, fingertip video is recorded using a smartphone,
illuminating the finger with an 850 nm NIR LED. Then, PPG
signals are generated from the video. From the PPG signal, its
derivatives (1st and 2nd), and Fourier analysis, 46 characteristic
features are extracted. A correlation-based feature selection
method is used to select appropriate features.
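The correlation-based selection step can be sketched as follows. This is an illustrative implementation, not the project's actual algorithm or thresholds: features are ranked by absolute Pearson correlation with the target, and a candidate is dropped when it is nearly redundant with an already-selected feature.

```python
import numpy as np

def correlation_based_selection(X, y, target_min=0.3, redundancy_max=0.9):
    """Keep features well correlated with the target, dropping any that
    are highly correlated with a feature already selected.
    Thresholds here are illustrative, not the project's values."""
    n_features = X.shape[1]
    # Absolute Pearson correlation of each feature with the target.
    target_corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                            for j in range(n_features)])
    selected = []
    # Consider features in decreasing order of target correlation.
    for j in np.argsort(-target_corr):
        if target_corr[j] < target_min:
            break  # remaining features are even weaker
        redundant = any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) > redundancy_max
                        for k in selected)
        if not redundant:
            selected.append(int(j))
    return selected

# Synthetic demonstration: feature 0 tracks the target, feature 1 nearly
# duplicates feature 0 (redundant), feature 2 is pure noise.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = np.column_stack([y + 0.1 * rng.normal(size=200),
                     y + 0.1 * rng.normal(size=200),
                     rng.normal(size=200)])
print(correlation_based_selection(X, y))
```

On the synthetic data only one of the two duplicated features survives and the noise feature is discarded, which is exactly the behavior the selection step relies on when pruning the 46 PPG features.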