Blue Eyes Technology
During the functional design phase we used UML standard use case notation,
which shows the functions the system offers to particular users. Blue Eyes has three groups of
users: operators, supervisors and system administrators. An operator is a person whose
physiological parameters are supervised. The operator wears the DAU. The only functions
offered to this user are authorization in the system and receiving alarm alerts. Such limited
functionality ensures that the device does not disturb the operator's work (Fig. 2).
Receiving alerts – the function supplies the operator with information about the most
important alerts regarding his or his co-workers' condition and the mobile device state (e.g.
connection lost, battery low). Alarms are signaled using a beeper, an earphone providing
central system sound feedback and a small alphanumeric LCD display, which shows more
detailed information.
A supervisor is a person responsible for analyzing operators' condition and performance. The
supervisor is provided with tools for inspecting the present values of the parameters (On-line
browsing) as well as for browsing the results of long-term analysis (Off-line browsing).
During the on-line browsing it is possible to watch a list of currently working
operators and the status of their mobile devices. Selecting one of the operators enables the
supervisor to check the operator’s current physiological condition (e.g. a pie chart showing
active brain involvement) and a history of alarms regarding the operator. All new incoming
alerts are displayed immediately so that the supervisor is able to react quickly. However, the
presence of the human supervisor is not necessary since the system is equipped with
reasoning algorithms and can trigger user-defined actions (e.g. to inform the operator’s co-
workers).
During off-line browsing it is possible to reconstruct the course of the
operator’s duty with all the physiological parameters, audio and video data. A comprehensive
data analysis can be performed, enabling the supervisor to draw conclusions about the
operator's overall performance and competency (e.g. for working night shifts).
A system administrator is a user who maintains the system. The administrator is provided
with tools for adding new operators to the database, defining alarm conditions, configuring
logging tools and creating new analyzer modules.
While registering new operators the administrator enters appropriate data (and a photo
if available) to the system database and programs his personal ID card.
Defining alarm conditions – the function enables setting up user-defined alarm
conditions by writing condition-action rules (e.g. on low saccadic activity over an extended
period of time: inform the operator's co-workers, wake the operator using the beeper or an
appropriate sound, and log the event in the database).
Designing new analyzer modules – based on earlier recorded data, the
administrator can create a new analyzer module that recognizes behaviors other than those
built into the system. The new modules are created using a decision tree induction
algorithm. The administrator names the new behavior to be recognized and points to the data
associated with it. The results received from the new modules can be used in alarm
conditions.
Monitoring setup enables the administrator to choose the parameters to
monitor as well as the algorithms of the desired accuracy to compute parameter values.
Logger setup provides tools for selecting the parameters to be recorded. For
audio data the sampling frequency can be chosen. As regards the video signal, a delay
between storing consecutive frames can be set (e.g. one picture every two seconds).
Database maintenance – here the administrator can remove old or “uninteresting” data from
the database. The “uninteresting” data is suggested by the built-in reasoning system.
This section deals with the hardware part of the Blue Eyes system with regard to the
physiological data sensor, the DAU hardware components and the microcontroller software.
The DAU software is written in assembler code, which assures the highest program
efficiency and the lowest resource utilization. The DAU communicates with the Bluetooth
module using Host Controller Interface (HCI) commands.
The received data is stored in an internal buffer; after the whole frame is
completed, the DAU encapsulates the data in an ACL frame and sends it over the Bluetooth
link. (The fetching phase takes approx. 192 µs (24 frames × 8 µs) and the sending phase at
115200 bps takes approx. 2.8 ms, so the timing fits well within the 4 ms window.)
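As a rough sanity check, the timing budget above can be reproduced in a few lines of
arithmetic (a sketch only; the 40-byte frame size is an assumption chosen to reproduce the
quoted 2.8 ms figure, not a value taken from the original firmware):

    # Rough check of the DAU timing budget described above.
    FETCH_PER_FRAME_US = 8    # one sensor frame fetch: approx. 8 us
    FRAMES = 24               # frames buffered before sending
    UART_BPS = 115200         # Bluetooth module UART speed
    FRAME_BYTES = 40          # assumed ACL frame size in bytes

    fetch_us = FRAMES * FETCH_PER_FRAME_US          # 24 * 8 = 192 us
    send_ms = FRAME_BYTES * 8 / UART_BPS * 1000     # approx. 2.8 ms
    assert fetch_us / 1000 + send_ms < 4.0          # fits the 4 ms window
    print(f"fetch: {fetch_us} us, send: {send_ms:.1f} ms")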
The second group of events handled in the Data Processing state comprises
system messages and alerts. They are sent from the central system using the Bluetooth link.
Since this communication also uses the microcontroller's interrupt system, the events are
delivered instantly.
The remaining microcontroller time is spent refreshing the LCD display and
checking the state of the buttons, ID card presence and battery voltage level. Depending on
which button is pressed, appropriate actions are launched. In every state, removing the ID
card causes the device to enter the No ID card state, terminating all established connections.
In the DAU there are two independent data sources – the Jazz sensor and the Bluetooth
Host Controller. Since both are handled using the interrupt system, it is necessary to
decide which source should have the higher priority. Giving the sensor data the highest
priority may result in losing some of the data sent by the Bluetooth module, as the
transmission of the sensor data takes twice as much time as receiving one byte from the
UART. Missing a single byte sent from the Bluetooth module causes the loss of control over the
transmission. On the other hand, giving the Bluetooth the highest priority will make the DAU
stop receiving the sensor data until the Host Controller finishes its transmission. Central
system alerts are the only signals that can appear during sensor data fetching after all the
unimportant Bluetooth events have been masked out. The best solution would be to make the
central unit synchronize the alerts to be sent with the Bluetooth data reception. As the
delivered operating system is not a real-time system, the full synchronization is not possible.
As the Bluetooth module communicates asynchronously with the
microcontroller, there was a need to implement a cyclic serial port buffer featuring UART
CTS/RTS flow control and a producer-consumer synchronization mechanism.
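A minimal sketch of such a cyclic buffer is given below, in Python for readability (the
actual DAU firmware is written in assembler). The UART receive interrupt plays the
producer and the main loop the consumer; in the real device the RTS line would be dropped
before the buffer fills:

    # Sketch of a cyclic (ring) serial-port buffer with producer-consumer
    # roles; it only illustrates the head/tail logic behind the mechanism
    # described above, not the actual firmware.
    SIZE = 256
    buf = bytearray(SIZE)
    head = 0            # written by the UART receive interrupt (producer)
    tail = 0            # read by the main loop (consumer)

    def produce(byte):  # called from the UART receive interrupt
        global head
        nxt = (head + 1) % SIZE
        if nxt == tail:  # buffer full: RTS should already have stalled the sender
            raise OverflowError("cyclic buffer overflow")
        buf[head] = byte
        head = nxt

    def consume():      # called from the main loop
        global tail
        if tail == head:            # buffer empty
            return None
        byte = buf[tail]
        tail = (tail + 1) % SIZE
        return byte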
The module performs the analysis of the raw sensor data in order to obtain
information about the operator’s physiological condition. The separately running Data
Analysis Module supervises each of the working operators. The module consists of a number
of smaller analyzers extracting different types of information. Each of the analyzers registers
at the appropriate Operator Manager or another analyzer as a data consumer and, acting as a
producer, provides the results of the analysis. An analyzer can be either a simple signal filter
(e.g. a Finite Impulse Response (FIR) filter), a generic data extractor (e.g. signal variance,
saccade detector) or a custom detector module. As it is not possible to predict all the supervisors'
needs, the custom modules are created by applying a supervised machine learning algorithm
to a set of earlier recorded examples containing the characteristic features to be recognized. In
the prototype we used an improved C4.5 decision tree induction algorithm. The computed
features can be e.g. the operator’s position (standing, walking and lying) or whether his eyes
are closed or opened.
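The producer-consumer chaining of analyzers might be pictured as in the following sketch;
the class names are illustrative, not taken from the actual Blue Eyes code:

    # Sketch of analyzer chaining: each analyzer consumes a data stream
    # and republishes its results so that further analyzers (or the
    # logger) can register for them.
    class Analyzer:
        def __init__(self):
            self.consumers = []          # downstream analyzers / loggers

        def register(self, consumer):    # consumer: callable taking a sample
            self.consumers.append(consumer)

        def publish(self, result):
            for c in self.consumers:
                c(result)

    class VarianceAnalyzer(Analyzer):
        def __init__(self, window=50):
            super().__init__()
            self.window, self.buf = window, []

        def __call__(self, sample):      # acts as a consumer...
            self.buf.append(sample)
            if len(self.buf) == self.window:
                mean = sum(self.buf) / self.window
                var = sum((s - mean) ** 2 for s in self.buf) / self.window
                self.buf.clear()
                self.publish(var)        # ...and as a producer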
As built-in analyzer modules we implemented a saccade detector, visual
attention level, blood oxygenation and pulse rate analyzers.
The saccade detector registers as a consumer of the eye movement and accelerometer signal
variance data and uses the data to signal saccade occurrence. Since saccades are the
fastest eye movements, the algorithm calculates eye movement velocity and checks
physiological constraints. The algorithm has two main steps:
User adjustment step. This phase takes approx. 5 s. After buffering approx. 5 s of the signal,
differentiate it using the three-point central difference algorithm, which yields an eye
velocity time series. Sort the velocities by absolute value and calculate the border velocity
of the upper 15% along both the X (v0x) and Y (v0y) axes. The resulting v0x and v0y are
the cut-off velocities.
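A sketch of this adjustment step is given below, assuming the eye position signal is sampled
at 250 Hz (the rate quoted later for the sensor data; its use for the eye signal here is an
assumption):

    # Sketch of the user adjustment step: derive the cut-off velocity
    # from approx. 5 s of buffered eye position samples.
    import numpy as np

    def cutoff_velocity(pos, fs=250.0, upper_fraction=0.15):
        # pos: approx. 5 s of eye position samples (numpy array)
        # three-point central difference: v[i] = (pos[i+1] - pos[i-1]) / (2*dt)
        v = (pos[2:] - pos[:-2]) * (fs / 2.0)
        v_sorted = np.sort(np.abs(v))
        # border velocity separating the upper 15% of samples
        return v_sorted[int((1.0 - upper_fraction) * len(v_sorted))]

    # v0x, v0y = cutoff_velocity(x_positions), cutoff_velocity(y_positions)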
The pulse rate analyzer registers for the oxyhemoglobin and deoxyhemoglobin
level data streams. Since both signals contain a strong sinusoidal component related to the
heartbeat, the pulse rate can be calculated by measuring the time delay between subsequent
extremes of one of the signals. We decided not to restrict processing to a single fixed data
stream – the algorithm is designed to choose one of them dynamically on the grounds of the
signal level. Unfortunately, both signals are noisy, so they must be filtered before further
processing. We considered a number of different algorithms and decided to implement
average-value-based smoothing (a more detailed discussion is presented in section 3.3.5,
Tradeoffs and Optimization). The algorithm calculates an average signal value in a window of
100 samples. In every step the window is advanced by 5 samples in order to reduce CPU load.
This approach lowers the sampling rate from 250 Hz down to 50 Hz. However, since the
heartbeat frequency is at most 4 Hz, the Nyquist condition remains satisfied. The figures
show the signal before (Fig. 10a) and after filtering (Fig. 10b).
Figure 10: the signal (a) before and (b) after filtering.
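A minimal sketch of the smoothing step, with the window and step sizes given above:

    # Average-value smoothing: a 100-sample window advanced 5 samples per
    # step, decimating the 250 Hz stream to an effective 50 Hz rate.
    import numpy as np

    def smooth(signal, window=100, step=5):
        return np.array([signal[i:i + window].mean()
                         for i in range(0, len(signal) - window + 1, step)])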
After filtering the signal, the pulse calculation algorithm is applied. The
algorithm chooses a point to be the next maximum if it satisfies three conditions: the points
on its left and right have lower values, the previous extreme was a minimum, and the time
between the maxima is not too short (a physiological constraint). The new pulse value is
calculated from the distance between the new and the previous detected maximum. The
algorithm takes the last 5 calculated pulse values, discards the 2 extreme values and averages
the rest. Finally, it does the same with the minima of the signal to obtain a second pulse rate
value; averaging the two gives the final result.
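The peak-detection and averaging logic might look as follows; the 0.25 s minimum peak
spacing is an assumed physiological constraint (the text does not give the exact value), and
the 50 Hz rate comes from the smoothing step:

    # Sketch of the pulse calculation on the smoothed signal: detect
    # alternating extremes under a refractory constraint, convert peak
    # spacing to beats per minute, then take a trimmed mean of the last
    # five values. The same is done on the minima and the two results
    # are averaged for the final reading.
    def pulse_from_peaks(sig, fs=50.0, min_gap_s=0.25):
        pulses, last_max, prev_extreme = [], None, "min"
        for i in range(1, len(sig) - 1):
            if sig[i - 1] > sig[i] < sig[i + 1]:
                prev_extreme = "min"             # local minimum seen
                continue
            is_max = sig[i - 1] < sig[i] > sig[i + 1]
            if not is_max or prev_extreme != "min":
                continue                         # must alternate min/max
            if last_max is not None and (i - last_max) / fs < min_gap_s:
                continue                         # physiological constraint
            if last_max is not None:
                pulses.append(60.0 * fs / (i - last_max))   # BPM
            last_max, prev_extreme = i, "max"
        recent = sorted(pulses[-5:])
        if len(recent) < 5:
            return None
        return sum(recent[1:-1]) / 3.0           # drop 2 extremes, average 3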
Additionally, we implemented a simple module that calculates average blood
oxygenation level. Despite its simplicity the parameter is an important measure of the
operator’s physiological condition.
The other signal features that are not recognized by the built-in analyzers can
be extracted using custom modules created by the Decision Tree Induction module. The custom
module processes the generated decision tree, registers for needed data streams and produces
the desired output signal.
Decision Tree Induction module generates the decision trees, which are binary
trees with an attribute test in each node. The decision tree input data is an object described by
means of a set of attribute-value pairs. The algorithm is not able to process time series
directly. The attributes therefore are average signal value, signal variance and the strongest
sinusoidal components. As an output the decision tree returns the category the object belongs
to. In the Decision Tree Induction module we mainly use the C4.5 algorithm [2], but also
propose our own modifications. The algorithm performs supervised learning from examples,
i.e. it considers both the attributes that describe a case and the correct answer. The main idea
is to use a divide-and-conquer approach to split the initial set of examples into subsets using
a simple rule (i-th attribute less than a value). Each division is based on entropy calculation –
the split with the lowest entropy is chosen. Additionally, we propose many modifications
concerning some steps of the algorithm and further exploration of the system.
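The entropy-driven choice of splits can be sketched as below; this is a simplified
illustration of the C4.5-style criterion, ignoring gain ratio, pruning and the other
refinements of the full algorithm:

    # Choose the split "attribute attr < value" whose weighted child
    # entropy is lowest (equivalently, whose information gain is highest).
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

    def best_split(examples, labels, attr):
        # examples: attribute-value vectors; labels: correct categories
        best = None
        for value in sorted({e[attr] for e in examples}):
            left = [l for e, l in zip(examples, labels) if e[attr] < value]
            right = [l for e, l in zip(examples, labels) if e[attr] >= value]
            if not left or not right:
                continue
            h = (len(left) * entropy(left)
                 + len(right) * entropy(right)) / len(labels)
            if best is None or h < best[0]:
                best = (h, value)
        return best   # (weighted child entropy, threshold)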
For each case to be classified, C4.5 traverses the tree until reaching the leaf
where the appropriate category id is stored. To increase the hit ratio our system uses a more
advanced procedure. For a single analysis we develop a group of k trees (where k is a
parameter), which we call a decision forest. The initial example set S is divided randomly into
k+1 subsets S0 ... Sk. S0 is set aside to test the whole decision forest. Each tree is induced
using different modifications of the algorithm to provide independence of the results. The i-th
tree is taught using the set S \ S0 \ Si (S without the S0 and Si sets) and tested with Si, which
estimates the single-tree error ratio. Furthermore, we extract all wrongly classified examples
and calculate a correlation matrix between each pair of trees. In the exploration phase we use
an unequal voting rule – each tree's vote has the strength of its reliability. Additionally, if two
trees give the same answer, their vote is weakened by the correlation ratio.
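The unequal voting rule might be sketched as follows; the exact weakening formula is an
assumption, since the text only states that agreeing trees' votes are weakened by their
correlation ratio:

    # Weighted forest vote: each tree votes with its estimated
    # reliability, and agreement between correlated trees is penalized.
    def forest_vote(answers, reliability, correlation):
        # answers[i]: category from tree i
        # reliability[i]: 1 - error ratio of tree i (estimated on S_i)
        # correlation[i][j]: error-correlation ratio of trees i and j
        votes = {}
        for i, cat in enumerate(answers):
            w = reliability[i]
            for j, other in enumerate(answers):
                if j != i and other == cat:
                    w *= 1.0 - correlation[i][j]   # weaken correlated agreement
            votes[cat] = votes.get(cat, 0.0) + w
        return max(votes, key=votes.get)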
The Alarm Dispatcher Module is a very important part of the Data Analysis
module. It registers for the results of the data analysis, checks them against the user-defined
alarm conditions and launches appropriate actions when needed. The module is a
producer of the alarm messages, so that they are accessible in the logger and visualization
modules.
2.3.3. DATA LOGGER MODULE
The module provides support for storing the monitored data in order to enable
the supervisor to reconstruct and analyze the course of the operator’s duty. The module
registers as a consumer of the data to be stored in the database. Each working operator’s data
is recorded by a separate instance of the Data Logger. Apart from the raw or processed
physiological data, alerts and operator’s voice are stored. The raw data is supplied by the
related Operator Manager module, whereas the Data Analysis module delivers the processed
data. The voice data is delivered by a Voice Data Acquisition module. The module registers as
an operator’s voice data consumer and optionally processes the sound to be stored (i.e.
reduces noise or removes the fragments when the operator does not speak). The Logger’s task
is to add appropriate time stamps to enable the system to reconstruct the voice.
Additionally, there is a dedicated video data logger, which records the data
supplied by the Video Data Acquisition module (in the prototype we use JPEG compression).
The module is designed to handle one or more cameras using Video for Windows standard.
The Data Logger is able to use any ODBC-compliant database system. In the prototype we
used MS SQL Server, which is a part of the Project Kit.
The module provides a user interface for the supervisors. It enables them to
watch each working operator's physiological condition along with a preview of the selected
video source and the related sound stream. All the incoming alarm messages are
instantly signaled to the supervisor. Moreover, the visualization module can be set in the off-
line mode, where all the data is fetched from the database. Watching all the recorded
physiological parameters, alarms, video and audio data the supervisor is able to reconstruct
the course of the selected operator’s duty.
To test the possibilities and performance of the remaining parts of the Project
Kit (computer, camera and database software) Blue Capture (Fig. 12) was created. The
tool supports capturing video data from various sources (USB web-cam, industrial camera)
and storing the data in the MS SQL Server database. Additionally, the application performs
sound recording. After filtering and removing insignificant fragments (i.e. silence) the audio
data is stored in the database. Finally, the program plays the recorded audiovisual stream.
We used the software to measure database system performance and to optimize some of the
SQL queries (e.g. we replaced correlated SQL queries with cursor operations).
Figure 12: BlueCapture
We also introduced a simple tool for recording Jazz Multisensor measurements.
The program reads the data using a parallel port and writes it to a file. To program
the operator’s personal ID card we use a standard parallel port, as the EEPROMs and the port
are both TTL-compliant. A simple dialog-based application helps to accomplish the task.
This work explores a new direction in utilizing eye gaze for computer input. Gaze
tracking has long been considered as an alternative or potentially superior pointing method for
computer input. We believe that many fundamental limitations exist with traditional gaze
pointing. In particular, it is unnatural to overload a perceptual channel such as vision with a
motor control task. We therefore propose an alternative approach, dubbed MAGIC (Manual
And Gaze Input Cascaded) pointing. With such an approach, pointing appears to the user to be
a manual task, used for fine manipulation and selection. However, a large portion of the
cursor movement is eliminated by warping the cursor to the eye gaze area, which
encompasses the target. Two specific MAGIC pointing techniques, one conservative and one
liberal, were designed, analyzed, and implemented with an eye tracker we developed. They
were then tested in a pilot study. This early stage exploration showed that the MAGIC
pointing techniques might offer many advantages, including reduced physical effort and
fatigue as compared to traditional manual pointing, greater accuracy and naturalness than
traditional gaze pointing, and possibly faster speed than manual pointing. The pros and cons
of the two techniques are discussed in light of both performance data and subjective reports.
There are two fundamental shortcomings to the existing gaze pointing techniques,
regardless of the maturity of eye tracking technology. First, given the one-degree size of the
fovea and the subconscious jittery motions that the eyes constantly produce, eye gaze is not
precise enough to operate UI widgets such as scrollbars, hyperlinks, and slider handles on
today’s GUI interfaces. At a 25-inch viewing distance to the screen, one degree of arc
corresponds to 0.44 in, which is twice the size of a typical scroll bar and much greater than
the size of a typical character.
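As a quick check of this figure, the span subtended by one degree of visual angle at a
25-inch viewing distance can be computed directly:

    # Size on screen subtended by 1 degree of visual angle at 25 inches.
    import math

    viewing_distance_in = 25.0
    span_in = 2 * viewing_distance_in * math.tan(math.radians(1.0) / 2)
    print(f"1 degree at 25 in: {span_in:.2f} in")   # approx. 0.44 in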
Second, and perhaps more importantly, the eye, as one of our primary perceptual
devices, has not evolved to be a control organ. Sometimes its movements are voluntarily
controlled while at other times it is driven by external events. With the target selection by
dwell time method, considered more natural than selection by blinking, one has to be
conscious of where one looks and how long one looks at an object. If one does not look at a
target continuously for a set threshold (e.g., 200 ms), the target will not be successfully
selected. On the other hand, if one stares at an object for more than the set threshold, the
object will be selected, regardless of the user's intention. In some cases a false target
selection has no adverse effect. Other times it can be annoying and counter-productive (such
as an unintended jump to a web page). Furthermore, dwell time can only substitute for one
mouse click. There are often two steps to target activation. A single click selects the target
(e.g., an application icon) and a double click (or a different physical button click) opens the
icon (e.g., launches an application). To perform both steps with dwell time is even more
difficult. In short, to load the visual perception channel with a motor control task seems
fundamentally at odds with users’ natural mental model in which the eye searches for and
takes in information and the hand produces output that manipulates external objects. Other
than for disabled users, who have no alternative, using eye gaze for practical pointing does not
appear to be very promising.
Are there interaction techniques that utilize eye movement to assist the control task
but do not force the user to be overly conscious of his eye movement? We wanted to design a
technique in which pointing and selection remained primarily a manual control task but were
also aided by gaze tracking. Our key idea is to use gaze to dynamically redefine (warp) the
“home” position of the pointing cursor to be at the vicinity of the target, which was
presumably what the user was looking at, thereby effectively reducing the cursor movement
amplitude needed for target selection.
Once the cursor position had been redefined, the user would need to only make a small
movement to, and click on, the target with a regular manual input device. In other words, we
wanted to achieve Manual And Gaze Input Cascaded (MAGIC) pointing, or Manual
Acquisition with Gaze Initiated Cursor. There are many different ways of designing a MAGIC
pointing technique. Critical to its effectiveness is the identification of the target the user
intends to acquire. We have designed two MAGIC pointing techniques, one liberal and the
other conservative in terms of target identification and cursor placement. The liberal approach
is to warp the cursor to every new object the user looks at (See Figure 1).
The user can then take control of the cursor by hand near (or on) the target, or ignore
it and search for the next target. Operationally, a new object is defined by sufficient distance
(e.g., 120 pixels) from the current cursor position, unless the cursor is in a controlled motion
by hand. Since there is a 120-pixel threshold, the cursor will not be warped when the user
does continuous manipulation such as drawing. Note that this MAGIC pointing technique is
different from traditional eye gaze control, where the user uses his eye to point at targets
either without a cursor or with a cursor that constantly follows the jittery eye gaze motion.
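The liberal warping rule can be sketched in a few lines; the hand_active flag stands for the
"controlled motion by hand" condition and is an illustrative name, not taken from the paper:

    # Liberal MAGIC warping: move the cursor to the gaze position only
    # when the gaze is sufficiently far (e.g. 120 pixels) from the
    # current cursor position and no manual motion is in progress.
    import math

    WARP_THRESHOLD_PX = 120     # distance defining a "new object"

    def maybe_warp(cursor, gaze, hand_active):
        # cursor, gaze: (x, y) screen coordinates
        if hand_active:         # never warp during controlled manual motion
            return cursor
        if math.dist(cursor, gaze) > WARP_THRESHOLD_PX:
            return gaze         # warp to the vicinity of the fixated object
        return cursor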
The liberal approach may appear “pro-active,” since the cursor waits readily in the
vicinity of or on every potential target. The user may move the cursor once he decides to
acquire the target he is looking at. On the other hand, the user may also feel that the cursor is
over-active when he is merely looking at a target, although he may gradually adapt to ignore
this behavior. The more conservative MAGIC pointing technique we have explored does not
warp a cursor to a target until the manual input device has been actuated. Once the manual
input device has been actuated, the cursor is warped to the gaze area reported by the eye
tracker. This area should be on or in the vicinity of the target. The user would then steer the
cursor manually towards the target to complete the target acquisition. As illustrated in Figure
2, to minimize directional uncertainty after the cursor appears in the conservative technique,
we introduced an "intelligent" bias. Instead of being placed at the center of the gaze area, the
cursor position is offset to the intersection of the manual actuation vector and the boundary of
the gaze area. This means that once warped, the cursor is likely to appear in motion towards
the target, regardless of how the user actually actuated the manual input device. We hoped
that with the intelligent bias the user would not have to actuate the input device, observe the
cursor position and then decide in which direction to steer the cursor. The cost of this method
is the increased manual movement amplitude.

Figure 1: The liberal MAGIC pointing technique – the cursor is placed in the vicinity of a
target that the user fixates on.
Figure 2: The conservative MAGIC pointing technique with "intelligent offset".

To initiate a pointing trial, there are two strategies available to the user. One is to follow
"virtual inertia": move from the cursor's current position towards the new target the user is
looking at. This is likely the strategy the user will employ, due to the way the user interacts
with today's interfaces. The alternative strategy, which may be more advantageous but takes
time to learn, is to ignore the previous cursor position and make a motion which is most
convenient and least effortful to the user for a given input device.
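The "intelligent offset" geometry can be sketched as follows, under the simplifying
assumption that the actuation vector passes through the center of the (roughly circular)
95%-confidence gaze area:

    # Conservative MAGIC warping with intelligent offset: place the
    # cursor on the boundary of the gaze area, on the side the hand is
    # already moving from, so the warped cursor appears in motion
    # towards the target.
    import math

    def warped_cursor(gaze_center, gaze_radius, actuation_vector):
        gx, gy = gaze_center
        dx, dy = actuation_vector            # initial manual motion direction
        norm = math.hypot(dx, dy) or 1.0
        return (gx - gaze_radius * dx / norm,
                gy - gaze_radius * dy / norm)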
The goal of the conservative MAGIC pointing method is the following. Once the user looks at
a target and moves the input device, the cursor will appear “out of the blue” in motion
towards the target, on the side of the target opposite to the initial actuation vector. In
comparison to the liberal approach, this conservative approach has both pros and cons. While
with this technique the cursor would never be over-active and jump to a place the user does
not intend to acquire, it may require more hand-eye coordination effort. Both the liberal and
the conservative MAGIC pointing techniques offer the following potential advantages:
1. Reduction of manual stress and fatigue, since the cross screen long-distance cursor
movement is eliminated from manual control.
2. Practical accuracy level. In comparison to traditional pure gaze pointing whose accuracy is
fundamentally limited by the nature of eye movement, the MAGIC pointing techniques let the
hand complete the pointing task, so they can be as accurate as any other manual input
techniques.
3. A more natural mental model for the user. The user does not have to be aware of the role of
the eye gaze. To the user, pointing continues to be a manual task, with a cursor conveniently
appearing where it needs to be.
4. Speed. Since the need for large magnitude pointing operations is less than with pure manual
cursor control, it is possible that MAGIC pointing will be faster than pure manual pointing.
5. Improved subjective speed and ease-of-use. Since the manual pointing amplitude is smaller,
the user may perceive the MAGIC pointing system to operate faster and more pleasantly than
pure manual control, even if it operates at the same speed or more slowly.
The fourth point warrants further discussion. According to the well-accepted
Fitts' law, manual pointing time is logarithmically proportional to the A/W ratio,
where A is the movement distance and W is the target size. In other words, targets that are
smaller or farther away take longer to acquire.
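Using the Shannon formulation of Fitts' law, T = a + b * log2(A/W + 1), the effect of
shrinking A can be illustrated directly (the constants a and b are device-dependent; the
values below are purely illustrative):

    # Fitts' law sketch: warping shrinks the movement amplitude A while
    # the target width W stays the same, lowering the predicted time.
    import math

    def fitts_time(A, W, a=0.1, b=0.15):
        return a + b * math.log2(A / W + 1)   # seconds

    # 20-pixel target: full-screen move vs. a post-warp 120-pixel move
    print(fitts_time(A=800, W=20))   # approx. 0.90 s
    print(fitts_time(A=120, W=20))   # approx. 0.52 s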
For MAGIC pointing, since the target size remains the same but the cursor
movement distance is shortened, the pointing time can hence be reduced. It is less clear if eye
gaze control follows Fitts’ Law. In Ware and Mikaelian’s study, selection time was shown to
be logarithmically proportional to target distance, thereby conforming to Fitts’ Law. To the
contrary, Sibert and Jacob [9] found that trial completion time with eye tracking input
increases little with distance, therefore defying Fitts’ Law. In addition to problems with
today’s eye tracking systems, such as delay, error, and inconvenience, there may also be many
potential human factor disadvantages to the MAGIC pointing techniques, including the
following:
1. With the more liberal MAGIC pointing technique, the cursor warping can be overactive at
times, since the cursor moves to the new gaze location whenever the eye gaze moves more
than a set distance (e.g., 120 pixels) away from the cursor. This could be particularly
distracting when the user is trying to read. It is possible to introduce additional constraints
according to the context. For example, when the user's eye appears to follow a text reading
pattern, MAGIC pointing can be automatically suppressed.
2. With the more conservative MAGIC pointing technique, the uncertainty of the exact
location at which the cursor might appear may force the user, especially a novice, to adopt a
cumbersome strategy: take a touch (use the manual input device to activate the cursor), wait
(for the cursor to appear), and move (the cursor to the target manually). Such a strategy may
prolong the target acquisition time. The user may have to learn a novel hand-eye coordination
pattern to be efficient with this technique.
3. With pure manual pointing techniques, the user, knowing the current cursor location, could
conceivably perform his motor acts in parallel to visual search. Motor action may start as soon
as the user’s gaze settles on a target. With MAGIC pointing techniques, the motor action
computation (decision) cannot start until the cursor appears. This may negate the time saving
gained from the MAGIC pointing technique’s reduction of movement amplitude. Clearly,
experimental (implementation and empirical) work is needed to validate, refine, or invent
alternative MAGIC pointing techniques.
4. CONCLUSION
The Blue Eyes system was developed to address the need for a real-time
monitoring system for a human operator. The approach is innovative, since it helps supervise
the operator rather than the process, as presently available solutions do. We hope the system
in its commercial release will help avoid potential threats resulting from human error, such as
weariness, oversight, tiredness or temporary indisposition. The prototype developed is a good
estimation of the possibilities of the final product. The use of a miniature CMOS
camera integrated into the eye movement sensor will enable the system to calculate the point
of gaze and observe what the operator is actually looking at. Introducing a voice recognition
algorithm will facilitate the communication between the operator and the central system and
simplify the authorization process.
Although this report considers only operators working in control rooms,
our solution may well be applied to everyday life situations. Assuming the operator is a driver
and the supervised process is car driving, it is possible to build a simpler embedded on-line
system which will only monitor conscious brain involvement and warn when necessary. As in
this case the logging module is redundant, and the Bluetooth technology is becoming more
and more popular, the commercial implementation of such a system would be relatively
inexpensive.
Finally, a word of explanation about the name of the system. Blue Eyes emphasizes
the foundations of the project – Bluetooth technology and the movements of the eyes.
Bluetooth provides reliable wireless communication, whereas the eye movements enable us to
obtain a lot of interesting and important information.