
SBC Journal on 3D Interactive Systems, volume 4, number 1, 2013

A Survey of Interactive Systems based on Brain-Computer Interfaces

Alessandro L. Stamatto Ferreira, Leonardo Cunha de Miranda, Erica E. Cunha de Miranda, Sarah Gomes Sakamoto
Department of Informatics and Applied Mathematics
Federal University of Rio Grande do Norte (UFRN)
Natal, RN, Brazil
alexmatto@ppgsc.ufrn.br, leonardo@dimap.ufrn.br, erica@dimap.ufrn.br, sarahsakamoto@ppgsc.ufrn.br

Abstract—Brain-Computer Interface (BCI) enables users to interact with a computer only through their brain biological signals, without the need to use muscles. BCI is an emerging research area, but it is still relatively immature. However, it is important to reflect on the different aspects of the Human-Computer Interaction (HCI) area related to BCIs, considering that BCIs will be part of interactive systems in the near future. BCIs serve not only handicapped users but also healthy ones, improving interaction for end-users. Virtual Reality (VR) is also an important part of interactive systems, and combined with BCI it could greatly enhance user interaction, improving the user experience by using brain signals as input and immersive environments as output. This paper addresses only noninvasive BCIs, since this kind of capture is the only one that presents no risk to human health. As contributions of this work we highlight the survey of interactive systems based on BCIs, focusing on HCI and VR applications, and a discussion of the challenges and future of this subject matter.

Keywords—Brain-Computer Interface; BCI; Headset; EEG; Human-Computer Interaction; HCI.

I. INTRODUCTION

The evolution of technology brings significant changes to the way users interact with systems. With the ever-increasing usage of tablets and smartphones, interaction between users and applications increasingly takes place through smaller displays and touchscreens, while modern controllers such as the WiiMote and Kinect highlight the need to adjust interaction to users' physical movements in their context of use, so as to support appropriate utilization of systems. Therefore, the design and development of interactive systems should follow new technological trends in order to provide a better user experience, increasing productivity and offering intuitive actions for the execution of different tasks.

With technological advancements, different kinds of interaction that use our bodies have emerged, enabling the use of various body parts other than our hands. For example, Harrison et al. [10] demonstrated the possibility of using human skin as a touch interface, Nam et al. [24] presented a wheelchair controlled by tongue movements, and Liu et al. [17] proposed an eye-tracking system along with several examples of eye tracking for human-computer interaction. Vernon and Joshi [31] even propose using a muscle above the ear – one that lost its function over the course of human evolution – to control a television. However, it is possible to go further and use a part of the body's central axis that is already present in all forms of human interaction: the brain.

Brain-computer interfaces have been studied since the mid-1970s in diverse areas of knowledge such as neuroscience, biomedicine, automation and control engineering, and computer science, but only recently have the cost and accuracy required for everyday use been achieved. People with severe motor impairments are the main beneficiaries of brain-computer interface research, such as persons with locked-in syndrome, a rare condition characterized by paralysis of all voluntary muscles except those of the eyes. Nevertheless, people without any disability are also potential users of solutions that promote interaction between humans and computers through cerebral signals in the most natural way possible.

However, the interactive aspects of BCIs remain poorly explored by researchers, probably due to the intrinsic complexity of the areas involved in this research topic. New studies are emerging, guided by computing areas such as Human-Computer Interaction (HCI) and Virtual Reality (VR). A prime example is the work by Solovey et al. [30], which describes the use of brain-computer interaction in a multimodal interface. Friedman et al. [49] likewise present a brain-computer interface combined with virtual reality. There is therefore a strong need for a detailed study that clearly and objectively identifies the current limitations of brain-computer interfaces from an interactive perspective.

Millán et al. [22] present a brain-computer interface review focused on motor substitution with neuroprostheses and recovery through neurorehabilitation. The authors discuss brain-computer interface applications in an assistive technology context, such as using sensors in a wheelchair for better brain-computer interface control. Lotte et al. [50] review and explore brain-computer interface works that use VR, focusing on the design of BCI-based VR applications. In this paper, we discuss brain-computer interface solutions from an interactive perspective, considering issues such as the evaluation of users' cognitive workload and the lack of freedom regarding visual attention. Furthermore, our work focuses on HCI and VR aspects, while taking into consideration

ISSN: 2236-3297

both healthy and impaired users. Afterwards, we identify and discuss several challenges in this context.

This paper is organized as follows: Section II contextualizes the brain-computer interface area, introducing basic concepts and technologies; Section III presents a survey of brain-computer interfaces; Section IV presents several challenges related to brain-computer interfaces; Section V discusses this research topic; and Section VI concludes the paper.

II. BRAIN-COMPUTER INTERFACE

Brain-Computer Interface (BCI) is a mode of interaction between human beings and computers that does not use any muscle, since the system is controlled through the user's mental activity, captured with specific equipment. According to Wolpaw et al. [34], a BCI is a communication system with two adaptive components that mutually complement each other. For these authors, at the current stage of the technology, users must adapt to the BCI in order to control the system, just as the system must adapt itself to the user's mental signals. Hence, the user must understand the system and the system must adjust itself to the user; both are required for a BCI to succeed.

A BCI requires the reception of brain signals captured directly from the human brain. There are three different ways to capture these signals: (i) invasive, (ii) partially invasive, and (iii) noninvasive. Invasive capture is characterized by the introduction of implants into the user's encephalic mass, directly into the gray matter, providing high-quality signal readings; however, it causes great inconvenience and risks to human health. In partially invasive capture, implants are placed beneath the skull without penetrating the brain. Despite its lower-quality signals, this form of capture presents lower health risks than the invasive approach. Lastly, noninvasive capture enables gathering information without any implant, since the sensors are placed on the scalp, fully external to the body. Noninvasive BCIs are more convenient and easier to use and, due to the technological advancement of current solutions, provide good-quality signal capture. It is also the only approach that presents no risk to users' health. For this reason, this paper focuses only on noninvasive BCIs.

The three most common techniques to obtain cerebral information are: (i) electroencephalography (EEG), (ii) functional magnetic resonance imaging (fMRI), and (iii) functional near-infrared spectroscopy (fNIRS). With EEG, brain activity is captured through sensors called electrodes. This is possible because neurons communicate with each other via electrical signals, which eventually reach the brain surface and are then captured by the electrodes. The fMRI technique measures brain activity through blood oxygenation and flow, which increase in the specific area involved in a mental process. This capture technique requires equipment of considerable dimensions and a scanner with a large magnetic field. The fNIRS method also measures brain activity through blood oxygenation and flow, but it is based on identifying variations of optical properties in brain images. Near-infrared light is sent into the user's forehead and, through light detectors, the reflected rays are picked up and correlated to specific concentrations of oxygen.

Fig. 1 shows different equipment used to gather cerebral information in a noninvasive way with the above-mentioned techniques, i.e. EEG electrodes capturing electrical signals (Fig. 1a), an fMRI scanner with magnetic resonance imaging (Fig. 1b), and spectroscopic sensors with near-infrared radiation (Fig. 1c). Nowadays, only EEG and fNIRS make it possible to gather cerebral information in real usage scenarios, due to their relatively low cost and portability. Moreover, EEG has the best temporal resolution, which means that it captures signals faster than the others, and hence this method is the most used in BCIs.

Fig. 1. Noninvasive equipment used to capture cerebral information: (a) EEG electrodes, (b) fMRI scanner, (c) spectroscopic sensors. Sources: [1],1,2.

There are different capture devices, which vary greatly in shape and may be a cap, tiara, headband, helmet, or even loose electrodes. In this paper we unify all these terms into a single one: headset. Hence, we consider a headset to be a set of sensors placed on the user's head. For marketing purposes, companies have been developing more portable headsets with attractive designs at lower costs. These devices aim to provide greater comfort compared with the equipment shown in Fig. 1. In 2009, NeuroSky3 launched the MindSet, a wireless headset with a single EEG electrode, capable of measuring user concentration. The company has other headsets available, such as the MindWave, launched in 2011. Also in 2009, another company, Emotiv4, launched the EPOC: a wireless headset in a tiara format. The EPOC has 14 EEG electrodes and a gyroscope, which measures head movements. BCI research with fNIRS in real usage scenarios usually uses sensors covered with a headband in order to maximize comfort. Fig. 2 shows some BCI headsets, i.e. the NeuroSky MindSet (Fig. 2a), the Emotiv EPOC (Fig. 2b), and fNIRS sensors covered with a headband (Fig. 2c).

Fig. 2. BCI headsets: (a) NeuroSky MindSet, (b) Emotiv EPOC, (c) fNIRS sensors covered with a headband. Sources: [7],[1],[29].

1 http://blogs.oem.indiana.edu/scholarships/index.php/2009/10/26/neurons-and-electrodes/fmri_groot/.
2 http://www.spiegel.de/fotostrecke/fotostrecke-13782-3.html.
3 http://www.neurosky.com.
4 http://www.emotiv.com.

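As an illustration of the kind of signal processing the EEG technique implies (this sketch is ours, not drawn from any system in this survey), the code below estimates the spectral power of a sampled signal in two classical EEG frequency bands using a plain discrete Fourier transform. The sampling rate, band limits, and the synthetic "EEG" signal are all assumptions made for the example.

```python
import math

def band_power(samples, fs, f_lo, f_hi):
    """Power of `samples` (sampled at fs Hz) within [f_lo, f_hi] Hz,
    estimated with a plain discrete Fourier transform."""
    n = len(samples)
    total = 0.0
    for k in range(1, n // 2):              # skip DC, positive frequencies only
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            re = sum(s * math.cos(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
            im = sum(s * math.sin(-2 * math.pi * k * i / n) for i, s in enumerate(samples))
            total += (re * re + im * im) / n
    return total

fs = 128                                     # assumed consumer-headset sampling rate
t = [i / fs for i in range(fs * 2)]          # two seconds of synthetic "EEG"
# A strong 10 Hz (alpha-like) component plus a weaker 20 Hz (beta-like) one.
signal = [1.0 * math.sin(2 * math.pi * 10 * x) + 0.3 * math.sin(2 * math.pi * 20 * x) for x in t]

alpha = band_power(signal, fs, 8, 13)        # alpha band, roughly 8-13 Hz
beta = band_power(signal, fs, 13, 30)        # beta band, roughly 13-30 Hz
print(alpha > beta)                          # relaxed-looking signal -> True
```

A concentration-style BCI would compare such band powers over a sliding window to estimate attention versus relaxation; real pipelines use an FFT and artifact filtering, but the quantity computed is the same.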

A. Thoughts Recognition

BCIs require recognizing a thought or mental activity in order to trigger an action. An ideal scenario would be to think about turning a lamp on and have the BCI system recognize this thought and turn the lamp on automatically. Currently, recognizing a specific thought such as "turn lamp on" is still very difficult. However, three mental activities can be recognized with reasonable precision and are commonly used in BCI applications: (i) concentration, in which Alpha and Beta waves are used to estimate the user's attention and relaxation/meditation; (ii) stimulus response, in which brain responses are detected when the user focuses on certain flashing graphic elements (visual stimuli) and/or special sound patterns (auditory stimuli); and (iii) imagined movement, in which it is possible to detect kinetic thoughts, such as imagining one's right hand opening and closing, due to the synchronization and desynchronization of the Mu rhythm.

The detection of a stimulus response (ii) is subdivided into two types: oscillating stimuli and transient stimuli. Oscillating stimuli are differentiated by frequency, as with LEDs where each one flashes – oscillates – at a different frequency, inducing a natural response from the brain and generating electrical activity at the same frequency as the stimulus or at a multiple of it. Another example of an oscillating stimulus is a pair of sounds at different frequencies, each generating a specific response when the user focuses on it. The response to a visual oscillating stimulus is called a Steady State Visually-Evoked Potential (SSVEP), and the response to an auditory oscillating stimulus is called a Steady State Auditory Evoked Potential (SSAEP). Transient stimuli are differentiated by the response to a transition from one visual/auditory state to another. When an individual waits for a certain stimulus among other similar stimuli, a wave called the P300 is generated as a response. An example would be five squares, off most of the time, each of which turns on for a short time and turns off again. The user concentrates on one square and, when that square turns on, a P300 wave is generated due to the small "surprise" caused by the transition from off to on. Likewise, it is possible to identify a user's response when a sound is played repeatedly and then, suddenly, a different sound occurs.

One of the first visual stimulus-based applications, commonly used in tests, is typing, with a so-called speller. In this test, the screen contains the letters from 'A' to 'Z' arranged in a grid/matrix, and the user must concentrate on a specific letter. The BCI recognizes the letter and presents it to the user. The user can then focus and concentrate on another letter, and so on, letter by letter, to form words. In SSVEP-based BCIs, each letter flashes intermittently at a different frequency, whereas in P300-based BCIs one row/column flashes at a time, in a random pattern. When the row corresponding to the user's chosen letter flashes, a P300 wave is recognized, indicating that the user is focusing on that row; likewise, when the column flashes, this wave is identified. Thus, with both the row and the column recognized, it is possible to identify the letter chosen by the user. Fig. 3 shows two different spellers that enable word entry much like a keyboard: the speller in Fig. 3a is used in P300-based solutions, and Fig. 3b illustrates a speller for SSVEP-based solutions.

Fig. 3. Spellers based on (a) P300, (b) SSVEP. Sources: [26],[13].

In BCIs, processing can be performed online or offline. Online processing occurs in real time, while the user operates the BCI; offline processing is performed after the user experiment, with a post-processing approach, in order to obtain maximum precision. There is also a BCI classification regarding rhythm: synchronous and asynchronous. In synchronous BCIs, commands are interpreted at a constant rate; after every certain amount of time, a command is recognized regardless of the user's intent. Asynchronous BCIs – also called self-paced – leave the user in control, recognizing a command only when wanted.

We consider it pertinent to present these fundamental concepts of the BCI area in order to provide a refined understanding of the literature presented in this survey. The objective of this background is to give a theoretical overview, not an introductory tutorial on BCIs. The following section presents a survey that comprises a relevant part of the literature on this research topic.

III. SURVEY

The survey presented in this paper describes works that address new BCIs. These interfaces are related to the accomplishment of daily tasks, now possible through cerebral waves. The works presented therefore demonstrate the potential of new forms of interaction with interactive systems through BCIs. Moreover, for a better identification of the proposals and challenges of this research area, we grouped the works according to the detection approach used: (a) visual stimulus response, (b) sound stimulus response, (c) concentration, (d) imagined movement, and (e) neurofeedback.

The search strategy consisted of automatic and manual searches in scientific libraries and bibliographic databases. The automatic search was conducted in IEEE Xplore, ACM DL, Springer, Elsevier, SciELO, Scopus, and ISI Web of Knowledge, and also in Google Scholar; the manual search was made in the PLoS Biology and Frontiers in Neuroscience journals. In the search process we used combinations of the following keywords in English (presented here in alphabetical order): BCI, brain, computer, HCI, human, interaction, interface, reality, virtual, and VR. For this study we selected only recent works, i.e. papers published in the last five years.

A. Visual Stimulus Response

Mauro et al. [21] exhibit the use of a BCI to control the cursor of a desktop operating system. They implemented two BCIs based on the P300, one exogenous and the other endogenous: in an exogenous interface the user's focus is external, while in an endogenous interface the user's focus should be at the center (internal). Both interfaces allow four movement directions, with four squares – one on each side – representing target positions for testing purposes. In the exogenous interface, those squares flash, and the user has to focus on the square in the desired direction. The squares do not flash in the endogenous interface; instead, one letter is shown at the center of the screen, alternating between the initials of the Italian words for the directions, i.e. alto (up), destra (right), basso (down), and sinistra (left). The user has to count the occurrences of the letter representing the desired direction. Eight participants, half of them healthy and the other half in an advanced state of paralysis, took part in the experiment. The results demonstrated no difference in precision between the healthy group and the paralyzed one, which indicates that the interfaces devised do not depend on motor abilities.

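The row/column intersection logic of the P300 speller described in Section II can be sketched in a few lines. The grid layout and the per-flash "P300 scores" below are illustrative assumptions; a real speller would derive such scores from a classifier run on averaged EEG epochs, not receive them ready-made.

```python
# Illustrative P300 speller: pick the row and the column whose flashes
# produced the strongest (simulated) P300 response, then intersect them.
GRID = [
    "ABCDEF",
    "GHIJKL",
    "MNOPQR",
    "STUVWX",
    "YZ1234",
    "56789_",
]

def spell_letter(row_scores, col_scores):
    """row_scores/col_scores hold one averaged P300 score per row/column
    flash. The letter sits at the intersection of the two maxima."""
    row = max(range(len(row_scores)), key=lambda r: row_scores[r])
    col = max(range(len(col_scores)), key=lambda c: col_scores[c])
    return GRID[row][col]

# Simulated scores: the user focuses on 'P' (row 2, column 3), so those
# two flashes elicit a clearly larger P300-like amplitude than the rest.
rows = [0.1, 0.2, 0.9, 0.1, 0.2, 0.1]
cols = [0.2, 0.1, 0.1, 0.8, 0.1, 0.2]
print(spell_letter(rows, cols))  # -> P
```

The random flashing order described in the text matters only for eliciting the P300 "surprise"; once per-row and per-column responses are averaged, selection reduces to this intersection.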

Hood et al. [42] developed BCI control of a virtual car-driving simulation using CARRS-Q. The CARRS-Q simulator consists of a platform with a 180-degree frontal projection and three simulated mirrors. The BCI uses three LEDs as SSVEP stimuli, each offering a configurable command; one possible configuration uses the three LEDs to steer the wheel right, to steer it left, and to keep it straight. The system reached good precision rates but still cannot be applied in real situations, because it is not safe enough. In the authors' opinion, virtual environments will be essential for improving the interface and for the safety guarantees of future car-driving BCIs. Fig. 4 showcases the system in use.

Fig. 4. BCI for car driving in the virtual simulator CARRS-Q. Source: [42].

A different BCI application is the web browser created by Mugler et al. [23]. The BCI is based on the P300 and follows Mankoff et al.'s [20] web usability needs. For Mankoff et al. [20], all web browser systems must offer the following features: web navigation, page navigation with the fewest commands possible, history browsing, bookmarks, and text input. In addition to those, the authors add functionalities such as a URL bar. Speed is considered an interesting factor but not a need. Another BCI web browser is the one created by Xu et al. [35], which uses the SSVEP approach. The application allows searching in Google and inputting text. The authors built hardware in which a small board with six LEDs is attached to a conventional notebook. Each LED represents an option, and the LEDs are used to input characters, to navigate a 6x6 menu, and to select web browsing commands such as "next URL" and "HOME". The interface achieves great precision (92%) but is very slow (four and a half minutes for a simple Google search). Although the described application is interesting, Liu et al. [18] argue that SSVEP BCIs have a fatigue factor due to the constant, intermittent flashing. For this reason, these authors created an alternative BCI: like usual SSVEP interfaces it uses visual stimuli differentiated by frequency, but it relies on a movement frequency instead of the usual light frequency. According to the authors, this new SSVEP interface offers increased comfort with good precision (83%).

Ceccoti [4] developed an asynchronous BCI speller based on SSVEP. The speller's objective was to achieve an intuitive system that even inexperienced users could successfully use, while causing the least possible discomfort. To that end, the letters are divided into groups, and what flashes is the contour of those groups. The system configures the BCI automatically, and the asynchronous nature of the interface leaves the user more relaxed. Another interesting work is the one presented by Campbell et al. [2], which proposes the NeuroPhone, an iPhone application in which phone calls are made through a BCI using the EPOC. A grid of contact photos is shown to the user, and one photo flashes at a time, in a P300 fashion. The user then focuses on the contact to call. Fig. 5 shows the process of calling a contact.

Fig. 5. Contacts flash, one by one, detecting the user's visual focus. A phone call is made to the chosen one. Source: [2].

A remote robot is controlled in the SSVEP BCI built by Gergondet et al. [6]. In this system, a real-time video displays the robot's "vision", i.e. a camera mounted on the robot. On the controlling machine – a notebook – the interface mixes the robot's vision with four red squares on the four sides – top, bottom, left, and right – that act as visual stimuli. By focusing on one of these directions, the user makes the robot increase speed in that direction. For testing purposes the authors use a robot benchmark known in the robotics area as SLALOM, and the robot successfully passes the benchmark test. Yuksel et al. [37] employ a P300 BCI for object selection. Objects are arranged on a multi-touch table display, and a computer-vision algorithm computes the approximate shape of each object. This shape is expanded and flashed under the object, following the same process as in P300 spellers, with an object grid instead of a letter grid. Fig. 6 exhibits the BCI table in action.

Fig. 6. Multi-touch display interface, where objects are lit up one by one, triggering a P300 wave. Source: [37].

Grierson and Kiefer [7] test the MindSet – conventionally used to detect concentration – in a BCI based on the P300. The efficacy was benchmarked through an experiment in which two squares – one blue and one red – take turns flashing, and the BCI has to detect which one the user is focusing on. With squares of the same size the precision is good (78.5%), while with squares of different sizes the precision is close to 100%. The results indicate that it is possible to use a commercial headset for BCI applications based on visual stimuli.

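Several of the SSVEP systems above (the CARRS-Q LEDs, Xu et al.'s LED board, Gergondet et al.'s robot squares) reduce command selection to one question: which stimulus frequency dominates the captured signal? Below is a minimal sketch of that selection step using the Goertzel algorithm; the candidate frequencies and the synthetic signal are our assumptions, not any surveyed system's actual parameters.

```python
import math

def goertzel_power(samples, fs, freq):
    """Signal power near `freq` Hz, computed with the Goertzel algorithm
    (a cheap single-bin alternative to a full FFT)."""
    n = len(samples)
    k = round(n * freq / fs)                 # nearest DFT bin
    w = 2 * math.pi * k / n
    coeff = 2 * math.cos(w)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

def detect_command(samples, fs, stim_freqs):
    """Return the stimulus frequency with the highest spectral power."""
    return max(stim_freqs, key=lambda f: goertzel_power(samples, fs, f))

fs = 256
stim = [6.0, 8.0, 10.0]                      # e.g. three LEDs flashing (assumed)
t = [i / fs for i in range(fs)]              # one second of synthetic EEG
signal = [math.sin(2 * math.pi * 8.0 * x) + 0.2 * math.sin(2 * math.pi * 13.0 * x) for x in t]

print(detect_command(signal, fs, stim))      # -> 8.0 (gazing at the 8 Hz stimulus)
```

Real systems also score harmonics and apply a confidence threshold before issuing a command, which is what separates a usable interface from one that misfires whenever the user looks away.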

Wang et al. [33] developed a smartphone BCI application based on SSVEP for calling contacts by typing their number. The interface contains the numbers 0 to 9, a confirmation option (Enter), and a correction option (Backspace). Frequencies between 9 and 11 Hz are used as visual stimuli. Normally such frequencies cannot be achieved on a smartphone due to the screen refresh rate, but the authors employ a special technique of alternating black/white patterns to circumvent this restriction. Good precision was achieved (close to 100%).

Hakvoort et al. [9] implemented an SSVEP BCI game whose goal is to lead sheep into a pen by controlling shepherd dogs. Kapeller et al. [14] remark on the importance of analyzing BCIs in a distracting context. They ran a benchmark in which a user has to focus on SSVEP visual stimuli overlaid on a movie. The experiment showed that some precision – 6% on average – is lost in this distracting context. One of the participants had a very large loss of precision (40%), which according to the authors indicates that some users are more sensitive to visual distractions.

Escolano et al. [45] developed a BCI telepresence system, wherein a robot is remotely controlled from a different geographic location. A camera on top of the robot shows its current "vision". The BCI is based on the P300, and the graphical interface superimposes options as augmented reality on the robot's vision, with commands like "turn left" appearing as icons. Besides the camera, the robot is also equipped with a laser sensor, wheels, and a location tracker based on measurements of the wheel rotations. The BCI has two operating modes: robot navigation, where a grid of points is used as the visual stimulus for choosing a destination, and camera exploration, in which the point grid is used to indicate where the camera should look. Five users participated in an experiment in which they had to navigate the robot through tight spaces; all of them managed to steer the robot successfully to the final position.

An assistive BCI application is the one by Grigorescu et al. [46], where the interface controls a robotic assistant that helps people with motor deficiencies. The authors named the system FRIEND (Functional Robot with dexterous arm and user-frIENdly interface for Disabled people). The BCI was added to a new generation of the robot system to address quadriplegic users' needs. FRIEND combines several modules: wheelchair, robotic arm with gripper, EEG headset, monitor, and camera (used for machine vision). The BCI uses the SSVEP approach, with five LEDs acting as visual stimuli, each representing a menu option. FRIEND is a semi-autonomous robot with two modes: (i) a completely autonomous mode, wherein objects are automatically recognized and manipulated, and (ii) a shared-control mode, where the user assists the system with environment information, such as approximate object positions. System performance was measured in four scenarios: preparing and serving a drink, preparing and serving a meal (shown in Fig. 7), tasks at a library service desk, and keyboard-maintenance tasks (checking whether keyboards work correctly). Those tests indicate that FRIEND still needs improvement for real-world usage, but it has great potential.

Fig. 7. A user selects "prepare meal" in the FRIEND BCI system. Source: [46].

Kaufmann et al. [15] developed the Optimized Communication System, a P300 BCI speller in which a single button automatically configures and adjusts the system. The user presses the button once to start EEG signal capture, and a second time to stop the capture and use the collected data to configure and calibrate the system. It also improves on other spellers in information transmission speed through word auto-completion – the word predictor was written in Python – with completed words appearing together with the letters in the grid. Following the idea of requiring as little configuration as possible, the application creates the word base automatically, by crawling the web and computing word frequencies for a given language.

Poli et al. [41] take the BCI domain to space interaction, creating a BCI spaceship-navigation controller. BCI usage in space applications has great potential, since it allows piloting, collaboration, and machine control without hand movements, which are restricted in several situations. The developed BCI uses the P300 approach with an innovative graphical layout: eight gray circles form a larger circle, and one by one they flash in a red or green color, randomly selected. Three volunteers participated in a simulation whose goal was to fly a path with the ship, passing as close as possible to the sun. The authors also devised a cooperative mode, in which control is exerted simultaneously by more than one person; in the experiments, the cooperative mode had better precision than the single-person mode. The BCI still has to improve to achieve high enough precision for real-world usage; nevertheless, it is an excellent result and offers an optimistic vision for future BCI space applications.

Edlinger et al. [43][44] developed a domotics BCI application based on a hybrid approach, using SSVEP to turn the application on and off, and the P300 for command selection and activation. They define four requirements for domotics BCIs: (i) signal amplifiers must work even in noisy environments; (ii) EEG capture has to be made with a portable device, to avoid collisions and user irritation; (iii) for real-time experiments it is necessary to connect the BCI to a virtual reality simulation; and (iv) the communication interface between BCI and VR needs to offer a satisfactory degree of freedom. In the VR, the user is equipped with 3D glasses and a head-position tracker. EEG signals are captured through a g.MOBIlab+ amplifier and sent to a PC, which controls the virtual environment using XVR (eXtreme VR). Video output is projected onto a high-resolution surface (powerwall). The virtual environment is composed of three rooms, each with controllable devices such as a television, music player, telephone, lights, and a door. Commands are divided into seven categories: lights, music, telephone, temperature, television, move, and "go to". Each of these categories acts as an interface "mask", a screen containing only the commands relevant to that category. Fig. 8 shows the BCI and the controlled virtual environment.

Fig. 8. (a) P300-based BCI for controlling (b) a virtual domotics environment. Source: [44].

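The word auto-completion in Kaufmann et al.'s speller (whose predictor, the authors note, was written in Python) amounts to a frequency-ranked prefix lookup over an automatically built word base. The sketch below illustrates that idea under our own assumptions; the word list and counts are invented, where a real system would compute them from a crawled corpus for the target language.

```python
# Frequency-ranked prefix completion, as a BCI speller might offer after
# each selected letter. Word counts here are invented for illustration.
WORD_FREQ = {"the": 500, "there": 120, "they": 200, "think": 80, "this": 300}

def completions(prefix, top_n=3):
    """Most frequent known words starting with `prefix`, best first."""
    matches = [w for w in WORD_FREQ if w.startswith(prefix)]
    return sorted(matches, key=lambda w: -WORD_FREQ[w])[:top_n]

print(completions("th"))   # -> ['the', 'this', 'they']
print(completions("the"))  # -> ['the', 'they', 'there']
```

Showing the top candidates alongside the letter grid lets the user select a whole word with one P300 detection instead of several, which is where the transmission-speed gain comes from.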

television, move and "go to". Each of these categories acts as an interface "mask": a screen that presents only the commands relevant to that category. Fig. 8 shows the BCI and the controlled virtual environment.

Fig. 8. A (a) P300-based BCI for controlling (b) a virtual domotics environment. Source: [44].

B. Sound Stimulus Response

Lotte et al. [19] developed a sound stimulus based BCI using a P300 approach. Their objective is to present a BCI that is efficient in a scenario of great mobility, where concentrating on a visual stimulus is a hard task and user movement can cause interference in the EEG capture, since movement is responsible for a great part of the brain's electrical activity. The developed BCI uses two sounds as stimuli: one appears rarely – a ring bell sound ("Ding Dong") – while the other plays frequently – a buzzer sound. The user has to focus on the ring bell sound, counting the number of occurrences, eliciting a P300 wave each time that sound plays. The authors conducted an analysis to identify movement interference in P300 detection, and three movement states were tested, i.e. sitting, standing and walking. The experimental results were promising: no significant precision was lost due to movement interference, and a good EEG capture was possible in all tested states.

In the work of Kim et al. [16], a BCI based on steady state evoked potentials was conceived. However, sound frequency (SSAEP) was used instead of the usual visual flash frequency (SSVEP). Two sounds of different frequencies are used, each on a different side of the user – left/right – to strengthen the contrast between them. A good precision was achieved: 71% online and 86% offline. One disadvantage of such sound based BCIs is the binary choice, i.e. the user can only choose between two options (sounds). Hill and Schölkopf [12] research ways to improve sound based BCIs. They use the same SSAEP approach – sound frequency – combined with spatial location – one sound to the left of the user, the other to the right – in a way that resembles the "surprise" associated with P300 approaches. Their BCI achieves higher performance, obtaining 85% precision online.

C. Concentration

Coulton et al. [5] created a smartphone game named Brain Maze, which uses a BCI as one of its controllers through the MindSet. The game objective is to move a ball from a start point to a finishing destination. Moving the ball is done through the accelerometer, but some obstacles need to be overcome using the BCI: some paths are blocked by closed gates. There are two types of gates, i.e. attention and meditation gates. The user has to increase concentration to open attention gates, while he needs to relax to open meditation gates.

Marchesi [40] presents a BCI prototype for interactive cinema, the Neu system. Neu measures the user's degree of concentration/relaxedness using a MindWave, and those measurements affect the course of events in the story of the interactive movie. Neu is an evolution of the MOBIE system, developed by the same author. Mobie monitors and records the degree of concentration/relaxedness of the user while he watches a movie; from this feedback, the user's engagement in each scene is obtained. Neu takes this concept one step beyond, offering, in the author's viewpoint, an interactive, immersive, and personal BCI experience.

D. Imagined Movement

Poor et al. [27] assess the EPOC's capacity in a BCI based on the imagination of kinetic actions. In the experiments, the objective was to rotate a cube after an initial brain signal recording and calibration. The precision was low (59%), but the authors attribute the cause to lack of training and to the immaturity of stimulus-less BCI systems and techniques. Friedman et al. [49] conduct research on navigation in a CAVE virtual environment based on imagined movement. The VR consists of a street with people spread out and stores on the sides; it can be projected in stereo view shutter glasses for increased realism and experience. To walk, the user has to imagine his own feet moving, while head rotation – tracked through an accelerometer – is used for changing the walking direction, and imagined hand movement – to pass the impression of "touch" – is used for interacting with other persons on the street. The virtual people remain still until the user interacts with them; in that instant, they start to walk to indicate interaction success.

One of the authors' first experiments [48] was conducted with a quadriplegic patient, who successfully moved across the virtual environment as displayed in Fig. 9. Another experiment [47] with 10 participants compared BCI precision in two different scenarios: a controlled choice scenario and a free choice scenario. In the first one, the user receives a sound cue for the action he must perform, and in the other one, the user freely chooses which action to perform. The precision in the controlled choice scenario was higher (82.1%) than in the free choice scenario (75%), which presents a challenge to be overcome, since freedom of choice is essential in interactive systems.

Fig. 9. User walks in a virtual street imagining feet movement. Source: [48].

Leeb et al. [39] leverage a conventional game – i.e., one which was not originally designed to work with a BCI controller – named PlanetPenguin Racer. In the original game a penguin is


controlled to descend a snow mountain, collecting fish on the way. The game was modified to float all the fish, suspending them in mid air. Jumping is the only way to catch the fish in this new version, and to jump the user has to imagine feet movement. For increased immersion the game happens in a CAVE virtual reality, where the user is surrounded by walls with projectors directed at each one of them. As such, the game uses a multimodal interface: the penguin's direction is controlled by a joystick and the jumps by the BCI. An experiment with 14 users demonstrated that the concomitant use of the joystick with the BCI did not decrease BCI precision. Furthermore, sensors positioned on the legs proved that the jumps – the BCI control – did not use any muscles. A pure joystick control achieved the highest precision in catching the fish, as expected by the authors. However, the majority of users preferred the BCI controls, remarking on the fun of jumping only through mental power. According to the authors, the game needs only a short training time, being entertaining without boring the player with long training sessions.

E. Neurofeedback

Vi and Subramanian [32] were able to detect an electrical potential called Error-Related Negativity, caused by user frustration when an interaction does not occur as planned. An example is when the user tries to select an option among others but misses, by user or system error, and chooses one he did not want, getting frustrated. With a BCI detecting this frustration – through the Error-Related Negativity potential – a system could try to auto-correct the interaction error, choosing the option closest to the miss-selected one. An experiment was made to measure the precision of successfully detecting this potential. The authors chose to analyze the precision through an interaction test known as Superflick, where a user has to "throw", with a drag-and-drop movement, a small ball into a big target ball. If the user misses the target, the system must auto-correct the interaction, trying to achieve the most satisfactory state for the user. The BCI application conceived achieved 70% precision, and proved that it is possible to detect interaction errors and use this information to provide a better experience for the user: compensating the error with an action rollback, giving a small advantage to a player, or selecting close objects. Fig. 10 shows a user participating in the Superflick test, "throwing" a ball while the headset captures the user's frustration.

Fig. 10. Superflick interaction test using a BCI for auto-correction. Source: [32].

Solovey et al. [30] created a BCI application – Brainput – that evaluates the priority of the tasks being done by the user. This priority is estimated through the detection, using fNIRS sensors, of three mental states of concurrency, i.e. (i) branching, when the current task is interrupted and replaced by another of increased priority; (ii) delay task, where the user receives another task but chooses to ignore it, implicitly indicating that few resources must be allocated to that lower-priority task; and (iii) dual task, where the user works on two tasks of the same priority, constantly switching between them. The BCI system conceived by the authors adapts to allocate more resources to tasks of higher priority. To test this BCI, an experiment was made with 11 users, who needed to remotely control two robots, one blue and one red. They could only control one robot at a time, switching between them. Both robots expended resources to move, and had to reach a specific destination and send a signal. A priority was assigned to each robot, and performance was compared between a Brainput interface doing the robot control switch and an interface where the robots were more autonomous. The results showed the Brainput interface to be the better one.

F. Summary

Table I summarizes the 29 interactive systems based on BCIs presented above, grouped by year of publication. As previously described, BCIs go beyond computer control, comprehending domains such as domotics, assistive interfaces, robot control and electronic games. It is important to highlight that most stimulus-based BCIs use a visual form due to its higher precision. Table I also shows the approach used by each BCI, classifying the works into (V)isual stimulus, (S)ound stimulus, (C)oncentration, (I)magined movement, and (N)eurofeedback.

TABLE I. LITERATURE WORKS PRESENTED IN THIS SURVEY.

Year | Ref. | Detection approach | Brief description
2007 | [48] | I | Avatar control/walking in a virtual street
2009 | [35] | V | Web browsing and speller
2009 | [19] | S | BCI control while walking
2009 | [23] | V | Web browsing
2010 | [18] | V | Web browsing and speller
2010 | [4]  | V | Asynchronous speller
2010 | [2]  | V | Call a contact through smartphone
2010 | [37] | V | Object selection in a real ambient
2010 | [33] | V | Phone dialing
2010 | [49] | I | Avatar control/walking in a virtual street
2011 | [21] | V | Mouse control
2011 | [6]  | V | Robot control with real-time vision camera
2011 | [7]  | V | Headset benchmark
2011 | [9]  | V | Sheep game
2011 | [43] | V | Domotic control in a virtual house
2011 | [16] | S | SSAEP benchmark
2011 | [5]  | C | Smartphone maze game
2011 | [27] | I | Commercial headset benchmark
2012 | [45] | V | Telepresence robot with vision camera
2012 | [46] | V | Auxiliary robot for disabled people
2012 | [15] | V | No-configuration speller with word prediction
2012 | [44] | V | Domotic control in a virtual house
2012 | [42] | V | Car control in a virtual CAVE simulation
2012 | [12] | S | SSAEP + P300 benchmark
2012 | [40] | C | Interactive movies
2012 | [32] | N | Superflick interaction test
2012 | [30] | N | Resource control through brain concurrency detection
2013 | [41] | V | Spaceship control in a virtual simulation
2013 | [39] | I | CAVE VR game
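Several of the stimulus-based entries in Table I rely on the P300 "oddball" response described above: the rare, attended stimulus elicits a positive deflection roughly 300 ms after onset, while frequent stimuli do not. The core detection step can be sketched as epoch averaging. The following is an illustrative toy only; the sampling rate, window and simulated data are hypothetical, and deployed systems such as [19] use trained classifiers rather than this bare amplitude comparison:

```python
import numpy as np

FS = 250  # hypothetical sampling rate (Hz)
# Window where the P300 deflection is expected (~250-450 ms after stimulus).
P300_WINDOW = slice(int(0.25 * FS), int(0.45 * FS))

def p300_target(epochs_by_class: dict) -> str:
    """Pick the stimulus class whose averaged epoch has the largest
    mean amplitude inside the P300 window.

    epochs_by_class maps a class label to an (n_trials, n_samples) array
    of single-channel EEG epochs time-locked to that stimulus.
    """
    scores = {
        label: epochs.mean(axis=0)[P300_WINDOW].mean()
        for label, epochs in epochs_by_class.items()
    }
    return max(scores, key=scores.get)

# Toy data: the "rare" (attended) class carries a simulated P300 bump,
# the "frequent" class is noise only.
rng = np.random.default_rng(0)
n_trials, n_samples = 20, FS          # 20 one-second trials
bump = np.zeros(n_samples)
bump[P300_WINDOW] = 5.0               # simulated 5 uV deflection
rare = rng.normal(0.0, 2.0, (n_trials, n_samples)) + bump
frequent = rng.normal(0.0, 2.0, (n_trials, n_samples))
print(p300_target({"rare": rare, "frequent": frequent}))  # → rare
```

Averaging over trials is what makes the weak P300 deflection stand out from background EEG noise, which is why oddball paradigms ask the user to count the rare stimulus over several repetitions.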


IV. CHALLENGES

Through reflection on the BCI literature reviewed, we identified several challenges with implications for user interaction. These challenges must be faced in order for BCIs to be used more effectively in interactive systems. For better understanding, we grouped the challenges into topics.

Most existing BCIs cause a high level of fatigue, demanding high concentration or attention to quick and intermittent stimuli. Besides the inconvenience of fatigue, the BCI may simply not work when the user cannot reach a sufficient level of concentration. In [11], Hasan and Gan try to ensure the BCI's operation even when the user is tired: the BCI implemented by these authors monitors user performance and, when it declines, the system activates an adaptation which reduces the concentration threshold necessary to interact with the system. The use of VR in BCI applications may assist in this process by providing a highly immersive environment, which motivates the user while interacting with the system, consequently increasing the user's attention and concentration levels.

The concentration required by stimuli also causes a mixture between input and output, since mental activity is constantly monitored and the user's focal point changes the input. Instead of relaxing, the user must concentrate on a point as input while looking at the output. Consider, for example, a user watching a movie: the user has to look at a specific point on the screen instead of the part of the scene he wants to see. At this stage, interaction has a forced aspect, instead of the natural aspect present when the user may decide which region of the visual output to focus on. A similar challenge occurs in traditional interaction, since the interaction flow often depends on the user perceiving certain feedback, mainly that issued by the computational system. In VR environments this issue is exacerbated, since the lack of visual freedom may disrupt immersion. This challenge also applies to other 3D environments such as augmented reality, which may bring further problems such as safety, since the user has to focus on a certain point and may not pay attention to what lies ahead.

According to Wolpaw et al. [34], with current BCI technologies users must fit themselves into the system in order to control it; the speed and satisfaction of this adjustment depend on the system's intuitiveness. The study of the user's visual focus and of the intuitiveness of graphical interfaces during task accomplishment is conducted by HCI researchers. These researchers must apply HCI techniques in the BCI context in order to develop visual interfaces that cause as few nuisances as possible regarding the constant transitions of the user's visual focus, and that are easy to use, providing a fast adjustment of the user to the system.

Using a BCI system is often a complex task. It is necessary to verify the electrodes' position on the user's head and to configure different parameters before using the system. Furthermore, users must know which technology is best suited to their needs, including purpose and profile. Randolph [28] evidences that factors such as gender, caffeine, and experience with videogames or musical instruments affect the mental states and waves tracked by capture technologies. Hence, different people may have different needs regarding BCIs, which makes using this kind of interaction in a practical way even more difficult. 3D environment technologies are also impacted by this challenge, since the technology which provides the best user experience may vary for each user. So, when mixing 3D and BCI technologies, the user's needs must be considered carefully.

In most cases BCIs do not provide mobility to users. Users must remain still and quiet, preferably sitting down, during test application. However, in a real use situation the user may need to use a BCI while walking on the street in order to control a smartphone, for example. In addition, BCIs must also provide comfort to the user. An EEG headset must be easy to carry and simple to use in the daily routine, just as a person uses a headphone to listen to music. An EEG headset must be lightweight, not only to provide mobility but also to enhance the use experience; its weight must not be uncomfortable to the user. Another significant inconvenience is caused by the gel applied to the electrodes of most EEG headsets to enable signal capture, even though dry electrodes are the users' priority. Guger et al. [8] address this requirement, presenting a non-commercial EEG headset with dry electrodes and high precision.

Another similar challenge is the possible conflict between different interface devices: using a Head-Mounted Display (HMD) together with an EEG headset, for instance, can prove to be difficult given that the EEG sensors must stay in position. Choosing an EEG headset becomes a complex task if we consider mobility and comfort requirements. The equipment presented in Fig. 1 and Fig. 2 must be redesigned to be used in real situations. Some of the equipment uses wires or cables, and most of it requires the application of gel or a saline substance to capture mental signals with high precision. The contribution of HCI to this BCI issue is exactly proposing the redesign of such equipment considering, for example, accessibility, usability, and ergonomic aspects. An ideal BCI headset must not use wires or cables, which hamper mobility, nor gel or saline solutions, which are one more component to be carried and make the headset's use more difficult. We understand that this device must be lightweight and without additional parts, for example, batteries.

In BCIs, the system needs to constantly adapt itself to the user's signals. This adjustment must be fast and precise. Current BCIs present a very low information transmission rate, taking, for example, almost two minutes to "digitalize" a simple word. Nowadays this challenge is minimized with the use of word completers, which accelerate the speed, as described in [15]. BCI precision does not always reach a satisfactory value, mainly in BCIs based on visual stimulus. Sometimes the repetition or undoing of actions is required, causing discomfort or even discontent in the usage of interactive systems with this kind of interface. The sum of these factors may generate frustration in the user and, consequently, resistance to BCI usage. Furthermore, the performance of low-cost commercial BCI devices must be investigated, such as the EPOC, which presents fewer electrodes than other EEG headsets with medical purposes. In a comparison performed by Al-Zubi et al. [1], this headset with 14 electrodes presented only 5% lower precision than a professional EEG headset with 128 electrodes.
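The speed and precision trade-off discussed above is commonly quantified with the information transfer rate (ITR) metric introduced by Wolpaw and colleagues [34]. A small sketch of that standard formula follows; the example speller numbers are hypothetical and not taken from any of the surveyed systems:

```python
import math

def wolpaw_itr(n_choices: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits/min (Wolpaw et al. formulation).

    n_choices: number of possible targets per selection (N)
    accuracy: probability of a correct selection (P)
    selections_per_min: selections the BCI completes per minute
    """
    n, p = n_choices, accuracy
    bits = math.log2(n)  # bits per selection at perfect accuracy
    if 0.0 < p < 1.0:
        # Penalty for errors, assuming mistakes spread evenly over the
        # remaining N-1 targets.
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# Hypothetical speller: 36 symbols, 80% accuracy, 4 selections per minute.
print(round(wolpaw_itr(36, 0.80, 4.0), 1))  # → 13.7
```

At roughly 14 bits/min, a short word of five 5-bit characters indeed takes on the order of two minutes, which matches the magnitude of the transmission-rate complaint above; word completers raise the effective rate by letting one selection commit several characters.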


BCI tests and experiments are often conducted in controlled environments, in the laboratory, which do not correspond to the real context of use of desktop computers, where users usually perform different tasks in parallel and work simultaneously, breaking their concentration constantly, either to answer the phone or to fetch a glass of water. This fact seems to demonstrate that asynchronous BCIs currently have more advantages in real situations, since they provide greater facility to the user when performing tasks in parallel, without prejudicing the interaction with the computer.

BCIs with wireless headsets are more practical and comfortable. Many manufacturers state that EEG signals are encrypted before being transferred to the device. However, it is necessary to note that, in a future with massive use of BCIs, breaking this cryptography would enable attackers to capture cerebral waves, which transmit not only commands but also mental states and feelings. As the BCI area progresses, this challenge has more severe consequences: the higher the precision and the greater the amount of information, the greater the risk of privacy loss. Looking forward, when BCI technologies reach an advanced stage, information espionage will no longer use phone tapping and network sniffers; instead, it will use mind tapping and cerebral signal sniffers. Thus, with research advancements, new challenges arise, and the presented survey can be taken as a starting point for the development and evolution of BCIs.

V. DISCUSSION

The brain is in constant activity and humans think all the time, even while asleep or dreaming. When a person uses a computer, a wave of information is lost, e.g. concentration level, frustrations, cognitive workload and the user's tension. This information could be used in BCIs, enabling interactive systems to adjust to their users, for example, changing the amount of text and figures [25], changing desktop screen sizes to control resources [30] and to offer more space to important applications, identifying and correcting interaction mistakes [32], choosing a more suitable video for the user [36], or even presenting more relevant information based on the user's cerebral activity.

Invasive BCIs based on implanted chips using biocompatible materials are mature enough to enable monkeys to control a mechanical arm only with thought, as presented, for example, by Carmena et al. [3]. However, this requires a long period of training; moreover, the invasion levels, costs and health risks make it an unviable procedure for healthy humans. Noninvasive BCIs, in turn, are not mature enough to be used as a single input, although they may already be adopted efficiently in multimodal interfaces in real usage scenarios. In addition, BCIs without stimulus, in which a simple thought would control an entire system, are the most desired ones. However, with current technologies and without stimulus, it is only possible to recognize mental states, such as concentration, with limited precision.

Thus, based on the BCI literature review, we identified that most noninvasive BCIs depend on visual stimulus to reach satisfactory precision and speed (see Table I). Care with design becomes even more important in these interfaces, since input/output depends on the system's graphical interface. BCIs based on sound stimulus face a similar difficulty, because they require special concern regarding warning sounds, so that these sounds do not disturb the feedback of the sound stimulus.

There are no ready-to-use models with which we can model the interaction between users and computers via brain waves. It is henceforth necessary to develop methods, techniques, approaches and technologies to better support works in this area. The documentation of a specific user's interaction, based on information captured from the user's brain, may be used to evaluate interfaces and indicate, as a result, the system's intuitiveness. Moreover, the user's behavior pattern, directly captured from brain activity, may contribute to enhancing the interaction quality of adaptive interfaces, since the system may learn about the user's behavior and automatically provide a molded interface.

For the widespread use of this interaction form, it is necessary to face its current limitations and overcome its challenges. Whereas precision partially involves the HCI area, concentration, speed, comfort, environment, difficulty of use, and privacy are constant concerns of interactive systems. The fact that most works using noninvasive BCIs need visual stimulus increases the need for greater concern from the HCI and VR communities in BCI research. Even in BCIs without visual stimulus, these areas may contribute, since it is important to consider the feedback, making the command more intuitive with HCI and more immersive with VR. BCI technologies are a fundamental step toward more transparent ubiquitous interactions, in which we will control different devices simply by our "will". Through the capture and identification of our thoughts, no effort will be required in daily interactions with devices. Using muscles will not be necessary, except for those responsible for the vital activities of the organism, such as the involuntary movements of the heartbeat.

We believe that, if HCI knowledge is associated from the beginning, it is possible to advance toward more comfortable, suitable and easy-to-use BCIs, so that users may have a higher degree of satisfaction in interactive systems. Our research group is exploring this area in order to propose new BCIs guided by human factors, which are involved in this highly complex form of (brain)human-computer interaction. In the same way, we believe that the area of VR plays a vital role in brain-computer interactions, providing an immersive environment, easing movement imagination and increasing focus on visual stimulus.

VI. CONCLUSION

This paper presented a survey of BCIs and, additionally, based on this review, we identified and discussed several challenges for BCIs in the interactive systems context. We believe that these challenges must be addressed so that BCIs may be adopted in interactive systems more effectively. Furthermore, due to advancements and the price reduction of headsets, BCIs will be common in the near future, just as other kinds of interface/interaction are today, for example those provided by mobile devices and by Kinect.

We are aware that thinking about interaction design in this domain involves various areas of knowledge, especially to


reach its full potential. However, it is important to point out that HCI can contribute to the expansion of knowledge frontiers; the outcomes achieved with this study are a concrete example, since they enhance the importance of this review and highlight the merit of the surveyed works. Considering the different related areas and the diverse use possibilities of BCIs, this research topic deserves greater attention from both the HCI and VR communities in order to undertake further studies on interaction in BCIs.

As future work, we will conduct an interaction design study and implement a visual stimulus-based BCI game for use with a low-cost noninvasive EEG headset.

ACKNOWLEDGMENT

This work was partially supported by the Brazilian Federal Agency for Support and Evaluation of Graduate Education (CAPES) and by the Physical Artifacts of Interaction Research Group (PAIRG) at the Federal University of Rio Grande do Norte (UFRN), Brazil. The present paper is an extended and reviewed version of a previous work published at the XI Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais (IHC'12), entitled "Interfaces Cérebro-Computador de Sistemas Interativos: Estado da Arte e Desafios de IHC" [38]. The authors thank the JIS editors for the invitation.

REFERENCES

[1] H.S. Al-Zubi, N.S. Al-Zubi, and W. Al-Nuaimy, "Toward inexpensive and practical brain computer interface," in Proceedings of the Developments in E-systems Engineering (DeSE'11), IEEE, 2011, pp. 98–101, doi: 10.1109/DeSE.2011.116.
[2] A. Campbell, T. Choudhury, S. Hu, H. Lu, M.K. Mukerjee, M. Rabbi, and R.D.S. Raizada, "NeuroPhone: brain-mobile phone interface using a wireless EEG headset," in Proceedings of the 2nd ACM SIGCOMM Workshop on Networking, Systems, and Applications on Mobile Handhelds (MobiHeld'10), ACM, 2010, pp. 3–8, doi: 10.1145/1851322.1851326.
[3] J.M. Carmena, M.A. Lebedev, R.E. Crist, J.E. O'Doherty, D.M. Santucci, D.F. Dimitrov, P.G. Patil, C.S. Henriquez, and M.A.L. Nicolelis, "Learning to control a brain-machine interface for reaching and grasping by primates," in PLoS Biology, vol. 1, n. 2, 2003, pp. 193–208, doi: 10.1371/journal.pbio.0000042.
[4] H. Cecotti, "A self-paced and calibration-less SSVEP-based brain-computer interface speller," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, n. 2, IEEE, 2010, pp. 127–133, doi: 10.1109/TNSRE.2009.2039594.
[5] P. Coulton, C.G. Wylie, and W. Bamford, "Brain interaction for mobile games," in Proceedings of the 15th International Academic MindTrek Conference (MindTrek'11), ACM, 2011, pp. 37–44, doi: 10.1145/2181037.2181045.
[6] P. Gergondet, S. Druon, A. Kheddar, C. Hintermuller, C. Guger, and M. Slater, "Using brain-computer interface to steer a humanoid robot," in Proceedings of the IEEE International Conference on Robotics and Biomimetics (ROBIO'11), IEEE, 2011, pp. 192–197, doi: 10.1109/ROBIO.2011.6181284.
[7] M. Grierson and C. Kiefer, "Better brain interfacing for the masses: progress in event-related potential detection using commercial brain computer interfaces," in Proceedings of the Extended Abstracts on Human Factors in Computing Systems (CHI EA'11), ACM, 2011, pp. 1681–1686, doi: 10.1145/1979742.1979828.
[8] C. Guger, G. Krausz, B.Z. Allison, and G. Edlinger, "Comparison of dry and gel based electrodes for P300 brain-computer interfaces," in Frontiers in Neuroprosthetics, vol. 6, Frontiers, 2012, pp. 1–7, doi: 10.3389/fnins.2012.00060.
[9] G. Hakvoort, H. Gürkök, D.P-O. Bos, M. Obbink, and M. Poel, "Measuring immersion and affect in a brain-computer interface game," in Proceedings of the 13th IFIP TC 13 International Conference on Human-Computer Interaction (INTERACT'11), Springer, 2011, pp. 115–128, doi: 10.1007/978-3-642-23774-4_12.
[10] C. Harrison, D. Tan, and D. Morris, "Skinput: appropriating the body as an input surface," in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI'10), ACM, 2010, pp. 453–462, doi: 10.1145/1753326.1753394.
[11] B.A.S. Hasan and J.Q. Gan, "Hangman BCI: an unsupervised adaptive self-paced brain-computer interface for playing games," in Computers in Biology and Medicine, vol. 42, n. 5, Elsevier, 2012, pp. 598–606, doi: 10.1016/j.compbiomed.2012.02.004.
[12] N.J. Hill and B. Schölkopf, "An online brain-computer interface based on shifting attention to concurrent streams of auditory stimuli," in Journal of Neural Engineering, vol. 9, n. 2, 2012, pp. 1–13, doi: 10.1088/1741-2560/9/2/026011.
[13] H-J. Hwang, J-H. Lim, Y-J. Jung, H. Choi, S.W. Lee, and C-H. Im, "Development of an SSVEP-based BCI spelling system adopting a QWERTY-style LED keyboard," in Journal of Neuroscience Methods, vol. 208, n. 1, Elsevier, 2012, pp. 59–65, doi: 10.1016/j.jneumeth.2012.04.011.
[14] C. Kapeller, C. Hintermüller, and C. Guger, "Usability of video-overlaying SSVEP based BCIs," in Proceedings of the 3rd Augmented Human International Conference (AH'12), ACM, 2012, doi: 10.1145/2160125.2160151.
[15] T. Kaufmann, S. Völker, L. Gunesch, and A. Kübler, "Spelling is just a click away - a user-centered brain-computer interface including auto-calibration and predictive text entry," in Frontiers in Neuroprosthetics, vol. 6, Frontiers, 2012, pp. 1–10, doi: 10.3389/fnins.2012.00072.
[16] D-W. Kim, J-H. Cho, H-J. Hwang, J-H. Lim, and C-H. Im, "A vision-free brain-computer interface (BCI) paradigm based on auditory selective attention," in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC'11), IEEE, 2011, pp. 3684–3687, doi: 10.1109/IEMBS.2011.6090623.
[17] S.S. Liu, A. Rawicz, S. Rezaei, T. Ma, C. Zhang, K. Lin, and E. Wu, "An eye-gaze tracking and human computer interface system for people with ALS and other locked-in diseases," in Journal of Medical and Biological Engineering, vol. 32, n. 2, 2012, pp. 37–42.
[18] T. Liu, L. Goldberg, S. Gao, and B. Hong, "An online brain-computer interface using non-flashing visual evoked potentials," in Journal of Neural Engineering, vol. 7, n. 3, 2010, pp. 1–9, doi: 10.1088/1741-2560/7/3/036003.
[19] F. Lotte, J. Fujisawa, H. Touyama, R. Ito, M. Hirose, and A. Lécuyer, "Towards ambulatory brain-computer interfaces: a pilot study with P300 signals," in Proceedings of the International Conference on Advances in Computer Entertainment Technology (ACE'09), ACM, 2009, pp. 336–339, doi: 10.1145/1690388.1690452.
[20] J. Mankoff, A. Dey, U. Batra, and M. Moore, "Web accessibility for low bandwidth input," in Proceedings of the 5th ACM International Conference on Assistive Technologies (ASSETS'02), ACM, 2002, pp. 17–24, doi: 10.1145/638249.638255.
[21] M. Mauro, P. Francesco, S. Stefano, G. Luciano, and P. Konstantinos, "Spatial attention orienting to improve the efficacy of a brain-computer interface for communication," in Proceedings of the 9th ACM SIGCHI Italian Chapter International Conference on Computer-Human Interaction (CHItaly'11), ACM, 2011, pp. 114–117, doi: 10.1145/2037296.2037325.
[22] J.D.R. Millán, R. Rupp, G.R. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, R. Leeb, C. Neuper, K.R. Müller, and D. Mattia, "Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges," in Frontiers in Neuroprosthetics, vol. 4, Frontiers, 2010, pp. 1–33, doi: 10.3389/fnins.2010.00161.
[23] E.M. Mugler, C.A. Ruf, S. Halder, M. Bensch, and A. Kubler, "Design and implementation of a P300-based brain-computer interface for controlling an internet browser," in IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 18, n. 6, IEEE, 2010, pp. 599–609, doi: 10.1109/TNSRE.2010.2068059.


[24] Y. Nam, Q. Zhao, A. Cichocki, and S. Choi, “Tongue-Rudder: a glossokinetic-potential-based tongue-machine interface,” in IEEE Transactions on Biomedical Engineering, vol. 59, n. 1, IEEE, 2012, pp. 290–299, doi: 10.1109/TBME.2011.2174058.
[25] A. Nijholt, D. Tan, B. Allison, J.R. Milan, and B. Graimann, “Brain-computer interfaces for HCI and games,” in Proceedings of the Extended Abstracts on Human Factors in Computing Systems (CHI EA’08), ACM, 2008, pp. 3925–3928, doi: 10.1145/1358628.1358958.
[26] G. Pires, U. Nunes, and M. Castelo-Branco, “Statistical spatial filtering for a P300-based BCI: tests in able-bodied, and patients with cerebral palsy and amyotrophic lateral sclerosis,” in Journal of Neuroscience Methods, vol. 195, n. 2, Elsevier, 2011, pp. 270–281, doi: 10.1016/j.jneumeth.2010.11.016.
[27] G.M. Poor, L.M. Leventhal, S. Kelley, J. Ringenberg, and S.D. Jaffee, “Thought cubes: exploring the use of an inexpensive brain-computer interface on a mental rotation task,” in Proceedings of the 13th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS’11), ACM, 2011, pp. 291–292, doi: 10.1145/2049536.2049612.
[28] A.B. Randolph, “Not all created equal: individual-technology fit of brain-computer interfaces,” in Proceedings of the 45th Hawaii International Conference on System Science (HICSS’12), IEEE, 2012, pp. 572–578, doi: 10.1109/HICSS.2012.451.
[29] E.T. Solovey, A. Girouard, K. Chauncey, L.M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, and R.J.K. Jacob, “Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines,” in Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (UIST’09), ACM, 2009, pp. 157–166, doi: 10.1145/1622176.1622207.
[30] E. Solovey, P. Schermerhorn, M. Scheutz, A. Sassaroli, S. Fantini, and R. Jacob, “Brainput: enhancing interactive systems with streaming fNIRS brain input,” in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI’12), ACM, 2012, pp. 2193–2202, doi: 10.1145/2207676.2208372.
[31] S. Vernon and S.S. Joshi, “Brain-muscle-computer interface: mobile-phone prototype development and testing,” in IEEE Transactions on Information Technology in Biomedicine, vol. 15, n. 4, IEEE, 2011, pp. 531–538, doi: 10.1109/TITB.2011.2153208.
[32] C. Vi and S. Subramanian, “Detecting error-related negativity for interaction design,” in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI’12), ACM, 2012, pp. 493–502, doi: 10.1145/2207676.2207744.
[33] Y-T. Wang, Y. Wang, and T-P. Jung, “A cell-phone-based brain-computer interface for communication in daily life,” in Journal of Neural Engineering, vol. 8, n. 2, 2011, doi: 10.1088/1741-2560/8/2/025018.
[34] J.R. Wolpaw, N. Birbaumer, D.J. McFarland, G. Pfurtscheller, and T.M. Vaughan, “Brain-computer interfaces for communication and control,” in Clinical Neurophysiology, vol. 113, n. 6, Elsevier, 2002, pp. 767–791, doi: 10.1016/S1388-2457(02)00057-3.
[35] H. Xu, T. Qian, B. Hong, X. Gao, and S. Gao, “Brain-actuated human computer interface for Google search,” in Proceedings of the 2nd International Conference on Biomedical Engineering and Informatics (BMEI’09), IEEE, 2009, pp. 1–4, doi: 10.1109/BMEI.2009.5305708.
[36] A. Yazdani, J-S. Lee, J-M. Vesin, and T. Ebrahimi, “Affect recognition based on physiological changes during the watching of music videos,” in ACM Transactions on Interactive Intelligent Systems, vol. 2, n. 1, ACM, 2012, pp. 1–26, doi: 10.1145/2133366.2133373.
[37] B.F. Yuksel, M. Donnerer, J. Tompkin, and A. Steed, “A novel brain-computer interface using a multi-touch surface,” in Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI’10), ACM, 2010, pp. 855–858, doi: 10.1145/1753326.1753452.
[38] A.L.S. Ferreira, L.C. Miranda, and E.E.C. Miranda, “Interfaces cérebro-computador de sistemas interativos: estado da arte e desafios de IHC,” in Anais do XI Simpósio Brasileiro sobre Fatores Humanos em Sistemas Computacionais (IHC’12), SBC, 2012, pp. 239–248.
[39] R. Leeb, M. Lancelle, V. Kaiser, D. Fellner, and G. Pfurtscheller, “Thinking Penguin: multimodal brain-computer interface control of a VR game,” in IEEE Transactions on Computational Intelligence and AI in Games, vol. 5, n. 2, IEEE, 2013, pp. 117–128, doi: 10.1109/TCIAIG.2013.2242072.
[40] M. Marchesi, “From mobie to Neu: 3D animated contents controlled by a brain-computer interface,” in Proceedings of the Virtual Reality International Conference (VRIC’12), ACM, 2012, pp. 1–3, doi: 10.1145/2331714.2331747.
[41] R. Poli, C. Cinel, A. Matran-Fernandez, F. Sepulveda, and A. Stoica, “Towards cooperative brain-computer interfaces for space navigation,” in Proceedings of the International Conference on Intelligent User Interfaces (IUI’13), ACM, 2013, pp. 149–160, doi: 10.1145/2449396.2449417.
[42] D. Hood, D. Joseph, A. Rakotonirainy, S. Sridharan, and C. Fookes, “Use of brain computer interface to drive: preliminary results,” in Proceedings of the 4th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI’12), ACM, 2012, pp. 103–106, doi: 10.1145/2390256.2390272.
[43] G. Edlinger, C. Holzner, and C. Guger, “A hybrid brain-computer interface for smart home control,” in Proceedings of the 14th International Conference on Human-Computer Interaction (HCII’11), Springer, 2011, pp. 417–425, doi: 10.1007/978-3-642-21605-3_46.
[44] G. Edlinger and C. Guger, “A hybrid brain-computer interface for improving the usability of a smart home control,” in Proceedings of the ICME International Conference on Complex Medical Engineering (CME’12), IEEE, 2012, pp. 182–185, doi: 10.1109/ICCME.2012.6275714.
[45] C. Escolano, J.M. Antelis, and J. Minguez, “A telepresence mobile robot controlled with a noninvasive brain-computer interface,” in IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 42, n. 3, IEEE, 2012, pp. 793–804, doi: 10.1109/TSMCB.2011.2177968.
[46] S.M. Grigorescu, T. Lüth, C. Fragkopoulos, M. Cyriacks, and A. Gräser, “A BCI-controlled robotic assistant for quadriplegic people in domestic and professional life,” in Robotica, vol. 30, n. 3, 2012, pp. 419–431, doi: 10.1017/S0263574711000737.
[47] G. Pfurtscheller, R. Leeb, C. Keinrath, D. Friedman, C. Neuper, C. Guger, and M. Slater, “Walking from thought,” in Brain Research, vol. 1071, 2006, pp. 145–152.
[48] R. Leeb, D. Friedman, G.R. Müller-Putz, R. Scherer, M. Slater, and G. Pfurtscheller, “Self-paced (asynchronous) BCI control of a wheelchair in virtual environments: a case study with a tetraplegic,” in Computational Intelligence and Neuroscience, vol. 2007, 2007, pp. 1–8, doi: 10.1155/2007/79642.
[49] D. Friedman, R. Leeb, G. Pfurtscheller, and M. Slater, “Human-computer interface issues in controlling virtual reality with brain-computer interface,” in Human-Computer Interaction, vol. 25, n. 1, Taylor & Francis, 2010, pp. 67–94, doi: 10.1080/07370020903586688.
[50] F. Lotte, J. Faller, C. Guger, Y. Renard, G. Pfurtscheller, A. Lécuyer, and R. Leeb, “Combining BCI with virtual reality: towards new applications and improved BCI,” in Towards Practical Brain-Computer Interfaces, Springer, 2013, pp. 197–220, doi: 10.1007/978-3-642-29746-5_10.