Anna Beck
ENGW3315 Project 2 Final Draft
Cecelia Musselman
24 June 2019
IEEE Format, Word Count: 3427
Ethics for Robots: A Review of the 2017-2019 Literature
Abstract:
The presence of robots and artificial intelligence in the daily lives of humans has increased rapidly over the past decade and will continue to grow in the years to come. Interactions between humans and robots are just becoming routine in more developed countries, but they carry consequences, because robots are not yet mature enough for certain vital situations. Deploying robots in such situations can produce bias arising from underdeveloped or erroneous code. Debates are commonly sparked over whether a robot should be granted rights such as those humans have, or whether the consequences of a coding error should fall back on the programmer. Many changes must be made to both hardware and software before robots are fully incorporated into society; otherwise the machines will be a detriment to the world rather than a benefit. Computer programmers intend robots to be an asset to society, so improvements are needed to eliminate the factors that currently produce adverse effects.
Introduction:
Numerous recent advancements in robotics come at the price of the problems that arise along with them. Unanticipated complications emerge with technological growth, and those problems can harm people, producing the opposite of the device's intended effect of improving and easing human life. Technology exists to enhance one's life, not to detract from it in any way. A device that causes many problems and continues to harm the world will not be needed and will most likely be discarded. Many devices have faults and cause problems that would not have arisen without their existence, yet still benefit society on balance. Artificially intelligent robots are intended to benefit society by reducing casualties and making life easier for people; however, they fail under circumstances that require them to make a moral decision.
Human Interactions with Robots:
With improvements in robotic technology, a human and a robot working together on a project may become a common workplace sight. The word robot here does not imply a human-like figure that can walk and talk like a human; for now, it means a machine with some human-like capabilities. Weighing the positive and negative effects of humans and robots working as coworkers, the negative effects currently outweigh the positive. The differences between humans and robots can benefit or harm both parties [6]. You and Robert, writing on the growing number of robotic workers, note that Amazon adds roughly 15,000 robots a year to work with employees in its fulfillment centers, and that robots are predicted to take over half of the workforce within the next 10 to 20 years [6]. They also evaluated the trust between a human and a robotic coworker: trusting a robot can be interpreted in two ways, trusting the robot itself and treating it as an equal being, or trusting the code that allows the robot to run [6]. Humans tend to feel fear when trusting a machine, particularly when the machine's duties appear demanding and humans are not comfortable with it performing them [1]. The study concluded that trust is vital for
successful workspace collaboration, but that trust does not come easily, because robots are machines and not beings [6]. Another study found that most humans preferred to work with robots of a similar ethnicity and gender, and trusted them more in the process; similarly, they preferred to work with robots that share their interests and personality traits [6].
Trust, a difficult concept in most human-to-human interactions, has proved even more complex in human-to-robot interactions. Studies have found that humans tend to trust robots more readily than they would trust another being. One reason may be that robots will not lie, cheat, or steal, which are the main reasons humans lose trust in one another. As machines, robots are programmed to perform certain procedures in response to human actions and background activity, and those procedures do not include the detrimental behaviors that erode trust between people. A recent study found that humans tended to trust robots more when the robots displayed human features, such as blinking and head turning [8]. The reason for this reaction is that humans tend to trust things that resemble themselves; a robot with human-like features that acts like a human earns a higher level of trust [8]. In that study, humans interacted with a robot named SociBot, which is capable of exhibiting human-like traits: it can blink, turn its head, and move its head to mimic facial expressions and other human gestures, such as nodding. The robot exhibited different human characteristics, and the participants' level of trust and reactions were recorded [8]. The following table lists the cues that were used and their conditions.
Table 1: Interactive Cues Used by Robots
Condition                      Number of interactive cues   Head mimicry   Appropriate time for praise
No interactive social cues     0                            no             no
Low interactive social cues    1                            yes            no
High interactive social cues   2                            yes            yes
The robot asked the humans to complete basic tasks while exhibiting different levels of interactive social cues. The results of the study were predictable: when the robot nodded its head at random times and showed facial expressions, humans were more compliant with the tasks it asked them to complete. The tasks were basic, such as standing on one leg or answering simple questions. Humans reacted more positively and trusted the robots more when the robots exhibited more interactive social cues; with more cues, they saw the robot more as a person than as a machine [8].
Robot Interactions with Humans
The idea that robots may be capable of carrying out a full conversation with a human can be a tough concept to grasp, given the novelty of communicating with a machine. Robot-to-human interactions are often not direct; instead, the robot processes information it obtains from the human beings in its vicinity. Many recent examples of this form of communication come from the artificial intelligence devices present in many homes and on many phones, such as Amazon's Alexa, Google Home, and Apple's Siri. These devices can record information at all times when they are turned on, even when that information is not directed at them. Recently, such devices have been of assistance in criminal investigations, recording conversations that occurred before a murder or picking up words that suggest drug use. For example, if a digital assistant picks up cues leading it to believe a person is using drugs, it maps out the possible actions it could take, removes the conflicting demands, such as calling the police versus simply letting the person continue, and then follows the path that seems best, a middle ground between the conflicting choices [1]. The robot must make a decision based on its interaction with a human in this situation. An advanced robot might ask the person follow-up questions, but a less advanced one, such as Apple's Siri or Amazon's Alexa, may only record the data, storing it and possibly sending it to the authorities when certain keywords trigger the device to do so [1].
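Neither [1] nor the device vendors publish this decision logic, so the following sketch is purely hypothetical: the actions, scores, and trigger words are all invented. It is meant only to make the "remove the conflicting extremes, then pick a middle path" idea concrete.

    # Hypothetical sketch of the "conflicting demands" decision process
    # described above. None of this reflects how Alexa or Siri actually work;
    # the actions, scores, and keywords are invented for illustration.

    # Candidate actions the assistant could take, scored by how extreme they
    # are (0.0 = do nothing, 1.0 = most drastic intervention).
    ACTIONS = {
        "ignore": 0.0,
        "log_event": 0.3,
        "ask_follow_up": 0.5,
        "notify_authorities": 1.0,
    }

    TRIGGER_WORDS = {"drugs", "overdose"}  # invented example keywords

    def choose_action(transcript: str) -> str:
        """Pick a middle-ground action once conflicting extremes are removed."""
        words = set(transcript.lower().split())
        if not words & TRIGGER_WORDS:
            return "ignore"
        # Remove the two conflicting extremes (do nothing vs. call the police)...
        candidates = {a: s for a, s in ACTIONS.items() if 0.0 < s < 1.0}
        # ...and follow the path closest to the midpoint of the remaining options.
        return min(candidates, key=lambda a: abs(candidates[a] - 0.5))

    print(choose_action("I think he is doing drugs again"))  # -> "ask_follow_up"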
New technologies allow robots to recognize humans in an abstract way. A robot can detect a human being using its RGB-D sensor and establish the person's 3D position using an OpenNI tracker. If a person is turned backwards or sideways, the sensor may detect the human incorrectly or mistake them for something else, though this rarely happens. Increasing the contrast of the RGB-D data limits the number of errors that occur [2]. This method helps robots determine the position of a human or an object [3]: objects not needed in a given situation are filtered out, and the relevant objects or humans are factored into the robot's model of the scene [3].
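The 3D-position step rests on standard pinhole-camera geometry rather than anything specific to [2]. Below is a minimal sketch of that math, assuming invented camera intrinsics for a 640x480 depth sensor.

    # Minimal sketch of how a 3D position can be recovered from an RGB-D depth
    # image with the standard pinhole-camera model. This is generic textbook
    # math, not the specific pipeline of [2]; the intrinsics are invented.
    import numpy as np

    FX, FY = 525.0, 525.0   # focal lengths in pixels (assumed)
    CX, CY = 319.5, 239.5   # principal point (assumed, for a 640x480 sensor)

    def pixel_to_3d(u: int, v: int, depth_m: float) -> np.ndarray:
        """Back-project pixel (u, v) with depth in meters to camera coordinates."""
        x = (u - CX) * depth_m / FX
        y = (v - CY) * depth_m / FY
        return np.array([x, y, depth_m])

    # e.g. a person's torso detected at pixel (400, 250), 2.1 m away:
    print(pixel_to_3d(400, 250, 2.1))  # -> [0.322  0.042  2.1]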
Robot Emotions:
You and Robert describe in their article how humans prefer to work alongside robots that have a personality similar to their own. Programming robots to have a personality and a mind of their own is a new concept in robotics that may shock many, as people associate having a personality with a living organism that has a soul, not with a machine built from wires and metal parts. Building a machine with a soul seems impossible, since many associate a soul with real human beings, so saying that a man-made machine has a personality can seem nearly as impossible [6]. Newer robotic technology is used in everyday life, such as Microsoft Bands, Fitbits, and Apple Watches. These devices collect and analyze human physiological data, and can even detect emotions through heart rate. However, human emotions cover a large range of expression that cannot yet be represented by technology and applied to robots and
artificial intelligence [5]. Humans can express all emotions, and problems may arise if robots cannot detect all of them. Danger may arise if robots are viewed as being on the same level as humans when they can neither express nor detect a wide variety of emotions, a distinguishing characteristic of humans that all other species lack. In the future this may be re-addressed and robots may come to encompass all human emotions, but the technology that would permit that does not yet exist.
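As a rough illustration of what "detecting emotions through heart rate" with a neural network might look like, the sketch below classifies an emotion label from a one-minute window of heart-rate samples. Everything here, the window length, the label set, and the architecture, is assumed for illustration; the models in [5] are larger and fuse several sensor streams.

    # Hedged sketch of the kind of model [5] describes: a small neural network
    # classifying an emotion label from a window of heart-rate samples.
    import torch
    import torch.nn as nn

    WINDOW = 60        # one minute of heart-rate samples at 1 Hz (assumed)
    N_EMOTIONS = 3     # e.g. negative / neutral / positive (assumed labels)

    model = nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=5),   # detect local heart-rate patterns
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),           # average the features over time
        nn.Flatten(),
        nn.Linear(16, N_EMOTIONS),         # scores for each emotion label
    )

    # One training step on fake data, just to show the shapes involved.
    x = 70 + 10 * torch.randn(128, 1, WINDOW)   # heart rates in bpm
    y = torch.randint(0, N_EMOTIONS, (128,))    # fake emotion labels
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    print(model(x[:1]).softmax(dim=1))  # probabilities over the emotion labels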
Kanjo et al. introduce the application of deep learning to the incorporation of human-like capabilities into technological devices. Although an advanced topic, the deep learning process integrated into the construction of many robots can be described simply. Deep learning, the use of artificial neural networks modeled loosely on the human brain, is one of the newest technologies in the robotics field today and requires a high level of education and knowledge to work with. In a way, deep learning allows people to construct an approximate model of the brain, using networks of artificial neurons. These methods allow a robot to make decisions, keep a memory, and pick up background information [5]. In addition to these brain-like capabilities, related techniques allow robots to detect nearby objects [3]. This capability is associated with SLAM (Simultaneous Localization and Mapping), which gives the robot a form of vision of its own for detection. Robots equipped with these features and this way of seeing objects can interact with humans, but flaws will remain, given that the brain and human characteristics cannot be replicated perfectly with today's technology [3, 5].
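Reference [3] describes a full semantic visual SLAM system, which is far beyond a short example, but the "mapping" half of SLAM can be illustrated with a toy occupancy grid: a robot at a known pose marks the cells where its range sensor sees obstacles. The grid size and sensor model below are invented; real SLAM also estimates the pose itself.

    # A deliberately tiny illustration of the "mapping" half of SLAM: a robot
    # at a known pose marks grid cells as occupied from range measurements.
    # Real SLAM, including Dynamic-SLAM [3], also estimates the pose itself;
    # the grid size and sensor model here are an invented toy example.
    import math
    import numpy as np

    GRID = np.zeros((20, 20), dtype=int)   # 0 = unknown/free, 1 = occupied
    CELL = 0.5                             # cell size in meters (assumed)

    def integrate_scan(x: float, y: float, angles, ranges):
        """Mark the cell at the end of each range ray as occupied."""
        for theta, r in zip(angles, ranges):
            ox = x + r * math.cos(theta)   # obstacle position in world frame
            oy = y + r * math.sin(theta)
            i, j = int(oy / CELL), int(ox / CELL)
            if 0 <= i < GRID.shape[0] and 0 <= j < GRID.shape[1]:
                GRID[i, j] = 1

    # Robot at (5 m, 5 m) sees obstacles 2 m away at 0 and 90 degrees.
    integrate_scan(5.0, 5.0, angles=[0.0, math.pi / 2], ranges=[2.0, 2.0])
    print(GRID.sum())  # -> 2 occupied cells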
Construction of an Ethical Robot:
A set of legal rules or regulations exists for almost anything that can be brought to mind. Does that include robots? The number of robots in the world has recently increased rapidly with advances in electrical and computer technology, yet no set of laws or regulations currently governs them. Laws and regulations normally pertain to humans, who have the capacity to fully understand them and the means to follow them. As stated before, robots are not beings and do not have souls, and therefore should not be considered humans under any circumstances. Should they nonetheless have to follow a set of rules, or should potential legal liability fall back on the robot's creator, even for situations the creator did not account for? To be precise: if a self-driving car hit a person who was moving in a pattern the car was not programmed to stop for, would the programmer be responsible for not covering that situation? A set of laws must be created before self-driving cars arrive in everyone's driveway, in order to avoid such legal issues.
Roboethics, a new branch of ethics, is commonly defined as the ethics that applies to the designers and users of robots, not to the robots themselves. One argument holds that responsibility cannot be assigned to the actions of robots because, regardless of what a robot does, humans are responsible for whatever happens as a result of those actions [1]. This argument stands because robots are currently considered experimental technologies, a term for technologies that are still developing and are limited in many ways; robots are treated as experimental because they do not yet have all the capabilities needed to be completely sufficient [1]. On the other hand, some believe that robots that closely resemble humans should be granted rights similar to those of a human [7]. Others hold the contrary view, but implementing human features on a machine may lead people to treat and view the device as a human, not the machine it actually is [7]. Robots have risks
and benefits associated with them, but currently the risks outweigh the benefits. Automated robots are also commonly studied as a social experiment, because people are still adjusting to their presence and the results of their incorporation into society remain unknown [1].
When a human grants a robot full control of itself, as with a device that includes an autopilot, there is always an option to give complete control back to the user, but when should that control be handed back automatically? At what point should the robot say, "I cannot control this situation because I do not have the knowledge to take control successfully and cannot make the decision that needs to be made"? Complications that stem from coding errors direct attention to the programmers and companies that built the machine. The programming must be exact, or errors will arise and robots will not perform correctly in certain situations. Many scenarios in which errors in coding or mechanical engineering have harmed humans have already occurred, for example, recent defects in car manufacturing. Errors in artificial intelligence machines and robots may not only hurt users physically but also give them incorrect data and affect their lives in other ways. Ideally, robots would be coded to know what to do in every situation, but that is impossible due to the unpredictability of life. Deep-learning-based SLAM methods of programming a robot may help prevent some of these errors [3].
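The sources do not specify how an automatic handback would be implemented. One common pattern, sketched hypothetically below with an invented threshold and a stubbed confidence estimate, is to gate autonomy on the controller's self-assessed confidence.

    # Hypothetical sketch of a confidence-gated control handback, one common
    # way to implement "return control when the robot cannot handle the
    # situation." The threshold and perceive() stub are invented; this is not
    # how any particular autopilot works.
    import random

    CONFIDENCE_THRESHOLD = 0.8  # assumed: below this, the human takes over

    def perceive() -> float:
        """Stand-in for the autonomy stack's self-assessed confidence (0..1)."""
        return random.random()

    def control_loop(steps: int = 5) -> None:
        for t in range(steps):
            confidence = perceive()
            if confidence < CONFIDENCE_THRESHOLD:
                print(f"t={t}: confidence {confidence:.2f} too low, "
                      "handing control to the human")
                return  # autonomy disengages until the human re-enables it
            print(f"t={t}: confidence {confidence:.2f}, robot stays in control")

    control_loop()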
Algorithmic Bias in Robots:
Successfully creating a fully functioning artificial human that contains every aspect of the brain, with all the neurons and pathways the body uses to make decisions, is impossible, so what makes people believe they can code a human-like robot that can make such decisions? The idea of a robot that can substitute for a human already exists in the minds of many people, whether or not they are comfortable with it. Some aspects of a human cannot be replicated in a machine, such as the billions of neural pathways that enable the decision-making processes humans are capable of. In a robot, every behavior must be programmed, and the robot will not have the capabilities the human brain gives our species. As a result, current robots cannot perceive everything at the level required for them to be completely fused into the daily lives of humans [4, 9].
Recently, training methods have been used to prevent bias in these devices. Robots are taught to adapt to certain situations using penalization, which raises the question of whether a robot can even be penalized, given that it lacks the emotional capabilities of humans. Training aims to apply a punishment when a robot assumes something untrue about a certain thing (e.g., people, cars, colors, activities). This form of punishment is incorporated into the training programs of the bots through adversarial training: an adversarial network attempts to discriminate between two domains, one correct and one incorrect, and the bot is placed in a situation where it must choose the correct answer (the one that eliminates bias). If the robot chooses the incorrect domain, it is told it is incorrect as a penalty (no actual punishment) so that it learns to choose the correct domain in that situation [4]. This method can help prevent bias in future programming, but it must be perfected so that every case is covered and nobody is offended or hurt by a robot's actions due to a lack of training.
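Reference [4] does not publish its training code; the sketch below shows a generic adversarial-debiasing pattern consistent with the description above: an adversary tries to recover a protected attribute from the main model's output, and the main model is "penalized" in its loss whenever the adversary succeeds. All sizes and data are invented.

    # Hypothetical sketch of adversarial debiasing, not the specific training
    # setup of [4]. The adversary plays the role of the "incorrect domain"
    # detector; the predictor is penalized when the adversary succeeds.
    import torch
    import torch.nn as nn

    predictor = nn.Linear(8, 2)   # main task: 8 features -> 2-class decision
    adversary = nn.Linear(2, 2)   # tries to recover the protected attribute
    opt_p = torch.optim.SGD(predictor.parameters(), lr=0.1)
    opt_a = torch.optim.SGD(adversary.parameters(), lr=0.1)
    ce = nn.CrossEntropyLoss()

    x = torch.randn(64, 8)                 # fake input features
    y_task = torch.randint(0, 2, (64,))    # fake task labels
    y_prot = torch.randint(0, 2, (64,))    # fake protected attribute

    for _ in range(100):
        # 1) Train the adversary to guess the protected attribute from the
        #    predictor's (detached) output.
        adv_loss = ce(adversary(predictor(x).detach()), y_prot)
        opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

        # 2) Train the predictor on its task, minus the adversary's success:
        #    biased outputs that reveal the attribute are "penalized".
        out = predictor(x)
        pred_loss = ce(out, y_task) - ce(adversary(out), y_prot)
        opt_p.zero_grad(); pred_loss.backward(); opt_p.step()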
A newer method of robotic training and programming that has not yet been incorporated into many robots is Fair Proxy Communication (FPC), which ensures that decisions are not made on the basis of bias [9]. The robot becomes a neutral human-like intermediary that can be used in interviews and phone calls where decisions are made. Under this program the robot does not detect or convey a human's gender, race, or sexuality, eliminating the associated stereotypes, although this can become an issue in situations where race or gender must be known to proceed [9].
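Reference [9] implements FPC with a physical robot relaying speech, not software; still, the proxy idea can be sketched in code as a redacted view of a candidate, so that the decision rule, by construction, never sees protected attributes. The field names and decision rule below are invented.

    # Hypothetical sketch of the proxy idea behind FPC as summarized above:
    # the decision-maker only ever sees a redacted view of the candidate, so
    # gender and race cannot influence the outcome. [9] studies this with a
    # physical robot, not code; fields and scoring rule are invented.
    from dataclasses import dataclass

    PROTECTED_FIELDS = {"gender", "race"}  # attributes the proxy withholds

    @dataclass
    class Candidate:
        name: str
        gender: str
        race: str
        years_experience: int
        interview_score: float

    def proxy_view(c: Candidate) -> dict:
        """Return only the fields a fair proxy would pass along (the name is
        also withheld, since names can signal gender or ethnicity)."""
        return {k: v for k, v in vars(c).items()
                if k not in PROTECTED_FIELDS and k != "name"}

    def decide(view: dict) -> bool:
        """A decision rule that, by construction, never sees protected fields."""
        return view["years_experience"] >= 2 and view["interview_score"] >= 0.7

    c = Candidate("A. Doe", "female", "asian", 4, 0.85)
    print(decide(proxy_view(c)))  # -> True, based only on non-protected fields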
Conclusion:
The capabilities of robots are increasing rapidly. Present in many homes and workplaces, robots and humans constantly communicate, either directly or indirectly. Robots have the means to record and interpret data much as a human being would. They are becoming more technologically advanced, and more recently have begun to model internal systems that resemble the human brain. With all of the new advances in robotic technology, an ethical debate arises over whether to hold robots responsible for their actions, with the consequences currently directed at the engineers and programmers who constructed them. Robots cannot yet be fully integrated into the world because of current limitations, which include the inability to make split-second decisions and bias in certain situations. More research on deep learning must be conducted to further advance the technology so that the machines more closely resemble the actions of humans. Deep learning currently stands as the most successful method in robot studies, and the technology behind it will only progress. Another method that should be used in research more often is the study of reflexivity: evaluating how robots affect the lives of humans, the consequences of their actions, and how the technology is progressing. Currently, the benefits of having robots in the world are outweighed by the negative aspects that come along with their presence, so they are not as completely integrated into society as they could be.
Acknowledgements
I would like to thank Professor Cecelia Musselman for her advice and comments and Sienna Berlinger and
Nelle Lightbourn for their comments in their peer reviews that helped me improve my essay.
References
[1] Amigoni, F., Schiaffonati, V., "Ethics for Robots as Experimental Technologies: Pairing Anticipation with Exploration to Evaluate the Social Impact of Robotics." IEEE Robotics and Automation Magazine, 2018, 25: 30-36.
[2] Duckworth, P., Hogg, D. C., Cohn, A. G., "Unsupervised human activity analysis for intelligent mobile robots." Artificial Intelligence, 2019, 270: 67-92.
[3] Xiao, L., et al., "Dynamic-SLAM: Semantic monocular visual localization and mapping based on deep learning in dynamic environment." Robotics and Autonomous Systems, 2019, 117: 1-16.
[4] Gurau, C., Rao, D., Tong, C. H., Posner, I., "Learn from experience: Probabilistic prediction of perception performance to avoid failure." International Journal of Robotics Research, 2018, 37: 981-995.
[5] Kanjo, E., Younis, E. M. G., Ang, C. S., "Deep learning analysis of mobile physiological, environment and location sensor data for emotion detection." Information Fusion, 2019, 49: 46-56.
[6] You, S., Robert, L. P., Jr., "Human-Robot Similarity and Willingness to Work with a Robotic Co-worker." ACM/IEEE International Conference on Human-Robot Interaction, 2018, 251-260.
[7] Johnson, D., Verdicchio, M., "Why robots should not be treated like animals." Ethics and Information Technology, 2018, 20: 291-301.
[8] Ghazali, A., et al., "Assessing the effect of persuasive robots interactive social cues on users' psychological reactance, liking, trusting beliefs and compliance." Advanced Robotics, 2019, 33: 325-337.
[9] Skewes, J., et al., "Social robotics and the modulation of social perception and bias." Philosophical Transactions of the Royal Society B: Biological Sciences, 2019, 374: 1171.