Robot Law and AI Notes
Types of robots:
- medical robots
- electrical appliances
- industrial robots
- communication robots
- agricultural robots
- service robots
- construction robots
- transport robots
- transportation robots (designed to move people)
- drones
- robot suits
Industrial vs Service
- Industrial = “automatically controlled, reprogrammable, multipurpose manipulator, programmable in three or more axes, which can be either fixed in place or fixed to a mobile platform for use in automation applications in an industrial environment” (ISO 8373:2021)
- Service = “robot in personal use or professional use that performs useful tasks for
humans or equipment” (ISO 8373)
o Requires “a degree of autonomy” = “ability to perform intended tasks based on
current state and sensing, without human intervention”
Official EU definition of a robot
- There is none
- EU Parliament’s suggestion for a “smart robot”:
1. The acquisition of autonomy through sensors or by exchanging data with its
environment (inter-connectivity) and the trading and analyzing of the data;
2. Self-learning from experience and by interaction (an optional criterion);
3. At least a minor physical support (as opposed to virtual robots, e.g. software);
4. The adaptation of its behavior and actions to the environment; and
5. The absence of life in the biological sense
- EU Commission hasn’t really done anything with this suggestion
What is AI?
- A number of different branches of computer science which all use different techniques
call what they do AI:
o Machine learning
o Knowledge Discovery in a Database (KDD)
o Data Mining /Analytics
o Advanced Statistics
Defining AI
- How we define AI fundamentally impacts how we consider and use it societally
- “Intelligence displayed or simulated by code (algorithms) or machines” – Mark Coeckelbergh (AI Ethics, MIT Press)
o But then, how do we define “intelligence”?
- “The science and engineering of machines with capabilities that are considered
intelligent by the standard of human intelligence” – Philip Jansen et al
- “The term “artificial intelligence” has come to mean many different things over the course of its history, and may be best understood as a marketing term rather than a fixed object” – AI Now Institute
- “neither artificial nor intelligent” – Kate Crawford
What is intelligence?
- AI functions:
o Learning, perception, planning, natural language processing, reasoning, decision making, problem solving
o AI “seeks to make computers do the sort of things that minds can do”- Margaret
Boden
- Animals > Transhumanists?
History of AI
- Generally considered to have started in the 1950s
- 1950: Alan Turing published “Computing Machinery and Intelligence”
o Introduced the Turing test, and speculated more generally about machines that
could learn and think
- 1956: Dartmouth Workshop – Birthplace of contemporary AI:
o Embraced digital machines attempting to simulate human intelligence
o They thought it was just around the corner! Would take no more than one
generation!
- Strong AI vs Weak AI:
o Cognitive tasks that humans perform
o Vs performing specific tasks like playing chess/ classifying images etc
- Until the late 1980s – mostly symbolic AI
AI systems now
- Machine learning:
o Bears “little or no similarity to what might plausibly be going on in human heads”
– Boden
o A statistical process, often focused on pattern recognition
o “It is the data itself that defines what to do next” – Alpaydin
o Supervised vs unsupervised vs reinforcement (see the sketch after this list)
- Other systems:
o Computer vision, natural language processing, expert systems, evolutionary computation
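A minimal sketch of the supervised vs unsupervised distinction, using scikit-learn on a synthetic toy dataset (purely illustrative; reinforcement learning, which learns from reward signals through interaction, is not shown):

```python
# Sketch: supervised vs unsupervised learning on synthetic 2-D data.
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)

# Supervised: labels y guide the model toward a specific task.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: no labels; the algorithm searches for structure on its own.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised clusters: ", km.labels_[:5])
```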
AI Act definition
- AI system means software that is developed with one or more of the techniques and
approaches listed in Annex I and can, for a given set of human-defined objectives,
generate outputs such as content, predictions, recommendations, or decisions
influencing the environments they interact with;
- Annex I:
o Machine learning approaches, including supervised, unsupervised and
reinforcement learning, using a wide variety of methods including deep learning;
o Logic- and knowledge-based approaches, including knowledge representation,
inductive (logic) programming, knowledge bases, inference and deductive
engines, (symbolic) reasoning and expert systems;
o Statistical approaches, Bayesian estimation, search and optimization methods
Post-truth?
- Disinformation
- What is real?
- Who is real?
- Illusions of companionship
- The deterioration of personal relationships
Security
- What happens if the software is hacked?
- Military applications
- The more infrastructurally reliant, the greater the consequences
Nobody’s fault
- A machine with agency and power but no responsibility
- Who should be responsible?
o Who had knowledge?
o Who has the duty of care?
o How can someone predict outcomes?
o Is it even transparent what happened?
- What is more important – performance or explainability?
A car kills someone
- Who is responsible?
o The programmer?
o The driver?
o The car company?
o Other drivers?
o The state?
- Algorithm interacts with:
o Sensors
o Data
o Hardware
o Software
- ML has various stages:
o Data collection
o Training
Bias
- Arises in:
o The training data
Unrepresentative/ incomplete
o The algorithm
o The input data provided post-training
o In correlation-based decisions
- An unrepresentative dataset may then be used to make predictions for the entire population (see the sketch after this list)
o ImageNet draws heavily on US data, while China and India (a massive share of the world’s population) represent only a very small subset of the training data
- Too little data for some issues:
o Eg murder prediction – not that many murders!
- Developers/ data scientists disproportionately Western white men aged 20-40
- Even where inference is true, is it always fair?
o If person’s family / friends are criminals, should they receive a harsher sentence?
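A short sketch, on entirely synthetic data, of how an unrepresentative training set produces skewed error rates for the under-represented group (groups, features and numbers are hypothetical):

```python
# Sketch: a model trained mostly on group A performs worse on the
# under-represented group B. Synthetic data; all numbers hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(group_mean, n):
    """Draw n examples from one group; each group has its own decision rule."""
    X = rng.normal(group_mean, 1.0, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 2 * group_mean).astype(int)
    return X, y

# Training set: 950 examples from group A, only 50 from group B.
Xa, ya = sample(0.0, 950)
Xb, yb = sample(3.0, 50)
clf = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Per-group accuracy: the gap reflects the unrepresentative training data.
Xa_test, ya_test = sample(0.0, 1000)
Xb_test, yb_test = sample(3.0, 1000)
print("accuracy on group A:", clf.score(Xa_test, ya_test))  # high
print("accuracy on group B:", clf.score(Xb_test, yb_test))  # near chance
```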
Authorship
- Not a lot of EU law around authorship
- Mostly dealt with at a national level
- Generally, if more than one author collaborates on a work and their individual
contributions can’t be separated, they will be considered co-authors
o Most national laws require a common design between authors, such that there is a “concerted creative effort”
- Only creatively active persons will be considered authors
- So, who is the author of AI-assisted work?
o Users?
o Developers?
- For many Member States, there is a presumption of authorship for the person indicated as author, unless proven otherwise
Disinformation
- Different from misinformation, since it is intentional
- Threatens:
o Democracy
o Public health
o National security
o Racial equality
- Creates echo chambers and filter bubbles
Algorithmic power
- Due to algorithms on social media, spread of disinformation is massively amplified
- Usually, algorithms are black boxes
Facebook Algorithm
- Aligned with Facebook policy, rather than content users might be interested in
- Designed to recommend content that received most clicks and likes
- This led to “clickbait”
- Redesigned to promote “meaningful social interactions” with large amounts of
comments and replies
- Led to posts that primarily anger and offend users
- Troll farms now exploit this algorithm to create fake news, which generates more
clicks/ ad revenue
Deepfakes
- Visual
o “an Israeli soldier committing an atrocity against a Palestinian child, a
European Commission official offering to end agricultural subsidies on the eve
of an important trade negotiation, and a Rohingya leader advocating violence
against the security forces in Myanmar” – Chesney & Citron
- Textual
o Readers found AI-generated articles to be more convincing than those written by human beings – Robitzski
How to tackle disinformation
- Transparency
o Disinformation algorithms must be made public
- Intelligibility
o Must be intelligible to users
- Accountability
o Platforms should be accountable
- Should there be a Disinformation regulatory body that presides over this?
Regulatory Landscapes
- US has no laws regulating algorithmic disinformation
o Rather, relies on self-regulation
o Market-based approaches
- Some EU countries are bringing about legislation
o E.g. France- “Manipulation of Information Law”
o A number of transparency provisions – yet some commentators feel its impact has so far been limited, as it provides for little accountability
- China has more stringent legislation
o Transparency requirements with greater accountability for platforms, along
with requirements to offer users greater choice
Regulatory Challenges
- Transparency may be of limited value to your average user
o Users will “seek out information about a system, interpret it, and determine its
significance, only then to find out they have little power over things anyway” –
Edwards & Veale
- Freedom of expression
Lecture 4- Privacy, Surveillance and Labour
Current Uses
- Emotional recognition
- Search suggestions/ internet guidance
- Personality predictions
- Recidivism prediction
- Recommending everything
- Credit/ insurance
- Health diagnoses
- Fraud prediction
- Policing
- Journalism
- Education/ Toys
Quick Europe Recap
- Court of Justice of the EU (CJEU):
o EU entity
o It’s the court of justice of the EU;
o Checks the application of the EU treaties, including the Charter of Fundamental Rights of the EU (EU Charter)
- European Court of Human Rights
o NOT EU entity
o It’s the international body that controls the application of the European
Convention for Human Rights (ECHR) created by the Council of Europe (also
NOT an EU entity)
o All EU member states are parties to the ECHR; the EU itself is not yet a party (accession is foreseen but has not happened)
- European Data Protection Board (EDPB)
o EU entity
o Independent authority established by the GDPR
o Coordinates the application of the GDPR
o Coordinates the national Data Protection Authorities (DPAs)
o Issues guidelines and opinions that are very important to understand how to
apply the GDPR
o It used to be called “Article 29 Working Party” under the previous regime;
those opinions still apply in many cases
Privacy vs Data Protection
- ECHR: privacy / respect for private life (Art 8)
- EU Charter: respect for private life (Art 7) and protection of personal data (Art 8)
The General Data Protection Regulation (“GDPR”)
- Only applies to “personal data” (Art 2(1))
- Personal data is any information that relates to an identified or identifiable living individual. Different pieces of information which, collected together, can lead to the identification of a particular person also constitute personal data
- Personal data that has been de-identified, encrypted or pseudonymized but can be used to re-identify a person remains personal data and falls within the scope of the GDPR (see the sketch after this list)
- Processing of personal data only lawful in certain circumstances
o Consent is (perhaps) the most well-known legal basis, but not the only one
o Can also be performance of a contract, for the public interest, or a number of
other legitimate purposes
- GDPR also has a broad data minimization principle (“collect no more data than
necessary”)
o Collection limitations
o Purpose limitations
o Storage limitations
- Sensitive “special” data receives a heightened level of protection and processing is
only allowed in a more limited set of circumstances (see Art 9)
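A small sketch of why pseudonymized data can remain personal data: a keyed hash replaces the direct identifier, but whoever holds the key (or a mapping table) can still re-identify the person (the key and identifier below are hypothetical):

```python
# Sketch: pseudonymization with a keyed hash (HMAC). The key holder can
# still re-identify individuals, so the data stays within the GDPR's scope.
import hmac
import hashlib

SECRET_KEY = b"kept-by-the-data-controller"  # hypothetical key

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a stable keyed pseudonym."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"user": pseudonymize("alice@example.com"), "energy_kwh": 312}
print(record)

# Re-identification: with the key, a candidate identity can be confirmed.
assert record["user"] == pseudonymize("alice@example.com")
```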
Has Data Minimization been successful?
- Depends on who you ask
- Core issues have been:
o Difficulty in enforcement (overburdened enforcement agencies)
o Inherent ambiguity interpreting “necessity” and “proportionality”
What qualifies as a legitimate interest of data processing?
Maximizing advertising revenue?
How much data is needed to do so? Who decides?
How much data is needed for security?
o Limited case law on many of these questions
Swedish Data Protection Authority outlawed facial recognition in schools as contrary to the collection limitation principle
Monitoring attendance was not a sufficient justification
“Smart” Devices and the Internet of Things
- AI can correlate collected data with other publicly available information to generate
insights on consumers incl:
o Age
o Demographic
o Household income
- Function creep
o E.g. a smart energy meter may only track energy usage and the make & model of appliances, but then use that data to predict your age / demographic / household income
GDPR Success
- App Drivers and Couriers Union successfully sued Uber, claiming they had rights to see
algorithms which determined their pay/ “fraud probability scores” / other profiling
and data held about them
Proposals for Stricter Rules?
- Fewer broad normative ideas that require judicial analysis
- US proposals:
o Calls to ban the use of data for targeted advertising
o Alternatively, limiting the use of sensitive data for all secondary purposes, including advertising
o Banning biometric collection of children
o Banning biometric collection in specific contexts (workplaces, schools,
employment)
Biometric Surveillance
- Continues to be quietly embedded in daily public life without proper scrutiny
- Wide array of industries and contexts:
o Delivery drivers tracked for emotion recognition, “attractiveness” and
potentially aggressive behavior
o Productivity monitoring software tracking eye movement to police attention of
employees
o Call center employees’ tone and pitch monitored for emotion recognition and checked for anger/ frustration
o “Mobile neuroinformatics solutions” measuring cognitive states and providing
feedback on “cognitive performances and needs”
- Huge commercial rollouts, despite significant lack of scientific evidence
Automotive industry
- “In-cabin” cameras and sensors to detect fatigue and distraction
- Can then share data with insurance companies and law enforcement
- Where once automakers sought to use their own tech or contract with smaller
companies so as to retain control, they are now increasingly partnering with Big Tech
Virtual Reality
- A similar pattern playing out in AR/VR landscapes
- Meta is also mining body information from its new headsets when users are in the metaverse
“Safety” and “Productivity”
- Systems which were formerly “facial recognition” systems are now increasingly being
marketed as necessary for safety and productivity
Spyware
- European Council published a draft of the European Media Freedom Act
- Would allow use of spyware against journalists if national government considers it
necessary
- Last year, Pegasus spyware was found on the phones of three prominent journalists in France
- Is use of spyware a proportional measure against national security threats?
Affect Recognition
- UK Information Commissioner’s Office:
o “As it stands, we are yet to see any emotion AI technology develop in a way that satisfies data protection requirements, and have more general questions about proportionality, fairness and transparency in this area”
Predictive Privacy
- Is making a prediction about an individual itself a breach of their privacy?
- Crucially, analysis of the data of one individual can negatively impact others
- Much AI data analysis is organized by “pattern matching” input data against a huge
amount of other data points
o E.g. all FB users who declare their sexual orientation form a training data set from which the sexual orientation of all users can be predicted (see the sketch after this list)
- As such, should the individual be allowed to decide in every respect what data they
provide to data behemoths?
- “Predictive privacy of an individual or group is violated when personal information is
predicted about them without their knowledge or against their will, in such a way that
it could result in unequal treatment of an individual or group” (Muhlhoff)
- Privacy as a public value for democratic political participation, and a collective value as
the privacy of the individual now affects all other individuals (Regan)
- “Once a predictive model is created- and there are currently no effective legal
restrictions on this – it can be applied to millions of users in an automated way with
almost no marginal cost” (Muhlhoff)
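A minimal sketch of this pattern-matching mechanism on entirely synthetic data: a model trained on the minority of users who disclose a trait is then applied, at almost no marginal cost, to everyone else (the trait and like-patterns here are hypothetical):

```python
# Sketch: inferring an undisclosed trait from behavioural data ("likes").
# Synthetic data; the trait and the like-patterns are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_users, n_pages = 10_000, 50

trait = rng.integers(0, 2, n_users)            # hidden attribute of each user
# Users with the trait are somewhat more likely to "like" certain pages.
like_prob = np.where(trait[:, None] == 1, 0.6, 0.3)
likes = (rng.random((n_users, n_pages)) < like_prob).astype(int)

disclosed = rng.random(n_users) < 0.1          # only 10% disclose the trait

# Train on the disclosing minority, then predict for everyone else.
model = LogisticRegression().fit(likes[disclosed], trait[disclosed])
predicted = model.predict(likes[~disclosed])
accuracy = (predicted == trait[~disclosed]).mean()
print(f"trait inferred for non-disclosing users with accuracy {accuracy:.0%}")
```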
Unexpected Correlations
- Predicting depression, psychosis, diabetes or high blood pressure (Merchant et al,
2019)
- FB predicting suicidal users (Goggins)
o Edtech that monitors children and predicts whether they are unwell, outing students to their parents/ teachers
- FB “likes” to predict “a range of highly sensitive personal attributes including sexual
orientation, ethnicity, religious and political views, personality traits, intelligence,
happiness, use of addictive substances, parental separation, age and gender” (Kosinski
et al, 2013)
Bad Machine Decisions
- Credit ratings based on personal attributes
- People unable to get jobs based on personal attributes (e.g. pregnancy)
- Psychological/ socio-economic factors exploited so that vulnerable people are
targeted for advertising/ other behavioral manipulation
- Hyper-nudging
- GDPR does not combat this
The Limits of Anonymisation
- Promises of anonymization used to leverage consent from users
- Anonymised data does not fall within the scope of the GDPR
- GDPR is focused on the data of the individual (applies to “personal data”)
o Fundamental rights are rights of the individual
- The individual rights of the GDPR:
o Art 15- Right of Access
o Art 16- Rectification
o Art 17- Erasure
o Art 18- Restriction of Processing
o Art 20- Portability
A New Threat
- A means of violating the privacy of the individual without any kind of data theft
- Does it therefore demand a new type of protection?
Possible Improvements
- Restrict availability of consent to situations where consequences solely affect that
user
- Render legal bases of consent insufficient for users where data will be linked to other
people’s data
- Establish collectivist rights, such that groups affected can also assert rights
Workplace Surveillance
- Post-pandemic, workplace surveillance has increased
o Remote working
o Blurred work/home life
o Workplace technologies integrated into personal devices/ spaces
- Low wage workers affected the most radically
- Surveillance tech claims to be:
o Anti-discrimination
o Helpful for management
o Efficiency increasing
o Better at finding workers/ analyzing CVs
Surveillance Interoperability
- Defined by Karen Levy
- Combination of government data collection, corporate surveillance and third party
harvesting
- Data about others aggregated and used to predict “risk assessments” of workers, such
as likelihood that an employee will commit fraud
- Systems are riddled with flaws, yet make decisions about workers
Proposed EU Platform Work Directive
- Promises platform workers:
o Worker Access to Data
o Algorithmic Transparency
o Contestability
Humanyze
- Employees carry sociometric badges (“socio-meters”)
- These collect data points on how employees behave and communicate in the workplace
- Data is analyzed to draw inference on performance, patterns etc using “Emotional AI”
(EAI)
- Analysis then used to make recommendations
- Promises to be 100% anonymous
Policy Suggestions
- Focus less on how data is collected and what happens to that data, and rather on the
actual outcome of the data usage
o This is especially important, given increased reliance on anonymization,
synthetic data production and other ML tools which allow data controllers to
say that they are no longer using individual’s personal data
- Collective rights, not just individual rights
o Include considerations of collective harm
o As such, algorithms can be considered for their effect on collective control
o As such, broader structural inequities can be remedied that do not rely on
proving harm towards an individual
Training Labor
- Hard to predict wages
- The work constantly shifts geographically, following where demand comes from
- Pay is determined by a surge-pricing mechanism which adjusts for how many annotators are available and how quickly data is needed (see the sketch at the end of this section)
- Pay can be as low as $1-$3/ hr
- Hours of unpaid training required which can then lead to no work
- Every task could be your last, and unclear when work will come in
- Boom-and-bust nature of AI, where huge amounts of data required for initial training,
then no more training needed after this stage
- As such, workers stay up for days without sleeping to take advantage of surges
- Incentivizes beating the system or getting bots to train each other
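A toy sketch of such a surge-pricing mechanism (the formula, caps and numbers are hypothetical, not any platform’s actual pricing):

```python
# Toy surge pricing for annotation work: the hourly rate rises with urgency
# and task demand, and falls as more annotators are online. Hypothetical.
def task_rate(base_rate: float, urgency: float,
              annotators_online: int, demand_tasks: int) -> float:
    supply_pressure = demand_tasks / max(annotators_online, 1)
    return base_rate * urgency * min(max(supply_pressure, 0.25), 4.0)

print(task_rate(3.0, urgency=1.5, annotators_online=800, demand_tasks=200))
# quiet period: ~$1.13/hr
print(task_rate(3.0, urgency=1.5, annotators_online=100, demand_tasks=400))
# surge: capped at $18.00/hr
```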
Lecture 5- AI Act
Why should use of AI be regulated?
- Using AI is not neutral; it has an effect
o On what is done itself
o On the responsibility, accountability, liability for what is done
o On the power balance between those involved
- Use of AI may have an effect on human rights:
o Freedom to share and receive information (Art 10 ECHR)
o Freedom of thought, conscience and religion (Art 9 ECHR)
o Equal treatment (Art 14 ECHR)
o Privacy (Art 8 ECHR)
o Data protection (Art 8 ECHR, Art 8 Charter)
o Access to justice (Art 6 ECHR)
Not necessarily just negative
- Use of AI may have an effect on human dignity
o Undermine free will
o Reduce or eliminate autonomy
o Instrumentalization of humans
But it may of course also be very overpowering
- AI itself may develop into an existential threat for humanity
Discrimination
- Differentiate between people on suspect grounds, such as:
o Race
o Sex
o Religion
o Belief
o Sexual orientation
o Disability
- Direct discrimination:
o Explicitly on one such ground; as a rule: forbidden
- Indirect discrimination:
o Not directly on such a ground, but the effect is that the group of people
defined by such a characteristic is disproportionally affected
E.g. part-time workers (mostly women), people in a certain
neighborhood (ethnicity); these characteristics are called proxies
Not forbidden if there is an objective justification
Burden of proof
o Proxies allow algorithms to identify such groups, so you don’t even need the suspect data to discriminate indirectly; but you do need the suspect data to prove indirect discrimination (see the sketch after this list)
- Problem:
o There is widespread discrimination in society:
Fewer women in high-paid jobs, more women doing unpaid work
Over-representation of certain ethnic groups in crime stats
Social stratification on the basis of race and religion
o Accurate, representative data will reflect this bias and AI will continue it
because:
It is true that women have a lower chance of being successful in a high-
paid job
It is true that an immigrant boy growing up in a certain neighborhood
has a higher chance of ending up in drugs and crime
- Accurate data is biased, unbiased data is inaccurate
- Human dignity, fairness, justice require that we judge people on their own merits and
not their background or the color of their skin
- This is the problem of making predictions about people’s behavior in risk assessments
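A short sketch of proxy-based indirect discrimination on synthetic data: the model never sees the protected attribute, yet its decisions track it through a correlated “neighborhood” proxy:

```python
# Sketch: indirect discrimination via a proxy. The model is trained only on
# "neighborhood", which correlates with the protected group. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000

group = rng.integers(0, 2, n)                  # protected attribute (never given to the model)
# Proxy: 80% of each group lives in "their" neighborhood.
neighborhood = np.where(rng.random(n) < 0.8, group, 1 - group)

# Historical outcomes reflect existing inequality between the groups.
income = rng.normal(50 - 10 * group, 5, n)
approved = (income + rng.normal(0, 5, n) > 45).astype(int)

# Train on the proxy alone - no suspect data needed to discriminate.
clf = LogisticRegression().fit(neighborhood.reshape(-1, 1), approved)
pred = clf.predict(neighborhood.reshape(-1, 1))
print("approval rate, group 0:", pred[group == 0].mean())  # ~0.8
print("approval rate, group 1:", pred[group == 1].mean())  # ~0.2
```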
Self-fulfilling prophecies, feedback loops
- Insight from Ionica Smeets
o Say that in a state we have two groups; Redhats and Bluehats
o Crime rates reveal that 51% of found crime is committed by RH and 49% by BH
o The police investigate 100 people each month: 51 RH and 49 BH
o 51% of RH are criminal, so 26 of the investigated RH are found; 49% of BH are criminal, so 24 of the investigated BH are found
o The next month, 52 RH and 48 BH are investigated
o After 2 years, 73% of found crime is committed by RH, so found crime rates rapidly become more skewed
o After 7 years, 97% of the found criminals are RH and only 3% are BH – even though only 51% of RH are in fact criminal; they just have a higher risk of being found
- Resources are scarce, so you investigate the people that you think are more likely to commit crime; and indeed, in this way, you find more crime (simulated in the sketch below)
o This is the problem with predictive policing
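A minimal simulation of this feedback loop, using only the rates from the example above; it reproduces the 73% and 97% figures:

```python
# Simulation of the Redhat/Bluehat feedback loop. Assumptions from the
# example: 51% of investigated RH and 49% of investigated BH turn out to be
# criminal; the 100 monthly investigations are allocated in proportion to
# each group's share of previously found crime.
share_rh = 0.51  # initial share of found crime attributed to RH

for month in range(1, 85):
    investigated_rh = 100 * share_rh
    investigated_bh = 100 - investigated_rh
    found_rh = 0.51 * investigated_rh
    found_bh = 0.49 * investigated_bh
    share_rh = found_rh / (found_rh + found_bh)
    if month in (24, 84):  # after 2 years and after 7 years
        print(f"month {month}: {share_rh:.0%} of found crime is RH")
```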
Latest EP version
- “Artificial Intelligence System” (AI system) means a machine-based system that is
designed to operate with varying levels of autonomy and that can, for explicit or
implicit objectives, generate outputs such as predictions, recommendations, or
decisions that influence physical or virtual environments
o Plus: Generative AI added as a foundation model, Art. 28b
Risk-Based Approach
All that contradicts EU values is prohibited (Title II, Article 5)
- Subliminal manipulation resulting in physical/psychological harm
- Exploitation of children or mentally disabled people resulting in physical/
psychological harm
- General purpose social scoring
- Remote biometric identification for law enforcement purposes in publicly accessible spaces (with exceptions)
High-risk AI systems
- Certain applications in the following fields:
o Safety components of regulated products
Which are subject to third-party assessment under the relevant
sectorial legislation
o Certain (stand-alone) AI systems in the following fields
Biometric identification and categorization of natural persons
Management and operation of critical infrastructure
Education and vocational training
Employment and workers management, access to self-employment
Access to and enjoyment of essential private services and public
services and benefits
Law enforcement
Migration, asylum and border control management
Administration of justice and democratic processes
Requirements for high-risk AI (Title III, chapter 2)
- Establish and implement risk management processes and, in light of the intended purpose of the AI system:
o Use high-quality training, validation and testing data (relevant, representative
etc)
o Established documentation and design logging features (traceability &
auditability)
o Ensure an appropriate degree of transparency and provide users with information (on how to use the system)
o Ensure human oversight (measures built into the system and/or to be
implemented by users)
o Ensure robustness, accuracy and cybersecurity