Robot Law and AI Notes

Lecture 1- Foundations of AI & Robots

What is a robot? What is AI?


- Joseph Engelberger, the father of robotics
o Apparently, in the 60s: “I don’t know how to define one, but I know one when I
see one!”
Many different types of robots
- Industrial robots
- Service robots
- Medical robots
- Electrical appliances
- Communication robots
- Indoor robots
- Agricultural robots
- Construction robots
- Field robots
- Disaster response robots
- Transport robots
- Transportation robots (designed to move people)
- Drones
- Robot suits

Industrial vs Service
- Industrial = “automatically controlled, reprogrammable, multipurpose manipulator,
programmable in three or more axes, which can be either fixed in place or fixed to a
mobile platform for use in automation applications in an industrial environment”
(ISO 8373:2021)
- Service = “robot in personal use or professional use that performs useful tasks for
humans or equipment” (ISO 8373)
o Requires “a degree of autonomy” = “ability to perform intended tasks based on
current state and sensing, without human intervention”
Official EU definition of a robot
- There is none
- EU Parliament’s suggestion for a “smart robot”:
1. The acquisition of autonomy through sensors or by exchanging data with its
environment (inter-connectivity) and the trading and analyzing of the data;
2. Self-learning from experience and by interaction (an optional criterion);
3. A minimum of physical support (as opposed to virtual robots, e.g. software);
4. The adaptation of its behavior and actions to the environment; and
5. The absence of life in the biological sense
- EU Commission hasn’t really done anything with this suggestion
What is AI?
- Several different branches of computer science, all using different techniques, call
what they do AI:
o Machine learning
o Knowledge Discovery in a Database (KDD)
o Data Mining /Analytics
o Advanced Statistics

Defining AI
- How we define AI fundamentally impacts how we consider it/ use it societally
- “Intelligence displayed or simulated by code (algorithms) or machines” – Mark
Coeckelbergh (AI Ethics, MIT Press)
o But then, how do we define “intelligence”?
- “The science and engineering of machines with capabilities that are considered
intelligent by the standard of human intelligence” – Philip Jansen et al
- “ The term “artificial intelligence” has come to mean many different things over the
course of its history, and may be best understood as a marketing term rather than a
fixed object” – AI Now Institute
- “neither artificial nor intelligent” – Kate Crawford
What is intelligence?
- AI functions:
o Learning, perception, planning, natural language processing, reasoning, decision
making, problem solving
o AI “seeks to make computers do the sort of things that minds can do”- Margaret
Boden
- Animals> Transhumanists?
History of AI
- Generally considered to have started in the 1950s
- 1950: Alan Turing published “Computing Machinery and Intelligence”
o Introduced the Turing test, and speculated more generally about machines that
could learn and think
- 1956: Dartmouth Workshop – Birthplace of contemporary AI:
o Embraced digital machines attempting to simulate human intelligence
o They thought it was just around the corner! Would take no more than one
generation!
- Strong AI vs Weak AI:
o Strong AI: the full range of cognitive tasks that humans perform
o Weak AI: performing specific tasks like playing chess/ classifying images etc
- Until the late 1980s – mostly symbolic AI
AI systems now
- Machine learning:
o Bears “little or no similarity to what might plausibly be going on in human heads”
– Boden
o A statistical process, often focused on pattern recognition
o “The data itself that defines what to do next” – Alpaydin
o Supervised vs unsupervised vs reinforcement (see the sketch after this list)
- Other systems:
o Computer vision, natural language processing, expert systems, evolutionary
computation
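
A minimal sketch of the supervised variant named above, using scikit-learn (an assumed tooling choice; the lecture names no library). The point is Boden's and Alpaydin's: the decision rule is inferred statistically from labeled examples, not hand-written as symbolic rules.

    # Supervised learning sketch (assumed tooling: scikit-learn).
    # Toy features: [hours_of_operation, error_count] -> needs_repair?
    from sklearn.tree import DecisionTreeClassifier

    X_train = [[1, 0], [2, 1], [8, 5], [9, 7]]
    y_train = [0, 0, 1, 1]  # human-supplied labels: the "supervision"

    model = DecisionTreeClassifier().fit(X_train, y_train)
    print(model.predict([[7, 6]]))  # -> [1]; a pattern generalized from data

In the unsupervised variant there are no labels (the algorithm finds structure on its own); in reinforcement learning the signal is a reward received from acting in an environment.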
AI Act definition
- “AI system” means software that is developed with one or more of the techniques and
approaches listed in Annex I and can, for a given set of human-defined objectives,
generate outputs such as content, predictions, recommendations, or decisions
influencing the environments they interact with
- Annex I:
o Machine learning approaches, including supervised, unsupervised and
reinforcement learning, using a wide variety of methods including deep learning;
o Logic- and knowledge-based approaches, including knowledge representation,
inductive (logic) programming, knowledge bases, inference and deductive
engines, (symbolic) reasoning and expert systems;
o Statistical approaches, Bayesian estimation, search and optimization methods

Relationship between robots and AI


- AI is increasingly being utilized in robots to expand their functionality
- Enables robots to sense/ respond to their environment
- Optimises performance/ saves money
- Puts AI into a tangible real space, rendering it physically mobile
- Sometimes referred to as “embodied AI”
- Is AI ever actually not embodied? All AI software needs hardware to run
The problems
- Bias and Discrimination
- Privacy
- Taking jobs
- Inequality
- Military applications
- The environment
- Concentration of power
- Opacity of results
Abstraction from reality
- At every stage, we make choices how to represent our reality in these systems
o There is no objective or neutral representation of reality!
- How then do we ascertain whether a model is a “good” representation of reality? By
what metric?
ImageNet (from Atlas of AI)
- WordNet is a database developed in 1985 that organized the entire English language
into synonym sets – “synsets”
- ImageNet took just the nouns from WordNet to develop nine categories:
o Plant, geological formation, natural object, sport, artifact, fungus, person, animal, misc
o E.g. Natural Object – Body – Human Body:
 Human Body has subclasses including “adult body”, “male body” and
“female body”
 What does this mean for consideration of gender?
 95% of papers on gender detection treat gender as a binary
o 2,832 subcategories for Person, including race, age, nationality, profession,
economic status, behavior, character and morality
 Includes Debtor, Boss, Acquaintance, Brother, Color-Blind Person
 Also included Drug Addict, Pervert, Slut, Spastic and many other slurs,
until ImageNet had to remove 56% of its categories amidst outrage
o As such, inbuilt politics / morality within a system
- ImageNet’s creators harvested its images from Google, then used Mechanical Turk
workers to label them en masse
- The UTKFace dataset has two genders and five races: White, Black, Asian, Indian, Others
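
The synset structure described above can be inspected directly. A small sketch using NLTK's WordNet interface (an assumed tool; it queries WordNet itself, not ImageNet):

    # Browsing WordNet "synsets", the noun hierarchy ImageNet was built on.
    # Assumes nltk is installed and the corpus downloaded: nltk.download("wordnet")
    from nltk.corpus import wordnet as wn

    for s in wn.synsets("body", pos=wn.NOUN)[:3]:
        print(s.name(), "-", s.definition())

    # The nesting the notes describe (natural object -> body -> human body);
    # lemma name assumed to exist in the installed WordNet version:
    hb = wn.synsets("human_body")[0]
    print(hb.hypernym_paths()[0])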
Training an LLM
- Massive numbers of people labelling data/ training it when it gets confused
- This is organized by worker-facing subsidiaries of multibillion-dollar data vendors, who
are contracted by companies like OpenAI and the military
- Collect as much labeled data as possible as cheaply as possible
- Due to massive amount of “annotation” required, needs to be crowdsourced
- Despite hope that eventually “annotation” won’t be necessary, the more AI is deployed
into the real world, the more “edge cases”
- “Edge cases” are failures that occur when system encounters something not well
represented in its training data
o E.g. an Uber self-driving car killed a woman in 2018 because it did not
understand a person walking with a bike
Annotation
- Examples (Dzieza):
o Classifying emotional content of videos
o Categorising emotional content on phone calls
o Whether things are spam
o Sexual provocativeness of ads
o Labelling credit card transactions
o Auditing quality of e-commerce recommendations
o Correcting customer service chatbots
o Identifying corn for autonomous tractors
- Annotation is big business!
- Usually, companies buying the data demand confidentiality
o Annotators told to keep quiet!

Who makes the decisions?


- Core advantages in AI power (per AI Now Report):
o Data advantage:
 Who has extracted the most data
o Computing power advantage
 Material dependencies
 Labor dependencies
 E.g. Microsoft has been penalizing customers for developing competitors
to GPT-4, limiting access to Bing
o Geopolitical advantage
 AI as instrument of US-China Tech race
The shaping of AI policy and power through capital
- Lobbying
- Staffing and recruiting from government
- Creating biased information
- Directing the politics of employees and contractors
- Power derived from being too big to fail
(from AI Now Report)
What is a robot/ AI’s legal designation?
- Law divides things into:
o Sources of obligation
 Commodities – objects of the law; therefore, can have legally binding
effects
 E.g. I broke my neighbour’s TV (the TV is a commodity)
 If not a commodity, not recognized by the law
o Actors
 Subjects whose actions have legal consequences
o Persons
 Hold rights and duties
 Natural persons like humans
 Legal persons like companies
How to regulate AI?
- International
- Regional
- National
- Local
o Many different regulatory areas affected:
 Human rights
 Markets
 Liability
 Privacy/ data protection
 Safety & Security/ Labor
o Should this all be considered AI law, or should it be dealt with specifically in
different regulatory spaces?
o Different approaches between EU & US

The push for no or self-regulation


- Software Alliance opposes legal liability for General Purpose AI models
- Regulation would impede innovation!
- Others argue that it should be regulated by the industry itself
The constraints of EU law
- EU can only operate based on powers established in the TFEU (e.g. Art. 114, relating to
the internal market)
- Subsidiarity on some matters + matters adjacent to those listed
- Needs to justify the basis and type of intervention (such as whether using a Directive or
a Regulation)
o Directives: for harmonizing. Need to be implemented in national law
o Regulations: immediately applicable
- Therefore, many constraints!
Relevant EU laws across lifecycle
- Concept and design / Manufacturing and Transport:
o Establishing obligations for producers and operators, for product safety & market
monitoring:
 GDPR
 Data Act
 AI Act
 General Product Safety Directive/ Regulation
 Cyber Resilience Act
 Sector specific (machinery, radio equipment, medical, OHS, IPR,..)
o Use and Applications:
 Occupational Health & Safety (framework directive)
 Contractual remedies (replace/ repair/ reimburse): Sale of Goods
Directive, Digital Content & Digital Services Directive; national rules on
contractual liability
 Extra-contractual liability: AI Liability Directive; national rules on extra-
contractual & strict liability integrated with the new Product Liability
Directive; DSA
 Road circulation (incoming, maybe!)
o Retirement:
 Product safety & market monitoring
 Environmental protection
Lecture 2 – Ethics of Robots and AI
Who is responsible?
- The COMPAS algorithm was used to predict reoffenders so as to assist with sentencing
- Its false positives (people who were predicted to reoffend but did not) were
disproportionately black
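
The disparity ProPublica reported reduces to a simple per-group metric. A hedged sketch with invented records (not the real COMPAS data):

    # False positive rate per group: predicted to reoffend, but did not.
    # Illustrative data only, not the actual COMPAS dataset.
    records = [
        # (group, predicted_reoffend, did_reoffend)
        ("A", 1, 0), ("A", 1, 1), ("A", 1, 0), ("A", 0, 0),
        ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 1, 0),
    ]

    def false_positive_rate(group):
        negatives = [r for r in records if r[0] == group and r[2] == 0]
        return sum(r[1] for r in negatives) / len(negatives)

    for g in ("A", "B"):
        print(g, round(false_positive_rate(g), 2))  # A: 0.67, B: 0.33

A model can be equally "accurate" overall while its errors fall very unevenly across groups, which is exactly the COMPAS complaint.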
Predictive Policing
- AI forecasts where crimes are likely to occur
- Creates a “self-proving” feedback loop, where certain socioeconomic or racial groups
are targeted
I know you better than you know yourself
- Cameras on the street identify and read you
- Mental/ bodily health predicted
- Can predict your sexual preferences without access to your own devices
- Your performance is monitored at work by your employer
The trade-off
- This is a trade-off for purported improvement in life quality or “progress”
- Who is this an improvement for?
o Government?
o Citizens?
o People designing the software?
o Business?
o The courts?
o Those on trial?
- Tech currently shaped by a very small and concentrated group of mega-corporations
Dystopias
- The recursively self-improving AI
- The human brain uploaded to the system
Is AI intelligence?
- Dreyfus:
o The brain is not a computer
o Unconscious background of common knowledge
o This knowledge is tacit and cannot be formalized
o Therefore, it cannot be captured by AI
o AI can only be methodology
o To have human intelligence, you must be embodied and exist in the world
- Churchland
o The brain is a neural network
o Thoughts and experiences are just brain states
- Dennett:
o “we are sort of robots ourselves”

The Chinese room


- John Searle:
o I am locked in a room and given Chinese documents
o I don’t know Chinese
o I have a rulebook which explains how to answer questions from outside
o I am able to answer outside questions which are in Chinese by following the
guidance in the rulebook
o As such, I can answer the questions without speaking Chinese
- Computer programs can produce output based on an input due to their rules, without
understanding anything
- Meaning is only human
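
A toy sketch of Searle's setup: the program below "answers" Chinese questions by rule lookup alone; nothing in it understands the symbols (the rulebook entries are invented for illustration):

    # Chinese-room sketch: output produced purely by matching rules,
    # with no understanding of what any symbol means.
    RULEBOOK = {  # the hypothetical rulebook handed into the room
        "你好吗": "我很好",
        "你会说中文吗": "会",
    }

    def room(question: str) -> str:
        # The person in the room just follows the rulebook.
        return RULEBOOK.get(question, "请再说一遍")

    print(room("你好吗"))  # a fluent-looking answer, zero comprehension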
Can AI be a moral agent?
- Deborah Johnson:
o No, computers have no capacity for morality- therefore, all moral decisions must
be made by humans
- Michael and Susan Anderson:
o We should code principles into AI
o Machines can be more rational since they are unemotional
- Others say morality cannot be reduced to the following of the rules
- Coeckelbergh:
o We don’t want an unemotional “psychopath AI” making decisions
Cows being milked
- Once we realise the internet is just a training space for for-profit machines, does that
make us data cattle?
- New potentials for behavioral manipulation? Surveillance? Tech-oligarchy?
- Not to mention the secret labor being exploited
Vulnerable users
- Given the intensive level of surveillance/ data mining/ behavioral manipulation, what
are the ramifications for more vulnerable members of society?
o Children with AI-powered toys
 Internet of Toys manipulating children?
o Elderly
o People with different mental capabilities

Post-truth?
- Disinformation
- What is real?
- Who is real?
- Illusions of companionship
- The deterioration of personal relationships
Security
- What happens if the software is hacked?
- Military applications
- The more infrastructurally reliant, the greater the consequences
Nobody’s fault
- A machine with agency and power but no responsibility
- Who should be responsible?
o Who had knowledge?
o Who has the duty of care?
o How can someone predict outcomes?
o Is it even transparent what happened?
- What is more important – performance or explainability?
A car kills someone
- Who is responsible?
o The programmer?
o The driver?
o The car company?
o Other drivers?
o The state?
- Algorithm interacts with:
o Sensors
o Data
o Hardware
o Software
- ML has various stages:
o Data collection
o Training

Bias
- Arises in:
o The training data
 Unrepresentative/ incomplete
o The algorithm
o The input data provided post-training
o In correlation-based decisions
- Unrepresentative dataset may then be used to predict for entire population
o ImageNet uses a huge amount of US data, while China & India (a massive share
of the world's population) represent a very small subset of the training data
- Too little data for some issues:
o E.g. murder prediction – not that many murders!
- Developers/ data scientists disproportionately Western white men aged 20-40
- Even where inference is true, is it always fair?
o If person’s family / friends are criminals, should they receive a harsher sentence?

If bias cannot be avoided, should it be avoided?


- Would we change a machine to be less biased if it makes it less accurate?
- If we remove characteristics we don’t want, will the machine just develop proxies?
(see the sketch after this list)
- What is a perfect unbiased algorithm?
- Algorithms are inherently discriminatory in their function. They are meant to
discriminate between different possibilities. – Coeckelbergh
- Which discrimination is good discrimination, and which is bad?
- Should algorithms seek to be more reflective or corrective?
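
The proxy worry is easy to demonstrate. A sketch with synthetic data (all names and numbers invented): the protected attribute is withheld from the model, but a correlated feature stands in for it.

    # Removing a protected attribute does not remove bias if a correlated
    # proxy (here, a synthetic "postcode") remains in the training data.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(0)
    X, y = [], []
    for _ in range(1000):
        group = random.randint(0, 1)  # protected attribute, dropped below
        postcode = group if random.random() < 0.9 else 1 - group  # proxy
        X.append([postcode])          # the model never sees `group`...
        y.append(group)               # ...but historical labels track it

    model = LogisticRegression().fit(X, y)
    print(round(model.score(X, y), 2))  # ~0.9: the proxy rebuilt the bias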
Social ramifications
- Job loss
o 47% of all jobs could be automated (Frey & Osborne)
o Do we need to restructure society?
o Should we keep some work for humans only?
- Greater wealth inequality
o Wealthy reap benefits while unemployed / poor reap negative consequences

The speed of change


- How can we adapt our values to a world that is changing so quickly?
- Can we develop wisdom when the state of reflection is perpetually changing?
Diversity and plurality
- How do we combat a projection of universalism from systems which have emerged from
specific geo-socio-cultural contexts?
- How can AI reflect democratic values? Should the systems seek consensus?
The UN 2015 Agenda
Is AI a distraction?
- Rising inequalities within and among countries
- War and violent extremism
- Poverty and malnutrition
- Lack of access to fresh water
- Lack of effective and democratic institutions
- Ageing populations
- Infectious and epidemic diseases
- Risks related to nuclear energy
- Lack of opportunities for children and young people
- Gender inequality and various forms of discrimination and exclusion
- Humanitarian crises and human rights violations
- Problems related to migration and refugees
- Global warming, intense natural disasters and forms of environmental degradation such
as drought and loss of biodiversity
An alienation machine
- “An instrument to leave the Earth and deny our vulnerable, bodily, earthly, and
dependent existential condition” – Coeckelbergh
- Will the rich escape the earth they destroy and leave the rest of us on a newly unlivable
planet? – Zimmerman
- Techno-solutionism – will it work?
Lecture 3 – GFMs, Copyright and Disinformation
US Copyright Lawsuits
- Lawsuit against GitHub, MS & OpenAI that challenges the legality of Copilot, a tool that
suggests code in response to programmer prompts, and of Codex, an LLM trained on
millions of lines of open-source code
- 2 lawsuits against Stability AI challenging legality of Stable Diffusion
- Huge damages sought and injunctions to shut them down
- Claims that both:
o Scraping copyrighted works from internet is infringement of copyright
o The outputs are also infringing copyright as derivative works

(recent case: a whole bunch of authors are suing OpenAI)


US Copyright – Fair Use
- Four factors to consider:
o Purpose & character of the challenged use
 Favored uses: criticism, comment, news, teaching, research & scholarship
o Nature of the copyrighted work
o Amount and substantiality of the taking
o Effect of the challenged use on the market for or value of the work

Training Data Input in EU Law


- 2019 Copyright in the Digital Single Market Directive (CDSM):
o Text and Data Mining (TDM) defined as “any automated analytical technique
aimed at analyzing text and data in digital form in order to generate information
which includes but is not limited to patterns, trends and correlations”
- Exceptions in CDSM:
o Article 3: TDM for the Purposes of Scientific Research
 Allows for “reproductions and extractions made by research
organizations and cultural heritage institutions in order to carry out, for
the purpose of scientific research, text and data mining of works or other
subject matter to which they have lawful access”
o Article 4: Exception or Limitation for TDM
 A broader exception, which essentially allows for commercial TDM
 Subject to reservation by rights holders, including through “machine-
readable means” – opt-out provision
- Most models seemingly rely on the Article 4 exception as the basis for their
TDM (a sketch of a machine-readable opt-out check follows below)
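
What a "machine-readable" Article 4 reservation can look like in practice: many crawlers check robots.txt before mining. A minimal sketch (robots.txt is a real convention; whether it is a legally sufficient CDSM reservation is an open question, and the bot name and URL here are invented):

    # Honoring a machine-readable opt-out before doing TDM.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()  # fetch and parse the site's crawl rules

    if rp.can_fetch("MyTDMBot", "https://example.com/article.html"):
        print("no reservation found - crawl and mine")
    else:
        print("rights reserved - skip this work")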
The Opt-Out Provision: Good or Bad Approach?
- For:
o Keller – Will increase the bargaining power of rights holders and lead to licensing
deals
o Communia policy paper: “ensures a fair balance”
- Against:
o Trendacosta & Doctorow – Will lead to market concentration and exploitation of
creators by big companies
Transparency
- Communia recommendation:
o The EU should enact a robust general transparency requirement for developers
of generative AI models. Creators need to be able to understand whether their
works are being used as training data and how, so that they can make an
informed choice about whether to reserve the right to TDM or not
Prospective Transparency Provisions in the AI Act
- European Parliament has recently suggested a number of changes to the draft AI Act
- These include:
o Article 52:
AI systems intended to interact with natural persons are designed and
developed in such a way that the AI system, the provider itself or the user
informs the natural persons exposed to an AI system that they are
interacting with an AI system in a timely, clear and intelligible manner
o Article 28b(4)(c):
Providers of foundation models shall “document and make publicly
available a sufficiently detailed summary of the use of training data
protected under copyright law”
Safeguards in the AI Act
- Article 28b(4)(b):
o An obligation to “design and develop the foundation model in such a way as to
ensure adequate safeguards against the generation of content in breach of
Union law in line with the generally acknowledged state of the art, and without
prejudice to fundamental rights, including the freedom of expression”
Copyright Filter in CDSM Directive
- Article 17(7) transformative use exceptions:
(a) quotation, criticism, review;
(b) use for the purpose of caricature, parody or pastiche
- But CDSM provisions here only apply to “online content-sharing service providers”
- Unlikely to apply to GAI then!
- Which transformative uses are therefore acceptable for GAI? We don’t know!
What “work” is protected in EU law?
- Current EU copyright framework is, for the most part, silent on questions of subject
matter/ authorship
- Despite extensive copyright harmonization, no single directive harmonizes what a
“work” is – Hugenholtz & Quintais
- Article 1 of the Term Directive comes close to a general definition, which has also
been broadly held up by CJEU rulings
“A literary or artistic work within the meaning of Article 2 of the Berne
Convention”
- Article 2 of the Berne Convention
“Every production in the literary, scientific and artistic domain, whatever may
be the mode or form of its expression”
- Infopaq CJEU 2009:
Work as “the author’s own intellectual creation”
Literary, Scientific or Artistic Domain
- CJEU has not explicitly endorsed this domain test
- In Premier League, it denied copyright to sporting events as they “cannot be regarded
as intellectual creations classifiable as works”
o Yet elsewhere, implied it was rather due to a lack of “originality”, rather than
due to domain issue
- In Levola Hengelo, taste of a food product was denied “work” status
o Yet again, was due to not having “identifiable” expression, rather than its
domain
Human Intellectual Effort
- Berne Convention does not define “author”, but is strongly suggested that it is natural
person (i.e. human) who created the work
- This is again broadly reflected in CJEU case law (without being said outright)
- Painer:
o “Stamp the work with his ‘personal touch’”
o Trstenjak’s opinion (Advocate General): “only human creations are therefore
protected”
- Cofemel:
o “Reflects the personality of its author, as an expression of his free and creative
ideas”
- This does not rule out efforts made with the help of machines
Originality/ Creativity
- Twofold requirement:
o The author’s own
o Intellectual creation
- Does not require artistic merit or aesthetic quality
o High art protected as much as databases or computer software
o But, per Cofemel, the fact that a work “may generate an aesthetic effect” does
not necessarily qualify it
- Football Dataco:
o CJEU rejected “significant skill and labour” as a relevant factor
- Funke Medien/ Painer:
o Requirement is met “if the author was able to express his creative abilities in
the production of the work by making free and creative choices”
Expression
- The human creator’s creativity must be “expressed” in the final production
- What level of authorial intent or premeditation does this require?
- Hugenholtz & Quintais: “general authorial intent is probably enough”
Three Stages of Creativity
- Conception
o Creating and elaborating the design or plan of a work
- Execution
o Converting the design or plan into what could be considered draft versions
- Redaction
o Processing and reworking the drafts into a finalized cultural product

Authorship
- Not a lot of EU law around authorship
- Mostly dealt with at a national level
- Generally, if more than one author collaborates on a work and their individual
contributions can’t be separated, they will be considered co-authors
o Most national laws require a common design between authors, such that there
is a “concerted creative effort”
- Only creatively active persons will be considered authors
- So, who is the author of AI-assisted work?
o Users?
o Developers?
- In many Member States, there is a presumption of authorship for the person
indicated as author, unless proven otherwise
Disinformation
- Different from misinformation, since it is intentional
- Threatens:
o Democracy
o Public health
o National security
o Racial equality
- Creates echo chambers and filter bubbles
Algorithmic power
- Due to algorithms on social media, spread of disinformation is massively amplified
- Usually, algorithms are black boxes
Facebook Algorithm
- Aligned with Facebook policy, rather than content users might be interested in
- Designed to recommend content that received most clicks and likes
- This led to “clickbait”
- Redesigned to promote “meaningful social interactions” with large amounts of
comments and replies
- Led to posts that primarily anger and offend users
- Troll farms now exploit this algorithm to create fake news, which generates more
clicks/ ad revenue
Deepfakes
- Visual
o “an Israeli soldier committing an atrocity against a Palestinian child, a
European Commission official offering to end agricultural subsidies on the eve
of an important trade negotiation, and a Rohingya leader advocating violence
against the security forces in Myanmar” – Chesney & Citron
- Textual
o Readers found AI-generated articles to be more convincing than those written
by human beings – Robitzski
How to tackle disinformation
- Transparency
o Disinformation algorithms must be made public
- Intelligibility
o Must be intelligible to users
- Accountability
o Platforms should be accountable
- Should there be a Disinformation regulatory body that presides over this?
Regulatory Landscapes
- US has no laws regulating algorithmic disinformation
o Rather, relies on self-regulation
o Market-based approaches
- Some EU countries are bringing about legislation
o E.g. France- “Manipulation of Information Law”
o A number of transparency provisions – yet, some commentators feel its impact
has so far been limited as it has limited accountability
- China has more stringent legislation
o Transparency requirements with greater accountability for platforms, along
with requirements to offer users greater choice
Regulatory Challenges
- Transparency may be of limited value to your average user
o Users will “seek out information about a system, interpret it, and determine its
significance, only then to find out they have little power over things anyway” –
Edwards & Veale
- Freedom of expression
Lecture 4- Privacy, Surveillance and Labour
Current Uses
- Emotional recognition
- Search suggestions/ internet guidance
- Personality predictions
- Recidivism prediction
- Recommending everything
- Credit/ insurance
- Health diagnoses
- Fraud prediction
- Policing
- Journalism
- Education/ Toys
Quick Europe Recap
- Court of Justice of the EU (CJEU):
o EU entity
o It’s the court of justice of the EU;
o Checks the application of the EU treaties, including the Charter of fundamental
rights of the EU (EU Charter)
- European Court of Human Rights
o NOT EU entity
o It’s the international body that controls the application of the European
Convention for Human Rights (ECHR) created by the Council of Europe (also
NOT an EU entity)
o The EU is part of it, so are all its member states
- European Data Protection Board (EDPB)
o EU entity
o Independent authority established by the GDPR
o Coordinates the application of the GDPR
o Coordinates the national Data Protection Authorities (DPAs)
o Issues guidelines and opinions that are very important to understand how to
apply the GDPR
o It used to be called “Article 29 Working Party” under the previous regime;
those opinions still apply in many cases
Privacy vs Data Protection
- ECHR: Art 8 protects the right to respect for private and family life (privacy)
- EU Charter: Art 7 protects respect for private life; Art 8 separately protects
personal data
The General Data Protection Regulation (“GDPR”)
- Only applies to “personal data” (Art 2(1))
- Personal data is any information that relates to an identified or identifiable living
individual. Different pieces of information, which collected together can lead to the
identification of a particular person, also constitute personal data
- Personal data that has been de-identified, encrypted or pseudonymized but can be
used to re-identify a person remains personal data and falls within the scope of the
GDPR (see the sketch after this list)
- Processing of personal data only lawful in certain circumstances
o Consent is (perhaps) the most well-known, but not the only
o Can also be performance of a contract, for the public interest, or a number of
other legitimate purposes
- GDPR also has a broad data minimization principle (“collect no more data than
necessary”)
o Collection limitations
o Purpose limitations
o Storage limitations
- Sensitive “special” data receives a heightened level of protection and processing is
only allowed in a more limited set of circumstances (see Art 9)
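
Why pseudonymized data stays in scope: the sketch below swaps a name for a keyed hash, but whoever holds the key can recompute the mapping and re-identify the person, so the record remains "personal data" (the key and record are invented for illustration):

    # Pseudonymization via a keyed hash (HMAC). Looks de-identified, but
    # the key holder can re-identify, so the GDPR still applies.
    import hashlib
    import hmac

    KEY = b"secret-key-held-by-controller"  # hypothetical controller key

    def pseudonymize(name: str) -> str:
        return hmac.new(KEY, name.encode(), hashlib.sha256).hexdigest()[:12]

    record = {"user": pseudonymize("Jane Doe"), "visits": 17}
    print(record)

    # Re-identification is trivial with the key: hash candidate names, compare.
    print(pseudonymize("Jane Doe") == record["user"])  # True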
Has Data Minimization been successful?
- Depends on who you ask
- Core issues have been:
o Difficulty in enforcement (overburdened enforcement agencies)
o Inherent ambiguity interpreting “necessity” and “proportionality”
 What qualifies as a legitimate interest of data processing?
 Maximizing advertising revenue?
 How much data is needed to do so? Who decides?
 How much data is needed for security?
o Limited case law on many of these questions
 Swedish Data Protection Authority outlawed facial recognition in
schools as against collection limitation principle
 Monitoring attendance not a good enough excuse
“Smart” Devices and the Internet of Things
- AI can correlate collected data with other publicly available information to generate
insights on consumers incl:
o Age
o Demographic
o Household income
- Function creep
o E.g. a smart energy meter may only track energy usage/ make & model of the
appliance, but then use that data to predict your age/ demographic/ household
income
GDPR Success
- App Drivers and Couriers Union successfully sued Uber, claiming they had rights to see
algorithms which determined their pay/ “fraud probability scores” / other profiling
and data held about them
Proposals for Stricter Rules?
- Less broad normative ideas that require judicial analysis
- US proposals:
o Call for ban of data use for targeted advertising
o Alternatively, limiting use of sensitive data for all secondary purposes, including
advertising
o Banning biometric collection of children
o Banning biometric collection in specific contexts (workplaces, schools,
employment)
Biometric Surveillance
- Continue to be quietly embedded in public daily life without proper scrutiny
- Wide array of industries and contexts:
o Delivery drivers tracked for emotion recognition, “attractiveness” and
potentially aggressive behavior
o Productivity monitoring software tracking eye movement to police attention of
employees
o Call center employees’ tone and pitch monitored for emotion recognition and
checked for anger/ frustration
o “Mobile neuroinformatics solutions” measuring cognitive states and providing
feedback on “cognitive performances and needs”
- Huge commercial rollouts, despite significant lack of scientific evidence
Automotive industry
- “In-cabin” cameras and sensors to detect fatigue and distraction
- Can then share data with insurance companies and law enforcement
- Where once automakers sought to use their own tech or contract with smaller
companies so as to retain control, they are now increasingly partnering with Big Tech
Virtual Reality
- A similar pattern playing out in AR/VR landscapes
- Meta also mining body information from new headsets when users are in metaverse
“Safety” and “Productivity”
- Systems which were formerly “facial recognition” systems are now increasingly being
marketed as necessary for safety and productivity
Spyware
- European Council published a draft of the European Media Freedom Act
- Would allow use of spyware against journalists if national government considers it
necessary
- Last year, Pegasus spyware was found on phones of 3 famous journalists in France
- Is use of spyware a proportional measure against national security threats?
Affect Recognition
- UK Information Commissioner’s Office:
o “As it stands, we are yet to see any emotion AI technology develop in a way
that satisfies data protection requirements, and have more general questions
about proportionality, fairness and transparency in this area”
Predictive Privacy
- Is making a prediction about an individual itself a breach of their privacy?
- Crucially, analysis of the data of one individual can negatively impact others
- Much AI data analysis is organized by “pattern matching” input data against a huge
amount of other data points
o E.g. all FB users who declare their sexual orientation forms a training data set
by which sexual orientation of all users can be predicted
- As such, should the individual be allowed to decide in every respect what data they
provide to data behemoths?
- “Predictive privacy of an individual or group is violated when personal information is
predicted about them without their knowledge or against their will, in such a way that
it could result in unequal treatment of an individual or group” (Muhlhoff)
- Privacy as a public value for democratic political participation, and a collective value as
the privacy of the individual now affects all other individuals (Regan)
- “Once a predictive model is created- and there are currently no effective legal
restrictions on this – it can be applied to millions of users in an automated way with
almost no marginal cost” (Muhlhoff)
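
A sketch of the "pattern matching" mechanism Muhlhoff describes: the minority of users who voluntarily disclose an attribute become the training set for predicting it about everyone else (all data here is synthetic):

    # Predictive privacy sketch: disclosed attributes train a model that is
    # then applied to users who never disclosed anything. Synthetic data.
    import random
    from sklearn.linear_model import LogisticRegression

    random.seed(1)

    def make_likes(attr):
        # 20 "likes", weakly correlated with the sensitive attribute
        p = 0.7 if attr else 0.3
        return [1 if random.random() < p else 0 for _ in range(20)]

    disclosed = [random.randint(0, 1) for _ in range(200)]  # users who told
    X_train = [make_likes(a) for a in disclosed]
    model = LogisticRegression().fit(X_train, disclosed)

    # Applied at near-zero marginal cost to a user who disclosed nothing:
    silent_user = make_likes(1)
    print(model.predict_proba([silent_user])[0])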
Unexpected Correlations
- Predicting depression, psychosis, diabetes or high blood pressure (Merchant et al,
2019)
- FB predicting suicidal users (Goggins)
o Edtech which monitors children and predicts if they are unwell, outing students
to their parents/ teachers
- FB “likes” to predict “a range of highly sensitive personal attributes including sexual
orientation, ethnicity, religious and political views, personality traits, intelligence,
happiness, use of addictive substances, parental separation, age and gender” (Kosinski
et al, 2013)
Bad Machine Decisions
- Credit ratings based on personal attributes
- People unable to get jobs based on personal attributes (e.g. pregnancy)
- Psychological/ socio-economic factors exploited so that vulnerable people are
targeted for advertising/ other behavioral manipulation
- Hyper-nudging
- GDPR does not combat this
The Limits of Anonymisation
- Promises of anonymization used to leverage consent from users
- Anonymised data does not fall within the scope of the GDPR
- GDPR is focused on the data of the individual (applies to “personal data”)
o Fundamental rights are rights of the individual
- The individual rights of the GDPR:
o Art 15- Right of Access
o Art 16- Rectification
o Art 17- Erasure
o Art 18- Restriction of Processing
o Art 20- Portability

The Limits of Data Protection Law


- New sophisticated tools that don’t require personal data:
o Secure multi-party computation
o Zero-knowledge proofs
o Secure enclaves
- Large advertising platforms now creating systems that target individuals without
needing access to their data
Dodging Liability

- Law enforcement access
- Researcher access
- Competitor access
- Data protection and security liability
- Content moderation obligations

A New Threat
- A means of violating the privacy of the individual without any kind of data theft
- Does it therefore demand a new type of protection?
Possible Improvements
- Restrict availability of consent to situations where consequences solely affect that
user
- Render legal bases of consent insufficient for users where data will be linked to other
people’s data
- Establish collectivist rights, such that groups affected can also assert rights
Workplace Surveillance
- Post-pandemic, workplace surveillance has increased
o Remote working
o Blurred work/home life
o Workplace technologies integrated into personal devices/ spaces
- Low wage workers affected the most radically
- Surveillance tech claims to be:
o Anti-discrimination
o Helpful for management
o Efficiency increasing
o Better at finding workers/ analyzing CVs

Surveillance Interoperability
- Defined by Karen Levy
- Combination of government data collection, corporate surveillance and third party
harvesting
- Data about others aggregated and used to predict “risk assessments” of workers, such
as likelihood that an employee will commit fraud
- Systems are riddled with flaws, yet make decisions about workers
Proposed EU Platform Work Directive
- Promises platform workers:
o Worker Access to Data
o Algorithmic Transparency
o Contestability

Improvements to the Directive


- Should:
o Not only be for platform workers
o Include whistleblower protections
o Should have collective, not just individual, rights
- Does knowing about data/ power dynamics amount to being able to fight it?
- Therefore, is data access/ transparency enough?
Human in the Loop?
- Many proposals focus on a requirement of human review or to avoid fully automated
decision making
- Yet, AI systems are rarely fully automated – rather lying on a spectrum between
machine and human operated
- Reliance solely on “human in the loop” provisions may just lead to “rubber stamping”
thus legitimizing the systems rather than fixing them
- “Automation bias”- humans likely to just trust the system anyway
- Shifts responsibility to a human who may be blamed for systemic failures over which
they had little control
Data Protection Impact Assessments
- Article 35 GDPR:
o DPIA should be conducted prior to algorithmic decision-making in the
workplace
- However, notice provisions still ultimately place burden on the worker
- Similarly, courts have found GDPR is primarily designed for transparency in relation to
violations of the law, rather than to inform workers of all information collected about
them
Clear Workplace Prohibitions
- Should we prohibit surveillance of:
o Office bathrooms and employees’ cars
o Off-duty hours
o Pseudo- scientific emotional recognition
o Algorithmic wage discrimination

Humanyze
- Employees carry sociometric badges
- Collect data points on how they behave and communicate at workplace
- Data is analyzed to draw inference on performance, patterns etc using “Emotional AI”
(EAI)
- Analysis then used to make recommendations
- Promise to be 100% anonymous
Policy Suggestions
- Focus less on how data is collected and what happens to that data, and rather on the
actual outcome of the data usage
o This is especially important, given increased reliance on anonymization,
synthetic data production and other ML tools which allow data controllers to
say that they are no longer using individuals’ personal data
- Collective rights, not just individual rights
o Include considerations of collective harm
o As such, algorithms can be considered for their effect on collective control
o As such, broader structural inequities can be remedied that do not rely on
proving harm towards an individual
Training Labor
- Hard to predict wages
- The work constantly shifts geographically
- Pay is determined by a surge-pricing mechanism which adjusts for how many
annotators are available and how quickly data is needed (toy sketch below)
- Pay can be as low as $1–$3/hr
- Hours of unpaid training required which can then lead to no work
- Every task could be your last, and unclear when work will come in
- Boom-and-bust nature of AI, where huge amounts of data required for initial training,
then no more training needed after this stage
- As such, workers stay up for days without sleeping to take advantage of surges
- Incentivizes beating the system or getting bots to train each other
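
The surge mechanism described above, as a toy formula (the function and all numbers are hypothetical, for illustration only):

    # Toy surge pricing: per-task pay rises when data is needed urgently
    # and annotators are scarce, and collapses when the queue is full.
    def task_rate(base_rate, urgency, annotators_online):
        return base_rate * urgency / max(annotators_online, 1)

    print(task_rate(0.02, urgency=5, annotators_online=2))   # 0.05 per task
    print(task_rate(0.02, urgency=1, annotators_online=40))  # 0.0005 per task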
Lecture 5- AI Act
Why should use of AI be regulated?
- Using AI is not neutral, has an effect
o On what is done itself
o On the responsibility, accountability, liability for what is done
o On the power balance between those involved
- Use of AI may have an effect on human rights:
o Freedom to share and receive information (Art 10 ECHR)
o Freedom of thought, conscience and religion (Art 9 ECHR)
o Equal treatment (Art 14 ECHR)
o Privacy (Art 8 ECHR)
o Data protection (Art 8 ECHR, Art 8 Charter)
o Access to justice (Art 6 ECHR)
 Not necessarily just negative
- Use of AI may have an effect on human dignity
o Undermine free will
o Reduce or eliminate autonomy
o Instrumentalization of humans
 But it may of course also be very empowering
- AI itself may develop into an existential threat for humanity
Discrimination
- Differentiate between people on suspect grounds, such as:
o Race
o Sex
o Religion
o Belief
o Sexual orientation
o Disability
- Direct discrimination:
o Explicitly on one such ground; as a rule: forbidden
- Indirect discrimination:
o Not directly on such a ground, but the effect is that the group of people
defined by such a characteristic is disproportionally affected
 E.g. part-time workers (mostly women), people in a certain
neighborhood (ethnicity); these characteristics are called proxies
 Not forbidden if there is objective justification
 Burden of proof
o Proxies allow algorithms to identify such groups, so you don’t even need the
suspect data to discriminate indirectly; but you do need the suspect data to
prove indirect discrimination
- Problem:
o There is widespread discrimination in society:
 Fewer women in high-paid jobs, more women doing unpaid work
 Over-representation of certain ethnic groups in crime stats
 Social stratification on the basis of race and religion
o Accurate, representative data will reflect this bias and AI will continue it
because:
 It is true that women have a lower chance of being successful in a high-
paid job
 It is true that an immigrant boy growing up in a certain neighborhood
has a higher chance of ending up in drugs and crime
- Accurate data is biased, unbiased data is inaccurate
- Human dignity, fairness, justice require that we judge people on their own merits and
not their background or the color of their skin
- This is the problem of making predictions about people’s behavior in risk assessments
Self-fulfilling prophecies, feedback loops
- Insight from Ionica Smeets
o Say that in a state we have two groups: Redhats (RH) and Bluehats (BH)
o Crime rates reveal that 51% of found crime is committed by RH and 49% by BH
o Police investigate 100 people each month: 51 RH and 49 BH
o 51% of investigated RH are found criminal, so 26 people; 49% of investigated
BH, so 24 people
o Next month 52 RH and 48 BH are investigated
o After 2 years, 73% of found crime is committed by RH, so crime rates become
rapidly more skewed
o After 7 years, 97% of the found criminals are RH and only 3% are BH; even if
only 51% of RH is in fact criminal – they just have a higher risk of being found
- Resources are scarce, so you investigate people that you think are more likely to
commit crime; and indeed in this way you find more crime
o This is the problem with predictive policing
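
The Smeets example can be run directly. A short simulation under the example's own assumptions (find rates of 51% for RH and 49% for BH, with each month's 100 investigations allocated in proportion to last month's found crime):

    # Feedback-loop simulation: a 2-point difference in find rates, amplified
    # by allocating investigations toward where crime was found before.
    rate_rh, rate_bh = 0.51, 0.49
    share_rh = 0.51  # initial share of investigations going to Redhats

    for month in range(7 * 12):  # seven years
        found_rh = 100 * share_rh * rate_rh
        found_bh = 100 * (1 - share_rh) * rate_bh
        share_rh = found_rh / (found_rh + found_bh)

    print(round(share_rh * 100))  # ~97: found crime is now almost all RH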

Why should use of AI be regulated?


- Examples of potential harmful use:
o Examples mentioned by Cathy O’Neil:
 Teacher assessment, job applications, probation
o Credit or other types of social scoring
o Selection of candidates for jobs, admission to education, etc
o Predictive policing
o Fraud detection- child benefit scandal
o Search engines & social media – filter bubble
o Targeted advertising
o Fake: ChatGPT, deepfake, fake news
- Specifically:
o Discrimination of specific groups
o Creating self-fulfilling prophecies
o Disempowering victims

How should use of AI be regulated?


- By critically assessing and possibly amending regulation that is already in place, such
as:
o Administrative law
o Consumer protection law
o Non-discrimination law
o Contract law
o Tort law
o Data protection law
o Law of criminal procedure
- And seriously abiding by and enforcing this regulation
o Or
- By enacting new legislation, specifically on AI, on a European level (harmonizing the
common market)
- Also, call for regulation in US
Proposal for AI regulation
- Officially:
o Proposal for a Regulation of the European Parliament and of the Council laying
down harmonized rules on AI (AI Act) and amending certain Union legislative
acts
- Regulation (Art. 288 TFEU): general application, binding in its entirety and directly
applicable in all EU countries
- Member states can no longer regulate on topics falling under this regulation
- Unclear why it is called “Act”
- Published April 21, 2021, subject to discussion
Definition and technological scope of the regulation (Art. 3)
- Definition of AI should be as neutral as possible in order to cover techniques which
are not yet known/ developed
- Overall aim is to cover all AI, including traditional symbolic AI, Machine Learning, as
well as hybrid systems
- Annex I: list of AI techniques and approaches should provide for legal certainty
(adaptations over time may be necessary)
- “A software that is developed with one or more of the techniques and approaches
listed in Annex I and can, for a given set of human-defined objectives, generate
outputs such as content, predictions, recommendations, or decisions influencing the
environments they interact with”
Annex I
- Artificial Intelligence Techniques and Approaches (Art.3 (1))
o Machine learning approaches, including supervised, unsupervised and
reinforcement learning, using a wide variety of methods including deep
learning;
o Logic- and knowledge- based approaches, including knowledge representation,
inductive (logic) programming, knowledge bases, inference and deductive
engines, (symbolic) reasoning and expert systems;
o Statistical approaches, Bayesian estimation, search and optimization methods

Latest EP version
- “Artificial Intelligence System” (AI system) means a machine-based system that is
designed to operate with varying levels of autonomy and that can, for explicit or
implicit objectives, generate outputs such as predictions, recommendations, or
decisions that influence physical or virtual environments
o Plus: Generative AI added as a foundation model, Art. 28b

Risk-Based Approach
All that contradicts EU values is prohibited (Title II, Article 5)
- Subliminal manipulation resulting in physical/psychological harm
- Exploitation of children or mentally disabled people resulting in physical/
psychological harm
- General purpose social scoring
- Remote biometric identification for law enforcement purposes in publicly accessible
spaces (with exceptions)
High-risk AI systems
- Certain applications in the following fields:
o Safety components of regulated products
 Which are subject to third-party assessment under the relevant
sectorial legislation
o Certain (stand-alone) AI systems in the following fields
 Biometric identification and categorization of natural people
 Management and operation of critical infrastructure
 Education and vocational training
 Employment and workers management, access to self-employment
 Access to and enjoyment of essential private services and public
services and benefits
 Law enforcement
 Migration, asylum and border control management
 Administration of justice and democratic processes
Requirements for high-risk AI (Title III, chapter 2)
- Establish and implement risk management processes, in light of the intended
purpose of the AI system
o Use high-quality training, validation and testing data (relevant, representative
etc)
o Establish documentation and design logging features (traceability &
auditability)
o Ensure an appropriate degree of transparency and provide users with
information (on how to use the system)
o Ensure human oversight (measures built into the system and/or to be
implemented by users)
o Ensure robustness, accuracy and cybersecurity

Overview: obligations of operators (Title III, Chapter 3)


- Provider obligations
o Establish and implement quality management system in its organization
o Draw-up and keep up to date technical documentation
o Logging obligations to enable users to monitor the operation of the high-risk AI
system
o Undergo conformity assessment and potentially re-assessment of the system
(in case of significant modifications)
o Register AI system in EU database
o Affix CE marking and sign declaration of conformity
o Conduct post-market monitoring
o Collaborate with market surveillance authorities
- User obligations
o Operate AI system in accordance with instructions of use
o Ensure human oversight when using the AI system
o Monitor operation for possible risks
o Inform the provider or distributor about any serious incident or any
malfunctioning
o Existing legal obligations continue to apply (e.g. under GDPR)

CE marking and process (Title III, chapter 4, Art. 49)


- CE marking is an indication that a product complies with the requirements of a
relevant Union legislation regulating the product in question
- In order to affix a CE marking to a high-risk AI system, a provider shall undertake the
following steps:
o Determine whether its AI system is classified as high-risk under the new AI
regulation
o Ensure design and development and quality management system are in
compliance with the AI regulation
o Conformity assessment procedure, aimed at assessing and documenting
compliance
o Affix the CE marking to the system and sign a declaration of conformity
o Placing on the market or putting into service

Lifecycle of AI systems and relevant obligations


- Design in line with requirements
o Ensure AI systems perform consistently for their intended purpose and are in
compliance with the requirements put forward in the Regulation
- Conformity assessment
o Ex ante conformity assessment
- Post-market monitoring
o Providers to actively and systematically collect, document and analyze relevant
data on the reliability, performance and safety of AI systems throughout their
lifetime, and to evaluate continuous compliance of AI systems with the
Regulation
- Incident report system
o Report serious incidents as well as malfunctioning leading to breaches to
fundamental rights (as a basis for investigations conducted by competent
authorities)
- New conformity assessment
o New conformity assessment in case of substantial modification (modification
to the intended purpose or change affecting compliance of the AI system with
the Regulation) by providers or any third party, including when changes are
outside the “predefined range” indicated by the provider for continuously
learning AI systems

Most AI systems will not be high-risk


Critical remarks
- Misunderstanding of status of human rights! Not to be balanced against private
commercial interest
- Exclusion of AI systems already in use – why?
- Risk of opposite effect: legitimizing very harmful use (because compliant) –
“certified AI says no”
- Providers and users of AI systems. Over-simplification?
- Transparency- what exactly?
- Is it even possible to detect bias, to assess quality etc.?
- Is human-in-the-loop, meaningful human control, even possible?
- No enforceable rights for, or even mention of, the human object of AI systems
- Outsourcing the setting of standards to private, commercial organizations
- Mostly only self-assessment: box-ticking exercise without any real impact
