Responsible AI: A Global Ethical Guide

The Montréal Declaration for Responsible AI outlines ethical principles for the development and deployment of artificial intelligence, aiming to ensure well-being, respect for autonomy, and protection of privacy. It serves as a framework for guiding digital transitions to promote equitable and sustainable AI practices while fostering international dialogue. The Declaration emphasizes the importance of inclusivity and adaptability in addressing the evolving challenges posed by AI technologies.


MONTRÉAL DECLARATION FOR A RESPONSIBLE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE
2018

This document is part of the 2018 MONTRÉAL DECLARATION FOR A RESPONSIBLE DEVELOPMENT OF ARTIFICIAL INTELLIGENCE. You can find the complete report HERE.

TABLE OF CONTENTS

READING THE DECLARATION

PREAMBLE

PRINCIPLES
1. WELL-BEING PRINCIPLE
2. RESPECT FOR AUTONOMY PRINCIPLE
3. PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE
4. SOLIDARITY PRINCIPLE
5. DEMOCRATIC PARTICIPATION PRINCIPLE
6. EQUITY PRINCIPLE
7. DIVERSITY INCLUSION PRINCIPLE
8. PRUDENCE PRINCIPLE
9. RESPONSIBILITY PRINCIPLE
10. SUSTAINABLE DEVELOPMENT PRINCIPLE

GLOSSARY

CREDITS
PARTNERS

READING THE DECLARATION

A DECLARATION, FOR WHAT PURPOSE?
The Montréal Declaration for responsible AI development has three main objectives:

1. Develop an ethical framework for the development and deployment of AI;

2. Guide the digital transition so everyone benefits from this technological revolution;

3. Open a national and international forum for discussion to collectively achieve equitable, inclusive, and ecologically sustainable AI development.

A DECLARATION OF WHAT? PRINCIPLES
The Declaration's first objective consists of identifying the ethical principles and values that promote the fundamental interests of people and groups. These principles, applied to the digital and artificial intelligence field, remain general and abstract. To read them correctly, it is important to keep the following points in mind:

> Although they are presented as a list, there is no hierarchy. The last principle is not less important than the first. However, it is possible, depending on the circumstances, to lend more weight to one principle than another, or to consider one principle more relevant than another.

> Although they are diverse, they must be interpreted consistently to prevent any conflict that could prevent them from being applied. As a general rule, the limits of one principle's application are defined by another principle's field of application.

> Although they reflect the moral and political culture of the society in which they were developed, they provide the basis for an intercultural and international dialogue.

> Although they can be interpreted in different ways, they cannot be interpreted in just any way. It is imperative that the interpretation be coherent.

> Although these are ethical principles, they can be translated into political language and interpreted in legal fashion.

Recommendations were made based on these principles to establish guidelines for the digital transition within the Declaration's ethical framework. They cover a few key cross-sectorial themes to reflect on the transition towards a society in which AI helps promote the common good: algorithmic governance, digital literacy, digital inclusion of diversity, and ecological sustainability.

A DECLARATION FOR WHOM?
The Montréal Declaration is addressed to any person, organization and company that wishes to take part in the responsible development of artificial intelligence, whether it's to contribute scientifically or technologically, to develop social projects, to elaborate rules (regulations, codes) that apply to it, to be able to contest bad or unwise approaches, or to be able to alert public opinion when necessary.

It is also addressed to political representatives, whether elected or appointed, whose citizens expect them to take stock of developing social changes, quickly establish a framework allowing a digital transition that serves the greater good, and anticipate the serious risks presented by AI development.

A DECLARATION ACCORDING TO WHAT METHOD?
The Declaration was born from an inclusive deliberation process that initiates a dialogue between citizens, experts, public officials, industry stakeholders, civil organizations and professional associations. The advantages of this approach are threefold:

1. Collectively mediate AI's social and ethical controversies;

2. Improve the quality of reflection on responsible AI;

3. Strengthen the legitimacy of the proposals for responsible AI.

The elaboration of principles and recommendations is a work of co-construction that involved a variety of participants in public spaces, in the boardrooms of professional organizations, around international expert round tables, in research offices, classrooms or online, always with the same rigor.

AFTER THE DECLARATION?
Because the Declaration concerns a technology which has been steadily progressing since the 1950s, and whose pace of major innovations increases in exponential fashion, it is essential to perceive the Declaration as an open guidance document, to be revised and adapted according to the evolution of knowledge and techniques, as well as user feedback on AI use in society. At the end of the Declaration's elaboration process, we have reached the starting point for an open and inclusive conversation surrounding the future of humanity being served by artificial intelligence technologies.

PREAMBLE

For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks of which natural intelligence alone was thought capable: processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects. Given the immaterial nature of these tasks, and by analogy with human intelligence, we designate these wide-ranging systems under the general name of artificial intelligence. Artificial intelligence constitutes a major form of scientific and technological progress, which can generate considerable social benefits by improving living conditions and health, facilitating justice, creating wealth, bolstering public safety, and mitigating the impact of human activities on the environment and the climate. Intelligent machines are not limited to performing better calculations than human beings; they can also interact with sentient beings, keep them company and take care of them.

However, the development of artificial intelligence does pose major ethical challenges and social risks. Indeed, intelligent machines can restrict the choices of individuals and groups, lower living standards, disrupt the organization of labor and the job market, influence politics, clash with fundamental rights, exacerbate social and economic inequalities, and affect ecosystems, the climate and the environment. Although scientific progress, and living in a society, always carry a risk, it is up to the citizens to determine the moral and political ends that give meaning to the risks encountered in an uncertain world.

The lower the risks of its deployment, the greater the benefits of artificial intelligence will be. The first danger of artificial intelligence development consists in giving the illusion that we can master the future through calculations. Reducing society to a series of numbers and ruling it through algorithmic procedures is an old pipe dream that still drives human ambitions. But when it comes to human affairs, tomorrow rarely resembles today, and numbers cannot determine what has moral value, nor what is socially desirable.

The principles of the current declaration are like points on a moral compass that will help guide the development of artificial intelligence toward morally and socially desirable ends. They also offer an ethical framework that promotes internationally recognized human rights in the fields affected by the rollout of artificial intelligence. Taken as a whole, the principles articulated lay the foundation for cultivating social trust toward artificially intelligent systems.

The principles of the current declaration rest on the common belief that human beings seek to grow as social beings endowed with sensations, thoughts and feelings, and strive to fulfill their potential by freely exercising their emotional, moral and intellectual capacities. It is incumbent on the various public and private stakeholders and policymakers at the local, national and international level to ensure that the development and deployment of artificial intelligence are compatible with the protection of fundamental human capacities and goals, and contribute toward their fuller realization. With this goal in mind, one must interpret the proposed principles in a coherent manner, while taking into account the specific social, cultural, political and legal contexts of their application.
1. WELL-BEING PRINCIPLE

The development and use of artificial intelligence systems (AIS) must permit the growth of the well-being of all sentient beings.

1. AIS must help individuals improve their living conditions, their health, and their working conditions.

2. AIS must allow individuals to pursue their preferences, so long as they do not cause harm to other sentient beings.

3. AIS must allow people to exercise their mental and physical capacities.

4. AIS must not become a source of ill-being, unless it allows us to achieve a superior well-being than what one could attain otherwise.

5. AIS use should not contribute to increasing stress, anxiety, or a sense of being harassed by one's digital environment.

2. RESPECT FOR AUTONOMY PRINCIPLE

AIS must be developed and used while respecting people's autonomy, and with the goal of increasing people's control over their lives and their surroundings.

1. AIS must allow individuals to fulfill their own moral objectives and their conception of a life worth living.

2. AIS must not be developed or used to impose a particular lifestyle on individuals, whether directly or indirectly, by implementing oppressive surveillance and evaluation or incentive mechanisms.

3. Public institutions must not use AIS to promote or discredit a particular conception of the good life.

4. It is crucial to empower citizens regarding digital technologies by ensuring access to the relevant forms of knowledge, promoting the learning of fundamental skills (digital and media literacy), and fostering the development of critical thinking.

5. AIS must not be developed to spread untrustworthy information, lies, or propaganda, and should be designed with a view to containing their dissemination.

6. The development of AIS must avoid creating dependencies through attention-capturing techniques or the imitation of human characteristics (appearance, voice, etc.) in ways that could cause confusion between AIS and humans.

3. PROTECTION OF PRIVACY AND INTIMACY PRINCIPLE

Privacy and intimacy must be protected from AIS intrusion and data acquisition and archiving systems (DAAS).

1. Personal spaces in which people are not subjected to surveillance or digital evaluation must be protected from the intrusion of AIS and data acquisition and archiving systems (DAAS).

2. The intimacy of thoughts and emotions must be strictly protected from AIS and DAAS uses capable of causing harm, especially uses that impose moral judgments on people or their lifestyle choices.

3. People must always have the right to digital disconnection in their private lives, and AIS should explicitly offer the option to disconnect at regular intervals, without encouraging people to stay connected.

4. People must have extensive control over information regarding their preferences. AIS must not create individual preference profiles to influence the behavior of the individuals without their free and informed consent.

5. DAAS must guarantee data confidentiality and personal profile anonymity.

6. Every person must be able to exercise extensive control over their personal data, especially when it comes to its collection, use, and dissemination. Access to AIS and digital services by individuals must not be made conditional on their abandoning control or ownership of their personal data.

7. Individuals should be free to donate their personal data to research organizations in order to contribute to the advancement of knowledge.

8. The integrity of one's personal identity must be guaranteed. AIS must not be used to imitate or alter a person's appearance, voice, or other individual characteristics in order to damage one's reputation or manipulate other people.

4. SOLIDARITY PRINCIPLE

The development of AIS must be compatible with maintaining the bonds of solidarity among people and generations.

1. AIS must not threaten the preservation of fulfilling moral and emotional human relationships, and should be developed with the goal of fostering these relationships and reducing people's vulnerability and isolation.

2. AIS must be developed with the goal of collaborating with humans on complex tasks and should foster collaborative work between humans.

3. AIS should not be implemented to replace people in duties that require quality human relationships, but should be developed to facilitate these relationships.

4. Health care systems that use AIS must take into consideration the importance of a patient's relationships with family and health care staff.

5. AIS development should not encourage cruel behavior toward robots designed to resemble human beings or non-human animals in appearance or behavior.

6. AIS should help improve risk management and foster conditions for a society with a more equitable and mutual distribution of individual and collective risks.

5. DEMOCRATIC PARTICIPATION PRINCIPLE

AIS must meet intelligibility, justifiability, and accessibility criteria, and must be subjected to democratic scrutiny, debate, and control.

1. AIS processes that make decisions affecting a person's life, quality of life, or reputation must be intelligible to their creators.

2. The decisions made by AIS affecting a person's life, quality of life, or reputation should always be justifiable in a language that is understood by the people who use them or who are subjected to the consequences of their use. Justification consists in making transparent the most important factors and parameters shaping the decision, and should take the same form as the justification we would demand of a human making the same kind of decision.

3. The code for algorithms, whether public or private, must always be accessible to the relevant public authorities and stakeholders for verification and control purposes.

4. The discovery of AIS operating errors, unexpected or undesirable effects, security breaches, and data leaks must imperatively be reported to the relevant public authorities, stakeholders, and those affected by the situation.

5. In accordance with the transparency requirement for public decisions, the code for decision-making algorithms used by public authorities must be accessible to all, with the exception of algorithms that present a high risk of serious danger if misused.

6. For public AIS that have a significant impact on the lives of citizens, citizens should have the opportunity and skills to deliberate on the social parameters of these AIS, their objectives, and the limits of their use.

7. We must at all times be able to verify that AIS are doing what they were programmed to do and what they are used for.

8. Any person using a service should know if a decision concerning them or affecting them was made by an AIS.

9. Any user of a service employing chatbots should be able to easily identify whether they are interacting with an AIS or a real person.

10. Artificial intelligence research should remain open and accessible to all.

6. EQUITY PRINCIPLE

The development and use of AIS must contribute to the creation of a just and equitable society.

1. AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on — among other things — social, sexual, ethnic, cultural, or religious differences.

2. AIS development must help eliminate relationships of domination between groups and people based on differences of power, wealth, or knowledge.

3. AIS development must produce social and economic benefits for all by reducing social inequalities and vulnerabilities.

4. Industrial AIS development must be compatible with acceptable working conditions at every step of their life cycle, from natural resources extraction to recycling, and including data processing.

5. The digital activity of users of AIS and digital services should be recognized as labor that contributes to the functioning of algorithms and creates value.

6. Access to fundamental resources, knowledge and digital tools must be guaranteed for all.

7. We should support the development of commons algorithms — and of open data needed to train them — and expand their use, as a socially equitable objective.

7. DIVERSITY INCLUSION PRINCIPLE

The development and use of AIS must be compatible with maintaining social and cultural diversity and must not restrict the scope of lifestyle choices or personal experiences.

1. AIS development and use must not lead to the homogenization of society through the standardization of behavior and opinions.

2. From the moment algorithms are conceived, AIS development and deployment must take into consideration the multitude of expressions of social and cultural diversity present in the society.

3. AI development environments, whether in research or industry, must be inclusive and reflect the diversity of the individuals and groups of the society.

4. AIS must avoid using acquired data to lock individuals into a user profile, fix their personal identity, or confine them to a filtering bubble, which would restrict and confine their possibilities for personal development — especially in fields such as education, justice, or business.

5. AIS must not be developed or used with the aim of limiting the free expression of ideas or the opportunity to hear diverse opinions, both being essential conditions of a democratic society.

6. For each service category, the AIS offering must be diversified to prevent de facto monopolies from forming and undermining individual freedoms.

8. PRUDENCE PRINCIPLE

Every person involved in AI development must exercise caution by anticipating, as far as possible, the adverse consequences of AIS use and by taking the appropriate measures to avoid them.

1. It is necessary to develop mechanisms that consider the potential for the double use — beneficial and harmful — of AI research and AIS development (whether public or private) in order to limit harmful uses.

2. When the misuse of an AIS endangers public health or safety and has a high probability of occurrence, it is prudent to restrict open access and public dissemination of its algorithm.

3. Before being placed on the market, and whether they are offered for a charge or for free, AIS must meet strict reliability, security, and integrity requirements and be subjected to tests that do not put people's lives in danger, harm their quality of life, or negatively impact their reputation or psychological integrity. These tests must be open to the relevant public authorities and stakeholders.

4. The development of AIS must preempt the risks of user data misuse and protect the integrity and confidentiality of personal data.

5. The errors and flaws discovered in AIS and DAAS should be publicly shared, on a global scale, by public institutions and businesses in sectors that pose a significant danger to personal integrity and social organization.

9. RESPONSIBILITY PRINCIPLE

The development and use of AIS must not contribute to lessening the responsibility of human beings when decisions must be made.

1. Only human beings can be held responsible for decisions stemming from recommendations made by AIS, and the actions that proceed therefrom.

2. In all areas where a decision that affects a person's life, quality of life, or reputation must be made, where time and circumstance permit, the final decision must be taken by a human being and that decision should be free and informed.

3. The decision to kill must always be made by human beings, and responsibility for this decision must not be transferred to an AIS.

4. People who authorize AIS to commit a crime or an offense, or demonstrate negligence by allowing AIS to commit them, are responsible for this crime or offense.

5. When damage or harm has been inflicted by an AIS, and the AIS is proven to be reliable and to have been used as intended, it is not reasonable to place blame on the people involved in its development or use.

10. SUSTAINABLE DEVELOPMENT PRINCIPLE

The development and use of AIS must be carried out so as to ensure a strong environmental sustainability of the planet.

1. AIS hardware, its digital infrastructure and the relevant objects on which it relies, such as data centers, must aim for the greatest energy efficiency and to mitigate greenhouse gas emissions over its entire life cycle.

2. AIS hardware, its digital infrastructure and the relevant objects on which it relies, must aim to generate the least amount of electric and electronic waste and to provide for maintenance, repair, and recycling procedures according to the principles of circular economy.

3. AIS hardware, its digital infrastructure and the relevant objects on which it relies, must minimize our impact on ecosystems and biodiversity at every stage of its life cycle, notably with respect to the extraction of resources and the ultimate disposal of the equipment when it has reached the end of its useful life.

4. Public and private actors must support the environmentally responsible development of AIS in order to combat the waste of natural resources and produced goods, build sustainable supply chains and trade, and reduce global pollution.

GLOSSARY

Algorithm
An algorithm is a method of problem solving through a finite and non-ambiguous series of operations. More specifically, in an artificial intelligence context, it is the series of operations applied to input data to achieve the desired result (a short illustrative example appears after the Filter Bubble entry below).

Artificial Intelligence (AI)
Artificial intelligence (AI) refers to the series of techniques which allow a machine to simulate human learning, namely to learn, predict, make decisions and perceive its surroundings. In the case of a computing system, artificial intelligence is applied to digital data.

Artificial Intelligence System (AIS)
An AIS is any computing system using artificial intelligence algorithms, whether it's software, a connected object or a robot.

Chatbot
A chatbot is an AI system that can converse with its user in a natural language.

Data Acquisition and Archiving System (DAAS)
DAAS refers to any computing system that can collect and record data. This data is eventually used to train AI systems or as decision-making parameters.

Decision Justifiability
An AIS's decision is justified when there exist non-trivial reasons that motivate this decision, and when these reasons can be communicated in natural language.

Deep Learning
Deep learning is the branch of machine learning that uses artificial neural networks on many levels. It is the technology behind the latest AI breakthroughs.

Digital Commons
Digital commons are the applications or data produced by a community. Unlike material goods, they are easily shareable and do not deteriorate when used. Therefore, unlike proprietary software, open source software — which is often the result of a collaboration between programmers — is considered a digital common since its source code is open and accessible to all.

Digital Disconnection
Digital disconnection refers to an individual's temporary or permanent ceasing of online activity.

Digital Literacy
An individual's digital literacy refers to their ability to access, manage, understand, integrate, communicate, evaluate and create information safely and appropriately through digital tools and networked technologies to participate in economic and social life.

Filter Bubble
The filter bubble (or filtering bubble) expression refers to the "filtered" information which reaches an individual on the Internet. Various services such as social networks or search engines offer personalized results for their users. This can have the effect of isolating individuals (inside "bubbles") since they no longer have access to common information.
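As a concrete illustration of the Algorithm entry above (a finite, non-ambiguous series of operations applied to input data to achieve a desired result), here is a minimal Python sketch. It is not part of the Declaration; the function name and the sample numbers are illustrative choices.

# Illustrative only: an "algorithm" in the glossary's sense, i.e. a finite,
# non-ambiguous series of operations applied to input data to produce a result.
def moving_average(values, window=3):
    """Return the simple moving averages of a list of numbers."""
    if window <= 0 or window > len(values):
        raise ValueError("window must be between 1 and len(values)")
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

print(moving_average([2, 4, 6, 8, 10]))  # -> [4.0, 6.0, 8.0]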

GAN
Acronym for Generative Adversarial Network. In a GAN, two antagonistic networks are placed in competition to generate an image. They can, for example, be used to create an image, a recording or a video that appears practically real to a human being.

Intelligibility
An AIS is intelligible when a human being with the necessary knowledge can understand its operations, meaning its mathematical model and the processes that determine it.

Machine Learning
Machine learning is the branch of artificial intelligence that consists of programming an algorithm so that it can learn by itself.

The various techniques can be classified into three major types of machine learning (a minimal code sketch illustrating them appears after the Reliability entry below):

> In supervised learning, the artificial intelligence system (AIS) learns to predict a value from entered data. This requires annotated entry-value pairs during training. For example, a system can learn to recognize an object featured in a picture.

> In unsupervised learning, the AIS learns to find similarities among data that have not been annotated, for example in order to divide them into various homogeneous partitions. A system can thereby recognize communities of social media users.

> Through reinforcement learning, the AIS learns to act on its environment in order to maximize the reward it receives during training. This is the technique through which AIS were able to beat humans at the game of Go or the video game Dota 2.

Online Activity
Online activity refers to all activities performed by an individual in a digital environment, whether those activities are done on a computer, a telephone or any other connected object.

Open Data
Open data is digital data that users can access freely. For example, this is the case for most published AI research results.

Path Dependency
Path dependency is the social mechanism through which technological, organizational or institutional decisions, once deemed rational but now subpar, continue to influence decision-making. The mechanism is maintained because of cognitive bias or because change would require too much money or effort. Such is the case for urban road infrastructure when it leads to traffic optimization programs, rather than to a reorganization of transportation around very low carbon emissions. This mechanism must be kept in mind when using AI for special projects, as training data in supervised learning can sometimes reinforce old organizational paradigms that are now contested.

Personal Data
Personal data are those that help directly or indirectly identify an individual.

Rebound Effect
The rebound effect is the mechanism through which greater energy efficiency or better environmental performance of goods, equipment and services leads to an increase in use that is more than proportional. For example, screen sizes increase, the number of electronic devices in a household goes up, and greater distances are traveled by car or plane. The global result is greater pressure on resources and the environment.

Reliability
An AIS is reliable when it performs the task it was designed for in the expected fashion. Reliability is the probability of success, which ranges between 51% and 100%, meaning strictly superior to chance. The more reliable a system is, the more predictable its behavior.
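To make the three types of machine learning described above concrete, here is a minimal, self-contained Python sketch. It is not part of the Declaration: the toy data, the 1-nearest-neighbour classifier, the two-cluster grouping, and the epsilon-greedy bandit are illustrative stand-ins for supervised, unsupervised, and reinforcement learning respectively.

import random

# --- Supervised learning: predict a label from annotated (input, label) pairs.
training_pairs = [((1.0, 1.0), "cat"), ((1.2, 0.9), "cat"),
                  ((4.0, 4.2), "dog"), ((3.8, 4.1), "dog")]

def predict(point):
    """1-nearest-neighbour prediction from the annotated training pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training_pairs, key=lambda pair: dist(pair[0], point))[1]

# --- Unsupervised learning: group unlabelled points into two clusters (toy k-means).
def two_means(points, steps=10):
    centres = [points[0], points[-1]]          # naive initialisation
    for _ in range(steps):
        groups = [[], []]
        for p in points:
            d = [sum((x - c) ** 2 for x, c in zip(p, centre)) for centre in centres]
            groups[d.index(min(d))].append(p)
        centres = [tuple(sum(xs) / len(xs) for xs in zip(*g)) if g else c
                   for g, c in zip(groups, centres)]
    return centres

# --- Reinforcement learning: an epsilon-greedy bandit maximising observed reward.
def bandit(reward_probs, rounds=1000, epsilon=0.1):
    counts = [0] * len(reward_probs)
    values = [0.0] * len(reward_probs)
    for _ in range(rounds):
        arm = (random.randrange(len(reward_probs)) if random.random() < epsilon
               else values.index(max(values)))
        reward = 1.0 if random.random() < reward_probs[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]   # running average
    return values

if __name__ == "__main__":
    print(predict((1.1, 1.0)))                           # supervised: expected "cat"
    print(two_means([(1, 1), (1, 2), (8, 8), (9, 8)]))   # unsupervised: two centres
    print(bandit([0.2, 0.8]))                            # reinforcement: arm 2 valued higher

Running the script prints a predicted label, two cluster centres, and two estimated reward values; the second arm's estimate converges toward its higher reward probability.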

Strong Environmental Sustainability
The notion of strong environmental sustainability
goes back to the idea that in order to be sustainable,
the rate of natural resource consumption and
polluting emissions must be compatible with
planetary environmental limits, the rate of resources
and ecosystem renewal, and climate stability.

Unlike weak sustainability, which requires less effort, strong sustainability does not allow the substitution of the loss of natural resources with artificial capital.

Sustainable Development
Sustainable development refers to the development
of human society that is compatible with the capacity
of natural systems to offer the necessary resources
and services to this society. It is economic and social
development that fulfills current needs without
compromising the existence of future generations.

Training
Training is the machine learning process through
which AIS build a model from data. The performance
of AIS depends on the quality of the model, which
itself depends on the quantity and quality of data
used during training.

CREDITS

The writing of the Montréal Declaration for the responsible development of artificial intelligence is the result of the work of a multidisciplinary and inter-university scientific team that draws on a citizen consultation process and a dialogue with experts and stakeholders of AI development.

Christophe Abrassart, Associate Professor in the School of Design and Co-director of Lab Ville Prospective of the Faculty of Planning of the Université de Montréal, member of Centre de recherche en éthique (CRÉ)

Yoshua Bengio, Full Professor of the Department of Computer Science and Operations Research, UdeM, Scientific Director of MILA and IVADO

Guillaume Chicoisne, Scientific Programs Director, IVADO

Nathalie de Marcellis-Warin, Full Professor, Polytechnique Montréal, President and Chief Executive Officer, Center for Interuniversity Research and Analysis of Organizations (CIRANO)

Marc-Antoine Dilhac, Associate Professor, Department of Philosophy, Université de Montréal, Chair of the Ethics and Politics Group, Centre de recherche en éthique (CRÉ), Canada Research Chair in Public Ethics and Political Theory, Director of the Institut Philosophie Citoyenneté Jeunesse

Sébastien Gambs, Professor of Computer Science of Université du Québec à Montréal, Canada Research Chair in Privacy-Preserving and Ethical Analysis of Big Data

Vincent Gautrais, Full Professor, Faculty of Law, Université de Montréal; Director of the Centre de recherche en droit public (CRDP); Chair of the L.R. Wilson Chair in Information Technology and E-Commerce Law

Martin Gibert, Ethics Counsellor at IVADO and researcher at the Centre de recherche en éthique (CRÉ)

Lyse Langlois, Full Professor and Vice-Dean of the Faculty of Social Science; Director of the Institut d'éthique appliquée (IDÉA); Researcher, Interuniversity Research Center on Globalization and Work (CRIMT)

François Laviolette, Full Professor, Department of Computer Science and Software Engineering, Université Laval; Director of the Centre de recherche en données massives (CRDM)

Pascale Lehoux, Full Professor at the École de santé publique, Université de Montréal (ESPUM); Chair on Responsible Innovation in Health

Jocelyn Maclure, Full Professor, Faculty of Philosophy, Université Laval, and President of the Quebec Ethics in Science and Technology Commission (CEST)

Marie Martel, Professor in École de bibliothéconomie et des sciences de l'information, Université de Montréal

Joëlle Pineau, Associate Professor, School of Computer Science, McGill University; Director of Facebook AI Lab in Montréal; Co-director of the Reasoning and Learning Lab

Peter Railton, Gregory S. Kavka Distinguished University Professor; John Stephenson Perrin Professor; Arthur F. Thurnau Professor, Department of Philosophy, University of Michigan; Fellow of the American Academy of Arts & Sciences

Catherine Régis, Associate Professor, Faculty of Law, Université de Montréal; Canada Research Chair in Collaborative Culture in Health Law and Policy; Regular researcher, Centre de recherche en droit public (CRDP)

Christine Tappolet, Full Professor, Department of Philosophy, UdeM, Director of Centre de recherche en éthique (CRÉ)

Nathalie Voarino, PhD Candidate in Bioethics of Université de Montréal

OUR PARTNERS

montrealdeclaration-responsibleai.com
