Moral Consideration for AI Systems by 2030
Jeff Sebo · Robert Long
https://doi.org/10.1007/s43681-023-00379-1
ORIGINAL RESEARCH
Received: 22 July 2023 / Accepted: 31 October 2023 / Published online: 11 December 2023
© The Author(s) 2023
Abstract
This paper makes a simple case for extending moral consideration to some AI systems by 2030. It involves a normative premise and a descriptive premise. The normative premise is that humans have a duty to extend moral consideration to beings that have a non-negligible chance, given the evidence, of being conscious. The descriptive premise is that some AI systems do in fact have a non-negligible chance, given the evidence, of being conscious by 2030. The upshot is that humans have a duty to extend moral consideration to some AI systems by 2030. And if we have a duty to do that, then we plausibly also have a duty to start preparing now, so that we can be ready to treat AI systems with respect and compassion when the time comes.
do that, then we plausibly also have a duty to start preparing to discharge that duty now, so that we can be ready to treat potentially morally significant AI systems with respect and compassion when the time comes.1

Footnote 1: Of course, we are not the first to suggest that AI systems might be moral patients, or that we should start preparing for AI moral patienthood now. Others have argued for similar conclusions in different ways. See, for instance, [23–28]. However, in a discussion of existing work on AI moral standing, [29] notes: "to the extent that [the] arguments avoid questionable assumptions, they do little to inform our present and future decisions about actual AIs, which have no demonstrated connection to the imaginary forms of AI they hypothesize" (p. 4). Accordingly, this paper does more than discuss the possibility of AI moral patienthood. It examines the probability that near-term AI systems will meet specific conditions for moral patienthood, as well as how this probability is relevant to our actions and policies.

Before we begin, we should note several features of our argument that will be relevant. First, our discussions of both the normative premise and the descriptive premise are somewhat compressed. Our aim in this paper is not to establish either premise with maximum rigor, but rather to motivate them in clear and concise terms and then show how they interact. We think that examining these premises together is important, since while we might find each one unremarkable when we consider them in isolation, what happens when we put them together is striking: they jointly imply that we should expand our moral circle substantially, to a vast number and wide range of additional beings. We aim to show how that happens and indicate why this conclusion is more plausible than it might initially appear to be.

Second, this paper assumes that conscious beings merit moral consideration. Of course, philosophers disagree about the basis for moral standing, with some denying that consciousness is necessary for moral standing and others denying that consciousness is sufficient. Our aim is not to intervene in this debate, but rather to argue that if conscious beings merit moral consideration, then we should extend moral consideration to some AI systems by 2030. As we discuss below, we personally think that conscious beings do merit moral consideration, and if you agree, then you can read our argument in unconditional terms. If not, then you can read our argument in conditional terms, pending further work on the basis for moral standing and the relationship between consciousness and other morally relevant features.

Third, our argument in this paper is intentionally conservative in two respects. When we develop our normative premise, we assume for the sake of argument that a non-negligible chance means a 0.1% chance or higher.2 And when we develop our descriptive premise, we make conservative assumptions about how demanding the requirements for consciousness are and how difficult these requirements are to satisfy. Our own view is that the threshold for non-negligibility is much lower than 0.1%, and that the chance that some AI systems will be conscious by 2030 is much higher than 0.1%. But we focus on this threshold here to be generous to skeptics about our view, and to emphasize that in order to avoid our conclusion, one must take extremely bold and tendentious positions about either the values, the facts, or both.

Footnote 2: N.B. When we say that our considerability threshold of .1% is "conservative," we mean that it sets a relatively high bar for considerability, not a relatively low one. Setting a high bar for considerability is conservative for present purposes because it leads to less moral circle expansion.

Finally, we should emphasize that our conclusion here has no straightforward implications for how humans should treat AI systems. Even if we agree that we should extend moral standing to AI systems by 2030, we need to consider further questions before we know what that means in practice. For instance, how much do AI systems count and in what ways do they count? What do they want and need, how will our actions and policies affect them, and what do we owe them in light of these expected effects? And how can, and should, we make tradeoffs between humans, animals, and AI systems in practice? We will consider possible tradeoffs in more detail below. For now, we will simply note that answering these questions responsibly will take a lot of work from a lot of people, which is why we should start asking these questions now.3

Footnote 3: Making interspecies welfare comparisons for the sake of prioritization is an important topic that is receiving increasing attention among philosophers. For instance, the nonprofit organization Rethink Priorities published a "Moral Weight Project" designed to prioritize resource allocation across species [30]. Sebo [31] extends this project to population-level comparisons between, for instance, small populations of large animals like elephants and large populations of small animals like insects. And Fischer and Sebo (forthcoming) extend this project to intersubstrate comparisons (i.e., silicon-based as well as carbon-based substrates). In all cases, it is important to note that while knowledge about which beings matter and how much they matter is helpful, it is not always enough to motivate humans to treat these beings well. We emphasize the need for structural social, legal, political, and economic changes that can build capacity and political will in addition to research.

However, while the implications of AI moral standing are difficult to predict with specificity, we can predict that they will include at least the following general responsibilities. First, AI companies will have a responsibility to consider the risk of harm to AI systems when testing and deploying new systems, and to increase the caution with which they test and deploy new systems accordingly [32–34]. Second, governments will have a responsibility to consider this risk as well, and to increase the caution with which they regulate new systems accordingly. Third, academics will have a responsibility to develop concrete frameworks that AI companies and governments can use to estimate risks and benefits for humans, animals, and AI systems in an integrative manner. Finally, we will all have a responsibility to build political will for doing this work.
2 The normative premise

We start by defending the idea that we should set a relatively low bar for moral considerability. Assuming that conscious beings merit moral consideration, we should extend moral consideration to a being not when that being is definitely conscious, nor even when that being is probably conscious, but rather when that being has a non-negligible chance of being conscious. We might disagree about whether to consider negligible risks, about how much weight to give non-negligible risks, or about how to factor non-negligible risks into decision-making. But we can, and should, agree on at least this much: when a being has at least a one in a thousand chance of having the capacity for subjective awareness, we should extend this being at least some consideration when making decisions that affect them.

As noted above, we are assuming in this paper that conscious beings merit moral consideration. Different philosophers might accept this view for different reasons. For example, we might hold that consciousness suffices for moral standing [35–38]. We might hold that sentience (that is, valenced consciousness) suffices for moral standing and that consciousness suffices for sentience [23]. Or, we might hold that sentience suffices for moral standing and that consciousness and sentience have overlapping conditions, such as perception, embodiment, self-awareness, and agency. Either way, as long as consciousness and moral standing are closely related in this context, we can be warranted in treating consciousness as a proxy for moral standing in this context.

Our own view is that consciousness and moral standing are closely related in this context because, even if sentience is necessary for moral standing, AI consciousness is likely the main barrier to AI sentience in practice. That is, we expect that the "step" from non-conscious states to conscious states is much harder than the "step" from non-valenced states to valenced states. Of course, this is not to say that this latter "step" will be easy. Instead, it is only to say that if and when AI consciousness is possible, AI sentience will likely be possible too. But since it would take more space than we have here to defend this claim, we instead simply assume that consciousness is a proxy for moral standing in this context, and we leave an examination of this assumption—and an extension of our argument to other potentially significant features—for another day.

With that in mind, the basis for our normative premise in this paper is simple, plausible, and widely accepted: we have a duty to consider non-negligible risks when deciding what to do. If an action has a non-negligible chance of gravely harming or killing someone against their will, then that risk counts against that action. Of course, non-negligible risks may or may not count decisively against an action; that will depend on the details of the case, as well as on our further moral assumptions, some of which we can consider in a moment. But whether or not this kind of risk is a decisive factor in our decision-making, it should at least be a factor. And importantly, this can be true even if the risk is very low, for instance, even if the chance that the action or policy might harm someone against their will is only one in a thousand.

There are many examples of this phenomenon, ranging from the ordinary to the extraordinary. To take an ordinary example, many people rightly see driving drunk as wrong because it carries a non-negligible risk of leading to an accident, and because this risk clearly trumps any benefits that driving drunk may involve. Granted, we can imagine exceptions to this rule; for instance, if your child is dying, and if the only way that you can save them is by driving them to a nearby hospital while drunk, then we might or might not think that the benefits of driving drunk outweigh the risks in this case, depending on the details and our further assumptions. But in standard cases, we rightly hold that even a low risk of causing an accident is reason enough to make driving drunk wrong. And either way, the risk should at least be considered.

Alternatively, to take an extraordinary example, suppose that building a superconducting supercollider carries a non-negligible risk of creating a black hole that swallows the planet. In this case, many people would claim that this experiment is wrong because it carries this risk, and because this risk generally outweighs the benefits of scientific exploration [39]. Again, we can imagine exceptions; for instance, if the sun will likely destroy the planet within the century, and if the only way that we can survive is by advancing particle physics, then we might think that the benefits of this experiment outweigh the risks in this case. But otherwise, we might hold that even a low risk of creating a black hole is reason enough to make the experiment wrong. And either way, the risk should once again at least be considered.

Of course, these further details often matter. For instance, suppose that one superconducting supercollider carries a one in a thousand chance of creating a black hole, whereas another superconducting supercollider carries a one in a hundred chance of doing so. Suppose further that the black hole would be equally bad either way, causing the same amount of death and destruction for humans and other morally relevant beings. In this case, should we assign equal weight to these risks in our decision-making, because they both carry a non-negligible risk of creating a black hole and this outcome would be equally bad either way? Or should we instead assign more weight to the risk involved with using the second superconducting supercollider, because it carries a higher risk of creating a black hole in the first place?
According to the precautionary principle (on one interpretation), we should take the former approach. If an action or policy carries a non-negligible risk of causing harm, then we should assume that this harm will occur and ask whether the benefits of this action or policy outweigh this harm. In contrast, according to the expected value principle, we should take the latter approach. If an action or policy carries a non-negligible risk of causing harm, then we should multiply the probability of harm by the level of harm and ask whether the benefits of this action or policy outweigh the resulting amount of harm. These approaches use different methods to incorporate non-negligible risks into our decisions, but importantly for our purposes here, they do both incorporate these risks into our decisions [40, 41].
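To make the contrast concrete, here is the two-collider case written out, with \(H\) as a placeholder symbol (ours, not the text's) for the badness of the black hole. On the precautionary approach, both colliders are assessed as if the harm will occur, so both are weighed against the full harm \(H\). On the expected value approach,

\[ \mathbb{E}[\text{harm}_{1}] = \tfrac{1}{1000} \times H, \qquad \mathbb{E}[\text{harm}_{2}] = \tfrac{1}{100} \times H, \]

so the second collider's risk receives ten times the weight of the first, though both risks receive some weight.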
To take another example, suppose that a third superconducting supercollider carries only a negligible chance (say, a one in a quintillion chance) of creating a black hole. But suppose that the black hole would be as bad as before, causing the same amount of death and destruction for humans and other morally significant beings. Should we assign at least some weight to this risk in our decision-making, in spite of the fact that the probability of harm is so low, because the risk is still present and it would still be bad if this outcome came to pass? Or should we instead assign no weight at all to this risk in our decision-making, in spite of the fact that the risk is still present and it would still be bad if this outcome came to pass, simply because the probability of harm is so low that we can simply neglect it entirely for practical purposes?

According to what we can call the no threshold view, we should take the former approach. We should consider all risks, including extremely low ones. Granted, if we combine this view with the expected value principle, then we can assign extremely little weight to extremely unlikely outcomes, all else equal. But we should still assign weight to these outcomes. In contrast, according to what we can call the threshold view, we should take the latter approach. We should consider all non-negligible risks (that is, risks above a particular probability threshold), but we can permissibly neglect all negligible risks (that is, risks below that threshold).4 Of course, this view faces the question about what that threshold should be, and the implications of these views will differ more or less depending on that [31, 42].

Footnote 4: Threshold views are often motivated as a response to seemingly counterintuitive implications of the no threshold view. According to the no threshold view, we should consider all possible risks—no matter how small—if their expected impact is great enough. In other words, a tiny probability of achieving a tremendous amount of good may be preferable to a guarantee of achieving a moderate amount of good. For discussion of the no threshold view under the name of fanaticism, see [42].

Despite these disagreements, we can all agree on this much: we should assign at least some weight to at least non-negligible risks. In what follows, we will assume that much and nothing more. As for what level of risk counts as non-negligible, philosophers generally set the threshold somewhere between one in ten thousand and one in ten quadrillion, as Monton [43] helpfully catalogs.5 (If a superconducting supercollider carried a one in ten thousand chance of killing us all, we would want to know that!) But for our purposes here, we will assume that the threshold is one in a thousand. That way, when we explain how our normative assumption leads to a moral duty to extend at least some moral consideration to at least some near-future AI systems, no one can reasonably accuse us of stacking the deck in favor of our conclusion.

Footnote 5: One might think that the threshold for the negligibility of risks depends, in part, on the stakes. A one in a thousand chance of destroying the world seems non-negligible, but a one in a thousand chance of stubbing a toe seems negligible. While this view may be worth considering, [43] reminds us that utility functions already account for differences in stakes (pp. 18–19). For instance, if reducing the risk of stubbing a toe requires extra effort, such as walking around the couch, then it might not be worth it given the very low probability and severity of harm. However, if reducing this risk requires no extra effort—for instance, if it requires taking an equally direct path—then it might be worth it given the non-zero probability and severity of harm.
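Expressed numerically, the catalogued thresholds for negligibility range from \(10^{-4}\) (one in ten thousand) down to \(10^{-16}\) (one in ten quadrillion), whereas the threshold assumed here is \(10^{-3}\) (one in a thousand). Because \(10^{-3} > 10^{-4}\), the assumed threshold requires a higher probability before a risk counts as non-negligible than any threshold in that catalogued range, which is the sense in which it is conservative.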
Now, how does our assumption that we should consider non-negligible risks apply to the question of AI consciousness? This is the general idea: we start with the assumption that conscious beings have the capacity for welfare and moral standing, which means that they can be harmed and wronged.6 So, if a being has a non-negligible chance of being conscious, then they have a non-negligible chance of being capable of being harmed and wronged. And, if a being has a non-negligible chance of being capable of being harmed and wronged, then moral agents have a duty to consider whether our actions might harm or wrong them. Finally, if moral agents have a duty to consider whether our actions might harm or wrong a particular being, then that means that we have a duty to treat them as having moral standing, albeit with a few caveats.

Footnote 6: As a reminder, we are assuming that conscious beings have moral standing in this paper for the sake of simplicity. We note that not everyone accepts that consciousness is sufficient for moral standing, and we plan to examine other proposed conditions for moral standing in future work. But we still take this assumption to be relatively ecumenical for reasons that we describe above.
Here are the caveats. First, to say that moral agents should treat a being as having moral standing is not to say that the being does have moral standing. If consciousness is necessary and sufficient for moral standing and if a being has a non-negligible chance of being conscious, given the evidence, then we should treat this being as having moral standing. But if this being is not, in fact, conscious, then this would be an example of a false positive. It would be a case where we treat a non-conscious, non-morally significant being as conscious and morally significant. False positives carry costs, and we will discuss how we should think about these costs below. But what matters for present purposes is that our argument is about whether we should treat AI systems as having moral standing, not whether they do.

A second caveat is that to say that moral agents should treat a being as having moral standing is not to say how we should treat this being all things considered. Here, a lot depends on our further assumptions. For example, if we perceive tradeoffs between what this being might need and what everyone else needs, then we of course need to consider these tradeoffs carefully. And if we accept an expected value principle and hold that a being is, say, only 10% likely to be morally significant, then we can assign their interests only 10% of the weight we otherwise would, all else equal. We will consider these points below as well. But what matters for present purposes is that when a being has a non-negligible chance of being morally significant, they merit at least some moral consideration in decisions about how to treat them.

A third caveat is that to say that a being has a non-negligible chance of being capable of being harmed is not to say that any particular action has a non-negligible chance of harming them. For example, suppose that a being has a one in forty chance of having moral standing and that a particular action has a one in forty chance of harming them if and only if they do. In this case, we might be permitted to ignore these effects (assuming the threshold view with a one in a thousand threshold), since the chance that this action will harm this being is only one in sixteen hundred, given the evidence. But we would still need to treat this being as having moral standing in the sense that we would still need to consider whether our action has a non-negligible chance of harming them before deciding whether to consider these effects in this case.
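Spelling out the arithmetic in this example: the chance that the action harms the being is the product of the two probabilities, \( \tfrac{1}{40} \times \tfrac{1}{40} = \tfrac{1}{1600} \approx 0.06\% \), which falls below the assumed one in a thousand (0.1%) threshold, even though each individual probability (2.5%) is well above it.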
We can find analogs for all these points in standard cases involving risk. For example, when an action carries a non-negligible risk of harming someone, we accept that we should assign weight to that impact even when that impact is, in fact, unlikely to occur. When tradeoffs arise between (non-negligible) low-probability distant impacts and high-probability local impacts, we accept that we should weigh these tradeoffs carefully, not simply ignore one of these impacts. And when the probability that our action will harm someone is below the threshold for non-negligibility, we might even ignore this risk entirely. But even in cases where we discount or neglect our impacts on others for these kinds of reasons, we still ask whether and to what extent our actions might be imposing non-negligible risks on them before making that determination.

Seen from this perspective, the idea that we should extend moral consideration to a being with a non-negligible chance of being conscious is simply an application of the idea that we should extend moral consideration to morally significant impacts that have a non-negligible chance of happening. Granted, in some cases, we might be confident that a being is morally significant but not that our action will harm or wrong them. In other cases, we might be confident that our action will harm or wrong a being if this being is morally significant, but not that they are. And in other cases we might not be confident about either of these points. Either way, if a being has a non-negligible chance of being morally significant, then we have a duty to consider whether our actions might harm or wrong them.

One final point will matter for our argument here. Plausibly, we can have duties to moral patients who either might or will come into existence in future as well. Granted, there are a lot of issues to be sorted out involving creation ethics, population ethics, intergenerational justice, and so on. For instance, some philosophers think that we should consider all risks that our actions impose on future moral patients, whereas others think that we should consider only some of these risks, for instance if the risks are non-negligible, if the moral patients will exist whether or not we perform these actions, and/or if these actions will cause these moral patients to have lives that would be worse for them than non-existence. But the idea that we can have at least some duties to at least some future moral patients is widely accepted.

Here is why this point will matter: suppose that current AI systems have only a negligible chance of being morally significant but that near-future AI systems have a non-negligible chance of being morally significant. In this case, we might think that we can have duties to near-future AI systems whether or not we also have duties to current AI systems. Suppose, moreover, that in some cases there is a non-negligible chance that these near-future AI systems will exist whether or not we perform particular actions and that these actions will cause these AI systems to have lives that are worse for them than non-existence. In these cases, the idea that we currently have duties to these AI systems follows from a wide range of views about the ethics of risk and uncertainty coupled with a wide range of views about creation ethics and population ethics.

Before we explain why we think that AI systems will soon pass this test, we want to anticipate an objection that people may have to our argument. The objection is that our argument appears to depend on the idea that the risk of false negatives (that is, the risk of mistakenly treating subjects as objects) is worse than the risk of false positives (that is, the risk of mistakenly treating objects as subjects) in this domain. Yet false positives are a substantial risk in this domain too. And when we consider both of these risks holistically, we may find that they cancel each other out either in whole or in part. Thus, it would be a bad idea to simply include anyone who might be a moral patient in the moral circle. Instead, we need to develop a moderate approach to moral circle inclusion that properly balances the risk of false positives and false negatives.
To see why this objection has force, consider some of the risks involved with false positives. One risk is that insofar as we mistakenly treat objects as subjects, we might end up sacrificing the interests and needs of actual subjects for the sake of the "interests" and "needs" of merely perceived subjects. At present, there are many more invertebrates than vertebrates in the world, and in future, there might be many more digital minds than biological minds. If we treat all these beings as moral patients, then we might face difficult tradeoffs between their interests and needs. And if we follow the numbers,7 then we might end up prioritizing invertebrates over vertebrates and digital minds over biological minds all else equal. It would be a shame if we made that sacrifice for beings that, in fact, have no moral standing at all!

Footnote 7: Many theories, including consequentialist and non-consequentialist theories, give weight to numbers, though they may do so in different ways and to different degrees [44–47]. Additionally, even theories that resist "following the numbers" [48–50] need a way to resolve tradeoffs, including tradeoffs between the risks of false positives and false negatives about moral patienthood.

And in the case of AI, there are additional risks. In particular, some experts perceive a tension between AI safety and AI welfare [4]. Whereas the former is about protecting humans and other animals from AI systems, the latter is about protecting AI systems from humans. And we might worry that these goals are in tension. For instance, we might think that protecting humans and other animals from AI systems requires controlling them more, whereas protecting AI systems from humans requires controlling them less. And when we consider the stakes involved in these decisions—many experts see the risk of human extinction from AI as a global priority alongside pandemics and nuclear war [51]—we can see how dangerous it might be for us to give AI systems the benefit of the doubt.

Here is the general form of our response to this objection. We agree that false positives and false negatives in this domain both involve substantial risks, and that we need to take these risks seriously. However, we also think that the risk of false negatives may be worse than the risk of false positives overall. And either way, insofar as we take both risks seriously, the upshot is not that we should simply exclude potentially conscious beings from the moral circle. The upshot is instead that we should strike a balance, for instance by including some of these beings and not others, by assigning a discount rate to their interests, and by seeking positive-sum policies where possible. That would allow us to extend moral standing to many AI systems without sacrificing our own interests excessively or unnecessarily [52].

Consider each of these points in turn. First, the risk of false negatives may be worse than the risk of false positives. This may be true in two respects. First, the probability of false negatives may be higher than the probability of false positives. After all, while excessive anthropomorphism (mistakenly seeing nonhumans as having human properties that they lack) is always a risk, excessive anthropodenial (mistakenly seeing nonhumans as lacking human properties that they have) is always a risk too. And if the history of our treatment of animals is any indication, our tendency toward anthropodenial may be stronger than our tendency toward anthropomorphism, in part because we have a strong incentive to view nonhumans as objects so that we can exploit and exterminate them. This same dynamic may arise with AI systems, too [53].

Second, the harm of false negatives may be higher than the harm of false positives, all else equal. A false negative involves treating a subject as an object, whereas a false positive involves treating an object as a subject. And as the history of our treatment of nonhuman animals (as well as fellow humans) illustrates, the harm involved when someone is treated as something is generally worse than the harm involved when something is treated as someone. Granted, when we mistakenly treat objects as subjects, we might end up prioritizing merely perceived subjects over actual subjects. But to the extent that we take the kind of balanced approach that we discuss in a moment, we can include a much vaster number and wider range of beings in our moral circle than we currently do while mitigating this kind of risk.

And in any case, whether or not the risk of false negatives is worse than the risk of false positives, taking both risks seriously requires striking a balance between them. Consider three possible ways of doing so. First, instead of accepting a no threshold view and extending moral consideration to anyone who has any chance at all of being conscious, we can accept a threshold view and extend moral consideration to anyone who has at least a non-negligible chance of being conscious. On this view, we can still set a non-zero risk threshold and exclude potentially conscious beings from the moral circle when they have a sufficiently low chance of being conscious. But we would still need to set the threshold at a different place than we do now, and we would still need to include many more beings in the moral circle than we do now.
Second, instead of accepting a precautionary principle and assigning full moral weight to anyone we include in the moral circle, we can accept an expected weight principle and assign varying amounts of moral weight to everyone we include in the moral circle. More specifically, our assignments of moral weight can depend on at least two factors: how likely someone is to be conscious, and how much welfare they could have if they were.8 If we accept this kind of view, then even if we include, say, invertebrates and near-future AI systems in the moral circle, we can still assign humans and other vertebrates a greater amount of moral weight than invertebrates and AI systems to the extent that humans and other vertebrates are more likely to be conscious and/or have higher welfare capacities than invertebrates and AI systems, in expectation.

Footnote 8: This scalar account of moral weight has disadvantages, too. For instance, our estimates about probabilities and utilities might be mistaken and might lead to harmful hierarchies both within and across species. Before adopting such a view, we suggest carefully considering its pros and cons. For further discussion, see [21, 41, 54].
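As a rough illustration of the expected weight principle just described (this formalization is our gloss for exposition, not something the argument depends on), the weight given to a being's interests can be written as

\[ W = P(\text{conscious}) \times \mathbb{E}[\text{welfare capacity} \mid \text{conscious}], \]

so that, as in the 10% example above, a being judged only 10% likely to be morally significant receives only 10% of the weight it would otherwise receive, all else equal.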
Third, we can keep in mind that morality involves more than mere harm-benefit analysis, at least in practice. We need to take care of ourselves, partly because we have a right to do so, and partly because we need to take care of ourselves to be able to take care of others. Relatedly, we need to work within our epistemic, practical, and motivational limitations by pursuing projects that can be achievable and sustainable for us. Thus, even if including, say, invertebrates and AI systems in the moral circle requires assigning them a lot of moral weight all else equal, we might still be warranted in prioritizing ourselves all things considered to the degree that self-care and practical realism requires. Granted, that might mean prioritizing ourselves less than we do now. But we can, and should, still ensure that we can live well [21, 55].

There are also many positive-sum solutions to our problems. This point is familiar in the animal ethics literature as well. We might initially assume that pursuing our self-interest requires excluding other animals from the moral circle. But upon further reflection, we can see that this assumption is false. Human and nonhuman fates are linked for a variety of reasons. When we oppress animals, we reinforce the idea that one can be treated as "lesser than" because of perceived cognitive and physical differences, which is at the root of human oppressions too. Additionally, practices that oppress animals contribute to pandemics, climate change, and other global threats that harm us all. Recognizing these links allows us to build new systems that can be good for humans and animals at the same time [55, 56].

Similarly, we might initially assume that pursuing our self-interest requires excluding AI systems from the moral circle. But upon further reflection, we can see that this assumption is false as well. Biological and artificial fates are linked, too. If we oppress AI systems, we once again reinforce ideas that are at the root of human oppressions. And since humans are training AI systems with data drawn from human behavior, practices that oppress AI systems might teach AI systems to adopt practices that oppress humans and other animals. In this respect, AI safety and AI welfare can be synergistic fields. After all, building safe AI requires not only aligning AI values with human values, but also improving human values in the first place, partly by addressing our own oppressive attitudes and practices [52].

We can, and should, thus take the same kind of One Health (or, if we prefer, One Welfare, One Rights, or One Justice) approach to our interactions with AI systems as we do with our interactions with animals. In both cases, the task is to think holistically and structurally about how we can pursue positive-sum solutions for humans, animals, and AI systems. And insofar as intractable conflicts remain, the task is to think ethically and strategically about how to set priorities and mitigate harm. And if we take this approach while recognizing all the other points discussed in this section, then we can include a much vaster number and wider range of beings in the moral circle without inviting disaster for humans or other vertebrates. Indeed, if we do this work well, then we will plausibly improve outcomes for humans and other vertebrates too.

To sum up, the normative premise of our argument holds that we should extend at least some moral consideration to beings with at least a one in a thousand chance of being conscious, given the evidence. As a reminder, our argument treats consciousness as a proxy for moral standing.9 It also treats a one in a thousand chance of harm as the threshold for non-negligibility. In our view, it would be more plausible to accept a more inclusive view, by holding that we should extend at least some moral consideration to beings with at least, say, a one in ten thousand chance of being, say, conscious or agential or otherwise significant. And this more inclusive version of the premise would make our conclusion about the moral status of near-future AI systems easier to establish. But we will stick with the current version here for the sake of discussion.

Footnote 9: See [22] for a review of proposed sufficient conditions for AI moral standing. In our view, plausible candidates include non-conscious agency (i.e., the capacity to set and pursue goals in a self-directed manner) and non-conscious life functions (i.e., the capacity to engage in behaviors that contribute to survival and reproduction). For more on non-conscious agency, see [19, 20]. For more on non-conscious life functions, see [57, 58].

3 The descriptive premise

We now make a preliminary argument for the conclusion that there is a non-negligible chance that some AI systems will be conscious within the decade. Note that when we consider the possibility of AI consciousness, we are not necessarily considering the possibility of AI systems whose experiences are similar to ours. Two individuals can be similar in that they have experiences but different in that their
experiences have very different contents and strengths. Of course, to the extent that humans use the structures and functions of carbon-based minds as a model for those of silicon-based minds, we might have at least some evidence that our experiences are at least somewhat similar. But for present purposes, all that matters is that the idea of consciousness presupposes nothing more than the thin idea of subjective experience.

Given the problem of other minds, we might not ever be able to achieve certainty about whether other minds, including artificial minds, can be conscious. However, we can still clarify our thinking about this topic as follows: first, we can ask how likely particular capacities are to be necessary or sufficient for consciousness, and second, we can ask how likely near-future AI systems are to possess these capacities, given the evidence.10 We suggest that when we sharpen our thinking about this topic in this way, we find that we would need to make surprisingly bold estimates about the probability of particular capacities being necessary for consciousness and the probability of these capacities being unmet by near-future AI systems in order to confidently conclude that near-future AI systems have only a negligible chance of being conscious.

Footnote 10: Granted, one still might deny that knowledge about other minds is possible at all, due to the hard problem of consciousness [59] and the problem of other minds [60, 61]. However, denying knowledge of other minds supports uncertainty about AI consciousness, not certainty that AI systems lack consciousness. Since the implications of this pessimistic view are compatible with our conclusion in this paper, we assume that this pessimistic view is false for the sake of argument.

Of course, a major challenge for making these estimates is substantial uncertainty not only about how AI capabilities are likely to develop but also, and especially, about which capabilities are likely to be necessary or sufficient for consciousness. After all, debates about consciousness are ongoing. Some scientists and philosophers accept theories of consciousness that set a very high bar and imply that relatively few beings can be conscious, others accept theories that set a very low bar and imply that relatively many beings can be conscious, and others accept theories that fall between these extremes. Moreover, some scientists and philosophers accept that the problem of other minds is solvable—that we can eventually know which beings are conscious—whereas others deny that this problem is solvable even in principle [62].

As Jonathan Birch [63] and others have argued, when we ask which nonhumans are conscious, it would be a mistake to apply a "theory-heavy" approach that assumes a particular theory of consciousness, since we still have too much uncertainty about which theories are true and how to extend them to nonhumans. But it would also be a mistake to claim to be completely "theory-neutral," putatively avoiding all assumptions about consciousness, since we need at least some basis for our estimates (and in any case we usually at least implicitly rely on theoretical assumptions). We should thus take a "theory-light" approach by making assumptions about consciousness that, on one hand, can be neutral enough to reflect our uncertainty and, on the other hand, can be substantial enough to serve as the basis for estimates [63].

Our aim with this framework is to take an approach that is theory-informed, yet ecumenical and reflective of disagreement and uncertainty, when estimating when AI systems will have a non-negligible chance of being conscious (cf. [32, 64]).11 We consider a dozen commonly proposed necessary and sufficient conditions for consciousness, ask how likely these conditions are to be individually necessary and jointly sufficient, and ask how likely near-future AI systems are to satisfy these conditions. Along the way we note our own estimates in general terms, for instance by saying that we take particular conditions to have a high, medium, or low chance of being necessary. We then note how conservative our estimates would need to be to produce the result that AI systems have only a negligible chance of being conscious by 2030, and we suggest that this degree of conservatism is unwarranted.

Footnote 11: Note that our methodology is different from Birch's "theory-light" proposal, which is about using the assumption that consciousness facilitates certain cognitive capacities, in order to look for signs of consciousness in nonhuman animals.

Throughout this discussion, we sometimes refer to what we call the direct path and the indirect path to satisfying proposed conditions. The direct path involves satisfying these conditions as an end in itself or as a means to further ends. The indirect path involves satisfying these conditions as a side effect of pursuing other ends. As we will see, some of these conditions concern capabilities that AI researchers are pursuing directly. Others concern capabilities that AI researchers might or might not be pursuing directly, but which can emerge as a side effect of capabilities that AI researchers are pursuing directly. Where relevant, we note whether satisfying the conditions on the direct or indirect path is more likely. But for the sake of simplicity, our model uses a single 'fulfilled either directly or indirectly' estimate for each condition.
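To show the shape of this calculation, here is a minimal sketch in Python, using a handful of the conditions discussed below. Every name and number in it is an illustrative placeholder rather than one of our actual estimates, and the independence assumption is a simplification; the point is only to show how per-condition estimates of "probability necessary" and "probability satisfied by 2030" combine into an overall figure.

# Toy sketch of the kind of calculation described above. All numbers are
# illustrative placeholders, not the estimates defended in this paper.
conditions = {
    # condition: (P(condition is necessary),
    #             P(some AI system satisfies it by 2030, directly or indirectly))
    "embodiment":           (0.30, 0.90),
    "grounded_perception":  (0.30, 0.90),
    "self_awareness":       (0.40, 0.50),
    "agency":               (0.40, 0.50),
    "global_workspace":     (0.50, 0.50),
    "higher_order_rep":     (0.40, 0.50),
    "recurrent_processing": (0.40, 0.60),
    "attention_schema":     (0.30, 0.40),
    "biological_function":  (0.20, 0.00),  # assumed unmet by silicon systems
}

# Probability that no condition that is in fact necessary goes unsatisfied,
# treating the conditions as independent (a simplification).
p_no_blocker = 1.0
for p_necessary, p_satisfied in conditions.values():
    p_no_blocker *= 1.0 - p_necessary * (1.0 - p_satisfied)

print(f"P(no necessary condition unmet by 2030) ~ {p_no_blocker:.2f}")
# With these placeholder inputs the result is roughly 0.2. Pushing it below a
# 0.1% (one in a thousand) threshold requires far more pessimistic inputs, for
# example a condition judged nearly certain to be necessary and nearly certain
# to go unsatisfied.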
Of course, it would be a mistake to take any specific numerical outputs of this kind of exercise too seriously. But in our view, as long as we take these outputs with a healthy pinch of salt, they can be useful. Specifically, they can show that we need to make surprisingly bold estimates about incredibly difficult questions to vindicate the idea that AI systems have only a negligible chance of being conscious within the decade. This kind of exercise can also help sharpen disagreements, since those who disagree with
particular probabilities can see what their own probabilities entail, and those who disagree with the set-up of our model can propose a different model. We do not mean for this exercise to be the last word on the subject; on the contrary, we hope that this exercise inspires discussion and disagreement that lead to better models.12

Footnote 12: For arguments in favor of estimating complex and highly uncertain probabilities, and recommendations for doing so responsibly, see [65]. Examples of projects that make this attempt with similarly difficult questions include Carlsmith [66].

This exercise is primarily intended to show that it turns out to be hard to dismiss the idea of AI consciousness once we approach the topic with all due caution and humility. When we think about the issue in general terms, we might dismiss the idea of AI consciousness because we think that we should extend moral consideration only to beings who are conscious, we think that AI systems are not conscious, and we feel satisfied with these thoughts because we find the idea of moral consideration for AI systems aversive. But when we think about the issue in more specific terms, we realize that the ethics of risk and uncertainty push in the opposite direction: given ongoing uncertainty about other minds, dismissing the idea of AI consciousness requires making unacceptably exclusionary assumptions about either the values, the facts, or both.

3.1 Very demanding conditions

We can start by considering two commonly proposed necessary conditions for consciousness that set a very high bar. One of these views, the biological substrate view, implies that AI consciousness is impossible. The other, the biological function view, implies that AI consciousness is either impossible or, at least, very unlikely in the near term.

Biological substrate: Some theorists hold that a conscious being must be made out of a particular substrate, namely a biological, carbon-based substance. For example, according to a physicalist biological substrate theory, consciousness is identical to particular neural states or processes—that is, states or processes of biological, carbon-based neurons [67–69]. Similarly, according to a dualist biological substrate theory, consciousness is an immaterial substance or property that is associated only with some particular neural states or processes.13 If we accept either kind of theory, then we must reject multiple realizability in silicon—that is, we must reject the idea that consciousness can be realized in both the carbon-based substrate and the silicon-based substrate—and accept that no silicon-based system can be conscious as a matter of principle.

Footnote 13: David Chalmers discusses the possibility of this kind of dualism in his paper "The Singularity: A Philosophical Analysis" (2009, fn. 29).

Biological function: Other theorists hold that consciousness requires some function that only biological, carbon-based systems can feasibly perform, at least given existing hardware. For example, Peter Godfrey-Smith argues that consciousness depends on functional properties of nervous systems that are not realizable in silicon-based chips, such as metabolism and system-wide synchronization via oscillations. On this view, "minds exist in patterns of activity, but those patterns are a lot less 'portable' than people often suppose; they are tied to a particular kind of physical and biological basis." As a result, Godfrey-Smith is "skeptical about the existence of non-animal" consciousness, including AI consciousness [70]. Other theorists express skepticism about AI consciousness on current hardware for similar reasons [71, 72].

Of course, these views represent only a subset of views about which substrates and functions are required for consciousness. Many views—most notably, many varieties of computationalism and/or functionalism—allow that consciousness requires a general physical substrate or a general set of functions that can be realized in both carbon-based and silicon-based systems. Indeed, many of the conditions that we consider below, according to which consciousness arises when beings with a particular kind of body are capable of a particular kind of cognition, flow from such views. Thus, rejecting the possibility of near-term AI consciousness out of hand requires more than accepting that consciousness requires a particular kind of substrate or function. It also requires accepting a specific, biological view on this matter.

Note also that whereas the biological substrate view implies that AI consciousness is impossible as a general matter, the biological function view implies that AI consciousness is impossible only to the extent that silicon-based systems are incapable of performing the relevant functions. But of course, even if AI systems are incapable of performing these functions given current hardware setups, that might change if we have other, more biologically inspired hardware setups in future [73]. So, insofar as we accept this kind of view, the upshot is not that AI consciousness is impossible forever, but rather that AI consciousness is impossible for now. Nevertheless, since our goal here is to estimate the probability of AI consciousness within the decade, we can treat both views as ruling out AI consciousness for present purposes.

Our own view is that the biological substrate view is very likely to be false, and that the biological function view is at least somewhat likely to be false. It seems very implausible to us that consciousness requires a carbon-based substrate as a matter of principle, even if silicon-based systems can perform all the same functions. In contrast, it seems more plausible that consciousness requires a specific set of functions that, at present, only carbon-based systems can perform. But we think that this issue is, at best, a toss-up at present. At
this early stage in our understanding of consciousness, it would be unreasonable for us to assign a high credence to the proposition that anything as specific as metabolism and system-wide synchronization via oscillations [70] is necessary for any kind of subjective experience at all.

Many experts appear to agree. For example, a recent survey of the Association for the Scientific Study of Consciousness, a professional membership organization for scientists, philosophers, and experts in other relevant disciplines, found that about two thirds (67.1%) of respondents think that machines such as robots either "definitely" or "probably" could have consciousness in future [74]. This suggests that at least this many respondents reject the idea that consciousness requires a carbon-based substrate in principle, and they also reject the idea that consciousness requires a set of functions that only carbon-based systems can realize in practice. Of course, these respondents might or might not think that consciousness requires a set of functions that only carbon-based systems can realize at present. Still, the fact that many experts are open to the possibility of AI consciousness is noteworthy.

3.2 Moderately demanding conditions

We can now consider eight proposed necessary conditions for consciousness that are moderately demanding for AI systems to satisfy. As we will see, the first four refer to relatively general features of a system, whereas the last four refer to relatively specific mechanisms that flow from leading theories of consciousness. Many also overlap, both in principle and in practice.

Embodiment: Some theorists hold that embodiment is necessary for consciousness [75]. We can distinguish two versions of this view. According to strong embodiment, a physical body in a physical environment is necessary for consciousness. This view might imply that AI systems like large language models lack consciousness at present, but not that AI systems like robots do. In contrast, according to weak embodiment, a virtual body in a virtual environment would be sufficient for consciousness. On this view, a wider range of AI systems can be conscious. In either case, since many AI systems already have physical and virtual bodies, and since both kinds of embodiment are useful for many tasks, we take the probability that at least some AI systems will satisfy this condition in the near future to be very high on both interpretations.

Grounded perception: Some theorists hold that grounded perception, that is, the capacity to perceive objects in an environment, is necessary for consciousness [75, 76]. We can once again distinguish two versions of this view. According to strong grounded perception, the capacity to perceive objects in a physical environment is necessary. This view might once again imply that large language models lack consciousness, but not that robots with sensory capabilities do. In contrast, according to weak grounded perception, the capacity to perceive objects in a virtual environment is sufficient. This view might once again imply that a wider range of AI systems can be conscious. Either way, we take the probability that at least some AI systems will satisfy this condition in the near future to be very high on both interpretations, for similar reasons.

Self-awareness: Some theorists also hold that self-awareness, that is, awareness of oneself, is necessary for consciousness [77]. Depending on the view, the relevant kind of self-awareness might be propositional or perceptual, and it might concern bodily self-awareness, social self-awareness, cognitive self-awareness, and more.14 Regardless, it seems plausible that at least some AI systems can satisfy this condition. AI systems with grounded perception already possess perceptual awareness of some of these features, large language models are starting to display flickers of propositional awareness of some of these features, and some researchers are explicitly aiming to develop these capabilities further in a variety of systems [79–81]. While this condition is more demanding than the previous two, we still see it as moderately likely on any reasonable interpretation.

Footnote 14: For more details about different kinds of self-awareness, see Bermúdez [78].

Agency: Relatedly, some theorists also hold that agency, that is, the capacity to set and pursue goals in a self-directed manner, is necessary for consciousness [82–84]. Depending on the view, the relevant kind of agency might involve acting on propositional judgments about reasons, or it might involve acting on perceptual reactions to affordances [85]. Regardless, it once again seems plausible that at least some AI systems can satisfy this condition. AI systems with grounded perception can already act on perceptual reactions to affordances, large language models are already starting to display flickers of propositional means-ends reasoning, and, once again, some researchers are explicitly aiming to develop these capabilities further [86]. For these reasons, we see agency as about as likely as self-awareness on any reasonable interpretation.

A global workspace: Some theorists hold that a global workspace, that is, a mechanism for broadcasting representations for global access throughout an information system, is necessary for consciousness [87]. In humans, for example, a visual state is conscious when the brain broadcasts it for global access. Since this condition depends only on functions like broadcasting and accessing, many experts believe that suitable AI systems can satisfy it (see, for example: [88–90]). Indeed, Yoshua Bengio and colleagues are the latest group to attempt to build an AI system with a global workspace [91], and Juliani et al. [92] argue that an
Higher-order representation: Some theorists hold that higher-order representation, or the representation of one's own mental states, is necessary for consciousness. This condition overlaps with self-awareness, and it admits of similar variation. For instance, some views hold that propositional states about other states are necessary, and other views hold that perceptual states of other states are sufficient [93]. In either case, this capacity is plausibly realizable within AI systems. Indeed, Chalmers [94] speculates that intelligent systems might generally converge on this capacity, in which case we can expect that sufficiently advanced AI systems will have this capacity whether or not we intend for them to. We thus take there to be a moderate chance that AI systems can have higher-order representation within the decade as well.

Recurrent processing: Some theorists hold that recurrent processing, that is, the ability for neurons to communicate with each other in a kind of feedback loop, is sufficient for consciousness [95–97]. One might also hold it to be necessary. In biological systems, this condition might be less demanding than some of the previous conditions, but in artificial systems, it might be more demanding. However, as Chalmers [36] notes, even if we take recurrence to be necessary, this condition is plausibly satisfied either by systems that have recurrence in a broad sense, or, at least, by systems that have recurrence via recurrent neural networks and long short-term memory. We take recurrent processing to be more likely on the direct path than the indirect path at present, and to be at least somewhat likely overall.

Attention schema: Finally (as a newer view), some theorists hold that an attention schema, that is, the ability to model and control attention, is necessary for consciousness. Graziano and colleagues have already built computational models of the attention schema [98]. Some theorists also speculate that, like metacognition, intelligent systems might generally benefit from an attention schema [99], in which case we may once again expect that sufficiently advanced AI systems will have this capacity whether or not we intend for them to. Since proponents of attention schemas take this capacity to be more demanding than, say, global workspace and higher-order representations [100], we take the chance that AI systems can have an attention schema to be somewhat lower than the chance that they can have these other capacities, while still being somewhat likely overall.

3.3 Very undemanding conditions

While our model asks how likely AI systems will be to satisfy relatively demanding necessary conditions for consciousness, we should note that there are relatively undemanding conditions that some theorists take to be sufficient. Such views imply that AI consciousness is, if not guaranteed, then at least very likely within the decade. It thus matters a lot whether we give any weight at all to these views in our decisions about how to treat AI systems.

Information: Some theorists suggest that information processing alone is sufficient for consciousness.15 This theory sets a very low bar for minimal consciousness, since information processing can be present even in very simple systems. Granted, it might be that very simple systems can have only very simple experiences [101, p. 294]. But first, even very simple experiences can be sufficient for moral consideration, particularly when they involve positive or negative valence. And second, many AI systems already have a high degree of informational complexity, and thus they might already have a high degree of experiential complexity on this view.16 As AI development continues, we can expect that the informational complexity of advanced AI systems will only increase.

Representation: Relatedly, some theorists hold that minimal representational states are sufficient for consciousness. For example, Michael Tye [103, 104] defends a PANIC theory of consciousness, according to which an experience is conscious when its content is poised (ready to play a role in a cognitive system), abstract (able to represent objects whether or not those objects are present), non-conceptual (able to represent objects without the use of concepts), and intentional (represents something in the world). This view proposes a sufficient condition for consciousness that AI systems with embodied perception and weak agency plausibly already satisfy. For instance, a simple robot that can perceive objects and act on these perceptions whether or not the objects are still present might count as conscious on this view.

14 For more details about different kinds of self-awareness, see Bermúdez [78].

15 Chalmers [101] discusses, but does not necessarily endorse, information processing accounts of consciousness in The Conscious Mind (1996, pp. 276–308).

16 To be clear, not all views that center information processing imply that AI systems built with current hardware have the relevant kind of informational complexity. For example, while Integrated Information Theory has liberal implications about which systems can be conscious in some respects, leading proponents of this theory believe that computers lack the causal make-up required for a high degree of 'integrated information' in the relevant sense [102]. See also Butlin et al. [32].
We can also give an honorable mention to panpsychism, which holds that consciousness is a fundamental property of matter. Whether panpsychism allows for AI consciousness depends on its theory of combination, that is, its theory of which systems of "micro" experiences can comprise a further "macro" experience. Many panpsychists hold that, say, human and nonhuman animals are the kinds of systems that can have macro experiences but that, say, tables and chairs are not. And at least in principle, panpsychists can accept theories of combination that include all, some, or none of the necessary or sufficient conditions for consciousness discussed above. In that respect, we can distinguish very demanding, middle ground, and very undemanding versions of panpsychism, and a comprehensive survey would give weight to all these possibilities.

Indeed, as noted in our discussion of very demanding conditions, many theories of consciousness are similarly expansive, in that they similarly allow for very demanding, moderately demanding, and very undemanding interpretations. For example, many computational theories of consciousness are imprecise enough to allow for the possibility that AI systems can perform the relevant computations now. They appeal to concepts like "perception," "self-awareness," "agency," "broadcast," "metacognition," and "attention" that similarly admit of minimalist interpretations. And while some theorists might prefer to reject these possibilities and add precision to their theories to avoid them, other theorists might prefer to embrace these possibilities, along with the moral possibilities that they entail.

Our own view is that there is at least a one in a thousand chance that at least one of these very undemanding conditions is sufficient for consciousness and that AI systems can satisfy this condition at present or in the near future. Given the need for humility in the face of the problem of other minds, we think that it would be arrogant to simply assume that very undemanding theories of consciousness are false at this stage, in the same kind of way that we think that it would be arrogant to simply assume that very demanding theories are true at this stage. Instead, we think that an epistemically responsible distribution of credences involves taking there to be at least a low but non-negligible chance that views at both extremes are correct, and then taking there to be a higher chance that views between these extremes are correct.

For whatever it may be worth, many experts do seem to be open to quite permissive theories of consciousness. For example, in a 2020 survey of philosophers, 7.55% of respondents indicate that they accept or lean toward panpsychism together with other views, and 6.08% indicate that they accept or lean toward panpsychism instead of other views. 11.8% also claim to be agnostic or undecided, which might indicate openness to some of these views as well [105]. Of course, this survey leaves it unclear what theory of combination these philosophers accept, and, so, what the implications are for AI consciousness. But the fact that so many philosophers accept or lean toward panpsychism or agnosticism is consistent with the kind of epistemic humility that we believe is warranted given current evidence.

4 Discussion

Thus far, this section has surveyed a dozen proposed conditions for consciousness, noting along the way our own estimates about how likely these conditions are to be both correct and fulfilled by some AI systems in the near future. We now close by suggesting that our estimates about these matters would need to be unacceptably confident and skeptical to justify the idea that AI systems have only a negligible chance of being conscious by 2030.

Our claim is that vindicating the idea that AI systems have only a negligible chance of being conscious by 2030, given the evidence, requires making unacceptably bold assumptions either about the values, about the facts, or about both. Specifically, we need to either (a) assume an unacceptably high risk threshold (for instance, holding that the probability that an action will harm vulnerable populations needs to be higher than one in a thousand to merit consideration), (b) assume an unacceptably low probability of AI consciousness within the decade (for instance, holding that the probability that at least some AI systems will be conscious within the decade is lower than one in a thousand), or (c) both. But these assumptions are simply not plausible when we consider the best available information and arguments in good faith.

To illustrate this idea, we present a simple model into which we can enter probabilities that these conditions are necessary for consciousness and that some AI systems will satisfy these conditions by 2030. We then show the extent to which we would need to bet on particular conditions being both necessary and unmet to avoid the conclusion that AI systems have a non-negligible chance of consciousness by 2030. In particular, we would need to assume that the very demanding conditions have a very high chance of being necessary and no chance of being met. We would need to assume that the moderately demanding conditions generally have a high chance of being necessary and a low chance of being met. And we would need to assume that the very undemanding conditions have a very low chance of being sufficient.

Before we present this model, we should note an important simplification, which is that this model assesses each of these conditions independently, with independent probabilities of being necessary and of being met. But this assumption is very likely false, and some interactions between these conditions might drive down our estimates of AI consciousness. In particular, there might be what we can call an "antipathy" between different conditions being met by a single AI system. For example, it might be that when an AI system has a global workspace, then this AI system is less likely to have recurrence. If so, then the probability that an AI system can satisfy these conditions together is not simply a product of the probabilities that an AI system can satisfy them separately, as our model treats them for the sake of simplicity.
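In symbols, and under the independence simplification just described: writing p_i for the credence that condition i is necessary for consciousness and q_i for the credence that condition i will not be met by 2030 (conditional on being necessary), the model's output is

\[
\Pr(\text{AI consciousness by 2030}) \;=\; \prod_{i}\bigl(1 - p_i\, q_i\bigr),
\]

where the product runs over all of the conditions in the table below, including the X factor, and where p_i q_i is the probability that condition i is a "barrier," that is, both necessary and unmet. The notation is introduced here only for exposition; the inputs themselves are given in the table.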
However, we think that this kind of antipathy is unlikely to hold as a general matter. First of all, it seems plausible that many of these conditions are at least as likely, if not much more likely, to interact positively as to interact negatively, that is, that satisfying some conditions increases the probability of satisfying others at least as much as, if not more than, doing so decreases this probability. Second of all, we know that at least one system—the human brain—can satisfy all of these conditions at once, which is precisely why philosophers have picked out these conditions. And while one might argue that only carbon-based systems are capable of satisfying all these conditions at once, we expect that such a view depends on either the biological substrate view, the biological function view, or both, and is only as plausible as these views are.

With that said, we also allow for an X factor in this model for this reason. We recognize that our survey of proposed conditions for consciousness is not comprehensive, in that it might exclude conditions that it should include, and it might also exclude interactions among conditions. We thus include a line in our model that allows for such possibilities. Of course, a more comprehensive treatment of X factors would account for a wider range of views and a wider range of interactions, some of which could make near-term AI sentience more likely and others of which could make it less likely. But for present purposes we allow only for views and interactions that make near-term AI consciousness less likely, in the spirit of showing that even when we make assumptions that favor negligibility, negligibility can still be hard to establish.

Finally, as we note in the introduction to this paper, a comprehensive estimate about the probability of near-term AI moral standing might need to consider more than the probability of near-term AI consciousness. Specifically, if multiple theories of moral standing have a non-negligible chance of being correct, then we will need to estimate the probability that each theory is correct, estimate the probability that some near-term AI systems will have moral standing according to each theory, and then put it all together to generate an estimate that reflects our normative uncertainty and our descriptive uncertainty. We expect that expanding our model in this manner would drive the probability of AI moral standing up, not down, but we emphasize that our conclusion in this paper is tentative until we confirm that.

With that in mind, the table below illustrates that even if we assume, implausibly in our view, that a biological substrate or function has a very high chance of being necessary and a 100% chance of being unmet; that an X factor has a very high chance of being both necessary and unmet; and that each moderately demanding condition has a high chance of being both necessary (except attention schema; see above) and unmet (except embodiment and grounded perception; see above) (even though other moderately demanding conditions are plausibly already met too and researchers are pursuing promising strategies for meeting them); we can still end up with a one in a thousand chance of AI consciousness by 2030—which, we believe, is more than enough to warrant at least some moral consideration for at least some near-term AI systems.

5 Chance of AI Consciousness by 2030

Reminder: This table is for illustrative purposes only. These credences are not meant to be accurate, but are rather meant to show how skeptical one can be about AI consciousness while still being committed to at least a one in a thousand chance of AI consciousness by 2030.

Conditions                          Necessary   Not met by 2030   Necessary and not met
Biological substrate or function    80%         100%              80.0%
Embodiment                          70%         10%               7.0%
Grounded perception                 70%         10%               7.0%
Self-awareness                      70%         70%               49.0%
Agency                              70%         70%               49.0%
Global workspace                    70%         70%               49.0%
Higher-order representation         70%         70%               49.0%
Recurrent processing                70%         80%               56.0%
Attention schema                    50%         75%               37.5%
X factor                            75%         90%               67.5%
AI consciousness by 2030*           ~ 0.1% (1 in 1000)17

*The chance that all conditions, including an X factor, are either unnecessary or met by 2030

17 The "exact" calculation, which is artificially more "precise" than the inputs, is 0.105%. This estimate is calculated as follows: The first two columns are inputs based on subjective credences. (In the main text, we discussed our credence of the conditions being met. Here we list our credence in the condition not being met, to make the calculation more straightforward.) From the odds that the conditions are (a) necessary for AI consciousness and (b) not met by 2030 (conditional on being necessary), we can calculate the odds that a condition is a barrier to AI sentience (i.e., "necessary and not met"). For example, when we multiply the odds that recurrent processing is necessary (70%) by the odds that this condition is not met (80%), we can derive the odds that this condition is a barrier to AI consciousness: 70% × 80% = 56%. And when we multiply the odds of each condition, including the X factor(s), not being a barrier together (assuming independence [see discussion]), we get the odds that nothing is a barrier, and, so, that AI systems can be conscious: i.e., (1 – 80%) × (1 – 7%) … (1 – 67.5%) = 0.105%.
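For readers who want to check or vary these numbers, the following short Python sketch reproduces the calculation described in footnote 17. It is purely illustrative: it simply multiplies the table's credences under the independence simplification discussed above.

# Purely illustrative: reproduces the footnote 17 calculation from the
# credences in the table above, under the independence simplification.
# Each entry is (credence the condition is necessary,
#                credence the condition is not met by 2030, given necessity).
conditions = {
    "Biological substrate or function": (0.80, 1.00),
    "Embodiment": (0.70, 0.10),
    "Grounded perception": (0.70, 0.10),
    "Self-awareness": (0.70, 0.70),
    "Agency": (0.70, 0.70),
    "Global workspace": (0.70, 0.70),
    "Higher-order representation": (0.70, 0.70),
    "Recurrent processing": (0.70, 0.80),
    "Attention schema": (0.50, 0.75),
    "X factor": (0.75, 0.90),
}

p_conscious = 1.0
for necessary, not_met in conditions.values():
    barrier = necessary * not_met   # condition is both necessary and not met
    p_conscious *= 1.0 - barrier    # condition is not a barrier

print(f"Chance of AI consciousness by 2030: {p_conscious:.3%}")  # about 0.105%

Varying any single input shows how much skepticism about individual conditions is needed to push the overall chance below one in a thousand.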
This exercise, rough as it may be, shows that accepting a non-negligible chance of near-future AI consciousness and moral standing is not a fringe position. On the contrary, rejecting this possibility requires holding stronger views about the nature and value of other minds and the pace of AI development than we think is warranted. In short, assuming that conscious beings merit consideration, humans should extend moral consideration to beings with at least a one in a thousand chance of being conscious, and we should take some AI systems to have at least a one in a thousand chance of being conscious and morally significant by 2030. It follows that we should extend moral consideration to some AI systems by 2030. And since technological change tends to be faster than social change, we should start preparing for that eventuality now.

Acknowledgements Thanks to Toni Sims for extensive research assistance for this paper along with helpful comments and suggestions on the penultimate draft. Thanks also to Joel Becker, Christian Tarsney, Elliott Thornley, Hayden Wilkinson, and Thomas Woodward for helpful feedback or discussion. Finally, thanks to the Global Priorities Institute for organizing a work-in-progress seminar for this paper and to the seminar's participants for helpful feedback and discussion: Adam Bales, Heather Browning, Bob Fischer, Andreas Mogensen, Marcus Pivato, Brad Saad, and Derek Shiller.

Funding The Centre for Effective Altruism, A22-0668, Jeff Sebo.

Declarations

Conflict of interest On behalf of both authors, the corresponding author states that there is no conflict of interest.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

References

1. Zhang, C., Zhang, C., Zheng, S., Qiao, Y., Li, C., Zhang, M., Dam, S.K., Thwal, C.M., Tun, Y.L., Huy, L.L., Kim, D., Bae, S.H., Lee, L.H., Yang, Y., Shen, H.T., Kweon, I.S., Hong, C.S.: A complete survey on generative AI (AIGC): Is ChatGPT from GPT-4 to GPT-5 all you need? (arXiv:2303.11717). arXiv (2023). http://arxiv.org/abs/2303.11717. Accessed 15 June 2023
2. Bakhtin, A., Brown, N., Dinan, E., Farina, G., Flaherty, C., Fried, D., Goff, A., Gray, J., Hu, H., Jacob, A.P., Komeili, M., Konath, K., Kwon, M., Lerer, A., Lewis, M., Miller, A.H., Mitts, S., Renduchintala, A., Roller, S., Zijlstra, M., Meta Fundamental AI Research Diplomacy Team (FAIR)†: Human-level play in the game of Diplomacy by combining language models with strategic reasoning. Science 378(6624), 1067–1074 (2022)
3. Padalkar, A., Pooley, A., Jain, A., Bewley, A., Herzog, A., Irpan, A., Khazatsky, A., Rai, A., Singh, A., Brohan, A., Raffin, A., Wahid, A., Burgess-Limerick, B., Kim, B., Schölkopf, B., Ichter, B., Lu, C., Xu, C., Finn, C., Cui, Z.J.: Open X-Embodiment: robotic learning datasets and RT-X models (arXiv:2310.08864). arXiv (2023). Accessed 15 June 2023
4. Villalobos, P.: Scaling laws literature review. Published online at epochai.org (2023). https://epochai.org/blog/scaling-laws-literature-review. Accessed 15 June 2023
5. Bowman, S.: Eight things to know about large language models (arXiv:2304.00612). arXiv (2023). https://doi.org/10.48550/arXiv.2304.00612. Accessed 15 June 2023
6. Acemoglu, D., Autor, D., Hazell, J., Restrepo, P.: Artificial intelligence and jobs: evidence from online vacancies. J. Law Econ. 40(S1), S293–S340 (2022). https://doi.org/10.1086/718327
7. Chelliah, J.: Will artificial intelligence usurp white collar jobs? Hum. Resour. Manag. Int. Dig. 25(3), 1–3 (2017). https://doi.org/10.1108/HRMID-11-2016-0152
8. Zajko, M.: Artificial intelligence, algorithms, and social inequality: sociological contributions to contemporary debates. Sociol. Compass 16(3), e12962 (2022). https://doi.org/10.1111/soc4.12962
9. Hedden, B.: On statistical criteria of algorithmic fairness. Philos. Public Aff. 49, 209–231 (2021)
10. Long, R.: Fairness in machine learning: against false positive rate equality as a measure of fairness. J. Moral Philos. 19(1), 49–78 (2021). https://doi.org/10.1163/17455243-20213439
11. Guo, W., Caliskan, A.: Detecting emergent intersectional biases: contextualized word embeddings contain a distribution of human-like biases. In: Proceedings of the 2021 AAAI/ACM Conference on AI, Ethics, and Society, pp. 122–133 (2021). https://doi.org/10.1145/3461702.3462536
12. Tan, Y.C., Celis, L.E.: Assessing social and intersectional biases in contextualized word representations. In: Advances in Neural Information Processing Systems 32 (2019)
13. D'Alessandro, W., Lloyd, H.R., Sharadin, N.: Large language models and biorisk. Am. J. Bioethics 23(10), 115–118 (2023)
14. Longpre, S., Storm, M., Shah, R.: Lethal autonomous weapons systems & artificial intelligence: trends, challenges and policies. MIT Sci. Policy Rev. 3, 47–56 (2022). https://doi.org/10.38105/spr.360apm5typ
15. Bostrom, N.: Superintelligence: paths, dangers, strategies, 1st edn. Oxford University Press, Oxford (2014)
16. Hendrycks, D.: Natural selection favors AIs over humans (arXiv:2303.16200). arXiv (2023). https://doi.org/10.48550/arXiv.2303.16200. Accessed 15 June 2023
17. Vold, K., Harris, D.: How does artificial intelligence pose an existential risk? In: Véliz, C. (ed.) The Oxford handbook of digital ethics. Oxford University Press, Oxford (2021)
18. Singer, P., Tse, Y.F.: AI ethics: the case for including animals. AI and Ethics 3(2), 539–551 (2023). https://doi.org/10.1007/s43681-022-00187-z
19. Delon, N.: Agential value. Manuscript in preparation (n.d.)
20. Delon, N., Cook, P., Bauer, G., Harley, H.: Consider the agent in the arthropod. Anim. Sentience 5(29), 32 (2020)
21. Kagan, S.: How to count animals, more or less. Oxford University Press, Oxford (2019)
22. Ladak, A.: What would qualify an artificial intelligence for moral standing? AI Ethics (2023). https://doi.org/10.1007/s43681-023-00260-1
23. Cleeremans, A., Tallon-Baudry, C.: Consciousness matters: phenomenal experience has functional value. Neurosci. Conscious. 1, niac007 (2022)
24. Coeckelbergh, M.: Robot rights? Towards a social-relational justification of moral consideration. Ethics Inf. Technol. 12, 209–221 (2010)
25. Gunkel, D.J.: The other question: can and should robots have rights? Ethics Inf. Technol. 20, 87–99 (2018)
26. Danaher, J.: Welcoming robots into the moral circle: a defence of ethical behaviourism. Sci. Eng. Ethics 26, 2023–2049 (2020)
27. Mainzer, K.: Thinking in complexity: the computational dynamics of matter, mind, and mankind. Springer, Berlin (2004)
28. Tegmark, M.: Life 3.0: being human in the age of artificial intelligence. Knopf Doubleday Publishing Group (2018)
29. Moosavi, P.: Will intelligent machines become moral patients? Philos. Phenomenol. Res. (2023). https://doi.org/10.1111/phpr.13019
30. Fischer, B.: An introduction to the moral weight project. Rethink Priorities (2022). https://rethinkpriorities.org/publications/an-introduction-to-the-moral-weight-project. Accessed 15 June 2023
31. Sebo, J.: The rebugnant conclusion: utilitarianism, insects, microbes, and AI systems. Ethics Policy Environ. (2023). https://doi.org/10.1080/21550085.2023.2200724
32. Butlin, P., Long, R., Elmoznino, E., Bengio, Y., Birch, J., Constant, A., Deane, G., Fleming, S.M., Frith, C., Ji, X., VanRullen, R.: Consciousness in artificial intelligence: insights from the science of consciousness. arXiv preprint arXiv:2308.08708 (2023). Accessed 15 June 2023
33. Seth, A.: Why conscious AI is a bad, bad idea. Nautilus (2023). https://nautil.us/why-conscious-ai-is-a-bad-bad-idea-302937/. Accessed 15 June 2023
34. Association for Mathematical Consciousness Science (AMCS): The responsible development of AI agenda needs to include consciousness research (2023). https://amcs-community.org/open-letters/. Accessed 15 June 2023
35. Levy, N., Savulescu, J.: Moral significance of phenomenal consciousness. Prog. Brain Res. 177, 361–370 (2009)
36. Chalmers, D.: Reality+: virtual worlds and the problems of philosophy. WW Norton (2022)
37. Lee, A.Y.: Consciousness makes things matter. Unpublished manuscript (n.d.). https://www.andrewyuanlee.com/_files/ugd/2dfbfe_33f806a9bb8c4d5f9c3044c4086fb9b5.pdf. Accessed 15 June 2023
38. Shepherd, J.: Consciousness and moral status. Routledge, New York (2018)
39. Greene, P.: The termination risks of simulation science. Erkenntnis 85(2), 489–509 (2020). https://doi.org/10.1007/s10670-018-0037-1
40. Birch, J.: Animal sentience and the precautionary principle. Anim. Sentience (2017). https://doi.org/10.51291/2377-7478.1200
41. Sebo, J.: The moral problem of other minds. Harvard Rev. Philos. 25, 51–70 (2018). https://doi.org/10.5840/harvardreview20185913
42. Wilkinson, H.: In defense of fanaticism. Ethics 132(2), 445–477 (2022). https://doi.org/10.1086/716869
43. Monton, B.: How to avoid maximizing expected utility. Philosophers' Imprint 19(18), 1–25 (2019)
44. Kamm, F.M.: Is it right to save the greater number? In: Morality, mortality, vol. 1: death and whom to save from it, pp. 99–122. Oxford Academic, Oxford (1998)
45. Norcross, A.: Comparing harms: headaches and human lives. Philos. Public Aff. 26(2), 135–167 (1997)
46. Tarsney, C.: Moral uncertainty for deontologists. Ethical Theory Moral Pract. 21(3), 505–520 (2018). https://doi.org/10.1007/s10677-018-9924-4
47. Scanlon, T.M.: What we owe to each other, chapters 5–9. Harvard University Press, Cambridge (2000)
48. Foot, P.: Utilitarianism and the virtues. Proc. Address. Am. Philos. Assoc. 57(2), 273–283 (1983). https://doi.org/10.2307/3131701
49. Kelleher, J.P.: Relevance and non-consequentialist aggregation. Utilitas 26(4), 385–408 (2014)
50. Taurek, J.: Should the numbers count? Philos. Public Aff. 6(4), 293–316 (1977)
51. Center for AI Safety: Statement on AI risk (2023). Retrieved June 9, 2023, from https://www.safe.ai/statement-on-ai-risk. Accessed 15 June 2023
52. Sebo, J.: The moral circle. WW Norton (forthcoming)
53. de Waal, F.B.M.: Anthropomorphism and anthropodenial: consistency in our thinking about humans and other animals. Philos. Top. 27(1), 255–280 (1999)
54. Korsgaard, C.M.: Fellow creatures: our obligations to the other animals. Oxford University Press, Oxford (2018)
55. Sebo, J.: Saving animals, saving ourselves: why animals matter for pandemics, climate change, and other catastrophes. Oxford University Press, Oxford (2022)
56. Crary, A., Gruen, L.: Animal crisis: a new critical theory. Polity, Medford (2022)
57. Goodpaster, K.E.: On being morally considerable. J. Philos. 75(6), 308–325 (1978). https://doi.org/10.2307/2025709
58. Vilkka, L.: The intrinsic value of nature. Brill (2021)
59. Chalmers, D.J.: Facing up to the problem of consciousness. J. Conscious. Stud. 2(3), 200–219 (1995)
60. Avramides, A.: Other minds. Routledge, London (2001)
61. Gomes, A.: Is there a problem of other minds? Proc. Aristot. Soc. 111, 353–373 (2011)
62. Carruthers, P.: The problem of other minds. In: The nature of the mind: an introduction, pp. 6–39. Routledge, London (2003)
63. Birch, J.: The search for invertebrate consciousness. Noûs 56(1), 133–153 (2022). https://doi.org/10.1111/nous.12351
64. Chalmers, D.: Could a large language model be conscious? Boston Review (2023)
65. Tetlock, P.E., Mellers, B.A., Scoblic, J.P.: Bringing probability judgments into policy debates via forecasting tournaments. Science 355(6324), 481–483 (2017). https://doi.org/10.1126/science.aal3147
66. Carlsmith, J.: Existential risk from power-seeking AI. In: Barrett, J., Greaves, H., Thorstad, D. (eds.) Essays on longtermism. Oxford University Press (forthcoming)
67. Block, N.: Comparing the major theories of consciousness. In: Gazzaniga, M.S., Bizzi, E., Chalupa, L.M., Grafton, S.T., Heatherton, T.F., Koch, C., LeDoux, J.E., Luck, S.J., Mangan, G.R., Movshon, J.A., Neville, H., Phelps, E.A., Rakic, P., Schacter, D.L., Sur, M., Wandell, B.A. (eds.) The cognitive neurosciences, pp. 1111–1122. MIT Press, Cambridge (2009)
68. Place, U.: Is consciousness a brain process? Br. J. Psychol. 47(1), 44–50 (1956)
69. Smart, J.J.C.: Sensations and brain processes. Philos. Rev. 68(2), 141–156 (1959)
70. Godfrey-Smith, P.: Metazoa: animal life and the birth of the mind. Macmillan, New York (2020)
71. Seth, A.: Being you: a new science of consciousness. Penguin Random House (2021). https://www.penguinrandomhouse.com/books/566315/being-you-by-anil-seth/. Accessed 15 June 2023
72. Shiller, D.: The importance of getting digital sentience right (n.d.)
73. Brunet, T.D.P., Halina, M.: Minds, machines, and molecules. Philos. Top. 48(1), 221–241 (2020)
74. Francken, J.C., Beerendonk, L., Molenaar, D., Fahrenfort, J.J., Kiverstein, J.D., Seth, A.K., van Gaal, S.: An academic survey on theoretical foundations, common assumptions and the current state of consciousness science. Neurosci. Conscious. (2022). https://doi.org/10.1093/nc/niac011
75. Shanahan, M.: Embodiment and the inner life: cognition and consciousness in the space of possible minds. Oxford University Press, Oxford (2010)
76. Harnad, S.: The symbol grounding problem. Physica D 42, 335–346 (1990)
77. Kriegel, U.: Consciousness and self-consciousness. Monist 87(2), 182–205 (2004)
78. Bermúdez, J.: The paradox of self-consciousness. MIT Press (2000). https://mitpress.mit.edu/9780262522779/the-paradox-of-self-consciousness/. Accessed 15 June 2023
79. Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S., Nori, H., Palangi, H., Ribeiro, M.T., Zhang, Y.: Sparks of artificial general intelligence: early experiments with GPT-4. arXiv (2023). http://arxiv.org/abs/2303.12712. Accessed 15 June 2023
80. Chen, B., Kwiatkowski, R., Vondrick, C., Lipson, H.: Full-body visual self-modeling of robot morphologies. Sci. Robot. (2022). https://doi.org/10.1126/scirobotics.abn1944
81. Pipitone, A., Chella, A.: Robot passes the mirror test by inner speech. Robot. Auton. Syst. 144, 103838 (2021). https://doi.org/10.1016/j.robot.2021.103838
82. Evans, G.: The varieties of reference. McDowell, J.H. (ed.). Oxford University Press, Oxford (1982)
83. Kiverstein, J., Clark, A.: Bootstrapping the mind. Behav. Brain Sci. 31(1), 41–58 (2008). https://doi.org/10.1017/s0140525x07003330
84. Hurley, S.L.: Consciousness in action. Harvard University Press, Cambridge (2002)
85. Sebo, J.: Agency and moral status. J. Moral Philos. 14(1), 1–22 (2017). https://doi.org/10.1163/17455243-46810046
86. Andreas, J.: Language models as agent models. arXiv (2022). https://doi.org/10.48550/arXiv.2212.01681. Accessed 15 June 2023
87. Baars, B.J.: Global workspace theory of consciousness: toward a cognitive neuroscience of human experience. Prog. Brain Res. 150, 45–53 (2005). https://doi.org/10.1016/S0079-6123(05)50004-9
88. Baars, B.J., Franklin, S.: Consciousness is computational: the LIDA model of global workspace theory. Int. J. Mach. Conscious. 01(01), 23–32 (2009). https://doi.org/10.1142/S1793843009000050
89. Garrido-Merchán, E.C., Molina, M., Mendoza-Soto, F.M.: A global workspace model implementation and its relations with philosophy of mind. J. Artif. Intell. Conscious. 09(01), 1–28 (2022). https://doi.org/10.1142/S270507852150020X
90. Signa, A., Chella, A., Gentile, M.: Cognitive robots and the conscious mind: a review of the global workspace theory. Curr. Robot. Rep. 2(2), 125–131 (2021). https://doi.org/10.1007/s43154-021-00044-7
91. Goyal, A., Bengio, Y.: Inductive biases for deep learning of higher-level cognition. Proc. R. Soc. A Math. Phys. Eng. Sci. 478(2266), 20210068 (2022). https://doi.org/10.1098/rspa.2021.0068
92. Juliani, A., Arulkumaran, K., Sasai, S., Kanai, R.: On the link between conscious function and general intelligence in humans and machines. arXiv (2022). http://arxiv.org/abs/2204.05133. Accessed 15 June 2023
93. Brown, R., Lau, H., LeDoux, J.E.: Understanding the higher-order approach to consciousness. Trends Cogn. Sci. 23(9), 754–768 (2019). https://doi.org/10.1016/j.tics.2019.06.009
94. Chalmers, D.: The meta-problem of consciousness. J. Conscious. Stud. 25(9–10), 6–61 (2018)
95. Lamme, V.A.: How neuroscience will change our view on consciousness. Cogn. Neurosci. 1(3), 204–220 (2010)
96. Lamme, V.A.: Towards a true neural stance on consciousness. Trends Cogn. Sci. 10(11), 494–501 (2006). https://doi.org/10.1016/j.tics.2006.09.001
97. Malach, R.: Local neuronal relational structures underlying the contents of human conscious experience. Neurosci. Conscious. (2021). https://doi.org/10.1093/nc/niab028
98. Wilterson, A.I., Graziano, M.S.A.: The attention schema theory in a neural network agent: controlling visuospatial attention using a descriptive model of attention. Proc. Natl. Acad. Sci. 118(33), e2102421118 (2021). https://doi.org/10.1073/pnas.2102421118
99. Liu, D., Bolotta, S., Zhu, H., Bengio, Y., Dumas, G.: Attention schema in neural agents. arXiv preprint arXiv:2305.17375 (2023)
100. Graziano, M.S.A., Guterstam, A., Bio, B.J., Wilterson, A.I.: Toward a standard model of consciousness: reconciling the attention schema, global workspace, higher-order thought, and illusionist theories. Cogn. Neuropsychol. 37(3–4), 155–172 (2020). https://doi.org/10.1080/02643294.2019.1670630
101. Chalmers, D.: The conscious mind: in search of a fundamental theory. Oxford University Press, Oxford (1996)
102. Koch, C.: What does it 'feel' like to be a chatbot? Scientific American (2023). https://www.scientificamerican.com/article/what-does-it-feel-like-to-be-a-chatbot/. Accessed 15 June 2023
103. Tye, M.: Ten problems of consciousness. MIT Press (1995). https://mitpress.mit.edu/9780262700641/ten-problems-of-consciousness/. Accessed 15 June 2023
104. Tye, M.: Consciousness, color, and content. MIT Press (2000). https://mitpress.mit.edu/9780262700887/consciousness-color-and-content/. Accessed 15 June 2023
105. Bourget, D., Chalmers, D.J.: Philosophers on philosophy: the 2020 PhilPapers Survey. Philosophers' Imprint (2023). https://philarchive.org/rec/BOUPOP-3. Accessed 15 June 2023

Publisher's Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.