
Why Real Citizens Would Turn to Artificial Leaders

Published: 11 July 2021

Abstract

Governments are increasingly using artificial intelligence to improve workflows and services. Applications range from predicting climate change, crime, and earthquakes to flu outbreaks, low air quality, and tax fraud. Artificial agents are already having an impact on eldercare, education, and open government, enabling users to complete procedures through a conversational interface. Whether replacing humans or assisting them, they are the technological fix of our times. In two experiments and a follow-up study, we investigate factors that influence the acceptance of artificial agents in positions of power, using attachment theory and disappointment theory as explanatory models. We found that when the state of the world provokes anxiety, citizens perceive artificial agents as a reliable proxy to replace human leaders. Moreover, people accept artificial agents as decision-makers in politics and security more willingly when they deem their leaders or government to be untrustworthy, disappointing, or immoral. Finally, we discuss these results with respect to theories of technology acceptance and the delegation of duties and prerogatives.


Cited By

  • (2024) Designing a Human-centered AI Tool for Proactive Incident Detection Using Crowdsourced Data Sources to Support Emergency Response. Digital Government: Research and Practice 5, 1 (2024), 1–19. DOI: 10.1145/3633784. Online publication date: 12 March 2024.
  • (2024) After confronting one uncanny valley, another awaits. Nature Reviews Electrical Engineering 1, 5 (2024), 276–277. DOI: 10.1038/s44287-024-00041-w. Online publication date: 4 April 2024.
  • (2023) Trustworthy artificial intelligence and the European Union AI act: On the conflation of trustworthiness and acceptability of risk. Regulation & Governance 18, 1 (2023), 3–32. DOI: 10.1111/rego.12512. Online publication date: 6 February 2023.

Published In

Digital Government: Research and Practice, Volume 2, Issue 3
Regular Papers
July 2021
102 pages
EISSN:2639-0175
DOI:10.1145/3474845
Issue’s Table of Contents
This work is licensed under a Creative Commons Attribution International 4.0 License.

Publisher

Association for Computing Machinery

New York, NY, United States

Publication History

Published: 11 July 2021
Online AM: 05 February 2021
Accepted: 01 January 2021
Revised: 01 December 2020
Received: 01 June 2020
Published in DGOV Volume 2, Issue 3

Author Tags

  1. Agents
  2. artificial intelligence
  3. attitudes
  4. competence
  5. technology acceptance
  6. trust

Qualifiers

  • Research-article
  • Research
  • Refereed

Article Metrics

  • Downloads (last 12 months): 253
  • Downloads (last 6 weeks): 29
Reflects downloads up to 28 January 2025.

