A new film from acclaimed director Leanne Pooley.
Keir Dullea
- Self - Narrator
- (voice)
Max Tegmark
- Self - Astrophysicist
- (as Prof. Max Tegmark)
- …
Louis Rosenberg
- Self - CEO & Chief Scientist, Unanimous A.I.
- (as Dr. Louis Rosenberg)
Stuart J. Russell
- Self
- (as Stuart Russell)
Rodney Brooks
- Self - Robotics, M.I.T.
- (as Emeritus Prof. Rodney Brooks)
Mary Cummings
- Self - Humans & Autonomy Lab, Duke University
- (as Dr. Mary Cummings)
Featured reviews
I think the film is great! I've recommended that my friends and colleagues in IT watch it as soon as they get a chance. I think the risks of AI (more specifically Artificial General Intelligence and Artificial Superintelligence) are not currently understood by most people, even most IT and AI researchers, as the main focus (and the main budgets) goes to ANI (narrow AI), which is already making its way into our society and has a lot of (potential) benefits in various fields including medicine (e.g. diagnosis of cancer, fighting pandemics), logistics, climate control, sustainability, etc.
It's brilliant that in this film Keir Dullea looks back on "2001" and his interactions with HAL. For most people outside the field of AI, HAL is still the most recognizable superintelligent AI computer. The documentary gives a very nice overview of the different stakeholders and views in the current AGI pro/con discussions (benefits of AI, robotics, warfare, existential risks for humanity, is it controllable or not?). Bryan Johnson's quote in particular ("What is our plan as a species? ... We don't have a plan and we don't realize it's necessary to have a plan.") keeps coming back to my mind. I think that's exactly the issue. Almost everyone in the field of AI (even more cautious people like Stuart Russell or Max Tegmark) assumes that AGI will arrive soon (within the next 10 to 50 years). And many researchers agree that there are very serious risks (including existential risk) that come with this. However, when they talk about mitigating these risks, the discussions become less clear, e.g. Stuart Russell's suggestion of "provably beneficial AI", or Ben Goertzel's ideas on "decentralized AI". To me this doesn't make sense; we should first have a plan that proves the fundamental risks are mitigated before we move on. Or else put AGI research on hold (if this is still possible...) as we did with genetic manipulation and cloning.
A variety of interesting (if at times somewhat shallow) subjects is covered in this film, with some of the greatest minds in the field offering their insights. Unfortunately, it is severely undercut by ridiculous narration more on par with a paranormal documentary on the Discovery Channel. The Baby X segment was particularly interesting; I would love to see a documentary on that.
... from the world of A.I., as just about every possible scenario is forecast for its future use and application over every possible time scale - all through opinions, without much evidence, from so-called "geeks" and "nerds" who must have recently read Philosophy for Dummies.
Also doubles as a Terminator franchise promotion, because time travel is also likely to happen! Blade Runner would have been a far better reference for a lot of the discussions.
Just finished watching. The movie was amazing. It is like they took my work, my friends, my dreams and interests and made an awesome documentary out of it. Very well made and scientifically accurate. A lot of star power! I loved the way the movie showed debate between the two disagreeing factions on the issues of AI risk. I am sure this documentary will change many lives and open many minds to the issues of AI Safety and security. Probably the most important documentary of the decade.
Discusses the potential of AI with clear, thought-provoking questions. The variety of opinions expressed by the featured leading academics gives you a wide picture of the future of this field, making you do exactly what the title entails: talk about AI.
Did you know
- Connections: Features 2001: A Space Odyssey (1968)
- How long is We Need to Talk About A.I.?
Details
- Release date
- Country of origin
- Also known as
- We Need to Talk About A.I
- Production companies
- Runtime
- 1h 26m (86 min)