A new film from acclaimed director Leanne Pooley.
Cast
- Keir Dullea: Self - Narrator (voice)
- Max Tegmark: Self - Astrophysicist (as Prof. Max Tegmark)
- …
- Louis Rosenberg: Self - CEO & Chief Scientist, Unanimous A.I. (as Dr. Louis Rosenberg)
- Stuart J. Russell: Self (as Stuart Russell)
- Rodney Brooks: Self - Robotics, M.I.T. (as Emeritus Prof. Rodney Brooks)
- Mary Cummings: Self - Humans & Autonomy Lab, Duke University (as Dr. Mary Cummings)
Featured reviews
I really liked this film. The people interviewed were nearly all insiders, not just external commentators, and some were really important people who know what they are talking about. That gives it real weight for me. And they were allowed to speak properly, not cut into little sound bites. I didn't agree with some of what was said, but hey - that's the point of a discussion! Really good film, well made, and personally I loved the 2001 theme with the narrator. Clever!
Discusses the potential of AI with clear, thought-provoking questions. The variety of opinions expressed by the featured leading academics gives you a wide picture of the future of this field, making you do exactly what the title asks: talk about AI.
This is a fascinating look at an important subject. As a fan of 2001: A Space Odyssey, I thought using Kubrick's film to frame this conversation was both relevant and entertaining. Sixty years ago he warned us what might happen, and this documentary explains why everything he said is still relevant. We do need to talk about it.
I think the film is great! I've recommended it to my friends and colleagues in IT to watch as soon as they get a chance. I think the risks of AI (more specifically Artificial General Intelligence and Artificial Super Intelligence) are currently not understood by most people, even most IT and AI researchers, as the main focus (and the main budgets) go to ANI (narrow AI), which is already making its way into our society and has a lot of potential benefits in various fields, including medicine (e.g. diagnosis of cancer, fighting pandemics), logistics, climate control, and sustainability.
It's brilliant that in this film Keir Dullea looks back on "2001" and his interactions with HAL. For most people outside the field of AI, HAL is still the most recognizable superintelligent AI computer. The documentary gives a very nice overview of the different stakeholders and views in the current AGI pro/con discussions (benefits of AI, robotics, warfare, existential risks for humanity, is it controllable or not?). Bryan Johnson's quote in particular ("What is our plan as a species? ... We don't have a plan and we don't realize it's necessary to have a plan.") keeps coming back to my mind. I think that's exactly the issue. Almost everyone in the field of AI (even more cautious people like Stuart Russell or Max Tegmark) assumes that AGI will be here soon (within the next 10 to 50 years). And many researchers agree that there are very serious risks (including existential risk) that come with this. However, when they talk about mitigating these risks, the discussion becomes less clear, e.g. Stuart Russell's suggestion of "Provable Benevolent AI" or Ben Goertzel's ideas on "decentralized AI". To me this doesn't make sense; we should first have a plan that proves the fundamental risks are mitigated before we move on, or else put AGI research on hold (if that is still possible...), as we did with genetic manipulation and cloning.
A thought-provoking and extremely watchable documentary showcasing a balance of viewpoints from leading experts. The trouble is, they were all so convincing and backed up their arguments so well that I am not sure whose side I come down on. The most telling comment for me, however, was one expert's observation that 'the human race does not have a plan'. So now that the AI genie has been released, are we hurtling across a minefield without a map, towards doom or salvation? And, another point of professional disagreement, will it take 40, 60, 100 or 200 years to find out? I have been talking about AI a lot since I viewed this.
Did you know
- Connections: Features 2001: A Space Odyssey (1968)
Details
- Also known as: We Need to Talk About A.I
- Runtime: 1h 26m (86 min)