A new film from acclaimed director Leanne Pooley.
Keir Dullea
- Self - Narrator
- (voice)
Max Tegmark
- Self - Astrophysicist
- (as Prof. Max Tegmark)
- …
Louis Rosenberg
- Self - CEO & Chief Scientist, Unanimous A.I.
- (as Dr. Louis Rosenberg)
Stuart J. Russell
- Self
- (as Stuart Russell)
Rodney Brooks
- Self - Robotics, M.I.T.
- (as Emeritus Prof. Rodney Brooks)
Mary Cummings
- Self - Humans & Autonomy Lab, Duke University
- (as Dr. Mary Cummings)
Featured reviews
A refreshing new take on a subject dominated by either pretentious techno gurus or tin-foil-hat-wearing conspiracy theorists. This documentary cuts through the morass of hype surrounding AI, showing us both the people involved and the critics. The contrasting views presented offer a thought-provoking account of the divide between progress and morality that characterises this issue. Overall, a stimulating documentary; I would definitely recommend it to anyone with a brain.
I think the film is great! I've recommended it to my friends and colleagues in IT to watch as soon as they get a chance. I think that currently in society the risks of AI (more specifically Artificial General Intelligence and Artificial Super Intelligence) are not understood by most people, even most IT and AI researchers, as the main focus (and main budgets) goes to ANI (narrow AI), which is already making its way into our society and has a lot of (potential) benefits in various fields including medicine (e.g. diagnosis of cancer, fighting pandemics), logistics, climate control, sustainability, etc.
It's brilliant that in this film Keir Dullea looks back on "2001" and his interactions with HAL. For most people outside the field of AI, HAL is still the most recognizable superintelligent AI computer. The documentary gives a very nice overview of the different stakeholders and views in the current AGI pro/con discussions (benefits of AI, robotics, warfare, existential risks for humanity, is it controllable or not?). Bryan Johnson's quote in particular ("What is our plan as a species? ... We don't have a plan and we don't realize it's necessary to have a plan.") keeps coming back to my mind. I think that's exactly the issue. Almost everyone in the field of AI (even more cautious people like Stuart Russell or Max Tegmark) assumes that AGI will arrive soon (within the next 10 to 50 years). And many researchers agree that there are very serious risks (including existential risk) that come with this. However, when they talk about mitigating these risks, the discussions become less clear, e.g. Stuart Russell's suggestion of "Provable Benevolent AI", or Ben Goertzel's ideas on "decentralized AI". To me this doesn't make sense; we should first have a plan that proves the fundamental risks are mitigated before we move on. Otherwise, put AGI research on hold (if that is still possible...), as we did with genetic manipulation and cloning.
A thought-provoking and extremely watchable documentary showcasing a balance of viewpoints from leading experts - the trouble is, they were all so convincing and backed up their arguments so well that I am not sure whose side I come down on. The most telling comment for me, however, was the observation from one that 'the human race does not have a plan'. So now that the AI genie has been released, are we hurtling across a minefield without a map towards doom, or salvation? And - another point of professional disagreement - will it take 40, 60, 100 or 200 years to find out? I have been talking about AI a lot since I viewed this.
It was an amazing idea to slice and dice the thoughts of smart people into an insane salad, dress it with some insights from random quacks as deep as a teaspoon, and then serve it as a "documentary".
How did Cameron earn the right to an opinion about A.I.? By making a movie with a robot in it?
He also made a movie with a ship in it - well, now, let's ask his opinion on naval architecture and the problems of modern hydrodynamic engineering.
Instead of letting the smart people express their smart thoughts precisely and at length, the movie rips their words out of context and rearranges them to fit some weird alarmist narrative.
The four stars are specifically for Roman Yampolskiy, Sam Harris, Max Tegmark and Jurgen Schmidhuber.
... from the world of A.I., as just about every possible scenario is forecast for its future use and application over every possible time scale - all through opinions, without much evidence, from so-called "geeks" and "nerds" who must have recently read Philosophy for Dummies.
Also doubles as a Terminator franchise promotion, because time travel is also likely to happen! Blade Runner would have been a far better reference for a lot of the discussions.
Did you know
- Connections: Features 2001: A Space Odyssey (1968)
- How long is We Need to Talk About A.I.?
Details
- Release date
- Country of origin
- Also known as
- We Need to Talk About A.I
- Production companies
- Runtime
- 1h 26m (86 min)