When MIT Media Lab researcher Joy Buolamwini discovers that facial recognition does not see dark-skinned faces accurately, she embarks on a journey to push for the first-ever U.S. legislation against bias in algorithms that impact us all.
- Director
- Writers
- Stars
- Awards
- 3 wins & 6 nominations total
- Self - Author, Weapons of Math Destruction
- (as Cathy O'Neil Ph.D.)
- Self - Author, Twitter and Tear Gas
- (as Zeynep Tufekci Ph.D.)
- Self - Author, Automating Inequality
- (as Virginia Eubanks Ph.D.)
- Self - Technical Co-Lead, Ethical A.I. Team at Google
- (as Timnit Gebru Ph.D.)
- Self - Author, Algorithms of Oppression
- (as Safiya Umoja Noble Ph.D.)
Featured reviews
It's minor, but wasn't sure why some interviewees were framed with so much headroom. Then again, Mr Robot did that and I never understood it there, and given that that was also concerned with themes of technology and surveillance, maybe there's some shared symbolism I'm not picking up on.
Some of the segments with the AI saying menacing things were a little cheesy, but at least they were brief.
There's also a sense that the documentary may cover a little too much in its 85-minute runtime. While I can admire its ambition in covering so many aspects of facial recognition software, its racial biases, algorithm discrimination, and so on, it does make for a documentary that jumps around a fair bit and not always smoothly... at least all the topics are interesting on their own.
But in the end, it covers important topics and presents compelling arguments about particular flaws and biases in technology. It warns that, if left unchecked, this could become a serious problem, so I like the attempt to bring awareness to the issue before it completely spirals out of control.
It's well edited, features interesting interviewees and subjects, and ends on a little more hope than I was anticipating, which was a nice surprise. Overall, it's one of the better Netflix documentaries I've watched in a while.
That said, this documentary feels incomplete. It seems to be one-sided, with lots of interviews with people who are against the use of AI.
But while the film-makers do an OK job of highlighting the dangers and inadequacies of AI systems such as facial recognition software, they fail to show what really is behind these glaring blunders - was it some kind of knowing omission meant to create more biases, was it a case of software engineers creating something that they themselves don't understand and thus making a mess of things, or was it simply incompetence on the part of many involved?
Who knows. And that's the problem with this film.
There's little focus but a very hyperbolic interpretation of the data when it comes to racial profiling by the AI... just a whiff of "wokeness" that was digestible to me, but which also caused the polarization in this review section.
The rest of the documentary is well produced, informative, and seriously eye-opening, and you should see it, because the negatives, at least for me, don't even come close to the deeper understanding you get from the practical examples from around the world, which are very scary.
The second argument, about what happens if our government becomes like China, is flawed as well. The facial recognition AIs are going to get better; even if we do not work on them here, someone will. Anything useful can also be used as a weapon. So if the government does want to use facial recognition, it will just get it. It is probably better to have a known working system than one bought hastily and rushed into place.
It is odd that they barely mention any AI or ML outside of facial recognition, despite facial recognition being a small part of what is out there.
All in all, it might be good to get you started on research of your own, but it is mostly misdirected fury.
Did you know?
- Quotes
Self - Author, Weapons of Math Destruction: On internet advertising as data scientists, we are competing for eyeballs on one hand, but really we're competing for eyeballs of rich people. And then, the poor people, who's competing for their eyeballs? Predatory industries. So payday lenders, or for-profit colleges, or Caesars Palace. Like, really predatory crap.
- Connections: Featured in Jeremy Vine: Episode #4.95 (2021)
- How long is Coded Bias? Powered by Alexa
Details
- Release date
- Countries of origin
- Official sites
- Language
- Also known as
- Kodlanmış Önyargı
- Filming locations
- Production companies
Box office
- Gross US & Canada
- $10,236
- Opening weekend US & Canada
- $10,236
- Nov 15, 2020
- Gross worldwide
- $10,236
- Runtime: 1 hour 26 minutes
- Color