As far back as I can remember, technology has always been suspected of bringing us the worst evils, yet the prophecies never quite come true. A lack of knowledge on the part of the prophets, who saw only a black box they did not understand? A failure to grasp how society evolves, ready one day to accept, or even demand, what it would have refused the day before? Something else? Probably a little of all of the above.
Humans are schizophrenic when it comes to good and evil.
We cannot approach the notions of right and wrong without trying to understand the prism through which humans look at things. And it must be admitted that this prism is often ambiguous, even schizophrenic. Just look at a topical example to see it: personal data.
In the same conversation, the same person will tell you that they care about their privacy and worry about how companies use the data collected about them, yet that they expect a personalized service, all while carefully blocking cookies in their browser. Hard to know what to make of it.
Technology is not the problem, its use is.
I have always given the same answer whenever this subject comes up: the problem is not the technology, whatever it may be, but the person who uses it. Nothing new there: you can use a shovel to dig a hole and plant a tree just as well as to kill someone.
Today I will go a little further: the problem lies with both the person who uses a technology and the person who designs it. When you create a weapon of mass destruction, you cannot simply clear your name by pinning everything on the reckless madman who used it. Those who were proud to have contributed intellectually to the atomic bomb program felt far less comfortable once it was actually used.
In short, we always come back to the same subject: ethics. Technology is neutral; the ethics of the user and of the designer are not.
And today the subject is, of course, personal data.
Between the Cambridge Analytica affair at Facebook, the unwanted siphoning of data by Android, and the evangelization around the GDPR, Internet users have finally woken up. It took them a long time to understand that “if it’s free, you’re the product”, but they now realize it goes much further than that. Maybe too far.
It amuses me that people were surprised or shocked by the Cambridge Analytica case. Data is Facebook’s business model, and when you become a billionaire by selling people’s lives, you cannot expect that to change one day. Not only was it inevitable, it will happen again, because that is the logic of the thing.
When I talk about the user of a technology, I am of course not talking about the end user, who is often the target, the “product”, but about the company that implements it. Here, that is Facebook, inseparable from its founder. Either Zuckerberg has no ethics whatsoever, or he is thirty years ahead of society. Or perhaps his lack of ethics will push society not to go in that direction at all. We have seen employees leave Facebook for precisely that reason: they could no longer stand the direction the company was taking… even if few of them spoke up publicly, or did so too late. Some users did the same.
Does tech have its whistleblowers?
Whistleblowers enjoy a good reputation in society, as long as they do not confuse that role with the pursuit of personal interest. The question is: who, in a technology company, can or should play the whistleblower? Or show the kind of ethics that makes them say “no, I won’t develop that”, “no, I can’t sell that”?
This reminds me of an initiative of the Mozilla Foundation which, in collaboration with researchers and professors, has just launched a programme aimed at reintroducing ethics into computer science education. Other major universities (MIT, Stanford) have also added ethics to the curricula of computer scientists. In my opinion, it is a subject that needs to be introduced more than reintroduced. Not only has ethics never had much of a place in training courses, except for senior managers (whatever they did with it), but we never considered it a subject for those who execute; it was, precisely, a matter for decision-makers. With ethical decision-makers, there was supposedly no need for the subject to spread to the rank and file. Mozilla’s initiative suggests, on the contrary, that ethics must be more widely distributed within the company, and that if the decision-maker does not show any, it is up to others to show it in their place.
If data is the new oil, there will never be a data business without data ethics. And it is certainly a good thing to see attempts to make it a concern for those who will build the technologies of tomorrow.
It remains to be seen how this will translate into businesses. While Chief Data Officers are popping up on every street corner, I have yet to see a company put ethics back at the heart of its business; I mean with real resources and a person senior enough for it to be visible, carry weight, and have the authority to stop a deviant initiative. And I wouldn’t be surprised if that does not happen before a customer revolt or a heavy court conviction.
In short, let us trust technological progress: it is neutral. But let us always keep an eye on the humans behind it.
Photo: Diable De Tampo via Shutterstock