These are the preparations for an interview with Alan Moore. I have interviewed him many times; now it is the other way around.
Alan is working on "Crafting Beautiful Businesses" and I'm engaged in "Digital Happiness", our new research theme. It was obvious this wasn't going to be an easy talk. But who needs easy talks in a bizarre world where everyone is looking at "What's Next" and "How do we get there"? Alan started by asking about the importance of memes, the way new ideas spread through our culture. Words matter. A popular phrase in the digital space is "Alternative Facts", and a rising star is "The Frightful Five" (referring to the five big tech companies, such as Google). There is a growing unease with everything being digitized, spread, managed and controlled.
When you ask yourself the question "Do I trust the digital world that surrounds me?" and the answer is no, you are in good company. Not trusting institutions, big (tech) companies and their bosses, or "the whole system" has been a trending topic for the last few years. The Edelman Trust Barometer 2017 shows an all-time low, now referred to as the "Slow Meltdown" of the system. Worldwide, only 15% of the population believes the system is still working very well. Trust in CEOs dropped 12 points this year to 37%.
Who do we trust most? The Edelman outcome: people like ourselves, people that are "like you". It is hard not to make a cynical comment on that outcome. On the other hand, it is a sign. It points at us: you, me, people like us. If organizations are clever, and many of them are, they will want to find out what it is that makes people trust people more than organizations, governments, boards of directors and institutions.
My prediction: in ten years' time, perhaps even in five, robots will appear in Edelman's trust barometer. Where will they stand? Would we trust the algorithm more than our bosses? Is a robot to be trusted more than the organization itself? Organizations and robots are the same species: they are both artificial, artifacts, products of our imagination. If we are experiencing a system change, then we definitely need to look at the role of artificial intelligence in the new system. Can it fix the meltdown?
Algorithms, androids and automata are "people like me". They have the power of trust; they feel very familiar. Heimlich, as Sigmund Freud would call it (see Das Unheimliche). We feel at home with them. But the other meaning of the word heimlich is secret — almost the opposite.
This dualism of the artificial needs to be managed well. If we succeed, we feel at home, we trust, and we are not afraid. (You can read more about this in our report The Frankensteinfactor.)
Robots cannot prevent the meltdown; in my view, only humans can. No matter how intelligent algorithms are, the one thing they cannot do is define their own goals. So the important question is: what are the design principles for the artificially intelligent world? What are the robots going to do? The simple answer is: making us happy. We can program them in such a way that they improve our wellbeing, joy, and purpose in life (aka happiness). Design principles are emerging in a new scientific field called Positive Computing. We have only scratched the surface of the digital world. Like many other technologies, digital technology started ugly. The side effects of living a digital life are becoming clearer. We need safety belts, ways to deal with the digital exhaust, and speed limits, as we did with the car. Even better: we need self-driving cars we can trust.
To interact with Menno, please visit the blog on LinkedIn: The meltdown of the system