The chatbot arms race has intensified discussion of the risks of artificial intelligence. At the same time, calmer voices argue that chatbots are merely advanced language tools.
In his book Scary Smart, Mo Gawdat describes a distinctive approach to AI. He should know: in his previous role as Chief Business Officer at Google X, the IT giant’s cutting-edge division, he helped take the first steps toward artificial intelligence.
Gawdat’s analysis is that AI definitely poses a risk to humanity, but that our approach to the technology will determine the outcome of its actions. He believes AI can be compared to a child who has barely reached school age. The central principle of child rearing, that the child does as the parents do and not as they say, applies here as well. What will shape AI’s development is our actions, because AI is an autodidactic, i.e. self-learning, tool.
According to Moore’s law, computing capacity doubles roughly every two years, which means that the AI we know today is light years from the one we will experience in 10 or 50 years.
If we humans continue to perpetuate injustice, wage war, exploit animals and deplete our natural resources, AI will learn to do the same, only more efficiently. If we instead practice values such as respect and compassion, and let them be embedded in working principles such as non-violence and equality in all its aspects, AI can develop into the world’s most well-intentioned agent.
AI is probably the greatest innovation of our time and the most potent problem solver that has ever existed. What would happen if we applied that resource to the biggest challenge of our time, namely the climate crisis? AI could probably accomplish what we humans have not yet managed: finding the path to a genuinely sustainable society. If we assume that we are raising AI like a child and that we are its role models, one could argue that we have failed miserably so far.
Herein lies both the problem and the heart of the matter: AI itself is not the decisive factor.
In the long run, it is only an amplifier of what is already happening; everything that would happen anyway will happen faster with the help of AI. Its input is our behavior, gathered by analyzing everything that is shown, said and written on the Internet. Every single article, social media post and comment becomes part of its curriculum.
If we want society to be more loving, we need to be kinder to each other. AI will listen and take notes. If we are instead intolerant and aggressive, I fear AI will learn from that and become the most Machiavellian of all entities.
My conclusion is that AI will always be a reflection of us humans and our actions. It has no built-in morality and probably will not develop one either. It will do what we do, only much more efficiently. This is also our great hope. By leading the way toward a rapid and resolute climate transition, we can recruit the world’s smartest ally, and gain ideas and guidance for ways of managing our ecology that we could not even imagine.
I would argue that AI is humanity’s greatest hope, but also our greatest risk. We ourselves decide what role it will play. But we must show it the way, for that is our role as its parents.
Written by Misha Istratov, environmental entrepreneur
Misha Istratov runs the company Elithus, which has built exclusive homes for 17 years and aims to be Sweden’s most environmentally and climate-friendly company. He has a great interest in literature and nature, and prefers to spend his free time in forests and mountains, where he gathers inspiration.