Superintelligence: Danger To Humanity?
Artificial Intelligence is already part of our daily life. The point of the debate on AI is that if you extrapolate current trends into the future, a moment will come in which AI surpasses the intelligence of human beings in every way. It could take quite a few decades, but some say it would be the last invention humans ever make, because from that moment on there will be robots, or whatever form they take, that do it better. This event is also known as the technological singularity.
These robots would not be confined to a biology that arose through evolution but could be built from designs and materials optimal for intelligence, and they might also improve themselves, going through a development that would render us human beings obsolete. There is a danger that mankind could go extinct because this new “species” might see us as enemies or simply as waste. We would be lucky to be kept alive out of some sort of respect for living beings, or a need for conservation.
The paradox is that, on the other hand, we probably can't afford to do without AI, because we need it to solve the many problems we face now and in the future, in fields such as climate, the economy, cosmology, space travel and, indeed, AI itself. It is therefore necessary that people start addressing now the question of how we can steer the development of AI in the right direction, so that humanity moves forward instead of digging its own grave.
This can't be left to the technicians and engineers alone, because we are now dealing with distinctly human questions that belong to ethics and philosophy as well. One could of course try to impose restrictions on AI, on the hardware or on the software, by setting some kind of clever rules, but we must assume that a super-AI would always find a way around such restrictions.
Therefore something will have to be developed with inherent characteristics that prevent it from misbehaving and destroying mankind, whether intentionally or not. It would need a kind of built-in respect for other forms of life in general, and for humans in particular. The question then is why that respect comes naturally to us, or at least to most of us.
The answer might lie in the fact that each individual is unique. Everyone is born with their own set of genes, and even identical twins each go through their own development. It is not (yet) possible to make a backup or copy to replace someone, so we hold that no one has the right to kill another. Of course this doesn't mean it never happens, and there may be plenty of other reasons why harming others is wrong, but the basic principle here is that every person is unique.
One could therefore imagine that a super-AI or robot that, instead of being mass-produced in arbitrary numbers, had unique characteristics and its own individual development would be regarded as a unique "person" with its own "life." Most likely a sociopath that cares only about its own goals, but still, you may have to think in that direction if you want to build an AI whose morals and ethics are not implemented from outside, and therefore prone to errors and abuse, but really emerge from the thing itself.
Perhaps a more positive way to look at the development of AI is to say that in forty to a hundred years it may be so well developed that all the problems humanity is now struggling with are solved. There is no longer disease or scarcity, everything you need is there, and you are practically immortal. That sounds nice, but it could bring problems of its own. What is the meaning of life if you never die, or if a copy of you can always continue in a new body?
It gets really weird once you imagine how nice it would be to spend some time walking around in a simulation of the 21st century: just place your order. How can you be sure that there haven't already been (alien) civilizations that reached that level, and that you are not at this very moment part of such a simulation, built with the help of a super-AI?
Apart from us being the very first in the entire universe to reach that level (after all, there has to be a first one), Nick Bostrom's simulation argument leaves only two other reasons why we might not be living inside a simulation: either no civilization ever gets that far because they all destroy themselves first, or any civilization that does get that far is no longer interested in running such simulations.
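Bostrom's paper makes this trade-off quantitative with a simple fraction; the sketch below paraphrases it (the notation here is a paraphrase, not a quotation from the paper):

\[
f_{\text{sim}} \;=\; \frac{f_P \, \bar{N} \, \bar{H}}{f_P \, \bar{N} \, \bar{H} + \bar{H}} \;=\; \frac{f_P \, \bar{N}}{f_P \, \bar{N} + 1}
\]

Here $f_P$ is the fraction of civilizations at our level that ever reach a "posthuman" stage capable of running ancestor simulations, $\bar{N}$ is the average number of such simulations a posthuman civilization runs, and $\bar{H}$ is the number of people who live before that stage; $f_{\text{sim}}$ is then the fraction of all human-like observers who are simulated. Unless $f_P$ is close to zero (civilizations destroy themselves first) or $\bar{N}$ is close to zero (nobody bothers to run simulations), $f_{\text{sim}}$ ends up close to one, and a typical observer is almost certainly living inside a simulation.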
If everything turned out to be a simulation, then time travel, even backwards, should be possible, because the simulator can always fix everything. If, for example, I were to murder my own mother before my birth, it should be no problem to repair that in an instant. In that case we wouldn't have to worry about AI either: our admin will take care of it.
(November 2015)