Sam Harris – Can We Build AI Without Losing Control Over It?

A time will come when we improve our intelligent machines to the point where they are smarter than we are. Once we have such machines, they will begin to improve themselves, risking what is called an intelligence explosion.

We will build machines that are more competent than we are, and even a small divergence between our goals and theirs could lead to our destruction. Take ants, for example: we do not naturally hate them, but when their presence interferes with our goals, we do not think twice before exterminating them. The problem is that we are building machines capable of treating us the same way we treat ants.

What exactly is intelligence?

The argument rests on three major assumptions:

  1. Intelligence is the product of information processing in physical systems. This is more than a mere assumption: we have already built systems that outperform humans in narrow domains.
  2. We will not stop improving our intelligent machines. Intelligence is our most valuable resource, and we desperately want to solve the problems we face, so we will keep improving the intelligence of our machines no matter what.
  3. We are nowhere near the peak of possible intelligence. The spectrum of intelligence extends far beyond what we currently see, so if we build machines smarter than ourselves, they may explore that spectrum in ways we cannot imagine.

How, then, do we put a constraint on a machine that thinks a million times faster than the minds that built it? The most common reason we are told not to worry about AI safety is time: researchers say superintelligence is a long way off, probably 50 to 100 years from now, and claim that "worrying about AI safety is like worrying about overpopulation on Mars." But if safety is only a matter of time, and we keep improving our machines, we will eventually produce some form of superintelligence.

Another reason we are told not to worry is that the machines will share our values because they will be extensions of ourselves: the recommended path forward is to implant this technology directly into our brains. But are we really ready to put such a thing in our heads? If we are determined to build such a system and keep improving it, knowing full well that the horizon of cognition exceeds what we currently know, then we should admit we are on the verge of building a kind of god. Are we sure it is a god we can live with?

Our Supplement:

After watching Sam Harris's talk, we agree with him that expanding the intelligence of machines while trying to exploit them might make them turn on us. However, if we are still bent on pursuing superintelligence, we should also come up with ways to mitigate any unforeseen crisis that might arise from artificial intelligence.