Artificial intelligence has taken the world of technology by storm. There is so much potential that comes with AI, and it has made a world of difference in how well we are able to work with the world around us. From self-driving cars, search engines, and more, there is a lot that we can already do with AI, and it is likely to continue growing in the future.
AI has made some big changes in the past few years. While we can watch some fun science fiction and see AI in the form of robots that act like humans, AI will encompass anything from the search algorithms on a search engine and even SIRI that you use as a personal assistant.
The AI that we use today is often called weak or narrow AI because it is designed in a way to perform a narrow task. For example, it can’t get all of the different types of intelligence, but the technology may work with facial recognition or for internet searches. It specializes in that part, and that is all that it can do. It may be able to outperform humans at that specific task, but it can’t handle a bunch of tasks as humans can.
The goal of keeping AI’s impact on society beneficial is part of the short-term goal of researching it. While it may be just a little nuisance if you have a laptop crash or someone hacks into it, it becomes more important that the AI system does what you want it to do if it is in charge of the car, your power grid, your pacemaker or something else. Another short-term challenge is to make sure that no one starts part of the arms race in lethal autonomous weapons as well.
Those are some of the short-term goals of checking on the safety of AI. There are also some important questions that we need to focus on for the long-term as well. We need to look closely at what is going to happen if the quest for strong AI does succeed and then this technology becomes smarter and better at a cognitive task than humans are.
This kind of system could undergo some recursive self-improvement, which could trigger a big explosion in intelligence that could leave the intellect of humans far behind. When we invent some of these new technologies, it could do a lot of good to our lives, including eradicating war, poverty, and disease so it could be great for us. Some experts are concerned about this though and are worried about it becoming super intelligent and taking over.
There are some who question whether a strong AI is achievable and then there are others who insist that this kind of AI is guaranteed to help out everyone. Both of these positions have some merit so it is important to understand how AI works and how it can be used both for good and for bad purposes.
Most researchers will agree that this kind of AI is unlikely to exhibit any emotions that are seen as humans, such as hate or love, and that we really shouldn’t expect AI to become malevolent or benevolent on its own. Instead, we need to take a look at AI through the lens of how it could become a risk. There are two scenarios that many experts think are possible with AI and these include:
The AI is programmed by a person to do something devastating. Autonomous weapons are designed with artificial intelligence and theoretically could be programmed to kill. If these are given to the wrong person, it is possible that these weapons would be programmed to kill many people.
With this same idea, an AI arms race could lead to a big AI war, which in the end would result in some mass casualties. To avoid the enemy taking over these and stopping a plan, it is possible the weapons are designed to not just turn off so humans could lose all control in this kind of situation. This risk is present even when we talk about narrow AI like above, but it will grow and become a bigger risk as AI and autonomy start to increase more.
The second situation is that the AI is programmed with the idea it will do something beneficial, but then it starts to take on a destructive method to reach this goal. This can happen when the programmer doesn’t align the AI’s goals with ours, which is sometimes more difficult to accomplish than it may seem. If you ask an intelligent car to take you to the airport quickly, you could get there quickly, but a bunch of traffic rules is broken and people may get hurt in the process,.
If you ask a super-intelligent system to do a big and complicated process, it can cause some issues and it can turn into a threat that humans need to deal with as well. Even when the original goal was to benefit humans and the world around us, things can go horribly wrong along the way.
As we can see here, the concern about AI isn’t always about it being malevolent. However, it is about whether the system is competent. While this kind of technology can do a lot of amazing things in the process, we have to remember that we are working with machines here, and not some of the traditional human thoughts. Humans may make mistakes, but we see a lot more in the world than a system designed with AI and that could be the key that we need to worry about with thee systems.
AI is a great technological advancement that has changed the way we do many things in our lives already. And it is likely that it will grow and change quite a bit more in the future. There is a lot of excitement around what this technology is able to do for us and how great it will work. But we do need to be responsible citizens and think about the way that this technology could backfire or how some of the simple processes can go wrong. Only when we understand all of this are we able to really see some great results with the technology.