Leaps and bounds in AI technology can totally change the game
At its most basic, artificial intelligence is the ability of a machine to display human-like intelligence and cognitive functions, such as learning, that were once thought to be exclusive to living minds.
Artificial intelligence is undoubtedly going to change the world, and most people agree that, used well, it could improve many lives. At the other end of the spectrum, many worry that giving a machine this human-like intelligence could have undesirable effects, such as machines taking most humans' jobs or, in extreme cases, rebelling against their creators.
Although AI is still relatively new, many companies have already invested heavily in it. Google's DeepMind is one of the better-known examples. Its AI, AlphaGo, has defeated the world's top Go players, Ke Jie and Lee Sedol. This was a remarkable feat, since Go is an extremely complex game, with on the order of 10^170 possible board configurations: vastly more than chess, and more than there are atoms in the entire observable universe. AlphaGo learned the game by studying records of human expert games and then refining its play over millions of games of practice. This ability to learn is something previously exclusive to living, organic beings. A later version, AlphaGo Zero, learned solely by playing against itself, which let it learn much faster and guaranteed an opponent whose skill grew in step with its own.
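The self-play idea can be illustrated with a toy sketch (this is a hypothetical mini-game, not DeepMind's actual algorithm): one policy plays both sides, and whichever move wins gets reinforced, so the "opponent" strengthens exactly as fast as the learner does.

```python
import random

random.seed(0)

# Toy "game": each side picks a number 0-9; the higher number wins.
# A single policy (a weight per move) plays both sides of every round.
weights = [1.0] * 10  # initially uniform preferences

def sample_move():
    # Sample a move in proportion to its current weight.
    return random.choices(range(10), weights=weights)[0]

for _ in range(5000):
    a, b = sample_move(), sample_move()  # the policy plays itself
    winner_move = max(a, b)              # higher move wins the round
    weights[winner_move] += 0.1          # reinforce the winning move

best = max(range(10), key=lambda m: weights[m])
print(best)  # typically 9, the strongest move in this toy game
```

Because the policy supplies both players, every improvement it makes immediately raises the bar it must beat, which is the feedback loop that let AlphaGo Zero surpass versions trained on human games.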
Of course, this is only one example of AI, and its goal was to master a board game. There are dozens, if not hundreds, of AI systems with far more practical uses, from virtual personal assistants like Google Assistant to the self-driving cars being developed by several companies. Most people would agree that advances in AI will accelerate development in the many fields and industries that adopt it, but only if the AI can be used properly.
Many experts warn that we should be cautious about how we develop AI, because it is unlikely that machines can simply be instilled with human morals or emotions that would lead them to pursue our goals by methods we would approve of. We can tell an AI what goal to achieve, but it will generally pursue that goal by whatever method it deems most efficient. Say an AI-driven car was told to reach a specific location as fast as possible. A human driver would try to do so while staying on the road and obeying traffic laws. An AI that was never told, or taught, to conform to the rules of society might instead drive over pedestrians, through parks, or along the sidewalk. It would certainly get its passengers to their destination quickly, but at the cost of their safety and everyone else's. It is for reasons like these that AI developers take great care in specifying not just the goals their systems should achieve, but the methods they may use. Many assume that only primitive minds fear the rise of AI, but in reality even experts worry that AI could get out of hand.
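The driving example can be sketched as a toy objective function (the routes, times, and penalty values here are invented for illustration, not taken from any real planner): an objective that counts only travel time picks the unsafe route, while adding a penalty term for rule violations changes the choice.

```python
# Hypothetical candidate routes: travel time in minutes, and how many
# rules of the road each route breaks.
routes = [
    {"name": "sidewalk shortcut", "minutes": 4, "violations": 3},
    {"name": "through the park",  "minutes": 6, "violations": 2},
    {"name": "legal road route",  "minutes": 9, "violations": 0},
]

def fastest(routes):
    # Objective: travel time only -- the literal "as fast as possible"
    # instruction, which selects the unsafe shortcut.
    return min(routes, key=lambda r: r["minutes"])

def fastest_safe(routes, penalty=100):
    # Objective: travel time plus a heavy penalty per rule violation,
    # so no amount of saved minutes justifies breaking the rules.
    return min(routes, key=lambda r: r["minutes"] + penalty * r["violations"])

print(fastest(routes)["name"])       # sidewalk shortcut
print(fastest_safe(routes)["name"])  # legal road route
```

The point of the sketch is that the "rules of society" never enter the system unless someone encodes them into the objective; the machine optimizes exactly what it is given, nothing more.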
Human beings are not the strongest, fastest, or biggest animals on the planet, yet we sit firmly at the top of the animal hierarchy, and that is solely because of our intelligence. Who is to say that a more intelligent being, even a machine rather than a living, breathing creature, could not usurp us?
AI is still years, if not decades, off from reaching human levels, but the field continues to advance rapidly: from Sophia, the first robot ever granted citizenship of a country (Saudi Arabia, specifically), to Google Assistant, which can now place real phone calls without the person on the other end being able to tell they are not speaking with an actual human. I have no doubt that within my lifetime I will witness a machine with human-level intelligence, but for now that idea remains decades away.