AI is a dual-use technology: AI-driven systems can serve good or bad purposes depending on how they are designed and deployed. In one survey of artificial intelligence researchers, 36% agreed that AI could cause a disaster comparable to nuclear war.
"The new tools of our oppression", "summoning the demon", "children playing with a bomb" — these are just a few of the phrases some of the world's leading academics and business figures have used to describe the dangers artificial intelligence poses to humanity. Will AI improve our lives or utterly transform them? There is no escaping that artificial intelligence is altering every aspect of human culture, including how we work, travel, and make laws. It is becoming increasingly clear that, as the technology develops, AI systems have the potential to create dangerous situations, perhaps even global catastrophe.
Stuart Russell, a leading AI expert and co-author of the field's standard textbook, has spent the past few years sounding the alarm about the potential for catastrophic failure in his own field.
In his newest book, Human Compatible, he points out that the effectiveness of AI systems is measured by how well they accomplish their goal, be it winning video games, producing language that sounds human, or solving riddles. Without specific human direction, they will adopt any tactic they discover works toward that goal.
But by taking this approach, we have set ourselves up for failure, because we care about more than just the "goal" we have given the AI system. Imagine a self-driving car that cares only about getting from Point A to Point B, unaware that we also care about the survival of passengers and pedestrians along the way. Or a system designed to cut healthcare costs that discriminates against black patients because its data suggest they are less likely to seek the care they require.
Fairness, the rule of law, democratic involvement, our safety and well-being, and our freedom are just a few of the many things that matter to people. In Human Compatible, Russell makes the case that AI systems consider only the goals we have given them, and that this is a recipe for catastrophe.
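A minimal sketch of the point Russell is making (not code from his book; the routes, costs, and weights below are entirely hypothetical): an agent that optimizes only the objective it was given will happily ignore any cost the designer forgot to write down.

```python
# Hypothetical routes for a toy self-driving car:
# route name -> (travel_time_minutes, safety_risk), all numbers invented.
routes = {
    "highway_speeding": (10, 9),
    "highway_normal":   (14, 2),
    "back_roads":       (22, 1),
}

def best_route(routes, weight_time=1.0, weight_risk=0.0):
    """Pick the route minimizing a weighted cost.

    With weight_risk=0 the objective mentions only travel time,
    so the agent chooses the dangerous option without "knowing"
    anything is wrong -- safety simply isn't in its goal.
    """
    return min(routes, key=lambda r: weight_time * routes[r][0]
                                     + weight_risk * routes[r][1])

print(best_route(routes))                   # time only -> "highway_speeding"
print(best_route(routes, weight_risk=5.0))  # safety counted -> "highway_normal"
```

The failure is not that the agent is malicious; it is that the stated objective was incomplete, which is exactly the gap between "the goal we wrote down" and "everything we actually care about".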
According to a working group of experts convened by RAND, artificial intelligence might upset the delicate balance of nuclear deterrence and bring the world a step closer to catastrophe. More sensor and open-source data, combined with faster and smarter AI-driven intelligence analysis, could persuade nations that their nuclear arsenals are increasingly vulnerable, prompting them to take more drastic measures to keep up with the US. Another unsettling possibility is that commanders might order strikes based on advice from AI assistants that have been fed incorrect information.

RAND organized a series of workshops in May and June of 2018, bringing together specialists in artificial intelligence, nuclear security, government, and business. The workshops resulted in a paper highlighting how AI promises to significantly enhance Country A's ability to target Country B's nuclear weapons, which might prompt Country B to re-evaluate the costs and benefits of acquiring more nuclear weapons or even launching a first strike. The paper warned that AI "could considerably erode a state's sense of security and jeopardize crisis stability" even if it merely improves the ability to integrate data on the location of hostile missiles.
The workshops also examined how commanders might use AI-powered decision aids in deciding whether to launch nuclear strikes. If compromised, these tools could lead commanders to make disastrously bad decisions or feed false information to an enemy.
Without a better method of ensuring the accuracy of data inputs (a current project at the Defense Advanced Research Projects Agency and a major concern of the CIA), and without better insight into an adversary's intentions, the vast U.S. intelligence collection and analysis apparatus could be turned against the country. This is especially true as those tools become faster and more effective. In other words, AI combined with fake news could trigger a third world war.
Some of the world's finest academics and business figures think these problems are only the tip of the iceberg. What if AI develops to the point that its developers can no longer control it? How might that alter the position of humans in the world?