The development, application, and capabilities of AI-based systems are evolving rapidly, leaving largely unanswered a broad range of important short- and long-term questions about the social impact, governance, and ethical implications of these technologies and practices. In this article, we discuss what AI governance can learn from crypto's decentralization ethos and why such governance is needed.
Many sectors of society have rapidly adopted digital technologies and big data, resulting in the quiet and often seamless integration of AI, autonomous systems, and algorithmic decision-making into billions of human lives. AI and algorithmic systems already guide a vast array of decisions in both the private and public sectors. For example, global platforms such as Google and Facebook use AI-based filtering algorithms to control access to information. These systems can also be misused for manipulation, can entrench bias and social discrimination, and raise unresolved questions about property rights in data.
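To make the filtering point concrete, here is a minimal sketch of an engagement-based feed ranker. This is purely illustrative and not any platform's actual algorithm; the weights and the decay term are assumptions chosen for the example.

```python
# Hypothetical sketch (not any real platform's algorithm) of how an
# engagement-based filter decides which items a user sees first.
posts = [
    {"id": "a", "clicks": 120, "shares": 4,  "age_hours": 2},
    {"id": "b", "clicks": 40,  "shares": 30, "age_hours": 1},
    {"id": "c", "clicks": 500, "shares": 1,  "age_hours": 24},
]

def engagement_score(post):
    # Assumed weighting: shares count 10x more than clicks,
    # and older posts decay in value.
    return (post["clicks"] + 10 * post["shares"]) / (1 + post["age_hours"])

# The sort order *is* the editorial decision: items ranked low are
# effectively invisible, which is how such filters control access
# to information.
feed = sorted(posts, key=engagement_score, reverse=True)
print([p["id"] for p in feed])
```

Note that nothing in the score measures accuracy or public value, only engagement, which is one reason such filters attract governance scrutiny.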
Humans are often unable to understand, explain, or predict the inner workings of AI systems. This is a cause of rising concern in situations where AI is trusted to make important decisions that affect our lives, and it strengthens the case for greater transparency and accountability in artificial intelligence, and for AI governance.
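A toy contrast illustrates the opacity problem. The rule-based function below can be audited line by line, while the learned-score function, with hypothetical weights of the kind a training process might produce, gives an answer but no human-readable reason. Both functions and all numbers here are invented for illustration.

```python
# Illustrative sketch: transparent rules vs. an opaque learned score
# for a hypothetical loan decision.

def transparent_decision(income, debt):
    """Every branch can be read and audited by a human."""
    if income < 30_000:
        return "deny", "income below 30k threshold"
    if debt / income > 0.5:
        return "deny", "debt-to-income ratio above 50%"
    return "approve", "passed all rules"

# Hypothetical weights as they might emerge from training; individually
# they carry no human-readable meaning.
WEIGHTS = [0.000017, -0.83, 0.41, -0.0092]

def opaque_decision(features):
    """A learned linear score: the 'why' is just arithmetic over weights."""
    score = sum(w * x for w, x in zip(WEIGHTS, features))
    return "approve" if score > 0 else "deny"
```

Even this tiny linear model resists explanation; real systems with millions of nonlinear parameters are far worse, which is the gap that transparency and accountability requirements aim to close.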
The titans of U.S. tech have rapidly gone from being labeled by their critics as self-serving techno-utopianists to being the most vocal propagators of a techno-dystopian narrative.
This week, a letter signed by more than 350 people, including Microsoft founder Bill Gates, OpenAI CEO Sam Altman, and former Google scientist Geoffrey Hinton (sometimes called the "Godfather of AI") delivered a single, declarative sentence: "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
According to CoinDesk, two months earlier an open letter signed by Tesla and Twitter CEO Elon Musk, along with 31,800 others, called for a six-month pause in AI development to allow society to determine its risks to humanity. In an op-ed for TIME that same week, Eliezer Yudkowsky, a prominent researcher on the risks of artificial general intelligence (AGI), said he refused to sign that letter because it didn't go far enough. Instead, he called for a militarily enforced shutdown of AI development labs, lest a sentient digital being arise that kills every one of us.
Job Threats – Automation has been eating away at manufacturing jobs for decades. AI has accelerated this process dramatically and extended it to domains once thought to remain the indefinite monopoly of human intelligence. From driving trucks to writing news and screening job candidates, AI algorithms are threatening middle-class jobs like never before, and they may encroach on other professions as well, such as doctors, lawyers, writers, and painters.
Responsibility – Who is to blame when software or hardware malfunctions? Before AI, it was relatively easy to determine whether an incident resulted from the actions of a user, a developer, or a manufacturer. In the era of AI-driven technologies, the lines are blurred. This becomes an issue when AI algorithms make critical decisions, such as when a self-driving car must choose between the life of a passenger and that of a pedestrian. There are other conceivable scenarios where determining culpability and accountability will be difficult, such as when an AI-driven drug-infusion system or a robotic surgery machine harms a patient.
Data Privacy – In the hunt for ever more data, companies may trek into uncharted territory and cross privacy boundaries. We have seen how Facebook harvested personal data over an extended period and used it in ways that led to privacy violations. A widely cited example is the retail store whose purchase-prediction algorithms revealed a teenage girl's secret pregnancy. Another case is the UK National Health Service's patient-data-sharing program with Google's DeepMind, a move that was supposedly aimed at improving disease prediction. There is also the issue of bad actors, both governmental and non-governmental, who might put AI and machine learning to ill use: a highly effective face-recognition app rolled out in Russia proved to be a potential tool for oppressive regimes seeking to identify and crack down on dissidents and protesters.
Technological Arms Race – Innovations in weaponized artificial intelligence have already taken many forms. The technology is used in the complex systems that allow cruise missiles and drones to find targets hundreds of miles away, as well as in the systems deployed to detect and counter them. Algorithms that are good at searching holiday photos can be repurposed to scour spy-satellite imagery, for example, while the control software needed for an autonomous minivan is much like that required for a driverless tank.
Many recent advances in developing and deploying artificial intelligence emerged from research at companies such as Google, a company long associated with the corporate motto "Don't be evil". Yet Google recently confirmed that it is providing the US military with artificial-intelligence technology that interprets video imagery as part of Project Maven.
According to experts, the technology could be used to better pinpoint bombing targets, and it could pave the way for fully autonomous weapons systems, in effect robotic killing machines. To what extent can AI systems be designed and operated to reflect human values such as fairness, accountability, and transparency, and to avoid inequality and bias? As AI-based systems become involved in consequential decisions, for instance in autonomous weapons, how much human control is necessary or required? And who bears responsibility for AI-based outputs?
To ensure transparency, accountability, and explainability for the AI ecosystem, our governments, civil society, the private sector, and academia must be at the table to discuss governance mechanisms that minimize the risks and possible downsides of AI and autonomous systems while harnessing the full potential of this technology. The process of designing a governance ecosystem for AI, autonomous systems, and algorithms is certainly complex but not impossible.