On 21 April 2021, the European Commission put forward a detailed proposal for regulating the use of Artificial Intelligence, to be followed by the member states of the European Union. Tech experts expect these regulations to have a significant impact on multinationals such as Amazon, Facebook, Google, and Microsoft, and to put Europe in a leading position to shape the digital industry. The proposal is accompanied by a set of rules governing the use of AI in machinery.
Decades after its invention, much of artificial intelligence remains uncharted territory, which makes the technology unpredictable and open to misuse. To make AI trustworthy to the public, the European Commission has drawn up this harmonised set of rules, which has the scope to be applied universally. Kay Firth-Butterfield, Head of Artificial Intelligence and Machine Learning at the World Economic Forum, stated in an interview with Forbes, "Forward-looking companies should proactively establish such a 'vetting process' to ensure their AI systems' trustworthy design and deployment."
In the proposal, uses of AI are classified into four risk categories: unacceptable, high, limited, and minimal. Practices deemed unacceptable include social scoring by governments and voice assistants that give dangerous instructions to children. Public authorities sometimes use AI systems to evaluate the trustworthiness of individuals based on their social behaviour and then use those social scores to discriminate against certain groups; the Commission has identified this practice as unacceptable. Real-time remote biometric identification, including facial recognition used by law enforcement in public spaces, is also prohibited in principle under these regulations. AI used in education and critical infrastructure is labelled high risk and is subject to assessment before being placed on the market, as per Articles 6 and 7 of the proposal. Systems for asylum and border control management, the administration of justice, and law enforcement are also considered high risk. Chatbots fall under limited risk and must comply with transparency obligations so that users know they are interacting with a machine. Minimal risk covers applications such as AI-enabled video games and spam filters, which may be used freely.
The proposal submitted by the European Commission is still a draft, and enforcement will fall to member state authorities, who are also in charge of setting the fines for non-compliance. As drafted, penalties can reach EUR 30 million or 6% of a company's worldwide annual turnover, whichever is higher. The European Parliament and the Council of the EU will review the Commission's proposal and may suggest modifications. Once approved, a European Artificial Intelligence Board is expected to supervise the implementation of the regulations. The whole process is expected to take about three years to complete.
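As a rough illustration of the penalty ceiling described above, here is a minimal Python sketch that computes the maximum possible fine as the higher of the two limits, a fixed EUR 30 million cap or 6% of worldwide annual turnover. The function name and the example turnover figure are hypothetical and used only to show the arithmetic.

```python
def max_possible_fine(worldwide_annual_turnover_eur: float) -> float:
    """Illustrative only: the draft's upper limit is the higher of
    a fixed EUR 30 million cap or 6% of worldwide annual turnover."""
    fixed_cap_eur = 30_000_000  # EUR 30 million ceiling
    turnover_cap_eur = 0.06 * worldwide_annual_turnover_eur  # 6% of global turnover
    return max(fixed_cap_eur, turnover_cap_eur)

# Hypothetical example: a company with EUR 2 billion in worldwide annual turnover
print(max_possible_fine(2_000_000_000))  # 120000000.0 -> the turnover-based cap applies
```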
Europe's initiative in regulating Artificial Intelligence is part of a wider effort among governments worldwide to make AI trustworthy to the public. Given the power this technology holds, governments must work together to keep it from falling into the wrong hands.