Artificial Intelligence

EU Strikes Agreement on Artificial Intelligence Regulation

A look at the EU's provisional agreement on harmonized rules for artificial intelligence, the landmark AI Act

Harshini Chakka

After three days of 'marathon' negotiations, the Council presidency and the European Parliament's negotiators have reached a provisional agreement on harmonized rules for artificial intelligence, the proposal known as the Artificial Intelligence Act. The draft regulation aims to ensure that AI systems placed on the EU market and used in the EU are safe and respect fundamental rights. This artificial intelligence regulation also aims to stimulate investment and innovation in AI across Europe.

The AI Act is a landmark piece of legislation that can foster the development and uptake of safe and trustworthy AI across the EU, with the participation of both public and private stakeholders. The main idea is to regulate AI based on its capacity to cause harm to society, following a 'risk-based' approach: the higher the risk, the stricter the rules. As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done for the protection of personal data, thus promoting the European approach to tech regulation worldwide.

The Main Elements of the Provisional Agreement

Compared to the initial Commission proposal, the main new elements of the provisional agreement on artificial intelligence regulation can be summarized as follows:

·       Rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems.

·       A revised system of governance with some coordinating and enforcement powers at the EU level.

·       An extended list of prohibitions, while still allowing law enforcement authorities to use remote biometric identification in publicly accessible spaces, subject to safeguards against abuse.

·       Better protection of rights through an obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting those systems into use.

In more concrete terms, the provisional agreement covers the following aspects:

Definitions and Scope

To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems, the provisional agreement aligns the definition with the approach proposed by the OECD.

Moreover, the provisional agreement clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states' competences in national security or any entity entrusted with tasks in that area. Furthermore, the AI Act would not apply to AI systems used exclusively for military or defense purposes. Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or to people using AI for non-professional reasons.

Classification of AI Systems as High-Risk and Prohibited AI Practices

The compromise agreement provides a horizontal layer of protection, including a high-risk classification, to ensure that AI systems unlikely to cause serious fundamental rights violations or other significant risks are not captured by the strictest rules. AI systems presenting only limited risk would be subject to very light transparency obligations: for example, disclosing that content was AI-generated so that users can make informed decisions about whether to use it or take further action.
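To make the tiered, risk-based logic concrete, the following is a minimal Python sketch. The tier names and obligation lists are simplified illustrations of the approach described above, not the legal definitions or exact requirements of the Act.

    # Illustrative sketch of the AI Act's risk-based tiering.
    # Tier names and obligations are simplified assumptions for
    # illustration, not the Act's legal text.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"  # prohibited practices
        HIGH = "high"                  # strict requirements before market entry
        LIMITED = "limited"            # light transparency obligations
        MINIMAL = "minimal"            # essentially unregulated

    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: ["banned from the EU market"],
        RiskTier.HIGH: [
            "conformity assessment before market entry",
            "data quality and technical documentation",
        ],
        RiskTier.LIMITED: ["disclose to users that content is AI-generated"],
        RiskTier.MINIMAL: [],
    }

    def obligations_for(tier: RiskTier) -> list[str]:
        """The higher the risk tier, the stricter the obligations."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.LIMITED))

The point of the tiering is that most AI systems fall into the lower tiers and face little or no regulatory burden, while scrutiny concentrates on the narrow set of systems that can cause serious harm.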

A wide range of high-risk AI systems would be authorized to operate in the EU, but only subject to a set of requirements and obligations for gaining access to the EU market. The co-legislators clarified and adjusted some of these requirements to make them more technically feasible and less burdensome for stakeholders, for example as regards the quality of data and the technical documentation that SMEs must draw up to demonstrate that their high-risk AI systems have been built safely and comply with the requirements.

Since AI systems are developed and distributed through complex value chains, the compromise agreement includes changes clarifying the allocation of responsibilities and roles of the various actors in those chains, in particular the providers and users of AI systems. It also clarifies how the obligations under the AI Act relate to obligations that already exist under other legislation, such as EU data protection and sectoral legislation.

For some uses of AI, the risk is deemed unacceptable, and these systems will therefore be banned from the EU. The provisional agreement prohibits, for example, cognitive behavioral manipulation, the untargeted scraping of facial images from the internet, emotion recognition in the workplace and educational institutions, social scoring, biometric categorization to infer sensitive data such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.

Law Enforcement Exceptions

Considering the specific needs of law enforcement authorities and the importance of preserving their ability to use AI in their vital work, several changes to the Commission proposal were agreed regarding the use of AI systems for law enforcement purposes. Subject to appropriate safeguards, these changes reflect the need to respect the confidentiality of sensitive operational data. For example, an emergency procedure was introduced that allows law enforcement agencies, in cases of urgency, to deploy a high-risk AI tool that has not passed the conformity assessment. A specific mechanism has also been introduced to ensure that fundamental rights remain sufficiently protected against potential misuse of AI applications.

Furthermore, the text of the provisional agreement spells out the objectives for which real-time remote biometric identification systems may be used in publicly accessible spaces: strictly for law enforcement purposes, and only in exceptional cases. The compromise agreement provides additional safeguards and limits these exceptions to situations such as searches for victims of certain crimes, the prevention of genuine, present or foreseeable threats such as terrorist attacks, and searches for people suspected of the most serious crimes.

General-Purpose AI Systems and Foundation Models

New provisions have been added to account for situations where AI systems can be used for many different purposes (general-purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system, such as a self-driving car. The provisional agreement also addresses the specific case of general-purpose AI (GPAI) systems, whose regulation is a central part of the agreement.

Specific rules have also been agreed for foundation models, described as large systems capable of competently performing a wide range of tasks, such as generating text and video, processing natural language, and generating computer code. The provisional agreement requires foundation models to comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for 'high-impact' foundation models: models trained with large amounts of data and with advanced complexity, capabilities, and performance well above average, which can spread systemic risks along the value chain.

A New Governance Architecture

Following the new rules on GPAI models and the need for their standardized enforcement at the EU level, an AI Office is set up within the Commission to oversee these most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states. A scientific panel of independent experts will advise the AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation of high-impact foundation models, and monitoring possible material safety risks related to foundation models.

To that end, the AI Board, comprising representatives of the member states and acting as a coordination platform and an advisory body to the Commission, will give member states a prominent role in implementing the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum will be set up for stakeholders such as industry representatives, SMEs, start-ups, civil society, and academia, to provide technical expertise to the AI Board.

Penalties

The fines for violations of the AI Act were set as a percentage of the offending company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher: €35 million or 7% for violations involving banned AI applications, €15 million or 3% for violations of the AI Act's obligations, and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on fines for SMEs and start-ups that infringe the AI Act's provisions.
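As a quick illustration of the 'whichever is higher' rule, here is a minimal Python sketch. The figures mirror the tiers reported above; the function itself is a hypothetical helper, not part of the Act.

    # Illustrative computation of the maximum fine per violation tier:
    # a fixed amount or a share of global annual turnover, whichever
    # is higher. Figures follow the provisional agreement as reported.
    FINE_TIERS = {
        "banned_application": (35_000_000, 0.07),   # €35M or 7%
        "act_obligation":     (15_000_000, 0.03),   # €15M or 3%
        "incorrect_info":     (7_500_000,  0.015),  # €7.5M or 1.5%
    }

    def max_fine(violation: str, global_turnover_eur: float) -> float:
        fixed_amount, turnover_share = FINE_TIERS[violation]
        return max(fixed_amount, turnover_share * global_turnover_eur)

    # A company with €2 billion in turnover breaching a prohibition
    # faces up to €140 million (7% of turnover exceeds €35 million).
    print(max_fine("banned_application", 2_000_000_000))

For small companies, the proportionality caps mentioned above would lower these ceilings, which this sketch does not model.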

The provisional agreement also makes clear that a natural or legal person may file a complaint with the relevant market surveillance authority concerning non-compliance with the AI Act, and may expect that complaint to be handled in line with that authority's dedicated procedures.

Transparency and Protection of Fundamental Rights

Significantly, the provisional agreement requires deployers of high-risk AI systems to conduct a fundamental rights impact assessment before putting such systems into use. It also provides for increased transparency regarding the use of high-risk AI systems. Notably, some provisions of the Commission proposal have been amended to indicate that certain public-sector users of high-risk AI systems will also be obliged to register in the EU database for high-risk AI systems. Moreover, newly added provisions require users of an emotion recognition system to inform natural persons when they are being exposed to such a system.

Measures in Support of Innovation

With a view to creating a more innovation-friendly legal framework and promoting evidence-based regulatory learning, the provisions on measures in support of innovation have been substantially modified compared to the Commission proposal.

Notably, it has been clarified that AI regulatory sandboxes, which are meant to establish a controlled environment for the development, testing, and validation of innovative AI systems, should also allow testing of innovative AI systems in real-world conditions. New provisions have also been added allowing AI systems to be tested in real-world conditions, under specific conditions and safeguards. To reduce the administrative burden for smaller companies, the provisional agreement lists measures to support such operators and provides for some limited and clearly specified derogations.

Entry into Force

The provisional agreement on artificial intelligence regulation stipulates that the AI Act should apply two years after its entry into force, with some exceptions for specific provisions.
