The European Union has made a ground-breaking move towards regulating the burgeoning field of artificial intelligence with the introduction of the AI Act. This landmark piece of legislation, the first of its kind in the world, harmonizes rules on AI in a way that fosters the development and uptake of safe and trustworthy AI systems across the EU single market.
The AI Act responds to how quickly AI technologies have been developed and deployed across different sectors. It takes a risk-based approach: the greater the risk an AI system poses to society, the stricter the rules it must follow. This approach is meant to ensure that AI systems are developed and used in a way that respects fundamental rights, thereby building trust and accountability.
The new law classifies AI systems according to the risk they pose. Low-risk systems are subject only to transparency obligations. High-risk AI systems may enter the EU market only after meeting a series of requirements. Certain AI practices are banned outright because the risks inherent to them are considered unacceptable; examples include cognitive behavioral manipulation and social scoring.
The AI Act also forbids the use of AI for predictive policing based on profiling, as well as systems that categorize people according to sensitive characteristics such as race, religion, or sexual orientation. These prohibitions are intended to prevent discrimination and to protect privacy and dignity.
The law also regulates general-purpose AI (GPAI) models, which have become very commonplace. GPAI models that do not pose systemic risks face lighter requirements, while those that do pose systemic risks are subject to much stricter obligations. This approach aims to reduce potential harm while nurturing innovation and growth in the sector.
To put these new rules into practice, the EU is establishing several governance structures: an AI Office within the Commission and an independent panel of scientific experts. These bodies will support enforcement and ensure that the common rules are applied consistently throughout the EU.
The AI Act will undoubtedly have an impact on businesses operating within the EU. Companies will have to be more transparent about the data used to train their AI systems, which may expose information they have previously treated as trade secrets. Without such transparency, however, businesses will struggle to win the trust of consumers, and AI systems risk perpetuating biases and producing unfair outcomes.
The AI Act will be rolled out in phases over two years, giving regulators time to implement the new rules and businesses time to adjust to their new obligations. The phased implementation also allows for feedback and adjustments along the way, so that the regulations remain effective and fair to all.