AI Act, Europe first in the world to regulate artificial intelligence

Security Awareness
6 May 2024
AI Act - biometric recognition

AI Act: adequate training essential for companies

Those who feared that artificial intelligence could cause an unchecked drift in the use of technology can now sleep a little easier.
Especially if they are European citizens.

Indeed, Europe has recently become the first in the world to legislate on such a new and, in many ways, still unfamiliar subject.

On March 13, 2024, the European Parliament approved by a very large majority (523 votes in favor, 46 against and 49 abstentions) the AI Act, the European regulation on artificial intelligence, which had been proposed by the European Commission on April 21, 2021, with the aim of ensuring respect for the fundamental rights and dignity of people in the development and use of AI technologies.

Its scope spans all sectors (except the military) and all types of artificial intelligence.

The Regulation is expected to become law by May or June, as the text must be translated into 24 languages and adapted to national regulations through a further vote by Parliament and approval by the Council of the European Union.

Applications of artificial intelligence are classified and regulated according to their risk of causing harm to citizens and fall mainly into three categories: prohibited practices, high-risk systems, and other systems.

Prohibited AI practices include, for example, biometric categorization systems and the indiscriminate scraping of facial images from the Internet or CCTV footage in order to create facial recognition databases.
The goal is to exclude all those systems that manipulate human behavior or exploit people’s vulnerabilities.

Real-time biometric identification in publicly accessible spaces is not prohibited outright but restricted: permissible uses include, for example, searching for a missing person or preventing a terrorist attack, and they require prior approval by a judicial or independent authority.
Ex-post biometric identification is considered high-risk; recourse to it therefore requires judicial authorization linked to a crime.

High-risk systems, according to the proposed regulation, are those that pose significant threats to people’s health, safety or fundamental rights.
They require a mandatory conformity assessment, undertaken as a self-assessment by the supplier, before being placed on the market.
They include not only critical infrastructure or security components but also:

  • education and vocational training (to determine access or admission, assign people to institutions or training programs at all levels, evaluate learning outcomes, assess the appropriate level of education for a person, monitor and detect prohibited student behavior during tests, etc.);
  • the management of workers (hiring and selecting people, making decisions on conditions of employment, promotion and termination of contractual relationships, assigning tasks based on individual behavior, etc.);
  • essential services, including health services, social security benefits and social services, as well as creditworthiness assessment, the administration of justice, and migration and border management.

Particularly critical applications, such as those for medical devices, require that the supplier’s self-assessment under the Artificial Intelligence Regulation be taken into account by the body conducting the conformity assessment pursuant to existing European Union regulations.

Other systems, that is, artificial intelligence systems outside the above categories (including large generative AI models, such as the well-known content creation platform ChatGPT), will have to comply with a number of transparency requirements:

  • Disclose that the content was generated by AI;
  • Ensure that models do not generate illegal content;
  • Publish summaries of the copyrighted data used for training.

More powerful models that could pose systemic risks will also have to comply with other obligations, such as conducting model evaluations, assessing and mitigating systemic risks, and reporting incidents.


Application of the AI Act

The scope of the Regulation covers both suppliers and users of AI systems, who must ensure that products meet the established requirements, are accompanied by the necessary documentation, and carry a European conformity marking. There are exceptions, however, such as systems intended exclusively for military or defense purposes and free and open-source models that pose no systemic risk.

Time frame for implementation of the AI Act

The Regulation will take effect in stages, partly to give companies time to adjust to the new measures.

More precisely:

  • 6 months after entry into force, member states must phase out prohibited AI systems;
  • 1 year later, the rules for general-purpose artificial intelligence systems will begin to apply;
  • 2 years later, the rest of the AI Act will be applicable;
  • 3 years later, the obligations for high-risk systems will apply.

Sanctions under the AI Act

The Regulation classifies AI systems according to their level of risk, which can be minimal, limited, high or unacceptable. This entails different responsibilities and limitations for developers and users of such systems, with penalties for non-compliance ranging from 1.5 percent to 7 percent of the company’s global annual turnover.

European Committee for Artificial Intelligence

The Regulation also provides for the introduction of a European Committee for Artificial Intelligence to promote international cooperation and ensure compliance with the Regulation.
EU countries will need to establish, and make accessible at national level, regulatory testing spaces and real-world testing mechanisms so that SMEs and start-ups can develop AI systems before bringing them to market.

The reactions

As always, there are proponents of regulation, who fear the dangers of technological deregulation and therefore welcome the new measures, and those, on the other hand (particularly companies), who fear that the rules will impose excessive limitations on application development.

In any case, according to Internal Market Commissioner Thierry Breton, quoted by Il Sole 24 Ore, “Europe is now a world standard in reliable AI.” The same business newspaper calls the Regulation the “most comprehensive regulatory system on artificial intelligence to date, a historic package of standards that, in the absence of appropriate U.S. legislation, could set the tone for how AI should be governed in the Western world.”

Artificial intelligence is certainly a subject that should not be underestimated and must be approached with awareness, seriousness and professionalism, not least because it is constantly evolving. These new regulations also represent an additional field of knowledge that must be acquired and managed.
It is therefore essential that companies undergo specific training that makes them well prepared on both the technological and legal fronts.
