On 21 April 2021, the European Commission published a Proposal for a Regulation laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (the Regulation, the Proposal). The Explanatory Memorandum of the Proposal recognises that artificial intelligence (AI) is a fast-evolving family of technologies that can bring a wide range of socio-economic benefits across the entire spectrum of industries and social activities, inter alia by improving prediction, optimising operations and resource allocation, and personalising the provision of services.

However, the Explanatory Memorandum also recognises that the same elements and techniques that power the socio-economic benefits of artificial intelligence can bring about new risks or negative consequences for individuals or for society.

Considering the above, the European Union's intention was to take a balanced approach and to ensure that Europeans can benefit from new technologies developed and operating in accordance with the values, fundamental rights and principles respected within the Union. The EU wishes to make the human being, and a human-centric focus, a general principle of the Regulation. In the Commission's view, this is to give people confidence that artificial intelligence will be used in a safe and lawful manner. The EU's interest in AI also carries economic value for the Union as a whole. Ultimately, the European Union aims to become a global leader in the development of safe, trustworthy and ethical artificial intelligence.

The regulatory framework proposed by the European Commission for AI pursues the following specific objectives:

  • to ensure that AI systems placed on the Union market and used are safe and respect existing law on fundamental rights and Union values;
  • to ensure legal certainty in order to facilitate investment and innovation in AI;
  • to enhance governance and the effective enforcement of existing law on fundamental rights and of safety requirements applicable to AI systems;
  • to facilitate the development of a single market for lawful, safe and trustworthy AI applications and to prevent market fragmentation.

The Commission's objective was to create a balanced and proportionate regulatory approach. On the one hand, there has been an attempt to create a comprehensive legal framework that does not become outdated too quickly. On the other hand, there has been an intention not to introduce unnecessary restrictions on trade; regulatory intervention is therefore limited to the areas where the greatest risks associated with the use of artificial intelligence can be anticipated.

More specifically, the subject matter of the Regulation is to regulate the placing on the market, the putting into service and the use of artificial intelligence systems. Title I of the Proposal contains a number of definitions, including the definition of an artificial intelligence system, which is to be understood as software developed with one or more of the techniques and approaches listed in Annex I and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with. Annex I, to which this definition refers, sets out a detailed list of approaches and techniques for the development of AI, to be adjusted by the Commission in the light of technological progress.

Title I also clearly defines the participants across the AI value chain, such as providers and users of AI systems, covering both public and private operators in order to ensure a level playing field.

Title II establishes a list of prohibited artificial intelligence practices, following a risk-based approach that differentiates between AI systems creating: (i) an unacceptable risk, (ii) a high risk, and (iii) a low or minimal risk. The list of prohibited practices comprises all those AI systems whose use is considered unacceptable as contrary to Union values. The prohibitions cover practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness, or to exploit the vulnerabilities of specific vulnerable groups such as children or persons with disabilities, in order to materially distort their behaviour in a manner that is likely to cause them or another person psychological or physical harm. The Commission also points to the negative impact of other manipulative or exploitative practices affecting adults; in its view, however, AI systems in these areas can be addressed by the currently applicable provisions on, inter alia, data protection, consumer protection and digital services, which ensure that natural persons are properly informed and have free choice not to be subject to profiling or other practices that might affect their behaviour.

The Proposal prohibits the use by public authorities of AI-based social scoring systems for general purposes, and prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to certain exceptions). Specific obligations are also to be imposed on providers and users of AI systems in order to ensure safety and respect for the currently applicable provisions protecting fundamental rights throughout the systems' lifecycle.

Title III contains specific provisions on artificial intelligence systems that create a high risk to the health and safety or fundamental rights of natural persons. Such systems may be placed on the European market subject to compliance with certain mandatory requirements and an ex ante conformity assessment. It is also worth emphasising that the classification of an AI system as high-risk depends not only on the function performed by the system, but also on its specific purpose and modalities of use. Title III further sets out the requirements that high-risk AI systems have to meet (concerning data and data governance, documentation and record-keeping, transparency and provision of information to users, human oversight, robustness, accuracy and security). It also specifies the obligations imposed on providers of AI systems, establishes a framework for notified bodies to be involved in conformity assessment procedures as independent third parties, and clarifies the conformity assessment procedures to be followed for each type of high-risk AI system.

Title IV concerns specific artificial intelligence systems that: (i) interact with humans, (ii) are used to detect emotions or to determine association with (social) categories on the basis of biometric data, or (iii) generate or manipulate content (deepfake technology). The provisions are intended to address the specific risks of manipulation that such systems create.

Where individuals interact with an AI system, or where their emotions or characteristics are recognised by automated means, they will have to be informed of that fact. The information obligation will also apply where an AI system is used to generate or manipulate image, audio or video content that appreciably resembles authentic content.

The remaining Titles V to IX concern, among other things, measures in support of innovation, governance and monitoring systems at the EU and national level, obligations of providers of AI systems to monitor and report incidents, and a framework for the creation of codes of conduct to encourage providers of non-high-risk AI systems to apply voluntarily the mandatory requirements applicable to high-risk AI systems.

A significant part of this article has been prepared on the basis of the Explanatory Memorandum of the Proposal, including quotations from it. The Polish version of the Proposal can be found at: https://eur-lex.europa.eu.