On December 9, 2022, the fourth meeting of the interdisciplinary team consisting of researchers from UJ's Future Law Lab, computer scientists/programmers, and lawyer-practitioners from the BSJPtech project took place.
Our joint meeting, held in hybrid form, took place at the Faculty of Law and Administration of the Jagiellonian University in the Auditorium Maximum. The entire meeting was devoted to the European Commission's draft regulation on artificial intelligence (AI), which is intended to facilitate access to justice and increase public confidence in AI systems.
Course of the meeting.
The seminar was led by lawyers from the BSJPtech project, who began the meeting by discussing the course of the work and the main points of the Artificial Intelligence Regulation (AI Regulation), followed by a detailed presentation of prohibited practices and high-risk AI systems. In addition, the obligations of providers and users of artificial intelligence were discussed, along with an analysis of standards and an assessment of their compliance with the law. Finally, the lawyers touched on transparency obligations and post-market monitoring of artificial intelligence. These issues were addressed by Jakub Kabza, Ani Sokolowska, Maciej Jura and Marcin Kroll.
What is the AI regulation?
The regulation is an attempt to improve the functioning of the internal market by establishing a uniform legal framework, particularly for the development, marketing and use of artificial intelligence. This will involve restrictions on the freedom to conduct economic and scientific activities. The aim of these potential restrictions is to put human beings at the centre, in particular by protecting fundamental rights.
What is artificial intelligence?
According to the definition proposed in the draft regulation, it is "software developed using one or more of the techniques and approaches listed in Annex I that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions that influence the environments with which it interacts."
The aforementioned Annex I provides a list of techniques and approaches that form the basis for qualifying a given system as an artificial intelligence system. Examples of such techniques include machine learning mechanisms, logic- and knowledge-based approaches, and statistical approaches.
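To illustrate how broad this definition is, consider a minimal sketch (our own illustrative example, not part of the regulation or the seminar): even a trivially simple statistical classifier, which learns a decision threshold from human-labelled data, would arguably fall under the definition, since "statistical approaches" appear in Annex I and the output is a "decision" generated for a human-defined objective. All names and data below are hypothetical.

```python
# Hypothetical sketch: a "statistical approach" in the sense of Annex I.
from statistics import mean

def train_threshold_classifier(scores, labels):
    """Learn a decision threshold from human-labelled examples."""
    positives = [s for s, l in zip(scores, labels) if l == 1]
    negatives = [s for s, l in zip(scores, labels) if l == 0]
    threshold = (mean(positives) + mean(negatives)) / 2
    # The returned function generates a "decision" for a human-defined
    # objective, i.e. an output in the sense of the draft definition.
    return lambda score: 1 if score >= threshold else 0

# Illustrative data only (e.g. imagine creditworthiness scores).
classify = train_threshold_classifier(
    scores=[0.9, 0.8, 0.2, 0.1],
    labels=[1, 1, 0, 0],
)
print(classify(0.85))  # high score -> decision 1
print(classify(0.15))  # low score -> decision 0
```

The point of the sketch is not the mathematics but its legal consequence: systems far simpler than deep neural networks could be captured by the draft's definition, which is why the Annex I list was one of the contested points in the legislative debate.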
What risks does AI pose?
The risks associated with bringing AI to market stem from the way it is designed and the data it works on. Both the design and the data can be biased, intentionally or unintentionally. AI algorithms can also be programmed to produce a predetermined result. Describing a complex and ambiguous reality with numbers can be a problem in itself.
Accordingly, the EU wants to introduce a risk-based approach to AI. It will be based on a case-by-case analysis of artificial intelligence systems from a fundamental rights and security perspective. All the solutions used in the regulation will be in line with current legislation, including the Charter of Fundamental Rights and the GDPR, as well as with the Union's previous AI policies on technology development, digital markets, and so on.
Transparency obligations will be imposed on system providers or their users and will apply to systems that interact with humans, that are used to detect emotions or to assign people to (social) categories on the basis of biometric data, or that generate or manipulate content (deepfakes).
Measures to support innovation.
The AI regulation provides for the creation of regulatory sandboxes by national authorities. These are controlled environments that facilitate the development, testing and validation of innovative artificial intelligence systems, according to a specific plan and for a limited period of time, before they are placed on the market or put into service. All activities will take place under the direct supervision of the competent authorities and in accordance with their guidelines.
Authorities will have specific remedies for any significant risks to health, safety and fundamental rights identified during the development and testing of the systems. The regulation also provides for the possibility of processing, within a sandbox, personal data collected for other purposes, subject to certain limitations.