The proposal for the ARTIFICIAL INTELLIGENCE ACT, published by the European Commission on 21 April 2021, aims to regulate the use of AI systems. Risk is to be assessed on an objective scale with four different levels. In this article by ZELLER & SEYFERT, you will find out how these levels are designed and what practical effects the proposal will have. To begin with, this much: according to our assessment, it is likely that all artificial intelligence measures related to German labor law will be subject to significant regulation.
Artificial intelligence (AI) is a technology that offers significant potential benefits but also poses new dangers and threats to the safety of people and systems. To assess the significance of potential risks, AI systems are categorized into four levels: 1. unacceptable, 2. high, 3. low, 4. minimal. Different control mechanisms apply at each level.
Level 1: Unacceptable risk
First, AI systems that are considered a clear threat to human safety, livelihoods, and rights are to be banned. This includes systems that manipulate human behaviour in order to circumvent the free will of users, as well as applications that enable authorities to evaluate social behaviour (e.g., social scoring in China). Toys with a voice assistant that encourage minors to engage in dangerous behaviour can be cited as an example of AI systems that manipulate human behaviour.
Level 2: High risk
High-risk AI systems will be subject to strict requirements that must be met before such applications can be placed on the market. These include:
- appropriate risk assessment and mitigation systems,
- high-quality data sets fed into the system to minimize risks and discriminatory outcomes,
- logging of operations to allow traceability of results,
- detailed documentation with all necessary information about the system and its purpose, so that authorities can assess its compliance,
- clear and adequate information for users,
- adequate human oversight to minimize risks, and
- maximum robustness, security (against possible hacking), and accuracy.
However, this only applies to those systems that could cause significant harm during their use. A wide variety of areas of daily life are affected.
Level 2 risks arise, among other areas, from use in critical infrastructure (example: an autopilot in road traffic). This level also applies to education and vocational training if a person's access to education and professional life could be affected (example: assessment of exams).
Products and services
Safety components of products may also be affected, e.g., an AI application for robot-assisted surgery. In the area of German labor law (employment of workers, human resources management, and access to self-employment), the use of AI may be associated with level 2 risks (example: software for evaluating resumes in hiring procedures). Increased security requirements should also apply to important private and public services (example: checking creditworthiness).
Fundamental rights and administration of justice
For law enforcement, level 2 applies if people’s fundamental rights could be interfered with (example: assessing the reliability of evidence). High security requirements also apply to applications related to migration, asylum, and border control, e.g., verifying the authenticity of travel documents. Finally, AI applications in the administration of justice and democratic processes also fall into this category (example: application of legislation to concrete facts).
Restrictions at level 2
In the aforementioned areas of risk level 2, the use of AI is not prohibited per se, but high requirements are (justifiably) placed on its use. An error or even the misuse of these systems can never be completely ruled out. For this reason, constant checks must be carried out and appropriate safety precautions to avert danger must be guaranteed. This can become important for regulations on artificial intelligence in employment law.
This supervision is also appropriate and necessary from the point of view of data protection. This applies in particular to all types of remote biometric identification systems. Their real-time use in public spaces for law enforcement purposes will be prohibited as a matter of principle. Their use will only be permitted when absolutely necessary. Examples include the search for a missing child, the defense against a concrete and immediate terrorist threat, or the identification and prosecution of suspects of serious crimes.
It should be emphasized that such use of AI systems must be authorized by a judicial authority or other independent body. However, exactly how this will be implemented in European or German law has not yet been determined. It is nevertheless foreseeable that there will be considerable regulation of all labor-law-related measures in Germany.
Level 3: Low risk
When dealing with AI systems such as chatbots, users should be aware that they are dealing with a machine. This is necessary so that they can make an informed decision as to whether or not they want to continue using the application. Therefore, more extensive transparency obligations apply to such systems.
Level 4: Minimal risk
Systems that pose minimal risk are not subject to restrictions on use. For example, in the case of AI-supported video games or spam filters, the draft regulation is not intended to intervene.
How is the control of the standards carried out in practice?
In practice, compliance with these rules is to be monitored by the competent national market surveillance authorities. In addition, the proposal provides for the creation of a European committee for artificial intelligence, which is to accompany implementation and drive forward the development of standards in the field of AI.
What are the practical implications of AI regulation?
The question for employers is what practical relevance this regulatory proposal may have. AI systems are expected to be omnipresent in the future. Digitization will advance at an increasing pace and continue to permeate most areas of life and work.
This raises the question of whether artificial intelligence innovations require political regulation and, if so, to what extent. The EU Commission has answered this fundamental question in the affirmative with the present proposal – which is not surprising, since the Commission is said to have a certain tendency to regulate all areas of life.
Should the present proposal be implemented in directly or indirectly applicable law, employers will have to ensure that the imposed control and security mechanisms do not also prevent innovations that would be desirable for society as a whole because of the numerous opportunities they inherently offer. After all, based on the status quo to date, considerable regulation of all artificial intelligence measures related to German labor law is to be expected.