Brussels wants to develop a “gold standard” for the new technology. Companies fear competitive disadvantages. There is one point in particular that bothers them.

The final negotiations on the so-called AI Act will begin in the coming weeks. With it, the European Union (EU) wants to regulate the use and development of artificial intelligence (AI) in almost all areas of life. On Tuesday (21 February 2023), the member states intend to adopt a common position in the Council, which will then be reconciled with the EU Parliament and the Commission.

This is a key regulatory project for the bloc. Artificial intelligence is one of “the most strategically important technologies of the 21st century,” the EU Commission writes in a paper. “The way we approach AI is crucial for the world we will live in in the future.” But the way the EU is approaching the legislation worries many companies.

Brussels wants to set a standard for how the technology is designed in the future, even beyond Europe's borders. In AI development, the USA and China are essentially competing for world market leadership. In terms of money and talent, data and computing power, Europe lags behind the two other economic blocs – but politically it has leverage.

The details of the regulation are likely to be negotiated up to the last minute. But the broad outlines already have start-up founders, corporate executives and association representatives fearing regulation that will create great uncertainty and interfere heavily with the development of the technology.

An IT company warns that the broad definition would regulate numerous products that have little to do with AI. Managers at a Dax-listed company [Dax: Deutscher Aktienindex, also known as the GER40] criticize the “vagueness” of the project. The digital association Bitkom warns against “focusing too much on risks”. And the KI-Bundesverband [Federal Association for AI] even sees “the entire AI ecosystem and in large parts also the use of software” massively restricted.

Why does the EU want to regulate artificial intelligence?

The EU Commission sees artificial intelligence as a technology with great potential – for better or for worse. According to the draft law, it promises “many benefits for the economy and society”, whether in climate protection, in healthcare or in sectors such as mobility. But it also creates new risks, for individuals and for society as a whole.

A few examples illustrate this. The organization AlgorithmWatch complains that automated decision-making systems – which often use AI – repeatedly discriminate against people, be it in the allocation of jobs or in the biometric recognition of faces. There is also a lack of information on exactly how these systems work, which makes it difficult to challenge their decisions.

Artificial intelligence requires large amounts of data as learning material – data that carries human prejudices, some of them hidden. Data quality is generally crucial to the results. On top of that, the results of the calculations are often difficult to comprehend. The algorithm: a black box.
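
A minimal sketch can make the point about hidden bias concrete. The group labels and numbers below are purely hypothetical, not from the article; the sketch only shows that a model which learns nothing but historical rates reproduces whatever bias those rates contain.

```python
# Hypothetical numbers, for illustration only: a "model" that merely
# learns historical hiring rates reproduces the bias in its training data.

# Historical outcomes per group: (hired, rejected)
training_data = {"group_a": (70, 30), "group_b": (30, 70)}

# "Training": learn each group's historical hiring rate.
learned_rate = {
    group: hired / (hired + rejected)
    for group, (hired, rejected) in training_data.items()
}

# "Prediction": equally qualified applicants get unequal recommendations.
for group, rate in learned_rate.items():
    print(f"{group}: recommended with probability {rate:.0%}")
```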

The EU Commission therefore wants to ensure that research institutions and companies develop artificial intelligence according to “European values”. The hope is to create a “gold standard” for regulation: as with the GDPR, Brussels would set rules that ideally take effect globally – and at the same time strengthen Europe as a business location, which is currently losing ground, not least because of energy prices.

How does the AI Act regulate artificial intelligence?

The current draft of the AI Act, including annexes, is 125 pages long. It provides for a risk-based approach – the rules depend on the risk a particular technology is assumed to pose: minimal, limited, high or unacceptable.

The focus of the AI Act is on high-risk applications, which the Commission estimates account for up to 15 percent of all AI systems. The regulation covers the operation of critical infrastructures as well as algorithmically assisted surgery. Also included: systems that pre-sort job applications and those that predict offending behavior. Life-insurance risk models and credit ratings in the banking sector also fall under this definition.

The AI Act imposes strict requirements on these applications: companies must introduce risk management for artificial intelligence, fulfill transparency obligations towards users, submit technical documentation with detailed information on the data used, and register their program in an EU database.

Where could there be difficulties?

Most companies refrain from public criticism and instead assert their influence through the industry associations. According to business circles, these make regular appearances in Berlin and Brussels. And indeed, the current draft of the EU Council already takes some suggestions into account.

Nevertheless, from industry's point of view there is still room for improvement. The main criticism is aimed at the definitions. In addition to “concepts of machine learning”, the draft law also designates statistical approaches as well as search and optimization methods as artificial intelligence. The KI-Bundesverband complains that this covers almost every piece of software being developed today.

The technology industry also argues that which applications entail a high risk must be defined much more precisely. Bitkom demands that applications not be classified across the board: not every program for the human resources department sorts CVs, and not every piece of software used by an electricity supplier controls the grid.

A Dax-listed company criticizes that it is unclear who bears the bureaucratic obligations for complex products such as machine controls, robots or cars – given the coordination required between manufacturers and numerous suppliers, a “huge overhead” is to be expected, especially in German industry.

Last but not least, the technology industry sees a need for discussion when it comes to handling data: it is supposed to be “representative, error-free and complete” in order, for example, to prevent discrimination against underrepresented population groups. Developers point out, however, that high-quality data sets are available only to a very limited extent. The requirement is therefore likely to be difficult to meet.
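
What “representative” could mean in practice can be sketched with a simple share comparison. Again, the group labels, numbers and the five-percentage-point threshold below are hypothetical illustrations, not anything prescribed by the draft.

```python
# Hypothetical shares and threshold: a crude check of whether a data set
# is "representative" of the population it will be used on.

population_share = {"group_a": 0.49, "group_b": 0.51}  # assumed census shares
dataset_counts = {"group_a": 9_000, "group_b": 1_000}  # assumed training data

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    share = count / total
    gap = share - population_share[group]
    status = "UNDERREPRESENTED" if gap < -0.05 else "ok"  # 5-point threshold
    print(f"{group}: dataset {share:.0%} vs. population "
          f"{population_share[group]:.0%} -> {status}")
```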

Source: Handelsblatt (German language)

[Editorial comment: The EU Parliament is scheduled to vote on the draft AI Act by the end of March 2023. Following this vote, discussions between the EU Member States, the EU Parliament and the Commission (the so-called trilogue) are expected to commence in April. If this timeline is met, the final AI Act should be adopted by the end of 2023.]

How the EU AI Act assesses risk

Four levels: The European Union wants to regulate artificial intelligence with the AI Act. A risk-based approach is envisaged – the greater the risk, the higher the requirements. The regulation provides for four levels, restated in the short code sketch after this list.

Unacceptable risk:

Applications that pose a clear threat to human rights – such as facial recognition in public spaces or social-credit systems with which the state steers citizens toward a certain behavior – are considered an unacceptable risk. They are strictly forbidden.

High risk:

According to the regulation, a high risk exists when the health, safety or fundamental rights of EU citizens are at stake. This includes biometric systems, the operation of critical infrastructures and personnel software, for example for screening job applications.

Limited risk:

Applications in non-critical areas, such as chatbots for customer service, are considered limited risk. They are subject only to a transparency obligation – users should know that they are dealing with an automated system.

Minimal risk:

Many applications pose only a minimal risk – computer games, film recommendations or spam filters, for example. The regulation does not provide for any restrictions here.
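
To make the four-level logic concrete, here is the sketch referred to above – a reader's aid only, not part of the regulation. The level assignments simply restate the examples from this box, and the obligation strings condense the article's description.

```python
from enum import Enum

class Risk(Enum):
    # Obligations as summarized in the article, condensed to one line each.
    UNACCEPTABLE = "prohibited outright"
    HIGH = "risk management, transparency, documentation, EU database entry"
    LIMITED = "transparency obligation only"
    MINIMAL = "no restrictions"

# Example applications named in the article, mapped to their level.
EXAMPLES = {
    "facial recognition in public spaces": Risk.UNACCEPTABLE,
    "state social-credit scoring": Risk.UNACCEPTABLE,
    "operation of critical infrastructure": Risk.HIGH,
    "software screening job applications": Risk.HIGH,
    "customer-service chatbot": Risk.LIMITED,
    "spam filter": Risk.MINIMAL,
}

for application, level in EXAMPLES.items():
    print(f"{application}: {level.name} -> {level.value}")
```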

Source: Handelsblatt (German language)