On June 14th, the European Parliament adopted the AI Act, the first comprehensive set of regulations on artificial intelligence, by a vote of 499 to 28, with 93 abstentions. Prof. Tomasz Trzciński, research group leader at IDEAS NCBR, and Dr. Tomasz Michalak, research team leader at IDEAS NCBR, share their insights on the matter.

Regulating the high-speed train

“The European Union began working on legal frameworks for artificial intelligence two years ago, when it was not yet such a prominent topic. The final stage of this work coincided with the accelerated development of AI, during which it gained significant attention from the media, politicians, and businesses. Regulating the ‘high-speed train’ has the advantage of minimizing the risk of derailment, but on the other hand, the train will move slower than its competitors. I believe the direction is right, but there is a significant risk of slowing down AI development in Europe, and both of these aspects should be considered when introducing potential regulations,” says Prof. Tomasz Trzciński from IDEAS NCBR, leader of the research group on Zero-waste machine learning in computer vision.

Professor Tomasz Trzciński

“We can also see that the EU does not intend to regulate everything once and for all, as it aims to establish the European Artificial Intelligence Agency to continuously monitor the development of this technology. I hope it will not become an ‘AI police’ because such a function would simply be impossible to implement – access to models and data is too decentralized. A sensible scenario is for the agency to serve as a think tank, acting as an advisory body to governments,” adds Prof. Trzciński.

Will the AI Act limit scientific research?

“EU institutions assure us that the regulations outlined in the AI Act apply only to commercial applications of artificial intelligence and do not restrict scientific research. On this view, the European Union will not lag behind the United States or China. However, assuming that what happens in research does not spill over into business is quite risky. In practice, the flow of people and knowledge between these two areas is significant and heavily influences how realistic and effective such safeguards will be,” comments Tomasz Trzciński.

“On the positive side, the AI Act focuses on solutions that prevent the use of artificial intelligence for tasks we typically have concerns about, such as privacy infringement. We should be particularly attentive to ethical issues related to medical data, diagnoses, or profiling individuals within the context of the law. In general, any issues related to personal data are a good indicator of the need for legal changes. With access to such data, we should have clear legal frameworks on what should not be done, similar to what happens in medical research,” he further explains.

“However, I would not expect that everyone will feel secure with artificial intelligence solely based on regulations. Users trust systems that are predictable and whose workings they understand, not necessarily those that are regulated. I believe that interpretability of AI, through solutions like explainable AI, can bring much more good in terms of trust than regulations alone,” concludes Tomasz Trzciński.

Legal changes and innovation

Dr. Tomasz Michalak, leader of the research team in IDEAS NCBR AI for Security, also comments on the AI Act.

“Any significant technological change raises a dilemma: is it better to wait for the technology to develop before regulating it, or to regulate it from the very beginning, before it reaches maturity? Introducing regulations gradually as the technology develops allows for more innovation and flexibility in research. It also makes it possible to exploit the full potential of the evolving technology without unnecessary constraints. There is greater room for creativity in technology development, and it is easier to achieve breakthrough results when the avenues for further development are not strictly regulated,” explains Tomasz Michalak.

Dr Tomasz Michalak

“On the other hand, regulating AI development from the very beginning, even before the technology reaches full maturity, has its own advantages. It allows for the responsible identification of potential risks associated with AI and proactive addressing of ethical challenges. Planning research activities at an early stage of technology development in collaboration with lawmakers enables a more purposeful adaptation of the resulting solutions to societal values and expectations. As a result, it is possible to ensure safety, privacy protection, and responsible use of resources, which is particularly important for a technology with a significant impact on human well-being, such as AI,” he elaborates.

“Unfortunately, finding the right balance between regulation and innovation is a significant challenge. It requires ongoing dialogue between decision-makers, researchers, entrepreneurs, and society. Potential risks and benefits arising from the technology need to be monitored, and the appropriate scope and timing of regulations need to be determined. Therefore, the ongoing discussions in the European Union, the United States, and around the world are highly relevant. Considering the potential risks and impact of AI on humanity, in my opinion, taking precautions and implementing regulations are the more appropriate approach in this case. Of course, the question is: what kind of regulations and for whose benefit?” says the leader of the research team in IDEAS NCBR.

“Regarding the European Artificial Intelligence Agency and the debate on whether to make it more of a ‘police’ or an advisory body, the question arises: is it possible to effectively regulate a technology like AI without any enforcement mechanism? An interesting example is the early regulation of the internet. When the internet was still a relatively new technology, regulations were minimal or nonexistent; the focus was on supporting development rather than imposing strict enforcement mechanisms. As internet access became widespread and its impact on various aspects of society grew, governments began introducing more rigorous rules and enforcement mechanisms to address key issues such as privacy, intellectual property rights, and cybersecurity. I believe those in their forties remember well how intellectual property rights were respected online 20 years ago. Therefore, I don’t believe in the concept of an ‘AI police’ at present. In the future, perhaps,” concludes Tomasz Michalak.

Regulating AI like aviation regulation

“There are numerous examples of technologies that were initially met with widespread skepticism but gained societal trust through regulation. When airplanes first appeared, pilots were considered risk-takers, and the idea of flying was seen as folly. After all, humans don’t have wings, right? And indeed, in the early years of aviation, airplanes were not as safe as they are today. Through the establishment of regulatory bodies, the introduction of safety standards, and technological advancements, aviation gradually earned the trust of society. Today, it is a widely accepted and regulated means of transportation. It seems to me that this is the path we will have to travel with AI: from significant concerns, through regulations, to familiarity and acceptance,” explains Tomasz Michalak.

“But regulations also pose a risk, as we discussed earlier, namely the potential stifling of innovation. We don’t know whether research that currently seems risky may turn out to be groundbreaking in a completely different context. However, I believe it is possible to require certain basic principles of fairness from every type of AI, such as a prohibition of discrimination based on skin color or religious beliefs. These requirements can be formulated as simple axioms. For example, the decision made by an AI system towards a person of Asian descent should be the same as towards a person of African descent if skin color is the only distinguishing variable,” adds the leader of the research team in IDEAS NCBR.
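The axiom described above can be sketched as a counterfactual check: hold every other feature fixed, vary only the protected attribute, and require the decision to stay the same. The decision rule and feature names below are hypothetical placeholders for illustration, not part of the AI Act or of any real system.

```python
def decide(applicant: dict) -> bool:
    """Hypothetical decision rule standing in for a real AI model.

    This toy rule (correctly) ignores the protected attribute and
    looks only at income and credit score.
    """
    return applicant["income"] >= 50_000 and applicant["credit_score"] >= 650


def counterfactually_fair(applicant: dict, attribute: str, values: list) -> bool:
    """Return True if the decision is identical for every value of the
    protected attribute, with all other features held fixed."""
    decisions = set()
    for value in values:
        counterfactual = {**applicant, attribute: value}  # flip only the attribute
        decisions.add(decide(counterfactual))
    return len(decisions) == 1  # one unique decision => the axiom holds


applicant = {"income": 60_000, "credit_score": 700, "ethnicity": "Asian"}
print(counterfactually_fair(applicant, "ethnicity", ["Asian", "African"]))  # True
```

A rule that consulted `ethnicity` would produce different decisions for the two counterfactuals and fail this check, which is what makes such axioms mechanically testable.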

“Of course, there are areas in AI research that should be examined more closely than others as a matter of principle. For example, AI in healthcare requires greater regulatory attention than industrial applications for detecting defects on a factory production line. Another area requiring attention is the application of AI in security and prevention. Importantly, these areas are already heavily regulated. In theory, we could extend existing regulations to AI technologies,” says Tomasz Michalak.
