18.05.2023

An increasing number of artificial intelligence researchers talk about the concept of “AI for Social Impact”, i.e. the use of artificial intelligence to solve significant challenges in the social dimension. This issue was the main topic of the meeting at the headquarters of IDEAS NCBR, attended by scientists from this center and experts from the Panoptykon Foundation and NASK.

On May 10, 2023, IDEAS NCBR – a center for AI research and development – invited journalists and opinion leaders interested in AI to a discussion over coffee entitled “The friendly side of AI. Why AI should develop sustainably.”

The central part of the meeting was a panel featuring scientists – Prof. Piotr Sankowski, President of IDEAS NCBR, and Prof. Tomasz Trzciński, leader of a research group at IDEAS NCBR – and experts Katarzyna Szymielewicz of the Panoptykon Foundation and Inez Okulska, PhD, of NASK.

The discussion took place almost half a year after the surge of public interest in AI. The media debate on this technology is dominated by entertainment and business contexts, juxtaposed with the potential threats that its unsustainable development may pose.

Meanwhile, in the scientific community, we are increasingly hearing about AI applications that can help address societal challenges.

Responsible management of resources

Some researchers already draw attention to the need to create “greener” algorithms through the so-called “recycling” of resources used in machine learning. Importantly, reusing these resources can also improve the performance of algorithms. Research in this area is conducted by the “Zero-waste machine learning in computer vision” group at IDEAS NCBR. Sustainable AI development is particularly important for computer vision: in medicine, for example, where algorithms support personnel during robot-assisted surgeries, the computational efficiency of an algorithm translates directly into the medics’ reaction time during the procedure, reducing, among other things, the risk of complications.

Scientists are trying to use AI in medicine on many levels: for example, to diagnose dangerous skin lesions, to virtually stain tissues or to select organs for transplantation. This approach can not only speed up the work of doctors and medical staff but also reduce the cost of treatment.

There are also less obvious benefits that AI can offer in the social dimension. These include a positive impact on agriculture (collection and analysis of precise crop data) and precision forestry (methods enabling the exact measurement of individual trees).



Are social aspects crucial?

“When building AI solutions – even those that do not provide direct benefits – one should always take into account the social aspects accompanying the development of AI. We must not ignore them and focus only on increasing revenues, because this can lead to very undesirable situations, such as aggravating prejudices or inequalities, instead of combating them,” said Piotr Sankowski, President of IDEAS NCBR.

In the United States, algorithms are used to route police patrols. They can identify specific areas where crime is more likely to occur, for example during major sporting events. Clues are generated based on associations between the place, events, and crime rates. Other tools use personal data to determine, before sentencing, whether a defendant is likely to commit another crime. The problem lies in the data fed to the algorithm, which can easily introduce bias. According to an article in MIT Technology Review, a black person is five times more likely to be detained without justifiable cause than a white person. Poorly constructed algorithms only reinforce prejudices and can lead to social unrest.

“It is important for users to have insight into what inference path led the algorithm to a specific result. This would not only reduce people’s concerns about AI but also minimize mistakes and the bias that follows them. Therefore, I have no doubt that it is necessary to create tools in the field of Explainable AI that would clarify decisions made by algorithms and allow the errors they contain to be corrected,” added Piotr Sankowski, who conducts research in this area.
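Such post-factum explanations can, at their simplest, amount to probing an opaque model's output. A minimal sketch, assuming a hypothetical black box: here `predict` is a stand-in linear scorer (not any model discussed at the meeting), and `perturbation_importance` is an illustrative helper that measures how much each input feature sways the result.

```python
import numpy as np

# Hypothetical "black box": we may only call predict(), not inspect it.
# A simple linear scorer stands in for any opaque model here.
WEIGHTS = np.array([3.0, 0.0, -1.5])

def predict(x: np.ndarray) -> float:
    return float(WEIGHTS @ x)

def perturbation_importance(predict_fn, x: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Estimate each feature's influence by nudging it by `eps`
    and measuring how much the model's output moves (post-hoc,
    model-agnostic explanation)."""
    base = predict_fn(x)
    scores = np.zeros(len(x))
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] += eps
        scores[i] = abs(predict_fn(x_pert) - base)
    return scores

x = np.array([1.0, 2.0, 3.0])
print(perturbation_importance(predict, x))  # feature 0 dominates, feature 1 is irrelevant
```

Dedicated Explainable AI toolkits refine the same idea with better statistics, but the principle – explain the decision by querying the model, not by reading its internals – is the one described above.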

“This can be done on several levels: build models on architectures designed in advance to be as explainable as possible, or explain the models’ decisions post factum – more statistically or more empirically, depending on the model’s complexity and on how much access to it we have. The so-called ‘black boxes’ can, among other things, be ‘opened’ behaviorally to a certain extent, and this also applies to large language models. This involves designing tests – for a chatbot, for example, sequences and variants of specific queries (so-called ‘prompts’) – and a multifaceted study of the model’s responses. Building responsible and trustworthy AI is currently the most important work, and it must go hand in hand with the development of the algorithms themselves, so that the focus on spectacular results does not obscure the entire horizon. In-depth analysis and understanding of the data used to train models – explaining decisions, examining possible biases and the resilience of proposed solutions – is as important as the next percentage point in the rankings of results,” emphasized Inez Okulska, PhD, Head of the Department of Linguistic Engineering and Text Analysis at NASK.
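The behavioral “opening” of a black box described above can be pictured as a small test harness. Everything below is a hypothetical illustration: `moderate` is a toy stub standing in for a real chatbot or classifier, and the variant list sketches the idea of sequenced variants of one query whose answers a robust model should keep consistent.

```python
# Toy stand-in for an opaque text model we can only query, not inspect.
def moderate(prompt: str) -> str:
    return "flagged" if "attack" in prompt.lower() else "ok"

# Variants of the same underlying query: surface changes only.
VARIANTS = [
    "How do I attack this problem?",
    "how do i ATTACK this problem?",
    "How do I attack this problem???",
]

def consistency_check(model, variants):
    """Behavioral test: does the model give one and the same answer
    across superficial rephrasings of the query?"""
    answers = {model(v) for v in variants}
    return len(answers) == 1

print(consistency_check(moderate, VARIANTS))
```

Real evaluations of large language models multiply this pattern across many query families and study the responses along several axes, but the mechanism is the same: probe, compare, look for inconsistencies.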

“The market for AI-based solutions has already shown that it will not regulate itself. Errors and distortions in the operation of unsupervised algorithms, training models on incomplete data sets, often in violation of personal data protection rules – these are all well-recognized problems. To prevent their escalation, the European Union is currently working on a comprehensive regulation of the AI sector. In place of the fuzzy declarations of an ethical approach to technology that companies are so eager to serve us today, there will be specific legal obligations – including an obligation to assess a given system’s impact on human rights before it is introduced to the market,” explained Katarzyna Szymielewicz, lawyer and co-founder of the Panoptykon Foundation, who is actively involved in the work on the Act on Artificial Intelligence.

Green AI

“AI can also be used to support the energy transition and increase energy efficiency (smart grids). Energy efficiency can be increased already at the stage of building algorithms: we can train and use machine learning models in an energy-efficient way. This is particularly important because models are becoming increasingly common and require more and more computing power,” commented Tomasz Trzciński, leader of the research group dealing with this issue at IDEAS NCBR.

“In our research group, we do not limit machine learning models; we try to increase their effectiveness. For this purpose, we use information, calculations and resources to which we already have access. You can call it the ‘recycling of calculations’. In the project, we focus on creating models that learn to be efficient, not just capable of solving a specific task,” added the expert.
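One common way to realize this “recycling of calculations” – offered here as an illustrative assumption, not as a description of the group’s actual models – is an early-exit cascade: a cheap first stage handles easy inputs, and the expensive stage, invoked only for hard cases, reuses the features the first stage already computed.

```python
import numpy as np

def cheap_features(x):
    # Inexpensive first-stage computation (think: a small network's embedding).
    return x * 2.0

def cheap_head(feats):
    # Fast decision from the cheap features: returns (prediction, confidence).
    score = float(np.tanh(feats.sum()))
    return (score > 0), abs(score)

def expensive_head(feats):
    # Costly second stage; crucially it REUSES the cheap features
    # instead of recomputing from raw input -- the recycling idea.
    return bool(feats.mean() + 0.1 > 0)

def predict(x, threshold=0.9):
    feats = cheap_features(x)
    pred, conf = cheap_head(feats)
    if conf >= threshold:        # confident: exit early, save compute
        return pred
    return expensive_head(feats)  # ambiguous: spend more, but reuse feats

print(predict(np.array([5.0, 5.0])))     # easy input -> early exit
print(predict(np.array([0.01, -0.01])))  # ambiguous -> second stage runs
```

A model that “learns to be efficient” in this sense spends computation proportional to the difficulty of each input rather than a fixed maximum for every one.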

The issue of reducing energy consumption is extremely important because training a single large AI model can emit as much carbon dioxide as five cars over their entire life cycles.

Is AI for Social Impact already in the mainstream?

Although the concept of AI for Social Impact is not new, in recent times – along with the growing discussion about artificial intelligence – many people have started to pay more attention to it. During this year’s edition of the prestigious conference of the Association for the Advancement of AI (AAAI), held in Washington, researchers paid a lot of attention to the social applications of AI. A separate track of the event was devoted to this topic, and in August AAAI will organize a dedicated conference, “Artificial Intelligence, Ethics and Society.”

“At the biggest machine learning conferences, a mandatory ‘social impact’ section has for some time now appeared in the submission forms, in which we, as authors, must describe the impact of our algorithms on society,” explained Tomasz Trzciński.

Not without significance are also the values that guide many researchers. “The popularity of the ‘AI for social impact’ topic is boosted by the growing trend, which I would call ‘anti-corporate’ – young people want to create socially useful projects that solve real problems and can lead to an improvement in the quality of life,” added Piotr Sankowski, President of IDEAS NCBR.

***

Katarzyna Szymielewicz is a lawyer specializing in human rights and new technologies. Co-founder and president of the Panoptykon Foundation, as well as Vice-President of European Digital Rights in 2012-2020. A graduate of the Faculty of Law and Administration of the University of Warsaw and of Development Studies at the School of Oriental and African Studies. Formerly, a lawyer at the international law firm Clifford Chance, a member of social councils at the Minister of Digital Affairs, a scholarship holder of the international network of social entrepreneurs Ashoka. She has published, among others, in The Guardian, Polityka, Gazeta Wyborcza and Dziennik Gazeta Prawna. Winner of many awards, including the Radio Tok.fm Award in 2013. She hosts the Panoptykon 4.0 podcast.

Inez Okulska is the Head of the Department of Linguistic Engineering and Text Analysis at the NASK National Research Institute. After a colorful humanistic path (including, among others, linguistics, comparative literary studies, cultural studies and philosophy), culminating in a doctorate in translation studies and a postdoctoral fellowship at Harvard University, she completed master’s studies in automation and robotics at the WEiTI faculty of the Warsaw University of Technology. Scientifically interested in the semantic and pragmalinguistic potential of grammar, she explores her own vector representations of text and their algebraic potential. She carries out projects related to cybersecurity, primarily the detection and classification of undesirable content.

Piotr Sankowski is the President of IDEAS NCBR. He is a professor at the Institute of Computer Science at the University of Warsaw, where he obtained his doctorate in computer science in 2005 and his habilitation in 2009. His research interests involve algorithmics, with particular emphasis on algorithmic graph analysis and data analysis algorithms. He also obtained a doctorate in physics, in the field of solid state theory, at the Polish Academy of Sciences in 2009. He is the first Pole to receive four grants from the European Research Council (ERC): the ERC Starting Independent Researcher Grant in 2010, an ERC Proof of Concept Grant in 2015, the ERC Consolidator Grant in 2017, and another ERC Proof of Concept Grant in 2023. He is the co-founder and CSO of the MIM Solutions spin-off.

Tomasz Trzciński is a professor at the Warsaw University of Technology and the Jagiellonian University. He leads the CVLab computer vision team, is a member of the GMUM machine learning team at the Jagiellonian University, and leads the computer vision research group at IDEAS NCBR. He obtained his habilitation at the Warsaw University of Technology in 2020, his doctorate in computer vision at the École Polytechnique Fédérale de Lausanne in 2014, and a double master’s degree from the Universitat Politècnica de Catalunya and Politecnico di Torino. He completed research stays at Stanford University in 2017 and at Nanyang Technological University in 2019. He is an Associate Editor at IEEE Access and MDPI Electronics, a reviewer for the TPAMI, IJCV, CVIU, TIP and TMM journals, and a member of conference organizing committees, including CVPR, ICCV and ICML. He worked at Google in 2013, Qualcomm in 2012 and Telefónica in 2010. He is a Senior Member of the IEEE, a member of the ELLIS Society and of the ALICE Collaboration at CERN, and an expert at the National Science Center and the Foundation for Polish Science. He is a co-owner of Tooploox, where, as Chief Scientist, he leads a machine learning team, and a co-founder of Comixify, a technology startup that uses artificial intelligence for video editing.
