
Our team develops sustainable computer vision methods for autonomous machines, taking constrained resources and sensor diversity into account. We investigate problems such as adapting machine learning models to new types of data, for instance from event-based cameras, and we introduce new architectures suited to limited computational resources (GPU, CPU, and RAM). Our emphasis is on machine learning theory rather than hardware, and we treat environmental impact as a crucial aspect of machine learning development. Our priority therefore lies in methods that not only improve model performance and effectiveness but also reduce the carbon footprint. This approach potentially expands the applications of these machines while minimizing their negative environmental impact.

Animal protection, fire detection, and support for safety services

Our solutions could potentially be used in drones as a tool supporting the protection of national parks, including protecting animals against poaching. They allow fast and efficient monitoring of large land areas in remote locations thanks to panoramic vision and specialized data from, for example, laser or thermal scanners. As a result, it is possible to monitor the movements of animals or to detect forest fires early. Drones capable of such missions already exist, but their operation requires specially trained personnel, and there are further limitations related to equipment. To operate autonomously, with minimal human support, a drone must be able to detect and identify animals, or notice forest fires, without outside help, and then take appropriate action. This application fits perfectly into the goals of our group, among which is the introduction of innovative methods of active visual exploration.

Robots that can see like people

One of the issues we are interested in is active visual exploration. Machines do not observe their surroundings the way humans do: the limited capabilities of their sensors, or the size of their batteries, prevent them from analyzing the entire environment at once. The challenge for an autonomous vehicle is therefore to analyze the environment and decide what to focus attention on and what steps to take next. In other words, active exploration addresses the problem of limited sensor capabilities in real-world scenarios, where subsequent observations are actively selected based on the environment. For example, robot sensors have a limited field of view, the environment is constantly changing, and computational costs are high, all of which complicates obtaining complete information about the environment. To reason about the entire environment, the agent must therefore sample new observations as efficiently as possible. We approach this problem by introducing new techniques based on transformers that generalize well.
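The selection loop behind active exploration can be sketched in a few lines. The following is a minimal illustrative example, not the group's transformer-based method: it assumes a hypothetical grid-world scene, and the policy is a simple uncertainty heuristic (always glimpse at the least-observed cell), standing in for a learned model that would predict where the next observation is most informative.

```python
import numpy as np

def next_glimpse(uncertainty: np.ndarray) -> tuple[int, int]:
    """Return grid coordinates of the most uncertain cell (row, col)."""
    idx = int(np.argmax(uncertainty))
    return divmod(idx, uncertainty.shape[1])

def explore(grid_size: int = 4, steps: int = 5, seed: int = 0) -> dict:
    """Actively sample `steps` glimpses from a hidden toy scene."""
    rng = np.random.default_rng(seed)
    env = rng.random((grid_size, grid_size))       # hidden scene values
    uncertainty = np.ones((grid_size, grid_size))  # 1.0 = never observed
    observed = {}
    for _ in range(steps):
        r, c = next_glimpse(uncertainty)  # actively select the next view
        observed[(r, c)] = env[r, c]      # take the glimpse (limited FOV)
        uncertainty[r, c] = 0.0           # that cell is now known
    return observed

glimpses = explore()
```

In a real system, the uncertainty map would be produced by the model itself (for instance, from the reconstruction error of a transformer over the glimpses seen so far), so the agent focuses its limited sensing budget where it expects to learn the most.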

Fast adaptation is the key to success

An important element of our research is the introduction of models capable of generalization, so that a model trained for one environment (such as a prairie) can be quickly adapted to a new one (such as a jungle). To this end, we are introducing novel approaches to self-supervised learning, which trains models without the use of labels and in turn benefits applications in a variety of fields. As a result, we effectively recycle models, significantly reducing the amount of energy required for training.

Research Team Leader

Bartosz Zieliński

Bartosz Zieliński is a research team leader at IDEAS NCBR and an associate professor at Jagiellonian University. He obtained his master’s degree at UJ in 2007, his Ph.D. at IPPT PAN in 2012, and his habilitation at Wrocław University of Science and Technology in 2023, all in Computer Science. He is a member of ELLIS Society and the author of numerous publications at top machine learning conferences. His research interests revolve around computer vision, deep neural networks, as well as interpretable and sustainable artificial intelligence.

– Narodowe Centrum Nauki: OPUS grant, 2023–2026, project: Interpretable methods of sustainable artificial intelligence explaining decisions in an intuitive way
– Narodowe Centrum Nauki: SONATA grant, 2016–2020, project: Keypoint detectors and descriptors based on topological information
– Małopolskie Centrum Przedsiębiorczości: DOCTUS grant, 2008–2012, project: Computer-aided detection of rheumatic changes
