How Artificial Intelligence is changing the game in optronic sensors

Staying ahead of the game

Digital sensors such as day and night cameras are vital for today’s security and military forces, enabling situational awareness and threat detection at greater ranges, even in low-visibility conditions such as bad weather or at night. In this article, we look at how HENSOLDT’s experts are advancing these sensor technologies and leveraging artificial intelligence and deep-learning algorithms to stay ahead of the game.

April 06, 2021
info@hensoldt.net

CATEGORIES:
Innovation, Optronics, Artificial Intelligence

Background

Over recent decades, advanced digital sensors – for example, electro-optical/infrared (EO/IR) cameras – have dramatically changed how security and military personnel operate. These technologies enable a wide range of missions at longer ranges and in poor visibility, particularly at night, when vision and situational awareness are often severely degraded.

Western forces have famously called this “owning the night”: sensors and soldiers can detect threats more quickly and effectively, and thus act before the enemy strikes.

This has a positive impact when it comes to protecting those on the frontline.

Recognising these operational benefits, however, adversaries are now also adopting advanced day and night sensors, which could change how Western forces operate on the future battlefield.

In this new and potentially more dangerous environment, how do friendly forces maintain their advantage when it comes to sensor technology?

Sensors will understand, not only see

In the future, HENSOLDT sensors will be enhanced further with the introduction of more capable embedded systems incorporating artificial intelligence (AI). Sensor systems traditionally rely on a computer processor close to the sensor, or on servers in a headquarters, for data processing and exploitation; the idea for next-generation systems is that the sensor itself will do much of this processing work.

This will build on existing “smart” features in HENSOLDT’s sensor systems, including automatic object detection, target tracking and moving target indication.

For cameras, this is known as embedded computer vision, which brings “intelligence directly into the camera” and reduces issues such as latency, explains Dr Michael Teutsch, Staff Scientist at HENSOLDT Optronics. “It is better when you do the data processing closer to the camera, because greater distances can increase the risk of signal loss or raise the demand for data compression. In both cases, the quality of data is affected.”

The company’s engineers are now exploring how computer vision – by utilising deep learning algorithms – can be used at the sensor level to improve areas such as threat detection and classification. Deep learning algorithms embedded in the sensor itself will not only allow devices to see the environment, but also understand it.
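To make the idea concrete, here is a minimal sketch of what an on-sensor detection-and-classification stage could look like, using an off-the-shelf pretrained detector from torchvision. The model choice, input file name and confidence threshold are illustrative assumptions for this article, not details of HENSOLDT’s embedded implementation.

```python
# Illustrative only: an off-the-shelf detector standing in for an
# embedded, sensor-level model. Not HENSOLDT's actual pipeline.
import torch
from torchvision.io import read_image
from torchvision.models.detection import (
    fasterrcnn_mobilenet_v3_large_fpn,
    FasterRCNN_MobileNet_V3_Large_FPN_Weights,
)

weights = FasterRCNN_MobileNet_V3_Large_FPN_Weights.DEFAULT
model = fasterrcnn_mobilenet_v3_large_fpn(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("frame.jpg")  # hypothetical camera frame
with torch.no_grad():
    detections = model([preprocess(frame)])[0]

# Keep only confident detections; an embedded system would classify
# and track these locally rather than stream raw video to a server.
for box, label, score in zip(
    detections["boxes"], detections["labels"], detections["scores"]
):
    if score > 0.5:
        print(weights.meta["categories"][int(label)],
              box.tolist(), float(score))
```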

Learning from the consumer industry

This new era of computer vision is being enabled by the miniaturisation of electronics and a new generation of technologies, including Field Programmable Gate Arrays (FPGAs) and the latest Digital Signal Processors (DSPs). Several industries – including medical, energy, automotive and commerce – are already using intelligent embedded processing on the device itself (rather than sending the data to a computer or server) to speed up processes and improve areas such as fault detection.

“The consumer industry nowadays is so powerful and puts so much effort into such technology that we in the military industry can benefit here,” explains Teutsch.

In particular, HENSOLDT is leveraging the huge amount of innovation taking place in the automotive sector around the development of self-driving cars. To safely navigate complex environments, self-driving cars require sensors that provide a deep understanding of their surroundings. This sensing must also be low-latency to support split-second decision-making.

One popular technique for self-driving cars is semantic segmentation, which links each pixel of an image with a class label (such as person, vehicle or building). This has improved significantly with the development of deep-learning algorithms.

“This tells us for each pixel in the image, what ‘class’ it may be. It could be vegetation, trees, vehicles, humans or buildings, for example, and this already gives you the chance to distinguish between relevant and irrelevant scene content, object presence, or object motion,” says Teutsch.
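As a rough illustration of what that per-pixel labelling involves, the sketch below runs a publicly available semantic-segmentation network over a single frame. The pretrained model, its class set and the file name are stand-ins for illustration; they are not the networks embedded in HENSOLDT’s sensors.

```python
# A minimal semantic-segmentation sketch: every pixel gets a class
# label. The pretrained model is a stand-in, not HENSOLDT's own.
import torch
from torchvision.io import read_image
from torchvision.models.segmentation import (
    deeplabv3_mobilenet_v3_large,
    DeepLabV3_MobileNet_V3_Large_Weights,
)

weights = DeepLabV3_MobileNet_V3_Large_Weights.DEFAULT
model = deeplabv3_mobilenet_v3_large(weights=weights).eval()
preprocess = weights.transforms()

frame = read_image("scene.jpg")  # hypothetical frame
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))["out"]

# (H, W) map of class indices: one label per pixel -- the basis for
# separating relevant from irrelevant scene content.
label_map = logits.argmax(dim=1).squeeze(0)
classes = weights.meta["categories"]
print({classes[i] for i in label_map.unique().tolist()})
```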


“AI will help us to derive information for decision-making. Imagine you are sitting in a main battle tank and there is a system that surveys the environment: it will detect a lot of motion and a lot of objects in the world, and now there is the chance to filter this information so that the operator receives an alarm only in the presence of a real threat.”

Dr Michael Teutsch
Staff Scientist
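
The filtering Teutsch describes can be pictured as a simple rule layer sitting on top of the raw detections. The sketch below is hypothetical; the class names, threshold and Detection structure are invented for this example.

```python
# Hypothetical alarm filter: reduce many raw detections to the few
# the crew actually needs to see. All names and thresholds here are
# invented for illustration.
from dataclasses import dataclass

THREAT_CLASSES = {"person_armed", "vehicle_military"}  # assumed labels
MIN_CONFIDENCE = 0.7

@dataclass
class Detection:
    label: str        # class assigned by the on-sensor model
    confidence: float
    moving: bool      # e.g. from a moving-target-indication stage

def alarms(detections: list[Detection]) -> list[Detection]:
    """Pass through only detections worth the operator's attention."""
    return [
        d for d in detections
        if d.label in THREAT_CLASSES and d.confidence >= MIN_CONFIDENCE
    ]

scene = [
    Detection("vegetation", 0.98, moving=True),    # wind-blown trees
    Detection("person_armed", 0.85, moving=True),  # a real threat
    Detection("vehicle_civilian", 0.91, moving=True),
]
print(alarms(scene))  # only the armed person raises an alarm
```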

This will improve further as cameras integrate higher-megapixel sensors and resolution skyrockets, which ultimately translates into more accurate object detection, recognition and identification, as well as increased coverage of the environment. Higher-resolution systems bring their own challenges, however, including far greater volumes of data that must be processed in real time to support prompt decision-making.
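
One general technique for keeping high-resolution imagery tractable in real time – a common approach in computer vision, not necessarily the one HENSOLDT uses – is to split each frame into overlapping tiles that an embedded accelerator can process in parallel:

```python
# Generic tiling of a high-resolution frame so each tile fits an
# accelerator's input size. Not a description of HENSOLDT's chain.
import numpy as np

def tile_frame(frame: np.ndarray, tile: int = 640, overlap: int = 64):
    """Yield (y, x, window) tiles covering the frame, overlapping so
    objects on tile borders still appear whole in at least one tile.
    Edge tiles may be smaller; a real system would pad them."""
    h, w = frame.shape[:2]
    step = tile - overlap
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            yield y, x, frame[y:y + tile, x:x + tile]

# A hypothetical 4K infrared frame becomes a batch of small tiles,
# each small enough for real-time inference.
frame = np.zeros((2160, 3840), dtype=np.uint16)
print(sum(1 for _ in tile_frame(frame)))  # number of tiles
```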

But as those challenges are overcome, the resulting sensor systems will outperform not only the human eye, but also the human brain’s comprehension of the surrounding environment.

Smart analysis & human decision-making

HENSOLDT sensors will be able to provide wide-area surveillance and rapidly identify or even predict anomalies in the environment, including in how objects behave, even in crowded and congested settings such as urban areas. Sensors will be able to distinguish between a person going about their daily routine and someone acting suspiciously, and between a person holding a weapon and someone holding a bag.

These types of sensors could eventually find their way onto Europe’s developmental Main Ground Combat System (MGCS) or Future Combat Air System (FCAS) platforms, giving them a significant advantage over potential adversaries.

Human beings are still a key part of the decision-making process and will remain so for many years to come, particularly in cases where weapon systems need to be deployed. “We do not want to get rid of the human altogether, because they are important in making decisions,” says Teutsch. “What we want to do is to ease the process of decision making as much as possible, so not only with algorithms that derive information from the environment, but also with human-machine interfaces that display only the relevant information.”