Our world is becoming increasingly complex. Every day an immense amount of data is generated, which, if properly understood, can help us make better sense of our world.
Until recently, we did not have the means to make proper sense of all of this data. But recent developments in artificial intelligence have changed this. Now, we are able to draw meaningful insights from enormous data sets and use these to make better decisions.
From helping us develop cures for cancer to making it easier for us to shop online, data-driven artificial intelligence has a huge range of applications. While data is the fabric of the modern world, artificial intelligence is its engine.
Deep Learning for the Planet
As artificial intelligence develops, there is an increasing focus on developing solutions for the world’s greatest challenges, including how to take better care of our environment. Such initiatives fall under the topic of “AI for Good”.
Driven by the United Nations Sustainable Development Goals (SDGs), researchers at the German Research Center for Artificial Intelligence (DFKI) have developed a solution that makes it possible to monitor the health of our planet and improve the effectiveness of disaster response teams. They use remote sensing: satellite imagery already being collected by programs such as ESA’s Copernicus, NASA’s Landsat, and Planet is analyzed with deep learning algorithms to assess the health of the Earth. Access to these satellite images, analyzed with the help of artificial intelligence, has the potential to revolutionize how we monitor agriculture, urbanization, and our environment, as well as how we act.
Improving Effectiveness in Disaster Zones
Wildfires and floods are among the most catastrophic natural disasters on our planet, causing about $300 billion in economic damage and immeasurable personal loss. When such natural disasters strike, access to real-time information from the ground is crucial for emergency response teams and relief agencies to react efficiently and effectively.
DFKI’s project “AI for SDG” focuses on improving the resources available in cases of emergency. The team of researchers developed several systems, such as “DeepEye”, to analyze satellite imagery with deep neural networks. The goal of these systems is to automatically detect flooded land, quantify the extent of the flooded area, and highlight infrastructure accessibility during the flood. “DeepEye” can also segment building footprints from the satellite data and automatically inform emergency response teams about the damage sustained by buildings in the area. This information is indispensable for answering questions such as: Is the street to the residential area still accessible? How much damage did the building sustain? Can it be entered in the usual way?
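The article does not describe DeepEye’s internals, but the “quantify the extent of the flooded area” step can be illustrated with a minimal sketch: turning a segmentation network’s per-pixel flood probabilities into an affected-area estimate. The function name, the 0.5 threshold, and the 10 m pixel size (typical of Sentinel-2 optical bands) are assumptions, and the probability map below is toy data:

```python
import numpy as np

def flooded_area_km2(flood_prob, pixel_size_m=10.0, threshold=0.5):
    """Estimate flooded area in km^2 from a per-pixel probability map."""
    mask = flood_prob >= threshold            # binary flood mask
    area_m2 = mask.sum() * pixel_size_m ** 2  # pixel count -> square metres
    return area_m2 / 1e6                      # square metres -> km^2

# Toy 4x4 probability map; 5 pixels exceed the threshold.
probs = np.array([
    [0.9, 0.8, 0.1, 0.0],
    [0.7, 0.6, 0.2, 0.1],
    [0.3, 0.9, 0.0, 0.0],
    [0.1, 0.2, 0.1, 0.0],
])
print(flooded_area_km2(probs))  # 5 pixels * 100 m^2 = 0.0005 km^2
```

On real data the same arithmetic, applied over a whole scene, yields the impact figures a response team would see.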
“DeepEye” was developed using satellite data collected from 27,000 locations across Europe and annotated with 10 land use and land cover classes. One year’s worth of data from these locations amounts to 2 million images. Such a large amount of data requires enormous processing power; according to the researchers, NVIDIA’s DGX-1 was the only system that provided the power needed to train their deep learning models. For this project, they trained their models on 100 million images in just 9 days using DGX-1 systems.
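As a quick sanity check on those figures, 100 million images in 9 days implies a sustained training throughput of roughly 129 images per second:

```python
# Back-of-the-envelope throughput implied by the figures above.
images = 100_000_000
days = 9
seconds = days * 24 * 60 * 60  # 777,600 seconds
print(round(images / seconds))  # → 129 images per second, sustained
```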
Improving Accuracy with Artificial Intelligence
When a flood occurs, optical satellites image the Earth’s surface in infrared bands in addition to the standard RGB (red, green, blue) channels. During flooding events, these infrared bands are crucial for calculating changes in the water content of soil, leaves, and vegetation. Non-optical satellites, on the other hand, such as those based on synthetic aperture radar (SAR), are powerful tools for sensing the surface, terrain, and topography, and have proven vitally important for monitoring flooding events.
The fusion of complementary information across different satellites, spectral bands and other sensors can lead to substantial synergy effects and improve the overall accuracy of monitoring during disasters.
To achieve this, however, the researchers had to work out how to obtain high-quality information for a specific area at a given time: optical satellites cannot see through clouds, which are often present during flooding events, and many satellites are not fixed over one location.
To tackle this, they trained conditional generative adversarial networks (GANs) to translate a satellite image from one modality into an image of the missing modality. In experiments on RGB and depth images, they showed that a synthetic depth image can be generated from an RGB image. By feeding this synthetic depth image into a CNN trained on both modalities for the segmentation of building footprints, they achieved higher accuracy than with the RGB modality alone.
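The text does not spell out how the two modalities are combined; one simple possibility is channel-wise concatenation of the RGB image with the GAN-generated depth map before it enters the segmentation CNN. A minimal sketch of that assumed fusion step, with random stand-in data:

```python
import numpy as np

# Assumed fusion scheme: stack the synthetic depth channel onto the RGB
# channels, producing a 4-channel input for a two-modality CNN.
# Arrays are random placeholders for real imagery and a real GAN output.
rgb = np.random.rand(3, 64, 64)              # 3 RGB channels, 64x64 tile
synthetic_depth = np.random.rand(1, 64, 64)  # depth predicted by the GAN

fused = np.concatenate([rgb, synthetic_depth], axis=0)
print(fused.shape)  # (4, 64, 64): 4-channel input for the segmentation CNN
```

Because the depth channel is synthesized from the RGB image itself, this input can be built even when no real depth or SAR acquisition is available for the scene.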
Looking to the Future
DFKI hopes that its research into fusing different formats of satellite imagery and creating synthetic images will support the efforts of emergency response teams in times of disaster. Currently, disasters are monitored manually; by automatically processing data that satellites are already collecting, the new method can help these teams scale and respond more effectively.
But it is not only the teams on the ground who will benefit from this use of artificial intelligence. Policy makers interested in monitoring the health of the planet or the growth of cities, for example, will be able to make better decisions. Insurance companies looking to limit damage to buildings in high-risk areas will be able to make more accurate assessments and implement preventative measures.
Webinar: Earth Observation from Space: Deep Learning Based Satellite Image Analysis
Learn more about DFKI’s Research on Satellite Image Analysis
Find out how deep learning is fueling leading-edge research