Today Google Earth is launching Dynamic World, a project that pairs a new set of land cover maps with a new AI model built on deep learning. The model can classify land cover by type (water, urban, forest, crops) at a resolution of 10 meters, or about 32 feet, meaning that each pixel covers roughly 10 meters of land. For comparison, the previous state of the art had a resolution of 100 meters (about 320 feet).
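To put that jump in perspective, a quick back-of-the-envelope calculation (plain Python, not part of Dynamic World itself) shows how much more detail a 10-meter pixel captures than a 100-meter one:

```python
def pixels_per_old_pixel(old_res_m: float, new_res_m: float) -> float:
    """How many new, finer pixels fit inside one old, coarser pixel.

    Resolution is the side length of the square patch of ground
    that one pixel represents.
    """
    return (old_res_m / new_res_m) ** 2

# A 10 m pixel covers 10 x 10 = 100 square meters of ground.
area_new = 10 * 10        # 100 m^2
# A 100 m pixel covers 100 x 100 = 10,000 square meters.
area_old = 100 * 100      # 10,000 m^2

# One hundred 10 m pixels fit inside a single 100 m pixel,
# so the new maps carry 100x more spatial detail.
print(pixels_per_old_pixel(100, 10))  # 100.0
```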
Dynamic World is a way for people to observe from space the myriad ways in which land cover changes on Earth, be it natural seasonal changes, storms and disasters aggravated by climate change, or long-term changes caused by human activities, such as clearing wild habitats for crops, livestock, or logging. Experts and researchers can use the project to understand how land cover changes naturally and to flag when unexpected changes appear to be taking place.
Users can visit Google’s Dynamic World website to peruse the various datasets and see what the labeled maps look like. For example, one map shows how water and vegetation bloom and recede in Botswana’s Okavango Delta from the rainy season to the dry season.
The map model, which draws satellite images from the European Space Agency’s Sentinel-2 mission, can update its global land cover monitoring every 2–5 days. About 12 terabytes of data come down from the Sentinel-2 satellites every day. From there, the imagery flows into Google’s data centers and Google Earth Engine, a cloud platform created to organize and serve Earth observations and environmental analyses. Earth Engine is connected to tens of thousands of computers that process the raw imagery with computer models before the results become available in the Earth Engine Data Catalog.
In order to automatically label how the land represented in all those satellite images is used, Google needed the help of artificial intelligence. The land cover labeling AI it developed as part of this project was trained on 5 billion pixels labeled by human experts (and some non-experts). In the training data, the annotators identified which land cover class each Sentinel-2 pixel belonged to (water, trees, grass, flooded vegetation, built-up areas such as cities, crops, bare ground, shrubs, snow). The team then presented the model with images that were not in the training set and asked it to classify the land cover types. On the maps, the different types of terrain are distinguished not only by color but also by shading, because each pixel also conveys a probability: the brighter the color, the more confident the model is in its classification. This creates a gradient effect where the landscape transitions from grassland to forest or from land to water.
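The rendering idea described above (hue encodes the most likely class, brightness encodes the model’s confidence) can be sketched in a few lines of NumPy. This is an illustrative sketch, not Google’s actual code, and the RGB palette below is a hypothetical placeholder, not the official Dynamic World palette:

```python
import numpy as np

# The nine land cover classes named in the article.
CLASSES = ["water", "trees", "grass", "flooded_vegetation",
           "built", "crops", "bare", "shrubs", "snow"]

# Hypothetical RGB palette, one color per class (for illustration only).
PALETTE = np.array([
    [65, 155, 223],   # water
    [57, 125, 73],    # trees
    [136, 176, 83],   # grass
    [122, 135, 198],  # flooded_vegetation
    [196, 40, 27],    # built
    [228, 150, 53],   # crops
    [165, 155, 143],  # bare
    [227, 226, 195],  # shrubs
    [179, 159, 225],  # snow
], dtype=float)

def render(probs: np.ndarray) -> np.ndarray:
    """probs: (H, W, 9) per-pixel class probabilities summing to 1.

    Returns an (H, W, 3) image where the hue is the most likely
    class's color and the brightness is scaled by the model's
    confidence in that class, so uncertain pixels appear dimmer.
    """
    top_class = probs.argmax(axis=-1)      # (H, W) winning class index
    confidence = probs.max(axis=-1)        # (H, W) winning probability
    rgb = PALETTE[top_class]               # (H, W, 3) base class color
    return rgb * confidence[..., None]     # dim uncertain pixels

# Two pixels: one confidently water, one an ambiguous water/trees mix.
probs = np.zeros((1, 2, 9))
probs[0, 0, 0] = 1.0                           # 100% water: full brightness
probs[0, 1, 0], probs[0, 1, 1] = 0.55, 0.45    # ambiguous: dimmed color
image = render(probs)
```

Blending brightness with probability is what produces the soft transitions the article describes at boundaries, such as where grassland fades into forest.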
A detailed description of the dataset was published in the Nature Portfolio journal Scientific Data.
“We are making everything available under a free and open license,” Rebecca Moore, director of Google Earth, said at a press conference ahead of the announcement. “The datasets are free and open. The artificial intelligence model is open source.”
About 10 years ago, Google and the World Resources Institute partnered on Global Forest Watch, a project that monitors forest cover and watches for changes caused by illegal activities such as logging or mining. Now, they are looking to expand their efforts beyond protecting and observing just one type of land cover.
The idea is to help make sense of the data already out there. “We have heard from several governments [and] researchers who are committed to action, but lack environmental monitoring information about what is happening on the ground so they can create science-based, data-informed policies, track the results of their actions, [and] communicate with stakeholders,” Moore said. “The irony is, it’s not that there isn’t a lot of data. But they are starving for insights. They are looking for actionable guidance to support the decisions they need to make, and managing the raw data is in many cases overwhelming.”
Google believes Dynamic World’s role is to bridge the gap in land use and land cover data by describing where key ecosystems such as forests, water resources, agriculture, and urban development are located. This type of information, Moore said, can be useful in guiding decisions about the sustainable management of scarce natural resources, food, and water. It can also help answer questions about how to build disaster resilience, how to cope with rising sea levels, where to create protected areas, where to build dams, and what trade-offs might be needed, to name a few.