A project of the Framework Partnership Agreement on Copernicus User Uptake


Open Atlas is an open-source algorithm registry for earth observation. Our platform is designed for researchers interested in developing and sharing algorithms related to the water and food nexus, with a focus on earth observation data from sources such as the Copernicus satellite missions. By publishing algorithms on Open Atlas, researchers can accelerate the development of new tools and applications in this area. Join our community and contribute to our growing library of algorithms.


How Open Atlas works: find an algorithm, contact the author, create a partnership. Endless opportunities. Explore Open Atlas.


We collected 94,986 high-quality aerial images from 3,432 farmlands across the US; each image consists of RGB and near-infrared (NIR) channels with a resolution as high as 10 cm per pixel. We annotated nine types of field anomaly patterns that matter most to farmers. As a pilot study of aerial agricultural semantic segmentation, we performed comprehensive experiments using popular semantic segmentation models, and we also propose an effective model designed for aerial agricultural pattern recognition.

The purpose of this study was to evaluate the feasibility and applicability of object-oriented crop classification using Sentinel-1 images in Google Earth Engine (GEE). The study covered two consecutive years (2018 and 2019), which were used to verify the robustness of the method. Sentinel-1 images from the crop growth period (May to September) in each study area were composited at three time intervals (10 d, 15 d and 30 d).
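The compositing step described above can be sketched in a few lines of NumPy. The array shapes, the day-of-season indexing, and the mean aggregation below are illustrative assumptions, not the study's exact GEE implementation:

```python
import numpy as np

def temporal_composite(stack, days, interval):
    """Mean-composite a Sentinel-1 time series into fixed-length intervals.

    stack    : (T, H, W) array of backscatter scenes
    days     : (T,) acquisition day within the season (e.g. days since May 1)
    interval : compositing interval in days (10, 15 or 30)
    """
    days = np.asarray(days)
    bins = days // interval                   # which interval each scene falls into
    composites = [stack[bins == b].mean(axis=0) for b in np.unique(bins)]
    return np.stack(composites)               # (n_intervals, H, W)
```

With a 30 d interval over May to September, roughly five composites per season are produced, which smooths speckle and reduces the data volume fed to the classifier.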

This work assesses the potential of Sentinel-2A images in precision agriculture for barley production in a case study. Two workflows are proposed: 1) images acquired throughout the season with a relatively simple methodology to follow crop development; 2) two images around harvest time downloaded and processed with a more complex and accurate methodology to calculate four vegetation indices (NDVI, WDRVI, GRVI and GNDVI), which are correlated to yield with linear regression models.
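The four indices and the yield regression can be sketched as follows. The band assignments (Sentinel-2 B3 = green, B4 = red, B8 = NIR), the WDRVI weighting coefficient, and the simple least-squares fit are illustrative assumptions, since the paper's exact processing chain is not given here:

```python
import numpy as np

def vegetation_indices(green, red, nir, a=0.1):
    """Four indices from Sentinel-2 reflectance bands (B3, B4, B8).

    `a` is the WDRVI weighting coefficient, commonly chosen in [0.1, 0.2].
    """
    ndvi  = (nir - red) / (nir + red)
    wdrvi = (a * nir - red) / (a * nir + red)
    grvi  = (green - red) / (green + red)
    gndvi = (nir - green) / (nir + green)
    return ndvi, wdrvi, grvi, gndvi

def yield_regression(index_means, yields):
    """Fit yield = slope * index + intercept; return (slope, intercept, R^2)."""
    x = np.asarray(index_means, dtype=float)
    y = np.asarray(yields, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    r2 = 1 - ((y - pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return slope, intercept, r2
```

Per-field index means near harvest would be regressed against measured yields to pick the index with the best fit.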

We utilise time-series Sentinel-1 data over canola and wheat fields at a Canadian test site to show the sensitivity of θxP to the development of crop morphology at different phenological stages. During the initial growth stages, θxP values are low due to the low vegetation density, while at advanced phenological stages we observe decreased values of θxP due to the appearance of a complex canopy structure.

The Tracking Radar Vegetation Index (TRVI) is a script used in agriculture development that combines data from radar images and vegetation indices to monitor and assess vegetation growth and health over time.

The TRVI script works by analyzing radar data from satellites, which can penetrate through clouds and vegetation to measure the roughness and moisture content of the Earth’s surface. The script then uses this data to calculate the vegetation index, which is a measurement of the amount and health of vegetation in a given area.

By combining the radar data with the vegetation index, the TRVI script is able to track vegetation growth and health over time. This information can be used by farmers and other agricultural stakeholders to monitor crop yields, identify areas of concern, and make decisions about irrigation, fertilization, and other management practices.

The TRVI script can be applied to a variety of crops and vegetation types, including both annual and perennial crops, forests, and grasslands. It has the advantage of being able to provide information even in cloudy or rainy conditions, when optical sensors cannot provide reliable data.

Overall, the TRVI script is a powerful tool for monitoring and managing agricultural landscapes, providing farmers and other stakeholders with valuable information to make informed decisions about their land management practices.
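The exact TRVI formula is not given here, so as a hedged illustration the sketch below uses the widely cited dual-polarisation radar vegetation index, RVI = 4·σ⁰VH / (σ⁰VV + σ⁰VH), to show how a radar-based index can be computed per acquisition and tracked over time; the actual TRVI formulation may differ:

```python
import numpy as np

def dual_pol_rvi(vv, vh):
    """Dual-pol Radar Vegetation Index from Sentinel-1 linear-power backscatter.

    Note: this is the common dual-pol RVI (4*VH / (VV + VH)), used only to
    illustrate the idea of a radar vegetation index; it is not necessarily
    the TRVI definition.
    """
    vv = np.asarray(vv, dtype=float)
    vh = np.asarray(vh, dtype=float)
    return 4.0 * vh / (vv + vh)

def rvi_time_series(vv_stack, vh_stack):
    """Mean RVI per acquisition date over an area of interest."""
    return np.array([dual_pol_rvi(vv, vh).mean()
                     for vv, vh in zip(vv_stack, vh_stack)])
```

Plotting the resulting series against acquisition dates gives the kind of growth-and-senescence curve that the text describes for crop monitoring.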

Open source thresholding and segmentation algorithms were applied to extract aquaculture ponds from Sentinel-1 time series data based on object and shape metrics.

Nearly 10,000 km² of free high-resolution and matched low-resolution satellite imagery of unique locations, ensuring stratified representation of all types of land use across the world: from agriculture to ice caps, from forests to multiple urbanization densities. Each high-resolution image (1.5 m/pixel) comes with multiple temporally matched low-resolution images from the freely accessible lower-resolution Sentinel-2 satellites (10 m/pixel).

Major challenges for satellite image analysis in the context of Rwanda include heavily clouded scenes and small plot sizes that are often intercropped. Sentinel-2 scenes corresponding to mid-season were analyzed, and spectral signatures of maize could be distinguished from those of other crops. Seasonal mean filtering was applied to Sentinel-1 scenes, but there was significant overlap in the spectral signatures across different types of vegetation. Random Forest models for classifying Sentinel scenes were developed using a training dataset constructed from high-resolution multispectral images acquired by unmanned aerial vehicles (UAVs) at several locations in Rwanda and labeled by crop type by trained observers.
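A Random Forest workflow of this kind can be sketched with scikit-learn. All band values and labels below are synthetic placeholders standing in for pixels and UAV-derived reference labels; the feature count, class balance, and split are assumptions for illustration only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data: rows are pixels, columns are band reflectances
# (e.g. four Sentinel-2 bands from a mid-season scene); labels would come
# from UAV-derived reference polygons in a real workflow.
rng = np.random.default_rng(0)
n = 300
maize = rng.normal([0.08, 0.10, 0.35, 0.30], 0.02, size=(n, 4))
other = rng.normal([0.10, 0.12, 0.25, 0.20], 0.02, size=(n, 4))
X = np.vstack([maize, other])
y = np.array([0] * n + [1] * n)          # 0 = maize, 1 = other vegetation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

A held-out accuracy estimate of this kind is how the model's ability to separate maize from other vegetation would typically be reported.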

The SARSense campaign was conducted to investigate the potential for estimating soil and plant parameters at the agricultural test site in Selhausen (Germany). It included C- and L-band air- and space-borne observations accompanied by extensive in situ soil and plant sampling as well as unmanned aerial system (UAS) based multispectral and thermal infrared measurements.

This study explored the utility of Sentinel-2 data for mapping crop types and tested the performance of two machine learning algorithms, Random Forest and Support Vector Machine, in classifying crop types in a heterogeneous agricultural landscape in the Free State province, South Africa. Nine crop types were successfully classified.