Crop maps help scientists and policymakers track global food supplies and estimate how they might change with climate change and population growth. But getting accurate maps of the types of crops grown from farm to farm often requires field surveys that only a handful of countries have the resources to carry out.
Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of each farm. The team’s method uses a combination of Google Street View imagery, machine learning, and satellite data to automatically determine which crops are grown in a region, from one fraction of an acre to the next. Their work is published on the arXiv preprint server.
The researchers used this technique to automatically generate the first national crop map of Thailand, a smallholder country where small, independent farms are the predominant form of agriculture. The team created a border-to-border map of Thailand’s four main crops (rice, cassava, sugarcane, and corn), determining which of the four types was grown every 10 meters, without interruption, across the entire country. The resulting map achieved an accuracy of 93%, which the researchers say is comparable to field mapping efforts in high-income, large-farm countries.
The team is applying its mapping technique to other countries like India, where small farms support most of the population but where the type of crops grown from farm to farm has historically been poorly documented.
“There is a long-standing gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS). “The end goal is to understand agricultural outcomes such as yield and how to farm more sustainably. One of the key preliminary steps is to map what is being grown: the more granular you can map, the more questions you can answer.”
Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of agricultural technology company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.
Ground truth
Small farms are often run by a single family or farmer, who subsists on the crops and livestock they raise. It is estimated that small farms support two-thirds of the world’s rural population and produce 80% of the world’s food. Keeping tabs on what is grown and where it is grown is essential for tracking and forecasting the world’s food supplies. But the majority of these small farms are in low- and middle-income countries, where few resources are devoted to tracking each farm’s crop types and yields.
Crop mapping efforts are primarily conducted in high-income regions like the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from one field to the next. These “ground truth” labels are then fed into machine learning models that learn connections between the field labels of real crops and satellite signals from the same fields. The models can then label and map larger swaths of agricultural land that assessors don’t cover but satellites do.
“What is missing in low- and middle-income countries are these ground labels that we can associate with satellite signals,” explains Laguarta Soler. “Getting this ground truth to train a model in the first place has been limited in most of the world.”
The team realized that although many developing countries do not have the resources to conduct crop surveys, they could potentially use another source of field data: road imagery, captured by services such as Google Street View and Mapillary, which send cars across an entire region to take continuous 360-degree images with dashcams and rooftop cameras.
In recent years, these services have expanded into low- and middle-income countries. Although the goal of these services is not specifically to capture images of crops, the MIT team found that they could search the roadside images to identify crops.
Cropped image
In their new study, the researchers worked with Google Street View (GSV) images taken across Thailand, a country that the service has recently photographed quite extensively and which consists mainly of small farms.
Starting with more than 200,000 GSV images randomly sampled across Thailand, the team filtered out images depicting buildings, trees, and general vegetation. Around 81,000 images were crop-relevant. They set aside 2,000 of these, which they sent to an agronomist, who identified and labeled each crop type by eye.
They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, drawing on a variety of resources, including iNaturalist, a crowdsourced web-based biodiversity database, and GPT-4V, a large multimodal language model that allows a user to input an image and ask the model to identify what the image depicts. For each image, the model generated a label identifying which of four crops it most likely depicted: rice, corn, sugarcane, or cassava.
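The core idea of this labeling step, propagating a small set of expert labels to a much larger pool of images, can be sketched with a toy nearest-centroid classifier over image feature vectors. This is a minimal stand-in, not the paper’s CNN: the function name, the 2-D “embeddings,” and the cluster centers are all hypothetical, chosen only to make the example self-contained.

```python
import numpy as np

CROPS = ["rice", "corn", "sugarcane", "cassava"]

def nearest_centroid_labels(embeddings, labeled_embeddings, labeled_ids):
    """Assign each unlabeled embedding the crop of its closest class centroid.

    embeddings: (n, d) features for unlabeled roadside images
    labeled_embeddings: (m, d) features for expert-labeled images
    labeled_ids: (m,) integer crop ids for the labeled images
    """
    centroids = np.stack([
        labeled_embeddings[labeled_ids == k].mean(axis=0)
        for k in range(len(CROPS))
    ])
    # Euclidean distance from every unlabeled embedding to every centroid
    dists = np.linalg.norm(embeddings[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

# Toy demo: 2-D "embeddings" clustered near four well-separated centers
rng = np.random.default_rng(0)
centers = np.array([[0, 0], [10, 0], [0, 10], [10, 10]], dtype=float)
labeled_ids = np.repeat(np.arange(4), 5)
labeled = centers[labeled_ids] + rng.normal(scale=0.5, size=(20, 2))
unlabeled = centers[[0, 3]] + rng.normal(scale=0.5, size=(2, 2))

pred = nearest_centroid_labels(unlabeled, labeled, labeled_ids)
print([CROPS[i] for i in pred])  # expected: ['rice', 'cassava']
```

In the actual pipeline, a deep network learns far richer visual features than this sketch implies, but the label-propagation logic (a few thousand expert labels anchoring tens of thousands of automatic ones) is the same.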
The researchers then matched each labeled image with corresponding satellite data taken at the same location throughout a single growing season. This satellite data includes measurements across multiple wavelengths, such as how green a location is and its reflectivity (which can be a sign of water).
“Each crop type has a certain signature across these different bands, which changes throughout the growing season,” notes Laguarta Soler.
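One standard way such band signatures are summarized is the Normalized Difference Vegetation Index (NDVI), computed from the near-infrared and red bands; the article does not say which indices the team used, so treat this, and the reflectance values below, as an illustrative assumption.

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: greener vegetation -> higher NDVI."""
    nir, red = np.asarray(nir, float), np.asarray(red, float)
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids divide-by-zero

# Hypothetical reflectance time series for one field across a growing season
red = [0.10, 0.08, 0.05, 0.04, 0.06, 0.09]   # red band dips as canopy closes
nir = [0.20, 0.30, 0.45, 0.50, 0.40, 0.25]   # near-infrared peaks mid-season

season = ndvi(nir, red)
print(season.round(2))  # rises toward mid-season, then falls at harvest
```

The resulting curve rises as the crop greens up and falls at harvest; the shape and timing of that curve is the kind of per-crop signature the model can learn to distinguish.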
The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data taken over the rest of the country, where crop labels were neither generated nor available. From the associations it had learned, the model assigned crop labels across Thailand, generating a country-wide map of crop types at a resolution of 10 meters.
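The second model’s job, mapping a pixel’s seasonal band time series to a crop label, can be sketched with a one-nearest-neighbour classifier. Again this is a minimal stand-in for the learned satellite model; the seasonal curves and crop assignments below are invented for illustration.

```python
import numpy as np

def predict_crop(series, train_series, train_labels):
    """Label a pixel's band time series with the crop of its nearest
    training curve (a toy stand-in for the learned satellite model)."""
    d = np.linalg.norm(train_series - series, axis=1)
    return train_labels[d.argmin()]

# Hypothetical seasonal NDVI-like curves (6 time steps) for two crop types
train = np.array([
    [0.2, 0.5, 0.8, 0.8, 0.5, 0.2],   # a "rice"-like curve: sharp mid-season peak
    [0.3, 0.4, 0.5, 0.6, 0.6, 0.5],   # a "sugarcane"-like curve: slow, sustained rise
])
labels = np.array(["rice", "sugarcane"])

pixel = np.array([0.25, 0.45, 0.75, 0.8, 0.55, 0.25])  # unlabeled 10 m pixel
print(predict_crop(pixel, train, labels))  # expected: rice
```

Once trained, such a model needs only satellite data, which covers the whole country, so labels can be assigned even where no Street View car ever drove.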
This crop map, the first of its kind, included locations corresponding to the 2,000 GSV images initially set aside by the researchers and labeled by the agronomist. These human-labeled images were used to validate the map’s labels, and when the team checked whether the map’s labels matched the expert labels, the “gold standard,” they found that they agreed in 93% of cases.
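The validation itself is a simple agreement count over the held-out, expert-labeled locations; the numbers below are a toy illustration of how a 93% figure would arise, not the study’s actual data.

```python
def map_accuracy(map_labels, expert_labels):
    """Fraction of held-out locations where the map agrees with the expert."""
    matches = sum(m == e for m, e in zip(map_labels, expert_labels))
    return matches / len(expert_labels)

# Toy check: 93 of 100 hypothetical locations agree with the expert label
expert = ["rice"] * 100
mapped = ["rice"] * 93 + ["corn"] * 7
print(map_accuracy(mapped, expert))  # 0.93
```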
“In the US, we are also looking at over 90% accuracy, whereas in previous work in India we only saw 75% due to the limited number of ground labels,” says Wang. “We can now create these labels in an automated and inexpensive way.”
The researchers are preparing to map crops across India, where roadside images from Google Street View and other services have recently become available.
“There are more than 150 million smallholder farmers in India,” says Wang. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it has been very difficult to create maps of India because there are very few labels on the ground.”
The team is working to create crop maps in India, which could be used to inform policies related to assessing and increasing yields, as global temperatures and populations increase.
“What would be interesting would be to create these maps over time,” says Wang. “Then you can start to see trends, and we can try to relate those things to everything related to climate change and policy.”
More information:
Jordi Laguarta Soler et al., Combining deep learning and Street View imagery to map smallholder crop types, arXiv (2023). DOI: 10.48550/arXiv.2309.05930
Provided by the Massachusetts Institute of Technology
This story is republished courtesy of MIT News (web.mit.edu/newsoffice/), a popular site that covers news about MIT research, innovation and education.