Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS)

Publication: Conference contribution › Poster › Research

Standard

Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS). / Egilsson, Jon; Pedersen, Kim Steenstrup; Olsen, Søren Ingvor; Nielsen, Jon; Ntakos, George; Rasmussen, Jesper.

2015.


Harvard

Egilsson, J, Pedersen, KS, Olsen, SI, Nielsen, J, Ntakos, G & Rasmussen, J 2015, 'Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS)'.

APA

Egilsson, J., Pedersen, K. S., Olsen, S. I., Nielsen, J., Ntakos, G., & Rasmussen, J. (2015). Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS).

Vancouver

Egilsson J, Pedersen KS, Olsen SI, Nielsen J, Ntakos G, Rasmussen J. Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS). 2015.

Author

Egilsson, Jon ; Pedersen, Kim Steenstrup ; Olsen, Søren Ingvor ; Nielsen, Jon ; Ntakos, George ; Rasmussen, Jesper. / Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS). 1 p.

Bibtex

@conference{1b975659effc46119a44877e17234c34,
title = "Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS)",
abstract = "Unmanned aerial systems (UAS) are able to deliver images of agricultural fields of high spatial and temporal resolution. It is, however, not trivial to extract quantitative information about weed infestations from images. This study contributes to weed research by using state-of-the-art computer vision techniques to assess pre-harvest weed infestations in cereals based on true color (RGB) images from consumer graded cameras mounted on UAS. The objective is to develop a fully automatic algorithm in an open programming language, Python, to discriminate and quantify weed infestations in cereals before harvest. Results are compared with an in-house image analysis procedure developed in the commercial eCognition Developer software. The importance of flight altitude and robustness across fields are emphasised. Image acquisition took place during the summer of 2013 and 2014 in a number of fields under different weather and lighting conditions in spring and winter cereals (barley, wheat and oats). Images were acquired in different altitudes in the range of 10 to 50 m to give different image resolutions. There were perennial weeds in all fields with Cirsium arvense as the most frequent species.In order to provide ground truth prior to the modeling phase in Python, a subset of 600 images was annotated by experts with 16000 regions of weeds or crop. Following this, images were segmented into regions with weeds or crop by subdividing each image into 64 by 64 pixel patches and classifying each patch as either crop or weed. A collection of geo-referenced segmented images may subsequently be used to map weed occurrences in fields. To find a robust and fully automated assessment method both texture and color information was used to build a number of different competing weed-crop classifiers, including several variants of the excess green (2G-R-B) vegetation index, and normalizations. The performance of these was measured in terms of classification accuracy. Models were trained offline on the annotated ground truth data (not used for testing). In particular for the texture-based methods, this training is necessary to learn the statistical properties of filter responses from weed and crop patches. Results emphasise the importance of a broad training context. If models were trained and tested on images representing narrow ranges of color and illumination variations, it was possible to achieve more than 95% accuracy, which approaches the potential maximum and fully satisfies practical mapping requirements. However, if models were evaluated on images from fields not included in training data, results were varying and unreliable in some fields. In general, the automated image analysis procedure based on color was not competitive with results achieved with eCognition, which provided accuracies in the range of 86% to 92%. Flight altitude and image resolution (3 to 15 mm/pixel) were not important for the accuracy and ortho-mosaicking had no clear impact. Models including texture-based methods were not fully evaluated because they required hours of computer time per image, and it is doubtful whether their performance can justify the computational expenses. Results are discussed in a practical context and the consequences of varying accuracies are evaluated in different scenarios. ",
keywords = "Faculty of Science, weed detection, drone image analysis, weed assessment, image processing",
author = "Jon Egilsson and Pedersen, {Kim Steenstrup} and Olsen, {S{\o}ren Ingvor} and Jon Nielsen and George Ntakos and Jesper Rasmussen",
year = "2015",
language = "English",

}

RIS

TY - CONF

T1 - Pre-harvest assessment of perennial weeds in cereals based on images from unmanned aerial systems (UAS)

AU - Egilsson, Jon

AU - Pedersen, Kim Steenstrup

AU - Olsen, Søren Ingvor

AU - Nielsen, Jon

AU - Ntakos, George

AU - Rasmussen, Jesper

PY - 2015

Y1 - 2015

N2 - Unmanned aerial systems (UAS) are able to deliver images of agricultural fields of high spatial and temporal resolution. It is, however, not trivial to extract quantitative information about weed infestations from images. This study contributes to weed research by using state-of-the-art computer vision techniques to assess pre-harvest weed infestations in cereals based on true color (RGB) images from consumer-grade cameras mounted on UAS. The objective is to develop a fully automatic algorithm in an open programming language, Python, to discriminate and quantify weed infestations in cereals before harvest. Results are compared with an in-house image analysis procedure developed in the commercial eCognition Developer software. The importance of flight altitude and robustness across fields is emphasised. Image acquisition took place during the summers of 2013 and 2014 in a number of fields under different weather and lighting conditions in spring and winter cereals (barley, wheat and oats). Images were acquired at altitudes in the range of 10 to 50 m to give different image resolutions. There were perennial weeds in all fields, with Cirsium arvense as the most frequent species. In order to provide ground truth prior to the modeling phase in Python, a subset of 600 images was annotated by experts with 16,000 regions of weeds or crop. Following this, images were segmented into regions with weeds or crop by subdividing each image into 64 by 64 pixel patches and classifying each patch as either crop or weed. A collection of geo-referenced segmented images may subsequently be used to map weed occurrences in fields. To find a robust and fully automated assessment method, both texture and color information was used to build a number of competing weed-crop classifiers, including several variants of the excess green (2G-R-B) vegetation index and normalizations. The performance of these was measured in terms of classification accuracy. Models were trained offline on the annotated ground truth data (not used for testing). In particular for the texture-based methods, this training is necessary to learn the statistical properties of filter responses from weed and crop patches. Results emphasise the importance of a broad training context. If models were trained and tested on images representing narrow ranges of color and illumination variations, it was possible to achieve more than 95% accuracy, which approaches the potential maximum and fully satisfies practical mapping requirements. However, if models were evaluated on images from fields not included in the training data, results varied and were unreliable in some fields. In general, the automated image analysis procedure based on color was not competitive with results achieved with eCognition, which provided accuracies in the range of 86% to 92%. Flight altitude and image resolution (3 to 15 mm/pixel) were not important for accuracy, and ortho-mosaicking had no clear impact. Models including texture-based methods were not fully evaluated because they required hours of computer time per image, and it is doubtful whether their performance can justify the computational expense. Results are discussed in a practical context, and the consequences of varying accuracies are evaluated in different scenarios.

AB - Unmanned aerial systems (UAS) are able to deliver images of agricultural fields of high spatial and temporal resolution. It is, however, not trivial to extract quantitative information about weed infestations from images. This study contributes to weed research by using state-of-the-art computer vision techniques to assess pre-harvest weed infestations in cereals based on true color (RGB) images from consumer-grade cameras mounted on UAS. The objective is to develop a fully automatic algorithm in an open programming language, Python, to discriminate and quantify weed infestations in cereals before harvest. Results are compared with an in-house image analysis procedure developed in the commercial eCognition Developer software. The importance of flight altitude and robustness across fields is emphasised. Image acquisition took place during the summers of 2013 and 2014 in a number of fields under different weather and lighting conditions in spring and winter cereals (barley, wheat and oats). Images were acquired at altitudes in the range of 10 to 50 m to give different image resolutions. There were perennial weeds in all fields, with Cirsium arvense as the most frequent species. In order to provide ground truth prior to the modeling phase in Python, a subset of 600 images was annotated by experts with 16,000 regions of weeds or crop. Following this, images were segmented into regions with weeds or crop by subdividing each image into 64 by 64 pixel patches and classifying each patch as either crop or weed. A collection of geo-referenced segmented images may subsequently be used to map weed occurrences in fields. To find a robust and fully automated assessment method, both texture and color information was used to build a number of competing weed-crop classifiers, including several variants of the excess green (2G-R-B) vegetation index and normalizations. The performance of these was measured in terms of classification accuracy. Models were trained offline on the annotated ground truth data (not used for testing). In particular for the texture-based methods, this training is necessary to learn the statistical properties of filter responses from weed and crop patches. Results emphasise the importance of a broad training context. If models were trained and tested on images representing narrow ranges of color and illumination variations, it was possible to achieve more than 95% accuracy, which approaches the potential maximum and fully satisfies practical mapping requirements. However, if models were evaluated on images from fields not included in the training data, results varied and were unreliable in some fields. In general, the automated image analysis procedure based on color was not competitive with results achieved with eCognition, which provided accuracies in the range of 86% to 92%. Flight altitude and image resolution (3 to 15 mm/pixel) were not important for accuracy, and ortho-mosaicking had no clear impact. Models including texture-based methods were not fully evaluated because they required hours of computer time per image, and it is doubtful whether their performance can justify the computational expense. Results are discussed in a practical context, and the consequences of varying accuracies are evaluated in different scenarios.

KW - Faculty of Science

KW - weed detection

KW - drone image analysis

KW - weed assessment

KW - image processing

M3 - Poster

ER -

ID: 140709172
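
The abstract describes a patch-based weed-crop classifier built on the excess green (2G-R-B) vegetation index over 64 by 64 pixel patches. The following minimal Python sketch illustrates that idea; the chromaticity normalization, the fixed threshold, and all function names are illustrative assumptions, not the authors' code, whose decision rules were learned from the 16,000 expert-annotated regions.

# Minimal sketch of patch-based ExG weed-crop classification.
# The 64 x 64 patch size and the ExG index (2G - R - B) come from the
# abstract; the threshold and normalization are hypothetical.
import numpy as np

PATCH = 64  # patch size in pixels, per the abstract

def excess_green(rgb):
    """ExG index 2g - r - b on chromaticity-normalized RGB channels."""
    rgb = rgb.astype(np.float64)
    total = rgb.sum(axis=-1, keepdims=True)
    total[total == 0] = 1.0                      # guard against division by zero
    r, g, b = np.moveaxis(rgb / total, -1, 0)
    return 2.0 * g - r - b

def classify_patches(image, threshold=0.05):
    """Label each PATCH x PATCH block as weed (True) or crop (False).

    `threshold` is a hypothetical cutoff on a patch's mean ExG; the study
    learned its decision rules from annotated ground truth instead.
    """
    exg = excess_green(image)
    rows, cols = exg.shape[0] // PATCH, exg.shape[1] // PATCH
    labels = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            patch = exg[i * PATCH:(i + 1) * PATCH, j * PATCH:(j + 1) * PATCH]
            labels[i, j] = patch.mean() > threshold
    return labels

# Usage (hypothetical file name):
# labels = classify_patches(imageio.v3.imread("field_rgb.jpg"))

The texture-based variants mentioned in the abstract instead learn the statistical properties of filter responses from weed and crop patches. A rough sketch under similar assumptions, using SciPy Gaussian-derivative filters and a nearest-class-mean rule (both illustrative choices, not necessarily those used in the study):

# Sketch of a texture feature and decision rule for a single patch.
import numpy as np
from scipy import ndimage

def texture_features(gray_patch, sigmas=(1.0, 2.0, 4.0)):
    """Mean absolute Gaussian-derivative responses at several scales."""
    gray = gray_patch.astype(np.float64)
    feats = []
    for s in sigmas:
        for order in ((0, 1), (1, 0)):           # d/dx and d/dy
            resp = ndimage.gaussian_filter(gray, s, order=order)
            feats.append(np.abs(resp).mean())
    return np.array(feats)

def nearest_mean_label(feats, weed_mean, crop_mean):
    """True (weed) if the features are closer to the weed class mean.

    weed_mean and crop_mean would be averages of texture_features over
    the expert-annotated training patches.
    """
    return (np.linalg.norm(feats - weed_mean)
            < np.linalg.norm(feats - crop_mean))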