Sam Oswald - Semi-Automated Object Detection Workflows using UAV Imagery
Abstract
In plant phenotyping, researchers are often interested in measuring the timing and distribution of plant emergence, as well as the density of organs on a mature plant. By using high-resolution UAV imagery and modern machine learning model architectures, it is possible to predict the density of these objects at a much greater scale with less manual work required. Difficulties remain, however, in scaling up these models, with most showing accurate results only when applied to a restricted target region. This is partly due to differences in species morphology, but it is also caused by differences in the images themselves, arising from disparate collection plans and variable image quality.
To overcome this, we have developed an end-to-end image processing platform (MAPEO), which allows researchers to collect and process UAV flights at scale, outputting several products and providing standardised imagery that can be compared spatially and temporally. This workflow extends from the initial flight parameterisation, through collection and in-field quality assessment, to online processing and publishing.
Outputs are standardised and, with attached metadata, can be incorporated into additional machine learning pipelines. This allows ever-larger amounts of data to be leveraged to train increasingly robust models, covering various field set-ups in terms of environmental setting and the species morphologies present. Additionally, fine-tuning is optimised towards target phenotyping problems. Overall, researchers spend less labour and computational time yet retrieve more reliable and accurate results than with ad hoc object detection modelling. Test results show that this workflow produces better results when adapting model architectures to phenotyping use cases than out-of-the-box architectures do. Additionally, test cases for wheat-ear detection have shown that models trained on datasets collected under similar flight conditions can be applied to each other, increasingly so as more variability in wheat morphologies and environmental conditions is included.
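The abstract does not detail the detection models themselves, but the core task it describes, turning gridded imagery into an object count and a density per unit area, can be illustrated with a deliberately simple stand-in. The sketch below (an assumption for illustration, not the MAPEO pipeline, which uses trained detection models) approximates "detection" with a threshold plus a 4-connected component count on a tiny synthetic reflectance grid; the grid values, threshold, and cell size are all made up.

```python
# Toy illustration of object-density counting on gridded UAV imagery.
# NOT the MAPEO workflow: real wheat-ear detection uses trained ML models.
# Here, "objects" are simply 4-connected clusters of above-threshold cells.

from collections import deque


def count_objects(grid, threshold):
    """Count 4-connected components of cells whose value exceeds threshold."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] > threshold and not seen[r][c]:
                count += 1  # found a new component; flood-fill it
                queue = deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and grid[ny][nx] > threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count


def density_per_m2(grid, threshold, cell_size_m):
    """Object density: component count divided by plot area in square metres."""
    area_m2 = len(grid) * len(grid[0]) * cell_size_m ** 2
    return count_objects(grid, threshold) / area_m2


# Synthetic 4x4 plot with two bright "objects" (values are invented).
plot = [
    [0.9, 0.9, 0.1, 0.1],
    [0.9, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.8],
    [0.1, 0.1, 0.8, 0.8],
]
print(count_objects(plot, 0.5))             # 2
print(density_per_m2(plot, 0.5, 0.5))       # 2 objects / 4 m^2 = 0.5
```

Standardised imagery matters precisely because both steps here, the threshold and the per-area normalisation, only transfer between flights when radiometry and ground sampling distance are comparable.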