Large-scale crop research projects – such as ‘Optimising canola production in diverse Australian growing environments’ – rely on lots of eyes and feet.
When monitoring and measuring canola flowering periods, visual assessments and record-keeping need to be carried out every couple of days, which means lots of people walking up and down rows in paddock and glasshouse trials.
But GRDC-invested postdoctoral fellow Dr Jing Wang is working with the researchers on the canola project to automate growth-stage scoring using drones and purpose-built software.
“I was first exposed to agricultural projects in my PhD, where computer vision was used to count the number of pineapples on a farm,” he says.
“Once we combine automatic image recognition and object detection with technology such as drones, we can also increase the frequency of monitoring the plants to extract more information, providing the potential to discover properties that were previously unknown.
“Moreover, human scoring of plants is inevitably affected by the subjective judgement and experience of the person. We can also save a lot of time, because machines work much faster than humans and do not sleep or take holidays.”
Dr Wang started the project by taking a deep dive into the agronomy of canola, growing his own plants and visiting farms. He is now building a deep-learning model that will take images of canola and provide a multitude of growth-stage data.
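To give a flavour of how images can be turned into growth-stage scores, here is a deliberately simple sketch: estimate flowering progress from the share of yellow pixels in an overhead photo, since canola flowers are bright yellow. This is an illustration only, not Dr Wang's deep-learning model; the colour thresholds and stage cut-offs are invented for the example.

```python
# Toy flowering-score sketch: estimate how far a canola plot is into
# flowering from the fraction of yellow pixels in an overhead image.
# Thresholds and stage labels here are hypothetical, for illustration.

def is_yellow(rgb):
    """Crude colour test for a canola-flower-like yellow pixel."""
    r, g, b = rgb
    return r > 200 and g > 180 and b < 120

def flowering_fraction(pixels):
    """Fraction of pixels classified as flower-yellow (0.0 to 1.0)."""
    if not pixels:
        return 0.0
    return sum(is_yellow(p) for p in pixels) / len(pixels)

def flowering_stage(fraction):
    """Map a yellow fraction to a coarse, made-up stage label."""
    if fraction < 0.05:
        return "pre-flowering"
    if fraction < 0.30:
        return "early flowering"
    return "full flowering"

if __name__ == "__main__":
    # A tiny fake "image": mostly green canopy with some yellow flowers.
    canopy = [(40, 120, 30)] * 80 + [(240, 210, 60)] * 20
    frac = flowering_fraction(canopy)
    print(frac, flowering_stage(frac))  # 0.2 early flowering
```

A real system would replace the hand-tuned colour rule with a trained model, but the overall shape (image in, score out, repeated every flight) is the same.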
“There are loads of possibilities to apply computer vision and object detection in agriculture and plant research,” he says.
“Similar technology can be used to perform weed detection, disease and pest detection, plant recognition and monitoring, yield prediction, fruit and leaf counting – all from images. Under the microscope, tasks like cell detection and counting can also be performed by a computer, freeing us from much of the repetitive and time-consuming work.”
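The counting tasks mentioned above often come down to one classic computer-vision building block: connected-component labelling, which counts distinct blobs in a binary mask. The sketch below shows the idea on a hand-made grid; it is an illustration of the technique, not the project's actual pipeline.

```python
# Toy object-counting sketch: count distinct "objects" (cells, fruit,
# leaves) in a binary mask via 4-connected component labelling using
# breadth-first flood fill. Purely illustrative.

from collections import deque

def count_objects(mask):
    """Count 4-connected components of 1s in a 2D grid of 0s and 1s."""
    rows = len(mask)
    cols = len(mask[0]) if mask else 0
    seen = [[False] * cols for _ in range(rows)]
    count = 0
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] == 1 and not seen[r][c]:
                count += 1
                queue = deque([(r, c)])  # flood-fill this component
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
    return count

if __name__ == "__main__":
    # Three separate blobs of 1s in a 3x5 grid.
    mask = [
        [1, 1, 0, 0, 1],
        [0, 1, 0, 0, 0],
        [0, 0, 0, 1, 1],
    ]
    print(count_objects(mask))  # 3
```

In practice the binary mask would come from a segmentation or detection model rather than being written by hand, and libraries such as OpenCV or scikit-image provide optimised versions of this labelling step.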