Building an AI-powered robot that can scan a farmer’s field, find the weeds hiding among the crops and then shoot those weeds with a tiny laser is — no surprise — not an easy task.
But it’s exactly what the engineering team at Carbon Robotics, an agriculture tech business known for making autonomous robots, does every day.
Using AI in precision agriculture is fairly common these days. Deep learning is just one area of AI applied to agriculture, and it has already been used to manage watering, monitor crops, classify seeds, spot diseased plants, guide harvesting and, of course, identify weeds. It makes sense that deep learning was the natural choice for engineers at Carbon Robotics.
“Deep learning in particular allows us to learn directly from images without relying on any feature engineering,” said Raven Pillmann, senior deep learning engineer at Carbon Robotics.
Using deep learning to pinpoint weeds involves more than just training on datasets, though; it means the team has to make adjustments for every type of crop and weed that its robot, the LaserWeeder, might encounter.
While deep learning is common in digital agriculture, there is still a need for models that can account for different crop types, image conditions and sensor modalities, according to research conducted by Umm Al-Qura University. Another study by North Dakota State and Montana State Universities, which focused on the challenges of training deep learning models for precision weeding, pointed out that engineers must train models on limited data, then turn around and test them on unseen datasets with different distributions.
The engineering team at Carbon Robotics has its work cut out for it. Built In Seattle sat down with Pillmann to learn more.
Can you share some examples of how AI/ML has directly contributed to enhancing your product line or accelerating time to market?
There were lots of ways we could have tackled the plant identification problem from a computer vision standpoint, but we decided early on to use deep learning, which is standard in our industry. Deep learning in particular allows us to learn directly from images without relying on any feature engineering. This allows us to quickly and constantly adapt to new plants and field conditions when we deploy a machine: instead of having to figure out exactly what makes a new weed a weed, we can simply annotate images and train a model, a process that allows us to get performance improvements out to customers more rapidly and with less effort.
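The annotate-then-train workflow Pillmann describes — learning plant classes directly from labeled images rather than hand-designed features — can be illustrated with a toy sketch. This is not Carbon Robotics’ code; it uses a deliberately simple nearest-centroid “model” on hypothetical 4-pixel image vectors to show the idea that labeling examples and fitting a model replaces defining what makes a weed a weed.

```python
# Toy sketch of the annotate-then-train idea: a nearest-centroid "model"
# learns plant classes directly from labeled pixel vectors, with no
# hand-engineered features. All names and data here are illustrative.
from statistics import mean


def train(labeled_images):
    """labeled_images: list of (pixel_vector, label). Returns per-class centroids."""
    by_label = {}
    for pixels, label in labeled_images:
        by_label.setdefault(label, []).append(pixels)
    return {
        label: [mean(band) for band in zip(*images)]  # average each pixel position
        for label, images in by_label.items()
    }


def predict(model, pixels):
    """Assign the class whose centroid is nearest in squared distance."""
    def dist(centroid):
        return sum((p - c) ** 2 for p, c in zip(pixels, centroid))
    return min(model, key=lambda label: dist(model[label]))


# Hypothetical annotated "images": weeds bright in bands 0 and 2, crops in 1 and 3.
annotated = [
    ([0.9, 0.1, 0.8, 0.2], "weed"),
    ([0.8, 0.2, 0.9, 0.1], "weed"),
    ([0.1, 0.9, 0.2, 0.8], "crop"),
    ([0.2, 0.8, 0.1, 0.9], "crop"),
]
model = train(annotated)
print(predict(model, [0.85, 0.15, 0.85, 0.15]))  # → weed
```

A production system would of course use a deep network rather than centroids, but the workflow is the same: when a new weed appears, annotate more images and retrain rather than redesign features.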
We also recognize that there’s a lot of information we can gain from the data that we’ve already obtained. Detecting anomalies in weeding patterns, for example, has helped our support team determine what specific actions are likely to mitigate a customer’s issues, whereas analyzing electrical signals has helped us discover whether physical components should be replaced. Building tools to digest these massive amounts of data has allowed us to diagnose a wide range of problems like these, often before customers may realize they exist.
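The anomaly detection Pillmann mentions can be sketched in miniature. Carbon Robotics’ actual tooling is not public; the example below just applies a standard z-score rule to a hypothetical series of per-pass weed counts, flagging the kind of sudden drop that might prompt the support team to investigate before a customer notices.

```python
# Hypothetical sketch of flagging anomalies in weeding data with a z-score
# rule; the data and threshold are illustrative, not Carbon Robotics' tooling.
from statistics import mean, stdev


def find_anomalies(counts, threshold=2.0):
    """Return indices whose value deviates more than `threshold` std devs from the mean."""
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mu) / sigma > threshold]


# Weeds shot per pass; the sudden drop on pass 5 could indicate a hardware issue.
weeds_per_pass = [1040, 980, 1005, 1010, 995, 120, 1000, 990]
print(find_anomalies(weeds_per_pass))  # → [5]
```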
How is your team integrating AI and ML into the product development process, and what specific improvements have you seen as a result?
The main function of the LaserWeeder is to shoot weeds while protecting crops. At its core, this relies on computer vision systems to locate, categorize, track and target weeds of different shapes and sizes, and without deep learning none of this would be feasible.
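The locate-categorize-track-target flow described above can be outlined as a skeleton. This is a heavily simplified, hypothetical sketch — a real system would use deep detection models and far more robust multi-object tracking — but it shows how detections from successive frames can be associated so that a weed keeps one identity long enough to be targeted.

```python
# Simplified sketch of the locate → categorize → track flow: detections are
# associated across frames by nearest-neighbor matching. Names are
# hypothetical; real tracking would be considerably more robust.
from dataclasses import dataclass
from itertools import count


@dataclass
class Detection:
    x: float
    y: float
    label: str  # e.g. "weed" or "crop", assigned by the classifier


class Tracker:
    def __init__(self, max_dist=5.0):
        self.tracks = {}          # track_id -> most recent Detection
        self._ids = count()       # source of fresh track ids
        self.max_dist = max_dist  # max movement allowed between frames

    def update(self, detections):
        """Match each detection to the nearest existing track, or start a new one."""
        assigned = {}
        for det in detections:
            best = None
            for tid, prev in self.tracks.items():
                d = ((det.x - prev.x) ** 2 + (det.y - prev.y) ** 2) ** 0.5
                if d <= self.max_dist and (best is None or d < best[1]):
                    best = (tid, d)
            tid = best[0] if best else next(self._ids)
            assigned[tid] = det
        self.tracks = assigned
        return assigned


tracker = Tracker()
frame1 = tracker.update([Detection(10.0, 10.0, "weed")])
frame2 = tracker.update([Detection(11.0, 10.5, "weed")])  # same weed, moved slightly
print(list(frame2))  # the weed keeps its track id across frames → [0]
```

Stable track identities matter here because a laser must be aimed at a specific weed over time while crops nearby are left untouched.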
But beyond shooting weeds, the LaserWeeder is a high-powered camera-based computer that is capable of drawing insights about anything it sees. Our customers are able to view different metrics pertaining to the weeds and crops in their fields through our mobile companion app and web-based Ops Center, and our support team can use those metrics to diagnose any troubles our customers might be facing.
Looking ahead, we are constantly thinking about new ways to take advantage of the data collected by our machines to improve the customer experience, which includes building new data analysis tools and integrating machine learning advancements into our system that we think will address issues identified in that data.
What strategies are you employing to ensure that your systems and processes keep up with the rapid advancements in AI and ML?
Attending conferences has been one of the best ways to find trends that are percolating through the AI community. Similarly, reading papers and articles about recent deep learning and computer vision advancements has been a great way to learn technical details and generate new ideas that may help us improve our own systems.
That being said, it’s imperative that we pursue ideas that are applicable to the problems we’re trying to solve. A significant portion of research today goes into pushing the limits of the next great idea, but it isn’t necessarily geared towards solving domain-specific problems on real-world datasets like the ones we’re tackling. It’s far more important that we identify ideas that can translate to our use cases rather than always chasing the next advancement, and that we validate whether ideas are promising before pursuing them on a large scale. Generally, this means that we spend a decent amount of time experimenting and prototyping before we even begin to consider how we’ll integrate an idea into production.