Paper Title
Deep learning-based Crop Row Detection for Infield Navigation of Agri-Robots
Paper Authors
Paper Abstract
Autonomous navigation in agricultural environments is challenged by the varying field conditions that arise in arable fields. State-of-the-art solutions for autonomous navigation in such environments require expensive hardware such as RTK-GNSS. This paper presents a robust crop row detection algorithm that withstands such field variations using inexpensive cameras. Existing datasets for crop row detection do not represent all the possible field variations. A dataset of sugar beet images was created representing 11 field variations comprising multiple growth stages, light levels, varying weed densities, curved crop rows and discontinuous crop rows. The proposed pipeline segments the crop rows using a deep learning-based method and employs the predicted segmentation mask to extract the central crop row using a novel central crop row selection algorithm. The novel crop row detection algorithm was tested for crop row detection performance and for its capability to support visual servoing along a crop row. The visual servoing-based navigation was tested in a realistic simulation scenario with real ground and plant textures. Our algorithm demonstrated robust vision-based crop row detection in challenging field conditions, outperforming the baseline.
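The abstract describes extracting a central crop row from a predicted segmentation mask. The paper's actual selection algorithm is not given here, so the following is only a minimal illustrative sketch of the general idea: on each scanline of a binary crop mask, keep the mask segment nearest the image centre, then fit a line through the chosen segment centres. The function name and the line-fitting approach are assumptions, not the authors' method.

```python
import numpy as np

def central_crop_row(mask):
    """Hypothetical sketch: pick, per scanline, the mask segment nearest
    the image centre and fit a straight line x = a*y + b through the
    chosen segment centres. Not the paper's actual algorithm."""
    h, w = mask.shape
    cx = w / 2.0
    xs, ys = [], []
    for y in range(h):
        cols = np.flatnonzero(mask[y])
        if cols.size == 0:
            continue
        # split the scanline's mask pixels into contiguous runs,
        # each run being a candidate crop row at this image height
        runs = np.split(cols, np.flatnonzero(np.diff(cols) > 1) + 1)
        centres = np.array([r.mean() for r in runs])
        # keep the run whose centre is closest to the image centre
        xs.append(centres[np.argmin(np.abs(centres - cx))])
        ys.append(y)
    # least-squares line through the selected centres
    a, b = np.polyfit(ys, xs, 1)
    return a, b
```

For visual servoing, the fitted line's offset and slope relative to the image centre could serve as the lateral and heading error signals steering the robot along the row.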