Combining Local and Global Viewpoint Planning for Fruit Coverage

@inproceedings{Zaenker2021CombiningLA,
  title={Combining Local and Global Viewpoint Planning for Fruit Coverage},
  author={Tobias Zaenker and Christopher F. Lehnert and Chris McCool and Maren Bennewitz},
  booktitle={2021 European Conference on Mobile Robots (ECMR)},
  year={2021},
  pages={1-7}
}
Obtaining 3D sensor data of complete plants or plant parts (e.g., the crop or fruit) is difficult due to their complex structure and a high degree of occlusion. However, especially for the estimation of the position and size of fruits, it is necessary to avoid occlusions as much as possible and acquire sensor information of the relevant parts. Global viewpoint planners exist that suggest a series of viewpoints to cover the regions of interest up to a certain degree, but they usually prioritize… 


Attention-driven Active Vision for Efficient Reconstruction of Plants and Targeted Plant Parts

TLDR
It is concluded that adding an attention mechanism to active vision is necessary to improve the quality of perception in complex agro-food environments when reconstructing the whole plant and targeted plant parts.

Deep Reinforcement Learning for Next-Best-View Planning in Agricultural Applications

TLDR
A novel deep reinforcement learning (DRL) approach to determine the next best view for automatic exploration of 3D environments with a robotic arm equipped with an RGB-D camera, which takes as input the encoded 3D observation map and the temporal sequence of camera view pose changes, and outputs the most promising camera movement direction.

Contrastive 3D Shape Completion and Reconstruction for Agricultural Robots Using RGB-D Frames

TLDR
A pipeline is proposed that exploits high-resolution 3D data in the learning phase but only requires a single RGB-D frame to predict the 3D shape of a complete fruit during operation, needing only 4 ms for inference.

Fruit Mapping with Shape Completion for Autonomous Crop Monitoring

TLDR
This paper presents an approach for mapping fruits on plants and estimating their shape by matching superellipsoids, and demonstrates in various simulated scenarios with a robotic arm equipped with an RGB-D camera that this approach can accurately estimate fruit volumes.
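The superellipsoid matching mentioned above can be illustrated with the standard inside-outside function of a superellipsoid; the following is a minimal sketch for the axis-aligned case (parameter names are generic, and this is not the paper's implementation):

```python
import numpy as np

def superellipsoid_f(p, a, b, c, e1, e2):
    """Inside-outside function of an axis-aligned superellipsoid with
    semi-axes (a, b, c) and shape exponents (e1, e2).
    Returns F < 1 inside, F = 1 on the surface, F > 1 outside."""
    x, y, z = np.abs(p)
    return ((x / a) ** (2 / e2) + (y / b) ** (2 / e2)) ** (e2 / e1) \
           + (z / c) ** (2 / e1)
```

With e1 = e2 = 1 and equal semi-axes this reduces to a unit sphere; fitting a superellipsoid to fruit points typically minimizes a residual of this function over the observed point cloud.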

References

Showing 1-10 of 18 references

Viewpoint Planning for Fruit Size and Position Estimation

TLDR
This work presents a novel viewpoint planning approach that builds up an octree of plants with labeled regions of interest (ROIs), i.e., fruits, that uses this octree to sample viewpoint candidates that increase the information around the fruit regions and evaluates them using a heuristic utility function that takes into account the expected information gain.
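As a rough illustration of such a heuristic utility, one could score a viewpoint candidate by the unknown space it may reveal around the fruit ROIs, discounted by the cost of moving there. All names, weights, and the distance-based visibility check below are hypothetical simplifications, not the paper's actual function (which uses ray casting through the octree):

```python
import numpy as np

def viewpoint_utility(candidate, roi_centers, unknown_voxels, current_pose,
                      roi_radius=0.2, max_range=1.5, cost_weight=0.5):
    """Heuristic utility of a viewpoint candidate: expected information
    gain around ROIs minus a weighted motion cost.

    candidate, current_pose: camera positions, arrays of shape (3,)
    roi_centers: (N, 3) centers of labeled regions of interest (fruits)
    unknown_voxels: (M, 3) centers of unknown voxels in the octree
    """
    # Unknown voxels close to at least one ROI.
    near_roi = np.any(
        np.linalg.norm(unknown_voxels[:, None, :] - roi_centers[None, :, :],
                       axis=2) < roi_radius, axis=1)
    # Crude visibility proxy: voxel lies within sensor range of the
    # candidate (stands in for a proper ray-casting occlusion check).
    visible = np.linalg.norm(unknown_voxels - candidate, axis=1) < max_range
    gain = float(np.count_nonzero(near_roi & visible))
    # Motion cost: Euclidean distance from the current camera position.
    cost = float(np.linalg.norm(candidate - current_pose))
    return gain - cost_weight * cost
```

Candidates would then simply be ranked by this score and the best one executed next.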

Contour-based next-best view planning from point cloud segmentation of unknown objects

A novel strategy is presented to determine the next-best view for a robot arm, equipped with a depth camera in eye-in-hand configuration, oriented to the autonomous exploration of unknown objects.

Multi-Robot Region-of-Interest Reconstruction with Dec-MCTS

TLDR
This work considers the problem of reconstructing regions of interest of a scene using multiple robot arms and RGB-D sensors, and proposes a targeted information gain planner that outperforms state-of-the-art baselines in every measured metric.

Coverage Path Planning using Path Primitive Sampling and Primitive Coverage Graph for Visual Inspection

TLDR
This paper proposes a novel planning method that directly samples and plans the inspection path for a camera-equipped UAV to acquire visual and geometric information of the target structures as a video stream in complex 3D environments.

A comparison of volumetric information gain metrics for active 3D object reconstruction

TLDR
This paper proposes several new ways to quantify the volumetric information (VI) contained in the voxels of a probabilistic volumetric map, and compares them to the state of the art with extensive simulated experiments.
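One of the simplest VI metrics of this kind is the summed occupancy entropy of the voxels a candidate view would observe; the following is a minimal sketch of that idea, not one of the paper's specific metrics:

```python
import numpy as np

def occupancy_entropy(p):
    """Shannon entropy (in bits) of a voxel's occupancy probability.
    Unknown voxels (p = 0.5) carry maximal entropy; confidently free or
    occupied voxels (p near 0 or 1) carry almost none."""
    p = np.clip(p, 1e-9, 1 - 1e-9)  # guard against log(0)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def volumetric_information(voxel_probs):
    """Total entropy over the voxels a candidate view would observe,
    e.g. the voxels collected by casting rays from that view."""
    return float(np.sum(occupancy_entropy(np.asarray(voxel_probs))))
```

A next-best-view planner would evaluate this sum per candidate view and pick the view expected to remove the most uncertainty.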

Humanoid Robot Next Best View Planning Under Occlusions Using Body Movement Primitives

TLDR
This work presents an approach for humanoid Next Best View (NBV) planning that exploits full body motions to observe objects occluded by obstacles that results in a more complete reconstruction of objects than a conventional algorithm that only changes the orientation of the head.

3D Move to See: Multi-perspective visual servoing towards the next best view within unstructured and occluded environments

TLDR
It is shown, on a real robotic platform, that by moving the eye-in-hand camera using the gradient of an objective function leads to a locally optimal view of the object of interest, even amongst occlusions.
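The gradient-following movement can be illustrated with a finite-difference ascent step on a generic view-quality objective. This is only a sketch: the paper estimates the objective gradient from multiple simultaneous camera perspectives, whereas here a hypothetical scalar objective is probed numerically:

```python
import numpy as np

def gradient_step_view(objective, cam_pos, step=0.05, eps=1e-4):
    """Move the camera one normalized gradient-ascent step on a
    view-quality objective, with the gradient estimated by central
    finite differences around the current position."""
    grad = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        grad[i] = (objective(cam_pos + d) - objective(cam_pos - d)) / (2 * eps)
    n = np.linalg.norm(grad)
    if n < 1e-12:
        return cam_pos  # flat objective: already at a local optimum
    return cam_pos + step * grad / n
```

Iterating this step drives the camera toward a locally optimal view of the target, mirroring the local (rather than global) nature of the servoing approach.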

Efficient coverage of 3D environments with humanoid robots using inverse reachability maps

TLDR
This paper introduces a novel inverse reachability map representation that can be used for fast pose generation and combine it with a next-best-view algorithm and shows that this approach enables the humanoid to efficiently cover room-sized environments with its camera.

Fruit Detectability Analysis for Different Camera Positions in Sweet-Pepper

TLDR
The effect of multiple camera positions and viewing angles on fruit visibility and detectability was investigated and the best single positions were the front views and looking with a zenith angle of 60° upwards.

Complete coverage path planning and guidance for cleaning robots

TLDR
A complete coverage path planning and guidance methodology for a mobile robot is presented, targeting the automatic floor cleaning of large industrial areas, with the path planner able to deal with a priori mapped or unexpected obstacles in the middle of the working space.