Publications
The YCB object and model set: Towards common benchmarks for manipulation research
TLDR
The Yale-CMU-Berkeley (YCB) Object and Model set is intended to be used for benchmarking in robotic grasping and manipulation research, and provides high-resolution RGBD scans, physical properties and geometric models of the objects for easy incorporation into manipulation and planning software platforms.
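For a sense of how the provided geometric models can be pulled into manipulation and planning software, here is a minimal Python sketch using the trimesh library; the file path and folder layout are illustrative assumptions, not part of the dataset specification.

```python
# Minimal sketch: loading a YCB object mesh and reading off properties
# useful for grasp planning. The path below is a hypothetical local layout,
# and we assume the file loads as a single mesh.
import trimesh

mesh = trimesh.load("ycb/003_cracker_box/google_16k/textured.obj")

print("watertight:", mesh.is_watertight)
print("extents (m):", mesh.bounding_box.extents)   # axis-aligned bounding box
print("center of mass:", mesh.center_mass)
print("volume (m^3):", mesh.volume)
```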
Benchmarking in Manipulation Research: Using the Yale-CMU-Berkeley Object and Model Set
TLDR
The Yale-CMU-Berkeley object and model set is presented, intended to facilitate benchmarking in robotic manipulation research and to enable the community of manipulation researchers to more easily compare approaches and to continually evolve standardized benchmarking tests and metrics as the field matures.
Benchmarking in Manipulation Research: The YCB Object and Model Set and Benchmarking Protocols
TLDR
The Yale-CMU-Berkeley (YCB) Object and Model set is presented, intended to facilitate benchmarking in robotic manipulation, prosthetic design, and rehabilitation research, along with a comprehensive literature survey on existing benchmarks and object datasets.
Yale-CMU-Berkeley dataset for robotic manipulation research
TLDR
An image and model dataset of the real-life objects from the Yale-CMU-Berkeley Object Set, which is specifically designed for benchmarking in manipulation research, is presented.
Comparison of extremum seeking control algorithms for robotic applications
TLDR
The purpose of this paper is to help engineers and researchers choose among extremum seeking control techniques for robotic applications such as object grasping, active object recognition and viewpoint optimization; approximation-based methods are recommended when the noise level is negligible.
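As a concrete instance of the technique family being compared, below is a minimal sketch of classical perturbation-based extremum seeking on a toy scalar objective; the objective, gains, and dither parameters are illustrative assumptions, not values from the paper.

```python
# Perturbation-based extremum seeking: probe J with a sinusoidal dither,
# high-pass the measurement, demodulate, and integrate the result so the
# parameter estimate climbs toward the optimum.
import numpy as np

def J(theta):                       # unknown objective; true optimum at theta = 2
    return -(theta - 2.0) ** 2

theta_hat = 0.0                     # parameter estimate
a, omega, k = 0.1, 5.0, 0.8         # dither amplitude/frequency, adaptation gain
dt, lp = 0.01, 0.0                  # step size; low-pass state for high-passing y
for n in range(20000):
    t = n * dt
    y = J(theta_hat + a * np.sin(omega * t))
    lp += dt * (y - lp)             # one-pole low-pass (cutoff ~1 rad/s)
    y_hp = y - lp                   # high-passed objective measurement
    theta_hat += dt * k * y_hp * np.sin(omega * t)   # demodulate and integrate

print(round(theta_hat, 2))          # settles near the optimum theta = 2
```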
Grasping of unknown objects via curvature maximization using active vision
TLDR
A novel grasping algorithm is presented that uses active vision together with curvature information obtained from the silhouette of the object, leading to faster and still reliable grasping of the target object in 3D.
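As a rough illustration of the curvature side of this idea, the sketch below estimates curvature along a closed silhouette contour using the standard planar-curve formula; the synthetic ellipse and the finite-difference scheme are stand-ins, not the paper's actual vision pipeline.

```python
import numpy as np

def cdiff(f):
    """Central difference on a closed (periodic) contour."""
    return (np.roll(f, -1) - np.roll(f, 1)) / 2.0

# Synthetic silhouette: an ellipse standing in for an extracted object contour.
t = np.linspace(0, 2 * np.pi, 400, endpoint=False)
x, y = 2.0 * np.cos(t), 1.0 * np.sin(t)

dx, dy = cdiff(x), cdiff(y)
ddx, ddy = cdiff(dx), cdiff(dy)
# Standard planar-curve curvature: (x'y'' - y'x'') / (x'^2 + y'^2)^(3/2)
kappa = (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

# A curvature-maximizing policy would steer the view/grasp toward this point.
i = int(np.argmax(np.abs(kappa)))
print("highest-curvature point:", (round(float(x[i]), 2), round(float(y[i]), 2)))
```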
Single-Grasp Object Classification and Feature Extraction with Simple Robot Hands and Tactile Sensors
TLDR
The proposed approach does not require object exploration, re-grasping, grasp-release, or force modulation and works for arbitrary object start positions and orientations, so the technique may be integrated into practical robotic grasping scenarios without adding time or manipulation overheads.
Variable-Friction Finger Surfaces to Enable Within-Hand Manipulation via Gripping and Sliding
TLDR
This letter presents a simple mechanical analogy to the human finger pad, via a robotic finger with both high- and low-friction surfaces, and demonstrates how within-hand rolling and sliding of an object may be achieved without the need for tactile sensing, high dexterity, dynamic finger/object modeling, or complex control methods.
Unplanned, model-free, single grasp object classification with underactuated hands and force sensors
TLDR
The technique combines the benefits of simple, adaptive robot grippers (which can grasp successfully without prior knowledge of the hand or the object model) with an advanced machine learning technique (Random Forests) to discriminate between different object classes.
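A minimal sketch of the classification stage follows, assuming each grasp is summarized as a fixed-length vector of force-sensor features; the synthetic data and feature layout are illustrative, and only the Random Forest choice comes from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, n_features = 60, 8          # e.g., forces sampled during hand closure
classes = ["bottle", "box", "ball"]      # hypothetical object classes

# Synthetic stand-in for per-grasp feature vectors (one cluster per class).
X = np.vstack([rng.normal(loc=i, scale=0.7, size=(n_per_class, n_features))
               for i in range(len(classes))])
y = np.repeat(classes, n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```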
Image based visual servoing using algebraic curves applied to shape alignment
TLDR
A novel method for using boundary information in visual servoing is presented, where object boundaries are modeled with algebraic curves and the intersections of the lines derived from these curves are used as point features in visual servoing.
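For intuition about the boundary model, here is a minimal sketch that fits an implicit algebraic curve (a conic) to boundary points with a generic least-squares/SVD solve; the synthetic boundary and this particular fitting method are assumptions, not necessarily the authors' exact formulation.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares coefficients (a..f) of a x^2 + b xy + c y^2 + d x + e y + f = 0."""
    x, y = pts[:, 0], pts[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    # The right singular vector with the smallest singular value minimizes
    # ||D c|| subject to ||c|| = 1.
    return np.linalg.svd(D)[2][-1]

# Synthetic boundary: an ellipse centered at (1, -0.5).
t = np.linspace(0, 2 * np.pi, 100)
boundary = np.column_stack([3 * np.cos(t) + 1, 2 * np.sin(t) - 0.5])
coeffs = fit_conic(boundary)
print(np.round(coeffs / coeffs[0], 3))   # implicit-curve coefficients, normalized
```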