Methods and apparatus for determining the pose of an object based on point cloud data
Abstract
Methods, apparatus, and computer-readable media related to 3D object detection and pose determination that may increase the robustness and/or efficiency of 3D object recognition and pose determination. Some implementations are directed to techniques for generating an object model of an object based on model point cloud data of the object. Some implementations are additionally or alternatively directed to techniques for applying acquired 3D scene point cloud data to a stored object model of an object to detect the object and/or determine the pose of the object.
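The claims below match scene point pairs to model point pairs "based on one or more features of a corresponding one of the scene point pairs." The abstract does not name those features; one common choice in point-pair-feature matching is a 4D descriptor built from the distance between two oriented points and the angles among their normals. The following is a minimal sketch under that assumption; the function names and quantization steps are illustrative, not taken from the patent.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Compute a 4D feature for two oriented points (unit normals assumed).

    The feature is (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    where d = p2 - p1.  A descriptor like this could serve as the
    "one or more features" used to look up matching model point pairs.
    """
    d = p2 - p1
    dist = float(np.linalg.norm(d))
    if dist < 1e-12:
        return (0.0, 0.0, 0.0, 0.0)
    d_hat = d / dist

    def angle(a, b):
        # clip guards against arccos domain errors from rounding
        return float(np.arccos(np.clip(np.dot(a, b), -1.0, 1.0)))

    return (dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2))

def quantize(feature, dist_step=0.05, angle_step=np.deg2rad(12)):
    """Quantize a feature so that similar pairs hash to the same key."""
    d, a1, a2, a3 = feature
    return (int(d / dist_step), int(a1 / angle_step),
            int(a2 / angle_step), int(a3 / angle_step))
```

With quantized features as hash keys, all model point pairs can be indexed offline into a table, and each scene point pair then retrieves its matching model point pairs in constant time.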
20 Claims
1. A method implemented by one or more processors, comprising:
- identifying, by one or more of the processors, scene point cloud data that captures at least a portion of an object in an environment;
- selecting, by one or more of the processors and from the scene point cloud data, a plurality of scene point pairs that each include a scene reference point and a corresponding additional scene point;
- identifying, by one or more of the processors, a model point pair for each of the scene point pairs, wherein each of the model point pairs is identified from a stored model of the object based on one or more features of a corresponding one of the scene point pairs;
- generating, by one or more of the processors, a plurality of scene reference point candidate in-plane rotations based on the model point pairs and the scene point pairs, wherein each of the scene reference point candidate in-plane rotations is generated based on a corresponding one of the model point pairs and a corresponding one of the scene point pairs;
- determining, by one or more of the processors, a candidate pose for a scene reference point, of the scene reference points, based on a group of the scene reference point candidate in-plane rotations and their model reference points, wherein determining the candidate pose for the scene reference point based on the group of the scene reference point candidate in-plane rotations comprises:
- including a first instance of a given candidate scene reference point in-plane rotation of the candidate scene reference point in-plane rotations in the group and excluding a second instance of the given candidate scene reference point in-plane rotation from the group, wherein the first instance of the given candidate scene reference point in-plane rotation is generated based on a given model point pair of the model point pairs and the second instance is excluded from the group based on the second instance of the given candidate scene reference point in-plane rotation also being based on the given model point pair.
- Dependent Claims (2, 3, 4, 5, 6, 7)
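Claim 1's distinctive grouping step keeps only the first candidate in-plane rotation generated from any given model point pair and excludes later instances derived from that same pair. This can be sketched as a Hough-style voting table per scene reference point. The tuple layout and function names below are illustrative assumptions, not the patent's implementation.

```python
from collections import defaultdict

def group_candidate_rotations(candidates):
    """Group candidate in-plane rotations for one scene reference point.

    `candidates` is an iterable, in generation order, of tuples
    (model_point_pair_id, model_reference_point_id, rotation_bin).
    Only the first candidate generated from any given model point pair
    is included in the group; later instances from the same model
    point pair are excluded, so one model pair cannot vote twice.
    """
    seen_model_pairs = set()
    votes = defaultdict(int)  # (model_ref_point, rotation_bin) -> vote count
    for pair_id, model_ref, rot_bin in candidates:
        if pair_id in seen_model_pairs:
            continue  # second instance from the same model point pair
        seen_model_pairs.add(pair_id)
        votes[(model_ref, rot_bin)] += 1
    return votes

def best_candidate_pose(votes):
    """Return the (model_reference_point, rotation_bin) with most votes."""
    return max(votes.items(), key=lambda kv: kv[1])[0] if votes else None
```

The winning (model reference point, in-plane rotation) bin then defines the candidate pose for that scene reference point, since the alignment of a scene reference point with a model reference point plus an in-plane rotation fixes a full 6-DoF pose.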
8. A robot, comprising:
- one or more actuators;
- one or more processors receiving scene point cloud data, the scene point cloud data capturing at least a portion of an object in an environment, wherein the one or more processors are configured to:
- select, from the scene point cloud data, a plurality of scene point pairs that each include a scene reference point and a corresponding additional scene point;
- identify a model point pair for each of the scene point pairs, wherein each of the model point pairs is identified from a stored model of the object based on one or more features of a corresponding one of the scene point pairs;
- generate a plurality of scene reference point candidate in-plane rotations based on the model point pairs and the scene point pairs, wherein each of the scene reference point candidate in-plane rotations is generated based on a corresponding one of the model point pairs and a corresponding one of the scene point pairs;
- determine a candidate pose for a scene reference point, of the scene reference points, based on a group of the scene reference point candidate in-plane rotations and their model reference points, wherein in determining the candidate pose for the scene reference point based on the group of the scene reference point candidate in-plane rotations, one or more of the processors are configured to:
- include a first instance of a given candidate scene reference point in-plane rotation of the candidate scene reference point in-plane rotations in the group and exclude a second instance of the given candidate scene reference point in-plane rotation from the group, wherein the first instance of the given candidate scene reference point in-plane rotation is generated based on a given model point pair of the model point pairs and the second instance is excluded from the group based on the second instance of the given candidate scene reference point in-plane rotation also being based on the given model point pair;
- determine a pose for the object based on the candidate pose of the scene reference point; and
- provide, to one or more of the actuators, control commands that are based on the determined pose.
- Dependent Claims (9, 10, 11, 12, 13)
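Claim 8 additionally determines a single object pose "based on the candidate pose of the scene reference point" before issuing actuator commands. With candidate poses from many scene reference points, one plausible aggregation strategy (an assumption on our part; the claim does not specify one) is to greedily cluster nearby candidate poses and keep the largest cluster:

```python
import numpy as np

def cluster_candidate_poses(poses, trans_tol=0.02, rot_tol=np.deg2rad(10)):
    """Greedily cluster candidate poses; each pose is an (R, t) tuple
    with R a 3x3 rotation matrix and t a translation vector.

    A pose joins an existing cluster when its translation and rotation
    are within the given tolerances of the cluster's first member.
    Returns the largest cluster, whose members could then be averaged
    into the final object pose.
    """
    clusters = []
    for R, t in poses:
        placed = False
        for cluster in clusters:
            R0, t0 = cluster[0]
            # rotation angle of the relative rotation R0^T R via its trace
            cos_ang = (np.trace(R0.T @ R) - 1.0) / 2.0
            ang = np.arccos(np.clip(cos_ang, -1.0, 1.0))
            if np.linalg.norm(t - t0) < trans_tol and ang < rot_tol:
                cluster.append((R, t))
                placed = True
                break
        if not placed:
            clusters.append([(R, t)])
    return max(clusters, key=len)
```

Clustering makes the final estimate robust to outlier candidate poses from scene reference points that do not actually lie on the object.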
14. One or more non-transitory computer-readable media comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the following operations:
- identifying scene point cloud data that captures at least a portion of an object in an environment;
- selecting, from the scene point cloud data, a plurality of scene point pairs that each include a scene reference point and a corresponding additional scene point;
- identifying a model point pair for each of the scene point pairs, wherein each of the model point pairs is identified from a stored model of the object based on one or more features of a corresponding one of the scene point pairs;
- generating a plurality of scene reference point candidate in-plane rotations based on the model point pairs and the scene point pairs, wherein each of the scene reference point candidate in-plane rotations is generated based on a corresponding one of the model point pairs and a corresponding one of the scene point pairs;
- determining a candidate pose for a scene reference point, of the scene reference points, based on a group of the scene reference point candidate in-plane rotations and their model reference points, wherein determining the candidate pose for the scene reference point based on the group of the scene reference point candidate in-plane rotations comprises:
- including a first instance of a given candidate scene reference point in-plane rotation of the candidate scene reference point in-plane rotations in the group and excluding a second instance of the given candidate scene reference point in-plane rotation from the group, wherein the first instance of the given candidate scene reference point in-plane rotation is generated based on a given model point pair of the model point pairs and the second instance is excluded from the group based on the second instance of the given candidate scene reference point in-plane rotation also being based on the given model point pair.
- Dependent Claims (15, 16, 17, 18, 19, 20)
Specification