System and method for improved scoring of 3D poses and spurious point removal in 3D image data
First Claim
1. A method in a vision system for estimating a degree of match of a 3D alignment pose of a runtime 3D point cloud with respect to a trained model 3D point cloud comprising the steps of:
- scoring, with a vision system processor, a match of a candidate pose of the runtime 3D point cloud relative to the trained model 3D point cloud, including providing a visibility check that comprises (a) receiving an optical center of a 3D camera, (b) receiving the trained model 3D point cloud, (c) receiving the runtime 3D point cloud, and (d) constructing a plurality of line segments from the optical center to a plurality of 3D points in the trained model 3D point cloud or the runtime 3D point cloud at the runtime candidate pose; and
determining, based upon a location of the 3D points along respective line segments, whether to exclude or include the 3D points in the step of scoring.
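The visibility check recited above can be sketched in code: a line segment is constructed from the optical center to each model point at the candidate pose, and a point is excluded from scoring when a runtime point lies on that segment between the camera and the point (i.e., occludes it). This is a minimal illustration with hypothetical function names and tolerances, not the patent's actual implementation:

```python
import math

def visible_points(optical_center, model_points_at_pose, runtime_points,
                   angle_tol=0.01, range_tol=1e-3):
    """Return the model points to include in scoring. A point is excluded
    when a runtime point lies along the line segment from the optical
    center to it, closer to the camera (an occluder)."""
    ox, oy, oz = optical_center
    included = []
    for p in model_points_at_pose:
        # Line segment from the optical center to the model point.
        seg = (p[0] - ox, p[1] - oy, p[2] - oz)
        seg_len = math.sqrt(sum(c * c for c in seg))
        occluded = False
        for q in runtime_points:
            v = (q[0] - ox, q[1] - oy, q[2] - oz)
            v_len = math.sqrt(sum(c * c for c in v))
            if v_len == 0 or v_len >= seg_len - range_tol:
                continue  # not strictly between camera and model point
            # Small angle between the two rays => q lies on the segment.
            cosang = sum(a * b for a, b in zip(seg, v)) / (seg_len * v_len)
            if cosang > math.cos(angle_tol):
                occluded = True
                break
        if not occluded:
            included.append(p)
    return included
```

For example, with the optical center at the origin, a runtime point at (0, 0, 1) occludes a model point at (0, 0, 2), so only unobstructed points survive the check.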
Abstract
This invention provides a system and method for estimating the match of a 3D alignment pose of a runtime 3D point cloud relative to a trained model 3D point cloud. It includes scoring a match of a candidate pose of the runtime 3D point cloud relative to the trained model 3D point cloud, including a visibility check that comprises (a) receiving a 3D camera optical center; (b) receiving the trained model 3D point cloud; (c) receiving the runtime 3D point cloud; and (d) constructing a plurality of line segments from the optical center to a plurality of 3D points in the 3D point cloud at the runtime candidate pose. A system and method for determining an accurate representation of a 3D imaged object by omitting spurious points from a composite point cloud, based on the presence or absence of such points in a given number of point clouds, is also provided.
20 Claims
1. A method in a vision system for estimating a degree of match of a 3D alignment pose of a runtime 3D point cloud with respect to a trained model 3D point cloud comprising the steps of:
scoring, with a vision system processor, a match of a candidate pose of the runtime 3D point cloud relative to the trained model 3D point cloud, including providing a visibility check that comprises (a) receiving an optical center of a 3D camera, (b) receiving the trained model 3D point cloud, (c) receiving the runtime 3D point cloud, and (d) constructing a plurality of line segments from the optical center to a plurality of 3D points in the trained model 3D point cloud or the runtime 3D point cloud at the runtime candidate pose; and

determining, based upon a location of the 3D points along respective line segments, whether to exclude or include the 3D points in the step of scoring.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9
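Once points failing the visibility check have been excluded, the scoring step can be illustrated as a coverage-style metric: the fraction of visible model points that have a nearby runtime point at the candidate pose. This is a hedged sketch (the function name, distance metric, and tolerance are assumptions for illustration; the patent does not specify this formula):

```python
import math

def pose_match_score(visible_model_points, runtime_points, dist_tol=0.05):
    """Score a candidate pose as the fraction of visible model points
    matched by a runtime point within dist_tol. Points excluded by the
    visibility check should not appear in visible_model_points."""
    if not visible_model_points:
        return 0.0
    matched = 0
    for p in visible_model_points:
        # A model point is matched if any runtime point is close enough.
        if any(math.dist(p, q) <= dist_tol for q in runtime_points):
            matched += 1
    return matched / len(visible_model_points)
```

Excluding occluded points before scoring prevents a correct pose from being penalized for model points the camera could never have observed.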
10. A system in a vision system for estimating a degree of match of a 3D alignment pose of a runtime 3D point cloud with respect to a trained model 3D point cloud comprising:
a scoring process, operating in a vision system processor, that scores a match of a candidate pose of the runtime 3D point cloud relative to the trained model 3D point cloud, including a visibility check process that is arranged to (a) receive an optical center of a 3D camera, (b) receive the trained model 3D point cloud, (c) receive the runtime 3D point cloud, and (d) construct a plurality of line segments from the optical center to a plurality of 3D points in the trained model 3D point cloud or the runtime 3D point cloud at the runtime candidate pose; and

a determination process that, based upon a location of the 3D points along respective line segments, determines whether to exclude or include the 3D points.

View Dependent Claims: 11, 12, 13, 14, 15, 16
17. A system for removing spurious points from a 3D image of an object comprising:
a plurality of 3D cameras arranged to acquire images of an object within a working section thereof from a plurality of respective points of view;

one or more vision system processors configured to operate:

a visibility process that (a) receives a measured 3D point cloud from a 3D camera in the plurality of 3D cameras, (b) generates presumed poses for the object relative to the measured 3D point cloud, and (c) uses information relative to the location and orientation of the 3D camera with respect to the object to determine visibility of points of the 3D point cloud; and

a composite point cloud generation process that combines the 3D point clouds from the plurality of 3D cameras into a composite 3D point cloud free of spurious points by omitting points that are not corroborated by appearing in a predetermined number of point clouds in which such points are expected to be visible, based on the visibility process.

View Dependent Claims: 18, 19, 20
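The spurious-point removal recited in claim 17 can be sketched as a corroboration count: a candidate point is kept only if a nearby measured point appears in at least a predetermined number of the clouds in which the visibility process says the point should be visible. This is a minimal illustration under assumed names and parameters (`expected_visible`, `min_corroborations`, `dist_tol`), not the patented implementation:

```python
import math

def remove_spurious(candidate_points, measured_clouds, expected_visible,
                    min_corroborations=2, dist_tol=0.01):
    """Build a composite cloud free of spurious points.

    measured_clouds: one list of 3D points per camera.
    expected_visible(cam_idx, point) -> bool: the visibility process,
    derived from each camera's location and orientation w.r.t. the object.
    A point is kept only if corroborated (a measured point lies within
    dist_tol) in at least min_corroborations clouds where it is expected
    to be visible."""
    kept = []
    for p in candidate_points:
        corroborations = 0
        for i, cloud in enumerate(measured_clouds):
            if not expected_visible(i, p):
                continue  # this camera could not see p; don't count it
            if any(math.dist(p, q) <= dist_tol for q in cloud):
                corroborations += 1
        if corroborations >= min_corroborations:
            kept.append(p)
    return kept
```

Note that cameras which could not have seen a point do not count against it; only the clouds in which the point is expected to be visible participate in the corroboration count.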
Specification