SYSTEM AND METHOD FOR TRAINING A MODEL IN A PLURALITY OF NON-PERSPECTIVE CAMERAS AND DETERMINING 3D POSE OF AN OBJECT AT RUNTIME WITH THE SAME
Abstract
This invention provides a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene so as to acquire contemporaneous images of an object, both at training time and at runtime. Each camera assembly includes a non-perspective lens that acquires a respective non-perspective image for use in the process. Object features found in one of the acquired non-perspective images can be used to define the expected locations of object features in the second (or subsequent) non-perspective images based upon an affine transform, which is computed from at least a subset of the intrinsics and extrinsics of each camera. The locations of features in the second, and subsequent, non-perspective images can then be refined by searching within those expected locations. This approach can be used at training time, to generate the training model, and at runtime, operating on acquired images of runtime objects. The non-perspective cameras can employ telecentric lenses.
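To see why an affine transform can relate the two images, consider that a telecentric (non-perspective) camera projects linearly. The following numpy sketch is illustrative only, not the patented implementation: it assumes isotropic pixel scale as the intrinsics, and that object features lie near a known world plane (here z = 0), under which the image-to-image mapping is an exact 2D affine transform. All function names are hypothetical.

```python
import numpy as np

def ortho_projection(scale, R, t):
    """Telecentric camera model: image point u = P @ X + p0 for world point X.

    scale is the (assumed isotropic) pixel scale from the intrinsics; R, t are
    the extrinsics. Orthographic projection simply drops the depth row.
    """
    P = scale * R[:2, :]
    p0 = scale * np.asarray(t, float)[:2]
    return P, p0

def plane_affine(P1, p01, P2, p02):
    """2D affine map (M, c), u2 = M @ u1 + c, exact for world points on z = 0."""
    A1 = P1[:, :2]                 # projection restricted to the z = 0 plane
    A2 = P2[:, :2]
    M = A2 @ np.linalg.inv(A1)     # image 1 -> plane -> image 2
    c = p02 - M @ p01
    return M, c
```

Features found in the first camera's image can then be mapped through (M, c) to predict where to search in the second camera's image.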
24 Claims
orienting at least a first non-perspective camera assembly and a second non-perspective camera assembly with respect to the object so that a first non-perspective image is acquired by the first non-perspective camera assembly and a second non-perspective image is contemporaneously acquired by the second non-perspective camera assembly;
searching for 2D model features in the first non-perspective image based upon a first model;
searching for 2D model features in the second non-perspective image based upon a second model, in which the second model is a trained descendant of the first model; and
determining the 3D pose of the object based upon locations of the 2D model features in the first non-perspective image and the second non-perspective image.
- View Dependent Claims (2, 3, 4, 5, 6)
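The final step of the claim, determining 3D pose from 2D feature locations in the two images, can be sketched generically as linear triangulation followed by rigid alignment. This is an illustrative approach (orthographic cameras, the Kabsch algorithm), not the claimed method; all names are hypothetical.

```python
import numpy as np

def triangulate_ortho(u1, P1, p01, u2, P2, p02):
    """Least-squares 3D point from its 2D locations in two orthographic views.

    Each view contributes two linear equations u = P @ X + p0; the stacked
    4x3 system is solved for X.
    """
    A = np.vstack([P1, P2])
    b = np.concatenate([u1 - p01, u2 - p02])
    X, *_ = np.linalg.lstsq(A, b, rcond=None)
    return X

def rigid_pose(model_pts, scene_pts):
    """Kabsch algorithm: R, t minimizing ||scene - (R @ model + t)||."""
    mc, sc = model_pts.mean(axis=0), scene_pts.mean(axis=0)
    H = (model_pts - mc).T @ (scene_pts - sc)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = sc - R @ mc
    return R, t
```

Triangulating each matched 2D feature pair gives 3D scene points; aligning them to the model's 3D feature points yields the object's pose (R, t).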
7. A 3D vision system for determining a 3D pose of an object during runtime operation comprising:
a first non-perspective camera assembly and a second non-perspective camera assembly that respectively acquire a first non-perspective image and a second non-perspective image of the object contemporaneously; and
a searching tool that searches for 2D model features in the first non-perspective image based upon a first model and that searches for 2D model features in the second non-perspective image based upon a second model, in which the second model is a trained descendant of the first model, the 3D pose being based upon locations of the searched 2D model features in each of the first non-perspective image and the second non-perspective image.
- View Dependent Claims (8, 9, 10, 12, 13, 14, 15)
16. A method for training a model of an object for use during runtime operation of a 3D vision system comprising the steps of:
orienting at least a first non-perspective camera assembly and a second non-perspective camera assembly with respect to the object so that a first non-perspective image is acquired by the first non-perspective camera assembly and a second non-perspective image is contemporaneously acquired by the second non-perspective camera assembly;
providing an affine transform between the first non-perspective camera assembly and the second non-perspective camera assembly based on at least a subset of intrinsics and extrinsics of the first non-perspective camera assembly and intrinsics and extrinsics of the second non-perspective camera assembly;
defining a first model with a reference point in the first non-perspective image; and
generating a second model with the reference point based upon the affine transform.
- View Dependent Claims (17, 18, 19, 20)
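Given an affine transform (M, c) between the two image planes, generating the second model from the first reduces to mapping the reference point and feature positions into the second image. A minimal sketch, with a hypothetical dictionary layout for the trained model:

```python
import numpy as np

def derive_second_model(first_model, M, c):
    """Generate the second (descendant) model by mapping the first model's
    2D reference point and feature positions through the affine transform
    u2 = M @ u1 + c (M is 2x2, c is a 2-vector)."""
    return {
        "reference": M @ first_model["reference"] + c,
        "features": first_model["features"] @ M.T + c,  # row-wise mapping
    }
```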
21. A 3D vision system comprising:
a first non-perspective camera assembly and a second non-perspective camera assembly that respectively acquire a first non-perspective image and a second non-perspective image of an object contemporaneously; and
a training process that includes:
(a) a search process that locates a first model pattern in the first non-perspective image;
(b) an affine transform process that generates an affine-transformed pattern from the first pattern based upon at least a subset of intrinsics and extrinsics of each of the first non-perspective camera assembly and the second non-perspective camera assembly; and
(c) a registration process that registers the affine-transformed pattern with respect to the second non-perspective image, determines a locale in the second non-perspective image of a pattern associated with the affine-transformed pattern, and defines a second model pattern from the associated pattern.
- View Dependent Claims (22, 23, 24)
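The registration step (c), which determines the locale of the transformed pattern in the second image, can be illustrated with an exhaustive normalized cross-correlation search. This is a teaching sketch only; a production vision tool would use a far more efficient pattern-matching search. Grayscale images are assumed to be 2D numpy arrays, and the function name is hypothetical.

```python
import numpy as np

def ncc_locate(image, template):
    """Exhaustive normalized cross-correlation template search.

    Returns ((row, col), score) of the best-matching locale; score lies in
    [-1, 1], with 1 an exact match up to brightness gain and offset.
    """
    th, tw = template.shape
    t = template - template.mean()
    tn = np.linalg.norm(t)
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            w = image[r:r + th, c:c + tw]
            wz = w - w.mean()
            denom = np.linalg.norm(wz) * tn
            if denom == 0:
                continue  # flat window: correlation undefined, skip it
            score = float((wz * t).sum() / denom)
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc, best_score
```

The second model pattern would then be extracted from the image window at the returned locale.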