3D position estimation of objects from a monocular camera using a set of known 3D points on an underlying surface
First Claim
1. A computer-implemented method, comprising:
determining three-dimensional (3D) locations of a plurality of reference points, wherein the plurality of reference points are points on one or more surfaces in an environment;
receiving a two-dimensional (2D) image of a portion of the environment from a camera on an autonomous vehicle, wherein the image contains an object;
projecting some of the plurality of reference points into the 2D image based on the 3D locations of the plurality of reference points and a location of the camera on the autonomous vehicle;
selecting certain reference points from the projected reference points such that the selected reference points form a polygon in the 2D image, wherein the polygon comprises a 2D shape that surrounds the object in the 2D image, wherein selecting the certain reference points comprises performing one or more 2D point-in-polygon tests to determine the selected reference points making up the polygon that surrounds the object in the 2D image;
determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points;
based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determining a 3D location of the object in the environment; and
providing instructions to control the autonomous vehicle based at least in part on the 3D location of the object in the environment.
Abstract
An example method may include determining a three-dimensional (3D) location of a plurality of reference points in an environment, receiving a two-dimensional (2D) image of a portion of the environment that contains an object, selecting certain reference points from the plurality of reference points that form a polygon when projected into the 2D image that contains at least a portion of the object, determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points, and based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determining a 3D location of the object in the environment.
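The abstract's first steps hinge on projecting known 3D reference points into the 2D image through the camera model. A minimal sketch of that projection, assuming a standard pinhole camera with illustrative intrinsics `K` and a world-to-camera pose `R`, `t` (these names and values are assumptions, not taken from the patent):

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole model.

    K: 3x3 camera intrinsics; R, t: world-to-camera rotation and translation.
    Returns pixel coordinates for points in front of the camera, plus a mask.
    """
    cam = (R @ points_3d.T).T + t          # world frame -> camera frame
    in_front = cam[:, 2] > 0               # keep only points ahead of the camera
    uv = (K @ cam[in_front].T).T           # apply intrinsics
    return uv[:, :2] / uv[:, 2:3], in_front  # perspective divide

# Example: identity pose, simple 640x480-style intrinsics
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0, 10.0],   # on the optical axis
                [1.0, -1.0, 5.0]])
uv, mask = project_points(pts, K, np.eye(3), np.zeros(3))
# uv[0] -> [320, 240] (image center), uv[1] -> [420, 140]
```

Only the projected points that survive this step are candidates for the polygon-selection stage described in the claims.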
15 Claims
1. A computer-implemented method, comprising:
determining three-dimensional (3D) locations of a plurality of reference points, wherein the plurality of reference points are points on one or more surfaces in an environment;
receiving a two-dimensional (2D) image of a portion of the environment from a camera on an autonomous vehicle, wherein the image contains an object;
projecting some of the plurality of reference points into the 2D image based on the 3D locations of the plurality of reference points and a location of the camera on the autonomous vehicle;
selecting certain reference points from the projected reference points such that the selected reference points form a polygon in the 2D image, wherein the polygon comprises a 2D shape that surrounds the object in the 2D image, wherein selecting the certain reference points comprises performing one or more 2D point-in-polygon tests to determine the selected reference points making up the polygon that surrounds the object in the 2D image;
determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points;
based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determining a 3D location of the object in the environment; and
providing instructions to control the autonomous vehicle based at least in part on the 3D location of the object in the environment.
- View Dependent Claims (2, 3, 4, 5, 6, 7)
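The 2D point-in-polygon tests recited in the selection step could be implemented with the standard even-odd (ray-casting) rule; the patent does not specify an algorithm, so this sketch is illustrative only:

```python
def point_in_polygon(pt, poly):
    """Even-odd rule: does 2D point pt lie inside the polygon poly?

    poly is a list of (x, y) vertices in order. A horizontal ray is cast
    from pt; an odd number of edge crossings means the point is inside.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Count the edge only if it straddles the ray's y-coordinate.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

square = [(0, 0), (4, 0), (4, 4), (0, 4)]
point_in_polygon((2, 2), square)   # True
point_in_polygon((5, 2), square)   # False
```

In the claimed method, a test of this kind would check whether the object's image location falls inside the polygon formed by candidate projected reference points.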
8. A vehicle, comprising:
a camera configured to capture a 2D image of a portion of an environment, wherein the image contains an object; and
a computing system configured to:
determine 3D locations of a plurality of reference points in the environment;
project some of the plurality of reference points into the 2D image based on the 3D locations of the plurality of reference points and a location of the camera;
select certain reference points from the projected reference points such that the selected reference points form a polygon in the 2D image, wherein the polygon comprises a 2D shape that surrounds the object in the 2D image, wherein the computing system is configured to select the certain reference points by performing one or more 2D point-in-polygon tests to determine the selected reference points making up the polygon that surrounds the object in the 2D image;
determine an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points;
based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determine a 3D location of the object in the environment; and
provide instructions to control the vehicle based at least in part on the 3D location of the object in the environment.
- View Dependent Claims (9, 10, 11, 12)
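The intersection step common to all three independent claims amounts to intersecting a ray (from the camera through the object's image location) with the plane of the 3D polygon formed by the selected reference points. A minimal ray-plane sketch, assuming the polygon's plane is defined by three of its vertices (the patent does not prescribe this particular formulation):

```python
import numpy as np

def ray_plane_intersection(origin, direction, p0, p1, p2):
    """Intersect a ray with the plane through 3D points p0, p1, p2.

    Returns the 3D intersection point, or None if the ray is parallel
    to the plane or the intersection lies behind the ray origin.
    """
    normal = np.cross(p1 - p0, p2 - p0)     # plane normal from two edges
    denom = np.dot(normal, direction)
    if abs(denom) < 1e-9:
        return None                          # ray parallel to the plane
    t = np.dot(normal, p0 - origin) / denom
    if t < 0:
        return None                          # plane is behind the camera
    return origin + t * direction

# Example: camera 2 m above a ground-plane polygon, looking straight down
origin = np.array([0.0, 0.0, 2.0])
direction = np.array([0.0, 0.0, -1.0])
p0 = np.array([0.0, 0.0, 0.0])
p1 = np.array([1.0, 0.0, 0.0])
p2 = np.array([0.0, 1.0, 0.0])
hit = ray_plane_intersection(origin, direction, p0, p1, p2)  # [0, 0, 0]
```

Per the claims, the resulting intersection point is then taken as the object's 3D location and used to generate vehicle control instructions.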
13. A non-transitory computer readable medium having stored therein instructions, that when executed by a computing system, cause the computing system to perform functions comprising:
determining three-dimensional (3D) locations of a plurality of reference points, wherein the plurality of reference points are points on one or more surfaces in an environment;
receiving a two-dimensional (2D) image of a portion of the environment from a camera on an autonomous vehicle, wherein the image contains an object;
projecting some of the plurality of reference points into the 2D image based on the 3D locations of the plurality of reference points and a location of the camera on the autonomous vehicle;
selecting certain reference points from the projected reference points such that the selected reference points form a polygon in the 2D image, wherein the polygon comprises a 2D shape that surrounds the object in the 2D image, wherein selecting the certain reference points comprises performing one or more 2D point-in-polygon tests to determine the selected reference points making up the polygon that surrounds the object in the 2D image;
determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points;
based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determining a 3D location of the object in the environment; and
providing instructions to control the autonomous vehicle based at least in part on the 3D location of the object in the environment.
- View Dependent Claims (14, 15)
Specification