ESTIMATION OF OBJECT PROPERTIES IN 3D WORLD
First Claim
1. A method for modeling objects within two-dimensional (2D) video data by three-dimensional (3D) models as a function of object type and motion, the method comprising:
calibrating a 2D image field of view of a video data input of a camera to three spatial dimensions of a 3D modeling cube via a user interface of an application executing on a processor, wherein each of the three spatial dimensions of the 3D modeling cube is at a right angle to each of the other two;
in response to observing an image of an object in motion in the 2D image field of view of the video data input, computing an initial 3D location of the observed 2D object image via the processor as an intersection between a ground plane of the calibrated camera field of view and a backward projected line passing through a center of the calibrated camera and a point on the 2D object image within a focal plane in the 2D image field of view of the video data input at an initial time;
computing a second 3D location of the observed 2D object image via the processor as an intersection between the ground plane and another backward projected line passing through the center of the calibrated camera and the point on the 2D object image within the focal plane at a second time that is subsequent to the initial time;
determining a heading direction of the object as a function of the calibrating of the camera and a movement of the 2D object image from the computed initial 3D location to the computed second 3D location;
replacing the 2D object image in the video data input with one of a plurality of object-type 3D polygonal models that has a projected bounding box that best matches a bounding box of an image blob of the 2D object image relative to the others of the 3D polygonal models, the replacing further orienting the selected object-type 3D polygonal model in the determined heading direction, wherein the object type and projected bounding box ratio of each of the plurality of object-type 3D polygonal models differ from the object types and projected bounding box ratios of the others of the 3D polygonal models;
scaling the bounding box of the replacing polygonal 3D model to fit the object image blob bounding box; and
rendering the scaled replacing polygonal 3D model with image features extracted from the 2D image data as a function of the calibrated dimensions of the 3D modeling cube.
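The two location-computation clauses above reduce to a standard ray/ground-plane intersection: back-project the image point through the camera center and find where that ray meets the ground. A minimal sketch, assuming a pinhole camera with intrinsic matrix K, world-to-camera pose (R, t), and a ground plane at z = 0; all names and conventions are illustrative assumptions, not the patent's notation:

```python
import numpy as np

def back_project_to_ground(pixel, K, R, t):
    """Intersect the back-projected ray through the camera center and a
    2D pixel with the ground plane z = 0, returning the 3D world point.

    K: 3x3 intrinsic matrix; R, t: world-to-camera rotation/translation.
    """
    # Camera center in world coordinates: C = -R^T t
    C = -R.T @ t
    # Ray direction in world coordinates for the homogeneous pixel (u, v, 1)
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    d = R.T @ np.linalg.inv(K) @ uv1
    # Solve C_z + s * d_z = 0 for the parameter s along the ray
    s = -C[2] / d[2]
    return C + s * d
```

Calling this at the initial and subsequent frame times yields the two 3D locations the claim recites.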
Abstract
Objects within two-dimensional (2D) video data are modeled by three-dimensional (3D) models as a function of object type and motion through manually calibrating a 2D image to the three spatial dimensions of a 3D modeling cube. Calibrated 3D locations of an object in motion in the 2D image field of view of a video data input are computed and used to determine a heading direction of the object as a function of the camera calibration and the determined movement between the computed 3D locations. The 2D object image is replaced in the video data input with an object-type 3D polygonal model having a projected bounding box that best matches a bounding box of an image blob of the object, with the model oriented in the determined heading direction. The bounding box of the replacing model is then scaled to fit the object image blob bounding box, and the scaled model is rendered with extracted image features.
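The heading determination the abstract describes is, geometrically, the normalized ground-plane displacement between the two computed 3D locations. A minimal sketch; the function name and the choice of the (x, y) components as the ground plane are assumptions:

```python
import numpy as np

def heading_direction(p0, p1, eps=1e-9):
    """Unit heading vector on the ground plane between two 3D locations.

    p0, p1: 3D points at the initial and subsequent times; only the
    ground-plane (x, y) components contribute to heading. Returns None
    when the object has not moved measurably between the two times.
    """
    delta = np.asarray(p1, float)[:2] - np.asarray(p0, float)[:2]
    norm = np.linalg.norm(delta)
    if norm < eps:
        return None
    return delta / norm
```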
25 Claims
1. A method for modeling objects within two-dimensional (2D) video data by three-dimensional (3D) models as a function of object type and motion, the method comprising the steps set forth above under First Claim. - View Dependent Claims (2, 3, 4, 5, 6, 7)
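The manual calibration step recited in the claims can be approximated, for the ground plane alone, by fitting a homography to user-specified correspondences between image pixels and ground coordinates (for example, the corners of the modeling cube's base). A standard direct linear transform (DLT) sketch; the correspondence format and function name are assumptions:

```python
import numpy as np

def ground_homography(image_pts, ground_pts):
    """Estimate the 3x3 homography mapping image pixels (u, v) to
    ground-plane coordinates (x, y) from >= 4 point correspondences,
    via the direct linear transform (smallest singular vector of A).
    """
    A = []
    for (u, v), (x, y) in zip(image_pts, ground_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)
```

Applying the homography to a homogeneous pixel and dividing by the third coordinate gives the ground-plane location; a full pose calibration of the cube would need intrinsics as well.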
8. A system, comprising:
a processing unit, a computer readable memory, and a computer readable storage medium;
wherein the processing unit, when executing program instructions stored on the computer readable storage medium via the computer readable memory:
calibrates a 2D image field of view of a video data input of a camera to three spatial dimensions of a 3D modeling cube specified manually by a user via a user interface, wherein each of the three spatial dimensions of the 3D modeling cube is at a right angle to each of the other two;
in response to an observed image of an object in motion in the 2D image field of view of the video data input, computes an initial 3D location of the observed 2D object image as an intersection between a ground plane of the calibrated camera field of view and a backward projected line passing through a center of the calibrated camera and a point on the 2D object image within a focal plane in the 2D image field of view of the video data input at an initial time;
computes a second 3D location of the observed 2D object image as an intersection between the ground plane and another backward projected line passing through the center of the calibrated camera and the point on the 2D object image within the focal plane at a second time that is subsequent to the initial time;
determines a heading direction of the object as a function of the manual calibration of the camera and a movement of the 2D object image from the computed initial 3D location to the computed second 3D location;
replaces the 2D object image in the video data input with one of a plurality of object-type 3D polygonal models that has a projected bounding box that best matches a bounding box of an image blob of the 2D object image relative to the others of the 3D polygonal models, the replacing further orienting the selected object-type 3D polygonal model in the determined heading direction, wherein the object type and projected bounding box ratio of each of the plurality of object-type 3D polygonal models differ from the object types and projected bounding box ratios of the others of the 3D polygonal models;
scales the bounding box of the replacing polygonal 3D model to fit the object image blob bounding box; and
renders the scaled replacing polygonal 3D model with image features that are extracted from the 2D image data as a function of the calibrated dimensions of the 3D modeling cube. - View Dependent Claims (9, 10, 11, 12, 13)
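The model-selection clause compares each candidate model's projected bounding-box aspect ratio against the blob's and takes the closest match. A minimal sketch with hypothetical data structures (a (width, height) pair per box, a dict of models keyed by type name):

```python
def select_model(blob_box, models):
    """Pick the object-type model whose projected bounding-box aspect
    ratio best matches the image blob's aspect ratio.

    blob_box: (width, height) of the blob's bounding box in pixels.
    models: dict mapping an object-type name to its projected
            bounding-box (width, height).
    Returns the best-matching object-type name.
    """
    blob_ratio = blob_box[0] / blob_box[1]
    return min(models, key=lambda m: abs(models[m][0] / models[m][1] - blob_ratio))
```

More elaborate matching (e.g., area or reprojection error) is possible; aspect ratio is the criterion the claims name.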
14. An article of manufacture, comprising:
a computer readable storage medium having computer readable program code embodied therewith, the computer readable program code comprising instructions that, when executed by a computer processor, cause the computer processor to:
calibrate a 2D image field of view of a video data input of a camera to three spatial dimensions of a 3D modeling cube specified manually by a user via a user interface, wherein each of the three spatial dimensions of the 3D modeling cube is at a right angle to each of the other two;
in response to an observed image of an object in motion in the 2D image field of view of the video data input, compute an initial 3D location of the observed 2D object image as an intersection between a ground plane of the calibrated camera field of view and a backward projected line passing through a center of the calibrated camera and a point on the 2D object image within a focal plane in the 2D image field of view of the video data input at an initial time;
compute a second 3D location of the observed 2D object image as an intersection between the ground plane and another backward projected line passing through the center of the calibrated camera and the point on the 2D object image within the focal plane at a second time that is subsequent to the initial time;
determine a heading direction of the object as a function of the manual calibration of the camera and a movement of the 2D object image from the computed initial 3D location to the computed second 3D location;
replace the 2D object image in the video data input with one of a plurality of object-type 3D polygonal models that has a projected bounding box that best matches a bounding box of an image blob of the 2D object image relative to the others of the 3D polygonal models, the replacing further orienting the selected object-type 3D polygonal model in the determined heading direction, wherein the object type and projected bounding box ratio of each of the plurality of object-type 3D polygonal models differ from the object types and projected bounding box ratios of the others of the 3D polygonal models;
scale the bounding box of the replacing polygonal 3D model to fit the object image blob bounding box; and
render the scaled replacing polygonal 3D model with image features that are extracted from the 2D image data as a function of the calibrated dimensions of the 3D modeling cube. - View Dependent Claims (15, 16, 17, 18, 19)
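The scaling clause fits the chosen model's projected bounding box to the blob's box. Per-axis scale factors are one straightforward reading; whether the patent intends per-axis or uniform scaling is not stated here, so treat this as a sketch:

```python
def scale_to_fit(model_box, blob_box):
    """Per-axis scale factors (sx, sy) that fit the selected model's
    projected bounding box to the image blob's bounding box.

    model_box, blob_box: (width, height) pairs in pixels.
    """
    (mw, mh), (bw, bh) = model_box, blob_box
    return bw / mw, bh / mh
```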
20. A method of providing a service for modeling objects within two-dimensional (2D) video data by three-dimensional (3D) models as a function of object type and motion, the method comprising providing:
a camera calibration interface that enables a user to calibrate a 2D image field of view of a video data input of a camera to three spatial dimensions of a 3D modeling cube specified manually by the user, wherein each of the three spatial dimensions of the 3D modeling cube is at a right angle to each of the other two;
a 3D location determiner that, in response to an observed image of an object in motion in the 2D image field of view of the video data input, computes an initial 3D location of the observed 2D object image as an intersection between a ground plane of the calibrated camera field of view and a backward projected line passing through a center of the calibrated camera and a point on the 2D object image within a focal plane in the 2D image field of view of the video data input at an initial time, and computes a second 3D location of the observed 2D object image as an intersection between the ground plane and another backward projected line passing through the center of the calibrated camera and the point on the 2D object image within the focal plane at a second time that is subsequent to the initial time;
a heading direction determiner that determines a heading direction of the object as a function of the manual calibration of the camera and a movement of the 2D object image from the computed initial 3D location to the computed second 3D location;
a model selector that replaces the 2D object image in the video data input with one of a plurality of object-type 3D polygonal models that has a projected bounding box that best matches a bounding box of an image blob of the 2D object image relative to the others of the 3D polygonal models, the replacing further orienting the selected object-type 3D polygonal model in the determined heading direction, wherein the object type and projected bounding box ratio of each of the plurality of object-type 3D polygonal models differ from the object types and projected bounding box ratios of the others of the 3D polygonal models;
a model scaler that scales the bounding box of the replacing polygonal 3D model to fit the object image blob bounding box; and
a feature extractor that renders the scaled replacing polygonal 3D model with image features that are extracted from the 2D image data as a function of the calibrated dimensions of the 3D modeling cube. - View Dependent Claims (21, 22, 23, 24, 25)
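Orienting the selected model in the determined heading direction amounts, on a flat ground plane, to a yaw rotation about the vertical axis. A sketch assuming the model's forward axis is +x and z is up; both conventions are assumptions:

```python
import numpy as np

def yaw_rotation(heading):
    """3x3 rotation that turns a model's forward (+x) axis to point
    along a ground-plane heading vector (hx, hy), rotating about z.
    """
    hx, hy = heading
    yaw = np.arctan2(hy, hx)
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```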
Specification