Modeling and video projection for augmented virtual environments
First Claim
1. A method comprising:
- obtaining a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment;
- identifying in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to the three dimensional model, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information;
- placing a surface that corresponds to the moving region in the three dimensional model, wherein placing the surface comprises casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region;
- projecting the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and
- visualizing the three dimensional model with the projected real-time video imagery;
- wherein identifying a region in motion in real time comprises subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground object;
- wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.
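The motion-detection steps recited above (a single-distribution background maintained as a time average, background subtraction, a histogram-based threshold, and a noise filter) can be sketched roughly as follows. All function names, the running-average rate `alpha`, the valley-seeking threshold rule, and the 3x3 majority-vote filter are illustrative assumptions, not details taken from the patent:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Single-distribution background model: a running time average."""
    return (1.0 - alpha) * background + alpha * frame

def histogram_threshold(diff, bins=256):
    """Pick a threshold from the histogram of |frame - background|.

    A simple valley-seeking rule: take the first bin after the histogram
    peak whose count drops below 1% of the peak count.
    """
    hist, edges = np.histogram(diff, bins=bins, range=(0.0, diff.max() + 1e-6))
    peak = int(np.argmax(hist))
    cutoff = 0.01 * hist[peak]
    for i in range(peak + 1, bins):
        if hist[i] < cutoff:
            return edges[i]
    return edges[-1]

def median3x3(mask):
    """3x3 median (majority vote) on a binary mask to suppress speckle noise."""
    h, w = mask.shape
    padded = np.pad(mask.astype(np.uint8), 1)
    stack = np.stack([padded[r:r + h, c:c + w]
                      for r in range(3) for c in range(3)])
    return stack.sum(axis=0) >= 5

def detect_foreground(frame, background):
    """Subtract the background and return a denoised binary foreground mask."""
    diff = np.abs(frame - background)
    t = histogram_threshold(diff)
    return median3x3(diff > t)
```

In this sketch an isolated hot pixel survives the threshold but is removed by the majority-vote filter, while the interior of a genuinely moving region passes both stages.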
Abstract
Systems and techniques to implement augmented virtual environments. In one implementation, the technique includes: generating a three dimensional (3D) model of an environment from range sensor information representing a height field for the environment, tracking orientation information of image sensors in the environment with respect to the 3D model in real-time, projecting real-time video from the image sensors onto the 3D model based on the tracked orientation information, and visualizing the 3D model with the projected real-time video. Generating the 3D model can involve parametric fitting of geometric primitives to the range sensor information. The technique can also include: identifying in real time a region in motion with respect to a background image in real-time video, the background image being a single distribution background dynamically modeled from a time average of the real-time video, and placing a surface that corresponds to the moving region in the 3D model.
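The surface placement the abstract describes (casting a ray from the camera's optical center through the bottom point of the moving region and intersecting it with the ground plane) can be sketched as below. The pinhole-camera setup, the similar-triangles sizing rule, and every name here are illustrative assumptions rather than the patent's implementation:

```python
import numpy as np

def ray_ground_intersection(C, d, ground_z=0.0):
    """Intersect the ray C + s*d with the horizontal ground plane z = ground_z."""
    s = (ground_z - C[2]) / d[2]
    return C + s * d

def place_billboard(C, K, R, bottom_px, region_w_px, region_h_px):
    """Place an upright billboard surface for a moving region.

    C          -- camera optical center in world coordinates
    K          -- 3x3 intrinsic matrix
    R          -- 3x3 camera-to-world rotation
    bottom_px  -- (u, v) pixel at the bottom of the moving region
    region_*   -- region extent in pixels

    Returns (foot, width, height): the billboard's foot point on the
    ground plane and its world-space size from similar triangles.
    """
    # Back-project the bottom pixel into a world-space ray direction.
    uv1 = np.array([bottom_px[0], bottom_px[1], 1.0])
    d = R @ (np.linalg.inv(K) @ uv1)
    foot = ray_ground_intersection(C, d)
    # Scale the pixel extent to world units using the camera-to-foot distance.
    depth = np.linalg.norm(foot - C)
    f = K[0, 0]
    width = region_w_px * depth / f
    height = region_h_px * depth / f
    return foot, width, height
```

The orientation of the billboard (not computed here) would typically be chosen to face the projecting camera so the video texture lands on it without distortion.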
15 Claims
1. A method comprising:
- obtaining a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment;
- identifying in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to the three dimensional model, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information;
- placing a surface that corresponds to the moving region in the three dimensional model, wherein placing the surface comprises casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region;
- projecting the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and
- visualizing the three dimensional model with the projected real-time video imagery;
- wherein identifying a region in motion in real time comprises subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground object;
- wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.
- View Dependent Claims (2, 3, 4, 5)
6. An augmented virtual environment system comprising:
- an object detection and tracking component that identifies in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information, and places a surface that corresponds to the moving region with respect to the three dimensional model, wherein the object detection and tracking component places the surface by performing operations comprising casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region;
- a dynamic fusion imagery projection component that projects the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and
- a visualization sub-system that visualizes the three dimensional model with the projected real-time video imagery;
- wherein the object detection and tracking component identifies the moving region by performing operations comprising subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground object;
- wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.
- View Dependent Claims (7, 8, 9, 10)
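The validation step recited in this claim (correlation matching between identified objects in neighboring image frames) can be sketched as a normalized cross-correlation check, assuming inter-frame motion is small enough that each object can be compared against the same window of the neighboring frame. The function names, the bounding-box convention, and the 0.8 acceptance threshold are all illustrative assumptions:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def validate_objects(objects, prev_frame, frame, threshold=0.8):
    """Keep only detected objects whose image patch correlates with the
    corresponding window in the neighboring frame; weakly correlated
    detections are discarded as noise or transient artifacts."""
    validated = []
    for (y0, y1, x0, x1) in objects:
        score = ncc(frame[y0:y1, x0:x1], prev_frame[y0:y1, x0:x1])
        if score >= threshold:
            validated.append((y0, y1, x0, x1))
    return validated
```

A fuller tracker would search a small neighborhood for the best-matching offset instead of comparing fixed windows, but the acceptance test is the same.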
11. A machine-readable storage device embodying information indicative of instructions for causing one or more machines to perform operations comprising:
- obtaining a three dimensional model of a three dimensional environment, the three dimensional model generated from range sensor information representing a height field for the three dimensional environment;
- identifying in real time a region in motion with respect to a background image in real-time video imagery information from at least one image sensor having associated position and orientation information with respect to the three dimensional model, the background image comprising a single distribution background dynamically modeled from a time average of the real-time video imagery information;
- placing a surface that corresponds to the moving region in the three dimensional model, wherein placing the surface comprises casting a ray from an optical center, corresponding to the real-time video imagery information, to a bottom point of the moving region in an image plane in the three dimensional model, and determining a position, an orientation and a size of the surface based on the ray, a ground plane in the three dimensional model, and the moving region;
- projecting the real-time video imagery information onto the three dimensional model, including the surface, based on the position and orientation information; and
- visualizing the three dimensional model with the projected real-time video imagery;
- wherein identifying a region in motion in real time comprises subtracting the background image from the real-time video imagery information, identifying a foreground object in the subtracted real-time video imagery information, validating the foreground object by correlation matching between identified objects in neighboring image frames, and outputting the validated foreground object;
- wherein identifying a foreground object comprises identifying the foreground object in the subtracted real-time video imagery information using a histogram-based threshold and a noise filter.
- View Dependent Claims (12, 13, 14, 15)
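The projection step common to all three independent claims (projecting real-time video onto the 3D model based on the sensor's position and orientation) amounts to projective texture mapping: each model vertex is projected through the calibrated camera to obtain texture coordinates into the current video frame. The following is a minimal sketch under a standard pinhole model; the function name and the visibility test are assumptions, and a real renderer would also resolve occlusion (e.g., with a depth map from the camera's viewpoint):

```python
import numpy as np

def projective_tex_coords(vertices, K, R, t, width, height):
    """Map world-space vertices to normalized texture coordinates by
    projecting them through the calibrated camera (projective texturing).

    vertices -- (N, 3) world-space points
    K        -- 3x3 intrinsic matrix; R, t -- world-to-camera pose
    Returns (uv, visible): (N, 2) coordinates in [0, 1] and a mask of
    vertices that project in front of the camera and inside the frame.
    """
    cam = (R @ vertices.T).T + t           # world -> camera coordinates
    pix = (K @ cam.T).T
    uv = pix[:, :2] / pix[:, 2:3]          # perspective divide
    uv[:, 0] /= width                      # normalize to [0, 1]
    uv[:, 1] /= height
    visible = (pix[:, 2] > 0) & np.all((uv >= 0) & (uv <= 1), axis=1)
    return uv, visible
```

Vertices behind the camera or outside the frustum get `visible = False` and would keep the model's base texture instead of receiving video.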
Specification