System and method for mixing a scene with a virtual scenario
Abstract
A system and method for mixing a scene with a virtual scenario. An image capturing unit is arranged to capture at least one image so as to cover the scene from a first viewpoint. An image representation generation unit is arranged to generate at least one image representation based on the captured image. A game engine unit is arranged to generate a virtual scenario. An image processing unit is arranged to adapt the at least one image representation based on the generated virtual scenario so as to provide a virtual video sequence.
Claims (15)
1. A system for providing a virtual video sequence, comprising:
- an image capturing unit configured to capture images so as to cover a scene viewed from a first viewpoint in a global geographical coordinate system, wherein the first viewpoint is a viewpoint of the scene from a first position in the global geographical coordinate system;
- an image representation generation unit configured to generate at least one image representation based on said captured images, said image representation comprising an image depth map comprising information about a distance to objects in said captured images provided in the global geographical coordinate system and texture data;
- a game engine unit configured to generate a virtual scenario comprising virtual objects in a coordinate system aligned with the global geographical coordinate system, wherein generating the virtual scenario comprises forming depth map data and texture data for at least one generated virtual object;
- a warping unit configured to warp the image depth map to a warped depth map related to a second viewpoint so as to form a warped representation of the scene viewed from the second viewpoint, the warped depth map comprising texture data, wherein the second viewpoint is obtained from a position of the coordinate system aligned with the global geographical coordinate system that is different from the first position of the first viewpoint;
- an image processing unit configured to mix the scene with the virtual scenario based on the warped representation of the scene viewed from the second viewpoint and on the generated virtual scenario;
- wherein the second viewpoint is a virtual viewpoint that is a view at a distance from the image capturing unit and is arbitrarily chosen within an area around the first viewpoint;
- wherein the image processing unit is configured to adapt the warped depth map comprising texture data based on the formed depth map data for said generated virtual scenario so as to provide the virtual video sequence comprising the warped representation of the scene viewed from the second viewpoint mixed with the virtual scenario; and
- a position/posture estimation unit configured to estimate position and/or posture information at said second viewpoint in relation to said first viewpoint;
- wherein the position/posture estimation unit is configured to determine a field of view from the second viewpoint;
- wherein the position/posture estimation unit is configured to elect one or a plurality of images with associated depth maps corresponding to the determined field of view;
- wherein the warping unit is configured to process the depth maps of the images elected by the position/posture estimation unit; and
- wherein the warping unit is configured to adapt the depth map for the second viewpoint as it is moving, based on updated images with the associated depth maps elected by the position/posture estimation unit.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9)
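The warping unit of claim 1 can be understood as depth-image-based rendering: each pixel of the first viewpoint is back-projected with its depth, transformed into the second camera's frame, and re-projected with a z-buffer. The following is a minimal sketch under a pinhole camera model; the function name, parameters, and array conventions are illustrative and not taken from the patent.

```python
import numpy as np

def warp_to_viewpoint(depth, texture, K, R, t):
    """Forward-warp a depth map (h x w, metric depths) and its single-channel
    texture from a first viewpoint to a second one, given intrinsics K and
    the relative pose (R, t) of the second camera.
    Pixels that receive no sample stay at depth 0 (holes)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    # Back-project every pixel of the first view to a 3-D point.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    # Express the points in the second camera's frame and re-project.
    pts2 = R @ pts + np.asarray(t, dtype=float).reshape(3, 1)
    proj = K @ pts2
    z2 = pts2[2]
    warped_depth = np.zeros_like(depth, dtype=float)
    warped_tex = np.zeros_like(texture, dtype=float)
    flat_tex = texture.reshape(-1)
    # Visit samples far-to-near so nearer surfaces overwrite (z-buffering).
    for i in np.argsort(-z2):
        if z2[i] <= 1e-9:
            continue  # behind the camera, or a hole in the source depth map
        x = int(np.round(proj[0, i] / z2[i]))
        y = int(np.round(proj[1, i] / z2[i]))
        if 0 <= x < w and 0 <= y < h:
            warped_depth[y, x] = z2[i]
            warped_tex[y, x] = flat_tex[i]
    return warped_depth, warped_tex
```

With an identity rotation and zero translation the warp is the identity, which makes a convenient sanity check; real viewpoint changes additionally leave disocclusion holes (depth 0) that a production renderer would inpaint.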
10. A method for generating a virtual scenario, the method comprising:
- providing a representation of a scene related to a first viewpoint, said representation comprising image depth map data and texture data, the image depth map comprising information about a distance to objects in images provided in a global geographical coordinate system, wherein the first viewpoint is a viewpoint of the scene from a first position in the global geographical coordinate system;
- providing a representation of the scene viewed from a second viewpoint that is a virtual viewpoint different from the first viewpoint, wherein providing a warped image representation of the scene viewed from the second viewpoint comprises warping at least the depth map data of the representation of the scene so as to provide a warped depth map corresponding to the second viewpoint and comprising texture data;
- generating the virtual scenario utilizing a game engine unit, wherein the virtual scenario comprising virtual objects is generated in a coordinate system aligned with a coordinate system of the image depth map, wherein generating the virtual scenario comprises forming depth map data and texture data for at least one generated virtual object;
- mixing the scene with the virtual scenario based on the representation of the scene viewed from the second viewpoint and on the generated virtual scenario;
- choosing the second viewpoint arbitrarily within an area around the first viewpoint, wherein the second viewpoint is from a position different from the first position of the first viewpoint, wherein mixing the scene with the virtual scenario comprises adapting the warped image representation based on the formed depth map data for said generated virtual scenario so as to provide a virtual video sequence comprising the warped representation of the scene viewed from the second viewpoint mixed with the virtual scenario;
- estimating position and posture information at said second viewpoint in relation to said first viewpoint;
- determining a field of view from the second viewpoint;
- electing at least one image with associated depth maps corresponding to the determined field of view; and
- processing the depth maps of the elected images so as to provide the representation of the scene viewed from the second viewpoint.
(Dependent claims: 11)
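The mixing step of claim 10 amounts to a per-pixel occlusion test between the warped real scene and the virtual scenario's rendered depth. A minimal sketch follows; the function name and the convention that depth 0 marks "no sample" (e.g. a warping hole) are assumptions for illustration, not specified by the patent.

```python
import numpy as np

def mix_with_virtual(scene_tex, scene_depth, virt_tex, virt_depth):
    """Mix a warped scene with a virtual scenario by comparing depth maps:
    a virtual pixel wins only where it is nearer than the scene surface,
    or where the scene has no sample (depth 0)."""
    virt_in_front = (virt_depth > 0) & ((virt_depth < scene_depth) | (scene_depth == 0))
    mixed_tex = np.where(virt_in_front, virt_tex, scene_tex)
    mixed_depth = np.where(virt_in_front, virt_depth, scene_depth)
    return mixed_tex, mixed_depth
```

This is why the claims insist that the game engine forms depth map data, not just texture, for each virtual object: without per-pixel virtual depth, real objects could never correctly occlude virtual ones.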
12. A virtual scenario generating device, comprising:
- a memory unit configured to store images covering a scene from a first viewpoint, wherein the images are each associated with position information and posture information, wherein the position information comprises a coordinate in a global coordinate system and the posture information comprises a compass bearing, wherein the first viewpoint is a viewpoint of the scene from a first position in a global geographical coordinate system;
- an image representation generation unit configured to generate at least one image representation in the global coordinate system based on the images stored in the memory unit, said image representation comprising an image depth map comprising information about a distance to objects in the images provided in the global geographical coordinate system and texture data;
- a game engine unit configured to generate a virtual scenario comprising virtual objects in a coordinate system aligned with the global coordinate system, wherein generating the virtual scenario comprises forming depth map data and texture data for at least one generated virtual object;
- a warping unit configured to warp the image depth map to a warped depth map related to a second viewpoint so as to form a representation of the scene viewed from the second viewpoint, wherein the second viewpoint is from a position of the coordinate system aligned with the global geographical coordinate system that is different from the first position of the first viewpoint, the warped depth map comprising texture data;
- an image processing unit configured to mix the scene with the virtual scenario based on the representation of the scene viewed from the second viewpoint and on the virtual scenario;
- wherein the second viewpoint is a virtual viewpoint that is arbitrarily chosen within an area around the first viewpoint;
- wherein the image processing unit is configured to adapt the warped depth map based on the formed depth map data for the generated virtual scenario so as to provide a virtual video sequence comprising the warped representation of the scene viewed from the second viewpoint mixed with the virtual scenario; and
- a position/posture estimation unit configured to estimate position and/or posture information at said second viewpoint in relation to said first viewpoint;
- wherein the position/posture estimation unit is configured to determine a field of view from the second viewpoint;
- wherein the position/posture estimation unit is configured to elect one or a plurality of images with associated depth maps corresponding to the determined field of view;
- wherein the warping unit is configured to process the depth maps of the images elected by the position/posture estimation unit; and
- wherein the warping unit is configured to adapt the depth map for the second viewpoint as it is moving, based on updated images with the associated depth maps elected by the position/posture estimation unit.
(Dependent claims: 13)
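Claim 12 stores each image with a compass bearing, and the position/posture estimation unit elects images whose bearing falls inside the field of view determined for the second viewpoint. A minimal angular-membership sketch, with wrap-around at 360°; the dictionary keys and function name are illustrative assumptions.

```python
def elect_images(stored, fov_bearing, fov_width):
    """Elect stored images whose associated compass bearing (degrees) lies
    inside a field of view centred on fov_bearing with total angular
    extent fov_width, handling wrap-around at 0/360 degrees."""
    def ang_diff(a, b):
        # Smallest absolute difference between two bearings, in [0, 180].
        return abs((a - b + 180.0) % 360.0 - 180.0)
    return [img for img in stored
            if ang_diff(img["bearing"], fov_bearing) <= fov_width / 2.0]
```

A fuller implementation would also gate on the capture position relative to the second viewpoint, but the bearing test alone shows why election matters: only the elected images' depth maps need to be warped for the current view.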
14. A training system, comprising:
- a memory unit configured to store images covering a scene from a first viewpoint, wherein the images are each associated with position information and posture information, wherein the position information comprises a coordinate in a global coordinate system and the posture information comprises a compass bearing, wherein the first viewpoint is a viewpoint of the scene from a first coordinate in a global geographical coordinate system;
- an image representation generation unit configured to generate at least one image representation in the global coordinate system based on the images stored in the memory unit, said image representation comprising an image depth map comprising texture data and comprising information about a distance to objects in the images provided in the global geographical coordinate system;
- a game engine unit configured to generate a virtual scenario comprising virtual objects in a coordinate system aligned with the global coordinate system, wherein generating the virtual scenario comprises forming depth map data and texture data for at least one generated virtual object;
- a warping unit configured to warp the image depth map to a warped depth map related to a second viewpoint so as to form a warped representation of the scene viewed from the second viewpoint, wherein the second viewpoint is from a position of the coordinate system aligned with the global geographical coordinate system that is different from the first coordinate of the first viewpoint, the warped depth map comprising texture data;
- an image processing unit configured to mix the scene with the virtual scenario based on the warped representation of the scene viewed from the second viewpoint and on the virtual scenario, wherein the second viewpoint is a virtual viewpoint that is arbitrarily chosen within an area around the first viewpoint, wherein the image processing unit is configured to adapt the warped depth map based on the formed depth map data for the generated virtual scenario so as to provide a virtual video sequence comprising the warped representation of the scene viewed from the second viewpoint mixed with the virtual scenario;
- a weapon having a sight configured to display the virtual video sequence; and
- a position/posture estimation unit for estimating position and/or posture information at said second viewpoint in relation to said first viewpoint;
- wherein the position/posture estimation unit is configured to determine a field of view from the second viewpoint;
- wherein the position/posture estimation unit is configured to elect one or a plurality of images with associated depth maps corresponding to the determined field of view;
- wherein the warping unit is configured to process the depth maps of the images elected by the position/posture estimation unit; and
- wherein the warping unit is configured to adapt the depth map for the second viewpoint as it is moving, based on updated images with the associated depth maps elected by the position/posture estimation unit.
(Dependent claims: 15)
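The final limitation of claim 14 has the warping unit refresh its source data as the second viewpoint moves: on each pose update, the stored image captured nearest the current position is re-elected so its depth map can be re-warped. A one-function sketch of that re-election step; the keyframe dictionary layout is a hypothetical convention, not from the patent.

```python
import math

def nearest_keyframe(keyframes, position):
    """Re-elect the stored image whose capture position is closest to the
    current (moving) second viewpoint; its associated depth map would then
    be warped for the updated pose."""
    return min(keyframes, key=lambda kf: math.dist(kf["position"], position))
```

Switching source keyframes as the viewpoint drifts keeps warping baselines short, which limits disocclusion holes in the warped depth map.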