Virtual 3D methods, systems and software
Abstract
Methods, systems and computer program products (“software”) enable a virtual three-dimensional visual experience (referred to herein as “V3D”), videoconferencing and other applications, and the capturing, processing and displaying of images and image streams.
22 Claims
1. A video communication method that enables a user to view a remote scene in a manner that gives the user a visual impression of being present with respect to the remote scene, the method comprising:
- capturing images of the remote scene, the capturing comprising utilizing at least two cameras, each having a view of the remote scene;
- executing a scene feature correspondence function by detecting common features between corresponding images of the remote scene captured by the respective cameras and measuring a relative distance in image space between the common features, to generate disparity values for common features between corresponding images of the remote scene;
- generating a scene data representation based on the disparity values and pixel data from the at least two cameras, the scene data representation being representative of (1) the captured images of the remote scene, (2) the remote scene, and (3) the corresponding disparity values;
- reconstructing a synthetic view of the remote scene, based on the scene data representation and based on the disparity values and pixel data on which the scene data representation is based; and
- displaying the synthetic view to the user on a display screen used by the user;

the capturing, detecting, generating, reconstructing and displaying being executed such that: (a) the user is provided the visual impression of looking through his display screen as a physical window to the remote scene, and (b) the user is provided an immersive visual experience of the remote scene.

View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14
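The "scene feature correspondence" step above can be illustrated with a toy sketch, assuming two horizontally offset cameras and a simple sum-of-absolute-differences (SAD) block matcher over 1-D scan lines. The claim does not prescribe any particular matching technique, so the matcher, window size, and synthetic data below are purely illustrative.

```python
# Illustrative sketch (not the patent's actual implementation): find a
# common feature in both camera images and measure its relative distance
# in image space, which is the disparity value the claim refers to.

def find_patch(row, patch):
    """Return the column at which `patch` best matches `row` (minimum SAD)."""
    w = len(patch)
    best_x, best_cost = 0, float("inf")
    for x in range(len(row) - w + 1):
        cost = sum(abs(row[x + i] - patch[i]) for i in range(w))
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x

def disparity_for_feature(left_row, right_row, feature_x, w=3):
    """Disparity = relative distance in image space between the same
    feature's position in the left and right camera images."""
    patch = left_row[feature_x:feature_x + w]
    return feature_x - find_patch(right_row, patch)

# Synthetic scan lines: the same bright feature sits 4 pixels further
# left in the right camera's image, so its disparity is 4.
left  = [0] * 10 + [9, 7, 9] + [0] * 10
right = [0] * 6  + [9, 7, 9] + [0] * 14
print(disparity_for_feature(left, right, 10))  # → 4
```

Larger disparity corresponds to a feature closer to the camera pair, which is why the disparity values, together with the pixel data, suffice to build the scene data representation recited above.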
15. A digital processing system for enabling a user to view a remote scene with the visual impression of being present with respect to the remote scene, the digital processing system comprising:
- at least two cameras, each having a view of the remote scene;
- a display screen for use by the user; and
- a digital processing resource comprising at least one digital processor, the digital processing resource being operable to:
- capture images of the remote scene, utilizing the at least two cameras;
- execute a scene feature correspondence function by detecting common features between corresponding images of the remote scene captured by the respective cameras and measuring a relative distance in image space between the common features, to generate disparity values for common features between corresponding images of the remote scene;
- generate a scene data representation based on the disparity values and pixel data from the at least two cameras, the scene data representation being representative of (1) the captured images of the remote scene, (2) the remote scene, and (3) the corresponding disparity values;
- reconstruct a synthetic view of the remote scene, based on the scene data representation and based on the disparity values and pixel data on which the scene data representation is based; and
- display the synthetic view to the user on the display screen;

the capturing, detecting, generating, reconstructing and displaying being executed such that: (a) the user is provided the visual impression of looking through his display screen as a physical window to the remote scene, and (b) the user is provided an immersive visual experience of the remote scene.

View Dependent Claims: 16
17. A video communication method that enables a user to view a remote scene in a manner that gives the user a visual impression of being present with respect to the remote scene, the method comprising:
- capturing images of the remote scene, the capturing comprising utilizing at least three cameras, each having a view of the remote scene;
- executing a feature correspondence function by detecting common features between corresponding images captured by the respective cameras and measuring a relative distance in image space between the common features, to generate disparity values;
- generating a data representation, representative of the captured images and the corresponding disparity values;
- reconstructing a synthetic view of the remote scene, based on the representation; and
- displaying the synthetic view to the user on a display screen used by the user;

the capturing, detecting, generating, reconstructing and displaying being executed such that: (a) the user is provided the visual impression of looking through his display screen as a physical window to the remote scene, and (b) the user is provided an immersive visual experience of the remote scene;

and further wherein: the cameras are arranged such that a first pair of cameras is disposed along a first axis and a second pair of cameras is disposed along a second axis intersecting with, but angularly displaced from, the first axis, wherein the first and second pairs of cameras share a common camera at or near the intersection of the first and second axes, so that the first and second pairs of cameras represent respective first and second independent stereo axes that share a common camera;

and further comprising:

- executing a feature correspondence function by detecting common features between corresponding images captured by the at least three cameras and measuring a relative distance in image space between the common features, to generate disparity values;
- generating a data representation, representative of the captured images and the corresponding disparity values; and
- utilizing an unrectified, undistorted (URUD) image space to integrate disparity data for pixels between the first and second stereo axes, thereby to combine disparity data from the first and second axes, wherein the URUD space is an image space in which polynomial lens distortion has been removed from the image data but the captured image remains unrectified.
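The URUD space defined above removes polynomial lens distortion without rectifying the image. A minimal sketch of that mapping, assuming a common two-coefficient radial model x_d = x_u · (1 + k1·r² + k2·r⁴) in normalized image coordinates (the coefficients k1, k2 and the fixed-point inversion below are illustrative assumptions, not taken from the patent):

```python
# Illustrative sketch: move a point into a URUD-style space by inverting
# polynomial radial distortion only; no stereo rectification is applied,
# so the image geometry otherwise stays as captured.

def distort_point(xu, yu, k1, k2):
    """Forward polynomial distortion model (used here to check round trips)."""
    r2 = xu * xu + yu * yu
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return xu * scale, yu * scale

def undistort_point(xd, yd, k1, k2, iters=25):
    """Invert the polynomial model by fixed-point iteration, which
    converges quickly for the small distortions typical of camera lenses."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu

# Round trip: distort a normalized point, then recover it in URUD space.
xd, yd = distort_point(0.3, 0.2, k1=0.1, k2=0.01)
xu, yu = undistort_point(xd, yd, k1=0.1, k2=0.01)
print(round(xu, 6), round(yu, 6))  # → 0.3 0.2
```

Because both stereo axes are expressed in the same URUD coordinates, per-pixel disparity measured along each axis can be combined without first warping the images onto a common rectified plane.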
18. A video processing method that enables a user to view a remote scene in a manner that gives the user a visual impression of being present with respect to the remote scene, the method comprising:
- reconstructing a synthetic view of the remote scene, based on a scene data representation and based on disparity values and pixel data on which the scene data representation is based, the scene data representation being representative of (A) images of the remote scene, captured by at least two cameras, each having a view of the remote scene, (B) the remote scene, and (C) corresponding disparity values generated by executing a scene feature correspondence function by detecting common features between corresponding images of the remote scene captured by respective ones of the at least two cameras and measuring a relative distance in image space between the common features of corresponding images of the remote scene; and
- providing the synthetic view in a manner useable to display the synthetic view to the user.

View Dependent Claims: 19
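One common way to realize a reconstruction step like this is depth-image-based rendering: forward-warp each pixel by a fraction of its disparity to synthesize a viewpoint between (or beyond) the physical cameras. The sketch below is an illustrative 1-D version of that general technique, not necessarily the method the patent practices; the `alpha` parameter and the last-write occlusion handling are assumptions.

```python
# Illustrative sketch: synthesize a scan line of a virtual view from a
# reference scan line plus per-pixel disparity.

def reconstruct_view(pixels, disparity, alpha):
    """Warp a scan line to a virtual viewpoint. alpha=0 reproduces the
    reference view; alpha=1 approximates the other camera's view."""
    out = [None] * len(pixels)        # None marks disoccluded gaps
    for x, (p, d) in enumerate(zip(pixels, disparity)):
        nx = x - round(alpha * d)     # nearer pixels (larger d) shift more
        if 0 <= nx < len(out):
            out[nx] = p               # later (nearer-shifted) writes win
    return out

pixels    = [1, 2, 3, 4, 5]
disparity = [0, 0, 2, 0, 0]           # middle pixel is nearest the cameras
print(reconstruct_view(pixels, disparity, 0.5))  # → [1, 3, None, 4, 5]
```

The `None` entry shows a disocclusion: moving the viewpoint exposes scene content no camera saw at that position, which a full system would fill from the other camera's pixels or by inpainting.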
20. A program product for use with a digital processing system, for enabling a user to view a remote scene with the visual impression of being present with respect to the remote scene, the digital processing system comprising a digital processing resource, the digital processing resource comprising at least one digital processor, the program product comprising digital processor-executable program instructions stored on a non-transitory digital processor-readable medium, which when executed in the digital processing resource cause the digital processing resource to:
- reconstruct a synthetic view of the remote scene, based on a scene data representation and based on disparity values and pixel data on which the scene data representation is based, the scene data representation being representative of (A) images of the remote scene, captured by at least two cameras, each having a view of the remote scene, (B) the remote scene, and (C) corresponding disparity values generated by executing a scene feature correspondence function by detecting common features between corresponding images of the remote scene captured by respective ones of the at least two cameras and measuring a relative distance in image space between the common features of corresponding images of the remote scene; and
- provide the synthetic view in a manner useable to display the synthetic view to the user.
21. A method of generating a data representation useable to reconstruct a synthetic view of a remote scene, the method comprising:
- capturing images of the remote scene, the capturing comprising utilizing at least two cameras, each having a view of the remote scene;
- executing a scene feature correspondence function by detecting common features between corresponding images of the remote scene captured by the respective cameras and measuring a relative distance in image space between the common features, to generate disparity values for common features between corresponding images of the remote scene; and
- generating a scene data representation based on the disparity values and pixel data from the at least two cameras, the scene data representation being representative of (1) the captured images of the remote scene, (2) the remote scene, and (3) the corresponding disparity values;

the scene data representation being useable to reconstruct a synthetic view of the remote scene, based on the scene data representation and based on the disparity values and pixel data on which the scene data representation is based, in a manner such that: (a) a user, viewing on a display screen a synthetic view of the remote scene reconstructed based on the data representation, is provided the visual impression of looking through the display screen as a physical window to the remote scene, and (b) the user is provided an immersive visual experience of the remote scene that gives the user a visual impression of being present with respect to the remote scene.
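The claim leaves the concrete encoding of the scene data representation open; at minimum it must carry the captured pixel data together with the corresponding disparity values, since both are what the reconstruction step consumes. A hedged sketch of such a bundle (the `SceneData` type and field names are hypothetical, for illustration only):

```python
# Illustrative sketch: a scene data representation that simply packages
# reference-camera pixel data with per-feature disparity values.
from dataclasses import dataclass

@dataclass
class SceneData:
    pixels: list      # color samples from a reference camera
    disparity: dict   # feature position -> disparity value at that feature

def generate_scene_data(pixels, feature_disparities):
    """Bundle captured pixel data and measured disparities together,
    which is sufficient input for reconstructing a synthetic view."""
    return SceneData(pixels=list(pixels), disparity=dict(feature_disparities))

scene = generate_scene_data([10, 20, 30], {1: 4})
print(scene.pixels, scene.disparity)  # → [10, 20, 30] {1: 4}
```

A production representation would add per-camera calibration and likely compress the disparity field, but any encoding from which pixels and disparities are recoverable satisfies the role the claim describes.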
22. A digital processing system for generating a data representation useable to reconstruct a synthetic view of a remote scene, the digital processing system comprising:
- at least two cameras, each having a view of the remote scene; and
- a digital processing resource comprising at least one digital processor, the digital processing resource being operable to:
- capture images of the remote scene, utilizing the at least two cameras;
- execute a scene feature correspondence function by detecting common features between corresponding images of the remote scene captured by the respective cameras and measuring a relative distance in image space between the common features, to generate disparity values for common features between corresponding images of the remote scene; and
- generate a scene data representation based on the disparity values and pixel data from the at least two cameras, the scene data representation being representative of (1) the captured images of the remote scene, (2) the remote scene, and (3) the corresponding disparity values;

the scene data representation being useable to reconstruct a synthetic view of the remote scene, based on the scene data representation and based on the disparity values and pixel data on which the scene data representation is based, in a manner such that: (a) a user, viewing on a display screen a synthetic view of the remote scene reconstructed based on the data representation, is provided the visual impression of looking through the display screen as a physical window to the remote scene, and (b) the user is provided an immersive visual experience of the remote scene that gives the user a visual impression of being present with respect to the remote scene.
Specification