PHOTOMETRIC REGISTRATION FROM ARBITRARY GEOMETRY FOR AUGMENTED REALITY
Abstract
Photometric registration from an arbitrary geometry for augmented reality is performed using video frames of an environment captured by a camera. A surface reconstruction of the environment is generated. A pose is determined for the camera with respect to the environment, e.g., using model-based tracking with the surface reconstruction. Illumination data for the environment is determined from a video frame. Estimated lighting conditions for the environment are generated based on the surface reconstruction and the illumination data. For example, the surface reconstruction may be used to compute the radiance transfer, which may be compressed, e.g., using spherical harmonic basis functions, and used in the lighting conditions estimation. A virtual object may then be rendered based on the lighting conditions. Differential rendering may be used with lighting solutions from the surface reconstruction of the environment and a second surface reconstruction of the environment combined with the virtual object.
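The lighting-estimation step described above (radiance transfer compressed with spherical harmonic basis functions) can be illustrated with a minimal sketch. This is not the patent's implementation; the function names (`sh_basis`, `estimate_lighting`) are illustrative, and it assumes per-point radiance-transfer vectors reduce to the first nine real SH basis values of the surface normal (i.e., unshadowed diffuse transfer), with the lighting coefficients recovered by least squares from intensities sampled in a video frame:

```python
import numpy as np

def sh_basis(normal):
    """First 9 real spherical-harmonic basis values (bands l <= 2) for a unit normal."""
    x, y, z = normal
    return np.array([
        0.282095,
        0.488603 * y,
        0.488603 * z,
        0.488603 * x,
        1.092548 * x * y,
        1.092548 * y * z,
        0.315392 * (3.0 * z * z - 1.0),
        1.092548 * x * z,
        0.546274 * (x * x - y * y),
    ])

def estimate_lighting(transfer, observed):
    """Least-squares fit of 9 SH lighting coefficients.

    transfer: (N, 9) radiance-transfer vectors, one per surface sample
              (here, SH basis values of the normals from the surface
              reconstruction; a full implementation would fold in
              visibility/shadowing).
    observed: (N,) intensities sampled from the video frame at the
              image projections of those surface points.
    """
    coeffs, *_ = np.linalg.lstsq(transfer, observed, rcond=None)
    return coeffs
```

With enough well-distributed surface samples the 200x9 system is overdetermined and the nine coefficients are recovered stably; shading at any reconstructed point is then the dot product of its transfer vector with the estimated coefficients.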
31 Claims
1. A method comprising:
receiving a sequence of video frames of an environment at a mobile device;
generating a surface reconstruction of the environment;
determining a pose of the camera with respect to the environment;
generating illumination data of the environment from at least one video frame;
generating estimated lighting conditions of the environment in each video frame based on the surface reconstruction and the illumination data; and
rendering a virtual object over the video frames based on the pose and the estimated lighting conditions.
(Dependent claims: 2–10.)
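The final rendering step of the claim can use the differential rendering the abstract describes: render the reconstructed scene twice under the estimated lighting, once with and once without the virtual object, and add the difference to the captured frame so the object's shadows and interreflections carry over. A minimal grayscale sketch, with illustrative names not taken from the patent:

```python
import numpy as np

def differential_render(frame, lit_with_object, lit_without_object, object_mask):
    """Composite a virtual object into a camera frame via differential rendering.

    frame:              (H, W) captured video frame, values in [0, 1]
    lit_with_object:    (H, W) synthetic rendering of the surface
                        reconstruction plus the virtual object under the
                        estimated lighting
    lit_without_object: (H, W) synthetic rendering of the reconstruction alone
    object_mask:        (H, W) boolean mask of pixels covered by the object
    """
    # Outside the object, add only the lighting difference the object
    # introduces (shadows, interreflections) to the real frame; inside
    # the object, show the synthetic rendering directly.
    delta = lit_with_object - lit_without_object
    out = np.where(object_mask, lit_with_object, frame + delta)
    return np.clip(out, 0.0, 1.0)
```

Where the two synthetic renderings agree, `delta` is zero and the captured frame passes through unchanged, so only the object and its photometric effects are composited.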
11. An apparatus comprising:
a camera;
a display; and
a processor coupled to receive a sequence of video frames of an environment captured by the camera, wherein the processor is configured to:
generate a surface reconstruction of the environment;
determine a pose of the camera with respect to the environment;
generate illumination data of the environment from at least one video frame;
generate estimated lighting conditions of the environment in each video frame based on the surface reconstruction and the illumination data; and
render a virtual object over the video frames based on the pose and the estimated lighting conditions.
(Dependent claims: 12–20.)
21. An apparatus comprising:
means for receiving a sequence of video frames of an environment;
means for generating a surface reconstruction of the environment;
means for determining a pose with respect to the environment;
means for generating illumination data of the environment from at least one video frame;
means for generating estimated lighting conditions of the environment in each video frame based on the surface reconstruction and the illumination data; and
means for rendering a virtual object over the video frames based on the pose and the estimated lighting conditions.
(Dependent claims: 22–26.)
27. A storage medium including program code stored thereon, comprising:
program code to generate a surface reconstruction of an environment using at least one video frame of the environment captured with a camera;
program code to determine a pose of the camera with respect to the environment;
program code to generate illumination data of the environment from the at least one video frame;
program code to generate estimated lighting conditions of the environment in each video frame based on the surface reconstruction and the illumination data; and
program code to render a virtual object over the video frames based on the pose and the estimated lighting conditions.
(Dependent claims: 28–31.)
Specification