AUGMENTED REALITY LIGHTING WITH DYNAMIC GEOMETRY
Abstract
Methods for determination of AR lighting with dynamic geometry are disclosed. A camera pose for a first image comprising a plurality of pixels may be determined, where each pixel in the first image comprises a depth value and a color value. The first image may correspond to a portion of a 3D model. A second image may be obtained by projecting the portion of the 3D model into a camera field of view based on the camera pose. A composite image comprising a plurality of composite pixels may be obtained based, in part, on the first image and the second image, where each composite pixel in a subset of the plurality of composite pixels is obtained, based, in part, on a corresponding absolute difference between a depth value of a corresponding pixel in the first image and a depth value of a corresponding pixel in the second image.
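The compositing rule the abstract describes can be sketched as a per-pixel selection: where the live image's depth and the rendered model's depth disagree by more than some threshold, the geometry has changed, so the live pixel is used. This is an illustrative sketch only; the function name, array layout, and the threshold `tau` are assumptions for the example, not details from the patent.

```python
import numpy as np

def composite(first_rgb, first_depth, second_rgb, second_depth, tau=0.05):
    """Per-pixel composite of a live (first) image and a model-rendered
    (second) image. Where the absolute depth difference exceeds tau, the
    scene geometry has changed, so the live pixel is kept; elsewhere the
    model's pixel is kept. (Names and tau are illustrative assumptions.)"""
    diff = np.abs(first_depth - second_depth)   # absolute depth difference
    use_live = (diff > tau)[..., None]          # add axis to broadcast over RGB
    return np.where(use_live, first_rgb, second_rgb)
```

A real pipeline would likely smooth or hysteresis-filter the depth-difference mask before selecting pixels, since raw depth sensors are noisy at object boundaries.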
Claims (30)
1. A method comprising, at a computing device:

determining a pose of a camera for a first image, wherein the first image comprises a plurality of pixels, wherein each pixel in the first image comprises a depth value and a color value, and wherein the first image corresponds to a portion of a 3D model of a scene;

obtaining a second image based on the camera pose by projecting the portion of the 3D model into a Field Of View (FOV) of the camera; and

obtaining a composite image comprising a plurality of composite pixels based, in part, on the first image and the second image, wherein each composite pixel in a subset of the plurality of composite pixels is obtained, based, at least in part, on a corresponding absolute difference between a depth value of a corresponding pixel in the first image and a depth value of a corresponding pixel in the second image.

Dependent claims: 2-10.
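The "projecting the portion of the 3D model into a Field Of View" step can be illustrated with a standard pinhole camera model: model points are transformed by the estimated pose and perspective-divided into pixel coordinates, yielding the second image's depth values. The pinhole model and the names `R`, `t`, `K` are assumptions for this sketch, not the patent's implementation.

```python
import numpy as np

def project_points(points_world, R, t, K):
    """Project Nx3 world-space model points into the camera's FOV using
    the estimated pose (rotation R, translation t) and intrinsics K.
    Returns pixel coordinates (u, v) and per-point depth; the depths
    supply the second image's depth values for the composite step.
    (Pinhole model and parameter names are illustrative assumptions.)"""
    cam = points_world @ R.T + t     # world -> camera coordinates
    z = cam[:, 2:3]                  # depth along the optical axis
    uv = (cam @ K.T)[:, :2] / z      # perspective divide to pixel coords
    return uv, z.ravel()
```

Rasterizing the projected points (with z-buffering to resolve occlusion) would then produce the second image's per-pixel depth and color values.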
11. A device comprising:

a camera comprising a depth sensor to obtain a first image comprising a plurality of pixels, wherein each pixel in the first image comprises a depth value and a color value, and wherein the first image corresponds to a portion of a 3D model of a scene; and

a processor coupled to the camera, wherein the processor is configured to:

determine a camera pose for the first image;

obtain a second image based on the camera pose by projecting the portion of the 3D model into a Field Of View (FOV) of the camera; and

obtain a composite image comprising a plurality of composite pixels based, in part, on the first image and the second image, wherein each composite pixel in a subset of the plurality of composite pixels is obtained, based, at least in part, on a corresponding absolute difference between a depth value of a corresponding pixel in the first image and a depth value of a corresponding pixel in the second image.

Dependent claims: 12-20.
21. A device comprising:

imaging means comprising a depth sensing means, the imaging means to obtain a live first image comprising a plurality of pixels, wherein each pixel in the first image comprises a depth value and a color value, and wherein the first image corresponds to a portion of a 3D model of a scene; and

processing means coupled to the imaging means, wherein the processing means comprises:

means for determining an imaging means pose for the first image;

means for obtaining a second image based on the imaging means pose by projecting the portion of the 3D model into a Field Of View (FOV) of the imaging means; and

means for obtaining a composite image comprising a plurality of composite pixels based, in part, on the first image and the second image, wherein each composite pixel in a subset of the plurality of composite pixels is obtained, based, at least in part, on a corresponding absolute difference between a depth value of a corresponding pixel in the first image and a depth value of a corresponding pixel in the second image.

Dependent claims: 22-25.
26. An article comprising:

a non-transitory computer readable medium comprising instructions that are executable by a processor to:

determine a camera pose for a live first image, wherein the first image comprises a plurality of pixels, wherein each pixel in the first image comprises a depth value and a color value, and wherein the first image corresponds to a portion of a 3D model of a scene;

obtain a second image based on the camera pose by projecting the portion of the 3D model into a Field Of View (FOV) of the camera; and

obtain a composite image comprising a plurality of composite pixels based, in part, on the first image and the second image, wherein each composite pixel in a subset of the plurality of composite pixels is obtained, based, at least in part, on a corresponding absolute difference between a depth value of a corresponding pixel in the first image and a depth value of a corresponding pixel in the second image.

Dependent claims: 27-30.
Specification