Augmented reality images based on color and depth information
First Claim
1. A computer-implemented method, comprising:
accessing one or more images of a first object, the one or more images captured using at least one camera included in a user device;
accessing depth information determined by at least one depth sensor of the user device measuring distance from the user device to at least a portion of the first object;
receiving, from the user device, a composite image request including an object identifier (ID) identifying a second object to be rendered with the first object and user specified position information indicating a preferred position at which to display the second object relative to the first object in a composite image;
processing the depth information to determine a first plurality of polygons representing at least a portion of a surface of the first object;
accessing a three-dimensional model of the second object using the object ID identifying the second object, the three-dimensional model including three-dimensional coordinates of points associated with a second plurality of polygons representing at least a portion of a surface of the second object identified by the object ID;
arranging the second plurality of polygons representing at least the portion of the surface of the second object relative to the first plurality of polygons representing at least the portion of the surface of the first object in a virtual scene based on the user specified position information specified in the composite image request;
generating the composite image of the virtual scene, wherein the composite image comprises a plurality of pixels, and wherein values for the plurality of pixels are determined by ray tracing one or more virtual light rays from at least one virtual light source toward the second plurality of polygons representing at least the portion of the surface of the second object identified by the object ID and the first plurality of polygons representing at least the portion of the surface of the first object in the virtual scene; and
sending the composite image to the user device.
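The method recited above turns the measured depth information into a first plurality of polygons representing the first object's surface. As a minimal sketch of one way that step could be implemented, the code below triangulates a dense depth map into a colored triangle mesh, assuming the depth map is aligned with the color image and a pinhole camera model applies; the function name, the intrinsics fx, fy, cx, cy, and the triangulation scheme are illustrative assumptions, not details from the patent.

```python
import numpy as np

def depth_to_mesh(depth, image, fx, fy, cx, cy):
    """Triangulate a dense depth map into a colored vertex/triangle mesh.

    depth: (H, W) array of distances measured by the depth sensor.
    image: (H, W, 3) color image aligned with the depth map.
    fx, fy, cx, cy: pinhole camera intrinsics (illustrative values).
    Returns 3D vertices, per-vertex colors, and triangle index triples.
    """
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project each pixel to a 3D point with the pinhole model.
    xs = (us - cx) * depth / fx
    ys = (vs - cy) * depth / fy
    vertices = np.stack([xs, ys, depth], axis=-1).reshape(-1, 3)
    colors = image.reshape(-1, 3)

    # Connect neighbouring pixels into two triangles per 2x2 cell.
    idx = np.arange(h * w).reshape(h, w)
    tl, tr = idx[:-1, :-1].ravel(), idx[:-1, 1:].ravel()
    bl, br = idx[1:, :-1].ravel(), idx[1:, 1:].ravel()
    triangles = np.concatenate(
        [np.stack([tl, bl, tr], axis=-1), np.stack([tr, bl, br], axis=-1)]
    )
    return vertices, colors, triangles

# Example with a synthetic 4x4 depth map and a matching gray image.
depth = np.full((4, 4), 1.5)
image = np.full((4, 4, 3), 128)
verts, cols, tris = depth_to_mesh(depth, image, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

In practice, cells with missing depth or large depth discontinuities would typically be skipped so the mesh does not bridge across silhouette edges.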
Abstract
Techniques are described for generating a composite image that depicts a first object with a second object. Two-dimensional images of the first object are captured, along with depth information that includes three-dimensional coordinates of points on the surface of the first object. Based on the depth information, a polygonal model may be determined for the first object including color information determined from the images. The polygonal model of the first object may be placed with a polygonal model of the second object in a virtual scene, and ray tracing operations may generate a plurality of pixels for the composite image. In cases where the first object represents a user and the second object represents a product, the composite image may provide a substantially realistic preview of the user with the product.
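The abstract describes determining pixel values by ray tracing the placed surfaces with a virtual light source. The toy renderer below sketches that idea under simplifying assumptions (camera fixed at the origin, per-triangle colors, a single point light, Möller–Trumbore intersection tests); it is an illustrative sketch of the general technique, not the patent's rendering pipeline.

```python
import numpy as np

def ray_triangle(origin, direction, tri):
    """Möller–Trumbore ray/triangle intersection; returns hit distance or None."""
    v0, v1, v2 = tri
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < 1e-9:
        return None
    inv = 1.0 / det
    s = origin - v0
    u = s.dot(p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) * inv
    if v < 0 or u + v > 1:
        return None
    t = e2.dot(q) * inv
    return t if t > 1e-6 else None

def shade(point, normal, color, light_pos, triangles):
    """Trace a virtual light ray from the light toward the hit point; Lambertian shading."""
    to_point = point - light_pos
    dist = np.linalg.norm(to_point)
    direction = to_point / dist
    # Occlusion test: does the light ray reach the point before hitting other geometry?
    for tri in triangles:
        t = ray_triangle(light_pos, direction, tri)
        if t is not None and t < dist - 1e-4:
            return 0.1 * color  # in shadow: ambient term only
    diffuse = max(normal.dot(-direction), 0.0)
    return (0.1 + 0.9 * diffuse) * color

def render(triangles, colors, light_pos, width=64, height=64, fov=60.0):
    """Render the virtual scene by casting one camera ray per pixel from the origin."""
    image = np.zeros((height, width, 3))
    aspect = width / height
    scale = np.tan(np.radians(fov) / 2)
    for y in range(height):
        for x in range(width):
            px = (2 * (x + 0.5) / width - 1) * scale * aspect
            py = (1 - 2 * (y + 0.5) / height) * scale
            direction = np.array([px, py, 1.0])
            direction /= np.linalg.norm(direction)
            nearest, hit = None, None
            for tri, col in zip(triangles, colors):
                t = ray_triangle(np.zeros(3), direction, tri)
                if t is not None and (nearest is None or t < nearest):
                    nearest, hit = t, (tri, col)
            if hit is not None:
                tri, col = hit
                point = nearest * direction
                normal = np.cross(tri[1] - tri[0], tri[2] - tri[0])
                normal /= np.linalg.norm(normal)
                if normal.dot(direction) > 0:  # make the normal face the camera
                    normal = -normal
                image[y, x] = shade(point, normal, col, light_pos, triangles)
    return image

# Example: one reddish triangle in front of the camera, lit from the upper left.
scene = [np.array([[-0.5, -0.5, 3.0], [0.5, -0.5, 3.0], [0.0, 0.5, 3.0]])]
pixels = render(scene, [np.array([0.8, 0.3, 0.3])],
                light_pos=np.array([-2.0, 2.0, 0.0]), width=32, height=32)
```

A production renderer would use an acceleration structure such as a BVH and GPU ray tracing rather than this per-pixel Python loop.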
21 Claims
1. A computer-implemented method, comprising:
accessing one or more images of a first object, the one or more images captured using at least one camera included in a user device;
accessing depth information determined by at least one depth sensor of the user device measuring distance from the user device to at least a portion of the first object;
receiving, from the user device, a composite image request including an object identifier (ID) identifying a second object to be rendered with the first object and user specified position information indicating a preferred position at which to display the second object relative to the first object in a composite image;
processing the depth information to determine a first plurality of polygons representing at least a portion of a surface of the first object;
accessing a three-dimensional model of the second object using the object ID identifying the second object, the three-dimensional model including three-dimensional coordinates of points associated with a second plurality of polygons representing at least a portion of a surface of the second object identified by the object ID;
arranging the second plurality of polygons representing at least the portion of the surface of the second object relative to the first plurality of polygons representing at least the portion of the surface of the first object in a virtual scene based on the user specified position information specified in the composite image request;
generating the composite image of the virtual scene, wherein the composite image comprises a plurality of pixels, and wherein values for the plurality of pixels are determined by ray tracing one or more virtual light rays from at least one virtual light source toward the second plurality of polygons representing at least the portion of the surface of the second object identified by the object ID and the first plurality of polygons representing at least the portion of the surface of the first object in the virtual scene; and
sending the composite image to the user device.
View Dependent Claims (2, 3, 4)
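Claim 1 receives a composite image request carrying an object ID and user specified position information. Below is a minimal sketch of how such a request might be represented and parsed on the server side; the field names and JSON shape are assumptions for illustration, not taken from the patent.

```python
import json
from dataclasses import dataclass

@dataclass
class CompositeImageRequest:
    """Request from the user device to render the identified object with the captured one."""
    object_id: str             # identifies the second object, e.g. a catalog product
    position: tuple            # preferred (x, y, z) offset of the second object, in scene units
    rotation_deg: float = 0.0  # optional orientation about the vertical axis

def parse_request(payload: bytes) -> CompositeImageRequest:
    """Decode a JSON request body into a CompositeImageRequest."""
    data = json.loads(payload)
    return CompositeImageRequest(
        object_id=data["object_id"],
        position=tuple(data["position"]),
        rotation_deg=float(data.get("rotation_deg", 0.0)),
    )

# Example: a device asking to preview product "lamp-042" 1.2 m in front of the user.
request = parse_request(b'{"object_id": "lamp-042", "position": [0.0, 0.0, 1.2]}')
```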
5. A system, comprising:
at least one memory storing computer-executable instructions; and
at least one processor in communication with the at least one memory, the at least one processor configured to access the at least one memory and execute the computer-executable instructions to:
access one or more images of a first object;
access depth information indicating distance from a user device to at least a portion of the first object;
receive, from the user device, a composite image request including an object identifier (ID) identifying a second object to be rendered with the first object and user specified position information indicating a preferred position at which to display the second object relative to the first object in a composite image;
process the depth information to determine a first plurality of polygons representing at least a portion of a surface of the first object;
access a three-dimensional model of the second object using the object ID identifying the second object, the three-dimensional model including three-dimensional coordinates of points associated with a second plurality of polygons representing at least a portion of a surface of the second object identified by the object ID;
arrange the second plurality of polygons representing at least the portion of the surface of the second object relative to the first plurality of polygons representing at least the portion of the surface of the first object in a virtual scene based on the user specified position information;
generate the composite image of the virtual scene, wherein the composite image comprises a plurality of pixels, and wherein values for the plurality of pixels are determined using ray tracing by directing one or more virtual light rays from at least one virtual light source toward the second plurality of polygons representing at least the portion of the surface of the second object identified by the object ID and the first plurality of polygons representing at least the portion of the surface of the first object in the virtual scene; and
send the composite image to the user device.
View Dependent Claims (6, 7, 8, 9, 10, 11, 12, 13, 14)
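The arrange step in claim 5 can be pictured as a rigid transform that places the second object's model vertices at the user specified position relative to the first object's mesh in the shared virtual scene. The sketch below assumes a rotation about the vertical axis followed by a translation; that particular parameterization is an illustrative assumption, not the patent's.

```python
import numpy as np

def arrange_model(vertices, position, rotation_deg=0.0):
    """Place a model's vertices in the shared virtual scene.

    vertices: (N, 3) coordinates from the second object's 3D model, in model space.
    position: (x, y, z) user specified position relative to the first object.
    rotation_deg: rotation about the vertical (y) axis applied before translation.
    Returns the transformed (N, 3) vertices in scene coordinates.
    """
    theta = np.radians(rotation_deg)
    rot_y = np.array([
        [np.cos(theta), 0.0, np.sin(theta)],
        [0.0, 1.0, 0.0],
        [-np.sin(theta), 0.0, np.cos(theta)],
    ])
    return vertices @ rot_y.T + np.asarray(position)

# Example: the first object's mesh stays at the origin; the second object's model
# is rotated 30 degrees and shifted to the requested offset before rendering.
model_vertices = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
scene_vertices = arrange_model(model_vertices, position=(0.0, 0.0, 1.2), rotation_deg=30.0)
```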
15. One or more non-transitory computer-readable media storing instructions which, when executed by at least one processor, instruct the at least one processor to perform actions comprising:
accessing one or more images of a first object including depth information associated with the first object;
receiving a composite image request including an object identifier (ID) identifying a second object to be rendered with the first object and user specified position information indicating a preferred position at which to display the second object identified by the object ID relative to the first object in a composite image;
processing the depth information to determine a surface of the first object;
accessing a three-dimensional model of the second object using the object ID identifying the second object, the three-dimensional model including three-dimensional coordinates of points associated with a surface of the second object;
arranging a visual representation of the second object identified by the object ID relative to a visual representation of the first object in a virtual scene based at least partly on the three-dimensional coordinates of the points associated with the surface of the second object from the three-dimensional model of the second object identified by the object ID and on the user specified position information specified in the composite image request; and
generating the composite image of the virtual scene, wherein the composite image comprises a plurality of pixels, and wherein values for the plurality of pixels are determined by using ray tracing directing at least one virtual light source toward the surface of the second object identified by the object ID and the surface of the first object.
View Dependent Claims (16, 17, 18, 19, 20, 21)
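Claim 15, like the other independent claims, accesses the second object's three-dimensional model using its object ID. One simple realization is a catalog keyed by object ID that returns the stored vertex coordinates and polygon faces; the in-memory layout below is a hypothetical illustration, not the patent's storage format.

```python
import numpy as np

# Hypothetical in-memory catalog mapping object IDs to stored 3D models.
MODEL_CATALOG = {
    "lamp-042": {
        "vertices": np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.2, 0.0]]),
        "faces": np.array([[0, 1, 2]]),        # triangle indices into vertices
        "colors": np.array([[200, 180, 90]]),  # per-face RGB
    },
}

def load_model(object_id):
    """Return the 3D model (vertex coordinates and polygon faces) for an object ID."""
    try:
        return MODEL_CATALOG[object_id]
    except KeyError:
        raise KeyError(f"no 3D model stored for object ID {object_id!r}") from None

model = load_model("lamp-042")
```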
Specification