Embedding metadata into images and videos for augmented reality experience
Abstract
A method for embedding metadata into images and/or videos for an AR experience is described. In one example implementation, the method may include generating a first image/video including an environment captured by a device and a virtually-rendered augmented reality (AR) object composited with the environment. The first image/video may be embedded with a first metadata. The method may further include generating a second image/video by modifying the first image/video. The second image/video may be embedded with a second metadata. The second metadata is generated based on the first metadata.
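As a rough sketch of the flow the abstract describes, the snippet below models a frame whose embedded metadata carries both contextual environment data and the clean capture without the AR object, and derives a second, different metadata from the first when the image is modified. All names here (`Frame`, `generate_first_image`, `generate_second_image`) are illustrative assumptions, not identifiers from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    pixels: list                  # composited pixels (environment + AR object)
    metadata: dict = field(default_factory=dict)

def generate_first_image(env_pixels, ar_pixels, context):
    # Composite the captured environment with the rendered AR object.
    composited = [e + a for e, a in zip(env_pixels, ar_pixels)]
    # Embed metadata: contextual environment data plus the clean capture
    # (the environment without the AR object), as the claims recite.
    meta = {
        "contextual_environment": context,     # e.g. device pose, lighting
        "clean_environment": env_pixels,       # capture without the AR object
    }
    return Frame(pixels=composited, metadata=meta)

def generate_second_image(first: Frame, edit) -> Frame:
    # Modify the first image; the second metadata differs from the first
    # but is generated based on it.
    second = Frame(pixels=edit(first.pixels))
    second.metadata = {
        "contextual_environment": dict(first.metadata["contextual_environment"],
                                       edited=True),
        "clean_environment": first.metadata["clean_environment"],
    }
    return second
```

A usage sketch: `generate_first_image([1, 2, 3], [0, 5, 0], {"pose": "p0"})` composites to `[1, 7, 3]`, and a later edit (say, doubling pixel values) yields a second frame whose metadata still references the original clean capture.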
11 Citations
20 Claims
1. A method, comprising:

generating a first image including a composite of an environment captured by a device and a virtually-rendered augmented reality (AR) object, the first image embedded with a first metadata including:

a first contextual environment data, and the environment captured by the device without the virtually-rendered AR object; and

generating a second image by modifying the first image, the second image embedded with a second metadata that is different from the first metadata and generated based on the first metadata, and the second metadata including:

a second contextual environment data, and the environment captured by the device without the virtually-rendered AR object.

Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.
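For still images, one plausible carrier for such embedded metadata is a PNG tEXt chunk. The stdlib-only sketch below appends a JSON payload as a tEXt chunk just before IEND; the chunk keyword `ARMeta` and the JSON layout are assumptions for illustration, not the mechanism specified in the patent.

```python
import json
import struct
import zlib

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    """Build one PNG chunk: length, type, data, CRC-32 over type + data."""
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data) & 0xFFFFFFFF))

def embed_text_metadata(png: bytes, keyword: str, meta: dict) -> bytes:
    """Insert a tEXt chunk holding JSON metadata just before the IEND chunk."""
    assert png[:8] == b"\x89PNG\r\n\x1a\n", "not a PNG stream"
    # Back up 4 bytes from "IEND" to land before its length field.
    iend = png.rindex(b"IEND") - 4
    payload = keyword.encode("latin-1") + b"\x00" + json.dumps(meta).encode("latin-1")
    return png[:iend] + png_chunk(b"tEXt", payload) + png[iend:]
```

The tEXt keyword and text must be Latin-1 per the PNG specification, which ASCII-only JSON satisfies; a consumer would scan the chunk list for the agreed keyword and parse the JSON back out.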
12. A method, comprising:

generating a first video including a composite of an environment captured by a device and a virtually-rendered augmented reality (AR) object, the first video embedded with a first metadata including:

a first contextual environment data, and the environment captured by the device without the virtually-rendered AR object; and

generating a second video by modifying the first video, the second video embedded with a second metadata that is different from the first metadata and generated based on the first metadata, and the second metadata including:

a second contextual environment data, and the environment captured by the device without the virtually-rendered AR object.

Dependent claims: 13, 14, 15, 16.
17. An apparatus, comprising:

a processor; and

a memory, the memory including instructions configured to cause the processor to:

generate a first image that includes a composite of an environment captured by a device and a virtually-rendered augmented reality (AR) object composited with the environment, the first image embedded with a first metadata including:

a first contextual environment data, and the environment captured by the device without the virtually-rendered AR object; and

generate a second image by modifying the first image, the second image embedded with a second metadata that is different from the first metadata and generated based on the first metadata, and the second metadata including:

a second contextual environment data, and the environment captured by the device without the virtually-rendered AR object.

Dependent claims: 18, 19.
20. A non-transitory computer-readable storage medium having stored thereon computer executable program code which, when executed on a computer system, causes the computer system to perform a method, comprising:

generating a first image including a composite of an environment captured by a device and a virtually-rendered augmented reality (AR) object composited with the environment, the first image embedded with a first metadata including:

a first contextual environment data, and the environment captured by the device without the virtually-rendered AR object; and

generating a second image by modifying the first image, the second image embedded with a second metadata that is different from the first metadata and generated based on the first metadata, and the second metadata including:

a second contextual environment data, and the environment captured by the device without the virtually-rendered AR object.
Specification