Augmented reality interface for video tagging and sharing
First Claim
1. A computer-implemented method for providing an augmented reality interface, comprising:
retrieving, from a server, and caching data associated with a plurality of locations within a two-dimensional grid based on a GPS coordinate of a portable electronic device, wherein the cached data comprises a plurality of image snippets preloaded from the server based on location information of the portable electronic device, each of the plurality of image snippets being a smaller image file of a corresponding stored image;
overlaying one or more of the plurality of image snippets corresponding to a location and a direction at which a corresponding image was originally captured, based on the location information and orientation information of the portable electronic device, without manipulating the plurality of image snippets based on corner features of an image of the real-world scene;
identifying at least one retrieved image with metadata having selected features;
manipulating a retrieved image corresponding to a displayed image snippet based on matching corner feature information of the retrieved image, generated and provided by the server, with corner features of a currently observed real-world scene; and
combining the manipulated image with the currently observed real-world scene viewed with the portable electronic device, wherein the metadata includes annotations by at least one of the server and a user who acquired the image; and
wherein the cached data is purged based on a distance between the portable electronic device and the two-dimensional grid.
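The grid-based caching and distance-based purging recited in the first claim can be pictured with a short sketch. This is an illustrative reading only, not the patented implementation: the cell size, the purge radius, and the fetch_snippets() server call are assumptions introduced for the example.

```python
# Hypothetical sketch of the claimed grid cache: snippets are keyed to cells of
# a two-dimensional grid around the device's GPS coordinate, and cells are
# purged once the device moves far enough away from them.
import math

CELL_DEG = 0.001          # assumed grid cell size in degrees (~100 m)
PURGE_RADIUS_CELLS = 5    # assumed purge threshold, in cells

def cell_of(lat, lon):
    """Map a GPS coordinate onto a two-dimensional grid cell index."""
    return (int(lat // CELL_DEG), int(lon // CELL_DEG))

class SnippetCache:
    def __init__(self, fetch_snippets):
        self.fetch_snippets = fetch_snippets   # server call (assumed API)
        self.cells = {}                        # cell index -> list of snippets

    def update(self, lat, lon):
        here = cell_of(lat, lon)
        # Preload snippets for cells surrounding the device's current cell.
        for di in range(-1, 2):
            for dj in range(-1, 2):
                cell = (here[0] + di, here[1] + dj)
                if cell not in self.cells:
                    self.cells[cell] = self.fetch_snippets(cell)
        # Purge cached cells whose distance from the device exceeds the threshold.
        for cell in list(self.cells):
            if math.dist(cell, here) > PURGE_RADIUS_CELLS:
                del self.cells[cell]
```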
Abstract
A system, method, and computer program product for automatically combining computer-generated imagery with real-world imagery in a portable electronic device by retrieving, manipulating, and sharing relevant stored videos, preferably in real time. A video is captured with a hand-held device and stored. Metadata including the camera's physical location and orientation is appended to a data stream, along with user input. The server analyzes the data stream and further annotates the metadata, producing a searchable library of videos and metadata. Later, when a camera user generates a new data stream, the linked server analyzes it, identifies relevant material from the library, retrieves the material and tagged information, adjusts it for proper orientation, then renders and superimposes it onto the current camera view so the user views an augmented reality.
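The capture-side data stream described in the abstract, a video paired with appended location, orientation, and user input, might look roughly like the following sketch. The field names and the build_capture_record() helper are hypothetical; the patent does not prescribe a particular schema.

```python
# Illustrative capture record: metadata (GPS position, device orientation,
# user annotations) is bundled with the recorded video before upload so the
# server can analyze and index it for later retrieval.
import json
import time

def build_capture_record(video_path, lat, lon, heading_deg, user_tags):
    metadata = {
        "timestamp": time.time(),
        "location": {"lat": lat, "lon": lon},
        "orientation": {"heading_deg": heading_deg},
        "user_annotations": user_tags,   # free-form input from the capturing user
    }
    return {"video": video_path, "metadata": json.dumps(metadata)}

# Example usage (hypothetical values):
# record = build_capture_record("clip.mp4", 37.77, -122.42, 45.0, ["storefront"])
```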
19 Claims
1. A computer-implemented method for providing an augmented reality interface, comprising:
retrieving, from a server, and caching data associated with a plurality of locations within a two-dimensional grid based on a GPS coordinate of a portable electronic device, wherein the cached data comprises a plurality of image snippets preloaded from the server based on location information of the portable electronic device, each of the plurality of image snippets being a smaller image file of a corresponding stored image;
overlaying one or more of the plurality of image snippets corresponding to a location and a direction at which a corresponding image was originally captured, based on the location information and orientation information of the portable electronic device, without manipulating the plurality of image snippets based on corner features of an image of the real-world scene;
identifying at least one retrieved image with metadata having selected features;
manipulating a retrieved image corresponding to a displayed image snippet based on matching corner feature information of the retrieved image, generated and provided by the server, with corner features of a currently observed real-world scene; and
combining the manipulated image with the currently observed real-world scene viewed with the portable electronic device, wherein the metadata includes annotations by at least one of the server and a user who acquired the image; and
wherein the cached data is purged based on a distance between the portable electronic device and the two-dimensional grid.
View Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16
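As an illustration of the corner-feature alignment and combining steps in claim 1, the sketch below matches server-provided corner information against corners detected in the live frame, estimates a homography, warps the retrieved image, and blends it into the current view. The OpenCV calls are standard; the one-to-one pairing of the two corner lists is a simplifying assumption and not necessarily how the patented method performs the match.

```python
import cv2
import numpy as np

def overlay_retrieved_image(frame_bgr, retrieved_bgr, server_corners):
    """server_corners: Nx2 array of corner coordinates in retrieved_bgr,
    assumed (for this sketch) to correspond in order to corners in the frame."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Detect corner features in the currently observed real-world scene.
    live = cv2.goodFeaturesToTrack(gray, maxCorners=len(server_corners),
                                   qualityLevel=0.01, minDistance=10)
    live = live.reshape(-1, 2)
    n = min(len(server_corners), len(live))   # homography needs >= 4 pairs
    # Estimate the perspective transform mapping the retrieved image's corners
    # onto the corners observed in the current scene.
    H, _ = cv2.findHomography(np.float32(server_corners[:n]),
                              np.float32(live[:n]), cv2.RANSAC)
    h, w = frame_bgr.shape[:2]
    warped = cv2.warpPerspective(retrieved_bgr, H, (w, h))
    # Combine the manipulated image with the live camera view.
    return cv2.addWeighted(frame_bgr, 0.5, warped, 0.5, 0)
```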
17. A system for providing an augmented reality interface, comprising:
a processor; and
a memory containing instructions that, when executed by the processor, cause the processor to:
retrieve, from a server, and cache data associated with a plurality of locations within a two-dimensional grid based on a GPS coordinate of a portable electronic device, wherein the cached data comprises a plurality of video snippets preloaded from the server based on location information of the portable electronic device, each of the plurality of video snippets being a smaller video file of a corresponding stored video;
overlay one or more of the plurality of video snippets corresponding to a location and a direction at which a corresponding video was originally captured, based on the location information and orientation information of the portable electronic device, without manipulating the plurality of video snippets based on corner features of the video of the real-world scene;
identify at least one retrieved video with metadata having selected features;
manipulate a retrieved video corresponding to a displayed video snippet based on matching corner feature information of the retrieved video, generated and provided by the server, with corner features of a currently observed real-world scene; and
combine the manipulated video with the currently observed real-world scene viewed with the portable electronic device, wherein the metadata includes annotations by at least one of the server and a user who acquired the video; and
wherein the cached data is purged based on a distance between the portable electronic device and the two-dimensional grid.
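The overlay step of claim 17 shows a cached snippet when the device is near the place where the source video was captured and faces roughly the same direction, without yet warping the snippet. A minimal sketch of that selection logic follows; the distance and angle thresholds and the snippet fields (lat, lon, heading_deg) are assumptions for illustration.

```python
import math

NEAR_METERS = 50.0          # assumed proximity threshold
ANGLE_TOLERANCE_DEG = 30.0  # assumed heading tolerance

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS coordinates."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def snippets_to_overlay(snippets, device_lat, device_lon, device_heading_deg):
    """Return the cached snippets whose capture location and direction match
    the device's current position and orientation."""
    visible = []
    for s in snippets:   # each snippet assumed to carry capture lat/lon/heading
        dist_m = haversine_m(device_lat, device_lon, s["lat"], s["lon"])
        angle_diff = abs((device_heading_deg - s["heading_deg"] + 180) % 360 - 180)
        if dist_m <= NEAR_METERS and angle_diff <= ANGLE_TOLERANCE_DEG:
            visible.append(s)
    return visible
```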
18. A computer program product for providing an augmented reality interface, comprising a non-transitory computer readable medium embodying computer-executable program instructions thereon that, when executed, cause a computing device to:
retrieve, from a server, a plurality of videos and cache data associated with a plurality of locations within a two-dimensional grid based on a GPS coordinate of a portable electronic device, wherein the cached data comprises a plurality of video snippets from the server based on location information of the portable electronic device, each of the plurality of video snippets being a smaller video file of a corresponding stored video;
overlay one or more of the plurality of video snippets corresponding to a location and a direction at which a corresponding video was originally captured, based on the location information and orientation information of the portable electronic device, without manipulating the plurality of video snippets based on corner features of the video of the real-world scene;
identify at least one retrieved video with metadata having selected features;
manipulate a retrieved video corresponding to a displayed video snippet based on matching corner feature information of the retrieved video, generated and provided by the server, with corner features of a currently observed real-world scene; and
combine the manipulated video with the currently observed real-world scene viewed with the portable electronic device, wherein the metadata includes annotations by at least one of the server and a user who acquired the video; and
wherein the cached data is purged based on a distance between the portable electronic device and the two-dimensional grid.
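The "identify at least one retrieved video with metadata having selected features" limitation can be read as filtering retrieved library entries by their annotations. The sketch below assumes a simple keyword-overlap rule and a hypothetical metadata schema; neither is specified by the claims.

```python
def select_by_metadata(retrieved_videos, selected_features):
    """Return the retrieved videos whose metadata annotations share at least
    one keyword with the selected features (hypothetical matching rule)."""
    selected = {f.lower() for f in selected_features}
    matches = []
    for video in retrieved_videos:
        # Annotations may come from the server's analysis or from the user who
        # originally acquired the video, per the claim language.
        annotations = {a.lower() for a in video["metadata"].get("annotations", [])}
        if annotations & selected:
            matches.append(video)
    return matches
```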
19. A system for providing an augmented reality interface, comprising:
means for retrieving, from a server, at least one stored video and caching data associated with a plurality of locations within a two-dimensional grid based on a GPS coordinate of a portable electronic device, wherein the cached data comprises a plurality of video snippets preloaded from the server based on location information of the portable electronic device, each of the plurality of video snippets being a smaller video file of a corresponding stored video;
means for overlaying one or more of the plurality of video snippets corresponding to a location and a direction at which a corresponding video was originally captured, based on the location information and orientation information of the portable electronic device, without manipulating the plurality of video snippets based on corner features of the video of the real-world scene;
means for identifying at least one retrieved video with metadata having selected features;
means for manipulating the retrieved video corresponding to a displayed video snippet based on matching corner feature information of the retrieved video, generated and provided by the server, with corner features of a currently observed real-world scene; and
means for combining the manipulated video with the currently observed real-world scene viewed with the portable electronic device, wherein the metadata includes annotations by at least one of the server and a user who acquired the video; and
wherein the cached data is purged based on a distance between the portable electronic device and the two-dimensional grid.
Specification