System, method, and device including a depth camera for creating a location based experience
Abstract
A system, method, and device for creating an environment and sharing an experience using a plurality of mobile devices having a conventional camera and a depth camera employed near a point of interest. In one form, random crowdsourced images, depth information, and associated metadata are captured near said point of interest. Preferably, the images include depth camera information. A wireless network communicates with the mobile devices to accept the images, depth information, and metadata to build and store a 3D model of the point of interest. Users connect to an experience platform to view the 3D model from a user-selected location and orientation and to participate in experiences with, for example, a social network.
21 Claims
1. A system for creating and sharing an environment comprising:
a network for receiving metadata including range metadata from one or more devices each having a depth camera employed near a point of interest to capture associated metadata near said point of interest, wherein the associated metadata includes a location of the device, an orientation of the camera and range between the depth camera and one or more targets near said point of interest;

an image processing server connected to the network for receiving said metadata including range metadata, wherein the server processes the metadata including range metadata to determine the location of said one or more targets to build a 3D model of said one or more targets proximate the point of interest based at least in part on said range metadata; and

an experience platform connected to the image processing server for storing the 3D model of one or more targets, whereby users can connect to the experience platform to view the point of interest from a user selected location and orientation, and view the 3D model of one or more targets.

Dependent claims: 2, 3, 4, 5, 6.
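The claims do not disclose a particular algorithm for turning range metadata into target locations. As a minimal illustrative sketch (the class and function names, the yaw/pitch orientation convention, and the local east-north-up coordinate frame are all assumptions, not taken from the patent), a server could project each reported range along the reporting camera's look direction:

```python
import math
from dataclasses import dataclass

@dataclass
class CaptureMetadata:
    """One capture: device position (local ENU, metres), camera
    orientation (yaw/pitch, degrees), and depth-camera range (metres)."""
    x: float          # east
    y: float          # north
    z: float          # up
    yaw_deg: float    # heading, 0 = north, clockwise positive
    pitch_deg: float  # elevation, 0 = horizontal
    range_m: float    # depth-camera range to the target

def target_position(meta: CaptureMetadata) -> tuple:
    """Locate the target by projecting the measured range along the
    camera's look direction from the device position."""
    yaw = math.radians(meta.yaw_deg)
    pitch = math.radians(meta.pitch_deg)
    horiz = meta.range_m * math.cos(pitch)
    return (meta.x + horiz * math.sin(yaw),            # east
            meta.y + horiz * math.cos(yaw),            # north
            meta.z + meta.range_m * math.sin(pitch))   # up

# Device at the origin looking due north, target 10 m away at eye level.
print(target_position(CaptureMetadata(0, 0, 0, 0, 0, 10)))  # (0.0, 10.0, 0.0)
```

A production server would additionally convert device GPS fixes into the shared frame and account for sensor noise; this sketch only shows the core geometric step the claim relies on.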
7. A method for creating an environment for use with a location based experience, comprising:
capturing a plurality of images comprising associated metadata near a point of interest with a plurality of mobile devices accompanying a number of contributors, each mobile device having a depth camera, wherein the associated metadata for said image includes a location of the mobile device, an orientation of the camera, and range between the depth camera and one or more targets near said point of interest;

communicating said metadata from said mobile devices to a wireless network;

receiving said metadata including range metadata at an image processing server connected to the network; and

processing the metadata including range metadata to determine the location of one or more targets in the images and to build a 3D model of one or more targets near the point of interest, using the range between the depth camera and one or more targets near said point of interest.

Dependent claims: 8, 9, 10.
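The final processing step of claim 7 leaves open how observations of the same target from multiple contributors are combined. One deliberately naive option (the function name and the plain-averaging strategy are illustrative assumptions, not the patent's method) is to fuse independent range-derived position estimates by averaging:

```python
def fuse_observations(points):
    """Average a list of (x, y, z) position estimates of a single target,
    e.g. one estimate per contributing mobile device."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

# Three contributors report slightly different estimates of one target.
estimates = [(1.0, 2.0, 0.0), (1.2, 1.8, 0.2), (0.8, 2.2, -0.2)]
fused = fuse_observations(estimates)
```

A real crowdsourced pipeline would weight estimates by sensor accuracy and reject outliers before averaging; the point here is only that each depth-camera range contributes an independent 3D constraint on the model.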
11. A portable device for assisting in capturing 3D information near a point of interest, comprising:
a conventional optical camera for capturing an image of the point of interest;

a depth camera for capturing range metadata comprising the range between the depth camera and one or more targets near said point of interest and excluding depth of field, said depth camera using one or more of the following sensors: sheet of light, structured light, time of flight, interferometry, coded aperture, LIDAR, light field, or stereo triangulation;

memory for storing metadata associated with said image or range metadata or both;

a communication link to an experience server, operable to load information relevant to the point of interest to the device, operable to transmit said range metadata to build a 3D model proximate the point of interest and to download at least a portion of said 3D model proximate the point of interest; and

a display operable to view a perspective view of said 3D model of said point of interest from said device position proximate said point of interest, said display operable to show at least some of said load information as an artificial reality ("AR") message.

Dependent claims: 12, 13, 14, 15.
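The sensor technologies enumerated in claim 11 map naturally onto a device-side capture format. The sketch below is hypothetical (the JSON schema, field names, and `package_capture` function are assumptions for illustration, not part of the claim) and shows how a portable device might tag each upload with its depth-sensor type before transmitting it over the communication link:

```python
import json
from enum import Enum

class DepthSensor(Enum):
    """The sensor technologies enumerated in claim 11."""
    SHEET_OF_LIGHT = "sheet of light"
    STRUCTURED_LIGHT = "structured light"
    TIME_OF_FLIGHT = "time of flight"
    INTERFEROMETRY = "interferometry"
    CODED_APERTURE = "coded aperture"
    LIDAR = "LIDAR"
    LIGHT_FIELD = "light field"
    STEREO_TRIANGULATION = "stereo triangulation"

def package_capture(lat, lon, yaw_deg, range_m, sensor: DepthSensor) -> str:
    """Serialize one capture's associated metadata (device location,
    camera orientation, range, sensor type) for upload to the server."""
    return json.dumps({
        "location": {"lat": lat, "lon": lon},
        "orientation_deg": yaw_deg,
        "range_m": range_m,
        "sensor": sensor.value,
    })

msg = package_capture(48.8584, 2.2945, 90.0, 12.5, DepthSensor.TIME_OF_FLIGHT)
```

Keeping the sensor type in the metadata would let the image processing server weight or calibrate each range measurement according to the technology that produced it.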
16. A method of sharing content in a location based experience, comprising:
capturing a plurality of images and associated metadata near a point of interest, including range metadata comprising range between a depth camera and a target in an image;

processing the captured images, range, and associated metadata to build a 3D model of one or more targets near said point of interest;

storing the 3D model of one or more targets in an experience platform connected to a network;

accessing the experience platform using the network to access the 3D model of one or more targets; and

viewing the 3D model of one or more targets using glasses wearable by a user.

Dependent claims: 17, 18, 19, 20, 21.
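Viewing the stored 3D model from a user-selected location and orientation amounts to rendering its points through a virtual camera. A minimal pinhole-projection sketch (the function name, the heading-only yaw convention, and the ENU coordinate frame are assumptions; a real renderer would also handle pitch, roll, clipping, and rasterization):

```python
import math

def project_point(point, eye, yaw_deg, focal=1.0):
    """Project a world-space point onto the image plane of a virtual
    pinhole camera at `eye`, looking along heading `yaw_deg`
    (0 = north/+y, clockwise positive). Returns (u, v) image
    coordinates, or None if the point is behind the viewer."""
    yaw = math.radians(yaw_deg)
    dx, dy, dz = (p - e for p, e in zip(point, eye))
    # Rotate the world-space offset into the camera frame.
    depth = dx * math.sin(yaw) + dy * math.cos(yaw)   # along the view axis
    right = dx * math.cos(yaw) - dy * math.sin(yaw)   # across the view axis
    if depth <= 0:
        return None  # behind the viewer; not rendered
    return (focal * right / depth, focal * dz / depth)

# A target 10 m due north of the viewer projects to the image centre.
print(project_point((0, 10, 0), (0, 0, 0), 0))  # (0.0, 0.0)
```

Each model point is projected this way for whatever eye position and heading the user selects, which is how the platform (or wearable glasses) can present the point of interest from viewpoints no single contributor occupied.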
Specification