Managing virtual content displayed to a user based on mapped user location
Abstract
A technique for rendering virtual content to a user stores map data of features in a physical environment of the user and measures the location of the user with stationary sensors placed at respective locations within the environment. A server provides the location of the user and portions of the map data to a headset worn by the user. The headset is thus enabled to render virtual content at apparent locations that are based on the measured location of the user and the features described by the map data.
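The pipeline in the abstract — fusing readings from stationary sensors into a user location, then selecting the nearby portion of the map for the headset — can be sketched as follows. This is only an illustrative sketch, not the patented implementation; the data shapes (a reading as a sensor position plus a measured offset to the user, a `Feature` record) are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    """One mapped feature of the physical space (hypothetical record type)."""
    name: str
    position: tuple  # (x, y, z) in the map frame

def estimate_user_location(readings):
    """Fuse readings from stationary sensors into one user location.

    Each reading is (sensor_xyz, offset_xyz): the sensor's fixed position
    and the user's displacement as seen by that sensor. Per-sensor
    estimates are averaged to reduce noise.
    """
    estimates = [
        tuple(s + o for s, o in zip(sensor, offset))
        for sensor, offset in readings
    ]
    n = len(estimates)
    return tuple(sum(axis) / n for axis in zip(*estimates))

def map_portion_near(user_xyz, features, radius):
    """Select the portion of the map data close enough to send to the headset."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    return [f for f in features if dist2(f.position, user_xyz) <= radius ** 2]
```

A server loop would call `estimate_user_location` as sensor inputs arrive and push the result, together with `map_portion_near(...)`, to the headset for rendering.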
52 Citations
30 Claims
1. A method of managing virtual content to be displayed to users via three-dimensional imaging headsets, the method comprising:
- measuring locations of a user in a physical space as the user moves through the physical space, by a server apparatus receiving inputs from multiple stationary sensors positioned at respective sensor locations within the physical space and processing the inputs to generate the locations of the user;
- storing map data that describes a map of the physical space;
- specifying a set of holograms that have apparent locations that are defined relative to the map data; and
- providing the measured locations of the user and at least a portion of the map data to a headset worn by the user, to enable the headset to render the set of holograms at the apparent locations relative to the map data and from a user perspective based on the measured locations of the user,
wherein the method further comprises generating an avatar of the user, the avatar providing a virtual representation of the user, wherein generating the avatar includes:
- directing the user to apply wearable sensors to respective parts of a body of the user;
- with the user wearing the wearable sensors, measuring locations of the wearable sensors and locations of the headset as the user performs a set of predetermined physical movements; and
- generating a skeletal model of the user based on the measured locations of the wearable sensors and of the headset,
wherein, when generating the avatar, measuring the locations of the wearable sensors and the locations of the headset includes (i) directing the user to assume multiple predetermined physical postures and (ii) for each physical posture, generating a three-dimensional map of the locations of the wearable sensors and the location of the headset while the user is assuming the respective physical posture, and
wherein generating the skeletal model includes estimating a set of joint locations and a set of limb lengths of the user based on changes in three-dimensional maps that result from the user assuming different physical postures.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21)
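The limb-length estimation recited at the end of claim 1 can be illustrated with a minimal sketch: given one three-dimensional map per predetermined posture, the length of a limb spanned by two wearable sensors is estimated from the inter-sensor distances across postures. The sensor labels and the averaging strategy are assumptions for illustration, not the claimed method.

```python
import math

def limb_length(posture_maps, sensor_a, sensor_b):
    """Estimate a limb length as the mean distance between two wearable
    sensors over several posture maps.

    posture_maps: list of dicts, one per predetermined posture, mapping a
    (hypothetical) sensor label -> (x, y, z) location in that posture.
    """
    dists = [math.dist(pm[sensor_a], pm[sensor_b]) for pm in posture_maps]
    return sum(dists) / len(dists)

def skeletal_model(posture_maps, limb_pairs):
    """Build a minimal skeletal model: one estimated length per limb,
    where each limb is a pair of adjacent sensor labels."""
    return {(a, b): limb_length(posture_maps, a, b) for a, b in limb_pairs}
```

Because a rigid limb keeps its sensors at a fixed separation while postures change, distances that stay nearly constant across maps indicate a limb, and the points about which those distances pivot indicate joints.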
22. A server apparatus comprising control circuitry that includes a set of processing units coupled to memory, the control circuitry constructed and arranged to:
- measure locations of a user in a physical space as the user moves through the physical space, based on receipt of inputs from multiple stationary sensors positioned at respective sensor locations within the physical space;
- store map data that describes a map of the physical space;
- specify a set of holograms that have apparent locations that are defined relative to the map data; and
- provide the measured locations of the user and at least a portion of the map data to a headset worn by the user, to enable the headset to render the set of holograms at the apparent locations relative to the map data and from a user perspective based on the measured locations of the user,
wherein the control circuitry is further constructed and arranged to generate an avatar of the user, the avatar providing a virtual representation of the user, the control circuitry constructed and arranged to:
- direct the user to apply wearable sensors to respective parts of a body of the user;
- with the user wearing the wearable sensors, measure locations of the wearable sensors and locations of the headset as the user performs a set of predetermined physical movements; and
- generate a skeletal model of the user based on the measured locations of the wearable sensors and of the headset,
wherein the control circuitry is further constructed and arranged to:
- generate a first measurement of a location of a particular one of the wearable sensors using internal measurement circuitry within the particular wearable sensor;
- generate a second measurement of the location of the particular wearable sensor by processing inputs received from a set of the stationary sensors, the second measurement establishing a three-dimensional bounding region predicted to contain the particular wearable sensor; and
- perform a wearable-sensor-retraining operation based at least in part on the first measurement of the location of the particular wearable sensor falling outside the three-dimensional bounding region established by the second measurement.
- View Dependent Claims (23, 24)
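The consistency check in claim 22 — triggering a wearable-sensor-retraining operation when the sensor's internally measured location falls outside the bounding region derived from the stationary sensors — can be sketched as below. The axis-aligned box and the fixed margin are assumptions; the claim does not specify the shape of the bounding region.

```python
def bounding_region(estimates, margin=0.1):
    """Axis-aligned box predicted to contain the wearable sensor, built
    from the per-stationary-sensor location estimates plus a margin."""
    lo = tuple(min(e[i] for e in estimates) - margin for i in range(3))
    hi = tuple(max(e[i] for e in estimates) + margin for i in range(3))
    return lo, hi

def needs_retraining(internal_xyz, estimates, margin=0.1):
    """True when the sensor's internal (first) measurement falls outside
    the bounding region established by the second measurement."""
    lo, hi = bounding_region(estimates, margin)
    return not all(l <= p <= h for p, l, h in zip(internal_xyz, lo, hi))
```

In operation, the server would evaluate `needs_retraining` for each wearable sensor and initiate the retraining operation only for sensors whose internal measurement has drifted out of the predicted region.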
25. A computer program product including a set of non-transitory, computer-readable media having instructions which, when executed by control circuitry, cause the control circuitry to perform a method of managing virtual content to be displayed to users, the method comprising:
- measuring locations of a user in a physical space as the user moves through the physical space, by a server apparatus receiving inputs from multiple stationary sensors positioned at respective sensor locations within the physical space and processing the inputs to generate the locations of the user;
- storing map data that describes a map of the physical space;
- specifying a set of holograms that have apparent locations that are defined relative to the map data; and
- providing the measured locations of the user and at least a portion of the map data to a headset worn by the user, to enable the headset to render the set of holograms at the apparent locations relative to the map data and from a user perspective based on the measured locations of the user,
wherein the method further comprises generating an avatar of the user, the avatar providing a virtual representation of the user, wherein generating the avatar includes:
- directing the user to apply wearable sensors to respective parts of a body of the user;
- with the user wearing the wearable sensors, measuring locations of the wearable sensors and locations of the headset as the user performs a set of predetermined physical movements; and
- generating a skeletal model of the user based on the measured locations of the wearable sensors and of the headset,
wherein the user is a first user, and wherein the method further comprises:
- repeatedly updating the avatar of the first user to reflect movement of the first user as indicated by the wearable sensors; and
- transmitting a real-time version of the avatar of the first user to a second headset worn by a second user, such that the second headset is enabled to render the avatar of the first user to the second user as the avatar of the first user is updated to reflect the movement of the first user,
wherein the physical space is a first physical space, wherein the second user is located in a second physical space distinct from the first physical space, and wherein providing the real-time version of the avatar of the first user to the second headset includes transmitting, by the server apparatus in the first physical space, the avatar of the first user over a computer network to a second server apparatus running in the second physical space, and
wherein the method further comprises:
- directing the second headset to render a scene within the second physical space, the scene including a set of holograms that establish a virtual environment of the scene and a set of virtual interconnects at respective locations within the virtual environment of the scene, each virtual interconnect occupying a respective area that users may enter and exit; and
- in response to detecting that the second user has entered a virtual interconnect, directing the second headset to present a choice to the second user that enables the second user to change the scene or a characteristic thereof.
- View Dependent Claims (26, 27, 28, 29, 30)
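The virtual-interconnect detection in claim 25 — each interconnect occupies an area that users may enter, and entry triggers presenting a scene-change choice — can be sketched as a simple containment test. The spherical area and the `VirtualInterconnect` record are assumptions for illustration; the claim leaves the area's geometry open.

```python
from dataclasses import dataclass

@dataclass
class VirtualInterconnect:
    """One virtual interconnect in the scene (hypothetical record type)."""
    name: str
    center: tuple  # (x, y, z) within the scene's virtual environment
    radius: float

    def contains(self, xyz):
        return sum((a - b) ** 2 for a, b in zip(xyz, self.center)) <= self.radius ** 2

def interconnect_entered(user_xyz, interconnects):
    """Return the first interconnect whose area contains the user, or None.

    When an interconnect is returned, the headset would be directed to
    present the choice that lets the user change the scene or a
    characteristic thereof.
    """
    for vi in interconnects:
        if vi.contains(user_xyz):
            return vi
    return None
```

A rendering loop would call `interconnect_entered` with the second user's measured location each frame and, on a transition from `None` to an interconnect, show the choice UI.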
Specification