Interacting with a network to transmit virtual image data in augmented or virtual reality systems
First Claim
1. A method, comprising:
sensing a physical object at a first location using at least one or more outward facing cameras of a head-mounted user display device and recognizing a type of the physical object sensed at the first location by the head-mounted user display device by:
capturing one or more field-of-view images, extracting one or more sets of points from the one or more field-of-view images, extracting one or more fiducials for at least one physical object in the one or more field-of-view images based on at least some of the one or more sets of points, processing at least some of the one or more fiducials for the at least one physical object to identify the type of the physical object sensed at the first location, wherein processing at least some of the one or more fiducials comprises comparing the one or more fiducials to sets of previously stored fiducials;
associating a virtual object with the sensed physical object based on the type of the physical object as a result of both recognizing the type of the sensed physical object at the first location by the head-mounted user display device using the at least one or more outward facing cameras and identifying a predetermined relationship between the virtual object and the type of the sensed physical object recognized at the first location by the head-mounted user display device;
receiving virtual world data representing a virtual world, the virtual world data including at least data corresponding to manipulation of the virtual object in the virtual world by a first user at the first location;
transmitting at least the virtual world data corresponding to manipulation of the virtual object by the first user at the first location to a head-mounted user display device, wherein the head-mounted user display device renders a display image associated with at least a portion of the virtual world data including at least the virtual object to the first user based on at least an estimated depth of focus of a first user's eyes;
creating additional virtual world data originating from the manipulation of the virtual object by the first user at the first location; and
transmitting the additional virtual world data to a second user at a second location different from the first location for presentation to the second user, such that the second user experiences the additional virtual world data from the second location.
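The recognition steps recited in the claim (capturing field-of-view images, extracting sets of points, deriving fiducials, and comparing them to previously stored fiducial sets before associating a virtual object) can be sketched in code. This is an illustrative approximation only, not the claimed implementation: the threshold-based `extract_points` detector, the centroid-plus-spread fiducial signature, and the stored reference values and object/virtual-object pairings are all assumptions for the example.

```python
import numpy as np

def extract_points(image):
    """Extract a set of interest points from a field-of-view image.
    (Placeholder detector: keep pixels brighter than the image mean;
    a real system would use a proper feature detector.)"""
    return np.argwhere(image > image.mean())

def extract_fiducials(points):
    """Derive a fiducial descriptor for an object from its points.
    Here a simple centroid-plus-spread signature, illustrative only."""
    return np.concatenate([points.mean(axis=0), points.std(axis=0)])

def identify_type(fiducials, stored_fiducials):
    """Compare extracted fiducials against previously stored sets and
    return the closest-matching object type."""
    best_type, best_dist = None, float("inf")
    for obj_type, reference in stored_fiducials.items():
        dist = np.linalg.norm(fiducials - reference)
        if dist < best_dist:
            best_type, best_dist = obj_type, dist
    return best_type

# Previously stored fiducial sets keyed by object type (assumed values).
stored = {
    "table": np.array([4.0, 4.0, 2.0, 2.0]),
    "wall":  np.array([1.0, 8.0, 0.5, 4.0]),
}

# Predetermined relationship between object types and virtual objects.
virtual_object_for = {"table": "virtual_chessboard", "wall": "virtual_screen"}

# A field-of-view image with a bright, table-like blob near the center.
image = np.zeros((9, 9))
image[3:6, 3:6] = 1.0

points = extract_points(image)
obj_type = identify_type(extract_fiducials(points), stored)
print(obj_type, "->", virtual_object_for[obj_type])  # table -> virtual_chessboard
```

The nearest-neighbor comparison stands in for the claim's "comparing the one or more fiducials to sets of previously stored fiducials"; the final lookup mirrors "identifying a predetermined relationship between the virtual object and the type of the sensed physical object."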
Abstract
One embodiment is directed to a system for enabling two or more users to interact within a virtual world comprising virtual world data, comprising a computer network comprising one or more computing devices, the one or more computing devices comprising memory, processing circuitry, and software stored at least in part in the memory and executable by the processing circuitry to process at least a portion of the virtual world data; wherein at least a first portion of the virtual world data originates from a first user virtual world local to a first user, and wherein the computer network is operable to transmit the first portion to a user device for presentation to a second user, such that the second user may experience the first portion from the location of the second user, such that aspects of the first user virtual world are effectively passed to the second user.
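The data flow described in the abstract (a computer network of computing devices with memory and processing circuitry that processes virtual world data originating from a first user's local virtual world and transmits a portion of it to a second user's device) can be sketched as a minimal in-memory model. All class and field names here are hypothetical stand-ins, not terms from the specification.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualWorldData:
    """A portion of virtual world data; `origin_user` records which
    user's local virtual world the portion originated from."""
    origin_user: str
    payload: dict

@dataclass
class UserDevice:
    """A user device that presents received portions to its user."""
    user: str
    presented: list = field(default_factory=list)

    def present(self, portion: VirtualWorldData):
        self.presented.append(portion)

@dataclass
class ComputingDevice:
    """One computing device of the network: stores virtual world data
    in memory and transmits portions to user devices."""
    memory: list = field(default_factory=list)

    def process(self, portion: VirtualWorldData) -> VirtualWorldData:
        self.memory.append(portion)  # process/store at least a portion
        return portion

    def transmit(self, portion: VirtualWorldData, device: UserDevice):
        device.present(portion)      # presentation to the second user

# A first portion originates from the first user's local virtual world...
node = ComputingDevice()
first_portion = VirtualWorldData("user1", {"object": "virtual_lamp", "pose": (1.0, 2.0)})
second_device = UserDevice("user2")
node.transmit(node.process(first_portion), second_device)
# ...so aspects of the first user's world are passed to the second user.
print(second_device.presented[0].origin_user)  # user1
```

The key point the sketch captures is that the second user experiences data whose origin is the first user's local world, without the two devices communicating directly.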
20 Claims
1. A method, comprising:
sensing a physical object at a first location using at least one or more outward facing cameras of a head-mounted user display device and recognizing a type of the physical object sensed at the first location by the head-mounted user display device by: capturing one or more field-of-view images, extracting one or more sets of points from the one or more field-of-view images, extracting one or more fiducials for at least one physical object in the one or more field-of-view images based on at least some of the one or more sets of points, processing at least some of the one or more fiducials for the at least one physical object to identify the type of the physical object sensed at the first location, wherein processing at least some of the one or more fiducials comprises comparing the one or more fiducials to sets of previously stored fiducials;

associating a virtual object with the sensed physical object based on the type of the physical object as a result of both recognizing the type of the sensed physical object at the first location by the head-mounted user display device using the at least one or more outward facing cameras and identifying a predetermined relationship between the virtual object and the type of the sensed physical object recognized at the first location by the head-mounted user display device;

receiving virtual world data representing a virtual world, the virtual world data including at least data corresponding to manipulation of the virtual object in the virtual world by a first user at the first location;

transmitting at least the virtual world data corresponding to manipulation of the virtual object by the first user at the first location to a head-mounted user display device, wherein the head-mounted user display device renders a display image associated with at least a portion of the virtual world data including at least the virtual object to the first user based on at least an estimated depth of focus of a first user's eyes;

creating additional virtual world data originating from the manipulation of the virtual object by the first user at the first location; and

transmitting the additional virtual world data to a second user at a second location different from the first location for presentation to the second user, such that the second user experiences the additional virtual world data from the second location.

Dependent Claims: 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
12. A system, comprising:
a first head-mounted user display device associated with a first user at a first location;

a second head-mounted user display device associated with a second user at a second location different from the first location; and

a server having a processor, the server operatively coupled to the first head-mounted user display device and the second head-mounted user display device, wherein the server is configured to:

sense a physical object at a first location using at least one or more outward facing cameras of a head-mounted user display device and recognize a type of the physical object sensed at the first location by the head-mounted user display device by: capturing one or more field-of-view images, extracting one or more sets of points from the one or more field-of-view images, extracting one or more fiducials for at least one physical object in the one or more field-of-view images based on at least some of the one or more sets of points, processing at least some of the one or more fiducials for the at least one physical object to identify the type of the physical object sensed at the first location, wherein processing at least some of the one or more fiducials comprises comparing the one or more fiducials to sets of previously stored fiducials;

associate a virtual object with the sensed physical object based on the type of the physical object as a result of both recognizing the type of the sensed physical object at the first location by the head-mounted user display device using the at least one or more outward facing cameras and identifying a predetermined relationship between the virtual object and the type of the sensed physical object recognized at the first location by the head-mounted user display device;

receive virtual world data representing a virtual world, the virtual world data including at least data corresponding to manipulation of the virtual object in the virtual world by a first user at the first location;

transmit at least the virtual world data corresponding to manipulation of the virtual object by the first user at the first location to a head-mounted user display device, wherein the head-mounted user display device renders a display image associated with at least a portion of the virtual world data including at least the virtual object to the first user based on at least an estimated depth of focus of a first user's eyes;

create additional virtual world data originating from the manipulation of the virtual object by the first user at the first location; and

transmit the additional virtual world data to a second user at a second location different from the first location for presentation to the second user, such that the second user experiences the additional virtual world data from the second location.

Dependent Claims: 13, 14, 15, 16, 17, 18, 19, 20
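The system of claim 12 centers on a server coupled to two head-mounted display devices: a manipulation by the first user creates additional virtual world data, which the server transmits to the second user's device at a different location. A minimal sketch of that relay, with hypothetical class names and a dictionary standing in for virtual world state:

```python
class HeadMountedDisplay:
    """Stand-in for a head-mounted user display device at one location."""
    def __init__(self, user, location):
        self.user, self.location = user, location
        self.rendered = []

    def render(self, data):
        self.rendered.append(data)

class Server:
    """Sketch of the claimed server: operatively coupled to both devices,
    it receives a manipulation of a virtual object by the first user and
    transmits the resulting additional virtual world data to the second
    user's device at a different location."""
    def __init__(self, first_device, second_device):
        self.first_device = first_device
        self.second_device = second_device
        self.world = {}  # server-side copy of the virtual world state

    def receive_manipulation(self, object_id, new_state):
        # Manipulation by the first user creates additional virtual world data.
        self.world[object_id] = new_state
        additional = {"object": object_id, "state": new_state}
        # Transmit it so the second user experiences it from the second location.
        self.second_device.render(additional)
        return additional

hmd1 = HeadMountedDisplay("user1", "location A")
hmd2 = HeadMountedDisplay("user2", "location B")
server = Server(hmd1, hmd2)
server.receive_manipulation("virtual_chessboard", {"pose": (3.0, 4.0)})
print(hmd2.rendered[0]["object"])  # virtual_chessboard
```

The sketch omits the recognition pipeline and depth-of-focus rendering recited in the claim; it illustrates only the server-mediated transmission path between the two locations.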
Specification