Mixed reality system learned input and functions
First Claim
1. A method of interpreting commands to a mixed reality environment, comprising:
rendering one or more virtual objects within a field of view;
receiving combined input actions with different command types from multiple users corresponding with the virtual objects in the field of view;
monitoring input actions, the input actions comprising a combination of data from sensors detecting a natural human interaction with a virtual object, associated with the different command types linked to create natural states of input with a same resulting action, for:
known input actions of the multiple users having a same result enabling known functions of virtual objects;
unknown input actions for which known functions of virtual objects are configured to be enabled;
known input actions for which unknown functions of a virtual object are configured to be enabled such that at least one new function is created and associated with the virtual object;
unknown input actions for which unknown functions of a virtual object are configured to be enabled; and
for each unknown input action detected, determining input data resulting in an input action to link to a function; and
for each unknown function, creating a function for the virtual object.
2 Assignments
0 Petitions
Abstract
A see-through, near-eye, mixed reality display apparatus providing a mixed reality environment wherein one or more virtual objects and one or more real objects exist within the view of the device. Each of the real and virtual objects has a commonly defined set of attributes understood by the mixed reality system, allowing the system to manage relationships and interactions between virtual objects and other virtual objects, and between virtual and real objects.
20 Claims
1. A method of interpreting commands to a mixed reality environment, comprising:
rendering one or more virtual objects within a field of view;
receiving combined input actions with different command types from multiple users corresponding with the virtual objects in the field of view;
monitoring input actions, the input actions comprising a combination of data from sensors detecting a natural human interaction with a virtual object, associated with the different command types linked to create natural states of input with a same resulting action, for:
known input actions of the multiple users having a same result enabling known functions of virtual objects;
unknown input actions for which known functions of virtual objects are configured to be enabled;
known input actions for which unknown functions of a virtual object are configured to be enabled such that at least one new function is created and associated with the virtual object;
unknown input actions for which unknown functions of a virtual object are configured to be enabled; and
for each unknown input action detected, determining input data resulting in an input action to link to a function; and
for each unknown function, creating a function for the virtual object.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9)
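Claim 1 classifies each monitored input action by whether the action and its target function are already known to the system, with a distinct outcome for each of the four pairings. A minimal sketch of that decision logic, using hypothetical helper names not taken from the patent:

```python
# Illustrative sketch of the claim-1 branches: a monitored input action and a
# virtual-object function are each checked against the known sets, and the
# four known/unknown pairings map to the four outcomes the claim recites.
# All names and return strings are assumptions for illustration only.

def classify_input(action, function, known_actions, known_functions):
    """Return which of the four claim-1 branches applies."""
    action_known = action in known_actions
    function_known = function in known_functions
    if action_known and function_known:
        return "enable known function"        # known action, known function
    if not action_known and function_known:
        return "link new action to function"  # unknown action, known function
    if action_known and not function_known:
        return "create new function"          # known action, unknown function
    return "create new action and function"   # both unknown

print(classify_input("pinch", "zoom", {"pinch"}, {"zoom"}))
# -> enable known function
```

The key point the sketch makes concrete is that an unknown input action or unknown function is not rejected; it triggers creation of a new link or a new function for the virtual object.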
10. A see-through head mounted display apparatus, comprising:
a see-through, near-eye, augmented reality display;
one or more processing devices in wireless communication with the apparatus, the one or more processing devices determine an environment, one or more real objects in the environment and one or more virtual objects in the environment,
the one or more processing devices receive combined input actions with different command types from multiple users corresponding with the one or more virtual objects in a field of view of the display, and monitor received input actions from the multiple users for:
known input actions having a same result enabling known functions of virtual objects;
unknown input actions for which known functions of virtual objects are configured to be enabled;
known input actions for which unknown functions of a virtual object are configured to be enabled; and
unknown input actions for which unknown functions of a virtual object are configured to be enabled; and
the one or more processing devices determining input data resulting in a new input action for a function, and creating a new function for one or more virtual objects based on a correlation between emerging patterns and known inputs for the input action and a response to the input action.
- View Dependent Claims (11, 12, 13, 14)
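Claim 10's final clause creates a new input action "based on a correlation between emerging patterns and known inputs." One simple reading is that an unrecognized sensor pattern is tracked until it recurs often enough to be promoted. A hedged sketch of that reading, where the class name, methods, and promotion threshold are all illustrative assumptions:

```python
# Sketch of one interpretation of claim 10: unrecognized sensor patterns are
# counted as "emerging", and a pattern that recurs enough times is promoted
# to a known input action. The threshold and all names are assumptions.

from collections import Counter

class PatternMonitor:
    def __init__(self, promote_after=3):
        self.known_inputs = set()
        self.emerging = Counter()   # occurrence counts of unrecognized patterns
        self.promote_after = promote_after

    def observe(self, pattern):
        """Record one sensor pattern; promote it to a known input action
        once it has been seen promote_after times."""
        if pattern in self.known_inputs:
            return "known input action"
        self.emerging[pattern] += 1
        if self.emerging[pattern] >= self.promote_after:
            self.known_inputs.add(pattern)
            return "new input action created"
        return "emerging pattern"

m = PatternMonitor()
m.observe("wave-left")
m.observe("wave-left")
print(m.observe("wave-left"))  # third occurrence promotes the pattern
# -> new input action created
```

A real system would correlate multi-sensor feature vectors rather than count exact matches, but the promotion-on-recurrence structure is the same.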
15. A method of generating new input actions and new functions for virtual objects in a see-through head mounted display system, comprising:
rendering virtual objects in an environment, each object having at least a viewable physical representation and behavior, the virtual object responsive to different input actions having different command types;
receiving input data from a plurality of sensors including data representing combined input actions with different command types from multiple users corresponding with virtual and real objects in the environment;
monitoring the input actions, where the input actions comprise a combination of input data from the plurality of sensors detecting a natural human interaction with the virtual object, the input data summed into a summed input action representing a new input action to implement a function of the virtual object;
determining whether the summed input data represents:
unknown input actions for which a known series of functions of virtual objects are configured to be enabled;
a combination of known input actions from the plurality of sensors from the multiple users for which unknown functions of a virtual object are configured to be enabled with a same result; and
unknown input actions for which unknown functions of a virtual object are configured to be enabled;
for each unknown input action detected, determining input data resulting in the new input action and determining contextual relevancy of the new input action to link to a new function;
for each unknown function, creating the new function to link to the virtual object; and
linking the new input action to the created function, based on the contextual relevancy, to one or more virtual objects in the environment.
- View Dependent Claims (16, 17, 18, 19, 20)
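Claim 15 adds two steps beyond claim 1: input data from several sensors is combined into one "summed input action," and the new action is linked to a virtual object by "contextual relevancy." A minimal sketch of both steps, where the data structures, the tag format, and the overlap-based relevancy score are all illustrative assumptions:

```python
# Sketch of claim 15's two added steps: (1) sum per-sensor readings into one
# combined input action, (2) link that action to the virtual object with the
# highest contextual-relevancy score. Tag strings and the overlap metric are
# assumptions; the patent does not specify a representation.

def sum_inputs(sensor_readings):
    """Merge per-sensor readings (e.g. gaze, gesture, voice) into one
    summed input action, represented here as a sorted tuple of tags."""
    return tuple(sorted(f"{sensor}:{value}" for sensor, value in sensor_readings))

def link_by_relevancy(summed_action, virtual_objects):
    """Link the action to the object whose context shares the most tags."""
    def relevancy(obj):
        return len(set(summed_action) & set(obj["context"]))
    return max(virtual_objects, key=relevancy)["name"]

action = sum_inputs([("gaze", "lamp"), ("gesture", "tap")])
objects = [
    {"name": "lamp", "context": ["gaze:lamp", "voice:on"]},
    {"name": "door", "context": ["gaze:door"]},
]
print(link_by_relevancy(action, objects))
# -> lamp
```

The gaze tag overlaps the lamp's context, so the summed tap gesture is linked to the lamp rather than the door; that is the role contextual relevancy plays in the claim's final linking step.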
Specification