Methods and systems for multiple access to a single hardware data stream
First Claim
1. A method, comprising:
defining, by a processing device, a first coordinate in a three-dimensional (3D) space to display a virtual object to a viewer;
displaying, by a display, the virtual object at the first coordinate in the 3D space;
determining, by a sensor, that an end-effector interacting with the virtual object at a first point in time is pointing to a second coordinate in the 3D space that is different than the first coordinate where the virtual object is displayed;
determining, by the processing device, an offset value between the first coordinate where the virtual object is located and the second coordinate to which the end-effector is pointing, wherein the offset value indicates a difference between the first coordinate and the second coordinate;
determining, by the processing device, a third coordinate in the 3D space, wherein the third coordinate is the first coordinate adjusted by the offset value so that the viewer perceives the virtual object as being located at the first coordinate in the 3D space;
determining, by the sensor, that the end-effector interacting with the virtual object at a second point in time is pointing to a fourth coordinate in the 3D space that is different than the third coordinate where the virtual object is displayed; and
in response to the end-effector pointing to the fourth coordinate in the 3D space that is not the third coordinate, iteratively adjusting, by the processing device, the fourth coordinate for the virtual object until the end-effector points to the first coordinate.
Abstract
A target is output at an ideal position in 3D space. A viewer indicates the apparent position of the target, and the indication is sensed. An offset between the ideal and apparent positions is determined, and an adjustment is derived from the offset such that the target, when output with the adjustment, appears to the viewer at the ideal position. The adjustment is applied to the target and/or to a second entity, so that the entities appear to the viewer in the ideal position. The indication may be monocular, with a separate indication for each eye, or binocular, with a single viewer indication for both eyes. The indication may also serve as communication, such as a PIN input, so that calibration is transparent to the viewer. The method may be continuous, intermittent, or otherwise ongoing over time.
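The calibration the abstract describes can be sketched numerically. This is an illustrative sketch, not the patent's implementation; the function names, and the assumption that positions are (x, y, z) tuples, are mine.

```python
# Illustrative sketch of the abstract's offset calibration.
# All names are hypothetical; coordinates are assumed (x, y, z) tuples.

def calibration_adjustment(ideal, apparent):
    """Offset between where the target was output (ideal) and where
    the viewer indicated it appears (apparent)."""
    return tuple(i - a for i, a in zip(ideal, apparent))

def adjusted_output(ideal, adjustment):
    """Position at which to output the target so that, under the same
    perceptual offset, it appears at the ideal position."""
    return tuple(i + d for i, d in zip(ideal, adjustment))

ideal = (0.0, 0.0, 1.0)       # intended position of the target
apparent = (0.1, -0.05, 1.0)  # position the viewer indicated
adj = calibration_adjustment(ideal, apparent)
render = adjusted_output(ideal, adj)  # (-0.1, 0.05, 1.0)
```

Under this model, outputting the target at `render` cancels the viewer's perceptual offset, so the target appears at `ideal`.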
20 Claims
1. A method, comprising:
defining, by a processing device, a first coordinate in a three-dimensional (3D) space to display a virtual object to a viewer;
displaying, by a display, the virtual object at the first coordinate in the 3D space;
determining, by a sensor, that an end-effector interacting with the virtual object at a first point in time is pointing to a second coordinate in the 3D space that is different than the first coordinate where the virtual object is displayed;
determining, by the processing device, an offset value between the first coordinate where the virtual object is located and the second coordinate to which the end-effector is pointing, wherein the offset value indicates a difference between the first coordinate and the second coordinate;
determining, by the processing device, a third coordinate in the 3D space, wherein the third coordinate is the first coordinate adjusted by the offset value so that the viewer perceives the virtual object as being located at the first coordinate in the 3D space;
determining, by the sensor, that the end-effector interacting with the virtual object at a second point in time is pointing to a fourth coordinate in the 3D space that is different than the third coordinate where the virtual object is displayed; and
in response to the end-effector pointing to the fourth coordinate in the 3D space that is not the third coordinate, iteratively adjusting, by the processing device, the fourth coordinate for the virtual object until the end-effector points to the first coordinate.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10)
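The final "iteratively adjusting" step of claim 1 can be read as a feedback loop: re-display, re-sense, correct, repeat until the sensed pointing coordinate matches the original target coordinate. A hypothetical sketch follows; `display_at`, `sense_pointing`, the constant-bias viewer model, and the convergence tolerance are all invented for illustration and are not claim language.

```python
# Hypothetical feedback loop for claim 1's "iteratively adjusting" step.
# display_at / sense_pointing stand in for the display and sensor; the
# constant-bias viewer model and tolerance are illustrative assumptions.

def iterate_until_aligned(display_at, sense_pointing, target, tol=1e-6, max_iter=50):
    render = list(target)
    for _ in range(max_iter):
        display_at(tuple(render))
        pointed = sense_pointing()
        error = [t - p for t, p in zip(target, pointed)]   # pointing vs. target
        if max(abs(e) for e in error) < tol:
            break
        render = [r + e for r, e in zip(render, error)]    # nudge render coordinate
    return tuple(render)

# Simulated viewer whose perception is offset by a constant bias:
bias = (0.2, -0.1, 0.0)
state = {}
def display_at(coord): state["render"] = coord
def sense_pointing(): return [r + b for r, b in zip(state["render"], bias)]

final = iterate_until_aligned(display_at, sense_pointing, (0.0, 0.0, 1.0))
# the loop settles on a render coordinate shifted opposite to the bias
```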
11. A method, comprising:
defining, by a processing device, a first coordinate in an augmented reality environment or a virtual reality environment;
displaying, by a display, a first virtual object to a viewer at the first coordinate;
determining, by the processing device, that an end-effector interacting with the first virtual object is pointing to a second coordinate in the augmented reality environment or the virtual reality environment, wherein the second coordinate is different than the first coordinate;
determining, by the processing device, a difference between the first coordinate where the first virtual object is located and the second coordinate to which the end-effector is pointing;
determining, by the processing device, a third coordinate in the augmented reality environment or the virtual reality environment, wherein the third coordinate is the first coordinate adjusted by the determined difference;
generating a second virtual object;
defining, by the processing device, a fourth coordinate to display the second virtual object to the viewer in the augmented reality environment or the virtual reality environment, wherein the fourth coordinate is based on the determined difference; and
displaying, by the display, the first virtual object at the third coordinate and the second virtual object at the fourth coordinate.
- View Dependent Claims (12, 13, 14, 15, 16)
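Claim 11 reuses the difference measured against the first virtual object to place a second object. A minimal sketch under my own naming; the claim only says the fourth coordinate is "based on" the determined difference, so applying the same shift to an assumed base coordinate (`second_base`) is an illustration, not the claimed rule.

```python
# Hypothetical sketch: reuse the measured difference for a second object.
# second_base and the equal-shift rule are assumptions, not claim language.

def place_objects(first, pointed, second_base):
    diff = tuple(f - p for f, p in zip(first, pointed))       # determined difference
    third = tuple(f + d for f, d in zip(first, diff))         # adjusted first object
    fourth = tuple(s + d for s, d in zip(second_base, diff))  # second object shares the shift
    return third, fourth

third, fourth = place_objects((1.0, 1.0, 2.0),   # first coordinate
                              (1.5, 0.75, 2.0),  # where the end-effector pointed
                              (3.0, 0.0, 2.0))   # assumed base for second object
```

Both objects are then displayed with the same calibration correction, so the viewer perceives them at their intended positions.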
17. An apparatus comprising:
a first three-dimensional (3D) display operable to output a virtual object at a first coordinate in a 3D space to a first eye of a viewer;
a second 3D display operable to output the virtual object at a second coordinate in the 3D space to a second eye of the viewer, wherein the first coordinate is different than the second coordinate;
a first sensor configured to measure a difference between a location of an end-effector in the 3D space interacting with the virtual object at the first coordinate relative to the first eye of the viewer;
a second sensor configured to measure a difference between the location of the end-effector in the 3D space interacting with the virtual object at the second coordinate relative to the second eye of the viewer; and
a processing device coupled to the first 3D display, the second 3D display, the first sensor, and the second sensor, wherein the processing device is operable to:
determine that the end-effector is pointing to a third coordinate on the first 3D display that is different than the first coordinate where the virtual object is displayed at a first point in time;
determine that the end-effector is pointing to a fourth coordinate on the second 3D display that is different than the second coordinate where the virtual object is displayed at the first point in time;
determine a first difference between the first coordinate where the virtual object is located and the third coordinate to which the end-effector is pointing on the first 3D display;
determine a second difference between the second coordinate where the virtual object is located and the fourth coordinate to which the end-effector is pointing on the second 3D display;
determine a fifth coordinate in the 3D space that is the first coordinate adjusted by the first difference so that the viewer perceives the virtual object as being located at the first coordinate on the first 3D display, wherein the first 3D display is to display the virtual object at the fifth coordinate on the first 3D display at a second point in time; and
determine a sixth coordinate in the 3D space that is the second coordinate adjusted by the second difference so that the viewer perceives the virtual object as being located at the second coordinate on the second 3D display, wherein the second 3D display is to display the virtual object at the sixth coordinate on the second 3D display at the second point in time.
- View Dependent Claims (18, 19, 20)
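Claim 17's apparatus performs the same correction once per eye, each with its own display and sensor reading. A hypothetical sketch, assuming (x, y, z) tuples and my own function name; the per-eye coordinates below are example values, not from the patent.

```python
# Hypothetical per-eye correction for claim 17's binocular apparatus.
# Coordinates are assumed (x, y, z) tuples; names and values are illustrative.

def eye_correction(displayed, pointed):
    """Shift one eye's render coordinate by the difference between where
    the object was displayed and where the end-effector pointed."""
    diff = tuple(c - p for c, p in zip(displayed, pointed))
    return tuple(c + d for c, d in zip(displayed, diff))

# Each eye is calibrated independently from its own sensor reading:
fifth = eye_correction((-0.5, 0.0, 1.0), (-0.25, 0.0, 1.0))  # first eye / display
sixth = eye_correction((0.5, 0.0, 1.0), (0.5, 0.25, 1.0))    # second eye / display
```

Displaying the object at `fifth` and `sixth` at the second point in time makes it appear, to each eye, at its original first and second coordinates.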
Specification