Control System for Navigating a Principal Dimension of a Data Space
Abstract
Systems and methods are described for navigating through a data space. The navigating comprises detecting a gesture of a body from gesture data received via a detector. The gesture data is absolute three-space location data of an instantaneous state of the body at a point in time and physical space. The detecting comprises identifying the gesture using the gesture data. The navigating comprises translating the gesture to a gesture signal, and navigating through the data space in response to the gesture signal. The data space is a data-representational space comprising a dataset represented in the physical space.
88 Claims
1. A method for navigating through a data space, the method comprising:

detecting a gesture of a body from gesture data received via a detector, wherein the gesture data is absolute three-space location data of an instantaneous state of the body at a point in time and physical space, the detecting comprising identifying the gesture using only the gesture data;

translating the gesture to a gesture signal;

navigating through the data space in response to the gesture signal, wherein the data space is a data-representational space comprising a dataset represented in the physical space; and

rendering the dataset in a plurality of coplanar data frames that are graphical depictions of a plurality of regions of the data space and displaying each data frame as a visible frame on a display.

2. The method of claim 1, comprising:

detecting a first pose of the body;
activating pushback interaction in response to detecting the first pose.

3. The method of claim 2, comprising:

recording a first position at which the first pose is entered, wherein the first position is a three-space hand position;
setting the first position as an origin, wherein subsequent detected body positions are reported as relative offsets to the origin.
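The origin-and-offset mechanism recited in claims 2-3 can be sketched as follows. This is an illustrative sketch only; the class and method names are assumptions, not terms from the claims.

```python
class PushbackSession:
    """Tracks hand positions relative to the position at which the
    activating pose was first detected (claims 2-3)."""

    def __init__(self):
        self.origin = None  # set when the first pose is detected

    def activate(self, hand_position):
        """Record the three-space hand position at pose entry as the origin."""
        self.origin = tuple(hand_position)

    def offset(self, hand_position):
        """Report a subsequently detected position as a relative offset
        from the recorded origin."""
        if self.origin is None:
            raise RuntimeError("pushback interaction not activated")
        return tuple(p - o for p, o in zip(hand_position, self.origin))
```

Reporting offsets rather than absolute positions lets the later claims (4-15) treat all movement as displacement from the pose-entry point.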
4. The method of claim 3, wherein the detecting comprises detecting a forward movement of the body, wherein the forward movement is movement along a z-axis toward the display, wherein the z-axis is defined as an axis normal to a view surface of the display.

5. The method of claim 4, wherein, in response to the forward movement of the body, the navigating comprises displacing the plurality of data frames along the z-axis, wherein more of the plane in which the data frames lie becomes visible, wherein a first visible frame rendered on the display is seen to recede from the display and neighboring data frames of the first data frame become visible.

6. The method of claim 5, wherein the detecting comprises detecting a rearward movement of the body, wherein the rearward movement is movement along the z-axis away from the display.

7. The method of claim 6, wherein, in response to the rearward movement of the body, the navigating comprises displacing the plurality of data frames along the z-axis, wherein less of the plane in which the data frames lie becomes visible, wherein the first visible frame rendered on the display is seen to verge toward the display and neighboring data frames of the first data frame become less visible.

8. The method of claim 7, comprising continuously updating a displacement along the z-axis of the plurality of data frames in direct response to movement of the body along the z-axis.

9. The method of claim 8, comprising:

detecting a second pose of the body;
terminating pushback interaction in response to detecting the second pose, wherein the terminating comprises displaying a data frame of the plurality of data frames as coplanar with the display.
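The z-axis displacement of claims 4-9 can be sketched as a continuous mapping from the hand's forward offset to a depth of the frame plane, clamped so that terminating pushback can return a frame to coplanarity with the display (claim 9). The function name, units, and gain are illustrative assumptions.

```python
def frame_depth(forward_offset, gain=1.0):
    """Depth of the data-frame plane behind the display surface.

    forward_offset: distance the hand has moved toward the display since
    pushback activation (positive = toward the display, claim 4).
    Returns 0.0 when the plane is coplanar with the display, so rearward
    movement (negative offset, claim 6) can bring the plane back but
    never nearer than the display itself.
    """
    return max(0.0, gain * forward_offset)
```

Claim 8's requirement of continuous updating corresponds to re-evaluating this mapping on every detector sample rather than at discrete gesture boundaries.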
10. The method of claim 3, wherein the detecting comprises detecting right lateral movement of the body, wherein the right lateral movement is movement along an x-axis, wherein the x-axis lies in a plane parallel to a view surface of the display.

11. The method of claim 10, wherein, in response to the right lateral movement of the body, the navigating comprises displacing the plurality of data frames to the right along the x-axis, wherein a first visible frame rendered on the display is seen to slide from the display toward a right side of the display and an adjacent data frame to the first data frame slides into view from a left side of the display.

12. The method of claim 11, wherein the detecting comprises detecting left lateral movement of the body, wherein the left lateral movement is movement along the x-axis.

13. The method of claim 12, wherein, in response to the left lateral movement of the body, the navigating comprises displacing the plurality of data frames to the left along the x-axis, wherein a first visible frame rendered on the display is seen to slide from the display toward a left side of the display and an adjacent data frame to the first data frame slides into view from a right side of the display.

14. The method of claim 13, comprising continuously updating a displacement along the x-axis of the plurality of data frames in direct response to movement of the body along the x-axis.

15. The method of claim 14, comprising:

detecting a second pose of the body;
terminating pushback interaction in response to detecting the second pose, wherein the terminating comprises displaying a data frame of the plurality of data frames as coplanar with the display.
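The lateral navigation of claims 10-15 can be sketched as sliding a row of coplanar frames and selecting the frame nearest the display center. All names and the direction convention are illustrative assumptions.

```python
def visible_frame_index(x_offset, frame_width, n_frames, start_index=0):
    """Index of the frame centered on the display after the frame row has
    slid by x_offset (positive = frames displaced toward the right,
    claims 11 and 13), clamped to the ends of the row.
    """
    shift = round(x_offset / frame_width)
    # Frames sliding right bring the neighboring frame on the left into view.
    return max(0, min(n_frames - 1, start_index - shift))
```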
16. The method of claim 3, wherein the data space comprises a plurality of virtual detents arranged in the plane.

17. The method of claim 16, wherein each virtual detent corresponds to each data frame.

18. The method of claim 3, comprising forming a gestural interaction space comprising an active zone and a dead zone, wherein the active zone is adjacent the display and the dead zone is adjacent the active zone.

19. The method of claim 18, wherein the navigating through the data space in response to the gesture signal is activated in response to the gesture when the gesture is detected in the active zone.

20. The method of claim 18, comprising a feedback indicator rendered on the display.

21. The method of claim 20, wherein the feedback indicator displays feedback indicating the body is in one of the active zone and the dead zone.

22. The method of claim 20, wherein the feedback indicator displays feedback indicating a physical offset of the body from the origin.
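The zone partition and feedback of claims 18-22 can be sketched as a classification of the body's distance from the display, together with the feedback payload of claims 21-22. The zone depth and all names are illustrative assumptions.

```python
def classify_zone(distance_from_display, active_depth=1.5):
    """Return 'active' when the body is within active_depth of the display,
    otherwise 'dead' (claim 18). Gestures drive navigation only when
    detected in the active zone (claim 19)."""
    return "active" if distance_from_display <= active_depth else "dead"

def feedback(distance_from_display, origin_distance, active_depth=1.5):
    """Feedback-indicator payload: the current zone (claim 21) and the
    physical offset of the body from the recorded origin (claim 22)."""
    return {
        "zone": classify_zone(distance_from_display, active_depth),
        "offset": distance_from_display - origin_distance,
    }
```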
23. The method of claim 1, comprising aligning a parameter-control axis of the dataset with a dimension of the physical space.

24. The method of claim 23, wherein the dimension is a depth dimension.

25. The method of claim 23, wherein the dimension is a horizontal dimension.

26. The method of claim 23, wherein the dimension is a vertical dimension.

27. The method of claim 23, wherein the dimension is a lateral dimension.

28. The method of claim 23, wherein the navigating comprises motion along the dimension to effect a data-space translation along the parameter-control axis.

29. The method of claim 23, wherein the navigating comprises navigating to quantized parameter spaces of the data space.
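The axis alignment of claims 23-29 can be sketched as a mapping from a physical offset along the aligned dimension to a value on the dataset's parameter-control axis, with optional snapping to a quantized grid (claim 29). The function signature, scale, and grid scheme are illustrative assumptions.

```python
def axis_parameter(offset, scale, lo, hi, step=None):
    """Map a physical offset along the aligned dimension (claims 24-27) to
    a value on the parameter-control axis, clamped to [lo, hi].

    If step is given, the value snaps to the nearest quantized parameter
    value (claim 29), analogous to the virtual detents of claim 16."""
    value = max(lo, min(hi, lo + offset * scale))
    if step is not None:
        value = lo + round((value - lo) / step) * step
    return value
```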
30. The method of claim 1, wherein the detecting includes detecting an evolving position of the body.

31. The method of claim 1, wherein the detecting includes detecting an evolving orientation of the body.

32. The method of claim 1, wherein the detecting includes detecting an evolving pose of the body, wherein the pose is a geometric disposition of a part of the body relative to at least one other part of the body.

33. The method of claim 1, wherein the detecting includes detecting evolving motion of the body.

34. The method of claim 1, wherein the detecting includes detecting at least one of an evolving position of the body, orientation of the body, pose of the body, and motion of the body.
35. The method of claim 1, comprising analyzing the gesture into a sequence of gestural events.

36. The method of claim 35, comprising identifying the gesture.

37. The method of claim 36, wherein the identifying of the gesture includes identifying at least one of an evolving position of the body, orientation of the body, pose of the body, and motion of the body.

38. The method of claim 36, comprising generating a representation of the gestural events of the sequence of gestural events.

39. The method of claim 38, comprising distributing the representation of the gestural events to at least one control component coupled to the data space.

40. The method of claim 39, comprising synchronizing the representation of the gestural events with a graphical depiction of the data space.

41. The method of claim 40, comprising synchronizing the representation of the gestural events with a graphical depiction of the navigating through the data space.

42. The method of claim 39, comprising synchronizing the representation of the gestural events with an aural depiction of the data space.

43. The method of claim 42, comprising synchronizing the representation of the gestural events with an aural depiction of the navigating through the data space.
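The event pipeline of claims 35-43 can be sketched as a gesture analyzed into a sequence of event representations that are distributed to control components coupled to the data space. The event fields and class names are illustrative assumptions, not structures defined by the claims.

```python
from dataclasses import dataclass, field

@dataclass
class GesturalEvent:
    """Representation of one event in the analyzed sequence (claim 38)."""
    kind: str        # e.g. "pose-enter", "move", "pose-exit"
    position: tuple  # absolute three-space location
    timestamp: float

@dataclass
class DataSpaceControl:
    """A control component coupled to the data space (claim 39); a real
    component would synchronize a graphical or aural depiction
    (claims 40-43) rather than merely record events."""
    received: list = field(default_factory=list)

    def handle(self, event):
        self.received.append(event)

def distribute(events, components):
    """Distribute each event representation to every coupled component."""
    for event in events:
        for component in components:
            component.handle(event)
```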
44. The method of claim 1, wherein the dataset represents spatial information.

45. The method of claim 44, wherein the dataset represents spatial information of at least one of phenomena, events, measurements, observations, and structure.

46. The method of claim 1, wherein the dataset represents non-spatial information.

47. The method of claim 1, wherein the gesture comprises linear spatial motion.

48. The method of claim 1, wherein the navigating comprises linear verging through the data space.
49. The method of claim 1, comprising:

rendering the dataset in a plurality of data frames that are graphical depictions of a plurality of regions of the data space;
displaying each data frame as a visible frame on a display.

50. The method of claim 49, wherein a size and an aspect ratio of the data frame coincide with the size and the aspect ratio of the display.

51. The method of claim 49, wherein a center and a normal vector of the data frame coincide with the center and the normal vector of the display.

52. The method of claim 49, wherein a position and an orientation of the data frame coincide with the position and the orientation of the display.

53. The method of claim 49, wherein each data frame comprises graphical data elements representing elements of the dataset.

54. The method of claim 53, wherein the graphical data elements are static elements.

55. The method of claim 53, wherein the graphical data elements are dynamic elements.

56. The method of claim 49, wherein the data frame is a two-dimensional construct.

57. The method of claim 56, wherein the data frame is resident in a three-dimensional graphics rendering environment having a coordinate system that coincides with coordinates that describe a local environment that includes the body.

58. The method of claim 49, wherein the navigating through the data space comprises navigating through the plurality of data frames.
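The frame geometry of claims 49-52 can be sketched as a two-dimensional construct (claim 56) carrying the size, center, and normal that the dependent claims require to coincide with the display. The field names and the equality-based notion of coincidence are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Frame2D:
    """A two-dimensional data frame resident in a 3D rendering
    environment (claims 56-57)."""
    width: float
    height: float
    center: tuple  # three-space position
    normal: tuple  # unit normal vector

    @property
    def aspect_ratio(self):
        return self.width / self.height

def coincides_with(frame, display):
    """True when the frame's size, center, and normal all match the
    display surface (claims 50-52)."""
    return (frame.width, frame.height, frame.center, frame.normal) == \
           (display.width, display.height, display.center, display.normal)
```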
59. The method of claim 1, comprising identifying the gesture, wherein the identifying includes identifying a pose and an orientation of a portion of the body.

60. The method of claim 1, wherein the detecting includes detecting at least one of a first set of appendages and a second set of appendages of the body.

61. The method of claim 1, wherein the detecting includes dynamically detecting a position of at least one tag.

62. The method of claim 1, wherein the detecting includes dynamically detecting and locating a marker on the body.

63. The method of claim 1, wherein the translating comprises translating information of the gesture to a gesture notation.

64. The method of claim 63, wherein the gesture notation represents a gesture vocabulary, and the gesture signal comprises communications of the gesture vocabulary.

65. The method of claim 64, wherein the gesture vocabulary represents in textual form instantaneous pose states of the body.

66. The method of claim 64, wherein the gesture vocabulary represents in textual form an orientation of the body.

67. The method of claim 64, wherein the gesture vocabulary represents in textual form a combination of orientations of the body.

68. The method of claim 64, wherein the gesture vocabulary includes a string of characters that represent a state of the body.
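The textual notation of claims 63-68 can be sketched as a character string encoding the instantaneous pose and orientation of the body. The encoding below (per-finger characters plus an orientation character) is an illustrative assumption, not the patent's actual gesture vocabulary.

```python
def encode_pose(fingers, palm_orientation):
    """Encode a hand pose as a gesture-vocabulary string (claim 68):
    '^' for an extended finger, '-' for a curled one, then an
    orientation character after the separator (claims 65-66)."""
    finger_part = "".join("^" if extended else "-" for extended in fingers)
    return finger_part + ":" + palm_orientation

def matches(signal, vocabulary):
    """A gesture signal comprises communications of the gesture
    vocabulary (claim 64): here, membership in a set of strings."""
    return signal in vocabulary
```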
69. The method of claim 1, wherein the detecting comprises detecting when an extrapolated position of the body intersects virtual space, wherein the virtual space comprises space depicted on the display.

70. The method of claim 69, comprising controlling a virtual object in the virtual space when the extrapolated position intersects the virtual object.

71. The method of claim 70, comprising controlling a position of the virtual object in the virtual space in response to the extrapolated position in the virtual space.

72. The method of claim 1, comprising controlling scaling of the detecting and navigating to generate coincidence between virtual space and physical space, wherein the virtual space comprises space depicted on the display, wherein the physical space comprises space inhabited by the body.
73. The method of claim 1, comprising imaging the body with an imaging system.

74. The method of claim 73, wherein the imaging comprises generating wavefront coded images of the body.

75. The method of claim 74, wherein the gesture data comprises focus-resolved data of the body within a depth of field of the imaging system.

76. The method of claim 75, comprising generating intermediate images by coding images gathered by the imaging system.

77. The method of claim 76, wherein the intermediate images are blurred.

78. The method of claim 76, wherein the intermediate images are insensitive to changes in at least one of the body and a plurality of optical detectors of the imaging system that include defocus aberrations.

79. The method of claim 75, wherein the gesture data comprises focus-resolved range data of the body within the depth of field.

80. The method of claim 79, wherein the focus-resolved range data of the body within the depth of field is derived from an output of the imaging system.

81. The method of claim 75, wherein the gesture data comprises focus-resolved position data of the body within the depth of field.

82. The method of claim 81, wherein the focus-resolved position data of the body within the depth of field is derived from an output of the imaging system.

83. The method of claim 73, wherein the imaging system comprises a plurality of detectors.

84. The method of claim 83, wherein at least two of the detectors are wavefront coded cameras comprising a wavefront coding optical element.

85. The method of claim 83, wherein at least two of the optical detectors are wavefront coded cameras comprising a phase mask that increases a depth of focus of the imaging.

86. The method of claim 73, comprising generating modulation transfer functions and point spread functions that are invariant to a distance between the body and the imaging system.

87. The method of claim 73, comprising generating modulation transfer functions and point spread functions that are invariant with respect to defocus.
88. A system comprising:

a detector for receiving gesture data that represents a gesture made by a body; and

a processor coupled to the detector, the processor automatically detecting the gesture from the gesture data, wherein the gesture data is absolute three-space location data of an instantaneous state of the body at a point in time and physical space, the processor identifying the gesture using only the gesture data, the processor translating the gesture to a gesture signal, the processor controlling navigating through a data space in response to the gesture signal, wherein the data space is a data-representational space comprising a dataset represented in the physical space, the processor rendering the dataset in a plurality of coplanar data frames that are graphical depictions of a plurality of regions of the data space and displaying each data frame as a visible frame on a display.
Specification