Method and apparatus for entering data using a virtual input device
Abstract
A user inputs digital data to a companion system, such as a PDA, a cell telephone, or an appliance device, using a virtual input device such as an image of a keyboard. A sensor captures three-dimensional positional information as to location of the user's fingers in relation to where keys would be on an actual keyboard. This information is processed with respect to finger locations, velocities, and shape to determine when virtual keys would have been struck. The processed digital information is output to the companion system. The companion system can display an image of a keyboard, including an image of a keyboard showing user fingers, and/or alphanumeric text as such data is input by the user on the virtual input device.
82 Claims
1. A method for a user to interact with a virtual input device using a user-controlled object, the method comprising the following steps:
(a) acquiring data representing a single image at a given time from a single sensor system, from which data three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be determined such that a location defined on said virtual input device contacted by said user-controlled object is identifiable; and
(b) processing data acquired at step (a) to determine, independently of velocity of said user-controlled object, whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location.

2. The method of claim 1, further including:
(c) making available to a companion system information commensurate with contact location determined at step (b);
wherein said user-controlled object interacts with said virtual input device to provide information to said companion system.
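Claims 1 and 2 turn on a contact decision that needs no velocity estimate: a single 3-D frame suffices. A minimal sketch of that test, assuming (hypothetically) that the sensor system delivers each frame as an (N, 3) point array in millimetres with the virtual input plane at z = 0 and a 3 mm contact threshold:

```python
import numpy as np

# Assumed frame geometry: virtual input plane at z == 0, units in mm.
CONTACT_THRESHOLD_MM = 3.0  # hypothetical height that counts as "contact"

def fingertip_contact(points_mm):
    """Velocity-independent contact test on a single 3-D frame.

    points_mm: (N, 3) array of (x, y, z) points on the user-controlled
    object, as acquired by the single sensor system at one instant.
    Returns the (x, y) contact location on the virtual device, or None.
    """
    tip = points_mm[np.argmin(points_mm[:, 2])]  # lowest point = distal tip
    if tip[2] > CONTACT_THRESHOLD_MM:            # still hovering above plane
        return None
    return float(tip[0]), float(tip[1])          # location to map to a function
```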
3. The method of claim 1, wherein at step (c), said commensurate information includes at least one information type selected from a group consisting of (i) a signal representing an alphanumeric character, (ii) a scan code representing an alphanumeric character, (iii) a signal representing a command, (iv) a digital code representing a command, (v) a signal representing at least one real-time locus of points representing movement of said user-controlled object, and (vi) a digital code representing at least one real-time locus of points representing movement of said user-controlled object.
4. The method of claim 2, wherein said companion system includes at least one device selected from a group consisting of (i) a PDA, (ii) a wireless telephone, (iii) a cellular telephone, (iv) a set-top box, (v) a mobile electronic device, (vi) an electronic device, (vii) a computer, (viii) an appliance adapted to accept input information, and (ix) an electronic system.
5. The method of claim 1, wherein step (a) includes providing a solid state sensor having an aspect ratio greater than about 2:1.
6. The method of claim 1, wherein at step (a), said data is acquired using time-of-flight from said single sensor system to a portion of said user-controlled object.
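Claim 6's time-of-flight acquisition reduces to one formula: range is half the round-trip travel time of an emitted light pulse. A sketch, where the constant and units are the only assumptions:

```python
SPEED_OF_LIGHT_MM_PER_NS = 299.792458  # c, in millimetres per nanosecond

def tof_range_mm(round_trip_ns):
    """Range from sensor to a surface point of the user-controlled object:
    the pulse travels out and back, so halve the round-trip time."""
    return round_trip_ns * SPEED_OF_LIGHT_MM_PER_NS / 2.0

# A pulse returning after ~0.667 ns corresponds to roughly 100 mm of range.
print(tof_range_mm(0.667))  # ~100.0
```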
7. The method of claim 1, wherein said user-controlled object is selected from a group consisting of (i) a finger of said user, (ii) a stylus, and (iii) an arbitrarily-shaped object.
8. The method of claim 1, wherein said virtual input device is defined on a work region selected from a group consisting of (i) three-dimensional space, (ii) a physical planar surface, (iii) a substrate, (iv) a substrate bearing a user-viewable image of an actual keyboard, (v) a substrate upon which is projected a user-viewable image of an actual keyboard, (vi) a substrate upon which is projected a user-viewable typing guide, (vii) a passive substrate bearing a user-viewable image of an actual keyboard and including passive key-like regions that provide tactile feedback when pressed by said user digit, (viii) a substrate that when deployed for use is larger than when not deployed for use, (ix) a substrate that when deployed for use measures at least 6″ × 12″ but when not used measures less than about 6″ × 8″, (x) a display screen, (xi) an electronic display screen, (xii) an LCD screen, (xiii) a CRT screen, and (xiv) a plasma screen.
9. The method of claim 1, further including providing said user with feedback guiding placement of said user-controlled object with respect to said virtual input device, said feedback including at least one type of feedback selected from a group consisting of (i) tactile feedback emulating user-typing upon an actual keyboard when said virtual input device is a virtual keyboard, (ii) audible feedback, (iii) a display of visual feedback representing an image of at least one keyboard key when said virtual input device is a virtual keyboard, (iv) a display of visual feedback representing an image including at least one keyboard key and at least a portion of said user-controlled object when said virtual input device is a virtual keyboard, (v) a display of visual feedback depicting keyboard keys wherein keys adjacent to said user-controlled object are visually distinguished from a key touched by said user-controlled object when said virtual input device is a virtual keyboard, (vi) a display of visual feedback representing information input by said user-controlled object, and (vii) a display of visual feedback representing an image whose position signifies position of said user-object relative to a virtual key when said virtual input device is a virtual keyboard, and wherein size of said image signifies distance from a lower surface of said user-object to said virtual keyboard.
10. The method of claim 1, wherein step (b) includes processing said information substantially in real-time.
11. The method of claim 1, wherein step (b) includes determining spatial location of a distal portion of said user-controlled object relative to location on said virtual input device using at least one of (i) three-dimensional location of said distal portion, (ii) velocity information for said distal portion in at least one direction, (iii) matching acquired information to template models of said user-controlled object, (iv) hysteresis information processing, (v) knowledge of language being input by said user, and (vi) dynamic configuration of said virtual input device as a function of time.
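Item (iv) of claim 11, hysteresis information processing, is the classic two-threshold debounce: a press registers only once the fingertip drops below one height, and cannot register again until it rises above a higher release height. A minimal sketch; both threshold values are illustrative assumptions, not figures from the patent:

```python
class HysteresisKeyState:
    """Two-threshold (hysteresis) debounce for one virtual key."""
    PRESS_MM = 2.0    # assumed: tip must drop below this to press
    RELEASE_MM = 6.0  # assumed: tip must rise above this to re-arm

    def __init__(self):
        self.pressed = False

    def update(self, tip_height_mm):
        """Feed one height sample; return True exactly once per keystroke."""
        if not self.pressed and tip_height_mm < self.PRESS_MM:
            self.pressed = True
            return True                    # new press event
        if self.pressed and tip_height_mm > self.RELEASE_MM:
            self.pressed = False           # released: re-arm for next press
        return False
```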
12. The method of claim 1, wherein:
said virtual input device is a virtual keyboard with virtual keys; and
step (b) includes:
mapping three-dimensional positions of a distal tip portion of said user-controlled object to actual keys on an actual keyboard; and
identifying which of said actual keys would have been typed upon by said user-controlled object were they present on said virtual input device.
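Claim 12's mapping step is a lookup from a contact point to the key an actual keyboard would have at that position. A sketch assuming a plain QWERTY grid with 19 mm key pitch and a half-key row stagger (all layout values hypothetical):

```python
# Hypothetical key layout: three QWERTY rows, 19 mm pitch, half-key stagger.
QWERTY_ROWS = ["QWERTYUIOP", "ASDFGHJKL", "ZXCVBNM"]
KEY_PITCH_MM = 19.0
ROW_STAGGER_MM = 9.5

def key_at(x_mm, y_mm):
    """Map a contact point on the virtual keyboard plane to the key an
    actual keyboard would have there, or None if the point is off-key."""
    row = int(y_mm // KEY_PITCH_MM)
    if not 0 <= row < len(QWERTY_ROWS):
        return None
    col = int((x_mm - row * ROW_STAGGER_MM) // KEY_PITCH_MM)
    if not 0 <= col < len(QWERTY_ROWS[row]):
        return None
    return QWERTY_ROWS[row][col]

print(key_at(5.0, 5.0))    # 'Q'
print(key_at(28.0, 24.0))  # 'A' (second row, staggered)
```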
13. The method of claim 1, wherein:
at step (a) said data is acquired in frames such that said three-dimensional coordinate information is obtainable from a single one of said frames.
14. The method of claim 1, wherein a user-viewable image of said virtual input device is projected upon a work region using at least one diffractive optical element.
15. The method of claim 1, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
16. A method for a user to interact with a virtual input device, said device having at least one location with which a function is associated, using a user-controlled object, the method comprising the following steps:
(a) using a single sensor system to acquire data in frames representing a single image at a given time, from which data three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be derived with respect to said virtual input device; and
(b) processing information acquired at step (a) to determine whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location.

17. The method of claim 16, further including:
(c) making available to a companion system information commensurate with user-object contact location with said virtual input device determined at step (b);
wherein said user-controlled object interacts with said virtual input device to provide information to said companion system.
18. The method of claim 17, wherein said companion system includes at least one device selected from a group consisting of (i) a PDA, (ii) a wireless telephone, (iii) a cellular telephone, (iv) a set-top box, (v) a mobile electronic device, (vi) an electronic device, (vii) a computer, (viii) an appliance adapted to accept input information, and (ix) an electronic system.
19. The method of claim 16, wherein at step (a), said single sensor system includes at least a sensor array, and three-dimensional coordinate information is captured using time-of-flight from said sensor array to a surface portion of said user-controlled object.
20. The method of claim 16, wherein step (a) includes providing a solid state sensor having an aspect ratio greater than about 2:1.
21. The method of claim 16, wherein said user-controlled object is selected from a group consisting of (i) a finger of said user, (ii) a stylus, and (iii) an arbitrarily-shaped object.
22. The method of claim 16, wherein said virtual input device is defined on a work region selected from a group consisting of (i) three-dimensional space, (ii) a physical planar surface, (iii) a substrate, (iv) a substrate bearing a user-viewable image of an actual keyboard, (v) a substrate upon which is projected a user-viewable image of an actual keyboard, (vi) a substrate upon which is projected a user-viewable typing guide, (vii) a passive substrate bearing a user-viewable image of an actual keyboard and including passive key-like regions that provide tactile feedback when pressed by said user digit, (viii) a substrate that when deployed for use is larger than when not deployed for use, (ix) a substrate that when deployed for use measures at least 6″ × 12″ but when not used measures less than about 6″ × 8″, (x) a display screen, (xi) an electronic display screen, (xii) an LCD screen, (xiii) a CRT screen, and (xiv) a plasma screen.
23. The method of claim 16, further including providing said user with feedback guiding placement of said user-controlled object with respect to said virtual input device, said feedback including at least one type of feedback selected from a group consisting of (i) tactile feedback emulating user-typing upon an actual keyboard when said virtual input device is a virtual keyboard, (ii) audible feedback, (iii) a display of visual feedback representing an image of at least one keyboard key when said virtual input device is a virtual keyboard, (iv) a display of visual feedback representing an image including at least one keyboard key and at least a portion of said user-controlled object when said virtual input device is a virtual keyboard, (v) a display of visual feedback depicting keyboard keys wherein keys adjacent to said user-controlled object are visually distinguished from a key touched by said user-controlled object when said virtual input device is a virtual keyboard, (vi) a display of visual feedback representing information input by said user-controlled object, and (vii) a display of visual feedback representing an image whose position signifies position of said user-object relative to a virtual key when said virtual input device is a virtual keyboard, and wherein size of said image signifies distance from a lower surface of said user-object to said virtual keyboard.
24. The method of claim 17, wherein at step (c), said commensurate information includes at least one information type selected from a group consisting of (i) a signal representing an alphanumeric character, (ii) a scan code representing an alphanumeric character, (iii) a signal representing a command, (iv) a digital code representing a command, (v) a signal representing at least one real-time locus of points representing movement of said user-controlled object, and (vi) a digital code representing at least one real-time locus of points representing movement of said user-controlled object.
25. The method of claim 16, wherein step (b) includes determining spatial location of a distal portion of said user-controlled object relative to location on said virtual input device using at least one of (i) three-dimensional location of said distal portion, (ii) velocity information for said distal portion in at least one direction, (iii) matching acquired information to template models of said user-controlled object, (iv) hysteresis information processing, (v) knowledge of language being input by said user, and (vi) dynamic configuration of said virtual input device as a function of time.
26. The method of claim 16, wherein:
said virtual input device is a virtual keyboard with virtual keys; and
step (b) includes:
mapping three-dimensional positions of a distal tip portion of said user-controlled object to actual keys on an actual keyboard; and
identifying which of said actual keys would have been typed upon by said user-controlled object were they present on said virtual input device.
27. The method of claim 16, wherein:
step (b) includes processing said information substantially in real-time.
28. The method of claim 16, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
29. A method for a user to interact with a virtual input device using a user-controlled object to input information to a companion system, said virtual input device having at least one location defined thereon with which a function is associated, the method comprising the following steps:
(a) using a single sensor system to acquire data representing a single image at a given time from which three-dimensional coordinate information may be determined as to relevant position of at least a portion of said user-controlled object such that a location defined on said virtual input device contacted by said user-controlled object is identifiable;
(b) processing data acquired at step (a) to determine whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location; and
(c) coupling information processed at step (b) as input to said companion system.

38. The method of claim 29, wherein:
said virtual input device is a virtual keyboard with virtual keys; and
step (b) includes:
mapping coordinate positions of a distal tip portion of said user-controlled object to actual keys on an actual keyboard; and
identifying which of said actual keys would have been typed upon by said user-controlled object were they present on said virtual input device.
39. The method of claim 29, wherein:
at step (a) said data is acquired in frames such that said positional coordinate information is obtainable from a single one of said frames.
40. The method of claim 29, wherein said companion system includes at least one device selected from a group consisting of (i) a PDA, (ii) a wireless telephone, (iii) a cellular telephone, (iv) a set-top box, (v) a mobile electronic device, (vi) an electronic device, (vii) a computer, (viii) an appliance adapted to accept input information, and (ix) an electronic system.
41. The method of claim 29, wherein a user-viewable image of said virtual input device is projected upon a work region using at least one diffractive optical element.
42. The method of claim 29, wherein:
said virtual input device is a computer mouse; and
step (b) includes mapping real-time locus points representing movement of at least one user-controlled object to movement events of an actual computer mouse.
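Claim 42 maps a real-time locus of contact points to the relative movement events an actual mouse reports. A sketch, under the assumption that the locus arrives as an ordered list of (x, y) samples in millimetres:

```python
def locus_to_mouse_moves(locus):
    """Convert a real-time locus of fingertip points into relative mouse
    movement events, one (dx, dy) per successive pair of samples, as a
    real mouse would report them.

    locus: list of (x_mm, y_mm) contact points in sample order.
    """
    moves = []
    for (x0, y0), (x1, y1) in zip(locus, locus[1:]):
        moves.append((x1 - x0, y1 - y0))
    return moves

print(locus_to_mouse_moves([(0, 0), (3, 1), (7, 1)]))  # [(3, 1), (4, 0)]
```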
43. The method of claim 29, wherein:
said virtual input device is a trackball device; and
further including mapping successive three-dimensional coordinate position information of a distal tip portion of said user-controlled object to a trackball and identifying how much trackball rotation would have occurred were an actual trackball present.
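Claim 43's trackball emulation follows from arc length: a fingertip displacement d across a ball of radius r corresponds to a rotation of d / r radians. A sketch; the ball radius is an assumed parameter:

```python
import math

TRACKBALL_RADIUS_MM = 25.0  # radius of the hypothetical emulated ball

def trackball_rotation_deg(p0, p1):
    """Rotation an actual trackball of the given radius would have
    undergone had the fingertip moved from p0 to p1 across its surface:
    the displacement is treated as arc length, so angle = arc / radius."""
    arc_mm = math.dist(p0, p1)
    return math.degrees(arc_mm / TRACKBALL_RADIUS_MM)

# A 10 mm swipe rotates a 25 mm-radius ball by about 23 degrees.
print(round(trackball_rotation_deg((0.0, 0.0), (10.0, 0.0)), 1))  # 22.9
```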
44. The method of claim 29, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
45. A system that enables a user to interact with a virtual input device using a user-controlled object, the system comprising:
a single sensor system to capture data in frames representing a single image at a given time from which three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be derived from one of (a) a single data frame or (b) multiple data frames captured at substantially the same time with respect to said virtual input device such that a location defined on said virtual input device contacted by said user-controlled object is identifiable;
a processor, coupled to said single sensor system, to process single sensor system-captured data, to determine whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location.

55. The system of claim 45, wherein:
said virtual input device is a virtual keyboard having virtual keys, and further including:
means for mapping coordinate positions of a distal tip portion of said user-controlled object to actual keys on an actual keyboard; and
identifying which of said actual keys would have been typed upon by said user-controlled object were they present on said virtual input device.
56. The system of claim 45, further including a sub-system to project a user-viewable image of said virtual input device upon a work region, said sub-system including at least one diffractive optical element.
57. The system of claim 45, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
58. A system that enables a user to interact with a virtual input device using a user-controlled object, the system comprising:
a single sensor system to capture data representing a single image at a given time from which three-dimensional coordinate information of a relevant position of at least a portion of said user-controlled object may be derived such that a location defined on said virtual input device contacted by said user-controlled object is identifiable;
a processor, coupled to said sensor, to process single sensor system-captured data, to determine without having to calculate velocity of said user-object relative to said virtual input device whether a portion of said user-controlled object contacted a location defined on said virtual input device, and if contacted to determine what function of said virtual input device is associated with said location.

68. The system of claim 58, wherein:
said virtual input device is a virtual keyboard with virtual keys; and
further including:
means for mapping coordinate positions of a distal tip portion of said user-controlled object to actual keys on an actual keyboard; and
identifying which of said actual keys would have been typed upon by said user-controlled object were they present on said virtual input device.
69. The system of claim 58, further including a sub-system to project a user-viewable image of said virtual input device upon a work region, said sub-system including at least one diffractive optical element.
70. The system of claim 58, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
71. A method of determining interaction between a user-controlled object and a virtual input device, the method comprising the following steps:
(a) defining a plurality of identifiable locations on said virtual input device;
(b) sensing with a single sensor system that acquires data representing a single image at a given time three-dimensional positional coordinate information to detect contact between at least a portion of said user-controlled object and at least one of said identifiable locations defined on said virtual input device; and
(c) determining an input function, associated with said virtual input device, assigned to at least one location sensed at step (b) of detected contact by said user-controlled object.

72. The method of claim 71, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
73. A method of determining interaction between a user-controlled object and a virtual input device, the method comprising the following steps:
(a) defining a plurality of identifiable locations on said virtual input device;
(b) sensing with a single sensor system that acquires a single image at a given time three-dimensional coordinate information to detect contact between said user-controlled object and at least one of said plurality of identifiable positions; and
(c) determining an input function assigned to at least one of said identifiable positions sensed at step (b).

74. The method of claim 73, wherein:
step (b) includes sensing position coordinate information as said user-controlled object is moved across a series of said identifiable locations.
75. The method of claim 73, wherein:
step (b) includes sensing position coordinate information as said user-controlled object is moved across a series of said identifiable locations; and
step (c) includes determining an input function assigned to at least a last identifiable location in said series of said identifiable locations.
76. The method of claim 75, wherein:
step (b) includes sensing position coordinate information as said user-controlled object is moved across a series of said identifiable locations defined on a common plane.
77. The method of claim 75, wherein:
step (b) includes sensing positional coordinate information to detect movement of said user-controlled object along a plane and across at least one of said plurality of identifiable locations.
78. The method of claim 73, wherein:
step (b) includes sensing position coordinate information as said user-controlled object is moved across a series of said identifiable locations; and
step (c) includes determining an input function assigned to at least a first said identifiable location and a last said identifiable location in said series of identifiable locations.
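Claims 74 through 78 sense a stroke swept across a series of identifiable locations and assign functions to, for example, the first and last locations in the series. A minimal sketch of extracting those endpoints from an ordered sample stream; the dwell-collapsing behavior is an assumption:

```python
def swipe_endpoints(visited_locations):
    """For a stroke swept across a series of identifiable locations,
    return the first and last locations touched, whose assigned input
    functions a swipe-style gesture would invoke.

    visited_locations: ordered list of location identifiers, possibly
    with consecutive repeats while the object dwells on one location.
    """
    if not visited_locations:
        return None
    deduped = [visited_locations[0]]
    for loc in visited_locations[1:]:
        if loc != deduped[-1]:      # collapse dwell repeats
            deduped.append(loc)
    return deduped[0], deduped[-1]

print(swipe_endpoints(["H", "H", "E", "L", "L", "O"]))  # ('H', 'O')
```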
79. The method of claim 73, wherein:
said virtual input device includes a virtual keyboard; and
said user-controlled object includes at least a portion of a hand of said user.
80. A system enabling a user to interact with a virtual keyboard using a user-controlled object to input information to a companion system, said virtual keyboard defining at least two virtual key locations, each of said virtual key locations having an associated function, the system comprising:
a diffractive projection sub-system to project a user-viewable image of said virtual keyboard, said sub-system including at least one diffractive optical element;
a single sensor system to acquire data representing a single image at a given time from which three-dimensional coordinate information may be determined as to relevant position of at least a portion of said user-controlled object with respect to said virtual keyboard such that a virtual key location contacted by said user-controlled object is identifiable;
means for processing information acquired from said single sensor system to determine whether a portion of said user-controlled object contacted a virtual key location, and if contacted to determine what function of said virtual keyboard is associated with said location; and
means for coupling information so processed to said companion system.

81. The system of claim 80, wherein:
said user-controlled object includes at least a portion of a hand of said user.
82. A system enabling a user to interact with a virtual mouse using a user-controlled object to input information to a companion system, the system comprising:
a diffractive projection sub-system to project a user-viewable image of said virtual mouse, said sub-system including at least one diffractive optical element;
a single sensor system to acquire data representing a single image at a given time from which three-dimensional coordinate information may be determined as to relevant position of at least a portion of said user-controlled object so as to map real-time locus points representing movement of said user-controlled object to movement events of an actual mouse;
means for processing information acquired from said single sensor system to determine whether a portion of said user-controlled object contacted said virtual mouse, and if contacted to determine relative movement and associated function of an actual mouse; and
means for coupling information so processed to said companion system.