Translation of directional input to gesture
Abstract
A user device is disclosed which includes a touch input and a keypad input. The user device is configured to operate in a gesture capture mode as well as a navigation mode. In the navigation mode, the user interfaces with the touch input to move a cursor or similar selection tool within the user output. In the gesture capture mode, the user interfaces with the touch input to provide gesture data that is translated into key code output having a similar or identical format to outputs of the keypad.
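The two operating modes described in the abstract can be pictured as a simple input router. This is a minimal illustrative sketch, not the patented implementation; the class name `TouchInputRouter`, the mode strings, the threshold, and the key code values are all assumptions introduced for the example:

```python
def translate_motion_to_keycode(dx, dy):
    # Placeholder for the two-stage translation recited in the claims:
    # small motion is treated as a tap, larger motion as a swipe.
    return 0x01 if max(abs(dx), abs(dy)) < 5 else 0x02

class TouchInputRouter:
    """Routes touch-input motion according to the device's current mode."""

    def __init__(self):
        self.mode = "navigation"
        self.cursor = [0, 0]

    def handle_motion(self, dx, dy):
        if self.mode == "navigation":
            # Navigation mode: move a cursor/selection tool in the user output.
            self.cursor[0] += dx
            self.cursor[1] += dy
            return ("cursor", tuple(self.cursor))
        # Gesture capture mode: emit key code data in a format similar or
        # identical to the output of a keypad press.
        return ("keycode", translate_motion_to_keycode(dx, dy))
```

The point of the sketch is only the routing: the same physical motion produces cursor movement in one mode and keypad-style key code output in the other.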
13 Claims
1. A process of translating user input received at a touch input of a user device into key code data while the touch input is operating in a gesture capture mode, the process comprising:
detecting user-initiated motion at the touch input, wherein the touch input is separate from and smaller than a user output;

translating the detected motion into x-y motion data;

referencing first translation data that provides a mapping between x-y motion data and gesture data;

based on the reference to the first translation data, translating the x-y motion data into gesture data, the gesture data representing a predetermined gesture that has been mapped to a predetermined x-y motion data value within the first translation data, wherein the gesture data corresponds to one gesture from a finite set of gestures that consist of a tap gesture and a unidirectional swipe gesture;

referencing second translation data that provides a mapping between gesture data and key code data, wherein two or more gestures defined in the second translation data are mapped to a common key code;

based on the reference to the second translation data, translating the gesture data into key code data, the key code data representing a predetermined key code that has been mapped to the predetermined gesture within the second translation data, wherein the key code data causes a processor of the user device to dynamically update a predictive menu portion of the user output with one or more suggested characters or suggested strings of characters that contain a stroke which corresponds to the key code data, wherein the predictive menu portion is continuously and dynamically updated in response to detecting subsequent user-initiated motions at the touch input, and wherein the one or more suggested characters or suggested strings of characters are updated to eliminate any character or string of characters from the predictive menu portion that does not contain a stroke corresponding to the key code data; and

providing the key code data to a processor of the user device.

Dependent claims: 2, 3, 4, 5, 6, 7
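One way to read the claimed two-stage translation is as a classifier over a first translation table followed by a lookup in a second. The sketch below is illustrative only; the gesture names, tap threshold, and key code values are assumptions, not taken from the patent. It does show the claimed structure: a finite gesture set consisting of a tap and unidirectional swipes, and a second table in which two or more gestures map to a common key code:

```python
# First translation data, expressed as a classifier: x-y motion data is
# translated into one gesture from a finite set (tap or unidirectional swipe).
def classify_gesture(dx, dy, tap_threshold=5):
    if abs(dx) < tap_threshold and abs(dy) < tap_threshold:
        return "tap"
    if abs(dx) >= abs(dy):
        return "swipe_right" if dx > 0 else "swipe_left"
    return "swipe_down" if dy > 0 else "swipe_up"

# Second translation data: gesture data -> key code data. The left and right
# swipes are deliberately mapped to a common key code, as the claim requires.
GESTURE_TO_KEYCODE = {
    "tap":         0x01,
    "swipe_left":  0x02,
    "swipe_right": 0x02,  # common key code with "swipe_left"
    "swipe_up":    0x03,
    "swipe_down":  0x04,
}

def motion_to_keycode(dx, dy):
    """Translate detected x-y motion into key code data for the processor."""
    return GESTURE_TO_KEYCODE[classify_gesture(dx, dy)]
```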
8. A user device, comprising:
a user interface including a touch input and a user output, wherein the touch input is separate from and smaller than the user output;

a user interface driver configured to convert motions detected by the touch input into electronic signals;

computer-readable memory comprising one or more processor-executable instruction sets, the instruction sets including:

an x-y input capture module configured to translate the electronic signals provided by the user interface driver into x-y motion data;

an x-y-to-gesture translation module configured to reference first translation data to translate the x-y motion data into gesture data, the gesture data representing a predetermined gesture that has been mapped to a predetermined x-y motion data value within the first translation data, wherein the gesture data corresponds to one gesture from a finite set of gestures that consist of a tap gesture and a unidirectional swipe gesture; and

a gesture-to-key code translation module configured to reference second translation data to translate the gesture data into key code data, wherein two or more gestures defined in the second translation data are mapped to a common key code, and wherein the key code data represents a predetermined key code that has been mapped to the predetermined gesture within the second translation data, wherein the key code data causes a processor of the user device to dynamically update a predictive menu portion of the user output with one or more suggested characters or suggested strings of characters that contain a stroke which corresponds to the key code data, wherein the predictive menu portion is continuously and dynamically updated in response to detecting subsequent user-initiated motions at the touch input, and wherein the one or more suggested characters or suggested strings of characters are updated to eliminate any character or string of characters from the predictive menu portion that does not contain a stroke corresponding to the key code data; and

a processor configured to execute the instruction sets stored in the computer-readable memory.

Dependent claims: 9, 10, 11
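The module chain recited in claim 8 can be pictured as a short pipeline, driver output in, key code out. The function names and the signal format below are assumptions introduced for illustration; the gesture set is abbreviated to keep the sketch compact:

```python
# Hypothetical pipeline mirroring the claimed instruction sets:
# driver signals -> x-y input capture -> x-y-to-gesture -> gesture-to-key code.

def xy_input_capture(signal):
    # x-y input capture module: driver's electronic signals -> x-y motion data.
    return signal["dx"], signal["dy"]

def xy_to_gesture(dx, dy, tap_threshold=5):
    # x-y-to-gesture translation module (first translation data).
    if abs(dx) < tap_threshold and abs(dy) < tap_threshold:
        return "tap"
    return "swipe_right" if dx > 0 else "swipe_left"

def gesture_to_keycode(gesture):
    # gesture-to-key code translation module (second translation data);
    # both swipe directions share a common key code, as claimed.
    return {"tap": 0x01, "swipe_left": 0x02, "swipe_right": 0x02}[gesture]

def handle_signal(signal):
    """Run one driver signal through the full module chain."""
    return gesture_to_keycode(xy_to_gesture(*xy_input_capture(signal)))
```

Keeping each module as a separate function mirrors the claim's decomposition: each translation stage references its own translation data and can be replaced independently.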
12. A system for translating user input received at a touch input of a user device into key code output, the system comprising:
a processor configured to execute instructions stored in memory; and

memory comprising at least the following processor-executable instructions:

a user interface driver configured to convert motions detected at the touch input into electronic signals, wherein the touch input is separate from and smaller than a user output of the user device;

an x-y input capture module configured to translate the electronic signals provided by the user interface driver into x-y motion data;

an x-y-to-gesture translation module configured to reference first translation data to translate the x-y motion data into gesture data, the gesture data representing a predetermined gesture that has been mapped to a predetermined x-y motion data value within the first translation data, wherein the gesture data corresponds to one gesture from a finite set of gestures that consist of a tap gesture and a unidirectional swipe gesture; and

a gesture-to-key code translation module configured to reference second translation data to translate the gesture data into key code data, wherein two or more gestures defined in the second translation data are mapped to a common key code, wherein the key code data represents a predetermined key code that has been mapped to the predetermined gesture within the second translation data, wherein the key code data causes a processor of the user device to dynamically update a predictive menu portion of the user output with one or more suggested characters or suggested strings of characters that contain a stroke which corresponds to the key code data, wherein the predictive menu portion is continuously and dynamically updated in response to detecting subsequent user-initiated motions at the touch input, and wherein the one or more suggested characters or suggested strings of characters are updated to eliminate any character or string of characters from the predictive menu portion that does not contain a stroke corresponding to the key code data.
Dependent claim: 13
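The predictive-menu limitation common to all three independent claims can be illustrated as stroke-based candidate elimination: after each key code is entered, any suggested character whose stroke sequence does not contain the corresponding stroke is removed from the menu. The stroke table below is invented for the example; it is not a real character-to-stroke encoding:

```python
# Hypothetical stroke sequences per candidate character, keyed by the
# key codes used earlier (0x01 = tap/dot, 0x02 = horizontal, etc.).
STROKES = {
    "A": [0x02, 0x02, 0x01],
    "B": [0x03, 0x04],
    "C": [0x02, 0x03],
}

def update_menu(menu, keycode, strokes=STROKES):
    """Keep only candidates containing a stroke matching the key code."""
    return [c for c in menu if keycode in strokes[c]]

# Each entered key code continuously narrows the predictive menu.
menu = list(STROKES)            # start with all candidates
menu = update_menu(menu, 0x02)  # "B" has no 0x02 stroke: eliminated
menu = update_menu(menu, 0x01)  # "C" has no 0x01 stroke: eliminated
```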
Specification