System and Method for Inputting Text into Small Screen Devices
Abstract
An embodiment is directed to an interface for a small screen device, such as a watch, that enables a user to enter text on the small screen device by touching in the vicinity of characters, rather than aiming for a particular button or the exact location of a character. Embodiments further enable the design of interfaces without the use of buttons for controlling the entry of text on the small screen device.
37 Claims
1. A program storage device readable by a device, tangibly embodying a program of instructions executable by the device to perform program steps for operating a touch-screen interface on the device, the program steps comprising:
displaying a character pane on the touch-screen interface, the character pane defining a plurality of targets;
accepting an input event, the input event associated with at least two characters from the character pane;
selecting a most likely character among the at least two characters using one or more target models, the one or more target models modeling location of one or more previous input events corresponding to particular targets among the plurality of targets;
displaying the most likely character on the touch-screen interface; and
using a set of swipe gestures to perform one or more functions on the touch-screen interface.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19)
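The "target models" recited in claim 1 describe a statistical model of where a user's previous touches for each key actually landed. A minimal sketch of one plausible realization, assuming a per-key 2D Gaussian whose centre adapts to observed touches (all class and function names here are hypothetical, not taken from the patent):

```python
import math

class TargetModel:
    """Hypothetical per-target model: a 2D Gaussian centred on the running
    mean of previous input events that resolved to this target."""
    def __init__(self, cx, cy, sigma=12.0):
        self.mx, self.my, self.sigma = cx, cy, sigma
        self.n = 1  # the key's geometric centre counts as one pseudo-observation

    def update(self, x, y):
        # Fold a resolved touch into the running mean of the target's location.
        self.n += 1
        self.mx += (x - self.mx) / self.n
        self.my += (y - self.my) / self.n

    def likelihood(self, x, y):
        # Unnormalised Gaussian density of a touch at (x, y).
        d2 = (x - self.mx) ** 2 + (y - self.my) ** 2
        return math.exp(-d2 / (2 * self.sigma ** 2))

def most_likely_character(touch, candidates, models):
    """Select, among the characters near the touch, the one whose
    target model best explains the touch location."""
    x, y = touch
    return max(candidates, key=lambda ch: models[ch].likelihood(x, y))

# A touch between 'q' and 'w', after the 'w' model has drifted toward
# the user's habitual touch point:
models = {"q": TargetModel(10, 10), "w": TargetModel(40, 10)}
models["w"].update(34, 12)
print(most_likely_character((28, 11), ["q", "w"], models))  # -> w
```

Because each model tracks where this user actually touches when aiming at a key, the selection step can resolve a touch "in the vicinity of characters" rather than requiring the exact key location, as the abstract describes.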
20. A program storage device readable by one or more small screen devices, tangibly embodying a program of instructions executable by the devices to perform program steps for operating a touch-screen interface on the small screen device, the program steps comprising:
displaying a typing pane to display inputted text on the touch-screen interface;
displaying a QWERTY-style character pane on the touch-screen interface;
accepting an input event, the input event associated with at least two characters from the character pane;
communicating the input event to the set of target models;
using the set of target models to generate a set of likely targets and a set of likely probabilities associated with the set of likely targets, a probability among the set of likely probabilities indicating a likelihood that a likely target among the set of likely targets was intended to be selected based on the input event;
generating a set of word predictions based on the set of likely targets using a text prediction engine;
displaying the set of word predictions on a prediction pane displayed on the touch-screen interface; and
using a set of swipe gestures to perform one or more functions on the touch-screen interface.
- View Dependent Claims (21, 22, 23, 24, 25, 26, 27, 28, 29)
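Claim 20 chains the target models into a text prediction engine: each touch yields a distribution over likely keys, and word predictions are ranked by combining those per-position probabilities with the engine's prior for each word. A sketch under a simple independence assumption, with a unigram word prior standing in for the prediction engine (names and data are illustrative only):

```python
def word_scores(key_probs, vocab_probs):
    """Rank candidate words by prior * per-position touch probability.

    key_probs:   list, one entry per typed position, of {char: probability}
                 as produced by the target models.
    vocab_probs: {word: prior probability} from the prediction engine.
    """
    scores = {}
    for word, prior in vocab_probs.items():
        if len(word) != len(key_probs):
            continue  # only score words matching the number of touches
        p = prior
        for ch, probs in zip(word, key_probs):
            p *= probs.get(ch, 0.0)  # zero if the char was never a likely target
        if p > 0:
            scores[word] = p
    return sorted(scores, key=scores.get, reverse=True)

# Three touches, each ambiguous between two adjacent keys:
key_probs = [{"t": 0.7, "r": 0.3},
             {"h": 0.6, "j": 0.4},
             {"e": 0.9, "w": 0.1}]
vocab = {"the": 0.05, "rhe": 0.0001, "tie": 0.01}
print(word_scores(key_probs, vocab))  # -> ['the', 'rhe']
```

Note how "tie" is eliminated even though each of its letters is plausible in isolation: its second character never appears among the likely targets for the second touch, so its combined score is zero.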
30. A method for enabling a user to input text into a small screen device, comprising the steps of:
displaying an interface without buttons on a touchscreen of the small screen device, the interface including a word prediction pane, a typing pane, and a QWERTY-style character pane;
generating an initial word prediction without a context input with a text prediction engine, the text prediction engine comprising a plurality of language models and configured to receive a text input and to generate concurrently text predictions using the plurality of language models;
accepting a touch area selected by the user on the interface;
predicting one or more candidate words, wherein the prediction is based on the context input, the touch area, and one or more previous text predictions;
displaying the one or more candidate words on the prediction pane;
updating the context input with the most likely character given the touch area; and
enabling the user to control the interface using one or more swipe gestures.
- View Dependent Claims (31, 32, 33, 34, 35, 36, 37)
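The core loop of claim 30 alternates between accepting a touch area and narrowing the candidate words by the context committed so far. A toy sketch of that loop, assuming the touch area is represented as the set of characters it overlaps and the "language models" are reduced to a word list (all identifiers hypothetical):

```python
def predict_candidates(context, touch_chars, lexicon):
    """Words consistent with the committed context plus any character
    overlapping the current touch area."""
    pos = len(context)
    return [w for w in lexicon
            if w.startswith(context) and len(w) > pos and w[pos] in touch_chars]

lexicon = ["watch", "water", "wrist", "write"]
context = "w"                      # context input committed so far
touch = {"a", "s", "q"}            # characters overlapping the touch area
candidates = predict_candidates(context, touch, lexicon)
print(candidates)                  # -> ['watch', 'water']
context += "a"                     # commit the most likely character
```

Each iteration both displays the surviving candidate words on the prediction pane and appends the most likely character to the context input, so ambiguity from one imprecise touch is progressively resolved by the touches that follow.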
Specification