System and method for a user interface for text editing and menu selection
First Claim
1. An apparatus comprising:
an output display device on which two or more choices for selection are graphically presented;
an input device which detects one or more input actions performed by a user to position, move, and activate or de-activate a control point on said display; and
a processor coupled to the input device and the output device, the processor comprising:
a first component for displaying a graphical presentation of said two or more choices within a defined, bounded region on said display;
a second component for defining one or more distinct segments of the boundary of said bounded region;
a third component for uniquely associating each of one or more of said defined segments with a distinct one of said graphically presented choices;
a fourth component for detecting an activation of said control point within said bounded region;
a fifth component for detecting a subsequent movement of said activated control point such that said activated control point exits said bounded region;
a sixth component for identifying one of said distinct boundary segments through which said activated control point is moved in exiting said bounded region; and
a seventh component for determining one of said graphically presented choices based on said identified boundary segment.
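The claimed selection mechanism can be illustrated with a short sketch. This is not the patented implementation; the region geometry, the one-segment-per-side layout, and all names (`Region`, `exit_segment`, the menu labels) are assumptions chosen for demonstration. The idea matches the claim: activate a control point inside a bounded region, drag it out, and determine the choice from the boundary segment it crossed on exit.

```python
# Illustrative sketch of boundary-crossing menu selection (assumed names
# and geometry; not the patent's implementation).
from dataclasses import dataclass
from typing import Optional


@dataclass
class Region:
    left: float
    top: float
    right: float
    bottom: float

    def contains(self, x: float, y: float) -> bool:
        return self.left <= x <= self.right and self.top <= y <= self.bottom


def exit_segment(region: Region, x: float, y: float) -> Optional[str]:
    """Name the boundary segment through which the control point exited,
    using one segment per side. Returns None while the point is inside."""
    if region.contains(x, y):
        return None  # still inside the bounded region: no selection yet
    # Attribute the exit to the side the point is farthest beyond.
    overshoot = {
        "left": region.left - x,
        "right": x - region.right,
        "top": region.top - y,
        "bottom": y - region.bottom,
    }
    return max(overshoot, key=overshoot.get)


# Each boundary segment is uniquely associated with one displayed choice.
choices = {"left": "Cut", "right": "Copy", "top": "Paste", "bottom": "Undo"}

region = Region(left=0, top=0, right=100, bottom=100)
# Control point activated inside the region, then dragged out to the right:
selected = choices[exit_segment(region, 130, 50)]  # "Copy"
```

Because the gesture only needs to cross the correct segment anywhere along its length, it demands less precision than landing the control point on the choice's own sub-region, which is the usability point the abstract makes.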
Abstract
Methods and systems to enable a user of an input action recognition text input system to edit incorrectly recognized text without relocating the text insertion position to the location of the text to be corrected. The system also automatically maintains correct spacing between textual objects when a textual object is replaced with an object for which automatic spacing is generated in a different manner. The system also enables the graphical presentation of menu choices in a manner that facilitates faster and easier selection of a desired choice: the user performs a selection gesture that requires less precision than directly contacting the sub-region of the menu associated with the desired choice.
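The spacing-maintenance behavior the abstract describes can be sketched in a few lines. This is an assumption about one plausible approach, not the patent's disclosed method, and the function name is hypothetical: after substituting a replacement textual object whose automatic spacing differs from the original's, the seams are normalized so exactly one space separates adjacent words.

```python
# Minimal sketch (assumed approach, not the patent's method): replace a
# span of text and normalize spacing at both seams of the replacement.
def replace_with_spacing(text: str, start: int, end: int, replacement: str) -> str:
    """Replace text[start:end] with `replacement`, then collapse doubled
    or missing spaces where the replacement meets the surrounding text."""
    before, after = text[:start], text[end:]
    out = before.rstrip() + " " + replacement.strip() + " " + after.lstrip()
    return out.strip()


# A replacement that carries its own leading/trailing spaces still yields
# single spaces at both seams:
corrected = replace_with_spacing("the quik fox", 4, 8, " quick ")  # "the quick fox"
```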
61 Citations
36 Claims
1. (Independent claim, set forth in full above.) Dependent claims: 2, 3, 4, 5.
6. A method of graphically presenting two or more choices for selection within a defined, bounded region on an electronic display wherein a user can position, move, and activate or de-activate a control point on said display, the method comprising:
displaying a graphical presentation of said two or more choices within said bounded region;
defining one or more distinct segments of the boundary of said bounded region;
uniquely associating each of one or more of said defined segments with a distinct one of said graphically presented choices;
detecting an activation of said control point within said bounded region;
detecting a subsequent movement of said activated control point such that said activated control point exits said bounded region;
identifying one of said distinct boundary segments through which said activated control point is moved in exiting said bounded region; and
determining one of said graphically presented choices based on said identified boundary segment.

Dependent claims: 7, 8, 9, 10.
11. A method of inputting and editing text on an electronic device with a user interface comprising at least one input system which detects input actions of a user to generate and edit text, and at least one text presentation system through which said text is presented to said user, the method comprising:
recording the location of a text insertion position within said text presentation system where a next generated textual object will be output;
detecting a distinctive input action to identify one or more of said textual objects previously output to said text presentation system;
identifying one or more of said textual objects previously output based on the detected distinctive input action;
determining one or more alternate textual objects that correspond to one or more detected input actions from which said identified one or more textual objects was previously determined;
replacing said identified previously output one or more textual objects with one or more of said determined alternate textual objects; and
restoring said text insertion position to a location recorded prior to said detecting of said distinctive input action.

Dependent claims: 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23.
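The editing flow of this method claim can be sketched as follows. The class and method names (`TextBuffer`, `output`, `correct`) are assumptions for illustration, not the patent's implementation: the insertion position is recorded, a previously output textual object is replaced with an alternate recognized for the same input actions, and the insertion position is then restored so the user continues typing where they left off.

```python
# Illustrative sketch (assumed structure, not the patent's implementation)
# of correcting recognized text without relocating the insertion position.
class TextBuffer:
    def __init__(self) -> None:
        self.words: list[str] = []
        self.insertion_index = 0  # where the next textual object is output

    def output(self, word: str) -> None:
        """Output a newly recognized textual object at the insertion position."""
        self.words.insert(self.insertion_index, word)
        self.insertion_index += 1

    def correct(self, word_index: int, alternates: list[str], choice: int) -> None:
        """Replace a previously output word with a chosen alternate,
        leaving the insertion position where it was."""
        saved = self.insertion_index                  # record insertion position
        self.words[word_index] = alternates[choice]   # replace the misrecognized object
        self.insertion_index = saved                  # restore insertion position


buf = TextBuffer()
for w in ["she", "cells", "sea", "shells"]:
    buf.output(w)
# "cells" was misrecognized; replace it without moving the cursor:
buf.correct(1, ["sells", "cells", "shells"], 0)
```

After the correction, `buf.words` reads "she sells sea shells" while `insertion_index` still points past the last word, so the next recognized word is appended at the end rather than at the corrected location.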
24. A text input and editing apparatus comprising:
one or more input devices which detect one or more input actions of a user to generate and edit text;
an output device on which generated text is presented to a user; and
a processor coupled to the input device and the output device, the processor comprising:
a first component for recording the location of a text insertion position where a next generated textual object will be output;
a second component for detecting a distinctive input action to identify one or more of said textual objects previously output to said output device;
a third component for identifying one or more of said textual objects previously output based on the detected distinctive input action;
a fourth component for determining one or more alternate textual objects that correspond to said one or more detected input actions from which said identified one or more textual objects was previously determined;
a fifth component for replacing said identified previously output one or more textual objects with one or more of said determined alternate textual objects; and
a sixth component for restoring said text insertion position to a location recorded prior to said detecting of said distinctive input action.

Dependent claims: 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36.
Specification