Method and apparatus for facilitating use of touchscreen devices
Abstract
Exemplary embodiments are described wherein an auxiliary sensor attachable to a touchscreen computing device provides an additional form of user input. When used in conjunction with an accessibility process in the touchscreen computing device, wherein the accessibility process generates audible descriptions of user interface features shown on a display of the device, actuation of the auxiliary sensor by a user affects the manner in which concurrent touchscreen input is processed and audible descriptions are presented.
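The mechanism the abstract describes can be sketched in a few lines: an accessibility process picks a spoken description for a touched user-interface element depending on whether the auxiliary sensor is actuated. This is an illustrative sketch only; the function and dictionary key names are assumptions, not taken from the patent.

```python
# Illustrative sketch (not from the patent): an accessibility process that
# varies its spoken output depending on the actuation state of an
# auxiliary sensor attached to the device.

def describe_element(element, sensor_actuated):
    """Return the speech utterance for a touched GUI element.

    `element` is a dict with hypothetical keys 'name', 'type', and
    'function'; `sensor_actuated` models the auxiliary sensor's state.
    """
    if sensor_actuated:
        # Actuated state: speak a fuller description of the element.
        return f"{element['name']}, {element['type']}. {element['function']}"
    # Default state: speak only the element's name.
    return element["name"]

button = {
    "name": "Send",
    "type": "button",
    "function": "Sends the composed message to the selected recipient.",
}
print(describe_element(button, sensor_actuated=False))  # brief utterance
print(describe_element(button, sensor_actuated=True))   # detailed utterance
```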
19 Claims
1. A method comprising:
from at least one auxiliary sensor device that is attached to a computing device as a removable outer housing surrounding a touchscreen of the computing device, receiving at the computing device a communication indicating an actuation state of the auxiliary sensor device;
obtaining descriptive information for a graphical user interface element displayed under the touchscreen;
responsive to a touch gesture received at the touchscreen included with the computing device, producing a first audio signal, based on the descriptive information, from the computing device if the actuation state of the auxiliary sensor device corresponds to a first state; and
responsive to the touch gesture, producing a second audio signal, based on the descriptive information, from the computing device, different from the first audio signal, if the actuation state of the auxiliary sensor device corresponds to a second state different from the first state,
wherein the first and second audio signals each include speech utterances, and the first and second audio signals are different from one another based at least on the first and second audio signals including different speech utterances,
wherein the first audio signal includes first speech utterances that describe a function of the graphical user interface element, and the second audio signal includes second speech utterances that describe the function of the graphical user interface element, the second speech utterances providing a more detailed description of the function of the graphical user interface element than the first speech utterances.
Dependent claims: 2, 3, 4, 5, 6.
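The selection step at the heart of claim 1 can be sketched as follows: the same touch gesture yields a brief or a more detailed spoken description of the element's function, depending on the sensor's actuation state. The constant and key names are hypothetical, chosen only to mirror the claim language.

```python
# Hypothetical sketch of the claim-1 logic: one touch gesture, two possible
# speech utterances about the element's *function*, with the second
# actuation state producing the more detailed description.

FIRST_STATE, SECOND_STATE = 0, 1

def function_utterance(descriptive_info, actuation_state):
    # `descriptive_info` is assumed to carry both a short and a long
    # description of the element's function.
    if actuation_state == SECOND_STATE:
        return descriptive_info["function_detailed"]
    return descriptive_info["function_brief"]

info = {
    "function_brief": "Sends message.",
    "function_detailed": (
        "Sends the composed message to all selected recipients "
        "over the active network connection."
    ),
}
print(function_utterance(info, FIRST_STATE))   # brief description
print(function_utterance(info, SECOND_STATE))  # detailed description
```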
7. A method comprising:
from at least one auxiliary sensor device that is attached to a computing device as a removable outer housing surrounding a touchscreen of the computing device, receiving at the computing device a communication indicating an actuation state of the auxiliary sensor device;
obtaining descriptive information for a graphical user interface element displayed under the touchscreen;
responsive to a touch gesture received at the touchscreen included with the computing device, producing a first audio signal, based on the descriptive information, from the computing device if the actuation state of the auxiliary sensor device corresponds to a first state; and
responsive to the touch gesture, producing a second audio signal, based on the descriptive information, from the computing device, different from the first audio signal, if the actuation state of the auxiliary sensor device corresponds to a second state different from the first state,
wherein the first and second audio signals each include speech utterances, and the first and second audio signals are different from one another based at least on the first and second audio signals including different speech utterances,
wherein the first audio signal includes first speech utterances that describe a function of the graphical user interface element and the second audio signal includes second speech utterances that describe a type of the graphical user interface element.
Dependent claims: 8, 9, 10.
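Claim 7 differs from claim 1 only in what the second utterance describes: the element's type rather than a more detailed function. A minimal sketch of that variant, with illustrative names not drawn from the patent:

```python
# Sketch of the claim-7 variant: first state speaks the element's function,
# second state speaks the element's *type*. State encoding and key names
# are assumptions made for illustration.

def utterance_for(element, actuation_state):
    if actuation_state == 0:           # first state
        return element["function"]     # e.g. a description of what it does
    return element["type"]             # second state: the kind of element

elem = {"function": "Submits the form", "type": "button"}
print(utterance_for(elem, 0))  # function description
print(utterance_for(elem, 1))  # element type
```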
11. A system comprising:
an auxiliary sensor device coupled to a computing device as a removable outer housing surrounding a touchscreen interface of the computing device; and
a computing device including:
at least one processor executing instructions of an operating system and one or more applications;
the touchscreen interface, the touchscreen interface being coupled to the processor and operable to communicate gestural user input to the processor;
at least one display visible through the touchscreen and controlled by the processor;
at least one communications interface coupled to the auxiliary sensor device and operable to communicate auxiliary sensor state to the processor; and
at least one audio subsystem producing audio signals under control of the processor;
wherein the processor operates to:
obtain auxiliary sensor state information via the communications interface;
obtain descriptive information for a graphical user interface element displayed by the display;
receive at least one instance of gestural user input via the touchscreen interface; and
responsive to receiving the at least one instance of gestural user input, control the audio subsystem to selectively produce a first audio signal, based on the descriptive information, if the auxiliary sensor state corresponds to a first state and to selectively produce a second audio signal, based on the descriptive information, if the auxiliary sensor state corresponds to a second state,
wherein the first and second audio signals each include speech utterances and the first and second audio signals are different from one another based at least on the first and second audio signals including different speech utterances,
wherein the first audio signal includes first speech utterances that describe a function of the graphical user interface element and the second audio signal includes second speech utterances that describe a type of the graphical user interface element.
Dependent claims: 12, 13, 14, 15.
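The system of claim 11 can be modeled as two cooperating objects: a stand-in for the auxiliary sensor reached over the communications interface, and a device whose touch handler routes the descriptive information to the audio subsystem according to the sensor state. All class and method names below are illustrative, not from the patent.

```python
# A minimal object sketch of the claimed system. The `spoken` list stands
# in for the audio subsystem; the sensor object stands in for the state
# reported over the communications interface.

class AuxiliarySensor:
    def __init__(self):
        self.actuated = False  # first state by default

class Device:
    def __init__(self, sensor):
        self.sensor = sensor   # communications-interface stand-in
        self.spoken = []       # audio-subsystem stand-in

    def on_touch(self, element):
        # Select the utterance based on the current sensor state.
        if self.sensor.actuated:
            self.spoken.append(element["type"])      # second state
        else:
            self.spoken.append(element["function"])  # first state

sensor = AuxiliarySensor()
device = Device(sensor)
icon = {"function": "Opens settings", "type": "icon"}
device.on_touch(icon)      # first state: function description
sensor.actuated = True
device.on_touch(icon)      # second state: element type
print(device.spoken)
```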
16. A system comprising:
an auxiliary sensor device coupled to a computing device as a removable outer housing surrounding a touchscreen interface of the computing device; and
a computing device including:
at least one processor executing instructions of an operating system and one or more applications;
the touchscreen interface, the touchscreen interface being coupled to the processor and operable to communicate gestural user input to the processor;
at least one display visible through the touchscreen and controlled by the processor;
at least one communications interface coupled to the auxiliary sensor device and operable to communicate auxiliary sensor state to the processor; and
at least one audio subsystem producing audio signals under control of the processor;
wherein the processor operates to:
obtain auxiliary sensor state information via the communications interface;
obtain descriptive information for a graphical user interface element displayed by the display;
receive at least one instance of gestural user input via the touchscreen interface; and
responsive to receiving the at least one instance of gestural user input, control the audio subsystem to selectively produce a first audio signal, based on the descriptive information, if the auxiliary sensor state corresponds to a first state and to selectively produce a second audio signal, based on the descriptive information, if the auxiliary sensor state corresponds to a second state,
wherein the first and second audio signals each include speech utterances and the first and second audio signals are different from one another based at least on the first and second audio signals including different speech utterances,
wherein the first audio signal includes first speech utterances that describe a function of the graphical user interface element and the second audio signal includes second speech utterances that describe the function of the graphical user interface element, the second speech utterances providing a more detailed description of the function of the graphical user interface element than the first speech utterances.
Dependent claims: 17, 18, 19.
Specification