MULTI-MODAL USER EXPRESSIONS AND USER INTENSITY AS INTERACTIONS WITH AN APPLICATION
First Claim
1. A system, comprising:
a sensing component that senses and interprets multi-modal user expressions as interactions for controlling an application;
an analysis component that analyzes the user expressions for user intensity, wherein the application operates according to the user expressions and the user intensity; and
a microprocessor configured to execute computer-executable instructions associated with at least one of the sensing component or the analysis component.
Abstract
Architecture that enables single- and multi-modal interaction with computing devices and that interprets user intensity (or liveliness) in the gesture or gestures. In a geospatial implementation, a multi-touch interaction can involve detecting and processing tactile pressure on a touch-sensitive surface to facilitate general navigation between two geographical points. This is coupled with detailed information that supports navigation and turn-by-turn directions. Such interaction includes using time and/or pressure to release or increase the zoom level of map tiles, touching two geographical points and speaking to obtain directions between them, and blending tiles to create a compelling user experience in which the map appears at different levels of zoom in the same view.
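The abstract's use of time and/or pressure to control the zoom level of map tiles can be sketched as a small function. This is an illustrative assumption, not the patent's implementation: the function name, pressure normalization, and step rates are all hypothetical.

```python
# Hypothetical sketch: mapping touch pressure and dwell time to a map zoom
# level, as the abstract describes. Thresholds and rates are assumptions.

def zoom_level(pressure: float, dwell_s: float,
               base_zoom: int = 10, max_zoom: int = 18) -> int:
    """Firmer, longer presses zoom in further; release returns to base_zoom.

    pressure: normalized tactile pressure in [0.0, 1.0]
    dwell_s:  seconds the touch has been held
    """
    if pressure <= 0.0:
        # Touch released: revert to the base zoom level.
        return base_zoom
    # Dwell time accumulates zoom steps; firm pressure accumulates them faster.
    steps = int(dwell_s * (1 + 4 * min(pressure, 1.0)))
    return min(base_zoom + steps, max_zoom)
```

For example, a light one-second press nudges the zoom a few levels above the base, while a firm sustained press saturates at `max_zoom`.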
64 Citations
20 Claims
1. A system, comprising:
a sensing component that senses and interprets multi-modal user expressions as interactions for controlling an application;
an analysis component that analyzes the user expressions for user intensity, wherein the application operates according to the user expressions and the user intensity; and
a microprocessor configured to execute computer-executable instructions associated with at least one of the sensing component or the analysis component.
(Dependent claims: 2, 3, 4, 5, 6, 7, 8)
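The two-component architecture recited in claim 1 can be illustrated with a minimal sketch. All class and method names here are hypothetical; the claim does not prescribe any particular interface.

```python
# Illustrative sketch of claim 1's architecture: a sensing component that
# interprets multi-modal expressions as commands, and an analysis component
# that scores user intensity. Names and mappings are assumptions.

from dataclasses import dataclass

@dataclass
class Expression:
    modality: str        # e.g. "touch", "speech", "gesture"
    magnitude: float     # raw signal strength (pressure, volume, speed)

class SensingComponent:
    def interpret(self, raw: Expression) -> str:
        # Map a raw multi-modal expression to an application command.
        commands = {"touch": "select", "speech": "query", "gesture": "pan"}
        return commands.get(raw.modality, "ignore")

class AnalysisComponent:
    def intensity(self, raw: Expression) -> str:
        # Classify intensity (liveliness) from the signal magnitude.
        return "high" if raw.magnitude > 0.7 else "normal"

def control_application(raw: Expression) -> tuple:
    # The application operates according to both the interpreted command
    # and the analyzed intensity, as the claim recites.
    return SensingComponent().interpret(raw), AnalysisComponent().intensity(raw)
```

A forceful tap, for instance, would yield the same command as a gentle one but with a "high" intensity, letting the application respond differently.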
9. A method, comprising acts of:
sensing and interpreting multi-modal user expressions as interactions for controlling a geospatial application;
interacting with the geospatial application based on the user expressions to cause the geospatial application to render different levels of zoom in a single overall view; and
configuring a processor to execute instructions related to at least one of the acts of sensing or interacting.
(Dependent claims: 10, 11, 12, 13, 14, 15, 16)
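Claim 9's rendering of "different levels of zoom in a single overall view" suggests a per-region zoom assignment before tiles are blended into one composite. The sketch below is an assumption about one way to do this; the region structure, function name, and zoom offsets are not from the patent.

```python
# Hedged sketch of claim 9: each region of the overall view gets its own
# tile zoom, so focused regions render at higher zoom than the rest.
# The +4 zoom boost for focused regions is an arbitrary illustrative choice.

def tile_zooms(regions: list, base_zoom: int = 8) -> dict:
    """Return a per-region zoom map; regions marked "focus" get a higher zoom."""
    return {
        r["name"]: base_zoom + (4 if r.get("focus") else 0)
        for r in regions
    }
```

A renderer could then fetch tiles at each region's zoom and blend them at their shared boundaries into the single overall view the claim describes.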
17. A method, comprising acts of:
sensing and analyzing received user gestures;
enabling interaction with an overall map view of a navigation application based on the user gestures to obtain navigation information;
detecting concurrent touch gestures via a touch display as selections of an origin and a destination in the overall map view;
rendering map view portions of both the origin and the destination at different levels of detail than other geographic portions of the overall map view; and
configuring a processor to execute instructions related to at least one of the acts of sensing, enabling, detecting, or rendering.
(Dependent claims: 18, 19, 20)
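The detecting and rendering acts of claim 17 can be sketched as two small functions: one interpreting two concurrent touch points as origin and destination, and one deciding which map portions render at higher detail. Everything here, including the ordering convention and the distance test, is a hypothetical illustration rather than the claimed method.

```python
# Hypothetical sketch of claim 17: two concurrent touch points select an
# origin and a destination, and tiles near either point render at higher
# detail. The sort-based ordering and radius test are assumptions.

def route_endpoints(touches: list):
    """Interpret exactly two concurrent touches as (origin, destination)."""
    if len(touches) != 2:
        return None                  # not an origin/destination gesture
    origin, dest = sorted(touches)   # deterministic ordering by x, then y
    return origin, dest

def detail_level(tile_center: tuple, endpoints, radius: float = 1.0) -> str:
    """Tiles near either endpoint render at higher detail than the rest."""
    if endpoints is None:
        return "normal"
    near = any(abs(tile_center[0] - p[0]) <= radius and
               abs(tile_center[1] - p[1]) <= radius for p in endpoints)
    return "high" if near else "normal"
```

Combined with a speech query ("directions between these two points"), this would give the multi-modal origin/destination selection the claims describe.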
Specification