CONTEXT-BASED SMARTPHONE SENSOR LOGIC
Assignments: 1 · Petitions: 0
Abstract
Methods employ sensors in portable devices (e.g., smartphones) both to sense content information (e.g., audio and imagery) and context information. Device processing is desirably dependent on both. For example, some embodiments activate certain processor-intensive operations (e.g., content recognition) based on classification of sensed content and context. The context can control the location where information produced from such operations is stored, or control an alert signal indicating, e.g., that sensed speech is being transcribed. Some arrangements post sensor data collected by one device to a cloud repository, for access and processing by other devices. Multiple devices can collaborate in collecting and processing data, to exploit advantages each may have (e.g., in location, processing ability, social network resources, etc.). A great many other features and arrangements are also detailed.
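The abstract's notion of context controlling where recognition output is stored, and whether an alert is raised during speech transcription, can be sketched as follows. This is an illustrative sketch only, not from the patent text; all names (`route_recognition_output`, the context keys, the repository names) are hypothetical.

```python
# Illustrative sketch: sensed context selecting a storage destination for
# recognition output and deciding whether to alert the user that speech
# is being transcribed. All names and context keys are hypothetical.

def route_recognition_output(context: dict, output: str) -> dict:
    """Decide storage destination and alerting from sensed context."""
    # Context might include location, activity, time of day, etc.
    if context.get("location") == "office":
        destination = "work_repository"
    else:
        destination = "personal_repository"

    # Signal the user when sensed speech is being transcribed.
    alert = context.get("activity") == "conversation"

    return {"store_to": destination, "alert": alert, "data": output}
```

The claim language does not prescribe any particular mapping from context to destination; this sketch simply shows one plausible rule-based realization.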
46 Claims
1. A method comprising:
applying a first classification procedure to received audio and/or visual information, to identify a type of said received information from among plural possible types;
applying a second classification procedure to received second information, to identify a scenario, from among plural possible scenarios, the identified scenario comprising the confluence of at least three circumstances, the received second information being different than said received audio or visual information; and
activating one or more recognition modules based on outputs from the first and second classification procedures;
wherein at least one of said acts is performed by hardware configured to perform such act(s).
(Dependent claims: 2, 3, 4, 5, 6, 7, 8, 9, 38, 39, 40, 41, 42)
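The two-classifier flow of claim 1 can be illustrated in code: one classifier types the audio/visual content, a second classifier identifies a scenario as the confluence of at least three circumstances, and recognition modules are activated from both outputs. This is a hypothetical sketch; the claim prescribes no particular classifiers, circumstances, or module names.

```python
# Illustrative sketch of the claimed method. All classifier rules and
# module names are hypothetical; the claim covers far more generally.

def classify_content(av_info: dict) -> str:
    # First classification: identify the type of audio/visual content.
    return "speech" if av_info.get("has_voice") else "music"

def classify_scenario(second_info: dict) -> str:
    # Second classification: the scenario is the confluence of at least
    # three circumstances (here: location, motion, and time of day),
    # drawn from information other than the audio/visual content.
    circumstances = (
        second_info["location"],
        second_info["motion"],
        second_info["time_of_day"],
    )
    if circumstances == ("car", "moving", "morning"):
        return "commuting"
    return "other"

def activate_modules(av_info: dict, second_info: dict) -> list:
    # Activate recognition modules based on both classifier outputs.
    content_type = classify_content(av_info)
    scenario = classify_scenario(second_info)
    modules = []
    if content_type == "speech" and scenario == "commuting":
        modules.append("speech_recognition")
    if content_type == "music":
        modules.append("audio_fingerprinting")
    return modules
```

Note that neither classifier output alone triggers the speech module; activation depends on both, mirroring the claim's requirement that modules are activated "based on outputs from the first and second classification procedures."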
10. A method comprising:
applying a first classification procedure to received audio and/or visual information, to identify a type of said received information from among two possible types: a first type, and a second type;
applying a first combination of plural recognition technologies to the received information if the received information is identified as the first type; and
applying a second combination of plural recognition technologies to the received information if the received information is identified as the second type;
wherein at least one of the recognition technologies is a watermark- or fingerprint-based recognition technology, and the first and second combinations are different; and
wherein at least one of said acts is performed by hardware configured to perform such act(s).
(Dependent claims: 11)
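Claim 10's selection of a different combination of recognition technologies per identified content type can be sketched as a simple dispatch. The technology names below are hypothetical placeholders; the claim requires only that the two combinations differ and that at least one technology be watermark- or fingerprint-based.

```python
# Illustrative sketch of claim 10 (hypothetical technology names):
# a different combination of plural recognition technologies is applied
# depending on which of two types the content is identified as.

FIRST_TYPE_COMBO = ("digital_watermark_decoding", "barcode_reading")
SECOND_TYPE_COMBO = ("audio_fingerprinting", "speech_recognition")

def recognize(info: str, content_type: str) -> list:
    """Apply the combination of technologies matching the content type."""
    combo = FIRST_TYPE_COMBO if content_type == "first" else SECOND_TYPE_COMBO
    # Apply each technology in the selected combination to the information.
    return [f"{tech}({info})" for tech in combo]
```

Each combination includes a watermark- or fingerprint-based technology, and the two combinations share no members, satisfying the claim's "different combinations" condition in this toy form.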
12-37. (canceled)
43. A portable user device including an image or sound sensor, a processor, and a memory, the memory containing software instructions causing the device to perform a method that includes:
applying a first classification procedure to information received by said image or sound sensor, to identify a type of said received information from among plural possible types;
applying a second classification procedure to received second information, to identify a scenario, from among plural possible scenarios, the identified scenario comprising the confluence of at least three circumstances, the received second information being different than said received audio or visual information; and
activating one or more recognition modules based on outputs from the first and second classification procedures.
(Dependent claims: 44, 45, 46)