Context-based smartphone sensor logic
First Claim
1. A portable user device comprising a processor, a memory and plural sensors, including at least a microphone or camera, the memory containing software instructions configuring the device to perform acts including:
(a) applying a classification procedure to received audio and/or visual information, sensed by the microphone and/or camera, to determine a type of said information from among plural possible types;
(b) determining a scenario type, based at least in part on (i) a number of people sensed as being present with a user of the device, or (ii) an identity of people sensed as being present with said user, together with one or more of:
(i) time of day, (ii) day of week, (iii) location, (iv) calendar data, (v) clock alarm status, (vi) motion sensor data, (vii) orientation sensor data, and (viii) information from a social networking service; and
(c) based on a combination of (i) the determined type of the received audio and/or visual information, and (ii) the determined type of scenario, selecting a group of one or more recognition technologies from a set of available recognition technologies, and applying the selected group of one or more recognition technologies to the received audio and/or visual information;
wherein configuration of said device by said software instructions enables the device, in act (c), to select and apply three or more different groups of recognition technologies to the received audio and/or visual information, in accordance with which of three or more different combinations of (i) information type, and (ii) scenario type, are respectively determined; and
configuration of said device by said software instructions enables the device, in act (c), to include two or more recognition technologies in one of said selected and applied groups of recognition technologies, at least one of said two or more selected and applied recognition technologies being a watermark-, fingerprint-, barcode-based, or optical character-recognition technology.
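As a purely illustrative sketch of the selection logic recited in acts (a) through (c), the claim can be read as a lookup keyed on the pair (information type, scenario type). Every name, classification rule, and table entry below is hypothetical; none of them are drawn from the patent itself:

```python
# Hypothetical sketch of claim 1: act (a) classifies sensed content, act (b)
# derives a scenario type from context, and act (c) uses the combination to
# select a group of recognition technologies. All names are illustrative.

# Recognition technologies assumed available on the device.
AVAILABLE = {"watermark", "fingerprint", "barcode", "ocr", "speech", "face"}

# Three or more distinct (information type, scenario type) combinations,
# each mapped to a different group, per the first "wherein" clause.
SELECTION_TABLE = {
    ("audio", "commuting_alone"):  {"fingerprint", "speech"},
    ("image", "shopping"):         {"barcode", "ocr"},
    ("image", "social_gathering"): {"face", "watermark"},
}

def classify_content(sample):
    """Act (a): classify sensed audio/visual information into a type."""
    return "audio" if sample["sensor"] == "microphone" else "image"

def determine_scenario(people_count, location):
    """Act (b): derive a scenario type from people present plus context."""
    if location == "store":
        return "shopping"
    if people_count > 2:
        return "social_gathering"
    return "commuting_alone"

def select_recognizers(sample, people_count, location):
    """Act (c): combine content type and scenario type to pick a group."""
    key = (classify_content(sample), determine_scenario(people_count, location))
    return SELECTION_TABLE.get(key, set()) & AVAILABLE
```

For example, a camera frame captured alone in a store would select the barcode and OCR group, while microphone audio captured while commuting alone would select the fingerprint and speech group; the second "wherein" clause is satisfied because those groups contain two technologies, including barcode- and fingerprint-based ones.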
Abstract
Methods employ sensors in portable devices (e.g., smartphones) both to sense content information (e.g., audio and imagery) and context information. Device processing is desirably dependent on both. For example, some embodiments activate certain processor intensive operations (e.g., content recognition) based on classification of sensed content and context. The context can control the location where information produced from such operations is stored, or control an alert signal indicating, e.g., that sensed speech is being transcribed. Some arrangements post sensor data collected by one device to a cloud repository, for access and processing by other devices. Multiple devices can collaborate in collecting and processing data, to exploit advantages each may have (e.g., in location, processing ability, social network resources, etc.). A great many other features and arrangements are also detailed.
20 Claims
1. (Independent claim; set forth above as the First Claim.) Claims 2-18 depend from claim 1.
19. A portable user device comprising a battery, one or more processors, a memory and plural sensors, including at least a microphone or camera, and further comprising:
first means, for applying a classification procedure to received audio and/or visual information, sensed by the microphone and/or camera, to determine a type of said information from among plural possible types, and for producing a corresponding first output;
instructions stored in said memory for configuring the one or more processors to determine a scenario type, and produce a corresponding second output, based at least in part on (i) a number of people sensed as being present with a user of the device, or (ii) an identity of one or more people sensed as being present with said user, together with one or more of:
(i) time of day, (ii) day of week, (iii) location, (iv) calendar data, (v) clock alarm status, (vi) motion sensor data, (vii) orientation sensor data, and (viii) information from a social networking service; and
second means, coupled to receive data from said first and second outputs, for selecting a group of one or more recognition technologies from a set of available recognition technologies, and applying the selected group of one or more recognition technologies to the received audio and/or visual information;
wherein said second means enables the device to select and apply three or more different groups of recognition technologies to the received audio and/or visual information, in accordance with which of three or more different combinations of (i) information type, and (ii) scenario type, are respectively determined; and
wherein said second means enables the device to include two or more recognition technologies in one of said selected and applied groups of recognition technologies, at least one of said two or more selected and applied recognition technologies being a watermark-, fingerprint-, barcode-based, or optical character-recognition technology.
20. A method employing a portable device including a processor, memory and plural sensors that include at least a microphone or a camera, the method comprising the acts:
(a) applying a classification procedure to received audio and/or visual information, sensed by the microphone and/or camera, to determine a type of said information from among plural possible types;
(b) determining a scenario type, based at least in part on (i) a number of people sensed as being present with a user of the device, or (ii) an identity of one or more people sensed as being present with said user, together with one or more of:
(i) time of day, (ii) day of week, (iii) location, (iv) calendar data, (v) clock alarm status, (vi) motion sensor data, (vii) orientation sensor data, and (viii) information from a social networking service; and
(c) based on a combination of (i) the determined type of the received audio and/or visual information, and (ii) the determined type of scenario, selecting a group of one or more recognition technologies from a set of available recognition technologies, and applying the selected group of one or more recognition technologies to the received audio and/or visual information;
at a first time, act (c) comprising applying a first group of recognition technologies to the received audio and/or visual information, in accordance with a first combination of (i) information type, and (ii) scenario type;
at a second time, act (c) comprising applying a second group of recognition technologies to the received audio and/or visual information, in accordance with a second combination of (i) information type, and (ii) scenario type; and
at a third time, act (c) comprising applying a third group of recognition technologies to the received audio and/or visual information, in accordance with a third combination of (i) information type, and (ii) scenario type;
wherein:
said first, second and third groups of recognition technologies are different; said first, second and third combinations are different; and
at one of said first, second or third times, act (c) includes selecting and applying a group of two or more recognition technologies to the received audio and/or visual information, at least one of said two or more recognition technologies being a watermark-, fingerprint-, barcode-based, or optical character-recognition technology.
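The temporal structure of claim 20 (three times, three different combinations, three different groups) can be walked through with a minimal hypothetical sketch. The scenario names, table entries, and groups below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical walk-through of claim 20: at three different times, three
# different (information type, scenario type) combinations each select a
# different group of recognition technologies. All names are illustrative.

SELECTION_TABLE = {
    ("audio", "alone"): ("fingerprint",),     # first time
    ("image", "store"): ("barcode", "ocr"),   # second time: a group of two,
                                              # one of them barcode-based
    ("image", "party"): ("face",),            # third time
}

def run_act_c(info_type, scenario_type):
    """Act (c) at one time: select the group for this combination."""
    # In a real device each selected technology would now process the data.
    return SELECTION_TABLE[(info_type, scenario_type)]

events = [("audio", "alone"), ("image", "store"), ("image", "party")]
groups = [run_act_c(*event) for event in events]
assert len(set(groups)) == 3  # first, second and third groups all differ
```

The final "wherein" limitations hold in this sketch because the three combinations and the three resulting groups are pairwise distinct, and the second event selects a group of two technologies including a barcode-based one.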
Specification