Presence sensing
First Claim
1. A method for determining whether a user is facing a computing device, the method comprising:
capturing an initial image using an image sensor having a first resolution;
computing, using a processor, a body detection parameter based on one or more macro features of a human body detected in the captured initial image, wherein the one or more macro features of a human body include at least one of an arm, a leg, a head, or a torso;
utilizing the body detection parameter to determine a first observation likelihood that the user is facing the computing device;
concurrent with capturing the initial image, capturing data from a second sensor;
using the data captured by the second sensor to compute a second observation likelihood that the user is facing the computing device;
using the first and the second observation likelihoods to determine a probability that the user is facing the computing device;
when the determined probability that the user is facing the computing device exceeds a first threshold, updating a state representation maintained by the computing device;
capturing one or more subsequent images with a camera having a second resolution higher than the first resolution of the image sensor;
using the captured one or more subsequent images to determine a facial feature detection parameter and a movement detection parameter; and
using the facial feature detection parameter and the movement detection parameter to update the probability that the user is facing the computing device;
determining that the updated probability that the user is facing the computing device exceeds a second threshold; and
further updating the state representation.
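The flow of claim 1 — fusing two independent observation likelihoods into a single facing probability, then escalating to a higher-resolution camera once a first threshold is passed — can be sketched as below. The fusion rule (a normalized product of likelihoods, i.e. a naive-Bayes combination under a uniform prior), the thresholds, the state names, and all function names are illustrative assumptions; the patent does not prescribe a specific formula.

```python
def fuse(likelihoods):
    """Combine per-sensor P(observation | facing) values into one posterior
    P(facing), assuming conditional independence and a uniform prior."""
    p_facing, p_not = 1.0, 1.0
    for l in likelihoods:
        p_facing *= l
        p_not *= 1.0 - l
    return p_facing / (p_facing + p_not)

def presence_step(l_body, l_second, l_face=None, l_motion=None,
                  first_threshold=0.5, second_threshold=0.9):
    """One pass of the claimed method: fuse the low-resolution body
    observation with the second sensor's observation; past the first
    threshold, refine with the high-resolution camera's facial-feature
    and movement observations and test the second threshold."""
    p = fuse([l_body, l_second])
    state = "asleep"
    if p > first_threshold:
        state = "attentive"  # first update of the state representation
        if l_face is not None and l_motion is not None:
            p = fuse([l_body, l_second, l_face, l_motion])
            if p > second_threshold:
                state = "active"  # second update of the state representation
    return p, state

# Two weak observations keep the device asleep; strong follow-up
# observations from the high-resolution camera drive it to active.
print(presence_step(0.4, 0.4))
print(presence_step(0.8, 0.7, l_face=0.95, l_motion=0.9))
```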
1 Assignment
0 Petitions
Accused Products
Abstract
One embodiment may take the form of a method of operating a computing device in a reduced power state and collecting a first set of data from at least one sensor. Based on the first set of data, the computing device determines a probability that an object is within a threshold distance of the computing device and, if the object is likely within that distance, activates at least one secondary sensor to collect a second set of data. Based on the second set of data, the device determines whether the object is a person. If the object is a person, the device determines the person's position relative to the computing device and changes its state based on that position. If the object is not a person, the computing device remains in the reduced power state.
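The abstract describes a staged, power-gated flow: a cheap always-on sensor gates whether the costlier secondary sensor ever runs. A minimal sketch, in which the threshold value, the `classify` callback standing in for the secondary sensor, and the state names are all assumptions for illustration:

```python
def low_power_step(p_object_near, classify=None, proximity_threshold=0.6):
    """Stage 1: the always-on sensor yields P(object within the threshold
    distance). Stage 2 runs only when stage 1 fires: `classify` stands in
    for the secondary sensor and returns (is_person, position)."""
    if p_object_near < proximity_threshold:
        return "reduced_power"        # nothing close enough: stay asleep
    is_person, position = classify()  # activate the secondary sensor
    if not is_person:
        return "reduced_power"        # an object, but not a person
    # A person was found: choose the new state from their position.
    return "awake_front" if position == "front" else "awake_peripheral"

print(low_power_step(0.2))  # secondary sensor is never activated
print(low_power_step(0.9, classify=lambda: (True, "front")))
```

Passing the secondary sensor as a callback mirrors the power-saving point of the claim: it is only invoked after the first-stage probability clears the threshold.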
71 Citations
24 Claims
1. A method for determining whether a user is facing a computing device, the method comprising the steps reproduced in full above under First Claim. - View Dependent Claims (2, 3, 24)
4. A computing device comprising:
an image sensor configured to capture image data at a first resolution;
at least one input device configured to capture additional data;
a processor coupled to the image sensor and to the at least one input device, the processor configured to:
process the image data to determine a first observation likelihood that a user is facing the computing device by computing a body presence parameter based on one or more macro features of a human body detected in the image data, wherein the one or more macro features of a human body include at least one of an arm, a leg, a head, or a torso;
process the additional data to determine a second observation likelihood that the user is facing the computing device;
generate a predicted state representation, wherein the predicted state representation comprises an estimate for a position of the user and an estimate for a velocity of the user;
combine the predicted state representation and the first and the second observation likelihoods to determine a probability that the user is facing the computing device; and
when the probability that the user is facing the computing device exceeds a first threshold, change a state of the computing device;
capture additional image data with a camera having a second resolution higher than the first resolution;
use the captured additional image data to determine a movement parameter and a facial feature detection parameter;
update the probability that the user is facing the computing device using the body presence parameter, the movement parameter, and the facial feature detection parameter;
determine that the updated probability that the user is facing the computing device exceeds a second threshold; and
update the predicted state representation. - View Dependent Claims (5, 6, 7, 8, 9, 10, 11, 12, 13)
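Claim 4 adds a predicted state representation (an estimated user position and velocity) that is combined with the observation likelihoods, in the spirit of a Bayes/Kalman-style predict-then-update cycle. A minimal one-dimensional sketch; the constant-velocity model, the field-of-view prior, and every number here are assumptions for illustration, not the patent's disclosed implementation:

```python
def predict_state(position, velocity, dt):
    """Constant-velocity motion model: propagate the position estimate
    forward by dt seconds."""
    return position + velocity * dt, velocity

def bayes_update(prior, likelihood):
    """Posterior P(facing) from a prior and an observation whose
    likelihood is `likelihood` under facing and 1 - likelihood under
    not-facing."""
    num = prior * likelihood
    return num / (num + (1.0 - prior) * (1.0 - likelihood))

# Predict where the user will be, turn that into a prior (a user
# predicted to remain inside the camera's field of view is likelier to
# be facing the device), then fold in the observation likelihood.
pos, vel = predict_state(position=0.2, velocity=0.5, dt=1.0)
prior = 0.8 if abs(pos) <= 1.0 else 0.3
p = bayes_update(prior, likelihood=0.9)
print(pos, p)
```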
14. A method of operating a computing device to provide user focus based functionality, the method comprising:
receiving 3-dimensional (3D) image data acquired by a 3D camera;
using the 3D image data to extract a first feature from a set of macro body features related to a user in local proximity to the computing device, wherein the set of macro body features related to the user includes at least one of an arm, a leg, a torso, or a head;
concurrent with receiving the 3D image data, receiving sensor data from a second sensor separate from the 3D camera;
using the sensor data to extract a second feature from the set of macro body features related to the user being in local proximity to the computing device, the second feature being different from the first feature;
receiving a state representation comprising information related to the user being in local proximity to the computing device, wherein the information includes an estimated position of the user in 3D space and an estimated velocity of the user;
combining the state representation with a first observation likelihood based on the first feature and a second observation likelihood based on the second feature to:
produce a probability of the user facing the computing device; and
update the information in the state representation; and
changing a state of the computing device based on the probability of the user facing the computing device. - View Dependent Claims (15, 16, 17, 18, 19, 20, 21, 22, 23)
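The distinctive element of claim 14 is the loop through the state representation: each cycle combines the running estimate with two feature-based observation likelihoods, emits a facing probability, and writes the refined estimate back. A toy recursive-filter loop; the equal-weight averaging, the 0.5 smoothing factor, and the dictionary layout are illustrative assumptions only:

```python
def cycle(state, l_first, l_second):
    """One pass: average the two feature likelihoods into a single
    observation, blend it with the running estimate, and write the
    result back into the state representation."""
    obs = (l_first + l_second) / 2.0
    state["p_facing"] = 0.5 * state["p_facing"] + 0.5 * obs
    return state["p_facing"], state

state = {"p_facing": 0.5,
         "position": (0.0, 0.0, 1.2),   # estimated position in 3D space
         "velocity": (0.0, 0.0, 0.0)}   # estimated velocity of the user
# Successive frames with increasingly confident feature observations
# pull the running estimate upward.
for l1, l2 in [(0.7, 0.8), (0.9, 0.85), (0.95, 0.9)]:
    p, state = cycle(state, l1, l2)
print(state["p_facing"])
```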
Specification