Multiple camera based motion tracking
First Claim
1. A non-transitory computer-readable storage medium storing instructions that, when executed by at least one processor of a computing device, cause the computing device to:
capture a first image;
analyze the first image to determine a position of an object relative to a plurality of gesture sensors of the computing device;
determine a first gesture sensor, of the plurality of gesture sensors of the computing device, having the object in a first field of view of the first gesture sensor;
cause the first gesture sensor to be in an active state and other gesture sensors of the plurality of gesture sensors to be in a deactivated state;
capture one or more additional images of the object using the first gesture sensor while the object is in the first field of view of the first gesture sensor;
track motion of the object relative to the computing device by analyzing the one or more additional images captured using the first gesture sensor;
determine that the object is approaching a second field of view of a second gesture sensor of the plurality of gesture sensors; and
cause the second gesture sensor to be in the active state when the object, determined to be approaching the second field of view, satisfies at least one activation criterion for the second gesture sensor, the activation criterion including at least determining the object will enter the second field of view within a predetermined period of time associated with the second gesture sensor, wherein the computing device is configured to track motion of the object between the first field of view and the second field of view.
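The activation criterion recited in this claim — wake the second gesture sensor when the tracked object is predicted to enter its field of view within a predetermined period of time — could be sketched roughly as follows. All names, the one-dimensional coordinates, and the constant-velocity assumption are illustrative only, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class GestureSensor:
    """Hypothetical stand-in for one of the device's gesture sensors."""
    fov_min: float            # left edge of the field of view (normalized coords)
    fov_max: float            # right edge of the field of view
    activation_period: float  # predetermined period of time (seconds) for this sensor
    active: bool = False

def satisfies_activation_criterion(x: float, vx: float, sensor: GestureSensor) -> bool:
    """Return True if an object at position x, moving with velocity vx, is
    predicted to enter the sensor's field of view within its predetermined period."""
    if vx == 0:
        return False
    # Time to reach the nearer FOV edge along the direction of travel.
    if x < sensor.fov_min and vx > 0:
        time_to_enter = (sensor.fov_min - x) / vx
    elif x > sensor.fov_max and vx < 0:
        time_to_enter = (sensor.fov_max - x) / vx
    else:
        return False  # moving away, or already inside the field of view
    return time_to_enter <= sensor.activation_period

second = GestureSensor(fov_min=0.5, fov_max=1.0, activation_period=0.2)
if satisfies_activation_criterion(x=0.4, vx=1.0, sensor=second):
    second.active = True  # bring the second gesture sensor into the active state
```

With the values above, the object is 0.1 units from the second field of view and moving at 1.0 units/s, so the predicted entry time (0.1 s) falls within the 0.2 s period and the sensor is activated.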
Abstract
A computing device with multiple image capture elements can selectively activate those elements in order to keep an object of interest within the field of view of at least one of those elements, while conserving resources by not keeping all the image capture elements active. The object can be located using an appropriate object recognition process. The location, and thus the motion, of the object then can be tracked over time using one of the image capture elements. The motion can be monitored to determine when the object is likely to pass into the field of view of another image capture element. The determination can be made based upon factors such as location, direction, speed, acceleration, and predicted time within the current field of view. Such an approach allows an identified object to be tracked and kept in a field of view while conserving resources on the device.
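The prediction the abstract describes — estimating remaining time within the current field of view from location, speed, and acceleration — amounts to basic kinematics. A hypothetical one-dimensional sketch, with the constant-acceleration model and all names assumed for illustration:

```python
import math

def time_until_exit(x: float, v: float, a: float, fov_edge: float) -> float:
    """Estimate seconds until an object at position x, moving toward fov_edge
    with velocity v and constant acceleration a, crosses that edge.
    Solves fov_edge = x + v*t + 0.5*a*t**2 for the smallest positive t;
    returns math.inf if the object never reaches the edge."""
    d = fov_edge - x
    if a == 0:
        return d / v if v != 0 and d / v > 0 else math.inf
    disc = v * v + 2 * a * d
    if disc < 0:
        return math.inf
    roots = [(-v + s * math.sqrt(disc)) / a for s in (1, -1)]
    positive = [t for t in roots if t > 0]
    return min(positive) if positive else math.inf
```

For example, an object 0.5 units from the edge moving at 1.0 units/s with no acceleration exits in 0.5 s; with acceleration it exits sooner, which is why the abstract lists acceleration among the factors considered.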
Claims (20)
1. (Independent claim; full text reproduced above under First Claim.) Dependent claims: 2, 3, 4.
5. A computer-implemented method, comprising:
determining a position of an object relative to a plurality of image capture elements of a computing device;
causing a first image capture element, of the plurality of image capture elements on the computing device, to capture one or more images of the object to determine a new position of the object, relative to the first image capture element of the computing device, over a period of time, the new position of the object being within a first field of view of the first image capture element;
determining, based at least in part upon analyzing the one or more images captured by the first image capture element, that the object, with respect to the first image capture element of the computing device, is moving toward a second field of view of a second image capture element and will enter the second field of view within a predetermined period of time associated with the second image capture element; and
causing the second image capture element, of the plurality of image capture elements, to be in an active state to capture one or more subsequent images to enable determination of where the object is while the object is in the second field of view.
Dependent claims: 6, 7, 8, 9, 10, 11, 12, 13.
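The handoff claim 5 describes — track the object with the first image capture element, predict its entry into the second field of view, then activate the second element — can be illustrated as a toy tracking loop. Everything here (normalized one-dimensional coordinates, two fixed fields of view, the two-sample velocity estimate) is a hypothetical sketch, not the patented implementation:

```python
def track_with_handoff(positions, fov_b=(0.5, 1.0),
                       predetermined_period=0.25, dt=0.1):
    """Walk a sequence of object positions (one per frame, dt seconds apart)
    and report which image capture element should be active at each step.
    Element 'A' initially sees the object; element 'B' covers fov_b."""
    active = {"A": True, "B": False}
    log = []
    for i, x in enumerate(positions):
        # Estimate velocity from the last two samples.
        vx = (x - positions[i - 1]) / dt if i > 0 else 0.0
        # Predict entry into B's field of view within the predetermined period.
        if not active["B"] and vx > 0 and x < fov_b[0]:
            if (fov_b[0] - x) / vx <= predetermined_period:
                active["B"] = True   # wake the second element ahead of arrival
        if active["B"] and x >= fov_b[0]:
            active["A"] = False      # object has left A; deactivate to save power
        log.append((x, active["A"], active["B"]))
    return log
```

Running this over positions drifting rightward shows element B switching on one frame before the object crosses into its field of view, and element A switching off once the crossing occurs — the resource-conserving behavior the claim recites.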
14. A computing device, comprising:
at least one processor;
a plurality of cameras; and
memory including instructions that, when executed by the at least one processor, cause the computing device to:
cause a first camera, of the plurality of cameras, to capture one or more images of a feature to determine motion of the feature while the feature is within a first field of view of the first camera;
determine, based at least in part upon analyzing the one or more images, that the feature is moving toward a second field of view of a second camera of the plurality of cameras;
cause the second camera to capture one or more subsequent images to enable determination of motion of the feature while the feature is in the second field of view;
determine an amount of time needed for the feature to move into the second field of view based at least in part upon a rate of motion of the feature with respect to the computing device;
compare the amount of time to a latency period of the second camera; and
instruct the second camera to enter an active state when the amount of time needed for the feature to move into the second field of view at least meets the latency period.
Dependent claims: 15, 16, 17, 18, 19, 20.
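Claim 14's latency comparison — instruct the second camera to activate while the feature's travel time still at least meets the camera's start-up latency, so the camera is ready when the feature arrives — might be sketched as follows (hypothetical names and units; the patent does not specify an implementation):

```python
def should_activate(distance_to_fov: float, rate_of_motion: float,
                    latency_period: float) -> bool:
    """Decide whether to instruct the second camera to enter an active state.
    distance_to_fov: how far the feature is from the second field of view
    rate_of_motion: speed of the feature with respect to the device (units/second)
    latency_period: time the second camera needs to become ready (seconds)"""
    if rate_of_motion <= 0:
        return False  # feature is stationary or moving away
    time_needed = distance_to_fov / rate_of_motion
    # Per the claim: activate when the time needed "at least meets" the latency,
    # i.e. the camera can finish powering up before the feature arrives.
    return time_needed >= latency_period
```

For example, with a 0.2 s camera latency, a feature 0.3 units away at 1.0 units/s (0.3 s of travel time) triggers activation, while one only 0.1 units away does not, because the camera could no longer be ready in time.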
Specification