Determining time-of-flight measurement parameters
First Claim
1. A system comprising:
- one or more processors;
- one or more cameras configured to produce images of an environment, wherein an individual image comprises pixel values corresponding respectively to surface points of the environment;
- the one or more cameras including a time-of-flight camera, wherein the images include depth images, and wherein a depth image is produced by the time-of-flight camera using a sensing duration within which a reflected light signal is sensed, an individual pixel value of the depth image indicating a distance of a corresponding surface point from the time-of-flight camera;
- one or more non-transitory computer-readable media storing computer-executable instructions that, when executed by the one or more processors, cause the one or more processors to perform acts comprising:
  - analyzing one or more of the images to detect an object within the environment;
  - defining a region of the environment that contains the object;
  - determining a distance of the object from the time-of-flight camera;
  - selecting a new sensing duration of the time-of-flight camera based at least in part on the sensing duration and the distance of the object from the time-of-flight camera, wherein the selecting the new sensing duration comprises at least one of increasing the sensing duration to increase spatial depth accuracy or decreasing the sensing duration to reduce motion blur;
  - obtaining a sequence of the depth images produced by the time-of-flight camera using the new sensing duration; and
  - analyzing a portion of an individual depth image of the sequence that corresponds to the defined region of the environment to track the object over time.
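The duration-selection step of the claim can be illustrated with a minimal sketch. The function name, the doubling/halving rule, and the round-trip floor are illustrative assumptions, not taken from the patent; the only claimed behavior reflected here is that the new duration depends on the current duration and the object's distance, and is lengthened for depth accuracy or shortened to reduce motion blur.

```python
# Hypothetical sketch of the claimed duration-selection step; the
# specific scaling factors are assumptions for illustration only.

def select_sensing_duration(current_us, distance_m, goal):
    """Pick a new time-of-flight sensing duration for a tracked object.

    goal "accuracy": integrate reflected light longer, improving spatial
    depth accuracy. goal "motion": integrate for less time, reducing
    motion blur for a fast-moving object.
    """
    if goal == "accuracy":
        new_us = current_us * 2.0   # longer integration window
    elif goal == "motion":
        new_us = current_us * 0.5   # shorter integration window
    else:
        new_us = current_us
    # The duration must at least cover the light's round trip to the
    # object: round_trip = 2 * d / c (converted to microseconds).
    round_trip_us = 2.0 * distance_m / 299_792_458.0 * 1e6
    return max(new_us, round_trip_us)
```

For an object 3 m away, the round trip (~0.02 µs) is far below typical integration windows, so the goal-driven scaling dominates: `select_sensing_duration(10.0, 3.0, "motion")` returns 5.0.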
Abstract
In a system that monitors the positions and movements of objects within an environment, a depth camera may be configured to produce depth images based on configurable measurement parameters such as illumination intensity and sensing duration. A supervisory component may be configured to roughly identify objects within an environment and to specify observation goals with respect to the objects. The measurement parameters of the depth camera may then be configured in accordance with the goals, and subsequent analyses of the environment may be based on depth images obtained using the measurement parameters.
Claims (20 claims; 46 citations)
1. A system comprising: (independent claim; full text set out above under First Claim)
- View Dependent Claims (2, 3, 4, 5)
6. A computer-implemented method implemented at least in part by a computing device coupled to a time-of-flight camera, the computer-implemented method comprising:
- receiving one or more images of an environment;
- analyzing at least one of the one or more images to identify an object within the environment;
- changing a sensing duration of the time-of-flight camera based at least in part on a distance to the object from the time-of-flight camera, the changing of the sensing duration comprising at least one of increasing the sensing duration to increase spatial depth accuracy or decreasing the sensing duration to reduce motion blur;
- configuring the time-of-flight camera to produce a depth image using the sensing duration as changed; and
- analyzing the depth image to determine one or more characteristics of the object.
- View Dependent Claims (7, 8, 9, 10, 11)
-
12. A method implemented at least in part by a computing device coupled to a time-of-flight camera, the method comprising:
- receiving one or more images of an environment;
- analyzing at least one of the one or more images to detect an object within the environment;
- determining one or more observation goals for the object, the one or more observation goals comprising at least one of (a) increasing spatial depth resolution or (b) increasing temporal motion resolution to reduce motion blur;
- selecting one or more measurement parameters for the time-of-flight camera based at least in part on the one or more observation goals, wherein the one or more measurement parameters comprise at least an illumination intensity;
- configuring the time-of-flight camera to capture a depth image using the one or more measurement parameters;
- and analyzing the depth image to determine one or more characteristics of the object.
- View Dependent Claims (13, 14, 15, 16, 17, 18, 19, 20)
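Claim 12's goal-to-parameter mapping can be sketched as a small function. The goal names, the base values, and the scaling factors are all assumptions for illustration; the claim requires only that the selected parameters include at least an illumination intensity and follow from the observation goals.

```python
# Hedged sketch of mapping observation goals to measurement parameters;
# goal strings and scaling factors are illustrative assumptions.

def parameters_for_goals(goals, base_intensity=1.0, base_duration_us=10.0):
    """Return (illumination_intensity, sensing_duration_us) for a set of
    observation goals."""
    intensity, duration = base_intensity, base_duration_us
    if "temporal_motion_resolution" in goals:
        # Shorter integration reduces motion blur; brighter illumination
        # compensates for the reduced amount of collected light.
        duration *= 0.5
        intensity *= 2.0
    if "spatial_depth_resolution" in goals:
        # More reflected light lowers per-pixel depth noise.
        intensity *= 1.5
    return intensity, duration
```

Note the coupling the abstract implies between the two parameters: trading sensing duration for illumination intensity keeps the total collected light roughly constant while serving the motion-resolution goal.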
Specification