Calibration systems and methods for depth-based interfaces with disparate fields of view
First Claim
1. A display structure for gesture interactions comprising:
a display screen;
a plurality of sensors, at least two of the plurality of sensors configured to acquire depth information and at least two of the plurality of sensors configured to acquire pixel image information in a region before the display screen;
a computer system configured to perform a method comprising:
capturing a first pixel image at a primary sensor of at least a portion of the primary sensor's field of view;
capturing a second pixel image at a first secondary sensor of at least a portion of the first secondary sensor's field of view;
causing the display screen to display the first pixel image;
causing the display screen to display the second pixel image;
capturing a first set of depth values associated with a calibration object at the primary sensor when the calibration object is in both the primary sensor's field of view and the first secondary sensor's field of view;
capturing a second set of depth values associated with the calibration object at the first secondary sensor when the calibration object is in both the primary sensor's field of view and the first secondary sensor's field of view;
determining a first normal associated with a plane of the calibration object for the first set of depth values;
determining a first on-plane point associated with the plane of the calibration object for the first set of depth values;
determining a second normal associated with the plane of the calibration object for the second set of depth values;
determining a second on-plane point associated with the plane of the calibration object for the second set of depth values;
generating a first system of linear equations, in part, using a linear equation derived from the first normal and the second normal;
solving for components of a rotation transform using the first system of linear equations;
generating a second system of linear equations, in part, using a linear equation derived from the first on-plane point and the second on-plane point; and
solving for components of a translation transform using the second system of linear equations, wherein the rotation transform and translation transform move a point from the perspective of the first secondary sensor to the perspective of the primary sensor.
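The rotation-solving step recited above lends itself to a short linear-algebra sketch. The code below is an illustrative approximation, not the patented implementation: NumPy and the helper name `solve_rotation` are assumptions. Each pair of corresponding plane normals (one pair per calibration-object pose) contributes three linear equations in the nine entries of the rotation matrix, so three or more non-parallel poses determine the rotation.

```python
import numpy as np

def solve_rotation(normals_primary, normals_secondary):
    """Least-squares rotation R such that n_primary ~ R @ n_secondary.

    normals_primary, normals_secondary: (k, 3) arrays of unit plane
    normals observed by the two sensors for k calibration poses.
    Illustrative sketch only; helper name and API are assumptions.
    """
    n1 = np.asarray(normals_primary, dtype=float)   # (k, 3)
    n2 = np.asarray(normals_secondary, dtype=float) # (k, 3)
    k = n1.shape[0]
    # Stack the system: row 3*i + r encodes n1[i][r] = R[r, :] . n2[i].
    A = np.zeros((3 * k, 9))
    b = n1.reshape(-1)
    for i in range(k):
        for r in range(3):
            A[3 * i + r, 3 * r:3 * r + 3] = n2[i]
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    R = x.reshape(3, 3)
    # Project onto the nearest proper rotation via SVD, so sensor
    # noise cannot leave us with a non-orthonormal matrix.
    U, _, Vt = np.linalg.svd(R)
    return U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt
```

The final SVD projection is a common cleanup step when a rotation is estimated entry-by-entry from noisy data; the claim itself only requires solving the linear system.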
Abstract
Various of the disclosed embodiments provide Human Computer Interfaces (HCI) that incorporate depth sensors at multiple positions and orientations. The depth sensors may be used in conjunction with a display screen to permit users to interact dynamically with the system, e.g., via gestures. Calibration methods for orienting depth values between sensors are also presented. The calibration methods may generate both rotation and translation transformations that can be used to determine the location of a depth value acquired in one sensor from the perspective of another sensor. The calibration process may itself include visual feedback to direct a user assisting with the calibration. In some embodiments, floor estimation techniques may be used alone or in conjunction with the calibration process to facilitate data processing and gesture identification.
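With the rotation in hand, the translation step described above also reduces to a linear system: a secondary-sensor on-plane point, once rotated and translated, must lie on the corresponding plane observed by the primary sensor, giving one equation per pose. The sketch below is an assumption-laden illustration (NumPy assumed, helper name `solve_translation` hypothetical), not the patent's actual implementation.

```python
import numpy as np

def solve_translation(R, normals_primary, pts_primary, pts_secondary):
    """Least-squares translation t from plane-membership constraints:

        n1_i . (R @ p2_i + t) = n1_i . p1_i

    R: (3, 3) rotation from the secondary to the primary frame.
    normals_primary: (k, 3) plane normals seen by the primary sensor.
    pts_primary, pts_secondary: (k, 3) on-plane points from each sensor.
    Illustrative sketch only; three non-parallel planes suffice.
    """
    n1 = np.asarray(normals_primary, dtype=float)
    p1 = np.asarray(pts_primary, dtype=float)
    p2 = np.asarray(pts_secondary, dtype=float)
    # Right-hand side: n1_i . (p1_i - R @ p2_i), one scalar per pose.
    b = np.einsum('ij,ij->i', n1, p1 - p2 @ R.T)
    t, *_ = np.linalg.lstsq(n1, b, rcond=None)
    return t
```

Note that the two sensors need not observe the *same* physical point: plane membership alone constrains the translation, which is why the claims speak of "on-plane" points rather than corresponding points.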
20 Claims
1. A display structure for gesture interactions comprising: (set forth above as the First Claim) - View Dependent Claims (2, 3, 4, 5, 6, 20)
7. A computer-implemented method for calibrating a plurality of sensors, comprising:
capturing a first pixel image at a primary sensor of at least a portion of the primary sensor's field of view;
capturing a second pixel image at a first secondary sensor of at least a portion of the first secondary sensor's field of view;
causing a first display screen to display the first pixel image;
causing a second display screen to display the second pixel image;
capturing a first set of depth values associated with a calibration object at the primary sensor when the calibration object is in both the primary sensor's field of view and the first secondary sensor's field of view;
capturing a second set of depth values associated with the calibration object at the first secondary sensor when the calibration object is in both the primary sensor's field of view and the first secondary sensor's field of view;
determining a first normal associated with a plane of the calibration object for the first set of depth values;
determining a first on-plane point associated with the plane of the calibration object for the first set of depth values;
determining a second normal associated with the plane of the calibration object for the second set of depth values;
determining a second on-plane point associated with the plane of the calibration object for the second set of depth values;
generating a first system of linear equations, in part, using a linear equation derived from the first normal and the second normal;
solving for components of a rotation transform using the first system of linear equations;
generating a second system of linear equations, in part, using a linear equation derived from the first on-plane point and the second on-plane point; and
solving for components of a translation transform using the second system of linear equations, wherein the rotation transform and translation transform move a point from the perspective of the first secondary sensor to the perspective of the primary sensor. - View Dependent Claims (8, 9, 10, 11, 12, 13)
14. A non-transitory computer-readable medium comprising instructions configured to cause a computer system to perform a method comprising:
capturing a first set of depth values associated with a calibration object at a primary sensor when the calibration object is in both the primary sensor's field of view and a first secondary sensor's field of view;
capturing a second set of depth values associated with the calibration object at the first secondary sensor when the calibration object is in both the primary sensor's field of view and the first secondary sensor's field of view;
determining a first normal associated with a plane of the calibration object for the first set of depth values;
determining a first on-plane point associated with the plane of the calibration object for the first set of depth values;
determining a second normal associated with the plane of the calibration object for the second set of depth values;
determining a second on-plane point associated with the plane of the calibration object for the second set of depth values;
generating a first system of linear equations, in part, using a linear equation derived from the first normal and the second normal;
solving for components of a rotation transform using the first system of linear equations;
generating a second system of linear equations, in part, using a linear equation derived from the first on-plane point and the second on-plane point; and
solving for components of a translation transform using the second system of linear equations, wherein the rotation transform and translation transform move a point from the perspective of the first secondary sensor to the perspective of the primary sensor. - View Dependent Claims (15, 16, 17, 18, 19)
Specification