Determining a segmentation boundary based on images representing an object
First Claim
1. A computing system comprising:
an infrared (IR)-absorbing surface that is disposed approximately horizontally;
an IR camera disposed above and pointed at the IR-absorbing surface to capture an IR image representing an object disposed between the IR camera and the IR-absorbing surface based on IR light reflected by the object;
an RGB camera to capture an RGB image representing the object disposed between the RGB camera and the IR-absorbing surface;
a depth camera to capture a depth image representing the object disposed between the depth camera and the IR-absorbing surface;
a segmentation engine to determine a segmentation boundary representing at least one outer edge of the object based on the IR image, the depth image, and the RGB image, wherein the segmentation boundary is determined independent of any prior image of the object captured by any of the IR, depth, and RGB cameras;
a projection assembly to project visible images on the IR-absorbing surface and the object;
wherein the segmentation engine is to determine the segmentation boundary representing the at least one outer edge of the object based on the IR, depth, and RGB images captured during the projection of the visible images; and
wherein the IR-absorbing surface further comprises a touch-sensitive region to detect physical contact with the touch-sensitive region.
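Claim 1's segmentation engine determines the boundary without any prior (background) image, which is possible because the IR-absorbing surface itself separates object from background: the surface appears dark in the IR image while the object reflects IR, and the depth camera reports the object as closer than the surface. The following is a minimal single-frame sketch of that idea in numpy; the function name, thresholds, and array conventions are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def foreground_mask(ir, depth, surface_depth, ir_thresh=40, depth_margin=10):
    """Single-frame foreground mask, no background/prior frame needed.

    ir            -- IR intensity image; the IR-absorbing surface reads near
                     zero, while an object above it reflects IR light
    depth         -- per-pixel distance from the depth camera
    surface_depth -- known distance from the depth camera to the surface
    """
    ir_fg = ir > ir_thresh                              # object reflects IR; surface absorbs it
    depth_fg = depth < (surface_depth - depth_margin)   # object sits above the surface
    return ir_fg & depth_fg
```

The outer edge of this mask would correspond to the claim's "segmentation boundary"; a real system would of course need calibration between the cameras and a more robust classifier than two fixed thresholds.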
Abstract
Examples disclosed herein relate to determining a segmentation boundary based on images representing an object. Examples include capturing an IR image, based on IR light reflected by an object disposed between an IR camera and an IR-absorbing surface, capturing a color image representing the object disposed between a color camera and the IR-absorbing surface, and determining a segmentation boundary for the object.
12 Claims
1. (Independent claim, set forth above under First Claim.) Dependent claims: 2, 3, 4, 5, 6
7. A non-transitory machine-readable storage medium comprising instructions executable by a processing resource of a computing system comprising a horizontal infrared (IR)-absorbing surface, and an IR camera, a depth camera, and a color camera, each disposed above and pointed at the IR-absorbing surface, the instructions executable to:
acquire, from the IR camera, an IR image representing an object disposed between the IR camera and the IR-absorbing surface based on IR light reflected by the object;
acquire, from the depth camera, a depth image representing respective distances of portions of the object disposed between the depth camera and the IR-absorbing surface;
acquire, from the color camera, a color image having a higher resolution than each of the IR image and the depth image and representing the object disposed between the color camera and the IR-absorbing surface;
determine a preliminary segmentation boundary for the object based on the IR image data and the depth image data;
upsample the preliminary segmentation boundary to the resolution of the color image; and
refine the upsampled preliminary segmentation boundary based on the color image to determine a segmentation boundary for the object.
Dependent claims: 8, 9
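Claim 7's upsample-and-refine steps can be sketched as follows. This is only an illustration under simplifying assumptions (nearest-neighbour upsampling, a known background colour, made-up tolerances); the patent does not specify any of these choices, and the function names are hypothetical:

```python
import numpy as np

def upsample_nearest(mask, out_shape):
    """Nearest-neighbour upsampling of a low-resolution boolean mask to the
    colour image's resolution (a simple stand-in for the 'upsample' step)."""
    rows = (np.arange(out_shape[0]) * mask.shape[0]) // out_shape[0]
    cols = (np.arange(out_shape[1]) * mask.shape[1]) // out_shape[1]
    return mask[rows][:, cols]

def refine_with_color(mask_hi, color, bg_color, tol=30):
    """Toy refinement: keep a pixel only if its colour also differs from an
    assumed-known background colour. A real system might instead snap the
    boundary to colour edges; this merely illustrates the 'refine' step."""
    diff = np.abs(color.astype(int) - bg_color).sum(axis=-1)
    return mask_hi & (diff > tol)
```

The point of the two-stage design in the claim is that the cheap, low-resolution IR and depth data locate the object coarsely, and the higher-resolution colour image then sharpens the boundary at pixel level.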
10. A method comprising:
capturing a low-resolution infrared (IR) image with an IR camera disposed above and pointing at an IR-absorbing surface;
capturing a low-resolution depth image with a depth camera disposed above and pointing at the IR-absorbing surface;
capturing a high-resolution color image with a color camera disposed above and pointing at the IR-absorbing surface, wherein each of the IR image, the depth image, and the color image represents an object disposed between the IR-absorbing surface and the respective camera used to capture the image;
combining the IR image and the depth image into a single vector image comprising data from the IR image and from the depth image at each pixel;
determining a preliminary segmentation boundary for the object based on the vector image;
upsampling the preliminary segmentation boundary to the resolution of the color image; and
refining the upsampled preliminary segmentation boundary based on the color image to determine a segmentation boundary for the object.
Dependent claims: 11, 12
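Claim 10's distinctive step is combining the IR and depth images into a single "vector image" carrying an (IR, depth) pair at each pixel, so the preliminary boundary can be computed from one multi-channel array. A minimal numpy sketch (shapes, dtypes, and the function name are assumptions for illustration):

```python
import numpy as np

def to_vector_image(ir, depth):
    """Stack the low-resolution IR and depth images into a single vector
    image of shape (H, W, 2), with the IR value in channel 0 and the depth
    value in channel 1 at each pixel."""
    assert ir.shape == depth.shape, "IR and depth images must be the same resolution"
    return np.stack([ir, depth], axis=-1)
```

Any per-pixel classifier (thresholds, clustering, a learned model) can then treat each pixel's two-element vector as its feature when determining the preliminary segmentation boundary.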