Automatic visual fact extraction
First Claim
1. A visual fact extraction system comprising:
an extraction module including a depth sensor mounted to a frame, wherein the extraction module defines an imaging region adapted to receive an object therein, and wherein the depth sensor is configured to capture depth imaging data from the imaging region; and
a computing device in communication with the depth sensor, wherein the computing device is configured to at least:
capture baseline depth imaging data from the imaging region using the depth sensor, wherein the baseline depth imaging data comprises information regarding distances between the depth sensor and the imaging region without an object therein;
receive an object at the imaging region;
capture depth imaging data from the imaging region using the depth sensor, wherein the depth imaging data comprises information regarding distances between the depth sensor and the imaging region with the object therein;
generate net depth imaging data based at least in part on the baseline depth imaging data and the depth imaging data, wherein the net depth imaging data comprises a plurality of pixels corresponding to the imaging region;
estimate a first dimension of the object based at least in part on the net depth imaging data;
determine a two-dimensional size of at least one of the plurality of pixels based at least in part on the first dimension;
identify a portion of the plurality of pixels corresponding to the object;
estimate a second dimension of the object based at least in part on the two-dimensional size and the portion of the pixels;
estimate a third dimension of the object based at least in part on the two-dimensional size and the portion of the pixels; and
select a container for the object based at least in part on the first dimension, the second dimension and the third dimension.
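The depth-processing steps recited in claim 1 — capture a baseline depth image of the empty imaging region, capture a second depth image with the object present, subtract the two to obtain net depth imaging data, and take the first dimension (height) from that net data — can be sketched as follows. This is a minimal illustration under stated assumptions, not the patented implementation: the overhead-sensor geometry, the NumPy arrays, and the `noise_floor` threshold are all hypothetical.

```python
import numpy as np

def net_depth(baseline: np.ndarray, with_object: np.ndarray,
              noise_floor: float = 0.005) -> np.ndarray:
    """Subtract the depth image captured with the object present from the
    empty-region baseline; differences below an assumed sensor noise floor
    (metres) are treated as background and zeroed."""
    net = baseline - with_object  # object surfaces are closer to the sensor
    net[net < noise_floor] = 0.0
    return net

def estimate_height(net: np.ndarray) -> float:
    """First dimension: the tallest point of the object above the surface."""
    return float(net.max())

# Hypothetical 4x4 depth frames, in metres from an overhead sensor.
baseline = np.full((4, 4), 1.00)       # empty imaging region, 1.0 m away
with_box = baseline.copy()
with_box[1:3, 1:3] = 0.75              # a 0.25 m tall box enters the region

net = net_depth(baseline, with_box)
print(estimate_height(net))            # → 0.25
```

The subtraction order (baseline minus object image) reflects that an object raises the imaged surface toward the sensor, so net values are positive over the object and near zero elsewhere.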
Abstract
By evaluating readily available facts regarding an item, the item may be identified, or one or more of its characteristics may be determined, and a destination for the item may be selected. An extraction module including a depth sensor may capture depth imaging data regarding the item, which may then be processed to estimate one or more dimensions of the item, and an appropriate container or storage area for the item may be selected. The extraction module may further include a scale for determining a mass of the item, or digital cameras for capturing one or more images of the item. The images may be analyzed to interpret any markings, labels or identifiers disposed on the item, and a destination for the item may be selected based on the mass, the analyzed images or the depth imaging data.
21 Claims
1. A visual fact extraction system comprising: (set forth above as the First Claim) - View Dependent Claims (2, 3, 4, 20)
5. A method for extracting facts comprising:
capturing a first set of depth imaging data of a defined region using a first depth sensor;
capturing a second set of depth imaging data of the defined region using the first depth sensor, wherein the second set of depth imaging data is captured with an object present within the defined region;
determining a net depth profile of the object based at least in part on the first set of depth imaging data and the second set of depth imaging data using at least one computer processor, wherein the net depth profile comprises a plurality of pixels corresponding to the defined region;
estimating a first dimension of the object based at least in part on the net depth profile;
determining a two-dimensional size of at least one of the plurality of pixels based at least in part on the first dimension;
identifying a portion of the plurality of pixels corresponding to the object;
estimating a second dimension of the object based at least in part on the two-dimensional size and the portion of the pixels;
estimating a third dimension of the object based at least in part on the two-dimensional size and the portion of the pixels;
storing an association of the net depth profile and the object in at least one data store; and
selecting a container for the object based at least in part on the first dimension, the second dimension and the third dimension.
- View Dependent Claims (6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 21)
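The remaining dimensioning steps of the method — determining a two-dimensional size for a pixel based on the first dimension, identifying the pixels corresponding to the object, and estimating the second and third dimensions from that pixel footprint — might look like the sketch below. The pinhole field-of-view model, the sensor parameters, and the example numbers are assumptions for illustration, not taken from the specification.

```python
import numpy as np

def pixel_size_at(distance_m: float, fov_rad: float, n_pixels: int) -> float:
    """Side length (m) covered by one pixel at a given distance from the
    sensor, under a simple pinhole model (hypothetical sensor geometry)."""
    return 2.0 * distance_m * np.tan(fov_rad / 2.0) / n_pixels

def footprint_dims(net: np.ndarray, pixel_side_m: float):
    """Second and third dimensions: row/column extent of the object's
    nonzero pixels in the net depth profile, scaled by the pixel size."""
    rows, cols = np.nonzero(net)          # pixels corresponding to the object
    length = (rows.max() - rows.min() + 1) * pixel_side_m
    width = (cols.max() - cols.min() + 1) * pixel_side_m
    return length, width

# Pixel size evaluated at the object's top surface: if the sensor sits
# 1.0 m above the region and the object is 0.25 m tall (the first
# dimension), the top is 0.75 m away; assume a 60-degree FOV, 480 pixels.
side = pixel_size_at(0.75, np.deg2rad(60.0), 480)
print(round(side, 4))                     # → 0.0018

# Hypothetical net depth profile: a 3x3-pixel object, with an assumed
# pixel size of 0.1 m for readability.
net = np.zeros((6, 6))
net[1:4, 2:5] = 0.25
length, width = footprint_dims(net, pixel_side_m=0.1)
print(round(length, 2), round(width, 2))  # → 0.3 0.3
```

The point of `pixel_size_at` is the dependency the claim recites: the ground area a pixel covers shrinks as the imaged surface gets closer to the sensor, so the first dimension (height) must be known before the second and third can be scaled from pixel counts.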
16. A system comprising:
an electronic scale;
a depth sensor;
a plurality of imaging devices; and
a computing device in communication with the electronic scale, the depth sensor and at least one of the plurality of imaging devices, wherein the computing device is configured to at least:
capture at least one image of a visible surface of an object on a surface of the electronic scale using one or more of the imaging devices, wherein the visible surface comprises at least one identifier disposed thereon;
determine a mass of the object using the electronic scale;
capture a depth image of the object using the depth sensor, wherein the depth image comprises a plurality of pixels representing distances between the depth sensor and the electronic scale with the object thereon;
analyze the at least one image to interpret the at least one identifier;
estimate a maximum height of the object based at least in part on the depth image;
determine a two-dimensional area of at least one of the plurality of pixels based at least in part on the maximum height of the object;
identify a portion of the depth image corresponding to the object;
estimate a maximum length and a maximum width of the object based at least in part on the two-dimensional area and the portion of the depth image corresponding to the object; and
select a container for the object based at least in part on the mass of the object, the at least one identifier, the maximum height, the maximum length and the maximum width.
- View Dependent Claims (17, 18, 19)
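The final step of claim 16 selects a container from the mass, the interpreted identifier, and the three estimated dimensions together. One way such a selection could be sketched is below; the container catalogue, its limits, and the "FRAGILE" rule are purely hypothetical, chosen only to show all four inputs participating in the decision.

```python
# Hypothetical container catalogue: (name, length, width, height, max mass),
# in metres and kilograms, ordered smallest to largest.
CONTAINERS = [
    ("small box",  0.30, 0.20, 0.15, 2.0),
    ("medium box", 0.45, 0.35, 0.30, 8.0),
    ("large box",  0.60, 0.45, 0.40, 20.0),
]

def select_container(length, width, height, mass, identifier=None):
    """Pick the smallest catalogued container whose limits fit the estimated
    dimensions and mass; an interpreted identifier (e.g. a 'FRAGILE' marking)
    can veto an otherwise-fitting choice."""
    dims = sorted((length, width, height), reverse=True)
    for name, cl, cw, ch, max_mass in CONTAINERS:
        limits = sorted((cl, cw, ch), reverse=True)
        fits = all(d <= c for d, c in zip(dims, limits))
        if fits and mass <= max_mass:
            if identifier == "FRAGILE" and name == "small box":
                continue  # hypothetical rule: padded packing needs extra room
            return name
    return None  # no catalogued container fits

print(select_container(0.40, 0.30, 0.25, 5.0))  # → medium box
```

Sorting both the object's dimensions and each container's limits in descending order before comparing lets an object fit regardless of its orientation on the scale, which matters because the claim's length/width/height labels depend on how the object happens to be presented.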
Specification