System for volume dimensioning via holographic sensor fusion
First Claim
1. An apparatus for volume dimensioning via sensor fusion, comprising:
a housing capable of being carried by an operator;
at least one two-dimensional (2D) image sensor disposed within the housing, the 2D image sensor configured to capture at least one image stream corresponding to a field of view (FOV), the FOV including at least one target object;
at least one three-dimensional (3D) imager disposed within the housing, the 3D imager configured to generate 3D image data associated with the FOV, the 3D image data including at least one plurality of points associated with the target object, each point corresponding to a coordinate set and a distance from the apparatus;
at least one processor disposed within the housing and operatively coupled to the 2D image sensor and the 3D imager, the processor configured to:
a) distinguish the target object within the FOV by analyzing at least one of the captured image stream and the 3D image data;
b) generate at least one holographic model corresponding to the target object by correlating the 3D image data and the captured image stream, the holographic model including at least one of a surface of the target object, a vertex of the target object, and an edge of the target object;
c) determine at least one dimension of the target object by measuring the holographic model;
d) detect at least one object identifier corresponding to the target object by analyzing the holographic model;
and e) acquire object data corresponding to the target object by decoding the object identifier;
a touch-sensitive display surface disposed within the housing and coupled to the processor, the display surface configured to:
a) display the captured image stream;
b) superimpose the holographic model over the captured image stream;
c) receive control input from the operator;
and d) adjust the holographic model based on the received control input;
and at least one wireless transceiver disposed within the housing and configured to establish a wireless link to at least one remote source.
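The claimed 3D image data associates each point in the target object's point cloud with both a coordinate set and a distance from the apparatus. A minimal sketch of that data structure, assuming the device sits at the coordinate origin so the distance is simply the Euclidean norm of the coordinates; the names `Point3D` and `distance_from_device` are illustrative, not from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class Point3D:
    """One point of the 3D imager's point cloud: a coordinate set
    plus a derivable distance from the apparatus (at the origin)."""
    x: float
    y: float
    z: float

    def distance_from_device(self) -> float:
        # Euclidean distance from the device to this point.
        return math.sqrt(self.x ** 2 + self.y ** 2 + self.z ** 2)

# A two-point cloud and its per-point distances:
cloud = [Point3D(1.0, 2.0, 2.0), Point3D(0.0, 3.0, 4.0)]
distances = [p.distance_from_device() for p in cloud]
# → [3.0, 5.0]
```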
Abstract
A system for volume dimensioning via two-dimensional (2D)/three-dimensional (3D) sensor fusion, based in a tablet, phablet, or like mobile device, is disclosed. The mobile device includes a 2D imager for capturing an image stream of its field of view (FOV), the FOV including target objects. The mobile device includes a 3D imager for collecting 3D imaging data of the FOV including point clouds of each target object within the FOV. Processors of the mobile device positively identify a particular target object by correlating the 2D and 3D image streams and generating a holographic model of the target object overlaid on the video stream, with adjustable surface, edge, and vertex guides. The processors determine the precise dimensions of the target object by measuring the holographic model, and detect and decode object identifiers (2D or 3D) on the surface of the target object to acquire and supplement object data.
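The dimensioning step described above (measuring the holographic model built from the fused 2D/3D data) can be illustrated with a simplified stand-in: given a target object's segmented point cloud, take the extents of its axis-aligned bounding box as length, width, and height. A production system would fit an oriented model with surface, edge, and vertex guides; this sketch only shows the measurement idea, and all names here are assumptions:

```python
def bounding_box_dimensions(points):
    """Return (length, width, height) of the axis-aligned bounding
    box enclosing a list of (x, y, z) points, in the input units."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# A 0.3 m x 0.2 m x 0.1 m carton sampled at its eight corners:
corners = [(x, y, z) for x in (0.0, 0.3) for y in (0.0, 0.2) for z in (0.0, 0.1)]
length, width, height = bounding_box_dimensions(corners)
# → (0.3, 0.2, 0.1)
```

An axis-aligned box under-reports nothing only when the object happens to sit square to the sensor axes; the adjustable guides recited in the claims let the operator correct the model when it does not.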
20 Claims
1. An apparatus for volume dimensioning via sensor fusion, comprising:
a housing capable of being carried by an operator;
at least one two-dimensional (2D) image sensor disposed within the housing, the 2D image sensor configured to capture at least one image stream corresponding to a field of view (FOV), the FOV including at least one target object;
at least one three-dimensional (3D) imager disposed within the housing, the 3D imager configured to generate 3D image data associated with the FOV, the 3D image data including at least one plurality of points associated with the target object, each point corresponding to a coordinate set and a distance from the apparatus;
at least one processor disposed within the housing and operatively coupled to the 2D image sensor and the 3D imager, the processor configured to:
a) distinguish the target object within the FOV by analyzing at least one of the captured image stream and the 3D image data;
b) generate at least one holographic model corresponding to the target object by correlating the 3D image data and the captured image stream, the holographic model including at least one of a surface of the target object, a vertex of the target object, and an edge of the target object;
c) determine at least one dimension of the target object by measuring the holographic model;
d) detect at least one object identifier corresponding to the target object by analyzing the holographic model;
and e) acquire object data corresponding to the target object by decoding the object identifier;
a touch-sensitive display surface disposed within the housing and coupled to the processor, the display surface configured to:
a) display the captured image stream;
b) superimpose the holographic model over the captured image stream;
c) receive control input from the operator;
and d) adjust the holographic model based on the received control input;
and at least one wireless transceiver disposed within the housing and configured to establish a wireless link to at least one remote source.
- View Dependent Claims (2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12)
13. A system for remote volume dimensioning via sensor fusion, comprising:
a mobile computing device capable of being carried by an operator, the mobile computing device comprising:
at least one two-dimensional (2D) image sensor configured to capture at least one image stream corresponding to a field of view (FOV), the FOV including at least one target object;
at least one three-dimensional (3D) imager configured to generate 3D image data associated with the FOV, the 3D image data including at least one plurality of points associated with the target object, each corresponding to a coordinate set and a distance from the apparatus;
at least one processor disposed within the housing and operatively coupled to the 2D image sensor and the 3D imager, the processor configured to:
a) distinguish the target object within the FOV by analyzing at least one of the captured image stream and the 3D image data;
b) generate at least one holographic model corresponding to the target object by correlating the 3D image data and the captured image stream, the holographic model including at least one of a surface of the target object, a vertex of the target object, and an edge of the target object;
c) determine at least one dimension of the target object by measuring the holographic model;
d) detect at least one object identifier corresponding to the target object by analyzing the holographic model;
and e) acquire object data corresponding to the target object by decoding the object identifier;
and at least one wireless transceiver disposed within the housing and configured to establish a wireless link;
and at least one augmented reality (AR) viewing device communicatively coupled to the mobile computing device via wireless link and wearable by a viewer, the AR viewing device configured to:
a) display the captured image stream to the viewer via a display surface proximate to one or more eyes of the viewer;
b) superimpose the holographic model over the captured image stream;
c) detect control input provided by the viewer;
and d) adjust the holographic model based on the detected control input.
- View Dependent Claims (14, 15, 16, 17, 18, 19)
20. A method for volume dimensioning via sensor fusion, comprising:
capturing, via a two-dimensional (2D) camera attached to a mobile device, a 2D image stream corresponding to a field of view (FOV) and including at least one target object within the FOV;
capturing, via a three-dimensional (3D) imager attached to the mobile device, 3D image data corresponding to the FOV, the 3D image data including a plurality of points corresponding to the target object, each point comprising a coordinate set and a distance from the mobile device;
distinguishing, via at least one processor of the mobile device, the target object from the FOV by analyzing at least one of the captured image stream and the 3D image data;
generating, via at least one processor of the mobile device, a holographic model corresponding to the target object by correlating the 2D image stream and the plurality of points, the holographic model including at least one of a surface of the target object, an edge of the target object, and a vertex of the target object;
determining, via the processor, at least one dimension of the target object by measuring the holographic model;
detecting, via the processor, at least one object identifier corresponding to the target object by analyzing the holographic model;
and acquiring object data corresponding to the target object by decoding the object identifier.
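The steps of the method claim can be sketched as a pipeline of stub functions. Every function body below is a placeholder assumption of this sketch, not the patented implementation; the claim recites the steps, not how each is carried out:

```python
def distinguish_target(image_stream, points_3d):
    # Placeholder segmentation: keep only points within 2 m of the device,
    # treating the device as the coordinate origin.
    return [p for p in points_3d
            if (p[0] ** 2 + p[1] ** 2 + p[2] ** 2) ** 0.5 < 2.0]

def generate_holographic_model(image_stream, target_points):
    # Placeholder model: the axis-aligned bounding box of the target points.
    xs, ys, zs = zip(*target_points)
    return {"min": (min(xs), min(ys), min(zs)),
            "max": (max(xs), max(ys), max(zs))}

def measure_dimensions(model):
    # Dimensions as the extents of the model along each axis.
    return tuple(hi - lo for lo, hi in zip(model["min"], model["max"]))

def decode_object_identifier(model):
    # Placeholder: a real system would locate and decode a 2D/3D identifier
    # (e.g. a barcode) on a surface of the holographic model.
    return {"sku": "UNKNOWN"}

stream = None  # 2D image stream omitted in this sketch
cloud = [(0.0, 0.0, 1.0), (0.5, 0.2, 1.0), (0.5, 0.2, 1.4), (3.0, 3.0, 3.0)]
target = distinguish_target(stream, cloud)          # drops the far point
model = generate_holographic_model(stream, target)
dims = measure_dimensions(model)                    # ≈ (0.5, 0.2, 0.4)
data = decode_object_identifier(model)
```

The separation into four functions mirrors the claim's step structure; in the claimed system the processor additionally supplements the measured dimensions with the decoded object data.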
Specification