METHOD OF AND APPARATUS FOR PROCESSING CAPTURED DIGITAL IMAGES OF OBJECTS WITHIN A SEMI-AUTOMATIC HAND-SUPPORTABLE IMAGING-BASED BAR CODE SYMBOL READER SO AS TO READ 1D AND/OR 2D BAR CODE SYMBOLS GRAPHICALLY REPRESENTED THEREIN
Abstract
Method of and apparatus for processing captured images within a semi-automatic hand-supportable imaging-based bar code symbol reader in order to read 1D and/or 2D bar code symbols graphically represented therein. The semi-automatic hand-supportable digital imaging-based bar code symbol reader comprises: an automatic object presence detection subsystem; a multi-mode area-type image formation and detection subsystem having narrow-area and wide-area image capture modes of operation; a multi-mode LED-based illumination subsystem having narrow-area and wide-area illumination modes of operation; an image capturing and buffering subsystem; a multi-mode image-processing bar code symbol reading subsystem; an input/output subsystem; a manually-actuatable trigger switch; and a system control subsystem integrated with each of the above-described subsystems. The bar code symbol reader embodies advanced image processing methods for reading 1D and/or 2D bar code symbols graphically represented in captured digital images.
28 Claims
1. A method of processing captured images of objects within a hand-supportable semi-automatic imaging-based bar code symbol reader so as to decode bar code symbols graphically represented therein, said method comprising the steps of:
(a) providing a hand-supportable semi-automatic imaging-based bar code symbol reader including (1) a manually-actuatable trigger switch, (2) a multi-mode image formation and detection subsystem having an area-type image sensing array with a field of view (FOV) and a narrow-area image capture mode in which a few central rows of pixels on said area-type image sensing array are enabled, and a wide-area image capture mode in which substantially all rows of said area-type image sensing array are enabled, (3) an automatic object detection subsystem for automatically detecting an object within said FOV, (4) an LED-based multi-mode illumination subsystem for selectively generating a field of narrow-area narrow-band illumination within said FOV and also a field of wide-area narrow-band illumination within said FOV, (5) an image capture and buffering subsystem, and (6) an image-processing based bar code symbol reading subsystem;
(b) automatically detecting the presence of an object within said FOV using said automatic object detection subsystem;
(c) in response to object detection within step (b), automatically illuminating said object within said field of narrow-area narrow-band illumination using said LED-based multi-mode illumination subsystem;
(d) forming and detecting a narrow-area digital image of the object illuminated during step (c) using said multi-mode image formation and detection subsystem operated in said narrow-area image capture mode;
(e) capturing and buffering said narrow-area digital image formed and detected in step (d) using said image capture and buffering subsystem;
(f) directly processing said narrow-area digital image captured and buffered during step (e) using said image-processing based bar code symbol reading subsystem so as to attempt to automatically read at least one 1D bar code symbol represented therein, wherein said image processing operations comprise automatically processing said captured narrow-area digital image without performing feature extraction or marking operations;
(g) if at least one 1D bar code symbol is not read during step (f), and said manually-actuatable trigger switch is manually actuated, then automatically illuminating said object to be imaged within said field of wide-area narrow-band illumination using said LED-based multi-mode illumination subsystem;
(h) forming and detecting a wide-area digital image of the object illuminated during step (g) using said multi-mode image formation and detection subsystem operated in said wide-area image capture mode;
(i) capturing and buffering said wide-area digital image formed and detected in step (h) using said image capture and buffering subsystem;
(j) automatically processing said wide-area digital image captured and buffered during step (i), using said image-processing based bar code symbol reading subsystem, starting from the center or middle region of said wide-area digital image of the object at which the user would have aimed said hand-supportable semi-automatic imaging-based bar code symbol reader, so as to find one or more bar code symbols represented therein, by searching in a helical manner through blocks of extracted image feature data and marking said blocks of extracted image feature data and processing the corresponding digital image data until a 1D or 2D bar code symbol is recognized/read within said captured 2D wide-area digital image;
(k) if at least one 1D or 2D bar code symbol is not read during step (j), and said manually-actuatable trigger switch is still being manually actuated, then once again automatically illuminating said object to be imaged in said field of wide-area narrow-band illumination, and repeating steps (g), (h), (i) and (j) until either at least one 1D or 2D bar code symbol is read or said manually-actuatable trigger switch is no longer being manually actuated.
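Step (j)'s center-outward "helical" search through blocks of extracted image feature data can be illustrated with a small sketch. The square-spiral traversal below is one way such a search could be realized; the block-grid dimensions and the generator interface are illustrative assumptions, not part of the claim.

```python
# Sketch of the center-outward "helical" block search in step (j): blocks of
# image feature data are visited in a square spiral starting at the block the
# user aimed at (the image center). Grid shape and interface are assumptions.

def spiral_block_order(cols, rows):
    """Yield (col, row) block coordinates spiraling outward from the center."""
    cx, cy = cols // 2, rows // 2
    x = y = 0
    dx, dy = 0, -1
    for _ in range((2 * max(cols, rows)) ** 2):
        bx, by = cx + x, cy + y
        if 0 <= bx < cols and 0 <= by < rows:   # skip positions off the grid
            yield bx, by
        # classic square-spiral turning rule
        if x == y or (x < 0 and x == -y) or (x > 0 and x == 1 - y):
            dx, dy = -dy, dx
        x, y = x + dx, y + dy
```

A decoder following this order would examine and mark each visited block of feature data, stopping as soon as a 1D or 2D symbol decodes, so a well-aimed symbol near the image center is found with minimal work.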
15. Apparatus for processing captured images of objects within a hand-supportable semi-automatic imaging-based bar code symbol reader so as to decode bar code symbols graphically represented therein, said apparatus comprising:
a hand-supportable semi-automatic imaging-based bar code symbol reader including (1) a manually-actuatable trigger switch, (2) a multi-mode image formation and detection subsystem having an area-type image sensing array with a field of view (FOV) and a narrow-area image capture mode in which a few central rows of pixels on said area-type image sensing array are enabled, and a wide-area image capture mode in which substantially all rows of said area-type image sensing array are enabled, (3) an automatic object detection subsystem for automatically detecting an object within said FOV, (4) an LED-based multi-mode illumination subsystem for selectively generating a field of narrow-area narrow-band illumination within said FOV and also a field of wide-area narrow-band illumination within said FOV, (5) an image capture and buffering subsystem, and (6) an image-processing based bar code symbol reading subsystem;
wherein (a) said automatic object detection subsystem automatically detects the presence of an object within said FOV;
(b) in response to object detection within step (a), said LED-based multi-mode illumination subsystem automatically illuminates said object within said field of narrow-area narrow-band illumination;
(c) said multi-mode image formation and detection subsystem, operated in said narrow-area image capture mode, forms and detects a narrow-area digital image of the illuminated object;
(d) said image capture and buffering subsystem captures and buffers said formed and detected narrow-area digital image of said object;
(e) said image-processing based bar code symbol reading subsystem directly processes said captured and buffered narrow-area digital image, so as to attempt to automatically read at least one 1D bar code symbol represented therein, wherein said image processing operations comprise automatically processing said captured narrow-area digital image without performing feature extraction or marking operations;
(f) if at least one 1D bar code symbol is not read during step (e), and said manually-actuatable trigger switch is manually actuated, then said LED-based multi-mode illumination subsystem automatically illuminates said object to be imaged within said field of wide-area narrow-band illumination;
(h) said multi-mode image formation and detection subsystem, operating in said wide-area image capture mode, forms and detects a wide-area digital image of the illuminated object;
(i) said image capture and buffering subsystem captures and buffers said formed and detected wide-area digital image;
(j) said image-processing based bar code symbol reading subsystem automatically processes said captured and buffered wide-area digital image, starting from the center or middle spot of said wide-area digital image of the object at which the user would have aimed said hand-supportable semi-automatic imaging-based bar code symbol reader, so as to find one or more bar code symbols represented therein, by searching in a helical manner through blocks of extracted image feature data and marking said blocks of extracted image feature data and processing the corresponding digital image data until a 1D or 2D bar code symbol is recognized/read within said captured wide-area digital image;
(k) if at least one 1D or 2D bar code symbol is not read during step (j), and said manually-actuatable trigger switch is still being manually actuated, then once again said LED-based multi-mode illumination subsystem automatically illuminates said object in said field of wide-area narrow-band illumination, and steps (f), (h), (i) and (j) are repeated by the respective subsystems until either at least one 1D or 2D bar code symbol is read or said manually-actuatable trigger switch is no longer being manually actuated.
wherein (2) the second stage of processing involves marking ROIs by examining the feature vectors for regions of high-modulation, calculating bar code orientation and marking the four corners of a bar code symbol as a ROI, and wherein (3) the third stage of processing involves reading any bar code symbols represented within said ROI by traversing the bar code and updating the feature vectors, examining the zero-crossings of filtered digital images, creating bar and space patterns, and decoding the bar and space patterns using decoding algorithms.
19. The apparatus of claim 18, wherein said first stage of image processing comprises:
(1) generating a low-resolution image from said high-resolution wide-area digital image captured in step (i).
20. The apparatus of claim 19, wherein said second stage of image processing further comprises:
(2) partitioning the low-resolution image of the package label;
(3) calculating feature vectors using the same; and
(4) analyzing these feature vectors to detect the presence of parallel lines representative of bars within code structures.
21. The apparatus of claim 20, wherein, during the second stage of image processing, feature vectors within each block of low-resolution image data are calculated using one or more of the following metrics: gradient vectors, edge density measures, the number of parallel edge vectors, centroids of edgels, intensity variance, and the histogram of intensities captured from the low-resolution digital image.
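Two of the per-block metrics listed above, intensity variance and a simple edge-density measure, can be sketched for a block-partitioned low-resolution grayscale image as follows. The block size, the 0-255 grayscale range, and the edge-step threshold of 64 are assumed constants, not values from the claims.

```python
# Illustrative per-block feature vectors (intensity variance and horizontal
# edge density) over a low-resolution grayscale image stored as a list of
# rows. Block size and the edge threshold of 64 are assumptions.

def block_features(image, block=4):
    h, w = len(image), len(image[0])
    features = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            pixels = [image[y][x] for y in range(by, by + block)
                                  for x in range(bx, bx + block)]
            mean = sum(pixels) / len(pixels)
            variance = sum((p - mean) ** 2 for p in pixels) / len(pixels)
            # edge density: fraction of horizontally adjacent pixel pairs
            # whose intensity step exceeds the assumed threshold
            steps = [abs(image[y][x + 1] - image[y][x])
                     for y in range(by, by + block)
                     for x in range(bx, bx + block - 1)]
            edge_density = sum(s > 64 for s in steps) / len(steps)
            features[(bx, by)] = (variance, edge_density)
    return features
```

Blocks containing bar code structure score high on both metrics (as claim 23 notes), while flat background blocks score near zero, which is what lets the second stage discard most of the image cheaply.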
22. The apparatus of claim 18, wherein updating the feature vectors during the third stage of processing comprises:
updating the histogram component of the feature vector while traversing the bar code symbol;
calculating the estimate of the black-to-white transition; and
calculating an estimate of narrow and wide elements of the bar code symbol.
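The running statistics recited in claim 22 can be sketched on a single scanline: a histogram component is updated while traversing, a black-to-white transition level is estimated from the histogram, and narrow/wide element widths are estimated from run lengths. The 16-bin quantization, the mode-midpoint transition rule, and the run-length heuristic are all illustrative assumptions.

```python
# Sketch of claim 22's running statistics on one scanline. The 16-bin
# histogram, mode-midpoint threshold, and run-length width estimates are
# illustrative assumptions, not the patent's specified method.

def traverse_and_estimate(scanline, bins=16):
    hist = [0] * bins
    for p in scanline:                       # histogram updated during traversal
        hist[min(p * bins // 256, bins - 1)] += 1
    dark = max(range(bins // 2), key=lambda b: hist[b])          # dark mode
    bright = max(range(bins // 2, bins), key=lambda b: hist[b])  # bright mode
    bin_w = 256 / bins
    # black-to-white transition estimate: midpoint of the two mode centers
    threshold = ((dark + 0.5) + (bright + 0.5)) * bin_w / 2
    return hist, threshold

def element_width_estimates(scanline, threshold):
    """Estimate narrow and wide element widths from thresholded run lengths."""
    bits = [p > threshold for p in scanline]
    runs, count = [], 1
    for a, b in zip(bits, bits[1:]):
        if a == b:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return min(runs), max(runs)
```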
23. The apparatus of claim 20, wherein analyzing feature vectors comprises looking for high edge density, large number of parallel edge vectors and large intensity variance.
24. The apparatus of claim 18, wherein searching for zero crossings during the third stage of processing comprises:
median filtering the high-resolution bar code image in a direction perpendicular to bar code orientation;
estimating black/white edge transitions using only second derivative zero crossings; and
determining the upper and lower bounds on the grey levels of the bars and spaces of the bar code symbol represented within the captured image, using said estimated black/white edge transitions.
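The scanline portion of claim 24 might be sketched as below: a 3-tap median filter, followed by locating black/white edge transitions only at zero crossings of a discrete second derivative, with linear interpolation supplying a subpixel position. The window size and the synthetic ramp edge in the usage test are assumptions.

```python
# Sketch of claim 24's edge location on a single scanline: median filtering,
# then black/white transitions found only at zero crossings of the second
# difference (a discrete second derivative). Window size is an assumption.

def median3(signal):
    """3-tap median filter; endpoints are passed through unchanged."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1:i + 2])[1]
    return out

def second_derivative_zero_crossings(signal):
    """Return subpixel positions where the second difference changes sign."""
    d2 = [signal[i - 1] - 2 * signal[i] + signal[i + 1]
          for i in range(1, len(signal) - 1)]
    crossings = []
    for i in range(len(d2) - 1):
        if d2[i] * d2[i + 1] < 0:              # sign change between samples
            t = d2[i] / (d2[i] - d2[i + 1])    # linear interpolation
            crossings.append(i + 1 + t)        # +1: d2[i] sits at index i+1
    return crossings
```

Second-derivative zero crossings mark the inflection point of an edge ramp, which is why the claim can use them alone to estimate transition positions before bounding the bar and space grey levels.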
25. The apparatus of claim 20, wherein said second stage of image processing further comprises:
(5) calculating bar code element orientation, wherein for each feature vector block, the bar code structure is traversed (i.e. sliced) at different angles, the slices are matched with each other based on “least mean square error”, and the correct orientation is determined to be that angle which matches, in the least mean square error sense, every slice of the bar code symbol represented within the captured digital image.
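The slice-matching idea of claim 25 can be sketched as follows: candidate angles are scored by the mean squared error between parallel slices through the block, and the angle whose slices agree best (least MSE) is selected. The angle set, slice geometry, and nearest-neighbor sampling below are illustrative assumptions.

```python
# Sketch of claim 25's orientation search: slices at candidate angles are
# compared pairwise by mean squared error; the winning angle is the one
# along which parallel slices repeat (i.e. the scan direction across the
# bars). Angle set, slice length/offsets, and sampling are assumptions.
import math

def sample_slice(img, cx, cy, angle, offset, length):
    """Nearest-neighbor samples along a line at `angle` through (cx, cy),
    shifted `offset` pixels perpendicular to the slice direction."""
    dx, dy = math.cos(angle), math.sin(angle)
    ox, oy = -dy * offset, dx * offset
    vals = []
    for t in range(-length, length + 1):
        x = int(round(cx + ox + t * dx)) % len(img[0])  # wrap keeps sketch safe
        y = int(round(cy + oy + t * dy)) % len(img)
        vals.append(img[y][x])
    return vals

def best_orientation(img, cx, cy, angles):
    """Return the candidate angle whose parallel slices agree best (least MSE)."""
    def mse(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b)) / len(a)
    def score(angle):
        base = sample_slice(img, cx, cy, angle, 0, 12)
        return sum(mse(base, sample_slice(img, cx, cy, angle, k, 12))
                   for k in (2, 4))
    return min(angles, key=score)
```

For an image of vertical bars, horizontal slices at different vertical offsets are identical (MSE 0), while vertical slices land on different bars, so the search recovers the scan direction across the symbol.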
26. The apparatus of claim 25, wherein said second stage of image processing further comprises:
(6) marking of the four corners of the detected bar code symbol, and wherein (i) such marking operations are performed on the full high-resolution digital image, (ii) the bar code is traversed in either direction starting from the center of the block, (iii) the extent of modulation is detected using the intensity variance, and (iv) the x,y coordinates (pixels) of the four corners of the bar code are detected, and these detected four corners define the ROI within the high-resolution digital image.
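The modulation-extent test in (iii) can be sketched in one dimension: starting from the block center, walk outward in both directions while a sliding-window intensity variance stays above a threshold, i.e. while the signal is still modulated. The window size and threshold are assumed constants; a 2D implementation would repeat this along both symbol axes to obtain the four corner coordinates.

```python
# 1-D sketch of claim 26's extent marking: traverse outward from the center
# while a sliding-window intensity variance (the modulation measure) stays
# above a threshold. Window size and threshold are assumed constants.

def modulated_extent(row, center, win=4, thresh=100.0):
    def var(i):
        w = row[max(0, i - win):i + win + 1]
        m = sum(w) / len(w)
        return sum((p - m) ** 2 for p in w) / len(w)
    left = center
    while left > 0 and var(left - 1) > thresh:
        left -= 1
    right = center
    while right < len(row) - 1 and var(right + 1) > thresh:
        right += 1
    return left, right
```

Note that a windowed measure overshoots the true boundary by up to the window radius; a production implementation would refine the corner positions after this coarse walk.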
27. The apparatus of claim 18, wherein creating bar and space patterns during the third stage of processing comprises:
modeling said black/white edge transitions as a ramp function;
assuming each said edge transition to be 1 pixel wide;
determining each edge transition location at the subpixel level; and
gathering the bar and space counts using black/white edge transition data.
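Claim 27's measurement steps can be sketched on a scanline: each black/white transition is treated as a one-pixel-wide ramp whose crossing of a mid-level threshold is located by linear interpolation (a subpixel position), and the bar/space counts are the gaps between successive edge positions. The threshold constant is an assumption.

```python
# Sketch of claim 27's bar/space measurement: one-pixel ramp edges located
# at the subpixel level, then element widths ("counts") taken as distances
# between successive edges. The threshold value is an assumed constant.

def subpixel_edges(scanline, threshold=128.0):
    edges = []
    for i in range(len(scanline) - 1):
        a, b = scanline[i], scanline[i + 1]
        if (a - threshold) * (b - threshold) < 0:        # ramp crosses level
            edges.append(i + (threshold - a) / (b - a))  # subpixel position
    return edges

def bar_space_counts(edges):
    """Widths of alternating bars and spaces between successive edges."""
    return [e2 - e1 for e1, e2 in zip(edges, edges[1:])]
```

The resulting count sequence is exactly what claim 28's decoder consumes after framing it with border (quiet-zone) information.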
28. The apparatus of claim 27, wherein said third stage of processing further comprises:
framing the bar and space count data with borders; and
decoding the bar and space data using one or more laser scanning bar code decoding algorithms.
Specification