Systems and methods for deflectometry are disclosed. The deflectometry display is divided into subregions that collectively cover the entire display. Deflectometry data sets are acquired for each subregion, and all of the data is processed to compute fused deflectometry images of enhanced quality. By using display subregions, smaller portions of the object of interest are illuminated, so the amount of diffuse reflection is correspondingly reduced. By focusing on smaller regions of the deflectometry patterns, the ratio of specular-to-diffuse reflection intensity can be increased. This allows display brightness and camera acquisition time to be increased without saturation and improves the signal-to-noise ratio of the specular signal, which in turn improves the quality of the subsequently computed deflectometry images.
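The subregion acquisition-and-fusion loop described above can be sketched as follows. This is a minimal illustration: the per-pixel sum used as the fusion step and the `acquire` callback are assumptions for demonstration only, not the fusion actually used to compute the enhanced-quality images.

```python
def fuse_subregion_data(subregions, acquire):
    """Acquire one deflectometry data set per display subregion and
    fuse them into a single image. The fusion here is a simple
    per-pixel sum (an illustrative assumption); `acquire` is a
    hypothetical callback returning a 2D intensity grid."""
    images = [acquire(region) for region in subregions]
    rows, cols = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) for c in range(cols)]
            for r in range(rows)]
```

Because each acquisition illuminates only one subregion, each pass can use a longer exposure without saturating, and the fused result accumulates the stronger specular signal.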
Systems and methods are provided for generating machine vision tunnel configurations. The systems and methods described herein may automatically generate a configuration summary of tunnel configurations for a prospective machine vision tunnel based on received parameters. The configuration summary may be modified via operator interaction with the configuration summary. The systems and methods described herein may also automatically generate and transmit a bill of materials report, a tunnel commissioning report, or a graphical representation of an approved tunnel configuration, including generating some or all of these data sets dynamically in response to operator inputs.
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a candidate three-dimensional (3D) orientation of an object represented by a 3D point cloud. The method includes receiving data indicative of a 3D point cloud comprising a plurality of 3D points, determining a first histogram for the plurality of 3D points based on geometric features determined based on the plurality of 3D points, accessing data indicative of a second histogram of geometric features of a 3D representation of a reference object, computing, for each of a plurality of different rotations between the first histogram and the second histogram in 3D space, a scoring metric for the associated rotation, and determining the candidate 3D orientation based on the scoring metrics of the plurality of different rotations.
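The rotation-scoring step can be sketched as follows. This assumes the object's feature histogram has already been computed under each candidate rotation, and it uses a dot-product score as an illustrative stand-in; the actual scoring metric is not specified in the abstract.

```python
def best_candidate_rotation(reference_hist, object_hists_by_rotation):
    """Score each candidate rotation by comparing the object's
    geometric-feature histogram (computed under that rotation) against
    the reference object's histogram, and return the best-scoring
    rotation. The dot-product score is an illustrative assumption."""
    def score(hist):
        return sum(a * b for a, b in zip(reference_hist, hist))
    return max(object_hists_by_rotation,
               key=lambda rot: score(object_hists_by_rotation[rot]))
```

The rotation whose histogram best overlaps the reference histogram becomes the candidate 3D orientation.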
Methods and systems are provided for commissioning machine vision systems. The methods and systems described herein may automatically configure, or otherwise assist users in configuring, a machine vision system based on a specification package.
A method for three-dimensional (3D) field calibration of a machine vision system includes receiving a set of calibration parameters and an identification of one or more machine vision system imaging devices, determining a camera acquisition parameter for calibration based on the set of calibration parameters, validating the set of calibration parameters and the camera acquisition parameter, and controlling the imaging device(s) to collect image data of a calibration target. The image data may be collected using the determined camera acquisition parameter. The method further includes generating a set of calibration data for the imaging device(s) using the collected image data for the imaging device(s). The set of calibration data can include a maximum error. The method further includes generating a report including the set of calibration data for the imaging device(s) and an indication of whether the maximum error for the imaging device(s) is within an acceptable tolerance and displaying the report on a display.
A method for dynamic testing of a machine vision system includes receiving a set of testing parameters and a selection of a tunnel system. The machine vision system can include the tunnel system and the tunnel system can include a conveyor and at least one imaging device. The method can further include validating the testing parameters and controlling the at least one imaging device to acquire a set of image data of a testing target positioned at a predetermined justification on the conveyor. The testing target can include a plurality of target symbols. The method can further include determining a test result by analyzing the set of image data to determine if the at least one imaging device reads a target symbol associated with the at least one imaging device and generating a report including the test result.
An optical assembly (120) for a machine vision system (100) having an image sensor (104) includes a lens assembly (108) and a motor system (118) coupled to the lens assembly (108). The lens assembly (108) can include a plurality of solid lens elements (126) and a liquid lens (128), where the liquid lens (128) includes an adjustable membrane (136). The motor system (118) can be configured to move the lens assembly (108) to adjust a distance between the lens assembly (108) and the image sensor (104) of the vision system (100).
This invention provides a system and method for inspecting transparent or translucent features on a substrate of an object. A vision system camera, having an image sensor that provides image data to a vision system processor, receives light from a field of view that includes the object through a light-polarizing filter assembly. An illumination source projects polarized light onto the substrate within the field of view. A vision system process locates and registers the substrate, and locates thereon, based upon registration, the transparent or translucent features. A vision system process then performs inspection on the features using predetermined thresholds. The substrate can be a shipping box on a conveyor, having flaps sealed at a seam by transparent tape. Alternatively, a plurality of illuminators or cameras can project and receive polarized light oriented in a plurality of polarization angles, which generates a plurality of images that are combined into a result image.
An opto-electronic system includes a laser operable to produce a laser beam; an optical element including two or more beam-shaping portions, each of the two or more beam-shaping portions having a different optical property; a beam deflector arranged to sweep the laser beam across the optical element to produce output light; and electronics communicatively coupled with the laser, the beam deflector, or both the laser and the beam deflector. The electronics are configured to cause selective impingement of the laser beam onto a proper subset of the two or more beam-shaping portions of the optical element to modify one or more optical parameters of the output light.
This invention overcomes disadvantages of the prior art by providing a vision system and method of use, and graphical user interface (GUI), which employs a camera assembly having an on-board processor of low to modest processing power. At least one vision system tool analyzes image data, and generates results therefrom, based upon a deep learning process. A training process provides training image data to a processor remote from the on-board processor to cause generation of the vision system tool therefrom, and provides a stored version of the vision system tool for runtime operation on the on-board processor. The GUI allows manipulation of thresholds applicable to the vision system tool and refinement of training of the vision system tool by the training process. A scoring process allows unlabeled images from a set of acquired and/or stored images to be selected automatically for labelling as training images using a computed confidence score.
The techniques described herein relate to computerized methods and apparatuses for detecting objects in an image. The techniques described herein further relate to computerized methods and apparatuses for detecting one or more objects using a pretrained machine learning model and one or more other machine learning models that can be trained in a field training process. The pre-trained machine learning model may be a deep machine learning model.
A method for an imaging module can include rotating an imaging assembly that includes an imaging device about a first pivot point of a bracket to a select first orientation, fastening the imaging assembly to the bracket at the first orientation, rotating a mirror assembly that includes a mirror about a second pivot point of the bracket to a select second orientation, and fastening the mirror assembly to the bracket at the second orientation. An adjustable, selectively oriented imaging assembly of a first imaging module can acquire images using an adjustable, selectively oriented mirror assembly of a second imaging module.
G02B 7/182 - Mountings, adjusting means, or light-tight connections, for optical elements for mirrors for mirrors
G02B 7/198 - Mountings, adjusting means, or light-tight connections, for optical elements for mirrors for mirrors with means for adjusting the mirror relative to its support
A multispectral light assembly for an illumination system includes a multispectral light source configured to generate a plurality of different wavelengths of light and a light pipe positioned in front of the multispectral light source and configured to provide color mixing for two or more of the plurality of different wavelengths. The multispectral light assembly also includes a diffusive surface on the light pipe and a projection lens positioned in front of the diffusive surface. A processor device may be in communication with the multispectral light assemblies and may be configured to control activation of the multispectral light source.
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
B60Q 1/02 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
F21K 9/23 - Retrofit light sources for lighting devices with a single fitting for each light source, e.g. for substitution of incandescent lamps with bayonet or threaded fittings
F21V 23/00 - Arrangement of electric circuit elements in or on lighting devices
G06K 7/12 - Methods or arrangements for sensing record carriers by corpuscular radiation using a selected wavelength, e.g. to sense red marks and ignore blue marks
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
14.
MACHINE VISION SYSTEM AND METHOD WITH STEERABLE MIRROR
A computer-implemented method for scanning a side of an object (22). The method can include determining a scanning pattern for an imaging device (e.g., based on a distance between the side of the object and the imaging device), and moving a controllable mirror (30) according to the scanning pattern to acquire a plurality of images of the side of the object. A region of interest can be identified based on the plurality of images.
A method for assigning a symbol to an object in an image includes receiving the image captured by an imaging device where the symbol may be located within the image. The method further includes receiving, in a first coordinate system, a three-dimensional (3D) location of one or more points that corresponds to pose information indicative of a 3D pose of the object in the image, mapping the 3D location of the one or more points of the object to a 2D location within the image, and assigning the symbol to the object based on a relationship between a 2D location of the symbol in the image and the 2D location of the one or more points of the object in the image.
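The 3D-to-2D mapping and assignment steps can be sketched as follows. A simple pinhole projection stands in for the mapping, and a nearest-projected-point rule stands in for the "relationship" between the symbol's 2D location and the object's points; both, along with the names and the `focal`/`max_dist` parameters, are illustrative assumptions.

```python
import math

def project_to_image(point, focal):
    """Map a 3D point in the camera frame to a 2D image location using
    a simple pinhole model (an illustrative stand-in for the mapping)."""
    x, y, z = point
    return (focal * x / z, focal * y / z)

def assign_symbol(symbol_xy, pose_points_by_object, focal, max_dist):
    """Assign a decoded symbol to the object whose projected pose
    points fall closest to the symbol's 2D image location, or return
    None if no object is within max_dist (hypothetical rule)."""
    best, best_dist = None, max_dist
    for name, points in pose_points_by_object.items():
        for p in points:
            u, v = project_to_image(p, focal)
            d = math.hypot(u - symbol_xy[0], v - symbol_xy[1])
            if d < best_dist:
                best, best_dist = name, d
    return best
```

Here the pose information supplies the 3D points per object, and the symbol is attributed to whichever object's projection lands nearest it in the image.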
In accordance with some embodiments of the disclosed subject matter, methods, systems, and media for generating images of multiple sides of an object are provided. In some embodiments, a method comprises receiving information indicative of a 3D pose of a first object in a first coordinate space at a first time; receiving a group of images captured using at least one image sensor, each image associated with a field of view within the first coordinate space; mapping at least a portion of a surface of the first object to a 2D area with respect to the image based on the 3D pose of the first object; associating, for images including the surface, a portion of that image with the surface of the first object based on the 2D area; and generating a composite image of the surface using images associated with the surface.
Some embodiments of the disclosure provide systems and methods for improving sorting and routing of objects, including in sorting systems. Characteristic dimensional data for one or more objects with common barcode information can be compared to dimensional data of another object with the common barcode information to evaluate a classification (e.g., a side-by-side exception) of the other object. In some cases, the evaluation can include identifying the classification as incorrect (e.g., as a false side-by-side exception).
The present invention relates to optical imaging devices and methods for reading optical codes. The imaging device comprises a sensor, a lens, a plurality of illumination devices, and a plurality of reflective surfaces. The sensor is configured to sense with a predetermined number of lines of pixels, where the predetermined lines of pixels are arranged in a predetermined position. The lens has an imaging path along an optical axis. The plurality of illumination devices are configured to transmit an illumination pattern along the optical axis, and the plurality of reflective surfaces are configured to fold the optical axis.
An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.
This invention provides a system and method that efficiently detects objects imaged using a 3D camera arrangement by referencing a cylindrical or spherical surface represented by a point cloud, and measures variant features of an extracted object, including volume, height, center of mass, bounding box, and other relevant metrics. The system and method, advantageously, operates directly on unorganized and unordered points, requiring neither a mesh/surface reconstruction nor a voxel grid representation of object surfaces in a point cloud. Based upon a cylinder/sphere reference model, an acquired 3D point cloud is flattened. Object (blob) detection is carried out in the flattened 3D space, and objects are converted back to the 3D space to compute the features, which can include regions that differ from the regular shape of the cylinder/sphere. Downstream utilization devices and/or processes, such as part-reject mechanisms and/or robot manipulators, can act on the identified feature data.
An on-axis aimer and distance measurement apparatus for a vision system can include a light source configured to generate a first light beam along a first axis. The first light beam can project an aimer pattern on an object and a receiver can be configured to receive reflected light from the first light beam to determine a distance between a lens of the vision system and the object. One or more parameters of the vision system can be controlled based on the determined distance.
An apparatus for controlling a depth of field for a reader in a vision system includes a dual aperture assembly having an inner region and an outer region. A first light source can be used to generate a light beam associated with the inner region and a second light source can be used to generate a light beam associated with the outer region. The depth of field of the reader can be controlled by selecting one of the first light source and second light source to illuminate an object to acquire an image of the object. The selection of the first light source or the second light source can be based on at least one parameter of the vision system.
This invention provides a system and method for enhanced depth of field (DOF) advantageously used in logistics applications, scanning for features and ID codes on objects. It effectively combines a vision system, a glass lens designed for on-axis and Scheimpflug configurations, a variable lens and a mechanical system to adapt the lens to the different configurations without detaching the optics. The optics can be steerable, which allows adjustment between variable angles so as to optimize the viewing angle, and thereby the DOF, for the object in a Scheimpflug configuration. One, or a plurality, of images can be acquired of the object at one, or differing, angle settings, with the entire region of interest clearly imaged. In another implementation, the optical path can include a steerable mirror and a folding mirror overlying the region of interest, which allows multiple images to be acquired at different locations on the object.
The techniques described herein relate to methods, apparatus, and computer readable media configured to identify a surface feature of a portion of a three-dimensional (3D) point cloud. Data indicative of a path along a 3D point cloud is received, wherein the 3D point cloud comprises a plurality of 3D data points. A plurality of lists of 3D data points are generated, wherein: each list of 3D data points extends across the 3D point cloud at a location that intersects the received path; and each list of 3D data points intersects the received path at different locations. A characteristic associated with a surface feature is identified in at least some of the plurality of lists of 3D data points. The identified characteristics are grouped based on one or more properties of the identified characteristics. The surface feature is identified based on the grouped characteristics.
The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
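The one-dimensional histogram described above can be sketched as follows. The abstract's phrasing ("adding ... distances that are within the entry's range") is interpreted here as accumulating a count per entry, which is an assumption; the reference is taken to be a single 3D point for simplicity.

```python
import math

def distance_histogram(points, reference, bin_width, n_bins):
    """Build a 1D histogram over a 3D point cloud: compute each point's
    distance to a reference point and increment the entry whose range
    of distances contains it (count accumulation is assumed here)."""
    hist = [0] * n_bins
    for p in points:
        d = math.dist(p, reference)  # Euclidean distance, Python 3.8+
        index = int(d / bin_width)
        if index < n_bins:  # distances beyond the last entry are dropped
            hist[index] += 1
    return hist
```

The two-dimensional variant follows the same pattern, with each bin indexed by ranges of the two orientation components instead of a single distance range.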
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions are connected to neighboring representative 2D positions to generate the 2D profile.
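The steps above can be sketched as follows, using the per-bin mean as the representative 2D position (an illustrative choice; the abstract does not specify how the representative is computed):

```python
def profile_2d(points, roi, n_bins):
    """Extract a 2D profile from a 3D point cloud: clip to the region
    of interest, drop the third (depth) axis, group the remaining 2D
    points into bins along the first axis, and take one representative
    2D position per bin (mean, as an assumed choice). roi is
    ((x0, x1), (y0, y1), (z0, z1)); connecting consecutive results
    in order yields the profile polyline."""
    (x0, x1), (y0, y1), (z0, z1) = roi
    bins = [[] for _ in range(n_bins)]
    width = (x1 - x0) / n_bins
    for x, y, z in points:
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            i = min(int((x - x0) / width), n_bins - 1)
            bins[i].append((x, y))
    return [(sum(x for x, _ in b) / len(b), sum(y for _, y in b) / len(b))
            for b in bins if b]
```

Empty bins are simply skipped, so the profile connects only bins that contained points.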
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud is received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and width that extends along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane are determined. An estimated volume of the object captured by the 3D point cloud is computed based on the determined number of 3D points in each bin and the height of each bin.
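The binned volume estimate can be sketched as follows, taking z = 0 as the reference plane and the maximum z per bin as that bin's height (both illustrative assumptions; the abstract also uses the per-bin point count, which is omitted from this minimal version):

```python
import math

def estimate_volume(points, cell=1.0):
    """Estimate the volume of an object above the z = 0 reference plane
    by binning its 3D points into a 2D grid of square cells along the
    plane. Each occupied cell contributes its footprint area times the
    cell's height (max z in the cell, assumed for illustration)."""
    heights = {}
    for x, y, z in points:
        key = (math.floor(x / cell), math.floor(y / cell))
        heights[key] = max(heights.get(key, 0.0), z)
    return sum(h * cell * cell for h in heights.values())
```

Smaller cells trade noise sensitivity for finer resolution of the object's footprint.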
This invention provides a system and method for using an area scan sensor of a vision system, in conjunction with an encoder or other knowledge of motion, to capture an accurate measurement of an object larger than a single field of view (FOV) of the sensor. It identifies features/edges of the object, which are tracked from image to image, thereby providing a lightweight way to process the overall extents of the object for dimensioning purposes. Logic automatically determines if the object is longer than the FOV, and thereby causes a sequence of image acquisition snapshots to occur while the moving/conveyed object remains within the FOV until the object is no longer present in the FOV. At that point, acquisition ceases and the individual images are combined as segments in an overall image. These images can be processed to derive overall dimensions of the object based on input application details.
This invention provides a system and method that performs 3D imaging of a complex object, where image data is likely lost. Available 3D image data, in combination with an absence/loss of image data, allows computation of x, y and z dimensions. Absence/loss of data is assumed to be just another type of image data, and represents the presence of something that has prevented accurate data from being generated in the subject image. Segments of data can be connected to areas of absent data to generate a maximum bounding box. The shadow that this object generates can be represented as negative or missing data, but is not representative of the physical object. The height from the positive data, the object shadow size based on that height, the location in the FOV, and the ray angles that generate the images, are estimated and the object shadow size is removed from the result.
Determining dimensions of an object can include determining a distance between the object and an imaging device, and an angle of an optical axis of the imaging device. One of more features of the object can be identified in an image of the object. The dimensions of the object can be determined based upon the distance, the angle, and the one or more identified features.
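The relation between image extent, distance, and angle can be illustrated with a pinhole similar-triangles model. The cosine tilt correction and the `focal_px` parameter are illustrative assumptions, not the claimed method.

```python
import math

def object_width(pixel_extent, distance, focal_px, tilt_rad=0.0):
    """Recover a real-world dimension from an identified feature's
    extent in the image (in pixels), the object-to-imaging-device
    distance, and the optical axis angle. Pinhole model with a cosine
    correction for a tilted optical axis (illustrative sketch)."""
    return pixel_extent * distance / focal_px / math.cos(tilt_rad)
```

For example, a feature spanning 100 pixels at 2 m range, with a 1000-pixel focal length and no tilt, corresponds to a 0.2 m width.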
This invention provides a system and method for finding patterns in images that incorporates neural net classifiers. A pattern finding tool is coupled with a classifier that can be run before or after the tool to produce labeled pattern results with sub-pixel accuracy. In the case of a pattern finding tool that can detect multiple templates, its performance is improved when a neural net classifier informs the pattern finding tool to work only on a subset of the originally trained templates. Similarly, in the case of a pattern finding tool that initially detects a pattern, a neural network classifier can then determine whether it has found the correct pattern. The neural network can also reconstruct/clean up an imaged shape, and/or eliminate pixels less relevant to the shape of interest, thereby reducing the search time as well as significantly increasing the chance of locking onto the correct shapes.
This invention provides a calibration target with a calibration pattern on at least one surface. The relationship of locations of calibration features on the pattern are determined for the calibration target and stored for use during a calibration procedure by a calibrating vision system. Knowledge of the calibration target's feature relationships allows the calibrating vision system to image the calibration target in a single pose and rediscover each of the calibration features in a predetermined coordinate space. The calibrating vision system can then transform the relationships between features from the stored data into its local coordinate space. The locations can be encoded in a barcode that is applied to the target, provided in a separate encoded element, or obtained from an electronic data source. The target can include encoded information within the pattern defining a location of adjacent calibration features with respect to the overall geometry of the target.
This invention provides a system and method for selecting the correct profile from a range of peaks generated by analyzing a surface with multiple exposure levels applied at discrete intervals. The cloud of peak information is resolved by comparison to a model profile into a best candidate to represent an accurate representation of the object profile. Illustratively, a displacement sensor projects a line of illumination on the surface and receives reflected light at a sensor assembly at a set exposure level. A processor varies the exposure level setting in a plurality of discrete increments, and stores an image of the reflected light for each of the increments. A determination process combines the stored images and aligns the combined images with respect to a model image. Points from the combined images are selected based upon closeness to the model image to provide a candidate profile of the surface.
An illumination apparatus for reducing speckle effect in light reflected off an illumination target (120) includes a laser (150); a linear diffuser (157) positioned in an optical path between an illumination target and the laser to diffuse collimated laser light (154) in a planar fan of diffused light (158, 159) that spreads in one dimension across at least a portion of the illumination target; and a beam deflector (153) to direct the collimated laser light incident on the beam deflector to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficient uncorrelated speckle patterns, at an image sensor (164), in light reflected from an intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
A method includes producing two or more measurements by an image sensor having a pixel array, the measurements including information contained in a set of sign-bits, the producing of each measurement including (i) forming an image signal on the pixel array; and (ii) comparing accumulated pixel currents output from pixels of the pixel array in accordance with the image signal and a set of pixel sampling patterns to produce the set of sign-bits of the measurement; buffering at least one of the measurements to form a buffered measurement; comparing information of the buffered measurement to information of the measurements to produce a differential measurement; and combining the differential measurement with information of the set of pixel sampling patterns to produce at least a portion of one or more digital images relating to one or more of the image signals formed on the pixel array.
H04N 5/361 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current
H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by partially reading an SSIS array
36.
APPARATUS FOR PROJECTING A TIME-VARIABLE OPTICAL PATTERN ONTO AN OBJECT TO BE MEASURED IN THREE DIMENSIONS
The invention relates to an apparatus for projecting a time-variable optical pattern onto an object to be measured in three dimensions, comprising a holder for an optical pattern, a light source having an illumination optical unit and an imaging optical unit, wherein the optical pattern is secured as a slide on a displacement mechanism which displaces the optical pattern relative to the illumination optical unit and relative to the imaging optical unit, wherein the displacement mechanism effectuates a displacement of the optical pattern in a slide plane that is oriented perpendicular to the optical axis of the imaging optical unit.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
37.
METHOD FOR THE THREE DIMENSIONAL MEASUREMENT OF MOVING OBJECTS DURING A KNOWN MOVEMENT
The invention relates to a method for the three dimensional measurement of a moving object during a known movement comprising the following method steps: Projecting a pattern sequence consisting of N patterns onto the moving object, recording a first image sequence consisting of N images using a first camera and recording a second image sequence synchronous to the first image sequence consisting of N images using a second camera, identifying corresponding image points in the first sequence and in the second sequence, wherein trajectories of potential object points are computed from the known movement data and object positions determined therefrom are projected onto each image plane of the first and second cameras, wherein the positions of corresponding image points are determined in advance as a first image point trajectory in the first camera and a second image point trajectory in the second camera, the image points along the previously determined image point trajectories are compared with one another and checked for correspondence and, in a concluding step, the moving object is measured three dimensionally using triangulation from the corresponding image points.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
38.
METHOD FOR CALIBRATING AN IMAGE-CAPTURING SENSOR CONSISTING OF AT LEAST ONE SENSOR CAMERA, USING A TIME-CODED PATTERNED TARGET
The invention relates to a method for calibrating an image-capturing sensor consisting of at least one sensor camera, using a time-coded patterned target, said time-coded patterned target being displayed on a flat-screen display. A sequence of patterns is displayed on the flat-screen display and is captured by the at least one sensor camera as a series of camera images. An association of the pixels of the time-coded patterned target with the respective pixels captured in the camera images is carried out for a fixed position of the flat-screen display in the surroundings, or for at least two different positions of the flat-screen display in the surroundings, the sensor being calibrated by means of corresponding pixels. A gamma-correction is carried out, in which, before the time-coded patterned target is displayed, a gamma curve of the flat-screen display is captured in a random position and/or in each position of the flat-screen display, together with the recording of the sequence of patterns by means of the sensor, and is corrected.
An image sensor for forming projective measurements includes a pixel-array wherein each pixel is coupled with conductors of a pixel output control bus and with a pair of conductors of a pixel output bus. In certain pixels, the pattern of coupling to the pixel output bus is reversed, thereby beneficially de-correlating the image noise induced by pixel output control signals from vectors of the projective basis.
A vision system capable of performing run-time 3D calibration includes a mount configured to hold an object, the mount including a 3D calibration structure; a camera; a motion stage coupled with the mount or the camera; and a computing device configured to perform operations including: acquiring images from the camera when the mount is in respective predetermined orientations relative to the camera, each of the acquired images including a representation of at least a portion of the object and at least a portion of the 3D calibration structure that are concurrently in a field of view of the camera; performing at least an adjustment of a 3D calibration for each of the acquired images based on information relating to the 3D calibration structure as imaged in the acquired images; and determining 3D positions, dimensions or both of one or more features of the object.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness
G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
Joints are described that are used for mating two components of an apparatus that have been precisely aligned with respect to each other, e.g., based on a six-degrees-of-freedom alignment procedure. For example, the precisely aligned components can be optical components that are part of an optical apparatus with highly sensitive mechanical tolerances.
A machine vision system forms a one-dimensional digital representation of a low-information-content scene, e.g., a scene that is sparsely illuminated by an illumination plane; the one-dimensional digital representation is a projection formed with respect to columns of a rectangular pixel array of the machine vision system.
This invention provides a mechanism for clearing debris and vapors from the region around the optical axis (OA) of a vision system (100) that employs a directed airflow (AF) in the region. The airflow (AF) is guided by an air knife (170) that surrounds a viewing gap placed in front of the camera optics (140). The air knife (170) delivers airflow (AF) in a manner that takes advantage of the Coanda effect to generate an airflow (AF) that prevents infiltration of debris and contaminants into the optical path. Illustratively, the air knife (170) defines a geometry that effectively multiplies the delivered airflow approximately fifty times (twenty-five times on each of two air-knife sides) that of the supplied compressed air. This provides an extremely strong air curtain along the scan direction that essentially blocks infiltration of environmental contamination to the optics (140) of the camera (110).
This invention provides a trigger for a vision system that can be set using a user interface that allows the straightforward variation of a plurality of exposed trigger parameters. Illustratively, the vision system includes a triggering mode in which the system keeps acquiring an image of a field of view with respect to objects in relative motion. The system runs user-configurable "trigger logic". When the trigger logic succeeds/passes, the current image or a newly acquired image is then transmitted to the main inspection logic for processing. The trigger logic can be readily configured by a user operating an interface, which can also be used to configure the main inspection process, to trigger the vision system by tools such as presence-absence, edge finding, barcode finding, pattern matching, image thresholding, or any arbitrary combination of tools exposed by the vision system in the interface.
Methods and apparatus are disclosed for extracting a one-dimensional digital signal from a two-dimensional digital image along a projection line. Disclosed embodiments provide an image memory in which is stored the digital image, a working memory, a direct memory access controller, a table memory that holds a plurality of transfer templates, and a processor. The processor selects a transfer template from the table memory responsive to an orientation of the projection line, computes a customized set of transfer parameters from the selected transfer template and parameters of the projection line, transmits the transfer parameters to the direct memory access controller, commands the direct memory access controller to transfer data from the image memory to the working memory as specified by the transfer parameters, and computes the one-dimensional digital signal using at least a portion of the data transferred by the direct memory access controller into the working memory.
G06K 9/50 - Extraction of features or characteristics of the image by analysing segments intersecting the pattern
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06T 11/20 - Drawing from basic elements, e.g. lines or circles
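The extraction of a one-dimensional signal along a projection line (described in the abstract above) amounts to resampling the image along that line. A minimal sketch, assuming bilinear interpolation and in-bounds endpoints, and omitting the DMA controller and transfer-template machinery that the patent is actually about:

```python
import numpy as np

def extract_profile(image, p0, p1, num_samples):
    """Sample image intensities along the segment p0 -> p1 (row, col),
    using bilinear interpolation. Endpoints are assumed in-bounds."""
    ys = np.linspace(p0[0], p1[0], num_samples)
    xs = np.linspace(p0[1], p1[1], num_samples)
    y0 = np.floor(ys).astype(int)
    x0 = np.floor(xs).astype(int)
    y1 = np.clip(y0 + 1, 0, image.shape[0] - 1)
    x1 = np.clip(x0 + 1, 0, image.shape[1] - 1)
    fy, fx = ys - y0, xs - x0
    return ((1 - fy) * (1 - fx) * image[y0, x0]
            + (1 - fy) * fx * image[y0, x1]
            + fy * (1 - fx) * image[y1, x0]
            + fy * fx * image[y1, x1])

# Example: pixel value equals column index, sampled along the diagonal
img = np.tile(np.arange(8, dtype=float), (8, 1))
profile = extract_profile(img, (0, 0), (7, 7), 8)
```

A real implementation would precompute the sampling pattern per orientation (the patent's transfer templates) so the inner loop becomes a block transfer rather than per-sample arithmetic.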
46.
SYSTEM AND METHOD FOR CONTROLLING ILLUMINATION IN A VISION SYSTEM
This invention provides a system and method for enabling control of an illuminator having predetermined operating parameters by a vision system processor/core based upon stored information regarding parameters that are integrated with the illuminator. The parameters are retrieved by the processor and are used to control the operation of the illuminator and/or the camera during image acquisition. In an embodiment, the stored parameters are a discrete numerical or other value that corresponds to the illuminator type. The discrete value maps to a corresponding value in a look-up table/database associated with the camera that contains parameter sets associated with each of a plurality of values in the database. The data associated with the discrete value in the camera contains the necessary parameters or settings for that illuminator type. In other embodiments, some or all of the actual parameter information can be stored with the illuminator and retrieved by the camera processor.
This invention provides an electro-mechanical auto-focus function for a smaller-diameter lens type that nests, and is removably mounted, within the mounting space and thread arrangement of a larger-diameter lens base of a vision camera assembly housing. In an illustrative embodiment, the camera assembly includes a threaded base having a first diameter, which illustratively defines a C-mount base. A motor-driven gear-reduction drive assembly is mounted internally, and includes teeth that engage corresponding teeth on the outer diameter of a cylindrical focus gear, which has an internal lead screw. The focus gear is freely rotatable, and removably captured, within the threaded C-mount base in a nested, coaxial relationship. The internal lead screw of the focus gear threadingly engages the external threads of a coaxial lens holder. This converts the drive gear rotation into linear/axial lens holder motion. The lens holder includes anti-rotation stops, which allow its linear/axial movement but restrain any rotational motion.
The technology provides, in some aspects, methods and systems for securely transmitting data using a machine vision system (e.g., within a pharmaceutical facility). Thus, for example, in one aspect, the technology provides a method that includes the steps of establishing a communications link between a machine vision processor and a remote digital data processor (e.g., a database server, personal computer, etc.); encrypting, on the machine vision processor, (i) at least one network packet containing machine vision data, and (ii) at least one network packet containing non-machine vision data; and sending to the remote digital data processor the encrypted network packets from the machine vision processor.
The technology provides, in some aspects, methods and systems for triggering a master machine vision processor and a slave machine vision processor in a multi-camera machine vision system. Thus, for example, in one aspect, the technology provides a method that includes the steps of establishing a communications link between a slave machine vision processor and a master machine vision processor; receiving on the slave machine vision processor a data message from the master machine vision processor; and triggering the slave machine vision processor to perform a machine vision function, the triggering occurring at a frequency based upon the data message, wherein at least one triggering of the slave machine vision processor occurs independent of the master machine vision processor.
Described are methods and apparatuses, including computer program products, for determining model uniqueness with a quality metric of a model of an object in a machine vision application. Determining uniqueness involves receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose. Also provided is a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime.
The problem addressed by the present invention is, in a mobile object control system that controls a mobile object on the basis of images captured by a plurality of image capturing units, to enhance the accuracy of the correspondence relationships between the respective image capture coordinate systems that are determined for the respective image capturing units. According to the present invention, a first rotational center position specification unit specifies a first rotational center position in a first image capture coordinate system corresponding to a first point, on the basis of respective first reference guide marks included in respective first images. Moreover, a second rotational center position specification unit specifies a second rotational center position in a second image capture coordinate system corresponding to the first point, on the basis of respective second reference guide marks included in respective second images. On the basis of the first rotational center position and the second rotational center position, a coordinate system correspondence relationship storage unit stores a coordinate system correspondence relationship that specifies the correspondence relationship between the first image capture coordinate system and the second image capture coordinate system.
This invention provides a system and method for processing discrete image data within an overall set of acquired image data based upon a focus of attention within that image. The result of such processing is to operate upon a more limited subset of the overall image data to generate output values required by the vision system process. Such an output value can be a decoded ID or other alphanumeric data. The system and method are performed in a vision system having two processor groups, along with a data memory that is smaller in capacity than the amount of image data to be read out from the sensor array. The first processor group is a plurality of SIMD processors and at least one general-purpose processor, co-located on the same die with the data memory. A data reduction function operates within the same clock cycle as data readout from the sensor to generate a reduced data set that is stored in the on-die data memory. At least a portion of the overall, unreduced image data is concurrently (in the same clock cycle) transferred to the second processor while the first processor transmits at least one region indicator with respect to the reduced data set to the second processor. The region indicator represents at least one focus of attention for the second processor to operate upon.
This invention provides a system and method for synchronization of vision system inspection results produced by each of a plurality of processors, which includes a first bank (that can be a "master" bank) containing a master vision system processor and at least one slave vision system processor. At least a second bank (that can be one of a plurality of "slave" banks) contains a master vision system processor and at least one slave vision system processor. Each vision system processor in each bank generates results from an image acquired and processed in a given inspection cycle. The inspection cycle can be based on an external trigger or other trigger signal, and it can enable some or all of the processors/banks to acquire and process images at a given time/cycle. In a given cycle, each of the multiple banks can be positioned to acquire an image of a respective region of a plurality of succeeding regions on a moving line. A synchronization process (a) generates a unique identifier and passes a trigger signal with the unique identifier, associated with the master processor in the first bank, to each of the slave processors in the master bank and to each of the master and slave processors in the second bank, and (b) receives, via the master processor of the second bank, consolidated results having the unique identifier. The process then (c) consolidates these results with the results from the first bank for transmission to a destination if the results are complete and the unique identifier of each of the results is the same.
This invention provides a point-of-sale scanning device that employs vision sensors and vision processing to decode symbology and matrices of information of objects, documents and other substrates as such objects are moved (swiped) through the field-of-view of the scanning device's window. The scanning device defines a form factor that conforms to that of a conventional laser-based point-of-sale scanning device using a housing having a plurality of mirrors, oriented generally at 45-degree angles with respect to the window's plane so as to fold the optical path, thereby allowing for an extended depth of field. The path is divided laterally so as to reach opposing lenses and image sensors, which face each other and are oriented along a lateral optical axis between sidewalls of the device. The sensors and lenses can be adapted to perform different parts of the overall vision system and/or code recognition process. The housing also provides illumination that fills the volume space. Illustratively, illumination is provided adjacent to the window in a ring having two rows for intermediate and long-range illumination of objects. Illumination of objects at or near the scanning window is provided by illuminators positioned along the sidewalls in a series of rows, these rows directed to avoid flooding the optical path.
This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids a need to cease runtime operation of the vision system and/or stop the production line that is served by the vision system to diagnose if the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allows for faster, less-expensive and more-straightforward diagnosis of vision system failures related to deteriorating camera calibration.
The problem addressed by the present invention is to provide an object control system that can prevent the shifting of an object to a target position from requiring a long time, even if, for example, the installation position of an image capturing unit has deviated. According to the present invention, an object control system includes: a first image capturing unit that captures a first image including a first reference mark that specifies a first object line determined in advance with respect to an object; an angle acquisition unit that, on the basis of said first reference mark within said first image, acquires a first differential angle that specifies the angle between a first target object line, determined in advance with respect to said first image, and said first object line; and an object control unit that controls a rotation mechanism that rotates said object, on the basis of said first differential angle.
A system and method for high-speed alignment and inspection of components, such as BGA devices, having non-uniform features is provided. During training time of a machine vision system, a small subset of alignment significant blobs along with a quantum of geometric analysis for picking granularity is determined. Also, during training time, balls may be associated with groups, each of which may have its own set of parameters for inspection.
This invention provides a system and method to capture a moving image of a scene that can be more readily de-blurred as compared to images captured through other known methods, such as coded exposure de-blurring (flutter shutter), operating on an equivalent exposure-time interval. Rather than stopping and starting the integration of light measurement during the exposure-time interval, photo-generated current is switched between multiple charge storage sites in accordance with a temporal switching pattern that optimizes the conditioning of the solution to the inverse blur transform. By switching the image intensity signal between storage sites, all of the light energy available during the exposure-time interval is transduced to electronic charge and captured to form a temporally decomposed representation of the moving image. As compared to related methods that discard approximately half of the image intensity signal available over an equivalent exposure-time interval, such a temporally decomposed image is a far more complete representation of the moving image and is more effectively de-blurred using simple linear de-convolution techniques.
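The de-blurring step described above can be illustrated as a linear inverse problem: model the blurred measurement as y = C·x, where C is a convolution matrix built from the temporal switching code, with a complementary second storage site measuring (1 − code) so that all of the light is kept. This is a hedged one-dimensional sketch, not the patented sensor circuit; the code values and motion model are illustrative.

```python
import numpy as np

def convolution_matrix(code, n):
    """Dense matrix form of 1-D motion blur modulated by `code`."""
    k = len(code)
    C = np.zeros((n + k - 1, n))
    for i in range(n):
        C[i:i + k, i] = code
    return C

rng = np.random.default_rng(1)
n = 32
code = np.array([1., 0., 1., 1., 0., 1., 1.])  # switching pattern (site A)
x = rng.uniform(size=n)                        # sharp scene line (unknown)

C_a = convolution_matrix(code, n)              # storage site A
C_b = convolution_matrix(1.0 - code, n)        # complementary storage site B
y = np.concatenate([C_a @ x, C_b @ x])         # both sites measured together
C = np.vstack([C_a, C_b])

# Simple linear de-convolution: ordinary least squares on the stacked system
x_hat = np.linalg.lstsq(C, y, rcond=None)[0]
```

Because the two sites partition the exposure rather than discarding half of it, the stacked system stays well conditioned and plain least squares recovers the sharp signal.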
This invention provides a system and method for capturing, detecting and extracting features of an ID, such as a 1D barcode, that employs an efficient processing system based upon a CPU-controlled vision system on a chip (VSoC) architecture, which illustratively provides a linear array processor (LAP) constructed with a single instruction multiple data (SIMD) architecture in which each pixel of a row of the pixel array is directed to an individual processor in a similarly wide array. The pixel data are processed in a front end (FE) process that performs rough finding and tracking of regions of interest (ROIs) that potentially contain ID-like features. The ROI-finding process occurs in two parts so as to optimize the efficiency of the LAP in neighborhood operations: a row-processing step that occurs during image pixel readout from the pixel array, and an image-processing step that typically occurs after readout. The relative motion of the ID-containing ROI with respect to the pixel array is tracked and predicted. An optional back end (BE) process employs the predicted ROI to perform feature extraction after image capture. The feature extraction derives candidate ID features that are verified by a verification step that confirms the ID and creates a refined ROI, angle of orientation and feature set. These are transmitted to a decoding processor or other device.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
Systems and methods for utilizing linear optoelectronic sensors are provided. A plurality of linear sensors (525A, 525B) may be utilized to obtain velocity measurements of a web material (505) at two points. The acceleration of the web material (505) may be determined from the velocity measurements and a control signal issued to a servo (515) to maintain proper tension along the web material.
D21F 7/00 - Other details of machines for making continuous webs of paper
D21G 9/00 - Other accessories for paper-making machines
B65H 7/14 - Controlling article feeding, separating, pile-advancing, or associated apparatus, to take account of incorrect feeding, absence of articles, or presence of faulty articles by feelers or detectors by photoelectric feelers or detectors
B65H 23/04 - Registering, tensioning, smoothing, or guiding webs longitudinally
G01N 33/00 - Investigating or analysing materials by specific methods not covered by groups
G01S 5/16 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
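The control loop described in the abstract above reduces to a few arithmetic steps: two linear sensors report web velocity at two points, their difference estimates stretch along the web, successive samples give acceleration, and a correction is issued to the tensioning servo. The gains and the proportional control law below are illustrative assumptions, not taken from the patent.

```python
def servo_correction(v_up, v_down, v_down_prev, dt,
                     k_stretch=0.5, k_accel=0.1):
    """Illustrative tension correction from two-point velocity measurements.

    v_up, v_down  : velocities at the upstream and downstream sensors
    v_down_prev   : previous downstream sample (for acceleration)
    dt            : sample interval
    """
    stretch_rate = v_down - v_up            # web stretching along its length
    accel = (v_down - v_down_prev) / dt     # acceleration at the second point
    # Relax tension when the web is stretching or accelerating, and vice versa
    return -(k_stretch * stretch_rate + k_accel * accel)
```

A steady web (equal, constant velocities) yields zero correction; a downstream speed-up yields a negative (tension-relaxing) command.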
This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.
G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
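The "3D points are computed for each pair of cameras" step in the abstract above rests on standard two-view triangulation. A minimal sketch using the direct linear transform (DLT), with a toy rectified rig standing in for real world calibration; the intrinsics and baseline are illustrative:

```python
import numpy as np

def triangulate(P1, P2, pt1, pt2):
    """DLT triangulation of one matched point pair from projection
    matrices P1, P2 (3x4). Returns the 3-D point in world coordinates."""
    A = np.vstack([
        pt1[0] * P1[2] - P1[0],
        pt1[1] * P1[2] - P1[1],
        pt2[0] * P2[2] - P2[0],
        pt2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]                      # null-space vector (homogeneous point)
    return X[:3] / X[3]

# Toy rig: identical intrinsics, second camera shifted 0.1 units along x
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.], [0.]])])

X_true = np.array([0.2, -0.1, 2.0])
p1 = P1 @ np.append(X_true, 1.0); p1 = p1[:2] / p1[2]
p2 = P2 @ np.append(X_true, 1.0); p2 = p2[:2] / p2[2]
X_est = triangulate(P1, P2, p1, p2)
```

Applying this over all matched edge features from a camera pair, then transforming into the world frame from calibration, yields the 3D point cloud that the HLGS extraction operates on.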
Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods.
G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
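The core idea of the dynamic feature detection described above can be sketched simply: as a feature moves through the field of view it is observed at many pixel-grid alignments, and combining the per-frame estimates (after removing the known inter-frame motion) averages out grid-alignment bias. The weighting scheme and constant-velocity motion model below are illustrative assumptions for this sketch.

```python
import numpy as np

def fuse_positions(positions, weights, motion_per_frame):
    """Fuse per-frame position estimates of one moving feature.

    positions        : measured 1-D positions, one per frame
    weights          : per-frame detection weights (confidence)
    motion_per_frame : known object motion between frames (pixels)
    """
    positions = np.asarray(positions, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Bring every estimate back to the frame-0 coordinate system
    aligned = positions - motion_per_frame * np.arange(len(positions))
    # Weighted mean suppresses per-alignment quantization bias
    return np.average(aligned, weights=weights)

# Feature near x = 10.3, advancing 3 px/frame, measured with grid-dependent bias
measured = [10.2, 13.4, 16.3, 19.4, 22.2]
fused = fuse_positions(measured, [1, 2, 2, 2, 1], 3.0)
```

Each single-frame measurement is off by up to ±0.1 px here, while the fused estimate sits much closer to the true position; the same principle extends to weight and other feature attributes.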
A system and method for performing multi-image training for pattern recognition and registration is provided. A machine vision system first obtains N training images of the scene. Each of the N images is used in turn as a baseline image, and the remaining N−1 images are registered to that baseline. Features that represent a set of corresponding image features are added to the model. The feature to be added to the model may comprise an average of the corresponding features from each of the images in which the feature appears. The process continues until every feature that meets a threshold requirement is accounted for. The model that results from the present invention represents those stable features that are found in at least the threshold number of the N training images. The model may then be used to train an alignment/inspection tool with the set of features.
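The stable-feature rule above can be sketched in a few lines: keep only features found (within a tolerance) in at least a threshold number of the registered training images, and represent each kept feature by the average of its instances. Nearest-neighbour matching within `tol` pixels is an assumption of this sketch, as is treating features as bare 2-D points.

```python
import numpy as np

def build_model(feature_sets, threshold, tol=2.0):
    """feature_sets: one list of (x, y) features per registered training
    image. Returns averaged features seen in >= threshold images."""
    model = []
    for seed in feature_sets[0]:                 # candidates from image 0
        matches = [seed]
        for other in feature_sets[1:]:
            d = [np.hypot(f[0] - seed[0], f[1] - seed[1]) for f in other]
            if d and min(d) <= tol:              # corresponding feature found
                matches.append(other[int(np.argmin(d))])
        if len(matches) >= threshold:            # stable across enough images
            model.append(tuple(np.mean(matches, axis=0)))
    return model

# Three training images: a stable corner near (10, 10) and a spurious blob
sets = [[(10.0, 10.0), (50.0, 5.0)], [(10.4, 9.8)], [(9.6, 10.2)]]
model = build_model(sets, threshold=3)
```

Only the stable feature survives, averaged over its three instances; the spurious blob, appearing in a single image, is rejected by the threshold.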
A single chip vision sensor of an embodiment includes a pixel array and one or more circuits. The one or more circuits are configured to search an image for one or more features using a model of the one or more features. A method of an embodiment in a single chip vision sensor includes obtaining an image based at least partially on sensed light, and searching the image for one or more features using a model of the one or more features. A system of an embodiment includes the single chip vision sensor and a device. The device is configured to receive one or more signals from the single chip vision sensor and to control an operation based at least partially on the one or more signals.
This invention provides a system and method for decoding symbology that contains a respective data set using multiple image frames (420, 422, 424) of the symbol, wherein at least some of those frames can have differing image parameters (for example orientation, lens zoom, aperture, etc.), so that combining the frames with an illustrative multiple image application (430) allows the most-readable portions of each frame to be stitched together. Unlike prior systems, which may select one 'best' image, the illustrative system and method allows this stitched image to form a complete, readable image of the underlying symbol (310). In an illustrative embodiment the system and method includes an imaging assembly that acquires multiple image frames of the symbol in which some of those image frames have discrete, differing image parameters from others of the frames. A processor, which is operatively connected to the imaging assembly, processes the plurality of acquired image frames of the symbol to decode predetermined code data from at least some of the plurality of image frames, and to combine the predetermined code data from the at least some of the plurality of image frames to define a decodable version of the data set represented by the symbol.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
67.
CIRCUITS AND METHODS ALLOWING FOR PIXEL ARRAY EXPOSURE PATTERN CONTROL
An image processing system includes an image sensor circuit. The image sensor circuit is configured to obtain an image using a type of shutter operation in which an exposure pattern of a pixel array is set according to exposure information that changes over time based at least partially on charge accumulated in at least a portion of the pixel array. An image sensor circuit includes a pixel array and one or more circuits. The one or more circuits are configured to update exposure information based at least partially on one or more signals output from the pixel array, and to control an exposure pattern of the pixel array based on the exposure information. A pixel circuit includes a first transistor connected between a photodiode and a sense node, and a second transistor connected between an exposure control signal line and a gate of the first transistor.
H04N 3/14 - Scanning details of television systems; Combination thereof with generation of supply voltages by means not exclusively optical-mechanical by means of electrically scanned solid-state devices
This invention provides a vehicle-borne system and method for traffic sign recognition that provides greater accuracy and efficiency in the location and classification of various types of traffic signs by employing rotation and scale-invariant (RSI)-based geometric pattern-matching on candidate traffic signs acquired by a vehicle-mounted forward-looking camera and applying one or more discrimination processes to the recognized sign candidates from the pattern-matching process to increase or decrease the confidence of the recognition. These discrimination processes include discrimination based upon sign color versus model sign color arrangements, discrimination based upon the pose of the sign candidate versus vehicle location and/or changes in the pose between image frames, and/or discrimination of the sign candidate versus stored models of fascia characteristics. The sign candidates that pass with high confidence are classified based upon the associated model data and the driver/vehicle is informed of their presence. In an illustrative embodiment, a preprocess step converts a color image of the sign candidates into a grayscale image in which the contrast between sign colors is appropriately enhanced to assist the pattern-matching process.
G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
69.
METHOD AND SYSTEM FOR OPTOELECTRONIC DETECTION AND LOCATION OF OBJECTS
Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.
This invention provides a system and method for determining the position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First, a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, it is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation, and employing an iterative weighted least-squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall refined/optimized pose estimate incorporates data from each of the cameras' acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of trained features relative to the runtime features is derived from the iterative error computation.
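The quaternion-parameterized error term at the heart of the refinement above can be sketched numerically. This is an illustrative skeleton under stated assumptions, not the patented optimizer: the function names are hypothetical, edgelets are simplified to 3D points, and only the weighted error evaluation is shown (the iterative minimization over it is omitted).

```python
import numpy as np

def quat_to_rot(a, b, c, d):
    """Rotation matrix from a quaternion (a, b, c, d), normalized first."""
    n = np.sqrt(a*a + b*b + c*c + d*d)
    a, b, c, d = a/n, b/n, c/n, d/n
    return np.array([
        [1 - 2*(c*c + d*d), 2*(b*c - a*d),     2*(b*d + a*c)],
        [2*(b*c + a*d),     1 - 2*(b*b + d*d), 2*(c*d - a*b)],
        [2*(b*d - a*c),     2*(c*d + a*b),     1 - 2*(b*b + c*c)],
    ])

def weighted_pose_error(quat, t, model_pts, runtime_pts, weights):
    """Weighted sum-of-squares error between transformed model feature
    positions and the acquired runtime feature positions. An iterative
    solver would adjust (quat, t) to drive this quantity down."""
    R = quat_to_rot(*quat)
    residuals = (model_pts @ R.T + t) - runtime_pts
    return float(np.sum(weights * np.sum(residuals**2, axis=1)))
```

At the true pose the residuals vanish, so the identity quaternion with zero translation yields zero error when the model and runtime points already coincide.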
This invention provides a Graphical User Interface (GUI) that operates in connection with a machine vision detector or other machine vision system, which provides a highly intuitive and industrial machine-like appearance and layout. The GUI includes a centralized image frame window surrounded by panes having buttons and specific interface components that the user employs in each step of a machine vision system set up and run procedure. One pane allows the user to view and manipulate a recorded filmstrip of image thumbnails taken in a sequence, and provides the filmstrip with specialized highlighting (colors or patterns) that indicates useful information about the underlying images. The system is set up and run using a sequential series of buttons or switches that are activated by the user in turn to perform each of the steps needed to connect to a vision system, train the system to recognize or detect objects/parts, configure the logic that is used to handle recognition/detection signals, set up system outputs based upon the logical results, and finally, run the programmed system in real time. The programming of logic is performed using a programming window that includes a ladder logic arrangement. A thumbnail window is provided on the programming window in which an image from a filmstrip is displayed, focusing upon the locations of the image (and underlying viewed object/part) in which the selected contact element is provided.
The invention provides, in some aspects, a wafer alignment system comprising an image acquisition device, an illumination source, a rotatable wafer platform, and an image processor that includes functionality for mapping coordinates in an image of an article (such as a wafer) on the platform to a 'world' frame of reference at each of a plurality of angles of rotation of the platform.
A method for automatically determining machine vision tool parameters is presented, including: marking to indicate a desired image result for each image of a plurality of images; selecting a combination of machine vision tool parameters, and running the machine vision tool on the plurality of images using the combination of parameters to provide a computed image result for each image of the plurality of images, each computed image result including a plurality of computed measures; comparing each desired image result with a corresponding computed image result to provide a comparison result vector associated with the combination of machine vision tool parameters; then comparing the comparison result vector associated with the combination of machine vision tool parameters to a previously computed comparison result vector associated with a previous combination of machine vision tool parameters, using a result comparison heuristic to determine which combination of machine vision tool parameters is best overall.
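The parameter-search loop described above can be sketched as a grid search with a pluggable comparison heuristic. This is a minimal sketch under assumptions: the function names, the use of a simple per-image match vector, and a "count of matches" heuristic in the example are all illustrative, not taken from the patent.

```python
from itertools import product

def tune_tool_params(param_grid, run_tool, images, desired, better):
    """Try every combination of tool parameters in `param_grid` (a dict of
    parameter name -> list of candidate values). For each combination, run
    the tool on all images, compare computed results against the marked
    desired results to form a comparison result vector, and keep the
    combination whose vector the `better` heuristic prefers."""
    best_combo, best_vec = None, None
    for combo in product(*param_grid.values()):
        params = dict(zip(param_grid.keys(), combo))
        computed = [run_tool(img, **params) for img in images]
        vec = [d == c for d, c in zip(desired, computed)]   # comparison result vector
        if best_vec is None or better(vec, best_vec):
            best_combo, best_vec = params, vec
    return best_combo, best_vec
```

A usage example with a toy "tool" (a brightness threshold) shows the mechanics: the heuristic here simply prefers the vector with more matches, while a real system might weight particular computed measures differently.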
A traffic signal head having a signal lamp or signal ball with an embedded video monitoring system can be provided to perform vehicle detection to inform an intelligent traffic control system. Video monitoring of traffic lanes facing the signal head can be analyzed by such a system to emulate inductive loop signals that are input signals to traffic control systems.
The invention provides, inter alia, methods and apparatus for determining the pose, e.g., position along the x-, y- and z-axes, pitch, roll and yaw (or one or more characteristics of that pose), of an object in three dimensions by triangulation of data gleaned from multiple images of the object. Thus, for example, in one aspect, the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras disposed to acquire images of the object from different respective viewpoints are calibrated to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be acquired of the object. A runtime step triangulates locations in 3D space of one or more of those patterns from pixel-wise positions of those patterns in images of the object and from the mappings discerned during the calibration step.
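The runtime triangulation step above — intersecting rays that the calibrated mapping associates with a pattern's pixel position in each camera — can be illustrated with a standard least-squares closest-point computation. This is a generic sketch of ray triangulation, not the patented method; the function name and the assumption that calibration has already yielded ray origins and directions are illustrative.

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares 3D point closest to a set of rays, one per camera.
    Each ray is origin + t * direction. Accumulates, per ray, the projector
    onto the plane perpendicular to the ray, then solves the normal equations."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = np.asarray(d, dtype=np.float64)
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector onto plane perpendicular to ray
        A += P
        b += P @ np.asarray(o, dtype=np.float64)
    return np.linalg.solve(A, b)
```

When two or more rays genuinely intersect, the solution is their intersection point; with noisy pixel positions, it is the point minimizing the sum of squared perpendicular distances to all rays.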
The invention provides methods and apparatus for analysis of images of two-dimensional (2D) bar codes (12) in which a model that has proven successful in decoding a prior 2D image of a 2D bar code (12) is utilized to speed analysis of images of subsequent 2D bar codes (12). In its various aspects, the invention can be used in analyzing conventional 2D bar codes (12), e.g., those complying with the Maxicode and DataMatrix standards, as well as stacked linear bar codes, e.g., those utilizing the Codablock symbology. Bar code readers (16), digital data processing apparatus and other devices according to the invention can be used, by way of non-limiting example, to decode bar codes (12) on damaged labels, as well as those screened, etched, peened or otherwise formed on manufactured articles (e.g., from semiconductors to airplane wings). In addition to making bar code reading possible under those conditions, devices utilizing such methods can speed bar code analysis in applications where multiple bar codes of like type are read in succession and/or are read under like circumstances, e.g., on the factory floor, at point-of-sale locations, in parcel delivery and so forth. Such devices can also speed and/or make possible bar code analysis in applications where multiple bar codes are read from a single article (14), e.g., as in the case of a multiply-encoded airplane propeller or other milled parts. The invention also provides methods and apparatus for optical character recognition and other image-based analysis paralleling the above.
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
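The reuse-a-proven-model strategy in the bar code abstract above amounts to a try-fast-path-first cache. The sketch below is a hedged illustration of that control flow only: the function names, the cache shape, and the decoder interfaces are assumptions, and the real system's "model" would capture symbology, geometry, print conditions, and the like.

```python
def decode_with_model_cache(image, full_decode, quick_decode, cache):
    """Attempt a decode using the model from a previously successful decode
    first (fast path); on miss or failure, fall back to full analysis and
    cache the resulting model for subsequent, similar bar codes.

    full_decode(image)  -> (result, model) with result None on failure.
    quick_decode(image, model) -> result, or None if the model doesn't fit.
    """
    model = cache.get("model")
    if model is not None:
        result = quick_decode(image, model)
        if result is not None:
            return result                 # fast path succeeded
    result, model = full_decode(image)    # slow path: full analysis
    if result is not None:
        cache["model"] = model            # remember what worked
    return result
```

In a run of like bar codes read under like circumstances, only the first image pays the full-analysis cost; later reads hit the cached model, which is exactly the speedup the abstract claims.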