The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a three-dimensional model. A three-dimensional model is stored, the three-dimensional model comprising a set of probes. Three-dimensional data of an object is received, the three-dimensional data comprising a set of data entries. The three-dimensional data is converted into a set of fields, comprising generating a first field comprising a first set of values, where each value of the first set of values is indicative of a first characteristic of an associated one or more data entries from the set of data entries, and generating a second field comprising a second set of values, where each value of the second set of values is indicative of a second characteristic of an associated one or more data entries from the set of data entries, wherein the second characteristic is different than the first characteristic. A pose of the three-dimensional model is tested with the set of fields, comprising testing the set of probes against the set of fields to determine a score for the pose.
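The probe/field scoring idea above can be sketched in a few lines of Python. This is an illustrative simplification, not the patented implementation: `make_fields` and `score_pose` are hypothetical names, the two "characteristics" are assumed to be an accumulated surface-normal and a point count per voxel, and the pose is reduced to a pure translation.

```python
import math

def make_fields(points):
    """Convert 3D data entries into two fields on an integer voxel grid:
    field 1 accumulates surface normals (first characteristic),
    field 2 counts points per voxel (second characteristic)."""
    normals = {}
    density = {}
    for (x, y, z, nx, ny, nz) in points:
        key = (int(x), int(y), int(z))
        ax, ay, az = normals.get(key, (0.0, 0.0, 0.0))
        normals[key] = (ax + nx, ay + ny, az + nz)
        density[key] = density.get(key, 0) + 1
    return normals, density

def score_pose(probes, pose, fields):
    """Score a candidate pose by testing each translated probe against the
    fields: alignment of probe direction with the stored normal, weighted
    by the local point density."""
    normals, density = fields
    tx, ty, tz = pose
    total = 0.0
    for (px, py, pz, dx, dy, dz) in probes:
        key = (int(px + tx), int(py + ty), int(pz + tz))
        if key not in normals:
            continue  # probe falls in empty space: contributes nothing
        nx, ny, nz = normals[key]
        norm = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
        total += (dx * nx + dy * ny + dz * nz) / norm * density[key]
    return total
```

A pose that places probes onto voxels whose stored normals agree with the probe directions scores high; a misplaced pose scores near zero.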
This invention provides a system and method for finding line features in an image that allows multiple lines to be efficiently and accurately identified and characterized. When lines are identified, the user can train the system to associate predetermined (e.g. text) labels with such lines. These labels can be used to define neural net classifiers. The neural net operates at runtime to identify and score lines in a runtime image that are found using a line-finding process. The found lines can be displayed to the user with labels and an associated probability score map based upon the neural net results. Lines that are not labeled are generally deemed to have a low score, and are either not flagged by the interface, or identified as not relevant.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
G06T 7/143 - Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
3.
METHODS AND APPARATUS FOR DETERMINING ORIENTATIONS OF AN OBJECT IN THREE-DIMENSIONAL DATA
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a candidate three-dimensional (3D) orientation of an object represented by a 3D point cloud. The method includes receiving data indicative of a 3D point cloud comprising a plurality of 3D points, determining a first histogram for the plurality of 3D points based on geometric features determined from the plurality of 3D points, accessing data indicative of a second histogram of geometric features of a 3D representation of a reference object, computing, for each of a plurality of different rotations between the first histogram and the second histogram in 3D space, a scoring metric for the associated rotation, and determining the candidate 3D orientation based on the scoring metrics of the plurality of different rotations.
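The rotation-scoring loop above can be illustrated with a deliberately coarse Python sketch. All names are hypothetical and the geometry is simplified: normals are binned by sign octant as the "geometric feature" histogram, candidate rotations are limited to quarter turns about z, and histogram intersection stands in for the scoring metric.

```python
def direction_histogram(normals):
    """Coarse geometric-feature histogram: bin unit normals by sign octant."""
    hist = [0] * 8
    for (nx, ny, nz) in normals:
        hist[(nx >= 0) + 2 * (ny >= 0) + 4 * (nz >= 0)] += 1
    return hist

def rotate_z(v, quarter_turns):
    """Rotate a vector about z in 90-degree increments."""
    x, y, z = v
    for _ in range(quarter_turns % 4):
        x, y = -y, x
    return (x, y, z)

def histogram_score(h1, h2):
    """Scoring metric for one candidate rotation: histogram intersection."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def candidate_orientation(scene_normals, reference_normals):
    """Score every candidate rotation and return the best-scoring one."""
    ref = direction_histogram(reference_normals)
    return max(range(4), key=lambda q: histogram_score(
        direction_histogram([rotate_z(n, q) for n in scene_normals]), ref))
```

A real system would use finer feature histograms and a dense sampling of 3D rotations, but the evaluate-every-rotation-and-keep-the-best structure is the same.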
This invention provides a system and method for finding patterns in images that incorporates neural net classifiers. A pattern finding tool is coupled with a classifier that can be run before or after the tool to have labeled pattern results with sub-pixel accuracy. In the case of a pattern finding tool that can detect multiple templates, its performance is improved when a neural net classifier informs the pattern finding tool to work only on a subset of the originally trained templates. Similarly, in the case of a pattern finding tool that initially detects a pattern, a neural network classifier can then determine whether it has found the correct pattern. The neural network can also reconstruct/clean up an imaged shape, and/or eliminate pixels less relevant to the shape of interest, thereby reducing the search time as well as significantly increasing the chance of locking onto the correct shapes.
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
This invention provides a vision system camera, and associated methods of operation, having a multi-core processor, a high-speed, high-resolution imager, an FOVE, an auto-focus lens, and an imager-connected pre-processor that pre-processes image data, providing the acquisition and processing speed, as well as the image resolution, that are highly desirable in a wide range of applications. This arrangement effectively scans objects that require a wide field of view, vary in size and move relatively quickly with respect to the system field of view. This vision system provides a physical package with a wide variety of physical interconnections to support various options and control functions. The package effectively dissipates internally generated heat by arranging components to optimize heat transfer to the ambient environment and includes dissipating structure (e.g. fins) to facilitate such transfer. The system also enables a wide range of multi-core processes to optimize and load-balance both image processing and system operation (i.e. auto-regulation tasks).
Embodiments relate to predicting height information for an object. First distance data is determined at a first time when an object is at a first position that is only partially within the field-of-view of a distance sensing device. Second distance data is determined at a second, later time when the object is at a second, different position that is only partially within the field-of-view. A distance measurement model that models a physical parameter of the object within the field-of-view is determined for the object based on the first and second distance data. Third distance data indicative of an estimated distance to the object prior to the object being entirely within the field-of-view of the distance sensing device is determined based on the first distance data, the second distance data, and the distance measurement model. Data indicative of a height of the object is determined based on the third distance data.
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness for measuring thickness
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/28 - Measuring arrangements characterised by the use of optical techniques for measuring areas
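The height-prediction flow in the abstract above can be sketched with a linear distance-vs-time model fitted to the two early measurements. This is a hedged illustration under simplifying assumptions (a linear model, a top-down sensor at a known mounting height); the function names are hypothetical, not taken from the patent.

```python
def fit_distance_model(t1, d1, t2, d2):
    """Build a linear distance-vs-time model from two measurements taken
    while the object was only partially within the field of view."""
    slope = (d2 - d1) / (t2 - t1)
    return lambda t: d1 + slope * (t - t1)

def estimate_height(sensor_mount_height, model, t):
    """Extrapolate the distance to time t (the 'third distance data'),
    then convert it to an object height for a top-down sensor."""
    return sensor_mount_height - model(t)
```

For example, measurements of 2.0 m at t=0 and 1.8 m at t=1 extrapolate to 1.6 m at t=2; with a 3.0 m sensor mount the predicted object height is 1.4 m.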
A system or method can analyze symbols on a set of objects having different sizes. The system can identify a characteristic object dimension corresponding to the set of objects. An image of a first object can be received, and, a first virtual object boundary feature (e.g., edge) in the image can be identified for the first object based on the characteristic object dimension. A first symbol can be identified in the image, and whether the first symbol is positioned on the first object can be determined, based on the first virtual object boundary feature.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
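The virtual-boundary test in the entry above reduces, in one dimension, to extending a detected edge by the characteristic object dimension and checking whether the symbol falls inside. A minimal sketch, with hypothetical names and pixel-space coordinates assumed:

```python
def virtual_boundary(leading_edge_px, characteristic_len_px):
    """Infer the unseen far edge of the object from the characteristic
    dimension shared by the set of objects."""
    return leading_edge_px + characteristic_len_px

def symbol_on_object(symbol_px, leading_edge_px, characteristic_len_px):
    """A symbol belongs to the object if it lies between the detected
    leading edge and the inferred virtual boundary."""
    far_edge = virtual_boundary(leading_edge_px, characteristic_len_px)
    return leading_edge_px <= symbol_px <= far_edge
```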
An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.
Methods, systems, and apparatuses are provided for estimating a location on an object in a three-dimensional scene. Multiple radiation patterns are produced by spatially modulating each of multiple first radiations with a distinct combination of one or more modulating structures, each first radiation having at least one of a distinct radiation path, a distinct source, a distinct source spectrum, or a distinct source polarization with respect to the other first radiations. The location on the object is illuminated with a portion of each of two or more of the radiation patterns, the location producing multiple object radiations, each object radiation produced in response to one of the multiple radiation patterns. Multiple measured values are produced by detecting the object radiations from the location on the object due to each pattern separately using one or more detector elements. The location on the object is estimated based on the multiple measured values.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/499 - Details of systems according to groups , , of systems according to group using polarisation effects
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
An opto-electronic system includes a laser operable to produce a laser beam; an optical element including two or more beam-shaping portions, each of the two or more beam-shaping portions having a different optical property; a beam deflector arranged to sweep the laser beam across the optical element to produce output light; and electronics communicatively coupled with the laser, the beam deflector, or both the laser and the beam deflector. The electronics are configured to cause selective impingement of the laser beam onto a proper subset of the two or more beam-shaping portions of the optical element to modify one or more optical parameters of the output light.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups , for polarising
14.
SYSTEM AND METHOD FOR REDUCED-SPECKLE LASER LINE GENERATION
An illumination apparatus for reducing speckle effect in light reflected off an illumination target includes a laser; a linear diffuser positioned in an optical path between an illumination target and the laser to diffuse collimated laser light in a planar fan of diffused light that spreads in one dimension across at least a portion of the illumination target; and a beam deflector to direct the collimated laser light incident on the beam deflector to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficient uncorrelated speckle patterns, at an image sensor, in light reflected from an intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion due to the received images of the object being generated when each projector illuminates the object with an associated optical pattern from the different perspective.
This invention provides an integrated time-of-flight sensor that delivers distance information to a processor associated with the camera assembly and vision system. The distance is processed with the above-described feedback control to auto-focus the camera assembly's variable lens during runtime operation based on the particular size/shape of object(s) within the field of view. The shortest measured distance is used to set the focus distance of the lens. To correct for calibration or drift errors, a further image-based focus optimization can occur around the measured distance and/or based on the measured temperature. The distance information generated by the time-of-flight sensor can also be employed to perform other functions, including self-triggering of image acquisition, object size dimensioning, detection and analysis of object defects and/or gap detection between objects in the field of view, and software-controlled range detection to prevent unintentional reading of (e.g.) IDs on objects outside a defined range (presentation mode).
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
G02B 7/40 - Systems for automatic generation of focusing signals using time delay of the reflected waves, e.g. of ultrasonic waves
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
H04N 23/67 - Focus control based on electronic image sensor signals
The techniques described herein relate to methods, apparatus, and computer readable media for editing a graphical program using a graphical programming interface. Editing the graphical program may include displaying, via the graphical programming interface, a plurality of existing graphical components that provide functionality for at least one computer program thread; receiving data indicating a selection of a new graphical component for inserting into the plurality of existing graphical components; determining, based on an associated graphical component of the plurality of existing graphical components, a set of one or more placement locations for inserting the new graphical component; and displaying, on the graphical programming interface, the set of one or more placement locations.
The techniques described herein relate to methods, apparatus, and computer readable media for measuring object characteristics by interpolating the object characteristics using stored associations. A first image of at least part of a ground surface with a first representation of a laser line projected onto the ground surface from a first pose is received. A first association between a known value of the characteristic of the ground surface of the first image with the first representation is determined. A second image of at least part of a first training object on the ground surface with a second representation of the laser line projected onto the first training object from the first pose is received. A second association between a known value of the characteristic of the first training object with the second representation is determined. The first and second association for measuring the characteristic of a new object are stored.
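The train-then-interpolate scheme above can be sketched as two stored (laser-line position → known value) associations and a linear interpolation between them at runtime. The names and single-row simplification are illustrative assumptions, not the patented method:

```python
def train_associations(ground_row, ground_value, object_row, object_value):
    """Store two associations: the laser-line image row observed on the
    bare ground surface, and the row observed on a training object with
    a known characteristic (e.g. height)."""
    return [(ground_row, ground_value), (object_row, object_value)]

def measure(associations, observed_row):
    """Interpolate the characteristic of a new object from the image row
    where its laser line is observed."""
    (r0, v0), (r1, v1) = associations
    return v0 + (v1 - v0) * (observed_row - r0) / (r1 - r0)
```

For instance, if the ground line sits at row 400 (height 0) and a 50 mm training object moves the line to row 300, a new object whose line sits at row 350 measures 25 mm.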
This invention provides an aimer assembly for a vision system that is coaxial (on-axis) with the camera optical axis, thus providing an aligned aim point at a wide range of working distances. The aimer includes a projecting light element located to the side of the camera optical axis. The beam and received light from the imaged (illuminated) scene are selectively reflected or transmitted through a dichroic mirror assembly in a manner that permits the beam to be aligned with the optical axis and projected to the scene while only light from the scene is received by the sensor. The aimer beam and illuminator employ differing light wavelengths. In a further embodiment, an internal illuminator includes a plurality of light sources below the camera optical axis. Some of the light sources are covered by a prismatic structure for close distance, and other light sources are collimated, projecting over a longer distance.
The techniques described herein relate to computerized methods and apparatuses for detecting objects in an image. The techniques described herein further relate to computerized methods and apparatuses for detecting one or more objects using a pre-trained machine learning model and one or more other machine learning models that can be trained in a field training process. The pre-trained machine learning model may be a deep machine learning model.
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
This invention provides a vision system having a housing and an interchangeable lens module. The module is adapted to seat on a C-mount ring provided on the front, mounting face of the housing. The module is attached via a plurality of fasteners that pass through a frame of the module and into the mounting face. The module includes a connector in a fixed location, which mates with a connector well on the mounting face to provide power and control to a driver board that operates a variable (e.g. liquid) lens within the optics of the lens module. The driver board is connected to the lens body by a flexible printed circuit board (PCB), which also allows for axial motion of the lens body with respect to the frame. This axial motion can be effected by an adjustment ring that can include an indexed/lockable, geared, outer surface.
A method for an imaging module can include rotating an imaging assembly that includes an imaging device about a first pivot point of a bracket to a select first orientation, fastening the imaging assembly to the bracket at the first orientation, rotating a mirror assembly that includes a mirror about a second pivot point of the bracket to a select second orientation, and fastening the mirror assembly to the bracket at the second orientation. An adjustable, selectively oriented imaging assembly of a first imaging module can acquire images using an adjustable, selectively oriented mirror assembly of a second imaging module.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
23.
Machine vision system with multispectral light assembly
A multispectral light assembly includes a multispectral light source configured to generate a plurality of different wavelengths of light, a light pipe positioned in front of the multispectral light source and configured to provide color mixing for two or more of the plurality of different wavelengths, a diffusive surface on the light pipe exit surface, and a projection lens positioned in front of the diffusive surface. A processor device is in communication with the multispectral light assemblies to control activation of the multispectral light source. A machine vision system includes an illumination assembly with a plurality of multispectral light assemblies, an optics assembly, a sensor assembly, and a processor device in communication with the optics assembly, the sensor assembly, and the illumination assembly.
F21K 9/62 - Optical arrangements integrated in the light source, e.g. for improving the colour rendering index or the light extraction using mixing chambers, e.g. housings with reflective walls
A machine vision system can include an image sensor assembly including an image sensor, a lens assembly coupled to the image sensor assembly, an illumination assembly coupled to the lens assembly, and a removable front cover positioned in front of the illumination assembly. The illumination assembly can include a plurality of multispectral light assemblies. Each multispectral light assembly of the plurality of multispectral light assemblies can include a multispectral light source having a plurality of color LED dies configured to generate at least two different wavelengths of light, a light pipe positioned in front of the multispectral light source and having an exit surface, a diffusive surface on the exit surface of the light pipe, and a projection lens positioned in front of the diffusive surface. The machine vision system can also include an illumination sensor configured to detect light from the illumination assembly.
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
A method for assigning a symbol to an object in an image includes receiving the image captured by an imaging device where the symbol may be located within the image. The method further includes receiving, in a first coordinate system, a three-dimensional (3D) location of one or more points that corresponds to pose information indicative of a 3D pose of the object in the image, mapping the 3D location of the one or more points of the object to a 2D location within the image, and assigning the symbol to the object based on a relationship between a 2D location of the symbol in the image and the 2D location of the one or more points of the object in the image.
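The 3D-to-2D mapping step above can be sketched with a pinhole projection and a nearest-projected-point assignment rule. This is a hedged simplification: a single reference point per object, hypothetical function names, and illustrative intrinsics stand in for the full pose-based mapping.

```python
def project(point3d, focal_px, cx, cy):
    """Pinhole mapping from camera coordinates (X, Y, Z) to pixel (u, v)."""
    X, Y, Z = point3d
    return (focal_px * X / Z + cx, focal_px * Y / Z + cy)

def assign_symbol(symbol_px, objects, focal_px=800.0, cx=320.0, cy=240.0):
    """Assign the symbol to the object whose projected reference point is
    nearest the symbol's 2D location. `objects` is a list of
    (name, 3D point) pairs derived from each object's pose."""
    def dist2(pt):
        u, v = project(pt, focal_px, cx, cy)
        return (u - symbol_px[0]) ** 2 + (v - symbol_px[1]) ** 2
    return min(objects, key=lambda item: dist2(item[1]))[0]
```

A production system would map whole object faces rather than single points, but the relationship test (2D symbol location versus 2D-mapped object points) is the same.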
In accordance with some embodiments of the disclosed subject matter, methods, systems, and media for generating images of multiple sides of an object are provided. In some embodiments, a method comprises receiving information indicative of a 3D pose of a first object in a first coordinate space at a first time; receiving a group of images captured using at least one image sensor, each image associated with a field of view within the first coordinate space; mapping at least a portion of a surface of the first object to a 2D area with respect to the image based on the 3D pose of the first object; associating, for images including the surface, a portion of that image with the surface of the first object based on the 2D area; and generating a composite image of the surface using images associated with the surface.
A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object acquired by an image sensor of the vision system processor is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
This invention provides an illumination assembly that is typically attached to the front end of a vision system camera assembly, adapted to generate an illumination pattern onto an object, which allows the vision system process(or) to perform basic shape inspection of the object in addition to feature detection and decoding. A dome illuminator with a diffuse inner surface is provided to the camera assembly, with an opening sufficiently large to surround the object. The dome illuminator has two systems to create the pattern on an object, including a diffuse illuminator for specular/shiny object surfaces and a secondary, projecting illuminator for matte/diffusive object surfaces. The diffuse illuminator includes a set of light-filtering structures on its inner surface—for example concentric strips or rings that allow projection of a ringed fringe pattern on an (e.g. shiny/specular) object. The fringes can additionally be generated at a given wavelength and/or visible color.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
This invention provides a system and method for detecting and acquiring one or more in-focus images of one or more barcodes within the field of view of an imaging device. A measurement process measures depth-of-field of barcode detection. A plurality of nominal coarse focus settings of a variable lens allow sampling, in steps, of a lens adjustment range corresponding to allowable distances between the one or more barcodes and the image sensor, so that a step size of the sampling is less than a fraction of the depth-of-field of barcode detection. An acquisition process acquires a nominal coarse focus image for each nominal coarse focus setting. A barcode detection process detects one or more barcode-like regions and respective likelihoods. A fine focus process fine-adjusts, for each high-likelihood barcode, the variable lens near a location of the barcode-like regions. The process acquires an image for decoding using the fine adjusted setting.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
H04N 23/959 - Computational photography systems, e.g. light-field imaging systems for extended depth of field imaging by adjusting depth of field during image capture, e.g. maximising or setting range based on scene characteristics
H04N 23/67 - Focus control based on electronic image sensor signals
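The coarse-then-fine focus search in the entry above can be sketched as follows. The names, the 0.5 depth-of-field fraction, and the synthetic sharpness metric are illustrative assumptions; a real system would score barcode-likeness from acquired images rather than call a closed-form function.

```python
def coarse_settings(z_min, z_max, depth_of_field, fraction=0.5):
    """Sample the lens adjustment range in steps smaller than a fraction
    of the depth of field, so no focus distance is skipped."""
    step = depth_of_field * fraction
    settings, z = [], z_min
    while z <= z_max:
        settings.append(round(z, 6))
        z += step
    return settings

def fine_focus(sharpness, coarse, half_width, step):
    """Pick the best coarse setting, then fine-adjust in a small window
    around it using the sharpness (barcode-likelihood) metric."""
    best = max(coarse, key=sharpness)
    n = round(2 * half_width / step)
    candidates = [best - half_width + i * step for i in range(n + 1)]
    return max(candidates, key=sharpness)
```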
30.
SYSTEMS AND METHODS FOR DETECTING AND ADDRESSING ISSUE CLASSIFICATIONS FOR OBJECT SORTING
Some embodiments of the disclosure provide systems and methods for improving sorting and routing of objects, including in sorting systems. Characteristic dimensional data for one or more objects with common barcode information can be compared to dimensional data of another object with the common barcode information to evaluate a classification (e.g., a side-by-side exception) of the other object. In some cases, the evaluation can include identifying the classification as incorrect (e.g., as a false side-by-side exception).
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
B07C 5/10 - Sorting according to size measured by light-responsive means
B07C 3/14 - Apparatus characterised by the means used for detection of the destination using light-responsive detecting means
31.
System and method for automatic generation of human-machine interface in a vision system
The invention provides a system and method that automatically generates a user interface (HMI) based on a selection of spreadsheet cells. The spreadsheet controls operations within the processor(s) of one or more vision system cameras. After selecting a range of cells in the spreadsheet, the user applies the system and method by pressing a button, or using a menu command, that results in an automatically generated HMI with appropriate scaling of interface elements and a desired layout of such elements on the associated screen. Advantageously, the system and method essentially reduces the user's workflow to two steps: selecting spreadsheet cells and generating the HMI around them. The generated HMI runs in a web browser that can be instantiated on a user device, and communicates directly to a vision system processor(s). Data can pass directly between the user interface running in a web browser and the vision system processor(s).
Systems and methods are provided for acquiring images of objects using an imaging device and a controllable mirror. The controllable mirror can be controlled to change a field of view for the imaging device, including so as to acquire images of different locations, of different parts of an object, or with different degrees of zoom.
G02B 7/182 - Mountings, adjusting means, or light-tight connections, for optical elements for mirrors for mirrors
H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
G06V 10/147 - Optical characteristics of the device performing the acquisition or on the illumination arrangements - Details of sensors, e.g. sensor lenses
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
33.
Forming a homogenized illumination line which can be imaged as a low-speckle line
A system for forming a homogenized illumination line which can be imaged as a low-speckle line is disclosed. The system includes a laser configured to emit a collimated laser beam; and an illumination-fan generator that includes one or more linear diffusers. The illumination-fan generator is arranged and configured to (i) receive the collimated laser beam, (ii) output a planar fan of diffused light, such that the planar fan emanates from a light line formed on the distal-most one of the one or more linear diffusers, and (iii) cause formation of an illumination line at an intersection of the planar fan and an object.
G01N 21/89 - Investigating the presence of flaws, defects or contamination in moving material, e.g. paper, textiles
G01N 21/88 - Investigating the presence of flaws, defects or contamination
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
34.
Systems and methods for detecting motion during 3D data reconstruction
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for detecting movement in a scene. A first temporal pixel image is generated based on a first set of images of a scene over time, and a second temporal pixel image is generated based on a second set of images. One or more derived values are determined based on values of the temporal pixels in the first temporal pixel image, the second temporal pixel image, or both. Correspondence data, indicative of a set of correspondences between image points of the first set of images and image points of the second set of images, is determined based on the first temporal pixel image and the second temporal pixel image. An indication of whether there is a likelihood of motion in the scene is determined based on the one or more derived values and the correspondence data.
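The steps above can be illustrated with a minimal sketch. The abstract does not specify which derived value is used; here the per-pixel temporal variance is assumed as one plausible choice, and the correspondence data is reduced to a single match ratio. All names and thresholds are hypothetical.

```python
import numpy as np

def temporal_pixel_image(images):
    """Stack a time-ordered list of 2-D grayscale images into an array of
    shape (H, W, T): entry (i, j) holds that pixel's intensity sequence."""
    return np.stack(images, axis=-1)

def motion_likely(tpi_1, tpi_2, correspondence_ratio,
                  var_threshold=25.0, corr_threshold=0.5):
    """Derived value: mean temporal variance across both temporal pixel
    images. Motion is flagged when pixel intensities vary strongly over
    time while stereo correspondences are sparse (moving content tends to
    break correspondence matching)."""
    derived = 0.5 * (tpi_1.var(axis=-1).mean() + tpi_2.var(axis=-1).mean())
    return bool(derived > var_threshold and correspondence_ratio < corr_threshold)
```

A static scene yields near-zero temporal variance and a high correspondence ratio, so no motion is indicated; flickering or moving content raises the variance and depresses the match ratio.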
This invention overcomes disadvantages of the prior art by providing a vision system and method of use, and graphical user interface (GUI), which employs a camera assembly having an on-board processor of low to modest processing power. At least one vision system tool analyzes image data, and generates results therefrom, based upon a deep learning process. A training process provides training image data to a processor remote from the on-board processor to cause generation of the vision system tool therefrom, and provides a stored version of the vision system tool for runtime operation on the on-board processor. The GUI allows manipulation of thresholds applicable to the vision system tool and refinement of training of the vision system tool by the training process. A scoring process allows unlabeled images from a set of acquired and/or stored images to be selected automatically for labelling as training images using a computed confidence score.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04842 - Selection of displayed objects or displayed text elements
36.
SYSTEM AND METHOD FOR DETERMINING 3D SURFACE FEATURES AND IRREGULARITIES ON AN OBJECT
This invention provides a system and method for determining the location and characteristics of certain surface features that comprise elevated or depressed regions with respect to a smooth surrounding surface on an object. A filter acts on a range image of the scene. The filter defines an annulus or other perimeter shape around each pixel, within which a best-fit surface is established. A normal to the pixel allows derivation of local displacement height. The displacement height is used to establish a height deviation image of the object, with which bumps, dents or other height-displacement features can be determined. The bump filter can be used to locate regions on a surface with minimal irregularities by mapping such irregularities to a grid and then thresholding the grid to generate a cost function. Regions with a minimal cost are acceptable candidates for application of labels and other items for which a smooth surface is desirable.
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness for measuring thickness
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00, of systems according to group G01S 17/00
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
G06T 7/44 - Analysis of texture based on statistical description of texture using image operators, e.g. filters, edge density metrics or local histograms
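The annulus/best-fit step of the bump filter described above can be sketched as follows. This is an illustrative least-squares version under the assumption of a plane fit (the abstract says "best-fit surface" without naming the model); the function name and window radii are hypothetical.

```python
import numpy as np

def height_deviation(range_img, i, j, r_in=2, r_out=4):
    """Bump-filter step for one pixel of a range image: fit a plane
    z = a*dx + b*dy + c (least squares) to the annulus r_in <= r <= r_out
    around pixel (i, j), then return the pixel's height above that
    locally fitted surface (its local displacement height)."""
    ys, xs = np.mgrid[-r_out:r_out + 1, -r_out:r_out + 1]
    r = np.hypot(xs, ys)
    mask = (r >= r_in) & (r <= r_out)
    patch = range_img[i - r_out:i + r_out + 1, j - r_out:j + r_out + 1]
    A = np.column_stack([xs[mask], ys[mask], np.ones(mask.sum())])
    coeffs, *_ = np.linalg.lstsq(A, patch[mask], rcond=None)
    return range_img[i, j] - coeffs[2]   # c is the plane height at the center
```

Applied at every pixel, these deviations form the height deviation image; a smooth tilted surface yields zero deviation while a bump yields its height above the surrounding fit.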
37.
System and method for configuring an ID reader using a mobile device
A system and method for communicating at least one of updated configuration information and hardware setup recommendations to a user of an ID decoding vision system is provided. An image of an object containing one or more IDs is acquired with a mobile device. The ID associated with the object is decoded to derive information. Physical dimensions of the ID associated with the object are determined. Based on the information and the dimensions, configuration data can be transmitted to a remote server that automatically determines setup information for the vision system based upon the configuration data. The remote server thereby transmits at least one of (a) updated configuration information to the vision system and (b) hardware setup recommendations to a user of the vision system based upon the configuration data.
G06K 9/80 - Combination of image preprocessing and recognition functions
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
H04M 1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
H04M 1/02 - Constructional features of telephone sets
H04M 1/2755 - Devices whereby a plurality of signals may be stored simultaneously with provision for storing more than one subscriber number at a time using static electronic memories, e.g. chips providing data content by optical scanning
38.
System and method for refining dimensions of a generally cuboidal 3D object imaged by 3D vision system and controls for the same
A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object acquired by an image sensor of the vision system processor is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
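The statistical-refinement idea above can be illustrated with a deliberately simple sketch. The patent does not disclose which statistics are chosen; percentile-trimmed extents are assumed here purely for illustration, and the function name and parameters are hypothetical.

```python
import numpy as np

def refined_cuboid_dims(points, lo=2.0, hi=98.0):
    """Refinement sketch: given an N x 3 array of 3-D points believed to
    cover an axis-aligned, approximately cuboidal object, estimate length,
    width, and height as robust extents between the `lo` and `hi`
    percentiles per axis. Trimming suppresses sensor outliers that would
    inflate a naive min/max bounding box."""
    lo_p = np.percentile(points, lo, axis=0)
    hi_p = np.percentile(points, hi, axis=0)
    length, width, height = hi_p - lo_p
    return length, width, height
```

A single spurious point far outside the object barely moves the percentile extents, whereas it would dominate a min/max estimate.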
This invention provides a system and method that efficiently detects objects imaged using a 3D camera arrangement by referencing a cylindrical or spherical surface represented by a point cloud, and measures variant features of an extracted object, including volume, height, center of mass, bounding box, and other relevant metrics. The system and method, advantageously, operates directly on unorganized and un-ordered points, requiring neither a mesh/surface reconstruction nor a voxel grid representation of object surfaces in a point cloud. Based upon a cylinder/sphere reference model, an acquired 3D point cloud is flattened. Object (blob) detection is carried out in the flattened 3D space, and objects are converted back to the 3D space to compute the features, which can include regions that differ from the regular shape of the cylinder/sphere. Downstream utilization devices and/or processes, such as a part-reject mechanism and/or robot manipulators, can act on the identified feature data.
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06T 3/00 - Geometric image transformation in the plane of the image
An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.
This invention provides a single-camera vision system, typically for use in logistics applications, that allows for adjustment of the camera viewing angle to accommodate a wide range of object heights and associated widths moving relative to an imaged scene with constant magnification. The camera assembly employs an image sensor that is more particularly suited to such applications, with an aspect (height-to-width) ratio of approximately 1:4 to 1:8. The camera assembly includes a distance sensor to determine the distance to the top of each object. The camera assembly employs a zoom lens that can change at relatively high speed (e.g. <10 ms) to allow adjustment of the viewing angle from object to object as each one passes under the camera's field of view (FOV). Optics that allow the image to be resolved on the image sensor within the desired range of viewing angles are provided in the camera lens assembly.
This invention provides a vision system with an exchangeable illumination assembly that allows for increased versatility in the type and configuration of illumination supplied to the system without altering the underlying optics, sensor, vision processor, or the associated housing. The vision system housing includes a front plate that optionally includes a plurality of mounting bases for accepting different types of lenses, and a connector that allows removable interconnection with the illustrative illumination assembly. The illumination assembly includes a cover that is light transmissive. The cover encloses an illumination component that can include a plurality of lighting elements that surround an aperture through which received light rays from the imaged scene pass through to the lens. The arrangement of lighting elements is highly variable and the user can be supplied with an illumination assembly that best suits its needs without need to change the vision system processor, sensor or housing.
An apparatus for controlling a depth of field for a reader in a vision system includes a dual aperture assembly having an inner region and an outer region. A first light source can be used to generate a light beam associated with the inner region and a second light source can be used to generate a light beam associated with the outer region. The depth of field of the reader can be controlled by selecting one of the first light source and second light source to illuminate an object to acquire an image of the object. The selection of the first light source or the second light source can be based on at least one parameter of the vision system.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
45.
Systems and methods for decoding two-dimensional matrix symbols with incomplete or absent fixed patterns
Systems and methods for reading a two-dimensional matrix symbol or for determining if a two-dimensional matrix symbol is decodable are disclosed. The systems and methods can include a data reading algorithm that receives an image, locates at least a portion of the data modules within the image without using a fixed pattern, fits a model of the module positions from the image, extrapolates the model resulting in predicted module positions, determines module values from the image at the predicted module positions, and extracts a binary matrix from the module values.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
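The model-fit/extrapolate/sample pipeline described above can be sketched with an affine grid model. The patent does not specify the model class; an affine mapping from module indices to image coordinates is assumed here, and all function names are hypothetical.

```python
import numpy as np

def fit_grid_model(rowcols, xys):
    """Fit an affine model mapping module indices (row, col) to image
    coordinates (x, y): [x y] = [r c 1] @ M, by least squares over the
    modules that were located without relying on a fixed (finder) pattern."""
    A = np.column_stack([rowcols, np.ones(len(rowcols))])
    M, *_ = np.linalg.lstsq(A, xys, rcond=None)
    return M

def predict_positions(M, n_rows, n_cols):
    """Extrapolate the fitted model to every module position in the grid."""
    rc = np.array([(r, c) for r in range(n_rows) for c in range(n_cols)])
    return np.column_stack([rc, np.ones(len(rc))]) @ M

def extract_binary_matrix(image, M, n_rows, n_cols):
    """Sample the image at the predicted module positions and threshold
    the sampled values to obtain the binary matrix."""
    pts = np.rint(predict_positions(M, n_rows, n_cols)).astype(int)
    vals = image[pts[:, 1], pts[:, 0]]          # (x, y) -> image[y, x]
    return (vals > image.mean()).astype(int).reshape(n_rows, n_cols)
```

With only a handful of located modules, the fitted model predicts the remaining module centers, so a symbol with a damaged or absent finder pattern can still be read.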
46.
SYSTEM AND METHOD FOR EXTENDING DEPTH OF FIELD FOR 2D VISION SYSTEM CAMERAS IN THE PRESENCE OF MOVING OBJECTS
This invention provides a system and method for enhanced depth of field (DOF), advantageously used in logistics applications, scanning for features and ID codes on objects. It effectively combines a vision system, a glass lens designed for on-axis and Scheimpflug configurations, a variable lens and a mechanical system to adapt the lens to the different configurations without detaching the optics. The optics can be steerable, allowing adjustment between variable angles so as to optimize the viewing angle, and thus the DOF, for the object in a Scheimpflug configuration. One, or a plurality, of images can be acquired of the object at one, or differing, angle settings, with the entire region of interest clearly imaged. In another implementation, the optical path can include a steerable mirror and a folding mirror overlying the region of interest, which allows multiple images to be acquired at different locations on the object.
G03B 5/06 - Swinging lens about normal to the optical axis
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
47.
Handheld ID-reading system with integrated illumination assembly
This invention provides an ID reader, typically configured for handheld operation, which integrates three types of illumination into a compact package that delivers robust performance and resistance to harsh environmental conditions, such as dust and moisture. These illumination types include direct (diffuse) light, low-angle light and polarized light. The ID reader includes a sealed reader module assembly having the illuminators in combination with an imager assembly (optics and image sensor) at its relative center. Additionally, an on-axis aimer and a variable focus system with a liquid lens are integrated in this module and placed on axis using a mirror assembly that includes a dichroic filter. As the optimal distance to read a code with low-angle light is typically shorter than the optimal distance to use the polarized illumination, the variable (e.g. liquid) lens can adjust the focus of the reader to the optimal distance for the selected illumination.
Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
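Once both transformations are known, coordinates from either sensor can be related through the motion device's frame by composing the transforms. The sketch below assumes 2-D rigid transforms in homogeneous form; the function names and the convention that each transform maps sensor coordinates into the motion frame are illustrative assumptions.

```python
import numpy as np

def rigid_2d(rotation_deg, tx, ty):
    """Build a 3x3 homogeneous 2-D rigid transform (rotation + translation)."""
    t = np.radians(rotation_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]])

def map_sensor2_to_sensor1(T_motion_from_s1, T_motion_from_s2, pt_s2):
    """With T1 mapping sensor-1 coordinates into the motion frame and T2
    doing the same for sensor 2, a point observed by sensor 2 maps into
    sensor 1's coordinate system via inv(T1) @ T2."""
    p = np.array([pt_s2[0], pt_s2[1], 1.0])
    q = np.linalg.inv(T_motion_from_s1) @ T_motion_from_s2 @ p
    return q[:2]
```

Composing the two calibrated transforms this way is what lets the two sensors cooperate through the shared motion coordinate system.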
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for data pre-processing for stereo-temporal image sequences to improve three-dimensional data reconstruction. In some aspects, the techniques described herein relate to systems, methods, and computer readable media for improved correspondence refinement for image areas affected by oversaturation. In some aspects, the techniques described herein relate to systems, methods, and computer readable media configured to fill missing correspondences to improve three-dimensional (3-D) reconstruction. The techniques include identifying image points without correspondences, using existing correspondences and/or other information to generate approximated correspondences, and cross-checking the approximated correspondences to determine whether the approximated correspondences should be used for the image processing.
Systems and methods reduce temperature induced drift effects on a liquid lens used in a vision system. A feedback loop receives a temperature value from a temperature sensor, and based on the received temperature value, controls a power to the heating element based on a difference between the measured temperature of the liquid lens and a predetermined control temperature to maintain the temperature value within a predetermined control temperature range to reduce the effects of drift. A processor can also control a bias signal applied to the lens or a lens actuator to control temperature variations and the associated induced drift effects. An image sharpness can also be determined over a series of images, alone or in combination with controlling the temperature of the liquid lens, to adjust a focal distance of the lens.
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
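The heater feedback loop described above can be illustrated with a minimal proportional controller. The abstract only states that power is controlled from the temperature difference; proportional control, the gain, and the clamping limits are assumptions for this sketch.

```python
def heater_power(measured_temp, control_temp, gain=0.5, max_power=1.0):
    """Proportional feedback sketch: heater power is driven by the
    difference between the measured liquid-lens temperature and the
    predetermined control temperature, clamped to [0, max_power].
    The heating element cannot cool, so negative demand saturates at 0."""
    demand = gain * (control_temp - measured_temp)
    return min(max(demand, 0.0), max_power)
```

In a loop, the temperature sensor is sampled, the power is recomputed, and the lens temperature settles near the control value, which suppresses the temperature-induced focal drift.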
An illumination system is provided for an optical system that includes an imaging device for acquiring an image of a target, for decoding of a symbol or other analysis. The illumination system can include a first light source configured to provide illumination of a first wavelength and a second light source configured to provide illumination of a second wavelength that is different from the first wavelength. The light sources can be controlled for operations that include: illuminating the target with the first and second light sources simultaneously for acquisition of the image of the target; and altering an illumination output of at least one of the first light source or the second light source, while maintaining non-zero illumination output for at least one of the first light source or the second light source, to indicate a status of the optical system.
This invention applies dynamic weighting between a point-to-plane and point-to-edge metric on a per-edge basis in an acquired image using a vision system. This allows an applied ICP technique to be significantly more robust to a variety of object geometries and/or occlusions. A system and method herein provides an energy function that is minimized to generate candidate 3D poses for use in alignment of runtime 3D image data of an object with model 3D image data. Since normals are much more accurate than edges, the use of normals is desirable when possible. However, in some use cases, such as a plane, edges provide information in directions that the normals do not. Hence the system and method defines a “normal information matrix”, which represents the directions in which sufficient information is present. Performing, for example, a principal component analysis (PCA) on this matrix provides a basis for the available information.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
H04N 13/275 - Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
G06T 17/10 - Volume description, e.g. cylinders, cubes or using CSG [Constructive Solid Geometry]
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G06V 30/24 - Character recognition characterised by the processing or recognition method
G06F 18/22 - Matching criteria, e.g. proximity measures
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
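The normal information matrix and its PCA, as described above, can be sketched as follows. The accumulation of outer products and the eigenvalue-ratio test are illustrative assumptions; the patent does not disclose the exact thresholding.

```python
import numpy as np

def normal_information_matrix(normals):
    """Accumulate sum(n n^T) over unit surface normals (an N x 3 array).
    The eigenvalues of this 3x3 matrix measure how strongly the
    point-to-plane metric constrains each principal direction."""
    n = np.asarray(normals, dtype=float)
    return n.T @ n

def deficient_directions(normals, ratio=0.05):
    """PCA (eigendecomposition) of the information matrix: return the unit
    directions whose eigenvalue falls below `ratio` of the largest, i.e.
    directions in which the normals carry no information and the
    point-to-edge terms must supply the constraint."""
    w, v = np.linalg.eigh(normal_information_matrix(normals))
    return [v[:, k] for k in range(3) if w[k] < ratio * w.max()]
```

For a single plane, all normals are parallel, so two eigenvalues vanish and the in-plane translations must be constrained by edges; for a well-varied surface, no direction is deficient and the more accurate point-to-plane terms dominate.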
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud is received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and width that extends along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane is determined. An estimated volume of an object captured by the 3D point cloud is determined based on the calculated number of 3D points in each bin and the height of each bin.
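The binning-based volume estimate above can be sketched directly. Taking the bin height as the maximum point height and requiring a minimum point count per bin are illustrative assumptions; the function name and parameters are hypothetical.

```python
import numpy as np

def estimate_volume(points, reference_z=0.0, cell=1.0, min_points=1):
    """Volume sketch for a 3-D point cloud above the plane z = reference_z:
    rasterise the points into a 2-D grid of `cell` x `cell` bins along the
    plane, take each bin's height as the maximum point height in the bin,
    and sum height * cell area over bins holding at least `min_points`
    points (sparse bins are treated as noise and skipped)."""
    p = np.asarray(points, dtype=float)
    ij = np.floor(p[:, :2] / cell).astype(int)
    volume = 0.0
    for key in {tuple(k) for k in ij}:
        in_bin = (ij == key).all(axis=1)
        if in_bin.sum() >= min_points:
            volume += (p[in_bin, 2].max() - reference_z) * cell * cell
    return volume
```

A flat-topped box whose top surface is densely sampled thus yields approximately footprint area times height.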
This invention provides a system and method for using an area scan sensor of a vision system, in conjunction with an encoder or other knowledge of motion, to capture an accurate measurement of an object larger than a single field of view (FOV) of the sensor. It identifies features/edges of the object, which are tracked from image to image, thereby providing a lightweight way to process the overall extents of the object for dimensioning purposes. Logic automatically determines if the object is longer than the FOV, and thereby causes a sequence of image acquisition snapshots to occur while the moving/conveyed object remains within the FOV until the object is no longer present in the FOV. At that point, acquisition ceases and the individual images are combined as segments in an overall image. These images can be processed to derive overall dimensions of the object based on input application details.
The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
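The one- and two-dimensional histograms described above can be sketched as follows. Using a reference point for the distance histogram and (azimuth, elevation) as the two orientation components are illustrative assumptions; the abstract leaves both choices open.

```python
import numpy as np

def distance_histogram(points, reference_point, bin_width=1.0, n_bins=10):
    """1-D histogram sketch: each entry counts the 3-D points whose
    distance to a reference falls within that entry's range of distances."""
    d = np.linalg.norm(np.asarray(points, float) - reference_point, axis=1)
    hist, _ = np.histogram(d, bins=n_bins, range=(0.0, bin_width * n_bins))
    return hist

def orientation_histogram(normals, n_az=8, n_el=4):
    """2-D histogram sketch: decompose each unit orientation vector into a
    first component (azimuth) and a second component (elevation) and bin
    the pair; each bin covers a range of values for each component."""
    n = np.asarray(normals, float)
    az = np.arctan2(n[:, 1], n[:, 0])
    el = np.arcsin(np.clip(n[:, 2], -1.0, 1.0))
    hist, _, _ = np.histogram2d(
        az, el, bins=(n_az, n_el),
        range=((-np.pi, np.pi), (-np.pi / 2, np.pi / 2)))
    return hist
```

Such histograms summarize a cloud's shape compactly, e.g. for comparing a scanned part against a reference distribution.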
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions is connected to neighboring representative 2D positions to generate the 2D profile.
G06F 18/2134 - Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods based on separation criteria, e.g. independent component analysis
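The project-bin-connect procedure above can be sketched compactly. Taking the per-bin mean as the representative 2-D position is an assumption (the abstract does not name the statistic), and the function name is hypothetical.

```python
import numpy as np

def profile_2d(points, n_bins=10):
    """Profile sketch: project 3-D points within a region of interest onto
    their first- and second-axis coordinates, group the resulting 2-D
    points into `n_bins` bins arranged along the first axis, take each
    bin's representative position as the mean of its points, and return
    the representatives in order; consecutive rows form the connected
    polyline that is the 2-D profile."""
    p = np.asarray(points, float)[:, :2]              # drop the depth axis
    edges = np.linspace(p[:, 0].min(), p[:, 0].max(), n_bins + 1)
    idx = np.clip(np.digitize(p[:, 0], edges) - 1, 0, n_bins - 1)
    reps = [p[idx == b].mean(axis=0) for b in range(n_bins) if (idx == b).any()]
    return np.array(reps)
```

Binning averages out sensor noise across the region's depth, so the profile is smoother than any single cross-sectional slice.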
58.
METHODS AND APPARATUS FOR IDENTIFYING SURFACE FEATURES IN THREE-DIMENSIONAL IMAGES
The techniques described herein relate to methods, apparatus, and computer readable media configured to identify a surface feature of a portion of a three-dimensional (3D) point cloud. Data indicative of a path along a 3D point cloud is received, wherein the 3D point cloud comprises a plurality of 3D data points. A plurality of lists of 3D data points are generated, wherein: each list of 3D data points extends across the 3D point cloud at a location that intersects the received path; and each list of 3D data points intersects the received path at different locations. A characteristic associated with a surface feature is identified in at least some of the plurality of lists of 3D data points. The identified characteristics are grouped based on one or more properties of the identified characteristics. The surface feature is identified based on the grouped characteristics.
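The list/characteristic/grouping steps above can be illustrated on a gridded height map. Representing the lists as fixed windows around a per-row path column, detecting a threshold-exceeding peak as the characteristic, and grouping by row adjacency are all simplifying assumptions for this sketch.

```python
import numpy as np

def cross_section_peaks(height_map, path_cols, threshold):
    """For each row of a gridded height map, take the list of heights
    crossing the received path (given as one column index per row) and
    record a characteristic: here, the location of a peak exceeding
    `threshold` near the path."""
    peaks = []
    for row, col in enumerate(path_cols):
        start = max(col - 2, 0)
        window = height_map[row, start:col + 3]     # list crossing the path
        if window.max() > threshold:
            peaks.append((row, start + int(window.argmax())))
    return peaks

def group_peaks(peaks, max_gap=1):
    """Group characteristics found in consecutive (or near-consecutive)
    lists: a run of nearby peaks indicates one surface feature, e.g. a
    seam or bead running along the path."""
    groups = []
    for p in peaks:
        if groups and p[0] - groups[-1][-1][0] <= max_gap:
            groups[-1].append(p)
        else:
            groups.append([p])
    return groups
```

A ridge crossing three consecutive lists thus yields three per-list characteristics that group into a single identified surface feature.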
This invention provides a system and method that performs 3D imaging of a complex object, where image data is likely lost. Available 3D image data, in combination with an absence/loss of image data, allows computation of x, y and z dimensions. Absence/loss of data is treated as just another type of image data, representing the presence of something that has prevented accurate data from being generated in the subject image. Segments of data can be connected to areas of absent data to generate a maximum bounding box. The shadow that this object generates can be represented as negative or missing data, but is not representative of the physical object. The height from the positive data, the object shadow size based on that height, the location in the FOV, and the ray angles that generate the images are estimated, and the object shadow size is removed from the result.
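The shadow-removal geometry above reduces to a simple trigonometric correction in the basic case of a single oblique illumination ray. This sketch assumes a flat-topped object and a known ray angle from vertical; the function name is hypothetical.

```python
import math

def corrected_length(measured_extent, object_height, ray_angle_deg):
    """Shadow-correction sketch: with illumination arriving at
    `ray_angle_deg` from vertical, an object of height h casts a band of
    missing data of length h * tan(angle) beyond its far edge. Subtracting
    that band from the measured extent (positive data plus missing-data
    region) recovers the physical dimension."""
    shadow = object_height * math.tan(math.radians(ray_angle_deg))
    return measured_extent - shadow
```

For example, a 30 mm tall object imaged under 45-degree rays extends its apparent footprint by 30 mm of shadow, which this correction removes.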
A base station or handheld device can be equipped with a latch system or a multi-hinge arrangement for electrical contacts. The latch system can be adjustable between different latching configurations in which the base station and handheld device are retained together by different degrees. The multi-hinge arrangement can provide rotation about multiple axes to provide rolling contact between electrical contacts of the base station and the handheld device.
A computer-implemented method for scanning a side of an object to identify a region of interest is provided. The method can include determining, using one or more computing devices, a distance between a side of an object and an imaging device, determining, using the one or more computing devices, a scanning pattern for an imaging device that includes a controllable mirror, based on the distance between the side of the object and the imaging device, moving a controllable mirror according to the scanning pattern to acquire, using the one or more computing device and the imaging device, a plurality of images of the side of the object, and identifying, using the one or more computing devices, the region of interest based on the plurality of images.
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G05D 1/02 - Control of position or course in two dimensions
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
G06V 20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
G06V 10/147 - Optical characteristics of the device performing the acquisition or on the illumination arrangements - Details of sensors, e.g. sensor lenses
62.
Body portion of a lighting device for imaging systems
Disclosed is a defect inspection device for determining anomalies of an inspection object. The defect inspection device may include: a lighting system which includes a light source for transmitting light onto the inspection object; a dynamic diffuser located between the light source and the inspection object and capable of controlling a diffusivity of light transmitted onto the inspection object; and one or more processors for controlling the dynamic diffuser based on characteristics of the inspection object.
Disclosed is a defect inspection device. The defect inspection device may include a lighting system designed for transmitting a lighting pattern having different illuminances for each area on a surface of an inspection object; a photographing unit for obtaining image data of the inspection object; one or more processors for processing the image data; and a memory for storing a deep learning-based model. In addition, the one or more processors are adapted to: control the lighting system to transmit a lighting pattern having a different illuminance for each area on the surface of the inspection object; input the image data obtained by the photographing unit into the deep learning-based model, wherein the image data includes a rapid change of illuminance in at least a part of the object surface; and determine a defect on the surface of the inspection object using the deep learning-based model.
This invention provides a system and method for calibration of a 3D vision system using a multi-layer 3D calibration target that removes the requirement of accurate pre-calibration of the target. The system and method acquire images of the multi-layer 3D calibration target at different spatial locations and at different times, and compute the orientation difference of the 3D calibration target between the two acquisitions. The technique can be used to perform vision-based single-plane orientation repeatability inspection and monitoring. By applying this technique to an assembly working plane, vision-based assembly working plane orientation repeatability inspection and monitoring can occur. Combined with a moving robot end effector, this technique provides vision-based robot end-effector orientation repeatability inspection and monitoring. Vision-guided adjustment of two planes to achieve parallelism can also be achieved. The system and method operate to perform precise vision-guided robot setup to achieve parallelism of the robot's end-effector and the assembly working plane.
A modular vision system that can include a housing with a faceplate and a first and second optical module mounted to the faceplate. Each of the first and second optical modules can include a mounting body, a rectangular image sensor, and an imaging lens that defines an optical axis and a field of view. The first optical module can be configured to be mounted to the faceplate in a first plurality of mounting orientations and the second optical module can be configured to be mounted to the faceplate in a second plurality of mounting orientations. The first and second optical modules can thus collectively provide a plurality of imaging configurations.
H04N 23/45 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
This invention provides a vision system camera, and associated methods of operation, having a multi-core processor, a high-speed, high-resolution imager, an FOVE, an auto-focus lens, and an imager-connected pre-processor that pre-processes image data, providing the acquisition and processing speed, as well as the image resolution, that are highly desirable in a wide range of applications. This arrangement effectively scans objects that require a wide field of view, vary in size, and move relatively quickly with respect to the system field of view. The vision system provides a physical package with a wide variety of physical interconnections to support various options and control functions. The package effectively dissipates internally generated heat by arranging components to optimize heat transfer to the ambient environment, and includes dissipating structure (e.g. fins) to facilitate such transfer. The system also enables a wide range of multi-core processes to optimize and load-balance both image processing and system operation (i.e. auto-regulation tasks).
Embodiments relate to predicting height information for an object. First distance data is determined at a first time when an object is at a first position that is only partially within the field-of-view. Second distance data is determined at a second, later time when the object is at a second, different position that is only partially within the field-of-view. A distance measurement model that models a physical parameter of the object within the field-of-view is determined for the object based on the first and second distance data. Third distance data indicative of an estimated distance to the object prior to the object being entirely within the field-of-view of the distance sensing device is determined based on the first distance data, the second distance data, and the distance measurement model. Data indicative of a height of the object is determined based on the third distance data.
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness for measuring thickness
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/28 - Measuring arrangements characterised by the use of optical techniques for measuring areas
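The extrapolation described above can be sketched with a simple linear distance-vs-time model; treating the model as linear and assuming an overhead distance sensor at a known mounting height are both simplifications for illustration, not the patented distance measurement model.

```python
def predict_height(t1, d1, t2, d2, t3, sensor_height):
    """Fit a linear distance-measurement model to two partial-view readings
    and extrapolate a third distance before the object is fully in view.

    t1, d1 / t2, d2: times and measured distances while the object is only
    partially within the field-of-view; t3: the time for which an estimated
    distance is wanted; sensor_height: mounting height of the distance
    sensing device above the transport surface (an assumption here).
    """
    rate = (d2 - d1) / (t2 - t1)      # modeled change in distance over time
    d3 = d2 + rate * (t3 - t2)        # third (estimated) distance data
    return sensor_height - d3         # data indicative of the object's height
```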
69.
Vision systems and methods of making and using the same
Vision systems and methods for acquiring an image of an image scene and/or measuring a three-dimensional location of an object are disclosed. The vision systems can include a single image sensor, a first optical path, and a second optical path. The first optical path can be selectively transmissive of a first light, the second optical path can be selectively transmissive of a second light, and the first and second light can have a different distinguishing characteristic.
H04N 13/218 - Image signal generators using stereoscopic image cameras using a single 2D image sensor using spatial multiplexing
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
G02B 30/35 - Stereoscopes providing a stereoscopic pair of separated images corresponding to parallactically displaced views of the same object, e.g. 3D slide viewers using reflective optical elements in the optical path between the images and the observer
This invention provides a system and method for finding line features in an image that allows multiple lines to be efficiently and accurately identified and characterized. When lines are identified, the user can train the system to associate predetermined (e.g. text) labels with respect to such lines. These labels can be used to define neural net classifiers. The neural net operates at runtime to identify and score lines in a runtime image that are found using a line-finding process. The found lines can be displayed to the user with labels and an associated probability score map based upon the neural net results. Lines that are not labeled are generally deemed to have a low score, and are either not flagged by the interface, or identified as not relevant.
G06T 7/143 - Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
G06K 9/62 - Methods or arrangements for recognition using electronic means
72.
Systems and method for vision inspection with multiple types of light
Systems and methods are provided for acquiring images of objects. Light of different types (e.g., different polarization orientations) can be directed onto an object from different respective directions (e.g., from different sides of the object). A single image acquisition can be executed in order to acquire different sub-images corresponding to the different light types. An image of a surface of the object, including representation of surface features of the surface, can be generated based on the sub-images.
A base station or handheld device can be equipped with a latch system or a multi-hinge arrangement for electrical contacts. The latch system can be adjustable between different latching configurations in which the base station and handheld device are retained together by different degrees. The multi-hinge arrangement can provide rotation about multiple axes to provide rolling contact between electrical contacts of the base station and the handheld device.
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
H05K 5/02 - Casings, cabinets or drawers for electric apparatus - Details
74.
On-axis aimer for vision system and multi-range illuminator for same
This invention provides an aimer assembly for a vision system that is coaxial (on-axis) with the camera optical axis, thus providing an aligned aim point at a wide range of working distances. The aimer includes a projecting light element located to the side of the camera optical axis. The beam and received light from the imaged (illuminated) scene are selectively reflected or transmitted through a dichroic mirror assembly in a manner that permits the beam to be aligned with the optical axis and projected to the scene, while only light from the scene is received by the sensor. The aimer beam and illuminator employ differing light wavelengths. In a further embodiment, an internal illuminator includes a plurality of light sources below the camera optical axis. Some of the light sources are covered by a prismatic structure for close distances, and other light sources are collimated, projecting over a longer distance.
This invention provides a system and method for finding multiple line features in an image. Two related steps are used to identify line features. First, the process computes x- and y-components of the gradient field at each image location, projects the gradient field over a plurality of subregions, and detects a plurality of gradient extrema, yielding a plurality of edge points with position and gradient. Next, the process iteratively chooses two edge points, fits a model line to them, and, if the edge point gradients are consistent with the model, computes the full set of inlier points whose position and gradient are consistent with that model. The candidate line with the greatest inlier count is retained, and the set of remaining outlier points is derived. The process then repeatedly applies the line-fitting operation on this and subsequent outlier sets to find a plurality of line results. The process can be exhaustive RANSAC-based.
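The sample-fit-count-recurse loop above can be sketched as a toy RANSAC line finder. This version works on bare points and omits the gradient-consistency test and the edge-point extraction stage; the point format, distance tolerance, and fixed iteration budget are all illustrative assumptions.

```python
import math
import random

def fit_line(p1, p2):
    """Return (a, b, c) for the line a*x + b*y + c = 0 through two points,
    normalized so that a**2 + b**2 == 1 (distance = |a*x + b*y + c|)."""
    (x1, y1), (x2, y2) = p1, p2
    a, b = y2 - y1, x1 - x2
    norm = math.hypot(a, b)
    a, b = a / norm, b / norm
    return a, b, -(a * x1 + b * y1)

def find_lines(points, dist_tol=0.5, min_inliers=2, n_lines=2,
               iters=200, seed=0):
    """Repeatedly keep the candidate line with the greatest inlier count,
    then recurse on the remaining outlier set to find further lines."""
    rng = random.Random(seed)
    remaining = list(points)
    lines = []
    for _ in range(n_lines):
        if len(remaining) < 2:
            break
        best_line, best_inliers = None, []
        for _ in range(iters):
            p1, p2 = rng.sample(remaining, 2)
            if p1 == p2:
                continue
            a, b, c = fit_line(p1, p2)
            inliers = [p for p in remaining
                       if abs(a * p[0] + b * p[1] + c) <= dist_tol]
            if len(inliers) > len(best_inliers):
                best_line, best_inliers = (a, b, c), inliers
        if best_line is None or len(best_inliers) < min_inliers:
            break
        lines.append(best_line)
        inlier_set = set(best_inliers)
        remaining = [p for p in remaining if p not in inlier_set]
    return lines
```

An exhaustive variant, as the abstract permits, would enumerate all point pairs instead of sampling randomly.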
Systems and methods are provided for acquiring images of objects using an imaging device and a controllable mirror. The controllable mirror can be controlled to change a field of view for the imaging device, including so as to acquire images of different locations, of different parts of an object, or with different degrees of zoom.
A calibration fixture that enables more accurate calibration of a touch probe on, for example, a CMM, with respect to the camera. The camera is mounted so that its optical axis is approximately or substantially parallel with the z-axis of the probe. The probe and workpiece are in relative motion along a plane defined by orthogonal x and y axes, and optionally along the z-axis and/or a rotation R about the z-axis. The calibration fixture is arranged to image the touch surface of the probe from beneath and, via a 180-degree prism structure, to transmit light from the probe touch point along the optical axis to the camera. Alternatively, two cameras respectively view the fiducial location relative to the CMM arm and the probe location when aligned on the fiducial. The fixture can define an integrated assembly with an optics block and a camera assembly.
G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
H04N 17/00 - Diagnosis, testing or measuring for television systems or their details
G12B 5/00 - Adjusting position or attitude, e.g. level, of instruments or other apparatus, or of parts thereof; Compensating for the effects of tilting or acceleration, e.g. for optical apparatus
G01B 21/20 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring contours or curvatures, e.g. determining profile
Systems and methods reduce temperature-induced drift effects on a liquid lens used in a vision system. A feedback loop receives a temperature value from a temperature sensor and, based on the received temperature value, controls power to a heating element according to the difference between the measured temperature of the liquid lens and a predetermined control temperature, maintaining the temperature value within a predetermined control temperature range to reduce the effects of drift. A processor can also control a bias signal applied to the lens or a lens actuator to control temperature variations and the associated induced drift effects. An image sharpness can also be determined over a series of images, alone or in combination with controlling the temperature of the liquid lens, to adjust a focal distance of the lens.
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
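The feedback loop can be illustrated with a minimal proportional controller; the gain, the power clamp, and the control law itself are illustrative assumptions, not the patented regulation scheme.

```python
def heater_power(measured_temp, control_temp, gain=0.5, max_power=1.0):
    """Drive the heating element in proportion to how far the measured
    liquid-lens temperature falls below the predetermined control
    temperature. Power is clamped to [0, max_power]; at or above the
    setpoint the heater is off."""
    error = control_temp - measured_temp
    return min(max_power, max(0.0, gain * error))
```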
80.
Methods and apparatus for testing multiple fields for machine vision
The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a three-dimensional model. A three-dimensional model is stored, the three-dimensional model comprising a set of probes. Three-dimensional data of an object is received, the three-dimensional data comprising a set of data entries. The three-dimensional data is converted into a set of fields, comprising generating a first field comprising a first set of values, where each value of the first set of values is indicative of a first characteristic of an associated one or more data entries from the set of data entries, and generating a second field comprising a second set of values, where each value of the second set of values is indicative of a second characteristic of an associated one or more data entries from the set of data entries, wherein the second characteristic is different than the first characteristic. A pose of the three-dimensional model is tested with the set of fields, comprising testing the set of probes against the set of fields to determine a score for the pose.
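One illustrative reading of the probe/field scoring step is sketched below, with two scalar fields on a voxel grid and a translation-only pose; the probe layout, the two-characteristic weighting, and the restriction to translation are all simplifying assumptions.

```python
import numpy as np

def score_pose(probes, fields, pose):
    """Score a candidate pose by sampling each field at each transformed probe.

    probes: iterable of (x, y, z, w0, w1) -- a model point plus one expected
            value per field (e.g. two different surface characteristics).
    fields: tuple of 3D arrays, one per characteristic, indexed [z, y, x].
    pose:   (tx, ty, tz) translation; a full pose would also rotate probes.
    """
    tx, ty, tz = pose
    f0, f1 = fields
    depth, height, width = f0.shape
    total, n = 0.0, 0
    for x, y, z, w0, w1 in probes:
        xi = int(round(x + tx))
        yi = int(round(y + ty))
        zi = int(round(z + tz))
        if 0 <= xi < width and 0 <= yi < height and 0 <= zi < depth:
            # each probe is tested against both fields; the per-field
            # weights encode the probe's expected characteristics
            total += w0 * f0[zi, yi, xi] + w1 * f1[zi, yi, xi]
        n += 1
    return total / n if n else 0.0
```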
This invention provides a system and method for selecting the correct profile from a range of peaks generated by analyzing a surface with multiple exposure levels applied at discrete intervals. The cloud of peak information is resolved, by comparison to a model profile, into a best candidate that accurately represents the object profile. Illustratively, a displacement sensor projects a line of illumination on the surface and receives reflected light at a sensor assembly at a set exposure level. A processor varies the exposure level setting in a plurality of discrete increments, and stores an image of the reflected light for each of the increments. A determination process combines the stored images and aligns the combined images with respect to a model image. Points from the combined images are selected based upon closeness to the model image to provide a candidate profile of the surface.
G01B 11/03 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness by measuring coordinates of points
G06T 7/521 - Depth or shape recovery from the projection of structured light
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
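The closeness-to-model selection step can be sketched as follows; representing each exposure increment's result as one height value per column is a simplifying assumption about the data layout.

```python
def select_profile(candidate_profiles, model_profile):
    """Build the output profile by keeping, at each column, the candidate
    height (across all exposure increments) closest to the model profile.

    candidate_profiles: list of per-exposure profiles, each a list of
    heights; model_profile: the model heights at the same columns."""
    return [
        min((profile[i] for profile in candidate_profiles),
            key=lambda z: abs(z - model_profile[i]))
        for i in range(len(model_profile))
    ]
```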
Techniques include systems, computerized methods, and computer readable media for creating a graphical program in a graphical program development environment. A spreadsheet node having an input terminal in the graphical program is instantiated. The spreadsheet node is associated with a spreadsheet that specifies a list of functions to be executed in a computing device, and the input terminal is connected to the first terminal of the first node, indicating a data connection between the first terminal of the first node and the input terminal of the spreadsheet node. The input terminal of the spreadsheet node is associated with a first cell in the spreadsheet, indicating that the first cell in the spreadsheet be populated with any data received by the input terminal. A human readable file is generated specifying the graphical program, including the spreadsheet node.
An illumination apparatus for reducing speckle effect in light reflected off an illumination target includes a laser; a linear diffuser positioned in an optical path between an illumination target and the laser to diffuse collimated laser light into a planar fan of diffused light that spreads in one dimension across at least a portion of the illumination target; and a beam deflector to direct the collimated laser light incident on the beam deflector to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficient uncorrelated speckle patterns, at an image sensor, in light reflected from the intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
Described are methods, systems, apparatus, and computer program products for determining the presence of an object on a target surface. A machine vision system includes a first image capture device configured to image a first portion of a target surface from a first viewpoint and a second image capture device configured to image a second portion of the target surface from a second viewpoint. The machine vision system is configured to acquire a first image from the first image capture device and a second image from the second image capture device, rectify the first image and second image, retrieve a disparity field, generate difference data by comparing, based on the mappings of the disparity field, image elements in the first rectified image and image elements in the second rectified image, and determine whether the difference data is indicative of an object on the target surface.
A system or method can analyze symbols on a set of objects having different sizes. The system can identify a characteristic object dimension corresponding to the set of objects. An image of a first object can be received, and a first virtual object boundary feature (e.g., edge) in the image can be identified for the first object based on the characteristic object dimension. A first symbol can be identified in the image, and whether the first symbol is positioned on the first object can be determined based on the first virtual object boundary feature.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
B07C 3/14 - Apparatus characterised by the means used for detection of the destination using light-responsive detecting means
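The virtual-boundary test can be reduced to a one-dimensional sketch along the transport direction; placing the virtual boundary exactly one characteristic dimension behind the detected leading edge is an illustrative assumption.

```python
def symbol_on_object(symbol_x, leading_edge_x, characteristic_dim):
    """A virtual trailing-edge boundary feature is placed one characteristic
    object dimension behind the detected leading edge; the symbol is
    attributed to the object only if it lies between the two boundaries."""
    virtual_trailing_edge_x = leading_edge_x + characteristic_dim
    return leading_edge_x <= symbol_x <= virtual_trailing_edge_x
```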
86.
Methods and apparatus for processing image data for machine vision
The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a model against image data. Image data of an object is received, the image data comprising a set of data entries. A set of regions of the image data is determined, wherein each region in the set of regions comprises an associated set of neighboring data entries in the set of data entries. Processed image data is generated, wherein the processed image data comprises a set of cells that each have an associated value. Generating the processed image data comprises, for each region in the set of regions, determining a maximum possible score of each data entry in the associated set of neighboring data entries from the image data, and setting one or more of the cell values based on the determined maximum possible score. The pose of the model is then tested using the processed image data.
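A plausible reading of the maximum-possible-score step is a neighborhood max-pool over the image data, sketched below; the square region shape and size are assumptions made for illustration.

```python
import numpy as np

def max_pool_field(data, region=2):
    """Each output cell holds the maximum over its (2*region+1)^2
    neighborhood in the input data.

    Testing a pose against this pooled field yields an upper bound on the
    score against the original data, so low-scoring poses can be rejected
    cheaply before any finer evaluation."""
    h, w = data.shape
    pooled = np.zeros_like(data)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - region), min(h, y + region + 1)
            x0, x1 = max(0, x - region), min(w, x + region + 1)
            pooled[y, x] = data[y0:y1, x0:x1].max()
    return pooled
```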
This invention provides a vision system that is arranged to compensate for optical drift that can occur in certain variable lens assemblies, including, but not limited to, liquid lens arrangements. The system includes an image sensor operatively connected to a vision system processor, and a variable lens assembly that is controlled (e.g. by the vision processor or another range-determining device) to vary a focal distance thereof. A positive lens assembly is configured to weaken an effect of the variable lens assembly over a predetermined operational range of the object from the positive lens assembly. The variable lens assembly is located adjacent to a front or rear focal point of the positive lens. The variable lens assembly illustratively comprises a liquid lens assembly that can be inherently variable over approximately 20 diopters. In an embodiment, the lens barrel has a C-mount lens base.
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G02B 7/14 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses adapted to interchange lenses
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/217 - Circuitry for suppressing or minimising disturbance, e.g. moire or halo in picture signal generation
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
The present invention relates to optical imaging devices and methods for reading optical codes. The imaging device comprises a sensor, a lens, a plurality of illumination devices, and a plurality of reflective surfaces. The sensor is configured to sense with a predetermined number of lines of pixels, where the predetermined lines of pixels are arranged in a predetermined position. The lens has an imaging path along an optical axis. The plurality of illumination devices are configured to transmit an illumination pattern along the optical axis, and the plurality of reflective surfaces are configured to fold the optical axis.
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 17/00 - Systems with reflecting surfaces, with or without refracting elements
89.
System and method for efficiently scoring probes in an image with a vision system
A system and method for scoring trained probes for use in analyzing one or more candidate poses of a runtime image is provided. A set of probes, each with a location and gradient direction based on a trained model, is applied to one or more candidate poses based upon a runtime image. The applied probes each include a discrete set of position offsets with respect to the gradient direction thereof. A match score is computed for each of the probes, which includes estimating a best match position for each of the probes respectively relative to one of the offsets thereof, and generating a set of individual probe scores for each of the probes, respectively at the estimated best match position.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06F 18/28 - Determining representative reference patterns, e.g. by averaging or distorting; Generating dictionaries
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
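The offset-and-score idea can be sketched as follows: each probe tries a few positions along its own gradient direction and keeps the best gradient-alignment score. The dot-product score, the unit-gradient probe format, and the three-offset set are assumptions for illustration.

```python
import numpy as np

def score_probes(probes, grad_x, grad_y, offsets=(-1.0, 0.0, 1.0)):
    """Score each probe at its best position offset along its own gradient
    direction, then average the per-probe best scores.

    probes: list of (x, y, gx, gy), with (gx, gy) a unit gradient direction.
    grad_x, grad_y: image gradient fields (2D arrays), indexed [y, x].
    """
    h, w = grad_x.shape
    scores = []
    for x, y, gx, gy in probes:
        best = 0.0
        for off in offsets:
            # probe position shifted along its gradient direction
            xi = int(round(x + off * gx))
            yi = int(round(y + off * gy))
            if 0 <= xi < w and 0 <= yi < h:
                # match score: alignment of probe and image gradients
                best = max(best, gx * grad_x[yi, xi] + gy * grad_y[yi, xi])
        scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0
```

The offsets let a probe sitting slightly off an edge still find the edge one step away, which is the tolerance the estimated best match position provides.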
90.
Methods and apparatus for optimizing image acquisition of objects subject to illumination patterns
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine parameters for image acquisition. One or more image sensors are each arranged to capture a set of images of a scene, and each image sensor comprises a set of adjustable imaging parameters. A projector is configured to project a moving pattern on the scene, wherein the projector comprises a set of adjustable projector parameters. The set of adjustable projector parameters and the set of adjustable imaging parameters are determined, based on a set of one or more constraints, to reduce noise in 3D data generated based on the set of images.
Evaluating a symbol on an object can include acquiring a first image of the object, including the symbol. A second image can be derived from the first image based upon determining a saturation threshold for the second image and, optionally, scaling pixel values to a reduced bit-depth for the second image.
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06T 7/90 - Determination of colour characteristics
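A minimal sketch of deriving the second image, assuming a 16-bit first image, a clip-then-rescale interpretation of the saturation threshold, and an 8-bit target depth (all assumptions, not details from the abstract):

```python
import numpy as np

def derive_second_image(img16, saturation_threshold, out_bits=8):
    """Clip the first image at the saturation threshold, then scale the
    clipped values into the reduced bit-depth of the second image."""
    clipped = np.minimum(img16.astype(np.float64), saturation_threshold)
    max_out = (1 << out_bits) - 1
    return (clipped / saturation_threshold * max_out).round().astype(np.uint8)
```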
92.
Systems and methods for stitching sequential images of an object
A system may comprise a transport device for moving at least one object, wherein at least one substantially planar surface of the object is moved in a known plane locally around a viewing area, wherein the substantially planar surface of the object is occluded except when the at least one substantially planar surface passes by the viewing area, at least one 2D digital optical sensor configured to capture at least two sequential 2D digital images of the at least one substantially planar surface of the at least one object that is moved in the known plane around the viewing area, and a controller operatively coupled to the 2D digital optical sensor, the controller performing the steps of: a) receiving a first digital image, b) receiving a second digital image, and c) stitching the first digital image and the second digital image using a stitching algorithm to generate a stitched image.
G06T 11/60 - Editing figures and text; Combining figures or text
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G06T 3/00 - Geometric image transformation in the plane of the image
G06V 20/40 - Scenes; Scene-specific elements in video content
H04N 23/74 - Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
H04N 23/698 - Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
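Step c) can be sketched for the simplest case of a known, fixed transport shift between consecutive frames; real stitching would estimate the shift (e.g. by feature tracking), so the fixed overlap and the averaging blend are illustrative assumptions.

```python
import numpy as np

def stitch(img1, img2, overlap):
    """Stitch two sequential images of a surface moving past the viewing
    area, averaging the overlapping columns (known, fixed shift assumed).

    img1, img2: 2D arrays of identical shape; overlap: number of columns
    shared between the end of img1 and the start of img2."""
    h, w = img1.shape
    out = np.zeros((h, 2 * w - overlap))
    out[:, :w] = img1
    out[:, w:] = img2[:, overlap:]
    # blend the shared columns from both images
    out[:, w - overlap:w] = (img1[:, w - overlap:] + img2[:, :overlap]) / 2.0
    return out
```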
93.
Method for the three dimensional measurement of moving objects during a known movement
A 3D measurement method including: projecting a pattern sequence onto a moving object; capturing a first image sequence with a first camera and a second image sequence, synchronously to the first image sequence, with a second camera; and determining corresponding image points in the two sequences. For each pair of image points that is to be checked for correspondence, a trajectory of a potential object point is computed from imaging parameters and from known movement data; the potential object point is imaged by both image points in case they correspond. The object positions derived therefrom at each of the capture points in time are imaged into the image planes of the two cameras, respectively. The corresponding image point positions are determined as trajectories in the two cameras, and the image points are compared with each other along the predetermined image point trajectories and examined for correspondence; lastly, 3D measurement of the moved object is performed by triangulation.
G01B 11/30 - Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 7/285 - Analysis of motion using a sequence of stereo image pairs
Methods, systems, and devices for 3D measurement and/or pattern generation are provided in accordance with various embodiments. Some embodiments include a method of pattern projection that may include projecting one or more patterns. Each pattern from the one or more patterns may include an arrangement of three or more symbols that are arranged such that for each symbol in the arrangement, a degree of similarity between said symbol and a most proximal of the remaining symbols in the arrangement is less than a degree of similarity between said symbol and a most distal of the remaining symbols in the arrangement. Some embodiments further include: illuminating an object using the one or more projected patterns; collecting one or more images of the illuminated object; and/or computing one or more 3D locations of the illuminated object based on the one or more projected patterns and the one or more collected images.
This invention provides a system and method for generating camera calibrations for a vision system camera along three discrete planes in a 3D volume space that uses at least two (e.g. parallel) object planes at different known heights. For any third (e.g. parallel) plane of a specified height, the system and method then automatically generates calibration data for the camera by interpolating/extrapolating from the first two calibrations. This alleviates the need to set the calibration object at more than two heights, speeding the calibration process and simplifying the user's calibration setup, and also allowing interpolation/extrapolation to heights that are space-constrained, and not readily accessible by a calibration object. The calibration plate can be calibrated at each height using a full 2D hand-eye calibration, or using a hand-eye calibration at the first height and then at a second height with translation to a known position along the height (e.g. Z) direction.
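The interpolation/extrapolation step above can be sketched in its simplest form: given per-plane calibrations at two known heights, generate one for a third height by linear blending. Representing each calibration as a 2x3 affine image-to-world transform is an illustrative simplification, not the patent's actual parameterization:

```python
import numpy as np

def interpolate_calibration(A1, z1, A2, z2, z):
    """Linearly interpolate (or extrapolate) a per-plane calibration.

    A1, A2: 2x3 affine image->world transforms calibrated at heights
    z1 and z2; z: the height of the third plane (z may lie outside
    [z1, z2], giving extrapolation to space-constrained heights).
    """
    t = (z - z1) / (z2 - z1)
    return (1.0 - t) * A1 + t * A2
```

Parameter-wise blending is exact when the camera model varies linearly with height; a real system would validate the interpolated calibration against known features.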
This invention provides a removably mountable lens assembly for a vision system camera that includes an integral auto-focusing liquid lens unit, in which the lens unit compensates for focus variations by employing a feedback control circuit that is integrated into the body of the lens assembly. The feedback control circuit receives motion information related to the bobbin of the lens from a position sensor (e.g. a Hall sensor) and uses this information internally to correct for motion variations that deviate from the lens setting position at a desired lens focal distance setting. Illustratively, the feedback circuit can be interconnected with one or more temperature sensors that adjust the lens setting position for a particular temperature value. In addition, the feedback circuit can communicate with an accelerometer that reads a direction of gravity and thereby corrects for potential sag in the lens membrane based upon the spatial orientation of the lens.
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/30 - Systems for automatic generation of focusing signals using parallactic triangle with a base line
G02B 7/38 - Systems for automatic generation of focusing signals using image sharpness techniques measured at different points on the optical axis
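The feedback control described in the liquid-lens entry above (position sensor, temperature compensation, gravity-sag correction) reduces to a closed loop around a setpoint. A minimal sketch, with coefficients and a simple proportional update that are illustrative assumptions, not values from the patent:

```python
def corrected_target(base_setpoint, temp_c, gravity_z,
                     temp_coeff=0.01, sag_coeff=0.05):
    """Adjust the lens position setpoint for temperature drift and for
    membrane sag along the gravity direction (coefficients hypothetical)."""
    return base_setpoint + temp_coeff * (temp_c - 25.0) + sag_coeff * gravity_z

def feedback_step(measured_pos, target_pos, gain=0.5):
    """One proportional-control update of the lens drive command, using
    the bobbin position reported by the (e.g. Hall) sensor."""
    return gain * (target_pos - measured_pos)
```

Iterating `feedback_step` drives the measured position toward the compensated setpoint, which is the role the integrated feedback circuit plays in hardware.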
97.
System and method for auto-focusing a vision system camera on barcodes
This invention provides a system and method for detecting and acquiring one or more in-focus images of one or more barcodes within the field of view of an imaging device. A measurement process measures depth-of-field of barcode detection. A plurality of nominal coarse focus settings of a variable lens allow sampling, in steps, of a lens adjustment range corresponding to allowable distances between the one or more barcodes and the image sensor, so that a step size of the sampling is less than a fraction of the depth-of-field of barcode detection. An acquisition process acquires a nominal coarse focus image for each nominal coarse focus setting. A barcode detection process detects one or more barcode-like regions and respective likelihoods. A fine focus process fine-adjusts, for each high-likelihood barcode, the variable lens near a location of the barcode-like regions. The process acquires an image for decoding using the fine adjusted setting.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
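The coarse-to-fine focus search in entry 97 can be sketched directly: sample the lens range with a step smaller than a fraction of the detection depth-of-field, then fine-adjust around the best coarse setting. Here `score_fn` is a hypothetical stand-in for the barcode-likelihood of an image acquired at a given lens setting:

```python
import numpy as np

def autofocus(score_fn, lo, hi, dof, frac=0.5, fine_steps=11):
    """Coarse-to-fine focus search (illustrative sketch).

    Coarse step is kept below frac * dof, so a barcode at any distance
    in [lo, hi] appears acceptably focused at some coarse setting.
    """
    step = frac * dof
    coarse = np.arange(lo, hi + step, step)
    best = coarse[int(np.argmax([score_fn(s) for s in coarse]))]
    # Fine search in a small bracket around the best coarse setting.
    fine = np.linspace(best - step, best + step, fine_steps)
    return fine[int(np.argmax([score_fn(s) for s in fine]))]
```

Bounding the coarse step by the depth-of-field is what guarantees the detector sees every candidate region in focus at least once before the fine adjustment runs.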
98.
Systems and methods for detecting motion during 3D data reconstruction
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for detecting movement in a scene. A first temporal pixel image is generated based on a first set of images of a scene over time, and a second temporal pixel image is generated based on a second set of images. One or more derived values are determined based on values of the temporal pixels in the first temporal pixel image, the second temporal pixel image, or both. Correspondence data, indicative of a set of correspondences between image points of the first set of images and image points of the second set of images, is determined based on the first temporal pixel image and the second temporal pixel image. An indication of whether there is a likelihood of motion in the scene is determined based on the one or more derived values and the correspondence data.
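The temporal pixel images and derived values described in entry 98 can be sketched simply: stack each camera's frames so every pixel holds its intensities over time, then derive a per-pixel statistic. Using the temporal standard deviation and fixed thresholds is an illustrative stand-in for the patent's derived values and correspondence analysis:

```python
import numpy as np

def temporal_pixel_image(frames):
    """Stack a sequence of grayscale frames into an (H, W, T) array so
    each pixel holds its intensity values over time."""
    return np.stack(frames, axis=-1)

def likely_motion(tpi_a, tpi_b, std_thresh=5.0, frac_thresh=0.02):
    """Flag likely motion when too many temporal pixels vary strongly
    over time in either camera (thresholds are hypothetical)."""
    frac_a = np.mean(tpi_a.std(axis=-1) > std_thresh)
    frac_b = np.mean(tpi_b.std(axis=-1) > std_thresh)
    return max(frac_a, frac_b) > frac_thresh
```

In a structured-light system the projected pattern itself varies over time, so a practical detector would also use the correspondence data to separate pattern-induced variation from true scene motion, as the abstract indicates.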
A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object, acquired by an image sensor associated with a vision system processor, is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/64 - Analysis of geometric attributes of convexity or concavity
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
H04N 13/296 - Synchronisation thereof; Control thereof
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
H04N 13/361 - Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
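The statistical refinement step in the cuboid-dimensioning entry above can be sketched with a simple robust statistic: per-axis percentile spans of the 3D points. This assumes the point cloud has already been rotated into the box's own frame (e.g. via PCA), and the percentile choice is an illustrative substitute for the patent's analysis and refinement modules:

```python
import numpy as np

def estimate_box_dimensions(points, lo_pct=1.0, hi_pct=99.0):
    """Estimate length/width/height of an axis-aligned cuboidal point
    cloud from robust percentile spans along each axis.

    points: (N, 3) array of 3D data entries on the box surfaces.
    Percentiles (rather than min/max) suppress outlier points.
    """
    pts = np.asarray(points, float)
    lo = np.percentile(pts, lo_pct, axis=0)
    hi = np.percentile(pts, hi_pct, axis=0)
    dims = hi - lo
    return np.sort(dims)[::-1]  # report as length >= width >= height
```

Choosing spans from order statistics instead of extremes is one concrete way "improved" dimensions can be selected from candidate statistics.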
The present invention relates to optical imaging devices and methods for reading optical codes. The imaging device comprises a sensor, a lens, a plurality of illumination devices, and a plurality of reflective surfaces. The sensor is configured to sense using a predetermined number of lines of pixels, arranged at a predetermined position. The lens has an imaging path along an optical axis. The plurality of illumination devices are configured to transmit an illumination pattern along the optical axis, and the plurality of reflective surfaces are configured to fold the optical axis.
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code