A system and method for scoring trained probes for use in analyzing one or more candidate poses of a runtime image is provided. A set of probes with location and gradient direction derived from a trained model is applied to one or more candidate poses of a runtime image. Each applied probe includes a discrete set of position offsets with respect to its gradient direction. A match score is computed for each probe, which includes estimating a best match position for each probe relative to one of its offsets, and generating a set of individual probe scores for the probes at their respective estimated best match positions.
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 10/772 - Determining representative reference patterns, e.g. averaging or distorting patterns; Generating dictionaries
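The probe-scoring flow described in the abstract above can be sketched as follows. This is a minimal illustration, not the patented implementation: the offset set, the alignment-based score, and all function names are assumptions of this sketch.

```python
def probe_score(probe, grad_at, offsets=(-1, 0, 1)):
    """Score one probe: try each discrete position offset along the
    probe's gradient direction and keep the best-matching position."""
    (px, py), (dx, dy) = probe  # position and unit gradient direction
    best = -1.0
    for off in offsets:
        gx, gy = grad_at(px + off * dx, py + off * dy)
        # Match score: alignment between image gradient and probe direction.
        best = max(best, gx * dx + gy * dy)
    return best

def pose_score(probes, grad_at):
    """Combine the individual probe scores (here a simple mean) into a
    score for the candidate pose."""
    scores = [probe_score(p, grad_at) for p in probes]
    return sum(scores) / len(scores)
```

A uniform horizontal gradient field scores 1.0 for probes pointing along it and 0.0 for probes perpendicular to it, matching the intuition that well-aligned probes dominate the pose score.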
2.
METHODS AND APPARATUS FOR TESTING MULTIPLE FIELDS FOR MACHINE VISION
The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a three-dimensional model. A three-dimensional model is stored, the three-dimensional model comprising a set of probes. Three-dimensional data of an object is received, the three-dimensional data comprising a set of data entries. The three-dimensional data is converted into a set of fields, comprising generating a first field comprising a first set of values, where each value of the first set of values is indicative of a first characteristic of an associated one or more data entries from the set of data entries, and generating a second field comprising a second set of values, where each value of the second set of values is indicative of a second characteristic of an associated one or more data entries from the set of data entries, wherein the second characteristic is different from the first characteristic. A pose of the three-dimensional model is tested with the set of fields, comprising testing the set of probes against the set of fields to determine a score for the pose.
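The field-conversion and probe-testing steps above can be sketched as follows. The two characteristics chosen here (per-cell mean depth and per-cell point count) and the pass/fail scoring are stand-ins assumed for illustration; the abstract leaves both open.

```python
def make_fields(entries, cell=1.0):
    """Convert 3D data entries (x, y, z) into two fields: per-cell mean
    depth (first characteristic) and per-cell point count (a second,
    different characteristic)."""
    depth_sum, count = {}, {}
    for x, y, z in entries:
        key = (int(x // cell), int(y // cell))
        depth_sum[key] = depth_sum.get(key, 0.0) + z
        count[key] = count.get(key, 0) + 1
    depth = {k: depth_sum[k] / count[k] for k in depth_sum}
    return depth, count

def test_pose(probes, fields, tol=0.5):
    """Score a model pose: each probe checks its expected value in each
    field; the pose score is the fraction of probe/field checks passed."""
    depth, count = fields
    hits, total = 0, 0
    for key, want_depth, want_count in probes:
        total += 2
        if key in depth and abs(depth[key] - want_depth) <= tol:
            hits += 1
        if key in count and count[key] >= want_count:
            hits += 1
    return hits / total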
This invention provides a system and method for finding line features in an image that allows multiple lines to be efficiently and accurately identified and characterized. When lines are identified, the user can train the system to associate predetermined (e.g. text) labels with respect to such lines. These labels can be used to define neural net classifiers. The neural net operates at runtime to identify and score lines in a runtime image that are found using a line-finding process. The found lines can be displayed to the user with labels and an associated probability score map based upon the neural net results. Lines that are not labeled are generally deemed to have a low score, and are either not flagged by the interface, or identified as not relevant.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
G06F 18/40 - Software arrangements specially adapted for pattern recognition, e.g. user interfaces or toolboxes therefor
G06T 7/143 - Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
4.
METHODS AND APPARATUS FOR DETERMINING ORIENTATIONS OF AN OBJECT IN THREE-DIMENSIONAL DATA
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a candidate three-dimensional (3D) orientation of an object represented by a 3D point cloud. The method includes receiving data indicative of a 3D point cloud comprising a plurality of 3D points, determining a first histogram for the plurality of 3D points based on geometric features determined from the plurality of 3D points, accessing data indicative of a second histogram of geometric features of a 3D representation of a reference object, computing, for each of a plurality of different rotations between the first histogram and the second histogram in 3D space, a scoring metric for the associated rotation, and determining the candidate 3D orientation based on the scoring metrics of the plurality of different rotations.
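The rotation-scoring loop above can be sketched in miniature. Here a "rotation" is reduced to a permutation of histogram bins and the scoring metric to a dot product; both are assumptions of this sketch, since the abstract does not fix the histogram parameterization or the metric.

```python
def rotation_scores(hist_a, hist_b, rotations):
    """For each candidate rotation (sketched as a bin permutation of the
    reference histogram), compute a similarity scoring metric against
    the observed histogram; higher means a better orientation match."""
    scores = []
    for perm in rotations:
        rotated = [hist_b[i] for i in perm]
        scores.append(sum(a * b for a, b in zip(hist_a, rotated)))
    return scores

def best_orientation(hist_a, hist_b, rotations):
    """Return the index of the rotation with the highest score."""
    scores = rotation_scores(hist_a, hist_b, rotations)
    return max(range(len(scores)), key=scores.__getitem__)
```

With a reference histogram that matches the observed one only after a one-bin cyclic shift, the shift rotation wins.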
This invention provides a system and method for finding patterns in images that incorporates neural net classifiers. A pattern finding tool is coupled with a classifier that can be run before or after the tool to produce labeled pattern results with sub-pixel accuracy. In the case of a pattern finding tool that can detect multiple templates, its performance is improved when a neural net classifier informs the pattern finding tool to work only on a subset of the originally trained templates. Similarly, in the case of a pattern finding tool that initially detects a pattern, a neural network classifier can then determine whether it has found the correct pattern. The neural network can also reconstruct/clean up an imaged shape, and/or eliminate pixels less relevant to the shape of interest, thereby reducing the search time as well as significantly increasing the chance of locking onto the correct shapes.
G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
This invention provides a vision system camera, and associated methods of operation, having a multi-core processor, high-speed, high-resolution imager, FOVE, auto-focus lens and imager-connected pre-processor to pre-process image data. This arrangement delivers the acquisition and processing speed, as well as the image resolution, that are highly desirable in a wide range of applications, and it effectively scans objects that require a wide field of view, vary in size and move relatively quickly with respect to the system field of view. The vision system provides a physical package with a wide variety of physical interconnections to support various options and control functions. The package effectively dissipates internally generated heat by arranging components to optimize heat transfer to the ambient environment and includes dissipating structure (e.g. fins) to facilitate such transfer. The system also enables a wide range of multi-core processes to optimize and load-balance both image processing and system operation (i.e. auto-regulation tasks).
Embodiments relate to predicting height information for an object. First distance data is determined at a first time when an object is at a first position that is only partially within the field-of-view. Second distance data is determined at a second, later time when the object is at a second, different position that is only partially within the field-of-view. A distance measurement model that models a physical parameter of the object within the field-of-view is determined for the object based on the first and second distance data. Third distance data indicative of an estimated distance to the object prior to the object being entirely within the field-of-view of the distance sensing device is determined based on the first distance data, the second distance data, and the distance measurement model. Data indicative of a height of the object is determined based on the third distance data.
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness, for measuring thickness
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/28 - Measuring arrangements characterised by the use of optical techniques for measuring areas
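The two-sample modeling step in the abstract above can be sketched with the simplest possible distance-measurement model, a line through the two measurements. The linear form, the function name, and the known sensor mounting height are assumptions of this sketch.

```python
def height_from_partial_views(t1, d1, t2, d2, t3, sensor_height):
    """Fit a linear distance-measurement model d(t) = a + b*t from two
    measurements taken while the object is only partly in the
    field-of-view, predict the third distance datum at time t3 (before
    the object is fully in view), and derive the object height from the
    sensor's known mounting height above the conveyor."""
    b = (d2 - d1) / (t2 - t1)  # rate of change of measured distance
    a = d1 - b * t1
    d3 = a + b * t3            # third (estimated) distance datum
    return sensor_height - d3  # height of the object's top surface
```

For example, distances of 2.0 m at t=0 and 1.8 m at t=1 extrapolate to 1.6 m at t=2; under a 3.0 m sensor that implies a 1.4 m object.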
A system or method can analyze symbols on a set of objects having different sizes. The system can identify a characteristic object dimension corresponding to the set of objects. An image of a first object can be received, and a first virtual object boundary feature (e.g., an edge) in the image can be identified for the first object based on the characteristic object dimension. A first symbol can be identified in the image, and whether the first symbol is positioned on the first object can be determined based on the first virtual object boundary feature.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
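The virtual-boundary test above reduces, in one dimension, to placing a second edge at the characteristic dimension beyond a detected edge and checking containment. This sketch assumes a 1D layout and positions in common units; the abstract itself is not limited to this.

```python
def symbol_on_object(edge_x, characteristic_len, symbol_x):
    """Given a detected leading-edge position for an object and the
    characteristic object dimension for the set of objects, place a
    virtual trailing boundary and test whether the symbol's position
    lies between the real edge and the virtual one."""
    virtual_far_edge = edge_x + characteristic_len
    return edge_x <= symbol_x <= virtual_far_edge
```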
An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.
Methods, systems, and apparatuses are provided for estimating a location on an object in a three-dimensional scene. Multiple radiation patterns are produced by spatially modulating each of multiple first radiations with a distinct combination of one or more modulating structures, each first radiation having at least one of a distinct radiation path, a distinct source, a distinct source spectrum, or a distinct source polarization with respect to the other first radiations. The location on the object is illuminated with a portion of each of two or more of the radiation patterns, the location producing multiple object radiations, each object radiation produced in response to one of the multiple radiation patterns. Multiple measured values are produced by detecting the object radiations from the location on the object due to each pattern separately using one or more detector elements. The location on the object is estimated based on the multiple measured values.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/499 - Details of systems according to groups, of systems according to group, using polarisation effects
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G01S 17/89 - Lidar systems, specially adapted for specific applications, for mapping or imaging
G01C 3/08 - Use of electric radiation detectors
13.
Lighting device for imaging systems with light pattern
An opto-electronic system includes a laser operable to produce a laser beam; an optical element including two or more beam-shaping portions, each of the two or more beam-shaping portions having a different optical property; a beam deflector arranged to sweep the laser beam across the optical element to produce output light; and electronics communicatively coupled with the laser, the beam deflector, or both the laser and the beam deflector. The electronics are configured to cause selective impingement of the laser beam onto a proper subset of the two or more beam-shaping portions of the optical element to modify one or more optical parameters of the output light.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 27/28 - Optical systems or apparatus not provided for by any of the groups, for polarising
15.
SYSTEM AND METHOD FOR REDUCED-SPECKLE LASER LINE GENERATION
An illumination apparatus for reducing speckle effect in light reflected off an illumination target includes a laser; a linear diffuser positioned in an optical path between an illumination target and the laser to diffuse collimated laser light in a planar fan of diffused light that spreads in one dimension across at least a portion of the illumination target; and a beam deflector to direct the collimated laser light incident on the beam deflector to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficient uncorrelated speckle patterns, at an image sensor, in light reflected from an intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.
G02B 27/48 - Laser speckle optics
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
Methods, systems, and computer readable media for generating a three-dimensional reconstruction of an object with reduced distortion are described. In some aspects, a system includes at least two image sensors, at least two projectors, and a processor. Each image sensor is configured to capture one or more images of an object. Each projector is configured to illuminate the object with an associated optical pattern and from a different perspective. The processor is configured to perform the acts of receiving, from each image sensor, for each projector, images of the object illuminated with the associated optical pattern and generating, from the received images, a three-dimensional reconstruction of the object. The three-dimensional reconstruction has reduced distortion due to the received images of the object being generated when each projector illuminates the object with an associated optical pattern from the different perspective.
This invention provides an integrated time-of-flight sensor that delivers distance information to a processor associated with the camera assembly and vision system. The distance is processed with the above-described feedback control to auto-focus the camera assembly's variable lens during runtime operation, based on the particular size/shape of the object(s) within the field of view. The shortest measured distance is used to set the focus distance of the lens. To correct for calibration or drift errors, a further image-based focus optimization can occur around the measured distance and/or based on the measured temperature. The distance information generated by the time-of-flight sensor can be employed to perform other functions, including self-triggering of image acquisition, object size dimensioning, detection and analysis of object defects and/or gaps between objects in the field of view, and software-controlled range detection to prevent unintentional reading of (e.g.) IDs on objects outside a defined range (presentation mode).
G01S 17/10 - Systems determining position data of a target for measuring distance only, using transmission of interrupted, pulse-modulated waves
G02B 7/04 - Mountings, adjusting means or light-tight connections for optical elements, for lenses, with mechanism for focusing or varying magnification
G01S 17/36 - Systems determining position data of a target for measuring distance only, using transmission of continuous waves, whether amplitude-, frequency- or phase-modulated, or unmodulated, with phase comparison between the received signal and the contemporaneously transmitted signal
G02B 7/08 - Mountings, adjusting means or light-tight connections for optical elements, for lenses, with mechanism for focusing or varying magnification, adapted to co-operate with a remote control mechanism
G02B 7/40 - Systems for automatic generation of focusing signals using time delay of the reflected waves, e.g. of ultrasonic waves
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
H04N 23/54 - Mounting of pick-up tubes, electronic image sensors, deviation or focusing coils
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
H04N 23/67 - Focus control based on electronic image sensor signals
The techniques described herein relate to methods, apparatus, and computer readable media for editing a graphical program using a graphical programming interface. Editing the graphical program may include displaying, via the graphical programming interface, a plurality of existing graphical components that provide functionality for at least one computer program thread; receiving data indicating a selection of a new graphical component for inserting into the plurality of existing graphical components; determining, based on an associated graphical component of the plurality of existing graphical components, a set of one or more placement locations for inserting the new graphical component; and displaying, on the graphical programming interface, the set of one or more placement locations.
The techniques described herein relate to methods, apparatus, and computer readable media for measuring object characteristics by interpolating the object characteristics using stored associations. A first image of at least part of a ground surface, with a first representation of a laser line projected onto the ground surface from a first pose, is received. A first association between a known value of the characteristic of the ground surface of the first image and the first representation is determined. A second image of at least part of a first training object on the ground surface, with a second representation of the laser line projected onto the first training object from the first pose, is received. A second association between a known value of the characteristic of the first training object and the second representation is determined. The first and second associations are stored for measuring the characteristic of a new object.
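The two stored associations above are enough to measure a new object by interpolation. This sketch assumes the "representation" reduces to a laser-line image row, the characteristic is height, and the interpolation is linear; none of these specifics are fixed by the abstract.

```python
def calibrate(ground_row, ground_value, train_row, train_value):
    """Store the two associations: the laser-line image row observed for
    the bare ground surface (known characteristic value, e.g. height 0)
    and for a training object of known characteristic. Returns a linear
    interpolator for measuring new objects."""
    slope = (train_value - ground_value) / (train_row - ground_row)

    def measure(new_row):
        # Interpolate the characteristic from the new object's line row.
        return ground_value + slope * (new_row - ground_row)

    return measure
```

For example, if the line sits at row 100 on the ground (height 0) and at row 80 on a 50 mm training block, a new object shifting the line to row 90 measures 25 mm.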
This invention provides an aimer assembly for a vision system that is coaxial (on-axis) with the camera optical axis, thus providing an aligned aim point at a wide range of working distances. The aimer includes a projecting light element located off the camera optical axis. The beam and the received light from the imaged (illuminated) scene are selectively reflected or transmitted through a dichroic mirror assembly in a manner that permits the beam to be aligned with the optical axis and projected to the scene while only light from the scene is received by the sensor. The aimer beam and illuminator employ differing light wavelengths. In a further embodiment, an internal illuminator includes a plurality of light sources below the camera optical axis. Some of the light sources are covered by a prismatic structure for close distances, and other light sources are collimated, projecting over a longer distance.
G06K 7/015 - Aligning or centering of the sensing device with respect to the record carrier
F21V 13/04 - Combinations of only two kinds of elements, the elements being reflectors and refractors
F21V 5/04 - Refractors for light sources of lens shape
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G03B 3/00 - Focusing arrangements of general interest for cameras, projectors or printers
The techniques described herein relate to computerized methods and apparatuses for detecting objects in an image. The techniques described herein further relate to computerized methods and apparatuses for detecting one or more objects using a pre-trained machine learning model and one or more other machine learning models that can be trained in a field training process. The pre-trained machine learning model may be a deep machine learning model.
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 20/70 - Scene-specific elements; Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. "bagging" or "boosting"
This invention provides a vision system having a housing and an interchangeable lens module. The module is adapted to seat on a C-mount ring provided on the front, mounting face of the housing. The module is attached via a plurality of fasteners that pass through a frame of the module and into the mounting face. The module includes a connector in a fixed location, which mates with a connector well on the mounting face to provide power and control to a driver board that operates a variable (e.g. liquid) lens within the optics of the lens module. The driver board is connected to the lens body by a flexible printed circuit board (PCB), which also allows for axial motion of the lens body with respect to the frame. This axial motion can be effected by an adjustment ring that can include an indexed/lockable, geared, outer surface.
G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks or turrets interchangeably
H04N 23/50 - Cameras or camera modules comprising electronic image sensors; Control thereof; Constructional details
A method for an imaging module can include rotating an imaging assembly that includes an imaging device about a first pivot point of a bracket to a select first orientation, fastening the imaging assembly to the bracket at the first orientation, rotating a mirror assembly that includes a mirror about a second pivot point of the bracket to a select second orientation, and fastening the mirror assembly to the bracket at the second orientation. An adjustable, selectively oriented imaging assembly of a first imaging module can acquire images using an adjustable, selectively oriented mirror assembly of a second imaging module.
H04N 5/247 - Arrangement of television cameras
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
24.
MACHINE VISION SYSTEM AND METHOD WITH MULTISPECTRAL LIGHT ASSEMBLY
A machine vision system can include an image sensor assembly including an image sensor, a lens assembly coupled to the image sensor assembly, an illumination assembly coupled to the lens assembly, and a removable front cover positioned in front of the illumination assembly. The illumination assembly can include a plurality of multispectral light assemblies. Each multispectral light assembly of the plurality of multispectral light assemblies can include a multispectral light source having a plurality of color LED dies configured to generate at least two different wavelengths of light, a light pipe positioned in front of the multispectral light source and having an exit surface, a diffusive surface on the exit surface of the light pipe, and a projection lens positioned in front of the diffusive surface. The machine vision system can also include an illumination sensor configured to detect light from the illumination assembly.
H04N 5/235 - Circuitry for compensating for variations in the brightness of the object
F21V 8/00 - Use of light guides, e.g. fibre optic devices, in lighting devices or systems
G01S 17/36 - Systems determining position data of a target for measuring distance only, using transmission of continuous waves, whether amplitude-, frequency- or phase-modulated, or unmodulated, with phase comparison between the received signal and the contemporaneously transmitted signal
A multispectral light assembly includes a multispectral light source configured to generate a plurality of different wavelengths of light, a light pipe positioned in front of the multispectral light source and configured to provide color mixing for two or more of the plurality of different wavelengths, a diffusive surface on the light pipe exit surface, and a projection lens positioned in front of the diffusive surface. A processor device is in communication with the multispectral light assemblies to control activation of the multispectral light source. A machine vision system includes an illumination assembly with a plurality of multispectral light assemblies, an optics assembly, a sensor assembly, and a processor device in communication with the optics assembly, the sensor assembly, and the illumination assembly.
F21K 9/62 - Optical arrangements integrated in the light source, e.g. for improving the colour rendering index or light extraction, using mixing chambers, e.g. enclosures with reflective walls
A method for assigning a symbol to an object in an image includes receiving the image captured by an imaging device where the symbol may be located within the image. The method further includes receiving, in a first coordinate system, a three-dimensional (3D) location of one or more points that corresponds to pose information indicative of a 3D pose of the object in the image, mapping the 3D location of the one or more points of the object to a 2D location within the image, and assigning the symbol to the object based on a relationship between a 2D location of the symbol in the image and the 2D location of the one or more points of the object in the image.
G06T 7/55 - Depth or shape recovery from multiple images
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
27.
METHODS, SYSTEMS, AND MEDIA FOR GENERATING IMAGES OF MULTIPLE SIDES OF AN OBJECT
In accordance with some embodiments of the disclosed subject matter, methods, systems, and media for generating images of multiple sides of an object are provided. In some embodiments, a method comprises receiving information indicative of a 3D pose of a first object in a first coordinate space at a first time; receiving a group of images captured using at least one image sensor, each image associated with a field of view within the first coordinate space; mapping at least a portion of a surface of the first object to a 2D area with respect to the image based on the 3D pose of the first object; associating, for images including the surface, a portion of that image with the surface of the first object based on the 2D area; and generating a composite image of the surface using images associated with the surface.
A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object, acquired by an image sensor associated with a vision system processor, is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/64 - Analysis of geometric attributes of convexity or concavity
29.
Dome illuminator for vision system camera and method for using the same
This invention provides an illumination assembly, typically attached to the front end of a vision system camera assembly, adapted to generate an illumination pattern on an object, which allows the vision system process(or) to perform basic shape inspection of the object in addition to feature detection and decoding. A dome illuminator with a diffuse inner surface and an opening of sufficient size to surround the object is provided to the camera assembly. The dome illuminator has two systems to create the pattern on an object: a diffuse illuminator for specular/shiny object surfaces and a secondary, projecting illuminator for matte/diffusive object surfaces. The diffuse illuminator includes a set of light-filtering structures on its inner surface, for example concentric strips or rings that allow projection of a ringed fringe pattern on an (e.g. shiny/specular) object. The fringes can additionally be generated at a given wavelength and/or visible color.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
This invention provides a system and method for detecting and acquiring one or more in-focus images of one or more barcodes within the field of view of an imaging device. A measurement process measures depth-of-field of barcode detection. A plurality of nominal coarse focus settings of a variable lens allow sampling, in steps, of a lens adjustment range corresponding to allowable distances between the one or more barcodes and the image sensor, so that a step size of the sampling is less than a fraction of the depth-of-field of barcode detection. An acquisition process acquires a nominal coarse focus image for each nominal coarse focus setting. A barcode detection process detects one or more barcode-like regions and respective likelihoods. A fine focus process fine-adjusts, for each high-likelihood barcode, the variable lens near a location of the barcode-like regions. The process acquires an image for decoding using the fine adjusted setting.
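The coarse-focus sampling described above can be sketched in a few lines. This is a hedged illustration only: the function name, the 0.5 fraction, and the 100-400 mm working range are assumptions for the example, not values from the patent.

```python
def coarse_focus_steps(d_min, d_max, dof, fraction=0.5):
    """Sample a lens adjustment range in steps strictly smaller than a
    fraction of the depth-of-field of barcode *detection* (much larger
    than the DOF needed for decoding), so that every allowed object
    distance yields at least one coarse image in which barcode-like
    regions are detectable."""
    step = fraction * dof
    n = int((d_max - d_min) / step) + 1   # number of sampling intervals
    return [d_min + (d_max - d_min) * k / n for k in range(n + 1)]

# 100-400 mm working range with a 60 mm detection DOF: spacing < 30 mm.
steps = coarse_focus_steps(100.0, 400.0, 60.0)
print(len(steps), steps[0], steps[-1])  # -> 12 100.0 400.0
```

Each coarse setting would then be refined per detected barcode-like region by the fine-focus process.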
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
H04N 23/959 - Computational photography systems, e.g. light-field imaging systems, for extended depth-of-field imaging by adjusting the depth of field during image capture, e.g. maximising or setting the range based on scene characteristics
H04N 23/67 - Focus control based on electronic image sensor signals
31.
SYSTEMS AND METHODS FOR DETECTING AND ADDRESSING ISSUE CLASSIFICATIONS FOR OBJECT SORTING
Some embodiments of the disclosure provide systems and methods for improving sorting and routing of objects, including in sorting systems. Characteristic dimensional data for one or more objects with common barcode information can be compared to dimensional data of another object with the common barcode information to evaluate a classification (e.g., a side-by-side exception) of the other object. In some cases, the evaluation can include identifying the classification as incorrect (e.g., as a false side-by-side exception).
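The comparison of characteristic dimensional data can be sketched as follows. This is a minimal illustration under stated assumptions: the function name, the (length, width, height) tuple form, and the 10% tolerance are all hypothetical, not taken from the disclosure.

```python
def is_false_side_by_side(measured, reference, tol=0.1):
    """Evaluate a side-by-side exception: if an object scanned with a
    shared barcode matches the characteristic dimensions recorded for
    that barcode, the exception was likely false (one object, not two
    travelling side by side). Dimensions are (length, width, height)."""
    return all(abs(m - r) <= tol * r for m, r in zip(measured, reference))

# Reference package is 30 x 20 x 10; a true side-by-side pair would
# read roughly double along one axis.
print(is_false_side_by_side((31, 20, 10), (30, 20, 10)))  # -> True
print(is_false_side_by_side((60, 20, 10), (30, 20, 10)))  # -> False
```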
G05B 19/418 - Total factory control, i.e. centrally controlling a plurality of machines, e.g. direct or distributed numerical control (DNC), flexible manufacturing systems (FMS), integrated manufacturing systems (IMS), computer integrated manufacturing (CIM)
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
B07C 5/10 - Sorting according to size measured by light-responsive means
B07C 3/14 - Apparatus characterised by the means used for detection of the destination using light-responsive detecting means
32.
System and method for automatic generation of human-machine interface in a vision system
The invention provides a system and method that automatically generates a user interface (HMI) based on a selection of spreadsheet cells. The spreadsheet controls operations within the processor(s) of one or more vision system cameras. After selecting a range of cells in the spreadsheet, the user applies the system and method by pressing a button or using a menu command, which results in an automatically generated HMI with appropriate scaling of interface elements and a desired layout of such elements on the associated screen. Advantageously, the system and method essentially reduce the user's workflow to two steps: selecting spreadsheet cells and generating the HMI around them. The generated HMI runs in a web browser that can be instantiated on a user device and communicates directly with the vision system processor(s). Data can pass directly between the user interface running in the web browser and the vision system processor(s).
G06F 17/00 - ELECTRIC DIGITAL DATA PROCESSING Digital computing or data processing equipment or methods, specially adapted for specific functions
G06F 9/451 - Execution arrangements for user interfaces
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
33.
Machine vision system and method with steerable mirror
Systems and methods are provided for acquiring images of objects using an imaging device and a controllable mirror. The controllable mirror can be controlled to change a field of view for the imaging device, including so as to acquire images of different locations, of different parts of an object, or with different degrees of zoom.
G02B 7/182 - Mountings, adjusting means, or light-tight connections, for optical elements for mirrors
H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
G06V 10/147 - Optical features of the apparatus performing the acquisition or of the illumination arrangements - Details of sensors, e.g. sensor lenses
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
34.
Forming a homogenized illumination line which can be imaged as a low-speckle line
A system for forming a homogenized illumination line which can be imaged as a low-speckle line is disclosed. The system includes a laser configured to emit a collimated laser beam; and an illumination-fan generator that includes one or more linear diffusers. The illumination-fan generator is arranged and configured to (i) receive the collimated laser beam, (ii) output a planar fan of diffused light, such that the planar fan emanates from a light line formed on the distal-most one of the one or more linear diffusers, and (iii) cause formation of an illumination line at an intersection of the planar fan and an object.
G01N 21/89 - Investigating the presence of flaws, defects or contamination in moving material, e.g. paper, textiles
G01N 21/88 - Investigating the presence of flaws, defects or contamination
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
35.
Systems and methods for detecting motion during 3D data reconstruction
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for detecting movement in a scene. A first temporal pixel image is generated based on a first set of images of a scene over time, and a second temporal pixel image is generated based on a second set of images. One or more derived values are determined based on values of the temporal pixels in the first temporal pixel image, the second temporal pixel image, or both. Correspondence data is determined based on the first temporal pixel image and the second temporal pixel image indicative of a set of correspondences between image points of the first set of images and image points of the second set of images. An indication of whether there is a likelihood of motion in the scene is determined based on the one or more derived values and the correspondence data.
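One plausible derived value for the motion check above is the normalized correlation between the two cameras' temporal pixel sequences at corresponding image points: under a static scene both cameras see the same projected intensity sequence, while motion decorrelates them. The function names, the toy sequences, and the 0.9 threshold are illustrative assumptions, not details from the disclosure.

```python
import math

def normalized_correlation(a, b):
    """Correlation of two temporal pixel sequences (intensity over time)
    at corresponding image points of the two cameras."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

def likely_motion(a, b, threshold=0.9):
    """Flag possible scene motion when corresponding temporal pixels
    decorrelate below the (illustrative) threshold."""
    return normalized_correlation(a, b) < threshold

static = ([10, 50, 90, 50, 10], [12, 52, 88, 49, 11])  # pattern only
moving = ([10, 50, 90, 50, 10], [90, 14, 52, 88, 30])  # object moved
print(likely_motion(*static), likely_motion(*moving))  # -> False True
```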
This invention overcomes disadvantages of the prior art by providing a vision system and method of use, and graphical user interface (GUI), which employs a camera assembly having an on-board processor of low to modest processing power. At least one vision system tool analyzes image data, and generates results therefrom, based upon a deep learning process. A training process provides training image data to a processor remote from the on-board processor to cause generation of the vision system tool therefrom, and provides a stored version of the vision system tool for runtime operation on the on-board processor. The GUI allows manipulation of thresholds applicable to the vision system tool and refinement of training of the vision system tool by the training process. A scoring process allows unlabeled images from a set of acquired and/or stored images to be selected automatically for labelling as training images using a computed confidence score.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/04842 - Selection of displayed objects or displayed text elements
37.
SYSTEM AND METHOD FOR DETERMINING 3D SURFACE FEATURES AND IRREGULARITIES ON AN OBJECT
This invention provides a system and method for determining the location and characteristics of surface features that comprise elevated or depressed regions with respect to a smooth surrounding surface on an object. A filter acts on a range image of the scene, defining an annulus or other perimeter shape around each pixel within which a best-fit surface is established. A normal to the pixel allows derivation of a local displacement height. The displacement height is used to establish a height-deviation image of the object, from which bumps, dents or other height-displacement features can be determined. The bump filter can also be used to locate regions on a surface with minimal irregularities by mapping such irregularities to a grid and then thresholding the grid to generate a cost function. Regions with minimal cost are acceptable candidates for the application of labels and other items for which a smooth surface is desirable.
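The annulus filtering step can be sketched as follows. This is a simplified, hedged illustration: the patent's best-fit surface over the annulus is replaced here by a plain annulus mean (adequate for locally flat backgrounds), the annulus is square rather than circular, and all names and radii are assumptions.

```python
def bump_response(range_img, r_in=1, r_out=2):
    """For each pixel of a range (height) image, estimate the local
    background height from an annulus of surrounding pixels and return
    the centre pixel's displacement above or below that background."""
    h, w = len(range_img), len(range_img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            ring = []
            for dy in range(-r_out, r_out + 1):
                for dx in range(-r_out, r_out + 1):
                    d = max(abs(dx), abs(dy))  # square "annulus"
                    if r_in <= d <= r_out and 0 <= y + dy < h and 0 <= x + dx < w:
                        ring.append(range_img[y + dy][x + dx])
            if ring:
                out[y][x] = range_img[y][x] - sum(ring) / len(ring)
    return out

# Flat 5x5 surface with a single 1-unit bump at the centre.
img = [[0.0] * 5 for _ in range(5)]
img[2][2] = 1.0
resp = bump_response(img)
print(resp[2][2])  # -> 1.0
```

Thresholding `resp` would then yield the height-deviation features described above.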
G01S 17/42 - Simultaneous measurement of distance and other co-ordinates
G01S 17/48 - Active triangulation systems, i.e. using the transmission and reflection of electromagnetic waves other than radio waves
G01B 11/06 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness for measuring thickness
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups , , of systems according to group
G01S 17/89 - Lidar systems specially adapted for specific applications for mapping or imaging
G06T 7/44 - Texture analysis based on statistical description of texture using image operators, e.g. filters, edge-density measures or local histograms
38.
System and method for configuring an ID reader using a mobile device
A system and method for communicating at least one of updated configuration information and hardware setup recommendations to a user of an ID decoding vision system is provided. An image of an object containing one or more IDs is acquired with a mobile device. The ID associated with the object is decoded to derive information. Physical dimensions of the ID associated with the object are determined. Based on the information and the dimensions, configuration data can be transmitted to a remote server that automatically determines setup information for the vision system based upon the configuration data. The remote server thereby transmits at least one of (a) updated configuration information to the vision system and (b) hardware setup recommendations to a user of the vision system based upon the configuration data.
G06K 9/80 - Combination of image preprocessing and recognition functions
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G06Q 30/016 - Providing assistance to customers, e.g. assisting a customer within a business location or via an after-sales help desk
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
H04M 1/72403 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
H04W 76/10 - Connection management; Connection setup
H04M 1/02 - Constructional features of telephone sets
H04M 1/2755 - Devices whereby a plurality of signals may be stored simultaneously, with provision for storing more than one subscriber number at a time, using static electronic memories, e.g. chips, the data being provided by optical scanning
39.
System and method for refining dimensions of a generally cuboidal 3D object imaged by 3D vision system and controls for the same
A system and method for estimating dimensions of an approximately cuboidal object from a 3D image of the object acquired by an image sensor of the vision system is provided. An identification module, associated with the vision system processor, automatically identifies a 3D region in the 3D image that contains the cuboidal object. A selection module, associated with the vision system processor, automatically selects 3D image data from the 3D image that corresponds to approximate faces or boundaries of the cuboidal object. An analysis module statistically analyzes, and generates statistics for, the selected 3D image data that correspond to approximate cuboidal object faces or boundaries. A refinement module chooses statistics that correspond to improved cuboidal dimensions from among cuboidal object length, width and height. The improved cuboidal dimensions are provided as dimensions for the object. A user interface displays a plurality of interface screens for setup and runtime operation.
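The statistical refinement of cuboid dimensions can be illustrated with a toy stand-in: trimmed extents along each axis reject stray points that survive segmentation, yielding more stable length/width/height estimates. Everything here is an assumption for illustration (axis-aligned data, the trim fraction, the function name); the patent's analysis and refinement modules are not specified at this level.

```python
def robust_dims(points, trim=0.01):
    """Estimate length, width and height of an approximately cuboidal,
    axis-aligned point set by taking trimmed extents along each axis,
    which rejects outlier points that survive segmentation."""
    dims = []
    n = len(points)
    for axis in range(3):
        vals = sorted(p[axis] for p in points)
        lo = vals[int(trim * (n - 1))]          # discard lowest outliers
        hi = vals[int((1 - trim) * (n - 1))]    # discard highest outliers
        dims.append(hi - lo)
    return tuple(dims)

# A 4 x 2 x 1 box sampled at two opposite corners, plus one far outlier.
pts = [(0.0, 0.0, 0.0)] * 50 + [(4.0, 2.0, 1.0)] * 50 + [(100.0, 100.0, 100.0)]
print(robust_dims(pts))  # -> (4.0, 2.0, 1.0)
```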
G06T 1/00 - General purpose image data processing
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 7/35 - Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
G06T 7/77 - Determining position or orientation of objects or cameras using statistical methods
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/64 - Analysis of geometric attributes of convexity or concavity
40.
SYSTEM AND METHOD FOR EXTRACTING AND MEASURING SHAPES OF OBJECTS HAVING CURVED SURFACES WITH A VISION SYSTEM
This invention provides a system and method that efficiently detects objects imaged using a 3D camera arrangement by referencing a cylindrical or spherical surface represented by a point cloud, and measures variant features of an extracted object, including volume, height, center of mass, bounding box, and other relevant metrics. The system and method advantageously operates directly on unorganized, unordered points, requiring neither a mesh/surface reconstruction nor a voxel-grid representation of object surfaces in a point cloud. Based upon a cylinder/sphere reference model, an acquired 3D point cloud is flattened. Object (blob) detection is carried out in the flattened 3D space, and objects are converted back to the 3D space to compute the features, which can include regions that differ from the regular shape of the cylinder/sphere. Downstream utilization devices and/or processes, such as part-reject mechanisms and/or robot manipulators, can act on the identified feature data.
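The flattening step for the cylinder reference model can be sketched as follows: each point is mapped to an unrolled circumferential coordinate, an axial coordinate, and a radial deviation that plays the role of feature height. The axis-aligned cylinder, the function name, and the coordinate convention are assumptions for this sketch.

```python
import math

def flatten_on_cylinder(points, radius):
    """Flatten 3D points lying near a cylinder of the given radius
    (axis along z) into a flat space: arc length along the
    circumference (u), axial position (v), and radial deviation from
    the reference surface. Blob detection can then run in (u, v),
    with the deviation acting as the 'height' of surface features."""
    flat = []
    for x, y, z in points:
        theta = math.atan2(y, x)
        u = radius * theta               # unrolled circumferential coord
        v = z                            # axial coordinate
        dev = math.hypot(x, y) - radius  # bump height above the surface
        flat.append((u, v, dev))
    return flat

# A point 0.5 above a radius-10 cylinder, at 90 degrees, z = 2.
(u, v, dev), = flatten_on_cylinder([(0.0, 10.5, 2.0)], radius=10.0)
print(round(u, 4), v, dev)  # -> 15.708 2.0 0.5
```

Detected blobs in (u, v) can be mapped back to 3D by inverting the same relations.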
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06T 3/00 - Geometric image transformations in the plane of the image
An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.
This invention provides a single-camera vision system, typically for use in logistics applications, that allows for adjustment of the camera viewing angle to accommodate a wide range of object heights and associated widths moving relative to an imaged scene with constant magnification. The camera assembly employs an image sensor that is more particularly suited to such applications, with an aspect (height-to-width) ratio of approximately 1:4 to 1:8. The camera assembly includes a distance sensor to determine the distance to the top of each object. The camera assembly employs a zoom lens that can change at relatively high speed (e.g. <10 ms) to allow adjustment of the viewing angle from object to object as each one passes under the camera's field of view (FOV). Optics that allow the image to be resolved on the image sensor within the desired range of viewing angles are provided in the camera lens assembly.
An on-axis aimer and distance measurement apparatus for a vision system can include a light source configured to generate a first light beam along a first axis. The first light beam can project an aimer pattern on an object and a receiver can be configured to receive reflected light from the first light beam to determine a distance between a lens of the vision system and the object. One or more parameters of vision system can be controlled based on the determined distance.
This invention provides a vision system with an exchangeable illumination assembly that allows for increased versatility in the type and configuration of illumination supplied to the system without altering the underlying optics, sensor, vision processor, or the associated housing. The vision system housing includes a front plate that optionally includes a plurality of mounting bases for accepting different types of lenses, and a connector that allows removable interconnection with the illustrative illumination assembly. The illumination assembly includes a cover that is light transmissive. The cover encloses an illumination component that can include a plurality of lighting elements that surround an aperture through which received light rays from the imaged scene pass through to the lens. The arrangement of lighting elements is highly variable and the user can be supplied with an illumination assembly that best suits its needs without need to change the vision system processor, sensor or housing.
An apparatus for controlling a depth of field for a reader in a vision system includes a dual aperture assembly having an inner region and an outer region. A first light source can be used to generate a light beam associated with the inner region and a second light source can be used to generate a light beam associated with the outer region. The depth of field of the reader can be controlled by selecting one of the first light source and second light source to illuminate an object to acquire an image of the object. The selection of the first light source or the second light source can be based on at least one parameter of the vision system.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
47.
Systems and methods for decoding two-dimensional matrix symbols with incomplete or absent fixed patterns
Systems and methods for reading a two-dimensional matrix symbol or for determining if a two-dimensional matrix symbol is decodable are disclosed. The systems and methods can include a data reading algorithm that receives an image, locates at least a portion of the data modules within the image without using a fixed pattern, fits a model of the module positions from the image, extrapolates the model resulting in predicted module positions, determines module values from the image at the predicted module positions, and extracts a binary matrix from the module values.
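The model-fit-and-extrapolate step can be illustrated with an affine model mapping module grid indices to image positions. This sketch fits the model exactly from three located, non-collinear modules; the patent's algorithm presumably uses more modules and a robust fit, and all names here are hypothetical.

```python
def fit_affine(samples):
    """Fit an affine map (i, j) -> (x, y) from three located data
    modules, solving x = a*i + b*j + c and likewise for y.

    samples: list of ((i, j), (x, y)) for three non-collinear modules.
    """
    (i0, j0), (x0, y0) = samples[0]
    (i1, j1), (x1, y1) = samples[1]
    (i2, j2), (x2, y2) = samples[2]
    det = (i1 - i0) * (j2 - j0) - (i2 - i0) * (j1 - j0)

    def solve(p0, p1, p2):
        a = ((p1 - p0) * (j2 - j0) - (p2 - p0) * (j1 - j0)) / det
        b = ((p2 - p0) * (i1 - i0) - (p1 - p0) * (i2 - i0)) / det
        return a, b, p0 - a * i0 - b * j0

    ax, bx, cx = solve(x0, x1, x2)
    ay, by, cy = solve(y0, y1, y2)

    def predict(i, j):
        """Extrapolated image position of the module at grid (i, j)."""
        return (ax * i + bx * j + cx, ay * i + by * j + cy)
    return predict

# Modules on a 10-pixel pitch with a 5-pixel offset; extrapolate an
# unseen module position from the fitted model.
predict = fit_affine([((0, 0), (5.0, 5.0)),
                      ((1, 0), (15.0, 5.0)),
                      ((0, 1), (5.0, 15.0))])
print(predict(3, 2))  # -> (35.0, 25.0)
```

Sampling the image at each predicted position then yields the module values from which the binary matrix is extracted.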
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
48.
SYSTEM AND METHOD FOR EXTENDING DEPTH OF FIELD FOR 2D VISION SYSTEM CAMERAS IN THE PRESENCE OF MOVING OBJECTS
This invention provides a system and method for enhanced depth of field (DOF), advantageously used in logistics applications for scanning features and ID codes on objects. It combines a vision system, a glass lens designed for on-axis and Scheimpflug configurations, a variable lens, and a mechanical system that adapts the lens to the different configurations without detaching the optics. The optics can be steerable, allowing adjustment between variable angles so as to optimize the viewing angle, and thereby the DOF, for the object in a Scheimpflug configuration. One image, or a plurality of images, can be acquired of the object at one or differing angle settings, with the entire region of interest clearly imaged. In another implementation, the optical path can include a steerable mirror and a folding mirror overlying the region of interest, which allows multiple images to be acquired at different locations on the object.
G03B 5/06 - Swinging lens about normal to the optical axis
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
49.
Handheld ID-reading system with integrated illumination assembly
This invention provides an ID reader, typically configured for handheld operation, which integrates three types of illumination into a compact package that delivers robust performance and resistance to harsh environmental conditions, such as dust and moisture. These illumination types include direct (diffuse) light, low-angle light and polarized light. The ID reader includes a sealed reader module assembly having the illuminators in combination with an imager assembly (optics and image sensor) at its relative center. Additionally, an on-axis aimer and a variable-focus system with a liquid lens are integrated into this module, placed on axis using a mirror assembly that includes a dichroic filter. As the optimal distance to read a code with low-angle light is typically shorter than the optimal distance for the polarized illumination, the variable (e.g. liquid) lens can adjust the focus of the reader to the optimal distance for the selected illumination.
Disclosed herein are systems and methods for machine vision. A machine vision system includes a motion rendering device, a first image sensor, and a second image sensor. The machine vision system includes a processor configured to run a computer program stored in memory that is configured to determine a first transformation that allows mapping between the first coordinate system associated with the motion rendering device and the second coordinate system associated with the first image sensor, and to determine a second transformation that allows mapping between the first coordinate system associated with the motion rendering device and the third coordinate system associated with the second image sensor.
Systems and methods reduce temperature-induced drift effects on a liquid lens used in a vision system. A feedback loop receives a temperature value from a temperature sensor and, based on the received value, controls power to the heating element according to the difference between the measured temperature of the liquid lens and a predetermined control temperature, maintaining the temperature within a predetermined control range to reduce the effects of drift. A processor can also control a bias signal applied to the lens or a lens actuator to control temperature variations and the associated induced drift effects. An image sharpness can also be determined over a series of images, alone or in combination with controlling the temperature of the liquid lens, to adjust a focal distance of the lens.
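One iteration of such a feedback loop can be sketched as a simple proportional controller. The gain, the control band, and the clamping at zero power are illustrative assumptions; the patent does not specify a control law at this level.

```python
def regulate(measured_temp, target, band=0.5, gain=0.2, power=0.0):
    """One feedback-loop iteration: adjust heater power from the
    difference between the measured liquid-lens temperature and the
    control temperature, holding the lens inside the control band."""
    error = target - measured_temp
    if abs(error) <= band:
        return power                       # inside the control range: hold
    return max(0.0, power + gain * error)  # never command negative power

p = regulate(34.0, 40.0, power=0.1)  # lens too cold: raise heater power
print(round(p, 2))  # -> 1.3
```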
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with focusing or varifocal mechanism adapted to work in combination with a remote control mechanism
G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
52.
Systems and methods for improved 3-D data reconstruction from stereo-temporal image sequences
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for data pre-processing for stereo-temporal image sequences to improve three-dimensional data reconstruction. In some aspects, the techniques described herein relate to systems, methods, and computer readable media for improved correspondence refinement for image areas affected by oversaturation. In some aspects, the techniques described herein relate to systems, methods, and computer readable media configured to fill missing correspondences to improve three-dimensional (3-D) reconstruction. The techniques include identifying image points without correspondences, using existing correspondences and/or other information to generate approximated correspondences, and cross-checking the approximated correspondences to determine whether the approximated correspondences should be used for the image processing.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/32 - Determination of transform parameters for the alignment of images, i.e. image registration using correlation-based methods
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 7/90 - Determination of colour characteristics
G06T 7/536 - Depth or shape recovery from perspective effects, e.g. by using vanishing points
H04N 13/139 - Format conversion, e.g. of frame-rate or size
H04N 13/122 - Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or by adding monoscopic depth cues
G06T 1/00 - General purpose image data processing
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
An illumination system is provided for an optical system that includes an imaging device for acquiring an image of a target, for decoding of a symbol or other analysis. The illumination system can include a first light source configured to provide illumination of a first wavelength, a second light source configured to provide illumination of a second wavelength that is different from the first wavelength. The light sources can be controlled for operations that include: illuminating the target with the first and second light sources simultaneously for acquisition of the image of the target; and altering an illumination output of at least one of the first light source or the second light source, while maintaining non-zero illumination output for at least one of the first light source or the second light source, to indicate a status of the optical system.
This invention applies dynamic weighting between a point-to-plane and a point-to-edge metric on a per-edge basis in an acquired image using a vision system. This allows an applied ICP technique to be significantly more robust to a variety of object geometries and/or occlusions. The system and method herein provide an energy function that is minimized to generate candidate 3D poses for use in aligning runtime 3D image data of an object with model 3D image data. Since normals are much more accurate than edges, the use of normals is desirable when possible. However, in some use cases, such as a plane, edges provide information in directions that the normals do not. Hence the system and method define a "normal information matrix", which represents the directions in which sufficient information is present. Performing (e.g.) a principal component analysis (PCA) on this matrix provides a basis for the available information.
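The normal information matrix idea can be illustrated in 2D, where the eigen-decomposition of the symmetric matrix M = Σ n nᵀ has a closed form: a small eigenvalue marks a direction in which point-to-plane terms carry little information, so point-to-edge terms should be weighted up there. The 2D restriction, function names, and thresholds are assumptions of this sketch; the patent's 3D case would use a 3x3 matrix and full PCA.

```python
import math

def normal_info_matrix(normals):
    """Accumulate the 2D normal information matrix M = sum(n n^T),
    stored as (a, b, c) with M = [[a, b], [b, c]]."""
    a = b = c = 0.0
    for nx, ny in normals:
        a += nx * nx
        b += nx * ny
        c += ny * ny
    return a, b, c

def weak_direction(normals):
    """Return (eigenvalue, eigenvector) of the least-constrained
    direction of the normal information matrix."""
    a, b, c = normal_info_matrix(normals)
    half = 0.5 * (a + c)
    root = math.hypot(0.5 * (a - c), b)
    lam = half - root                      # smallest eigenvalue
    if abs(b) > 1e-12:
        v = (lam - c, b)                   # eigenvector for lam
    else:
        v = (1.0, 0.0) if a <= c else (0.0, 1.0)
    norm = math.hypot(*v)
    return lam, (v[0] / norm, v[1] / norm)

# All normals point along +x (e.g. a single flat face): zero
# information along y, so edges must constrain that direction.
lam, v = weak_direction([(1.0, 0.0)] * 10)
print(lam, v)  # -> 0.0 (0.0, 1.0)
```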
G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
H04N 13/275 - Générateurs de signaux d’images à partir de modèles 3D d’objets, p.ex. des signaux d’images stéréoscopiques générés par ordinateur
G06T 17/10 - Description de volumes, p.ex. de cylindres, de cubes ou utilisant la GSC [géométrie solide constructive]
G06T 7/33 - Détermination des paramètres de transformation pour l'alignement des images, c. à d. recalage des images utilisant des procédés basés sur les caractéristiques
G06V 30/24 - Reconnaissance de caractères caractérisée par la méthode de traitement ou de reconnaissance
G06F 18/22 - Critères d'appariement, p.ex. mesures de proximité
G06V 10/77 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant l’intégration et la réduction de données, p.ex. analyse en composantes principales [PCA] ou analyse en composantes indépendantes [ ICA] ou cartes auto-organisatrices [SOM]; Séparation aveugle de source
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud is received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and width that extends along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane are determined. An estimated volume of an object captured by the 3D point cloud is determined based on the calculated number of 3D points in each bin and the height of each bin.
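The bin-based volume estimate reads directly as code (a hypothetical Python sketch; the grid indexing, the max-z height rule, and the sampled box are assumptions made for illustration only):

```python
def estimate_volume(points, cell=1.0):
    """Estimate object volume above the z=0 reference plane by binning
    3D points into a 2D grid along that plane.

    Each occupied bin contributes cell_area * height, where height is
    taken here as the maximum z of the points falling in the bin.
    """
    heights = {}
    for x, y, z in points:
        key = (int(x // cell), int(y // cell))  # 2D bin index
        if z > heights.get(key, 0.0):
            heights[key] = z
    return cell * cell * sum(heights.values())

# A 2 x 2 x 1 box sampled on a regular grid of surface points.
pts = [(x * 0.5, y * 0.5, 1.0) for x in range(4) for y in range(4)]
vol = estimate_volume(pts, cell=0.5)
```

With a 0.5-unit cell, all sixteen samples land in distinct bins of height 1.0, so the estimate recovers the 4.0-unit volume of the sampled box.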
G06T 7/62 - Analyse des attributs géométriques de la superficie, du périmètre, du diamètre ou du volume
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
56.
System and method for three-dimensional scan of moving objects longer than the field of view
This invention provides a system and method for using an area scan sensor of a vision system, in conjunction with an encoder or other knowledge of motion, to capture an accurate measurement of an object larger than a single field of view (FOV) of the sensor. It identifies features/edges of the object, which are tracked from image to image, thereby providing a lightweight way to process the overall extents of the object for dimensioning purposes. Logic automatically determines if the object is longer than the FOV, and thereby causes a sequence of image acquisition snapshots to occur while the moving/conveyed object remains within the FOV until the object is no longer present in the FOV. At that point, acquisition ceases and the individual images are combined as segments in an overall image. These images can be processed to derive overall dimensions of the object based on input application details.
The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
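The one-dimensional case described above can be sketched in a few lines (illustrative Python only; a point reference and fixed-width bins are used here, though the text allows other reference geometries):

```python
import math

def distance_histogram(points, reference, bin_width, num_bins):
    """1D histogram: count 3D points by their distance to a reference
    point, adding each distance to the entry whose range contains it."""
    counts = [0] * num_bins
    rx, ry, rz = reference
    for x, y, z in points:
        d = math.dist((x, y, z), (rx, ry, rz))
        idx = int(d // bin_width)   # which entry's range covers d
        if idx < num_bins:
            counts[idx] += 1
    return counts

pts = [(1, 0, 0), (0, 2, 0), (0, 0, 2.5), (3, 0, 0)]
hist = distance_histogram(pts, (0, 0, 0), bin_width=1.0, num_bins=4)
```

The two-dimensional variant would bin pairs of orientation components the same way, with one range per component per bin.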
G06T 11/20 - Traçage à partir d'éléments de base, p.ex. de lignes ou de cercles
G06T 17/20 - Description filaire, p.ex. polygonalisation ou tessellation
G06T 7/50 - Récupération de la profondeur ou de la forme
G06T 7/66 - Analyse des attributs géométriques des moments d'image ou du centre de gravité
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G06V 10/50 - Extraction de caractéristiques d’images ou de vidéos en utilisant l’addition des valeurs d’intensité d’image; Analyse de projection
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
59.
Methods and apparatus for extracting profiles from three-dimensional images
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions is connected to neighboring representative 2D positions to generate the 2D profile.
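The binning and representative-position steps can be sketched as follows (a simplified Python illustration; the mean as the representative position and the fixed bin width are assumptions, not the claimed method):

```python
def profile_from_points(points_2d, bin_width):
    """Group 2D points into bins along the first axis and return one
    representative (mean) position per bin, ordered along that axis;
    connecting consecutive positions yields the 2D profile."""
    bins = {}
    for x, y in points_2d:
        bins.setdefault(int(x // bin_width), []).append((x, y))
    profile = []
    for key in sorted(bins):
        pts = bins[key]
        mx = sum(p[0] for p in pts) / len(pts)
        my = sum(p[1] for p in pts) / len(pts)
        profile.append((mx, my))
    return profile

pts = [(0.1, 1.0), (0.2, 3.0), (1.1, 5.0), (2.3, 4.0), (2.4, 6.0)]
prof = profile_from_points(pts, bin_width=1.0)
```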
G06F 18/2134 - Extraction de caractéristiques, p.ex. en transformant l'espace des caractéristiques; Synthétisations; Mappages, p.ex. procédés de sous-espace basée sur des critères de séparation, p.ex. analyse en composantes indépendantes
60.
METHODS AND APPARATUS FOR IDENTIFYING SURFACE FEATURES IN THREE-DIMENSIONAL IMAGES
The techniques described herein relate to methods, apparatus, and computer readable media configured to identify a surface feature of a portion of a three-dimensional (3D) point cloud. Data indicative of a path along a 3D point cloud is received, wherein the 3D point cloud comprises a plurality of 3D data points. A plurality of lists of 3D data points are generated, wherein: each list of 3D data points extends across the 3D point cloud at a location that intersects the received path; and each list of 3D data points intersects the received path at different locations. A characteristic associated with a surface feature is identified in at least some of the plurality of lists of 3D data points. The identified characteristics are grouped based on one or more properties of the identified characteristics. The surface feature is identified based on the grouped characteristics.
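A much-simplified version of the cross-section scheme (hypothetical Python; the constant-y slab geometry, the "deepest point" characteristic, and the dent example are invented for illustration, and the final grouping step is omitted):

```python
def cross_sections(points, num_lists, spacing=1.0):
    """Split 3D points into lists, one per constant-y slab, so each
    list crosses a path running in the y direction (simplified)."""
    lists = []
    for i in range(num_lists):
        y0 = i * spacing
        lists.append([p for p in points if y0 <= p[1] < y0 + spacing])
    return lists

def find_dents(lists, depth=0.5):
    """Identify a characteristic per list: the deepest point below the
    z=0 surface, kept only if it exceeds the depth threshold."""
    hits = []
    for pts in lists:
        low = min(pts, key=lambda p: p[2], default=None)
        if low is not None and low[2] <= -depth:
            hits.append(low)
    return hits

# Flat surface with a two-point dent crossing the y=1 and y=2 slabs.
surface = [(x, y, 0.0) for x in range(3) for y in range(4)]
surface += [(1, 1, -1.0), (1, 2, -0.8)]
hits = find_dents(cross_sections(surface, num_lists=4))
```

Grouping the two detections by their proximity along y would then identify a single elongated surface feature.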
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06T 17/00 - Modélisation tridimensionnelle [3D] pour infographie
61.
Composite three-dimensional blob tool and method for operating the same
This invention provides a system and method that performs 3D imaging of a complex object, where image data is likely lost. Available 3D image data, in combination with an absence/loss of image data, allows computation of x, y and z dimensions. Absence/loss of data is assumed to be just another type of image data, and represents the presence of something that has prevented accurate data from being generated in the subject image. Segments of data can be connected to areas of absent data and generate a maximum bounding box. The shadow that this object generates can be represented as negative or missing data, but is not representative of the physical object. The height from the positive data, the object shadow size based on that height, the location in the FOV, and the ray angles that generate the images, are estimated and the object shadow size is removed from the result.
A base station or handheld device can be equipped with a latch system or a multi-hinge arrangement for electrical contacts. The latch system can be adjustable between different latching configurations in which the base station and handheld device are retained together by different degrees. The multi-hinge arrangement can provide rotation about multiple axes to provide rolling contact between electrical contacts of the base station and the handheld device.
A computer-implemented method for scanning a side of an object to identify a region of interest is provided. The method can include determining, using one or more computing devices, a distance between a side of an object and an imaging device, determining, using the one or more computing devices, a scanning pattern for the imaging device, which includes a controllable mirror, based on the distance between the side of the object and the imaging device, moving the controllable mirror according to the scanning pattern to acquire, using the one or more computing devices and the imaging device, a plurality of images of the side of the object, and identifying, using the one or more computing devices, the region of interest based on the plurality of images.
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G05D 1/02 - Commande de la position ou du cap par référence à un système à deux dimensions
G02B 26/08 - Dispositifs ou dispositions optiques pour la commande de la lumière utilisant des éléments optiques mobiles ou déformables pour commander la direction de la lumière
G05D 1/00 - Commande de la position, du cap, de l'altitude ou de l'attitude des véhicules terrestres, aquatiques, aériens ou spatiaux, p.ex. pilote automatique
G06V 20/56 - Contexte ou environnement de l’image à l’extérieur d’un véhicule à partir de capteurs embarqués
G06V 10/147 - Caractéristiques optiques de l’appareil qui effectue l’acquisition ou des dispositifs d’éclairage - Détails de capteurs, p.ex. lentilles de capteurs
64.
Body portion of a lighting device for imaging systems
Disclosed is a defect inspection device for determining an anomaly of an inspection object. The defect inspection device may include: a lighting system, which includes a light source for transmitting light onto the inspection object, and a dynamic diffuser located between the light source and the inspection object and capable of controlling a diffusivity of light transmitted onto the inspection object; and one or more processors for controlling the dynamic diffuser based on characteristics of the inspection object.
Disclosed is a defect inspection device. The defect inspection device may include a lighting system configured to transmit a lighting pattern having different illuminances for each area on a surface of an inspection object; a photographing unit for obtaining image data of the inspection object; one or more processors for processing the image data; and a memory for storing a deep learning-based model. In addition, the one or more processors are adapted to: control the lighting system to transmit a lighting pattern having a different illuminance for each area on the surface of the inspection object; input image data obtained by the photographing unit into the deep learning-based model, wherein the image data includes a rapid change of illuminance in at least a part of the object surface; and determine a defect on the surface of the inspection object using the deep learning-based model.
This invention provides a system and method for calibration of a 3D vision system using a multi-layer 3D calibration target that removes the requirement of accurate pre-calibration of the target. The system and method acquires images of the multi-layer 3D calibration target at different spatial locations and at different times, and computes the orientation difference of the 3D calibration target between the two acquisitions. The technique can be used to perform vision-based single-plane orientation repeatability inspection and monitoring. By applying this technique to an assembly working plane, vision-based assembly working plane orientation repeatability inspection and monitoring can occur. Combined with a moving robot end effector, this technique provides vision-based robot end-effector orientation repeatability inspection and monitoring. Vision-guided adjustment of two planes to achieve parallelism can be achieved. The system and method operates to perform precise vision-guided robot setup to achieve parallelism of the robot's end-effector and the assembly working plane.
A modular vision system that can include a housing with a faceplate and a first and second optical module mounted to the faceplate. Each of the first and second optical modules can include a mounting body, a rectangular image sensor, and an imaging lens that defines an optical axis and a field of view. The first optical module can be configured to be mounted to the faceplate in a first plurality of mounting orientations and the second optical module can be configured to be mounted to the faceplate in a second plurality of mounting orientations. The first and second optical modules can thus collectively provide a plurality of imaging configurations.
H04N 23/45 - Caméras ou modules de caméras comprenant des capteurs d'images électroniques; Leur commande pour générer des signaux d'image à partir de plusieurs capteurs d'image de type différent ou fonctionnant dans des modes différents, p. ex. avec un capteur CMOS pour les images en mouvement en combinaison avec un dispositif à couplage de charge [CCD]
H04N 23/54 - Montage de tubes analyseurs, de capteurs d'images électroniques, de bobines de déviation ou de focalisation
H04N 23/55 - Pièces optiques spécialement adaptées aux capteurs d'images électroniques; Leur montage
H04N 23/695 - Commande de la direction de la caméra pour modifier le champ de vision, p. ex. par un panoramique, une inclinaison ou en fonction du suivi des objets
69.
Methods and apparatus for using range data to predict object features
Embodiments relate to predicting height information for an object. First distance data is determined at a first time when an object is at a first position that is only partially within the field-of-view. Second distance data is determined at a second, later time when the object is at a second, different position that is only partially within the field-of-view. A distance measurement model that models a physical parameter of the object within the field-of-view is determined for the object based on the first and second distance data. Third distance data indicative of an estimated distance to the object prior to the object being entirely within the field-of-view of the distance sensing device is determined based on the first distance data, the second distance data, and the distance measurement model. Data indicative of a height of the object is determined based on the third distance data.
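The extrapolation step can be illustrated with a linear model (a deliberate simplification in Python; the actual distance measurement model is unspecified here, and the function names, mounting height, and readings are invented for illustration):

```python
def fit_distance_model(t1, d1, t2, d2):
    """Fit a linear distance-vs-time model from two measurements taken
    while the object is only partially within the field of view."""
    rate = (d2 - d1) / (t2 - t1)
    return lambda t: d1 + rate * (t - t1)

def predict_height(sensor_height, t1, d1, t2, d2, t_query):
    """Estimate object height before the object is fully in view:
    height = sensor mounting height minus the predicted distance
    to the object's top at the query time."""
    model = fit_distance_model(t1, d1, t2, d2)
    return sensor_height - model(t_query)

# Sensor mounted 3.0 m above the belt; readings shrink toward 2.0 m
# as more of a 1.0 m-tall object enters the field of view.
h = predict_height(3.0, t1=0.0, d1=2.4, t2=1.0, d2=2.2, t_query=2.0)
```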
G01B 11/06 - Dispositions pour la mesure caractérisées par l'utilisation de techniques optiques pour mesurer la longueur, la largeur ou l'épaisseur pour mesurer l'épaisseur
G01B 11/14 - Dispositions pour la mesure caractérisées par l'utilisation de techniques optiques pour mesurer la distance ou la marge entre des objets ou des ouvertures espacés
G01B 11/28 - Dispositions pour la mesure caractérisées par l'utilisation de techniques optiques pour mesurer des superficies
This invention provides a vision system camera, and associated methods of operation, having a multi-core processor, high-speed, high-resolution imager, FOVE, auto-focus lens and imager-connected pre-processor to pre-process image data, which together provide the acquisition and processing speed, as well as the image resolution, that are highly desirable in a wide range of applications. This arrangement effectively scans objects that require a wide field of view, vary in size and move relatively quickly with respect to the system field of view. This vision system provides a physical package with a wide variety of physical interconnections to support various options and control functions. The package effectively dissipates internally generated heat by arranging components to optimize heat transfer to the ambient environment and includes dissipating structure (e.g. fins) to facilitate such transfer. The system also enables a wide range of multi-core processes to optimize and load-balance both image processing and system operation (i.e. auto-regulation tasks).
Vision systems and methods for acquiring an image of an image scene and/or measuring a three-dimensional location of an object are disclosed. The vision systems can include a single image sensor, a first optical path, and a second optical path. The first optical path can be selectively transmissive of a first light, the second optical path can be selectively transmissive of a second light, and the first and second light can have a different distinguishing characteristic.
H04N 13/218 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant un seul capteur d’images 2D utilisant le multiplexage spatial
H04N 13/254 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques en combinaison avec des sources de rayonnement électromagnétique pour l’éclairage du sujet
G02B 30/35 - Stéréoscopes fournissant une paire stéréoscopique d'images séparées correspondant à des vues déplacées parallèlement du même objet, p.ex. visionneuses de diapositives 3D utilisant des éléments optiques réfléchissants dans le chemin optique entre les images et les observateurs
This invention provides a system and method for finding line features in an image that allows multiple lines to be efficiently and accurately identified and characterized. When lines are identified, the user can train the system to associate predetermined (e.g. text) labels with respect to such lines. These labels can be used to define neural net classifiers. The neural net operates at runtime to identify and score lines in a runtime image that are found using a line-finding process. The found lines can be displayed to the user with labels and an associated probability score map based upon the neural net results. Lines that are not labeled are generally deemed to have a low score, and are either not flagged by the interface, or identified as not relevant.
G06T 7/143 - Découpage; Détection de bords impliquant des approches probabilistes, p.ex. la modélisation à "champs aléatoires de Markov [MRF]"
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 10/50 - Extraction de caractéristiques d’images ou de vidéos en utilisant l’addition des valeurs d’intensité d’image; Analyse de projection
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06F 18/40 - Dispositions logicielles spécialement adaptées à la reconnaissance des formes, p.ex. interfaces utilisateur ou boîtes à outils à cet effet
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
G06F 18/2415 - Techniques de classification relatives au modèle de classification, p.ex. approches paramétriques ou non paramétriques basées sur des modèles paramétriques ou probabilistes, p.ex. basées sur un rapport de vraisemblance ou un taux de faux positifs par rapport à un taux de faux négatifs
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
74.
Systems and method for vision inspection with multiple types of light
Systems and methods are provided for acquiring images of objects. Light of different types (e.g., different polarization orientations) can be directed onto an object from different respective directions (e.g., from different sides of the object). A single image acquisition can be executed in order to acquire different sub-images corresponding to the different light types. An image of a surface of the object, including representation of surface features of the surface, can be generated based on the sub-images.
H04N 13/254 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques en combinaison avec des sources de rayonnement électromagnétique pour l’éclairage du sujet
G06T 7/586 - Récupération de la profondeur ou de la forme à partir de plusieurs images à partir de plusieurs sources de lumière, p.ex. stéréophotométrie
H04N 13/218 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant un seul capteur d’images 2D utilisant le multiplexage spatial
H04N 13/243 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant au moins trois capteurs d’images 2D
75.
Latch and hinge systems for base stations and handheld devices
A base station or handheld device can be equipped with a latch system or a multi-hinge arrangement for electrical contacts. The latch system can be adjustable between different latching configurations in which the base station and handheld device are retained together by different degrees. The multi-hinge arrangement can provide rotation about multiple axes to provide rolling contact between electrical contacts of the base station and the handheld device.
G06K 7/10 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire
G06K 7/14 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire utilisant la lumière sans sélection des longueurs d'onde, p.ex. lecture de la lumière blanche réfléchie
H05K 5/02 - Enveloppes, coffrets ou tiroirs pour appareils électriques - Détails
76.
On-axis aimer for vision system and multi-range illuminator for same
This invention provides an aimer assembly for a vision system that is coaxial (on-axis) with the camera optical axis, thus providing an aligned aim point at a wide range of working distances. The aimer includes a projecting light element located off the camera optical axis. The beam and received light from the imaged (illuminated) scene are selectively reflected or transmitted through a dichroic mirror assembly in a manner that permits the beam to be aligned with the optical axis and projected to the scene while only light from the scene is received by the sensor. The aimer beam and illuminator employ differing light wavelengths. In a further embodiment, an internal illuminator includes a plurality of light sources below the camera optical axis. Some of the light sources are covered by a prismatic structure for close distance, and other light sources are collimated, projecting over a longer distance.
G06K 19/00 - Supports d'enregistrement pour utilisation avec des machines et avec au moins une partie prévue pour supporter des marques numériques
G06K 7/015 - Alignement ou centrage du dispositif de lecture par rapport au support d'enregistrement
F21V 13/04 - Combinaisons de deux sortes d'éléments uniquement les éléments étant des réflecteurs et des réfracteurs
G03B 3/00 - Dispositions pour la mise au point présentant un intérêt général pour les appareils photographiques, les appareils de projection ou les tireuses
G06K 7/10 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire
F21V 5/04 - Réfracteurs pour sources lumineuses de forme lenticulaire
77.
Machine vision system and method with steerable mirror
Systems and methods are provided for acquiring images of objects using an imaging device and a controllable mirror. The controllable mirror can be controlled to change a field of view for the imaging device, including so as to acquire images of different locations, of different parts of an object, or with different degrees of zoom.
This invention provides a system and method for finding multiple line features in an image. Two related steps are used to identify line features. First, the process computes x and y-components of the gradient field at each image location, projects the gradient field over a plurality of subregions, and detects a plurality of gradient extrema, yielding a plurality of edge points with position and gradient. Next, the process iteratively chooses two edge points, fits a model line to them, and if edge point gradients are consistent with the model, computes the full set of inlier points whose position and gradient are consistent with that model. The candidate line with the greatest inlier count is retained and the set of remaining outlier points is derived. The process then repeatedly applies the line fitting operation on this and subsequent outlier sets to find a plurality of line results. The process can be exhaustive or RANSAC-based.
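The candidate-and-inlier loop can be sketched as follows (illustrative Python; this exhaustive pair search uses point positions only and omits the gradient-consistency checks and the repeated outlier-set iteration described above):

```python
import itertools
import math

def fit_line(p, q):
    """Line through points p and q as (unit direction, anchor point)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n), p

def inliers(points, line, tol=0.1):
    """Points whose perpendicular distance to the line is within tol."""
    (dx, dy), (px, py) = line
    out = []
    for x, y in points:
        dist = abs((x - px) * dy - (y - py) * dx)
        if dist <= tol:
            out.append((x, y))
    return out

def find_line(points, tol=0.1):
    """Exhaustive variant of the candidate search: fit a line to every
    pair of edge points and keep the one with the most inliers."""
    best = None
    for p, q in itertools.combinations(points, 2):
        if p == q:          # skip degenerate coincident pairs
            continue
        cand = inliers(points, fit_line(p, q), tol)
        if best is None or len(cand) > len(best):
            best = cand
    return best

pts = [(i, 0.0) for i in range(10)] + [(3.0, 5.0), (7.0, 2.0)]
line_pts = find_line(pts)
```

Removing `line_pts` from `pts` and re-running on the remaining outliers would yield the next line result, as the text describes; a RANSAC variant would sample pairs randomly instead of exhaustively.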
Systems and methods are provided for acquiring images of objects using an imaging device and a controllable mirror. The controllable mirror can be controlled to change a field of view for the imaging device, including so as to acquire images of different locations, of different parts of an object, or with different degrees of zoom.
A calibration fixture that enables more accurate calibration of a touch probe on, for example, a CMM, with respect to the camera. The camera is mounted so that its optical axis is approximately or substantially parallel with the z-axis of the probe. The probe and workpiece are in relative motion along a plane defined by orthogonal x and y axes, and optionally along the z-axis and/or with rotation R about the z-axis. The calibration fixture is arranged to image from beneath the touch surface of the probe and, via a 180-degree prism structure, to transmit light from the probe touch point along the optical axis to the camera. Alternatively, two cameras respectively view the fiducial location relative to the CMM arm and the probe location when aligned on the fiducial. The fixture can define an integrated assembly with an optics block and a camera assembly.
G01B 21/04 - Dispositions pour la mesure ou leurs détails, où la technique de mesure n'est pas couverte par les autres groupes de la présente sous-classe, est non spécifiée ou est non significative pour mesurer la longueur, la largeur ou l'épaisseur en mesurant les coordonnées de points
H04N 17/00 - Diagnostic, test ou mesure, ou leurs détails, pour les systèmes de télévision
G01B 5/012 - Têtes de contact de palpeurs pour de telles machines
G12B 5/00 - Réglage de la position ou de l'attitude, p.ex. niveau d'instruments ou d'autres appareils, ou de leurs parties constitutives; Compensation des effets d'inclinaison ou d'accélération, p.ex. pour appareils d'optique
G01B 21/20 - Dispositions pour la mesure ou leurs détails, où la technique de mesure n'est pas couverte par les autres groupes de la présente sous-classe, est non spécifiée ou est non significative pour mesurer des contours ou des courbes, p.ex. pour déterminer un profil
Systems and methods reduce temperature-induced drift effects on a liquid lens used in a vision system. A feedback loop receives a temperature value from a temperature sensor and, based on the received temperature value, controls power to the heating element based on a difference between the measured temperature of the liquid lens and a predetermined control temperature, to maintain the temperature value within a predetermined control temperature range and reduce the effects of drift. A processor can also control a bias signal applied to the lens or a lens actuator to control temperature variations and the associated induced drift effects. An image sharpness can also be determined over a series of images, alone or in combination with controlling the temperature of the liquid lens, to adjust a focal distance of the lens.
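The feedback loop reduces, in outline, to a proportional controller (a toy Python sketch; the thermal plant model, gain, and control band are invented for illustration and do not reflect the patented controller):

```python
def regulate(read_temp, set_heater, target, band=0.5, gain=20.0, steps=50):
    """Drive the measured lens temperature toward the control target by
    setting heater power proportional to the temperature difference."""
    for _ in range(steps):
        error = target - read_temp()
        set_heater(max(0.0, gain * error))  # heater power cannot go negative
    return abs(target - read_temp()) <= band

# Toy thermal plant: temperature relaxes toward ambient plus the
# heater's contribution on each control step.
state = {"temp": 20.0}

def read_temp():
    return state["temp"]

def set_heater(power):
    state["temp"] += 0.05 * (20.0 + power - state["temp"])

ok = regulate(read_temp, set_heater, target=30.0)
```

A real controller would also handle sensor noise and the bias-signal path mentioned above; this sketch only shows the difference-driven power adjustment.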
G02B 3/14 - Lentilles remplies d'un fluide ou à l'intérieur desquelles le vide a été fait à distance focale variable
G02B 7/00 - Montures, moyens de réglage ou raccords étanches à la lumière pour éléments optiques
G02B 7/08 - Montures, moyens de réglage ou raccords étanches à la lumière pour éléments optiques pour lentilles avec mécanisme de mise au point ou pour faire varier le grossissement adaptés pour fonctionner en combinaison avec un mécanisme de télécommande
G02B 7/02 - Montures, moyens de réglage ou raccords étanches à la lumière pour éléments optiques pour lentilles
G02B 26/00 - Dispositifs ou dispositions optiques pour la commande de la lumière utilisant des éléments optiques mobiles ou déformables
82.
Methods and apparatus for testing multiple fields for machine vision
The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a three-dimensional model. A three-dimensional model is stored, the three dimensional model comprising a set of probes. Three-dimensional data of an object is received, the three-dimensional data comprising a set of data entries. The three-dimensional data is converted into a set of fields, comprising generating a first field comprising a first set of values, where each value of the first set of values is indicative of a first characteristic of an associated one or more data entries from the set of data entries, and generating a second field comprising a second set of values, where each second value of the second set of values is indicative of a second characteristic of an associated one or more data entries from the set of data entries, wherein the second characteristic is different than the first characteristic. A pose of the three-dimensional model is tested with the set of fields, comprising testing the set of probes to the set of fields, to determine a score for the pose.
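One way to picture probes tested against multiple fields (hypothetical Python; the two characteristics, grid keys, tolerance, and per-probe scoring rule are invented stand-ins for the fields and probes described above):

```python
def make_fields(data_entries):
    """Convert 3D data entries into two fields over a 2D grid: one
    holding a height characteristic, the other an intensity-like
    characteristic (names illustrative only)."""
    height, intensity = {}, {}
    for (x, y, z, inten) in data_entries:
        key = (round(x), round(y))
        height[key] = max(z, height.get(key, float("-inf")))
        intensity[key] = inten
    return height, intensity

def score_pose(probes, fields, tol=0.25):
    """Score a pose: each probe tests every field at its location and
    contributes the fraction of fields whose value it matches."""
    total = 0.0
    for (key, expected) in probes:
        hits = sum(
            1 for field, want in zip(fields, expected)
            if key in field and abs(field[key] - want) <= tol
        )
        total += hits / len(fields)
    return total / len(probes)

entries = [(0, 0, 1.0, 0.5), (1, 0, 2.0, 0.6)]
fields = make_fields(entries)
probes = [((0, 0), (1.0, 0.5)), ((1, 0), (2.0, 0.9))]
score = score_pose(probes, fields)
```

Here the first probe matches both fields and the second matches only the height field, so the pose scores 0.75.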
This invention provides a system and method for selecting the correct profile from a range of peaks generated by analyzing a surface with multiple exposure levels applied at discrete intervals. The cloud of peak information is resolved, by comparison to a model profile, into a best candidate that accurately represents the object profile. Illustratively, a displacement sensor projects a line of illumination on the surface and receives reflected light at a sensor assembly at a set exposure level. A processor varies the exposure level setting in a plurality of discrete increments, and stores an image of the reflected light for each of the increments. A determination process combines the stored images and aligns the combined images with respect to a model image. Points from the combined images are selected based upon closeness to the model image to provide a candidate profile of the surface.
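The model-guided selection across exposures can be sketched as follows (illustrative Python; representing each exposure as per-column peak heights, using None for missing peaks, and choosing the closest-to-model point are simplifying assumptions):

```python
def best_candidate_profile(exposure_profiles, model_profile):
    """Pick, column by column, the peak (across all exposure levels)
    closest to the model profile at that column."""
    result = []
    for col, model_z in enumerate(model_profile):
        candidates = [p[col] for p in exposure_profiles if p[col] is not None]
        result.append(min(candidates, key=lambda z: abs(z - model_z)))
    return result

# Three exposure levels; None marks a missing peak at that exposure.
profiles = [
    [1.0, None, 3.9, 1.2],
    [1.1, 2.0, 2.95, 9.0],
    [5.0, 2.2, 3.1, 1.9],
]
model = [1.0, 2.0, 3.0, 2.0]
prof = best_candidate_profile(profiles, model)
```

Each column of the result comes from whichever exposure produced the most plausible peak, which is the intent of the candidate-selection step above.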
G01B 11/03 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by measuring coordinates of points
G06T 7/521 - Depth or shape recovery from the projection of structured light
H04N 5/235 - Circuitry for compensating for variation in the brightness of the object
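The point-selection step above can be sketched in a few lines (a minimal illustration under assumed names; the real system operates on aligned images rather than pre-extracted peak lists): for each sensor column, the candidate peak heights from the several exposures are compared against the aligned model profile, and the closest one is kept.

```python
# Hypothetical sketch of the selection step: for each column of the sensor,
# several candidate peak heights exist (one per exposure increment); keep the
# candidate closest to the aligned model profile to form the output profile.

def select_profile(candidates_per_column, model_profile):
    """candidates_per_column: list (one entry per column) of candidate heights.
    model_profile: expected height per column from the trained model."""
    profile = []
    for candidates, model_z in zip(candidates_per_column, model_profile):
        # Choose the peak whose height is nearest the model at this column.
        best = min(candidates, key=lambda z: abs(z - model_z))
        profile.append(best)
    return profile
```

For example, with candidates `[[1.0, 5.0], [2.0, 9.0]]` and model `[1.2, 8.0]`, the selected profile is `[1.0, 9.0]`: spurious peaks from over- or under-exposed increments are discarded because they sit far from the model.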
Techniques include systems, computerized methods, and computer readable media for creating a graphical program in a graphical program development environment. A spreadsheet node having an input terminal is instantiated in the graphical program. The spreadsheet node is associated with a spreadsheet that specifies a list of functions to be executed on a computing device, and the input terminal is connected to a first terminal of a first node in the graphical program, indicating a data connection between the first terminal of the first node and the input terminal of the spreadsheet node. The input terminal of the spreadsheet node is associated with a first cell in the spreadsheet, indicating that the first cell is to be populated with any data received by the input terminal. A human-readable file specifying the graphical program, including the spreadsheet node, is generated.
G05B 19/05 - Programmable logic controllers, e.g. simulating logic interconnections of signals according to ladder diagrams or function charts
G06Q 10/06 - Resources, workflow, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
85.
System and method for reduced-speckle laser line generation
An illumination apparatus for reducing speckle effect in light reflected off an illumination target includes a laser; a linear diffuser positioned in an optical path between the illumination target and the laser to diffuse collimated laser light into a planar fan of diffused light that spreads in one dimension across at least a portion of the illumination target; and a beam deflector that directs the collimated laser light incident on it to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficiently uncorrelated speckle patterns, at an image sensor, in light reflected from the intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.
G02B 27/48 - Laser speckle optics
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
G01B 11/14 - Measuring arrangements characterised by the use of optical techniques for measuring distance or clearance between spaced objects or spaced apertures
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
A system or method can analyze symbols on a set of objects having different sizes. The system can identify a characteristic object dimension corresponding to the set of objects. An image of a first object can be received, and a first virtual object boundary feature (e.g., an edge) can be identified for the first object in the image based on the characteristic object dimension. A first symbol can be identified in the image, and whether the first symbol is positioned on the first object can be determined based on the first virtual object boundary feature.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
B07C 3/14 - Apparatus characterised by the means used for detection of the destination using light-responsive detecting means
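The virtual-boundary test can be illustrated with a deliberately simplified one-dimensional sketch (the function name and geometry are assumptions; the actual system works with 2D boundary features in the image):

```python
# Hypothetical sketch: place a virtual object edge using the characteristic
# object dimension, then test whether a decoded symbol's position falls on the
# object. The 1D geometry along the transport direction is illustrative.

def symbol_on_object(object_lead_edge, characteristic_length, symbol_center):
    """object_lead_edge: image x of the object's detected leading edge.
    characteristic_length: typical object extent for this set of objects."""
    # Virtual trailing edge inferred from the characteristic dimension, so the
    # object's far side need not be visible in the image.
    virtual_trail_edge = object_lead_edge + characteristic_length
    return object_lead_edge <= symbol_center <= virtual_trail_edge
```

The point of the characteristic dimension is that a symbol can be attributed to an object even when only one real edge of that object is detectable.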
Described are methods, systems, apparatus, and computer program products for determining the presence of an object on a target surface. A machine vision system includes a first image capture device configured to image a first portion of a target surface from a first viewpoint and a second image capture device configured to image a second portion of the target surface from a second viewpoint. The machine vision system is configured to acquire a first image from the first image capture device and a second image from the second image capture device, rectify the first image and the second image, retrieve a disparity field, generate difference data by comparing, based on the mappings of the disparity field, image elements in the first rectified image with corresponding image elements in the second rectified image, and determine whether the difference data is indicative of an object on the target surface.
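A minimal sketch of the comparison step, under assumed names and data layouts (2D intensity lists and a per-pixel column-offset disparity field): when the surface is empty, the disparity field maps each pixel of one rectified image onto a matching pixel in the other, so large residuals suggest an object is present.

```python
# Hypothetical sketch: compare rectified images through a disparity field.
# Each pixel of the first image is matched against the pixel the disparity
# field maps it to in the second image; large residuals suggest an object
# sits on the (otherwise empty) target surface.

def difference_data(rect1, rect2, disparity):
    """rect1/rect2: 2D lists of intensities; disparity[y][x]: column offset
    mapping (x, y) in rect1 to (x + d, y) in rect2."""
    diffs = []
    for y, row in enumerate(rect1):
        for x, v1 in enumerate(row):
            x2 = x + disparity[y][x]
            if 0 <= x2 < len(rect2[y]):
                diffs.append(abs(v1 - rect2[y][x2]))
    return diffs

def object_present(diffs, threshold=10, fraction=0.1):
    # Flag an object when enough pixels disagree beyond the threshold.
    if not diffs:
        return False
    return sum(d > threshold for d in diffs) / len(diffs) > fraction
```

Because the disparity field encodes the empty surface's geometry, this check needs no explicit 3D reconstruction at runtime.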
The techniques described herein relate to methods, apparatus, and computer readable media configured to test a pose of a model against image data. Image data of an object is received, the image data comprising a set of data entries. A set of regions of the image data is determined, wherein each region in the set comprises an associated set of neighboring data entries from the set of data entries. Processed image data is generated, the processed image data comprising a set of cells that each have an associated value; generating the processed image data comprises, for each region in the set of regions, determining a maximum possible score of each data entry in the associated set of neighboring data entries and setting one or more of the cell values based on the determined maximum possible score. The pose of the model is then tested using the processed image data.
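The maximum-possible-score preprocessing resembles a grey-level dilation, sketched below under assumed names (a simplified stand-in, not the claimed method itself): each cell takes the maximum over its neighborhood, so one lookup bounds the best score any probe could achieve anywhere nearby, letting hopeless poses be rejected cheaply.

```python
# Hypothetical sketch of the preprocessing step: replace each cell with the
# maximum value over its neighborhood (a grey-level dilation), so a single
# lookup bounds the best score any probe could achieve nearby. Poses whose
# bound is already too low can be rejected without fine testing.

def max_score_image(data, radius=1):
    h, w = len(data), len(data[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighborhood = [
                data[ny][nx]
                for ny in range(max(0, y - radius), min(h, y + radius + 1))
                for nx in range(max(0, x - radius), min(w, x + radius + 1))
            ]
            out[y][x] = max(neighborhood)
    return out
```

Since the bound is conservative (never below the true best score in the region), coarse testing on the processed image cannot wrongly discard a good pose.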
This invention provides a vision system that is arranged to compensate for optical drift that can occur in certain variable lens assemblies, including, but not limited to, liquid lens arrangements. The system includes an image sensor operatively connected to a vision system processor, and a variable lens assembly that is controlled (e.g. by the vision processor or another range-determining device) to vary its focal distance. A positive lens assembly is configured to weaken the effect of the variable lens assembly over a predetermined operational range of the object from the positive lens assembly. The variable lens assembly is located adjacent to a front or rear focal point of the positive lens. The variable lens assembly illustratively comprises a liquid lens assembly that is inherently variable over approximately 20 diopters. In an embodiment, the lens barrel has a C-mount lens base.
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification
G02B 7/14 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses adapted to interchange lenses
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
H04N 5/217 - Circuitry for suppressing or minimising disturbance, e.g. moiré or halo, in the generation of the image signal
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
The present invention relates to optical imaging devices and methods for reading optical codes. The imaging device comprises a sensor, a lens, a plurality of illumination devices, and a plurality of reflective surfaces. The sensor is configured to sense using a predetermined number of lines of pixels, where the predetermined lines of pixels are arranged at a predetermined position. The lens has an imaging path along an optical axis. The plurality of illumination devices are configured to transmit an illumination pattern along the optical axis, and the plurality of reflective surfaces are configured to fold the optical axis.
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups
G02B 27/14 - Beam splitting or combining systems operating by reflection only
G02B 17/00 - Systems with reflecting surfaces, with or without refracting elements
91.
System and method for efficiently scoring probes in an image with a vision system
A system and method for scoring trained probes for use in analyzing one or more candidate poses of a runtime image is provided. A set of probes with location and gradient direction based on a trained model are applied to one or more candidate poses based upon a runtime image. The applied probes each respectively include a discrete set of position offsets with respect to the gradient direction thereof. A match score is computed for each of the probes, which includes estimating a best match position for each of the probes respectively relative to one of the offsets thereof, and generating a set of individual probe scores for each of the probes, respectively at the estimated best match position.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06F 18/28 - Determining representative reference patterns, e.g. by averaging or distorting; Generation of dictionaries
G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
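A minimal sketch of probe scoring with position offsets (function names, the offset set, and the use of gradient magnitude as the per-probe score are illustrative assumptions): each probe is sampled at a few discrete offsets along its trained gradient direction, its best-matching position is kept, and the per-probe scores are averaged into a pose score.

```python
# Hypothetical sketch: each trained probe carries a position and a gradient
# direction; at scoring time it is tried at discrete offsets along that
# direction, the best-matching offset is kept, and the individual probe
# scores are combined into a match score for the candidate pose.
import math

def probe_score(gradient_field, probe, offsets=(-1, 0, 1)):
    """probe: (x, y, direction) with direction in radians.
    gradient_field[y][x]: gradient magnitude at a pixel."""
    x, y, direction = probe
    dx, dy = math.cos(direction), math.sin(direction)
    best = 0.0
    for off in offsets:
        # Sample at a discrete offset along the probe's gradient direction.
        sx = int(round(x + off * dx))
        sy = int(round(y + off * dy))
        if 0 <= sy < len(gradient_field) and 0 <= sx < len(gradient_field[0]):
            best = max(best, gradient_field[sy][sx])  # estimated best match
    return best

def pose_score(gradient_field, probes):
    scores = [probe_score(gradient_field, p) for p in probes]
    return sum(scores) / max(len(scores), 1)
```

Searching only along the gradient direction tolerates small localization error of an edge (which shifts it along its normal) without paying for a full 2D search per probe.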
92.
Methods and apparatus for optimizing image acquisition of objects subject to illumination patterns
The techniques described herein relate to methods, apparatus, and computer readable media configured to determine parameters for image acquisition. One or more image sensors are each arranged to capture a set of images of a scene, and each image sensor comprises a set of adjustable imaging parameters. A projector is configured to project a moving pattern on the scene, wherein the projector comprises a set of adjustable projector parameters. The set of adjustable projector parameters and the set of adjustable imaging parameters are determined, based on a set of one or more constraints, to reduce noise in 3D data generated based on the set of images.
Evaluating a symbol on an object can include acquiring a first image of the object, including the symbol. A second image can be derived from the first image by determining a saturation threshold for the second image and, possibly, scaling pixel values to a reduced bit depth for the second image.
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06T 7/90 - Determination of colour characteristics
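The derivation step can be sketched as a clip-and-rescale (a minimal illustration under assumed names and an assumed 12-bit input; how the saturation threshold is chosen, e.g. from the image histogram, is not shown):

```python
# Hypothetical sketch: derive a working image from an acquired one by clipping
# at a saturation threshold and rescaling to a reduced bit depth (here 8-bit
# output from higher-bit-depth raw input).

def derive_image(pixels, saturation_threshold, out_bits=8):
    """pixels: 2D list of raw intensities (e.g. 12-bit)."""
    out_max = (1 << out_bits) - 1
    derived = []
    for row in pixels:
        new_row = []
        for v in row:
            clipped = min(v, saturation_threshold)  # saturate above threshold
            new_row.append(clipped * out_max // saturation_threshold)
        derived.append(new_row)
    return derived
```

Clipping before rescaling spends the reduced bit depth on the intensity range that actually carries symbol contrast, instead of on a few specular highlights.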
94.
Systems and methods for stitching sequential images of an object
A system may comprise: a transport device for moving at least one object, wherein at least one substantially planar surface of the object is moved in a known plane locally around a viewing area and is occluded except when it passes the viewing area; at least one 2D digital optical sensor configured to capture at least two sequential 2D digital images of the at least one substantially planar surface as it is moved in the known plane around the viewing area; and a controller operatively coupled to the 2D digital optical sensor, the controller performing the steps of: a) receiving a first digital image, b) receiving a second digital image, and c) stitching the first digital image and the second digital image using a stitching algorithm to generate a stitched image.
G06T 11/60 - Editing figures and text; Combining figures or text
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G06T 3/00 - Geometric image transformations in the plane of the image
G06V 20/40 - Scene-specific elements in video content
H04N 23/74 - Circuitry for compensating for brightness variation in the scene by influencing the scene brightness using illuminating means
H04N 23/698 - Control of the cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture
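Because the surface moves in a known plane, the overlap between sequential frames reduces to a search along the transport direction. A minimal stitching sketch under that assumption (names and the 1D row-offset search are illustrative, not the claimed algorithm):

```python
# Hypothetical sketch of the stitching step: the transport moves the surface a
# roughly known amount between frames, so a 1D overlap search along the motion
# direction suffices. The offset with the smallest mean overlap difference
# wins, and the second image's rows are appended past the overlap.

def stitch(img1, img2, min_overlap=1):
    """img1/img2: lists of rows (each row a list of intensities)."""
    best_offset, best_err = len(img1), float("inf")
    for offset in range(len(img1) - min_overlap, -1, -1):
        overlap = min(len(img1) - offset, len(img2))
        err = sum(
            abs(a - b)
            for r in range(overlap)
            for a, b in zip(img1[offset + r], img2[r])
        ) / overlap
        if err < best_err:
            best_err, best_offset = err, offset
    return img1[:best_offset] + img2
```

For instance, stitching `[[1], [2], [3]]` with `[[3], [4]]` finds a one-row overlap and yields `[[1], [2], [3], [4]]`.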
95.
Method for the three dimensional measurement of moving objects during a known movement
A 3D measurement method including: projecting a pattern sequence onto a moving object; capturing a first image sequence with a first camera and, synchronously with it, a second image sequence with a second camera; determining corresponding image points in the two sequences; and computing, for each pair of image points that is to be checked for correspondence, a trajectory of a potential object point from imaging parameters and from known movement data, the potential object point being imaged by both image points if they correspond. Object positions derived therefrom are imaged, at each of the capture times, into the image planes of the two cameras, so that corresponding image-point positions are determined as trajectories in the two cameras; the image points are then compared with each other along the predetermined image-point trajectories and examined for correspondence. Lastly, 3D measurement of the moved object is performed by triangulation.
G01B 11/30 - Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06T 7/579 - Depth or shape recovery from multiple images from motion
G06T 7/285 - Analysis of motion using a sequence of stereo image pairs
Methods, systems, and devices for 3D measurement and/or pattern generation are provided in accordance with various embodiments. Some embodiments include a method of pattern projection that may include projecting one or more patterns. Each pattern from the one or more patterns may include an arrangement of three or more symbols that are arranged such that for each symbol in the arrangement, a degree of similarity between said symbol and a most proximal of the remaining symbols in the arrangement is less than a degree of similarity between said symbol and a most distal of the remaining symbols in the arrangement. Some embodiments further include: illuminating an object using the one or more projected patterns; collecting one or more images of the illuminated object; and/or computing one or more 3D locations of the illuminated object based on the one or more projected patterns and the one or more collected images.
This invention provides a system and method for generating camera calibrations for a vision system camera along three discrete planes in a 3D volume space that uses at least two (e.g. parallel) object planes at different known heights. For any third (e.g. parallel) plane of a specified height, the system and method then automatically generates calibration data for the camera by interpolating/extrapolating from the first two calibrations. This alleviates the need to set the calibration object at more than two heights, speeding the calibration process and simplifying the user's calibration setup, and also allowing interpolation/extrapolation to heights that are space-constrained, and not readily accessible by a calibration object. The calibration plate can be calibrated at each height using a full 2D hand-eye calibration, or using a hand-eye calibration at the first height and then at a second height with translation to a known position along the height (e.g. Z) direction.
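The interpolation idea can be sketched with a deliberately simplified calibration model (a per-axis scale and offset standing in for a full 2D hand-eye calibration; names and the linear model are assumptions): given calibrations measured at two known heights, the calibration for any third parallel plane follows by linear interpolation or extrapolation in height.

```python
# Hypothetical sketch: with calibrations measured at two known heights,
# linearly interpolate (or extrapolate) a calibration for any third parallel
# plane. A simple scale + offset model stands in for the full 2D calibration
# at each height.

def interpolate_calibration(cal_z0, cal_z1, z0, z1, z):
    """cal_*: dicts with 'scale' (mm per pixel) and 'offset' (mm).
    Returns the calibration inferred for height z."""
    t = (z - z0) / (z1 - z0)  # t outside [0, 1] means extrapolation
    return {
        "scale": cal_z0["scale"] + t * (cal_z1["scale"] - cal_z0["scale"]),
        "offset": cal_z0["offset"] + t * (cal_z1["offset"] - cal_z0["offset"]),
    }
```

Extrapolation (t outside [0, 1]) is what allows calibrating heights the physical calibration object cannot reach, as the abstract notes for space-constrained planes.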
This invention provides a removably mountable lens assembly for a vision system camera that includes an integral auto-focusing liquid lens unit, in which the lens unit compensates for focus variations by employing a feedback control circuit that is integrated into the body of the lens assembly. The feedback control circuit receives motion information related to the bobbin of the lens from a position sensor (e.g. a Hall sensor) and uses this information internally to correct for motion variations that deviate from the lens setting position at a desired lens focal distance setting. Illustratively, the feedback circuit can be interconnected with one or more temperature sensors that adjust the lens setting position for a particular temperature value. In addition, the feedback circuit can communicate with an accelerometer that reads a direction of gravity and thereby corrects for potential sag in the lens membrane based upon the spatial orientation of the lens.
G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism
G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
G02B 7/30 - Systems for automatic generation of focusing signals using parallactic triangle with a base line
G02B 7/38 - Systems for automatic generation of focusing signals using image sharpness techniques, measured at different points on the optical axis
99.
System and method for auto-focusing a vision system camera on barcodes
This invention provides a system and method for detecting and acquiring one or more in-focus images of one or more barcodes within the field of view of an imaging device. A measurement process measures depth-of-field of barcode detection. A plurality of nominal coarse focus settings of a variable lens allow sampling, in steps, of a lens adjustment range corresponding to allowable distances between the one or more barcodes and the image sensor, so that a step size of the sampling is less than a fraction of the depth-of-field of barcode detection. An acquisition process acquires a nominal coarse focus image for each nominal coarse focus setting. A barcode detection process detects one or more barcode-like regions and respective likelihoods. A fine focus process fine-adjusts, for each high-likelihood barcode, the variable lens near a location of the barcode-like regions. The process acquires an image for decoding using the fine adjusted setting.
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
100.
Systems and methods for detecting motion during 3D data reconstruction
In some aspects, the techniques described herein relate to systems, methods, and computer readable media for detecting movement in a scene. A first temporal pixel image is generated based on a first set of images of a scene over time, and a second temporal pixel image is generated based on a second set of images. One or more derived values are determined based on values of the temporal pixels in the first temporal pixel image, the second temporal pixel image, or both. Correspondence data is determined based on the first temporal pixel image and the second temporal pixel image indicative of a set of correspondences between image points of the first set of images and image points of the second set of images. An indication of whether there is a likelihood of motion in the scene is determined based on the one or more derived values and the correspondence data.
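A loose sketch of the temporal-pixel idea (names, the spread statistic, and the voting rule are illustrative assumptions; the actual derived values and decision logic are not specified in the abstract): each camera's frames are stacked into per-pixel intensity sequences, a derived spread value is computed per pixel, and motion is flagged when pixels with large temporal spread also lack valid correspondences.

```python
# Hypothetical sketch: stack each camera's images into temporal pixels (one
# intensity sequence per pixel), derive a per-pixel spread value, and flag
# likely motion when pixels with a large temporal spread also have no valid
# correspondence between the two image sets.

def temporal_pixels(images):
    """images: list of 2D frames -> 2D grid of per-pixel intensity tuples."""
    h, w = len(images[0]), len(images[0][0])
    return [[tuple(f[y][x] for f in images) for x in range(w)] for y in range(h)]

def motion_likely(tpi, correspondence_ok, spread_thresh=50):
    # A pixel votes for motion when its temporal spread is large and no valid
    # correspondence was found for it in the other camera's temporal image.
    votes = 0
    for y, row in enumerate(tpi):
        for x, seq in enumerate(row):
            spread = max(seq) - min(seq)
            if spread > spread_thresh and not correspondence_ok[y][x]:
                votes += 1
    return votes > 0
```

Combining the derived values with correspondence data, as the abstract describes, distinguishes intensity changes caused by the projected pattern (which still correspond across cameras) from changes caused by scene motion (which break correspondence).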