Cognex Corporation

United States of America

1-77 of 77 for Cognex Corporation

Text search
Patent
International - WIPO
Excluding subsidiaries
Results for patents

1.

SYSTEMS AND METHODS FOR DEFLECTOMETRY

      
Application Number US2023075730
Publication Number 2024/076922
Status Granted - in force
Filing Date 2023-10-02
Publication Date 2024-04-11
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Jacobson, Lowell D.
  • Wang, Lei

Abstract

Systems and methods for deflectometry are disclosed. The deflectometry display is divided into subregions, with the subregions collectively covering the entire display. Deflectometry data sets are acquired for each of the subregions and all of the data is processed to compute fused deflectometry images having enhanced quality. By using display subregions, smaller portions of the object of interest are illuminated, so the amount of diffuse reflection is correspondingly reduced. By focusing on smaller regions of deflectometry patterns, the ratio of specular-to-diffuse reflection intensity can be increased. This allows display brightness and camera acquisition time to be increased without saturation and improves signal-to-noise ratio quality of the specular signal, which improves the quality of the subsequently computed deflectometry images.
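
The acquisition loop described above can be pictured with a minimal sketch (not Cognex's implementation): the display is split into tiles, a phase-shifted fringe set is shown in one tile at a time, and the per-tile measurements are fused into a single phase map. The display/camera hooks `show_pattern` and `grab_frame`, the tile grid, and the fringe period are illustrative placeholders.

```python
import numpy as np

def acquire_fused_phase(disp_h, disp_w, show_pattern, grab_frame,
                        tiles=(2, 2), n_shifts=4, period=32.0):
    """Fused wrapped-phase map from per-subregion deflectometry acquisitions."""
    ys = np.linspace(0, disp_h, tiles[0] + 1, dtype=int)
    xs = np.linspace(0, disp_w, tiles[1] + 1, dtype=int)
    num = den = 0.0
    for r in range(tiles[0]):
        for c in range(tiles[1]):
            for k in range(n_shifts):
                shift = 2 * np.pi * k / n_shifts
                pattern = np.zeros((disp_h, disp_w))
                xx = np.arange(xs[c], xs[c + 1])
                # Fringes only inside the active tile; the rest of the display
                # stays dark, which limits diffuse reflection from the part.
                pattern[ys[r]:ys[r + 1], xs[c]:xs[c + 1]] = \
                    0.5 + 0.5 * np.cos(2 * np.pi * xx / period + shift)
                show_pattern(pattern)
                img = grab_frame().astype(float)
                num = num + img * np.sin(shift)
                den = den + img * np.cos(shift)
    # Standard N-step phase-shifting estimate, fused over all subregions.
    return np.arctan2(-num, den)
```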

IPC Classes

  • G01N 21/88 - Investigating the presence of flaws, defects or contamination
  • G06T 7/00 - Image analysis
  • G01B 11/30 - Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces

2.

SYSTEMS AND METHODS FOR CONFIGURING MACHINE VISION TUNNELS

      
Application Number US2023074949
Publication Number 2024/064924
Status Granted - in force
Filing Date 2023-09-22
Publication Date 2024-03-28
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Sanz Rodriguez, Saul
  • Ruetten, Jens
  • Depre, Tony
  • Stroo, Bart

Abstract

Systems and methods are provided for generating machine vision tunnel configurations. The systems and methods described herein may automatically generate a configuration summary of tunnel configurations for a prospective machine vision tunnel based on received parameters. The configuration summary may be modified via operator interaction with the configuration summary. The systems and methods described herein may also automatically generate and transmit a bill of materials report, a tunnel commissioning report, or a graphical representation of an approved tunnel configuration, including generating some or all of these data sets dynamically in response to operator inputs.

IPC Classes

3.

METHODS AND APPARATUS FOR DETERMINING ORIENTATIONS OF AN OBJECT IN THREE-DIMENSIONAL DATA

      
Application Number US2023072102
Publication Number 2024/036320
Status Granted - in force
Filing Date 2023-08-11
Publication Date 2024-02-15
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Vaidya, Nitin, M.
  • Bogan, Nathaniel

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a candidate three-dimensional (3D) orientation of an object represented by a three-dimensional (3D) point cloud. The method includes receiving data indicative of a 3D point cloud comprising a plurality of 3D points, determining a first histogram for the plurality of 3D points based on geometric features determined from the plurality of 3D points, accessing data indicative of a second histogram of geometric features of a 3D representation of a reference object, computing, for each of a plurality of different rotations between the first histogram and the second histogram in 3D space, a scoring metric for the associated rotation, and determining the candidate 3D orientation based on the scoring metrics of the plurality of different rotations.
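
As a rough illustration of the histogram-and-rotation-scoring idea, assuming (which the abstract does not state) that the geometric features are surface-normal directions: the sketch below builds 2D orientation histograms for the scene and a reference, then scores a coarse sweep of candidate rotations about one axis by histogram correlation.

```python
import numpy as np

def normal_histogram(normals, bins=(18, 36)):
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    elev = np.arccos(np.clip(n[:, 2], -1, 1))           # 0..pi
    azim = np.arctan2(n[:, 1], n[:, 0]) % (2 * np.pi)   # 0..2pi
    h, _, _ = np.histogram2d(elev, azim, bins=bins,
                             range=[[0, np.pi], [0, 2 * np.pi]])
    return h / max(h.sum(), 1)

def best_yaw(scene_normals, ref_normals, steps=72):
    """Return (score, yaw) of the best-scoring rotation about the Z axis."""
    h_scene = normal_histogram(scene_normals)
    best = (-1.0, 0.0)
    for yaw in np.linspace(0, 2 * np.pi, steps, endpoint=False):
        c, s = np.cos(yaw), np.sin(yaw)
        R = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        h_ref = normal_histogram(ref_normals @ R.T)
        score = float((h_scene * h_ref).sum())   # correlation-style metric
        if score > best[0]:
            best = (score, float(yaw))
    return best
```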

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

4.

SYSTEMS AND METHODS FOR COMMISSIONING A MACHINE VISION SYSTEM

      
Application Number US2023066773
Publication Number 2023/220590
Status Granted - in force
Filing Date 2023-05-09
Publication Date 2023-11-16
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Wurz, Caitlin
  • Liu, Humberto Andres Leon
  • Atkins III, Henry
  • Tanmay, Sinha
  • Surana, Deepak
  • Gauthier, Georges
  • El-Barkouky, Ahmed
  • Brodeur, Patrick
  • Lutzke, Patrick
  • Ruetten, Jens
  • Depre, Tony

Abstract

Methods and systems are provided for commissioning machine vision systems. The methods and systems described herein may automatically configure, or otherwise assist users in configuring, a machine vision system based on a specification package.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

5.

SYSTEM AND METHOD FOR FIELD CALIBRATION OF A VISION SYSTEM

      
Application Number US2023066778
Publication Number 2023/220593
Status Granted - in force
Filing Date 2023-05-09
Publication Date 2023-11-16
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Wurz, Caitlin
  • Liu, Humberto Andres Leon
  • Brodeur, Patrick
  • Gauthier, Georges
  • Sposato, Kyle
  • Corbett, Michael
  • El-Barkouky, Ahmed

Abstract

A method for three-dimensional (3D) field calibration of a machine vision system includes receiving a set of calibration parameters and an identification of one or more machine vision system imaging devices, determining a camera acquisition parameter for calibration based on the set of calibration parameters, validating the set of calibration parameters and the camera acquisition parameter, and controlling the imaging device(s) to collect image data of a calibration target. The image data may be collected using the determined camera acquisition parameter. The method further includes generating a set of calibration data for the imaging device(s) using the collected image data for the imaging device(s). The set of calibration data can include a maximum error. The method further includes generating a report including the set of calibration data for the imaging device(s) and an indication of whether the maximum error for the imaging device(s) is within an acceptable tolerance and displaying the report on a display.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

6.

SYSTEM AND METHOD FOR DYNAMIC TESTING OF A MACHINE VISION SYSTEM

      
Application Number US2023066779
Publication Number 2023/220594
Status Granted - in force
Filing Date 2023-05-09
Publication Date 2023-11-16
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Wurz, Caitlin
  • Liu, Humberto Andres Leon
  • Brodeur, Patrick
  • Gauthier, Georges
  • Sposato, Kyle
  • Corbett, Michael
  • Sanz Rodriguez, Saul
  • Ruetten, Jens

Abstract

A method for dynamic testing of a machine vision system includes receiving a set of testing parameters and a selection of a tunnel system. The machine vision system can include the tunnel system and the tunnel system can include a conveyor and at least one imaging device. The method can further include validating the testing parameters and controlling the at least one imaging device to acquire a set of image data of a testing target positioned at a predetermined justification on the conveyor. The testing target can include a plurality of target symbols. The method can further include determining a test result by analyzing the set of image data to determine if the at least one imaging device reads a target symbol associated with the at least one imaging device and generating a report including the test result.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

7.

MACHINE VISION SYSTEM AND METHOD WITH HYBRID ZOOM OPTICAL ASSEMBLY

      
Application Number US2023066413
Publication Number 2023/212735
Status Granted - in force
Filing Date 2023-04-28
Publication Date 2023-11-02
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Fernandez Dorado, Jose
  • Nunnink, Laurens

Abstract

An optical assembly (120) for a machine vision system (100) having an image sensor (104) includes a lens assembly (108) and a motor system (118) coupled to the lens assembly (108). The lens assembly (108) can include a plurality of solid lens elements (126) and a liquid lens (128), where the liquid lens (128) includes an adjustable membrane (136). The motor system (118) can be configured to move the lens assembly (108) to adjust a distance between the lens assembly (108) and the image sensor (104) of the vision system (100).

IPC Classes

  • G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof

8.

SYSTEM AND METHOD FOR USE OF POLARIZED LIGHT TO IMAGE TRANSPARENT MATERIALS APPLIED TO OBJECTS

      
Application Number US2023014354
Publication Number 2023/167984
Status Granted - in force
Filing Date 2023-03-02
Publication Date 2023-09-07
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Carey, Ben, R.
  • Norkett, Ryan, D.
  • Molnar, Gergely, G.

Abstract

This invention provides a system and method for inspecting transparent or translucent features on a substrate of an object. A vision system camera, having an image sensor that provides image data to a vision system processor, receives light from a field of view that includes the object through a light-polarizing filter assembly. An illumination source projects polarized light onto the substrate within the field of view. A vision system process locates and registers the substrate, and locates thereon, based upon registration, the transparent or translucent features. A vision system process then performs inspection on the features using predetermined thresholds. The substrate can be a shipping box on a conveyor, having flaps sealed at a seam by transparent tape. Alternatively, a plurality of illuminators or cameras can project and receive polarized light oriented in a plurality of polarization angles, which generates a plurality of images that are combined into a result image.
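
For the multi-angle variant mentioned at the end of the abstract, one standard way to combine polarization images (not necessarily the patented method) is to compute the degree of linear polarization from acquisitions at 0°, 45°, 90° and 135°, which tends to make specular, polarization-preserving materials such as clear tape stand out:

```python
import numpy as np

def degree_of_linear_polarization(i0, i45, i90, i135, eps=1e-6):
    i0, i45, i90, i135 = (np.asarray(a, dtype=float) for a in (i0, i45, i90, i135))
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity (Stokes S0)
    s1 = i0 - i90                        # Stokes S1
    s2 = i45 - i135                      # Stokes S2
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)  # 0 = unpolarized, 1 = fully polarized
```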

IPC Classes

  • G01N 21/21 - Polarisation-affecting properties
  • G01N 21/88 - Investigating the presence of flaws, defects or contamination
  • G01N 21/90 - Investigating the presence of flaws, defects or contamination in a container or its contents
  • G01N 21/95 - Investigating the presence of flaws, defects or contamination characterised by the material or shape of the object to be examined
  • G01N 21/84 - Systems specially adapted for particular applications

9.

CONFIGURABLE LIGHT EMISSION BY SELECTIVE BEAM-SWEEPING

      
Application Number US2023013644
Publication Number 2023/164009
Status Granted - in force
Filing Date 2023-02-22
Publication Date 2023-08-31
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Aljasem, Khalid
  • Eggert, Florian
  • Ruhnau, Thomas
  • Schweier, Andre
  • Filhaber, John

Abstract

An opto-electronic system includes a laser operable to produce a laser beam; an optical element including two or more beam-shaping portions, each of the two or more beam-shaping portions having a different optical property; a beam deflector arranged to sweep the laser beam across the optical element to produce output light; and electronics communicatively coupled with the laser, the beam deflector, or both the laser and the beam deflector. The electronics are configured to cause selective impingement of the laser beam onto a proper subset of the two or more beam-shaping portions of the optical element to modify one or more optical parameters of the output light.

IPC Classes

  • G02B 27/09 - Beam shaping, e.g. changing the cross-sectional area, not otherwise provided for
  • G02B 26/10 - Scanning systems

10.

SYSTEM AND METHOD FOR APPLYING DEEP LEARNING TOOLS TO MACHINE VISION AND INTERFACE FOR THE SAME

      
Application Number US2022018778
Publication Number 2023/075825
Status Granted - in force
Filing Date 2022-03-03
Publication Date 2023-05-04
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Wyss, Reto
  • Petry III, John, P.

Abstract

This invention overcomes disadvantages of the prior art by providing a vision system and method of use, and graphical user interface (GUI), which employs a camera assembly having an on-board processor of low to modest processing power. At least one vision system tool analyzes image data, and generates results therefrom, based upon a deep learning process. A training process provides training image data to a processor remote from the on-board processor to cause generation of the vision system tool therefrom, and provides a stored version of the vision system tool for runtime operation on the on-board processor. The GUI allows manipulation of thresholds applicable to the vision system tool and refinement of training of the vision system tool by the training process. A scoring process allows unlabeled images from a set of acquired and/or stored images to be selected automatically for labelling as training images using a computed confidence score.

IPC Classes

  • G06V 30/40 - Document-oriented image-based pattern recognition

11.

SYSTEMS AND METHODS FOR DETECTING OBJECTS

      
Application Number US2022046040
Publication Number 2023/059876
Status Granted - in force
Filing Date 2022-10-07
Publication Date 2023-04-13
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Liu, Zihan, Hans
  • Hanhart, Philippe
  • Wyss, Reto
  • Barker, Simon, Alaric

Abstract

The techniques described herein relate to computerized methods and apparatuses for detecting objects in an image. The techniques described herein further relate to computerized methods and apparatuses for detecting one or more objects using a pretrained machine learning model and one or more other machine learning models that can be trained in a field training process. The pre-trained machine learning model may be a deep machine learning model.

IPC Classes

  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06T 7/00 - Image analysis

12.

MODULAR MIRROR SUBSYSTEMS FOR MULTI-SIDE SCANNING

      
Application Number US2022041383
Publication Number 2023/028149
Status Granted - in force
Filing Date 2022-08-24
Publication Date 2023-03-02
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Sanz Rodriguez, Saul
  • Depre, Tony
  • Ruetten, Jens, III
  • Nunnink, Laurens

Abstract

A method for an imaging module can include rotating an imaging assembly that includes an imaging device about a first pivot point of a bracket to a select first orientation, fastening the imaging assembly to the bracket at the first orientation, rotating a mirror assembly that includes a mirror about a second pivot point of the bracket to a select second orientation, and fastening the mirror assembly to the bracket at the second orientation. An adjustable, selectively oriented imaging assembly of a first imaging module can acquire images using an adjustable, selectively oriented mirror assembly of a second imaging module.

IPC Classes

  • G02B 7/182 - Mountings, adjusting means, or light-tight connections, for optical elements, for mirrors
  • G02B 7/198 - Mountings, adjusting means, or light-tight connections, for optical elements, for mirrors, with means for adjusting the position of the mirror relative to its support
  • G02B 26/10 - Scanning systems
  • G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems

13.

MACHINE VISION SYSTEM AND METHOD WITH MULTISPECTRAL LIGHT ASSEMBLY

      
Application Number US2022038842
Publication Number 2023/014601
Status Granted - in force
Filing Date 2022-07-29
Publication Date 2023-02-09
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Fernandez-Dorado, Jose
  • Nunnink, Laurens

Abstract

A multispectral light assembly for an illumination system includes a multispectral light source configured to generate a plurality of different wavelengths of light and a light pipe positioned in front of the multispectral light source and configured to provide color mixing for two or more of the plurality of different wavelengths. The multispectral light assembly also includes a diffusive surface on the light pipe and a projection lens positioned in front of the diffusive surface. A processor device may be in communication with the multispectral light assemblies and may be configured to control activation of the multispectral light source.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • B60Q 1/02 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor, the devices being primarily intended to illuminate the way ahead of the vehicle or other areas of the way or surroundings
  • F21K 9/23 - Retrofit light sources for lighting devices with a single fitting for each light source, e.g. for substitution of incandescent lamps with bayonet or threaded fittings
  • F21V 23/00 - Arrangement of electric circuit elements in or on lighting devices
  • G06K 7/12 - Methods or arrangements for sensing record carriers by corpuscular radiation using a selected wavelength, e.g. to sense red marks and ignore blue marks
  • G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements, for lenses
  • G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements, for lenses, with focusing or varifocal mechanism adapted to work in combination with a remote control mechanism

14.

MACHINE VISION SYSTEM AND METHOD WITH STEERABLE MIRROR

      
Application Number US2022034929
Publication Number 2022/272080
Status Granted - in force
Filing Date 2022-06-24
Publication Date 2022-12-29
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Kempf, Torsten
  • Rodriguez, Saul, Sanz
  • Fernandez-Dorado, Pepe
  • Nunnink, Laurens

Abstract

A computer-implemented method for scanning a side of an object (22). The method can include determining a scanning pattern for an imaging device (e.g., based on a distance between the side of the object and the imaging device), and moving the controllable mirror (30) according to the scanning pattern to acquire a plurality of images of the side of the object. A region of interest can be identified based on the plurality of images.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

15.

SYSTEMS AND METHODS FOR ASSIGNING A SYMBOL TO AN OBJECT

      
Application Number US2022035175
Publication Number 2022/272173
Status Granted - in force
Filing Date 2022-06-27
Publication Date 2022-12-29
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • El-Barkouky, Ahmed
  • Sauter, Emily

Abstract

A method for assigning a symbol to an object in an image includes receiving the image captured by an imaging device where the symbol may be located within the image. The method further includes receiving, in a first coordinate system, a three-dimensional (3D) location of one or more points that corresponds to pose information indicative of a 3D pose of the object in the image, mapping the 3D location of the one or more points of the object to a 2D location within the image, and assigning the symbol to the object based on a relationship between a 2D location of the symbol in the image and the 2D location of the one or more points of the object in the image.
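
A hedged sketch of the mapping and assignment steps, assuming a standard pinhole camera model with known intrinsics `K` and pose `(R, t)` (names are illustrative, not from the patent): project each object's 3D points into the image and assign the decoded symbol to the object whose projected footprint contains the symbol's 2D location.

```python
import numpy as np

def project_points(points_3d, K, R, t):
    cam = (R @ np.asarray(points_3d, dtype=float).T + t.reshape(3, 1)).T
    uv = (K @ cam.T).T
    return uv[:, :2] / uv[:, 2:3]          # pixel coordinates

def assign_symbol(symbol_uv, objects, K, R, t):
    """objects: {object_id: Nx3 corner points}. Returns the matched id or None."""
    for obj_id, pts in objects.items():
        uv = project_points(pts, K, R, t)
        (umin, vmin), (umax, vmax) = uv.min(axis=0), uv.max(axis=0)
        if umin <= symbol_uv[0] <= umax and vmin <= symbol_uv[1] <= vmax:
            return obj_id                  # symbol centre lies inside this footprint
    return None
```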

IPC Classes

16.

METHODS, SYSTEMS, AND MEDIA FOR GENERATING IMAGES OF MULTIPLE SIDES OF AN OBJECT

      
Application Number US2022033098
Publication Number 2022/261496
Status Granted - in force
Filing Date 2022-06-10
Publication Date 2022-12-15
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • El-Barkouky, Ahmed
  • Negro, James, A.
  • Ye, Xiangyun

Abstract

In accordance with some embodiments of the disclosed subject matter, methods, systems, and media for generating images of multiple sides of an object are provided. In some embodiments, a method comprises receiving information indicative of a 3D pose of a first object in a first coordinate space at a first time; receiving a group of images captured using at least one image sensor, each image associated with a field of view within the first coordinate space; mapping at least a portion of a surface of the first object to a 2D area with respect to the image based on the 3D pose of the first object; associating, for images including the surface, a portion of that image with the surface of the first object based on the 2D area; and generating a composite image of the surface using images associated with the surface.

IPC Classes

  • G06T 15/20 - Perspective computation
  • G06T 19/00 - Manipulating 3D models or images for computer graphics
  • G06T 15/04 - Texture mapping

17.

SYSTEMS AND METHODS FOR DETECTING AND ADDRESSING ISSUE CLASSIFICATIONS FOR OBJECT SORTING

      
Application Number US2022017969
Publication Number 2022/183033
Status Granted - in force
Filing Date 2022-02-25
Publication Date 2022-09-01
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Lachappelle, Shane
  • Link, John, Jeffrey
  • Lym, Julie, Juhyun

Abstract

Some embodiments of the disclosure provide systems and methods for improving sorting and routing of objects, including in sorting systems. Characteristic dimensional data for one or more objects with common barcode information can be compared to dimensional data of another object with the common barcode information to evaluate a classification (e.g., a side-by-side exception) of the other object. In some cases, the evaluation can include identifying the classification as incorrect (e.g., as a false side-by-side exception).
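
The comparison can be pictured with a small hypothetical check (the tolerance and function name are invented for illustration): the dimensions measured for the flagged object are compared against characteristic dimensions previously recorded for the same barcode, and agreement suggests the side-by-side exception is false.

```python
def is_false_side_by_side(measured_dims, history_dims, rel_tol=0.15):
    """measured_dims, history_dims: (length, width, height) in consistent units."""
    return all(abs(m - h) <= rel_tol * h
               for m, h in zip(sorted(measured_dims), sorted(history_dims)))

# Example: a 600x400x300 mm reading against a 580x410x295 mm history
print(is_false_side_by_side((600, 400, 300), (580, 410, 295)))  # True -> reclassify
```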

IPC Classes

  • B07C 3/14 - Apparatus characterised by the means used for detection of the destination using photoelectric detecting means
  • B07C 5/34 - Sorting according to other particular properties

18.

OPTICAL IMAGING DEVICES AND METHODS

      
Application Number US2021062820
Publication Number 2022/125906
Status Granted - in force
Filing Date 2021-12-10
Publication Date 2022-06-16
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Dippel, Nicole
  • Oteo, Esther
  • Nunnink, Laurens
  • Gerst, Carl, W.
  • Engle, Matthew, D.

Abstract

The present invention relates to optical imaging devices and methods for reading optical codes. The imaging device comprises a sensor, a lens, a plurality of illumination devices, and a plurality of reflective surfaces. The sensor is configured to sense with a predetermined number of lines of pixels, where the predetermined lines of pixels are arranged in a predetermined position. The lens has an imaging path along an optical axis. The plurality of illumination devices are configured to transmit an illumination pattern along the optical axis, and the plurality of reflective surfaces are configured to fold the optical axis.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

19.

FLEXURE ARRANGEMENTS FOR OPTICAL COMPONENTS

      
Application Number US2021057212
Publication Number 2022/098571
Status Granted - in force
Filing Date 2021-10-29
Publication Date 2022-05-12
Owner COGNEX CORPORATION (USA)
Inventor(s) Filhaber, John

Abstract

An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.

IPC Classes

  • G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
  • G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements, for lenses

20.

SYSTEM AND METHOD FOR EXTRACTING AND MEASURING SHAPES OF OBJECTS HAVING CURVED SURFACES WITH A VISION SYSTEM

      
Application Number US2021055314
Publication Number 2022/082069
Status Granted - in force
Filing Date 2021-10-15
Publication Date 2022-04-21
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Zhu, Hongwei
  • Moreno, Daniel

Abstract

This invention provides a system and method that efficiently detects objects imaged using a 3D camera arrangement by referencing a cylindrical or spherical surface represented by a point cloud, and measures variant features of an extracted object including volume, height, center of mass, bounding box, and other relevant metrics. The system and method, advantageously, operates directly on unorganized and unordered points, requiring neither a mesh/surface reconstruction nor voxel grid representation of object surfaces in a point cloud. Based upon a cylinder/sphere reference model, an acquired 3D point cloud is flattened. Object (blob) detection is carried out in the flattened 3D space, and objects are converted back to the 3D space to compute the features, which can include regions that differ from the regular shape of the cylinder/sphere. Downstream utilization devices and/or processes, such as a part-reject mechanism and/or robot manipulators, can act on the identified feature data.
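
A minimal sketch of the flattening step for the cylindrical case, assuming a reference cylinder aligned with the Z axis through the origin with a known radius (the patent's fitting and blob-detection steps are not shown): each point becomes (arc length, axial position, radial deviation), so detection can run in a flat space and results can be mapped back.

```python
import numpy as np

def flatten_to_cylinder(points, radius):
    p = np.asarray(points, dtype=float)
    r = np.hypot(p[:, 0], p[:, 1])
    theta = np.arctan2(p[:, 1], p[:, 0])
    return np.column_stack((theta * radius,   # unrolled circumferential coordinate
                            p[:, 2],          # axial coordinate
                            r - radius))      # deviation from the reference surface

def unflatten(flat, radius):
    theta = flat[:, 0] / radius
    r = radius + flat[:, 2]
    return np.column_stack((r * np.cos(theta), r * np.sin(theta), flat[:, 1]))
```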

IPC Classes

  • G06T 3/00 - Geometric image transformation in the plane of the image
  • G06T 7/00 - Image analysis

21.

MACHINE VISION SYSTEM AND METHOD WITH ON-AXIS AIMER AND DISTANCE MEASUREMENT ASSEMBLY

      
Application Number US2021052159
Publication Number 2022/067163
Status Granted - in force
Filing Date 2021-09-27
Publication Date 2022-03-31
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Fernandez-Dorado, Jose
  • Garcia-Campos, Pablo
  • Nunnink, Laurens

Abstract

An on-axis aimer and distance measurement apparatus for a vision system can include a light source configured to generate a first light beam along a first axis. The first light beam can project an aimer pattern on an object and a receiver can be configured to receive reflected light from the first light beam to determine a distance between a lens of the vision system and the object. One or more parameters of the vision system can be controlled based on the determined distance.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

22.

MACHINE VISION SYSTEM AND METHOD WITH MULTI-APERTURE OPTICS ASSEMBLY

      
Application Number US2021048903
Publication Number 2022/051526
Status Granted - in force
Filing Date 2021-09-02
Publication Date 2022-03-10
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Fernandez-Dorado, Jose
  • Nunnink, Laurens

Abstract

An apparatus for controlling a depth of field for a reader in a vision system includes a dual aperture assembly having an inner region and an outer region. A first light source can be used to generate a light beam associated with the inner region and a second light source can be used to generate a light beam associated with the outer region. The depth of field of the reader can be controlled by selecting one of the first light source and second light source to illuminate an object to acquire an image of the object. The selection of the first light source or the second light source can be based on at least one parameter of the vision system.
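
The selection logic can be illustrated with a toy rule (the thresholds and names are invented, not from the patent): choose the light source tied to the inner, smaller aperture when a large depth of field is needed, and the outer aperture otherwise to maximise collected light.

```python
def select_light_source(read_distance_mm, distance_uncertainty_mm,
                        dof_threshold_mm=150.0):
    # Large or uncertain reading distances favour the small (inner) aperture,
    # which trades brightness for an extended depth of field.
    needs_large_dof = (distance_uncertainty_mm > dof_threshold_mm
                       or read_distance_mm > 1000.0)
    return "inner_aperture_source" if needs_large_dof else "outer_aperture_source"
```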

IPC Classes

  • H04N 5/225 - Television cameras
  • H04N 13/00 - Stereoscopic video systems; Multi-view video systems; Details thereof
  • G01S 17/42 - Simultaneous measurement of distance and other co-ordinates
  • H04N 5/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems
  • H04N 13/106 - Processing image signals

23.

SYSTEM AND METHOD FOR EXTENDING DEPTH OF FIELD FOR 2D VISION SYSTEM CAMERAS IN THE PRESENCE OF MOVING OBJECTS

      
Application Number US2021040722
Publication Number 2022/011036
Status Granted - in force
Filing Date 2021-07-07
Publication Date 2022-01-13
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Dorado, Jose, Fernandez
  • Lozano, Esther, Oteo
  • Campos, Pablo, Garcia
  • Nunnink, Laurens

Abstract

This invention provides a system and method for enhanced depth of field (DOF) advantageously used in logistics applications, scanning for features and ID codes on objects. It effectively combines a vision system, a glass lens designed for on-axis and Scheimpflug configurations, a variable lens and a mechanical system to adapt the lens to the different configurations without detaching the optics. The optics can be steerable, which allows it to adjust between variable angles so as to optimize the viewing angle to optimize DOF for the object in a Scheimpflug configuration. One, or a plurality, of images can be acquired of the object at one, or differing angle settings, with the entire region of interest clearly imaged. In another implementation, the optical path can include a steerable mirror and a folding mirror overlying the region of interest, which allows different multiple images to be acquired at different locations on the object.

IPC Classes

  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups G02B 1/00 - G02B 26/00, G02B 30/00
  • G03B 5/06 - Swinging lens about an axis perpendicular to the optical axis
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • H04N 5/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems

24.

METHODS AND APPARATUS FOR IDENTIFYING SURFACE FEATURES IN THREE-DIMENSIONAL IMAGES

      
Application Number US2021031513
Publication Number 2021/231260
Status Granted - in force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Bogan, Nathaniel
  • Hoelscher, Andrew
  • Michael, David J.

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to identify a surface feature of a portion of a three-dimensional (3D) point cloud. Data indicative of a path along a 3D point cloud is received, wherein the 3D point cloud comprises a plurality of 3D data points. A plurality of lists of 3D data points are generated, wherein: each list of 3D data points extends across the 3D point cloud at a location that intersects the received path; and each list of 3D data points intersects the received path at different locations. A characteristic associated with a surface feature is identified in at least some of the plurality of lists of 3D data points. The identified characteristics are grouped based on one or more properties of the identified characteristics. The surface feature is identified based on the grouped characteristics.

IPC Classes

25.

METHODS AND APPARATUS FOR GENERATING POINT CLOUD HISTOGRAMS

      
Application Number US2021031519
Publication Number 2021/231265
Status Granted - in force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Zhu, Hongwei
  • Michael, David, J.
  • Vaidya, Nitin, M.

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
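
A compact sketch of the two histogram types described above, assuming the two orientation components are the elevation and azimuth of per-point normals (bin counts are arbitrary):

```python
import numpy as np

def distance_histogram(points, reference_point, bins=64):
    d = np.linalg.norm(np.asarray(points, float) - np.asarray(reference_point, float),
                       axis=1)
    counts, edges = np.histogram(d, bins=bins)   # each entry covers a distance range
    return counts, edges

def orientation_histogram_2d(normals, bins=(18, 36)):
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    elev = np.arccos(np.clip(n[:, 2], -1, 1))           # first component
    azim = np.arctan2(n[:, 1], n[:, 0]) % (2 * np.pi)   # second component
    counts, e1, e2 = np.histogram2d(elev, azim, bins=bins,
                                    range=[[0, np.pi], [0, 2 * np.pi]])
    return counts, e1, e2   # each bin spans a range of both components
```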

IPC Classes

  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06T 7/521 - Depth or shape recovery from the projection of structured light

26.

METHODS AND APPARATUS FOR EXTRACTING PROFILES FROM THREE-DIMENSIONAL IMAGES

      
Application Number US2021031503
Publication Number 2021/231254
Status Granted - in force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Zhu, Hongwei
  • Bogan, Nathaniel
  • Michael, David, J.

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions is connected to neighboring representative 2D positions to generate the 2D profile.
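
A minimal sketch of the profile-extraction steps, assuming the first two axes of the region of interest are already aligned with the point cloud's coordinate axes and using the per-bin mean as the representative position:

```python
import numpy as np

def extract_profile(points, roi_min, roi_max, n_bins=100):
    p = np.asarray(points, dtype=float)
    lo, hi = np.asarray(roi_min, float), np.asarray(roi_max, float)
    inside = np.all((p >= lo) & (p <= hi), axis=1)   # crop to the 3D region of interest
    xy = p[inside][:, :2]                            # project to the first two axes
    edges = np.linspace(lo[0], hi[0], n_bins + 1)    # bins arranged along the first axis
    which = np.clip(np.digitize(xy[:, 0], edges) - 1, 0, n_bins - 1)
    profile = []
    for b in range(n_bins):
        sel = xy[which == b]
        if len(sel):
            # representative 2D position for this bin (here: bin centre, mean height)
            profile.append([0.5 * (edges[b] + edges[b + 1]), sel[:, 1].mean()])
    return np.asarray(profile)   # consecutive rows form the connected 2D profile
```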

IPC Classes

  • G06T 7/50 - Depth or shape recovery

27.

METHODS AND APPARATUS FOR DETERMINING VOLUMES OF 3D IMAGES

      
Application Number US2021031523
Publication Number 2021/231266
Status Granted - in force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor(s) Moreno, Daniel, Alejandro

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud is received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and width that extends along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane is determined. An estimated volume of the object captured by the 3D point cloud is determined based on the calculated number of 3D points in each bin and the height of each bin.
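
The bin-grid estimate can be sketched as follows, assuming the reference plane is z = 0 and using the mean point height per occupied cell (the abstract leaves the per-bin height rule open); the cell size is illustrative:

```python
import numpy as np

def estimate_volume(points, cell=5.0):
    p = np.asarray(points, dtype=float)
    nx = max(1, int(np.ceil(np.ptp(p[:, 0]) / cell)))
    ny = max(1, int(np.ceil(np.ptp(p[:, 1]) / cell)))
    counts, xe, ye = np.histogram2d(p[:, 0], p[:, 1], bins=[nx, ny])
    zsum, _, _ = np.histogram2d(p[:, 0], p[:, 1], bins=[xe, ye], weights=p[:, 2])
    # mean height above the z = 0 reference plane for each occupied cell
    mean_h = np.divide(zsum, counts, out=np.zeros_like(zsum), where=counts > 0)
    cell_area = (xe[1] - xe[0]) * (ye[1] - ye[0])
    return float((mean_h[counts > 0] * cell_area).sum())
```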

IPC Classes

  • G06T 7/521 - Depth or shape recovery from the projection of structured light
  • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume

28.

SYSTEM AND METHOD FOR THREE-DIMENSIONAL SCAN OF MOVING OBJECTS LONGER THAN THE FIELD OF VIEW

      
Application Number US2021018627
Publication Number 2021/168151
Status Granted - in force
Filing Date 2021-02-18
Publication Date 2021-08-26
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Carey, Ben R.
  • Parrett, Andrew
  • Liu, Yukang
  • Chiang, Gilbert

Abstract

This invention provides a system and method for using an area scan sensor of a vision system, in conjunction with an encoder or other knowledge of motion, to capture an accurate measurement of an object larger than a single field of view (FOV) of the sensor. It identifies features/edges of the object, which are tracked from image to image, thereby providing a lightweight way to process the overall extents of the object for dimensioning purposes. Logic automatically determines if the object is longer than the FOV, and thereby causes a sequence of image acquisition snapshots to occur while the moving/conveyed object remains within the FOV until the object is no longer present in the FOV. At that point, acquisition ceases and the individual images are combined as segments in an overall image. These images can be processed to derive overall dimensions of the object based on input application details.

IPC Classes

29.

COMPOSITE THREE-DIMENSIONAL BLOB TOOL AND METHOD FOR OPERATING THE SAME

      
Application Number US2021017498
Publication Number 2021/163219
Status Granted - in force
Filing Date 2021-02-10
Publication Date 2021-08-19
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Liu, Gang
  • Carey, Ben, R.
  • Mullan, Nickolas, J.
  • Spear, Caleb
  • Parrett, Andrew

Abstract

This invention provides a system and method that performs 3D imaging of a complex object, where image data is likely lost. Available 3D image data, in combination with an absence/loss of image data, allows computation of x, y and z dimensions. Absence/loss of data is assumed to be just another type of image data, and represents the presence of something that has prevented accurate data from being generated in the subject image. Segments of data can be connected to areas of absent data and generate a maximum bounding box. The shadow that this object generates can be represented as negative or missing data, but is not representative of the physical object. The height from the positive data, the object shadow size based on that height, the location in the FOV, and the ray angles that generate the images, are estimated and the object shadow size is removed from the result.

IPC Classes

  • G06T 5/00 - Image enhancement or restoration
  • G06T 7/11 - Region-based segmentation

30.

OBJECT DIMENSIONING SYSTEM AND METHOD

      
Application Number US2020015203
Publication Number 2020/159866
Status Granted - in force
Filing Date 2020-01-27
Publication Date 2020-08-06
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Fernandez-Dorado, José
  • Mira, Emilio, Pastor
  • Guerrero, Francisco, Azcona
  • Bachelder, Ivan
  • Nunnink, Laurens
  • Kempf, Torsten
  • Vaidyanathan, Savithri
  • Moed, Kyra
  • Boatner, John, Bryan

Abstract

Determining dimensions of an object can include determining a distance between the object and an imaging device, and an angle of an optical axis of the imaging device. One or more features of the object can be identified in an image of the object. The dimensions of the object can be determined based upon the distance, the angle, and the one or more identified features.
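
The underlying geometric relation can be illustrated with a pinhole-model example (the focal length, distance and tilt values are made up): a feature spanning a given number of pixels maps to a real-world size of roughly pixels × distance / focal-length-in-pixels, corrected for foreshortening by the viewing angle.

```python
import math

def feature_size_mm(pixels, distance_mm, focal_length_px, tilt_rad=0.0):
    # Pinhole relation with a cosine correction for a tilted optical axis.
    return pixels * distance_mm / (focal_length_px * math.cos(tilt_rad))

# A 400 px wide box edge at 800 mm with f = 1600 px and a 20 degree tilt:
print(round(feature_size_mm(400, 800, 1600, math.radians(20)), 1))  # ~212.8 mm
```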

IPC Classes

  • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
  • G01B 11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
  • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
  • G06T 7/60 - Analysis of geometric attributes

31.

SYSTEM AND METHOD FOR FINDING AND CLASSIFYING PATTERNS IN AN IMAGE WITH A VISION SYSTEM

      
Application Number US2019035840
Publication Number 2019/236885
Status Granted - in force
Filing Date 2019-06-06
Publication Date 2019-12-12
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Wang, Lei
  • Anand, Vivek
  • Jacobson, Lowell, D.
  • Li, David, Y.

Abstract

This invention provides a system and method for finding patterns in images that incorporates neural net classifiers. A pattern finding tool is coupled with a classifier that can be run before or after the tool to have labeled pattern results with sub-pixel accuracy. In the case of a pattern finding tool that can detect multiple templates, its performance is improved when a neural net classifier informs the pattern finding tool to work only on a subset of the originally trained templates. Similarly, in the case of a pattern finding tool that initially detects a pattern, a neural network classifier can then determine whether it has found the correct pattern. The neural network can also reconstruct/clean up an imaged shape, and/or eliminate pixels less relevant to the shape of interest, thereby reducing the search time as well as significantly increasing the chance of locking onto the correct shapes.

IPC Classes

  • G06K 9/20 - Image acquisition
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods

32.

HIGH-ACCURACY CALIBRATION SYSTEM AND METHOD

      
Application Number US2018027997
Publication Number 2018/195096
Status Granted - in force
Filing Date 2018-04-17
Publication Date 2018-10-25
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Li, David, Y.
  • Sun, Li

Abstract

This invention provides a calibration target with a calibration pattern on at least one surface. The relationship of locations of calibration features on the pattern is determined for the calibration target and stored for use during a calibration procedure by a calibrating vision system. Knowledge of the calibration target's feature relationships allows the calibrating vision system to image the calibration target in a single pose and rediscover each of the calibration features in a predetermined coordinate space. The calibrating vision system can then transform the relationships between features from the stored data into the calibrating vision system's local coordinate space. The locations can be encoded in a barcode that is applied to the target, provided in a separate encoded element, or obtained from an electronic data source. The target can include encoded information within the pattern defining a location of adjacent calibration features with respect to the overall geometry of the target.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • H04N 13/246 - Image signal generators using stereoscopic image cameras; Calibration of cameras

33.

SYSTEM AND METHOD FOR 3D PROFILE DETERMINATION USING MODEL-BASED PEAK SELECTION

      
Application Number US2018024268
Publication Number 2018/183155
Status Granted - in force
Filing Date 2018-03-26
Publication Date 2018-10-04
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Li, David Y.
  • Sun, Li
  • Jacobson, Lowell D.
  • Wang, Lei

Abstract

This invention provides a system and method for selecting the correct profile from a range of peaks generated by analyzing a surface with multiple exposure levels applied at discrete intervals. The cloud of peak information is resolved by comparison to a model profile into a best candidate to represent an accurate representation of the object profile. Illustratively, a displacement sensor projects a line of illumination on the surface and receives reflected light at a sensor assembly at a set exposure level. A processor varies the exposure level setting in a plurality of discrete increments, and stores an image of the reflected light for each of the increments. A determination process combines the stored images and aligns the combined images with respect to a model image. Points from the combined images are selected based upon closeness to the model image to provide a candidate profile of the surface.
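
The selection step can be sketched as follows, under the simplifying assumption that the multi-exposure candidate peaks and the model profile are already aligned column by column: for each profile column, keep the candidate closest to the model.

```python
import numpy as np

def select_profile(candidates_per_column, model_profile):
    """candidates_per_column: list of 1D arrays of candidate heights per column.
    model_profile: 1D array, aligned model height per column.
    Returns the chosen height per column (NaN where no candidate exists)."""
    chosen = np.full(len(model_profile), np.nan)
    for i, (cands, m) in enumerate(zip(candidates_per_column, model_profile)):
        cands = np.asarray(cands, dtype=float)
        if cands.size:
            chosen[i] = cands[np.argmin(np.abs(cands - m))]  # closest to the model
    return chosen
```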

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 1/00 - General purpose image data processing
  • H04N 13/204 - Image signal generators using stereoscopic image cameras
  • H04N 5/235 - Circuitry for compensating for variation in the brightness of the object

34.

SYSTEM AND METHOD FOR REDUCED-SPECKLE LASER LINE GENERATION

      
Application Number US2018014562
Publication Number 2018/136818
Status Granted - in force
Filing Date 2018-01-19
Publication Date 2018-07-26
Owner COGNEX CORPORATION (USA)
Inventor(s) Filhaber, John F.

Abstract

An illumination apparatus for reducing speckle effect in light reflected off an illumination target (120) includes a laser (150); a linear diffuser (157) positioned in an optical path between an illumination target and the laser to diffuse collimated laser light (154) in a planar fan of diffused light (158, 159) that spreads in one dimension across at least a portion of the illumination target; and a beam deflector (153) to direct the collimated laser light incident on the beam deflector to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficient uncorrelated speckle patterns, at an image sensor (164), in light reflected from an intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curves by projecting a pattern, e.g. moiré fringes, on the object
  • G02B 5/02 - Diffusing elements; Afocal elements
  • G02B 27/48 - Laser speckle optics
  • G02B 26/10 - Scanning systems

35.

MACHINE VISION SYSTEM FOR CAPTURING A DIGITAL IMAGE OF A SPARSELY ILLUMINATED SCENE

      
Application Number US2017026866
Publication Number 2018/057063
Status Granted - in force
Filing Date 2017-04-10
Publication Date 2018-03-29
Owner COGNEX CORPORATION (USA)
Inventor(s) Mcgarry, John

Abstract

A method includes producing two or more measurements by an image sensor having a pixel array, the measurements including information contained in a set of sign-bits, the producing of each measurement including (i) forming an image signal on the pixel array; and (ii) comparing accumulated pixel currents output from pixels of the pixel array in accordance with the image signal and a set of pixel sampling patterns to produce the set of sign-bits of the measurement; buffering at least one of the measurements to form a buffered measurement; comparing information of the buffered measurement to information of the measurements to produce a differential measurement; and combining the differential measurement with information of the set of pixel sampling patterns to produce at least a portion of one or more digital images relating to one or more of the image signals formed on the pixel array.

IPC Classes

  • H04N 5/361 - Noise processing, e.g. detecting, correcting, reducing or removing noise, applied to dark current
  • H04N 5/345 - Extracting pixel data from an image sensor by acting on the scanning circuits, e.g. by changing the number of pixels sampled or to be sampled, by partially reading an SSIS array

36.

APPARATUS FOR PROJECTING A TIME-VARIABLE OPTICAL PATTERN ONTO AN OBJECT TO BE MEASURED IN THREE DIMENSIONS

      
Application Number EP2017065114
Publication Number 2017/220595
Status Granted - in force
Filing Date 2017-06-20
Publication Date 2017-12-28
Owner
  • COGNEX CORPORATION (USA)
  • COGNEX ENSHAPE GMBH (Germany)
Inventor(s)
  • Petersen, Jens
  • Schaffer, Martin
  • Harendt, Bastian

Abstract

The invention relates to an apparatus for projecting a time-variable optical pattern onto an object to be measured in three dimensions, comprising a holder for an optical pattern, a light source having an illumination optical unit and an imaging optical unit, wherein the optical pattern is secured as a slide on a displacement mechanism which displaces the optical pattern relative to the illumination optical unit and relative to the imaging optical unit, wherein the displacement mechanism effectuates a displacement of the optical pattern in a slide plane that is oriented perpendicular to the optical axis of the imaging optical unit.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curves by projecting a pattern, e.g. moiré fringes, on the object
  • G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements

37.

METHOD FOR THE THREE DIMENSIONAL MEASUREMENT OF MOVING OBJECTS DURING A KNOWN MOVEMENT

      
Application Number EP2017065118
Publication Number 2017/220598
Status Granted - in force
Filing Date 2017-06-20
Publication Date 2017-12-28
Owner
  • COGNEX CORPORATION (USA)
  • COGNEX ENSHAPE GMBH (Germany)
Inventor(s) Harendt, Bastian

Abstract

The invention relates to a method for the three dimensional measurement of a moving object during a known movement comprising the following method steps: Projecting a pattern sequence consisting of N patterns onto the moving object, recording a first image sequence consisting of N images using a first camera and recording a second image sequence synchronous to the first image sequence consisting of N images using a second camera, identifying corresponding image points in the first sequence and in the second sequence, wherein trajectories of potential object points are computed from the known movement data and object positions determined therefrom are projected onto each image plane of the first and second cameras, wherein the positions of corresponding image points are determined in advance as a first image point trajectory in the first camera and a second image point trajectory in the second camera, the image points along the previously determined image point trajectories are compared with one another and checked for correspondence and, in a concluding step, the moving object is measured three dimensionally using triangulation from the corresponding image points.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curves by projecting a pattern, e.g. moiré fringes, on the object
  • G06T 7/579 - Depth or shape recovery from multiple images from motion
  • G06T 7/593 - Depth or shape recovery from multiple images from stereo images

38.

METHOD FOR CALIBRATING AN IMAGE-CAPTURING SENSOR CONSISTING OF AT LEAST ONE SENSOR CAMERA, USING A TIME-CODED PATTERNED TARGET

      
Application Number EP2017065120
Publication Number 2017/220600
Status Granted - in force
Filing Date 2017-06-20
Publication Date 2017-12-28
Owner
  • COGNEX CORPORATION (USA)
  • COGNEX ENSHAPE GMBH (Germany)
Inventor(s)
  • Grosse, Marcus
  • Harendt, Bastian

Abstract

The invention relates to a method for calibrating an image-capturing sensor consisting of at least one sensor camera, using a time-coded patterned target, said time-coded patterned target being displayed on a flat-screen display. A sequence of patterns is displayed on the flat-screen display and is captured by the at least one sensor camera as a series of camera images. An association of the pixels of the time-coded patterned target with the respective pixels captured in the camera images is carried out for a fixed position of the flat-screen display in the surroundings, or for at least two different positions of the flat-screen display in the surroundings, the sensor being calibrated by means of corresponding pixels. A gamma-correction is carried out, in which, before the time-coded patterned target is displayed, a gamma curve of the flat-screen display is captured in a random position and/or in each position of the flat-screen display, together with the recording of the sequence of patterns by means of the sensor, and is corrected.

IPC Classes

  • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass

39.

METHOD FOR REDUCING THE ERROR INDUCED IN PROJECTIVE IMAGE-SENSOR MEASUREMENTS BY PIXEL OUTPUT CONTROL SIGNALS

      
Application Number US2017034830
Publication Number 2017/205829
Status Granted - in force
Filing Date 2017-05-26
Publication Date 2017-11-30
Owner COGNEX CORPORATION (USA)
Inventor(s) Mcgarry, John

Abstract

An image sensor for forming projective measurements includes a pixel-array wherein each pixel is coupled with conductors of a pixel output control bus and with a pair of conductors of a pixel output bus. In certain pixels, the pattern of coupling to the pixel output bus is reversed, thereby beneficially de-correlating the image noise induced by pixel output control signals from vectors of the projective basis.

IPC Classes

  • H04N 5/335 - Transformation of light or analogue information into electrical information using solid-state image sensors [SSIS]
  • H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters

40.

CALIBRATION FOR VISION SYSTEM

      
Application Number US2017032560
Publication Number 2017/197369
Status Granted - in force
Filing Date 2017-05-12
Publication Date 2017-11-16
Owner COGNEX CORPORATION (USA)
Inventor(s)
  • Mcgarry, John
  • Jacobson, Lowell
  • Wang, Lei
  • Liu, Gang

Abstract

A vision system capable of performing run-time 3D calibration includes a mount configured to hold an object, the mount including a 3D calibration structure; a camera; a motion stage coupled with the mount or the camera; and a computing device configured to perform operations including: acquiring images from the camera when the mount is in respective predetermined orientations relative to the camera, each of the acquired images including a representation of at least a portion of the object and at least a portion of the 3D calibration structure that are concurrently in a field of view of the camera; performing at least an adjustment of a 3D calibration for each of the acquired images based on information relating to the 3D calibration structure as imaged in the acquired images; and determining 3D positions, dimensions or both of one or more features of the object.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curves by projecting a pattern, e.g. moiré fringes, on the object
  • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
  • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
  • G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, is unspecified or is not significant, for measuring length, width or thickness by measuring coordinates of points
  • H04N 13/02 - Picture signal generators
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • G06T 7/50 - Depth or shape recovery

41.

JOINT OF MATED COMPONENTS

      
Numéro d'application US2017028447
Numéro de publication 2017/184781
Statut Délivré - en vigueur
Date de dépôt 2017-04-19
Date de publication 2017-10-26
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Townley-Smith, Paul Andrew
  • Mcgarry, John

Abrégé

Joints are described used for mating two components of an apparatus that have been precisely aligned with respect to each other, e.g., based on a six degrees of freedom alignment procedure. For example, the precisely aligned components can be optical components that are part of an optical apparatus with highly sensitive mechanical tolerances.

Classes IPC  ?

  • G02B 7/00 - Montures, moyens de réglage ou raccords étanches à la lumière pour éléments optiques
  • G02B 7/02 - Montures, moyens de réglage ou raccords étanches à la lumière pour éléments optiques pour lentilles
  • G02B 27/62 - Appareils optiques spécialement adaptés pour régler des éléments optiques pendant l'assemblage de systèmes optiques
  • G06K 7/10 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire

42.

MACHINE VISION SYSTEM FOR FORMING A ONE DIMENSIONAL DIGITAL REPRESENTATION OF A LOW INFORMATION CONTENT SCENE

      
Numéro d'application US2017013328
Numéro de publication 2017/123863
Statut Délivré - en vigueur
Date de dépôt 2017-01-13
Date de publication 2017-07-20
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Mcgarry, John

Abrégé

A machine vision system forms a one-dimensional digital representation of a low-information-content scene, e.g., a scene that is sparsely illuminated by an illumination plane. The one-dimensional digital representation is a projection formed with respect to columns of a rectangular pixel array of the machine vision system.
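
As a rough, hedged illustration (software only, not the patented sensor architecture), the sketch below forms a one-dimensional signal from a sparsely illuminated frame by projecting along the columns of the pixel array; the centroid formulation is an assumption.

# Minimal sketch: per-column intensity-weighted row position of an illumination plane.
import numpy as np

def column_projection(image):
    """Return, for each column, the centroid row of the illumination signal."""
    img = np.asarray(image, dtype=float)
    rows = np.arange(img.shape[0])[:, None]       # row indices as a column vector
    weights = img.sum(axis=0)                     # total intensity per column
    centroid = (img * rows).sum(axis=0) / np.maximum(weights, 1e-9)
    centroid[weights <= 0] = np.nan               # column carries no signal
    return centroid

# Synthetic sparse scene: a single bright line crossing a 480x640 frame.
frame = np.zeros((480, 640))
cols = np.arange(640)
frame[(200 + 0.1 * cols).astype(int), cols] = 255.0
profile = column_projection(frame)                # one value per column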

Classes IPC  ?

  • H04N 5/335 - Transformation d'informations lumineuses ou analogues en informations électriques utilisant des capteurs d'images à l'état solide [capteurs SSIS] 
  • H04N 5/357 - Traitement du bruit, p.ex. détection, correction, réduction ou élimination du bruit

43.

AIR FLOW MECHANISM FOR IMAGE CAPTURE AND VISION SYSTEMS

      
Numéro d'application US2014050722
Numéro de publication 2015/137994
Statut Délivré - en vigueur
Date de dépôt 2014-08-12
Date de publication 2015-09-17
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Koelmel, Volker

Abrégé

This invention provides a mechanism for clearing debris and vapors from the region around the optical axis (OA) of a vision system (100) that employs a directed airflow (AF) in the region. The airflow (AF) is guided by an air knife (170) that surrounds a viewing gap placed in front of the camera optics (140). The air knife (170) delivers airflow (AF) in a manner that takes advantage of the Coanda effect to generate an airflow (AF) that prevents infiltration of debris and contaminants into the optical path. Illustratively, the air knife (170) defines a geometry that effectively multiplies the delivered airflow approximately fifty times (twenty-five times on each of two air-knife sides) that of the supplied compressed air. This provides an extremely strong air curtain along the scan direction that essentially blocks infiltration of environmental contamination to the optics (140) of the camera (110).

Classes IPC  ?

  • G01N 21/15 - Prévention de la souillure des éléments du système optique ou de l'obstruction du chemin lumineux
  • G01N 21/88 - Recherche de la présence de criques, de défauts ou de souillures

44.

CONFIGURABLE IMAGE TRIGGER FOR A VISION SYSTEM AND METHOD FOR USING THE SAME

      
Numéro d'application US2012070298
Numéro de publication 2013/096282
Statut Délivré - en vigueur
Date de dépôt 2012-12-18
Date de publication 2013-06-27
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Mahuna, Tyson

Abrégé

This invention provides a trigger for a vision system that can be set using a user interface that allows the straightforward variation of a plurality of exposed trigger parameters. Illustratively, the vision system includes a triggering mode in which the system keeps acquiring an image of a field of view with respect to objects in relative motion. The system runs user-configurable "trigger logic". When the trigger logic succeeds/passes, the current image or a newly acquired image is then transmitted to the main inspection logic for processing. The trigger logic can be readily configured by a user operating an interface, which can also be used to configure the main inspection process, to trigger the vision system by tools such as presence-absence, edge finding, barcode finding, pattern matching, image thresholding, or any arbitrary combination of tools exposed by the vision system in the interface.
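
For illustration, the sketch below shows one plausible shape for user-configurable trigger logic gating the main inspection; the rule format, tool names and thresholds are assumptions and are not the Cognex interface.

# Minimal sketch (assumed naming): trigger rules built from exposed tool outputs;
# only when every rule passes is the image handed to the main inspection.
from typing import Callable, Dict, List

ToolResult = Dict[str, float]
TriggerRule = Callable[[ToolResult], bool]

def evaluate_trigger(tool_results: ToolResult, rules: List[TriggerRule]) -> bool:
    """The trigger passes only if every configured rule is satisfied."""
    return all(rule(tool_results) for rule in rules)

# Example configuration composed by the user from available tools.
rules = [
    lambda r: r.get("presence_score", 0.0) > 0.8,     # presence/absence tool
    lambda r: r.get("edge_count", 0) >= 2,            # edge-finding tool
]

def acquisition_loop(acquire, run_tools, inspect):
    while True:
        image = acquire()
        if evaluate_trigger(run_tools(image), rules):
            inspect(image)                            # forward to main inspection logic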

Classes IPC  ?

  • G01N 21/88 - Recherche de la présence de criques, de défauts ou de souillures
  • G06T 1/00 - Traitement de données d'image, d'application générale

45.

METHODS AND APPARATUS FOR ONE-DIMENSIONAL SIGNAL EXTRACTION

      
Numéro d'application US2012070758
Numéro de publication 2013/096526
Statut Délivré - en vigueur
Date de dépôt 2012-12-20
Date de publication 2013-06-27
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Silver, William, M.
  • Bachelder, Ivan A.

Abrégé

Methods and apparatus are disclosed for extracting a one-dimensional digital signal from a two-dimensional digital image along a projection line. Disclosed embodiments provide an image memory in which is stored the digital image, a working memory, a direct memory access controller, a table memory that holds a plurality of transfer templates, and a processor. The processor selects a transfer template from the table memory responsive to an orientation of the projection line, computes a customized set of transfer parameters from the selected transfer template and parameters of the projection line, transmits the transfer parameters to the direct memory access controller, commands the direct memory access controller to transfer data from the image memory to the working memory as specified by the transfer parameters, and computes the one-dimensional digital signal using at least a portion of the data transferred by the direct memory access controller into the working memory.
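
A hedged, software-only sketch of the general idea (without the DMA controller and transfer templates of the abstract) follows: sample the image along an oriented projection line and average across its normal to form the 1D signal. The sampling scheme is an assumption.

# Minimal sketch: extract a 1D signal along a projection line from p0 to p1.
import numpy as np

def extract_1d_signal(image, p0, p1, half_width=2):
    """Average nearest-neighbour samples taken perpendicular to the projection line."""
    img = np.asarray(image, dtype=float)
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    direction = p1 - p0
    length = int(np.hypot(direction[0], direction[1]))
    direction /= np.linalg.norm(direction)
    normal = np.array([-direction[1], direction[0]])
    signal = np.empty(length)
    for i in range(length):
        samples = []
        for k in range(-half_width, half_width + 1):
            x, y = p0 + i * direction + k * normal
            xi, yi = int(round(x)), int(round(y))      # nearest-neighbour sampling
            if 0 <= yi < img.shape[0] and 0 <= xi < img.shape[1]:
                samples.append(img[yi, xi])
        signal[i] = np.mean(samples) if samples else 0.0
    return signal

# Example: a diagonal projection line across a synthetic gradient image.
img = np.tile(np.arange(200.0), (100, 1))
profile = extract_1d_signal(img, p0=(10, 10), p1=(180, 80))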

Classes IPC  ?

  • G06K 9/50 - Extraction d'éléments ou de caractéristiques de l'image en analysant des intersections de la forme avec des lignes prédéterminées
  • G06K 7/14 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire utilisant la lumière sans sélection des longueurs d'onde, p.ex. lecture de la lumière blanche réfléchie
  • G06T 11/20 - Traçage à partir d'éléments de base, p.ex. de lignes ou de cercles

46.

SYSTEM AND METHOD FOR CONTROLLING ILLUMINATION IN A VISION SYSTEM

      
Numéro d'application US2012065033
Numéro de publication 2013/081828
Statut Délivré - en vigueur
Date de dépôt 2012-11-14
Date de publication 2013-06-06
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Burrell, Paul

Abrégé

This invention provides a system and method for enabling control of an illuminator having predetermined operating parameters by a vision system processor/core based upon stored information regarding parameters that are integrated with the illuminator. The parameters are retrieved by the processor, and are used to control the operation of the illuminator and/or the camera during image acquisition. In an embodiment, the stored parameters are a discrete numerical or other value that corresponds to the illuminator type. The discrete value maps to a corresponding value in a look-up table/database associated with the camera that contains parameter sets associated with each of a plurality of values in the database. The data associated with the discrete value in the camera contains the necessary parameters or settings for that illuminator type. In other embodiments, some or all of the actual parameter information can be stored with the illuminator and retrieved by the camera processor.
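
As a small illustration of the look-up-table idea, the sketch below maps a stored illuminator type code to a full parameter set; the codes and parameter values are invented for the example.

# Minimal sketch (assumed values): the illuminator stores only a type code; the
# camera resolves that code to acquisition/illumination parameters via a table.
ILLUMINATOR_TABLE = {
    1: {"type": "ring",       "max_current_ma": 500, "strobe_us": 100},
    2: {"type": "dark_field", "max_current_ma": 350, "strobe_us": 250},
    3: {"type": "backlight",  "max_current_ma": 800, "strobe_us": 50},
}

def configure_acquisition(stored_type_code, default=None):
    """Return the parameter set for the attached illuminator's type code."""
    params = ILLUMINATOR_TABLE.get(stored_type_code, default)
    if params is None:
        raise ValueError(f"unknown illuminator type code: {stored_type_code}")
    return params

settings = configure_acquisition(2)   # e.g. the code read back from the illuminator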

Classes IPC  ?

  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • H04N 5/225 - Caméras de télévision
  • H04N 5/235 - Circuits pour la compensation des variations de la luminance de l'objet

47.

AUTO-FOCUS MECHANISM FOR VISION SYSTEM CAMERA

      
Numéro d'application US2012065030
Numéro de publication 2013/078045
Statut Délivré - en vigueur
Date de dépôt 2012-11-14
Date de publication 2013-05-30
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Gainer, Robert, L.

Abrégé

This invention provides an electro-mechanical auto-focus function for a smaller-diameter lens type that nests, and is removably mounted, within the mounting space and thread arrangement of a larger-diameter lens base of a vision camera assembly housing. In an illustrative embodiment, the camera assembly includes a threaded base having a first diameter, which illustratively defines a C-mount base. A motor-driven gear-reduction drive assembly is mounted internally, and includes teeth that engage corresponding teeth on the outer diameter of a cylindrical focus gear, which has an internal lead screw. The focus gear is freely rotatable, and removably captured, within the threaded C-mount base in a nested, coaxial relationship. The internal lead screw of the focus gear threadingly engages the external threads of a coaxial lens holder. This converts the drive gear rotation into linear/axial lens holder motion. The lens holder includes anti-rotation stops, which allow its linear/axial movement but restrain any rotational motion.

Classes IPC  ?

  • G03B 3/10 - Mise au point effectuée par force motrice
  • G03B 17/14 - Corps d'appareils avec moyens pour supporter des objectifs, des lentilles additionnelles, des filtres, des masques ou des tourelles de façon interchangeable
  • G03B 17/56 - Accessoires

48.

ENCRYPTION AUTHENTICATION OF DATA TRANSMITTED FROM MACHINE VISION TOOLS

      
Numéro d'application US2012054857
Numéro de publication 2013/040029
Statut Délivré - en vigueur
Date de dépôt 2012-09-12
Date de publication 2013-03-21
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Scherer, Timothy

Abrégé

The technology provides, in some aspects, methods and systems for securely transmitting data using a machine vision system (e.g., within a pharmaceutical facility). Thus, for example, in one aspect, the technology provides a method that includes the steps of establishing a communications link between a machine vision processor and a remote digital data processor (e.g., a database server, personal computer, etc.); encrypting, on the machine vision processor, (i) at least one network packet containing machine vision data, and (ii) at least one network packet containing non-machine vision data; and sending to the remote digital data processor the encrypted network packets from the machine vision processor.

Classes IPC  ?

  • H04L 29/06 - Commande de la communication; Traitement de la communication caractérisés par un protocole

49.

MASTER AND SLAVE MACHINE VISION SYSTEM

      
Numéro d'application US2012054858
Numéro de publication 2013/040030
Statut Délivré - en vigueur
Date de dépôt 2012-09-12
Date de publication 2013-03-21
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Martinicky, Brian
  • Wu, Scott

Abrégé

The technology provides, in some aspects, methods and systems for triggering a master machine vision processor and a slave machine vision processor in a multi-camera machine vision system. Thus, for example, in one aspect, the technology provides a method that includes the steps of establishing a communications link between a slave machine vision processor and a master machine vision processor; receiving on the slave machine vision processor a data message from the master machine vision processor; and triggering the slave machine vision processor to perform a machine vision function, the triggering occurring at a frequency based upon the data message, wherein at least one triggering of the slave machine vision processor occurs independent of the master machine vision processor.

Classes IPC  ?

  • G06T 1/20 - Architectures de processeurs; Configuration de processeurs p.ex. configuration en pipeline

50.

DETERMINING THE UNIQUENESS OF A MODEL FOR MACHINE VISION

      
Numéro d'application US2011066883
Numéro de publication 2012/092132
Statut Délivré - en vigueur
Date de dépôt 2011-12-22
Date de publication 2012-07-05
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Wang, Xiaoguang
  • Jacobson, Lowell

Abrégé

Described are methods and apparatuses, including computer program products, for determining model uniqueness with a quality metric of a model of an object in a machine vision application. Determining uniqueness involves receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
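
The sketch below illustrates, under assumptions, one simple quality metric of the kind described: compare the best pose score against the next-best score found in a modified copy of the training image. The noise-based modification, scoring ratio and the find_poses placeholder are not the patented method.

# Minimal sketch: uniqueness as the ratio of second-best to best match score.
import numpy as np

def uniqueness_metric(pose_scores):
    """Lower ratio means the model matches one instance far better than any other."""
    scores = sorted(pose_scores, reverse=True)
    if len(scores) < 2 or scores[0] <= 0:
        return 0.0
    return scores[1] / scores[0]

def evaluate_model(training_image, find_poses, noise_sigma=2.0, seed=0):
    """find_poses(image) is a placeholder pattern-matcher returning [(pose, score), ...]."""
    rng = np.random.default_rng(seed)
    modified = training_image + rng.normal(0.0, noise_sigma, training_image.shape)
    poses = find_poses(modified)
    return uniqueness_metric([score for _, score in poses])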

Classes IPC  ?

  • G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques

51.

MODEL-BASED POSE ESTIMATION USING A NON-PERSPECTIVE CAMERA

      
Numéro d'application IB2011003044
Numéro de publication 2012/076979
Statut Délivré - en vigueur
Date de dépôt 2011-12-14
Date de publication 2012-06-14
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Liu, Lifeng
  • Wallack, Aaron, S.
  • Marrion, Cyril, C., Jr.

Abrégé

This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose. Also provided is a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime.

Classes IPC  ?

52.

MOBILE OBJECT CONTROL SYSTEM AND PROGRAM, AND MOBILE OBJECT CONTROL METHOD

      
Numéro d'application US2011041222
Numéro de publication 2011/163209
Statut Délivré - en vigueur
Date de dépôt 2011-06-21
Date de publication 2011-12-29
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Ikeda, Masaaki

Abrégé

In a mobile object control system that controls a mobile object on the basis of images captured by a plurality of image capturing units, the present invention addresses the problem of enhancing the accuracy of the correspondence relationships between the respective image capture coordinate systems determined for the respective image capturing units. According to the present invention, a first rotational center position specification unit specifies a first rotational center position in a first image capture coordinate system corresponding to a first point, on the basis of respective first reference guide marks included in respective first images. A second rotational center position specification unit specifies a second rotational center position in a second image capture coordinate system corresponding to the first point, on the basis of respective second reference guide marks included in respective second images. On the basis of the first rotational center position and the second rotational center position, a coordinate system correspondence relationship storage unit stores a coordinate system correspondence relationship that specifies the correspondence relationship between the first image capture coordinate system and the second image capture coordinate system.

Classes IPC  ?

  • G01B 11/26 - Dispositions pour la mesure caractérisées par l'utilisation de techniques optiques pour tester l'alignement des axes
  • G05B 19/00 - Systèmes de commande à programme
  • H05K 13/00 - Appareils ou procédés spécialement adaptés à la fabrication ou l'ajustage d'ensembles de composants électriques

53.

SYSTEM AND METHOD FOR PROCESSING IMAGE DATA RELATIVE TO A FOCUS OF ATTENTION WITHIN THE OVERALL IMAGE

      
Numéro d'application US2011036458
Numéro de publication 2011/146337
Statut Délivré - en vigueur
Date de dépôt 2011-05-13
Date de publication 2011-11-24
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Moed, Michael, C.
  • Mcgarry, E., John

Abrégé

This invention provides a system and method for processing discrete image data within an overall set of acquired image data based upon a focus of attention within that image. The result of such processing is to operate upon a more limited subset of the overall image data to generate output values required by the vision system process. Such an output value can be a decoded ID or other alphanumeric data. The system and method are performed in a vision system having two processor groups, along with a data memory that is smaller in capacity than the amount of image data to be read out from the sensor array. The first processor group is a plurality of SIMD processors and at least one general purpose processor, co-located on the same die with the data memory. A data reduction function operates within the same clock cycle as data readout from the sensor to generate a reduced data set that is stored in the on-die data memory. At least a portion of the overall, unreduced image data is concurrently (in the same clock cycle) transferred to the second processor while the first processor transmits at least one region indicator with respect to the reduced data set to the second processor. The region indicator represents at least one focus of attention for the second processor to operate upon.

Classes IPC  ?

  • G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
  • G06T 1/20 - Architectures de processeurs; Configuration de processeurs p.ex. configuration en pipeline

54.

DISTRIBUTED VISION SYSTEM WITH MULTI-PHASE SYNCHRONIZATION

      
Numéro d'application US2010061533
Numéro de publication 2011/090660
Statut Délivré - en vigueur
Date de dépôt 2010-12-21
Date de publication 2011-07-28
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Mcclellan, James, R.

Abrégé

This invention provides a system and method for synchronization of vision system inspection results produced by each of a plurality of processors that includes a first bank (that can be a "master" bank) containing a master vision system processor and at least one slave vision system processor. At least a second bank (that can be one of a plurality of "slave" banks) contains a master vision system processor and at least one slave vision system processor. Each vision system processor in each bank generates results from an image acquired and processed in a given inspection cycle. The inspection cycle can be based on an external trigger or other trigger signal, and it can enable some or all of the processors/banks to acquire and process images at a given time/cycle. In a given cycle, each of the multiple banks can be positioned to acquire an image of a respective region of a plurality of succeeding regions on a moving line. A synchronization process (a) generates a unique identifier and passes a trigger signal carrying that identifier, associated with the master processor in the first bank, to each slave processor in the master bank and to each master and slave processor in the other banks, and (b) receives, via the master processor of the second bank, consolidated results carrying the unique identifier together with the results from the first bank. The process then (c) consolidates the results for transmission to a destination if the results are complete and the unique identifier of each of the results is the same.
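
A hedged sketch of the consolidation step follows: partial results are merged only when every expected result is present and all carry the same trigger identifier. The message layout is an assumption.

# Minimal sketch of identifier-based result consolidation across banks.
import uuid

def new_trigger():
    """Generate the unique identifier distributed with the trigger signal."""
    return uuid.uuid4().hex

def consolidate(partial_results, expected_count):
    """partial_results: list of dicts {'trigger_id': ..., 'data': ...}."""
    if len(partial_results) != expected_count:
        return None                                   # incomplete inspection cycle
    ids = {r["trigger_id"] for r in partial_results}
    if len(ids) != 1:
        return None                                   # mixed cycles: do not transmit
    return {"trigger_id": ids.pop(),
            "data": [r["data"] for r in partial_results]}

trigger_id = new_trigger()
results = [{"trigger_id": trigger_id, "data": d} for d in ("bank0", "bank1")]
consolidated = consolidate(results, expected_count=2)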

Classes IPC  ?

  • H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c. à d. systèmes dans lesquels le signal vidéo n'est pas diffusé

55.

SWIPE SCANNER EMPLOYING A VISION SYSTEM

      
Numéro d'application US2011020812
Numéro de publication 2011/085362
Statut Délivré - en vigueur
Date de dépôt 2011-01-11
Date de publication 2011-07-14
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Mcgarry, E., John

Abrégé

This invention provides a point-of-sale scanning device that employs vision sensors and vision processing to decode symbology and matrices of information of objects, documents and other substrates as such objects are moved (swiped) through the field-of-view of the scanning device's window. The scanning device defines a form factor that conforms to that of a conventional laser-based point-of-sale scanning device using a housing having a plurality of mirrors, oriented generally at 45-degree angles with respect to the window's plane so as to fold the optical path, thereby allowing for an extended depth of field. The path is divided laterally so as to reach opposing lenses and image sensors, which face each other and are oriented along a lateral optical axis between sidewalls of the device. The sensors and lenses can be adapted to perform different parts of the overall vision system and/or code recognition process. The housing also provides illumination that fills the volume space. Illustratively, illumination is provided adjacent to the window in a ring having two rows for intermediate and long-range illumination of objects. Illumination of objects at or near the scanning window is provided by illuminators positioned along the sidewalls in a series of rows, these rows directed to avoid flooding the optical path.

Classes IPC  ?

  • G06K 7/10 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire

56.

SYSTEM AND METHOD FOR RUNTIME DETERMINATION OF CAMERA MISCALIBRATION

      
Numéro d'application US2010061989
Numéro de publication 2011/079258
Statut Délivré - en vigueur
Date de dépôt 2010-12-23
Date de publication 2011-06-30
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Ye, Xiangyun
  • Li, David, Y.
  • Shivaram, Guruprasad
  • Michael, David, J.

Abrégé

This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids a need to cease runtime operation of the vision system and/or stop the production line that is served by the vision system to diagnose if the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as an indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allows for faster, less-expensive and more-straightforward diagnosis of vision system failures related to deteriorating camera calibration.
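
As a hedged illustration of the statistics-based idea, the sketch below accumulates runtime alignment scores and flags possible miscalibration when the recent mean drifts from a historical baseline; the window size and sigma threshold are assumptions, not the patented criteria.

# Minimal sketch: drift detection on alignment-score statistics.
from collections import deque
import statistics

class CalibrationMonitor:
    def __init__(self, baseline_scores, window=50, n_sigma=3.0):
        self.mu = statistics.mean(baseline_scores)
        self.sigma = statistics.stdev(baseline_scores)
        self.recent = deque(maxlen=window)
        self.n_sigma = n_sigma

    def add_score(self, score):
        self.recent.append(score)

    def suspect_miscalibration(self):
        if len(self.recent) < self.recent.maxlen:
            return False                              # not enough runtime evidence yet
        drift = abs(statistics.mean(self.recent) - self.mu)
        return drift > self.n_sigma * self.sigma

monitor = CalibrationMonitor(baseline_scores=[0.95, 0.96, 0.94, 0.97, 0.95])
for s in [0.80] * 50:                                 # degraded alignment scores
    monitor.add_score(s)
assert monitor.suspect_miscalibration()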

Classes IPC  ?

57.

OBJECT CONTROL SYSTEM, OBJECT CONTROL METHOD AND PROGRAM, AND ROTATIONAL CENTER POSITION SPECIFICATION DEVICE

      
Numéro d'application US2010059089
Numéro de publication 2011/071813
Statut Délivré - en vigueur
Date de dépôt 2010-12-06
Date de publication 2011-06-16
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Ikeda, Masaaki

Abrégé

The present invention addresses the problem of providing an object control system that prevents shifting an object to a target position from requiring a long time, even if, for example, the installation position of an image capturing unit deviates. According to the present invention, an object control system includes: a first image capturing unit that captures a first image including a first reference mark that specifies a first object line determined in advance with respect to an object; an angle acquisition unit that, on the basis of said first reference mark within said first image, acquires a first differential angle that specifies the angle between a first target object line, determined in advance with respect to said first image, and said first object line; and an object control unit that controls a rotation mechanism that rotates said object, on the basis of said first differential angle.

Classes IPC  ?

  • H05K 13/00 - Appareils ou procédés spécialement adaptés à la fabrication ou l'ajustage d'ensembles de composants électriques

58.

SYSTEM AND METHOD FOR ALIGNMENT AND INSPECTION OF BALL GRID ARRAY DEVICES

      
Numéro d'application US2010002895
Numéro de publication 2011/056219
Statut Délivré - en vigueur
Date de dépôt 2010-11-04
Date de publication 2011-05-12
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Wang, Xiaoguang
  • Wang, Lei

Abrégé

A system and method for high-speed alignment and inspection of components, such as BGA devices, having non-uniform features is provided. During training time of a machine vision system, a small subset of alignment significant blobs along with a quantum of geometric analysis for picking granularity is determined. Also, during training time, balls may be associated with groups, each of which may have its own set of parameters for inspection.

Classes IPC  ?

59.

SYSTEM AND METHOD FOR ACQUIRING A STILL IMAGE FROM A MOVING IMAGE

      
Numéro d'application US2010048611
Numéro de publication 2011/032082
Statut Délivré - en vigueur
Date de dépôt 2010-09-13
Date de publication 2011-03-17
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Mcgarry, John, E.
  • Silver, William, M.

Abrégé

This invention provides a system and method to capture a moving image of a scene that can be more readily de-blurred as compared to images captured through other known methods such as coded exposure de-blurring (flutter shutter) operating on an equivalent exposure-time interval. Rather than stopping and starting the integration of light measurement during the exposure-time interval, photo-generated current is switched between multiple charge storage sites in accordance with a temporal switching pattern that optimizes the conditioning of the solution to the inverse blur transform. By switching the image intensity signal between storage sites, all of the light energy available during the exposure-time interval is transduced to electronic charge and captured to form a temporally decomposed representation of the moving image. As compared to related methods that discard approximately half of the image intensity signal available over an equivalent exposure-time interval, such a temporally decomposed image is a far more complete representation of the moving image and is more effectively de-blurred using simple linear de-convolution techniques.
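
A heavily simplified, hedged 1D sketch of the decomposition idea follows: two storage sites accumulate a shifting signal according to a binary switching pattern, and the stacked measurements form a linear system from which a least-squares estimate of the un-blurred signal can be computed. The pattern, sizes and 1D model are assumptions for illustration only.

# Minimal 1D sketch of temporally decomposed capture and linear de-blurring.
import numpy as np

def blur_matrix(pattern, n):
    """One block per storage site; each block sums shift-by-t identities for its steps."""
    T = len(pattern)
    blocks = {s: np.zeros((n, n + T - 1)) for s in set(pattern)}
    for t, site in enumerate(pattern):
        blocks[site] += np.eye(n, n + T - 1, k=t)   # scene shifts one sample per step
    return np.vstack([blocks[s] for s in sorted(blocks)])

rng = np.random.default_rng(1)
pattern = [0, 1, 1, 0, 1, 0, 0, 1]                  # temporal switching pattern
n_out = 32                                          # samples per stored signal
truth = rng.random(n_out + len(pattern) - 1)        # un-blurred scene signal
A = blur_matrix(pattern, n_out)
measured = A @ truth                                # the two stored, decomposed signals
recovered, *_ = np.linalg.lstsq(A, measured, rcond=None)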

Classes IPC  ?

  • H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
  • H04N 5/335 - Transformation d'informations lumineuses ou analogues en informations électriques utilisant des capteurs d'images à l'état solide [capteurs SSIS] 

60.

SYSTEM AND METHOD FOR CAPTURING AND DETECTING BARCODES USING VISION ON CHIP PROCESSOR

      
Numéro d'application US2010023734
Numéro de publication 2010/093680
Statut Délivré - en vigueur
Date de dépôt 2010-02-10
Date de publication 2010-08-19
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Moed, Michael, C.
  • Akopyan, Mikhail

Abrégé

This invention provides a system and method for capturing, detecting and extracting features of an ID, such as a 1D barcode, that employs an efficient processing system based upon a CPU-controlled vision system on a chip (VSoC) architecture, which illustratively provides a linear array processor (LAP) constructed with a single instruction multiple data (SIMD) architecture in which each pixel of the rows of the pixel array is directed to an individual processor in a similarly wide array. The pixel data are processed in a front end (FE) process that performs rough finding and tracking of regions of interest (ROIs) that potentially contain ID-like features. The ROI-finding process occurs in two parts so as to optimize the efficiency of the LAP in neighborhood operations — a row-processing step that occurs during image pixel readout from the pixel array and an image-processing step that typically occurs after readout. The relative motion of the ID-containing ROI with respect to the pixel array is tracked and predicted. An optional back end (BE) process employs the predicted ROI to perform feature extraction after image capture. The feature extraction derives candidate ID features that are verified by a verification step that confirms the ID and creates a refined ROI, angle of orientation and feature set. These are transmitted to a decoding processor or other device.

Classes IPC  ?

  • G06K 7/14 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire utilisant la lumière sans sélection des longueurs d'onde, p.ex. lecture de la lumière blanche réfléchie

61.

SYSTEM AND METHOD FOR UTILIZING A LINEAR SENSOR

      
Numéro d'application US2010000314
Numéro de publication 2010/090742
Statut Délivré - en vigueur
Date de dépôt 2010-02-04
Date de publication 2010-08-12
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Tremblay, Robert, J.
  • Silver, William, M.
  • Moed, Michael, C.

Abrégé

Systems and methods for utilizing linear optoelectronic sensors are provided. A plurality of linear sensors (525A, 525B) may be utilized to obtain velocity measurements of a web material (505) at two points. The acceleration of the web material (505) may be determined from the velocity measurements and a control signal issued to a servo (515) to maintain proper tension along the web material.
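
The sketch below is a hedged illustration only: velocities from the two linear sensors give a finite-difference acceleration estimate, which drives a simple proportional correction for the tensioning servo. The control law and gain are assumptions, not the patented scheme.

# Minimal sketch: acceleration estimate and proportional servo correction.
def web_acceleration(v_now, v_prev, dt):
    """Finite-difference acceleration of the mean web velocity between samples."""
    return (sum(v_now) / len(v_now) - sum(v_prev) / len(v_prev)) / dt

def servo_correction(v_now, v_prev, dt, gain=0.5):
    accel = web_acceleration(v_now, v_prev, dt)
    return -gain * accel            # counteract acceleration to hold tension steady

# v_now / v_prev hold the readings (m/s) of the two sensors at successive samples.
cmd = servo_correction(v_now=(1.02, 1.03), v_prev=(1.00, 1.00), dt=0.01)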

Classes IPC  ?

  • D21F 7/00 - Autres parties constitutives des machines à fabriquer des feuilles continues de papier
  • D21G 9/00 - Autres accessoires pour machines à fabriquer le papier
  • B65H 7/14 - Commande de l'alimentation en articles, de l'enlèvement des articles, de l'avance des piles ou d'appareils associés permettant de tenir compte d'une alimentation incorrecte, de la non présence d'articles ou de la présence d'articles défectueux par palpeurs ou détecteurs par palpeurs ou détecteurs photo-électriques
  • B65H 23/04 - Positionnement, tension, suppression des à-coups ou guidage des bandes longitudinal
  • G01N 33/00 - Recherche ou analyse des matériaux par des méthodes spécifiques non couvertes par les groupes
  • G01S 5/16 - Localisation par coordination de plusieurs déterminations de direction ou de ligne de position; Localisation par coordination de plusieurs déterminations de distance utilisant des ondes électromagnétiques autres que les ondes radio
  • G06T 7/00 - Analyse d'image

62.

SYSTEM AND METHOD FOR THREE-DIMENSIONAL ALIGNMENT OF OBJECTS USING MACHINE VISION

      
Numéro d'application US2009066247
Numéro de publication 2010/077524
Statut Délivré - en vigueur
Date de dépôt 2009-12-01
Date de publication 2010-07-08
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Marrion, Cyril, C.
  • Foster, Nigel, J.
  • Liu, Lifeng
  • Li, David, Y.
  • Shivaram, Guruprasad
  • Wallack, Aaron, S.
  • Ye, Xiangyun

Abrégé

This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.

Classes IPC  ?

  • G06K 9/64 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques utilisant des comparaisons ou corrélations simultanées de signaux images avec une pluralité de références, p.ex. matrice de résistances
  • G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales

63.

METHOD AND SYSTEM FOR DYNAMIC FEATURE DETECTION

      
Numéro d'application US2009002198
Numéro de publication 2009/126273
Statut Délivré - en vigueur
Date de dépôt 2009-04-08
Date de publication 2009-10-15
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Silver, William, M.

Abrégé

Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods.

Classes IPC  ?

  • G06K 9/64 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques utilisant des comparaisons ou corrélations simultanées de signaux images avec une pluralité de références, p.ex. matrice de résistances
  • G06T 7/00 - Analyse d'image
  • G06T 7/20 - Analyse du mouvement

64.

SYSTEM AND METHOD FOR PERFORMING MULTI-IMAGE TRAINING FOR PATTERN RECOGNITION AND REGISTRATION

      
Numéro d'application US2008013823
Numéro de publication 2009/085173
Statut Délivré - en vigueur
Date de dépôt 2008-12-18
Date de publication 2009-07-09
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Bogan, Nathaniel
  • Wang, Xiaoguang
  • Wallack, Aaron, S.

Abrégé

A system and method for performing multi-image training for pattern recognition and registration is provided. A machine vision system first obtains N training images of the scene. Each of the N images is used as a baseline image and the N-1 images are registered to the baseline. Features that represent a set of corresponding image features are added to the model. The feature to be added to the model may comprise an average of the features from each of the images in which the feature appears. The process continues until every feature that meets a threshold requirement is accounted for. The model that results from the present invention represents those stable features that are found in at least the threshold number of the N training images. The model may then be used to train an alignment/inspection tool with the set of features.
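
A hedged sketch of the stable-feature selection step follows: features are kept only if they reappear, within a tolerance, in at least a threshold number of the registered training images, and the kept feature is the average of its matches. The point-feature representation and matching tolerance are assumptions.

# Minimal sketch: build a model from features stable across N registered images.
import numpy as np

def build_model(feature_sets, threshold, tol=2.0):
    """feature_sets: list (one per registered image) of (x, y) feature arrays."""
    baseline = feature_sets[0]
    model = []
    for f in baseline:
        matches = [f]
        for other in feature_sets[1:]:
            if len(other) == 0:
                continue
            d = np.linalg.norm(np.asarray(other) - f, axis=1)
            if d.min() <= tol:
                matches.append(other[d.argmin()])
        if len(matches) >= threshold:                 # stable feature: keep its average
            model.append(np.mean(matches, axis=0))
    return np.array(model)

sets = [np.array([[10.0, 5.0], [40.0, 22.0]]),
        np.array([[10.5, 5.2], [80.0, 80.0]]),
        np.array([[ 9.8, 4.9], [40.4, 21.7]])]
stable = build_model(sets, threshold=3)               # keeps only the feature near (10, 5)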

Classes IPC  ?

  • G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques

65.

VISION SENSORS, SYSTEMS, AND METHODS

      
Numéro d'application US2008071993
Numéro de publication 2009/070354
Statut Délivré - en vigueur
Date de dépôt 2008-08-01
Date de publication 2009-06-04
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Mcgarry, John, E.
  • Plummer, Piers, A., N.

Abrégé

A single chip vision sensor of an embodiment includes a pixel array and one or more circuits. The one or more circuits are configured to search an image for one or more features using a model of the one or more features. A method of an embodiment in a single chip vision sensor includes obtaining an image based at least partially on sensed light, and searching the image for one or more features using a model of the one or more features. A system of an embodiment includes the single chip vision sensor and a device. The device is configured to receive one or more signals from the single chip vision sensor and to control an operation based at least partially on the one or more signals.

Classes IPC  ?

  • G06K 7/00 - Méthodes ou dispositions pour la lecture de supports d'enregistrement

66.

SYSTEM AND METHOD FOR READING PATTERNS USING MULTIPLE IMAGE FRAMES

      
Numéro d'application US2008083191
Numéro de publication 2009/064759
Statut Délivré - en vigueur
Date de dépôt 2008-11-12
Date de publication 2009-05-22
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Nadabar, Sateesha

Abrégé

This invention provides a system and method for decoding symbology that contains a respective data set using multiple image frames (420, 422, 424) of the symbol, wherein at least some of those frames can have differing image parameters (for example orientation, lens zoom, aperture, etc.) so that combining the frames with an illustrative multiple image application (430) allows the most-readable portions of each frame to be stitched together. Unlike prior systems, which may select one "best" image, the illustrative system and method allows this stitched image to form a complete, readable image of the underlying symbol (310). In an illustrative embodiment the system and method includes an imaging assembly that acquires multiple image frames of the symbol in which some of those image frames have discrete, differing image parameters from others of the frames. A processor, which is operatively connected to the imaging assembly, processes the plurality of acquired image frames of the symbol to decode predetermined code data from at least some of the plurality of image frames, and to combine the predetermined code data from the at least some of the plurality of image frames to define a decodable version of the data set represented by the symbol.
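
As a simple, hedged illustration of combining the most-readable portions of several frames, the sketch below takes, for each code region, the value from whichever frame read it most confidently; the per-region value/confidence layout is an assumption.

# Minimal sketch: per-region stitching of partially readable frames.
import numpy as np

def stitch_frames(values, confidences):
    """values, confidences: arrays of shape (n_frames, n_regions)."""
    values = np.asarray(values)
    confidences = np.asarray(confidences)
    best_frame = np.argmax(confidences, axis=0)          # winning frame per region
    regions = np.arange(values.shape[1])
    return values[best_frame, regions], confidences[best_frame, regions]

vals = [[1, 0, -1, 1],        # -1 marks a region unreadable in that frame
        [1, -1, 0, 1]]
conf = [[0.9, 0.8, 0.1, 0.7],
        [0.6, 0.2, 0.9, 0.9]]
stitched_values, stitched_conf = stitch_frames(vals, conf)   # -> [1, 0, 0, 1]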

Classes IPC  ?

  • G06K 7/14 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire utilisant la lumière sans sélection des longueurs d'onde, p.ex. lecture de la lumière blanche réfléchie
  • G06K 7/10 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire

67.

CIRCUITS AND METHODS ALLOWING FOR PIXEL ARRAY EXPOSURE PATTERN CONTROL

      
Numéro d'application US2008071914
Numéro de publication 2009/035785
Statut Délivré - en vigueur
Date de dépôt 2008-08-01
Date de publication 2009-03-19
Propriétaire
  • COGNEX CORPORATION (USA)
  • INNOVACIONES MICROELECTRONICAS S.L. (Espagne)
Inventeur(s)
  • Mcgarry, E. John
  • Dominguez-Castro, Rafael
  • Garcia, Alberto

Abrégé

An image processing system includes an image sensor circuit. The image sensor circuit is configured to obtain an image using a type of shutter operation in which an exposure pattern of a pixel array is set according to exposure information that changes over time based at least partially on charge accumulated in at least a portion of the pixel array. An image sensor circuit includes a pixel array and one or more circuits. The one or more circuits are configured to update exposure information based at least partially on one or more signals output from the pixel array, and to control an exposure pattern of the pixel array based on the exposure information. A pixel circuit includes a first transistor connected between a photodiode and a sense node, and a second transistor connected between an exposure control signal line and a gate of the first transistor.

Classes IPC  ?

  • H04N 3/14 - TRANSMISSION D'IMAGES, p.ex. TÉLÉVISION - Détails des dispositifs de balayage des systèmes de télévision; Leur combinaison avec la production des tensions d'alimentation par des moyens non exclusivement optiques-mécaniques au moyen de dispositifs à l'état solide à balayage électronique

68.

SYSTEM AND METHOD FOR TRAFFIC SIGN RECOGNITION

      
Numéro d'application US2008010722
Numéro de publication 2009/035697
Statut Délivré - en vigueur
Date de dépôt 2008-09-15
Date de publication 2009-03-19
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Moed, Michael, C.
  • Bonnick, Thomas, W.

Abrégé

This invention provides a vehicle-borne system and method for traffic sign recognition that provides greater accuracy and efficiency in the location and classification of various types of traffic signs by employing rotation and scale-invariant (RSI)-based geometric pattern-matching on candidate traffic signs acquired by a vehicle-mounted forward-looking camera and applying one or more discrimination processes to the recognized sign candidates from the pattern-matching process to increase or decrease the confidence of the recognition. These discrimination processes include discrimination based upon sign color versus model sign color arrangements, discrimination based upon the pose of the sign candidate versus vehicle location and/or changes in the pose between image frames, and/or discrimination of the sign candidate versus stored models of fascia characteristics. The sign candidates that pass with high confidence are classified based upon the associated model data and the driver/vehicle is informed of their presence. In an illustrative embodiment, a preprocess step converts a color image of the sign candidates into a grayscale image in which the contrast between sign colors is appropriately enhanced to assist the pattern-matching process.

Classes IPC  ?

  • G06K 9/64 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques utilisant des comparaisons ou corrélations simultanées de signaux images avec une pluralité de références, p.ex. matrice de résistances

69.

METHOD AND SYSTEM FOR OPTOELECTRONIC DETECTION AND LOCATION OF OBJECTS

      
Numéro d'application US2008007280
Numéro de publication 2008/156608
Statut Délivré - en vigueur
Date de dépôt 2008-06-11
Date de publication 2008-12-24
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Silver, William, M.

Abrégé

Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.

Classes IPC  ?

70.

METHOD AND SYSTEM FOR OPTOELECTRONIC DETECTION AND LOCATION OF OBJECTS

      
Numéro d'application US2008007302
Numéro de publication 2008/156616
Statut Délivré - en vigueur
Date de dépôt 2008-06-11
Date de publication 2008-12-24
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Silver, William, M.

Abrégé

Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.

Classes IPC  ?

  • G01V 8/00 - Prospection ou détection par des moyens optiques

71.

SYSTEM AND METHOD FOR LOCATING A THREE-DIMENSIONAL OBJECT USING MACHINE VISION

      
Numéro d'application US2008006535
Numéro de publication 2008/153721
Statut Délivré - en vigueur
Date de dépôt 2008-05-22
Date de publication 2008-12-18
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Wallack, Aaron, S.
  • Michael, David, J.

Abrégé

This invention provides a system and method for determining position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, it is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation and employing an iterative, weighted least-squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall, refined/optimized pose estimate incorporates data from each of the cameras' acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of trained features relative to the runtime features is derived from the iterative error computation.

Classes IPC  ?

72.

HUMAN-MACHINE-INTERFACE AND METHOD FOR MANIPULATING DATA IN A MACHINE VISION SYSTEM

      
Numéro d'application US2007025861
Numéro de publication 2008/085346
Statut Délivré - en vigueur
Date de dépôt 2007-12-19
Date de publication 2008-07-17
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Tremblay, Robert
  • Phillips, Brian
  • Keating, John
  • Eames, Andrew
  • Whitman, Steven
  • Mirtich, Brian
  • Arbogast, Carol, M.

Abrégé

This invention provides a Graphical User Interface (GUI) that operates in connection with a machine vision detector or other machine vision system, which provides a highly intuitive and industrial machine-like appearance and layout. The GUI includes a centralized image frame window surrounded by panes having buttons and specific interface components that the user employs in each step of a machine vision system set up and run procedure. One pane allows the user to view and manipulate a recorded filmstrip of image thumbnails taken in a sequence, and provides the filmstrip with specialized highlighting (colors or patterns) that indicates useful information about the underlying images. The system is set up and run using a sequential series of buttons or switches that are activated by the user in turn to perform each of the steps needed to connect to a vision system, train the system to recognize or detect objects/parts, configure the logic that is used to handle recognition/detection signals, set up system outputs from the system based upon the logical results, and finally, run the programmed system in real time. The programming of logic is performed using a programming window that includes a ladder logic arrangement. A thumbnail window is provided on the programming window in which an image from a filmstrip is displayed, focusing upon the locations of the image (and underlying viewed object/part) in which the selected contact element is provided.

Classes IPC  ?

  • G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
  • G06T 7/00 - Analyse d'image

73.

METHOD AND APPARATUS FOR SEMICONDUCTOR WAFER ALIGNMENT

      
Numéro d'application US2007018752
Numéro de publication 2008/024476
Statut Délivré - en vigueur
Date de dépôt 2007-08-23
Date de publication 2008-02-28
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Michael, David
  • Clark, James
  • Liu, Gang

Abrégé

The invention provides, in some aspects, a wafer alignment system comprising an image acquisition device, an illumination source, a rotatable wafer platform, and an image processor that includes functionality for mapping coordinates in an image of an article (such as a wafer) on the platform to a 'world' frame of reference at each of a plurality of angles of rotation of the platform.

Classes IPC  ?

  • G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales

74.

AUTOMATICALLY DETERMINING MACHINE VISION TOOL PARAMETERS

      
Numéro d'application US2007007759
Numéro de publication 2007/126947
Statut Délivré - en vigueur
Date de dépôt 2007-03-28
Date de publication 2007-11-08
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Wallack, Aaron
  • Michael, David

Abrégé

A method for automatically determining machine vision tool parameters is presented, including: marking to indicate a desired image result for each image of a plurality of images; selecting a combination of machine vision tool parameters, and running the machine vision tool on the plurality of images using the combination of parameters to provide a computed image result for each image of the plurality of images, each computed image result including a plurality of computed measures; comparing each desired image result with a corresponding computed image result to provide a comparison result vector associated with the combination of machine vision tool parameters; then comparing the comparison result vector associated with the combination of machine vision tool parameters to a previously computed comparison result vector associated with a previous combination of machine vision tool parameters, using a result comparison heuristic, to determine which combination of machine vision tool parameters is best overall.
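
The sketch below gives a hedged illustration of the search loop: run a placeholder tool over the marked images for every parameter combination and keep the combination whose comparison vector scores best. The per-image equality comparison and the grid values are assumptions, not the patented heuristic.

# Minimal sketch: grid search over tool parameters against marked desired results.
from itertools import product

def comparison_vector(desired, computed):
    """Per-image agreement between desired and computed results (1 = match)."""
    return [1 if d == c else 0 for d, c in zip(desired, computed)]

def best_parameters(images, desired_results, run_tool, parameter_grid):
    best_combo, best_vector = None, None
    for combo in product(*parameter_grid.values()):
        params = dict(zip(parameter_grid.keys(), combo))
        computed = [run_tool(img, **params) for img in images]
        vector = comparison_vector(desired_results, computed)
        if best_vector is None or sum(vector) > sum(best_vector):   # simple heuristic
            best_combo, best_vector = params, vector
    return best_combo, best_vector

# Example grid of candidate parameter values for a hypothetical run_tool(image, ...).
grid = {"threshold": [50, 100, 150], "min_area": [10, 25]}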

Classes IPC  ?

  • G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales

75.

VIDEO TRAFFIC MONITORING AND SIGNALING APPARATUS

      
Numéro d'application US2007007662
Numéro de publication 2007/126874
Statut Délivré - en vigueur
Date de dépôt 2007-03-27
Date de publication 2007-11-08
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Schatz, David
  • Shillman, Robert

Abrégé

A traffic signal head having a signal lamp or signal ball with an embedded video monitoring system can be provided to perform vehicle detection to inform an intelligent traffic control system. Video monitoring of traffic lanes facing the signal head can be analyzed by such a system to emulate inductive loop signals that are input signals to traffic control systems.

Classes IPC  ?

  • G08G 1/08 - Commande des signaux de trafic selon le nombre ou la vitesse détectés des véhicules

76.

METHODS AND APPARATUS FOR PRACTICAL 3D VISION SYSTEM

      
Numéro d'application US2006039324
Numéro de publication 2007/044629
Statut Délivré - en vigueur
Date de dépôt 2006-10-06
Date de publication 2007-04-19
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s)
  • Wallack, Aaron
  • Michael, David

Abrégé

The invention provides inter alia methods and apparatus for determining the pose, e.g., position along x-, y- and z-axes, pitch, roll and yaw (or one or more characteristics of that pose) of an object in three dimensions by triangulation of data gleaned from multiple images of the object. Thus, for example, in one aspect, the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras disposed to acquire images of the object from different respective viewpoints are calibrated to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be acquired of the object. A runtime step triangulates locations in 3D space of one or more of those patterns from the pixel-wise positions of those patterns in images of the object and from the mappings discerned during the calibration step.
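
A hedged sketch of the triangulation step follows: once calibration maps a pixel in each camera to a 3D ray, a pattern seen by two or more cameras is located at the point that is closest, in the least-squares sense, to all of those rays. The ray parameterisation and example numbers are assumptions.

# Minimal sketch: least-squares intersection of calibrated pixel rays.
import numpy as np

def closest_point_to_rays(origins, directions):
    """Point minimising the summed squared distance to rays (origin o_i, direction d_i)."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)     # projector onto the plane normal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two cameras seeing the same pattern: rays that meet at (1, 1, 5).
origins = [np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])]
directions = [np.array([1.0, 1.0, 5.0]), np.array([-1.0, 1.0, 5.0])]
point = closest_point_to_rays(origins, directions)   # ~ [1, 1, 5]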

Classes IPC  ?

  • G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales

77.

METHODS AND APPARATUS FOR READING BAR CODE IDENTIFICATIONS

      
Numéro d'application US2005030785
Numéro de publication 2006/026594
Statut Délivré - en vigueur
Date de dépôt 2005-08-30
Date de publication 2006-03-09
Propriétaire COGNEX CORPORATION (USA)
Inventeur(s) Nadabar, Sateesh, G.

Abrégé

The invention provides methods and apparatus for analysis of images of two-dimensional (2D) bar codes (12) in which a model that has proven successful in decoding of a prior 2D image of a 2D bar code (12) is utilized to speed analysis of images of subsequent 2D bar codes (12). In its various aspects, the invention can be used in analyzing conventional 2D bar codes (12), e.g., those complying with Maxicode and DataMatrix standards, as well as stacked linear bar codes, e.g., those utilizing the Codablock symbology. Bar code readers (16), digital data processing apparatus and other devices according to the invention can be used, by way of non-limiting example, to decode bar codes (12) on damaged labels, as well as those screened, etched, peened or otherwise formed on manufactured articles (e.g., from semiconductors to airplane wings). In addition to making bar code reading possible under those conditions, devices utilizing such methods can speed bar code analysis in applications where multiple bar codes of like type are read in succession and/or are read under like circumstances, e.g., on the factory floor, at point-of-sale locations, in parcel delivery and so forth. Such devices can also speed and/or make possible bar code analysis in applications where multiple bar codes are read from a single article (14), e.g., as in the case of a multiply-encoded airplane propeller or other milled parts. The invention also provides methods and apparatus for optical character recognition and other image-based analysis paralleling the above.

Classes IPC  ?

  • G06K 7/10 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire
  • G06K 19/06 - Supports d'enregistrement pour utilisation avec des machines et avec au moins une partie prévue pour supporter des marques numériques caractérisés par le genre de marque numérique, p.ex. forme, nature, code