Cognex Corporation

United States of America


1-77 of 77 results for Cognex Corporation

Query: Patent, World - WIPO, Excluding Subsidiaries

Aggregations
Date
  • New (last 4 weeks): 2
  • 2024 April (MTD): 1
  • 2024 March: 1
  • 2024 February: 1
  • 2024 (YTD): 3
IPC Class
  • G06T 7/00 - Image analysis: 12
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation: 10
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints: 5
  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object: 4
  • G01N 21/88 - Investigating the presence of flaws, defects or contamination: 4

1.

SYSTEMS AND METHODS FOR DEFLECTOMETRY

      
Application Number US2023075730
Publication Number 2024/076922
Status In Force
Filing Date 2023-10-02
Publication Date 2024-04-11
Owner COGNEX CORPORATION (USA)
Inventor
  • Jacobson, Lowell D.
  • Wang, Lei

Abstract

Systems and methods for deflectometry are disclosed. The deflectometry display is divided into subregions, with the subregions collectively covering the entire display. Deflectometry data sets are acquired for each of the subregions, and all of the data is processed to compute fused deflectometry images having enhanced quality. By using display subregions, smaller portions of the object of interest are illuminated, so the amount of diffuse reflection is correspondingly reduced. By focusing on smaller regions of deflectometry patterns, the ratio of specular-to-diffuse reflection intensity can be increased. This allows display brightness and camera acquisition time to be increased without saturation and improves the signal-to-noise ratio of the specular signal, which improves the quality of the subsequently computed deflectometry images.
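
The subregion-and-fuse idea above can be illustrated with a minimal sketch (Python/NumPy); the 4x4 split, the placeholder acquisition function, and the max-based fusion are assumptions for illustration, not details from the patent.

    import numpy as np

    def acquire_with_subregion(subregion, frame_shape=(480, 640)):
        # Placeholder: return a deflectometry data set acquired while only the
        # given display subregion (y0, y1, x0, x1) shows the fringe pattern.
        y0, y1, x0, x1 = subregion
        frame = np.zeros(frame_shape)
        frame[y0:y1, x0:x1] = np.random.rand(y1 - y0, x1 - x0)  # stand-in signal
        return frame

    def fused_deflectometry(rows=4, cols=4, frame_shape=(480, 640)):
        # Acquire one data set per display subregion and fuse them; lighting a
        # smaller region keeps specular reflections below saturation even at
        # higher display brightness or longer exposure.
        h, w = frame_shape[0] // rows, frame_shape[1] // cols
        fused = np.zeros(frame_shape)
        for r in range(rows):
            for c in range(cols):
                sub = (r * h, (r + 1) * h, c * w, (c + 1) * w)
                fused = np.maximum(fused, acquire_with_subregion(sub, frame_shape))
        return fused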

IPC Classes

  • G01N 21/88 - Investigating the presence of flaws, defects or contamination
  • G06T 7/00 - Image analysis
  • G01B 11/30 - Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces

2.

SYSTEMS AND METHODS FOR CONFIGURING MACHINE VISION TUNNELS

      
Application Number US2023074949
Publication Number 2024/064924
Status In Force
Filing Date 2023-09-22
Publication Date 2024-03-28
Owner COGNEX CORPORATION (USA)
Inventor
  • Sanz Rodriguez, Saul
  • Ruetten, Jens
  • Depre, Tony
  • Stroo, Bart

Abstract

Systems and methods are provided for generating machine vision tunnel configurations. The systems and methods described herein may automatically generate a configuration summary of tunnel configurations for a prospective machine vision tunnel based on received parameters. The configuration summary may be modified via operator interaction with the configuration summary. The systems and methods described herein may also automatically generate and transmit a bill of materials report, a tunnel commissioning report, or a graphical representation of an approved tunnel configuration, including generating some or all of these data sets dynamically in response to operator inputs.

IPC Classes

3.

METHODS AND APPARATUS FOR DETERMINING ORIENTATIONS OF AN OBJECT IN THREE-DIMENSIONAL DATA

      
Application Number US2023072102
Publication Number 2024/036320
Status In Force
Filing Date 2023-08-11
Publication Date 2024-02-15
Owner COGNEX CORPORATION (USA)
Inventor
  • Vaidya, Nitin, M.
  • Bogan, Nathaniel

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a candidate three-dimensional (3D) orientation of an object represented by a three-dimensional (3D) point cloud. The method includes receiving data indicative of a 3D point cloud comprising a plurality of 3D points, determining a first histogram for the plurality of 3D points based on geometric features determined based on the plurality of 3D points, accessing data indicative of a second histogram of geometric features of a 3D representation of a reference object, computing, for each of a plurality of different rotations between the first histogram and the second histogram in 3D space, a scoring metric for the associated rotation, and determining the candidate 3D orientation based on the scoring metrics of the plurality of different rotations.
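
A hedged sketch of the rotation-scoring step described above: score each candidate rotation by comparing the run-time histogram with the reference histogram re-expressed under that rotation, and keep the best score. The function names and the normalized-correlation score are illustrative assumptions.

    import numpy as np

    def histogram_score(hist_a, hist_b):
        # Normalized correlation between two histograms (an assumed scoring metric).
        a, b = hist_a.ravel(), hist_b.ravel()
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def best_candidate_orientation(runtime_hist, reference_hist_under, rotations):
        # reference_hist_under(R) returns the reference histogram re-expressed
        # under rotation R; rotations is an iterable of candidate 3x3 matrices.
        scored = [(histogram_score(runtime_hist, reference_hist_under(R)), R)
                  for R in rotations]
        return max(scored, key=lambda item: item[0])[1]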

IPC Classes

  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods

4.

SYSTEMS AND METHODS FOR COMMISSIONING A MACHINE VISION SYSTEM

      
Application Number US2023066773
Publication Number 2023/220590
Status In Force
Filing Date 2023-05-09
Publication Date 2023-11-16
Owner COGNEX CORPORATION (USA)
Inventor
  • Wurz, Caitlin
  • Liu, Humberto Andres, Leon
  • Atkins III, Henry
  • Tanmay, Sinha
  • Surana, Deepak
  • Gauthier, Georges
  • El-Barkouky, Ahmed
  • Brodeur, Patrick
  • Lutzke, Patrick
  • Ruetten, Jens
  • Depre, Tony

Abstract

Methods and systems are provided for commissioning machine vision systems. The methods and systems described herein may automatically configure, or otherwise assist users in configuring, a machine vision system based on a specification package.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

5.

SYSTEM AND METHOD FOR FIELD CALIBRATION OF A VISION SYSTEM

      
Application Number US2023066778
Publication Number 2023/220593
Status In Force
Filing Date 2023-05-09
Publication Date 2023-11-16
Owner COGNEX CORPORATION (USA)
Inventor
  • Wurz, Caitlin
  • Liu, Humberto Andres Leon
  • Brodeur, Patrick
  • Gauthier, Georges
  • Sposato, Kyle
  • Corbett, Michael
  • El-Barkouky, Ahmed

Abstract

A method for three-dimensional (3D) field calibration of a machine vision system includes receiving a set of calibration parameters and an identification of one or more machine vision system imaging devices, determining a camera acquisition parameter for calibration based on the set of calibration parameters, validating the set of calibration parameters and the camera acquisition parameter, and controlling the imaging device(s) to collect image data of a calibration target. The image data may be collected using the determined camera acquisition parameter. The method further includes generating a set of calibration data for the imaging device(s) using the collected image data for the imaging device(s). The set of calibration data can include a maximum error. The method further includes generating a report including the set of calibration data for the imaging device(s) and an indication of whether the maximum error for the imaging device(s) is within an acceptable tolerance and displaying the report on a display.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

6.

SYSTEM AND METHOD FOR DYNAMIC TESTING OF A MACHINE VISION SYSTEM

      
Application Number US2023066779
Publication Number 2023/220594
Status In Force
Filing Date 2023-05-09
Publication Date 2023-11-16
Owner COGNEX CORPORATION (USA)
Inventor
  • Wurz, Caitlin
  • Liu, Humberto Andres Leon
  • Brodeur, Patrick
  • Gauthier, Georges
  • Sposato, Kyle
  • Corbett, Michael
  • Sanz Rodriguez, Saul
  • Ruetten, Jens

Abstract

A method for dynamic testing of a machine vision system includes receiving a set of testing parameters and a selection of a tunnel system. The machine vision system can include the tunnel system and the tunnel system can include a conveyor and at least one imaging device. The method can further include validating the testing parameters and controlling the at least one imaging device to acquire a set of image data of a testing target positioned at a predetermined justification on the conveyor. The testing target can include a plurality of target symbols. The method can further include determining a test result by analyzing the set of image data to determine if the at least one imaging device reads a target symbol associated with the at least one imaging device and generating a report including the test result.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

7.

MACHINE VISION SYSTEM AND METHOD WITH HYBRID ZOOM OPTICAL ASSEMBLY

      
Application Number US2023066413
Publication Number 2023/212735
Status In Force
Filing Date 2023-04-28
Publication Date 2023-11-02
Owner COGNEX CORPORATION (USA)
Inventor
  • Fernandez Dorado, Jose
  • Nunnink, Laurens

Abstract

An optical assembly (120) for a machine vision system (100) having an image sensor (104) includes a lens assembly (108) and a motor system (118) coupled to the lens assembly (108). The lens assembly (108) can include a plurality of solid lens elements (126) and a liquid lens (128), where the liquid lens (128) includes an adjustable membrane (136). The motor system (118) can be configured to move the lens assembly (108) to adjust a distance between the lens assembly (108) and the image sensor (104) of the vision system (100).

IPC Classes

  • G02B 3/14 - Fluid-filled or evacuated lenses of variable focal length
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof

8.

SYSTEM AND METHOD FOR USE OF POLARIZED LIGHT TO IMAGE TRANSPARENT MATERIALS APPLIED TO OBJECTS

      
Application Number US2023014354
Publication Number 2023/167984
Status In Force
Filing Date 2023-03-02
Publication Date 2023-09-07
Owner COGNEX CORPORATION (USA)
Inventor
  • Carey, Ben, R.
  • Norkett, Ryan, D.
  • Molnar, Gergely, G.

Abstract

This invention provides a system and method for inspecting transparent or translucent features on a substrate of an object. A vision system camera, having an image sensor that provides image data to a vision system processor, receives light from a field of view that includes the object through a light-polarizing filter assembly. An illumination source projects polarized light onto the substrate within the field of view. A vision system process locates and registers the substrate and locates thereon, based upon the registration, the transparent or translucent features. A vision system process then performs inspection on the features using predetermined thresholds. The substrate can be a shipping box on a conveyor, having flaps sealed at a seam by transparent tape. Alternatively, a plurality of illuminators or cameras can project and receive polarized light oriented in a plurality of polarization angles, which generates a plurality of images that are combined into a result image.

IPC Classes

  • G01N 21/21 - Polarisation-affecting properties
  • G01N 21/88 - Investigating the presence of flaws, defects or contamination
  • G01N 21/90 - Investigating the presence of flaws, defects or contamination in a container or its contents
  • G01N 21/95 - Investigating the presence of flaws, defects or contamination characterised by the material or shape of the object to be examined
  • G01N 21/84 - Systems specially adapted for particular applications

9.

CONFIGURABLE LIGHT EMISSION BY SELECTIVE BEAM-SWEEPING

      
Application Number US2023013644
Publication Number 2023/164009
Status In Force
Filing Date 2023-02-22
Publication Date 2023-08-31
Owner COGNEX CORPORATION (USA)
Inventor
  • Aljasem, Khalid
  • Eggert, Florian
  • Ruhnau, Thomas
  • Schweier, Andre
  • Filhaber, John

Abstract

An opto-electronic system includes a laser operable to produce a laser beam; an optical element including two or more beam-shaping portions, each of the two or more beam-shaping portions having a different optical property; a beam deflector arranged to sweep the laser beam across the optical element to produce output light; and electronics communicatively coupled with the laser, the beam deflector, or both the laser and the beam deflector. The electronics are configured to cause selective impingement of the laser beam onto a proper subset of the two or more beam-shaping portions of the optical element to modify one or more optical parameters of the output light.

IPC Classes

  • G02B 27/09 - Beam shaping, e.g. changing the cross-sectioned area, not otherwise provided for
  • G02B 26/10 - Scanning systems

10.

SYSTEM AND METHOD FOR APPLYING DEEP LEARNING TOOLS TO MACHINE VISION AND INTERFACE FOR THE SAME

      
Application Number US2022018778
Publication Number 2023/075825
Status In Force
Filing Date 2022-03-03
Publication Date 2023-05-04
Owner COGNEX CORPORATION (USA)
Inventor
  • Wyss, Reto
  • Petry III, John, P.

Abstract

This invention overcomes disadvantages of the prior art by providing a vision system and method of use, and graphical user interface (GUI), which employs a camera assembly having an on-board processor of low to modest processing power. At least one vision system tool analyzes image data, and generates results therefrom, based upon a deep learning process. A training process provides training image data to a processor remote from the on-board processor to cause generation of the vision system tool therefrom, and provides a stored version of the vision system tool for runtime operation on the on-board processor. The GUI allows manipulation of thresholds applicable to the vision system tool and refinement of training of the vision system tool by the training process. A scoring process allows unlabeled images from a set of acquired and/or stored images to be selected automatically for labelling as training images using a computed confidence score.

IPC Classes

  • G06V 30/40 - Document-oriented image-based pattern recognition

11.

SYSTEMS AND METHODS FOR DETECTING OBJECTS

      
Application Number US2022046040
Publication Number 2023/059876
Status In Force
Filing Date 2022-10-07
Publication Date 2023-04-13
Owner COGNEX CORPORATION (USA)
Inventor
  • Liu, Zihan, Hans
  • Hanhart, Philippe
  • Wyss, Reto
  • Barker, Simon, Alaric

Abstract

The techniques described herein relate to computerized methods and apparatuses for detecting objects in an image. The techniques described herein further relate to computerized methods and apparatuses for detecting one or more objects using a pretrained machine learning model and one or more other machine learning models that can be trained in a field training process. The pre-trained machine learning model may be a deep machine learning model.

IPC Classes

  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06T 7/00 - Image analysis

12.

MODULAR MIRROR SUBSYSTEMS FOR MULTI-SIDE SCANNING

      
Application Number US2022041383
Publication Number 2023/028149
Status In Force
Filing Date 2022-08-24
Publication Date 2023-03-02
Owner COGNEX CORPORATION (USA)
Inventor
  • Sanz Rodriguez, Saul
  • Depre, Tony
  • Ruetten, Jens, III
  • Nunnink, Laurens

Abstract

A method for an imaging module can include rotating an imaging assembly that includes an imaging device about a first pivot point of a bracket to a select first orientation, fastening the imaging assembly to the bracket at the first orientation, rotating a mirror assembly that includes a mirror about a second pivot point of the bracket to a select second orientation, and fastening the mirror assembly to the bracket at the second orientation. An adjustable, selectively oriented imaging assembly of a first imaging module can acquire images using an adjustable, selectively oriented mirror assembly of a second imaging module.

IPC Classes

  • G02B 7/182 - Mountings, adjusting means, or light-tight connections, for optical elements for mirrors for mirrors
  • G02B 7/198 - Mountings, adjusting means, or light-tight connections, for optical elements for mirrors for mirrors with means for adjusting the mirror relative to its support
  • G02B 26/10 - Scanning systems
  • G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems

13.

MACHINE VISION SYSTEM AND METHOD WITH MULTISPECTRAL LIGHT ASSEMBLY

      
Application Number US2022038842
Publication Number 2023/014601
Status In Force
Filing Date 2022-07-29
Publication Date 2023-02-09
Owner COGNEX CORPORATION (USA)
Inventor
  • Fernandez-Dorado, Jose
  • Nunnink, Laurens

Abstract

A multispectral light assembly for an illumination system includes a multispectral light source configured to generate a plurality of different wavelengths of light and a light pipe positioned in front of the multispectral light source and configured to provide color mixing for two or more of the plurality of different wavelengths. The multispectral light assembly also includes a diffusive surface on the light pipe and a projection lens positioned in front of the diffusive surface. A processor device may be in communication with the multispectral light assemblies and may be configured to control activation of the multispectral light source.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • B60Q 1/02 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor the devices being primarily intended to illuminate the way ahead or to illuminate other areas of way or environments
  • F21K 9/23 - Retrofit light sources for lighting devices with a single fitting for each light source, e.g. for substitution of incandescent lamps with bayonet or threaded fittings
  • F21V 23/00 - Arrangement of electric circuit elements in or on lighting devices
  • G06K 7/12 - Methods or arrangements for sensing record carriers by corpuscular radiation using a selected wavelength, e.g. to sense red marks and ignore blue marks
  • G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
  • G02B 7/08 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses with mechanism for focusing or varying magnification adapted to co-operate with a remote control mechanism

14.

MACHINE VISION SYSTEM AND METHOD WITH STEERABLE MIRROR

      
Application Number US2022034929
Publication Number 2022/272080
Status In Force
Filing Date 2022-06-24
Publication Date 2022-12-29
Owner COGNEX CORPORATION (USA)
Inventor
  • Kempf, Torsten
  • Rodriguez, Saul, Sanz
  • Fernandez-Dorado, Pepe
  • Nunnink, Laurens

Abstract

A computer-implemented method for scanning a side of an object (22). The method can include determining a scanning pattern for an imaging device (e.g., based on a distance between the side of the object and the imaging device), and moving the controllable mirror (30) according to the scanning pattern to acquire a plurality of images of the side of the object. A region of interest can be identified based on the plurality of images.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

15.

SYSTEMS AND METHODS FOR ASSIGNING A SYMBOL TO AN OBJECT

      
Application Number US2022035175
Publication Number 2022/272173
Status In Force
Filing Date 2022-06-27
Publication Date 2022-12-29
Owner COGNEX CORPORATION (USA)
Inventor
  • El-Barkouky, Ahmed
  • Sauter, Emily

Abstract

A method for assigning a symbol to an object in an image includes receiving the image captured by an imaging device where the symbol may be located within the image. The method further includes receiving, in a first coordinate system, a three-dimensional (3D) location of one or more points that corresponds to pose information indicative of a 3D pose of the object in the image, mapping the 3D location of the one or more points of the object to a 2D location within the image, and assigning the symbol to the object based on a relationship between a 2D location of the symbol in the image and the 2D location of the one or more points of the object in the image.
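
A minimal sketch of the mapping-and-assignment step described above, assuming a pinhole camera model and a nearest-projected-point rule; the camera matrix K, the pose (R, t), and the distance-based rule are illustrative assumptions.

    import numpy as np

    def project_to_image(points_3d, K, R, t):
        # Map 3D points (N, 3) into 2D pixel coordinates with a pinhole model.
        cam = R @ points_3d.T + t.reshape(3, 1)
        uv = K @ cam
        return (uv[:2] / uv[2]).T

    def assign_symbol(symbol_xy, objects, K, R, t):
        # objects: {object_id: (N, 3) array of pose points}; the symbol is
        # assigned to the object whose projected points come closest to it.
        best_id, best_dist = None, np.inf
        for obj_id, pts in objects.items():
            dist = np.linalg.norm(project_to_image(pts, K, R, t) - symbol_xy, axis=1).min()
            if dist < best_dist:
                best_id, best_dist = obj_id, dist
        return best_id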

IPC Classes

  • G06V 20/66 - Trinkets, e.g. shirt buttons or jewellery items
  • G06V 20/64 - Three-dimensional objects

16.

METHODS, SYSTEMS, AND MEDIA FOR GENERATING IMAGES OF MULTIPLE SIDES OF AN OBJECT

      
Application Number US2022033098
Publication Number 2022/261496
Status In Force
Filing Date 2022-06-10
Publication Date 2022-12-15
Owner COGNEX CORPORATION (USA)
Inventor
  • El-Barkouky, Ahmed
  • Negro, James, A.
  • Ye, Xiangyun

Abstract

In accordance with some embodiments of the disclosed subject matter, methods, systems, and media for generating images of multiple sides of an object are provided. In some embodiments, a method comprises receiving information indicative of a 3D pose of a first object in a first coordinate space at a first time; receiving a group of images captured using at least one image sensor, each image associated with a field of view within the first coordinate space; mapping at least a portion of a surface of the first object to a 2D area with respect to the image based on the 3D pose of the first object; associating, for images including the surface, a portion of that image with the surface of the first object based on the 2D area; and generating a composite image of the surface using images associated with the surface.

IPC Classes

17.

SYSTEMS AND METHODS FOR DETECTING AND ADDRESSING ISSUE CLASSIFICATIONS FOR OBJECT SORTING

      
Application Number US2022017969
Publication Number 2022/183033
Status In Force
Filing Date 2022-02-25
Publication Date 2022-09-01
Owner COGNEX CORPORATION (USA)
Inventor
  • Lachappelle, Shane
  • Link, John, Jeffrey
  • Lym, Julie, Juhyun

Abstract

Some embodiments of the disclosure provide systems and methods for improving sorting and routing of objects, including in sorting systems. Characteristic dimensional data for one or more objects with common barcode information can be compared to dimensional data of another object with the common barcode information to evaluate a classification (e.g., a side-by-side exception) of the other object. In some cases, the evaluation can include identifying the classification as incorrect (e.g., as a false side-by-side exception).
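
The comparison described above can be sketched as follows; the per-axis relative tolerance and the simple all-dimensions check are assumptions, not the patent's actual decision rule.

    def is_false_side_by_side(flagged_dims, characteristic_dims, tolerance=0.10):
        # dims are (length, width, height) for objects sharing the same barcode;
        # if the flagged object's dimensions match the characteristic dimensions
        # within tolerance, the side-by-side exception is likely a false positive.
        return all(abs(f - c) <= tolerance * c
                   for f, c in zip(flagged_dims, characteristic_dims))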

IPC Classes

  • B07C 3/14 - Apparatus characterised by the means used for detection of the destination using light-responsive detecting means
  • B07C 5/34 - Sorting according to other particular properties

18.

OPTICAL IMAGING DEVICES AND METHODS

      
Application Number US2021062820
Publication Number 2022/125906
Status In Force
Filing Date 2021-12-10
Publication Date 2022-06-16
Owner COGNEX CORPORATION (USA)
Inventor
  • Dippel, Nicole
  • Oteo, Esther
  • Nunnink, Laurens
  • Gerst, Carl, W.
  • Engle, Matthew, D.

Abstract

The present invention relates to optical imaging devices and methods for reading optical codes. The imaging device comprises a sensor, a lens, a plurality of illumination devices, and a plurality of reflective surfaces. The sensor is configured to sense with a predetermined number of lines of pixels, where the predetermined lines of pixels are arranged in a predetermined position. The lens has an imaging path along an optical axis. The plurality of illumination devices are configured to transmit an illumination pattern along the optical axis, and the plurality of reflective surfaces are configured to fold the optical axis.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

19.

FLEXURE ARRANGEMENTS FOR OPTICAL COMPONENTS

      
Application Number US2021057212
Publication Number 2022/098571
Status In Force
Filing Date 2021-10-29
Publication Date 2022-05-12
Owner COGNEX CORPORATION (USA)
Inventor Filhaber, John

Abstract

An optical system can include a receiver secured to a first optical component and a flexure arrangement secured to a second optical component. The flexure arrangement can include a plurality of flexures, each with a free end that can extend away from the second optical component and into a corresponding cavity of the receiver. Each of the cavities can be sized to receive adhesive that secures the corresponding flexure within the cavity when the adhesive has hardened, and to permit adjustment of the corresponding flexure within the cavity, before the adhesive has hardened, to adjust an alignment of the first and second optical components relative to multiple degrees of freedom.

IPC Classes

  • G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
  • G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses

20.

SYSTEM AND METHOD FOR EXTRACTING AND MEASURING SHAPES OF OBJECTS HAVING CURVED SURFACES WITH A VISION SYSTEM

      
Application Number US2021055314
Publication Number 2022/082069
Status In Force
Filing Date 2021-10-15
Publication Date 2022-04-21
Owner COGNEX CORPORATION (USA)
Inventor
  • Zhu, Hongwei
  • Moreno, Daniel

Abstract

This invention provides a system and method that efficiently detects objects imaged using a 3D camera arrangement by referencing a cylindrical or spherical surface represented by a point cloud, and measures variant features of an extracted object, including volume, height, center of mass, bounding box, and other relevant metrics. The system and method, advantageously, operate directly on unorganized and unordered points, requiring neither a mesh/surface reconstruction nor a voxel grid representation of object surfaces in a point cloud. Based upon a cylinder/sphere reference model, an acquired 3D point cloud is flattened. Object (blob) detection is carried out in the flattened 3D space, and objects are converted back to the 3D space to compute the features, which can include regions that differ from the regular shape of the cylinder/sphere. Downstream utilization devices and/or processes, such as a part-reject mechanism and/or robot manipulators, can act on the identified feature data.
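
A minimal sketch of the flattening step for a cylinder reference, assuming the cylinder axis lies along z; the coordinate choice (arc length, height, radial deviation) is an illustrative assumption.

    import numpy as np

    def flatten_to_cylinder(points, radius):
        # points: (N, 3) array; cylinder assumed centered on the z axis.
        x, y, z = points.T
        theta = np.arctan2(y, x)
        r = np.hypot(x, y)
        # Unrolled surface coordinates (arc length, height) plus radial deviation,
        # in which blob detection can then be performed.
        return np.column_stack([radius * theta, z, r - radius])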

IPC Classes

  • G06T 3/00 - Geometric image transformation in the plane of the image
  • G06T 7/00 - Image analysis

21.

MACHINE VISION SYSTEM AND METHOD WITH ON-AXIS AIMER AND DISTANCE MEASUREMENT ASSEMBLY

      
Application Number US2021052159
Publication Number 2022/067163
Status In Force
Filing Date 2021-09-27
Publication Date 2022-03-31
Owner COGNEX CORPORATION (USA)
Inventor
  • Fernandez-Dorado, Jose
  • Garcia-Campos, Pablo
  • Nunnink, Laurens

Abstract

An on-axis aimer and distance measurement apparatus for a vision system can include a light source configured to generate a first light beam along a first axis. The first light beam can project an aimer pattern on an object, and a receiver can be configured to receive reflected light from the first light beam to determine a distance between a lens of the vision system and the object. One or more parameters of the vision system can be controlled based on the determined distance.

IPC Classes

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

22.

MACHINE VISION SYSTEM AND METHOD WITH MULTI-APERTURE OPTICS ASSEMBLY

      
Application Number US2021048903
Publication Number 2022/051526
Status In Force
Filing Date 2021-09-02
Publication Date 2022-03-10
Owner COGNEX CORPORATION (USA)
Inventor
  • Fernandez-Dorado, Jose
  • Nunnink, Laurens

Abstract

An apparatus for controlling a depth of field for a reader in a vision system includes a dual aperture assembly having an inner region and an outer region. A first light source can be used to generate a light beam associated with the inner region and a second light source can be used to generate a light beam associated with the outer region. The depth of field of the reader can be controlled by selecting one of the first light source and second light source to illuminate an object to acquire an image of the object. The selection of the first light source or the second light source can be based on at least one parameter of the vision system.

IPC Classes

  • H04N 5/225 - Television cameras
  • H04N 13/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details thereof
  • G01S 17/42 - Simultaneous measurement of distance and other coordinates
  • H04N 5/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems
  • H04N 13/106 - Processing image signals

23.

SYSTEM AND METHOD FOR EXTENDING DEPTH OF FIELD FOR 2D VISION SYSTEM CAMERAS IN THE PRESENCE OF MOVING OBJECTS

      
Application Number US2021040722
Publication Number 2022/011036
Status In Force
Filing Date 2021-07-07
Publication Date 2022-01-13
Owner COGNEX CORPORATION (USA)
Inventor
  • Dorado, Jose, Fernandez
  • Lozano, Esther, Oteo
  • Campos, Pablo, Garcia
  • Nunnink, Laurens

Abstract

This invention provides a system and method for enhanced depth of field (DOF), advantageously used in logistics applications for scanning features and ID codes on objects. It effectively combines a vision system, a glass lens designed for on-axis and Scheimpflug configurations, a variable lens, and a mechanical system to adapt the lens to the different configurations without detaching the optics. The optics can be steerable, which allows the viewing angle to be adjusted between variable angles so as to optimize DOF for the object in a Scheimpflug configuration. One, or a plurality, of images can be acquired of the object at one, or differing, angle settings, with the entire region of interest clearly imaged. In another implementation, the optical path can include a steerable mirror and a folding mirror overlying the region of interest, which allows multiple images to be acquired at different locations on the object.

IPC Classes

  • G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
  • G03B 5/06 - Swinging lens about normal to the optical axis
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • H04N 5/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems

24.

METHODS AND APPARATUS FOR IDENTIFYING SURFACE FEATURES IN THREE-DIMENSIONAL IMAGES

      
Application Number US2021031513
Publication Number 2021/231260
Status In Force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor
  • Bogan, Nathaniel
  • Hoelscher, Andrew
  • Michael, David J.

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to identify a surface feature of a portion of a three-dimensional (3D) point cloud. Data indicative of a path along a 3D point cloud is received, wherein the 3D point cloud comprises a plurality of 3D data points. A plurality of lists of 3D data points are generated, wherein: each list of 3D data points extends across the 3D point cloud at a location that intersects the received path; and each list of 3D data points intersects the received path at different locations. A characteristic associated with a surface feature is identified in at least some of the plurality of lists of 3D data points. The identified characteristics are grouped based on one or more properties of the identified characteristics. The surface feature is identified based on the grouped characteristics.

IPC Classes

25.

METHODS AND APPARATUS FOR GENERATING POINT CLOUD HISTOGRAMS

      
Application Number US2021031519
Publication Number 2021/231265
Status In Force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor
  • Zhu, Hongwei
  • Michael, David, J.
  • Vaidya, Nitin, M.

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to generate point cloud histograms. A one-dimensional histogram can be generated by determining a distance to a reference for each 3D point of a 3D point cloud. A one-dimensional histogram is generated by adding, for each histogram entry, distances that are within the entry's range of distances. A two-dimensional histogram can be determined by generating a set of orientations by determining, for each 3D point, an orientation with at least a first value for a first component and a second value for a second component. A two-dimensional histogram can be generated based on the set of orientations. Each bin can be associated with ranges of values for the first and second components. Orientations can be added for each bin that have first and second values within the first and second ranges of values, respectively, of the bin.
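
The one-dimensional histogram described above amounts to binning point-to-reference distances; a minimal sketch for a plane reference follows (the plane parameterization and numpy.histogram in place of per-entry accumulation are assumptions).

    import numpy as np

    def distance_histogram(points, plane_normal, plane_d, bins=32):
        # points: (N, 3) array; reference plane n.x + d = 0 with unit normal n.
        n = np.asarray(plane_normal, dtype=float)
        n /= np.linalg.norm(n)
        distances = points @ n + plane_d
        counts, edges = np.histogram(distances, bins=bins)
        return counts, edges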

IPC Classes

  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06T 7/521 - Depth or shape recovery from the projection of structured light

26.

METHODS AND APPARATUS FOR EXTRACTING PROFILES FROM THREE-DIMENSIONAL IMAGES

      
Application Number US2021031503
Publication Number 2021/231254
Status In Force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor
  • Zhu, Hongwei
  • Bogan, Nathaniel
  • Michael, David, J.

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to determine a two-dimensional (2D) profile of a portion of a three-dimensional (3D) point cloud. A 3D region of interest is determined that includes a width along a first axis, a height along a second axis, and a depth along a third axis. The 3D points within the 3D region of interest are represented as a set of 2D points based on coordinate values of the first and second axes. The 2D points are grouped into a plurality of 2D bins arranged along the first axis. For each 2D bin, a representative 2D position is determined based on the associated set of 2D points. Each of the representative 2D positions is connected to neighboring representative 2D positions to generate the 2D profile.
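
A minimal sketch of the profile extraction above: collapse the region of interest to 2D, bin the points along the first axis, keep one representative position per bin, and connect neighbours. The bin count and the median as the representative are assumptions.

    import numpy as np

    def extract_profile(points_2d, n_bins=64):
        # points_2d: (N, 2) array of (x, z) points inside the 3D region of
        # interest, already projected onto the first two axes.
        x, z = points_2d[:, 0], points_2d[:, 1]
        edges = np.linspace(x.min(), x.max(), n_bins + 1)
        profile = []
        for k in range(n_bins):
            mask = (x >= edges[k]) & (x < edges[k + 1])
            if mask.any():
                profile.append((edges[k:k + 2].mean(), float(np.median(z[mask]))))
        return profile  # consecutive entries form the connected 2D profile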

IPC Classes

27.

METHODS AND APPARATUS FOR DETERMINING VOLUMES OF 3D IMAGES

      
Application Number US2021031523
Publication Number 2021/231266
Status In Force
Filing Date 2021-05-10
Publication Date 2021-11-18
Owner COGNEX CORPORATION (USA)
Inventor Moreno, Daniel, Alejandro

Abstract

The techniques described herein relate to methods, apparatus, and computer readable media configured to determine an estimated volume of an object captured by a three-dimensional (3D) point cloud. A 3D point cloud comprising a plurality of 3D points and a reference plane in spatial relation to the 3D point cloud is received. A 2D grid of bins is configured along the reference plane, wherein each bin of the 2D grid comprises a length and width that extend along the reference plane. For each bin of the 2D grid, a number of 3D points in the bin and a height of the bin from the reference plane are determined. An estimated volume of the object captured by the 3D point cloud is then computed based on the calculated number of 3D points in each bin and the height of each bin.
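
The volume estimate described above can be sketched as a sum of bin-area times per-bin height; the bin size and the choice of the maximum height per bin are assumptions.

    import numpy as np

    def estimate_volume(points, bin_size=5.0):
        # points: (N, 3) array with z measured as height above the reference plane.
        ij = np.floor(points[:, :2] / bin_size).astype(int)
        heights = {}
        for key, z in zip(map(tuple, ij), points[:, 2]):
            heights[key] = max(heights.get(key, 0.0), float(z))
        # volume ~ sum over occupied bins of (bin area) * (bin height)
        return bin_size * bin_size * sum(heights.values())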

IPC Classes

  • G06T 7/521 - Depth or shape recovery from the projection of structured light
  • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume

28.

SYSTEM AND METHOD FOR THREE-DIMENSIONAL SCAN OF MOVING OBJECTS LONGER THAN THE FIELD OF VIEW

      
Application Number US2021018627
Publication Number 2021/168151
Status In Force
Filing Date 2021-02-18
Publication Date 2021-08-26
Owner COGNEX CORPORATION (USA)
Inventor
  • Carey, Ben R.
  • Parrett, Andrew
  • Liu, Yukang
  • Chiang, Gilbert

Abstract

This invention provides a system and method for using an area scan sensor of a vision system, in conjunction with an encoder or other knowledge of motion, to capture an accurate measurement of an object larger than a single field of view (FOV) of the sensor. It identifies features/edges of the object, which are tracked from image to image, thereby providing a lightweight way to process the overall extents of the object for dimensioning purposes. Logic automatically determines if the object is longer than the FOV, and thereby causes a sequence of image acquisition snapshots to occur while the moving/conveyed object remains within the FOV until the object is no longer present in the FOV. At that point, acquisition ceases and the individual images are combined as segments in an overall image. These images can be processed to derive overall dimensions of the object based on input application details.
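
A hedged sketch of combining the acquired snapshots into an overall image using encoder positions; the pixels-per-count calibration and the simple paste are assumptions.

    import numpy as np

    def stitch_segments(segments, encoder_counts, px_per_count, width):
        # segments: list of (rows, width) arrays acquired while the object moves;
        # encoder_counts: encoder value at each acquisition (monotonically increasing).
        base = encoder_counts[0]
        offsets = [int(round((c - base) * px_per_count)) for c in encoder_counts]
        total_rows = offsets[-1] + segments[-1].shape[0]
        overall = np.zeros((total_rows, width))
        for seg, off in zip(segments, offsets):
            overall[off:off + seg.shape[0], :] = seg
        return overall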

IPC Classes

29.

COMPOSITE THREE-DIMENSIONAL BLOB TOOL AND METHOD FOR OPERATING THE SAME

      
Application Number US2021017498
Publication Number 2021/163219
Status In Force
Filing Date 2021-02-10
Publication Date 2021-08-19
Owner COGNEX CORPORATION (USA)
Inventor
  • Liu, Gang
  • Carey, Ben, R.
  • Mullan, Nickolas, J.
  • Spear, Caleb
  • Parrett, Andrew

Abstract

This invention provides a system and method that performs 3D imaging of a complex object, where image data is likely lost. Available 3D image data, in combination with an absence/loss of image data, allows computation of x, y and z dimensions. Absence/loss of data is assumed to be just another type of image data, and represents the presence of something that has prevented accurate data from being generated in the subject image. Segments of data can be connected to areas of absent data and generate a maximum bounding box. The shadow that this object generates can be represented as negative or missing data, but is not representative of the physical object. The height from the positive data, the object shadow size based on that height, the location in the FOV, and the ray angles that generate the images, are estimated and the object shadow size is removed from the result.

IPC Classes

30.

OBJECT DIMENSIONING SYSTEM AND METHOD

      
Application Number US2020015203
Publication Number 2020/159866
Status In Force
Filing Date 2020-01-27
Publication Date 2020-08-06
Owner COGNEX CORPORATION (USA)
Inventor
  • Fernandez-Dorado, José
  • Mira, Emilio, Pastor
  • Guerrero, Francisco, Azcona
  • Bachelder, Ivan
  • Nunnink, Laurens
  • Kempf, Torsten
  • Vaidyanathan, Savithri
  • Moed, Kyra
  • Boatner, John, Bryan

Abstract

Determining dimensions of an object can include determining a distance between the object and an imaging device, and an angle of an optical axis of the imaging device. One or more features of the object can be identified in an image of the object. The dimensions of the object can be determined based upon the distance, the angle, and the one or more identified features.
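
As a rough illustration of how distance and optical-axis angle enter the computation, a pinhole-model back-of-envelope relation is sketched below; the specific formula and the cosine foreshortening correction are assumptions, not the patent's method.

    import math

    def edge_length_world(edge_pixels, distance, focal_length_px, tilt_deg=0.0):
        # Pinhole approximation: an edge of edge_pixels at the given distance
        # spans roughly edge_pixels * distance / focal_length_px in world units;
        # dividing by cos(tilt) compensates for foreshortening of a tilted surface.
        return edge_pixels * distance / focal_length_px / math.cos(math.radians(tilt_deg))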

IPC Classes

  • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness
  • G01B 11/22 - Measuring arrangements characterised by the use of optical techniques for measuring depth
  • G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
  • G06T 7/60 - Analysis of geometric attributes

31.

SYSTEM AND METHOD FOR FINDING AND CLASSIFYING PATTERNS IN AN IMAGE WITH A VISION SYSTEM

      
Application Number US2019035840
Publication Number 2019/236885
Status In Force
Filing Date 2019-06-06
Publication Date 2019-12-12
Owner COGNEX CORPORATION (USA)
Inventor
  • Wang, Lei
  • Anand, Vivek
  • Jacobson, Lowell, D.
  • Li, David, Y.

Abstract

This invention provides a system and method for finding patterns in images that incorporates neural net classifiers. A pattern finding tool is coupled with a classifier that can be run before or after the tool to produce labeled pattern results with sub-pixel accuracy. In the case of a pattern finding tool that can detect multiple templates, its performance is improved when a neural net classifier informs the pattern finding tool to work only on a subset of the originally trained templates. Similarly, in the case of a pattern finding tool that initially detects a pattern, a neural network classifier can then determine whether it has found the correct pattern. The neural network can also reconstruct/clean up an imaged shape and/or eliminate pixels less relevant to the shape of interest, thereby reducing the search time as well as significantly increasing the chance of locking onto the correct shapes.

IPC Classes

  • G06K 9/20 - Image acquisition
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods

32.

HIGH-ACCURACY CALIBRATION SYSTEM AND METHOD

      
Application Number US2018027997
Publication Number 2018/195096
Status In Force
Filing Date 2018-04-17
Publication Date 2018-10-25
Owner COGNEX CORPORATION (USA)
Inventor
  • Li, David, Y.
  • Sun, Li

Abstract

This invention provides a calibration target with a calibration pattern on at least one surface. The relationships of locations of calibration features on the pattern are determined for the calibration target and stored for use during a calibration procedure by a calibrating vision system. Knowledge of the calibration target's feature relationships allows the calibrating vision system to image the calibration target in a single pose and rediscover each of the calibration features in a predetermined coordinate space. The calibrating vision system can then transform the relationships between features from the stored data into its local coordinate space. The locations can be encoded in a barcode that is applied to the target, provided in a separate encoded element, or obtained from an electronic data source. The target can include encoded information within the pattern defining a location of adjacent calibration features with respect to the overall geometry of the target.

IPC Classes

  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • H04N 13/246 - Calibration of cameras

33.

SYSTEM AND METHOD FOR 3D PROFILE DETERMINATION USING MODEL-BASED PEAK SELECTION

      
Application Number US2018024268
Publication Number 2018/183155
Status In Force
Filing Date 2018-03-26
Publication Date 2018-10-04
Owner COGNEX CORPORATION (USA)
Inventor
  • Li, David Y.
  • Sun, Li
  • Jacobson, Lowell D.
  • Wang, Lei

Abstract

This invention provides a system and method for selecting the correct profile from a range of peaks generated by analyzing a surface with multiple exposure levels applied at discrete intervals. The cloud of peak information is resolved by comparison to a model profile into a best candidate to represent an accurate representation of the object profile. Illustratively, a displacement sensor projects a line of illumination on the surface and receives reflected light at a sensor assembly at a set exposure level. A processor varies the exposure level setting in a plurality of discrete increments, and stores an image of the reflected light for each of the increments. A determination process combines the stored images and aligns the combined images with respect to a model image. Points from the combined images are selected based upon closeness to the model image to provide a candidate profile of the surface.
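
The selection step described above can be sketched as choosing, for each profile column, the candidate peak closest to the model profile; the data layout and the absolute-difference criterion are assumptions.

    def select_profile(candidate_peaks, model_profile):
        # candidate_peaks: {column: [peak_height, ...]} accumulated over exposures;
        # model_profile: {column: expected_height} from the trained model.
        profile = {}
        for col, peaks in candidate_peaks.items():
            if peaks and col in model_profile:
                profile[col] = min(peaks, key=lambda p: abs(p - model_profile[col]))
        return profile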

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06T 1/00 - General purpose image data processing
  • H04N 13/204 - Image signal generators using stereoscopic image cameras
  • H04N 5/235 - Circuitry for compensating for variation in the brightness of the object

34.

SYSTEM AND METHOD FOR REDUCED-SPECKLE LASER LINE GENERATION

      
Application Number US2018014562
Publication Number 2018/136818
Status In Force
Filing Date 2018-01-19
Publication Date 2018-07-26
Owner COGNEX CORPORATION (USA)
Inventor Filhaber, John F.

Abstract

An illumination apparatus for reducing speckle effect in light reflected off an illumination target (120) includes a laser (150); a linear diffuser (157) positioned in an optical path between an illumination target and the laser to diffuse collimated laser light (154) in a planar fan of diffused light (158, 159) that spreads in one dimension across at least a portion of the illumination target; and a beam deflector (153) to direct the collimated laser light incident on the beam deflector to sweep across different locations on the linear diffuser within an exposure time for illumination of the illumination target by the diffused light. The different locations span a distance across the linear diffuser that provides sufficient uncorrelated speckle patterns, at an image sensor (164), in light reflected from an intersection of the planar fan of light with the illumination target to add incoherently when imaged by the image sensor within the exposure time.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • G02B 5/02 - Diffusing elements; Afocal elements
  • G02B 27/48 - Laser speckle optics
  • G02B 26/10 - Scanning systems

35.

MACHINE VISION SYSTEM FOR CAPTURING A DIGITAL IMAGE OF A SPARSELY ILLUMINATED SCENE

      
Application Number US2017026866
Publication Number 2018/057063
Status In Force
Filing Date 2017-04-10
Publication Date 2018-03-29
Owner COGNEX CORPORATION (USA)
Inventor Mcgarry, John

Abstract

A method includes producing two or more measurements by an image sensor having a pixel array, the measurements including information contained in a set of sign-bits, the producing of each measurement including (i) forming an image signal on the pixel array; and (ii) comparing accumulated pixel currents output from pixels of the pixel array in accordance with the image signal and a set of pixel sampling patterns to produce the set of sign-bits of the measurement; buffering at least one of the measurements to form a buffered measurement; comparing information of the buffered measurement to information of the measurements to produce a differential measurement; and combining the differential measurement with information of the set of pixel sampling patterns to produce at least a portion of one or more digital images relating to one or more of the image signals formed on the pixel array.

IPC Classes

  • H04N 5/361 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to dark current
  • H04N 5/345 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels having been sampled or to be sampled by partially reading an SSIS array

36.

APPARATUS FOR PROJECTING A TIME-VARIABLE OPTICAL PATTERN ONTO AN OBJECT TO BE MEASURED IN THREE DIMENSIONS

      
Application Number EP2017065114
Publication Number 2017/220595
Status In Force
Filing Date 2017-06-20
Publication Date 2017-12-28
Owner
  • COGNEX CORPORATION (USA)
  • COGNEX ENSHAPE GMBH (Germany)
Inventor
  • Petersen, Jens
  • Schaffer, Martin
  • Harendt, Bastian

Abstract

The invention relates to an apparatus for projecting a time-variable optical pattern onto an object to be measured in three dimensions, comprising a holder for an optical pattern, a light source having an illumination optical unit and an imaging optical unit, wherein the optical pattern is secured as a slide on a displacement mechanism which displaces the optical pattern relative to the illumination optical unit and relative to the imaging optical unit, wherein the displacement mechanism effectuates a displacement of the optical pattern in a slide plane that is oriented perpendicular to the optical axis of the imaging optical unit.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements

37.

METHOD FOR THE THREE DIMENSIONAL MEASUREMENT OF MOVING OBJECTS DURING A KNOWN MOVEMENT

      
Application Number EP2017065118
Publication Number 2017/220598
Status In Force
Filing Date 2017-06-20
Publication Date 2017-12-28
Owner
  • COGNEX CORPORATION (USA)
  • COGNEX ENSHAPE GMBH (Germany)
Inventor Harendt, Bastian

Abstract

The invention relates to a method for the three-dimensional measurement of a moving object during a known movement, comprising the following method steps: projecting a pattern sequence consisting of N patterns onto the moving object, recording a first image sequence consisting of N images using a first camera, recording a second image sequence, synchronous to the first image sequence and consisting of N images, using a second camera, and identifying corresponding image points in the first sequence and in the second sequence. Trajectories of potential object points are computed from the known movement data, and the object positions determined therefrom are projected onto each image plane of the first and second cameras, so that the positions of corresponding image points are determined in advance as a first image point trajectory in the first camera and a second image point trajectory in the second camera. The image points along the previously determined image point trajectories are compared with one another and checked for correspondence and, in a concluding step, the moving object is measured three-dimensionally using triangulation from the corresponding image points.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • G06T 7/579 - Depth or shape recovery from multiple images from motion
  • G06T 7/593 - Depth or shape recovery from multiple images from stereo images

38.

METHOD FOR CALIBRATING AN IMAGE-CAPTURING SENSOR CONSISTING OF AT LEAST ONE SENSOR CAMERA, USING A TIME-CODED PATTERNED TARGET

      
Application Number EP2017065120
Publication Number 2017/220600
Status In Force
Filing Date 2017-06-20
Publication Date 2017-12-28
Owner
  • COGNEX CORPORATION (USA)
  • COGNEX ENSHAPE GMBH (Germany)
Inventor
  • Grosse, Marcus
  • Harendt, Bastian

Abstract

The invention relates to a method for calibrating an image-capturing sensor consisting of at least one sensor camera, using a time-coded patterned target, said time-coded patterned target being displayed on a flat-screen display. A sequence of patterns is displayed on the flat-screen display and is captured by the at least one sensor camera as a series of camera images. An association of the pixels of the time-coded patterned target with the respective pixels captured in the camera images is carried out for a fixed position of the flat-screen display in the surroundings, or for at least two different positions of the flat-screen display in the surroundings, the sensor being calibrated by means of corresponding pixels. A gamma-correction is carried out, in which, before the time-coded patterned target is displayed, a gamma curve of the flat-screen display is captured in a random position and/or in each position of the flat-screen display, together with the recording of the sequence of patterns by means of the sensor, and is corrected.

IPC Classes

  • G01C 25/00 - Manufacturing, calibrating, cleaning, or repairing instruments or devices referred to in the other groups of this subclass

39.

METHOD FOR REDUCING THE ERROR INDUCED IN PROJECTIVE IMAGE-SENSOR MEASUREMENTS BY PIXEL OUTPUT CONTROL SIGNALS

      
Application Number US2017034830
Publication Number 2017/205829
Status In Force
Filing Date 2017-05-26
Publication Date 2017-11-30
Owner COGNEX CORPORATION (USA)
Inventor Mcgarry, John

Abstract

An image sensor for forming projective measurements includes a pixel-array wherein each pixel is coupled with conductors of a pixel output control bus and with a pair of conductors of a pixel output bus. In certain pixels, the pattern of coupling to the pixel output bus is reversed, thereby beneficially de-correlating the image noise induced by pixel output control signals from vectors of the projective basis.

IPC Classes

  • H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
  • H04N 5/378 - Readout circuits, e.g. correlated double sampling [CDS] circuits, output amplifiers or A/D converters

40.

CALIBRATION FOR VISION SYSTEM

      
Application Number US2017032560
Publication Number 2017/197369
Status In Force
Filing Date 2017-05-12
Publication Date 2017-11-16
Owner COGNEX CORPORATION (USA)
Inventor
  • Mcgarry, John
  • Jacobson, Lowell
  • Wang, Lei
  • Liu, Gang

Abstract

A vision system capable of performing run-time 3D calibration includes a mount configured to hold an object, the mount including a 3D calibration structure; a camera; a motion stage coupled with the mount or the camera; and a computing device configured to perform operations including: acquiring images from the camera when the mount is in respective predetermined orientations relative to the camera, each of the acquired images including a representation of at least a portion of the object and at least a portion of the 3D calibration structure that are concurrently in a field of view of the camera; performing at least an adjustment of a 3D calibration for each of the acquired images based on information relating to the 3D calibration structure as imaged in the acquired images; and determining 3D positions, dimensions or both of one or more features of the object.

IPC Classes

  • G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
  • G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
  • G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness
  • G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
  • H04N 13/02 - Picture signal generators
  • G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
  • G06T 7/50 - Depth or shape recovery

41.

JOINT OF MATED COMPONENTS

      
Application Number US2017028447
Publication Number 2017/184781
Status In Force
Filing Date 2017-04-19
Publication Date 2017-10-26
Owner COGNEX CORPORATION (USA)
Inventor
  • Townley-Smith, Paul Andrew
  • Mcgarry, John

Abstract

Joints are described used for mating two components of an apparatus that have been precisely aligned with respect to each other, e.g., based on a six degrees of freedom alignment procedure. For example, the precisely aligned components can be optical components that are part of an optical apparatus with highly sensitive mechanical tolerances.

IPC Classes

  • G02B 7/00 - Mountings, adjusting means, or light-tight connections, for optical elements
  • G02B 7/02 - Mountings, adjusting means, or light-tight connections, for optical elements for lenses
  • G02B 27/62 - Optical apparatus specially adapted for adjusting optical elements during the assembly of optical systems
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

42.

MACHINE VISION SYSTEM FOR FORMING A ONE DIMENSIONAL DIGITAL REPRESENTATION OF A LOW INFORMATION CONTENT SCENE

      
Application Number US2017013328
Publication Number 2017/123863
Status In Force
Filing Date 2017-01-13
Publication Date 2017-07-20
Owner COGNEX CORPORATION (USA)
Inventor Mcgarry, John

Abstract

A machine vision system forms a one-dimensional digital representation of a low-information-content scene, e.g., a scene that is sparsely illuminated by an illumination plane; the one-dimensional digital representation is a projection formed with respect to columns of a rectangular pixel array of the machine vision system.

IPC Classes

  • H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]
  • H04N 5/357 - Noise processing, e.g. detecting, correcting, reducing or removing noise

43.

AIR FLOW MECHANISM FOR IMAGE CAPTURE AND VISION SYSTEMS

      
Application Number US2014050722
Publication Number 2015/137994
Status In Force
Filing Date 2014-08-12
Publication Date 2015-09-17
Owner COGNEX CORPORATION (USA)
Inventor Koelmel, Volker

Abstract

This invention provides a mechanism for clearing debris and vapors from the region around the optical axis (OA) of a vision system (100) that employs a directed airflow (AF) in the region. The airflow (AF) is guided by an air knife (170) that surrounds a viewing gap placed in front of the camera optics (140). The air knife (170) delivers airflow (AF) in a manner that takes advantage of the Coanda effect to generate an airflow (AF) that prevents infiltration of debris and contaminants into the optical path. Illustratively, the air knife (170) defines a geometry that effectively multiplies the delivered airflow approximately fifty times (twenty-five times on each of two air-knife sides) that of the supplied compressed air. This provides an extremely strong air curtain along the scan direction that essentially blocks infiltration of environmental contamination to the optics (140) of the camera (110).

IPC Classes  ?

  • G01N 21/15 - Preventing contamination of the components of the optical system or obstruction of the light path
  • G01N 21/88 - Investigating the presence of flaws, defects or contamination

44.

CONFIGURABLE IMAGE TRIGGER FOR A VISION SYSTEM AND METHOD FOR USING THE SAME

      
Application Number US2012070298
Publication Number 2013/096282
Status In Force
Filing Date 2012-12-18
Publication Date 2013-06-27
Owner COGNEX CORPORATION (USA)
Inventor Mahuna, Tyson

Abstract

This invention provides a trigger for a vision system that can be set using a user interface that allows the straightforward variation of a plurality of exposed trigger parameters. Illustratively, the vision system includes a triggering mode in which the system keeps acquiring images of a field of view with respect to objects in relative motion. The system runs user-configurable "trigger logic". When the trigger logic succeeds/passes, the current image or a newly acquired image is then transmitted to the main inspection logic for processing. The trigger logic can be readily configured by a user operating an interface, which can also be used to configure the main inspection process, to trigger the vision system by tools such as presence-absence, edge finding, barcode finding, pattern matching, image thresholding, or any arbitrary combination of tools exposed by the vision system in the interface.
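
The "trigger logic" described here is essentially a user-selected combination of vision tools whose combined pass/fail result gates the main inspection. The following is a minimal sketch of that idea in Python; the tool predicate, the combination modes, and the acquire_image/inspect callables are illustrative assumptions, not the patented implementation.

    # Minimal sketch of user-configurable trigger logic composed from vision tools.
    # The tool predicates and acquire_image()/inspect() are hypothetical placeholders.
    from typing import Callable, List
    import numpy as np

    TriggerTool = Callable[[np.ndarray], bool]

    def presence_absence(threshold: float = 0.2) -> TriggerTool:
        """Pass when the mean intensity suggests an object is present (toy heuristic)."""
        return lambda img: img.mean() > threshold * img.max()

    def make_trigger(tools: List[TriggerTool], mode: str = "all") -> TriggerTool:
        """Combine tool results; 'all' requires every tool to pass, 'any' requires one."""
        combine = all if mode == "all" else any
        return lambda img: combine(tool(img) for tool in tools)

    def run(acquire_image: Callable[[], np.ndarray], trigger: TriggerTool, inspect: Callable) -> None:
        """Keep acquiring; hand an image to the main inspection logic only when the trigger passes."""
        while True:
            img = acquire_image()
            if trigger(img):
                inspect(img)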

IPC Classes  ?

  • G01N 21/88 - Investigating the presence of flaws, defects or contamination
  • G06T 1/00 - General purpose image data processing

45.

METHODS AND APPARATUS FOR ONE-DIMENSIONAL SIGNAL EXTRACTION

      
Application Number US2012070758
Publication Number 2013/096526
Status In Force
Filing Date 2012-12-20
Publication Date 2013-06-27
Owner COGNEX CORPORATION (USA)
Inventor
  • Silver, William, M.
  • Bachelder, Ivan A.

Abstract

Methods and apparatus are disclosed for extracting a one-dimensional digital signal from a two-dimensional digital image along a projection line. Disclosed embodiments provide an image memory in which is stored the digital image, a working memory, a direct memory access controller, a table memory that holds a plurality of transfer templates, and a processor. The processor selects a transfer template from the table memory responsive to an orientation of the projection line, computes a customized set of transfer parameters from the selected transfer template and parameters of the projection line, transmits the transfer parameters to the direct memory access controller, commands the direct memory access controller to transfer data from the image memory to the working memory as specified by the transfer parameters, and computes the one-dimensional digital signal using at least a portion of the data transferred by the direct memory access controller into the working memory.
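
As an illustration of the underlying operation (a one-dimensional projection of a two-dimensional image along a projection line), the sketch below samples the image along the line and averages across its width using NumPy/SciPy. It deliberately ignores the DMA transfer templates and working-memory arrangement that are the subject of the disclosure; the function and parameter names are assumptions.

    # Minimal sketch: extract a 1D signal along a projection line by bilinear sampling
    # and averaging across the line's width. Illustrative only.
    import numpy as np
    from scipy.ndimage import map_coordinates

    def extract_1d_signal(image, p0, p1, width=5, samples_per_pixel=1.0):
        """Average pixel values across `width` parallel lines from p0 to p1 (row, col)."""
        p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
        length = np.linalg.norm(p1 - p0)
        n = max(int(round(length * samples_per_pixel)), 2)
        t = np.linspace(0.0, 1.0, n)
        along = p0[None, :] + t[:, None] * (p1 - p0)                # points on the line
        normal = np.array([-(p1 - p0)[1], (p1 - p0)[0]]) / length   # unit normal to the line
        offsets = np.arange(width) - (width - 1) / 2.0
        signal = np.zeros(n)
        for off in offsets:                                         # average across the width
            pts = along + off * normal
            signal += map_coordinates(image, [pts[:, 0], pts[:, 1]], order=1, mode="nearest")
        return signal / width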

IPC Classes  ?

  • G06K 9/50 - Extraction of features or characteristics of the image by analysing segments intersecting the pattern
  • G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • G06T 11/20 - Drawing from basic elements, e.g. lines or circles

46.

SYSTEM AND METHOD FOR CONTROLLING ILLUMINATION IN A VISION SYSTEM

      
Application Number US2012065033
Publication Number 2013/081828
Status In Force
Filing Date 2012-11-14
Publication Date 2013-06-06
Owner COGNEX CORPORATION (USA)
Inventor Burrell, Paul

Abstract

This invention provides a system and method for enabling control of an illuminator having predetermined operating parameters by a vision system processor/core based upon stored information regarding parameters that are integrated with the illuminator. The parameters are retrieved by the processor, and are used to control the operation of the illuminator and/or the camera during image acquisition. In an embodiment, the stored parameters are a discrete numerical or other value that corresponds to the illuminator type. The discrete value maps to a corresponding value in a look-up table/database associated with the camera that contains parameter sets associated with each of a plurality of values in the database. The data associated with the discrete value in the camera contains the necessary parameters or settings for that illuminator type. In other embodiments, some or all of the actual parameter information can be stored with the illuminator and retrieved by the camera processor.
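
A minimal sketch of the look-up idea follows: a discrete type code retrieved from the illuminator selects a parameter set stored on the camera side. The type codes, field names, and values here are invented for illustration only.

    # Minimal sketch: a stored illuminator type code maps to camera-side settings.
    # All codes and parameter values are illustrative assumptions.
    ILLUMINATOR_PARAMS = {
        1: {"name": "red ring light",  "max_current_ma": 500, "strobe_us": 200, "gain": 1.0},
        2: {"name": "white bar light", "max_current_ma": 800, "strobe_us": 120, "gain": 1.4},
        3: {"name": "IR backlight",    "max_current_ma": 300, "strobe_us": 500, "gain": 2.0},
    }

    def configure_acquisition(type_code: int) -> dict:
        """Map the type code read back from the illuminator to an acquisition parameter set."""
        try:
            return ILLUMINATOR_PARAMS[type_code]
        except KeyError:
            raise ValueError(f"unknown illuminator type code: {type_code}")

    settings = configure_acquisition(2)   # e.g. the value retrieved from the illuminator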

IPC Classes  ?

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 5/225 - Television cameras
  • H04N 5/235 - Circuitry for compensating for variation in the brightness of the object

47.

AUTO-FOCUS MECHANISM FOR VISION SYSTEM CAMERA

      
Application Number US2012065030
Publication Number 2013/078045
Status In Force
Filing Date 2012-11-14
Publication Date 2013-05-30
Owner COGNEX CORPORATION (USA)
Inventor Gainer, Robert, L.

Abstract

This invention provides an electro-mechanical auto-focus function for a smaller-diameter lens type that nests, and is removably mounted, within the mounting space and thread arrangement of a larger-diameter lens base of a vision camera assembly housing. In an illustrative embodiment, the camera assembly includes a threaded base having a first diameter, which illustratively defines a C-mount base. A motor-driven gear-reduction drive assembly is mounted internally, and includes teeth that engage corresponding teeth on the outer diameter of a cylindrical focus gear, which has an internal lead screw. The focus gear is freely rotatable, and removably captured, within the threaded C-mount base in a nested, coaxial relationship. The internal lead screw of the focus gear threadingly engages the external threads of a coaxial lens holder. This converts the drive gear rotation into linear/axial lens holder motion. The lens holder includes anti-rotation stops, which allow its linear/axial movement but restrain any rotational motion.

IPC Classes  ?

  • G03B 3/10 - Power-operated focusing
  • G03B 17/14 - Bodies with means for supporting objectives, supplementary lenses, filters, masks, or turrets interchangeably
  • G03B 17/56 - Accessories

48.

ENCRYPTION AUTHENTICATION OF DATA TRANSMITTED FROM MACHINE VISION TOOLS

      
Application Number US2012054857
Publication Number 2013/040029
Status In Force
Filing Date 2012-09-12
Publication Date 2013-03-21
Owner COGNEX CORPORATION (USA)
Inventor Scherer, Timothy

Abstract

The technology provides, in some aspects, methods and systems for securely transmitting data using a machine vision system (e.g., within a pharmaceutical facility). Thus, for example, in one aspect, the technology provides a method that includes the steps of establishing a communications link between a machine vision processor and a remote digital data processor (e.g., a database server, personal computer, etc.); encrypting, on the machine vision processor, (i) at least one network packet containing machine vision data, and (ii) at least one network packet containing non-machine vision data; and sending to the remote digital data processor the encrypted network packets from the machine vision processor.

IPC Classes  ?

  • H04L 29/06 - Communication control; Communication processing characterised by a protocol

49.

MASTER AND SLAVE MACHINE VISION SYSTEM

      
Application Number US2012054858
Publication Number 2013/040030
Status In Force
Filing Date 2012-09-12
Publication Date 2013-03-21
Owner COGNEX CORPORATION (USA)
Inventor
  • Martinicky, Brian
  • Wu, Scott

Abstract

The technology provides, in some aspects, methods and systems for triggering a master machine vision processor and a slave machine vision processor in a multi-camera machine vision system. Thus, for example, in one aspect, the technology provides a method that includes the steps of establishing a communications link between a slave machine vision processor and a master machine vision processor; receiving on the slave machine vision processor a data message from the master machine vision processor; and triggering the slave machine vision processor to perform a machine vision function, the triggering occurring at a frequency based upon the data message, wherein at least one triggering of the slave machine vision processor occurs independent of the master machine vision processor.

IPC Classes  ?

  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining

50.

DETERMINING THE UNIQUENESS OF A MODEL FOR MACHINE VISION

      
Application Number US2011066883
Publication Number 2012/092132
Status In Force
Filing Date 2011-12-22
Publication Date 2012-07-05
Owner COGNEX CORPORATION (USA)
Inventor
  • Wang, Xiaoguang
  • Jacobson, Lowell

Abstract

Described are methods and apparatuses, including computer program products, for determining model uniqueness with a quality metric of a model of an object in a machine vision application. Determining uniqueness involves receiving a training image, generating a model of an object based on the training image, generating a modified training image based on the training image, determining a set of poses that represent possible instances of the model in the modified training image, and computing a quality metric of the model based on an evaluation of the set of poses with respect to the modified training image.
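
The sketch below illustrates one plausible way to turn the described evaluation into a number: candidate poses of the model are found in a modified copy of the training image, and uniqueness is scored from the gap between the best and second-best match. The pose finder and the perturbation are injected placeholders, and the scoring formula is an assumption rather than the patented metric.

    # Minimal sketch of a uniqueness quality metric based on candidate poses found
    # in a modified copy of the training image. Illustrative only.
    from typing import Callable, List, Tuple
    import numpy as np

    Pose = Tuple[float, float, float]      # (x, y, angle) -- illustrative pose representation

    def uniqueness_metric(
        training_image: np.ndarray,
        perturb: Callable[[np.ndarray], np.ndarray],
        find_candidate_poses: Callable[[np.ndarray], List[Tuple[Pose, float]]],
    ) -> float:
        """Return 1.0 for a model matched exactly once, lower when look-alikes exist."""
        modified = perturb(training_image)                   # e.g. add noise, shift slightly
        scored = sorted(find_candidate_poses(modified), key=lambda ps: ps[1], reverse=True)
        if not scored:
            return 0.0                                       # model not found even once
        if len(scored) == 1:
            return 1.0
        best, second = scored[0][1], scored[1][1]
        return 1.0 - second / best if best > 0 else 0.0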

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

51.

MODEL-BASED POSE ESTIMATION USING A NON-PERSPECTIVE CAMERA

      
Application Number IB2011003044
Publication Number 2012/076979
Status In Force
Filing Date 2011-12-14
Publication Date 2012-06-14
Owner COGNEX CORPORATION (USA)
Inventor
  • Liu, Lifeng
  • Wallack, Aaron, S.
  • Marrion, Cyril, C., Jr.

Abstract

This invention provides a system and method for determining correspondence between camera assemblies in a 3D vision system so as to acquire contemporaneous images of a runtime object and determine the pose of the object, and in which at least one of the camera assemblies includes a non-perspective lens. The searched 2D object features of the acquired non-perspective image can be combined with the searched 2D object features in images of other camera assemblies, based on their trained object features, to generate a set of 3D image features and thereby determine a 3D pose. Also provided is a system and method for training and performing runtime 3D pose determination of an object using a plurality of camera assemblies in a 3D vision system. The cameras are arranged at different orientations with respect to a scene, so as to acquire contemporaneous images of an object, both at training and runtime.

IPC Classes  ?

52.

MOBILE OBJECT CONTROL SYSTEM AND PROGRAM, AND MOBILE OBJECT CONTROL METHOD

      
Application Number US2011041222
Publication Number 2011/163209
Status In Force
Filing Date 2011-06-21
Publication Date 2011-12-29
Owner COGNEX CORPORATION (USA)
Inventor Ikeda, Masaaki

Abstract

The problem addressed by the present invention is to enhance, in a mobile object control system that controls a mobile object on the basis of images captured by a plurality of image capturing units, the accuracy of the correspondence relationships between the respective image capture coordinate systems determined for the respective image capturing units. According to the present invention, a first rotational center position specification unit specifies a first rotational center position in a first image capture coordinate system corresponding to a first point on the basis of respective first reference guide marks included in respective first images. Moreover, a second rotational center position specification unit specifies a second rotational center position in a second image capture coordinate system corresponding to the first point on the basis of respective second reference guide marks included in respective second images. On the basis of the first rotational center position and the second rotational center position, a coordinate system correspondence relationship storage unit stores a coordinate system correspondence relationship that specifies the correspondence relationship between the first image capture coordinate system and the second image capture coordinate system.

IPC Classes  ?

  • G01B 11/26 - Measuring arrangements characterised by the use of optical techniques for testing the alignment of axes
  • G05B 19/00 - Programme-control systems
  • H05K 13/00 - Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components

53.

SYSTEM AND METHOD FOR PROCESSING IMAGE DATA RELATIVE TO A FOCUS OF ATTENTION WITHIN THE OVERALL IMAGE

      
Application Number US2011036458
Publication Number 2011/146337
Status In Force
Filing Date 2011-05-13
Publication Date 2011-11-24
Owner COGNEX CORPORATION (USA)
Inventor
  • Moed, Michael, C.
  • Mcgarry, E., John

Abstract

This invention provides a system and method for processing discrete image data within an overall set of acquired image data based upon a focus of attention within that image. The result of such processing is to operate upon a more limited subset of the overall image data to generate output values required by the vision system process. Such an output value can be a decoded ID or other alphanumeric data. The system and method are performed in a vision system having two processor groups, along with a data memory that is smaller in capacity than the amount of image data to be read out from the sensor array. The first processor group is a plurality of SIMD processors and at least one general purpose processor, co-located on the same die with the data memory. A data reduction function operates within the same clock cycle as data-readout from the sensor to generate a reduced data set that is stored in the on-die data memory. At least a portion of the overall, unreduced image data is concurrently (in the same clock cycle) transferred to the second processor while the first processor transmits at least one region indicator with respect to the reduced data set to the second processor. The region indicator represents at least one focus of attention for the second processor to operate upon.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06T 1/20 - Processor architectures; Processor configuration, e.g. pipelining

54.

DISTRIBUTED VISION SYSTEM WITH MULTI-PHASE SYNCHRONIZATION

      
Application Number US2010061533
Publication Number 2011/090660
Status In Force
Filing Date 2010-12-21
Publication Date 2011-07-28
Owner COGNEX CORPORATION (USA)
Inventor Mcclellan, James, R.

Abstract

This invention provides a system and method for synchronization of vision system inspection results produced by each of a plurality of processors that includes a first bank (that can be a "master" bank) containing a master vision system processor and at least one slave vision system processor. At least a second bank (that can be one of a plurality of "slave" banks) contains a master vision system processor and at least one slave vision system processor. Each vision system processor in each bank generates results from an image acquired and processed in a given inspection cycle. The inspection cycle can be based on an external trigger or other trigger signal, and it can enable some or all of the processors/banks to acquire and process images at a given time/cycle. In a given cycle, each of the multiple banks can be positioned to acquire an image of a respective region of a plurality of succeeding regions on a moving line. A synchronization process (a) generates a unique identifier and passes a trigger signal with the unique identifier, associated with the master processor in the first bank, to each of the slave processors in the master bank and to each of the master and slave processors in the second bank, and (b) receives, via the master processor of the second bank, consolidated results that bear the unique identifier and that are consolidated from the results of that bank. The process then (c) consolidates the results for transmission to a destination if the results are complete and the unique identifier of each of the results is the same.
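
A minimal sketch of the identifier-based consolidation step is shown below: a unique identifier is generated per inspection cycle, and results are forwarded only once every expected processor has reported under that identifier. Transport, bank topology, and message formats are omitted; all names are illustrative assumptions.

    # Minimal sketch of consolidating per-processor results keyed by a per-cycle
    # unique identifier. Illustrative only.
    import uuid
    from collections import defaultdict

    class CycleConsolidator:
        def __init__(self, expected_processors):
            self.expected = set(expected_processors)
            self.pending = defaultdict(dict)          # cycle_id -> {processor: result}

        def new_cycle(self) -> str:
            """Generate a unique identifier to tag this inspection cycle's trigger."""
            return uuid.uuid4().hex

        def add_result(self, cycle_id: str, processor: str, result):
            """Store one processor's result; return the full set once every processor reported."""
            self.pending[cycle_id][processor] = result
            if set(self.pending[cycle_id]) == self.expected:
                return self.pending.pop(cycle_id)     # complete, identifiers all match
            return None

    consolidator = CycleConsolidator(["bank1_master", "bank1_slave", "bank2_master"])
    cycle = consolidator.new_cycle()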

IPC Classes  ?

  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

55.

SWIPE SCANNER EMPLOYING A VISION SYSTEM

      
Application Number US2011020812
Publication Number 2011/085362
Status In Force
Filing Date 2011-01-11
Publication Date 2011-07-14
Owner COGNEX CORPORATION (USA)
Inventor Mcgarry, E., John

Abstract

This invention provides a point-of-sale scanning device that employs vision sensors and vision processing to decode symbology and matrices of information of objects, documents and other substrates as such objects are moved (swiped) through the field-of-view of the scanning device's window. The scanning device defines a form factor that conforms to that of a conventional laser-based point-of-sale scanning device using a housing having a plurality of mirrors, oriented generally at 45-degree angles with respect to the window's plane so as to fold the optical path, thereby allowing for an extended depth of field. The path is divided laterally so as to reach opposing lenses and image sensors, which face each other and are oriented along a lateral optical axis between sidewalls of the device. The sensors and lenses can be adapted to perform different parts of the overall vision system and/or code recognition process. The housing also provides illumination that fills the volume space. Illustratively, illumination is provided adjacent to the window in a ring having two rows for intermediate and long-range illumination of objects. Illumination of objects at or near the scanning window is provided by illuminators positioned along the sidewalls in a series of rows, these rows directed to avoid flooding the optical path.

IPC Classes  ?

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

56.

SYSTEM AND METHOD FOR RUNTIME DETERMINATION OF CAMERA MISCALIBRATION

      
Application Number US2010061989
Publication Number 2011/079258
Status In Force
Filing Date 2010-12-23
Publication Date 2011-06-30
Owner COGNEX CORPORATION (USA)
Inventor
  • Ye, Xiangyun
  • Li, David, Y.
  • Shivaram, Guruprasad
  • Michael, David, J.

Abstract

This invention provides a system and method for runtime determination (self-diagnosis) of camera miscalibration (accuracy), typically related to camera extrinsics, based on historical statistics of runtime alignment scores for objects acquired in the scene, which are defined based on matching of observed and expected image data of trained object models. This arrangement avoids a need to cease runtime operation of the vision system and/or stop the production line that is served by the vision system to diagnose if the system's camera(s) remain calibrated. Under the assumption that objects or features inspected by the vision system over time are substantially the same, the vision system accumulates statistics of part alignment results and stores intermediate results to be used as indicator of current system accuracy. For multi-camera vision systems, cross validation is illustratively employed to identify individual problematic cameras. The system and method allows for faster, less-expensive and more-straightforward diagnosis of vision system failures related to deteriorating camera calibration.
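
The following sketch illustrates the self-diagnosis idea in its simplest form: runtime alignment scores are accumulated and compared against a baseline gathered while the system was known to be calibrated, and a drop beyond a threshold flags possible miscalibration. The window size, drift test, and threshold are arbitrary assumptions, not the statistics of the disclosure.

    # Minimal sketch: flag possible miscalibration when recent alignment-score
    # statistics drift from a calibrated baseline. Illustrative only.
    from collections import deque
    import statistics

    class MiscalibrationMonitor:
        def __init__(self, baseline_scores, window=200, drop_threshold=0.1):
            self.baseline_mean = statistics.mean(baseline_scores)
            self.recent = deque(maxlen=window)
            self.drop_threshold = drop_threshold

        def add_score(self, alignment_score: float) -> bool:
            """Record one part's alignment score; return True if drift suggests miscalibration."""
            self.recent.append(alignment_score)
            if len(self.recent) < self.recent.maxlen:
                return False                              # not enough runtime history yet
            drift = self.baseline_mean - statistics.mean(self.recent)
            return drift > self.drop_threshold * self.baseline_mean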

IPC Classes  ?

57.

OBJECT CONTROL SYSTEM, OBJECT CONTROL METHOD AND PROGRAM, AND ROTATIONAL CENTER POSITION SPECIFICATION DEVICE

      
Application Number US2010059089
Publication Number 2011/071813
Status In Force
Filing Date 2010-12-06
Publication Date 2011-06-16
Owner COGNEX CORPORATION (USA)
Inventor Ikeda, Masaaki

Abstract

The problem addressed by the present invention is to provide an object control system that prevents shifting an object to a target position from requiring a long time, even if, for example, the installation position of an image capturing unit has deviated. According to the present invention, an object control system includes: a first image capturing unit that captures a first image including a first reference mark that specifies a first object line determined in advance with respect to an object; an angle acquisition unit that, on the basis of said first reference mark within said first image, acquires a first differential angle that specifies the angle between a first target object line, determined in advance with respect to said first image, and said first object line; and an object control unit that controls a rotation mechanism that rotates said object, on the basis of said first differential angle.
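
The first differential angle described above is, in essence, the signed angle between the detected object line and the target object line. A minimal sketch follows, assuming each line is given by two (x, y) points; the line representation and the rotate_stage call mentioned in the comment are hypothetical.

    # Minimal sketch: signed differential angle between object line and target line.
    import math

    def differential_angle(object_line, target_line):
        """Each line is ((x0, y0), (x1, y1)); returns the signed angle in degrees."""
        def angle(line):
            (x0, y0), (x1, y1) = line
            return math.atan2(y1 - y0, x1 - x0)
        diff = angle(target_line) - angle(object_line)
        return math.degrees(math.atan2(math.sin(diff), math.cos(diff)))   # wrap to (-180, 180]

    # The rotation mechanism would then be driven by this angle, e.g. rotate_stage(diff_deg).
    diff_deg = differential_angle(((0, 0), (10, 1)), ((0, 0), (10, 0)))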

IPC Classes  ?

  • H05K 13/00 - Apparatus or processes specially adapted for manufacturing or adjusting assemblages of electric components

58.

SYSTEM AND METHOD FOR ALIGNMENT AND INSPECTION OF BALL GRID ARRAY DEVICES

      
Application Number US2010002895
Publication Number 2011/056219
Status In Force
Filing Date 2010-11-04
Publication Date 2011-05-12
Owner COGNEX CORPORATION (USA)
Inventor
  • Wang, Xiaoguang
  • Wang, Lei

Abstract

A system and method for high-speed alignment and inspection of components, such as BGA devices, having non-uniform features is provided. During training time of a machine vision system, a small subset of alignment-significant blobs, along with a quantum of geometric analysis for picking granularity, is determined. Also, during training time, balls may be associated with groups, each of which may have its own set of parameters for inspection.

IPC Classes  ?

59.

SYSTEM AND METHOD FOR ACQUIRING A STILL IMAGE FROM A MOVING IMAGE

      
Application Number US2010048611
Publication Number 2011/032082
Status In Force
Filing Date 2010-09-13
Publication Date 2011-03-17
Owner COGNEX CORPORATION (USA)
Inventor
  • Mcgarry, John, E.
  • Silver, William, M.

Abstract

This invention provides a system and method to capture a moving image of a scene that can be more readily de-blurred as compared to images captured through other known methods such as coded exposure de-blurring (flutter shutter) operating on an equivalent exposure-time interval. Rather than stopping and starting the integration of light measurement during the exposure-time interval, photo-generated current is switched between multiple charge storage sites in accordance with a temporal switching pattern that optimizes the conditioning of the solution to the inverse blur transform. By switching the image intensity signal between storage sites, all of the light energy available during the exposure-time interval is transduced to electronic charge and captured to form a temporally decomposed representation of the moving image. As compared to related methods that discard approximately half of the image intensity signal available over an equivalent exposure-time interval, such a temporally decomposed image is a far more complete representation of the moving image and more effectively de-blurred using simple linear de-convolution techniques.

IPC Classes  ?

  • H04N 5/232 - Devices for controlling television cameras, e.g. remote control
  • H04N 5/335 - Transforming light or analogous information into electric information using solid-state image sensors [SSIS]

60.

SYSTEM AND METHOD FOR CAPTURING AND DETECTING BARCODES USING VISION ON CHIP PROCESSOR

      
Application Number US2010023734
Publication Number 2010/093680
Status In Force
Filing Date 2010-02-10
Publication Date 2010-08-19
Owner COGNEX CORPORATION (USA)
Inventor
  • Moed, Michael, C.
  • Akopyan, Mikhail

Abstract

This invention provides a system and method for capturing, detecting and extracting features of an ID, such as a 1D barcode, that employs an efficient processing system based upon a CPU-controlled vision system on a chip (VSoC) architecture, which illustratively provides a linear array processor (LAP) constructed with a single instruction multiple data (SIMD) architecture in which the pixels of each row of the pixel array are directed to individual processors in a similarly wide array. The pixel data are processed in a front end (FE) process that performs rough finding and tracking of regions of interest (ROIs) that potentially contain ID-like features. The ROI-finding process occurs in two parts so as to optimize the efficiency of the LAP in neighborhood operations — a row-processing step that occurs during image pixel readout from the pixel array and an image-processing step that occurs typically after readout occurs. The relative motion of the ID-containing ROI with respect to the pixel array is tracked and predicted. An optional back end (BE) process employs the predicted ROI to perform feature-extraction after image capture. The feature extraction derives candidate ID features that are verified by a verification step that confirms the ID, creates a refined ROI, angle of orientation and feature set. These are transmitted to a decoding processor or other device.

IPC Classes  ?

  • G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light

61.

SYSTEM AND METHOD FOR UTILIZING A LINEAR SENSOR

      
Application Number US2010000314
Publication Number 2010/090742
Status In Force
Filing Date 2010-02-04
Publication Date 2010-08-12
Owner COGNEX CORPORATION (USA)
Inventor
  • Tremblay, Robert, J.
  • Silver, William, M.
  • Moed, Michael, C.

Abstract

Systems and methods for utilizing linear optoelectronic sensors are provided. A plurality of linear sensors (525A, 525B) may be utilized to obtain velocity measurements of a web material (505) at two points. The acceleration of the web material (505) may be determined from the velocity measurements and a control signal issued to a servo (515) to maintain proper tension along the web material.
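
A minimal sketch of the control idea follows: velocity measurements from two linear sensors at different points along the web yield an acceleration (stretch/slack) estimate, which a simple proportional term converts into a servo correction. The gain, time base, and servo interface are illustrative assumptions.

    # Minimal sketch: turn two web-velocity measurements into a servo correction.
    # Illustrative only; gain and sign convention are assumptions.
    def servo_correction(v_upstream: float, v_downstream: float, dt: float, gain: float = 0.5) -> float:
        """Estimate web acceleration between the two measurement points and return a correction."""
        acceleration = (v_downstream - v_upstream) / dt   # stretching/slackening rate
        return -gain * acceleration                       # oppose the change to hold tension

    # Example: downstream speeding up relative to upstream -> negative correction to the servo.
    correction = servo_correction(v_upstream=1.00, v_downstream=1.03, dt=0.02)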

IPC Classes  ?

  • D21F 7/00 - Other details of machines for making continuous webs of paper
  • D21G 9/00 - Other accessories for paper-making machines
  • B65H 7/14 - Controlling article feeding, separating, pile-advancing, or associated apparatus, to take account of incorrect feeding, absence of articles, or presence of faulty articles by feelers or detectors by photoelectric feelers or detectors
  • B65H 23/04 - Registering, tensioning, smoothing, or guiding webs longitudinally
  • G01N 33/00 - Investigating or analysing materials by specific methods not covered by groups
  • G01S 5/16 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations using electromagnetic waves other than radio waves
  • G06T 7/00 - Image analysis

62.

SYSTEM AND METHOD FOR THREE-DIMENSIONAL ALIGNMENT OF OBJECTS USING MACHINE VISION

      
Application Number US2009066247
Publication Number 2010/077524
Status In Force
Filing Date 2009-12-01
Publication Date 2010-07-08
Owner COGNEX CORPORATION (USA)
Inventor
  • Marrion, Cyril, C.
  • Foster, Nigel, J.
  • Liu, Lifeng
  • Li, David, Y.
  • Shivaram, Guruprasad
  • Wallack, Aaron, S.
  • Ye, Xiangyun

Abstract

This invention provides a system and method for determining the three-dimensional alignment of a modeled object or scene. After calibration, a 3D (stereo) sensor system views the object to derive a runtime 3D representation of the scene containing the object. Rectified images from each stereo head are preprocessed to enhance their edge features. A stereo matching process is then performed on at least two (a pair) of the rectified preprocessed images at a time by locating a predetermined feature on a first image and then locating the same feature in the other image. 3D points are computed for each pair of cameras to derive a 3D point cloud. The 3D point cloud is generated by transforming the 3D points of each camera pair into the world 3D space from the world calibration. The amount of 3D data from the point cloud is reduced by extracting higher-level geometric shapes (HLGS), such as line segments. Found HLGS from runtime are corresponded to HLGS on the model to produce candidate 3D poses. A coarse scoring process prunes the number of poses. The remaining candidate poses are then subjected to a further more-refined scoring process. These surviving candidate poses are then verified by, for example, fitting found 3D or 2D points of the candidate poses to a larger set of corresponding three-dimensional or two-dimensional model points, whereby the closest match is the best refined three-dimensional pose.

IPC Classes  ?

  • G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

63.

METHOD AND SYSTEM FOR DYNAMIC FEATURE DETECTION

      
Application Number US2009002198
Publication Number 2009/126273
Status In Force
Filing Date 2009-04-08
Publication Date 2009-10-15
Owner COGNEX CORPORATION (USA)
Inventor Silver, William, M.

Abstract

Disclosed are methods and systems for dynamic feature detection of physical features of objects in the field of view of a sensor. Dynamic feature detection substantially reduces the effects of accidental alignment of physical features with the pixel grid of a digital image by using the relative motion of objects or material in and/or through the field of view to capture and process a plurality of images that correspond to a plurality of alignments. Estimates of the position, weight, and other attributes of a feature are based on an analysis of the appearance of the feature as it moves in the field of view and appears at a plurality of pixel grid alignments. The resulting reliability and accuracy is superior to prior art static feature detection systems and methods.

IPC Classes  ?

  • G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
  • G06T 7/00 - Image analysis
  • G06T 7/20 - Analysis of motion

64.

SYSTEM AND METHOD FOR PERFORMING MULTI-IMAGE TRAINING FOR PATTERN RECOGNITION AND REGISTRATION

      
Application Number US2008013823
Publication Number 2009/085173
Status In Force
Filing Date 2008-12-18
Publication Date 2009-07-09
Owner COGNEX CORPORATION (USA)
Inventor
  • Bogan, Nathaniel
  • Wang, Xiaoguang
  • Wallack, Aaron, S.

Abstract

A system and method for performing multi-image training for pattern recognition and registration is provided. A machine vision system first obtains N training images of the scene. Each of the N images is used as a baseline image and the remaining N-1 images are registered to the baseline. Features that represent a set of corresponding image features are added to the model. The feature to be added to the model may comprise an average of the features from each of the images in which the feature appears. The process continues until every feature that meets a threshold requirement is accounted for. The model that results from the present invention represents those stable features that are found in at least the threshold number of the N training images. The model may then be used to train an alignment/inspection tool with the set of features.
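
The stability criterion can be illustrated with a short sketch: after the other images are registered to the baseline, corresponding features are grouped, groups present in at least a threshold number of images are kept, and each kept feature is the average of its group. Correspondence is reduced here to a coarse position key, which is a simplification rather than the method of the disclosure.

    # Minimal sketch: keep only features that recur in at least `threshold` of the
    # registered training images, averaging the corresponding instances.
    from collections import defaultdict
    import numpy as np

    def build_model(feature_sets, threshold, grid=2.0):
        """feature_sets: one list of (x, y) features per registered training image."""
        groups = defaultdict(list)
        for features in feature_sets:
            for x, y in features:
                key = (round(x / grid), round(y / grid))      # crude correspondence by location
                groups[key].append((x, y))
        return [tuple(np.mean(pts, axis=0))                   # average the corresponding features
                for pts in groups.values() if len(pts) >= threshold]

    model = build_model([[(10.1, 5.0), (40.2, 7.9)],
                         [(9.9, 5.2), (40.0, 8.1)],
                         [(10.0, 4.8)]], threshold=2)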

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means

65.

VISION SENSORS, SYSTEMS, AND METHODS

      
Application Number US2008071993
Publication Number 2009/070354
Status In Force
Filing Date 2008-08-01
Publication Date 2009-06-04
Owner COGNEX CORPORATION (USA)
Inventor
  • Mcgarry, John, E.
  • Plummer, Piers, A., N.

Abstract

A single chip vision sensor of an embodiment includes a pixel array and one or more circuits. The one or more circuits are configured to search an image for one or more features using a model of the one or more features. A method of an embodiment in a single chip vision sensor includes obtaining an image based at least partially on sensed light, and searching the image for one or more features using a model of the one or more features. A system of an embodiment includes the single chip vision sensor and a device. The device is configured to receive one or more signals from the single chip vision sensor and to control an operation based at least partially on the one or more signals.

IPC Classes  ?

  • G06K 7/00 - Methods or arrangements for sensing record carriers

66.

SYSTEM AND METHOD FOR READING PATTERNS USING MULTIPLE IMAGE FRAMES

      
Application Number US2008083191
Publication Number 2009/064759
Status In Force
Filing Date 2008-11-12
Publication Date 2009-05-22
Owner COGNEX CORPORATION (USA)
Inventor Nadabar, Sateesha

Abstract

This invention provides a system and method for decoding symbology that contains a respective data set using multiple image frames (420, 422, 424) of the symbol, wherein at least some of those frames can have differing image parameters (for example orientation, lens zoom, aperture, etc.) so that combining the frames with an illustrative multiple image application (430) allows the most-readable portions of each frame to be stitched together. And unlike prior systems, which may select one 'best' image, the illustrative system and method allows this stitched image to form a complete, readable image of the underlying symbol (310). In an illustrative embodiment the system and method includes an imaging assembly that acquires multiple image frames of the symbol in which some of those image frames have discrete, differing image parameters from others of the frames. A processor, which is operatively connected to the imaging assembly, processes the plurality of acquired image frames of the symbol to decode predetermined code data from at least some of the plurality of image frames, and to combine the predetermined code data from the at least some of the plurality of image frames to define a decodable version of the data set represented by the symbol.

IPC Classes  ?

  • G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation

67.

CIRCUITS AND METHODS ALLOWING FOR PIXEL ARRAY EXPOSURE PATTERN CONTROL

      
Application Number US2008071914
Publication Number 2009/035785
Status In Force
Filing Date 2008-08-01
Publication Date 2009-03-19
Owner
  • COGNEX CORPORATION (USA)
  • INNOVACIONES MICROELECTRONICAS S.L. (Spain)
Inventor
  • Mcgarry, E. John
  • Dominguez-Castro, Rafael
  • Garcia, Alberto

Abstract

An image processing system includes an image sensor circuit. The image sensor circuit is configured to obtain an image using a type of shutter operation in which an exposure pattern of a pixel array is set according to exposure information that changes over time based at least partially on charge accumulated in at least a portion of the pixel array. An image sensor circuit includes a pixel array and one or more circuits. The one or more circuits are configured to update exposure information based at least partially on one or more signals output from the pixel array, and to control an exposure pattern of the pixel array based on the exposure information. A pixel circuit includes a first transistor connected between a photodiode and a sense node, and a second transistor connected between an exposure control signal line and a gate of the first transistor.

IPC Classes  ?

  • H04N 3/14 - Scanning details of television systems; Combination thereof with generation of supply voltages by means not exclusively optical-mechanical by means of electrically scanned solid-state devices

68.

SYSTEM AND METHOD FOR TRAFFIC SIGN RECOGNITION

      
Application Number US2008010722
Publication Number 2009/035697
Status In Force
Filing Date 2008-09-15
Publication Date 2009-03-19
Owner COGNEX CORPORATION (USA)
Inventor
  • Moed, Michael, C.
  • Bonnick, Thomas, W.

Abstract

This invention provides a vehicle-borne system and method for traffic sign recognition that provides greater accuracy and efficiency in the location and classification of various types of traffic signs by employing rotation- and scale-invariant (RSI)-based geometric pattern-matching on candidate traffic signs acquired by a vehicle-mounted forward-looking camera and applying one or more discrimination processes to the recognized sign candidates from the pattern-matching process to increase or decrease the confidence of the recognition. These discrimination processes include discrimination based upon sign color versus model sign color arrangements, discrimination based upon the pose of the sign candidate versus vehicle location and/or changes in the pose between image frames, and/or discrimination of the sign candidate versus stored models of fascia characteristics. The sign candidates that pass with high confidence are classified based upon the associated model data and the driver/vehicle is informed of their presence. In an illustrative embodiment, a preprocess step converts a color image of the sign candidates into a grayscale image in which the contrast between sign colors is appropriately enhanced to assist the pattern-matching process.

IPC Classes  ?

  • G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix

69.

METHOD AND SYSTEM FOR OPTOELECTRONIC DETECTION AND LOCATION OF OBJECTS

      
Application Number US2008007280
Publication Number 2008/156608
Status In Force
Filing Date 2008-06-11
Publication Date 2008-12-24
Owner COGNEX CORPORATION (USA)
Inventor Silver, William, M.

Abstract

Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.

IPC Classes  ?

70.

METHOD AND SYSTEM FOR OPTOELECTRONIC DETECTION AND LOCATION OF OBJECTS

      
Application Number US2008007302
Publication Number 2008/156616
Status In Force
Filing Date 2008-06-11
Publication Date 2008-12-24
Owner COGNEX CORPORATION (USA)
Inventor Silver, William, M.

Abstract

Disclosed are methods and systems for optoelectronic detection and location of moving objects. The disclosed methods and systems capture one-dimensional images of a field of view through which objects may be moving, make measurements in those images, select from among those measurements those that are likely to correspond to objects in the field of view, make decisions responsive to various characteristics of the objects, and produce signals that indicate those decisions. The disclosed methods and systems provide excellent object discrimination, electronic setting of a reference point, no latency, high repeatability, and other advantages that will be apparent to one of ordinary skill in the art.

IPC Classes  ?

  • G01V 8/00 - Prospecting or detecting by optical means

71.

SYSTEM AND METHOD FOR LOCATING A THREE-DIMENSIONAL OBJECT USING MACHINE VISION

      
Application Number US2008006535
Publication Number 2008/153721
Status In Force
Filing Date 2008-05-22
Publication Date 2008-12-18
Owner COGNEX CORPORATION (USA)
Inventor
  • Wallack, Aaron, S.
  • Michael, David, J.

Abstract

This invention provides a system and method for determining position of a viewed object in three dimensions by employing 2D machine vision processes on each of a plurality of planar faces of the object, and thereby refining the location of the object. First, a rough pose estimate of the object is derived. This rough pose estimate can be based upon predetermined pose data, or can be derived by acquiring a plurality of planar face poses of the object (using, for example, multiple cameras) and correlating the corners of the trained image pattern, which have known coordinates relative to the origin, to the acquired patterns. Once the rough pose is achieved, this is refined by defining the pose as a quaternion (a, b, c and d) for rotation and three variables (x, y, z) for translation and employing an iterative weighted, least squares error calculation to minimize the error between the edgelets of the trained model image and the acquired runtime edgelets. The overall, refined/optimized pose estimate incorporates data from each of the cameras' acquired images. Thereby, the estimate minimizes the total error between the edgelets of each camera's/view's trained model image and the associated camera's/view's acquired runtime edgelets. A final transformation of trained features relative to the runtime features is derived from the iterative error computation.
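
As a worked illustration of the quaternion-plus-translation parameterization and least-squares refinement, the sketch below refines a pose so that transformed model points match observed points. It substitutes a simple 3D point-to-point residual for the per-camera edgelet error described in the abstract, so it is a stand-in rather than the disclosed computation.

    # Minimal sketch: refine a pose parameterized as quaternion (a, b, c, d) plus
    # translation (x, y, z) by least squares on point-to-point residuals.
    import numpy as np
    from scipy.optimize import least_squares

    def quat_to_rot(q):
        a, b, c, d = q / np.linalg.norm(q)
        return np.array([
            [a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
            [2*(b*c + a*d),         a*a - b*b + c*c - d*d, 2*(c*d - a*b)],
            [2*(b*d - a*c),         2*(c*d + a*b),         a*a - b*b - c*c + d*d],
        ])

    def refine_pose(model_pts, observed_pts, pose0=None):
        """model_pts, observed_pts: (N, 3) arrays of corresponding 3D points."""
        pose0 = np.array([1, 0, 0, 0, 0, 0, 0.0]) if pose0 is None else np.asarray(pose0, float)

        def residuals(pose):
            R, t = quat_to_rot(pose[:4]), pose[4:]
            return ((model_pts @ R.T + t) - observed_pts).ravel()

        return least_squares(residuals, pose0).x           # refined (a, b, c, d, x, y, z)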

IPC Classes  ?

72.

HUMAN-MACHINE-INTERFACE AND METHOD FOR MANIPULATING DATA IN A MACHINE VISION SYSTEM

      
Application Number US2007025861
Publication Number 2008/085346
Status In Force
Filing Date 2007-12-19
Publication Date 2008-07-17
Owner COGNEX CORPORATION (USA)
Inventor
  • Tremblay, Robert
  • Phillips, Brian
  • Keating, John
  • Eames, Andrew
  • Whitman, Steven
  • Mirtich, Brian
  • Arbogast, Carol, M.

Abstract

This invention provides a Graphical User Interface (GUI) that operates in connection with a machine vision detector or other machine vision system, which provides a highly intuitive and industrial machine-like appearance and layout. The GUI includes a centralized image frame window surrounded by panes having buttons and specific interface components that the user employs in each step of a machine vision system set up and run procedure. One pane allows the user to view and manipulate a recorded filmstrip of image thumbnails taken in a sequence, and provides the filmstrip with specialized highlighting (colors or patterns) that indicate useful information about the underlying images. The system is set up and run using a sequential series of buttons or switches that are activated by the user in turn to perform each of the steps needed to connect to a vision system, train the system to recognize or detect objects/parts, configure the logic that is used to handle recognition/detection signals, set up system outputs from the system based upon the logical results, and finally, run the programmed system in real time. The programming of logic is performed using a programming window that includes a ladder logic arrangement. A thumbnail window is provided on the programming window in which an image from a filmstrip is displayed, focusing upon the locations of the image (and underlying viewed object/part) in which the selected contact element is provided.

IPC Classes  ?

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06T 7/00 - Image analysis

73.

METHOD AND APPARATUS FOR SEMICONDUCTOR WAFER ALIGNMENT

      
Application Number US2007018752
Publication Number 2008/024476
Status In Force
Filing Date 2007-08-23
Publication Date 2008-02-28
Owner COGNEX CORPORATION (USA)
Inventor
  • Michael, David
  • Clark, James
  • Liu, Gang

Abstract

The invention provides, in some aspects, a wafer alignment system comprising an image acquisition device, an illumination source, a rotatable wafer platform, and an image processor that includes functionality for mapping coordinates in an image of an article (such as a wafer) on the platform to a 'world' frame of reference at each of a plurality of angles of rotation of the platform.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

74.

AUTOMATICALLY DETERMINING MACHINE VISION TOOL PARAMETERS

      
Application Number US2007007759
Publication Number 2007/126947
Status In Force
Filing Date 2007-03-28
Publication Date 2007-11-08
Owner COGNEX CORPORATION (USA)
Inventor
  • Wallack, Aaron
  • Michael, David

Abstract

A method for automatically determining machine vision tool parameters is presented, including: marking to indicate a desired image result for each image of a plurality of images; selecting a combination of machine vision tool parameters, and running the machine vision tool on the plurality of images using the combination of parameters to provide a computed image result for each image of the plurality of images, each computed image result including a plurality of computed measures; comparing each desired image result with a corresponding computed image result to provide a comparison result vector associated with the combination of machine vision tool parameters, then comparing the comparison result vector associated with the combination of machine vision tool parameters to a previously computed comparison result vector associated with a previous combination of machine vision tool parameters using a result comparison heuristic to determine which combination of machine vision tool parameters is best overall.
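
A minimal sketch of the search loop follows: every parameter combination in a grid is run on the marked images, each computed result is compared with the desired result, and the combination whose comparison vector is best under a simple heuristic is kept. The single-valued tool output and the tie-breaking heuristic are illustrative simplifications, not the disclosed heuristic.

    # Minimal sketch: grid search over vision-tool parameters, keeping the combination
    # whose comparison vector scores best. Illustrative only.
    import itertools

    def best_parameters(run_tool, images, desired_results, param_grid, tol=1.0):
        """run_tool(image, **params) is assumed to return one scalar measure per image."""
        best = None
        for combo in itertools.product(*param_grid.values()):
            params = dict(zip(param_grid.keys(), combo))
            errors = [abs(run_tool(img, **params) - want)       # one comparison per image
                      for img, want in zip(images, desired_results)]
            result_vector = (sum(e <= tol for e in errors), -sum(errors))
            if best is None or result_vector > best[0]:          # simple comparison heuristic
                best = (result_vector, params)
        return best[1]                                            # best overall combination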

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

75.

VIDEO TRAFFIC MONITORING AND SIGNALING APPARATUS

      
Application Number US2007007662
Publication Number 2007/126874
Status In Force
Filing Date 2007-03-27
Publication Date 2007-11-08
Owner COGNEX CORPORATION (USA)
Inventor
  • Schatz, David
  • Shillman, Robert

Abstract

A traffic signal head having a signal lamp or signal ball with an embedded video monitoring system can be provided to perform vehicle detection to inform an intelligent traffic control system. Video monitoring of traffic lanes facing the signal head can be analyzed by such a system to emulate inductive loop signals that are input signals to traffic control systems.

IPC Classes  ?

  • G08G 1/08 - Controlling traffic signals according to detected number or speed of vehicles

76.

METHODS AND APPARATUS FOR PRACTICAL 3D VISION SYSTEM

      
Application Number US2006039324
Publication Number 2007/044629
Status In Force
Filing Date 2006-10-06
Publication Date 2007-04-19
Owner COGNEX CORPORATION (USA)
Inventor
  • Wallack, Aaron
  • Michael, David

Abstract

The invention provides inter alia methods and apparatus for determining the pose, e.g., position along x-, y- and z-axes, pitch, roll and yaw (or one or more characteristics of that pose) of an object in three dimensions by triangulation of data gleaned from multiple images of the object. Thus, for example, in one aspect, the invention provides a method for 3D machine vision in which, during a calibration step, multiple cameras disposed to acquire images of the object from different respective viewpoints are calibrated to discern a mapping function that identifies rays in 3D space emanating from each respective camera's lens that correspond to pixel locations in that camera's field of view. In a training step, functionality associated with the cameras is trained to recognize expected patterns in images to be acquired of the object. A runtime step triangulates locations in 3D space of one or more of those patterns from pixel-wise positions of those patterns in images of the object and from the mappings discerned during the calibration step.

IPC Classes  ?

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

77.

METHODS AND APPARATUS FOR READING BAR CODE IDENTIFICATIONS

      
Application Number US2005030785
Publication Number 2006/026594
Status In Force
Filing Date 2005-08-30
Publication Date 2006-03-09
Owner COGNEX CORPORATION (USA)
Inventor Nadabar, Sateesh, G.

Abstract

The invention provides methods and apparatus for analysis of images of two-dimensional (2D) bar codes (12) in which a model that has proven successful in decoding a prior 2D image of a 2D bar code (12) is utilized to speed analysis of images of subsequent 2D bar codes (12). In its various aspects, the invention can be used in analyzing conventional 2D bar codes (12), e.g., those complying with Maxicode and DataMatrix standards, as well as stacked linear bar codes, e.g., those utilizing the Codablock symbology. Bar code readers (16), digital data processing apparatus and other devices according to the invention can be used, by way of non-limiting example, to decode bar codes (12) on damaged labels, as well as those screened, etched, peened or otherwise formed on manufactured articles (e.g., from semiconductors to airplane wings). In addition to making bar code reading possible under those conditions, devices utilizing such methods can speed bar code analysis in applications where multiple bar codes of like type are read in succession and/or are read under like circumstances, e.g., on the factory floor, at point-of-sale locations, in parcel delivery and so forth. Such devices can also speed and/or make possible bar code analysis in applications where multiple bar codes are read from a single article (14), e.g., as in the case of a multiply-encoded airplane propeller or other milled parts. The invention also provides methods and apparatus for optical character recognition and other image-based analysis paralleling the above.
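
The reuse idea can be sketched in a few lines: a model captured from the last successful decode (e.g., symbology, module size, polarity) is tried first, and a full search runs only when that quick attempt fails, refreshing the cached model. The decode_with_model and full_search_decode callables are hypothetical placeholders for the reader's decode stages.

    # Minimal sketch: try the model from the previous successful decode before
    # falling back to a full search. Illustrative only.
    class CachedDecoder:
        def __init__(self, decode_with_model, full_search_decode):
            self.decode_with_model = decode_with_model      # (image, model) -> result or None
            self.full_search_decode = full_search_decode    # image -> (result, model)
            self.model = None                               # e.g. symbology, module size, polarity

        def decode(self, image):
            if self.model is not None:
                result = self.decode_with_model(image, self.model)
                if result is not None:
                    return result                           # prior model still fits this symbol
            result, self.model = self.full_search_decode(image)   # slow path refreshes the model
            return result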

IPC Classes  ?

  • G06K 7/10 - Methods or arrangements for sensing record carriers by corpuscular radiation
  • G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code