Reality capture using cloud-based computer networks is provided. Techniques include receiving user input of an object to capture, the user input including a location, an accuracy category, and a size category of the object, and generating, in response to the user input, at least one option to capture the object. Techniques include, responsive to a user selecting the at least one option to capture the object, configuring a plurality of drones with a first setting for capturing at least a first portion of the object, and configuring a scanner with a second setting for capturing at least a second portion of the object. Techniques include causing the plurality of drones to capture the first portion of the object in response to the drones being initiated at the location, and causing the scanner to capture the second portion of the object in response to the scanner being initiated at the location.
A system and method for detecting construction site defects and hazards using artificial intelligence (AI) is provided. The system includes a movable base unit, a coordinate measurement scanner, a vision-based sensor, and one or more processors. The one or more processors perform operations that include generating a two-dimensional (2D) map of the environment based at least in part on output from the coordinate measurement scanner, applying image recognition to video stream data from the vision-based sensor to identify and label a defect or hazard in the video stream data, correlating a location of the defect or hazard in the video stream data with a location in the 2D map, and recording the location of the defect or hazard in the 2D map.
Examples described herein provide a method that includes receiving three-dimensional (3D) data associated with an environment. The method further includes generating a graphical representation based at least in part on the 3D data. The method further includes filling in a gap in the graphical representation using downsampled frame buffer objects.
An example method for feature extraction includes receiving a selection of a point from a plurality of points, the plurality of points representing an object. The method further includes identifying a feature of interest for the object based at least in part on the point. The method further includes performing edge extraction on the feature of interest. The method further includes performing pre-processing on results of the edge extraction. The method further includes classifying the feature of interest based at least in part on results of the pre-processing. The method further includes constructing, based at least in part on results of the classifying, a geometric primitive or mathematical function that has a best fit to a set of points from the plurality of points associated with the feature of interest. The method further includes generating a graphical representation of the feature of interest using the geometric primitive or mathematical function.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
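The feature-extraction abstract above leaves the fitting step open; a common concrete choice for the "geometric primitive or mathematical function that has a best fit" is an algebraic least-squares circle fit. A minimal sketch under that assumption (the function name and the 2D-circle choice are illustrative, not taken from the abstract):

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) fit of a circle x^2 + y^2 + D*x + E*y + F = 0
    to a set of 2D points; returns (center, radius)."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])  # design matrix for D, E, F
    b = -(x**2 + y**2)
    D, E, F = np.linalg.lstsq(A, b, rcond=None)[0]
    cx, cy = -D / 2.0, -E / 2.0
    r = np.sqrt(cx**2 + cy**2 - F)
    return (cx, cy), r
```

For line or plane primitives the same least-squares pattern applies with a different design matrix.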
5.
PHOTOSENSOR PROCESSING FOR IMPROVED LINE SCANNER PERFORMANCE
A method includes providing a measuring device having a projector, a camera with a photosensitive array, and at least one processor, projecting with the projector a line of light onto an object, capturing with the camera an image of the projected line of light on the object within a window subregion of the photosensitive array, and calculating with the at least one processor three-dimensional (3D) coordinates of points on the object based at least in part on the projected line of light and on the captured image.
G06T 7/70 - Determining position or orientation of objects or cameras
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
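The line-scanner method above computes 3D coordinates from the projected line and its captured image; a standard way to do this is to intersect each camera ray with the known plane of the projected laser line. A sketch assuming a pinhole camera model with the laser plane n·X = d expressed in the camera frame (all parameter names are illustrative, not from the abstract):

```python
import numpy as np

def triangulate_line_point(pixel, fx, fy, cx, cy, plane_n, plane_d):
    """Intersect the camera ray through `pixel` with the laser light plane
    n . X = d (camera frame) to recover a 3D point on the object."""
    u, v = pixel
    # Ray direction through the pixel for a pinhole camera, normalized to z=1.
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # Solve n . (t * ray) = d for the ray parameter t.
    t = plane_d / np.dot(plane_n, ray)
    return t * ray
```

In practice the window subregion of the photosensitive array limits which pixel rows need to be searched for the line peak before this intersection is applied.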
A 3D measuring instrument and method of operation are provided that include a registration camera and an autofocus camera. The method includes capturing with the registration camera a first registration image of a first plurality of points and a first image with the first camera with the instrument in a first pose. A plurality of three-dimensional (3D) coordinates of points are determined based on the first image. A second registration image of a second plurality of points is captured in a second pose and a focal length of the autofocus camera is adjusted. A second surface image is captured with the first camera having the adjusted focal length. A compensation parameter is determined based in part on the captured second surface image. The determined compensation parameter is stored.
A method and system of correcting a point cloud is provided. The method includes selecting a region within the point cloud. At least two objects within the region are identified. The at least two objects are realigned. At least a portion of the point cloud is aligned based at least in part on the realignment of the at least two objects.
A mobile three-dimensional (3D) measuring system includes a 3D measuring device configured to capture 3D data in a multi-level architecture, and an orientation sensor configured to estimate an altitude. One or more processing units coupled with the 3D measuring device and the orientation sensor perform a method that includes receiving a first portion of the 3D data captured by the 3D measuring device. The method further includes determining a level index based on the altitude. The level index indicates a level of the multi-level architecture at which the first portion is captured. The level index is associated with the first portion. Further, a map of the multi-level architecture is generated using the first portion, the generating comprises registering the first portion with a second portion of the 3D data responsive to the level index of the first portion being equal to the level index of the second portion.
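The level-index logic above can be sketched very simply. The abstract does not fix how the index is derived from altitude, so a roughly uniform floor height is assumed here (function names and the record layout are illustrative):

```python
def level_index(altitude_m, floor_height_m=3.0, ground_altitude_m=0.0):
    """Map an estimated altitude to a discrete level of a multi-level
    building, assuming a roughly uniform floor height (an assumption;
    the abstract leaves the mapping open)."""
    return int((altitude_m - ground_altitude_m) // floor_height_m)

def register_if_same_level(portion_a, portion_b):
    """Register two scan portions only when their level indices match,
    as in the generating step of the abstract."""
    return portion_a["level"] == portion_b["level"]
```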
According to some aspects of the invention, auxiliary axis measurement systems for determining three-dimensional coordinates of an object are provided as shown and described herein. According to some aspects of the invention, methods for operating auxiliary axis measurement systems for determining three-dimensional coordinates of an object are provided as shown and described herein.
G05B 19/401 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by control arrangements for measuring, e.g. calibration and initialisation, measuring workpiece for machining purposes
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
10.
SYSTEM AND METHOD OF SCANNING AN ENVIRONMENT AND GENERATING TWO DIMENSIONAL IMAGES OF THE ENVIRONMENT
A system and method for scanning an environment and generating an annotated 2D map is provided. The method includes acquiring, via a 2D scanner, a plurality of 2D coordinates on object surfaces in the environment, the 2D scanner having a light source and an image sensor, the image sensor being arranged to receive light reflected from the object points. A first 360° image is acquired at a first position of the environment, via a 360° camera having a plurality of cameras and a controller, the controller being operable to merge the images acquired by the plurality of cameras to generate an image having a 360° view, the 360° camera being movable from the first to a second position. A 2D map is generated based at least in part on the plurality of two-dimensional coordinates of points. The first 360° image is integrated with the 2D map.
A computer-implemented method is provided that includes retrieving at least one selected image from a plurality of aerial images of an environment, the at least one selected image comprising surface regions that are concurrently in a three-dimensional (3D) point cloud of the environment. The method further includes detecting areas of the surface regions in the at least one selected image, such that coordinates of the areas of the surface regions are extracted from the at least one selected image. The method further includes comparing the at least one selected image to the 3D point cloud to align common locations in both the at least one selected image and the 3D point cloud. The method further includes displaying an integration of a drawing of the coordinates of the areas of the surface regions in a representation of the 3D point cloud.
A computer-implemented method is provided that includes causing an aerial vehicle to scan an environment in a predesignated pattern, such that a first set of images are captured. The method further includes detecting an emergency scene in the first set of images of the environment. The method further includes determining locations at which the aerial vehicle is to capture a second set of images of the emergency scene in the environment. The method further includes causing the aerial vehicle to acquire the second set of images at the locations. The method further includes determining selected images of the second set of images focused on the emergency scene. The method further includes extracting the selected images from the second set of images, the selected images comprising a representation of the emergency scene.
A computer-implemented method is provided that includes detecting at least one reflective surface in at least one two-dimensional (2D) image of an environment. The method further includes generating bounding coordinates encompassing the at least one reflective surface in the 2D image. The method further includes projecting the bounding coordinates of the 2D image into a three-dimensional (3D) space of the environment. The method further includes identifying a reflection artifact encompassed by the bounding coordinates in the 3D space. The method further includes removing the reflection artifact identified in the bounding coordinates.
Examples described herein provide a computer-implemented method that includes receiving a video stream from a camera. The method further includes detecting, within the video stream, an object of interest using a first trained machine learning model. The method further includes, responsive to determining that a confidence score associated with the object of interest fails to satisfy a threshold, determining, using a second trained machine learning model, a direction to move the camera to cause the confidence score to satisfy the threshold. The method further includes presenting an indication of the direction to move the camera to cause the confidence score to satisfy the threshold.
A system and method for determining a distance is provided. The system includes a scanner that captures a scan-point by emitting light having a base frequency and at least one measurement frequency and receiving a reflection of the light. Processors determine the distance to the scan-point using a method that comprises: generating a signal in response to receiving the reflection of the light; determining a first distance to the scan-point based on a phase-shift of the signal and the measurement frequency; determining a second distance and a third distance based on a phase-shift of the signal determined using a Fourier transform at the measurement frequency on a pair of adjacent half-cycles; and determining a corrected second distance and a corrected third distance by compensating for an error in the second distance and the third distance determined from the Fourier transform on the pair of adjacent half-cycles.
G01S 7/4915 - Time delay measurement, e.g. operational details for pixel components; Phase measurement
G01S 17/36 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated with phase comparison between the received signal and the contemporaneously transmitted signal
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
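The phase-shift ranging described above rests on a standard relation: the modulated beam travels to the target and back, so a phase shift Δφ measured at modulation frequency f corresponds to a distance d = c·Δφ/(4πf). A sketch of just that conversion (the half-cycle Fourier correction in the abstract is not reproduced here):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def phase_to_distance(phase_shift_rad, freq_hz):
    """Distance from the phase shift of a modulated beam: the light covers
    the path twice (out and back), hence d = c * dphi / (4 * pi * f)."""
    return C * phase_shift_rad / (4.0 * math.pi * freq_hz)
```

The unambiguous range at a single measurement frequency is c/(2f), which is why a base frequency and additional measurement frequencies are combined.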
16.
CORRECTION OF CURRENT SCAN DATA USING PRE-EXISTING DATA
A system and method for measuring coordinate values of an environment is provided. The system includes a coordinate measurement scanner that includes a light source that steers a beam of light to illuminate object points in the environment, and an image sensor arranged to receive light reflected from the object points to determine coordinates of the object points in the environment. The system also includes one or more processors for performing a method that includes receiving a previously generated map of the environment and causing the scanner to measure a plurality of coordinate values as the scanner is moved through the environment, the coordinate values forming a point cloud. The plurality of coordinate values are registered with the previously generated map into a single frame of reference. A current map of the environment is generated based at least in part on the previously generated map and the point cloud.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/89 - Lidar systems, specially adapted for specific applications for mapping or imaging
A system and a method for removing artifacts from 3D coordinate data are provided. The system includes one or more processors and a 3D measuring device. The one or more processors are operable to receive training data and train the 3D measuring device to identify artifacts by analyzing the training data. The one or more processors are further operable to identify artifacts in live data based on the training. The one or more processors are further operable to generate clear scan data by filtering the artifacts from the live data and output the clear scan data.
A method is provided that includes recording a landmark at a first scan position of a scanner, the landmark based at least in part on a semantic feature of scan data captured by the scanner. The semantic feature is identified using line-segments of the scan data. The method further includes capturing, by the scanner while moving through the environment, additional scan data at a second scan position. The method further includes, responsive to the scanner returning to the first scan position associated with the landmark, computing a measurement error. The method further includes correcting, using the measurement error, at least a portion of the scan data or the additional scan data.
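The loop-closure correction above, where the measurement error computed on returning to the landmark is used to correct earlier scan data, is often implemented by distributing the error over the trajectory. A minimal sketch assuming a linear distribution over 2D scan positions (the abstract does not fix the correction scheme):

```python
def correct_drift(poses, loop_error):
    """Distribute the loop-closure error linearly over the trajectory:
    pose i receives the fraction i/(n-1) of the correction, so the final
    pose coincides with the revisited landmark position."""
    n = len(poses)
    return [
        (x - loop_error[0] * i / (n - 1), y - loop_error[1] * i / (n - 1))
        for i, (x, y) in enumerate(poses)
    ]
```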
Examples described herein provide a method that includes capturing, using a camera, a first image of an environment. The method further includes performing, by a processing system, a first positioning to establish a position of the first image in a layout of the environment. The method further includes detecting, by the processing system, a feature in the first image. The method further includes performing, by the processing system, a second positioning based at least in part on the feature to refine the position of the first image in the layout. The method further includes capturing, using the camera, a second image of the environment and automatically registering the second image to the layout. The method further includes generating a digital twin representation of the environment using the first image based at least in part on the refined position of the first image in the layout and using the second image.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A system includes one or more processors that are configured to compensate a measurement tool by performing a method. The method includes capturing first data using the measurement tool. The method further includes capturing second data using the measurement tool. The method further includes detecting a first natural feature in the first data. The method further includes computing a difference in positions of the first natural feature in the first data and the second data, respectively. The method further includes computing a compensation parameter to adjust the measurement tool based on the computed difference.
G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
21.
SEGMENTATION OF COMPUTED TOMOGRAPHY VOXEL DATA USING MACHINE LEARNING
Examples described herein provide a method that includes creating two-dimensional (2D) slices from a plurality of computed tomography (CT) voxel data sets. The method further includes adding artificial noise to the 2D slices to generate artificially noisy 2D slices. The method further includes creating patches from the 2D slices and the artificially noisy 2D slices. The method further includes training an autoencoder using the patches.
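The patch-preparation steps above (slice, add artificial noise, cut patches) can be sketched with plain array operations; Gaussian noise is used here as a stand-in, since the abstract does not specify the noise model, and the autoencoder training itself is omitted:

```python
import numpy as np

def make_noisy_patches(slice_2d, patch=8, sigma=0.1, seed=0):
    """Cut non-overlapping patches from a 2D CT slice and pair each one
    with an artificially noised copy, as training data for a denoising
    autoencoder (Gaussian noise is an assumed choice)."""
    rng = np.random.default_rng(seed)
    h, w = slice_2d.shape
    clean, noisy = [], []
    for r in range(0, h - patch + 1, patch):
        for c in range(0, w - patch + 1, patch):
            p = slice_2d[r:r + patch, c:c + patch]
            clean.append(p)
            noisy.append(p + rng.normal(0.0, sigma, p.shape))
    return np.stack(clean), np.stack(noisy)
```

The autoencoder is then trained to map each noisy patch back to its clean counterpart.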
A method for measuring gaps between material layers includes inserting a probe tip within a through-hole defined in a structural component. The probe tip is arranged at the end of a probe assembly attached to an articulated arm coordinate measuring machine (AACMM). The method further includes contacting the probe tip with a hole surface of the through-hole. The method further includes translating the probe tip along the hole surface in a direction parallel to an axis through the through-hole. The probe tip passes over a gap along the through-hole. The method further includes measuring a radial position of the probe tip during the translation along the hole surface and across the gap including a deflection of radial position of the probe tip as the probe tip crosses the gap. The method further includes calculating a gap size of the gap based on the deflection and a size of the probe tip.
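The final calculation above can be made concrete with a simple geometric model: a spherical tip of radius r that sinks radially by a deflection d while bridging the gap touches both gap edges, giving a gap width g = 2·sqrt(2rd − d²). A sketch under that assumed model (the abstract does not state the exact formula):

```python
import math

def gap_size(deflection, tip_radius):
    """Gap width from the maximum radial deflection of a spherical probe
    tip: chord geometry of the sphere resting on both gap edges gives
    g = 2 * sqrt(2*r*d - d^2), valid while d <= r."""
    d, r = deflection, tip_radius
    if not 0.0 <= d <= r:
        raise ValueError("deflection must be between 0 and the tip radius")
    return 2.0 * math.sqrt(2.0 * r * d - d * d)
```

At d = r the tip has sunk to its equator and the computed width equals the tip diameter, the largest gap this model can resolve.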
A system includes a three-dimensional (3D) scanner that captures a 3D point cloud corresponding to one or more objects in a surrounding environment. The system further includes a camera that captures a control image by capturing a plurality of images of the surrounding environment, and an auxiliary camera configured to capture an ultrawide-angle image of the surrounding environment. One or more processors of the system colorize the 3D point cloud using the ultrawide-angle image by mapping the ultrawide-angle image to the 3D point cloud. The system performs a limited system calibration before colorizing each 3D point cloud, and a periodic full system calibration before/after a plurality of 3D point clouds are colorized.
A method is provided that includes providing a database for storing meta-data that describes steps in a workflow and an order of the steps in the workflow. The meta-data includes, for each of the steps: a reference to an input data file for the step; a description of a transaction performed at the step; and a reference to an output data file generated by the step based at least in part on applying the transaction to the input data file. Data that includes meta-data for a step in the workflow is received and the data is stored in the database. A trace of the workflow is generated based at least in part on contents of the database. The generating is based on receiving a request from a requestor for the trace of the workflow. At least a subset of the trace is output to the requestor.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
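The trace generation above can be sketched as a backward walk over the stored meta-data, following each step's input reference from the requested output file back to the start of the workflow (the record layout with `input`, `transaction`, and `output` keys is illustrative):

```python
def build_trace(records, last_output):
    """Reconstruct the ordered list of transactions that produced
    `last_output` by chaining output -> input references backwards."""
    by_output = {rec["output"]: rec for rec in records}
    trace = []
    current = last_output
    while current in by_output:
        step = by_output[current]
        trace.append(step["transaction"])
        current = step["input"]  # follow the chain one step back
    return list(reversed(trace))  # earliest step first
```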
Examples described herein provide a method that includes receiving, from a camera, a first image captured at a first location of an environment. The method further includes receiving, by a three-dimensional (3D) coordinate measurement device, first 3D coordinate data captured at the first location of the environment. The method further includes receiving, from the camera, a second image captured at a second location of the environment. The method further includes detecting, by a processing system, first features of the first image and second features of the second image. The method further includes determining, by the processing system, whether a correspondence exists between the first image and the second image. The method further includes, responsive to determining that the correspondence exists between the first image and the second image, causing the 3D coordinate measurement device to capture, at the second location, second 3D coordinate data.
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/48 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
26.
CAPTURING THREE-DIMENSIONAL REPRESENTATION OF SURROUNDINGS USING MOBILE DEVICE
A mobile three-dimensional (3D) measuring system includes a 3D measuring device comprising a first sensor and a second sensor. The 3D measuring system further includes a computing system coupled with the 3D measuring device. A computing device is coupled with the computing system. The 3D measuring device continuously transmits first data from the first sensor, and second data from the second sensor to the computing system as it is moved in an environment. The computing system generates a 3D point cloud representing the environment. The computing system generates a 2D projection corresponding to the 3D point cloud. The computing device displays the 2D projection as a live feedback of a movement of the 3D measuring device.
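The 2D projection used as live feedback above can be sketched as a top-down orthographic histogram: drop the z coordinate and count points per grid cell (the cell size and occupancy-grid choice are assumptions, not from the abstract):

```python
import numpy as np

def project_top_down(points, cell=0.5):
    """Orthographic top-down projection of a 3D point cloud: bin x/y
    into an occupancy grid, a simple form of live 2D feedback."""
    pts = np.asarray(points, dtype=float)
    ij = np.floor(pts[:, :2] / cell).astype(int)
    ij -= ij.min(axis=0)  # shift so indices start at zero
    grid = np.zeros(ij.max(axis=0) + 1, dtype=int)
    np.add.at(grid, (ij[:, 0], ij[:, 1]), 1)  # count points per cell
    return grid
```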
A mobile 3D measuring system includes a 3D measuring device comprising a sensor that emits a plurality of scan lines in a field of view of the sensor. The 3D measuring system further includes a field of view manipulator coupled with the 3D measuring device, the field of view manipulator comprising a passive optic element that redirects a first scan line from the plurality of scan lines. The 3D measuring system further includes a computing system coupled with the 3D measuring device. The 3D measuring device continuously transmits captured data from the sensor to the computing system as the 3D measuring device is moved in an environment, the captured data being based on receiving reflections corresponding to the plurality of scan lines, including a reflection of the first scan line that is redirected. The computing system generates a 3D point cloud representing the environment based on the captured data.
A mobile three-dimensional (3D) measuring system includes a 3D measuring device and a support apparatus. The 3D measuring device is coupled to the support apparatus. The support apparatus includes a pole mount that includes a gimbal at the top of the pole mount, wherein the 3D measuring device is attached to the gimbal. The support apparatus further includes a counterweight at the bottom of the pole mount, the counterweight matching a weight of the 3D measuring device.
Examples described herein provide a method that includes communicatively connecting a camera to a processing system. The processing system includes a light detecting and ranging (LIDAR) sensor. The method further includes capturing, by the processing system, three-dimensional (3D) coordinate data of an environment using the LIDAR sensor while the processing system moves through the environment. The method further includes capturing, by the camera, a panoramic image of the environment. The method further includes associating the panoramic image of the environment with the 3D coordinate data of the environment to generate a dataset for the environment. The method further includes generating a digital twin representation of the environment using the dataset for the environment.
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G03B 37/04 - Panoramic or wide-screen photography; Photographing extended surfaces, e.g. for surveying; Photographing internal surfaces, e.g. of pipe with cameras or projectors providing touching or overlapping fields of view
Examples described herein provide a method that includes receiving a model corresponding to an assembly. The method further includes defining an object of interest in the model. The method further includes receiving a point cloud generated based on data obtained by scanning the assembly using a laser scanner. The method further includes aligning the point cloud to the model. The method further includes determining whether a component corresponding to the object of interest is located correctly relative to the assembly based at least in part on the point cloud aligned to the model. The method further includes, responsive to determining that the component is not located correctly, taking a corrective action.
A laser tracker system and method of operating the laser tracker system are provided. The method includes providing a mobile computing device coupled for communication to a computer network, and identifying, with the mobile computing device, at least one laser tracker device on the computer network, the at least one laser tracker device including a first laser tracker device. The mobile computing device is connected to the first laser tracker device to transmit signals therebetween via the computer network in response to a first input from a user. One or more control functions are performed on the first laser tracker device in response to one or more second inputs from the user, wherein at least one of the one or more control functions includes selecting with the mobile computing device a retroreflective target and locking a light beam of the first laser tracker device on the retroreflective target.
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/42 - Simultaneous measurement of distance and other coordinates
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G01C 15/00 - Surveying instruments or accessories not provided for in groups G01C 1/00 - G01C 13/00
G01S 7/00 - RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00
G01T 7/00 - MEASUREMENT OF NUCLEAR OR X-RADIATION - Details of radiation-measuring instruments
G01S 17/86 - Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
32.
SOFTWARE CAMERA VIEW LOCK ALLOWING EDITING OF DRAWING WITHOUT ANY SHIFT IN THE VIEW
A software camera lock is provided. A first image is displayed as a 3D image, wherein a semi-transparent second image overlays the first image. A software camera is inserted at a fixed location in the 3D image, wherein the software camera provides a field-of-view (FOV) displaying a portion of the 3D image, the FOV displaying a first reference, the second image displaying a second reference that represents the first reference and comprising an object. The software camera is locked in the FOV using a lock-software-camera mode. A model is inserted in the first image to match a location of the object in the second image, wherein locking the software camera in the FOV causes the FOV of the first image to be maintained in place as the model is being moved in the first image to match the location of the object in the second image.
A method for creating an augmented reality scene is provided, the method comprising, by a computing device with a processor and a memory: receiving first video image data and second video image data; calculating an error value for a current pose between the two images by comparing pixel colors in the first video image data and the second video image data; warping pixel coordinates into the second video image data using a map of depth hypotheses for each pixel; varying the pose between the first video image data and the second video image data to find a warp that corresponds to a minimum error value; and calculating, using the estimated poses, a new depth measurement for each pixel that is visible in both the first video image data and the second video image data.
A computer-implemented method includes identifying, by a controller, a part that is being transported to a workstation. The method further includes capturing a 3D scan of the part using a dynamic machine vision sensor. The method further includes validating the part by comparing the 3D scan of the part with a 3D model of the part. The method further includes, based on a determination that the part is valid, projecting a hologram that includes a sequence of assembly steps associated with the part. The method further includes, upon completion of the sequence of assembly steps, capturing a 3D scan of an item that is assembled using the part. The method further includes validating the item by comparing the 3D scan of the item with a 3D model of the item. The method further includes notifying a validity of the item.
G05B 19/4099 - Surface or curve machining, making 3D objects, e.g. desktop manufacturing
B23P 19/04 - Machines for simply fitting together or separating metal parts or objects, or metal and non-metal parts, whether or not involving some deformation; Tools or devices therefor so far as not provided for in other classes for assembling or disassembling parts
Examples described herein provide a method that includes receiving three-dimensional (3D) data of an object in an environment. The method further includes generating a point-cloud-defined boundary around the object based at least in part on the 3D data.
An example method includes receiving a first plurality of coordinate measurement points captured from at least one aerial position, the first plurality of coordinate measurement points capturing a portion of an environment and a reference object within the environment and defining at least a portion of a first point cloud. The method further includes receiving a second plurality of coordinate measurement points from a position other than the at least one aerial position, the second plurality of coordinate measurement points capturing at least some of the portion of the environment and the reference object within the environment, the second plurality of coordinate measurement points defining at least a portion of a second point cloud. The method further includes aligning the first point cloud and the second point cloud based at least in part on the reference object captured in the first point cloud and the reference object captured in the second point cloud to generate a combined point cloud.
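The alignment step above, using the reference object visible in both clouds, is commonly solved with the Kabsch/Procrustes algorithm once corresponding points on the reference object are identified in each cloud. A sketch under that assumption (the abstract does not name a specific algorithm):

```python
import numpy as np

def kabsch(src, dst):
    """Best-fit rigid transform (rotation R, translation t) mapping the
    `src` reference-object points onto the corresponding `dst` points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dc - R @ sc
    return R, t
```

Applying the recovered R and t to the whole first point cloud brings it into the frame of the second, producing the combined point cloud.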
An example method includes generating a graphical representation of a point cloud of an environment overlaid on a video stream of the environment. The method further includes receiving a first selection of a first point pair, the first point pair including a first virtual point of the point cloud and a first real point of the environment, the first real point corresponding to the first virtual point. The method further includes receiving a second selection of a second point pair, the second point pair including a second virtual point of the point cloud and a second real point of the environment, the second real point corresponding to the second virtual point. The method further includes aligning the point cloud to the environment based at least in part on the first point pair and the second point pair and updating the graphical representation based at least in part on the aligning.
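Assuming the alignment is restricted to the horizontal plane (a common levelling assumption, not stated in the abstract), two point pairs fully determine a rigid transform. A minimal sketch with invented names:

```python
import numpy as np

def align_from_pairs(virtual1, virtual2, real1, real2):
    """Return a 2x2 rotation R and translation t with R @ v + t ~= r."""
    dv = np.subtract(virtual2, virtual1)        # virtual-pair direction
    dr = np.subtract(real2, real1)              # real-pair direction
    angle = np.arctan2(dr[1], dr[0]) - np.arctan2(dv[1], dv[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    t = np.asarray(real1) - R @ np.asarray(virtual1)
    return R, t

# Usage: after solving, the second virtual point lands on its real counterpart.
R, t = align_from_pairs((0.0, 0.0), (1.0, 0.0), (2.0, 1.0), (2.0, 2.0))
mapped = R @ np.array([1.0, 0.0]) + t
```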
According to one aspect of the disclosure, a method for generating a three-dimensional model of an environment is provided. The method includes acquiring a first plurality of 3D coordinates of surfaces in the environment in a first coordinate frame of reference using a first measurement device, the first plurality of 3D coordinates including at least one subset of 3D coordinates of a target, the first measurement device optically measuring the first plurality of 3D coordinates. A second plurality of 3D coordinates of the environment are acquired in a second frame of reference using a second measurement device, the second measurement device being operably disposed in a fixed relationship to the target. The second plurality of 3D coordinates are registered with the first plurality of 3D coordinates in the first coordinate frame of reference based at least in part on the at least one subset of 3D coordinates and the fixed relationship.
G06F 30/13 - Architectural design, e.g. computer-aided architectural design [CAAD] related to design of buildings, bridges, landscapes, production plants or roads
Techniques are described to generate a 3D scene by mapping a point cloud with a 2D image, and colorize portions of the 3D scene synthetically. An input is received to select, from the 3D scene, a portion to be colorized synthetically. The colorizing includes generating a reflectance image based on an intensity image of the point cloud. The colorizing further includes generating an occlusion mask that identifies the selected portion in the reflectance image. The colorizing further includes estimating, using a trained machine learning model, a color for each point in the selected portion based on the reflectance image, the occlusion mask, and the 2D image. The 3D scene is updated by using the estimated colors from the trained machine learning model to colorize the selected portion.
A system and method for measuring three-dimensional (3D) coordinate values of an environment is provided. The system includes a movable base unit, a first scanner, and a second scanner. One or more processors perform a method that includes causing the first scanner to determine a first plurality of coordinate values in a first frame of reference based at least in part on a measurement by at least one sensor. The second scanner determines a second plurality of 3D coordinate values in a second frame of reference as the base unit is moved from a first position to a second position, the determining of the first plurality of coordinate values and the second plurality of 3D coordinate values being performed simultaneously. The second plurality of 3D coordinate values are registered in a common frame of reference based on the first plurality of coordinate values.
G01C 3/02 - Measuring distances in line of sight; Optical rangefinders - Details
G01B 5/008 - Measuring arrangements characterised by the use of mechanical techniques for measuring coordinates of points using coordinate measuring machines
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01S 17/06 - Systems determining position data of a target
G01C 21/16 - Navigation; Navigational instruments not provided for in groups G01C 1/00 - G01C 19/00 by using measurement of speed or acceleration executed aboard the object being navigated; Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
41.
USER INTERFACE FOR THREE-DIMENSIONAL MEASUREMENT DEVICE
A system and method for providing feedback on a quality of a 3D scan is provided. The system includes a coordinate scanner configured to optically measure and determine a plurality of three-dimensional coordinates to a plurality of locations on at least one surface in the environment, the coordinate scanner being configured to move through the environment while acquiring the plurality of three-dimensional coordinates. The system also includes a display having a graphical user interface. One or more processors are provided that are configured to determine a quality attribute of a process of measuring the plurality of three-dimensional coordinates based at least in part on the movement of the coordinate scanner in the environment, and to display a graphical quality indicator on the graphical user interface based at least in part on the quality attribute, the quality indicator being a graphical element having at least one movable element.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
42.
ARTIFICIAL PANORAMA IMAGE PRODUCTION AND IN-PAINTING FOR OCCLUDED AREAS IN IMAGES
A system includes a three-dimensional (3D) scanner, a camera with a viewpoint that is different from a viewpoint of the 3D scanner, and one or more processors coupled with the 3D scanner and the camera. The processors access a point cloud from the 3D scanner and one or more images from the camera; the point cloud comprises a plurality of 3D scan-points, where a 3D scan-point represents a distance of a point in a surrounding environment from the 3D scanner, and an image comprises a plurality of pixels, where a pixel represents a color of a point in the surrounding environment. The processors generate, using the point cloud and the one or more images, an artificial image that represents a portion of the surrounding environment viewed from an arbitrary position in an arbitrary direction, wherein generating the artificial image comprises colorizing each pixel in the artificial image.
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
Examples described herein provide a method that includes receiving point cloud data from a three-dimensional (3D) coordinate measurement device, the point cloud data corresponding at least in part to an object. The method further includes analyzing, by a processing system, the point cloud data by comparing a point of the point cloud data to a corresponding reference point from reference data to determine a distance between the point and the corresponding reference point, wherein the point and the corresponding reference point are associated with the object. The method further includes determining, by the processing system, whether a change to a location of the object occurred by comparing the distance to a distance threshold. The method further includes, responsive to determining that the change to the location of the object occurred, displaying a change indicium on a display of the processing system.
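The distance comparison described above reduces to a per-point norm against a threshold. A minimal sketch, assuming points and reference points are already paired one-to-one (names are illustrative):

```python
import numpy as np

def detect_changes(points, reference_points, distance_threshold):
    """Return a boolean mask: True where a point moved past the threshold."""
    distances = np.linalg.norm(points - reference_points, axis=1)
    return distances > distance_threshold

# Usage: the second point has drifted 1 m from its reference and is flagged.
points = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
reference = np.array([[0.0, 0.0, 0.05], [1.0, 1.0, 2.0]])
moved = detect_changes(points, reference, distance_threshold=0.1)
```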
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
Examples described herein provide a method for denoising data. The method includes receiving an image pair, a disparity map associated with the image pair, and a scanned point cloud associated with the image pair. The method includes generating, using a machine learning model, a predicted point cloud based at least in part on the image pair and the disparity map. The method includes comparing the scanned point cloud to the predicted point cloud to identify noise in the scanned point cloud. The method includes generating a new point cloud without at least some of the noise based at least in part on comparing the scanned point cloud to the predicted point cloud.
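The scanned-versus-predicted comparison can be sketched as a nearest-neighbour distance filter: scanned points far from every predicted point are treated as noise. Brute-force distances are used here for clarity (a KD-tree would be used at scale); names are illustrative.

```python
import numpy as np

def denoise_cloud(scanned, predicted, tolerance):
    """Keep scanned points that lie within `tolerance` of a predicted point."""
    # Pairwise distances: (num_scanned, num_predicted)
    d = np.linalg.norm(scanned[:, None, :] - predicted[None, :, :], axis=2)
    keep = d.min(axis=1) <= tolerance
    return scanned[keep]

# Usage: the outlier at (5, 5, 5) has no nearby predicted point and is dropped.
scanned = np.array([[0.0, 0.0, 0.0], [5.0, 5.0, 5.0], [1.0, 0.0, 0.0]])
predicted = np.array([[0.0, 0.0, 0.1], [1.0, 0.0, 0.1]])
clean = denoise_cloud(scanned, predicted, tolerance=0.5)
```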
A scanner that can detect types of targets in a scan area includes a processor, a housing, and a 3D scanner disposed within the housing. The processor is configured to identify locations of one or more checkerboard targets disposed in the scan area by: identifying transition locations where adjacent segments on a single scan line transition from a first color to a second color; recording locations of the transition locations as first-to-second color transition locations; identifying and recording transition locations where adjacent segments on a single scan line transition from the second color to the first color as second-to-first color transition locations; forming a transition line through adjacent first-to-second color transition locations and adjacent second-to-first color transition locations; and identifying a location of a checkerboard target based on the transition line.
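The per-scan-line transition detection can be sketched over a binarized scan line, where 0 and 1 stand for the first and second colors. This is an illustrative reading of the abstract, not the patented implementation.

```python
def find_transitions(scan_line):
    """Return (first_to_second, second_to_first) transition indices.

    scan_line: sequence of 0/1 color classes along one scan line, where a
    transition is recorded at the index of the second segment.
    """
    first_to_second, second_to_first = [], []
    for i in range(1, len(scan_line)):
        if scan_line[i - 1] == 0 and scan_line[i] == 1:
            first_to_second.append(i)
        elif scan_line[i - 1] == 1 and scan_line[i] == 0:
            second_to_first.append(i)
    return first_to_second, second_to_first

# Usage: a line crossing two checkerboard edges yields both transition kinds.
f2s, s2f = find_transitions([0, 0, 1, 1, 0, 1])
```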
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 17/42 - Simultaneous measurement of distance and other coordinates
46.
MARKERLESS REGISTRATION OF IMAGE AND LASER SCAN DATA
A system includes a first type of measurement device that captures first 2D images and a second type of measurement device that captures 3D scans. A 3D scan includes a point cloud and a second 2D image. The system also includes processors that register the first 2D images by accessing the 3D scan that records at least a portion of the surrounding environment that is also captured by a first 2D image. Further, 2D features in the second 2D image are detected, and 3D coordinates from the point cloud are associated to the 2D features. 2D features are also detected in the first 2D image, and matching 2D features from the first 2D image and the second 2D image are identified. A position and orientation of the first 2D image is calculated in a coordinate system of the 3D scan using the matching 2D features.
A system includes a three-dimensional (3D) scanner, a camera, and one or more processors coupled with the 3D scanner and the camera. The processors capture a frame that includes a point cloud comprising a plurality of 3D scan points and a 2D image. A 3D scan point represents a distance of a point in a surrounding environment from the 3D scanner. A pixel represents a color of a point in the surrounding environment. The processors identify, using a machine learning model, a subset of pixels that represents a reflective surface in the 2D image. Further, for each pixel in the subset of pixels, one or more corresponding 3D scan points are determined. An updated point cloud is created in the frame by removing the corresponding 3D scan points from the point cloud.
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
A method is provided that includes generating a four-dimensional (4D) model of an environment based on three-dimensional (3D) coordinates of the environment captured at a first point in time. The method further includes updating the 4D model based at least in part on an update to at least a subset of the 3D coordinates of the environment captured at a second point in time. The method further includes enriching the 4D model by adding supplemental information to the model.
A laser projector steers a pulsed laser beam to form a pattern of stationary dots on an object, the pulsed laser beam having a periodicity determined based at least in part on a maximum allowable spacing of the dots and on a maximum angular velocity at which the beam can be steered, wherein a pulse width of the laser beam and a pulse peak power of the laser beam are based at least in part on the determined periodicity and on laser safety requirements.
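The periodicity constraint can be illustrated with back-of-envelope arithmetic: dots drawn at the maximum steering speed are spaced (angular velocity × distance × period) apart on the object, so the pulse period is bounded by the allowed spacing. The distance factor and the function name are my assumptions for illustration, not the patent's formula.

```python
def max_pulse_period(max_spacing_m, max_angular_velocity_rad_s, distance_m):
    """Largest pulse period that keeps dot spacing within the allowed maximum."""
    return max_spacing_m / (max_angular_velocity_rad_s * distance_m)

# e.g. 10 mm allowed spacing, 2 rad/s steering, object 5 m away -> 1 ms period
period = max_pulse_period(0.01, 2.0, 5.0)
```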
A method includes capturing a frame including a 3D point cloud and a 2D image. A key point is detected in the 2D image, the key point being a candidate to be used as a feature. A 3D patch of a predetermined dimension is created that includes points surrounding a 3D position of the key point. The 3D position and the points of the 3D patch are determined from the 3D point cloud. Based on a determination that the points in the 3D patch are on a single plane based on the corresponding 3D coordinates, a descriptor for the 3D patch is computed. The frame is registered with a second frame by matching the descriptor for the 3D patch with a second descriptor associated with a second 3D patch from the second frame. The 3D point cloud is aligned with multiple 3D point clouds based on the registered frame.
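The single-plane test on a 3D patch can be sketched as a check on the smallest singular value of the centered points, which is near zero when the patch has no thickness. An illustrative sketch, not the patented criterion:

```python
import numpy as np

def patch_is_planar(patch, tolerance=1e-6):
    """True if the patch's points lie on (approximately) a single plane."""
    centered = patch - patch.mean(axis=0)
    singular_values = np.linalg.svd(centered, compute_uv=False)
    return bool(singular_values[-1] < tolerance)  # thickness of the patch

# Usage: four coplanar points pass; lifting one corner out of plane fails.
flat = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
bent = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1]], dtype=float)
```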
A method, system, and computer product that track scanning data acquired by a three-dimensional (3D) coordinate scanner is provided. The method includes storing a digital representation of an environment in memory of a mobile computing device. A first scan is performed with the 3D coordinate scanner in an area of the environment. A location of the first scan is determined on the digital representation. The first scan is registered with the digital representation. The location of the 3D coordinate scanner is indicated on the digital representation at the time of the first scan.
A virtual reality (VR) system includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides the subset to the GPU. The GPU receives the subset of the plurality of points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset of the plurality of points. The selecting of the subset of the plurality of points is at a first frames-per-second (FPS) rate and the rendering is at a second FPS rate that is faster than the first FPS rate.
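The FOV-based selection can be illustrated with a simple view-cone test; real occlusion culling is considerably more involved, so this sketch only checks the viewing direction. All names are illustrative.

```python
import numpy as np

def select_in_fov(points, eye, view_dir, half_angle_rad):
    """Keep points whose direction from the eye lies inside the view cone."""
    v = points - np.asarray(eye, dtype=float)
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # unit directions to points
    d = np.asarray(view_dir, dtype=float)
    d /= np.linalg.norm(d)
    return points[v @ d >= np.cos(half_angle_rad)]

# Usage: the point behind the viewer is culled; the two ahead are kept.
points = np.array([[2.0, 0.0, 0.0], [-2.0, 0.0, 0.0], [2.0, 0.5, 0.0]])
visible = select_in_fov(points, eye=(0, 0, 0), view_dir=(1, 0, 0),
                        half_angle_rad=np.pi / 4)
```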
Examples described herein provide a method that includes performing a first scan of an object to generate first scan data. The method further includes detecting a defect on a surface of the object by analyzing the first scan data to identify a region of interest containing the defect by comparing the first scan data to reference scan data. The method further includes performing a second scan of the region of interest containing the defect to generate second scan data, the second scan data being higher resolution scan data than the first scan data. The method further includes combining the first scan data and the second scan data to generate a point cloud of the object.
A system and method of generating a two-dimensional (2D) image of an environment is provided. The system includes a scanner having a first light source, an image sensor, a second light source, and a controller, the second light source emitting a visible light, the controller determining a distance to points based on a beam of light emitted by the first light source and receipt of the reflected beam of light from the points. Processors operably coupled to the scanner execute a method comprising: generating a map of the environment; emitting light from the second light source towards an edge defined by at least a pair of surfaces; detecting the edge based on emitting a second beam of light and receiving the reflected second beam of light; and defining a room on the map based on the detecting of the edge.
A system includes a transporter robot with a motion controller that changes the transporter robot's poses during transportation. A scanning device is fixed to the transporter robot. One or more processors are coupled to the transporter robot and the scanning device to generate a map of the surrounding environment. At a timepoint T1, when the transporter robot is stationary at a first location, a first pose of the transporter robot is captured. While the scanning device is being transported, at a timepoint T2, the scanning device captures additional scan-data of a portion of the surrounding environment. In response, the motion controller provides a second pose of the transporter robot at T2. A compensation vector and a rotation for the scan-data are determined based on a difference between the first pose and the second pose. A revised scan-data is computed, and the revised scan-data is registered to generate the map.
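The compensation step — a rotation and a compensation vector derived from the pose difference, applied to the scan-data — can be sketched as a change of frame. The pose convention below (rotation matrix plus translation in the map frame) is an assumption for illustration.

```python
import numpy as np

def compensate_scan(points, pose1, pose2):
    """Map scan points from the robot frame at T2 into the frame at T1.

    Each pose is (R, t): the robot's rotation and translation in the map.
    """
    R1, t1 = pose1
    R2, t2 = pose2
    dR = R1.T @ R2                      # rotation between the two poses
    dt = R1.T @ (t2 - t1)               # compensation vector
    return points @ dR.T + dt

# Usage: the robot drifted 1 m in x between T1 and T2, so a point scanned at
# the T2 origin lands 1 m along x in the T1 frame.
identity = np.eye(3)
pose_still = (identity, np.zeros(3))
pose_moved = (identity, np.array([1.0, 0.0, 0.0]))
revised = compensate_scan(np.zeros((1, 3)), pose_still, pose_moved)
```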
Examples described herein provide a method that includes capturing data about an environment. The method further includes generating a database of two-dimensional (2D) features and associated three-dimensional (3D) coordinates based at least in part on the data about the environment. The method further includes determining a position (x, y, z) and an orientation (pitch, roll, yaw) of a device within the environment based at least in part on the database of 2D features and associated 3D coordinates. The method further includes causing the device to display, on a display of the device, an augmented reality element at a predetermined location based at least in part on the position and the orientation of the device.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
Technical solutions are described to track a handheld three-dimensional (3D) scanner in an environment using natural features in the environment. In one or more examples, the natural features are detected using machine learning. Features are filtered by performing a stereo matching between respective pairs of stereo images captured by the scanner. The features are further filtered using time matching between images captured by the scanner at different timepoints.
A method for updating a digital representation of an environment includes capturing an image of a portion of the environment using a change-detection device. Further, a corresponding digital data is determined that represents the portion in the digital representation of the environment. A change in the portion is detected by comparing the image with the corresponding digital data. In response to the change being above a predetermined threshold, the method includes initiating a resource-intensive scan of the portion using a scanning device, and updating the digital representation of the environment by replacing the corresponding digital data representing the portion with the resource-intensive scan.
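The change test that gates the resource-intensive scan can be sketched as a crude image-difference score; the actual change-detection metric is not specified in the abstract, so the mean absolute pixel difference below is an assumption.

```python
import numpy as np

def needs_rescan(current_image, stored_view, threshold):
    """True if the mean absolute pixel difference exceeds the threshold."""
    diff = np.abs(current_image.astype(float) - stored_view.astype(float))
    return bool(diff.mean() > threshold)

# Usage: an object appearing in one corner pushes the score past the threshold.
stored = np.zeros((4, 4))
unchanged = stored.copy()
altered = stored.copy()
altered[:2, :2] = 255.0
```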
A device and method for projecting a light pattern is provided. The device includes a processor system and a housing. The housing is rotatable about a first axis. A measurement device operably coupled to the housing measures a distance to a surface in an environment. A light projector is operably coupled to the housing, the light projector having a light source and a pair of movable mirrors, the light source positioned to emit light onto the pair of movable mirrors. The processor system is responsive to computer instructions for: determining 3D coordinates of points on the surface with the measurement device; selecting a pattern; adjusting the pattern based at least in part on the 3D coordinates; and causing the light projector to emit a beam of light and moving the pair of mirrors to generate the adjusted pattern on the surface.
G01V 8/26 - Detecting, e.g. by using light barriers using multiple transmitters or receivers using mechanical scanning systems
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G01B 11/02 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
A method for performing a simultaneous location and mapping of a scanner device in a surrounding environment includes capturing a scan-data of a portion of a map of the surrounding environment. The scan-data comprises a point cloud. Further, at runtime, a user-interface is used to make a selection of a feature from the scan-data and a selection of a submap that was previously captured. The submap includes the same feature. The method further includes determining a first scan position as a present position of the scanner device, and determining a second scan position as a position of the scanner device. The method further includes determining a displacement vector for the map based on the first and the second scan positions. Further, a revised first scan position is computed based on the second scan position and the displacement vector. The scan-data is registered using the revised first scan position.
G06V 10/77 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
61.
SURFACE DETERMINATION USING THREE-DIMENSIONAL VOXEL DATA
Examples described herein provide a method that includes obtaining, by a processing device, three-dimensional (3D) voxel data. The method further includes performing, by the processing device, gray value thresholding based at least in part on the 3D voxel data and assigning a classification value to at least one voxel of the 3D voxel data. The method further includes defining, by the processing device, segments based on the classification value. The method further includes filtering, by the processing device, the segments based on the classification value. The method further includes evaluating, by the processing device, the segments to identify a surface voxel per segment. The method further includes determining, by the processing device, a position of a surface point within the surface voxel.
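The thresholding, classification, and surface-voxel steps can be sketched on a voxel grid; the 6-neighbour surface criterion below is an assumption for illustration, not the patented evaluation.

```python
import numpy as np

def classify_voxels(volume, gray_threshold):
    """Gray-value thresholding: label material voxels 1, background 0."""
    return (volume >= gray_threshold).astype(np.uint8)

def surface_voxels(labels):
    """A material voxel is a surface voxel if any 6-neighbour is background."""
    padded = np.pad(labels, 1, constant_values=0)
    core = padded[1:-1, 1:-1, 1:-1]
    all_material = np.ones(core.shape, dtype=bool)
    for axis in range(3):
        for shift in (1, -1):
            neighbour = np.roll(padded, shift, axis)[1:-1, 1:-1, 1:-1]
            all_material &= neighbour == 1
    return (core == 1) & ~all_material

# Usage: in a solid 3x3x3 block, only the centre voxel is interior; the 26
# outer voxels touch background and are classified as surface voxels.
volume = np.full((3, 3, 3), 200.0)
labels = classify_voxels(volume, gray_threshold=100.0)
surface = surface_voxels(labels)
```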
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G01N 23/046 - Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N 3/00 - G01N 17/00, G01N 21/00 or G01N 22/00 by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
G01N 23/083 - Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N 3/00 - G01N 17/00, G01N 21/00 or G01N 22/00 by transmitting the radiation through the material and measuring the absorption, the radiation being X-rays
62.
COMPENSATION OF THREE-DIMENSIONAL MEASURING INSTRUMENT HAVING AN AUTOFOCUS CAMERA
A 3D measuring instrument includes a registration camera and a surface measuring system having a projector and autofocus camera. In a first pose, the registration camera captures a first registration image of first registration points. The autofocus camera captures a first surface image of first light projected onto the object by the projector and determines first 3D coordinates of points on the object. In a second pose, the registration camera captures a second registration image of second registration points. The autofocus camera adjusts the autofocus mechanism based at least in part on adjusting a focal length to reduce a difference between positions of the first and second registration points. A second surface image of second light is captured. A compensation parameter is determined based at least in part on the first registration image, the second registration image, the first 3D coordinates, the second surface image, and the projected second light.
A system includes a three-dimensional (3D) scanner that captures a 3D point cloud with multiple scan-points corresponding to one or more objects scanned in a surrounding environment. The system further includes a camera that captures an image of the surrounding environment. The system further includes one or more processors that colorize the scan-points in the 3D point cloud using the image. Colorizing a scan-point includes determining, for the scan-point, a corresponding pixel in the image by back-projecting the scan-point to the camera. Colorizing the scan-point includes assigning, to the scan-point, a color-value based on the corresponding pixel. Colorizing the scan-point includes computing, for the scan-point, a distance of the scan-point from the camera. Colorizing the scan-point includes determining, based on the distance, that the scan-point is occluded from only one of the camera and the 3D scanner, and in response, updating the color-value assigned to the scan-point.
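The back-projection step can be sketched with a pinhole camera model. The intrinsics, the pose convention, and the omission of the occlusion handling described in the abstract are simplifications for illustration.

```python
import numpy as np

def colorize_points(points, K, R, t, image):
    """Back-project scan points into a pinhole camera image and pick colors.

    K: 3x3 intrinsics; (R, t): world-to-camera rotation and translation.
    Returns per-point colors and a validity mask (point is in front of the
    camera and lands inside the image).
    """
    cam = points @ R.T + t                        # points in the camera frame
    uvw = cam @ K.T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    h, w = image.shape[:2]
    valid = (uvw[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < w) \
        & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    colors = np.zeros((len(points), 3), dtype=image.dtype)
    colors[valid] = image[uv[valid, 1], uv[valid, 0]]
    return colors, valid

# Usage: a point on the optical axis picks up the pixel at the principal
# point; a point behind the camera is flagged invalid.
image = np.zeros((2, 2, 3), dtype=np.uint8)
image[0, 0] = (255, 0, 0)
points = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
colors, valid = colorize_points(points, np.eye(3), np.eye(3),
                                np.zeros(3), image)
```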
A system includes a three-dimensional (3D) scanner that captures a 3D point cloud corresponding to one or more objects in a surrounding environment. The system further includes a camera that captures a control image by capturing a plurality of images of the surrounding environment, and an auxiliary camera configured to capture an ultrawide-angle image of the surrounding environment. One or more processors of the system colorize the 3D point cloud using the ultrawide-angle image by mapping the ultrawide-angle image to the 3D point cloud. The system performs a limited system calibration before colorizing each 3D point cloud, and a periodic full system calibration before/after a plurality of 3D point clouds are colorized.
A point cloud is colorized by mapping a color image using an intensity image. The mapping includes detecting multiple features from the intensity image using a feature-extraction algorithm. A feature is extracted that is not within a predetermined vicinity of an edge in the intensity image. A template is created by selecting a portion of a predetermined size from the intensity image with the feature at the center. A search window is created with the same size as the template by selecting a portion of a luminance image as a search space. The luminance image is obtained from the color image. A cost value is computed for each pixel of the search space by comparing image gradients of the template and the search window. A matching point is determined in the color image corresponding to the feature based on the cost value for each pixel of the search space.
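The template-and-search-window comparison can be sketched as a sum-of-squared-differences cost over image gradients; the exhaustive search and all names are illustrative rather than the patented procedure.

```python
import numpy as np

def match_feature(intensity, luminance, feature_rc, template_size):
    """Return the luminance pixel whose window's gradients best match the
    template centred on the feature in the intensity image (SSD cost)."""
    half = template_size // 2
    r, c = feature_rc
    template = intensity[r - half:r + half + 1, c - half:c + half + 1]
    grad_t = np.gradient(template.astype(float))
    best_cost, best_rc = np.inf, None
    rows, cols = luminance.shape
    for i in range(half, rows - half):
        for j in range(half, cols - half):
            window = luminance[i - half:i + half + 1,
                               j - half:j + half + 1].astype(float)
            grad_w = np.gradient(window)
            cost = sum(((gt - gw) ** 2).sum()
                       for gt, gw in zip(grad_t, grad_w))
            if cost < best_cost:
                best_cost, best_rc = cost, (i, j)
    return best_rc

# Usage: with identical images, the feature matches its own location.
rng = np.random.default_rng(1)
img = rng.random((7, 7))
match = match_feature(img, img, feature_rc=(3, 3), template_size=3)
```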
Automatic selection of a region in a 3D point cloud is provided. Neighbor points are determined for a given seed point of a set of seed points. Responsive to a color difference of a given neighbor point from the given seed point being less than a neighbor color distance threshold, and responsive to an angle between a normal of the given neighbor point and a normal of the given seed point being less than a neighbor normal angle threshold, the given neighbor point is added to the region in the 3D point cloud. Responsive to a curvature at the given neighbor point being less than a curvature threshold, responsive to a color difference of the given neighbor point from an initial seed point being less than an initial seed color distance threshold, and responsive to an angle between the normal of the given neighbor point and a normal of the initial seed point being less than an initial seed normal angle, the given neighbor point is added to the seed points for processing.
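The neighbor tests above can be sketched as region growing over a precomputed adjacency structure. Only the current-seed color and normal tests are shown; the initial-seed tests that extend the seed set follow the same pattern. All names are illustrative.

```python
import numpy as np

def grow_region(colors, normals, adjacency, initial_seed,
                color_threshold, angle_threshold_rad):
    """Grow a region from `initial_seed`; `adjacency[i]` lists neighbours."""
    region = {initial_seed}
    seeds = [initial_seed]
    while seeds:
        s = seeds.pop()
        for n in adjacency[s]:
            if n in region:
                continue
            color_ok = np.linalg.norm(colors[n] - colors[s]) < color_threshold
            cos_angle = np.clip(normals[n] @ normals[s], -1.0, 1.0)
            normal_ok = np.arccos(cos_angle) < angle_threshold_rad
            if color_ok and normal_ok:
                region.add(n)
                seeds.append(n)
    return region

# Usage: four points in a row; the first three share color and normal, the
# last has a flipped normal and stays outside the region.
colors = np.array([[1.0, 0, 0], [1.0, 0, 0], [1.0, 0, 0], [1.0, 0, 0]])
normals = np.array([[0, 0, 1.0], [0, 0, 1.0], [0, 0, 1.0], [0, 0, -1.0]])
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
region = grow_region(colors, normals, adjacency, 0, 0.1, np.pi / 8)
```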
A light projector and method of aligning the light projector is provided. A light projector steers an outgoing beam of light onto an object, passing light returned from the object through a focusing lens onto an optical detector. The light projector may generate a light pattern or template by rapidly moving the outgoing beam of light along a path on a surface. To place the light pattern/template in a desired location, the light projector may be aligned with an electronic model.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
A system includes a first light source that emits a beam of light; an electrical modulator that imparts a time-varying modulation on the beam of light; a beam-shaping system that shapes the beam of light and projects the shaped beam of light onto an object; an image sensor that captures the beam of light reflected from the object; and processors that determine three-dimensional (3D) coordinates of points on the object.
A system includes a handheld unit having a light source, an image sensor, one or more first processors, an Ethernet cable, and a frame. The light source projects light onto an object, and the image sensor captures an image of light reflected from the object. One or more first processors are directly coupled to the frame. An accessory device has one or more second processors that receive data extracted from the captured image over the Ethernet cable and, in response, determine three-dimensional (3D) coordinates of points on the object. The accessory device also sends electrical power over the Ethernet cable to the handheld unit.
H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
A system includes a first light source that projects lines of light onto an object, a second light source that illuminates markers on or near the object, one or more image sensors that receive first reflected light from the projected lines of light and second reflected light from the illuminated markers, one or more processors that determine the locations of the lines of light on the image sensors based on the first reflected light and that determine the locations of the markers on the image sensors based on the second reflected light, and a frame physically coupled to the first light source, the second light source, the one or more image sensors, and the one or more processors.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/70 - Determining position or orientation of objects or cameras
71.
THREE-DIMENSIONAL POINT CLOUD GENERATION USING MACHINE LEARNING
An example method for training a machine learning model is provided. The method includes receiving training data collected by a three-dimensional (3D) imager, the training data comprising a plurality of training sets. The method further includes generating, using the training data, a machine learning model from which a disparity map can be inferred from a pair of images that capture a scene where a light pattern is projected onto an object.
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G03B 35/12 - Stereoscopic photography by simultaneous recording involving recording of different viewpoint images in different colours on a colour film
72.
UPSCALING TRIANGULATION SCANNER IMAGES TO REDUCE NOISE
Examples described herein provide a method that includes performing, by a processing device, using a neural network, pattern recognition on an image to recognize a feature in the image. The method further includes performing, by the processing device, upscaling of the image to increase a resolution of the image while maintaining the feature to generate an upscaled image.
A handheld device has a projector that projects a pattern of light onto an object, a first camera that captures the projected pattern of light in first images, a second camera that captures the projected pattern of light in second images, a registration camera that captures a succession of third images, one or more processors that determine three-dimensional (3D) coordinates of points on the object based at least in part on the projected pattern, the first images, and the second images, the one or more processors being further operable to register the determined 3D coordinates based at least in part on common features extracted from the succession of third images, and a mobile computing device operably connected to the handheld device and cooperating with the one or more processors, the mobile computing device operable to display the registered 3D coordinates of points.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01K 3/00 - Thermometers giving results other than momentary value of temperature
G01K 13/00 - Thermometers specially adapted for specific purposes
H10N 10/13 - Thermoelectric devices comprising a junction of dissimilar materials, i.e. devices exhibiting Seebeck or Peltier effects operating with only the Peltier or Seebeck effects characterised by the heat-exchanging means at the junction
A computer-implemented method is performed by one or more processors to automatically register a plurality of captured data obtained using a respective measurement device, each of the captured data being obtained separately. The computer-implemented method includes accessing a first captured data of a portion of an environment, and a first image corresponding to said portion of the environment captured from a known relative position and angle with respect to the first captured data. Further, from the plurality of captured data, a second captured data is identified that has at least a partial overlap with said portion, the second captured data being identified based on a corresponding second image. The second image is captured from a known relative position and angle with respect to the second captured data. The method further includes transforming the second captured data and/or the first captured data to a coordinate system.
A 3D measuring system includes a first projector that projects a first line onto an object at a first wavelength, a second projector that projects a second line onto the object at a second wavelength, a first illuminator that emits a third light onto some markers, a second illuminator that emits a fourth light onto some markers, a first camera having a first lens and a first image sensor, a second camera having a second lens and a second image sensor, the first lens operable to pass the first wavelength, block the second wavelength, and pass the third light to the first image sensor, the second lens operable to pass the second wavelength, block the first wavelength, and pass the fourth light to the second image sensor. The system further includes one or more processors operable to determine 3D coordinates based on images captured by the first image sensor and the second image sensor.
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating objects
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
According to one aspect of the disclosure, a three-dimensional coordinate scanner is provided. The scanner includes a projector configured to emit a pattern of light; a sensor arranged in a fixed predetermined relationship to the projector, the sensor having a photosensitive array comprised of a plurality of event-based pixels, each of the event-based pixels being configured to transmit a signal in response to a change in irradiance exceeding a threshold. One or more processors are electrically coupled to the projector and the sensor, the one or more processors being configured to modulate the pattern of light and determine a three-dimensional coordinate of a surface based at least in part on the pattern of light and the signal.
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/245 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
77.
Tracking data acquired by coordinate measurement devices through a workflow
A method includes providing a database for storing meta-data that describes steps in a workflow and an order of the steps in the workflow. The meta-data includes, for each of the steps: a reference to an input data file for the step; a description of a transaction performed at the step; and a reference to an output data file generated by the step based at least in part on applying the transaction to the input data file. Data that includes meta-data for a step in the workflow is received and the data is stored in the database. A trace of the workflow is generated based at least in part on contents of the database. The generating is based on receiving a request from a requestor for the trace of the workflow. At least a subset of the trace is output to the requestor.
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
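The workflow-tracing idea above can be sketched in a few lines. This is an illustrative Python sketch only, not the patented implementation: the `StepMeta` fields mirror the three meta-data items named in the abstract (input file, transaction, output file), and the in-memory list stands in for the database.

```python
from dataclasses import dataclass

@dataclass
class StepMeta:
    """Meta-data for one workflow step (field names are illustrative)."""
    input_file: str
    transaction: str
    output_file: str

class WorkflowDB:
    """Minimal stand-in for the meta-data database in the abstract."""
    def __init__(self):
        self._steps = []          # stored in workflow order

    def record(self, step: StepMeta) -> None:
        self._steps.append(step)

    def trace(self) -> list:
        """Render an ordered trace of the workflow from stored meta-data."""
        return [f"{s.input_file} --[{s.transaction}]--> {s.output_file}"
                for s in self._steps]

db = WorkflowDB()
db.record(StepMeta("scan.raw", "register", "scan.e57"))
db.record(StepMeta("scan.e57", "mesh", "scan.obj"))
print(db.trace())
```

Because each step references its input and output files, the trace doubles as a provenance chain: any output file can be walked back to the raw measurement data that produced it.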
78.
Line scanner having target-tracking and geometry-tracking modes
A handheld three-dimensional (3D) measuring system operates in a target mode and a geometry mode. In the target mode, a target-mode projector projects a first line of light onto an object, and a first illuminator sends light to markers on or near the object. A first camera captures an image of the first line of light and the illuminated markers. In the geometry mode, a geometry-mode projector projects onto the object a first multiplicity of lines, which are captured by the first camera and a second camera. One or more processors determine 3D coordinates in the target mode and the geometry mode.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G01C 11/02 - Picture-taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof provided with illuminating means
H04N 23/80 - Camera processing pipelines; Components thereof
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
79.
CLOUD-TO-CLOUD COMPARISON USING ARTIFICIAL INTELLIGENCE-BASED ANALYSIS
Examples described herein provide a method that includes aligning, by a processing device, a measured point cloud for an object with reference data for the object. The method further includes comparing, by the processing device, the measured point cloud to the reference data to determine a displacement value between each point in the measured point cloud and a corresponding point in the reference data. The method further includes generating, by the processing device, a deviation histogram of the displacement values between each point in the measured point cloud and the corresponding point in the reference data. The method further includes identifying, by the processing device, a region of interest of the deviation histogram. The method further includes determining, by the processing device, whether a deviation associated with the object exists based at least in part on the region of interest.
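The comparison pipeline above can be sketched with NumPy. This is an illustrative simplification, not the patented analysis: nearest-neighbour distance stands in for the point correspondence, and a simple threshold on the displacements stands in for the region-of-interest decision.

```python
import numpy as np

def deviation_histogram(measured, reference, bins=16):
    """For each measured point, the distance to its nearest reference
    point, plus a histogram of those displacements (a simplified
    stand-in for the corresponding-point comparison in the abstract)."""
    # pairwise distances, shape (N, M)
    d = np.linalg.norm(measured[:, None, :] - reference[None, :, :], axis=2)
    disp = d.min(axis=1)                    # nearest-neighbour displacement
    hist, edges = np.histogram(disp, bins=bins)
    return disp, hist, edges

def has_deviation(disp, threshold):
    """Flag the object as deviating if any displacement exceeds threshold."""
    return bool((disp > threshold).any())

rng = np.random.default_rng(0)
ref = rng.uniform(0.0, 1.0, size=(100, 3))  # reference surface points
meas = ref + 0.001                          # near-perfect measured copy
disp, hist, edges = deviation_histogram(meas, ref)
print(has_deviation(disp, 0.05))            # small offset only → no deviation
```

A real implementation would use a spatial index (e.g. a k-d tree) instead of the dense distance matrix, which is quadratic in the number of points.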
A method includes mapping attribute information from a sensor with 3D coordinates from a 3D measurement device, wherein the mapping comprises blending the attribute information to avoid boundary transition effects. The blending includes representing the 3D coordinates that are captured using a plurality of voxel grids. The blending further includes converting the plurality of voxel grids to a corresponding plurality of multi-band pyramids, wherein each multi-band pyramid comprises a plurality of levels, each level storing attribute information for a different frequency band. The blending further includes computing a blended multi-band pyramid based on the plurality of voxel grids by combining corresponding levels from each of the multi-band pyramids. The blending further includes converting the blended multi-band pyramid into a blended voxel grid. The blending further includes outputting the blended voxel grid.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
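The multi-band blending described above can be illustrated in one dimension. This sketch is not the patented voxel-grid method: a 1-D signal stands in for a voxel grid, and each pyramid level is a frequency band obtained by repeated smoothing, as in classic Laplacian-pyramid blending.

```python
import numpy as np

def multiband_pyramid(signal, levels=3):
    """Split a 1-D attribute signal (e.g. a row of voxel colors) into
    frequency bands; an illustrative stand-in for the per-voxel-grid
    multi-band pyramids in the abstract."""
    bands, current = [], np.asarray(signal, dtype=float)
    for _ in range(levels - 1):
        smooth = np.convolve(current, [0.25, 0.5, 0.25], mode="same")
        bands.append(current - smooth)   # high-frequency residual band
        current = smooth
    bands.append(current)                # remaining low-frequency band
    return bands

def blend_pyramids(bands_a, bands_b, weight_a):
    """Combine corresponding bands of two pyramids, then collapse the
    blended pyramid back into a single signal."""
    out = np.zeros_like(bands_a[0])
    for b_a, b_b in zip(bands_a, bands_b):
        out += b_a * weight_a + b_b * (1.0 - weight_a)
    return out

a = np.array([1.0, 1.0, 1.0, 5.0, 5.0, 5.0])   # attribute values, scan A
b = np.array([2.0, 2.0, 2.0, 2.0, 2.0, 2.0])   # attribute values, scan B
w = np.linspace(1.0, 0.0, a.size)              # smooth per-sample weights
blended = blend_pyramids(multiband_pyramid(a), multiband_pyramid(b), w)
```

Because low-frequency bands are blended over a wide support while high-frequency detail is mixed locally, the seam between the two scans carries no visible boundary transition, which is the effect the abstract describes.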
An articulated arm coordinate measuring machine includes an arm having multiple segments and an end assembly. The end assembly has multiple accessory interfaces that allow multiple accessories to be coupled to the end assembly. The accessory interfaces are configured to allow the accessories to be repeatably interchanged between the accessory interfaces.
B25J 9/04 - Programme-controlled manipulators characterised by movement of the arms, e.g. cartesian co-ordinate type by rotating at least one arm, excluding the head movement itself, e.g. cylindrical co-ordinate type or polar co-ordinate type
G01B 5/008 - Measuring arrangements characterised by the use of mechanical techniques for measuring coordinates of points using coordinate measuring machines
G01B 21/04 - Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant for measuring length, width, or thickness by measuring coordinates of points
82.
SIMULTANEOUS LOCALIZATION AND MAPPING ALGORITHMS USING THREE-DIMENSIONAL REGISTRATION
An example method includes receiving, via a 3D scanner, a 3D scan of the environment. The 3D scan includes a global position and is partitioned into a plurality of 3D submaps. The method further includes receiving, via a two-dimensional (2D) scanner accessory, a plurality of 2D submaps of the environment. The method further includes receiving coordinates of the scan position in the plurality of 2D submaps in response to the 3D scanner initiating the acquisition of the 3D scan. The method further includes associating the coordinates of the scan position with the plurality of 2D submaps. The method further includes performing real-time positioning by linking the coordinates of the scan position with the plurality of 2D submaps using a SLAM algorithm. The method further includes performing, based at least in part on the real-time positioning, a registration technique on the plurality of 3D submaps to generate a global map.
A 3D measurement system, a laser scanner and a measurement device are provided. The system includes a 3D measurement device and a 360 degree image acquisition system coupled in a fixed relationship to the 3D measurement device. The 360 degree image acquisition system includes a first photosensitive array operably coupled to a first lens, the first lens having a first optical axis in a first direction, the first lens being configured to provide a first field of view greater than 180 degrees. The image acquisition system further includes a second photosensitive array operably coupled to a second lens, the second lens having a second optical axis in a second direction, the second direction being opposite the first direction, the second lens being configured to provide a second field of view greater than 180 degrees. The first field of view at least partially overlaps with the second field of view.
A system and method for providing a distributed measurement system. The system performs operations that include receiving, via a user interface of a user device, a request from a requestor to access a data file of a project. The project includes a plurality of data files including the data file, and at least one of the plurality of data files is generated based at least in part on measurement data output from a measurement device. Based on determining that the requestor has permission to access the data file, one or more editing options are provided for editing the data file. The one or more editing options vary based at least in part on one or both of a characteristic of the user device and a characteristic of the data file. The data file is edited in response to receiving an editing request.
G06F 16/176 - Support for shared access to files; File sharing support
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G06F 16/13 - File access structures, e.g. distributed indices
A method includes capturing, by a three-dimensional (3D) scanner, a 3D point cloud, and capturing, by a camera, a control image by capturing and stitching multiple images of the surrounding environment. The method further includes capturing, by an auxiliary camera, an ultrawide-angle calibration image. The method further includes dynamically calibrating the auxiliary camera using the 3D point cloud, the control image, and the calibration image. The calibrating includes extracting a first plurality of features from the control image and extracting a second plurality of features from the calibration image. Further, a set of matching features is determined from the first and second pluralities of features. A set of control points is generated using the set of matching features by determining points in the 3D point cloud that correspond to the set of matching features. Further, a self-calibration of the auxiliary camera is performed using the set of control points.
A method for determining three-dimensional (3D) coordinates of an object surface with a 3D measuring device includes forming from the determined 3D coordinates a mesh having a first face, constructing a voxel array aligned to the first face, obtaining a plurality of images from a first camera having a corresponding plurality of poses, obtaining for each voxel in the voxel array a plurality of voxel values obtained from the corresponding plurality of images, determining for each voxel row a quality value determined based at least in part on an average value of a first quantity and a dispersion of the first quantity, the first quantity based at least in part on first voxel values determined as a function of pose, and determining a distance from a point on the first face to the object surface based at least in part on the determined quality values for the voxel rows.
Three-dimensional coordinate scanners and methods of scanning environments are described. The scanners include a housing having a top, a bottom, a first side, a second side, a first end face, and a second end face. A 3D point cloud system is arranged within the housing including a rotating mirror and configured to acquire 3D point cloud data of a scanned environment. A first color camera is arranged within the housing on the first side and configured to capture respective color data of the scanned environment, and a second color camera is arranged within the housing on the second side and configured to capture respective color data of the scanned environment.
A three-dimensional (3D) measuring instrument includes a registration camera and a surface measuring system having a projector and an autofocus camera. For the instrument in a first pose, the registration camera captures a first registration image of first registration points. The autofocus camera captures a first surface image of first light projected onto the object by the projector and determines first 3D coordinates of points on the object. For the instrument in a second pose, the registration camera captures a second registration image of second registration points. The autofocus camera adjusts the autofocus mechanism and captures a second surface image of second light projected by the projector. A compensation parameter is determined based at least in part on the first registration image, the second registration image, the first 3D coordinates, the second surface image, and the projected second light.
G06T 7/593 - Depth or shape recovery from multiple images from stereo images
G06T 7/55 - Depth or shape recovery from multiple images
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 7/521 - Depth or shape recovery from the projection of structured light
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 7/571 - Depth or shape recovery from multiple images from focus
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
A system and method for scanning an environment and generating an annotated 2D map is provided. The system includes a 2D scanner having a light source, an image sensor and a first controller. The first controller determines a distance value to at least one of the object points. The system further includes a 360° camera having a movable platform, and a second controller that merges the images acquired by the cameras to generate an image having a 360° view in a horizontal plane. The system also includes processors coupled to the 2D scanner and the 360° camera. The processors are responsive to generate a 2D map of the environment based at least in part on a signal from an operator and the distance value. The processors are further responsive to acquire a 360° image and integrate it at a location on the 2D map.
Examples described herein provide a method that includes performing multi-radii cluster matching with one or more cluster sizes for each of a plurality of points of a measurement point cloud. The method further includes determining, based on results of the multi-radii cluster matching, whether an object is displaced or whether the object includes a defect.
A three-dimensional (3D) measurement system, a method of measuring 3D coordinates, and a method of generating dense 3D data are provided. The method of measuring 3D coordinates includes using a first 3D measurement device and a second 3D measurement device in a cooperative manner. The method includes acquiring a first set of 3D coordinates with the first 3D measurement device. The first set of 3D coordinates are transferred to the second 3D measurement device. A second set of 3D coordinates is acquired with the second 3D measurement device. The second set of 3D coordinates are registered to the first set of 3D coordinates in real-time while the second 3D measurement device is acquiring the second set of 3D coordinates.
Techniques are described to determine a constraint for performing a simultaneous location and mapping. A method includes detecting a first set of planes in a first scan-data of an environment, and detecting a second set of planes in a second scan-data. Further, a plane that is in the first set of planes and the second set of planes is identified. Further, a first set of measurements of a landmark on the plane is determined from the first scan-data, and a second set of measurements of said landmark is determined from the second scan-data. The constraint is determined by computing a relationship between the first set of measurements and the second set of measurements.
G01S 17/894 - 3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
G01S 17/42 - Simultaneous measurement of distance and other coordinates
G01S 7/48 - Details of systems according to groups G01S 13/00, G01S 15/00, G01S 17/00 of systems according to group G01S 17/00
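The plane-landmark constraint above can be sketched numerically. This is an illustrative simplification, not the patented method: a plane is fit to each scan's measurements of the same landmark, and the "relationship" between the two measurement sets is reduced to the plane's offset along its normal between the two scan frames.

```python
import numpy as np

def fit_plane(points):
    """Fit n·x = d to roughly coplanar points via SVD (illustrative)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]
    if n[np.argmax(np.abs(n))] < 0:      # resolve the SVD sign ambiguity
        n = -n
    return n, float(n @ centroid)

def plane_offset_constraint(meas1, meas2):
    """Relationship between two scans' measurements of the same planar
    landmark: the offset of the plane along its normal between the two
    scan frames. A real SLAM back end would feed this residual into a
    pose-graph optimization; this sketch only computes the constraint."""
    _, d1 = fit_plane(meas1)
    _, d2 = fit_plane(meas2)
    return d1 - d2

wall = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0], [1.0, 1.0, 0.0], [0.3, 0.7, 0.0]])
shifted = wall + np.array([0.0, 0.0, 0.5])  # scan 2 sees the wall 0.5 m away
print(plane_offset_constraint(wall, shifted))   # ≈ -0.5
```

The sign and magnitude of the offset tell the optimizer how far the second scan's pose must move along the plane normal for both observations to land on the same plane.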
A system for generating an automatically segmented and annotated two-dimensional (2D) map of an environment includes processors coupled to a scanner to convert a 2D map from the scanner into a 2D image. Further, a mapping system categorizes a first set of pixels from the image into one of room-inside, room-outside, and noise by applying a trained neural network to the image. The mapping system further categorizes a first subset of pixels from the first set of pixels based on a room type if the first subset of pixels is categorized as room-inside. The mapping system also determines the room type of a second subset of pixels from the first set of pixels based on the first subset of pixels by using a flooding algorithm. The mapping system further annotates a portion of the 2D map to identify the room type based on the pixels corresponding to the portion.
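The flooding step above can be illustrated with a 4-connected flood fill. This is a hedged sketch, not the patented mapping system: the grid values, labels, and function names are invented for illustration, and the neural-network categorization is assumed to have already marked pixels as room-inside (`"in"`) or not (`"out"`).

```python
from collections import deque

def flood_room_type(grid, seed, room_type):
    """Propagate a room-type label from a classified seed pixel to the
    rest of that room's 'room-inside' pixels (4-connected BFS flood
    fill). Names are illustrative, not from the patent."""
    labels, queue = {seed: room_type}, deque([seed])
    rows, cols = len(grid), len(grid[0])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == "in" and (nr, nc) not in labels:
                labels[(nr, nc)] = room_type
                queue.append((nr, nc))
    return labels

grid = [["in", "in", "out"],
        ["in", "out", "in"]]
print(flood_room_type(grid, (0, 0), "kitchen"))
```

Note that the isolated `"in"` pixel at the lower right is not reached, which is the desired behavior: flooding stops at pixels categorized as room-outside, so each room keeps its own label.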
Scanning systems and methods for measuring shafts are described. The scanning systems include a support structure and a scanner mounted to the support structure, at least one fixed guide arranged such that the support structure is configured to move along the at least one fixed guide, at least one positional guide connected to the support structure to guide movement of the scanner along the at least one fixed guide, and an encoder operably coupled to the at least one positional guide and configured to measure, at least, a distance from the encoder to the support structure.
G01B 11/04 - Measuring arrangements characterised by the use of optical techniques for measuring length, width, or thickness specially adapted for measuring length or width of objects while moving
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
According to one or more embodiments, a method includes capturing a first three-dimensional (3D) point cloud and a second 3D point cloud. Each of the 3D point clouds includes a plurality of 3D coordinates corresponding to one or more objects scanned in a surrounding environment. The first 3D point cloud and the second 3D point cloud capture at least one overlapping portion. Further, the method includes capturing a first ultrawide-angle image and a second ultrawide-angle image of the surrounding environment, the first ultrawide-angle image captures color information of the first 3D point cloud, and the second ultrawide-angle image captures color information of the second 3D point cloud. The method further includes registering the first 3D point cloud and the second 3D point cloud by mapping one or more features from the first ultrawide-angle image and the second ultrawide-angle image.
Techniques are described for converting a 2D map into a 3D mesh. The 2D map of the environment is generated using data captured by a 2D scanner. Further, a set of features is identified from a subset of panoramic images of the environment that are captured by a camera. Further, the panoramic images from the subset are aligned with the 2D map using the features that are extracted. Further, 3D coordinates of the features are determined using 2D coordinates from the 2D map and a third coordinate based on a pose of the camera. The 3D mesh is generated using the 3D coordinates of the features.
H04N 13/261 - Image signal generators with monoscopic-to-stereoscopic image conversion
G01P 15/097 - Measuring acceleration; Measuring deceleration; Measuring shock, i.e. sudden change of acceleration by making use of inertia forces with conversion into electric or magnetic values by vibratory elements
A system and method of inspecting a plurality of objects using a computed tomography (CT) system is provided. The method includes acquiring an image of a fixture used for holding the plurality of objects with the CT system. A first electronic model of the fixture is generated. The objects are placed in the fixture. An image of the fixture and the objects is acquired with the CT system. A second electronic model of the fixture and the objects is generated. A third electronic model of the objects is defined based at least in part on subtracting the first electronic model from the second electronic model. Dimensions of the objects from the third electronic model are compared with a computer aided design (CAD) model. A report is output based at least in part on the comparison of the objects from the third electronic model with the CAD model.
G01N 23/046 - Investigating or analysing materials by the use of wave or particle radiation, e.g. X-rays or neutrons, not covered by groups G01N 3/00 – G01N 17/00, G01N 21/00 or G01N 22/00 by transmitting the radiation through the material and forming images of the material using tomography, e.g. computed tomography [CT]
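The model-subtraction step in the CT inspection method above reduces, in its simplest form, to a Boolean difference of occupancy grids. This is an illustrative sketch only, not the patented system: 2-D Boolean arrays stand in for the electronic models, and the CAD comparison and report generation are omitted.

```python
import numpy as np

def isolate_objects(fixture_model, combined_model):
    """Subtract the fixture-only occupancy model from the
    fixture-plus-objects model, leaving only the objects (Boolean
    voxels; a simplified stand-in for the CT electronic models)."""
    return combined_model & ~fixture_model

fixture = np.zeros((4, 4), dtype=bool)
fixture[:, 0] = True                     # fixture occupies column 0
combined = fixture.copy()
combined[1:3, 2] = True                  # two object voxels in the tray
objects = isolate_objects(fixture, combined)
print(int(objects.sum()))   # → 2
```

The resulting object-only model is what would then be measured and compared against the CAD reference; in practice the two CT volumes must first be registered to the same pose so the subtraction aligns voxel for voxel.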
An example system for measuring three-dimensional (3D) coordinate values of an environment is provided. The system includes a mobile scanning platform configured to measure coordinates in the environment. The mobile scanning platform has one or more radio antennas. The system further includes one or more processors operably coupled to the mobile scanning platform, the one or more processors being responsive to nontransitory executable instructions for performing a method. The method includes registering the measured coordinates to generate a point cloud. Registering includes triangulating a position of the mobile scanning platform based at least in part on data received from the one or more radio antennas. Registering further includes adjusting an orientation or position of one or more of the measured coordinates to align with a layout of the environment.
A system and method for providing feedback on a quality of a 3D scan is provided. The system includes a coordinate scanner configured to optically measure and determine a plurality of three-dimensional coordinates to a plurality of locations on at least one surface in the environment, the coordinate scanner being configured to move through the environment while acquiring the plurality of three-dimensional coordinates. The system also includes a display having a graphical user interface. One or more processors are provided that are configured to determine a quality attribute of a process of measuring the plurality of three-dimensional coordinates based at least in part on the movement of the coordinate scanner in the environment and display a graphical quality indicator on the graphical user interface based at least in part on the quality attribute, the quality indicator being a graphical element having at least one movable element.
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G01B 11/00 - Measuring arrangements characterised by the use of optical techniques
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object