Systems and methods directed to image classification using image comparison are provided. In one example, a method includes capturing, by a camera, a current image of an asset under inspection, wherein the current image includes at least one inspection point of the asset. The method further includes presenting the current image relative to a previous image of the asset for comparison, wherein the previous image includes the at least one inspection point of the asset. The method further includes receiving a classification of the current image based on a comparison between the current image and the previous image. Additional methods and systems are also provided.
Systems and methods directed to asset inspection are provided. In one example, a method includes capturing, by a camera, a live image of an asset under inspection. The method further includes receiving, at the camera, a manipulation to align the camera relative to the asset based on a comparison between the live image and a reference image of the asset. The method further includes capturing, by the camera, an adjusted live image of the asset aligned with the reference image. Additional methods and systems are also provided.
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
G01J 5/02 - Radiation pyrometry, e.g. infrared or optical thermometry - Constructional details
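The camera-alignment abstract above compares a live image against a reference image to guide the operator. One way such a comparison can be realized is phase correlation, which estimates the translational shift between the two frames; the sketch below is a minimal illustration under that assumption, not the patented method, and the function name is hypothetical:

```python
import numpy as np

def alignment_offset(live, ref):
    """Phase correlation: estimate the (dy, dx) shift of `live` relative to `ref`.

    Returns signed offsets, which could drive on-screen alignment cues.
    """
    F = np.fft.fft2(live) * np.conj(np.fft.fft2(ref))
    r = np.fft.ifft2(F / np.maximum(np.abs(F), 1e-12))  # normalized cross-power
    dy, dx = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    h, w = live.shape
    # wrap large indices to negative shifts (circular correlation)
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
```

A zero return value would indicate the adjusted live image is aligned with the reference.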
Systems and methods for improved three-dimensional tracking of objects in a traffic or security monitoring scene are disclosed herein. In various embodiments, a system includes an image sensor, an object localization system, and a coordinate transformation system. The image sensor may be configured to capture a stream of images of a scene. The object localization system may be configured to detect an object in the captured stream of images and determine an object location of the object in the stream of images. The coordinate transformation system may be configured to transform the object location of the object to first coordinates on a flat ground plane, and transform the first coordinates to second coordinates on a non-flat ground plane based at least in part on an elevation map of the scene. Associated methods are also provided.
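The two-stage transformation described above (flat ground plane first, then a correction from an elevation map) can be sketched as a ray-ground intersection that is iteratively refined against terrain height. This is a minimal sketch under assumed geometry; the function names and the fixed-point iteration are illustrative, not the disclosed implementation:

```python
import numpy as np

def pixel_to_flat_ground(ray_dir, cam_pos):
    """Intersect a camera ray with the flat ground plane z = 0 (first coordinates)."""
    t = -cam_pos[2] / ray_dir[2]
    return cam_pos + t * ray_dir

def refine_with_elevation(ray_dir, cam_pos, elevation, iters=10):
    """Refine to second coordinates: re-intersect the ray with the local
    terrain height from the elevation map until the point settles."""
    p = pixel_to_flat_ground(ray_dir, cam_pos)
    for _ in range(iters):
        z = elevation(p[0], p[1])          # terrain height at current (x, y)
        t = (z - cam_pos[2]) / ray_dir[2]  # re-intersect ray with plane z = z
        p = cam_pos + t * ray_dir
    return p
```

For gently varying terrain this iteration converges quickly; a constant-height map converges in one step.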
Bird's eye view (BEV) semantic mapping systems and methods are provided. A method includes receiving an image captured by a monocular camera having a first point of view (POV) of an environment including a plurality of features. The method further includes processing, by an artificial neural network (ANN), the captured image to generate a semantic map for the captured image, the semantic map associated with a second POV different from the first POV. The features exhibit a uniform scale in the semantic map. Additional methods and associated systems are also provided.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/56 - Context or environment of the image exterior to a vehicle from on-board sensors
G06V 20/70 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements - Labelling scene content, e.g. deriving syntactic or semantic representations
5.
BIRD'S EYE VIEW (BEV) SEMANTIC MAPPING SYSTEMS AND METHODS USING PLURALITY OF CAMERAS
Bird's eye view (BEV) semantic mapping systems and methods are provided. A method includes receiving a plurality of images captured by a plurality of monocular cameras having different points of view (POVs) of an environment. The method further includes processing, by an artificial neural network (ANN), the images to generate a plurality of semantic maps of the environment associated with the images, the semantic maps having a shared POV. The method further includes processing the semantic maps to generate a combined semantic map of the environment having the shared POV. Additional methods and associated systems are also provided.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/56 - Context or environment of the image exterior to a vehicle from on-board sensors
G06V 20/70 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements - Labelling scene content, e.g. deriving syntactic or semantic representations
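The fusion step in entry 5 (combining per-camera semantic maps that share a BEV point of view into one map) can be illustrated as averaging per-camera class scores over the cameras that actually observe each grid cell. This is a minimal sketch, assuming class-score grids and validity masks; it is not the disclosed network architecture:

```python
import numpy as np

def combine_semantic_maps(maps, valid_masks):
    """Fuse per-camera BEV class-score grids into one combined semantic map.

    maps: (N, H, W, C) class scores from N cameras in a shared POV.
    valid_masks: (N, H, W) booleans, True where a camera observes the cell.
    """
    maps = np.asarray(maps, dtype=float)
    w = np.asarray(valid_masks, dtype=float)[..., None]   # (N, H, W, 1)
    num = (maps * w).sum(axis=0)
    den = np.maximum(w.sum(axis=0), 1e-9)  # avoid divide-by-zero in unseen cells
    fused = num / den
    labels = fused.argmax(axis=-1)         # per-cell class id of the combined map
    return fused, labels
```

Cells seen by only one camera simply inherit that camera's scores, while overlapping cells are averaged.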
6.
UNMANNED AERIAL VEHICLE LANDING PLATFORM SYSTEMS AND METHODS
Systems and methods related to unmanned aerial vehicle (UAV) landing platforms are provided. In one example, a system includes a platform (108) adapted for launching and/or landing a UAV (106). The platform (108) includes a support plate (502) adapted to support the UAV (106), and one or more motors (506) configured to align the support plate (502) with a horizon based on a detected orientation of the support plate (502). A logic device may be configured to detect the orientation of the support plate (502) relative to the horizon, and control the one or more motors (506) to align the support plate (502) with the horizon based on the detected orientation of the support plate (502). A method may include adjusting the platform (108) to a desired angle relative to a horizon.
B64U 80/10 - Transport or storage specially adapted for unmanned aerial vehicles with means for moving the unmanned aerial vehicle to a supply or launch location, e.g. robotic arms or carousels
B64U 70/99 - Means for retaining the unmanned aerial vehicle on the platform, e.g. dogs or magnets
B64U 70/90 - Launching from or landing on platforms
B64U 80/80 - Transport or storage specially adapted for unmanned aerial vehicles by vehicles
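The control loop in entry 6 (motors aligning the support plate with the horizon based on its detected orientation) can be illustrated with a simple proportional command. The gain, rate limit, and function name are hypothetical; the abstract does not specify the control law:

```python
def leveling_command(roll_deg, pitch_deg, gain=0.5, max_rate=10.0):
    """Proportional motor command (deg/s) driving detected roll and pitch
    toward zero, i.e. aligning the support plate with the horizon.

    Commands are clamped to a maximum slew rate to protect the motors.
    """
    def clamp(v):
        return max(-max_rate, min(max_rate, v))
    return clamp(-gain * roll_deg), clamp(-gain * pitch_deg)
```

A real platform would close this loop continuously against the orientation sensor, but the sign convention (command opposes the detected tilt) is the essential point.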
7.
DETECTION THRESHOLD DETERMINATION FOR INFRARED IMAGING SYSTEMS AND METHODS
Techniques are provided for facilitating detection threshold determination for infrared imaging systems and methods. In one example, a method includes capturing, by an imaging device, a thermal image of a scene. The method further includes determining temperature difference data indicative of a difference between temperature data of the thermal image associated with a background of the scene and temperature data of the thermal image associated with gas detection. The method further includes determining detection threshold data based on sensitivity characteristics associated with the imaging device and the temperature difference data. The method further includes generating a detection threshold image based on the detection threshold data. Each pixel of the detection threshold image corresponds to a respective pixel of the thermal image and has a value indicative of a detection threshold associated with the respective pixel of the thermal image. Related devices and systems are also provided.
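The per-pixel threshold image described above depends on both the imager's sensitivity and the background-to-gas temperature difference. The sketch below illustrates one plausible relationship (threshold rises where temperature contrast is small relative to sensor noise); the formula, the NETD-style sensitivity parameter, and the scale factor are assumptions for illustration only:

```python
import numpy as np

def detection_threshold_image(background_temp, gas_temp, netd, k=3.0):
    """Per-pixel detection threshold from temperature-difference data.

    background_temp, gas_temp: per-pixel temperature arrays (same shape).
    netd: sensor noise-equivalent temperature difference (sensitivity).
    Low contrast relative to noise yields a high (conservative) threshold.
    """
    delta_t = np.abs(np.asarray(background_temp) - np.asarray(gas_temp))
    return k * netd / np.maximum(delta_t, 1e-6)  # guard against zero contrast
```

Each output pixel corresponds to a thermal-image pixel, matching the abstract's description of the detection threshold image.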
Techniques are disclosed for systems and methods to provide assisted navigation based on surrounding threats. In one example, an assisted navigation system receives data from a plurality of sensors associated with a mobile structure. The assisted navigation system determines a plurality of navigational hazards disposed within a monitored area associated with the mobile structure. The assisted navigation system processes the data and/or the navigational hazards to determine an operational context of the mobile structure. The assisted navigation system generates a context-dependent navigational chart for the mobile structure, wherein the navigational chart comprises greater or fewer of the navigational hazards in response to the determined operational context. The assisted navigation system updates the navigational chart in response to changes in the data. Additional systems and methods are provided.
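The context-dependent chart described above shows greater or fewer hazards depending on the operational context. A minimal sketch of that filtering idea follows; the context names and severity scheme are hypothetical, not taken from the disclosure:

```python
def filter_hazards(hazards, context):
    """Return the hazards to chart for a given operational context.

    Tight-maneuvering contexts (e.g. docking) show every hazard;
    open-water contexts suppress low-severity clutter.
    """
    min_severity = {"docking": 0, "coastal": 1, "offshore": 2}.get(context, 1)
    return [h for h in hazards if h["severity"] >= min_severity]
```

Re-running the filter as sensor data and context change corresponds to the abstract's chart-update step.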
Techniques for facilitating image setting determination and associated machine learning in infrared imaging systems and methods are provided. In one example, an infrared imaging system includes an infrared imager, a logic device, and an output/feedback device. The infrared imager is configured to capture image data associated with a scene. The logic device is configured to determine, using a machine learning model, an image setting based on the image data. The output/feedback device is configured to provide an indication of the image setting. The output/feedback device is further configured to receive user input associated with the image setting. The output/feedback device is further configured to determine, for use in training the machine learning model, a training dataset based on the user input and the image setting. Related devices and methods are also provided.
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Techniques for facilitating stray light mitigation are provided. In one example, a method includes determining moving averages associated with an image. Each of the moving averages is associated with a respective window size. The method further includes determining a kernel based on the moving averages. The method further includes generating a stray light compensated image based on the image and the kernel. Related devices and systems are also provided.
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
H04N 5/217 - Circuitry for suppressing or minimising disturbance, e.g. moiré or halo in picture signal generation
H04N 5/33 - Transforming infrared radiation
H04N 5/359 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
H04N 5/357 - Noise processing, e.g. detecting, correcting, reducing or removing noise
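The stray-light abstract above proceeds in three steps: moving averages at several window sizes, a kernel built from them, and a compensated image. A minimal sketch, assuming the multi-scale averages are blended into a smooth stray-light estimate that is subtracted (the window sizes and weights are illustrative, and a real kernel construction may differ):

```python
import numpy as np

def box_blur(img, w):
    """Moving average (box filter) of window w, with edge padding."""
    pad = w // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(w):
        for dx in range(w):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (w * w)

def stray_light_compensate(img, window_sizes=(3, 9, 27), weights=None):
    """Estimate a smooth stray-light field as a weighted blend of moving
    averages at several scales, then subtract it from the image."""
    img = np.asarray(img, dtype=float)
    weights = weights or [1.0 / len(window_sizes)] * len(window_sizes)
    stray = sum(w * box_blur(img, s) for w, s in zip(weights, window_sizes))
    return img - stray
```

On a uniform image the estimated stray-light field equals the image itself, so the compensated result is zero everywhere; structure at scales smaller than the windows survives the subtraction.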
Systems and methods include an acoustic image capture component configured to capture acoustic signals and infrared images of a scene, and a logic device configured to identify an acoustic event, localize the acoustic event including a target, identify the target in the infrared images, acquire temperature data associated with the target based on the infrared images, evaluate the temperature data and acoustic event information and determine a corresponding evaluation classification, and process the identified target in accordance with the evaluation classification.
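The evaluation-classification step above combines temperature data with acoustic event information. The rule below is a toy illustration of such a combined decision; the class names and alarm limits are hypothetical, not taken from the disclosure:

```python
def evaluation_classification(max_temp_c, acoustic_level_db,
                              temp_alarm=80.0, db_alarm=90.0):
    """Classify a localized target from thermal and acoustic evidence.

    Both channels over limit -> "critical"; one channel -> "warning";
    neither -> "normal".
    """
    if max_temp_c >= temp_alarm and acoustic_level_db >= db_alarm:
        return "critical"
    if max_temp_c >= temp_alarm or acoustic_level_db >= db_alarm:
        return "warning"
    return "normal"
```

Downstream processing of the identified target would then branch on this label.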
Fiducial marker detection systems and methods are provided. In one example, a method includes capturing, by a camera of an unmanned aerial vehicle, an image. The method further includes identifying one or more image contours in the image. The method further includes determining a position of a fiducial marker in the image. The method further includes projecting, based at least on the position, models associated with one or more contours of the fiducial marker into an image plane of the camera to obtain one or more model contours. The method further includes determining a pose associated with the fiducial marker based at least on the one or more image contours and the one or more model contours. Related devices and systems are also provided.
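The pose-estimation abstract above projects model contours of the fiducial marker into the image plane and compares them with detected image contours. The sketch below shows the two primitive operations that comparison needs, a pinhole projection (translation only, for brevity) and a contour-distance score; both function names and the omission of rotation are simplifications, not the disclosed algorithm:

```python
import numpy as np

def project(points_3d, K, t):
    """Pinhole projection of marker-model points translated by t into the
    image plane; K is the camera intrinsic matrix."""
    p = points_3d + t
    uv = (K @ p.T).T
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

def contour_score(image_pts, model_pts):
    """Mean nearest-neighbour distance between image contour points and
    projected model contour points; lower means a better pose fit."""
    d = np.linalg.norm(image_pts[:, None, :] - model_pts[None, :, :], axis=-1)
    return d.min(axis=1).mean()
```

A pose search would minimize `contour_score` over candidate poses; a full solution also rotates the model points before projection.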
A detection device, such as an unmanned vehicle, is adapted to detect and classify an object in sensor data comprising at least one image using a dual-task classification model comprising predetermined object classifications and learned object classifications, determine user interest in the detected object, communicate object detection information to a control system based at least in part on the determined user interest in the detected object, receive learned object classification parameters based at least in part on the communicated object detection information, and update the dual-task classification model with the received learned object classification parameters.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using data integration and reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. "bagging" or "boosting"
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
Rescue parachute deployment systems (RPDSs) and related techniques are provided to improve the safety and operational flexibility of unmanned aerial vehicles (UAVs). An RPDS includes a canopy assembly (168), a rotor guard (680, 682) disposed at least partially about the canopy assembly and configured to protect the canopy assembly from rotor strike damage as the canopy assembly is launched through a rotor plane of the UAV, and an ejector assembly (164) configured to deploy the rotor guard into, and the canopy assembly through, the rotor plane of the UAV. The RPDS may also include a logic device coupled to and/or integrated with the ejector assembly and/or the UAV that is configured to determine that a rescue parachute launch condition is active and to control the ejector assembly to deploy the canopy assembly through the rotor plane of the UAV.
Various techniques are disclosed to provide for improved detection of elevated human body temperatures. In one example, a method includes receiving a thermal image. The method also includes processing the thermal image to detect a person's face and a characteristic associated with the person. The method also includes selecting a circadian rhythm model associated with the detected characteristic. The method also includes determining an expected body temperature using the circadian rhythm model. The method also includes extracting a temperature associated with the person's face from the thermal image. The method also includes comparing the extracted temperature with the expected body temperature to detect an elevated body temperature condition. Additional methods and systems are also provided.
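The circadian-rhythm comparison described above can be illustrated with a simple sinusoidal model whose expectation the extracted facial temperature is compared against. The model parameters (mean, amplitude, peak hour) and the fixed margin are illustrative values, not the disclosed model:

```python
import math

def expected_body_temp(hour, mean=36.8, amplitude=0.4, peak_hour=17.0):
    """Sinusoidal circadian model in degrees C: body temperature peaks in
    the late afternoon and bottoms out roughly 12 hours earlier."""
    phase = 2.0 * math.pi * (hour - peak_hour) / 24.0
    return mean + amplitude * math.cos(phase)

def elevated(measured, hour, margin=0.5):
    """Flag an elevated-body-temperature condition when the extracted facial
    temperature exceeds the circadian expectation by more than the margin."""
    return measured > expected_body_temp(hour) + margin
```

Selecting model parameters per detected characteristic (as the abstract describes) would amount to swapping in a different parameter set before calling `elevated`.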
Techniques are disclosed for systems and methods to provide remote sensing imagery for mobile structures. A remote sensing imagery system includes a radar assembly (160,300,302,304) mounted to a mobile structure (101) and a coupled logic device (130). The radar assembly includes an imaging system (282) coupled to or within the radar assembly and configured to provide image data associated with the radar assembly. The logic device is configured to receive radar returns corresponding to a detected target (464) from the radar assembly and image data corresponding to the radar returns from the imaging system, and then generate radar image data based on the radar returns and the image data. Subsequent user input and/or the sensor data may be used to adjust a steering actuator, a propulsion system thrust, and/or other operational systems of the mobile structure.
Techniques are disclosed for systems and methods to provide remote sensing imagery for mobile structures. A remote sensing imagery system includes a radar assembly mounted to a mobile structure and a coupled logic device. The radar assembly includes an orientation and position sensor (OPS) coupled to or within the radar assembly and configured to provide orientation and position data associated with the radar assembly. The logic device is configured to receive radar returns corresponding to a detected target from the radar assembly and orientation and/or position data corresponding to the radar returns from the OPS, determine a target radial speed corresponding to the detected target, and then generate radar image data based on the radar returns and the target radial speed. Subsequent user input and/or the sensor data may be used to adjust a steering actuator, a propulsion system thrust, and/or other operational systems of the mobile structure.
G01S 7/292 - Receivers with extraction of wanted echo signals
G01S 7/295 - Means for transforming coordinates or for evaluating data, e.g. using computers
G01S 13/524 - Discriminating between fixed and moving objects or between objects moving at different speeds using transmission of interrupted pulse-modulated wave trains based upon the phase or frequency shift resulting from movement of objects, with reference to the transmitted signals, e.g. coherent MTI
G01S 13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 13/60 - Velocity or trajectory determination systems; Sense-of-movement determination systems wherein the transmitter and receiver are mounted on the moving object, e.g. for determining ground speed, drift angle or ground track
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar or direction finder
G01S 13/89 - Radar or analogous systems specially adapted for specific applications for mapping or imaging
G01S 13/937 - Radar or analogous systems specially adapted for specific applications for anti-collision purposes of marine craft
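The target radial speed determined in the preceding abstract typically comes from the Doppler shift of the radar returns. The standard radar Doppler relation v_r = f_d * c / (2 * f_c) can be sketched directly; the X-band carrier frequency used as a default here is an illustrative assumption:

```python
def radial_speed_from_doppler(doppler_hz, carrier_hz=9.4e9, c=3.0e8):
    """Target radial speed (m/s) from the measured Doppler shift.

    v_r = f_d * c / (2 * f_c): the factor of two accounts for the
    round trip of the radar signal to the target and back.
    """
    return doppler_hz * c / (2.0 * carrier_hz)
```

Compensating this speed for the mobile structure's own motion (from the OPS data the abstract mentions) would yield the target's speed over ground.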