Systems and methods directed to image classification using image comparison are provided. In one example, a method includes capturing, by a camera, a current image of an asset under inspection, wherein the current image includes at least one inspection point of the asset. The method further includes presenting the current image relative to a previous image of the asset for comparison, wherein the previous image includes the at least one inspection point of the asset. The method further includes receiving a classification of the current image based on a comparison between the current image and the previous image. Additional methods and systems are also provided.
Systems and methods directed to asset inspection are provided. In one example, a method includes capturing, by a camera, a live image of an asset under inspection. The method further includes receiving, at the camera, a manipulation to align the camera relative to the asset based on a comparison between the live image and a reference image of the asset. The method further includes capturing, by the camera, an adjusted live image of the asset aligned with the reference image. Additional methods and systems are also provided.
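Both abstracts above hinge on comparing a current or live image against a previous or reference image of the same asset. The abstracts do not name a similarity measure; as a minimal sketch, a zero-mean normalized cross-correlation (ZNCC) score could drive both the side-by-side classification and the alignment check (the function names and threshold are illustrative assumptions):

```python
import numpy as np

def zncc(a, b):
    """Zero-mean normalized cross-correlation of two equally sized
    grayscale images; 1.0 means identical up to brightness/contrast."""
    a = np.asarray(a, float).ravel() 
    b = np.asarray(b, float).ravel()
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

def alignment_ok(live, reference, threshold=0.9):
    """Hypothetical alignment check: accept the camera pose once the live
    image correlates strongly enough with the reference image."""
    return zncc(live, reference) >= threshold
```

ZNCC is invariant to global brightness and contrast changes, which matters when the two inspections were captured under different lighting.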
H04N 23/11 - Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
Systems and methods for improved three-dimensional tracking of objects in a traffic or security monitoring scene are disclosed herein. In various embodiments, a system includes an image sensor, an object localization system, and a coordinate transformation system. The image sensor may be configured to capture a stream of images of a scene. The object localization system may be configured to detect an object in the captured stream of images and determine an object location of the object in the stream of images. The coordinate transformation system may be configured to transform the object location of the object to first coordinates on a flat ground plane, and transform the first coordinates to second coordinates on a non-flat ground plane based at least in part on an elevation map of the scene. Associated methods are also provided.
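The two-step transform above (flat ground plane first, then a non-flat plane via an elevation map) can be sketched as a fixed-point iteration: the ray from the camera through the flat-plane point is re-intersected with the terrain, where at terrain height z the intersection lies at fraction (h − z)/h along the ray. All names and the iteration scheme are illustrative, not taken from the patent:

```python
import numpy as np

def refine_on_terrain(flat_xy, cam_xy, cam_height, elevation, iters=5):
    """Refine a flat-ground-plane position onto a non-flat ground plane.
    `elevation` is any callable (x, y) -> z standing in for the scene's
    elevation map. Each iteration rescales the camera-to-point ray by
    (h - z) / h using the terrain height at the current estimate."""
    flat_xy = np.asarray(flat_xy, float)
    cam_xy = np.asarray(cam_xy, float)
    pt = flat_xy.copy()
    for _ in range(iters):
        z = elevation(*pt)
        pt = cam_xy + (flat_xy - cam_xy) * (cam_height - z) / cam_height
    return pt, elevation(*pt)
```

On flat terrain the refinement is the identity; on raised terrain the object is pulled back toward the camera, correcting the overshoot a flat-plane assumption produces.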
Bird's eye view (BEV) semantic mapping systems and methods are provided. A method includes receiving an image captured by a monocular camera having a first point of view (POV) of an environment including a plurality of features. The method further includes processing, by an artificial neural network (ANN), the captured image to generate a semantic map for the captured image, the semantic map associated with a second POV different from the first POV. The features exhibit a uniform scale in the semantic map. Additional methods and associated systems are also provided.
Bird's eye view (BEV) semantic mapping systems and methods are provided. A method includes receiving a plurality of images captured by a plurality of monocular cameras having different points of view (POVs) of an environment. The method further includes processing, by an artificial neural network (ANN), the images to generate a plurality of semantic maps of the environment associated with the images, the semantic maps having a shared POV. The method further includes processing the semantic maps to generate a combined semantic map of the environment having the shared POV. Additional methods and associated systems are also provided.
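The final step above merges several per-camera semantic maps that share one POV into a combined map. The abstract does not specify the fusion rule; one simple assumption is to treat each map as an independent per-pixel detector and combine class probabilities with a noisy-OR:

```python
import numpy as np

def fuse_semantic_maps(maps):
    """Fuse per-camera BEV class-probability maps that share one POV.
    Noisy-OR per pixel: p = 1 - prod(1 - p_i). The fusion rule is an
    illustrative choice; the abstract does not name one."""
    stack = np.stack([np.asarray(m, float) for m in maps])
    return 1.0 - np.prod(1.0 - stack, axis=0)
```

Noisy-OR keeps a detection alive if any single camera sees it, while agreement between cameras raises confidence above any individual map.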
Systems and methods related to unmanned aerial vehicle (UAV) landing platforms are provided. In one example, a system includes a platform (108) adapted for launching and/or landing a UAV (106). The platform (108) includes a support plate (502) adapted to support the UAV (106), and one or more motors (506) configured to align the support plate (502) with a horizon based on a detected orientation of the support plate (502). A logic device may be configured to detect the orientation of the support plate (502) relative to the horizon, and control the one or more motors (506) to align the support plate (502) with the horizon based on the detected orientation of the support plate (502). A method may include adjusting the platform (108) to a desired angle relative to a horizon.
B64U 80/10 - Transport or storage specially adapted for UAVs with means for moving the UAV to a supply or launch location, e.g. robotic arms or carousels
B64U 70/99 - Means for retaining the UAV on the platform, e.g. dogs or magnets
B64U 70/90 - Launching from or landing on platforms
B64U 80/80 - Transport or storage specially adapted for UAVs by vehicles
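The platform abstract above describes motors aligning a support plate with the horizon from a detected orientation. A minimal control sketch, assuming idealized first-order plate dynamics and a proportional controller (gain, time step, and tolerance are all hypothetical values, not from the patent):

```python
def level_platform(tilt_deg, kp=0.5, dt=0.1, steps=200, tol=0.01):
    """Proportional leveling sketch: each step, command a motor rate
    opposing the measured tilt, then integrate an idealized plate
    response. Returns the residual tilt after the loop settles."""
    for _ in range(steps):
        if abs(tilt_deg) < tol:
            break
        rate = -kp * tilt_deg          # motor rate command (deg/s)
        tilt_deg += rate * dt          # idealized plate response
    return tilt_deg
```

A real implementation would close the loop on an IMU reading and add rate limits and integral action; this only shows the feedback structure implied by "control the one or more motors ... based on the detected orientation."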
DETECTION THRESHOLD DETERMINATION FOR INFRARED IMAGING SYSTEMS AND METHODS
Techniques are provided for facilitating detection threshold determination for infrared imaging systems and methods. In one example, a method includes capturing, by an imaging device, a thermal image of a scene. The method further includes determining temperature difference data indicative of a difference between temperature data of the thermal image associated with a background of the scene and temperature data of the thermal image associated with gas detection. The method further includes determining detection threshold data based on sensitivity characteristics associated with the imaging device and the temperature difference data. The method further includes generating a detection threshold image based on the detection threshold data. Each pixel of the detection threshold image corresponds to a respective pixel of the thermal image and has a value indicative of a detection threshold associated with the respective pixel of the thermal image. Related devices and systems are also provided.
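One plausible concrete form of the per-pixel threshold described above (the exact formula is an assumption; the abstract only states that the threshold depends on the imager's sensitivity characteristics and the background/gas temperature difference) scales the sensor's noise-equivalent temperature difference (NETD) against the available temperature contrast, so low contrast raises the detectable limit:

```python
import numpy as np

def detection_threshold_image(background_temp, gas_temp, netd, k=3.0, eps=1e-3):
    """Illustrative detection-threshold image: per pixel, the required
    gas signal scales with k * NETD and inversely with the background
    vs. gas temperature contrast. Output has the thermal image's shape,
    one threshold value per pixel."""
    delta_t = np.abs(np.asarray(background_temp, float)
                     - np.asarray(gas_temp, float))
    return k * float(netd) / np.maximum(delta_t, eps)
```

Pixels with near-zero contrast get a very large threshold, i.e. gas is effectively undetectable there, which is the behavior a per-pixel threshold map is meant to expose.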
Techniques are disclosed for systems and methods to provide assisted navigation based on surrounding threats. In one example, an assisted navigation system receives data from a plurality of sensors associated with a mobile structure. The assisted navigation system determines a plurality of navigational hazards disposed within a monitored area associated with the mobile structure. The assisted navigation system processes the data and/or the navigational hazards to determine an operational context of the mobile structure. The assisted navigation system generates a context-dependent navigational chart for the mobile structure, wherein the navigational chart comprises more or fewer of the navigational hazards in response to the determined operational context. The assisted navigation system updates the navigational chart in response to changes in the data. Additional systems and methods are provided.
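The context-dependent chart above could be realized as a simple severity filter whose cutoff depends on the operational context (the context names, severity scale, and thresholds below are illustrative assumptions, not from the patent):

```python
def chart_hazards(hazards, context, min_severity=None):
    """Hypothetical context-dependent chart filter: in open water only
    severe hazards are charted; in a crowded harbor everything is.
    `hazards` is a list of dicts with a numeric 'severity' field."""
    min_severity = min_severity or {"open_water": 3, "harbor": 1}
    return [h for h in hazards if h["severity"] >= min_severity[context]]
```

Switching context then adds or removes hazards from the chart without re-running detection, matching the abstract's "more or fewer of the navigational hazards."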
Techniques for facilitating image setting determination and associated machine learning in infrared imaging systems and methods are provided. In one example, an infrared imaging system includes an infrared imager, a logic device, and an output/feedback device. The infrared imager is configured to capture image data associated with a scene. The logic device is configured to determine, using a machine learning model, an image setting based on the image data. The output/feedback device is configured to provide an indication of the image setting. The output/feedback device is further configured to receive user input associated with the image setting. The output/feedback device is further configured to determine, for use in training the machine learning model, a training dataset based on the user input and the image setting. Related devices and methods are also provided.
Techniques for facilitating stray light mitigation are provided. In one example, a method includes determining moving averages associated with an image. Each of the moving averages is associated with a respective window size. The method further includes determining a kernel based on the moving averages. The method further includes generating a stray light compensated image based on the image and the kernel. Related devices and systems are also provided.
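The pipeline above (moving averages at several window sizes, combined into a kernel, then a compensated image) can be sketched as follows. The blend weights, window sizes, and subtraction strength are assumptions; the patent only states the kernel is derived from the moving averages:

```python
import numpy as np

def box_mean(img, k):
    """k x k moving average (k odd) with edge-replicated padding,
    computed separably with 1-D convolutions."""
    pad = k // 2
    padded = np.pad(np.asarray(img, float), pad, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(np.convolve, 1, padded, kern, mode="valid")
    return np.apply_along_axis(np.convolve, 0, rows, kern, mode="valid")

def stray_light_compensate(img, window_sizes=(5, 11, 21),
                           weights=None, strength=0.5):
    """Illustrative stray-light mitigation: estimate the slowly varying
    stray-light field as a weighted blend of moving averages at several
    window sizes (implicitly defining one large blur kernel), then
    subtract a fraction of that estimate from the image."""
    img = np.asarray(img, float)
    if weights is None:
        weights = np.ones(len(window_sizes)) / len(window_sizes)
    stray = sum(w * box_mean(img, k) for w, k in zip(weights, window_sizes))
    return img - strength * stray
```

Because the blend of box filters is itself a single linear kernel, the two-stage description in the abstract (moving averages, then a kernel) collapses to one convolution in this sketch.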
H04N 5/359 - Noise processing, e.g. detecting, correcting, reducing or removing noise applied to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
H04N 5/357 - Noise processing, e.g. detecting, correcting, reducing or removing noise
Systems and methods include an acoustic image capture component configured to capture acoustic signals and infrared images of a scene, and a logic device configured to identify an acoustic event, localize the acoustic event including a target, identify the target in the infrared images, acquire temperature data associated with the target based on the infrared images, evaluate the temperature data and acoustic event information and determine a corresponding evaluation classification, and process the identified target in accordance with the evaluation classification.
Fiducial marker detection systems and methods are provided. In one example, a method includes capturing, by a camera of an unmanned aerial vehicle, an image. The method further includes identifying one or more image contours in the image. The method further includes determining a position of a fiducial marker in the image. The method further includes projecting, based at least on the position, models associated with one or more contours of the fiducial marker into an image plane of the camera to obtain one or more model contours. The method further includes determining a pose associated with the fiducial marker based at least on the one or more image contours and the one or more model contours. Related devices and systems are also provided.
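The projection step above (mapping marker contour models into the camera's image plane from a candidate position) is, under a standard pinhole assumption, x ~ K(RX + t). This parameterization is a textbook sketch, not necessarily the patent's formulation:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3-D model points (e.g. a fiducial marker's contour
    corners) into the image plane with a pinhole model: x ~ K (R X + t).
    K is the 3x3 intrinsic matrix; R, t is the marker-to-camera pose."""
    X = np.asarray(points_3d, float).T          # 3 x N model points
    cam = R @ X + np.asarray(t, float).reshape(3, 1)  # camera frame
    uv = K @ cam
    return (uv[:2] / uv[2]).T                   # N x 2 pixel coordinates
```

Comparing these projected model contours with contours extracted from the image is what lets the pose estimate be scored and refined.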
A detection device, such as an unmanned vehicle, is adapted to detect and classify an object in sensor data comprising at least one image using a dual-task classification model comprising predetermined object classifications and learned object classifications, determine user interest in the detected object, communicate object detection information to a control system based at least in part on the determined user interest in the detected object, receive learned object classification parameters based at least in part on the communicated object detection information, and update the dual-task classification model with the received learned object classification parameters.
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
G06V 10/778 - Active pattern-learning, e.g. online learning of image or video features
Rescue parachute deployment systems (RPDSs) and related techniques are provided to improve the safety and operational flexibility of unmanned aerial vehicles (UAVs). An RPDS includes a canopy assembly (168), a rotor guard (680, 682) disposed at least partially about the canopy assembly and configured to protect the canopy assembly from rotor strike damage as the canopy assembly is launched through a rotor plane of the UAV, and an ejector assembly (164) configured to deploy the rotor guard into and the canopy assembly through a rotor plane of the UAV. The RPDS may also include a logic device coupled to and/or integrated with the ejector assembly and/or the UAV that is configured to determine a rescue parachute launch condition is active and to control the ejector assembly to deploy the canopy assembly through the rotor plane of the UAV.
Various techniques are disclosed to provide for improved detection of elevated human body temperatures. In one example, a method includes receiving a thermal image. The method also includes processing the thermal image to detect a person's face and a characteristic associated with the person. The method also includes selecting a circadian rhythm model associated with the detected characteristic. The method also includes determining an expected body temperature using the circadian rhythm model. The method also includes extracting a temperature associated with the person's face from the thermal image. The method also includes comparing the extracted temperature with the expected body temperature to detect an elevated body temperature condition. Additional methods and systems are also provided.
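The circadian-rhythm comparison above can be sketched with a cosinor-style model, in which expected core temperature oscillates around a mean (mesor) with a daily peak at the acrophase. The parameter values and the fixed margin below are illustrative placeholders, not the patent's per-characteristic models:

```python
import math

def expected_body_temp(hour, mesor=36.8, amplitude=0.4, acrophase=17.0):
    """Cosinor-style circadian model of body temperature (deg C):
    T(t) = mesor + amplitude * cos(2*pi*(t - acrophase)/24),
    peaking at `acrophase` hours of the day."""
    return mesor + amplitude * math.cos(2 * math.pi * (hour - acrophase) / 24)

def elevated(measured_temp, hour, margin=0.5, **model_kwargs):
    """Flag an elevated-temperature condition when the facial temperature
    extracted from the thermal image exceeds the circadian expectation
    by more than `margin`."""
    return measured_temp > expected_body_temp(hour, **model_kwargs) + margin
```

The point of the time-dependent baseline is that the same reading of 37.0 °C can be normal in the late afternoon yet elevated in the early morning, when the expected temperature is at its trough.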
Techniques are disclosed for systems and methods to provide remote sensing imagery for mobile structures. A remote sensing imagery system includes a radar assembly (160,300,302,304) mounted to a mobile structure (101) and a coupled logic device (130). The radar assembly includes an imaging system (282) coupled to or within the radar assembly and configured to provide image data associated with the radar assembly. The logic device is configured to receive radar returns corresponding to a detected target (464) from the radar assembly and image data corresponding to the radar returns from the imaging system, and then generate radar image data based on the radar returns and the image data. Subsequent user input and/or the sensor data may be used to adjust a steering actuator, a propulsion system thrust, and/or other operational systems of the mobile structure.
Techniques are disclosed for systems and methods to provide remote sensing imagery for mobile structures. A remote sensing imagery system includes a radar assembly mounted to a mobile structure and a coupled logic device. The radar assembly includes an orientation and position sensor (OPS) coupled to or within the radar assembly and configured to provide orientation and position data associated with the radar assembly. The logic device is configured to receive radar returns corresponding to a detected target from the radar assembly and orientation and/or position data corresponding to the radar returns from the OPS, determine a target radial speed corresponding to the detected target, and then generate remote sensor image data based on the remote sensor returns and the target radial speed. Subsequent user input and/or the sensor data may be used to adjust a steering actuator, a propulsion system thrust, and/or other operational systems of the mobile structure.
G01S 7/295 - Means for transforming co-ordinates or for evaluating data, e.g. using computers
G01S 13/524 - Discriminating between fixed and moving objects or between objects moving at different speeds using transmissions of interrupted pulse modulated waves based upon the phase or frequency shift resulting from movement of objects, with reference to the transmitted signals, e.g. coherent MTi
G01S 13/58 - Velocity or trajectory determination systems; Sense-of-movement determination systems
G01S 13/60 - Velocity or trajectory determination systems; Sense-of-movement determination systems wherein the transmitter and receiver are mounted on the moving object, e.g. for determining ground speed, drift angle, ground track
G01S 13/86 - Combinations of radar systems with non-radar systems, e.g. sonar, direction finder
G01S 13/89 - Radar or analogous systems, specially adapted for specific applications for mapping or imaging
G01S 13/937 - Radar or analogous systems, specially adapted for specific applications for anti-collision purposes of marine craft
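The target radial speed in the second radar abstract is, for a monostatic radar, given by the textbook Doppler relation v = f_d · λ / 2 with λ = c / f (the abstract itself does not spell out the computation):

```python
def radial_speed(doppler_shift_hz, radar_freq_hz, c=299_792_458.0):
    """Radial speed (m/s) of a target from its Doppler shift:
    v = f_d * wavelength / 2, with wavelength = c / f. The factor of 2
    accounts for the two-way path of a monostatic radar."""
    wavelength = c / radar_freq_hz
    return doppler_shift_hz * wavelength / 2.0
```

For an X-band radar near 9.4 GHz (wavelength about 3.2 cm), a 1 kHz Doppler shift corresponds to roughly 16 m/s of closing speed.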