An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of an object. A pixel position of a body part of a person is determined in at least a subset of the image frames received from the image sensor, thereby determining a set of pixel positions of the body part. An aggregated body part position is determined based on the set of pixel positions. If the aggregated body part position is determined to correspond to a position associated with the object, a trigger signal is provided indicating that an interaction event has occurred.
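The aggregation-and-trigger logic described above can be sketched as follows; this is a minimal illustration, and the function name, the averaging strategy, the rectangular zone format, and the minimum-frame count are all assumptions rather than details taken from the abstract:

```python
def aggregate_body_part_position(pixel_positions, zone, min_frames=3):
    """Aggregate per-frame pixel positions of a body part (e.g. a hand)
    and decide whether an interaction event should be triggered.

    pixel_positions: list of (x, y) detections, one per analysed frame.
    zone: (x_min, y_min, x_max, y_max) region associated with the object.
    Returns (aggregated_position, triggered).
    """
    if len(pixel_positions) < min_frames:
        return None, False
    # Aggregate by averaging the per-frame detections.
    xs = [p[0] for p in pixel_positions]
    ys = [p[1] for p in pixel_positions]
    agg = (sum(xs) / len(xs), sum(ys) / len(ys))
    x_min, y_min, x_max, y_max = zone
    # Trigger only if the aggregated position falls inside the zone.
    triggered = x_min <= agg[0] <= x_max and y_min <= agg[1] <= y_max
    return agg, triggered
```

Averaging across frames suppresses per-frame detection jitter, so a single spurious detection near the object does not fire the trigger.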
G06V 10/147 - Optical characteristics of the device performing the acquisition or on the illumination arrangements - Details of sensors, e.g. sensor lenses
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
4.
System and method for determining an item count in a rack using magnetic sensors
A system includes a longitudinal rack storing a plurality of items, a shoe movably attached to the rack, a magnet coupled to the shoe, and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a pre-selected thickness. Each sensor generates a value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing values generated by the sensors, and a processor configured to determine a position of the shoe/magnet based on the values and determine a count of the items based on the position of the shoe/magnet.
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first person or a second person may be associated with the event. In response to determining that the first or second person may be associated with the event, buffer frames are stored of top-view images generated by the sensor during a time period associated with the event. The tracking subsystem then determines, using at least one of the stored buffer frames and a first action-detection algorithm, whether an action associated with the event was performed by the first person or the second person.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G08B 13/14 - Mechanical actuation by lifting or attempted removal of hand-portable articles
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
7.
SCALABLE POSITION TRACKING SYSTEM FOR TRACKING POSITION IN LARGE SPACES
A weight sensor includes a plurality of load cells. A first load cell is configured to produce a first electric current based on a force experienced by the first load cell. A second load cell is configured to produce a second electric current based on a force experienced by the second load cell. A third load cell is configured to produce a third electric current based on a force experienced by the third load cell. A fourth load cell is configured to produce a fourth electric current based on a force experienced by the fourth load cell.
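One plausible downstream use of the four load-cell currents is to combine them into a total force and infer an item count; the calibration constant, the unit weight, and the function name below are illustrative assumptions (the abstract itself only describes the cells producing currents):

```python
def estimate_item_count(cell_currents, amps_per_newton, unit_weight_n):
    """Combine the currents from four load cells at the corners of a shelf
    into a total force, then estimate how many items of a known unit
    weight rest on the shelf.

    cell_currents: four current readings in amperes.
    amps_per_newton: assumed linear calibration of each cell.
    unit_weight_n: weight of one item in newtons.
    """
    total_force = sum(i / amps_per_newton for i in cell_currents)
    return round(total_force / unit_weight_n)
```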
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to receive the frame, to determine a pixel location for a first person, and to determine that the person is within a predefined zone associated with the rack. The tracking system is further configured to identify a plurality of items in a digital cart associated with the person, to identify an item from the digital cart, and to remove the identified item from the digital cart.
A system includes a first sensor configured to generate images of at least a first portion of a space. A processor of the system is configured to determine a position of a possible object in the space based on generated images.
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
G01G 19/52 - Weighing apparatus combined with other objects, e.g. with furniture
A system includes a longitudinal rack storing a plurality of items, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a thickness of an item stored in the rack. Each sensor generates a voltage value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing voltage values generated by the sensors, and a processor configured to monitor voltage values generated by the sensors, detect that a particular sensor has generated a maximum voltage value and determine a number of items stored in the rack based on a particular number of items corresponding to the particular sensor.
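The peak-sensor counting scheme can be illustrated with a short sketch; the voltage values in the usage below and the sensor-to-count mapping are assumed for illustration:

```python
def count_items(sensor_voltages, items_per_sensor):
    """Estimate the number of items in a gravity-fed rack from an array of
    magnetic sensors. The sensor nearest the magnet on the pusher shoe
    reads the highest voltage; each sensor index maps to a known item
    count because the sensor spacing equals the item thickness.

    sensor_voltages: one reading per sensor along the rack.
    items_per_sensor: item count associated with each sensor position.
    """
    peak_index = max(range(len(sensor_voltages)),
                     key=lambda i: sensor_voltages[i])
    return items_per_sensor[peak_index]
```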
G06M 9/00 - Counting of objects in a stack thereof
A47F 1/12 - Containers with arrangements for dispensing articles dispensing from the side of an approximately horizontal stack
G01R 15/20 - Adaptations providing voltage or current isolation, e.g. for high-voltage or high-current networks using galvano-magnetic devices, e.g. Hall-effect devices
G06Q 10/0875 - Itemisation or classification of parts, supplies or services, e.g. bill of materials
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
12.
ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A device configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a first image of the platform is captured. A first item identifier of the first item is identified and stored in a memory. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a second image of the platform is captured. The second image is compared with the first image. Upon determining that the first item depicted in the second image overlaps with the first item depicted in the first image and the overlap equals or exceeds a threshold, the first item identifier is assigned to the first item depicted in the second image. A second item identifier of the second item is identified, and information associated with the first item identifier and the second item identifier is displayed on a user interface device.
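The overlap test that decides whether to reuse a stored identifier can be sketched with a standard intersection-over-union measure; the box format and the threshold value are assumptions, since the abstract does not specify how overlap is computed:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def carry_over_identifier(first_box, second_box, first_id, threshold=0.9):
    """If the item in the second image overlaps its position in the first
    image by at least the threshold, reuse the stored identifier instead
    of re-identifying the item."""
    return first_id if iou(first_box, second_box) >= threshold else None
```

Reusing the stored identifier avoids re-running identification on an item that has not moved between triggering events.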
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
15.
SYSTEM AND METHOD FOR CAMERA RE-CALIBRATION BASED ON AN UPDATED HOMOGRAPHY
A device for object tracking receives an image from a camera, where the image shows a set of points on a calibration board placed on a platform. The device determines a pixel location array that comprises pixel locations associated with the points in the image. The device determines, by applying a first homography to the pixel location array, a calculated location array identifying calculated physical location coordinates of the set of points in the global plane. The device determines that the difference between a reference location array and the calculated location array is more than a threshold value. In response, the device determines that the camera and/or the platform has moved from a respective initial location when the first homography was determined. The device determines a second homography by multiplying an inverse of the pixel location array by the reference location array and calibrates the camera using the second homography.
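The drift check that precedes recalibration can be sketched as follows; the 3x3 homography layout, the absolute-error measure, and the threshold are illustrative assumptions:

```python
def needs_recalibration(pixel_points, homography, reference_points, threshold):
    """Project calibration-board points through the current homography and
    compare against their known physical coordinates. If the accumulated
    error exceeds the threshold, the camera or platform has likely moved
    and a new homography should be computed.

    pixel_points: list of (px, py) detections in the image.
    homography: 3x3 matrix as nested lists mapping pixels to the plane.
    reference_points: known (x, y) physical coordinates of the points.
    """
    total_error = 0.0
    for (px, py), (rx, ry) in zip(pixel_points, reference_points):
        h = homography
        # Homogeneous projection of the pixel point onto the global plane.
        w = h[2][0] * px + h[2][1] * py + h[2][2]
        cx = (h[0][0] * px + h[0][1] * py + h[0][2]) / w
        cy = (h[1][0] * px + h[1][1] * py + h[1][2]) / w
        total_error += abs(cx - rx) + abs(cy - ry)
    return total_error > threshold
```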
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images of the first item are captured. For each image of the first item, a cropped image is generated including a bounding box around the first item depicted in the image. For each cropped image, a ratio is calculated between the portion of the bounding box area occupied by the first item and the total area of the bounding box. If the ratio equals or exceeds a minimum threshold, an item identifier associated with the first item is identified based on the cropped image. On the other hand, if the ratio is below the threshold, the cropped image is discarded. A particular item identifier is selected based on the set of cropped images that were not discarded.
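The fill-ratio filter can be sketched in a few lines; the tuple layout and the 0.5 minimum ratio are assumptions for illustration:

```python
def filter_crops_by_fill_ratio(crops, min_ratio=0.5):
    """Keep only cropped images in which the item occupies enough of its
    bounding box; mostly-empty crops (grazing views, detection noise) are
    discarded before identification is attempted.

    crops: list of (item_area, box_area, candidate_item_id) tuples.
    Returns the candidate identifiers of the crops that survive.
    """
    kept = []
    for item_area, box_area, item_id in crops:
        if box_area > 0 and item_area / box_area >= min_ratio:
            kept.append(item_id)
    return kept
```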
A system detects a fuel dispensing operation that indicates fuel is being dispensed from the fuel dispensing terminal. The system determines an identifier value associated with a volume of fuel dispensed from the fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with the fuel dispensed from the fuel dispensing terminal by dividing the determined identifier value by a unit parameter. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system communicates an electronic signal to the fuel dispensing terminal that instructs the fuel dispensing terminal to stop dispensing fuel.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing the determined volume for fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
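The volume-per-unit-time check shared by these abstracts can be sketched as follows; the units and the threshold value are assumed for illustration:

```python
def check_flow_rate(dispensed_volume_l, interval_s, threshold_l_per_s):
    """Compute the measured volume-per-unit-time for a dispensing
    operation and decide whether the terminal should be instructed to
    stop (a flow rate below the expected threshold can indicate a faulty
    or tampered meter).

    Returns (measured_rate, should_stop).
    """
    rate = dispensed_volume_l / interval_s
    return rate, rate < threshold_l_per_s
```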
B67D 7/04 - Apparatus or devices for transferring liquids from bulk storage containers or reservoirs into vehicles or into portable containers, e.g. for retail sale purposes for transferring fuels, lubricants or mixed fuels and lubricants
B67D 7/22 - Arrangements of indicators or registers
19.
ANOMALY DETECTION AND CONTROLLING OPERATIONS OF FUEL DISPENSING TERMINAL DURING OPERATIONS
A system detects a fuel dispensing operation that indicates fuel is being dispensed from the fuel dispensing terminal. The system determines an identifier value associated with a volume of fuel dispensed from the fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with the fuel dispensed from the fuel dispensing terminal by dividing the determined identifier value by a unit parameter. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system communicates an electronic signal to the fuel dispensing terminal that instructs the fuel dispensing terminal to stop dispensing fuel.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing the determined volume for fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
A system determines that a fuel dispensing operation may be anomalous. In response, the system accesses fuel inventory data that indicates fuel levels in a fuel tank within a threshold period. The system determines a measured amount of fuel that left the fuel tank based on the fuel inventory data. The system determines a calculated amount of dispensed fuel associated with one or more fuel dispensing operations within the threshold period. The system compares the measured amount of fuel that left the fuel tank with the calculated amount of dispensed fuel. The system determines that the measured amount of fuel that left the fuel tank is more than the calculated amount of dispensed fuel. In response, the system confirms that the fuel dispensing operation is anomalous and communicates an electronic signal to the fuel dispensing terminal that causes the fuel dispensing terminal to stop dispensing fuel.
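The inventory reconciliation step can be sketched as follows; the data layout and the tolerance parameter are illustrative assumptions:

```python
def confirm_anomaly(tank_levels, dispensed_volumes, tolerance=0.0):
    """Compare the drop in tank level over a window with the sum of the
    metered dispensing operations in the same window; if more fuel left
    the tank than was metered, the flagged operation is confirmed as
    anomalous.

    tank_levels: (level_at_start, level_at_end) in litres.
    dispensed_volumes: metered volumes for operations in the window.
    """
    measured_out = tank_levels[0] - tank_levels[1]
    calculated_out = sum(dispensed_volumes)
    return measured_out > calculated_out + tolerance
```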
A reference depth image of a platform is captured using a three-dimensional (3D) sensor. A plurality of secondary depth images are captured, wherein for each secondary depth image, a depth difference parameter is determined by comparing the secondary depth image and the reference depth image. In response to determining that the depth difference parameter has stayed constant at a value higher than zero for a duration of a first time interval, it is determined that an item has been placed on the platform.
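The stability test on the depth difference can be sketched as follows; the per-frame scalar difference, the window length, and the epsilon are assumptions (the abstract does not state how the difference parameter is computed):

```python
def item_placed(depth_diffs, stable_frames=5, eps=0.5):
    """Decide whether an item has been placed on the platform: the depth
    difference against the reference image must hold at a constant
    non-zero value for a full interval. A hand passing over the platform
    produces a changing difference; a placed item produces a stable one.

    depth_diffs: per-frame scalar differences against the reference image.
    """
    if len(depth_diffs) < stable_frames:
        return False
    window = depth_diffs[-stable_frames:]
    # Non-zero and stable within eps over the whole interval.
    return min(window) > eps and max(window) - min(window) <= eps
```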
A device captures an image of a first item and generates a first encoded vector for the image. The device identifies a set of items that have at least one attribute in common with the first item. The device determines the identity of the first item based at least on attributes of the first item. The device determines that a confidence score associated with the identity of the first item is less than a threshold percentage. In response, the device determines a height of the first item. The device identifies item(s) with average heights within a threshold range from the height of the first item. The device compares the first encoded vector with a second encoded vector associated with a second item from the identified item(s). If the first encoded vector corresponds to the second encoded vector, the device determines that the first item corresponds to the second item.
G06T 7/55 - Depth or shape recovery from multiple images
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/40 - Extraction of image or video features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
25.
System and method for identifying moved items on a platform during item identification
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a plurality of first images are captured of the first item and a plurality of cropped first images are generated based on the first images. A first item identifier associated with the first item is identified based on the cropped first images. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a plurality of second images of the first item are captured and a plurality of cropped second images are generated from the second images. In response to determining that the cropped first images match with the cropped second images, the first item identifier is assigned to the first item depicted in the second images.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
26.
System and method for selecting an item from a plurality of identified items by filtering out back images of the items
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images of the first item are captured and a plurality of cropped images are generated based on the images. An item identifier is identified based on each cropped image, wherein each item identifier is associated with a numerical similarity value. Each cropped image is further tagged as a front image or a back image. A particular item identifier identified for a corresponding cropped image tagged as a front image is selected and associated with the first item. An indicator of the particular item identifier is displayed on a user interface device.
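The front-image filtering and selection step can be sketched as follows; the tuple layout and the rule of taking the highest-similarity front crop are assumptions:

```python
def select_front_identifier(candidates):
    """From per-crop identification results, keep only crops tagged as
    front images (back-of-package views carry little distinguishing
    detail) and return the identifier with the highest similarity among
    them.

    candidates: list of (item_id, similarity, is_front) tuples.
    """
    front = [(sim, item_id) for item_id, sim, is_front in candidates if is_front]
    if not front:
        return None
    # Highest similarity wins among the front-tagged crops.
    return max(front)[1]
```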
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on a platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
An item tracking system comprises a plurality of cameras, a memory storing associations between item identifiers of respective items, and a processor configured to capture a plurality of first images of a first item and identify a first item identifier of the first item based on the first images. The processor captures a plurality of second images of a second item, generates a cropped image of the second item from each second image, and identifies an item identifier for each cropped image. Based on the associations stored in the memory, the processor determines that an association exists between the first item identifier of the first item and a second item identifier, and assigns the second item identifier to the second item when at least one of the item identifiers corresponding to the cropped images is the second item identifier.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
31.
SYSTEM AND METHOD FOR ITEM IDENTIFICATION USING CONTAINER-BASED CLASSIFICATION
A device detects a triggering event that corresponds to a placement of an item on a platform. In response, the device captures an image of the item and generates a first encoded vector for the image. The first encoded vector describes one or more attributes of the item. The device determines that the item is associated with a first container category based on the one or more attributes of the item. The device identifies one or more items that have been identified as having been placed inside a container associated with the first container category. The device displays a list of item options that comprises the one or more items on a graphical user interface (GUI). The device receives a selection of a first item from among the list of item options and identifies the first item as being inside the container.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the images. For each cropped image, a first encoded vector is generated and compared to encoded vectors in an encoded vector library that are tagged as a front image. Based on the comparison, a second encoded vector is selected from the encoded vector library that most closely matches with the first encoded vector. An item identifier is identified that is associated with the second encoded vector. A particular item identifier is selected that is identified for a particular cropped image.
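The library comparison can be sketched with cosine similarity; the similarity measure, the vector layout, and the front-image flag are assumptions, since the abstract only states that encoded vectors are compared:

```python
import math

def best_match(query_vec, library):
    """Compare an encoded vector for a crop against library entries tagged
    as front images and return the item identifier of the closest match
    by cosine similarity.

    library: list of (item_id, vector, is_front) entries.
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb)

    # Restrict the search to front-tagged entries, per the abstract.
    front_entries = [(item_id, vec) for item_id, vec, f in library if f]
    return max(front_entries, key=lambda e: cosine(query_vec, e[1]))[0]
```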
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
33.
SYSTEM AND METHOD FOR SPACE SEARCH REDUCTION IN IDENTIFYING ITEMS FROM IMAGES VIA ITEM HEIGHT
A device detects a triggering event that corresponds to a placement of a first item on a platform. In response, the device captures an image of the first item and generates a first encoded vector for the image. The first encoded vector describes one or more attributes of the first item. The device determines a height of the first item. The device identifies one or more items in an encoded vector library that are associated with average heights within a threshold range from the determined height of the first item. The device compares the first encoded vector with a second encoded vector associated with a second item from among the one or more items. The device determines that the first encoded vector corresponds to the second encoded vector. In response, the device determines that the first item corresponds to the second item.
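The height-based search-space reduction can be sketched in a few lines; the library layout and the tolerance are assumed for illustration:

```python
def height_filtered_candidates(library, measured_height, tolerance):
    """Reduce the search space before vector comparison: keep only library
    items whose recorded average height lies within a tolerance of the
    height measured for the item on the platform.

    library: list of (item_id, avg_height) entries.
    """
    return [item_id for item_id, h in library
            if abs(h - measured_height) <= tolerance]
```

Filtering by a cheap physical measurement first means the more expensive encoded-vector comparison only runs against a handful of plausible candidates.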
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images of the first item are captured and a plurality of cropped images are generated based on the images. An item identifier is identified based on each cropped image, wherein each item identifier is associated with a numerical similarity value. The item identifiers associated with the highest and next-highest similarity values are selected. When the difference between the highest and the next-highest similarity values equals or exceeds a threshold, the item identifier associated with the highest similarity value is associated with the first item placed on the platform. An indicator of the item identifier is displayed on a user interface device.
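The margin-based selection rule can be sketched as follows; the margin value and the fallback behaviour (returning no match so the caller can try another selection process) are illustrative assumptions:

```python
def select_by_margin(scored_ids, margin=0.1):
    """Pick the identifier with the highest similarity only when it beats
    the runner-up by at least the margin; otherwise report no confident
    match.

    scored_ids: list of (item_id, similarity) pairs.
    """
    ranked = sorted(scored_ids, key=lambda p: p[1], reverse=True)
    if len(ranked) == 1 or ranked[0][1] - ranked[1][1] >= margin:
        return ranked[0][0]
    return None
```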
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
35.
SYSTEM AND METHOD FOR IDENTIFYING AN ITEM BASED ON INTERACTION HISTORY OF A USER
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a plurality of first images are captured of the first item. An item identifier associated with the first item is identified based on the first images and assigned to the first item. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a plurality of second images are captured of the second item, a plurality of cropped images are generated based on the second images, and a plurality of item identifiers are determined for the second item based on the cropped images. When a process for selecting a particular item identifier from the plurality of item identifiers fails, a second item identifier is assigned to the second item based on an association between the first item identifier and the second item identifier.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
36.
SYSTEM AND METHOD FOR AGGREGATING METADATA FOR ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from the images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item with the aggregated first and second values, where the aggregated values represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
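The aggregation step can be sketched minimally. The abstract does not specify the aggregation function; a mean over per-image measurements is one common choice, used here as an assumption, with dominant hue as a hypothetical feature.

```python
def aggregate_feature(values):
    """Aggregate per-image values of one feature (here: a simple mean)
    so a single representative value describes the item."""
    return sum(values) / len(values)

# e.g. dominant-hue measurements from two images of the same item
hue_img1, hue_img2 = 42.0, 46.0
print(aggregate_feature([hue_img1, hue_img2]))  # 44.0
```

Aggregating across images smooths per-image noise (lighting, pose) before the value is stored against the item.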
G06T 7/90 - Determination of colour characteristics
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
G06F 18/22 - Matching criteria, e.g. proximity measures
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
An apparatus to create a virtual layout of a virtual store to emulate a physical layout of a physical store includes a memory and a processor. The processor receives a physical position and orientation associated with a physical rack located in the physical store. In response, the processor places a virtual rack at a virtual position and orientation on the virtual layout, to represent the physical position and orientation of the physical rack on the physical layout. The processor receives virtual items associated with physical items located on physical shelves of the physical rack. In response, the processor places the virtual items on virtual shelves of the virtual rack, the virtual shelves representing the physical shelves of the physical rack. The processor assigns a rack camera, which captures video that includes the physical rack, to the virtual rack and stores the virtual layout in the memory.
A system includes a longitudinal rack storing a plurality of packs of cigarettes, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a thickness of a pack stored in the rack. Each sensor generates a voltage value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing voltage values generated by the sensors, and a processor configured to monitor voltage values generated by the sensors, detect that a particular sensor has generated a maximum voltage value and determine a number of packs stored in the rack based on a particular number of packs corresponding to the particular sensor.
G06M 9/00 - Counting of objects in a stack thereof
G06Q 10/0875 - Itemisation or classification of parts, supplies or services, e.g. bill of materials
G01R 15/20 - Adaptations providing voltage or current isolation, e.g. for high-voltage or high-current networks using galvano-magnetic devices, e.g. Hall-effect devices
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
A47F 1/12 - Containers with arrangements for dispensing articles dispensing from the side of an approximately horizontal stack
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
41.
System and method for determining an item count in a rack using magnetic sensors
A system includes a longitudinal rack storing a plurality of packs of cigarettes, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a pre-selected thickness. Each sensor generates a value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing values generated by the sensors, and a processor configured to determine a position of the shoe/magnet based on the values and determine a pack count of the packs based on the position of the shoe/magnet.
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
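The counting scheme in the two magnetic-sensor abstracts above can be sketched as follows. Because the sensors are spaced one pack-thickness apart, the index of the sensor reading the peak voltage locates the shoe, and that index maps linearly to a pack count. The readings and the index-to-count mapping are illustrative assumptions.

```python
def count_packs(voltages, full_count):
    """Return remaining packs given per-sensor voltage readings; the
    sensor nearest the magnet reads the highest voltage."""
    shoe_index = max(range(len(voltages)), key=lambda i: voltages[i])
    # assumed convention: shoe at index 0 means the rack is full, and
    # each step toward the dispensing end means one fewer pack remains
    return full_count - shoe_index

readings = [0.12, 0.10, 0.95, 0.11, 0.09]  # magnet over sensor 2
print(count_packs(readings, full_count=10))  # 8
```

Reading a single peak position rather than every slot keeps the circuit board simple: one Hall-effect sensor per pack thickness and an argmax.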
A device is configured to receive a data request that includes an encrypted data element. The device is further configured to identify a data source device associated with the data request, to identify a first encryption key associated with the data source device, and to decrypt the encrypted data element using the first encryption key. The device is further configured to identify a first data processor device associated with receiving the data request, to identify a second encryption key associated with the first data processor device, wherein the second encryption key is different from the first encryption key, and to re-encrypt the decrypted data element. The device is further configured to identify routing instructions associated with the first data processor device and to send the re-encrypted data element to the first data processor device in accordance with the routing instructions.
A data prediction subsystem receives event data indicating amounts of items removed from locations over a previous period of time. An event probability is determined based at least in part on a number of consecutive days without detected item removal events for a first item at a first location and an anticipated item removal amount per day. After determining that the event probability is less than a threshold value, an updated status is determined for the first item at the first location. The updated status is an empty status indicating that the first item is not believed to be present at the first location. Based at least in part on the updated status for the first item at the first location, a prediction value is determined corresponding to a recommended amount of the first item to request for a future time.
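One way to combine a run of zero-removal days with an anticipated daily removal amount is a Poisson-style model: the probability of seeing that many consecutive zero days while the item is actually stocked. The Poisson assumption and the 0.05 threshold are illustrative, not taken from the abstract.

```python
import math

def still_stocked_probability(zero_days, expected_per_day):
    """Under an assumed Poisson demand model, the chance of observing
    `zero_days` consecutive days with no removal events while the item
    is actually present."""
    return math.exp(-expected_per_day * zero_days)

p = still_stocked_probability(zero_days=5, expected_per_day=1.2)
status = "empty" if p < 0.05 else "present"  # assumed threshold
print(status)  # empty
```

As the zero-day run lengthens, the probability decays quickly, so a fast-moving item flips to the empty status after only a few quiet days.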
A data prediction subsystem receives event data indicating amounts of items removed from locations over a previous period of time. For a first day of a first set of event data having zero events or an empty status indicating that a first item is not believed to be present at a first location, longitudinal and cross-sectional components are determined. An anticipated event value for the first item at the first location is determined using the longitudinal component and the cross-sectional component. Based at least in part on the anticipated event value, a prediction value is determined that corresponds to a recommended amount of the first item to request at a future time.
A data prediction subsystem stores hourly event potential data indicating an expected amount of removal events as a function of time of day. Based at least in part on event data, it is determined that a first item associated with a first location has an empty status at a time after a start of a day. For the day, an anticipated event value is determined for the first item at the first location. Using the anticipated event value and the hourly event potential data, a time-adjusted event value is determined. Based at least in part on the time-adjusted event value, a prediction value is determined that corresponds to a recommended amount of the first item to request for a future time.
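The time adjustment described above can be sketched by scaling the full-day anticipated value by the share of the day's event potential that had elapsed when the item went empty. The hourly shares and the scaling rule are assumptions for illustration; the abstract only states that the hourly potential data and the anticipated value are combined.

```python
def time_adjusted_value(anticipated_daily, hourly_potential, empty_hour):
    """Scale the full-day anticipated value by the fraction of the
    day's removal-event potential that elapsed before `empty_hour`."""
    elapsed = sum(hourly_potential[:empty_hour])
    total = sum(hourly_potential)
    return anticipated_daily * (elapsed / total)

# assumed per-hour shares of daily removal events (sums to 1.0):
# no activity overnight, flat demand from 06:00 to 16:00
hourly = [0.0] * 6 + [0.1] * 10 + [0.0] * 8
print(round(time_adjusted_value(20.0, hourly, empty_hour=11), 2))  # 10.0
```

An item that empties at 11:00 has seen only half the day's demand potential, so the day contributes half the anticipated value rather than being counted as a full zero or a full day.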
A data prediction subsystem stores event data indicating amounts of items removed from locations over a previous period of time and event-to-status transition rules for each location. An event is detected at a first location. The detected event is associated with a change in status of a first item. Based on the detected event and the event-to-status transition rules, an anticipated item status is determined for the first item, indicating whether the first item is believed to be present at the first location at a time during the previous period of time of the event data. Based at least in part on the anticipated item status for the first item, a prediction value is determined that corresponds to a recommended amount of the first item to request for a future time.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing the determined volume for fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
B67D 7/14 - Arrangements of devices for controlling, indicating, metering or registering quantity or price of liquid transferred responsive to input of recorded programmed information, e.g. on punched cards
B67D 7/30 - Arrangements of devices for controlling, indicating, metering or registering quantity or price of liquid transferred with means for predetermining quantity of liquid to be transferred
A system for diagnostic analysis of a toilet over time intervals comprises a processor operable to receive a distance measurement from a sensor over a network. The processor is operable to determine an instance of a decrease in a water level in a toilet tank based on a comparison of the received distance measurement to a setpoint and to determine a plurality of instances of the decrease in the water level within a period of time. The processor is operable to calculate a ratio of the determined number of the plurality of instances of the decrease in the water level to a number of instances wherein a door changes from a first position to a second position. The processor is operable to compare the calculated ratio to a threshold ratio and to send an alert to a user device when the calculated ratio is less than the threshold ratio.
G01F 23/00 - Indicating or measuring liquid level or level of fluent solid material, e.g. indicating in terms of volume or indicating by means of an alarm
A system includes first and second sensors, and a computing system. The first sensor measures a first property of a first piece of equipment, and the second sensor measures a second property of a second piece of equipment. The computing system includes a processor and memory, which stores a condition that depends on both the first property and the second property. Satisfaction of the condition indicates that maintenance of the first piece of equipment should be prioritized over maintenance of the second piece of equipment. The processor receives the measured first property and the measured second property. In response to determining, based on the measured first and second properties, that the condition is satisfied, the processor transmits an alert for display on a user device. The alert indicates that maintenance of the first piece of equipment has a higher priority than maintenance of the second piece of equipment.
A system for providing a measure of a number of cups housed within a cup dispenser includes a sensor and a computing system. The cup dispenser includes a body configured to house a stack of cups, and a plunger configured to engage a first cup of the stack of cups and to bias the stack of cups toward a discharge opening defined by a first end of the body. The sensor is coupled to the plunger and is configured to measure a distance from the plunger to a second end of the body, and to transmit the measured distance across a network. The computing system receives the measured distance from the network, and determines, based on the measured distance, the measure of the number of cups housed within the cup dispenser. In response to determining that the number of cups is less than the threshold, the system transmits an alert.
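The distance-to-count conversion can be sketched directly from the geometry: each cup nested onto the stack moves the plunger a fixed increment away from the discharge end, so the measured distance to the opposite end maps linearly to a count. The body length and per-cup increment are hypothetical values.

```python
def cups_remaining(measured_distance, empty_distance, cup_increment):
    """Convert the plunger-to-end distance into a cup count.
    `empty_distance` is the reading with no cups; each nested cup
    reduces the reading by `cup_increment` (units assumed: cm)."""
    return round((empty_distance - measured_distance) / cup_increment)

# assumed dispenser: 30 cm body, 0.5 cm of stack height per nested cup
print(cups_remaining(measured_distance=20.0, empty_distance=30.0,
                     cup_increment=0.5))  # 20
```

A single distance sensor on the plunger thus replaces any per-cup sensing, and the threshold comparison in the abstract operates on the derived count.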
G07F 13/10 - Coin-freed apparatus for controlling dispensing of fluids, semiliquids or granular material from reservoirs with associated dispensing of containers, e.g. cups or other articles
A system includes a sensor and a computing system. The sensor is coupled to a syrup line of the beverage dispenser and is configured to measure a pressure within the line. The syrup line is coupled at a first end to a syrup bag and at a second end to a pump configured to pump syrup from the syrup bag to an outlet of the beverage dispenser. The pump is operated using pressurized gas and generates a pressure corresponding to the pressure of the gas within the syrup line when both the syrup bag is full and the outlet of the beverage dispenser is closed. The sensor transmits a measured pressure to the computing system. The computing system determines that the measured pressure is less than a threshold pressure. In response, the processor transmits an alert for display on a user device, which identifies the syrup bag for replacement.
A system for measuring a fill level of a trash can comprises a processor operable to receive a distance measurement from a network, wherein a sensor communicatively coupled to the processor through the network is operable to determine the distance measurement. The processor is operable to calculate a percentage of waste in the trash can based on the received distance measurement and a difference between a first setpoint and a second setpoint. The processor is operable to determine a threshold for a first period of time based on entity information. The processor is operable to compare the percentage of waste in the trash can to the threshold for the first period of time and to send an alert for display on a user device when the percentage of waste is greater than the threshold for the first period of time.
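The percentage calculation from the two setpoints can be sketched as a linear interpolation: the empty setpoint is the distance reading for an empty can, the full setpoint the reading for a full one, and a smaller reading means a fuller can. The setpoint values below are illustrative assumptions.

```python
def fill_percentage(measured, empty_setpoint, full_setpoint):
    """Linearly interpolate the waste level between the empty and full
    distance setpoints; smaller measured distance means a fuller can."""
    return 100.0 * (empty_setpoint - measured) / (empty_setpoint - full_setpoint)

# assumed sensor mounted in the lid: 100 cm when empty, 20 cm when full
print(fill_percentage(measured=60.0, empty_setpoint=100.0,
                      full_setpoint=20.0))  # 50.0
```

The abstract's time-dependent threshold would then be compared against this percentage, e.g. allowing a higher fill level overnight than during business hours.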
A system for determining an ambient concentration of compositions for bathroom cleaning comprises a processor operable to receive a concentration measurement from a sensor over a network within a first period of time. The processor is operable to compare the received concentration measurement to a first threshold and to a second threshold greater than the first threshold. The processor is operable to instruct a memory communicatively coupled to the processor to store an indication that a bathroom was cleaned in response to a determination that the received concentration measurement is greater than the first threshold and less than the second threshold. The processor is operable to send an alert for display on a user device indicating either that the sensor has been tampered with or that a spill event has occurred in response to a determination that the received concentration measurement is greater than the second threshold.
A system includes one or more memory units and a processor. The processor is configured to receive, from a food temperature probe, a first temperature associated with a first food item. The processor is further configured to receive, from the food temperature probe, a second temperature associated with a second food item. The processor is further configured to receive, from the food temperature probe, a third temperature that was measured by the food temperature probe after measuring the first temperature but before measuring the second temperature, the third temperature associated with a cleaning of the food temperature probe. The processor is further configured to send an alert for display on a user device when the third temperature is greater than a cleaning threshold temperature.
A system includes a door sensor that provides a status of whether a door of a refrigeration system is open or closed, a temperature sensor that measures a temperature of a food compartment of the refrigeration system, a power sensor that measures an amount of power consumed by the refrigeration system, and a compressor sensor that provides acoustic data about a compressor of the refrigeration system. The system further includes a remote computing system configured to send an alert indicating that the refrigeration system needs servicing when the temperature of the food compartment of the refrigeration system is determined to be above a predetermined temperature while: the door of the refrigeration system is closed, the amount of power consumed by the refrigeration system is within a predetermined power range, and acoustic signals of the compressor of the refrigeration system are within a predetermined acoustic range.
A system includes a local device and a remote computing system. The local device is disposed in a toilet paper dispenser and includes a sensor and a first processor. The sensor is configured to measure a distance to a toilet paper roll. The first processor is configured to calculate, using the measured distance to the toilet paper roll, a percentage of toilet paper remaining on the toilet paper roll. The first processor is further configured to transmit, when it is determined that the percentage of toilet paper remaining is less than a minimum threshold, sensor data across a wireless communications network. The remote computing system includes a second processor configured to receive the sensor data and send an alert for display on a user device in response to receiving the sensor data.
An accessory mounting apparatus and system are provided. The system includes a structural framing, an accessory mounting apparatus, and an accessory. The accessory mounting apparatus includes a first component, a second component, and a connector arm. The first component includes a first base, a first rail connector that extends from the first base, and one or more first connectors. The second component includes a second base and a second rail connector that extends from the second base. The connector arm includes a second connector and a mounting arm, and the second connector is coupled to the one or more first connectors. Further, the first rail connector and the second rail connector are configured to engage a first rail slot and a second rail slot of the structural framing, respectively. Additionally, the accessory is coupled to the mounting arm.
F16M 11/06 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting
F16M 13/02 - Other supports for positioning apparatus or articles; Means for steadying hand-held apparatus or articles for supporting on, or attaching to, an object, e.g. tree, gate, window-frame, cycle
B33Y 80/00 - Products made by additive manufacturing
67.
PROACTIVE REQUEST COMMUNICATION SYSTEM WITH IMPROVED DATA PREDICTION USING ARTIFICIAL INTELLIGENCE
An event tracking subsystem detects events for the removal of items at a plurality of locations. A data prediction subsystem receives event data based on the detected events that indicates an amount of an item removed from each of the plurality of locations over a previous period of time. The data prediction subsystem determines a prediction data value for each item at each of the plurality of locations. The prediction data is used to proactively request items with improved communication and computational efficiency.
A data prediction subsystem receives event data indicating an amount of an item removed from each of a plurality of locations over a previous period of time. For each location, prediction data is determined using the event data. The prediction data includes, for each day over a future period of time, a non-integer value indicating an anticipated amount of the item that will be removed from the location. An improved rounding process is used to round the prediction value for each day. The resulting prediction data is used to proactively request items with improved communication and computational efficiency.
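The abstract does not specify the improved rounding process. One scheme consistent with its goal of rounding per-day non-integer predictions without distorting totals is residual-carrying (cascade) rounding, shown here as an assumption: the rounding error of each day is carried into the next so the rounded series tracks the running sum.

```python
def cascade_round(values):
    """Round daily predictions while carrying the rounding residual
    forward so the rounded series preserves the running total."""
    out, carry = [], 0.0
    for v in values:
        target = v + carry
        r = round(target)
        out.append(r)
        carry = target - r
    return out

# naive per-day rounding of five days of 0.4 would give 0 everywhere,
# losing the ~2 units of total demand; carrying the residual does not
print(cascade_round([0.4, 0.4, 0.4, 0.4, 0.4]))  # [0, 1, 0, 1, 0]
```

This matters for slow-moving items, where independent rounding of small daily fractions would systematically under-request stock.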
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06F 7/499 - Denomination or exception handling, e.g. rounding or overflow
G06F 17/18 - Complex mathematical operations for evaluating statistical data
A device configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine the location information for the network device is within a predetermined distance from the location information for the resource, to determine the resource identifier matches the resource identifier for the resource, and to determine a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
A system for selecting delivery mechanisms sends a request to a plurality of servers associated with one or more autonomous delivery mechanisms and one or more non-autonomous delivery mechanisms to provide delivery metadata. The request comprises a pickup location coordinate and a delivery location coordinate. The delivery metadata comprises a delivery time and a delivery quote. The system receives a first set of delivery metadata associated with one or more autonomous delivery mechanisms, and a second set of delivery metadata associated with one or more non-autonomous delivery mechanisms. The system identifies a particular autonomous delivery mechanism from within the category of autonomous delivery mechanisms based on the first set of delivery metadata. The system identifies a particular non-autonomous delivery mechanism from within the category of non-autonomous delivery mechanisms based on the second set of delivery metadata.
A system receives content of a memory resource. The system compares the content of the memory resource with a first resource data associated with a first physical space, and with a second resource data associated with a second physical space. The system determines which of the first physical space and the second physical space can fulfill more than a threshold percentage of objects in the memory resource based on the comparison between the content of the memory resource with the first and second resource data. The system determines that the first physical space can fulfill more than the threshold percentage of objects in the memory resource. The system assigns the first physical space to the memory resource for concluding an operation associated with the memory resource.
A system sends a drop-off location coordinate to a server associated with a delivery mechanism. The system receives, from the server, a hyperlink that, when accessed, causes the drop-off location coordinate to be displayed on a virtual map. The system links the hyperlink to an adjust drop-off location element, such that when the adjust drop-off location element is accessed, the virtual map is displayed within a delivery user interface. The system integrates the adjust drop-off location element into the delivery user interface such that it is accessible from within the delivery user interface.
A system generates a set of instructions for preparing content of a memory resource that comprises a set of objects in a particular sequence. The system sends the content of the memory resource and the set of instructions to a user device. The system receives, from the user device, a first message that indicates the set of objects is being prepared. The system sends, to a server associated with a delivery vehicle, a second message to alert the delivery vehicle to pick up the set of objects from a pickup location coordinate and deliver them to a delivery location coordinate. The system receives, from the server, an alert message that indicates the delivery vehicle has reached the pickup location coordinate. The system forwards the alert message to the user device.
A system presents a plurality of objects on a delivery user interface. The system updates content of a memory resource as one or more objects are added to the memory resource. In response to determining that the content of the memory resource is finalized, the system determines a selection of a particular autonomous delivery mechanism and a particular non-autonomous delivery mechanism based on filtering conditions associated with the content of the memory resource. The system presents the selection of the particular autonomous delivery mechanism and the particular non-autonomous delivery mechanism on a delivery user interface. The system determines that a delivery mechanism is selected from the selection. The system receives one or more status updates associated with the selected delivery mechanism, and displays the one or more status updates on the delivery user interface.
A system presents, on a user interface, a first message that indicates an operation associated with a memory resource is concluded. The system presents, on the user interface, content of the memory resource that comprises a plurality of objects. The system presents, on the user interface, a set of instructions to prepare the plurality of objects in a particular sequence. The system receives a second message that indicates the plurality of objects is ready for pickup by a delivery mechanism. The system presents, on the user interface, an alert message that indicates the delivery mechanism has reached a pickup location coordinate. If the category of the delivery mechanism is an autonomous delivery mechanism, the system presents, on the user interface, a pin number that unlocks the autonomous delivery mechanism.
A device configured to receive a rack identifier for a rack that is configured to hold items. The device is further configured to identify a master template that is associated with the rack. The device is further configured to receive images of the plurality of items on the rack and to combine the images into a composite image of the rack. The device is further configured to identify shelves on the rack within the composite image and to generate bounding boxes that correspond with an item on the rack. The device is further configured to associate each bounding box with an item identifier and an item location. The device is further configured to generate a rack analysis message based on a comparison of the item locations for each bounding box and the rack positions from the master template and to output the rack analysis message.
A device configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine that the location information for the network device is within a predetermined distance from the location information for the resource, to determine that the resource identifier from the reservation verification request matches the resource identifier for the resource, and to determine that a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
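The three verification checks described above (proximity, identifier match, time window) can be sketched as follows. This is a minimal illustration, not the patented implementation; the dictionary field names, the 50 m default distance, and the equirectangular distance approximation are all assumptions introduced here.

```python
import math

def verify_reservation(allocation, request, max_distance_m=50.0):
    """Check a reservation verification request against a stored allocation.

    allocation: dict with 'lat', 'lon', 'resource_id', 'start', 'end'
    request:    dict with 'lat', 'lon', 'resource_id', 'time'
    (Field names are illustrative, not taken from the abstract.)
    """
    # Approximate ground distance between device and resource in metres
    # (equirectangular approximation, adequate for short distances).
    dlat = math.radians(request["lat"] - allocation["lat"])
    dlon = math.radians(request["lon"] - allocation["lon"])
    mean_lat = math.radians((request["lat"] + allocation["lat"]) / 2)
    dist = 6_371_000 * math.hypot(dlat, dlon * math.cos(mean_lat))

    return (
        dist <= max_distance_m                                        # proximity check
        and request["resource_id"] == allocation["resource_id"]       # identifier match
        and allocation["start"] <= request["time"] <= allocation["end"]  # within interval
    )
```

All three conditions must hold before access instructions would be generated.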
A device configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
A system for updating a training dataset of an item identification model determines that an item is not included in a training dataset. In response to determining that the item is not included in the training dataset, the system obtains an identifier of the item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system captures images of the item. The system extracts a set of features associated with the item from the images. The system associates the item to the identifier and the set of features. The system adds a new entry to the training dataset, where the new entry represents the item labeled with the identifier and the set of features.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
H04N 13/207 - Image signal generators using stereoscopic image cameras using a single 2D image sensor
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
94.
System and method for capturing images for training of an item identification model
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item to the aggregated first value and the second value, where the aggregated first value and the second value represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
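The aggregation step above, where per-image values of one feature are combined into a single representative value, can be sketched as element-wise averaging. The abstract does not specify the aggregation function; averaging is one plausible choice assumed here.

```python
import numpy as np

def aggregate_feature(values):
    """Combine per-image values of one feature (e.g. a colour descriptor
    extracted from several views of the same item) into one representative
    value by element-wise averaging.

    values: list of 1-D numpy arrays, one per image, all the same length.
    """
    # Stack into shape (num_images, feature_dim), then average over images.
    return np.mean(np.stack(values), axis=0)
```

The resulting vector would be stored with the item's training entry in place of the individual per-image values.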
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
G06F 18/22 - Matching criteria, e.g. proximity measures
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
A device configured to capture a first overhead depth image of a platform using a three-dimensional (3D) sensor at a first time instance and a second overhead depth image of a first object using the 3D sensor at a second time instance. The device is further configured to determine that a first portion of the first object is within a region-of-interest and a second portion of the first object is outside the region-of-interest in the second overhead depth image. The device is further configured to capture a third overhead depth image of a second object placed on the platform using the 3D sensor at a third time instance. The device is further configured to capture a first image of the second object using a camera in response to determining that the first object is outside of the region-of-interest and the second object is within the region-of-interest for the platform.
A device configured to detect a triggering event corresponding with a user placing a first item on the platform, to capture a first image of the first item on the platform using a camera, and to input the first image into a machine learning model that is configured to output a first encoded vector based on features of the first item that are present in the first image. The device is further configured to identify a second encoded vector in an encoded vector library that most closely matches the first encoded vector and to identify a first item identifier in the encoded vector library that is associated with the second encoded vector. The device is further configured to identify the user, to identify an account that is associated with the user, and to associate the first item identifier with the account of the user.
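The lookup described above, matching a freshly encoded vector against a library of stored vectors, can be sketched as a nearest-neighbour search. Cosine similarity is an assumption here; the abstract says only "most closely matches", without fixing a metric.

```python
import numpy as np

def nearest_item(query_vec, vector_library):
    """Return the item identifier whose stored encoded vector is most
    similar to the query vector under cosine similarity.

    vector_library: dict mapping item_id -> 1-D numpy array.
    """
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    # Pick the library entry with the highest similarity to the query.
    return max(vector_library, key=lambda item_id: cosine(query_vec, vector_library[item_id]))
```

The returned identifier would then be associated with the user's account as the abstract describes.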
A device configured to receive first point cloud data for a first object, to identify a first plurality of data points for the first object within the first point cloud data, and to extract the first plurality of data points from the first point cloud data. The device is further configured to receive second point cloud data for the first object, to identify a second plurality of data points for the first object within the second point cloud data, and to extract the second plurality of data points from the second point cloud data. The device is further configured to merge the first plurality of data points and the second plurality of data points to generate combined point cloud data and to determine dimensions for the first object based on the combined point cloud data.
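The merge-then-measure step above can be sketched as stacking the two extracted point sets and taking the extents of the combined axis-aligned bounding box. Using an axis-aligned box is an assumption; the abstract does not say how dimensions are computed from the combined cloud.

```python
import numpy as np

def merged_dimensions(cloud_a, cloud_b):
    """Merge two (N, 3) point clouds for the same object and return the
    (dx, dy, dz) extents of the combined axis-aligned bounding box."""
    merged = np.vstack([cloud_a, cloud_b])       # combined point cloud data
    return tuple(merged.max(axis=0) - merged.min(axis=0))
```

Merging clouds captured from different viewpoints fills in points occluded in either single capture, so the combined extents are a better estimate than either cloud alone.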
A system for refining an item identification model detects a triggering event at a platform, where the triggering event corresponds to a user placing an item on the platform. The system captures images of the item. The system extracts a set of features from at least one of the images. The system identifies the item based on the set of features. The system receives an indication that the item is not identified correctly. The system receives an identifier of the item. The system identifies the item based on the identifier of the item. The system feeds the identifier of the item and the images to the item identification model. The system retrains the item identification model to learn to associate the item to the images. The system updates the set of features based on the determined association between the item and the images.
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06T 7/90 - Determination of colour characteristics
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
A device configured to identify a first pixel location within a first plurality of pixels corresponding with an item in a first image and to apply a first homography to the first pixel location to determine a first (x,y) coordinate. The device is further configured to identify a second pixel location within a second plurality of pixels corresponding with the item in a second image and to apply a second homography to the second pixel location to determine a second (x,y) coordinate. The device is further configured to determine that the distance between the first (x,y) coordinate and the second (x,y) coordinate is less than or equal to a distance threshold value, to associate the first plurality of pixels and the second plurality of pixels with a cluster for the item, and to output the first plurality of pixels and the second plurality of pixels.
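The homography-and-threshold test above can be sketched as follows: each camera's pixel location is projected into a shared plane coordinate frame, and two detections are clustered as the same item when their projected coordinates lie within the threshold. The function names and the 0.1 default threshold are illustrative assumptions.

```python
import numpy as np

def to_world(h, pixel):
    """Map a pixel (u, v) to an (x, y) plane coordinate using a 3x3
    homography h (homogeneous coordinates, divided through by w)."""
    u, v = pixel
    x, y, w = h @ np.array([u, v, 1.0])
    return np.array([x / w, y / w])

def same_item(h1, px1, h2, px2, dist_threshold=0.1):
    """True when two detections from different cameras project to nearby
    plane coordinates and can be clustered as one item."""
    return bool(np.linalg.norm(to_world(h1, px1) - to_world(h2, px2)) <= dist_threshold)
```

With per-camera homographies calibrated to the same reference plane, this check lets independent camera detections vote on a single physical item.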