A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first person or a second person may be associated with the event. In response to determining that the first or second person may be associated with the event, buffer frames are stored of top-view images generated by the sensor during a time period associated with the event. The tracking subsystem then determines, using at least one of the stored buffer frames and a first action-detection algorithm, whether an action associated with the event was performed by the first person or the second person.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G08B 13/14 - Mechanical actuation by lifting or attempted removal of hand-portable articles
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
3.
SCALABLE POSITION TRACKING SYSTEM FOR TRACKING POSITION IN LARGE SPACES
A weight sensor includes a plurality of load cells. A first load cell is configured to produce a first electric current based on a force experienced by the first load cell. A second load cell is configured to produce a second electric current based on a force experienced by the second load cell. A third load cell is configured to produce a third electric current based on a force experienced by the third load cell. And a fourth load cell is configured to produce a fourth electric current based on a force experienced by the fourth load cell.
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to receive the frame, to determine a pixel location for a first person, and to determine that the first person is within a predefined zone associated with the rack. The tracking system is further configured to identify a plurality of items in a digital cart associated with the first person, to identify an item from the digital cart, and to remove the identified item from the digital cart.
A system includes a first sensor configured to generate images of at least a first portion of a space. A processor of the system is configured to determine a position of a possible object in the space based on generated images.
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
G01G 19/52 - Weighing apparatus combined with other objects, e.g. with furniture
A system includes a longitudinal rack storing a plurality of items, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a thickness of an item stored in the rack. Each sensor generates a voltage value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing voltage values generated by the sensors, and a processor configured to monitor voltage values generated by the sensors, detect that a particular sensor has generated a maximum voltage value and determine a number of items stored in the rack based on a particular number of items corresponding to the particular sensor.
G06M 9/00 - Counting of objects in a stack thereof
A47F 1/12 - Containers with arrangements for dispensing articles dispensing from the side of an approximately horizontal stack
G01R 15/20 - Adaptations providing voltage or current isolation, e.g. for high-voltage or high-current networks using galvano-magnetic devices, e.g. Hall-effect devices
G06Q 10/0875 - Itemisation or classification of parts, supplies or services, e.g. bill of materials
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
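The counting step described above reduces to locating the peak-voltage sensor and reading off the item count that sensor position corresponds to. A minimal Python sketch, assuming a hypothetical `items_per_sensor` calibration table mapping each sensor index to a count (not part of the source):

```python
def estimate_item_count(voltages, items_per_sensor):
    """Return the item count for the sensor reading the peak voltage.

    voltages: one voltage reading per sensor along the rack.
    items_per_sensor: hypothetical calibration table mapping each sensor
    index to the number of items it represents.
    """
    # The magnet (and thus the shoe) sits closest to the sensor with
    # the maximum voltage.
    peak_index = max(range(len(voltages)), key=lambda i: voltages[i])
    return items_per_sensor[peak_index]
```

With readings `[0.1, 0.9, 0.2]` the magnet is nearest the second sensor, so the count calibrated for that position is returned.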
7.
ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A device is configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing the determined volume for fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
B67D 7/04 - Apparatus or devices for transferring liquids from bulk storage containers or reservoirs into vehicles or into portable containers, e.g. for retail sale purposes for transferring fuels, lubricants or mixed fuels and lubricants
B67D 7/22 - Arrangements of indicators or registers
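The flow-rate check in the abstracts above amounts to dividing a dispensed volume by a time quantity and comparing the result against a threshold. A minimal sketch, with hypothetical names and units (litres, seconds) not taken from the source:

```python
def flow_rate_is_anomalous(volume_dispensed, interaction_seconds, threshold_rate):
    """Return True when the measured flow rate falls below the threshold.

    volume_dispensed: fuel volume for the operation (hypothetical litres).
    interaction_seconds: duration of the dispensing operation.
    threshold_rate: expected minimum volume-per-unit-time.
    """
    # Measured volume-per-unit-time parameter for the operation.
    measured_rate = volume_dispensed / interaction_seconds
    # Per the abstracts, a rate below the threshold triggers a response
    # (stop dispensing, or retrieve and archive the video feed).
    return measured_rate < threshold_rate
```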
10.
ANOMALY DETECTION AND CONTROLLING OPERATIONS OF FUEL DISPENSING TERMINAL DURING OPERATIONS
A system detects a fuel dispensing operation that indicates fuel is being dispensed from the fuel dispensing terminal. The system determines an identifier value associated with a volume of fuel dispensed from the fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with the fuel dispensed from the fuel dispensing terminal by dividing the determined identifier value by a unit parameter. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system communicates an electronic signal to the fuel dispensing terminal that instructs the fuel dispensing terminal to stop dispensing fuel.
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a first image of the platform is captured. A first item identifier of the first item is identified and stored in a memory. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a second image of the platform is captured. The second image is compared with the first image. Upon determining that the first item depicted in the second image overlaps with the first item depicted in the first image and the overlap equals or exceeds a threshold, the first item identifier is assigned to the first item depicted in the second image. A second item identifier of the second item is identified, and information associated with the first item identifier and the second item identifier is displayed on a user interface device.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
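The overlap test in the abstract above can be sketched as a bounding-box intersection ratio; the `(x1, y1, x2, y2)` box layout and the 0.8 default threshold are illustrative assumptions, not values from the source:

```python
def overlap_ratio(box_a, box_b):
    """Ratio of the intersection area to box_a's own area.

    Boxes are (x1, y1, x2, y2) in pixel coordinates.
    """
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    return inter / area_a if area_a else 0.0

def assign_identifier(prev_box, prev_id, new_box, threshold=0.8):
    # Reuse the stored identifier when the item has not moved
    # appreciably between the first and second images.
    return prev_id if overlap_ratio(prev_box, new_box) >= threshold else None
```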
12.
SYSTEM AND METHOD FOR CAMERA RE-CALIBRATION BASED ON AN UPDATED HOMOGRAPHY
A device for object tracking receives an image from a camera, where the image shows a set of points on a calibration board placed on a platform. The device determines a pixel location array that comprises pixel locations associated with the points in the image. The device determines, by applying a first homography to the pixel location array, a calculated location array identifying calculated physical location coordinates of the set of points in the global plane. The device determines that the difference between a reference location array and the calculated location array is more than a threshold value. In response, the device determines that the camera and/or the platform has moved from a respective initial location when the first homography was determined. The device determines a second homography by multiplying an inverse of the pixel location array by the reference location array and calibrates the camera using the second homography.
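A rough sketch of the recalibration flow above, assuming the point sets are stored as arrays of homogeneous pixel coordinates and using a least-squares pseudo-inverse for the "inverse of the pixel location array"; array shapes and the tolerance are assumptions:

```python
import numpy as np

def needs_recalibration(pixel_pts, H, reference_pts, tol=1.0):
    """Project pixel points with the current homography and compare
    against the reference locations in the global plane.

    pixel_pts: (N, 3) homogeneous pixel coordinates of the board points.
    reference_pts: (N, 2) known physical locations of those points.
    """
    projected = pixel_pts @ H.T
    projected = projected[:, :2] / projected[:, 2:3]  # dehomogenize
    return np.abs(projected - reference_pts).max() > tol

def recompute_homography(pixel_pts, reference_pts):
    # Least-squares solve, as the abstract describes: pseudo-inverse of
    # the pixel location array times the reference location array.
    return np.linalg.pinv(pixel_pts) @ reference_pts
```

When the camera and platform have not moved, the projected and reference locations agree and no new homography is computed.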
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images of the first item are captured. For each image of the first item, a cropped image is generated including a bounding box around the first item depicted in the image. For each cropped image, a ratio is calculated between a portion of a total area within the bounding box occupied by the first item and the total area. If the ratio equals or exceeds a minimum threshold, an item identifier associated with the first item is identified based on the cropped image. On the other hand, if the ratio is below the threshold, the cropped image is discarded. A particular item identifier is selected from a set of cropped images that were not discarded.
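The ratio filter above can be sketched directly; the `(item_area, box_area)` pixel-count pairs and the 0.5 default minimum are hypothetical:

```python
def filter_crops(crops, min_ratio=0.5):
    """Keep cropped images whose item fills enough of its bounding box.

    crops: list of (item_area, box_area) pairs, where item_area is the
    portion of the box occupied by the item (hypothetical pixel counts).
    """
    kept = []
    for item_area, box_area in crops:
        ratio = item_area / box_area if box_area else 0.0
        if ratio >= min_ratio:  # discard crops dominated by background
            kept.append((item_area, box_area))
    return kept
```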
A system determines that a fuel dispensing operation may be anomalous. In response, the system accesses fuel inventory data that indicates fuel levels in a fuel tank within a threshold period. The system determines a measured amount of fuel that left the fuel tank based on the fuel inventory data. The system determines a calculated amount of dispensed fuel associated with one or more fuel dispensing operations within the threshold period. The system compares the measured amount of fuel that left the fuel tank with the calculated amount of dispensed fuel. The system determines that the measured amount of fuel that left the fuel tank is more than the calculated amount of dispensed fuel. In response, the system confirms that the fuel dispensing operation is anomalous and communicates an electronic signal to the fuel dispensing terminal that causes the fuel dispensing terminal to stop dispensing fuel.
A reference depth image of a platform is captured using a three-dimensional (3D) sensor. A plurality of secondary depth images are captured, wherein for each secondary depth image, a depth difference parameter is determined by comparing the secondary depth image and the reference depth image. In response to determining that the depth difference parameter has stayed constant at a value higher than zero for a duration of a first time interval, it is determined that an item has been placed on the platform.
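The placement test above reduces to checking that the per-frame depth difference is nonzero and has stayed stable over a window of frames; the window length and `epsilon` tolerance below are assumptions:

```python
def item_placed(depth_diffs, interval, epsilon=1e-3):
    """Return True when the depth difference has held a constant nonzero
    value for `interval` consecutive frames.

    depth_diffs: per-frame depth difference parameters, each computed by
    comparing a secondary depth image against the reference depth image.
    """
    if len(depth_diffs) < interval:
        return False
    window = depth_diffs[-interval:]
    # Constant (within tolerance) and strictly above zero.
    return min(window) > 0 and max(window) - min(window) < epsilon
```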
A device captures an image of a first item and generates a first encoded vector for the image. The device identifies a set of items that have at least one attribute in common with the first item. The device determines the identity of the first item based at least on attributes of the first item. The device determines that a confidence score associated with the identity of the first item is less than a threshold percentage. In response, the device determines a height of the first item. The device identifies one or more items with average heights within a threshold range from the height of the first item. The device compares the first encoded vector with a second encoded vector associated with a second item from among the identified items. If the first encoded vector corresponds to the second encoded vector, the device determines that the first item corresponds to the second item.
G06T 7/55 - Depth or shape recovery from multiple images
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/40 - Extraction of image or video features
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
18.
System and method for identifying moved items on a platform during item identification
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a plurality of first images are captured of the first item and a plurality of cropped first images are generated based on the first images. A first item identifier associated with the first item is identified based on the cropped first images. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a plurality of second images of the first item are captured and a plurality of cropped second images are generated from the second images. In response to determining that the cropped first images match with the cropped second images, the first item identifier is assigned to the first item depicted in the second images.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
19.
System and method for selecting an item from a plurality of identified items by filtering out back images of the items
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the images. An item identifier is identified based on each cropped image, wherein each item identifier is associated with a numerical similarity value. Each cropped image is further tagged as a front image or a back image. A particular item identifier identified for a corresponding cropped image tagged as a front image is selected and associated with the first item. An indicator of the particular item identifier is displayed on a user interface device.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on a platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
An item tracking system comprises a plurality of cameras, a memory storing associations between item identifiers of respective items, and a processor configured to capture a plurality of first images of a first item and identify a first item identifier of the first item based on the first images. The processor captures a plurality of second images of a second item, generates a cropped image of the second item from each second image, and identifies an item identifier for each cropped image. Based on the associations stored in the memory, the processor determines that an association exists between the first item identifier of the first item and a second item identifier, and assigns the second item identifier to the second item when at least one of the item identifiers corresponding to the cropped images is the second item identifier.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
23.
SYSTEM AND METHOD FOR ITEM IDENTIFICATION USING CONTAINER-BASED CLASSIFICATION
A device detects a triggering event that corresponds to a placement of an item on a platform. In response, the device captures an image of the item and generates a first encoded vector for the image. The first encoded vector describes one or more attributes of the item. The device determines that the item is associated with a first container category based on the one or more attributes of the item. The device identifies one or more items that have been identified as having been placed inside a container associated with the first container category. The device displays a list of item options that comprises the one or more items on a graphical user interface (GUI). The device receives a selection of a first item from among the list of item options and identifies the first item as being inside the container.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the images. For each cropped image, a first encoded vector is generated and compared to encoded vectors in an encoded vector library that are tagged as a front image. Based on the comparison, a second encoded vector is selected from the encoded vector library that most closely matches with the first encoded vector. An item identifier is identified that is associated with the second encoded vector. A particular item identifier is selected that is identified for a particular cropped image.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
25.
SYSTEM AND METHOD FOR SPACE SEARCH REDUCTION IN IDENTIFYING ITEMS FROM IMAGES VIA ITEM HEIGHT
A device detects a triggering event that corresponds to a placement of a first item on a platform. In response, the device captures an image of the first item and generates a first encoded vector for the image. The first encoded vector describes one or more attributes of the first item. The device determines a height of the first item. The device identifies one or more items in an encoded vector library that are associated with average heights within a threshold range from the determined height of the first item. The device compares the first encoded vector with a second encoded vector associated with a second item from among the one or more items. The device determines that the first encoded vector corresponds to the second encoded vector. In response, the device determines that the first item corresponds to the second item.
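The height-based search reduction above can be sketched as a pre-filter on an encoded-vector library followed by a similarity comparison; the library layout, cosine similarity, and the thresholds are hypothetical choices, not details from the source:

```python
import math

def identify_by_height(query_vec, query_height, library,
                       height_tol=2.0, min_sim=0.9):
    """Identify an item, searching only entries of similar height.

    library: list of (item_id, avg_height, encoded_vector) tuples
    (hypothetical encoded vector library).
    """
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb) if na and nb else 0.0

    # Search space reduction: keep only items whose average height is
    # within the tolerance of the measured height.
    candidates = [e for e in library if abs(e[1] - query_height) <= height_tol]
    best = max(candidates, key=lambda e: cosine(query_vec, e[2]), default=None)
    if best and cosine(query_vec, best[2]) >= min_sim:
        return best[0]
    return None
```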
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the images. An item identifier is identified based on each cropped image, wherein each item identifier is associated with a numerical similarity value. The item identifiers associated with the highest and next highest similarity values are selected. When a difference between the highest and next highest similarity values equals or exceeds a threshold, the item identifier associated with the highest similarity value is associated with the first item placed on the platform. An indicator of the item identifier is displayed on a user interface device.
G06F 16/583 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
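The selection rule above, which accepts the top match only when it beats the runner-up by a margin, can be sketched as follows; the margin value is an illustrative assumption:

```python
def select_identifier(candidates, margin=0.1):
    """Pick an item identifier by similarity margin.

    candidates: list of (item_id, similarity) pairs, one per cropped
    image. Returns the top identifier only when it exceeds the
    runner-up by at least `margin`; otherwise None (ambiguous result).
    """
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0]
    best, second = ranked[0], ranked[1]
    return best[0] if best[1] - second[1] >= margin else None
```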
27.
SYSTEM AND METHOD FOR IDENTIFYING AN ITEM BASED ON INTERACTION HISTORY OF A USER
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a plurality of first images are captured of the first item. An item identifier associated with the first item is identified based on the first images and assigned to the first item. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a plurality of second images are captured of the second item, a plurality of cropped images are generated based on the second images, and a plurality of item identifiers are determined for the second item based on the cropped images. When a process for selecting a particular item identifier from the plurality of item identifiers fails, a second item identifier is assigned to the second item based on an association between the first item identifier and the second item identifier.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - Labelling scene content, e.g. deriving syntactic or semantic representations
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
28.
SYSTEM AND METHOD FOR AGGREGATING METADATA FOR ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item to the aggregated first value and the second value, where the aggregated first value and the second value represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
G06T 7/90 - Determination of colour characteristics
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
G06F 18/22 - Matching criteria, e.g. proximity measures
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
29.
TOOL FOR GENERATING A VIRTUAL STORE THAT EMULATES A PHYSICAL STORE
An apparatus to create a virtual layout of a virtual store to emulate a physical layout of a physical store includes a memory and a processor. The processor receives a physical position and orientation associated with a physical rack located in the physical store. In response, the processor places a virtual rack at a virtual position and orientation on the virtual layout, to represent the physical position and orientation of the physical rack on the physical layout. The processor receives virtual items associated with physical items located on physical shelves of the physical rack. In response, the processor places the virtual items on virtual shelves of the virtual rack, the virtual shelves representing the physical shelves of the physical rack. The processor assigns a rack camera, which captures video that includes the physical rack, to the virtual rack and stores the virtual layout in the memory.
A system includes a longitudinal rack storing a plurality of packs of cigarettes, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a thickness of a pack stored in the rack. Each sensor generates a voltage value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing voltage values generated by the sensors, and a processor configured to monitor voltage values generated by the sensors, detect that a particular sensor has generated a maximum voltage value and determine a number of packs stored in the rack based on a particular number of packs corresponding to the particular sensor.
G06M 9/00 - Counting of objects in a stack thereof
G06Q 10/0875 - Itemisation or classification of parts, supplies or services, e.g. bill of materials
G01R 15/20 - Adaptations providing voltage or current isolation, e.g. for high-voltage or high-current networks using galvano-magnetic devices, e.g. Hall-effect devices
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
A47F 1/12 - Containers with arrangements for dispensing articles dispensing from the side of an approximately horizontal stack
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
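The counting scheme in the entry above — sensor spacing equal to one pack thickness, with the sensor nearest the magnet reading highest — can be sketched as follows. The calibration (sensor k sits where the shoe rests when k packs remain) and all values are illustrative assumptions, not details from the abstract.

```python
# Hedged sketch: each sensor k sits at the position the shoe occupies when
# k packs remain, so the sensor generating the maximum voltage directly
# indexes the pack count. Readings below are illustrative.

def pack_count(voltages):
    """Return the number of packs implied by the sensor with the peak voltage.

    voltages[k] is the reading of the sensor located where the shoe rests
    when k packs are stored (an assumed calibration).
    """
    return max(range(len(voltages)), key=lambda k: voltages[k])

# Example: magnet sitting over sensor 7 -> 7 packs remain.
readings = [0.1] * 12
readings[7] = 2.4  # sensor nearest the magnet generates the maximum voltage
print(pack_count(readings))  # -> 7
```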
31.
System and method for determining an item count in a rack using magnetic sensors
A system includes a longitudinal rack storing a plurality of packs of cigarettes, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a pre-selected thickness. Each sensor generates a value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing values generated by the sensors, and a processor configured to determine a position of the shoe/magnet based on the values and determine a pack count of the packs based on the position of the shoe/magnet.
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member to another variable where the form or nature of the sensing member does not constrain the means for converting; Transducers not specially adapted for a specific variable using electric or magnetic means influencing the magnitude of a current or voltage
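This variant determines the shoe/magnet position from the sensor values before converting to a count. A voltage-weighted centroid is one plausible interpolation; the geometry (packs stacked from the rack front, shoe resting on top) and all numbers are assumptions for illustration.

```python
def shoe_position(voltages, spacing):
    """Estimate the shoe/magnet position along the rack as a voltage-weighted
    centroid of the sensor positions (an assumed interpolation)."""
    total = sum(voltages)
    return sum(k * spacing * v for k, v in enumerate(voltages)) / total

def pack_count(voltages, spacing, pack_thickness):
    # Assumed geometry: the shoe rests on top of the stack, so dividing its
    # position by the pre-selected thickness gives the pack count.
    return round(shoe_position(voltages, spacing) / pack_thickness)

# Example: sensor spacing equals the pre-selected thickness of 8.5 mm, and
# the magnet sits between sensors 4 and 5, closer to sensor 5.
v = [0.0] * 10
v[4], v[5] = 1.0, 3.0
print(pack_count(v, 8.5, 8.5))  # -> 5
```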
A device is configured to receive a data request that includes an encrypted data element. The device is further configured to identify a data source device associated with the data request, to identify a first encryption key associated with the data source device, and to decrypt the encrypted data element using the first encryption key. The device is further configured to identify a first data processor device associated with receiving the data request, to identify a second encryption key associated with the first data processor device, wherein the second encryption key is different from the first encryption key, and to re-encrypt the decrypted data element. The device is further configured to identify routing instructions associated with the first data processor device and to send the re-encrypted data element to the first data processor device in accordance with the routing instructions.
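The decrypt-and-re-encrypt hop described above can be sketched as follows. A toy XOR "cipher" stands in for a real encryption algorithm, and the key tables, routing entries, and field names are illustrative assumptions, not details from the abstract.

```python
# Hedged sketch of the re-encryption relay: decrypt with the data source's
# key, re-encrypt with the data processor's (different) key, then route per
# the processor's routing instructions. XOR is a placeholder, NOT real crypto.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

SOURCE_KEYS = {"source-A": b"k1"}      # first encryption key per data source
PROCESSOR_KEYS = {"proc-1": b"k2"}     # second, different key per processor
ROUTES = {"proc-1": "10.0.0.5:9000"}   # routing instructions per processor

def relay(request):
    plaintext = xor_crypt(request["payload"], SOURCE_KEYS[request["source"]])
    re_encrypted = xor_crypt(plaintext, PROCESSOR_KEYS[request["processor"]])
    return ROUTES[request["processor"]], re_encrypted

req = {"source": "source-A", "processor": "proc-1",
       "payload": xor_crypt(b"record-42", b"k1")}
dest, blob = relay(req)
print(dest, xor_crypt(blob, b"k2"))  # -> 10.0.0.5:9000 b'record-42'
```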
A data prediction subsystem receives event data indicating amounts of items removed from locations over a previous period of time. An event probability is determined based at least in part on a number of consecutive days without detected item removal events for a first item at a first location and an anticipated item removal amount per day. After determining that the event probability is less than a threshold value, an updated status is determined for the first item at the first location. The updated status is an empty status indicating that the first item is not believed to be present at the first location. Based at least in part on the updated status for the first item at the first location, a prediction value is determined corresponding to a recommended amount of the first item to request for a future time.
A data prediction subsystem receives event data indicating amounts of items removed from locations over a previous period of time. For a first day of the event data having zero events, or an empty status indicating that a first item is not believed to be present at a first location, longitudinal and cross-sectional components are determined. An anticipated event value for the first item at the first location is determined using the longitudinal component and the cross-sectional component. Based at least in part on the anticipated event value, a prediction value is determined that corresponds to a recommended amount of the first item to request at a future time.
A data prediction subsystem stores hourly event potential data indicating an expected amount of removal events as a function of time of day. Based at least in part on event data, it is determined that a first item associated with a first location has an empty status at a time after a start of a day. For the day, an anticipated event value is determined for the first item at the first location. Using the anticipated event value and the hourly event potential data, a time-adjusted event value is determined. Based at least in part on the time-adjusted event value, a prediction value is determined that corresponds to a recommended amount of the first item to request for a future time.
A data prediction subsystem stores event data indicating amounts of items removed from locations over a previous period of time and event-to-status transition rules for each location. An event is detected at a first location. The detected event is associated with a change in status of a first item. Based on the detected event and the event-to-status transition rules, an anticipated item status is determined for the first item, indicating whether the first item is believed to be present at the first location at a time during the previous period of time of the event data. Based at least in part on the anticipated item status for the first item, a prediction value is determined that corresponds to a recommended amount of the first item to request for a future time.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing a determined volume of dispensed fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
B67D 7/14 - Arrangements of devices for controlling, indicating, metering or registering quantity or price of liquid transferred responsive to input of recorded programmed information, e.g. on punched cards
B67D 7/30 - Arrangements of devices for controlling, indicating, metering or registering quantity or price of liquid transferred with means for predetermining quantity of liquid to be transferred
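The rate check in the entry above is a simple division followed by a threshold comparison; it can be sketched as follows, with the units and threshold value as illustrative assumptions.

```python
# Hedged sketch of the fuel-dispensing flow-rate check; gallon units and the
# threshold value are assumptions for illustration.

def flow_rate(volume_gal: float, interaction_period_s: float) -> float:
    """Measured volume per unit time = dispensed volume / interaction period."""
    return volume_gal / interaction_period_s

def needs_review(volume_gal, interaction_period_s, threshold_gal_per_s):
    # A rate below the threshold triggers retrieval and archiving of the
    # video feed for the operation.
    return flow_rate(volume_gal, interaction_period_s) < threshold_gal_per_s

print(needs_review(volume_gal=2.0, interaction_period_s=120.0,
                   threshold_gal_per_s=0.05))  # 2/120 ~ 0.017 < 0.05 -> True
```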
39.
SYSTEM AND METHOD FOR DIAGNOSTIC ANALYSIS OF A TOILET OVER TIME INTERVALS
A system for diagnostic analysis of a toilet over time intervals comprises a processor operable to receive a distance measurement from a sensor over a network. The processor is operable to determine an instance of a decrease in a water level in a toilet tank based on a comparison of the received distance measurement to a setpoint and to determine a plurality of instances of the decrease in the water level within a period of time. The processor is operable to calculate a ratio of the determined number of instances of the decrease in the water level to a number of instances in which a door changes from a first position to a second position. The processor is operable to compare the calculated ratio to a threshold ratio and to send an alert to a user device when the calculated ratio is less than the threshold ratio.
G01F 23/00 - Indicating or measuring liquid level or level of fluent solid material, e.g. indicating in terms of volume or indicating by means of an alarm
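The ratio test in the entry above can be sketched as follows. The edge-detection logic (a distance reading rising past the setpoint counts as one water-level drop) and the sample values are illustrative assumptions.

```python
def flush_count(distances, setpoint):
    """Count instances where the distance-to-water reading rises past the
    setpoint, i.e. the tank level dropped (each drop ~ one flush).
    The edge detection here is an illustrative assumption."""
    count, below = 0, False
    for d in distances:
        low = d > setpoint          # larger distance-to-water = lower level
        if low and not below:
            count += 1
        below = low
    return count

def should_alert(flushes, door_openings, threshold_ratio):
    # Alert when flushes per door-opening fall below the threshold ratio.
    return flushes / door_openings < threshold_ratio

dist = [10, 10, 18, 11, 10, 19, 10]   # two level-drop events (assumed cm)
print(flush_count(dist, 15))          # -> 2
print(should_alert(2, 10, 0.5))       # 0.2 < 0.5 -> True
```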
A system includes first and second sensors, and a computing system. The first sensor measures a first property of a first piece of equipment, and the second sensor measures a second property of a second piece of equipment. The computing system includes a processor and memory, which stores a condition that depends on both the first property and the second property. Satisfaction of the condition indicates that maintenance of the first piece of equipment should be prioritized over maintenance of the second piece of equipment. The processor receives the measured first property and the measured second property. In response to determining, based on the measured first and second properties, that the condition is satisfied, the processor transmits an alert for display on a user device. The alert indicates that maintenance of the first piece of equipment has a higher priority than maintenance of the second piece of equipment.
A system for providing a measure of a number of cups housed within a cup dispenser includes a sensor and a computing system. The cup dispenser includes a body configured to house a stack of cups, and a plunger configured to engage a first cup of the stack of cups and to bias the stack of cups toward a discharge opening defined by a first end of the body. The sensor is coupled to the plunger and is configured to measure a distance from the plunger to a second end of the body, and to transmit the measured distance across a network. The computing system receives the measured distance from the network, and determines, based on the measured distance, the measure of the number of cups housed within the cup dispenser. In response to determining that the number of cups is less than a threshold, the system transmits an alert.
G07F 13/10 - Coin-freed apparatus for controlling dispensing of fluids, semiliquids or granular material from reservoirs with associated dispensing of containers, e.g. cups or other articles
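The distance-to-count conversion in the entry above can be sketched as follows. The geometry constants (empty-dispenser reading, per-cup stack gain) are illustrative assumptions, not values from the abstract.

```python
def cups_remaining(measured_distance_mm, empty_distance_mm, cup_stack_gain_mm):
    """Infer the cup count from the plunger-to-body-end distance. Assumed
    geometry: each added cup pushes the plunger cup_stack_gain_mm closer
    to the second end of the body."""
    return round((empty_distance_mm - measured_distance_mm) / cup_stack_gain_mm)

def low_cup_alert(count, threshold):
    # Transmit an alert when the count drops below the threshold.
    return count < threshold

# Example: an empty dispenser reads 400 mm; each cup adds 6 mm of stack height.
print(cups_remaining(280, 400, 6))          # (400 - 280) / 6 -> 20 cups
print(low_cup_alert(20, threshold=25))      # -> True
```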
A system includes a sensor and a computing system. The sensor is coupled to a syrup line of a beverage dispenser and is configured to measure a pressure within the line. The syrup line is coupled at a first end to a syrup bag and at a second end to a pump configured to pump syrup from the syrup bag to an outlet of the beverage dispenser. The pump is operated using pressurized gas and generates a pressure corresponding to the pressure of the gas within the syrup line when both the syrup bag is full and the outlet of the beverage dispenser is closed. The sensor transmits a measured pressure to the computing system. The computing system determines that the measured pressure is less than a threshold pressure. In response, the computing system transmits an alert for display on a user device, which identifies the syrup bag for replacement.
A system for measuring a fill level of a trash can comprises a processor operable to receive a distance measurement from a network, wherein a sensor communicatively coupled to the processor through the network is operable to determine the distance measurement. The processor is operable to calculate a percentage of waste in the trash can based on the received distance measurement and a difference between a first setpoint and a second setpoint. The processor is operable to determine a threshold for a first period of time based on entity information. The processor is operable to compare the percentage of waste in the trash can to the threshold for the first period of time and to send an alert for display on a user device when the percentage of waste is greater than the threshold for the first period of time.
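The percentage calculation in the entry above uses the difference between two setpoints; one plausible reading — first setpoint as the empty-can distance, second as the full-can distance, both measured from a top-mounted sensor — is sketched below. The setpoint interpretation and values are assumptions.

```python
def fill_percentage(measured, empty_setpoint, full_setpoint):
    """Percent full from a top-mounted distance sensor: the reading shrinks
    as waste rises. Which setpoint is 'first' vs 'second' is an assumption."""
    return 100.0 * (empty_setpoint - measured) / (empty_setpoint - full_setpoint)

def over_threshold(percent, threshold_percent):
    # Send an alert when the fill level exceeds the period's threshold.
    return percent > threshold_percent

# Empty can reads 90 cm to the bottom; full reads 10 cm to the waste surface.
pct = fill_percentage(30, 90, 10)
print(pct, over_threshold(pct, 60.0))  # (90-30)/(90-10) -> 75.0 True
```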
A system for determining an ambient concentration of compositions for bathroom cleaning comprises a processor operable to receive a concentration measurement from a sensor over a network within a first period of time. The processor is operable to compare the received concentration measurement to a first threshold and to a second threshold greater than the first threshold. The processor is operable to instruct a memory communicatively coupled to the processor to store an indication that a bathroom was cleaned in response to a determination that the received concentration measurement is greater than the first threshold and less than the second threshold. The processor is operable to send an alert for display on a user device indicating either that the sensor has been tampered or that a spill event has occurred in response to a determination that the received concentration measurement is greater than the second threshold.
A system includes one or more memory units and a processor. The processor is configured to receive, from a food temperature probe, a first temperature associated with a first food item. The processor is further configured to receive, from the food temperature probe, a second temperature associated with a second food item. The processor is further configured to receive, from the food temperature probe, a third temperature that was measured by the food temperature probe after measuring the first temperature but before measuring the second temperature, the third temperature associated with a cleaning of the food temperature probe. The processor is further configured to send an alert for display on a user device when the third temperature is greater than a cleaning threshold temperature.
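The check in the entry above — flagging a between-measurements cleaning reading that exceeds a threshold — can be sketched over a labeled reading stream. The 'food'/'cleaning' labels, temperatures, and threshold are illustrative assumptions.

```python
def probe_alerts(readings, cleaning_threshold_c):
    """readings: ordered (kind, temp_c) pairs from the probe, where kind is
    'food' or 'cleaning' (labels are illustrative assumptions). Returns the
    cleaning temperatures that should trigger an alert."""
    return [t for kind, t in readings
            if kind == "cleaning" and t > cleaning_threshold_c]

# First food item, then a cleaning of the probe, then a second food item.
stream = [("food", 74.0), ("cleaning", 41.0), ("food", 68.5)]
print(probe_alerts(stream, 30.0))  # -> [41.0]
```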
A system includes a door sensor that provides a status of whether a door of a refrigeration system is open or closed, a temperature sensor that measures a temperature of a food compartment of the refrigeration system, a power sensor that measures an amount of power consumed by the refrigeration system, and a compressor sensor that provides acoustic data about a compressor of the refrigeration system. The system further includes a remote computing system configured to send an alert indicating that the refrigeration system needs servicing when the temperature of the food compartment of the refrigeration system is determined to be above a predetermined temperature while: the door of the refrigeration system is closed, the amount of power consumed by the refrigeration system is within a predetermined power range, and acoustic signals of the compressor of the refrigeration system are within a predetermined acoustic range.
A system includes a local device and a remote computing system. The local device is disposed in a toilet paper dispenser and includes a sensor and a first processor. The sensor is configured to measure a distance to a toilet paper roll. The first processor is configured to calculate, using the measured distance to the toilet paper roll, a percentage of toilet paper remaining on the toilet paper roll. The first processor is further configured to transmit, when it is determined that the percentage of toilet paper remaining is less than a minimum threshold, sensor data across a wireless communications network. The remote computing system includes a second processor configured to receive the sensor data and send an alert for display on a user device in response to receiving the sensor data.
An accessory mounting apparatus and system are provided. The system includes a structural framing, an accessory mounting apparatus, and an accessory. The accessory mounting apparatus includes a first component, a second component, and a connector arm. The first component includes a first base, a first rail connector that extends from the first base, and one or more first connectors. The second component includes a second base and a second rail connector that extends from the second base. The connector arm includes a second connector and a mounting arm, and the second connector is coupled to the one or more first connectors. Further, the first rail connector and the second rail connector are configured to engage a first rail slot and a second rail slot of the structural framing, respectively. Additionally, the accessory is coupled to the mounting arm.
F16M 11/06 - Means for attachment of apparatus; Means allowing adjustment of the apparatus relatively to the stand allowing pivoting
F16M 13/02 - Other supports for positioning apparatus or articles; Means for steadying hand-held apparatus or articles for supporting on, or attaching to, an object, e.g. tree, gate, window-frame, cycle
B33Y 80/00 - Products made by additive manufacturing
49.
PROACTIVE REQUEST COMMUNICATION SYSTEM WITH IMPROVED DATA PREDICTION USING ARTIFICIAL INTELLIGENCE
An event tracking subsystem detects events for the removal of items at a plurality of locations. A data prediction subsystem receives event data based on the detected events that indicates an amount of an item removed from each of the plurality of locations over a previous period of time. The data prediction subsystem determines a prediction data value for each item at each of the plurality of locations. The prediction data is used to proactively request items with improved communication and computational efficiency.
A data prediction subsystem receives event data indicating an amount of an item removed from each of a plurality of locations over a previous period of time. For each location, prediction data is determined using the event data. The prediction data includes, for each day over a future period of time, a non-integer value indicating an anticipated amount of the item that will be removed from the location. An improved rounding process is used to round the non-integer value for each day. The resulting prediction data is used to proactively request items with improved communication and computational efficiency.
G06Q 10/08 - Logistics, e.g. warehousing, loading or distribution; Inventory or stock management
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06F 7/499 - Denomination or exception handling, e.g. rounding or overflow
G06F 17/18 - Complex mathematical operations for evaluating statistical data
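The abstract above does not name its "improved rounding process"; one plausible sketch is largest-remainder rounding, which keeps the rounded daily values summing to the period total so requested amounts are not systematically over- or under-stated. The method choice and forecast values are assumptions.

```python
import math

def round_preserving_total(daily_values):
    """Hedged sketch of an 'improved rounding process': largest-remainder
    rounding. Floors every value, then gives the leftover units to the days
    with the largest fractional parts, preserving the period total."""
    floors = [math.floor(v) for v in daily_values]
    shortfall = round(sum(daily_values)) - sum(floors)
    order = sorted(range(len(daily_values)),
                   key=lambda i: daily_values[i] - floors[i], reverse=True)
    for i in order[:shortfall]:
        floors[i] += 1
    return floors

forecast = [1.4, 0.8, 2.3, 0.6, 1.9]     # anticipated removals/day (assumed)
print(round_preserving_total(forecast))  # -> [1, 1, 2, 1, 2], sums to 7
```

Naive per-day rounding of the same forecast would give [1, 1, 2, 1, 2] here too, but on many inputs it drifts from the total; the largest-remainder pass guarantees the sum.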
A device configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine the location information for the network device is within a predetermined distance from the location information for the resource, to determine the resource identifier matches the resource identifier for the resource, and to determine a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
A system for selecting delivery mechanisms sends a request to a plurality of servers associated with one or more autonomous delivery mechanisms and one or more non-autonomous delivery mechanisms to provide delivery metadata. The request comprises a pickup location coordinate and a delivery location coordinate. The delivery metadata comprises a delivery time and a delivery quote. The system receives a first set of delivery metadata associated with one or more autonomous delivery mechanisms, and a second set of delivery metadata associated with one or more non-autonomous delivery mechanisms. The system identifies a particular autonomous delivery mechanism from within the category of autonomous delivery mechanisms based on the first set of delivery metadata. The system identifies a particular non-autonomous delivery mechanism from within the category of non-autonomous delivery mechanisms based on the second set of delivery metadata.
A system receives content of a memory resource. The system compares the content of the memory resource with a first resource data associated with a first physical space, and with a second resource data associated with a second physical space. The system determines which of the first physical space and the second physical space can fulfill more than a threshold percentage of objects in the memory resource based on the comparison between the content of the memory resource with the first and second resource data. The system determines that the first physical space can fulfill more than the threshold percentage of objects in the memory resource. The system assigns the first physical space to the memory resource for concluding an operation associated with the memory resource.
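The comparison in the entry above — which physical space can fulfill more than a threshold percentage of the objects in the memory resource — can be sketched as follows. The object names, stock tables, and the fulfillment measure (fraction of requested units in stock) are illustrative assumptions.

```python
def fulfillment_fraction(contents, stock):
    """Fraction of the requested objects a physical space can fulfill
    (field names and the unit-counting measure are assumptions)."""
    fulfilled = sum(min(qty, stock.get(obj, 0)) for obj, qty in contents.items())
    return fulfilled / sum(contents.values())

cart = {"milk": 2, "eggs": 1, "bread": 1}          # content of the resource
space_a = {"milk": 5, "eggs": 0, "bread": 2}       # first physical space
space_b = {"milk": 1, "eggs": 1, "bread": 0}       # second physical space

threshold = 0.7
for name, stock in [("space_a", space_a), ("space_b", space_b)]:
    frac = fulfillment_fraction(cart, stock)
    print(name, frac, frac > threshold)  # space_a clears the threshold
```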
A system sends a drop-off location coordinate to a server associated with a delivery mechanism. The system receives, from the server, a hyperlink that, upon access, causes the drop-off location coordinate to be displayed on a virtual map. The system links the hyperlink to an adjust drop-off location element, such that when the adjust drop-off location element is accessed, the virtual map is displayed within a delivery user interface. The system integrates the adjust drop-off location element into the delivery user interface such that the adjust drop-off location element is accessible from within the delivery user interface.
A system generates a set of instructions for preparing content of a memory resource that comprises a set of objects in a particular sequence. The system sends the content of the memory resource and the set of instructions to a user device. The system receives, from the user device, a first message that indicates the set of objects is being prepared. The system sends, to a server associated with a delivery vehicle, a second message to alert the delivery vehicle to pick up the set of objects from a pickup location coordinate and deliver them to a delivery location coordinate. The system receives, from the server, an alert message that indicates the delivery vehicle has reached the pickup location coordinate. The system forwards the alert message to the user device.
A system presents a plurality of objects on a delivery user interface. The system updates content of a memory resource as one or more objects are added to the memory resource. In response to determining that the content of the memory resource is finalized, the system determines a selection of a particular autonomous delivery mechanism and a particular non-autonomous delivery mechanism based on filtering conditions associated with the content of the memory resource. The system presents the selection of the particular autonomous delivery mechanism and the particular non-autonomous delivery mechanism on the delivery user interface. The system determines that a delivery mechanism is selected from the selection. The system receives one or more status updates associated with the selected delivery mechanism, and displays the one or more status updates on the delivery user interface.
A system presents, on a user interface, a first message that indicates an operation associated with a memory resource is concluded. The system presents, on the user interface, content of the memory resource that comprises a plurality of objects. The system presents, on the user interface, a set of instructions to prepare the plurality of objects in a particular sequence. The system receives a second message that indicates the plurality of objects is ready for pickup by a delivery mechanism. The system presents, on the user interface, an alert message that indicates the delivery mechanism has reached a pickup location coordinate. If the category of the delivery mechanism is an autonomous delivery mechanism, the system presents, on the user interface, a pin number that unlocks the autonomous delivery mechanism.
A device configured to receive a rack identifier for a rack that is configured to hold items. The device is further configured to identify a master template that is associated with the rack. The device is further configured to receive images of a plurality of items on the rack and to combine the images into a composite image of the rack. The device is further configured to identify shelves on the rack within the composite image and to generate bounding boxes that each correspond with an item on the rack. The device is further configured to associate each bounding box with an item identifier and an item location. The device is further configured to generate a rack analysis message based on a comparison of the item locations for each bounding box and the rack positions from the master template and to output the rack analysis message.
A device configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine the location information for the network device is within a predetermined distance from the location information for the resource, to determine the resource identifier matches the resource identifier for the resource, and to determine a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
A device configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
A system for updating a training dataset of an item identification model determines that an item is not included in a training dataset. In response to determining that the item is not included in the training dataset, the system obtains an identifier of the item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system captures images of the item. The system extracts a set of features associated with the item from the images. The system associates the item to the identifier and the set of features. The system adds a new entry to the training dataset, where the new entry represents the item labeled with the identifier and the set of features.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
H04N 13/207 - Image signal generators using stereoscopic image cameras using a single 2D image sensor
H04N 13/271 - Image signal generators wherein the generated image signals comprise depth maps or disparity maps
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
62.
System and method for capturing images for training of an item identification model
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item to the aggregated first value and the second value, where the aggregated first value and the second value represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
G06T 7/62 - Analysis of geometric attributes of area, perimeter, diameter or volume
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/56 - Extraction of image or video features relating to colour
G06F 18/22 - Matching criteria, e.g. proximity measures
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
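The "aggregation" step in the entry above is not specified; element-wise averaging of the two per-image feature values is one plausible reading and is assumed in the sketch below, along with the colour-feature example and all values.

```python
# Hedged sketch: two per-image values of the same feature are aggregated
# into one representation of the item; averaging is an assumed aggregation.

def aggregate(first_value, second_value):
    return [(a + b) / 2 for a, b in zip(first_value, second_value)]

# e.g. a dominant-colour feature extracted from two images of the same item
img1_colour = [200.0, 40.0, 40.0]
img2_colour = [180.0, 60.0, 40.0]
item_colour = aggregate(img1_colour, img2_colour)
print(item_colour)  # -> [190.0, 50.0, 40.0]

# A new training-dataset entry is added per image, each carrying the
# aggregated feature that represents the item.
training_dataset = [{"image": image_id, "colour": item_colour}
                    for image_id in ("img1", "img2")]
```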
64.
SYSTEM AND METHOD FOR REFINING AN ITEM IDENTIFICATION MODEL BASED ON FEEDBACK
A system for refining an item identification model detects a triggering event at a platform, where the triggering event corresponds to a user placing an item on the platform. The system captures images of the item. The system extracts a set of features from at least one of the images. The system identifies the item based on the set of features. The system receives an indication that the item is not identified correctly. The system receives an identifier of the item. The system identifies the item based on the identifier of the item. The system feeds the identifier of the item and the images to the item identification model. The system retrains the item identification model to learn to associate the item to the images. The system updates the set of features based on the determined association between the item and the images.
A device configured to capture a first overhead depth image of the platform using a three-dimensional (3D) sensor at a first time instance and a second overhead depth image of a first object using the 3D sensor at a second time instance. The device is further configured to determine that a first portion of the first object is within a region-of-interest and a second portion of the first object is outside the region-of-interest in the second overhead depth image. The device is further configured to capture a third overhead depth image of a second object placed on the platform using the 3D sensor at a third time instance. The device is further configured to capture a first image of the second object using a camera in response to determining that the first object is outside of the region-of-interest and the second object is within the region-of-interest for the platform.
A device configured to detect a triggering event corresponding with a user placing a first item on the platform, to capture a first image of the first item on the platform using a camera, and to input the first image into a machine learning model that is configured to output a first encoded vector based on features of the first item that are present in the first image. The device is further configured to identify a second encoded vector in an encoded vector library that most closely matches the first encoded vector and to identify a first item identifier in the encoded vector library that is associated with the second encoded vector. The device is further configured to identify the user, to identify an account that is associated with the user, and to associate the first item identifier with the account of the user.
A device configured to receive first point cloud data for a first object, to identify a first plurality of data points for the first object within the first point cloud data, and to extract the first plurality of data points from the first point cloud data. The device is further configured to receive second point cloud data for the first object, to identify a second plurality of data points for the first object within the second point cloud data, and to extract the second plurality of data points from the second point cloud data. The device is further configured to merge the first plurality of data points and the second plurality of data points to generate combined point cloud data and to determine dimensions for the first object based on the combined point cloud data.
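The merge-then-measure flow described above can be sketched in a few lines. This is an illustrative reconstruction, not the patented implementation: the sample points, function names, and the axis-aligned bounding-box measurement are assumptions.

```python
# Sketch of merging two partial point clouds of one object and estimating
# its dimensions from the combined cloud (assumed approach: axis-aligned
# bounding box over the merged points).

def merge_point_clouds(cloud_a, cloud_b):
    """Combine two lists of (x, y, z) points into one cloud."""
    return list(cloud_a) + list(cloud_b)

def bounding_box_dimensions(cloud):
    """Return (x-extent, y-extent, z-extent) of the merged cloud."""
    xs, ys, zs = zip(*cloud)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))

# Two hypothetical partial scans of the same box-shaped object (meters).
scan_1 = [(0.0, 0.0, 0.0), (0.2, 0.1, 0.0), (0.2, 0.1, 0.3)]
scan_2 = [(0.0, 0.3, 0.0), (0.2, 0.3, 0.3)]

combined = merge_point_clouds(scan_1, scan_2)
dims = bounding_box_dimensions(combined)
```

Neither scan alone covers the object's full y-extent; only the merged cloud yields all three dimensions.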
A device configured to identify a first pixel location within a first plurality of pixels corresponding with an item in a first image and to apply a first homography to the first pixel location to determine a first (x,y) coordinate. The device is further configured to identify a second pixel location within a second plurality of pixels corresponding with the item in a second image and to apply a second homography to the second pixel location to determine a second (x,y) coordinate. The device is further configured to determine that the distance between the first (x,y) coordinate and the second (x,y) coordinate is less than or equal to a distance threshold value, to associate the first plurality of pixels and the second plurality of pixels with a cluster for the item, and to output the first plurality of pixels and the second plurality of pixels.
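The per-camera homography mapping and distance-threshold check described above can be sketched as follows; the 3x3 matrices, pixel positions, and threshold are hypothetical values, not data from the patent.

```python
import math

def apply_homography(h, px, py):
    """Map a pixel (px, py) to an (x, y) plane coordinate via a 3x3 homography."""
    x = h[0][0] * px + h[0][1] * py + h[0][2]
    y = h[1][0] * px + h[1][1] * py + h[1][2]
    w = h[2][0] * px + h[2][1] * py + h[2][2]
    return (x / w, y / w)

def same_item(h1, p1, h2, p2, distance_threshold):
    """True when two detections map to plane coordinates within the threshold."""
    return math.dist(apply_homography(h1, *p1),
                     apply_homography(h2, *p2)) <= distance_threshold

# Hypothetical calibrations: both cameras map pixels to meters; the second
# camera's frame is offset slightly in the shared plane.
H1 = [[0.01, 0, 0.00], [0, 0.01, 0.00], [0, 0, 1]]
H2 = [[0.01, 0, 0.05], [0, 0.01, -0.03], [0, 0, 1]]

# The same item seen at different pixel locations in the two views maps to
# nearby (x, y) coordinates, so the pixel groups are clustered together.
clustered = same_item(H1, (100, 200), H2, (96, 205), distance_threshold=0.05)
```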
A device configured to receive a first encoded vector and receive one or more feature descriptors for a first object. The device is further configured to remove one or more encoded vectors from an encoded vector library that are not associated with the one or more feature descriptors and to identify a second encoded vector in the encoded vector library that most closely matches the first encoded vector based on the numerical values within the first encoded vector. The device is further configured to identify a first item identifier in the encoded vector library that is associated with the second encoded vector and to output the first item identifier.
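The filter-then-match lookup described above can be sketched as follows. The library contents, cosine-similarity metric, and dictionary layout are assumptions for illustration; the abstract specifies only that non-matching vectors are removed before the closest-match search.

```python
def cosine(a, b):
    """Cosine similarity between two numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / ((sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5))

def identify_item(query_vec, library, required_descriptors):
    """Drop library entries lacking the observed feature descriptors, then
    return the item identifier of the closest remaining encoded vector."""
    candidates = [e for e in library
                  if required_descriptors <= e["descriptors"]]
    best = max(candidates, key=lambda e: cosine(query_vec, e["vector"]))
    return best["item_id"]

library = [
    {"item_id": "soda_can", "descriptors": {"cylindrical", "red"},
     "vector": [0.9, 0.1, 0.0]},
    {"item_id": "juice_box", "descriptors": {"boxy"},
     "vector": [0.95, 0.05, 0.0]},
]
# The juice box is numerically closer, but it lacks the observed
# "cylindrical" descriptor and is filtered out before matching.
result = identify_item([1.0, 0.08, 0.0], library, {"cylindrical"})
```

Filtering by descriptor first shrinks the search space and prevents visually similar but physically different items from matching.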
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/46 - Extraction of features or characteristics of the image
G06T 7/90 - Determination of colour characteristics
G06K 9/62 - Methods or arrangements for recognition using electronic means
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
A device configured to capture a first image of an item on a platform using a camera and to determine a first number of pixels in the first image that corresponds with the item. The device is further configured to capture a first depth image of the item on the platform using a three-dimensional (3D) sensor and to determine a second number of pixels within the first depth image that corresponds with the item. The device is further configured to determine that the difference between the first number of pixels in the first image and the second number of pixels in the first depth image is less than a difference threshold value, to extract the plurality of pixels corresponding with the item in the first image from the first image to generate a second image, and to output the second image.
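The pixel-count agreement check described above reduces to comparing two segmentation masks. A minimal sketch, with hypothetical binary masks (1 = item pixel) standing in for the camera and 3D-sensor segmentations:

```python
def segmentation_agrees(color_mask, depth_mask, difference_threshold):
    """Compare item pixel counts from a color-image segmentation and a
    depth-image segmentation of the same scene; agreement suggests the
    color-image crop of the item is reliable."""
    color_pixels = sum(row.count(1) for row in color_mask)
    depth_pixels = sum(row.count(1) for row in depth_mask)
    return abs(color_pixels - depth_pixels) < difference_threshold

color_mask = [[0, 1, 1],
              [0, 1, 1],
              [0, 0, 0]]   # 4 item pixels from the camera
depth_mask = [[0, 1, 1],
              [0, 1, 0],
              [0, 0, 0]]   # 3 item pixels from the 3D sensor

ok = segmentation_agrees(color_mask, depth_mask, difference_threshold=2)
```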
A coffee machine includes a housing that comprises a first platform, a second platform, a first robotic arm, a second robotic arm, and an information handling system. The information handling system comprises a memory and a processor. The memory is configured to receive and store multiple beverage orders in a log and to monitor a position of a first cup and a second cup corresponding to a first beverage order and a second beverage order. The processor is configured to initiate a second sub-step of the second beverage order prior to initiating a second sub-step of the first beverage order in response to determining that a first sub-step of the second beverage order has terminated first and to transmit to the log of the memory that the second cup is at a third designated position for the second sub-step of the second beverage order to be performed.
A coffee machine includes a first housing and a second housing, wherein the second housing is disposed on top of the first housing. The second housing comprises a first platform, a second platform disposed below the first platform, and a lift disposed at a first side of the second housing and configured to translate between the first platform and the second platform. The second housing further comprises a first robotic arm disposed above the first platform, a second robotic arm disposed between the second platform and the first platform, and a coffee brewing machine actionable to dispense one or more fluids into a cup. The coffee machine further comprises an information handling system comprising a processor, wherein the processor is configured to actuate the first robotic arm, the second robotic arm, the coffee brewing machine, and the lift.
A47J 31/44 - Parts or details of beverage-making apparatus
A47J 31/52 - Alarm-clock-controlled mechanisms for coffee- or tea-making apparatus
A47J 31/40 - Beverage-making apparatus with dispensing means for adding a measured quantity of ingredients, e.g. coffee, water, sugar, cocoa, milk, tea
A47J 31/41 - Beverage-making apparatus with dispensing means for adding a measured quantity of ingredients, e.g. coffee, water, sugar, cocoa, milk, tea of liquid ingredients
A47J 43/044 - Machines for domestic use not covered elsewhere, e.g. for grinding, mixing, stirring, kneading, emulsifying, whipping or beating foodstuffs, e.g. power-driven with tools driven from the top side
B25J 11/00 - Manipulators not otherwise provided for
73.
Object detection based on wrist-area region-of-interest
An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem receives image frames of the angled-view images and, over a period of time, tracks a pixel position of a wrist of a person interacting with items stored on the rack. The tracking subsystem determines whether an item was interacted with by the person and, if so, assigns the identified item to the person.
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
74.
Event trigger based on region-of-interest near hand-shelf interaction
An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
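The aggregate-then-map step described above can be sketched as follows. The abstract does not specify the aggregation, so the component-wise median used here is an assumption, as are the shelf rectangles and coordinates.

```python
def aggregate_wrist_position(pixel_positions):
    """Component-wise median of per-frame wrist detections, a hedge
    against single-frame detection jitter."""
    xs = sorted(p[0] for p in pixel_positions)
    ys = sorted(p[1] for p in pixel_positions)
    mid = len(xs) // 2
    return (xs[mid], ys[mid])

def shelf_for_position(position, shelf_regions):
    """Return the shelf whose pixel rectangle contains the aggregated wrist
    position, or None when no shelf-interaction trigger should fire."""
    x, y = position
    for shelf_id, (x0, y0, x1, y1) in shelf_regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return shelf_id
    return None

wrist_track = [(100, 40), (102, 44), (98, 41)]   # noisy per-frame detections
regions = {"shelf_2": (90, 30, 120, 60)}          # hypothetical shelf rectangle
shelf = shelf_for_position(aggregate_wrist_position(wrist_track), regions)
```

A non-None result here corresponds to providing the trigger signal for a shelf-interaction event.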
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06V 10/147 - Optical characteristics of the device performing the acquisition or on the illumination arrangements - Details of sensors, e.g. sensor lenses
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/40 - Scenes; Scene-specific elements in video content
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G06V 10/62 - Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
75.
System and method for presenting a virtual store shelf that emulates a physical store shelf
An apparatus includes a display, interface, and processor. The interface receives a live camera feed from a camera directed at a first physical structure located in a physical space. The processor receives an indication of an event associated with the first physical structure and accordingly displays a first virtual structure corresponding to the first physical structure. The first virtual structure includes virtual items that emulate physical items located on the first physical structure. The processor additionally displays a recording of the live camera feed, which depicts the event associated with the first physical structure.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
76.
SYSTEM AND METHOD FOR POPULATING A VIRTUAL SHOPPING CART BASED ON A VERIFICATION OF ALGORITHMIC DETERMINATIONS OF ITEMS SELECTED DURING A SHOPPING SESSION IN A PHYSICAL STORE
An apparatus includes a display and a processor. The processor displays a virtual shopping cart. The processor also receives information indicating that an algorithm determined that a physical item was selected by a person during a shopping session in a physical store, based on a set of inputs received from sensors located within the store. In response, the processor displays a virtual item, which includes a graphical representation of the physical item. The processor additionally displays a rack video captured during the shopping session by a rack camera located in the store. The rack camera is directed at a physical rack located in the store, which includes the physical item. In response to displaying the rack video, the processor receives information identifying the virtual item, where the rack video depicts that the person selected the physical item. The processor then stores the virtual item in the virtual shopping cart.
A system that includes a fuel dispenser terminal and a remote controller. The fuel dispenser terminal is configured to send a service request for a fuel purchase to the remote controller and receive a personalized offer in response to sending the service request. The fuel dispenser terminal is further configured to display the personalized offer, receive a user response indicating the personalized offer was accepted, and send the user response to the remote controller. The fuel dispenser terminal is further configured to receive an authorization token for retrieving the personalized offer and output the authorization token to the customer. The remote controller is configured to update the service request by adding a purchase associated with the personalized offer to the fuel purchase, send an encrypted service request to a service processor, generate the authorization token, and send the authorization token to the fuel dispenser terminal.
G05B 19/416 - Numerical control (NC), i.e. automatically operating machines, in particular machine tools, e.g. in a manufacturing environment, so as to execute positioning, movement or co-ordinated operations by means of programme data in numerical form characterised by control of velocity, acceleration or deceleration
G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
An object tracking system that includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to detect an item was removed from the rack. The tracking system is further configured to receive the frame of the rack, to identify a marker on an item within a predefined zone in the frame, and to identify the item associated with the identified marker. The tracking system is further configured to determine a pixel location for a person, to determine the person is within the predefined zone associated with the rack, and to add the identified item to a digital cart associated with the person.
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight
G01G 19/52 - Weighing apparatus combined with other objects, e.g. with furniture
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
An object tracking system that includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to receive the frame, to determine a pixel location for a person, and to determine the person is within a predefined zone associated with the rack. The tracking system is further configured to identify a plurality of items in a digital cart associated with the person, to identify an item from the digital cart associated with the person, and to remove the identified item from the digital cart associated with the person.
An apparatus includes a display, interface, and processor. The interface receives video from a camera located in a physical store and directed at a first physical rack. The camera captures video of the rack during a person's shopping session. The processor displays a first virtual rack that emulates the first physical rack and includes first and second virtual shelves. The virtual shelves include virtual items, which include graphical representations of physical items located on the physical rack. The processor displays the rack video, which depicts an event including the person interacting with the first physical rack. The processor also displays a virtual shopping cart. The processor receives information associated with the event, identifying a first virtual item. The rack video depicts that the person selected the physical item corresponding to the first virtual item while interacting with the first physical rack. The processor then stores the first virtual item in the virtual shopping cart.
A scalable tracking system processes video of a space to track the positions of people within a space. The tracking system determines local coordinates for the people within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for a person during that time window.
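The window-then-combine step described above can be sketched as follows. The window length, timestamps, and averaging of within-window coordinates are illustrative assumptions; the abstract says only that coordinates assigned to the same window are combined or clustered.

```python
from collections import defaultdict

def windowed_positions(detections, window_ms):
    """Assign (timestamp_ms, (x, y)) detections to time windows, then average
    the local coordinates in each window into one combined coordinate."""
    windows = defaultdict(list)
    for timestamp_ms, coord in detections:
        windows[timestamp_ms // window_ms].append(coord)
    return {w: (sum(c[0] for c in coords) / len(coords),
                sum(c[1] for c in coords) / len(coords))
            for w, coords in windows.items()}

# Frames at 0 ms and 40 ms fall in window 0; the frame at 120 ms falls in
# window 1, so the person gets one combined coordinate per 100 ms window.
combined = windowed_positions([(0, (2, 2)), (40, (4, 0)), (120, (6, 6))],
                              window_ms=100)
```

Keying on time windows rather than individual frames is what makes the scheme scalable: cameras need not be frame-synchronized for their detections to be combined.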
An object tracking system that includes a first sensor and a second sensor that are each configured to capture frames of at least a portion of a global plane for a space. The system is configured to identify a pixel location for a marker within a frame from the first sensor and to determine an (x,y) coordinate for the marker using a first homography. The system is further configured to identify a pixel location for a different marker in a frame from the second sensor and to determine an (x,y) coordinate for the marker using a second homography. The system is further configured to determine a distance difference between the computed distance between the (x,y) coordinates and an actual distance. The system is further configured to recompute the first homography and/or the second homography in response to determining that the distance difference exceeds a difference threshold level.
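The drift check described above compares a computed marker separation against a known physical separation. A minimal sketch, with hypothetical coordinates and tolerance:

```python
import math

def needs_recalibration(coord_a, coord_b, actual_distance, tolerance):
    """Flag homography drift when the distance between two marker coordinates
    (each mapped through its own sensor's homography) disagrees with the
    markers' known physical separation."""
    return abs(math.dist(coord_a, coord_b) - actual_distance) > tolerance

# Markers known to be 3.0 m apart; the mapped coordinates put them 3.1 m
# apart, exceeding the 0.05 m tolerance, so a recompute is triggered.
drifted = needs_recalibration((0.0, 0.0), (3.1, 0.0),
                              actual_distance=3.0, tolerance=0.05)
```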
A system includes a first sensor and a sensor client. During an initial time interval, the sensor client receives images generated by the first sensor and detects contours in the images. The sensor client determines, based on the contours, regions of the images generated by the first sensor to exclude during object tracking. During a subsequent time interval, the sensor client receives a second image generated by the first sensor and detects a second contour in the second image. The sensor client determines pixel coordinates of the second contour and determines whether at least a threshold percentage of those pixel coordinates overlaps with a region to exclude during object tracking. If at least the threshold percentage of the pixel coordinates overlaps with the region to exclude, a position for tracking the second contour is not determined.
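The overlap test described above can be sketched as a set intersection over pixel coordinates. The contour, excluded region, and threshold below are hypothetical:

```python
def should_skip_contour(contour_pixels, excluded_pixels, threshold):
    """Skip position tracking when at least a threshold fraction of a
    contour's pixel coordinates lies inside an excluded region."""
    overlap = len(set(contour_pixels) & set(excluded_pixels))
    return overlap / len(contour_pixels) >= threshold

# Excluded region learned during the initial interval (e.g. a glare spot).
excluded = {(r, c) for r in range(5) for c in range(5)}
contour = [(3, 3), (3, 4), (4, 4), (9, 9)]   # 3 of 4 pixels overlap

skip = should_skip_contour(contour, excluded, threshold=0.5)
```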
A scalable tracking system processes video of a space to track the positions of objects within a space. The tracking system determines local coordinates for the objects within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for an object during that time window.
An object tracking system that includes a plurality of sensors and a tracking system. A first sensor from the plurality of sensors is configured to capture a first frame of a global plane for at least a portion of the space. The tracking system is configured to determine a pixel location in the first frame for an object located in the space, and to apply a homography to the pixel location to determine a coordinate in the global plane. The homography is configured to translate between pixel locations in the first frame and coordinates in the global plane.
A scalable tracking system includes a camera subsystem, a weight subsystem, and a central server. The camera subsystem includes cameras that capture video of a space, camera clients that determine local coordinates of objects in the captured videos, and a camera server that determines the physical positions of objects in the space based on the determined local coordinates. The weight subsystem determines when items were removed from shelves. The central server determines which object in the space removed the items based on the physical positions of the objects in the space and the determination of when items were removed.
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight
G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices using wireless devices
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
A system includes sensors and a tracking subsystem. The subsystem receives a first image feed from a first sensor and a second image feed from a second sensor. The field-of-view of the second sensor at least partially overlaps with that of the first sensor. The subsystem detects an object in a frame from the first feed. The subsystem determines a first pixel position of the object in the frame from the first feed. The subsystem determines a second pixel position of the object in a corresponding frame from the second feed. Based on the first pixel position and the second pixel position, a global position for the object is determined in a space.
An object tracking system includes a first sensor, a second sensor, and a tracking system. The first sensor is configured to capture a first frame of a global plane for at least a first portion of a space. The second sensor is configured to capture a second frame of at least a second portion of the space. The tracking system is configured to determine the object is within an overlap region with the second sensor based on a first pixel location. The tracking system is further configured to determine a first coordinate in the global plane for the object, to determine a second pixel location in the second frame for the object based on the first coordinate, and to store the second pixel location with an object identifier in a tracking list associated with the second sensor.
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/147 - Optical characteristics of the device performing the acquisition or on the illumination arrangements - Details of sensors, e.g. sensor lenses
An apparatus includes an interface, display, memory, and processor. The interface receives a video feed including first and second camera feeds, each feed corresponding to a camera located in a store. The processor stores a video segment in memory, assigned to a person and capturing a portion of a shopping session. The video segment includes first and second camera feed segments, each segment corresponding to a recording of the corresponding camera feed from a starting to an ending timestamp. Playback of the first and second camera feed segments is synchronized, and a slider bar controls a playback progress of the camera feed segments. The processor displays the camera feed segments and copies of the slider bar on the display. The processor receives an instruction from at least one of the copies of the slider bar to adjust the playback progress of the camera feed segments and adjusts the playback progress.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
G06V 20/40 - Scenes; Scene-specific elements in video content
90.
Image-based action detection using contour dilation
A system includes a sensor and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor. The tracking subsystem detects an event associated with an item being removed from a rack. The tracking subsystem determines that a first person and a second person may be associated with the event. In response, the tracking subsystem dilates contours associated with the first and second person from a first depth to a second depth until the contours enter a zone adjacent to the rack. A number of iterations is determined for each contour to enter the zone adjacent to the rack. If the first person's contour enters the zone in fewer iterations, the item is assigned to the first person.
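The iteration-counting comparison described above can be sketched with morphological dilation over pixel sets. The one-pixel contours, zone geometry, and 8-connected dilation kernel below are simplifying assumptions:

```python
def iterations_to_reach_zone(contour, zone, max_iterations=50):
    """Dilate a contour (a set of (row, col) pixels) by one 8-connected ring
    per iteration; return how many iterations until it touches the zone."""
    current = set(contour)
    for i in range(1, max_iterations + 1):
        current = {(r + dr, c + dc)
                   for (r, c) in current
                   for dr in (-1, 0, 1)
                   for dc in (-1, 0, 1)}
        if current & zone:
            return i
    return None

zone = {(0, c) for c in range(10)}   # pixels in the zone adjacent to the rack
person_a = {(2, 3)}                  # simplified one-pixel contours
person_b = {(5, 7)}

steps_a = iterations_to_reach_zone(person_a, zone)
steps_b = iterations_to_reach_zone(person_b, zone)
# Person A's contour enters the zone in fewer iterations, so the removed
# item would be assigned to person A.
```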
G01G 19/52 - Weighing apparatus combined with other objects, e.g. with furniture
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight using electromechanical or electronic computing means using electronic computing means only
A system includes a server and a merchant device. The server receives product information for a product scanned by a mobile device. The server stores the product information for the product in a digital cart. The server receives a transaction request from the mobile device, determines that the product is associated with a validation requirement, and transmits a validation request to the merchant device. The server receives, from the merchant device, an indication that the validation requirement is satisfied, processes a transaction, and transmits, to the merchant device, an indication that the transaction is complete. The merchant device receives the validation request, determines that the validation requirement is satisfied, and transmits the indication that the validation requirement is satisfied to the server. The merchant device receives, from the server, the indication that the transaction is complete and displays the indication that the transaction is complete on the display.
G06Q 30/06 - Buying, selling or leasing transactions
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. check of credit lines or negative lists
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a physical space within a global plane for a space. The tracking system is configured to receive the frame, to detect an object within a zone of the frame, and to determine a pixel location for the object. The tracking system is further configured to identify a zone of a physical structure based on the pixel location, and to identify an item based on the identified zone.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
93.
Access control for an augmented reality experience using interprocess communications
An access control system that includes an access control device configured to receive transaction information that identifies a member identifier for a member and a purchased item. The access control device is configured to compare the purchased item to items in an item list and to determine whether the purchased item matches any items in the item list. The access control device is configured to store an authorization for the member identifier to access an augmented reality experience in response to a match.
An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a structure configured to store items. The image sensor generates angled-view images of the items stored on the structure. A tracking subsystem determines that a person has interacted with the structure and receives image frames of the angled-view images. The tracking subsystem determines that the person interacted with a first item stored on the structure. A first image is identified associated with a first time before the person interacted with the first item, and a second image is identified associated with a second time after the person interacted with the first item. If it is determined, based on a comparison of the first and second images, that the item was removed from the structure, the first item is assigned to the person.
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
An apparatus includes a display, interface, and processor. The interface receives a camera feed from a camera directed at a first physical rack located in a physical store. The processor displays virtual racks assigned to physical racks. The processor receives an indication of an event associated with the first physical rack and accordingly displays a first virtual rack, which includes first and second virtual shelves. The first and second virtual shelves include virtual items that emulate physical items located on physical shelves of the first physical rack. The processor additionally displays a recording of the camera feed, which depicts the event associated with the first physical rack.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06Q 30/06 - Buying, selling or leasing transactions
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a first frame of a global plane for at least a portion of a space. The tracking system is configured to receive a first coordinate in the global plane where a first marker is located in the space and to receive a second coordinate in the global plane where a second marker is located in the space. The tracking system is further configured to identify the first marker and the second marker within the first frame, to determine a first pixel location in the first frame for the first marker, to determine a second pixel location in the first frame for the second marker, and to generate a homography based on the first coordinate, the second coordinate, the first pixel location, and the second pixel location.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in groups with provisions for indicating, recording, or computing price or other quantities dependent on the weight
G06Q 10/087 - Inventory or stock management, e.g. order filling, procurement or balancing against orders
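A homography between pixel locations and global-plane coordinates can be estimated with the direct linear transform (DLT). Note that a general planar homography requires at least four point correspondences, so this sketch extends the two markers of the abstract to four; the function names and marker coordinates are illustrative:

```python
import numpy as np

def fit_homography(pixel_pts, plane_pts):
    """Estimate the 3x3 homography H mapping pixel (u, v) locations to
    global-plane (x, y) coordinates via the direct linear transform.
    Requires at least four point correspondences."""
    A = []
    for (u, v), (x, y) in zip(pixel_pts, plane_pts):
        A.append([u, v, 1, 0, 0, 0, -x * u, -x * v, -x])
        A.append([0, 0, 0, u, v, 1, -y * u, -y * v, -y])
    # The smallest right singular vector of A gives H up to scale.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixel_to_plane(H, u, v):
    """Project one pixel location into global-plane coordinates."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

# Markers at known global-plane coordinates and their pixel locations
# in the first frame (illustrative values).
pixels = [(100, 100), (500, 100), (500, 400), (100, 400)]
plane  = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]
H = fit_homography(pixels, plane)
```

With the homography in hand, any tracked pixel location from that sensor can be converted to a position in the space, e.g. `pixel_to_plane(H, 300, 250)`.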
A system includes sensors and a processor that tracks first and second objects in a space. Upon detecting that a tracked position of the first object is within a threshold distance of a tracked position of the second object, a top-view image of the first object is received from a first sensor. Based on this top-view image, a first descriptor is determined for the first object. The first descriptor is associated with an observable characteristic of the first object. The processor identifies the first object based at least in part upon the first descriptor.
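One common choice of descriptor tied to an observable characteristic is a color histogram of the top-view image; the abstract does not specify the descriptor, so this is a hedged illustration only:

```python
import numpy as np

def color_histogram_descriptor(image, bins=8):
    """Compute a normalized per-channel intensity histogram from a
    top-view image (H x W x 3 uint8 array) as an appearance descriptor."""
    hists = [np.histogram(image[..., c], bins=bins, range=(0, 256))[0]
             for c in range(3)]
    d = np.concatenate(hists).astype(float)
    return d / d.sum()

def identify(query_descriptor, known_descriptors):
    """Return the identifier whose stored descriptor is closest (L2)."""
    return min(known_descriptors,
               key=lambda k: np.linalg.norm(known_descriptors[k] - query_descriptor))

# Two tracked people with distinct shirt colors, seen from above
# (synthetic images for illustration).
red_img  = np.zeros((32, 32, 3), dtype=np.uint8); red_img[..., 0] = 200
blue_img = np.zeros((32, 32, 3), dtype=np.uint8); blue_img[..., 2] = 200
known = {"person-1": color_histogram_descriptor(red_img),
         "person-2": color_histogram_descriptor(blue_img)}
```

After two tracks come within the threshold distance, a fresh top-view image of the ambiguous object is described the same way and matched against the stored descriptors.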
A validation terminal located at a registered location comprises a barcode reader, a memory, and a processor. The memory stores a public key that is paired with a private key linked with the registered location of the validation terminal. The processor is operably coupled to the barcode reader and the memory, and is configured to detect an encrypted barcode that was scanned by the barcode reader from a mobile device that is located at the registered location of the validation terminal. The encrypted barcode is based at least in part upon transaction information associated with products in a digital cart, and the encrypted barcode is encrypted using the private key. The processor is further configured to decrypt the encrypted barcode using the stored public key, and to indicate the transaction is valid in response to decrypting the encrypted barcode using the public key.
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert and users' encryption keys not requiring secrecy
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
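The sign-with-private-key / decrypt-with-public-key flow can be sketched with textbook RSA over tiny fixed primes. This is purely illustrative, not secure, and not the patent's implementation; a real terminal would use a vetted cryptography library:

```python
import hashlib

# Textbook RSA with tiny fixed primes -- illustration only, NOT secure.
P, Q = 1000003, 1000033
N = P * Q
E = 65537                     # public exponent, stored on the terminal
PHI = (P - 1) * (Q - 1)
D = pow(E, -1, PHI)           # private exponent, linked to the registered location

def encode_barcode(cart_info: str) -> int:
    """Mobile-device side: hash the digital cart's transaction info and
    encrypt (sign) the digest with the private key."""
    digest = int.from_bytes(hashlib.sha256(cart_info.encode()).digest(), "big") % N
    return pow(digest, D, N)

def validate(barcode: int, cart_info: str) -> bool:
    """Terminal side: decrypt the scanned barcode with the stored public
    key and compare against a freshly computed digest."""
    digest = int.from_bytes(hashlib.sha256(cart_info.encode()).digest(), "big") % N
    return pow(barcode, E, N) == digest
```

Because only the paired public key can invert the private-key operation, a successful comparison indicates the transaction is valid and originated from the registered location.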
Determining candidate object identities during image tracking
A system includes sensors and a tracking subsystem. The subsystem receives frames of top-view images generated by the sensors. The subsystem tracks first and second objects based on the received frames. The subsystem detects that the first object is within a threshold distance of the second object. In response, the subsystem determines a probability that the first object switched identifiers with the second object and updates candidate lists accordingly for the first and second objects. The updated first candidate list includes a probability that the first object is associated with a first identifier and a probability that the first object is associated with a second identifier.
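The candidate-list bookkeeping described above can be sketched as a probability redistribution: when two tracks pass within the threshold distance, the estimated switch probability mixes identity mass between their lists. The function name and mixing rule are illustrative assumptions:

```python
def update_candidate_lists(list_a, list_b, p_switch):
    """Mix two per-track candidate lists (identifier -> probability) given
    the probability p_switch that the tracks swapped identifiers while
    within the threshold distance. Each updated list still sums to 1."""
    ids = set(list_a) | set(list_b)
    new_a = {i: (1 - p_switch) * list_a.get(i, 0.0) + p_switch * list_b.get(i, 0.0)
             for i in ids}
    new_b = {i: (1 - p_switch) * list_b.get(i, 0.0) + p_switch * list_a.get(i, 0.0)
             for i in ids}
    return new_a, new_b

# Before the close encounter, each track is certain of its own identifier.
a = {"id-1": 1.0}
b = {"id-2": 1.0}
# A 30% estimated chance the two tracks swapped identifiers.
a, b = update_candidate_lists(a, b, p_switch=0.3)
# The first candidate list now carries a probability for both identifiers.
```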
A system includes a sensor and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and detects an event associated with a portion of a person entering a zone adjacent to a rack. The tracking subsystem determines that a first person or a second person may be associated with the event. The subsystem tracks the item associated with the event and calculates a velocity of the item as it is moved through the space. The subsystem identifies, based on the calculated velocity, a frame in which the velocity of the item is less than a threshold velocity. The subsystem determines whether the first person or the second person is nearer the item in the identified frame. If the first person is nearer, the item is assigned to the first person.
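The velocity-based assignment can be sketched as follows: scan the item's per-frame positions, find the first frame where its speed drops below the threshold, and assign the item to the nearer person in that frame. Track representations, names, and the per-frame speed proxy are assumptions for illustration:

```python
import math

def assign_item(item_track, person1_track, person2_track, v_thresh):
    """item_track and the person tracks are lists of per-frame (x, y)
    positions at a constant frame rate. Find the first frame in which the
    item's speed (distance moved per frame) is below v_thresh, then
    assign the item to whichever person is nearer in that frame."""
    for f in range(1, len(item_track)):
        (x0, y0), (x1, y1) = item_track[f - 1], item_track[f]
        speed = math.hypot(x1 - x0, y1 - y0)   # distance per frame
        if speed < v_thresh:
            d1 = math.dist(item_track[f], person1_track[f])
            d2 = math.dist(item_track[f], person2_track[f])
            return "person-1" if d1 < d2 else "person-2"
    return None  # item never came to rest within the buffered frames

# The item moves quickly away from the rack, then stops near person 2.
item = [(0.0, 0.0), (2.0, 0.0), (4.0, 0.0), (4.1, 0.0)]
p1   = [(0.0, 5.0)] * 4
p2   = [(5.0, 1.0)] * 4
```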