An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of an object. A pixel position of a body part of a person is determined in at least a subset of the generated image frames, thereby determining a set of pixel positions of the body part. An aggregated body part position is determined based on the set of pixel positions. If the aggregated body part position is determined to correspond to a position associated with the object, a trigger signal is provided indicating that an interaction event has occurred.
G06V 10/147 - Optical characteristics of the apparatus performing the acquisition or of the illumination devices - Details of sensors, e.g. sensor lenses
G06V 10/24 - Aligning, centring, orientation detection or correction of the image
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
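The aggregation-and-zone check in the abstract above can be sketched in a few lines. This is an illustrative reading only: the median aggregate, the rectangular zone, and all names are assumptions, since the abstract fixes neither an aggregation method nor a zone representation.

```python
import statistics

# Illustrative sketch only: the aggregation method (median), the rectangular
# zone, and all names are assumptions; the abstract does not fix them.
def detect_interaction(pixel_positions, object_zone):
    """pixel_positions: list of (x, y) pixel positions of the tracked body part,
    one per processed frame. object_zone: (x_min, y_min, x_max, y_max) region
    associated with the object. Returns True when a trigger signal would fire."""
    xs = [x for x, _ in pixel_positions]
    ys = [y for _, y in pixel_positions]
    agg_x, agg_y = statistics.median(xs), statistics.median(ys)  # aggregated position
    x_min, y_min, x_max, y_max = object_zone
    return x_min <= agg_x <= x_max and y_min <= agg_y <= y_max
```

A median resists outlier detections from a few bad frames, which is a common reason to aggregate over a set of frames rather than trust any single one.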
2.
System and method for determining an item count in a rack using magnetic sensors
A system includes a longitudinal rack storing a plurality of items, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a pre-selected thickness. Each sensor generates a value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing values generated by the sensors, and a processor configured to determine a position of the shoe/magnet based on the values and determine a count of the items based on the position of the shoe/magnet.
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member into another variable where the form or nature of the sensing member does not constrain the means of conversion; Transducers not specially adapted to a specific variable using electric or magnetic means influencing the value of a current or voltage
A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first person or a second person may be associated with the event. In response to determining that the first or second person may be associated with the event, buffer frames are stored of top-view images generated by the sensor during a time period associated with the event. The tracking subsystem then determines, using at least one of the stored buffer frames and a first action-detection algorithm, whether an action associated with the event was performed by the first person or the second person.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G08B 13/14 - Mechanical actuation by lifting or attempted removal of hand-portable articles
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
A system includes a sensor, a weight sensor, and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor and weight measurements from the weight sensor. The tracking subsystem detects an event associated with an item being removed from a rack in which the weight sensor is installed. The tracking subsystem determines that a first person or a second person may be associated with the event. In response to determining that the first or second person may be associated with the event, buffer frames are stored of top-view images generated by the sensor during a time period associated with the event. The tracking subsystem then determines, using at least one of the stored buffer frames and a first action-detection algorithm, whether an action associated with the event was performed by the first person or the second person.
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in video content
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G08B 13/14 - Mechanical actuation by lifting or attempted removal of hand-portable articles
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
5.
SCALABLE POSITION TRACKING SYSTEM FOR TRACKING POSITION IN LARGE SPACES
A weight sensor includes a plurality of load cells. A first load cell is configured to produce a first electric current based on a force experienced by the first load cell. A second load cell is configured to produce a second electric current based on a force experienced by the second load cell, a third load cell is configured to produce a third electric current based on a force experienced by the third load cell, and a fourth load cell is configured to produce a fourth electric current based on a force experienced by the fourth load cell.
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in the groups, with provisions for indicating, recording or computing price or other quantities dependent on the weight
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
G06Q 20/20 - Point-of-sale [POS] network systems
G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices using wireless devices
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to receive the frame, to determine a pixel location for a first person, and to determine that the first person is within a predefined zone associated with the rack. The tracking system is further configured to identify a plurality of items in a digital cart associated with the first person, to identify an item from the digital cart, and to remove the identified item from the digital cart.
A system includes a first sensor configured to generate images of at least a first portion of a space. A processor of the system is configured to determine a position of a possible object in the space based on the generated images.
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in the groups, with provisions for indicating, recording or computing price or other quantities dependent on the weight, using electromechanical or electronic computing means using only electronic computing means
G01G 19/52 - Weighing apparatus combined with other objects, e.g. with furniture
A system includes a longitudinal rack storing a plurality of items, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a thickness of an item stored in the rack. Each sensor generates a voltage value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing voltage values generated by the sensors, and a processor configured to monitor voltage values generated by the sensors, detect that a particular sensor has generated a maximum voltage value and determine a number of items stored in the rack based on a particular number of items corresponding to the particular sensor.
A47F 1/12 - Containers with devices for dispensing articles, the dispensing being effected from the side of a substantially horizontal stack
G01R 15/20 - Adaptations providing voltage or current isolation, e.g. adaptations for high-voltage or heavy-current networks, using galvano-magnetic devices, e.g. Hall-effect devices
G06Q 10/0875 - Inventory or stock management, e.g. order filling, procurement or balancing against orders - Itemisation or classification of parts, supplies or services, e.g. bills of materials
H05K 1/18 - Printed circuits structurally associated with non-printed electric components
G01D 5/14 - Mechanical means for transferring the output of a sensing member; Means for converting the output of a sensing member into another variable where the form or nature of the sensing member does not constrain the means of conversion; Transducers not specially adapted to a specific variable using electric or magnetic means influencing the value of a current or voltage
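The peak-sensor-to-count mapping in the magnetic-rack abstract above reduces to a few lines. This is a hedged sketch: the function name, the one-item-per-sensor-slot mapping, and the assumption that sensor index 0 sits at the dispensing front of the rack are all guesses, not taken from the abstract.

```python
def count_items(sensor_values, items_per_slot=1):
    """sensor_values[i] is the reading of the i-th sensor along the rack; the
    sensor closest to the magnet on the pusher shoe reads the maximum value.
    Because sensor spacing equals one item thickness, the peak sensor's index
    maps to the number of items between the shoe and the front of the rack.
    Assumes sensor 0 sits at the dispensing front; orientation is a guess."""
    peak_index = max(range(len(sensor_values)), key=lambda i: sensor_values[i])
    return peak_index * items_per_slot
```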
9.
ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A device is configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
A system detects a fuel dispensing operation that indicates fuel is being dispensed from the fuel dispensing terminal. The system determines an identifier value associated with a volume of fuel dispensed from the fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with the fuel dispensed from the fuel dispensing terminal by dividing the determined identifier value by a unit parameter. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system communicates an electronic signal to the fuel dispensing terminal that instructs the fuel dispensing terminal to stop dispensing fuel.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing the determined volume for fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
B67D 7/04 - Apparatus or devices for transferring liquids from bulk storage containers or reservoirs into vehicles or into portable containers, e.g. for retail sale, for transferring fuels, lubricants or mixtures thereof
B67D 7/22 - Arrangements of indicators or registers
12.
ANOMALY DETECTION AND CONTROLLING OPERATIONS OF FUEL DISPENSING TERMINAL DURING OPERATIONS
A system detects a fuel dispensing operation that indicates fuel is being dispensed from the fuel dispensing terminal. The system determines an identifier value associated with a volume of fuel dispensed from the fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with the fuel dispensed from the fuel dispensing terminal by dividing the determined identifier value by a unit parameter. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system communicates an electronic signal to the fuel dispensing terminal that instructs the fuel dispensing terminal to stop dispensing fuel.
B67D 7/34 - Means for preventing unauthorised delivery of liquid
G05B 19/042 - Programme control other than numerical control, i.e. in sequence controllers or logic controllers, using digital processors
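The flow-rate comparison in the fuel-dispensing abstracts above is a one-line division plus a threshold check. A minimal sketch, with assumed names and string signals that are not taken from the abstracts:

```python
def check_dispense_rate(dispensed_volume, elapsed_seconds, threshold_rate):
    """Measured flow rate = volume / time. A rate below the threshold is treated
    as anomalous and yields a 'stop' signal for the terminal; the function name
    and string signals are illustrative, not from the abstracts."""
    measured_rate = dispensed_volume / elapsed_seconds
    return "stop" if measured_rate < threshold_rate else "ok"
```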
13.
System and method for identifying unmoved items on a platform during item identification
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a first image of the platform is captured. A first item identifier of the first item is identified and stored in a memory. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a second image of the platform is captured. The second image is compared with the first image. Upon determining that the first item depicted in the second image overlaps with the first item depicted in the first image and the overlap equals or exceeds a threshold, the first item identifier is assigned to the first item depicted in the second image. A second item identifier of the second item is identified, and information associated with the first item identifier and the second item identifier is displayed on a user interface device.
H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
G06V 20/52 - Surveillance or monitoring of activities, e.g. for recognising suspicious objects
G06V 20/70 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements - Labelling of scene content, e.g. deriving syntactic or semantic representations
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering techniques; Detection of occlusion
G06V 10/75 - Matching of image or video patterns; Proximity measures in feature spaces using context analysis; Selection of dictionaries
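The overlap test for unmoved items described in the abstract above can be sketched with intersection-over-union, one common overlap measure; the abstract only requires some overlap metric against a threshold, so both the metric and the names here are assumptions.

```python
def overlap_ratio(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as
    (x_min, y_min, x_max, y_max). IoU is one common overlap measure; the
    abstract only requires some overlap metric compared against a threshold."""
    ix_min, iy_min = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix_max, iy_max = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix_max - ix_min) * max(0, iy_max - iy_min)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def carry_over_identifier(first_box, second_box, first_id, threshold=0.5):
    """Reassigns the stored identifier when the first item has not moved enough
    between the two captures; the 0.5 threshold is illustrative."""
    return first_id if overlap_ratio(first_box, second_box) >= threshold else None
```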
14.
SYSTEM AND METHOD FOR CAMERA RE-CALIBRATION BASED ON AN UPDATED HOMOGRAPHY
A device for object tracking receives an image from a camera, where the image shows a set of points on a calibration board placed on a platform. The device determines a pixel location array that comprises pixel locations associated with the points in the image. The device determines, by applying a first homography to the pixel location array, a calculated location array identifying calculated physical location coordinates of the set of points in the global plane. The device determines that the difference between a reference location array and the calculated location array is more than a threshold value. In response, the device determines that the camera and/or the platform has moved from a respective initial location when the first homography was determined. The device determines a second homography by multiplying an inverse of the pixel location array by the reference location array and calibrates the camera using the second homography.
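The re-calibration step above forms the second homography by multiplying an inverse of the pixel location array by the reference location array; with more calibration points than unknowns, a pseudo-inverse gives the least-squares analogue. A sketch under the assumption that both arrays hold N x 3 homogeneous coordinates (the abstract does not state the array shapes):

```python
import numpy as np

def recalibrate(pixel_locations, reference_locations):
    """pixel_locations: N x 3 homogeneous pixel coordinates of the calibration
    points; reference_locations: N x 3 homogeneous physical coordinates in the
    global plane. Shapes and names are assumptions; the abstract only specifies
    multiplying an inverse of the pixel array by the reference array."""
    P = np.asarray(pixel_locations, dtype=float)
    R = np.asarray(reference_locations, dtype=float)
    return np.linalg.pinv(P) @ R  # H such that P @ H approximates R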
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images of the first item are captured. For each image of the first item, a cropped image is generated including a bounding box around the first item depicted in the image. For each cropped image, a ratio is calculated between a portion of a total area within the bounding box occupied by the first item and the total area. If the ratio equals or exceeds a minimum threshold, an item identifier associated with the first item is identified based on the cropped image. On the other hand, if the ratio is below the threshold, the cropped image is discarded. A particular item identifier is selected from a set of cropped images that were not discarded.
G06V 20/40 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans le contenu vidéo
G06V 20/70 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène Étiquetage du contenu de scène, p.ex. en tirant des représentations syntaxiques ou sémantiques
16.
ANOMALY DETECTION DURING FUEL DISPENSING OPERATIONS USING FUEL VOLUME DETERMINATIONS
A system determines that a fuel dispensing operation may be anomalous. In response, the system accesses fuel inventory data that indicates fuel levels in a fuel tank within a threshold period. The system determines a measure amount of fuel that left the fuel tank based on the fuel inventory data. The system determines a calculated amount of dispensed fuel associated with one or more fuel dispensing operations within the threshold period. The system compares the measured amount of fuel that left the fuel tank with the calculated amount of dispensed fuel. The system determines that the measured amount of fuel that left the fuel tank is more than the calculated amount of dispensed fuel. In response, the system confirms that the fuel dispensing operation is anomalous and communicates an electronic signal to the fuel dispensing terminal that causes the fuel dispensing terminal to stop dispensing fuel.
A system detects a fuel dispensing operation that indicates fuel is being dispensed from the fuel dispensing terminal. The system determines an identifier value associated with a volume of fuel dispensed from the fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with the fuel dispensed from the fuel dispensing terminal by dividing the determined identifier value by a unit parameter. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system communicates an electronic signal to the fuel dispensing terminal that instructs the fuel dispensing terminal to stop dispensing fuel.
B67D 7/34 - Moyens pour empêcher un débit de liquide non autorisé
G05B 19/042 - Commande à programme autre que la commande numérique, c.à d. dans des automatismes à séquence ou dans des automates à logique utilisant des processeurs numériques
18.
System and method for detecting a trigger event for identification of an item
A reference depth image of a platform is captured using a three-dimensional (3D) sensor. A plurality of secondary depth images are captured, wherein for each secondary depth image, a depth difference parameter is determined by comparing the secondary depth image and the reference depth image. In response to determining that the depth difference parameter has stayed constant at a value higher than zero for a duration of a first time interval, it is determined that an item has been placed on the platform.
A device captures an image of a first item and generates a first encoded vector for the image. The device identifies a set of items that have at least one attribute in common with the first item. The device determines the identity of the first item based at least on attributes of the first item. The device determines that a confidence score associated with the identity of the first item is less than a threshold percentage. In response, the device determines a height of the first item. The device identifies item(s) with average heights within a threshold range from the height of the first item. The device compares the first encoded vector with a second encoded vector associated with a second item from the identified item(s). If the first encoded vector corresponds to the second encoded vector, the device determines that the first item corresponds to the second item.
G06T 7/55 - Récupération de la profondeur ou de la forme à partir de plusieurs images
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/40 - Extraction de caractéristiques d’images ou de vidéos
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
20.
System and method for identifying moved items on a platform during item identification
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a plurality of first images are captured of the first item and a plurality of cropped first images are generated based on the first images. A first item identifier associated with the first item is identified based on the cropped first images. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a plurality of second images of the first item are captured and a plurality of cropped second images are generated from the second images. In response to determining that the cropped first images match with the cropped second images, the first item identifier is assigned to the first item depicted in the second images.
H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c. à d. systèmes dans lesquels le signal vidéo n'est pas diffusé
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G06V 20/70 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène Étiquetage du contenu de scène, p.ex. en tirant des représentations syntaxiques ou sémantiques
G06V 10/26 - Segmentation de formes dans le champ d’image; Découpage ou fusion d’éléments d’image visant à établir la région de motif, p.ex. techniques de regroupement; Détection d’occlusion
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
21.
System and method for selecting an item from a plurality of identified items by filtering out back images of the items
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the first images. An item identifier is identified based on each cropped image, wherein each item identifier is associated with a numerical similarity value. Each cropped image is further tagged as a front image or a back image. A particular item identifier identified for a corresponding cropped image tagged as a front image is selected and associated with the first item. An indicator of the particular item identifier is displayed on a user interface device.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on a platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on a platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
An item tracking system comprises a plurality of cameras, a memory storing associations between item identifiers of respective items, and a processor configured to capture a plurality of first images of a first item and identify a first item identifier of the first item based on the first images. The processor captures a plurality of second images of a second item, generates cropped image of the second item from each second image, and identifies an item identifier for each cropped image. Based on the associations stored in the memory, the processor determines that an association exists between the first item identifier of the first item and a second item identifier, and assigns the second item identifier to the second item when at least one of the item identifiers corresponding to the cropped images is the second item identifier.
H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c. à d. systèmes dans lesquels le signal vidéo n'est pas diffusé
G06V 10/26 - Segmentation de formes dans le champ d’image; Découpage ou fusion d’éléments d’image visant à établir la région de motif, p.ex. techniques de regroupement; Détection d’occlusion
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G06V 20/70 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène Étiquetage du contenu de scène, p.ex. en tirant des représentations syntaxiques ou sémantiques
25.
SYSTEM AND METHOD FOR ITEM IDENTIFICATION USING CONTAINER-BASED CLASSIFICATION
A device detects a triggering event that corresponds to a placement of an item on a platform. In response, the device captures an image of the item and generates a first encoded vector for the image. The first encoded vector describes one or more attributes of the item. The device determines that the item is associated with a first container category based on the one or more attributes of the item. The device identifies one or more items that have been identified as having placed inside a container associated with the first container category. The device displays a list of item options that comprises the one or more items on a graphical user interface (GUI). The device receives a selection of a first item from along the list of item options and identifies the first item as being inside the container.
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/12 - Acquisition d’images - Détails des dispositions d’acquisition; Leurs détails structurels
G06V 10/22 - Prétraitement de l’image par la sélection d’une région spécifique contenant ou référençant une forme; Localisation ou traitement de régions spécifiques visant à guider la détection ou la reconnaissance
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the images. For each cropped image, a first encoded vector is generated and compared to encoded vectors in an encoded vector library that are tagged as a front image. Based on the comparison, a second encoded vector is selected from the encoded vector library that most closely matches with the first encoded vector. An item identifier is identified that is associated with the second encoded vector. A particular item identifier is selected that is identified for a particular cropped image.
H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c. à d. systèmes dans lesquels le signal vidéo n'est pas diffusé
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G06V 20/70 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène Étiquetage du contenu de scène, p.ex. en tirant des représentations syntaxiques ou sémantiques
G06V 10/26 - Segmentation de formes dans le champ d’image; Découpage ou fusion d’éléments d’image visant à établir la région de motif, p.ex. techniques de regroupement; Détection d’occlusion
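The front-image matching described in the entry above amounts to a nearest-neighbour lookup over an encoded vector library. The cosine similarity measure and the dictionary-shaped library below are illustrative assumptions; the abstract does not specify the comparison metric:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def closest_match(query, library):
    """Return (item_id, similarity) for the library vector that best matches the query.
    `library` maps item identifiers to their front-image encoded vectors."""
    best_id, best_sim = None, -1.0
    for item_id, vec in library.items():
        sim = cosine_similarity(query, vec)
        if sim > best_sim:
            best_id, best_sim = item_id, sim
    return best_id, best_sim
```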
27.
SYSTEM AND METHOD FOR SPACE SEARCH REDUCTION IN IDENTIFYING ITEMS FROM IMAGES VIA ITEM HEIGHT
A device detects a triggering event that corresponds to a placement of a first item on a platform. In response, the device captures an image of the first item and generates a first encoded vector for the image. The first encoded vector describes one or more attributes of the first item. The device determines a height of the first item. The device identifies one or more items in an encoded vector library that are associated with average heights within a threshold range from the determined height of the first item. The device compares the first encoded vector with a second encoded vector associated with a second item from among the one or more items. The device determines that the first encoded vector corresponds to the second encoded vector. In response, the device determines that the first item corresponds to the second item.
G06V 10/75 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques utilisant l’analyse de contexte; Sélection des dictionnaires
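The height-based search-space reduction in the entry above can be illustrated with a simple filter over the library before any vector comparison; the library layout and field name are hypothetical:

```python
def candidates_by_height(measured_height, library, tolerance):
    """Keep only library entries whose average item height is within
    +/- tolerance of the measured height of the item on the platform."""
    return [item_id for item_id, entry in library.items()
            if abs(entry["avg_height"] - measured_height) <= tolerance]
```

Only the surviving candidates would then have their encoded vectors compared with the first encoded vector, shrinking the search space.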
In response to detecting a triggering event corresponding to placement of a first item on a platform, a plurality of images are captured of the first item and a plurality of cropped images are generated based on the images. An item identifier is identified based on each cropped image, wherein each item identifier is associated with a numerical similarity value. The item identifiers associated with the highest and next highest similarity values are selected. When a difference between the highest and the next highest similarity values equals or exceeds a threshold, the item identifier associated with the highest similarity value is associated with the first item placed on the platform. An indicator of the item identifier is displayed on a user interface device.
G06F 16/583 - Recherche caractérisée par l’utilisation de métadonnées, p.ex. de métadonnées ne provenant pas du contenu ou de métadonnées générées manuellement utilisant des métadonnées provenant automatiquement du contenu
G06V 10/46 - Descripteurs pour la forme, descripteurs liés au contour ou aux points, p.ex. transformation de caractéristiques visuelles invariante à l’échelle [SIFT] ou sacs de mots [BoW]; Caractéristiques régionales saillantes
29.
SYSTEM AND METHOD FOR IDENTIFYING AN ITEM BASED ON INTERACTION HISTORY OF A USER
In response to detecting a first triggering event corresponding to placement of a first item on a platform, a plurality of first images are captured of the first item. An item identifier associated with the first item is identified based on the first images and assigned to the first item. In response to detecting a second triggering event corresponding to placement of a second item on the platform, a plurality of second images are captured of the second item, a plurality of cropped images are generated based on the second images, and a plurality of item identifiers are determined for the second item based on the cropped images. When a process for selecting a particular item identifier from the plurality of item identifiers fails, a second item identifier is assigned to the second item based on an association between the first item identifier and the second item identifier.
H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c. à d. systèmes dans lesquels le signal vidéo n'est pas diffusé
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G06V 20/70 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène Étiquetage du contenu de scène, p.ex. en tirant des représentations syntaxiques ou sémantiques
G06V 10/26 - Segmentation de formes dans le champ d’image; Découpage ou fusion d’éléments d’image visant à établir la région de motif, p.ex. techniques de regroupement; Détection d’occlusion
30.
SYSTEM AND METHOD FOR AGGREGATING METADATA FOR ITEM IDENTIFICATION USING DIGITAL IMAGE PROCESSING
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item with the aggregated first value and second value, where the aggregated first value and second value represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
G06V 20/00 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène
G06T 7/90 - Détermination de caractéristiques de couleur
G01G 19/414 - Appareils ou méthodes de pesée adaptés à des fins particulières non prévues dans les groupes avec dispositions pour indiquer, enregistrer ou calculer un prix ou d'autres quantités dépendant du poids utilisant des moyens de calcul électromécaniques ou électroniques utilisant uniquement des moyens de calcul électroniques
G06T 7/62 - Analyse des attributs géométriques de la superficie, du périmètre, du diamètre ou du volume
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06F 18/22 - Critères d'appariement, p.ex. mesures de proximité
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
31.
TOOL FOR GENERATING A VIRTUAL STORE THAT EMULATES A PHYSICAL STORE
An apparatus to create a virtual layout of a virtual store to emulate a physical layout of a physical store includes a memory and a processor. The processor receives a physical position and orientation associated with a physical rack located in the physical store. In response, the processor places a virtual rack at a virtual position and orientation on the virtual layout, to represent the physical position and orientation of the physical rack on the physical layout. The processor receives virtual items associated with physical items located on physical shelves of the physical rack. In response, the processor places the virtual items on virtual shelves of the virtual rack, the virtual shelves representing the physical shelves of the physical rack. The processor assigns a rack camera, which captures video that includes the physical rack, to the virtual rack and stores the virtual layout in the memory.
A system includes a longitudinal rack storing a plurality of packs of cigarettes, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a thickness of a pack stored in the rack. Each sensor generates a voltage value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing voltage values generated by the sensors, and a processor configured to monitor voltage values generated by the sensors, detect that a particular sensor has generated a maximum voltage value and determine a number of packs stored in the rack based on a particular number of packs corresponding to the particular sensor.
G06Q 10/0875 - Gestion d’inventaires ou de stocks, p.ex. exécution des commandes, approvisionnement ou régularisation par rapport aux commandes Énumération ou classification des pièces, des fournitures ou des services, p.ex. nomenclatures
G01R 15/20 - Adaptations fournissant une isolation en tension ou en courant, p.ex. adaptations pour les réseaux à haute tension ou à courant fort utilisant des dispositifs galvano-magnétiques, p.ex. des dispositifs à effet Hall
H05K 1/18 - Circuits imprimés associés structurellement à des composants électriques non imprimés
A47F 1/12 - Récipients avec dispositifs pour la distribution d'articles la distribution se faisant par le côté d'un tas sensiblement horizontal
G01D 5/14 - Moyens mécaniques pour le transfert de la grandeur de sortie d'un organe sensible; Moyens pour convertir la grandeur de sortie d'un organe sensible en une autre variable, lorsque la forme ou la nature de l'organe sensible n'imposent pas un moyen de conversion déterminé; Transducteurs non spécialement adaptés à une variable particulière utilisant des moyens électriques ou magnétiques influençant la valeur d'un courant ou d'une tension
33.
System and method for determining an item count in a rack using magnetic sensors
A system includes a longitudinal rack storing a plurality of packs of cigarettes, a shoe movably attached to the rack, a magnet coupled to the shoe and a longitudinal circuit board arranged along the length of the rack. The circuit board includes an array of sensors along the length of the rack, wherein spacing between each pair of sensors equals a pre-selected thickness. Each sensor generates a value depending on a position of the magnet in relation to the sensor. The circuit board further includes a memory storing values generated by the sensors, and a processor configured to determine a position of the shoe/magnet based on the values and determine a pack count of the packs based on the position of the shoe/magnet.
G01D 5/14 - Moyens mécaniques pour le transfert de la grandeur de sortie d'un organe sensible; Moyens pour convertir la grandeur de sortie d'un organe sensible en une autre variable, lorsque la forme ou la nature de l'organe sensible n'imposent pas un moyen de conversion déterminé; Transducteurs non spécialement adaptés à une variable particulière utilisant des moyens électriques ou magnétiques influençant la valeur d'un courant ou d'une tension
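As a rough sketch of the voltage-peak logic in the two magnetic-sensor entries above, assuming one sensor per pack thickness and that the index of the sensor nearest the magnet maps directly to a pack count (the exact index-to-count mapping is a calibration detail the abstracts leave open):

```python
def pack_count(voltages, packs_per_step=1):
    """voltages: readings from the sensor array along the rack, index 0 at the
    front. The sensor closest to the shoe's magnet reads the peak voltage, and
    its position along the array indicates how many packs remain."""
    peak_index = max(range(len(voltages)), key=lambda i: voltages[i])
    return peak_index * packs_per_step
```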
A device is configured to receive a data request that includes an encrypted data element. The device is further configured to identify a data source device associated with the data request, to identify a first encryption key associated with the data source device, and to decrypt the encrypted data element using the first encryption key. The device is further configured to identify a first data processor device associated with receiving the data request, to identify a second encryption key associated with the first data processor device, wherein the second encryption key is different from the first encryption key, and to re-encrypt the decrypted data element. The device is further configured to identify routing instructions associated with the first data processor device and to send the re-encrypted data element to the first data processor device in accordance with the routing instructions.
A data prediction subsystem receives event data indicating amounts of items removed from locations over a previous period of time. An event probability is determined based at least in part on a number of consecutive days without detected item removal events for a first item at a first location and an anticipated item removal amount per day. After determining that the event probability is less than a threshold value, an updated status is determined for the first item at the first location. The updated status is an empty status indicating that the first item is not believed to be present at the first location. Based at least in part on the updated status for the first item at the first location, a prediction value is determined corresponding to a recommended amount of the first item to request for a future time.
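One way to realize the empty-status inference above is to treat daily removals as independent Poisson events and ask how likely a quiet streak of that length would be if the item were actually in stock. The Poisson assumption and the default threshold are illustrative, not taken from the abstract:

```python
import math

def zero_run_probability(expected_per_day, zero_days):
    """Probability of `zero_days` consecutive days with no removals,
    assuming independent Poisson days with rate `expected_per_day`."""
    return math.exp(-expected_per_day * zero_days)

def infer_empty(expected_per_day, zero_days, threshold=0.05):
    """Flag the slot as empty when the observed quiet streak is too unlikely."""
    return zero_run_probability(expected_per_day, zero_days) < threshold
```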
A data prediction subsystem receives event data indicating amounts of items removed from locations over a previous period of time. For a first day of the event data having zero events or an empty status indicating that a first item is not believed to be present at a first location, longitudinal and cross-sectional components are determined. An anticipated event value for the first item at the first location is determined using the longitudinal component and the cross-sectional component. Based at least in part on the anticipated event value, a prediction value is determined that corresponds to a recommended amount of the first item to request at a future time.
A data prediction subsystem stores hourly event potential data indicating an expected amount of removal events as a function of time of day. Based at least in part on event data, it is determined that a first item associated with a first location has an empty status at a time after a start of a day. For the day, an anticipated event value is determined for the first item at the first location. Using the anticipated event value and the hourly event potential data, a time-adjusted event value is determined. Based at least in part on the time-adjusted event value, a prediction value is determined that corresponds to a recommended amount of the first item to request for a future time.
G06Q 10/06 - Ressources, gestion de tâches, des ressources humaines ou de projets; Planification d’entreprise ou d’organisation; Modélisation d’entreprise ou d’organisation
G06N 5/02 - Représentation de la connaissance; Représentation symbolique
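The time adjustment using hourly event potential data in the entry above could be sketched as follows; the direction of the scaling (down-weighting a day whose slot went empty early) is an assumed interpretation of the abstract:

```python
def time_adjusted_value(anticipated_daily, hourly_potential, empty_hour):
    """Scale a full-day anticipated removal count down to the share of the
    day's expected activity that occurred before the slot went empty.
    hourly_potential: expected fraction of removals in each hour (sums to 1.0)."""
    observed_fraction = sum(hourly_potential[:empty_hour])
    return anticipated_daily * observed_fraction
```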
39.
Proactive request communication system with improved data prediction using event-to-status transformation
A data prediction subsystem stores event data indicating amounts of items removed from locations over a previous period of time and event-to-status transition rules for each location. An event is detected at a first location. The detected event is associated with a change in status of a first item. Based on the detected event and the event-to-status transition rules, an anticipated item status is determined for the first item, indicating whether the first item is believed to be present at the first location at a time during the previous period of time of the event data. Based at least in part on the anticipated item status for the first item, a prediction value is determined that corresponds to a recommended amount of the first item to request for a future time.
A system determines an interaction period during which a fuel dispensing operation is performed at a fuel dispensing terminal. The system determines a measured volume per unit time parameter associated with fuel dispensed from the fuel dispensing terminal by dividing the determined volume of dispensed fuel by the interaction period. The system compares the measured volume per unit time parameter with a threshold volume per unit time parameter. In response to determining that the measured volume per unit time parameter is less than the threshold volume per unit time parameter, the system retrieves a video feed that shows the fuel dispensing terminal during the fuel dispensing operation. The system creates a file for the fuel dispensing operation. The system stores the video feed in the created file.
B67D 7/14 - Aménagements des dispositifs pour commander, indiquer, mesurer ou enregistrer la quantité ou le prix du liquide transféré répondant à l'insertion d'information de programmes enregistrés, p.ex. sur cartes perforées
B67D 7/30 - Aménagements des dispositifs pour commander, indiquer, mesurer ou enregistrer la quantité ou le prix du liquide transféré avec des moyens pour prédéterminer la quantité de liquide à transférer
41.
SYSTEM AND METHOD FOR DIAGNOSTIC ANALYSIS OF A TOILET OVER TIME INTERVALS
A system for diagnostic analysis of a toilet over time intervals comprises a processor operable to receive a distance measurement from a sensor over a network. The processor is operable to determine an instance of a decrease in a water level in a toilet tank based on a comparison of the received distance measurement to a setpoint and to determine a plurality of instances of the decrease in the water level within a period of time. The processor is operable to calculate a ratio of the number of determined instances of the decrease in the water level to a number of instances in which a door changes from a first position to a second position. The processor is operable to compare the calculated ratio to a threshold ratio and to send an alert to a user device when the calculated ratio is less than the threshold ratio.
G01F 23/00 - Indication ou mesure du niveau des liquides ou des matériaux solides fluents, p.ex. indication en fonction du volume ou indication au moyen d'un signal d'alarme
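The ratio check in the toilet-diagnostics entry above reduces to a small computation over two event counts; the function names below are hypothetical:

```python
def flush_to_entry_ratio(flush_count, door_change_count):
    """Ratio of detected tank-level drops (flushes) to door-state changes.
    Returns None when no door events were observed, to avoid division by zero."""
    if door_change_count == 0:
        return None
    return flush_count / door_change_count

def needs_alert(flush_count, door_change_count, threshold_ratio):
    """Alert when flushes per entry fall below the threshold ratio."""
    ratio = flush_to_entry_ratio(flush_count, door_change_count)
    return ratio is not None and ratio < threshold_ratio
```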
A system includes first and second sensors, and a computing system. The first sensor measures a first property of a first piece of equipment, and the second sensor measures a second property of a second piece of equipment. The computing system includes a processor and memory, which stores a condition that depends on both the first property and the second property. Satisfaction of the condition indicates that maintenance of the first piece of equipment should be prioritized over maintenance of the second piece of equipment. The processor receives the measured first property and the measured second property. In response to determining, based on the measured first and second properties, that the condition is satisfied, the processor transmits an alert for display on a user device. The alert indicates that maintenance of the first piece of equipment has a higher priority than maintenance of the second piece of equipment.
G06Q 10/06 - Ressources, gestion de tâches, des ressources humaines ou de projets; Planification d’entreprise ou d’organisation; Modélisation d’entreprise ou d’organisation
G06Q 10/0631 - Planification, affectation, distribution ou ordonnancement de ressources d’entreprises ou d’organisations
G06Q 10/20 - Administration de la réparation ou de la maintenance des produits
43.
CUP DISPENSER SENSOR FOR AUTOMATICALLY GENERATING REFILL ALERTS
A system for providing a measure of a number of cups housed within a cup dispenser includes a sensor and a computing system. The cup dispenser includes a body configured to house a stack of cups, and a plunger configured to engage a first cup of the stack of cups and to bias the stack of cups toward a discharge opening defined by a first end of the body. The sensor is coupled to the plunger and is configured to measure a distance from the plunger to a second end of the body, and to transmit the measured distance across a network. The computing system receives the measured distance from the network, and determines, based on the measured distance, the measure of the number of cups housed within the cup dispenser. In response to determining that the number of cups is less than a threshold, the system transmits an alert.
G07F 13/10 - Appareils déclenchés par pièces de monnaie pour commander la distribution de fluides, de produits semi-liquides ou de produits granuleux contenus dans des réservoirs avec en même temps distribution de récipients, p.ex. tasse ou autres articles
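The distance-to-count conversion for the cup dispenser can be sketched as below, assuming the plunger shifts by a fixed offset per nested cup and that the plunger sits nearest the discharge end when the dispenser is empty (calibration details the abstract leaves open):

```python
def cup_count(measured_distance, empty_distance, per_cup_offset):
    """Estimate cups remaining from the plunger-to-sensor-end distance.
    empty_distance: reading when no cups are loaded (plunger fully forward).
    per_cup_offset: distance one added cup pushes the plunger back."""
    return max(0, round((empty_distance - measured_distance) / per_cup_offset))
```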
A system includes a sensor and a computing system. The sensor is coupled to a syrup line of the beverage dispenser and is configured to measure a pressure within the line. The syrup line is coupled at a first end to a syrup bag and at a second end to a pump configured to pump syrup from the syrup bag to an outlet of the beverage dispenser. The pump is operated using pressurized gas and generates a pressure corresponding to the pressure of the gas within the syrup line when both the syrup bag is full and the outlet of the beverage dispenser is closed. The sensor transmits a measured pressure to the computing system. The computing system determines that the measured pressure is less than a threshold pressure. In response, the processor transmits an alert for display on a user device, which identifies the syrup bag for replacement.
A system for measuring a fill level of a trash can comprises a processor operable to receive a distance measurement from a network, wherein a sensor communicatively coupled to the processor through the network is operable to determine the distance measurement. The processor is operable to calculate a percentage of waste in the trash can based on the received distance measurement and a difference between a first setpoint and a second setpoint. The processor is operable to determine a threshold for a first period of time based on entity information. The processor is operable to compare the percentage of waste in the trash can to the threshold for the first period of time and to send an alert for display on a user device when the percentage of waste is greater than the threshold for the first period of time.
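The fill-percentage computation from the two setpoints in the trash-can entry above can be written directly; the clamping to [0, 100] is an added safeguard, not stated in the abstract:

```python
def fill_percentage(distance, empty_setpoint, full_setpoint):
    """Percentage of waste from a top-mounted distance sensor.
    empty_setpoint: reading when the can is empty (sensor to bottom).
    full_setpoint: reading when the can is full."""
    span = empty_setpoint - full_setpoint
    pct = (empty_setpoint - distance) / span * 100.0
    return max(0.0, min(100.0, pct))
```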
A system for determining an ambient concentration of compositions for bathroom cleaning comprises a processor operable to receive a concentration measurement from a sensor over a network within a first period of time. The processor is operable to compare the received concentration measurement to a first threshold and to a second threshold greater than the first threshold. The processor is operable to instruct a memory communicatively coupled to the processor to store an indication that a bathroom was cleaned in response to a determination that the received concentration measurement is greater than the first threshold and less than the second threshold. The processor is operable to send an alert for display on a user device indicating either that the sensor has been tampered with or that a spill event has occurred in response to a determination that the received concentration measurement is greater than the second threshold.
G01N 27/404 - Cellules avec l'anode, la cathode et l'électrolyte de la cellule du même côté d'une membrane perméable qui les sépare du fluide de l'échantillon
G01N 33/00 - Recherche ou analyse des matériaux par des méthodes spécifiques non couvertes par les groupes
47.
SYSTEMS AND METHODS FOR MONITORING FOOD TEMPERATURES
A system includes one or more memory units and a processor. The processor is configured to receive, from a food temperature probe, a first temperature associated with a first food item. The processor is further configured to receive, from the food temperature probe, a second temperature associated with a second food item. The processor is further configured to receive, from the food temperature probe, a third temperature that was measured by the food temperature probe after measuring the first temperature but before measuring the second temperature, the third temperature associated with a cleaning of the food temperature probe. The processor is further configured to send an alert for display on a user device when the third temperature is greater than a cleaning threshold temperature.
A system includes a door sensor that provides a status of whether a door of a refrigeration system is open or closed, a temperature sensor that measures a temperature of a food compartment of the refrigeration system, a power sensor that measures an amount of power consumed by the refrigeration system, and a compressor sensor that provides acoustic data about a compressor of the refrigeration system. The system further includes a remote computing system configured to send an alert indicating that the refrigeration system needs servicing when the temperature of the food compartment of the refrigeration system is determined to be above a predetermined temperature while: the door of the refrigeration system is closed, the amount of power consumed by the refrigeration system is within a predetermined power range, and acoustic signals of the compressor of the refrigeration system are within a predetermined acoustic range.
A system includes a local device and a remote computing system. The local device is disposed in a toilet paper dispenser and includes a sensor and a first processor. The sensor is configured to measure a distance to a toilet paper roll. The first processor is configured to calculate, using the measured distance to the toilet paper roll, a percentage of toilet paper remaining on the toilet paper roll. The first processor is further configured to transmit, when it is determined that the percentage of toilet paper remaining is less than a minimum threshold, sensor data across a wireless communications network. The remote computing system includes a second processor configured to receive the sensor data and send an alert for display on a user device in response to receiving the sensor data.
An accessory mounting apparatus and system are provided. The system includes a structural framing, an accessory mounting apparatus, and an accessory. The accessory mounting apparatus includes a first component, a second component, and a connector arm. The first component includes a first base, a first rail connector that extends from the first base, and one or more first connectors. The second component includes a second base and a second rail connector that extends from the second base. The connector arm includes a second connector and a mounting arm, and the second connector is coupled to the one or more first connectors. Further, the first rail connector and the second rail connector are configured to engage a first rail slot and a second rail slot of the structural framing, respectively. Additionally, the accessory is coupled to the mounting arm.
F16M 11/06 - Moyens pour la fixation des appareils; Moyens permettant le réglage des appareils par rapport au banc permettant la rotation
F16M 13/02 - Autres supports ou appuis pour positionner les appareils ou les objets; Moyens pour maintenir en position les appareils ou objets tenus à la main pour être portés par un autre objet ou lui être fixé, p.ex. à un arbre, une grille, un châssis de fenêtre, une bicyclette
B33Y 80/00 - Produits obtenus par fabrication additive
51.
PROACTIVE REQUEST COMMUNICATION SYSTEM WITH IMPROVED DATA PREDICTION USING ARTIFICIAL INTELLIGENCE
An event tracking subsystem detects events for the removal of items at a plurality of locations. A data prediction subsystem receives event data based on the detected events that indicates an amount of an item removed from each of the plurality of locations over a previous period of time. The data prediction subsystem determines a prediction data value for each item at each of the plurality of locations. The prediction data is used to proactively request items with improved communication and computational efficiency.
A data prediction subsystem receives event data indicating an amount of an item removed from each of a plurality of locations over a previous period of time. For each location, prediction data is determined using the event data. The prediction data includes, for each day over a future period of time, a non-integer value indicating an anticipated amount of the item that will be removed from the location. An improved rounding process is used to round the prediction value for each day. The resulting prediction data is used to proactively request items with improved communication and computational efficiency.
G06Q 10/08 - Logistique, p.ex. entreposage, chargement ou distribution; Gestion d’inventaires ou de stocks
G06Q 10/04 - Prévision ou optimisation spécialement adaptées à des fins administratives ou de gestion, p. ex. programmation linéaire ou "problème d’optimisation des stocks"
G06F 7/499 - Maniement de valeur ou d'exception, p.ex. arrondi ou dépassement
G06F 17/18 - Opérations mathématiques complexes pour l'évaluation de données statistiques
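The entry above does not disclose the details of its "improved rounding process"; as one plausible sum-preserving approach, a largest-remainder rounding of the non-integer daily forecasts would look like this:

```python
import math

def round_preserving_sum(values):
    """Round each daily forecast to an integer while keeping the rounded
    total equal to the rounded sum of the originals (largest-remainder method)."""
    floors = [math.floor(v) for v in values]
    remainders = [v - f for v, f in zip(values, floors)]
    target = round(sum(values))
    deficit = target - sum(floors)
    # Give the leftover units to the days with the largest fractional parts.
    order = sorted(range(len(values)), key=lambda i: remainders[i], reverse=True)
    result = floors[:]
    for i in order[:deficit]:
        result[i] += 1
    return result
```

Naive per-day rounding of [1.4, 1.4, 1.4, 0.8] would give [1, 1, 1, 1] (total 4) even though the forecasts sum to 5; the sum-preserving variant avoids that systematic under-request.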
A device is configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine that the location information for the network device is within a predetermined distance from the location information for the resource, to determine that the resource identifier matches the resource identifier for the resource, and to determine that a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
A system for selecting delivery mechanisms sends a request to a plurality of servers associated with one or more autonomous delivery mechanisms and one or more non-autonomous delivery mechanisms to provide delivery metadata. The request comprises a pickup location coordinate and a delivery location coordinate. The delivery metadata comprises a delivery time and a delivery quote. The system receives a first set of delivery metadata associated with one or more autonomous delivery mechanisms, and a second set of delivery metadata associated with one or more non-autonomous delivery mechanisms. The system identifies a particular autonomous delivery mechanism from within the category of autonomous delivery mechanisms based on the first set of delivery metadata. The system identifies a particular non-autonomous delivery mechanism from within the category of non-autonomous delivery mechanisms based on the second set of delivery metadata.
A system receives content of a memory resource. The system compares the content of the memory resource with a first resource data associated with a first physical space, and with a second resource data associated with a second physical space. The system determines which of the first physical space and the second physical space can fulfill more than a threshold percentage of objects in the memory resource based on the comparison between the content of the memory resource with the first and second resource data. The system determines that the first physical space can fulfill more than the threshold percentage of objects in the memory resource. The system assigns the first physical space to the memory resource for concluding an operation associated with the memory resource.
A system sends a drop-off location coordinate to a server associated with a delivery mechanism. The system receives, from the server, a hyperlink that, upon access, displays the drop-off location coordinate on a virtual map. The system links the hyperlink to an adjust drop-off location element, such that when the adjust drop-off location element is accessed, the virtual map is displayed within a delivery user interface. The system integrates the adjust drop-off location element into the delivery user interface such that the adjust drop-off location element is accessible from within the delivery user interface.
A system generates a set of instructions for preparing content of a memory resource that comprises a set of objects in a particular sequence. The system sends the content of the memory resource and the set of instructions to a user device. The system receives, from the user device, a first message that indicates the set of objects is being prepared. The system sends, to a server associated with a delivery vehicle, a second message to alert the delivery vehicle to pick up the set of objects from a pickup location coordinate and deliver them to a delivery location coordinate. The system receives, from the server, an alert message that indicates the delivery vehicle has reached the pickup location coordinate. The system forwards the alert message to the user device.
A system presents a plurality of objects on a delivery user interface. The system updates content of a memory resource as one or more objects are added to the memory resource. In response to determining that the content of the memory resource is finalized, the system determines a selection of a particular autonomous delivery mechanism and a particular non-autonomous delivery mechanism based on filtering conditions associated with the content of the memory resource. The system presents the selection of the particular autonomous delivery mechanism and the particular non-autonomous delivery mechanism on the delivery user interface. The system determines that a delivery mechanism is selected from the selection. The system receives one or more status updates associated with the selected delivery mechanism, and displays the one or more status updates on the delivery user interface.
A system presents, on a user interface, a first message that indicates an operation associated with a memory resource is concluded. The system presents, on the user interface, content of the memory resource that comprises a plurality of objects. The system presents, on the user interface, a set of instructions to prepare the plurality of objects in a particular sequence. The system receives a second message that indicates the plurality of objects is ready for pickup by a delivery mechanism. The system presents, on the user interface, an alert message that indicates the delivery mechanism has reached a pickup location coordinate. If the category of the delivery mechanism is an autonomous delivery mechanism, the system presents, on the user interface, a pin number that unlocks the autonomous delivery mechanism.
A device configured to receive a rack identifier for a rack that is configured to hold items. The device is further configured to identify a master template that is associated with the rack. The device is further configured to receive images of a plurality of items on the rack and to combine the images into a composite image of the rack. The device is further configured to identify shelves on the rack within the composite image and to generate bounding boxes that each correspond with an item on the rack. The device is further configured to associate each bounding box with an item identifier and an item location. The device is further configured to generate a rack analysis message based on a comparison of the item locations for each bounding box and the rack positions from the master template and to output the rack analysis message.
G06T 7/73 - Détermination de la position ou de l'orientation des objets ou des caméras utilisant des procédés basés sur les caractéristiques
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
A device configured to receive a resource reservation request that identifies a user and a resource. The device is further configured to generate a resource allocation that includes an association between the user, location information for the resource, a resource identifier for the resource, and a token value. The device is further configured to associate the resource allocation with a time interval indicating a deadline for using the resource allocation. The device is further configured to receive a reservation verification request from a network device. The device is further configured to determine the location information for the network device is within a predetermined distance from the location information for the resource, to determine the resource identifier matches the resource identifier for the resource, and to determine a current time is within the time interval. The device is further configured to generate resource allocation instructions authorizing access to the resource.
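The three checks this abstract describes (device proximity to the resource, identifier match, and current time within the reservation interval) could be sketched as below. The function name, the coordinate representation, the use of epoch-second timestamps, and the choice of Euclidean distance are all illustrative assumptions, not taken from the publication:

```python
import math

def verify_reservation(device_loc, resource_loc, max_distance,
                       requested_id, resource_id, now, start, deadline):
    """Authorize access only if all three checks pass: the requesting
    network device is near the resource, the resource identifiers match,
    and the current time falls inside the reservation's time interval."""
    near = math.dist(device_loc, resource_loc) <= max_distance
    return near and requested_id == resource_id and start <= now <= deadline
```

When all three conditions hold, the system would then generate resource allocation instructions authorizing access.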
A device configured to detect a triggering event at a platform and to capture a depth image of items on the platform using a three-dimensional (3D) sensor. The device is further configured to determine an object pose for each item on the platform and to identify one or more cameras from among a plurality of cameras based on the object pose for each item on the platform. The device is further configured to capture one or more images of the items on the platform using the identified cameras and to identify items within the one or more images based on features of the items. The device is further configured to identify a user associated with the identified items on the platform, to identify an account that is associated with the user, and to associate the identified items with the account of the user.
A system for updating a training dataset of an item identification model determines that an item is not included in a training dataset. In response to determining that the item is not included in the training dataset, the system obtains an identifier of the item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system captures images of the item. The system extracts a set of features associated with the item from the images. The system associates the item to the identifier and the set of features. The system adds a new entry to the training dataset, where the new entry represents the item labeled with the identifier and the set of features.
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
H04N 5/247 - Disposition des caméras de télévision
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/46 - Extraction d'éléments ou de caractéristiques de l'image
G06T 7/62 - Analyse des attributs géométriques de la superficie, du périmètre, du diamètre ou du volume
H04N 13/207 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant un seul capteur d’images 2D
H04N 13/271 - Générateurs de signaux d’images où les signaux d’images générés comprennent des cartes de profondeur ou de disparité
G06K 7/14 - Méthodes ou dispositions pour la lecture de supports d'enregistrement par radiation corpusculaire utilisant la lumière sans sélection des longueurs d'onde, p.ex. lecture de la lumière blanche réfléchie
64.
System and method for capturing images for training of an item identification model
A system for capturing images for training an item identification model obtains an identifier of an item. The system detects a triggering event at a platform, where the triggering event corresponds to a user placing the item on the platform. The system causes the platform to rotate. The system causes at least one camera to capture an image of the item while the platform is rotating. The system extracts a set of features associated with the item from the image. The system associates the item to the identifier and the set of features. The system adds a new entry to a training dataset of the item identification model, where the new entry represents the item labeled with the identifier and the set of features.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06V 20/40 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans le contenu vidéo
G06T 11/20 - Traçage à partir d'éléments de base, p.ex. de lignes ou de cercles
G06T 7/55 - Récupération de la profondeur ou de la forme à partir de plusieurs images
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
65.
System and method for aggregating metadata for item identification using digital image processing
A system for identifying items based on aggregated metadata obtains images of an item. The system extracts a set of features from images of the item. The system identifies a first value of a first feature associated with a first image of the item. The system identifies a second value of the first feature associated with a second image of the item. The system aggregates the first value and the second value. The system associates the item to the aggregated first value and the second value, where the aggregated first value and the second value represent the first feature of the item. The system adds a new entry for each image of the item to a training dataset associated with an item identification model.
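The aggregation step this abstract describes (folding per-image values of one feature into a single item-level value) might look like the following minimal sketch. The function name and the choice of the mean as the aggregation are assumptions for illustration only; the publication does not specify how values are combined:

```python
from statistics import mean

def aggregate_feature_values(values):
    """Fold several per-image values of one feature (e.g. a dominant-color
    score measured in each image of the item) into a single item-level
    value; the arithmetic mean is one plausible aggregation."""
    return mean(values)
```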
G06T 7/90 - Détermination de caractéristiques de couleur
G06V 20/00 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène
G01G 19/414 - Appareils ou méthodes de pesée adaptés à des fins particulières non prévues dans les groupes avec dispositions pour indiquer, enregistrer ou calculer un prix ou d'autres quantités dépendant du poids utilisant des moyens de calcul électromécaniques ou électroniques utilisant uniquement des moyens de calcul électroniques
G06T 7/62 - Analyse des attributs géométriques de la superficie, du périmètre, du diamètre ou du volume
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06F 18/22 - Critères d'appariement, p.ex. mesures de proximité
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
A device configured to capture a first overhead depth image of the platform using a three-dimensional (3D) sensor at a first time instance and a second overhead depth image of a first object using the 3D sensor at a second time instance. The device is further configured to determine that a first portion of the first object is within a region-of-interest and a second portion of the first object is outside the region-of-interest in the second overhead depth image. The device is further configured to capture a third overhead depth image of a second object placed on the platform using the 3D sensor at a third time instance. The device is further configured to capture a first image of the second object using a camera in response to determining that the first object is outside of the region-of-interest and the second object is within the region-of-interest for the platform.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/32 - Alignement ou centrage du capteur d'image ou de la zone image
G06T 7/55 - Récupération de la profondeur ou de la forme à partir de plusieurs images
H04N 7/18 - Systèmes de télévision en circuit fermé [CCTV], c. à d. systèmes dans lesquels le signal vidéo n'est pas diffusé
A device configured to detect a triggering event corresponding with a user placing a first item on the platform, to capture a first image of the first item on the platform using a camera, and to input the first image into a machine learning model that is configured to output a first encoded vector based on features of the first item that are present in the first image. The device is further configured to identify a second encoded vector in an encoded vector library that most closely matches the first encoded vector and to identify a first item identifier in the encoded vector library that is associated with the second encoded vector. The device is further configured to identify the user, to identify an account that is associated with the user, and to associate the first item identifier with the account of the user.
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
A device configured to receive first point cloud data for a first object, to identify a first plurality of data points for the first object within the first point cloud data, and to extract the first plurality of data points from the first point cloud data. The device is further configured to receive second point cloud data for the first object, to identify a second plurality of data points for the first object within the second point cloud data, and to extract the second plurality of data points from the second point cloud data. The device is further configured to merge the first plurality of data points and the second plurality of data points to generate combined point cloud data and to determine dimensions for the first object based on the combined point cloud data.
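The merge-then-measure flow this abstract describes could be sketched as below. The point representation as (x, y, z) tuples and the use of axis-aligned extents as the "dimensions" are illustrative assumptions; the publication does not specify the measurement method:

```python
def merge_point_clouds(cloud_a, cloud_b):
    """Concatenate the data points extracted from two separate captures
    into one combined point cloud."""
    return list(cloud_a) + list(cloud_b)

def bounding_dimensions(points):
    """Axis-aligned extents (length, width, height) of a point cloud of
    (x, y, z) tuples."""
    xs, ys, zs = zip(*points)
    return (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
```

Merging captures from different viewpoints fills in regions one sensor cannot see, so the combined cloud yields tighter dimension estimates than either capture alone.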
G06Q 10/08 - Logistique, p.ex. entreposage, chargement ou distribution; Gestion d’inventaires ou de stocks
G06T 7/62 - Analyse des attributs géométriques de la superficie, du périmètre, du diamètre ou du volume
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G06T 7/33 - Détermination des paramètres de transformation pour l'alignement des images, c. à d. recalage des images utilisant des procédés basés sur les caractéristiques
G06K 9/46 - Extraction d'éléments ou de caractéristiques de l'image
69.
System and method for refining an item identification model based on feedback
A system for refining an item identification model detects a triggering event at a platform, where the triggering event corresponds to a user placing an item on the platform. The system captures images of the item. The system extracts a set of features from at least one of the images. The system identifies the item based on the set of features. The system receives an indication that the item is not identified correctly. The system receives an identifier of the item. The system identifies the item based on the identifier of the item. The system feeds the identifier of the item and the images to the item identification model. The system retrains the item identification model to learn to associate the item with the images. The system updates the set of features based on the determined association between the item and the images.
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
G01G 23/36 - Dispositifs indicateurs, p.ex. pour indication à distance; Dispositifs enregistreurs; Echelles, p.ex. graduées indiquant le poids par des moyens électriques, p.ex. par utilisation de cellules photo-électriques
G06T 7/194 - Découpage; Détection de bords impliquant une segmentation premier plan-arrière-plan
G06T 7/62 - Analyse des attributs géométriques de la superficie, du périmètre, du diamètre ou du volume
G06T 7/90 - Détermination de caractéristiques de couleur
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 20/00 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène
H04N 13/207 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant un seul capteur d’images 2D
H04N 13/271 - Générateurs de signaux d’images où les signaux d’images générés comprennent des cartes de profondeur ou de disparité
H04N 23/90 - Agencement de caméras ou de modules de caméras, p. ex. de plusieurs caméras dans des studios de télévision ou des stades de sport
A device configured to identify a first pixel location within a first plurality of pixels corresponding with an item in a first image and to apply a first homography to the first pixel location to determine a first (x,y) coordinate. The device is further configured to identify a second pixel location within a second plurality of pixels corresponding with the item in a second image and to apply a second homography to the second pixel location to determine a second (x,y) coordinate. The device is further configured to determine that the distance between the first (x,y) coordinate and the second (x,y) coordinate is less than or equal to a distance threshold value, to associate the first plurality of pixels and the second plurality of pixels with a cluster for the item, and to output the first plurality of pixels and the second plurality of pixels.
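The two steps this abstract describes — mapping a pixel location through a homography and grouping nearby mapped coordinates into one item cluster — could be sketched as follows. The row-major 3x3 matrix form, the function names, and the Euclidean distance metric are illustrative assumptions:

```python
import math

def apply_homography(H, pixel):
    """Project a (px, py) pixel location to a plane (x, y) coordinate
    using a 3x3 homography given as a row-major list of lists."""
    px, py = pixel
    x = H[0][0] * px + H[0][1] * py + H[0][2]
    y = H[1][0] * px + H[1][1] * py + H[1][2]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    return (x / w, y / w)  # perspective divide

def same_cluster(coord_a, coord_b, distance_threshold):
    """Two detections belong to the same item cluster when their mapped
    plane coordinates are within the distance threshold."""
    return math.dist(coord_a, coord_b) <= distance_threshold
```

Because both cameras are mapped into the same plane, detections of one item from two viewpoints land near each other and can be clustered, while detections of different items remain apart.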
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
71.
REDUCING A SEARCH SPACE FOR ITEM IDENTIFICATION USING MACHINE LEARNING
A device configured to receive a first encoded vector and receive one or more feature descriptors for a first object. The device is further configured to remove one or more encoded vectors from an encoded vector library that are not associated with the one or more feature descriptors and to identify a second encoded vector in the encoded vector library that most closely matches the first encoded vector based on the numerical values within the first encoded vector. The device is further configured to identify a first item identifier in the encoded vector library that is associated with the second encoded vector and to output the first item identifier.
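The search-space reduction this abstract describes — pruning library entries by feature descriptor before the nearest-vector search — might look like the sketch below. The library row shape, the set-based descriptor test, and Euclidean distance as the closeness measure are assumptions for illustration:

```python
import math

def identify_item(query_vector, observed_descriptors, library):
    """library: iterable of (item_id, descriptors, encoded_vector) rows.
    First drop rows whose descriptor set does not cover the observed
    descriptors, then return the item id of the closest remaining vector."""
    candidates = [(item_id, vector)
                  for item_id, descriptors, vector in library
                  if observed_descriptors <= descriptors]
    best_id, _ = min(candidates,
                     key=lambda row: math.dist(query_vector, row[1]))
    return best_id
```

Pruning first means the distance comparison only runs over plausible items, which both speeds up the search and prevents a visually similar but descriptor-incompatible item from winning.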
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06K 9/46 - Extraction d'éléments ou de caractéristiques de l'image
G06T 7/90 - Détermination de caractéristiques de couleur
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G01G 19/414 - Appareils ou méthodes de pesée adaptés à des fins particulières non prévues dans les groupes avec dispositions pour indiquer, enregistrer ou calculer un prix ou d'autres quantités dépendant du poids utilisant des moyens de calcul électromécaniques ou électroniques utilisant uniquement des moyens de calcul électroniques
A device configured to capture a first image of an item on a platform using a camera and to determine a first number of pixels in the first image that corresponds with the item. The device is further configured to capture a first depth image of the item on the platform using a three-dimensional (3D) sensor and to determine a second number of pixels within the first depth image that corresponds with the item. The device is further configured to determine that the difference between the first number of pixels in the first image and the second number of pixels in the first depth image is less than a difference threshold value, to extract a plurality of pixels corresponding with the item in the first image from the first image to generate a second image, and to output the second image.
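The acceptance check this abstract describes — comparing the item's pixel count in the camera image against its pixel count in the depth image before extracting it — reduces to a single comparison; the function and parameter names below are illustrative assumptions:

```python
def segmentation_agrees(camera_pixel_count, depth_pixel_count,
                        difference_threshold):
    """Accept the camera segmentation of an item only when its pixel
    count roughly matches the depth sensor's pixel count for the same
    item; a large mismatch suggests the segmentation bled into the
    background or clipped the item."""
    return abs(camera_pixel_count - depth_pixel_count) < difference_threshold
```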
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06T 7/136 - Découpage; Détection de bords impliquant un seuillage
G06K 9/62 - Méthodes ou dispositions pour la reconnaissance utilisant des moyens électroniques
G06T 7/50 - Récupération de la profondeur ou de la forme
H04N 5/247 - Disposition des caméras de télévision
H04N 5/232 - Dispositifs pour la commande des caméras de télévision, p.ex. commande à distance
H04N 13/271 - Générateurs de signaux d’images où les signaux d’images générés comprennent des cartes de profondeur ou de disparité
H04N 13/204 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques
A coffee machine includes a housing that comprises a first platform, a second platform, a first robotic arm, a second robotic arm, and an information handling system. The information handling system comprises a memory and a processor. The memory is configured to receive and store multiple beverage orders in a log and to monitor a position of a first cup and a second cup corresponding to a first beverage order and a second beverage order. The processor is configured to initiate a second sub-step of the second beverage order prior to initiating a second sub-step of the first beverage order in response to determining that a first sub-step of the second beverage order has terminated first, and to transmit to the log of the memory that the second cup is at a third designated position for the second sub-step of the second beverage order to be performed.
A coffee machine includes a first housing and a second housing, wherein the second housing is disposed on top of the first housing. The second housing comprises a first platform, a second platform disposed below the first platform, and a lift disposed at a first side of the second housing and configured to translate between the first platform and the second platform. The second housing further comprises a first robotic arm disposed above the first platform, a second robotic arm disposed between the second platform and the first platform, and a coffee brewing machine actionable to dispense one or more fluids into a cup. The coffee machine further comprises an information handling system comprising a processor, wherein the processor is configured to actuate the first robotic arm, the second robotic arm, the coffee brewing machine, and the lift.
A47J 31/44 - Eléments ou parties constitutives des appareils à préparer des boissons
A47J 31/52 - Mécanismes commandés par un réveil-matin pour les appareils à préparer le café ou le thé
A47J 31/40 - Appareils à préparer des boissons avec des moyens de distribution pour ajouter une quantité mesurée d'ingrédients, p.ex. du café, de l'eau, du sucre, du cacao, du lait, du thé
A47J 31/41 - Appareils à préparer des boissons avec des moyens de distribution pour ajouter une quantité mesurée d'ingrédients, p.ex. du café, de l'eau, du sucre, du cacao, du lait, du thé d'ingrédients liquides
A47J 43/044 - Machines de ménage non prévues ailleurs, p.ex. pour moudre, mélanger, agiter, pétrir, émulsionner, fouetter ou battre les aliments, p.ex. actionnées par moteur à outils actionnés du côté du haut
An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem receives image frames of the angled-view images and, over a period of time, tracks a pixel position of a wrist of a person interacting with items stored on the rack. The tracking subsystem determines whether an item was interacted with by the person and, if so, the identified item is assigned to the person.
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G06V 20/40 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans le contenu vidéo
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
76.
Event trigger based on region-of-interest near hand-shelf interaction
An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a rack storing items. The image sensor generates angled-view images of the items stored on the rack. A tracking subsystem determines that a person is within a threshold distance of the rack and receives image frames of the angled-view images. A pixel position of a wrist of the person is determined in at least a subset of the received image frames, thereby determining a set of pixel positions of the wrist. An aggregated wrist position is determined based on the set of pixel positions. If the aggregated wrist position is determined to correspond to a position on a shelf of the rack, a trigger signal is provided indicating a shelf-interaction event has occurred.
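The aggregation and shelf-matching steps this abstract describes could be sketched as below. The per-axis median as the aggregation, the (row, col) pixel convention, and the rectangular shelf region are illustrative assumptions; the publication does not specify how the set of pixel positions is aggregated:

```python
from statistics import median

def aggregate_wrist_position(pixel_positions):
    """Combine per-frame (row, col) wrist detections into one position;
    a per-axis median is robust to occasional bad detections."""
    rows = [r for r, _ in pixel_positions]
    cols = [c for _, c in pixel_positions]
    return (median(rows), median(cols))

def on_shelf(position, shelf_region):
    """shelf_region: ((row_min, col_min), (row_max, col_max)) in pixel
    coordinates of the angled-view image."""
    (r0, c0), (r1, c1) = shelf_region
    row, col = position
    return r0 <= row <= r1 and c0 <= col <= c1
```

If the aggregated position lands inside a shelf region, the trigger signal indicating a shelf-interaction event would be raised.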
G06K 9/00 - Méthodes ou dispositions pour la lecture ou la reconnaissance de caractères imprimés ou écrits ou pour la reconnaissance de formes, p.ex. d'empreintes digitales
G06V 10/147 - Caractéristiques optiques de l’appareil qui effectue l’acquisition ou des dispositifs d’éclairage - Détails de capteurs, p.ex. lentilles de capteurs
G06V 20/52 - Activités de surveillance ou de suivi, p.ex. pour la reconnaissance d’objets suspects
G06V 20/40 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans le contenu vidéo
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 10/24 - Alignement, centrage, détection de l’orientation ou correction de l’image
G06V 40/10 - Corps d’êtres humains ou d’animaux, p.ex. occupants de véhicules automobiles ou piétons; Parties du corps, p.ex. mains
G06V 40/20 - Mouvements ou comportement, p.ex. reconnaissance des gestes
G06V 10/62 - Extraction de caractéristiques d’images ou de vidéos relative à une dimension temporelle, p.ex. extraction de caractéristiques axées sur le temps; Suivi de modèle
77.
System and method for presenting a virtual store shelf that emulates a physical store shelf
An apparatus includes a display, interface, and processor. The interface receives a live camera feed from a camera directed at a first physical structure located in a physical space. The processor receives an indication of an event associated with the first physical structure and accordingly displays a first virtual structure corresponding to the first physical structure. The first virtual structure includes virtual items that emulate physical items located on the first physical structure. The processor additionally displays a recording of the live camera feed, which depicts the event associated with the first physical structure.
G06F 3/00 - Dispositions d'entrée pour le transfert de données destinées à être traitées sous une forme maniable par le calculateur; Dispositions de sortie pour le transfert de données de l'unité de traitement à l'unité de sortie, p.ex. dispositions d'interface
G06V 20/20 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans les scènes de réalité augmentée
78.
SYSTEM AND METHOD FOR POPULATING A VIRTUAL SHOPPING CART BASED ON A VERIFICATION OF ALGORITHMIC DETERMINATIONS OF ITEMS SELECTED DURING A SHOPPING SESSION IN A PHYSICAL STORE
An apparatus includes a display and a processor. The processor displays a virtual shopping cart. The processor also receives information indicating that an algorithm determined that a physical item was selected by a person during a shopping session in a physical store, based on a set of inputs received from sensors located within the store. In response, the processor displays a virtual item, which includes a graphical representation of the physical item. The processor additionally displays a rack video captured during the shopping session by a rack camera located in the store. The rack camera is directed at a physical rack located in the store, which includes the physical item. In response to displaying the rack video, the processor receives information identifying the virtual item, where the rack video depicts that the person selected the physical item. The processor then stores the virtual item in the virtual shopping cart.
A system that includes a fuel dispenser terminal and a remote controller. The fuel dispenser terminal is configured to send a service request for a fuel purchase to the remote controller and receive a personalized offer in response to sending the service request. The fuel dispenser terminal is further configured to display the personalized offer, receive a user response indicating the personalized offer was accepted, and send the user response to the remote controller. The fuel dispenser terminal is further configured to receive an authorization token for retrieving the personalized offer and output the authorization token to the customer. The remote controller is configured to update the service request by adding a purchase associated with the personalized offer to the fuel purchase, send an encrypted service request to a service processor, generate the authorization token, and send the authorization token to the fuel dispenser terminal.
G06Q 50/06 - Fourniture d'électricité, de gaz ou d'eau
G05B 19/416 - Commande numérique (CN), c.à d. machines fonctionnant automatiquement, en particulier machines-outils, p.ex. dans un milieu de fabrication industriel, afin d'effectuer un positionnement, un mouvement ou des actions coordonnées au moyen de données d'u caractérisée par la commande de vitesse, d'accélération ou de décélération
G06Q 30/0207 - Remises ou incitations, p.ex. coupons ou rabais
G06Q 20/38 - Architectures, schémas ou protocoles de paiement - leurs détails
G06Q 20/18 - Architectures de paiement impliquant des terminaux en libre-service, des distributeurs automatiques, des bornes ou des terminaux multimédia
80.
Identifying non-uniform weight objects using a sensor array
An object tracking system that includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to detect an item was removed from the rack. The tracking system is further configured to receive the frame of the rack, to identify a marker on an item within a predefined zone in the frame, and to identify the item associated with the identified marker. The tracking system is further configured to determine a pixel location for a person, to determine the person is within the predefined zone associated with the rack, and to add the identified item to a digital cart associated with the person.
G01G 19/40 - Appareils ou méthodes de pesée adaptés à des fins particulières non prévues dans les groupes avec dispositions pour indiquer, enregistrer ou calculer un prix ou d'autres quantités dépendant du poids
G01G 19/52 - Appareils de pesée combinés avec d'autres objets, p.ex. avec de l'ameublement
G06V 40/10 - Corps d’êtres humains ou d’animaux, p.ex. occupants de véhicules automobiles ou piétons; Parties du corps, p.ex. mains
G06V 10/62 - Extraction de caractéristiques d’images ou de vidéos relative à une dimension temporelle, p.ex. extraction de caractéristiques axées sur le temps; Suivi de modèle
81.
Detecting and identifying misplaced items using a sensor array
An object tracking system that includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a rack within a global plane for a space. The tracking system is configured to receive the frame, to determine a pixel location for a person, and to determine the person is within a predefined zone associated with the rack. The tracking system is further configured to identify a plurality of items in a digital cart associated with the person, to identify an item from the digital cart, and to remove the identified item from the digital cart associated with the person.
An apparatus includes a display, interface, and processor. The interface receives rack video from a camera located in a physical store and directed at a first physical rack. The camera captures the rack video during a shopping session of a person. The processor displays a first virtual rack that emulates the first physical rack and includes first and second virtual shelves. The virtual shelves include virtual items, which include graphical representations of physical items located on the first physical rack. The processor displays the rack video, which depicts an event including the person interacting with the first physical rack. The processor also displays a virtual shopping cart. The processor receives information associated with the event, identifying a first virtual item. The rack video depicts that the person selected the corresponding first physical item while interacting with the first physical rack. The processor then stores the first virtual item in the virtual shopping cart.
A scalable tracking system processes video of a space to track the positions of people within a space. The tracking system determines local coordinates for the people within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for a person during that time window.
An object tracking system that includes a first sensor and a second sensor that are each configured to capture frames of at least a portion of a global plane for a space. The system is configured to identify a pixel location for a first marker within a frame from the first sensor and to determine an (x,y) coordinate for the first marker using a first homography. The system is further configured to identify a pixel location for a second marker in a frame from the second sensor and to determine an (x,y) coordinate for the second marker using a second homography. The system is further configured to compute the distance between the two (x,y) coordinates and to determine a distance difference between the computed distance and the actual distance between the markers. The system is further configured to recompute the first homography and/or the second homography in response to determining that the distance difference exceeds a difference threshold level.
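The distance-difference check described above can be sketched as follows. The function name, threshold, and coordinates are illustrative assumptions; the two marker coordinates are assumed to have already been produced by the two homographies:

```python
import math

def needs_recalibration(coord_a, coord_b, actual_distance, threshold):
    """Return True when the distance computed from the projected marker
    coordinates deviates from the known physical distance by more than
    the allowed threshold, indicating the homographies should be recomputed."""
    computed = math.dist(coord_a, coord_b)
    return abs(computed - actual_distance) > threshold

# Markers projected into the global plane by two different homographies.
marker_1 = (2.0, 3.0)   # from the first sensor's homography
marker_2 = (5.0, 7.0)   # from the second sensor's homography

# The markers are known to be exactly 5.0 units apart on the floor,
# and the projections agree, so no recalibration is triggered.
print(needs_recalibration(marker_1, marker_2, actual_distance=5.0, threshold=0.1))  # False
```

If the projections drifted (say the computed distance were 5.5 units against an actual 5.0 with a 0.1 threshold), the same check would return True and trigger recomputation of one or both homographies.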
A system includes a first sensor and a sensor client. During an initial time interval, the sensor client receives first images generated by the first sensor and detects first contours in the images. The sensor client determines, based on the first contours, a region of the images generated by the first sensor to exclude during object tracking. During a subsequent time interval, the sensor client receives a second image generated by the first sensor and detects a second contour in the second image. The sensor client determines pixel coordinates of the second contour and determines whether at least a threshold percentage of these pixel coordinates overlap with the region to exclude during object tracking. If at least the threshold percentage of the pixel coordinates overlap with the region to exclude, a position for tracking the second contour is not determined.
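The threshold-overlap test described above might look like the following sketch, assuming a contour is represented as a set of pixel coordinates and the excluded region as an axis-aligned rectangle (all names and the 50% default are illustrative, not taken from the patent):

```python
def overlap_fraction(contour_pixels, excluded_region):
    """Fraction of a contour's pixel coordinates that fall inside the
    excluded region, given as (x_min, y_min, x_max, y_max)."""
    x0, y0, x1, y1 = excluded_region
    inside = sum(1 for x, y in contour_pixels
                 if x0 <= x <= x1 and y0 <= y <= y1)
    return inside / len(contour_pixels)

def should_track(contour_pixels, excluded_region, threshold=0.5):
    """Skip tracking when at least `threshold` of the contour overlaps
    the region marked for exclusion during the initial time interval."""
    return overlap_fraction(contour_pixels, excluded_region) < threshold

# A contour mostly inside the excluded region is not tracked.
contour = [(1, 1), (2, 2), (3, 3), (30, 30)]
print(should_track(contour, excluded_region=(0, 0, 10, 10)))  # False
```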
A scalable tracking system processes video of a space to track the positions of objects within a space. The tracking system determines local coordinates for the objects within frames of the video and then assigns these coordinates to time windows based on when the frames were received. The tracking system then combines or clusters certain local coordinates that have been assigned to the same time window to determine a combined coordinate for an object during that time window.
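The time-window combination described in these abstracts can be sketched as follows; the data layout, window size, and the use of the mean as the combination rule are illustrative assumptions:

```python
from collections import defaultdict

def combine_by_time_window(detections, window_size):
    """Assign each (timestamp, x, y) detection to a time window and
    combine the coordinates in each window into a single mean position."""
    windows = defaultdict(list)
    for timestamp, x, y in detections:
        windows[int(timestamp // window_size)].append((x, y))
    combined = {}
    for window, coords in sorted(windows.items()):
        xs, ys = zip(*coords)
        combined[window] = (sum(xs) / len(xs), sum(ys) / len(ys))
    return combined

# Local coordinates from several frames, as (seconds, x, y); 1-second windows.
detections = [(0.1, 1.0, 1.0), (0.4, 3.0, 1.0), (1.2, 5.0, 5.0)]
print(combine_by_time_window(detections, window_size=1.0))
# {0: (2.0, 1.0), 1: (5.0, 5.0)}
```

Because frames are binned by arrival time rather than matched frame-by-frame, this kind of windowing tolerates cameras that deliver frames at slightly different rates, which is one way such a design could scale.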
An object tracking system that includes a plurality of sensors and a tracking system. A first sensor from the plurality of sensors is configured to capture a first frame of a global plane for at least a portion of the space. The tracking system is configured to determine a pixel location in the first frame for an object located in the space, and to apply a homography to the pixel location to determine a coordinate in the global plane. The homography is configured to translate between pixel locations in the first frame and coordinates in the global plane.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/246 - Motion analysis using feature-based methods, e.g. tracking of corners or segments
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
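The homography-based mapping used throughout these abstracts translates pixel locations into global-plane coordinates. A minimal sketch of applying a 3x3 homography in homogeneous coordinates (the example matrix is illustrative):

```python
import numpy as np

def pixel_to_global(homography, pixel):
    """Map a (col, row) pixel location into the global plane by applying
    a 3x3 homography in homogeneous coordinates and dehomogenizing."""
    px, py = pixel
    x, y, w = homography @ np.array([px, py, 1.0])
    return (x / w, y / w)

# Illustrative homography: identity plus a translation of (10, 20).
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, 20.0],
              [0.0, 0.0,  1.0]])
print(pixel_to_global(H, (5, 5)))   # (15.0, 25.0)
```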
A scalable tracking system includes a camera subsystem, a weight subsystem, and a central server. The camera subsystem includes cameras that capture video of a space, camera clients that determine local coordinates of objects in the captured videos, and a camera server that determines the physical positions of objects in the space based on the determined local coordinates. The weight subsystem determines when items were removed from shelves. The central server determines which object in the space removed the items based on the physical positions of the objects in the space and the determination of when items were removed.
G01S 17/87 - Combinations of systems using electromagnetic waves other than radio waves
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in the groups, with arrangements for indicating, recording or computing price or other quantities dependent on the weight
G06Q 20/32 - Payment architectures, schemes or protocols characterised by the use of specific devices, using wireless devices
G01S 17/66 - Tracking systems using electromagnetic waves other than radio waves
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
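One plausible way to combine the camera and weight subsystems described above is to attribute a weight-triggered removal to the tracked object whose position, at the time closest to the event, is nearest to the shelf. The data layout, time-gap limit, and nearest-sample rule below are assumptions for illustration, not the patented method:

```python
import math

def attribute_removal(event_position, event_time, tracks, max_time_gap=1.0):
    """Pick the tracked object whose recorded position, at the time closest
    to the weight event, is nearest to the shelf position.
    `tracks` maps object id -> list of (timestamp, x, y) samples."""
    best_id, best_distance = None, float("inf")
    for object_id, positions in tracks.items():
        # Position sample closest in time to the weight event.
        t, x, y = min(positions, key=lambda p: abs(p[0] - event_time))
        if abs(t - event_time) > max_time_gap:
            continue   # no recent-enough position for this object
        distance = math.dist((x, y), event_position)
        if distance < best_distance:
            best_id, best_distance = object_id, distance
    return best_id

tracks = {
    "person_a": [(10.0, 1.0, 1.0), (11.0, 2.0, 2.0)],
    "person_b": [(10.5, 8.0, 8.0)],
}
# A weight event at shelf position (2.2, 2.2), at t = 11.0 seconds.
print(attribute_removal((2.2, 2.2), 11.0, tracks))   # person_a
```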
A system includes sensors and a tracking subsystem. The subsystem receives a first image feed from a first sensor and a second image feed from a second sensor. The field-of-view of the second sensor at least partially overlaps with that of the first sensor. The subsystem detects an object in a frame from the first feed and determines a first pixel position of the object. The subsystem determines a second pixel position of the object in a frame from the second feed. Based on the first pixel position and the second pixel position, a global position for the object is determined in the space.
An object tracking system includes a first sensor, a second sensor, and a tracking system. The first sensor is configured to capture a first frame of a global plane for at least a first portion of a space. The second sensor is configured to capture a second frame of at least a second portion of the space. The tracking system is configured to determine, based on a first pixel location in the first frame, that an object is within an overlap region between the first sensor and the second sensor. The tracking system is further configured to determine a first coordinate in the global plane for the object, to determine a second pixel location in the second frame for the object based on the first coordinate, and to store the second pixel location with an object identifier in a tracking list associated with the second sensor.
G06V 20/52 - Surveillance or monitoring activities, e.g. for recognising suspicious objects
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, bars or intersections; Connectivity analysis, e.g. of connected components
G06V 10/147 - Optical features of the acquisition apparatus or of the illumination devices - Details of sensors, e.g. sensor lenses
An apparatus includes an interface, display, memory, and processor. The interface receives a video feed including first and second camera feeds, each feed corresponding to a camera located in a store. The processor stores a video segment in memory, assigned to a person and capturing a portion of a shopping session. The video segment includes first and second camera feed segments, each segment corresponding to a recording of the corresponding camera feed from a starting to an ending timestamp. Playback of the first and second camera feed segments is synchronized, and a slider bar controls a playback progress of the camera feed segments. The processor displays the camera feed segments and copies of the slider bar on the display. The processor receives an instruction from at least one of the copies of the slider bar to adjust the playback progress of the camera feed segments and adjusts the playback progress.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06V 20/52 - Surveillance or monitoring activities, e.g. for recognising suspicious objects
H04N 21/431 - Generation of visual interfaces; Rendering of content or additional data
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/6587 - Control parameters, e.g. variable-speed playback ("trick play") control or viewpoint selection
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in video content
92.
Image-based action detection using contour dilation
A system includes a sensor and a tracking subsystem. The tracking subsystem receives an image feed of top-view images generated by the sensor. The tracking subsystem detects an event associated with an item being removed from a rack. The tracking subsystem determines that a first person and a second person may be associated with the event. In response, the tracking subsystem dilates contours associated with the first and second person from a first depth to a second depth until the contours enter a zone adjacent to the rack. A number of iterations is determined for each contour to enter the zone adjacent to the rack. If the first person's contour enters the zone in fewer iterations, the item is assigned to the first person.
G01G 19/52 - Weighing apparatus combined with other objects, e.g. with furniture
G01G 19/414 - Weighing apparatus or methods adapted for special purposes not provided for in the groups, with arrangements for indicating, recording or computing price or other quantities dependent on the weight, using electromechanical or electronic computing means, using only electronic computing means
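The iterative dilation described in the abstract above can be sketched on a discrete grid; the grid representation, 4-connected dilation, and iteration cap below are illustrative assumptions:

```python
def dilate(mask, width, height):
    """One 4-connected dilation step over a set of (x, y) cells."""
    grown = set(mask)
    for x, y in mask:
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < width and 0 <= ny < height:
                grown.add((nx, ny))
    return grown

def iterations_to_reach(contour, zone, width, height, max_iterations=50):
    """Number of dilation steps before the contour touches the zone."""
    mask = set(contour)
    for step in range(max_iterations + 1):
        if mask & zone:
            return step
        mask = dilate(mask, width, height)
    return None

zone = {(5, y) for y in range(10)}   # cells of the zone adjacent to the rack
person_1 = {(3, 4)}                  # contour closer to the rack
person_2 = {(0, 4)}                  # contour farther from the rack
i1 = iterations_to_reach(person_1, zone, 10, 10)
i2 = iterations_to_reach(person_2, zone, 10, 10)
# Assign the removed item to whichever contour reached the zone first.
print("person_1" if i1 < i2 else "person_2")   # person_1
```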
A system includes a server and a merchant device. The server receives product information for a product scanned by a mobile device. The server stores the product information for the product in a digital cart. The server receives a transaction request from the mobile device, determines that the product is associated with a validation requirement, and transmits a validation request to the merchant device. The server receives, from the merchant device, an indication that the validation requirement is satisfied, processes a transaction, and transmits, to the merchant device, an indication that the transaction is complete. The merchant device receives the validation request, determines that the validation requirement is satisfied, and transmits the indication that the validation requirement is satisfied to the server. The merchant device receives, from the server, the indication that the transaction is complete and displays that indication on the display.
G06Q 30/06 - Buying, selling or leasing transactions
G06K 7/14 - Methods or arrangements for sensing record carriers by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
G06Q 20/20 - Point-of-sale [POS] network systems
G06Q 20/40 - Authorisation, e.g. identification of payer or payee, verification of customer or shop credentials; Review and approval of payers, e.g. checking credit lines or negative lists
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a frame of at least a portion of a physical space within a global plane for a space. The tracking system is configured to receive the frame, to detect an object within a zone of the frame, and to determine a pixel location for the object. The tracking system is further configured to identify a zone of the physical structure based on the pixel location, and to identify an item based on the identified zone.
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 20/52 - Surveillance or monitoring activities, e.g. for recognising suspicious objects
95.
Access control for an augmented reality experience using interprocess communications
An access control system that includes an access control device configured to receive transaction information that identifies a member identifier for a member and a purchased item. The access control device is configured to compare the purchased item to items in an item list and to determine whether the purchased item matches any items in the item list. The access control device is configured to store an authorization for the member identifier to access an augmented reality experience in response to a match.
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
G06Q 30/0226 - Frequent usage incentive systems, e.g. frequent flyer miles programs or point systems
96.
Detection of object removal and replacement from a shelf
An image sensor is positioned such that a field-of-view of the image sensor encompasses at least a portion of a structure configured to store items. The image sensor generates angled-view images of the items stored on the structure. A tracking subsystem determines that a person has interacted with the structure and receives image frames of the angled-view images. The tracking subsystem determines that the person interacted with a first item stored on the structure. A first image is identified associated with a first time before the person interacted with the first item, and a second image is identified associated with a second time after the person interacted with the first item. If it is determined, based on a comparison of the first and second images, that the first item was removed from the structure, the first item is assigned to the person.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/32 - Aligning or centring of the image sensor or image area
G08B 13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength, using passive radiation detection systems, using image scanning and comparing systems, using television cameras
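The before/after comparison in the abstract above can be sketched as a simple frame-differencing check over crops of the item's location; the grayscale representation, thresholds, and function names are illustrative assumptions:

```python
import numpy as np

def item_removed(before, after, change_threshold=0.2, diff_threshold=30):
    """Compare before/after grayscale crops of an item's location on the
    structure. Returns True when a large-enough fraction of pixels changed,
    treated here as the item having been removed."""
    diff = np.abs(before.astype(int) - after.astype(int))
    changed = (diff > diff_threshold).mean()
    return bool(changed > change_threshold)

# Toy crops: a bright "item" present before, gone after the interaction.
before = np.full((4, 4), 200, dtype=np.uint8)
after = np.full((4, 4), 40, dtype=np.uint8)
print(item_removed(before, after))   # True
```

Casting to int before subtracting avoids uint8 wraparound, which would otherwise make large negative differences look small.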
An apparatus includes a display, interface, and processor. The interface receives a camera feed from a camera directed at a first physical rack located in a physical store. The processor displays virtual racks assigned to physical racks. The processor receives an indication of an event associated with the first physical rack and accordingly displays a first virtual rack, which includes first and second virtual shelves. The first and second virtual shelves include virtual items that emulate physical items located on physical shelves of the first physical rack. The processor additionally displays a recording of the camera feed, which depicts the event associated with the first physical rack.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06Q 30/06 - Buying, selling or leasing transactions
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
An object tracking system includes a sensor and a tracking system. The sensor is configured to capture a first frame of a global plane for at least a portion of a space. The tracking system is configured to receive a first coordinate in the global plane where a first marker is located in the space and to receive a second coordinate in the global plane where a second marker is located in the space. The tracking system is further configured to identify the first marker and the second marker within the first frame, to determine a first pixel location in the first frame for the first marker, to determine a second pixel location in the first frame for the second marker, and to generate a homography based on the first coordinate, the second coordinate, the first pixel location, and the second pixel location.
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06T 7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
G06T 7/70 - Determining position or orientation of objects or cameras
G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces, using context analysis; Selection of dictionaries
G06V 20/52 - Surveillance or monitoring activities, e.g. for recognising suspicious objects
G01G 19/40 - Weighing apparatus or methods adapted for special purposes not provided for in the groups, with arrangements for indicating, recording or computing price or other quantities dependent on the weight
G06Q 10/087 - Inventory or stock management, e.g. order fulfilment, procurement or balancing against orders
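The abstract above generates a homography from marker coordinates and their pixel locations. A planar homography has eight degrees of freedom, so a standard estimation approach — the direct linear transform (DLT), assumed here for illustration — needs at least four point correspondences. A sketch:

```python
import numpy as np

def compute_homography(pixel_points, global_points):
    """Estimate a 3x3 homography from >= 4 pixel/global correspondences
    using the direct linear transform (least squares via SVD)."""
    rows = []
    for (px, py), (gx, gy) in zip(pixel_points, global_points):
        rows.append([px, py, 1, 0, 0, 0, -gx * px, -gx * py, -gx])
        rows.append([0, 0, 0, px, py, 1, -gy * px, -gy * py, -gy])
    _, _, vt = np.linalg.svd(np.array(rows, dtype=float))
    h = vt[-1].reshape(3, 3)
    return h / h[2, 2]

def apply_homography(h, p):
    """Dehomogenize h @ (px, py, 1) to get a global (x, y) coordinate."""
    x, y, w = h @ np.array([p[0], p[1], 1.0])
    return (x / w, y / w)

# Four floor markers: pixel locations and their surveyed global coordinates.
pixels = [(0, 0), (100, 0), (100, 100), (0, 100)]
world = [(0.0, 0.0), (5.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
H = compute_homography(pixels, world)
print(apply_homography(H, (50, 50)))   # approximately (2.5, 2.5)
```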
A system includes sensors and a processor that tracks first and second objects in a space. Upon detecting that a tracked position of the first object is within a threshold distance of a tracked position of a second object, a top-view image of the first object is received from a first sensor. Based on this top-view image, a first descriptor is determined for the first object. The first descriptor is associated with an observable characteristic of the first object. The processor identifies the first object based at least in part upon the first descriptor.
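A descriptor based on an observable characteristic, as described above, could be as simple as a normalized intensity histogram of a top-view crop, matched by nearest distance. The histogram choice, bin count, and names below are illustrative assumptions:

```python
import numpy as np

def intensity_histogram(image, bins=8):
    """A simple observable-characteristic descriptor: a normalized
    grayscale histogram of a top-view crop of the object."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()

def identify(descriptor, candidates):
    """Return the candidate id whose stored descriptor is closest
    (Euclidean distance) to the query descriptor."""
    return min(candidates, key=lambda cid: np.linalg.norm(candidates[cid] - descriptor))

# Stored descriptors for two tracked people whose positions came close.
dark = np.full((10, 10), 30, dtype=np.uint8)
light = np.full((10, 10), 220, dtype=np.uint8)
candidates = {"person_a": intensity_histogram(dark),
              "person_b": intensity_histogram(light)}

# A new top-view crop is matched against the stored descriptors.
query = intensity_histogram(np.full((10, 10), 25, dtype=np.uint8))
print(identify(query, candidates))   # person_a
```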
A validation terminal located at a registered location comprises a barcode reader, a memory, and a processor. The memory stores a public key that is paired with a private key linked with the registered location of the validation terminal. The processor is operably coupled to the barcode reader and the memory, and is configured to detect an encrypted barcode that was scanned by the barcode reader from a mobile device that is located at the registered location of the validation terminal. The encrypted barcode is based at least in part upon transaction information associated with products in a digital cart, and the encrypted barcode is encrypted using the private key. The processor is further configured to decrypt the encrypted barcode using the stored public key, and to indicate the transaction is valid in response to decrypting the encrypted barcode using the public key.
H04L 9/30 - Public key, i.e. the encryption algorithm being computationally infeasible to invert and the users' encryption keys not requiring secrecy
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols, including means for verifying the identity or authority of a user of the system
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings, characterised by the kind of the digital marking, e.g. shape, nature, code
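The private-key/public-key flow described in the abstract above — the barcode payload is produced with a private key linked to the registered location, and the terminal validates it with the paired public key — can be illustrated with textbook RSA. The tiny key below is a toy for illustration only; a real deployment would use a vetted cryptography library with full-size keys, and the cart format is an assumption:

```python
import hashlib

# Toy RSA key pair with tiny textbook primes (p=61, q=53) — illustration only.
N = 61 * 53            # modulus, 3233
E = 17                 # public exponent (stored on the validation terminal)
D = 2753               # private exponent (linked with the registered location)

def sign(message, private_key, modulus):
    """Producer side: transform a digest of the cart with the private key;
    the result is what would be encoded into the barcode."""
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % modulus
    return pow(digest, private_key, modulus)

def verify(message, signature, public_key, modulus):
    """Terminal side: recover the digest with the stored public key and
    compare it against a freshly computed digest of the transaction info."""
    digest = int.from_bytes(hashlib.sha256(message.encode()).digest(), "big") % modulus
    return pow(signature, public_key, modulus) == digest

cart = "store-42|item:1234|qty:2|total:7.98"   # hypothetical cart encoding
barcode_payload = sign(cart, D, N)
print(verify(cart, barcode_payload, E, N))               # True: transaction valid
print(verify(cart + "|tampered", barcode_payload, E, N)) # False: rejected
```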