09 - Scientific and electric apparatus and instruments
35 - Advertising; Business affairs
42 - Scientific, technological and industrial services, research and design
Goods and services
Downloadable computer programs and downloadable computer
software using artificial intelligence for natural language
processing, generation, understanding and analysis;
downloadable computer programs and downloadable computer
software for machine learning; downloadable computer
programs and downloadable computer software for image
recognition and generation; downloadable computer programs
and downloadable computer software using artificial
intelligence for music generation and suggestions;
downloadable computer programs and downloadable computer
software for artificial intelligence, namely, computer
software for developing, running and analyzing algorithms
that are able to learn to analyze, classify, and take
actions in response to exposure to data; downloadable
computer software using artificial intelligence for image
and video editing and retouching; downloadable computer
software using artificial intelligence for the generation of
text, images, photos, videos, audio, and multimedia content;
downloadable computer software using artificial intelligence
for connecting consumers with targeted promotional
advertisements; downloadable computer software using
artificial intelligence for the generation of advertisements
and promotional materials; downloadable computer software
using artificial intelligence for creating and generating
text; downloadable computer software using artificial
intelligence for translating words or text from one language
to another; downloadable chatbot software for image
recognition and generation; downloadable chatbot software
for music generation; downloadable chatbot software for
image and video editing and retouching; downloadable chatbot
software for the generation of text, images, photos, videos,
audio and multimedia content; downloadable chatbot software
for connecting consumers with promotional messaging;
downloadable chatbot software for simulating human
conversations; downloadable chatbot software for suggesting
image, video, audio, text, and multimedia content;
downloadable chatbot software for responding to oral and
written prompts.

Advertising, marketing, and promotion services; marketing,
advertising, and promotional services using artificial
intelligence software, chatbot software, and augmented
reality software; dissemination of advertising for others
via computer and other communication networks; online retail
store services featuring a wide variety of consumer goods of
others; promoting the goods and services of others by
providing an internet website portal featuring links to the
websites of others; facilitating the exchange and sale of
services and products of third parties via computer and
communication networks, namely, operating on-line
marketplaces for sellers and buyers of goods and services;
consumer profiling for commercial or marketing purposes;
providing consumer information and advice for consumers in
the selection of products to buy.

Research and development in the field of artificial
intelligence; providing online non-downloadable software
using artificial intelligence for natural language
processing, generation, understanding, and analysis;
providing online non-downloadable software for developing,
running and analyzing algorithms that are able to learn to
analyze, classify, and take actions in response to exposure
to data; software as a service (saas) services featuring
software for using language models; providing online
non-downloadable software for machine-learning based
language and speech processing; providing online
non-downloadable software for the translation of text from one
language to another; providing on-line non-downloadable
software using artificial intelligence for image recognition
and generation; providing on-line non-downloadable software
using artificial intelligence for text recognition and
generation; providing online non-downloadable software for
the generation of advertisements and promotional materials;
providing on-line non-downloadable software using artificial
intelligence for music generation and suggestions; providing
on-line non-downloadable software using artificial
intelligence for image and video editing and retouching;
providing on-line non-downloadable software using artificial
intelligence for the generation of text, images, photos,
videos, audio, and multimedia content; providing on-line
non-downloadable software using artificial intelligence for
connecting consumers with promotional advertisements;
providing temporary use of online non-downloadable chatbot
software using artificial intelligence for image recognition
and generation; providing temporary use of online
non-downloadable chatbot software using artificial
intelligence for text recognition and generation; providing
temporary use of online non-downloadable chatbot software
using artificial intelligence for music recognition and
generation; providing temporary use of online
non-downloadable chatbot software using artificial
intelligence for the generation of text, images, photos,
videos, audio, and multimedia content; providing
temporary use of online non-downloadable chatbot software
using artificial intelligence for connecting consumers with
advertisements; providing temporary use of online
non-downloadable chatbot software using artificial
intelligence for simulating human conversations; providing
temporary use of online non-downloadable chatbot software
using artificial intelligence for responding to oral and
written prompts.
A case for a portable device such as a smartphone includes light sources such as LEDs, which, when illuminated, can be detected and tracked by a head-worn augmented or virtual reality device. The light sources may be located at the corners of the case and may emit infrared light. A relative pose between the smartphone and the head-worn device can be determined based on computer vision techniques performed on images captured by the head-worn device that include light from the light sources. Relative movement between the smartphone and the head-worn device can be used to provide user input to the head-worn device, as can touch input on the portable device. In some instances, the case is powered inductively from the portable device.
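The relative-pose computation described above is a classic planar perspective-n-point problem: four coplanar LEDs with known positions on the case, observed as bright blobs in the headset camera. The abstract names no specific algorithm; the following is a minimal sketch using a planar-homography decomposition, with the case dimensions, camera intrinsics, and pixel detections all invented for illustration.

```python
import numpy as np

# Corner-LED positions on the case, in the case's own frame (metres).
# A 160 mm x 75 mm phone case is an illustrative assumption.
LEDS_CASE = np.array([[0.0, 0.0], [0.16, 0.0], [0.16, 0.075], [0.0, 0.075]])

# Assumed pinhole intrinsics for the head-worn device's camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

def project(R, t, pts2d):
    """Project planar case points (Z=0 in the case frame) to pixels."""
    pts3d = np.c_[pts2d, np.zeros(len(pts2d))]
    cam = (R @ pts3d.T).T + t
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:]

def pose_from_leds(pix):
    """Recover the case pose from the four tracked LED pixels via a
    planar homography (DLT) and the decomposition H = K [r1 r2 t]."""
    A = []
    for (X, Y), (u, v) in zip(LEDS_CASE, pix):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    H = np.linalg.svd(np.array(A))[2][-1].reshape(3, 3)
    B = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(B[:, 0])
    if B[2, 2] < 0:                 # choose the sign with the case in front
        lam = -lam
    r1, r2, t = lam * B[:, 0], lam * B[:, 1], lam * B[:, 2]
    R = np.c_[r1, r2, np.cross(r1, r2)]
    return R, t

# Simulate the headset seeing the case 0.5 m straight ahead, then
# recover that pose from the pixel observations alone.
true_t = np.array([0.0, 0.0, 0.5])
pix = project(np.eye(3), true_t, LEDS_CASE)
R, t = pose_from_leds(pix)
```

Repeating this per frame yields the relative movement used as user input; a production system would refine the estimate iteratively and fuse it with IMU data.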
H04M 1/72409 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality by interfacing with external accessories
H04B 1/3888 - Arrangements for carrying or protecting transceivers
H04M 1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device in specific circumstances, taking into account constraints imposed by the context or the environment
A method for carving a 3D space using hands tracking is described. In one aspect, a method includes accessing a first frame from a camera of a display device, tracking, using a hand tracking algorithm operating at the display device, hand pixels corresponding to one or more user hands depicted in the first frame, detecting, using a sensor of the display device, depths of the hand pixels, identifying a 3D region based on the depths of the hand pixels, and applying a 3D reconstruction engine to the 3D region.
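The region-identification step above can be sketched as follows: unproject the tracked hand pixels through an assumed pinhole depth-sensor model and take a padded bounding box of the resulting points. The intrinsics, margin, and function names are illustrative assumptions, not the patented method.

```python
import numpy as np

# Assumed pinhole intrinsics for the display device's depth sensor.
FX, FY, CX, CY = 500.0, 500.0, 320.0, 240.0

def carve_region(depth, hand_mask, margin=0.05):
    """Identify the 3D region spanned by tracked hand pixels.

    depth:     HxW depth map in metres from the device's sensor
    hand_mask: HxW boolean mask from the hand-tracking algorithm
    Returns (min_corner, max_corner) of an axis-aligned box, padded by
    `margin` metres, to which a 3D reconstruction engine is then applied.
    """
    v, u = np.nonzero(hand_mask)
    z = depth[v, u]
    valid = z > 0                      # drop pixels with no depth reading
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - CX) / FX * z              # unproject pixels to camera space
    y = (v - CY) / FY * z
    pts = np.stack([x, y, z], axis=1)
    return pts.min(axis=0) - margin, pts.max(axis=0) + margin

# Toy frame: a patch of "hand" pixels at roughly 0.4 m from the camera.
depth = np.zeros((480, 640))
depth[200:220, 300:330] = 0.4
lo, hi = carve_region(depth, depth > 0)
```

Restricting reconstruction to this box is what makes the approach cheap: the expensive 3D engine only runs where the hands are.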
A case for a portable device such as a smartphone includes light sources such as LEDs, which, when illuminated, can be detected and tracked by a head-worn augmented or virtual reality device. The light sources may be located at the corners of the case and may emit infrared light. A relative pose between the smartphone and the head-worn device can be determined based on computer vision techniques performed on images captured by the head-worn device that include light from the light sources. Relative movement between the smartphone and the head-worn device can be used to provide user input to the head-worn device, as can touch input on the portable device. In some instances, the case is powered inductively from the portable device.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
5.
MULTIFUNCTIONAL CASE FOR ELECTRONICS-ENABLED EYEWEAR
A carry case for an electronics-enabled eyewear device has incorporated therein electronic components for connection to the eyewear device while storing it. The case comprises a rigid frame structure defining an openable holding space for the eyewear device, and a compressible, shock-resistant protective cover on the frame structure. The exterior of the case may be predominantly defined by the shock-resistant protective cover.
Systems, methods, and computer readable media for a service manager to manage services on a wearable device are disclosed. The service manager remains active in memory and listens for requests for services. It then determines which services to run and which to stop in order to respond to those requests. After starting a service, the service manager calls the service to handle the request and sends a response to the sender of the request. The service manager may be resident on a different processor than the processor from which the requests for services originate. The service manager maintains priorities of the services to determine which services to stop or remove from memory.
Systems and methods of generating ground truth datasets for producing virtual reality (VR) experiences, for testing simulated sensor configurations, and for training machine-learning algorithms. In one example, a recording device with one or more cameras and one or more inertial measurement units captures images and motion data along a real path through a physical environment. A SLAM application uses the captured data to calculate the trajectory of the recording device. A polynomial interpolation module uses Chebyshev polynomials to generate a continuous time trajectory (CTT) function. The method includes identifying a virtual environment and assembling a simulated sensor configuration, such as a VR headset. Using the CTT function, the method includes generating a ground truth output dataset that represents the simulated sensor configuration in motion along a virtual path through the virtual environment. The virtual path is closely correlated with the motion along the real path as captured by the recording device. Accordingly, the output dataset produces a realistic and life-like VR experience. In addition, the methods described can be used to generate multiple output datasets, at various sample rates, which are useful for training the machine-learning algorithms which are part of many VR systems.
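The polynomial-interpolation step lends itself to a short sketch: fit a Chebyshev series to discrete trajectory samples, then query the resulting continuous-time trajectory at arbitrary timestamps and sample rates. This uses NumPy's Chebyshev class on a synthetic one-dimensional trajectory; a real pipeline would fit each pose dimension of the SLAM output, likely in piecewise segments.

```python
import numpy as np
from numpy.polynomial import Chebyshev

# Discrete SLAM trajectory samples: timestamps (s) and one position
# coordinate (m). The analytic motion here is a stand-in for SLAM output.
t = np.linspace(0.0, 10.0, 201)
x = 0.5 * np.sin(0.8 * t) + 0.05 * t

# Fit a Chebyshev series: the continuous-time trajectory (CTT) can now
# be evaluated at any timestamp, not just the recorded ones.
ctt = Chebyshev.fit(t, x, deg=15)

# Resample the same motion at different simulated sensor rates, e.g. a
# 30 Hz camera and a 1 kHz IMU, for ground-truth dataset generation.
t_cam = np.linspace(0.0, 10.0, 300, endpoint=False)
t_imu = np.linspace(0.0, 10.0, 10_000, endpoint=False)
x_cam, x_imu = ctt(t_cam), ctt(t_imu)

# Differentiating the fitted series gives velocity for simulated IMU
# channels without re-running SLAM.
velocity = ctt.deriv()(t_imu)
```

Because every simulated sensor samples the same smooth function, the camera and IMU streams stay mutually consistent, which is the point of the CTT representation.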
A display-enabled eyewear device has an integrated head sensor that dynamically and continuously measures or detects various cephalic parameters of a wearer's head. The head sensor includes a loop coupler system integrated in a lens-carrying frame to sense proximate ambient RF absorption influenced by head presence, size, and/or distance. Autonomous device management dynamically adjusts, or causes adjustment of, selected device features based on currently detected values for the cephalic parameters, which can include wear status, head size, and frame-head spacing.
Methods and systems are disclosed for performing real-time stylizing operations. The system receives an image that includes a depiction of a whole body of a real-world person. The system applies a machine learning model to the image to generate a stylized version of the whole body of the real-world person corresponding to a given style, the machine learning model being trained using training data to establish a relationship between a plurality of training images depicting synthetically rendered whole bodies of persons and corresponding ground-truth stylized versions of the whole bodies of the persons of the given style. The system replaces the depiction of the whole body of the real-world person in the image with the generated stylized version of the whole body of the real-world person.
A method for recognizing sign language using collaborative augmented reality devices is described. In one aspect, a method includes accessing a first image generated by a first augmented reality device and a second image generated by a second augmented reality device, the first image and the second image depicting a hand gesture of a user of the first augmented reality device, synchronizing the first augmented reality device with the second augmented reality device, in response to the synchronizing, distributing one or more processes of a sign language recognition system between the first and second augmented reality devices, collecting results from the one or more processes from the first and second augmented reality devices, and displaying, in near real-time in a first display of the first augmented reality device, text indicating a sign language translation of the hand gesture based on the results.
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing reduced availability modes in messaging. The program and method provide for maintaining a count of consecutive time periods in which message content has been exchanged between a first user and a second user in a messaging application; receiving, from a device associated with the first user, a request to set an availability mode for the first user to a reduced availability mode with respect to the messaging application; setting, in response to receiving the request, the availability mode for the first user to the reduced availability mode; and refraining from updating the count while the availability mode is set to the reduced availability mode.
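The count-maintenance logic described above amounts to a streak counter that freezes while reduced availability is active. A minimal sketch, with the class name, one-period granularity, and update hooks all assumed for illustration:

```python
from dataclasses import dataclass

@dataclass
class Streak:
    """Consecutive-period message count for one pair of users."""
    count: int = 0
    reduced_mode: bool = False     # reduced availability mode flag

    def set_reduced_mode(self, enabled: bool) -> None:
        self.reduced_mode = enabled

    def end_of_period(self, messages_exchanged: bool) -> None:
        """Called once per period (e.g. daily) for the user pair."""
        if self.reduced_mode:
            return                 # refrain from updating the count
        self.count = self.count + 1 if messages_exchanged else 0

s = Streak()
s.end_of_period(True)              # period 1: content exchanged
s.end_of_period(True)              # period 2: content exchanged
s.set_reduced_mode(True)           # first user requests reduced mode
s.end_of_period(False)             # no exchange, but count is preserved
s.set_reduced_mode(False)
s.end_of_period(True)              # next period resumes the streak
```

The behavioural choice the abstract emphasizes is that the count neither increments nor resets while the mode is active.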
H04L 51/043 - Real-time or near-real-time messaging, e.g. instant messaging [IM], using or handling presence information
H04L 51/224 - Monitoring or handling of messages by providing notification of incoming messages, e.g. push notifications of received messages
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
12.
HEAD PROPERTY DETECTION IN DISPLAY-ENABLED WEARABLE DEVICES
A display-enabled eyewear device has an integrated head sensor that dynamically and continuously measures or detects various cephalic parameters of a wearer's head. The head sensor includes a loop coupler system integrated in a lens-carrying frame to sense proximate ambient RF absorption influenced by head presence, size, and/or distance. Autonomous device management dynamically adjusts, or causes adjustment of, selected device features based on currently detected values for the cephalic parameters, which can include wear status, head size, and frame-head spacing.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
13.
SIGN LANGUAGE INTERPRETATION WITH COLLABORATIVE AGENTS
A method for recognizing sign language using collaborative augmented reality devices is described. In one aspect, a method includes accessing a first image generated by a first augmented reality device and a second image generated by a second augmented reality device, the first image and the second image depicting a hand gesture of a user of the first augmented reality device, synchronizing the first augmented reality device with the second augmented reality device, in response to the synchronizing, distributing one or more processes of a sign language recognition system between the first and second augmented reality devices, collecting results from the one or more processes from the first and second augmented reality devices, and displaying, in near real-time in a first display of the first augmented reality device, text indicating a sign language translation of the hand gesture based on the results.
G06F 40/58 - Use of machine translation, e.g. for multi-lingual retrieval, for providing client devices with translation performed by the server, or for real-time translation
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
G06V 20/20 - Image or video recognition or understanding; Scene-specific elements in augmented reality scenes
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
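The attention-mask step can be illustrated concretely: threshold the per-pixel confidence map, then coarsen the result to projector-addressable blocks so the laser pattern is only sent where passive depth is unreliable. The threshold and block size below are illustrative assumptions.

```python
import numpy as np

def attention_mask(confidence, threshold=0.6):
    """Pixels whose passive depth-estimate confidence is too low and
    should therefore receive active (projected) illumination."""
    return confidence < threshold

def projector_regions(mask, block=16):
    """Coarsen the per-pixel mask to projector-addressable blocks: a
    block is targeted if any pixel inside it needs illumination."""
    h, w = mask.shape
    mask = mask[:h - h % block, :w - w % block]   # drop ragged edges
    grid = mask.reshape(h // block, block, w // block, block)
    return grid.any(axis=(1, 3))

# Toy scene: depth is confident everywhere except a low-texture patch.
conf = np.ones((64, 64))
conf[10:20, 30:40] = 0.2
mask = attention_mask(conf)
regions = projector_regions(mask)   # 4x4 grid of 16-pixel blocks
```

Only the blocks overlapping the uncertain patch are lit, which is where the energy saving over full-frame structured light comes from.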
Systems, devices, media, and methods are presented for determining a level of abusive network behavior suspicion for groups of entities and for identifying suspicious entity groups. A suspiciousness metric is developed and used to evaluate a multi-view graph across multiple views where entities are associated with nodes of the graph and attributes of the entities are associated with levels of the graph.
A computer-implemented method comprises: training a classifier with labeled data from a dataset; classifying, by the trained classifier, unlabeled data from the dataset; providing, by the classifier to a policy gradient, a reward signal for each data/query pair; transferring learning from the classifier to a ranker; training, by the policy gradient, the ranker; ranking data from the dataset based on a query; and retrieving data from the ranked data in response to the query.
A method for enhancing a presentation of a network document by a client terminal with real-time social media content. The method comprises analyzing content in a web document to identify a relation to a first of a plurality of multi-participant events documented in an event dataset, each of the plurality of multi-participant events being held in a geographical venue that hosts an audience of a plurality of participants; matching a plurality of event-indicating tags of each of a plurality of user-uploaded media content files with at least one feature of the first multi-participant event to identify a group of user-uploaded media content files selected from the plurality of user-uploaded media content files; and forwarding at least some members of the group for simultaneous presentation on a browser running on a client terminal and presenting the web document.
G06Q 50/00 - Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
H04L 67/02 - Protocols based on web technology, e.g. hypertext transfer protocol [HTTP]
A hand-tracking platform generates gesture components for use as user inputs into an application of an Augmented Reality (AR) system. In some examples, the hand-tracking platform generates real-world scene environment frame data based on gestures being made by a user of the AR system using a camera component of the AR system. The hand-tracking platform recognizes a gesture component based on the real-world scene environment frame data and generates gesture component data based on the gesture component. The application utilizes the gesture component data as user input in a user interface of the application.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
An energy-efficient adaptive 3D sensing system. The adaptive 3D sensing system includes one or more cameras and one or more projectors. The adaptive 3D sensing system captures images of a real-world scene using the one or more cameras and computes depth estimates and depth estimate confidence values for pixels of the images. The adaptive 3D sensing system computes an attention mask based on the one or more depth estimate confidence values and commands the one or more projectors to send a distributed laser beam into one or more areas of the real-world scene based on the attention mask. The adaptive 3D sensing system captures 3D sensing image data of the one or more areas of the real-world scene and generates 3D sensing data for the real-world scene based on the 3D sensing image data.
H04N 5/222 - Pictorial communication, e.g. television; Details of television systems; Studio equipment
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
H04N 23/56 - Cameras or camera modules comprising electronic image sensors; Control thereof, provided with illuminating means
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
H04N 13/128 - Adjusting depth or disparity
20.
SYSTEMS, METHODS AND DEVICES FOR PROVIDING SEQUENCE BASED DISPLAY DRIVERS
A display driver device (210) receives a downloadable “sequence” for dynamically reconfiguring displayed image characteristics in an image system. The display driver device comprises one or more storage devices, for example, memory devices, for storing image data (218) and portions of drive sequences (219) that are downloadable and/or updated in real time depending on various inputs (214).
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
21.
HYPEREXTENDING HINGE FOR WEARABLE ELECTRONIC DEVICE
Eyewear having a frame, a hinge, and a hyperextendable temple. An extender is coupled to the hinge and the temple, and the extender extends with respect to the hinge allowing hyperextension of the temple with respect to the frame. The extender may include a bushing and a spring that allows the temple hyperextension, and which also creates a bias force to urge the temple against a user's head during use.
Methods and systems are disclosed for performing real-time stylizing operations. The system receives an image that includes a depiction of a whole body of a real-world person. The system applies a machine learning model to the image to generate a stylized version of the whole body of the real-world person corresponding to a given style, the machine learning model being trained using training data to establish a relationship between a plurality of training images depicting synthetically rendered whole bodies of persons and corresponding ground-truth stylized versions of the whole bodies of the persons of the given style. The system replaces the depiction of the whole body of the real-world person in the image with the generated stylized version of the whole body of the real-world person.
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
Systems, methods, and computer readable media for voice-controlled user interfaces (UIs) for augmented reality (AR) wearable devices are disclosed. Embodiments are disclosed that enable a user to interact with the AR wearable device without using physical user interface devices. An application has a non-voice-controlled UI mode and a voice-controlled UI mode. The user selects the mode of the UI. The application running on the AR wearable device displays UI elements on a display of the AR wearable device. The UI elements have types. Predetermined actions are associated with each of the UI element types. The predetermined actions are displayed with other information and used by the user to invoke the corresponding UI element.
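The association between UI element types and predetermined actions can be sketched as a simple lookup: each displayed element advertises the utterance that invokes it, and recognized speech is matched against those utterances. The type names and actions below are invented for illustration.

```python
from typing import Dict, Optional

# Predetermined action per UI element type; displayed alongside each
# element so the user knows what to say. Names are assumptions.
UI_ACTIONS: Dict[str, str] = {
    "button": "press",
    "toggle": "switch",
    "slider": "set",
    "list_item": "select",
}

def voice_command_for(element_name: str, element_type: str) -> str:
    """The utterance that invokes a UI element in voice-controlled mode."""
    return f"{UI_ACTIONS[element_type]} {element_name}"

def handle_utterance(utterance: str, elements: Dict[str, str]) -> Optional[str]:
    """Match recognized speech against the currently displayed elements
    (name -> type); return the name of the element to invoke, if any."""
    for name, etype in elements.items():
        if utterance.strip().lower() == voice_command_for(name, etype):
            return name
    return None

displayed = {"volume": "slider", "capture": "button"}
target = handle_utterance("press capture", displayed)
```

Keying the action on the element type, rather than on each element, is what lets the user predict the command for any element without per-element training.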
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or on a metaphor-based environment, e.g. interaction with desktop elements such as windows or icons, or assisted by a cursor's changing behaviour or appearance, using icons
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
G10L 15/16 - Speech classification or search using artificial neural networks
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
24.
REMOTE ANNOTATION AND NAVIGATION USING AN AR WEARABLE DEVICE
Systems, methods, and computer readable media for remote annotations, drawings, and navigation instructions sent to an augmented reality (AR) wearable device from a computing device are disclosed. The AR wearable device captures images and sends them to the remote computing device to provide a real-time view of what the user of the AR wearable device sees. A user of the remote computing device can add navigation instructions and can select an image to annotate or draw on. The AR wearable device provides 3-dimensional (3D) coordinate information within a 3D world of the AR wearable device for the selected image. The user of the remote computing device then annotates or draws on the selected image. The remote computing device determines 3D coordinates for the annotations and drawings within the 3D world of the AR wearable device. The annotations and drawings are sent to the AR wearable device with associated 3D coordinates.
A UAV having a wireless front end including propellers that are dual-purposed to function as ground-communication antenna elements. This design reduces the weight and size of the UAV, enabling a compact design capable of handling a heavier payload.
B64U 101/30 - Unmanned aerial vehicles specially adapted for specific uses or applications, for imaging, photography or videography
H01Q 1/28 - Adaptation for use in or on aircraft, missiles, satellites, or balloons
H01Q 1/48 - Antennas, i.e. radio aerials; Details of, or arrangements associated with, antennas; Earth screens; Counterpoises
Aspects of the present disclosure involve a system for presenting AR items. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system generates a three-dimensional (3D) bounding box for the real-world object and stabilizes the 3D bounding box based on one or more sensors of the device. The system determines a position, orientation, and dimensions of the real-world object based on the stabilized 3D bounding box and renders a display of an augmented reality (AR) item within the video based on the position, orientation, and dimensions of the real-world object.
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G06V 20/20 - Image or video recognition or understanding; Scene-specific elements in augmented reality scenes
Methods and systems are disclosed for generating AR experiences on a messaging platform. The methods and systems receive, from a client device, a request to access an augmented reality (AR) experience and access a list of event types associated with the AR experience used to generate one or more metrics. The methods and systems determine that an interaction associated with the AR experience corresponds to a first event type of the list of event types and generate interaction data for the first event type representing the interaction. In response to receiving a request to terminate the AR experience, the systems and methods transmit the interaction data to a remote server.
A system for deformation or bending correction in an Augmented Reality (AR) system. Sensors are positioned in a frame of a head-worn AR system to sense forces or pressure acting on the frame by temple pieces attached to the frame. The sensed forces or pressure are used in conjunction with a model of the frame to determine a corrected model of the frame. The corrected model is used to correct video data captured by the AR system and to correct a virtual video overlay that is provided to a user wearing the head-worn AR system.
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
Systems and methods are provided for performing operations on an augmented reality (AR) device using an external screen streaming system. The system establishes, by one or more processors of an AR device, a communication with an external client device. The system causes overlay of, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives, by the AR device, a first image from the external client device. The system, in response to receiving the first image from the external client device, overlays the first image on the first AR object by the AR device.
A system including a drone having a projector to project an image from a projection origin. The drone also has a navigation unit to determine location information for the drone. A processor coupled to the drone includes a memory. Execution of programming by the processor configures the system to obtain a projection surface architecture for a projection surface. The projection surface architecture includes reference points that correspond to physical locations on the projection surface. Each reference point is associated with relationship data with respect to an architecture origin. The system also receives location information for the drone, adapts the relationship data responsive to change in the location information, adjusts the image using the adapted relationship data, and projects the adjusted image onto the projection surface.
Embodiments described herein include an expressive icon system to present an animated graphical icon, wherein the animated graphical icon is generated by capturing facial tracking data at a client device. In some embodiments, the system may track and capture facial tracking data of a user via a camera associated with a client device (e.g., a front-facing camera, or a paired camera), and process the facial tracking data to animate a graphical icon.
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour, using icons
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
H04M 1/72427 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting games or graphical animations
H04M 1/7243 - User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
H04M 1/72469 - User interfaces specially adapted for cordless or mobile telephones for operating the device by selecting functions from two or more displayed items, e.g. menus or icons
Methods and systems for videoconferencing include generating work quality metrics based on emotion recognition of an individual such as a call center agent. The work quality metrics allow for workforce optimization. One example method includes the steps of receiving a video including a sequence of images, detecting an individual in one or more of the images, locating feature reference points of the individual, aligning a virtual face mesh to the individual in one or more of the images based at least in part on the feature reference points, dynamically determining over the sequence of images at least one deformation of the virtual face mesh, determining that the at least one deformation refers to at least one facial emotion selected from a plurality of reference facial emotions, and generating quality metrics including at least one work quality parameter associated with the individual based on the at least one facial emotion.
G06Q 10/0639 - Performance analysis of employees; Performance analysis of enterprise or organisation operations
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
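The videoconferencing abstract above maps per-frame facial emotions, derived from face-mesh deformations, into work quality metrics. A small illustrative sketch of the final aggregation step follows; the emotion labels, set membership, and scoring formula are assumptions for demonstration, not the disclosed method:

```python
# Aggregate per-frame emotion labels (as would come from classifying face
# mesh deformations) into simple work-quality metrics for an agent.
from collections import Counter

POSITIVE = {"happy", "calm"}
NEGATIVE = {"angry", "contempt", "stressed"}

def work_quality(frame_emotions: list) -> dict:
    """frame_emotions: one detected emotion label per analyzed frame."""
    counts = Counter(frame_emotions)
    total = sum(counts.values()) or 1
    pos = sum(counts[e] for e in POSITIVE)
    neg = sum(counts[e] for e in NEGATIVE)
    return {
        "positivity_ratio": pos / total,
        "negativity_ratio": neg / total,
        # one possible scalar work-quality parameter in [0, 1]
        "quality_score": max(0.0, min(1.0, 0.5 + (pos - neg) / (2 * total))),
    }
```

A workforce-optimization layer would then compare such scores across agents or across calls.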
Aspects of the present disclosure involve a system for performing real-time in-painting using machine learning techniques. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system accesses a segmentation associated with the real-world object and removes a depiction of the real-world object from a region of a first frame of the video. The system processes, by a machine learning model, the first frame and one or more previous frames of the video that precede the first frame to generate a new frame in which portions of the first frame have been blended into the region from which the depiction of the real-world object has been removed.
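The in-painting abstract above fills the removed region of the current frame using information from preceding frames. The sketch below substitutes a simple temporal average for the machine learning model, purely to make the data flow concrete; the function names and grayscale representation are assumptions:

```python
# Toy stand-in for the learned in-painting model: fill the masked region of
# the current frame by averaging co-located pixels from earlier frames.
def inpaint(frame, prev_frames, mask):
    """frame: 2D list of grayscale pixel values for the current frame.
    prev_frames: list of earlier frames with the same dimensions.
    mask[r][c] is True where the object depiction was removed."""
    out = [row[:] for row in frame]          # copy; untouched pixels survive
    for r, mask_row in enumerate(mask):
        for c, removed in enumerate(mask_row):
            if removed:
                history = [pf[r][c] for pf in prev_frames]
                out[r][c] = sum(history) / len(history)
    return out
```

A trained network would blend texture and structure rather than averaging, but the interface (current frame, previous frames, segmentation mask in, new frame out) matches the described system.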
Systems and methods are provided for performing operations on an augmented reality (AR) device using an external screen streaming system. The system establishes, by one or more processors of an AR device, a communication with an external client device. The system causes overlay of, by the AR device, a first AR object on a real-world environment being viewed using the AR device. The system receives, by the AR device, a first image from the external client device. The system, in response to receiving the first image from the external client device, overlays the first image on the first AR object by the AR device.
An augmented reality (AR) content system is provided. The AR content system may analyze audio input obtained from a user to generate a search request. The AR content system may obtain search results in response to the search request and determine a layout by which to display the search results. The search results may be displayed in a user interface within an AR environment according to the layout. The AR content system may also analyze audio input to detect commands to perform with respect to content displayed in the user interface.
A method of providing an interactive personal mobility system, performed by one or more processors, comprises determining an initial pose by visual-inertial odometry performed on images and inertial measurement unit (IMU) data generated by a wearable augmented reality device. Sensor data transmitted from a personal mobility system is received, and sensor fusion is performed on the data received from the personal mobility system to provide an updated pose. Augmented reality effects are displayed on the wearable augmented reality device based on the updated pose.
G06T 19/00 - Manipulating 3D models or images for computer graphics
B60L 3/12 - Recording operating variables
B60L 15/20 - Methods, circuits or devices for controlling the traction-motor speed of electrically-propelled vehicles, e.g. for remote control from a fixed location, from different points of the vehicle or from different vehicles of the same train, for control of the vehicle or its driving motor to achieve a desired performance, e.g. speed, torque, programmed variation of speed
G01C 22/02 - Measuring distance traversed on the ground by vehicles, persons, animals or other moving solid bodies, e.g. using odometers or pedometers, by conversion into electric waveforms and subsequent integration, e.g. using tachometer generator
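The personal mobility abstract above fuses the headset's visual-inertial pose with sensor data from the mobility system to produce an updated pose. A hedged sketch of one fusion step follows; the complementary-filter form and the gain value are illustrative assumptions, since the disclosure does not commit to a specific filter here:

```python
# Illustrative sensor-fusion step: correct the wearable device's
# visual-inertial odometry (VIO) pose with a pose estimate derived from the
# personal mobility system's sensors, using a complementary filter.
def fuse_pose(vio_pose, pm_pose, gain=0.3):
    """vio_pose, pm_pose: (x, y, heading) tuples in a shared frame.
    Trusts VIO short-term while the mobility-system data corrects drift."""
    return tuple(v + gain * (p - v) for v, p in zip(vio_pose, pm_pose))
```

The updated pose then drives where the AR effects are rendered on the wearable device.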
A system and method are described for generating 3D garments from two-dimensional (2D) scribble images drawn by users. The system includes a conditional 2D generator, a conditional 3D generator, and two intermediate media including dimension-coupling color-density pairs and flat point clouds that bridge the gap between dimensions. Given a scribble image, the 2D generator synthesizes dimension-coupling color-density pairs including the RGB projection and density map from the front and rear views of the scribble image. A density-aware sampling algorithm converts the 2D dimension-coupling color-density pairs into a 3D flat point cloud representation, where the depth information is ignored. The 3D generator predicts the depth information from the flat point cloud. Dynamic variations per garment due to deformations resulting from a wearer's pose as well as irregular wrinkles and folds may be bypassed by taking advantage of 2D generative models to bridge the dimension gap in a non-parametric way.
A mixed-reality media content system may be configured to perform operations that include: causing display of image data at a client device, the image data comprising a depiction of an object that includes a graphical code at a position upon the object; detecting the graphical code at the position upon the depiction of the object based on the image data; accessing media content within a media repository based on the graphical code scanned by the client device; and causing display of a presentation of the media content at the position of the graphical code upon the depiction of the object at the client device.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour
Aspects of the present disclosure involve a system for performing real-time in-painting using machine learning techniques. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system accesses a segmentation associated with the real-world object and removes a depiction of the real-world object from a region of a first frame of the video. The system processes, by a machine learning model, the first frame and one or more previous frames of the video that precede the first frame to generate a new frame in which portions of the first frame have been blended into the region from which the depiction of the real-world object has been removed.
A system for deformation or bending correction in an Augmented Reality (AR) system. Sensors are positioned in a frame of a head-worn AR system to sense forces or pressure acting on the frame by temple pieces attached to the frame. The sensed forces or pressure are used in conjunction with a model of the frame to determine a corrected model of the frame. The corrected model is used to correct video data captured by the AR system and to correct a video virtual overlay that is provided to a user wearing the head-worn AR system.
A method for prohibiting email content propagation that receives, at a server, an email message. At the server, at least one email address associated with the email message which is designated not to receive a content of the email message is identified. At the server, the email message is modified by selectively removing a content of the email message to be conveyed to the at least one email address. The server conveys the modified email message to the at least one email address. The server conveys the email message to one or more recipient email addresses except the at least one email address. Consequently, the server has sent a submitted message to multiple email addresses, while modifying the content sent to a subset of the addresses that received the email message.
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06Q 10/107 - Computer-aided management of electronic mail
G06Q 50/00 - Systems or methods specially adapted for a specific business sector, e.g. utilities or tourism
H04L 51/214 - Monitoring or handling of messages using selective forwarding
H04L 51/48 - Message addressing, e.g. address format or anonymous messages, aliases
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
Aspects of the present disclosure involve a system for presenting AR items. The system receives a video that includes a depiction of a real-world object in a real-world environment. The system generates a three-dimensional (3D) bounding box for the real-world object and stabilizes the 3D bounding box based on one or more sensors of the device. The system determines a position, orientation, and dimensions of the real-world object based on the stabilized 3D bounding box and renders a display of an augmented reality (AR) item within the video based on the position, orientation, and dimensions of the real-world object.
A UAV having a manual gimbal including a camera, and a flight mode selector configured to both select a flight mode and manually establish a camera position as a function of the selected flight mode. A controller responds to a position of the gimbal or selector to establish the flight mode. The flight mode is selected from several available modes, for example, a horizontal flight mode, a 45-degree flight mode, and a vertical (aerial) flight mode. The flight mode selector is mechanically coupled to the gimbal and establishes a pitch angle of the gimbal, and thus the angle of the camera attached to the gimbal.
B64C 39/02 - Aircraft not otherwise provided for characterised by special use
B64C 19/00 - Aircraft control not otherwise provided for
B64U 101/30 - Unmanned aerial vehicles specially adapted for specific uses or applications for imaging, photography or videography
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/08 - Control of attitude, i.e. elimination or reduction of the effects of roll, pitch or yaw
G06F 3/0362 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 1D translations or rotations of an operating part of the device, e.g. scroll wheels, sliders, knobs, rollers or belts
B64U 50/14 - Propulsion using external fans or propellers ducted or shrouded
A mixed-reality media content system may be configured to perform operations that include: causing display of image data at a client device, the image data comprising a depiction of an object that includes a graphical code at a position upon the object; detecting the graphical code at the position upon the depiction of the object based on the image data; accessing media content within a media repository based on the graphical code scanned by the client device; and causing display of a presentation of the media content at the position of the graphical code upon the depiction of the object at the client device.
H04N 21/8545 - Content authoring for generating interactive applications
G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour, using icons
G06K 19/06 - Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
Systems and methods herein describe privacy-preserving multi-touch attribution. The described systems access a plurality of impression events and a plurality of conversion events, where each impression event and each conversion event is associated with a user identifier. For each impression event and each conversion event, the described systems generate a hashed user identifier based on the associated user identifier, initiate a key agreement protocol comprising a key, generate an encrypted identifier by encrypting the hashed user identifier with the key, and store the encrypted identifier.
G06F 21/00 - Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
H04L 9/30 - Public key, i.e. encryption algorithm being computationally infeasible to invert, and users' encryption keys not requiring secrecy
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
H04L 29/06 - Communication control; Communication processing characterised by a protocol
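The attribution abstract above hashes each user identifier, derives a key via a key agreement protocol, and encrypts the hashed identifier. The sketch below illustrates only those identifier-protection steps, under loudly stated assumptions: SHA-256 is used for the hash, and an HMAC under a placeholder pre-shared key stands in for both the key agreement and the deterministic encryption.

```python
# Sketch of privacy-preserving identifier handling for attribution.
# Assumptions: SHA-256 for hashing; HMAC-SHA-256 under a shared key as a
# stand-in for encrypting the hashed identifier with a key-agreement key.
import hashlib
import hmac

def hash_user_id(user_id: str) -> bytes:
    """Hash the raw user identifier so it is never stored in the clear."""
    return hashlib.sha256(user_id.encode()).digest()

def encrypt_hashed_id(hashed_id: bytes, shared_key: bytes) -> bytes:
    """Deterministic keyed transform: the same user ID on the impression
    and conversion sides maps to the same ciphertext, enabling a private
    join without revealing the underlying identifier."""
    return hmac.new(shared_key, hashed_id, hashlib.sha256).digest()

shared_key = b"placeholder-key-from-key-agreement"   # illustrative only
impression_id = encrypt_hashed_id(hash_user_id("user-42"), shared_key)
conversion_id = encrypt_hashed_id(hash_user_id("user-42"), shared_key)
assert impression_id == conversion_id   # events match without raw IDs
```

The determinism is the point of the design: attribution requires matching encrypted identifiers across event stores while neither store holds a raw user identifier.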
48.
DYNAMICALLY ASSIGNING PARTICIPANT VIDEO FEEDS WITHIN VIRTUAL CONFERENCING SYSTEM
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for dynamically assigning participant video feeds within a virtual conferencing system. The program and method provide, in association with designing a virtual space for virtual conferencing, an interface for configuring a set of rooms, each room being associated with a different number of participant video elements assignable to respective participant video feeds; receive, via the interface, an indication of user input for setting properties for the set of rooms; determine, in association with virtual conferencing, a first number of participants for a room; select a first room corresponding to the first number of participants; provide display of the first room; and assign, for each of the first number of participants, a participant video feed corresponding to the participant with a respective participant video element in the first room.
G06F 3/04815 - Interaction with a metaphor-based environment or object displayed in three dimensions, e.g. changing the user's viewpoint with respect to the environment or object
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
An augmented reality (AR) eyewear device has a lens system which includes an optical screening mechanism that enables switching the lens system between a conventional see-through state and an opaque state in which the lens system screens or functionally blocks out the wearer's view of the external environment. Such a screening mechanism allows for expanded use cases of the AR glasses compared to conventional devices, e.g.: as a sleep mask; to view displayed content like movies or sports events against a visually non-distracting background instead of against the external environment; and/or to enable VR functionality.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/60 - Extraction of image or video features relating to luminous properties, e.g. using a reflectance or lighting model
G06V 10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.
Embodiments described herein relate to an augmented expression system to generate and cause display of a specially configured interface to present an augmented reality perspective. The augmented expression system receives image and video data of a user and tracks facial landmarks of the user based on the image and video data, in real-time to generate and present a 3-dimensional (3D) bitmoji of the user.
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
52.
SOFTWARE APPLICATION MANAGER FOR MESSAGING APPLICATIONS
Among other things, embodiments of the present disclosure improve the functionality of electronic messaging systems by enabling users in an electronic chat conversation to run applications together. In some embodiments, when one user in a chat launches an application, an icon or other visual representation of the application appears in a portion of the chat window (e.g., in a “chat dock”) for other users in the chat to access.
H04L 65/401 - Support for services or applications wherein the services involve a main real-time session and one or more additional parallel real-time or time-sensitive sessions, e.g. white board sharing or spawning of a subconference
H04L 65/1089 - In-session procedures by removing media
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
H04L 67/00 - Network arrangements or protocols for supporting network services or applications
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
Devices, media, and methods are presented for an immersive augmented reality (AR) experience using an eyewear device with spatial audio. The eyewear device has a processor, a memory, an image sensor, and a speaker system. The eyewear device captures image information for an environment surrounding the device and identifies an object location within the same environment. The eyewear device then associates a virtual object with the identified object location. The eyewear device monitors the position of the device with respect to the virtual object and presents audio signals to alert the user that the identified object is in the environment.
The subject technology captures first image data by a computing device, the first image data comprising a target face of a target actor and facial expressions of the target actor, the facial expressions including lip movements. The subject technology generates, based at least in part on frames of a source media content, sets of source pose parameters. The subject technology receives a selection of a particular facial expression from a set of facial expressions. The subject technology generates, based at least in part on sets of source pose parameters and the selection of the particular facial expression, an output media content. The subject technology provides augmented reality content based at least in part on the output media content for display on the computing device.
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for providing bot participants for virtual conferencing. The program and method provide, in association with designing a virtual space, a first interface for configuring plural participant video elements, each being assignable to a respective participant; receive, via the first interface, an indication of user input for setting first properties for the plural participant video elements; provide a second interface for configuring a bot participant for simulating an actual participant in association with a participant video element of the plural participant video elements; receive, via the second interface, an indication of second user input for setting second properties for the bot participant; and provide, in association with designing the virtual space, display of the virtual space based on the first and second properties, the bot participant being assigned to the participant video element.
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
G06F 3/04815 - Interaction with a metaphor-based environment or object displayed in three dimensions, e.g. changing the user's viewpoint with respect to the environment or object
G06T 5/20 - Image enhancement or restoration using local operators
H04L 51/02 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, using automatic reactions or user delegation, e.g. automatic replies or chatbot-generated messages
H04L 65/403 - Arrangements for multi-party communication, e.g. for conferences
56.
MENU HIERARCHY NAVIGATION ON ELECTRONIC MIRRORING DEVICES
Systems and methods are provided for performing operations comprising: capturing, by an electronic mirroring device, a video feed received from a camera of the electronic mirroring device, the video feed depicting a user; displaying, by one or more processors of the electronic mirroring device, one or more menu options on the video feed that depicts the user, the one or more menu options relating to a first level in a hierarchy of levels; detecting a gesture performed by the user in the video feed; and in response to detecting the gesture, displaying a set of options related to a given option of the one or more menu options, the set of options relating to a second level in the hierarchy of levels.
A drone system is configured to capture an audio stream that includes voice commands from an operator, to process the audio stream for identification of the voice commands, and to perform operations based on the identified voice commands. The drone system can identify a particular voice stream in the audio stream as an operator voice, and perform the command recognition with respect to the operator voice to the exclusion of other voice streams present in the audio stream. The drone can include a directional camera that is automatically and continuously focused on the operator to capture a video stream usable in disambiguation of different voice streams captured by the drone.
G05D 1/12 - Target-seeking control
G05D 1/00 - Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
G05D 1/10 - Simultaneous control of position or course in three dimensions
G06T 7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G10L 15/24 - Speech recognition using non-acoustical features
G10L 15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
G10L 17/00 - Speaker identification or verification
AR-enabled wearable electronic devices such as smart glasses are adapted for use as an Internet of Things (IoT) remote control device where the user can control a pointer on a television screen, computer screen, or other IoT-enabled device to select items by looking at them and making selections using gestures. Built-in six-degrees-of-freedom (6DoF) tracking capabilities are used to move the pointer on the screen to facilitate navigation. The display screen is tracked in real-world coordinates to determine the point of intersection of the user's view with the screen using raycasting techniques. Hand and head gesture detection are used to allow the user to execute a variety of control actions by performing different gestures. The techniques are particularly useful for smart displays that offer AR-enhanced content that can be viewed in the displays of the AR-enabled wearable electronic devices.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from the processing unit to the output unit, e.g. interface arrangements
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 67/125 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks, involving control of end-device applications over a network
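The smart-glasses abstract above determines the pointer location by raycasting the user's view against the tracked display screen. The core geometric step, intersecting a view ray with the screen's plane in world coordinates, can be sketched as follows; the function name and argument conventions are illustrative assumptions:

```python
# Raycasting step: intersect the user's view ray (from the 6DoF head pose)
# with the display's plane to find the on-screen pointer position.
def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """All arguments are (x, y, z) tuples in world coordinates. Returns the
    intersection point, or None if the ray misses (parallel or behind)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        return None                      # ray parallel to the screen plane
    to_plane = [p - o for p, o in zip(plane_point, origin)]
    t = dot(to_plane, plane_normal) / denom
    if t < 0:
        return None                      # screen is behind the viewer
    return tuple(o + t * d for o, d in zip(origin, direction))
```

A real system would then convert the world-space intersection point into the screen's 2D pixel coordinates before moving the pointer.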
A resource-optimized kiosk mode improves the mobile experience for creators and users of mobile devices such as an augmented reality (AR)-enabled wearable eyewear device. An eyewear device enters a kiosk mode by receiving a kiosk mode request for an application and, in response to the request, determining which services and application programming interfaces (APIs) are required to execute the selected application. An identification of the determined services and APIs required to execute the selected application is stored and the eyewear device is rebooted. After reboot, the selected application is started, and only the identified services and APIs are enabled. To determine which services and APIs are required to execute the selected application, metadata may be associated with the selected application specifying the services and/or APIs that the selected application requires when in operation.
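The kiosk-mode flow described in this abstract (read an application's declared service and API requirements from its metadata, persist them, reboot, then enable only those features) can be sketched as below. The metadata schema, service names, and application names are illustrative assumptions, not taken from the filing:

```python
# Hypothetical app metadata declaring required services/APIs (assumed schema).
APP_METADATA = {
    "museum_guide": {"services": ["display", "camera"], "apis": ["render_api"]},
    "messenger":    {"services": ["display", "network"], "apis": ["chat_api"]},
}

def request_kiosk_mode(app_name, store):
    """Record which services/APIs the selected app needs, before reboot."""
    meta = APP_METADATA[app_name]
    store["kiosk_app"] = app_name
    store["enabled"] = set(meta["services"]) | set(meta["apis"])

def boot(store, all_features):
    """After reboot: start only the features recorded for the kiosk app."""
    if "kiosk_app" in store:
        return {f for f in all_features if f in store["enabled"]}
    return set(all_features)  # a normal boot enables everything

store = {}  # stand-in for persistent storage surviving the reboot
request_kiosk_mode("museum_guide", store)
running = boot(store, ["display", "camera", "network", "render_api", "chat_api"])
print(sorted(running))  # → ['camera', 'display', 'render_api']
```

Here the "network" service and "chat_api" stay disabled, which is the resource saving the kiosk mode is after.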
A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.
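The two-stage wake described above (a low-power detector flags a *possible* wake gesture; only then is the full pipeline woken to verify it before launching the application) can be sketched as follows. The detector, verifier, and gesture names are invented stand-ins, not the claimed implementation:

```python
def coarse_detector(frame):
    # Stand-in for the low-power camera component: any raised hand counts
    # as a "possible" wake command.
    return frame.get("hand_raised", False)

class ARSystem:
    def __init__(self, verifier):
        self.pipeline = "suspended"
        self.app_running = False
        self.verifier = verifier  # full-pipeline check (assumed accurate)

    def on_camera_event(self, frame):
        if self.pipeline == "suspended" and coarse_detector(frame):
            self.pipeline = "operational"      # wake the hand-tracking pipeline
            if self.verifier(frame):           # verify the gesture fully
                self.app_running = True        # launch the AR application
            else:
                self.pipeline = "suspended"    # false alarm: re-suspend

ar = ARSystem(verifier=lambda f: f.get("gesture") == "open_palm")
ar.on_camera_event({"hand_raised": True, "gesture": "fist"})
print(ar.app_running, ar.pipeline)   # → False suspended
ar.on_camera_event({"hand_raised": True, "gesture": "open_palm"})
print(ar.app_running, ar.pipeline)   # → True operational
```

The point of the structure is that the expensive verifier only runs after the cheap detector fires, which is what keeps the suspended mode low-power.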
09 - Appareils et instruments scientifiques et électriques
42 - Services scientifiques, technologiques et industriels, recherche et conception
Produits et services
Spatial Light Modulators; Displays, namely, liquid crystal displays and liquid crystal-on-silicon displays; Microdisplays, namely, liquid crystal microdisplays and liquid crystal-on-silicon microdisplays; Emissive displays, namely, OLED (Organic light emitting diode) display panels and OLED microdisplays; Micro Light Emitting Diode displays (microLED displays); Liquid Crystal Devices, namely, liquid crystal displays and liquid crystal microdisplays; Liquid Crystal displays; Display Panels, namely, liquid crystal display panels and microdisplay panels, LED display panels and microdisplay panels; Liquid Crystal Modules, namely, liquid crystal displays; Liquid Crystal-on-Silicon (LCoS) devices, namely, liquid crystal-on-silicon (LCOS) panels and micro panels to project digital images and video; Driver Integrated Circuits; computer hardware, namely, microchips, integrated circuits, semiconductor chips, and circuit boards for modulation of electromagnetic radiation (light) and/or image display; driver software, namely, downloadable and/or recorded computer software for allowing communication with a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display); systems software, namely, downloadable and/or recorded software for managing a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display) via the driver; applications software, namely, downloadable or recorded software for playing of application content on a display and for configuring a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display); control software, namely, downloadable, recorded, and/or embedded software to control a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive 
display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display) and to interpret commands from the driver; configuration software, namely, downloadable and/or recorded software for creating operation configurations for a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display), and for calibrating the operations performed by a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display); configuration software, namely, providing temporary use of online non-downloadable software for generating configuration parameters (for generating drive sequences) for a liquid crystal microdisplay, liquid crystal-on-silicon display, emissive display (an OLED or LED display), or Micro Light Emitting Diode display (microLED display)
A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text and is processed with a text-to-image model to generate an image based on the model input text. The coordinates of a face in the image are determined, and the face of the user or another person is added to the image at that location. The final image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
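The pipeline in this abstract (derive a model prompt from the chat text, generate an image, locate a face slot, composite the user's face there) can be sketched as below. The prompt template, the fake "model", and the fixed face coordinates are illustrative assumptions only:

```python
def build_model_input(conversation_text):
    # Derive a model prompt from the raw chat text (assumed template).
    return f"a cartoon scene of {conversation_text.strip().lower()}"

def text_to_image(prompt):
    # Stand-in for a text-to-image model: returns an "image" record with
    # detected face coordinates and a layer stack.
    return {"prompt": prompt, "face_xy": (64, 32), "layers": []}

def add_user_face(image, face_id):
    # Composite the user's face at the coordinates found in the image.
    image["layers"].append({"face": face_id, "at": image["face_xy"]})
    return image

msg = "Let's go skiing this weekend!"
img = add_user_face(text_to_image(build_model_input(msg)), face_id="user-1")
print(img["layers"][0]["at"])  # → (64, 32)
```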
Systems and methods are provided for retrieving first query result data associated with a first user account and rendering the first query result data into a first result item, generating a shareable search result stream comprising the first result item associated with the first user account, retrieving second query result data associated with a second user account and rendering the second query result data into a second result item, adding the second result item to the shareable search result stream associated with the first user account, and providing the sharable search result stream comprising the first result item and the second result item to a first computing device associated with the first user account and a second computing device associated with the second user account.
Systems, methods, and computer readable media for graphical assistance with tasks using augmented reality (AR) wearable devices are disclosed. Embodiments capture an image of a first user view of a real-world scene and access indications of surfaces and locations of the surfaces detected in the image. The AR wearable device displays indications of the surfaces on a display of the AR wearable device where the locations of the indications are based on the locations of the surfaces and a second user view of the real-world scene. The locations of the surfaces are indicated with 3D world coordinates. The user views are determined based on a location of the user. The AR wearable device enables a user to add graphics to the surfaces and select tasks to perform. Tools such as a bubble level or a measuring tool are available for the user to utilize to perform the task.
A system for hand tracking for an Augmented Reality (AR) system. The AR system uses a camera of the AR system to capture tracking video frame data of a hand of a user of the AR system. The AR system generates a skeletal model based on the tracking video frame data and determines a location of the hand of the user based on the skeletal model. The AR system causes a steerable camera of the AR system to focus on the hand of the user.
A method of generating an image for use in a conversation taking place in a messaging application is disclosed. Conversation input text is received from a user of a portable device that includes a display. Model input text is generated from the conversation input text and is processed with a text-to-image model to generate an image based on the model input text. The coordinates of a face in the image are determined, and the face of the user or another person is added to the image at that location. The final image is displayed on the portable device, and user input is received to transmit the image to a remote recipient.
An augmented reality (AR) eyewear device has a lens system which includes an optical screening mechanism that enables switching the lens system between a conventional see-through state and an opaque state in which the lens system screens or functionally blocks out the wearer's view of the external environment. Such a screening mechanism allows for expanded use cases of the AR glasses compared to conventional devices, e.g.: as a sleep mask; to view displayed content like movies or sports events against a visually nondistracting background instead of against the external environment; and/or to enable VR functionality.
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for configuring a three-dimensional (3D) model within a virtual conferencing system. The program and method provide, in association with designing a room for virtual conferencing, an interface for configuring a 3D model; receiving, via the interface, an indication of user input for setting properties for the 3D model, the properties specifying image data for projecting onto the 3D model; and in association with virtual conferencing, providing display of the room based on the properties for the 3D model, and causing the image data to be projected onto the 3D model within the room.
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
G06F 3/04845 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs pour la transformation d’images, p.ex. glissement, rotation, agrandissement ou changement de couleur
G06F 3/04847 - Techniques d’interaction pour la commande des valeurs des paramètres, p.ex. interaction avec des règles ou des cadrans
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
69.
DEVICE AND METHOD FOR COMPENSATING EFFECTS OF PANTOSCOPIC TILT OR WRAP/SWEEP TILT ON AN IMAGE PRESENTED ON AN AUGMENTED REALITY OR VIRTUAL REALITY DISPLAY
An optical device is disclosed for use in an augmented reality or virtual reality display, comprising a waveguide (12; 22; 32) and an input diffractive optical element (H0; H3; 34) positioned in or on the waveguide, configured to receive light from a projector and couple it into the waveguide so that it is captured within the waveguide under total internal reflection. The input diffractive optical element has an input grating vector (G0; Gig) in the plane of the waveguide. The device includes a first diffractive optical element (H1; H4) and a second diffractive optical element (H2; H5) having first and second grating vectors (G2, G3; GV1, GV2) respectively in the plane of the waveguide, wherein the first diffractive optical element is configured to receive light from the input diffractive optical element and to couple it towards the second diffractive optical element, and wherein the second diffractive optical element is configured to receive light from the first diffractive optical element and to couple it out of the waveguide towards a viewer. The input grating vector, the first grating vector and the second grating vector have different respective magnitudes, and their vector addition sums to zero.
A gesture-based wake process for an AR system is described herein. The AR system places a hand-tracking input pipeline of the AR system in a suspended mode. A camera component of the hand-tracking input pipeline detects a possible visual wake command being made by a user of the AR system. On the basis of detecting the possible visual wake command, the AR system wakes the hand-tracking input pipeline and places the camera component in a fully operational mode. If the AR system, using the hand-tracking input pipeline, verifies the possible visual wake command as an actual wake command, the AR system initiates execution of an AR application.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
G06V 40/20 - Mouvements ou comportement, p.ex. reconnaissance des gestes
Aspects of the present disclosure involve a system and a method for performing operations comprising: receiving, by a messaging application implemented on a client device, input that selects a sound option to add sound to one or more images; in response to receiving the input, presenting a sound editing user interface element that visually indicates a played portion of the sound and separately visually indicates an un-played portion of the sound; receiving an interaction with the sound editing user interface element to modify a start point of the sound; embedding a graphical element representing the sound in the one or more images; playing, by the messaging application, the sound associated with the graphical element starting from the start point together with displaying the one or more images.
Methods and devices for wired charging and communication with a wearable device are described. In one embodiment, a symmetrical contact interface comprises a first contact pad and a second contact pad, and particular wired circuitry is coupled to the first and second contact pads to enable charging as well as receive and transmit communications via the contact pads as part of various device states.
G02C 11/00 - Accessoires non optiques; Fixation de ceux-ci
H01L 27/02 - Dispositifs consistant en une pluralité de composants semi-conducteurs ou d'autres composants à l'état solide formés dans ou sur un substrat commun comprenant des éléments de circuit passif intégrés avec au moins une barrière de potentiel ou une barrière de surface
H01R 13/62 - Moyens pour faciliter l'engagement ou la séparation des pièces de couplage ou pour les maintenir engagées
H02J 7/04 - Régulation du courant ou de la tension de charge
H02J 7/34 - Fonctionnement en parallèle, dans des réseaux, de batteries avec d'autres sources à courant continu, p.ex. batterie tampon
H03K 19/0185 - Dispositions pour le couplage; Dispositions pour l'interface utilisant uniquement des transistors à effet de champ
H04B 3/56 - Circuits de couplage, blocage ou dérivation des signaux
Systems and methods herein describe a method for capturing a video in real-time by an image capture device. The system provides a plurality of visual pose hints, identifies first pose information in the video while capturing the video, applies a first series of virtual effects to the video, identifies second pose information, and applies a second series of virtual effects to the video, the second series of virtual effects based on the first series of virtual effects.
G06V 40/20 - Mouvements ou comportement, p.ex. reconnaissance des gestes
H04N 23/611 - Commande des caméras ou des modules de caméras en fonction des objets reconnus les objets reconnus comprenant des parties du corps humain
H04N 23/63 - Commande des caméras ou des modules de caméras en utilisant des viseurs électroniques
A method and a system include receiving a request from a client device to view a media content item, determining at least one comment associated with a respective user profile from a set of connected profiles, generating a summary comments selectable item based at least in part on the respective user profile, causing a display of playback of the media content item and the summary comments selectable item in response to the request to view the media content item, and during the playback of the media content item at a particular time, causing a display of at least one comment.
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comport utilisant des icônes
A system includes a communication module that receives a request to post content to an event gallery associated with an event. The request in turn includes geo-location data for a device sending the content, and identification data identifying the device or a user of the device. The system further has an event gallery module to perform a first authorization operation that includes determining that the geo-location data corresponds to a geo-location fence associated with an event. The event gallery module also performs a second authorization operation that includes using the identification data to verify an attribute of the user. Finally, based on the first and second authorization operations, the event gallery module may selectively authorize the device to post the content to the event gallery.
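The two-step authorization from this abstract — first check that the sender's geo-location falls inside the event's geofence, then verify a user attribute from the identification data — can be sketched as below. The rectangular fence, attribute name, and request schema are invented for illustration:

```python
def inside_fence(lat, lon, fence):
    # Simplified geofence test using a bounding box (a real fence could be
    # any polygon or radius).
    return (fence["lat_min"] <= lat <= fence["lat_max"]
            and fence["lon_min"] <= lon <= fence["lon_max"])

def authorize_post(request, event):
    # First authorization operation: geo-location vs. geofence.
    geo_ok = inside_fence(request["lat"], request["lon"], event["fence"])
    # Second authorization operation: verify a user attribute.
    user_ok = event["required_attr"] in request["user_attrs"]
    return geo_ok and user_ok

event = {
    "fence": {"lat_min": 40.0, "lat_max": 40.1,
              "lon_min": -74.1, "lon_max": -74.0},
    "required_attr": "ticket_holder",
}
ok = authorize_post(
    {"lat": 40.05, "lon": -74.05, "user_attrs": {"ticket_holder"}}, event)
print(ok)  # → True
```

Only when both checks pass is the device authorized to post to the event gallery.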
H04L 51/222 - Surveillance ou traitement des messages en utilisant des informations de localisation géographique, p.ex. des messages transmis ou reçus à proximité d'un certain lieu ou d'une certaine zone
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
H04W 4/02 - Services utilisant des informations de localisation
H04W 4/021 - Services concernant des domaines particuliers, p.ex. services de points d’intérêt, services sur place ou géorepères
H04W 4/029 - Services de gestion ou de suivi basés sur la localisation
H04W 4/18 - Conversion de format ou de contenu d'informations, p.ex. adaptation, par le réseau, des informations reçues ou transmises pour une distribution sans fil aux utilisateurs ou aux terminaux
Disclosed is a method of receiving and processing content-sending inputs received by a head-worn device system including one or more display devices, one or more cameras and a vertically-arranged touchpad. The method includes displaying a content item on the one or more display devices, receiving a touch input on the touchpad corresponding to a send instruction, displaying a carousel of potential recipients, receiving a horizontal touch input on the touchpad, scrolling the carousel left or right on the one or more display devices in response to the horizontal touch input, receiving a tap touch input on the touchpad to select a particular recipient, receiving a further touch input, and in response to the further touch input, transmitting the content item to the selected recipient.
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
G06F 3/0485 - Défilement ou défilement panoramique
G06F 3/0488 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] utilisant des caractéristiques spécifiques fournies par le périphérique d’entrée, p.ex. des fonctions commandées par la rotation d’une souris à deux capteurs, ou par la nature du périphérique d’entrée, p.ex. des gestes en fonction de la pression exer utilisant un écran tactile ou une tablette numérique, p.ex. entrée de commandes par des tracés gestuels
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
Methods and systems are disclosed for performing operations for transferring garments from one real-world object to another in real time. The operations comprise receiving a first video that includes a depiction of a first person wearing a first upper-body garment in a first pose and obtaining a second video that includes a depiction of a second person wearing a second upper-body garment in a second pose. A pose of the second person depicted in the second video is modified to match the first pose of the first person depicted in the first video. The operations comprise generating an upper-body segmentation of the second upper-body garment which the second person is wearing in the second video in the modified pose and replacing the first upper-body garment worn by the first person in the first video with the second upper-body garment based on the upper-body segmentation.
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
An optical device for use in an augmented reality or virtual reality display, comprising: a waveguide; an input diffractive optical element, DOE, configured to receive light from a projector and to couple the received light into the waveguide along a plurality of optical paths; an output DOE offset from the input DOE along a first direction and configured to couple the received light out of the waveguide and towards a viewer; a first turning DOE offset from the input DOE along a second direction different from the first direction; wherein the input DOE is configured to couple a first portion of the received light in the second direction towards the first turning DOE and the first turning DOE is configured to diffract the first portion of the received light towards the output DOE, and the input DOE is configured to couple a second portion of the received light in the first direction towards the output DOE.
An eyewear device including a strain gauge sensor to determine when the eyewear device is manipulated by a user, such as being put on, taken off, and interacted with. A processor identifies a signature event based on sensor signals received from the strain gauge sensor and a data table of strain gauge sensor measurements corresponding to signature events. The processor controls the eyewear device as a function of the identified signature event, such as powering on a display of the eyewear device as the eyewear device is being put on a user's head, and then turning off the display when the eyewear device is removed from the user's head.
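The data-table lookup described above (strain-gauge measurement ranges mapped to signature events, which in turn drive device actions) can be sketched as follows. The strain ranges, event names, and actions are invented values, not from the filing:

```python
# Hypothetical table: strain-gauge measurement ranges → signature events.
SIGNATURES = [
    ((80, 120), "don"),     # strain observed while putting the glasses on
    ((20, 45),  "doff"),    # strain observed while taking them off
]

# Actions the processor takes per identified signature event.
ACTIONS = {"don": "display_on", "doff": "display_off"}

def classify(strain):
    """Match a raw strain reading against the signature table."""
    for (lo, hi), event in SIGNATURES:
        if lo <= strain <= hi:
            return event
    return None

def control(strain):
    """Control the device as a function of the identified signature event."""
    return ACTIONS.get(classify(strain), "no_change")

print(control(95))   # → display_on
print(control(30))   # → display_off
print(control(60))   # → no_change
```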
A waterproof UAV that records camera footage while traveling through air and while submerged in water. The UAV alters speed and direction of propellers dependent on the medium that the UAV is traveling through to provide control of the UAV. The propellers are capable of spinning in both directions to enable the UAV to change its depth and orientation in water. A machine learning (ML) model is used to identify humans and objects underwater. A housing coupled to the UAV makes the UAV positively buoyant to float in water and to control buoyancy while submerged.
B64U 101/30 - Véhicules aériens sans pilote spécialement adaptés à des utilisations ou à des applications spécifiques à l’imagerie, à la photographie ou à la vidéographie
Methods and systems are disclosed for performing real-time deforming operations. The system receives an image that includes a depiction of a real-world object. The system applies a machine learning model to the image to generate a warping field and segmentation mask, the machine learning model trained to establish a relationship between a plurality of training images depicting real-world objects and corresponding ground-truth warping fields and segmentation masks associated with a target shape. The system applies the generated warping field and segmentation mask to the image to warp the real-world object depicted in the image to the target shape.
A method for reducing motion-to-photon latency for hand tracking is described. In one aspect, a method includes accessing a first frame from a camera of an Augmented Reality (AR) device, tracking a first image of a hand in the first frame, rendering virtual content based on the tracking of the first image of the hand in the first frame, accessing a second frame from the camera before the rendering of the virtual content is completed, the second frame immediately following the first frame, tracking, using the computer vision engine of the AR device, a second image of the hand in the second frame, generating an annotation based on tracking the second image of the hand in the second frame, forming an annotated virtual content based on the annotation and the virtual content, and displaying the annotated virtual content in a display of the AR device.
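The latency-reduction idea in this abstract — start the heavy render from frame N's hand pose, then layer a cheap annotation derived from frame N+1's tracking just before display — can be sketched as below. The frame schema and single-axis pose are illustrative simplifications:

```python
def track_hand(frame):
    # Stand-in for the computer vision engine's tracking result.
    return frame["hand_x"]

def render(pose_x):
    # Expensive render started from the older frame's pose.
    return {"content": "virtual-hand", "rendered_at_x": pose_x}

def annotate(content, newer_pose_x):
    # Cheap correction using the newer frame's tracking result, applied
    # before display so the shown content reflects the fresher pose.
    content["display_at_x"] = newer_pose_x
    return content

frame_n, frame_n1 = {"hand_x": 100}, {"hand_x": 108}
out = annotate(render(track_hand(frame_n)), track_hand(frame_n1))
print(out["rendered_at_x"], out["display_at_x"])  # → 100 108
```

The 8-unit gap between the render pose and the display pose is exactly the motion that would otherwise appear as latency; the annotation closes it without re-rendering.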
Systems, methods, and computer readable media for graphical assistance with tasks using augmented reality (AR) wearable devices are disclosed. Embodiments capture an image of a first user view of a real-world scene and access indications of surfaces and locations of the surfaces detected in the image. The AR wearable device displays indications of the surfaces on a display of the AR wearable device where the locations of the indications are based on the locations of the surfaces and a second user view of the real-world scene. The locations of the surfaces are indicated with 3D world coordinates. The user views are determined based on a location of the user. The AR wearable device enables a user to add graphics to the surfaces and select tasks to perform. Tools such as a bubble level or a measuring tool are available for the user to utilize to perform the task.
G06T 7/70 - Détermination de la position ou de l'orientation des objets ou des caméras
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
G06V 10/74 - Appariement de motifs d’image ou de vidéo; Mesures de proximité dans les espaces de caractéristiques
G06V 20/20 - RECONNAISSANCE OU COMPRÉHENSION D’IMAGES OU DE VIDÉOS Éléments spécifiques à la scène dans les scènes de réalité augmentée
84.
VIDEO GENERATION SYSTEM TO RENDER FRAMES ON DEMAND USING A FLEET OF GPUS
A content controller system to render frames on demand comprises a rendering server system that includes a plurality of graphics processing units (GPUs). The GPUs in the rendering server system render a set of media content item segments using a media content identification and a main user identification. Rendering the set of media content item segments includes retrieving metadata from a metadata database associated with the media content identification, rendering the set of media content item segments using the metadata, generating a main user avatar based on the main user identification, and incorporating the main user avatar into the set of media content item segments. The rendering server system then uploads the set of media content item segments to a segment database; and updates segment states in a segment state database to indicate that the set of media content item segments are available. Other embodiments are disclosed herein.
H04N 21/262 - Ordonnancement de la distribution de contenus ou de données additionnelles, p.ex. envoi de données additionnelles en dehors des périodes de pointe, mise à jour de modules de logiciel, calcul de la fréquence de transmission de carrousel, retardement d
G06T 1/20 - Architectures de processeurs; Configuration de processeurs p.ex. configuration en pipeline
H04N 21/234 - Traitement de flux vidéo élémentaires, p.ex. raccordement de flux vidéo ou transformation de graphes de scènes MPEG-4
H04N 21/235 - Traitement de données additionnelles, p.ex. brouillage de données additionnelles ou traitement de descripteurs de contenu
H04N 21/239 - Interfaçage de la voie montante du réseau de transmission, p.ex. établissement de priorité des requêtes de clients
H04N 21/258 - Gestion de données liées aux clients ou aux utilisateurs finaux, p.ex. gestion des capacités des clients, préférences ou données démographiques des utilisateurs, traitement des multiples préférences des utilisateurs finaux pour générer des données co
H04N 21/84 - Génération ou traitement de données de description, p.ex. descripteurs de contenu
Various embodiments include systems, methods, and non-transitory computer-readable media for sharing and managing media galleries. Consistent with these embodiments, a method includes receiving a request from a first device to share a media gallery that includes a user avatar; generating metadata associated with the media gallery; generating a message associated with the media gallery, the message at least including the media gallery identifier and the identifier of the user avatar; and transmitting the message to a second device of the recipient user.
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
G06F 3/0484 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] pour la commande de fonctions ou d’opérations spécifiques, p.ex. sélection ou transformation d’un objet, d’une image ou d’un élément de texte affiché, détermination d’une valeur de paramètre ou sélection d’une plage de valeurs
H04L 67/146 - Marqueurs pour l'identification sans ambiguïté d'une session particulière, p.ex. mouchard de session ou encodage d'URL
A push notification mechanism at a mobile user device provides for automated limiting of the rate of production of push notification alerts (such as an audible alert or a vibratory alert) and/or push notifications responsive to the occurrence of chat events relevant to a chat application hosted by the user device. Some chat events automatically trigger suppression periods during which push notification alerts are prevented for subsequent chat events that satisfy predefined suppression criteria. Such push notification and/or alert limiting can be performed separately for separate users, chat groups, and/or chat event types.
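The suppression-period mechanism above — a chat event that produces an alert opens a window during which matching subsequent events stay silent, tracked separately per chat — can be sketched as follows. The window length and keying by chat ID are assumed details:

```python
SUPPRESS_SECONDS = 60  # assumed suppression-period length

def should_alert(event_time, chat_id, windows):
    """Return True (and open a suppression window) if no window is active
    for this chat; return False if the event falls inside an open window."""
    until = windows.get(chat_id, float("-inf"))
    if event_time < until:
        return False                                # suppressed
    windows[chat_id] = event_time + SUPPRESS_SECONDS  # open a new window
    return True

windows = {}
print(should_alert(0, "group-a", windows))    # → True  (opens window)
print(should_alert(30, "group-a", windows))   # → False (inside window)
print(should_alert(30, "group-b", windows))   # → True  (separate chat)
print(should_alert(90, "group-a", windows))   # → True  (window expired)
```

Keying `windows` by chat ID gives the per-chat separation the abstract calls for; the same dictionary could be keyed by (user, chat, event type) for finer-grained limiting.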
H04L 51/04 - Messagerie en temps réel ou quasi en temps réel, p.ex. messagerie instantanée [IM]
H04L 51/224 - Surveillance ou traitement des messages en fournissant une notification sur les messages entrants, p.ex. des poussées de notifications des messages reçus
H04L 51/52 - Messagerie d'utilisateur à utilisateur dans des réseaux à commutation de paquets, transmise selon des protocoles de stockage et de retransmission ou en temps réel, p.ex. courriel pour la prise en charge des services des réseaux sociaux
A three-dimensional (3D) asset reconstruction technique for generating a 3D asset representing an object from images of the object. The images are captured from different viewpoints in a darkroom using one or more light sources having known locations. The system estimates camera poses for each of the captured images and then constructs a 3D surface mesh made up of surfaces using the captured images and their respective estimated camera poses. Texture properties for each of the surfaces of the 3D surface mesh are then refined to generate the 3D asset.
A finger gesture recognition system is provided. The finger gesture recognition system includes one or more audio sensors and one or more optic sensors. The finger gesture recognition system captures, using the one or more audio sensors, audio signal data of a finger gesture being made by a user, and captures, using the one or more optic sensors, optic signal data of the finger gesture. The finger gesture recognition system recognizes the finger gesture based on the audio signal data and the optic signal data and communicates finger gesture data of the recognized finger gesture to an Augmented Reality/Combined Reality/Virtual Reality (XR) application.
A pose tracking system is provided. The pose tracking system includes an EMF tracking system having a user-worn head-mounted EMF source and one or more user-worn EMF tracking sensors attached to the wrists of the user. The EMF source is associated with a VIO tracking system such as AR glasses or the like. The pose tracking system determines a pose of the user's head and a ground plane using the VIO tracking system and a pose of the user's hands using the EMF tracking system to determine a full-body pose for the user. Metal interference with the EMF tracking system is minimized using an IMU mounted with the EMF tracking sensors. Long-term drift in the IMU and the VIO tracking system is minimized using the EMF tracking system.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor; with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt sensors
Eyewear devices including a tether and methods for identifying proper installation of the tether are disclosed. An eyewear device includes transmission lines extending through the temples to electrical and electronic components positioned adjacent to edges of a frame. A tether is attached to the temples to enable power and communication flow between the electrical and electronic components rather than through the frame. Proper installation is identified based on communications passing between the electrical and electronic components via the tether.
Methods and systems are disclosed for performing operations for controlling brightness in an AR device. The operations comprise displaying an image on an eyewear device worn by a user; detecting a gaze direction of a pupil of the user; identifying a first region of the image that corresponds to the gaze direction of the pupil; and modifying a brightness level or value of pixels in the image based on the gaze direction such that pixels in the first region of the image are set to a first brightness value and pixels in a second region of the image are set to a second brightness value that is lower than the first brightness value.
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
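The gaze-contingent dimming described in this abstract can be sketched as keeping full brightness in a region around the gaze point and scaling down pixels elsewhere. A minimal sketch under assumed inputs (a NumPy image array, a gaze point in pixel coordinates, and a circular first region); the function name and parameters are hypothetical.

```python
import numpy as np

def foveate_brightness(image, gaze_xy, radius, dim_factor=0.4):
    """Keep full brightness within `radius` pixels of the gaze point
    (the first region); scale all other pixels by `dim_factor`
    (the second, lower-brightness region)."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    gx, gy = gaze_xy
    inside = (xs - gx) ** 2 + (ys - gy) ** 2 <= radius ** 2
    out = image.astype(np.float64) * dim_factor   # second region: dimmed
    out[inside] = image[inside]                   # first region: unchanged
    return out.astype(image.dtype)
```

Dimming pixels outside the gazed region is one way such a scheme can reduce display power on an eyewear device while preserving perceived brightness where the user is looking.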
Methods and systems are disclosed for performing real-time deforming operations. The system receives an image that includes a depiction of a real-world object. The system applies a machine learning model to the image to generate a warping field and segmentation mask, the machine learning model trained to establish a relationship between a plurality of training images depicting real-world objects and corresponding ground-truth warping fields and segmentation masks associated with a target shape. The system applies the generated warping field and segmentation mask to the image to warp the real-world object depicted in the image to the target shape.
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06T 3/00 - Geometric image transformations in the plane of the image
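Applying a dense warping field under a segmentation mask, as this abstract describes, can be sketched as sampling each masked output pixel from a displaced source location while passing unmasked pixels through. A simplified sketch with nearest-neighbour sampling; the function name and the warp-field layout (per-pixel dx, dy offsets) are assumptions, not the patented method.

```python
import numpy as np

def apply_warp(image, warp_field, mask):
    """Warp `image` by a dense field of (dx, dy) offsets, but only where
    `mask` is True; elsewhere the original pixels are kept."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Source coordinates for each output pixel, clipped to the image bounds.
    src_x = np.clip(np.round(xs + warp_field[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys + warp_field[..., 1]).astype(int), 0, h - 1)
    warped = image[src_y, src_x]
    return np.where(mask, warped, image)
```

In the described system, a trained model would predict both `warp_field` and `mask` from the input image; here they are supplied directly.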
The subject technology detects from a set of frames, a first gesture, the first gesture corresponding to a pinch gesture. The subject technology detects a first location and a first position of a first representation of a first finger from the first gesture and a second location and a second position of a second representation of a second finger from the first gesture. The subject technology detects a first collision event corresponding to a first collider and a second collider intersecting with a third collider of a first virtual object. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies the first virtual object to include an additional augmented reality content based at least in part on the first change and the second change.
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
H04L 51/046 - Interoperability with other network applications or services
Systems and methods are provided for clustering videos. The system accesses a plurality of content items, the plurality of content items comprising a first set of RGB video frames and a second set of optical flow frames corresponding to the first set of RGB video frames. The system processes the first set of RGB video frames by a first machine learning model to generate a first optimal assignment for the first set of RGB video frames, the first optimal assignment representing initial clustering of the first set of RGB video frames. The system generates an updated first optimal assignment for the first set of RGB video frames based on the first optimal assignment for the first set of RGB video frames and a second optimal assignment of the second set of optical flow frames, the second optimal assignment representing initial clustering of the second set of optical flow frames.
G06V 10/762 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using clustering, e.g. of similar faces in social networks
G06T 5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
95.
Virtual object manipulation with gestures in a messaging system
The subject technology detects a first gesture and a second gesture, each gesture corresponding to an open trigger finger gesture. The subject technology detects a third gesture and a fourth gesture, each gesture corresponding to a closed trigger finger gesture. The subject technology, selects a first virtual object in a first scene. The subject technology detects a first location and a first position of a first representation of a first finger from the third gesture and a second location and a second position of a second representation of a second finger from the fourth gesture. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology modifies a set of dimensions of the first virtual object to a different set of dimensions.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
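The resize step this abstract describes, modifying a virtual object's dimensions from the tracked motion of two fingers, can be sketched as scaling by the ratio of finger separation after versus before the drag. A minimal sketch; the function name, parameters, and minimum-scale clamp are hypothetical.

```python
import math

def rescale_dimensions(dims, p1_start, p2_start, p1_end, p2_end, min_scale=0.1):
    """Scale an object's (w, h, d) dimensions by how far apart the two
    tracked finger positions moved relative to their starting separation."""
    before = math.dist(p1_start, p2_start)
    after = math.dist(p1_end, p2_end)
    scale = max(after / before, min_scale) if before else 1.0
    return tuple(d * scale for d in dims)
```

A real gesture pipeline would feed this from per-frame finger locations; clamping the scale guards against degenerate tracking frames where the fingers appear to coincide.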
96.
GESTURES TO ENABLE MENUS USING AUGMENTED REALITY CONTENT IN A MESSAGING SYSTEM
The subject technology detects a first location and a first position of a first representation of a first finger and a second location and a second position of a second representation of a second finger. The subject technology detects a first particular location and a first particular position of a first particular representation of a first particular finger and a second particular location and a second particular position of a second particular representation of a second particular finger. The subject technology detects a first change in the first location and the first position and a second change in the second location and the second position. The subject technology detects a first particular change in the first particular location and the first particular position and a second particular change in the second particular location and the second particular position. The subject technology generates a set of virtual objects.
G06T 7/70 - Determining position or orientation of objects or cameras
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of three-dimensional [3D] images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in augmented reality scenes
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
H04L 51/046 - Interoperability with other network applications or services
In various embodiments, boundaries of geo-fences can be made mutable based on principles described herein. The term “mutable” refers to the ability of a thing (in this case, the boundary of a geo-fence) to change and adjust. In a typical embodiment, a mutable geo-fence system is configured to generate and monitor a geo-fence that encompasses a region, in order to dynamically vary the boundary of the geo-fence based on a number of boundary variables. The term “geo-fence” as used herein describes a virtual perimeter (e.g., a boundary) for a real-world geographic area. A geo-fence could be a radius around a point (e.g., a store), or a set of predefined boundaries. Boundary variables, as used herein, refers to a set of variables utilized by the mutable geo-fence system in determining a location of the boundary of the geo-fence.
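The mutable geo-fence idea above, a boundary that expands or contracts with boundary variables, can be sketched for the simple radius-around-a-point case: the effective radius is the base radius adjusted by named multipliers. The variable names and the flat-earth distance approximation are assumptions for illustration only.

```python
import math

def in_geofence(point, center, base_radius_m, boundary_variables=None):
    """Return True if `point` (lat, lon) falls inside a circular geo-fence
    around `center`, with the radius scaled by each boundary variable
    (e.g. a time-of-day or event-density multiplier)."""
    radius = base_radius_m
    for multiplier in (boundary_variables or {}).values():
        radius *= multiplier
    lat1, lon1 = point
    lat2, lon2 = center
    # Equirectangular approximation; adequate at fence-sized distances.
    m_per_deg = 111_320.0
    dx = (lon1 - lon2) * m_per_deg * math.cos(math.radians(lat2))
    dy = (lat1 - lat2) * m_per_deg
    return math.hypot(dx, dy) <= radius
```

The same point can thus be outside the fence at one moment and inside it at another, purely because a boundary variable changed, which is the sense in which the boundary is "mutable".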
Augmented reality guidance for guiding a user through an environment using an eyewear device. The eyewear device includes a display system and a position detection system. A user is guided through an environment by monitoring a current position of the eyewear device within the environment, identifying marker positions within a threshold of the current position, the marker positions defined with respect to the environment and associated with guidance markers, registering the marker positions, generating an overlay image including the guidance markers, and presenting the overlay image on a display of the eyewear device.
Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and method for selecting ads for a video. The program and method provide for receiving a request for an ad to insert into a video playing on a client device, the request including a first content identifier that identifies a first type of content included in the video; determining a set of content identifiers associated with the first content identifier, the set of content identifiers identifying second types of content to filter with respect to providing the ad in response to the request; selecting an ad from among plural ads, by filtering ads tagged with a second content identifier included in the set of content identifiers; and providing the selected ad as a response to the request.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/4788 - Supplemental services, e.g. displaying a phone caller's identification or a shopping application; communicating with other users, e.g. chatting
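The ad-selection flow in this abstract, mapping the request's content identifier to a set of identifiers to filter, then choosing an ad not tagged with any of them, can be sketched in a few lines. The data shapes (tag sets on ads, an exclusion map keyed by content identifier) are assumptions for illustration.

```python
def select_ad(ads, request_content_id, exclusion_map):
    """Return the first ad whose tags do not intersect the set of content
    identifiers excluded for this request, or None if all are filtered."""
    excluded = exclusion_map.get(request_content_id, set())
    for ad in ads:
        if not (ad["tags"] & excluded):
            return ad
    return None
```

For example, a request tagged with a children's-content identifier could map to an exclusion set that filters out ads tagged for restricted product categories.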
A carry case for an electronics-enabled eyewear device, such as smart glasses, has charging contacts that are movable relative to a storage chamber in which the eyewear device is receivable. The charging contacts are connected to a battery carried by the case for charging the eyewear device via contact coupling of the charging contacts to corresponding contact formations on an exterior of the eyewear device. The charging contacts are in some instances mounted on respective flexible walls defining opposite extremities of the storage chamber. The contact formations on the eyewear device are in some instances provided by hinge assemblies that couple respective temples to a frame of the eyewear device.