Image light is generated with a display light source. The image light is visible. An infrared light source emits infrared light. A scanner directs the image light and the infrared light to an input coupler of a display waveguide, and the display waveguide presents the image light to an eyebox region as a virtual image.
In one embodiment, a method, by one or more computing systems, includes determining, based on frames captured by a camera, that a plurality of participants are located in an environment; locating, within a first frame, a first body region of a first participant of the plurality of participants; detecting, at a first time, appearance information of the first body region of the first participant; calculating, using one or more machine-learning models, a confidence score corresponding to a match between the appearance information of the first participant at the first time and one or more profiles of pre-registered participants; updating, using the one or more machine-learning models, the confidence score based on additional appearance information detected within additional frames; determining whether the updated confidence score is above a predetermined threshold; and, in response to determining that the updated confidence score is above the predetermined threshold, authenticating the first participant.
G06V 10/98 - Arrangements for image or video recognition or understanding; Evaluation of the quality of the acquired patterns
G06V 10/26 - Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering techniques; Detection of occlusion
G06V 10/74 - Image or video pattern matching; Proximity measures in feature spaces
G06V 10/774 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Bootstrap methods, e.g. bagging or boosting
G06V 10/776 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation; Performance evaluation
G06V 10/96 - Management of image or video recognition tasks
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
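The multi-frame confidence update described above can be sketched in Python. Everything in this sketch is an illustrative assumption rather than the disclosed method: feature embeddings stand in for the detected appearance information, cosine similarity stands in for the machine-learning matcher, and an exponential moving average stands in for the model-driven score update.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

class Authenticator:
    """Illustrative multi-frame matcher against pre-registered profiles."""

    def __init__(self, profiles, threshold=0.9, alpha=0.3):
        self.profiles = profiles   # name -> enrolled embedding
        self.threshold = threshold # predetermined authentication threshold
        self.alpha = alpha         # weight of the newest observation
        self.scores = {}           # name -> running confidence score

    def observe(self, name, embedding):
        """Update the confidence score with one frame's appearance info."""
        s = cosine(self.profiles[name], embedding)
        prev = self.scores.get(name, s)
        self.scores[name] = (1 - self.alpha) * prev + self.alpha * s
        return self.scores[name]

    def authenticated(self, name):
        """Authenticate once the updated score exceeds the threshold."""
        return self.scores.get(name, 0.0) > self.threshold
```

Accumulating evidence across frames, as in the abstract, makes a single misleading frame less likely to authenticate (or reject) a participant.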
A wearable device with a display screen has both an image presentation area and a camera portion. The wearable device includes a camera module configured to capture images through the camera portion and a camera frame that is molded onto the display screen, creating a pocket in which the camera module is securely held. This camera frame is made of a material that not only keeps the camera module in place but also forms an environmental seal, effectively protecting the space between the camera module and the display screen.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
H04N 23/57 - Cameras or camera modules comprising electronic image sensors; Control thereof; Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
According to examples, a wearable device may include an imaging component, at least one wireless communication component, and a controller. The controller may activate the at least one wireless communication component to perform a wireless scan of at least one radio available to respond to the wireless scan. The controller may also receive, through the at least one wireless communication component, wireless scan data from the at least one radio and may embed the wireless scan data as part of media metadata. The wireless scan data may be used to determine a location estimate of the at least one radio and thus of the wearable device. The location estimate of the wearable device may also be used to geotag media captured by the wearable device without using a GPS receiver on the wearable device, or when a GPS receiver is unable to track a current location.
G01S 5/02 - Position-fixing by co-ordinating two or more direction or position-line determinations; Position-fixing by co-ordinating two or more distance determinations, using radio waves
H04W 4/02 - Services making use of location information
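A common way to turn wireless scan data, such as received signal strength, into a range estimate (and from several ranges, a location estimate) is a log-distance path-loss model. This sketch is an assumption for illustration; the abstract does not recite a specific model, and the `tx_power_dbm` and `path_loss_exp` values are typical placeholders.

```python
def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate range (in metres) to a
    radio from its scanned RSSI. tx_power_dbm is the assumed received
    power at 1 m; path_loss_exp ~2 models free space, higher indoors."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))
```

With ranges to three or more radios of known position, the device position (and hence a geotag) can be estimated by trilateration without a GPS fix.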
Disclosed herein are light sources (e.g., micro-LEDs and μOLEDs), display electronics, and tiled display panels for high-luminance, high-resolution display panels used in near-eye display systems. Techniques for three-dimensional integration of multi-color LEDs, micro-LED surface loss reduction using band-engineered sidewall passivation structures, micro-LED heat spreading materials and structures, and micro-LED light extraction efficiency improvement using etched outwardly tilted sidewall mirrors are described. Techniques for drive circuit supply voltage tracking and compensation, dynamic burn-in compensation using interpolation of compensation parameters, and digital misalignment calibration of tiled display panels are also described.
H01L 25/075 - Assemblies consisting of a plurality of semiconductor or other solid-state devices, the devices all being of a type provided for in the same subgroup of groups, or in a single subclass of, e.g. assemblies of rectifier diodes; the devices not having separate containers; the devices being of a type provided for in group
H01L 27/12 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate, including integrated passive circuit elements, with at least one potential-jump barrier or surface barrier, the substrate being other than a semiconductor body, e.g. an insulating body
H01L 33/58 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details characterised by the package elements of the semiconductor bodies; Optical field-shaping elements
6.
OPTIMIZED SOLVENT-BASED LIQUID METAL COMPOSITIONS AND METHODS OF USING SAME
An optimized solvent-based liquid metal composition includes a solution and a liquid metal mixed with the solution. The solution includes at least one solvent and a polymeric binder dissolved in the at least one solvent. Additionally or optionally, the composition includes a metallic filler. The ingredients of the composition are tailored to extend decap time while maintaining other beneficial properties to permit the use of the composition in various printing techniques.
C09D 11/54 - Inks based on two liquids, one liquid being the ink, the other liquid being a reaction solution, a fixer or a treatment solution for the ink
C22C 28/00 - Alloys based on a metal not provided for in groups
A lanyard to hold a controller for a headset is provided. The lanyard includes a strap anchored on a top portion of a lid for a battery case of the controller, and a clip anchored to an opening pin in a bottom portion of the lid for the battery case of the controller. The strap includes a stretchable fabric and multiple sticky pads on an outer face that adhere to the stretchable fabric when the strap is looped around the clip. A headset using a controller that includes a lanyard as above, and the controller, are also provided.
A63F 13/98 - Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
A63F 13/24 - Input arrangements for video game devices; Constructional details, e.g. game controllers with detachable joystick handles
8.
MANAGEMENT OF ELONGATED STRUCTURES FOR HEAD MOUNTED DEVICE
A wearable device includes a front portion configured to engage a front portion of a head of a user, and a rear portion configured to engage a rear portion of the head of the user. The rear portion includes a telescoping arm guide, a telescoping arm slidably coupled to the telescoping arm guide such that the telescoping arm is moveable relative to the telescoping arm guide, a cable guide coupled to the telescoping arm guide, and a connector. The device includes an optical fiber coupled to the telescoping arm at a first point, coupled to the connector at a second point, and slidably coupled to the cable guide between the first point and the second point, such that the optical fiber includes a medial portion between the first point and the second point, the medial portion including a bend having a radius that is greater than or equal to a minimum bend radius.
Aspects of the present disclosure can trigger an action based on a motion detected by an artificial reality (XR) device, such as a head-mounted display (HMD). The XR device can display an XR experience to a user. While displaying the XR experience, the XR device can detect a physical interaction with the XR device using one or more sensors (e.g., sensors of an inertial measurement unit (IMU)). The physical interaction can generate a movement profile captured by the one or more sensors. The XR device can identify the physical interaction as a particular motion (e.g., one or more taps on the XR device) by applying a machine learning model to the movement profile. In response to identifying the particular motion, the XR device can trigger an action on the XR device (e.g., activating pass-through on the XR device).
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups; Constructional details or arrangements
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
G06F 3/04883 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser, using a touch-screen or digitiser, e.g. input of commands through traced gestures, for the input of data by handwriting, e.g. gestures or text
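The tap-detection path above can be sketched as follows, with simple spike counting standing in for the machine learning model; the threshold, the refractory window, and the mapping of a double tap to pass-through are illustrative assumptions.

```python
def detect_taps(accel_mag, threshold=2.5, refractory=10):
    """Count sharp spikes in an acceleration-magnitude movement profile.
    A heuristic stand-in for a learned classifier: a sample above the
    threshold counts as a tap, then further samples are ignored for a
    short refractory window so one tap is not counted twice."""
    taps, cooldown = 0, 0
    for a in accel_mag:
        if cooldown > 0:
            cooldown -= 1
        elif a > threshold:
            taps += 1
            cooldown = refractory
    return taps

def on_motion(accel_mag):
    """Map an identified motion to a triggered action, e.g. a double
    tap on the device activates pass-through (illustrative mapping)."""
    if detect_taps(accel_mag) == 2:
        return "activate_passthrough"
    return None
```

The refractory window is the key design choice: without it, the several high-magnitude samples produced by one physical tap would each be counted separately.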
10.
APPLICATION MULTITASKING IN A THREE-DIMENSIONAL ENVIRONMENT
Aspects of the present disclosure are directed to application multitasking in a shell with a three-dimensional environment. Implementations immerse a user in a three‑dimensional environment via an artificial reality system, such as a display environment for a system shell. The system shell can execute applications (e.g., system shell applications, remoted applications, etc.). An executing application can correspond to a displayed virtual object (e.g., panel). The system shell can concurrently execute two, three, or more applications and the three‑dimensional environment can concurrently display two, three, or more corresponding virtual objects that display contents for the executing applications. Implementations of a mode manager can manage a mode for the three‑dimensional environment/system shell. Example modes include cooperative mode and exclusive mode. Implementations of cooperative mode permit concurrent display of multiple virtual objects from different applications while the exclusive mode permits display of virtual objects only from the executing application entering exclusive mode.
The disclosed computer-implemented method may include accessing various antenna elements and identifying parameters for an antenna that is to be formed using the accessed antenna elements. The method may also include assembling the antenna elements, using an artificial intelligence (AI) instance, into an assembled antenna that at least partially complies with the identified parameters. Various other methods, systems, and computer-readable media are also disclosed.
Aspects of the present disclosure are directed to an adjustment mechanism for adjusting an eye-to-lens distance of a headset display device. The adjustment mechanism can include an adjustment wheel that is rotatable by a user, a pinion gear fixed to a screw, and a threaded member receiving the screw and mounted to a forehead pad frame supporting a forehead pad. The adjustment wheel can include an internal bevel gear ring. The pinion gear is rotated by rotation of the internal bevel gear ring, and the screw rotates with the pinion gear. The pinion gear and the screw can rotate about a second axis. The threaded member moves along the screw, as the screw rotates, to move the forehead pad to adjust the eye-to-lens distance. The internal bevel gear ring, the pinion gear, and the screw are configured to allow additional components to pass through an interior of the adjustment wheel.
G02B 7/04 - Mountings, adjusting means, or light-tight connections, for optical elements, for lenses, with mechanism for focusing or varying magnification
13.
Presenting Meshed Representations of Physical Objects Within Defined Boundaries for Interacting With Artificial-Reality Content, and Systems and Methods of Use Thereof
A method of interacting with artificial-reality (AR) content at an AR headset that includes cameras and displays is described. The method includes, while the AR headset is at a first position within a physical environment that includes an object, if the object is within a threshold collision distance from the AR headset, presenting a meshed representation of the object. The meshed representation is displayed at first respective locations on the displays such that it is viewable within the artificial reality. After the AR headset moves to a second position at a different distance from the object, and if that distance is still within the threshold collision distance, the meshed representation of the object is moved to second respective locations within the artificial reality. The second respective locations correspond to the position of the object in the physical environment.
An edge coupler for coupling a light beam (e.g., a laser beam) into a waveguide comprises a first waveguide section characterized by a first thickness and a first constant width, a second waveguide section physically coupled to the first waveguide section and characterized by the first thickness and a gradually decreasing width, and a third waveguide section partially overlapping with the second waveguide section at an overlap region, the third waveguide section characterized by a gradually increasing width and a second thickness different from (e.g., greater than) the first thickness. In some embodiments, a surface (e.g., the top or bottom surface) of the second waveguide section and a surface (e.g., the top or bottom surface) of the third waveguide section are on a same plane.
G02B 6/12 - OPTICS; OPTICAL ELEMENTS, SYSTEMS OR APPARATUS - Structural details of arrangements comprising light guides and other optical elements, e.g. couplings, of the optical waveguide type of the integrated-circuit kind
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
The disclosed method may include driving a reference signal into a body and detecting, in response to the reference signal, a plurality of biosignal measurements using at least one electrode of a biosensing device. The method may further include determining, based on the plurality of biosignal measurements, a relative location of the at least one electrode with respect to the body, and providing feedback based on the relative location of the at least one electrode. Various other methods, systems, and computer-readable media are also disclosed.
Aspects of the present disclosure are directed to providing an overlay over a concurrently executing artificial reality (XR) environment. Implementations immerse a user in a three‑dimensional XR environment via an XR system. An overlay manager can dynamically display a two-dimensional overlay over the three-dimensional XR environment in response to input that triggers the overlay. The overlay can provide a runtime that executes application components (e.g., system shell applications and applications remote from the system shell). For example, an executing application can provide a two-dimensional virtual object (e.g., panel) displayed in the two‑dimensional overlay. The overlay can provide a user concurrent access to the 2D components of additional applications, without requiring termination of the three‑dimensional XR environment. In some implementations, the XR environment is transitioned to a paused state while the overlay is active and restored to an active state when the overlay is closed.
Some aspects of the present disclosure are directed to providing partial pass-through on an artificial reality (XR) device, such as a head-mounted display (HMD). Some implementations can allow a user to turn on selective areas of pass-through on the XR device, such that one or more portions of a real-world environment physically surrounding the user can be seen. In some implementations, a user can control the area of pass-through by specifying a percentage of the real-world environment they wish to see, e.g., 50%. In some implementations, a user can control the area of pass-through by specifying an area of the XR device's view they wish to see, e.g., the top portion of the view. In some implementations, a user can control the area of pass-through by specifying a physical object in the real-world environment they wish to see, e.g., a real-world desk.
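The percentage- and area-based selection modes can be illustrated with a boolean mask over display pixels; the function name and the top-down, row-major layout are assumptions for illustration.

```python
def passthrough_mask(rows, cols, fraction):
    """Build a per-pixel mask enabling pass-through on the top
    `fraction` of the display, e.g. fraction=0.5 shows the real world
    in the top half of the view while the bottom half stays virtual."""
    cutoff = round(rows * fraction)
    return [[r < cutoff for _ in range(cols)] for r in range(rows)]
```

An object-based mode would instead derive the mask from a segmentation of the chosen real-world object, but the compositing step (real pixels where the mask is true, rendered pixels elsewhere) is the same.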
A lanyard to hold a controller for a headset is provided. The lanyard includes a strap anchored on a top portion of a lid for a battery case of the controller, and a clip anchored to an opening pin in a bottom portion of the lid for the battery case of the controller. The strap includes a stretchable fabric and multiple sticky pads on an outer face that adhere to the stretchable fabric when the strap is looped around the clip. A headset using a controller that includes a lanyard as above, and the controller, are also provided.
A45F 5/00 - Holders or carriers for hand articles; Holders or carriers for use while travelling or camping
19.
FLEXIBLE ELECTRONIC SYSTEM FOR FLEXIBLE WEARABLE DEVICES AND OTHER ELECTRONIC DEVICES
A flexible electronic system comprises a flexible substrate comprising an interlayer elastomer dielectric and a plurality of rigid components disposed within the flexible substrate. The flexible electronic system may further comprise a high-density interconnect region comprising one or more routing layers. In examples, the flexible electronic system may further comprise a flexible barrier material encapsulating the flexible electronic system. In some examples, the flexible electronic system and the high-density interconnect region may further comprise a plurality of routing layers and one or more through-vias. Each through-via may couple at least one of two rigid components, two routing layers, or a rigid component and a routing layer. In examples, at least some of the routing layers may comprise conductive traces.
An eye tracking system for a head-mounted device includes an interference pattern emitter, an interference pattern detector, and processing logic. The interference pattern emitter provides at least two light beams that combine into a light pattern in an eyebox region of the head-mounted device. The light pattern includes an interference pattern based on constructive and destructive interference of the at least two light beams. The detector is configured to detect a portion of the light pattern and provides detector data representative of one or more light intensities in the portion of the light pattern. The processing logic is coupled to the detector to receive the detector data and is configured to identify displacement characteristics of the interference pattern. The processing logic is configured to determine orientation characteristics of an eye in the eyebox region based on the displacement characteristics.
In one embodiment, a method includes accessing a map of a building floor plan with locations of access points within the floor plan, the access points being capable of performing wireless communications with wireless devices; determining a pose of a wireless device within the map using images captured by one or more cameras of the wireless device; selecting a preferred access point based on the pose of the wireless device, the floor plan, and the locations of the access points within the floor plan; and configuring wireless communication settings of the wireless device to communicate with the preferred access point based on the pose of the wireless device and the location of the preferred access point within the floor plan.
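A minimal sketch of the selection step, assuming the "preferred" access point is simply the one nearest the device's estimated position in floor-plan coordinates; the disclosed method can also weigh the floor plan itself (e.g. walls between the device and an access point), which this sketch ignores.

```python
import math

def select_access_point(pose_xy, access_points):
    """Pick the access point nearest the device's estimated position.
    pose_xy: (x, y) of the device in floor-plan coordinates.
    access_points: dict mapping AP name -> (x, y) location on the map."""
    return min(access_points,
               key=lambda ap: math.dist(pose_xy, access_points[ap]))
```

Because the pose comes from camera-based localization against the map, the device can pick (and pre-configure for) the best access point before radio measurements to each candidate are even available.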
The disclosed system may include at least one gradient-index liquid crystal lens. The system may include a selection module that selects a viewing angle. The system may also include an adjustment module that dynamically adjusts a phase reset property of the gradient-index liquid crystal lens in response to the selected viewing angle. Various other devices, systems, and methods are also disclosed.
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the position or the direction of light beams, i.e. deflection
23.
VIEW SYNTHESIS PIPELINE FOR RENDERING PASSTHROUGH IMAGES
A processor accesses a depth map and a first image of a scene generated using one or more sensors of an artificial reality device. The processor generates, based on the first image, segmentation masks respectively associated with a plurality of object types. The segmentation masks segment the depth map into a plurality of segmented depth maps respectively associated with the object types. The processor generates meshes using, respectively, the segmented depth maps. For each eye of the user, the processor captures a second image and generates, based on the second image, segmentation information. The processor warps the plurality of meshes to generate warped meshes for the eye, and then generates an eye-specific mesh for the eye by compositing the warped meshes according to the segmentation information. The processor renders an output image for the eye using the second image and the eye-specific mesh.
H04N 13/128 - Adjusting depth or disparity
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
An actuator assembly includes a primary electrode, a secondary electrode overlapping at least a portion of the primary electrode, and an electroactive polymer layer disposed between the primary electrode and the secondary electrode, where the electroactive polymer layer includes a non-vertical (e.g., sloped) sidewall with respect to a major surface of at least one of the electrodes. The electroactive polymer layer may be characterized by a non-axisymmetric shape with respect to an axis that is oriented orthogonal to an electrode major surface.
H01L 41/317 - Application of piezoelectric or electrostrictive parts or bodies onto an electrical element or another base, by deposition of piezoelectric or electrostrictive layers, e.g. aerosol or screen printing, by liquid-phase deposition
H01L 41/09 - Piezoelectric or electrostrictive elements with electrical input and mechanical output
H01L 41/29 - Forming electrodes, electrical connections or terminal arrangements
An eye tracking system employing three-dimensional (3D) sensing using time of flight is provided. Instead of directly measuring the time of arrival of the emitted photons, the light intensity of the transmitted laser beam is modulated, and a phase change of the return beam is computed by comparison with the transmitted waveform. The modulation frequency is resolved with radio frequency (RF) mixing and digital signal processing techniques. A variety of phase detection systems and techniques, including but not limited to quadrature analog front-end detection and analog homodyne phase detection, are applied. The modulation waveform may, depending on the phase detection technique, be sinusoidal or pulsed.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G01S 7/481 - Constructional features, e.g. arrangements of optical elements
G01S 7/4915 - Time-delay measurement, e.g. operational details for pixel components; Phase measurement
G01S 17/32 - Systems determining position data of a target, for measuring distance only, using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
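The phase-to-distance relationship underlying this amplitude-modulated continuous-wave approach, together with quadrature (I/Q) phase recovery, can be sketched as follows. The function names are assumptions; in a real front end, I and Q come from mixing the return signal with the transmit waveform and with a 90-degree-shifted copy of it.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def phase_to_distance(phase_rad, f_mod_hz):
    """AMCW time of flight: a phase shift of the modulation envelope
    corresponds to distance d = c * phi / (4 * pi * f_mod). The factor
    4*pi (rather than 2*pi) accounts for the round trip."""
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def quadrature_phase(i_sample, q_sample):
    """Quadrature detection: recover the envelope phase from the
    in-phase (I) and quadrature (Q) mixer outputs."""
    return math.atan2(q_sample, i_sample)
```

Note the trade-off this formula implies: a higher modulation frequency gives finer range resolution but a shorter unambiguous range, since the phase wraps every half modulation wavelength.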
A computer-implemented method comprises accessing an image that includes a handheld device, wherein the image is captured by one or more cameras associated with a computing device; generating, by processing the image, a cropped image that includes a hand of a user or the handheld device; generating a vision-based six-degrees-of-freedom (6DoF) pose estimation for the handheld device by processing the cropped image, metadata associated with the image, and first sensor data from one or more sensors associated with the handheld device; generating a map-based 6DoF pose estimation using the handheld device; and generating a final 6DoF pose estimation for the handheld device based on the vision-based 6DoF pose estimation and the map-based 6DoF pose estimation.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the device orientation or free movement in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
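A minimal sketch of combining the two estimates, assuming a fixed confidence-weighted blend of positions only; how the actual method fuses full 6DoF poses (including rotations, e.g. via slerp) is not specified in the abstract, so rotation fusion is omitted here.

```python
import numpy as np

def fuse_positions(vision_pos, map_pos, vision_weight=0.7):
    """Blend the translational parts of a vision-based and a map-based
    pose estimate. vision_weight is a stand-in for per-source
    confidence (illustrative value)."""
    v = np.asarray(vision_pos, dtype=float)
    m = np.asarray(map_pos, dtype=float)
    return vision_weight * v + (1.0 - vision_weight) * m
```

In practice the weights would vary per frame, e.g. down-weighting the vision estimate when the controller is occluded or outside the camera frustum.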
Aspects of the present disclosure are directed to an artificial reality system orchestrating interactions between virtual object “augments.” The orchestration can include linking, which can be forming two or more augments into a combination, embedding an augment within an existing combination, or triggering an action mapped to the linking of those augments. Another type of orchestration can include extracting, which can refer to taking an augment out of an existing combination, either by removing it from the combination or copying the augment to leave a version in the combination and having another version outside the combination.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 16/9536 - Search customisation based on social or collaborative filtering
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
28.
SYSTEMS, METHODS, AND MEDIA FOR GENERATING VISUALIZATION OF PHYSICAL ENVIRONMENT IN ARTIFICIAL REALITY
In one embodiment, a method by a computing system comprising a color camera and two monochrome cameras respectively associated with two eyes of a user includes computing a point cloud corresponding to a visible environment based at least on two stereoscopic grayscale images respectively captured by the two monochrome cameras, generating a mesh corresponding to the visible environment based on the computed point cloud, and generating two stereoscopic colorized images to be respectively displayed to the two eyes of the user, where each of the two stereoscopic colorized images is generated using (1) the mesh, (2) luminance information from one of the two stereoscopic grayscale images that is associated with the eye to which the stereoscopic colorized image is to be displayed, and (3) color information from a color input image captured by the color camera.
H04N 13/239 - Image signal generators using stereoscopic image cameras using two 2D image sensors whose relative position is equal to or corresponds to the interocular distance
G06T 7/593 - Depth or shape recovery from multiple images, from stereo images
H04N 13/243 - Image signal generators using stereoscopic image cameras using three or more 2D image sensors
H04N 13/25 - Image signal generators using stereoscopic image cameras using several image sensors with different characteristics other than position or viewpoint, e.g. with differences in resolution or colour-pickup properties; Control of the characteristics of one sensor by the image signals of another sensor
H04N 13/254 - Image signal generators using stereoscopic image cameras in combination with electromagnetic radiation sources for illuminating the subject
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
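The luminance/chrominance split described above can be illustrated per pixel: keep the monochrome camera's luminance and borrow color from the color image (assumed here to be already reprojected into the monochrome view via the mesh) by rescaling it to that luminance. The BT.601 luma weights and the rescaling scheme are assumptions for illustration, not the disclosed pipeline.

```python
import numpy as np

def colorize(gray_lum, color_rgb):
    """Combine luminance from a grayscale image with color from an
    aligned RGB image. gray_lum: (H, W) luminance in [0, 1].
    color_rgb: (H, W, 3) RGB in [0, 1], assumed already reprojected."""
    lum_weights = np.array([0.299, 0.587, 0.114])  # BT.601 luma
    color_lum = color_rgb @ lum_weights            # luma of color image
    scale = gray_lum / np.maximum(color_lum, 1e-6) # per-pixel rescale
    return np.clip(color_rgb * scale[..., None], 0.0, 1.0)
```

Running this once per eye, each with its own grayscale image, preserves correct stereo luminance while a single color camera supplies chrominance for both views.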
A projector sequentially generates light pulses to form a sparse grid array. The light pulses reflect off objects in an environment and are reflected towards a sensor array. The sensor array sequentially senses the reflected light pulses across pixels of the sensor array. A depth sensing system calculates depth information of objects in the environment based on the sequential nature of the pulse generation and the pulse sensing. The depth sensing system calculates depth information based on both the positional and temporal information of generated light pulses, and the positional and temporal information of sensed light pulses. The depth sensing system may generate a representation of the environment based on the depth information.
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G01S 17/89 - Lidar systems, specially adapted for specific applications, for mapping or imaging
H04N 25/40 - Extracting pixel data from an image sensor by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
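The per-position depth calculation described above reduces to time of flight. A minimal sketch assuming emitted and sensed pulses are associated by their grid position; `pulse_depth` and `depth_map_from_events` are illustrative names, not from the patent.

```python
C = 299_792_458.0  # speed of light, m/s (vacuum value used as an approximation)

def pulse_depth(t_emit, t_sense):
    """One-way distance from a round-trip pulse time of flight."""
    return C * (t_sense - t_emit) / 2.0

def depth_map_from_events(emit_events, sense_events):
    """Pair emitted and sensed pulses by sparse-grid position (assumed
    association) and compute per-position depth; positions with no
    detected return map to None."""
    sensed = {pos: t for pos, t in sense_events}
    depths = {}
    for pos, t0 in emit_events:
        t1 = sensed.get(pos)
        depths[pos] = pulse_depth(t0, t1) if t1 is not None else None
    return depths
```

A 20 ns round trip corresponds to roughly a 3 m object distance, which gives a feel for the timing resolution such a sensor array needs.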
30.
Electromagnetic interference reduction in extended reality environments
Methods and systems for reducing electromagnetic interference in an analog circuit of a control device for an augmented reality (AR) or virtual reality (VR) system are described. An analog circuit associated with the control device may include at least one amplifier and an analog-to-digital converter coupled to an amplifier by one or more electrical conductors. Electromagnetic interference induced in the one or more electrical conductors by an external AC magnetic field may be reduced using at least one component of the control device configured to reduce the electromagnetic interference.
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/296 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof; Bioelectric electrodes therefor specially adapted for particular uses, for electromyography [EMG]
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
The disclosed computer-implemented method may include systems for generating personalized avatar reactions during live video broadcasts. For example, the systems and methods described herein can access a social networking system user's profile to identify an avatar associated with the social networking system user. The systems and methods can generate an avatar reaction by modifying one or more features of the avatar based on a corresponding emoticon reaction. Once generated, the social networking system user can select the avatar reaction for addition to an ephemeral reaction stream associated with a live video broadcast. Various other methods, systems, and computer-readable media are also disclosed.
In one embodiment, a method by an arbiter associated with hardware resources of a computing system includes associating with N indexed requesters requesting accesses to the hardware resources, where each of the N indexed requesters is associated with a credit counter and a weight, repeatedly granting a right to access the hardware resources to each requester that satisfies conditions in an indexing order among the N indexed requesters until none of the N indexed requesters satisfies the conditions and replenishing, upon a determination that none of the N indexed requesters satisfies the conditions, a credit counter associated with each of the N indexed requesters.
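The credit-and-weight scheme above can be illustrated with a toy arbiter. This is a sketch under assumptions the abstract leaves open: a grant costs one credit, replenishment restores each counter to its weight, and each grant decision sweeps from index 0.

```python
class CreditArbiter:
    """Sketch of a credit-based arbiter over N indexed requesters.

    A requester is eligible when it is requesting and its credit counter
    is positive; when no requester is eligible, every counter is
    replenished to its weight (assumed policy).
    """
    def __init__(self, weights):
        self.weights = list(weights)
        self.credits = list(weights)

    def grant(self, requesting):
        """Return the index granted next, replenishing at most once."""
        for _ in range(2):                       # second pass runs post-replenish
            for i, req in enumerate(requesting):
                if req and self.credits[i] > 0:
                    self.credits[i] -= 1         # a grant consumes one credit
                    return i
            self.credits = list(self.weights)    # none eligible: replenish
        return None                              # nobody is requesting at all
```

With weights [2, 1] and both requesters active, index 0 is granted twice per round and index 1 once, then the counters replenish, which matches the weighted-sharing intent.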
A thermoplastic interposer formed of ordered polymer sheets may be configured to act as an interconnect and as a thermal energy spreader. In examples, the thermoplastic interposer may include one or more extensions or wings to further dissipate heat. In examples, laser direct structuring (LDS) may be used to form one or more through-silicon vias (TSVs) or through-chip vias. In examples, the ordered polymer sheets may be extruded via a roll-to-roll process, stacked, and processed to form a laminate interposer structure that makes up the interposer. In examples, the thermoplastic interposer may be used in multi-layer structures interconnecting two or more board assemblies such as printed circuit boards (PCBs).
An electronic device includes a substrate and a circuit having a plurality of electrically-conductive components disposed on the substrate. The plurality of electrically-conductive components includes first, second and third electrically-conductive components. The third electrically-conductive component has a first end portion forming a first interface with the first electrically-conductive component and a second end portion forming a second interface with the second electrically-conductive component. The first electrically-conductive component is made of a first material including a first metal. The second electrically-conductive component is made of a second material including the first metal. The third electrically-conductive component is made of a third material including a gallium-based alloy and a metallic filler. The metallic filler reduces a reactivity of the third electrically-conductive component with the first metal at the first and second interfaces, and thus minimizes deterioration of the first electrically-conductive component and the second electrically-conductive component over time.
H01L 23/29 - Encapsulations, e.g. encapsulating layers, coatings, characterised by the material
H01B 1/22 - Conductive material dispersed in non-conductive organic material, the conductive material comprising metals or alloys
H01L 21/48 - Manufacture or treatment of parts, e.g. containers, prior to assembly of the devices, using processes not covered by a single one of the groups
H01L 23/498 - Electrical connections on insulating substrates
35.
STYLIZING REPRESENTATIONS IN IMMERSIVE REALITY APPLICATIONS
A method and system for generating stylized representations in virtual/augmented reality applications. The method includes retrieving a two-dimensional (2D) image of a subject and a depth field associated with the 2D image. The method also includes generating a three-dimensional (3D) mesh based on the 2D image and the depth field. The method also includes generating a stylized texture field based on an analysis of the 2D image of the subject. The method also includes generating a 3D stylized model of the subject by enveloping the 3D mesh with the stylized texture field.
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
The disclosed method may include driving, using a biosensing device, a right-leg drive (RLD) signal, and measuring, using the biosensing device, a biosignal. The method also includes determining, from the measured biosignal, noise within the biosignal, and tuning, based on the noise, the RLD signal. Various other methods, systems, and computer-readable media are also disclosed.
G01D 3/036 - Measuring arrangements with provision for the special purposes referred to in the subgroups of this group, for mitigating undesired influences, e.g. temperature, pressure, upon the measuring arrangements themselves
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
37.
View Synthesis Pipeline for Rendering Passthrough Images
A processor accesses a depth map and a first image of a scene generated using one or more sensors of an artificial reality device. The processor generates, based on the first image, segmentation masks respectively associated with a plurality of object types. The segmentation masks segment the depth map into a plurality of segmented depth maps respectively associated with the object types. The processor generates meshes using, respectively, the segmented depth maps. For each eye of the user, the processor captures a second image and generates, based on the second image, segmentation information. The processor warps the plurality of meshes to generate warped meshes for the eye, and then generates an eye-specific mesh for the eye by compositing the warped meshes according to the segmentation information. The processor renders an output image for the eye using the second image and the eye-specific mesh.
G06T 7/285 - Motion analysis using a sequence of stereo image pairs
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING; Scene-specific elements in augmented reality scenes
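The segmentation and compositing steps of the passthrough pipeline above can be sketched with NumPy arrays, assuming a boolean mask per object type and an integer segmentation map naming the winning layer per pixel; the mesh warping itself is omitted.

```python
import numpy as np

def segment_depth(depth, masks):
    """Split one depth map into per-object-type depth maps; NaN marks
    pixels that do not belong to that object type."""
    return [np.where(m, depth, np.nan) for m in masks]

def composite(warped_layers, seg):
    """Per pixel, pick the warped layer named by the segmentation map."""
    out = np.empty_like(warped_layers[0])
    for k, layer in enumerate(warped_layers):
        out[seg == k] = layer[seg == k]
    return out
```

Keeping the layers separate until compositing is what lets each object type be warped with its own mesh before the eye-specific merge.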
A tunable lens including a pair of reflectors is disclosed. At least one of the reflectors may be curved for contributing to the focusing or defocusing power of the lens. At least one of the reflectors is translatable for tuning the focusing/defocusing power. The reflectors may be configured in a pancake lens configuration where one of the reflectors is a 50/50 reflector and the other is a polarization selective reflector. Refractive elements may be disposed between the reflectors for providing more optical power to the lens, and/or for balancing optical aberrations.
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements for controlling the direction of light
G02B 5/30 - OPTICS; Optical elements, systems or apparatus; Optical elements other than lenses; Polarising elements
A method of forming a uniaxially oriented crystalline polymer article includes heating a segment of a crystallizable polymer article to a first temperature, applying a stress to the crystallizable polymer article in an amount effective to induce a positive strain within the segment of the crystallizable polymer article, and heating the segment of the crystallizable polymer article to a second temperature greater than the first temperature while continuing to apply the stress.
B29C 55/08 - Shaping by stretching, e.g. drawing through a die; Apparatus therefor; of plates or sheets uniaxially, e.g. oblique stretching transverse to the direction of feed
40.
SYSTEMS FOR CALIBRATING NEUROMUSCULAR SIGNALS SENSED BY A PLURALITY OF NEUROMUSCULAR-SIGNAL SENSORS, AND METHODS OF USE THEREOF
Methods for calibrating neuromuscular signals sensed by a plurality of neuromuscular-signal sensors are provided. One example method includes, in response to a triggering event, receiving a pulse sensed by a neuromuscular-signal sensor of a plurality of neuromuscular-signal sensors of a wrist-wearable device. The method also includes, in accordance with a determination that the pulse indicates that at least one neuromuscular-signal sensor of the plurality of neuromuscular-signal sensors of the wrist-wearable device is offset from a respective predetermined default position, determining a worn position of the wrist-wearable device on the user's wrist, and adjusting analysis of neuromuscular signals of the plurality of neuromuscular-signal sensors based on an offset between the worn position of the wrist-wearable device and a predetermined default position of the wrist-wearable device.
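The final adjustment step above amounts to remapping sensor channels by the detected offset. A toy sketch assuming the neuromuscular-signal sensors are evenly spaced around the wrist, so a positional offset becomes a rotation of channel indices; `remap_channels` is an illustrative name, not from the patent.

```python
def remap_channels(samples, worn_offset):
    """Rotate the channel order so downstream analysis sees each sensor
    as if it were at its predetermined default position.

    samples:     per-sensor readings in physical channel order
    worn_offset: how many sensor positions the band is rotated from
                 the default worn position (assumed integer steps)
    """
    n = len(samples)
    return [samples[(i + worn_offset) % n] for i in range(n)]
```

An offset of zero is the identity, so the remap is a no-op when the band is worn in its default position.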
In one embodiment, a method by a computing system comprising a color camera and two monochrome cameras respectively associated with two eyes of a user includes computing a point cloud corresponding to a visible environment based at least on two stereoscopic grayscale images respectively captured by the two monochrome cameras, generating a mesh corresponding to the visible environment based on the computed point cloud, and generating two stereoscopic colorized images to be respectively displayed to the two eyes of the user, where each of the two stereoscopic colorized images is generated using (1) the mesh, (2) luminance information from one of the two stereoscopic grayscale images that is associated with the eye to which the stereoscopic colorized image is to be displayed, and (3) color information from a color input image captured by the color camera.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06T 3/00 - Geometric image transformations in the plane of the image
G06T 7/90 - Determination of colour characteristics
G06V 10/56 - Extraction of image or video features relating to colour
G06V 10/60 - Extraction of image or video features relating to luminance properties, e.g. using a reflectance or lighting model
A distributed imaging system for augmented reality devices is disclosed. The system includes a computing module in communication with a plurality of spatially distributed sensing devices. The computing module is configured to process input images from the sensing devices based on performing a local feature matching computation to generate corresponding first output images. The computing module is further configured to process the input images based on performing an optical flow correspondence computation to generate corresponding second output images. The computing module is further configured to computationally combine first and second output images to generate third output images.
A projector sequentially generates light pulses to form a sparse grid array. The light pulses reflect off objects in an environment and are reflected towards a sensor array. The sensor array sequentially senses the reflected light pulses across pixels of the sensor array. A depth sensing system calculates depth information of objects in the environment based on the sequential nature of the pulse generation and the pulse sensing. The depth sensing system calculates depth information based on both the positional and temporal information of generated light pulses, and the positional and temporal information of sensed light pulses. The depth sensing system may generate a representation of the environment based on the depth information.
A graded-index optical element may include a nanovoided material including a first surface and a second surface opposite the first surface. The nanovoided material may be transparent between the first surface and the second surface. Additionally, the nanovoided material may have a predefined change in effective refractive index in at least one axis due to a change in at least one of nanovoid size or nanovoid distribution along the at least one axis. Various other elements, devices, systems, materials, and methods are also disclosed.
B29D 11/00 - Producing optical elements, e.g. lenses or prisms
G02B 1/04 - OPTICS; Optical elements characterised by the substance of which they are made; Optical coatings for optical elements; made of organic materials, e.g. plastics
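The void-fraction/index relationship in the abstract above can be made concrete with a simple volume-weighted mix. This is an assumed, deliberately naive effective-medium model (real nanovoided optics would call for e.g. Maxwell Garnett mixing), shown only for illustration.

```python
def effective_index(n_solid, void_fraction, n_void=1.0):
    """Volume-weighted effective refractive index of a nanovoided
    material (assumed linear mixing rule)."""
    return (1.0 - void_fraction) * n_solid + void_fraction * n_void

def index_profile(n_solid, fractions):
    """Effective index along one axis for a graded void distribution,
    one value per sample position along that axis."""
    return [effective_index(n_solid, f) for f in fractions]
```

Increasing the void fraction monotonically pulls the effective index toward that of air, which is how a spatial void gradient produces the graded-index behaviour.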
Various embodiments set forth high-resolution liquid crystal displays and components thereof. In some embodiments, light emitted by a high-resolution green color liquid crystal display is combined, via a combiner, with light emitted by at least one lower-resolution red and blue color liquid crystal display. The red and blue color display(s) may include a single display or two displays positioned on opposing sides of the combiner. The combiner may be a dichroic or polarization-based combiner. Combined light from the green color display and the red and blue color display(s) is passed through collimating optics, such as a pancake lens or a Fresnel lens, toward a viewer's eye.
Methods and corresponding systems and apparatuses for saving power through selectively disabling clock signals in a systolic array are described. In some embodiments, a clock gate controller is operable to output a gated clock signal from which local clock signals of processing elements in the systolic array are derived. The gated clock signal corresponds to a root clock signal that is distributed through a clock distribution network or clock tree. The clock gate controller is located along one branch of the clock distribution network. The branch can be associated with processing elements that form a column within the systolic array. Disabling the gated clock signal disables the local clock signals along the entire branch, preventing any components that are clocked by those local clock signals from consuming power. Additional clock gate controllers can similarly be provided for other branches, including a branch associated with another column.
G06F 1/00 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and
G06F 1/3237 - Power management, i.e. event-initiated switching to a power-saving mode; Power saving characterised by the action undertaken, by disabling clock generation or clock distribution
G06F 15/80 - Architectures of general purpose stored program computers comprising an array of processing units with common control, e.g. single instruction multiple data processors
G06F 1/3203 - Power management, i.e. event-initiated switching to a power-saving mode
The disclosed system may include a conductive enclosure, a first printed circuit board (PCB) that includes multiple antenna feeds, and a second PCB that includes a grounding layer and one or more sensors. A first antenna feed may be electrically connected to the conductive enclosure, and a second antenna feed may be electrically connected to the grounding layer of the second PCB. As such, the grounding layer of the second PCB may act as a radiating element for a second antenna. Various other mobile electronic devices, apparatuses, and methods of manufacturing are also disclosed.
H01Q 5/328 - Arrangements for establishing operation on different wavebands; Individual or coupled radiating elements, each element being fed in an unspecified way, using frequency-dependent circuits or components, e.g. trap circuits or capacitors, between a radiating element and ground
H01Q 1/52 - Means for reducing coupling between antennas; Means for reducing coupling between an antenna and another structure
A camera assembly with two distinct magnifications is described. The camera assembly includes a lens assembly with two modules to perceive light. The first module includes two halves of different lens assemblies disposed together to provide two different optical paths and fields of view to obtain the two different optical magnifications. The second module includes a lens assembly common to the two halves of the first module. A biprism provides light separation for the two halves of the first module. Thus, light received and subjected to two different magnifications in the first module at a distal end of the camera assembly is further processed through the second module before being provided to a sensor. The biprism at the camera's distal end keeps the two different lens assemblies of the first module at the same viewing direction.
H04N 23/55 - Optical parts specially adapted for electronic image sensors; Mounting thereof
G02B 13/18 - Optical objectives specially designed for the uses specified below, with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups ,
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
49.
SYSTEMS, METHODS, AND DEVICES FOR PRODUCING INTERCONNECTS ON DEFORMABLE SUBSTRATES OF ELECTRONIC DEVICES
The present disclosure provides systems, methods, and devices for producing an interconnect. An electronic device of the present disclosure includes a deformable substrate including a circuit. The circuit includes a channel extending from a first portion of the deformable substrate to a second portion of the deformable substrate. A first circuit component is adjacent to the first portion of the deformable substrate. A second circuit component is adjacent to the second portion of the deformable substrate. A first metal material is formed overlaying a first portion of the deformable substrate including a first portion of the channel. A second metal material interfaces with the first metal material, thereby substantially occupying an interior volume of the channel.
H05K 3/18 - Apparatus or processes for manufacturing printed circuits in which the conductive material is applied to the insulating support so as to form the desired conductive pattern, using precipitation techniques to apply the conductive material
50.
ADAPTIVE IMPEDANCE TUNING FOR CAPACITIVE INTERCONNECTS
The disclosed system may include a detachable capsule that includes a capsule-side capacitive plate. The system may also include a receiving antenna electrically connected to an antenna-side capacitive plate, and a capacitive sensor electrically connected to the capsule-side capacitive plate. The two plates may be coupled to each other and may transmit RF signals between them. The capacitive sensor may be configured to detect an amount of capacitance between the two capacitive plates. The system may also include an antenna matching tuner electrically connected to the capacitive sensor and to an antenna feed. Then, upon receiving capacitance measurements from the capacitive sensor, the antenna matching tuner may alter various parameters of the antenna feed including impedance matching parameters. Various other apparatuses and mobile wearable devices are also disclosed.
H01Q 1/27 - Adaptation for use in or on movable bodies
G01R 27/26 - Measuring inductance or capacitance; Measuring quality factor, e.g. by using the resonance method; Measuring loss factor; Measuring dielectric constants
G04G 21/04 - Input or output devices integrated in time-pieces using radio waves
H01Q 7/00 - Loop antennas with a substantially uniform current distribution and having a directional radiation pattern in a plane perpendicular to the plane of the loop
Aspects of the present disclosure are directed to creating virtual doors within artificial reality (XR) universes for traversal within that XR universe and between other XR universes. Users can create virtual doors that control access to their privately owned property (e.g., world, parcel, house, etc.) in an XR universe. For example, an owner of a virtual door can manually lock the door to prevent any user from entering their property through the virtual door. As another example, an owner of a virtual door can configure door permissions and/or privacy settings that serve as heuristics by which a door access control manager determines whether to authorize access by a particular user, XR world, and/or XR universe. The XR universe traversal system can control an execution environment to smoothly transition between different applications, thereby enabling a user to traverse between different XR universes without having to leave the XR environment.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
52.
Mapping a Real-World Room for A Shared Artificial Reality Environment
A room manager can generate mappings for a real-world room that support a shared XR environment. For example, the real-world room can include real-world objects and surfaces, such as a table(s), chair(s), wall(s), door(s), window(s), etc. The room manager can generate XR object definitions based on information received about the real-world room, object(s), and surface(s). For example, the room manager can implement a flow that guides a user equipped with an XR system to provide information for the XR object definitions, such as real-world surfaces that map to the XR object(s), borders (e.g., measured using a component of the XR system), such as borders on real-world surfaces, semantic information (e.g., number of seat assignments at an XR table, size of XR objects, etc.), and other suitable information. Implementations generate previews of the shared XR environment, such as a local preview and a remote preview.
Aspects of the present disclosure are directed to creating virtual doors within artificial reality (XR) universes for traversal within that XR universe and between other XR universes. Users can create virtual doors that control access to their privately owned property (e.g., world, parcel, house, etc.) in an XR universe. For example, an owner of a virtual door can manually lock the door to prevent any user from entering their property through the virtual door. As another example, an owner of a virtual door can configure door permissions and/or privacy settings that serve as heuristics by which a door access control manager determines whether to authorize access by a particular user, XR world, and/or XR universe. The XR universe traversal system can control an execution environment to smoothly transition between different applications, thereby enabling a user to traverse between different XR universes without having to leave the XR environment.
A method and system for text-to-video generation. The method includes receiving a text input, generating a representation frame based on the text input using a model trained on text-image pairs, generating a set of frames based on the representation frame and a first frame rate, interpolating the set of frames to a higher frame rate, generating a first video based on the interpolated set of frames, increasing a resolution of the first video based on a first and second super-resolution model, and generating an output video based on a result of the super-resolution models.
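The frame-interpolation stage above can be illustrated with a naive cross-fade between consecutive frames; the learned interpolation model implied by the abstract is replaced here by linear blending purely for illustration.

```python
import numpy as np

def interpolate_frames(frames, factor):
    """Raise the frame rate by a given integer factor via linear
    cross-fade between consecutive frames (stand-in for a learned
    interpolation model)."""
    out = []
    for a, b in zip(frames[:-1], frames[1:]):
        for k in range(factor):
            t = k / factor
            out.append((1.0 - t) * a + t * b)   # blend toward the next frame
    out.append(frames[-1])                      # keep the final keyframe
    return out
```

Two keyframes interpolated by a factor of 2 yield three frames, with the middle one an even blend of its neighbours.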
Methods, systems, and storage media for generating audio data include receiving a text input. The method also includes receiving a plurality of representative audio sources and encoding the plurality of representative audio sources into a plurality of audio tokens. The method includes encoding the text input into a plurality of text representations. The method comprises mapping each audio token of the plurality of audio tokens to a text representation of the plurality of text representations. The method also comprises determining a relationship score based on mapping each audio token to the text representation, wherein the relationship score identifies a distribution of audio tokens from the plurality of audio tokens. The methods and systems can also comprise decoding the subgroup of audio tokens to yield a reconstructed audio source.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme-to-phoneme conversion, prosody generation or stress or intonation determination
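The token-to-text mapping and the resulting distribution described above can be sketched with cosine similarity; the encoders themselves are out of scope, so small toy vectors stand in for real audio-token and text embeddings.

```python
import numpy as np

def map_tokens(audio_tokens, text_reps):
    """Map each audio token to its nearest text representation by
    cosine similarity; returns one text index per audio token."""
    a = audio_tokens / np.linalg.norm(audio_tokens, axis=1, keepdims=True)
    t = text_reps / np.linalg.norm(text_reps, axis=1, keepdims=True)
    return np.argmax(a @ t.T, axis=1)

def relationship_score(assignment, n_text):
    """Distribution of audio tokens over text representations
    (assumed reading of the abstract's 'relationship score')."""
    counts = np.bincount(assignment, minlength=n_text)
    return counts / counts.sum()
```

The distribution over text representations is what lets a downstream step pick the subgroup of audio tokens most related to the input text before decoding.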
In one example, an apparatus for an integrated sensing and display system includes a first semiconductor layer that includes an image sensor; a second semiconductor layer that includes a display; a third semiconductor layer that includes compute circuits configured to support an image sensing operation by the image sensor and a display operation by the display; and a semiconductor package that encloses the first, second, and third semiconductor layers, the semiconductor package further including a first opening to expose the image sensor and a second opening to expose the display. The first, second, and third semiconductor layers form a first stack structure along a first axis. The third semiconductor layer is sandwiched between the first semiconductor layer and the second semiconductor layer in the first stack structure.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06V 10/25 - Determination of a region of interest [ROI] or a volume of interest [VOI]
G09G 3/32 - Control arrangements or circuits, of interest only for display using visual indicators other than cathode-ray tubes, for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix, using controlled light sources, using semiconductor electroluminescent panels, e.g. using light-emitting diodes [LED]
H01L 23/00 - SEMICONDUCTOR DEVICES NOT COVERED BY CLASS - Details of semiconductor or other solid-state devices
H01L 27/15 - Devices consisting of a plurality of semiconductor or other solid-state components formed in or on a common substrate including semiconductor components with at least one potential-jump barrier or surface barrier, specially adapted for light emission
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
Aspects of the present disclosure are directed to systems for capturing, recording, and playing back steps for building objects and structures in an artificial reality (XR) world. A teacher or creator can initiate a capture context within a build mode of an XR application, during which steps (and potentially other metadata) of a building process are recorded and stored. An XR world building engine can generate an interactive replay of the recorded building process that allows another user to view the building process within an XR environment at the user's preferred pace and/or from different perspectives. In some implementations, the XR world building engine can generate a replay of the teacher's or creator's avatar performing the building process, such that the user can view the building process from a third person perspective and perform the same steps to learn how to build objects and structures in an XR world.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
G06T 19/20 - Manipulating 3D models or images for computer graphics; Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
The disclosed device may include a lens stack. The lens stack may include a first gradient-index liquid crystal lens and a second gradient-index liquid crystal lens in tandem with the first gradient-index liquid crystal lens. The lens stack may be configured to reach a target optical power based on a first optical power of the first gradient-index liquid crystal lens and a second optical power of the second gradient-index liquid crystal lens. Various other devices, systems, and methods are also disclosed.
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics; for the control of the position or the direction of light beams, i.e. deflection
A device is provided. The device includes a waveguide configured to guide an image light to propagate from a light inputting surface to a light outputting surface. The waveguide includes a substrate having a back surface facing an eye-box region of the device and a front surface opposite to the back surface, a plurality of out-coupling structures disposed at the back surface or at least partially inside the substrate, and a medium layer embedded inside the substrate between the out-coupling structures and the front surface. The medium layer has a refractive index that is lower than that of the substrate. The device also includes an optical lens printed over the back surface of the substrate.
Methods, systems, and storage media for generating audio data include receiving a text input. The method also includes receiving a plurality of representative audio sources and encoding the plurality of representative audio sources into a plurality of audio tokens. The method includes encoding the text input into a plurality of text representations. The method also includes mapping each audio token of the plurality of audio tokens to a text representation of the plurality of text representations. The method also includes determining a relationship score based on mapping each audio token to the text representation, wherein the relationship score identifies a distribution of audio tokens from the plurality of audio tokens. The method and systems can also include decoding a subgroup of the audio tokens to yield a reconstructed audio source.
G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
G10L 19/02 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis using spectral analysis, e.g. transform vocoders or subband vocoders
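The token-to-text mapping and the resulting "distribution of audio tokens" can be sketched as follows. This is an illustrative reconstruction only: the function names and the choice of cosine similarity as the mapping criterion are assumptions, not taken from the patent.

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(u, v))
    return dot / (sqrt(sum(x * x for x in u)) * sqrt(sum(y * y for y in v)))

def map_tokens_to_text(audio_tokens, text_reprs):
    """Assign each audio token to its nearest text representation and
    return the assignments plus the distribution of tokens over text
    representations (a stand-in for the relationship score)."""
    assignments = [max(range(len(text_reprs)),
                       key=lambda j: cosine(tok, text_reprs[j]))
                   for tok in audio_tokens]
    counts = [assignments.count(j) for j in range(len(text_reprs))]
    distribution = [c / len(assignments) for c in counts]
    return assignments, distribution
```

A decoder would then reconstruct audio from the subgroup of tokens whose distribution weight meets some criterion.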
An image-sensing system includes multiple sensing modules. Each of the multiple sensing modules includes multiple optical sensors arranged in an array. Each of the multiple optical sensors is configured to be switched on and off to generate analog sensing data. The image-sensing system also includes an analog-to-digital converter (ADC), shared by the multiple optical sensors, configured to convert the analog sensing data generated by the multiple optical sensors into digital data. The image-sensing system also includes a processor configured to control the multiple sensing modules.
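A shared ADC of this kind is typically time-multiplexed across the switched-on sensors. The sketch below illustrates that behavior only; the class names, the 10-bit resolution, and the dictionary-based sensor representation are all illustrative assumptions.

```python
class SharedADC:
    """One ADC time-multiplexed across an array of optical sensors:
    each switched-on sensor is sampled in turn and its analog value
    quantised to an n-bit digital code."""

    def __init__(self, bits=10, full_scale=1.0):
        self.levels = (1 << bits) - 1      # e.g. 1023 codes for 10 bits
        self.full_scale = full_scale

    def convert(self, analog):
        clamped = min(max(analog, 0.0), self.full_scale)
        return round(clamped / self.full_scale * self.levels)

def read_module(sensors, adc):
    """Return digital samples only for sensors that are switched on."""
    return {i: adc.convert(s["value"])
            for i, s in enumerate(sensors) if s["on"]}
```

Sharing one converter across the array trades readout speed for reduced silicon area and power, which is the usual motivation for this topology.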
In one embodiment, a method includes receiving, by a first client system, from one or more remote servers, a current version of a neural network model including multiple model parameters, training the neural network model on multiple examples retrieved from a local data store to generate multiple updated model parameters, wherein each of the examples includes one or more features and one or more labels, calculating a user valuation associated with the first client system, wherein the user valuation represents a measure of utility of training the neural network model on the multiple examples, and sending, to one or more of the remote servers, the trained neural network model and the user valuation, wherein the user valuation is associated with a likelihood of the first client system being selected for a subsequent training of the neural network model.
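The client-side flow in this abstract can be sketched with a toy model. Everything concrete here is an assumption for illustration: the 1-feature linear model, the learning rate, and the use of local loss reduction as the "user valuation" (the patent only says the valuation measures the utility of training on the local examples).

```python
def train_client(params, examples, lr=0.1, epochs=5):
    """Client-side federated update for a toy linear model y = w*x + b.
    `params` is the server's current (w, b); each example is (x, y).
    The user valuation is sketched as the reduction in local MSE."""
    w, b = params

    def mse(w_, b_):
        return sum((w_ * x + b_ - y) ** 2 for x, y in examples) / len(examples)

    loss_before = mse(w, b)
    for _ in range(epochs):
        for x, y in examples:            # plain SGD over the local store
            err = w * x + b - y
            w -= lr * err * x
            b -= lr * err
    valuation = loss_before - mse(w, b)  # utility of this client's data
    return (w, b), valuation             # sent back to the servers
```

A server could then prefer clients with high valuations when sampling participants for the next training round.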
Many users access artificial reality (XR) experiences through their mobile phones. However, it is difficult to translate XR experiences to a two-dimensional (2D) screen in a way that feels intuitive and natural. Thus, the technology can map an interaction plane in a three-dimensional (3D) scene to the 2D screen with as many affordances as possible, even if the plane is not parallel to the mobile phone. The plane can be a fixed surface or a dynamically changeable surface through which the user sends inputs through the 2D screen. The mapping of the plane to the 2D screen can control interaction with a virtual object on the interaction plane in the XR environment, enabling parity between the same experience on XR and non-XR interfaces.
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/04842 - Selection of displayed objects or displayed text elements
G06F 3/04845 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, magnification or colour change
G06F 3/04886 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
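The core of mapping a 2D screen tap onto a 3D interaction plane is a ray-plane intersection. The sketch below assumes the camera ray has already been unprojected from the tap position using the phone's projection matrix; the function name and argument layout are illustrative, not from the patent.

```python
def ray_plane_hit(cam_pos, ray_dir, plane_point, plane_normal):
    """Intersect a camera ray (unprojected from a 2D screen tap) with
    the 3D interaction plane. Returns the 3D hit point, or None when
    the ray is parallel to the plane or the plane is behind the camera."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    denom = dot(ray_dir, plane_normal)
    if abs(denom) < 1e-9:
        return None                       # ray parallel to the plane
    diff = [p - c for p, c in zip(plane_point, cam_pos)]
    t = dot(diff, plane_normal) / denom   # ray parameter at intersection
    if t < 0:
        return None                       # plane behind the camera
    return tuple(c + t * d for c, d in zip(cam_pos, ray_dir))
```

The hit point, expressed in the plane's own 2D coordinates, is what gives the 2D screen the same affordances as direct interaction in XR, even when the plane is not parallel to the phone.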
The disclosed computer-implemented method may include coupling an array of reflective polarizers to a back surface of a back optical substrate; coupling an array of quarter-wave plates to a front surface of the back optical substrate such that the array of quarter-wave plates is aligned with the array of reflective polarizers; molding the back optical substrate with at least one initial mold, wherein the at least one initial mold defines an initial array of optical element surfaces that is aligned with the array of quarter-wave plates; and twin-sheet thermoforming, between a front twin-sheet mold and a back twin-sheet mold, the back optical substrate with a front optical substrate, wherein the front twin-sheet mold defines a front array of optical element surfaces and the back twin-sheet mold defines a back array of optical element surfaces. Various other methods, apparatuses, and systems are also disclosed.
A room manager can generate mappings for a real-world room that support a shared XR environment. For example, the real-world room can include real-world objects and surfaces, such as a table(s), chair(s), wall(s), door(s), window(s), etc. The room manager can generate XR object definitions based on information received about the real-world room, object(s), and surface(s). For example, the room manager can implement a flow that guides a user equipped with an XR system to provide information for the XR object definitions, such as real-world surfaces that map to the XR object(s), borders (e.g., measured using a component of the XR system), such as borders on real-world surfaces, semantic information (e.g., number of seat assignments at an XR table, size of XR objects, etc.), and other suitable information. Implementations generate previews of the shared XR environment, such as a local preview and a remote preview.
An administered authentication system can authenticate an artificial reality device using an authorization record between a user account and an artificial reality device. In some implementations, the authorization record is created in response to activation of a user account-specific key sent to a user-supplied contact, where an artificial reality device identifier was provided with the user-supplied contact. In other implementations, the authorization record is created in response to activation of a user account-specific key provided to the artificial reality device as a code, where activation of the key includes adding an artificial reality device identifier to a key activation message. In yet other implementations, the authorization record is created in response to an application associated with a user account activating an artificial reality device-specific key, with an artificial reality device identifier, that is provided via the artificial reality device.
G06F 21/34 - User authentication involving the use of external additional devices, e.g. dongles or smart cards
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 21/33 - User authentication using certificates
G06F 21/36 - User authentication by graphic or iconic representation
G06F 21/40 - User authentication by quorum, i.e. whereby two or more security principals are required
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system
68.
MAPPING A REAL-WORLD ROOM FOR A SHARED ARTIFICIAL REALITY ENVIRONMENT
A room manager can generate mappings for a real-world room that support a shared XR environment. For example, the real-world room can include real-world objects and surfaces, such as a table(s), chair(s), wall(s), door(s), window(s), etc. The room manager can generate XR object definitions based on information received about the real-world room, object(s), and surface(s). For example, the room manager can implement a flow that guides a user equipped with an XR system to provide information for the XR object definitions, such as real-world surfaces that map to the XR object(s), borders (e.g., measured using a component of the XR system), such as borders on real-world surfaces, semantic information (e.g., number of seat assignments at an XR table, size of XR objects, etc.), and other suitable information. Implementations generate previews of the shared XR environment, such as a local preview and a remote preview.
Many users access artificial reality (XR) experiences through their mobile phones. However, it is difficult to translate XR experiences to a two-dimensional (2D) screen in a way that feels intuitive and natural. Thus, the technology can map an interaction plane in a three-dimensional (3D) scene to the 2D screen with as many affordances as possible, even if the plane is not parallel to the mobile phone. The plane can be a fixed surface or a dynamically changeable surface through which the user sends inputs through the 2D screen. The mapping of the plane to the 2D screen can control interaction with a virtual object on the interaction plane in the XR environment, enabling parity between the same experience on XR and non-XR interfaces.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
Aspects of the present disclosure are directed to systems for capturing, recording, and playing back steps for building objects and structures in an artificial reality (XR) world. A teacher or creator can initiate a capture context within a build mode of an XR application, during which steps (and potentially other metadata) of a building process are recorded and stored. An XR world building engine can generate an interactive replay of the recorded building process that allows another user to view the building process within an XR environment at the user's preferred pace and/or from different perspectives. In some implementations, the XR world building engine can generate a replay of the teacher's or creator's avatar performing the building process, such that the user can view the building process from a third person perspective and perform the same steps to learn how to build objects and structures in an XR world.
Aspects of the present disclosure are directed to streaming interactive content from a native application executing at an artificial reality (XR) device into an artificial reality environment and/or to nearby XR device(s). A shell environment at an XR system can manage the software components of the system. The shell environment can include a shell application and a three-dimensional shell XR environment displayed to a user. An additional application, natively executing at the XR system, can provide a host version of content and a remote version of content. A two-dimensional virtual object displayed in the shell XR environment can display the host version of the content, and the remote version of the content can be streamed to a remote XR system. The remote XR system can display the remote content within another two-dimensional virtual object, for example in another shell XR environment displayed by the remote XR system.
A system on a chip includes a first subsystem comprising a first memory; a second subsystem comprising a second memory; and an always-on subsystem. The always-on subsystem can comprise processing circuitry configured to: in response to a first activation event, signal the first subsystem to initiate repair operations on the first memory, and in response to a second activation event occurring after the first event, signal the second subsystem to initiate repair operations on the second memory.
In particular embodiments, a computing system may access a particular image frame and corresponding depth information of a dynamic scene. The depth information is used to generate a point cloud of the particular image frame. The system may generate a first latent representation based on the point cloud. The system may access a sequence of image frames of the dynamic scene and a set of key frames. The system may generate, using a temporal transformer, a second latent representation based on tracking and combining the temporal relationship between the sequence of image frames and the set of key frames. The system may access camera parameters for rendering one or more objects of the dynamic scene from a desired novel viewpoint and generate a third latent representation. The system may train an improved neural radiance fields (NeRF) based model for free-viewpoint rendering of the dynamic scene based on the first, second, and third latent representations.
A display engine includes an arrayed light source panel including a light source array of individually addressable light sources, each light source being configured to emit a first light beam associated with a first wavelength band. The display engine also includes a beam reshaping module including beam reshaping elements, each beam reshaping element being configured to reshape a first beam profile of the first light beam and output a second light beam with a second beam profile. The display engine also includes a transmissive display driver panel including a display driver module integrated with a pixelated color conversion module that includes a plurality of color conversion units configured to at least partially convert the second light beam into a third light beam associated with a second wavelength band. The display engine also includes an active light modulation medium configured to modulate the third light beam for displaying an image.
The disclosed apparatus may include a wearable haptic ring that features input capabilities relative to a computing system. In various examples, the wearable haptic ring may be designed to curve around a human finger of a wearer with a touchpad that is seamlessly integrated with the ring. For example, the seamlessly integrated touchpad may be operable by another finger of the wearer. Moreover, the haptic ring may include a haptic feedback unit designed to provide haptic feedback in response to input from the wearer. As such, the haptic ring may enable a wide range of user inputs while appearing like a typical ring rather than a computer input/output device. Various other implementations are also disclosed.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
76.
HYBRID ELECTRICALLY AND THERMALLY SWITCHABLE SYSTEM USING HEAT SOURCE
An optical device includes a first optically dimmable switch for providing a first transmittance while the first optically dimmable switch is in a first state and providing a second transmittance distinct from the first transmittance while the first optically dimmable switch is in a second state distinct from the first state. The optical device also includes a dynamic heat source thermally coupled with the first optically dimmable switch. The dynamic heat source is at a first temperature at a first time and is at a second temperature distinct from the first temperature at a second time mutually exclusive from the first time. The optical device may operate as an optical dimming device, which may be used in head-mounted display devices or as dimmable windows or shutters.
G02F 1/13 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells
H01H 61/02 - Electrothermal relays wherein the thermally-sensitive member is heated indirectly, e.g. resistively or inductively
77.
User Controlled Task Execution with Task Persistence for Assistant Systems
In one embodiment, a method includes receiving a first user request at a client system to suspend a first task being executed by an assistant system operating on the client system, suspending the execution of the first task responsive to the first user request, receiving a second user request at the client system, determining that the second user request is a request to resume the suspended first task based on user interactions with the assistant system with respect to one or more entities associated with the first task, and presenting a prompt to resume the first task at the client system.
G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 9/451 - Execution arrangements for user interfaces
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
G06F 16/9536 - Search customisation based on social or collaborative filtering
G06F 18/2321 - Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
G06V 20/00 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in augmented reality scenes
G06V 20/30 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING Scene-specific elements in albums, collections or shared content, e.g. social network photos or video
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
H04L 51/212 - Monitoring or handling of messages using filtering or selective blocking
H04L 51/222 - Monitoring or handling of messages using geographical location information, e.g. messages transmitted or received in proximity of a certain spot or area
H04L 51/224 - Monitoring or handling of messages providing notification on incoming messages, e.g. pushed notifications of received messages
H04L 51/52 - User-to-user messaging in packet-switching networks, transmitted according to store-and-forward or real-time protocols, e.g. e-mail, for supporting social networking services
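The suspend/resume behavior in the task-persistence abstract above amounts to a small state machine keyed on shared entities. The sketch below is an illustrative stand-in: the class name, the entity-overlap heuristic, and the prompt string are assumptions, not the assistant system's actual intent-matching logic.

```python
class AssistantTaskManager:
    """Minimal sketch of task persistence: tasks can be suspended, and a
    later request is treated as a resume request when it mentions an
    entity associated with a suspended task."""

    def __init__(self):
        self.suspended = {}   # task_id -> set of associated entities

    def suspend(self, task_id, entities):
        self.suspended[task_id] = set(entities)

    def match_resume(self, request_entities):
        """Return a suspended task sharing an entity with the request,
        or None if the request relates to no suspended task."""
        for task_id, entities in self.suspended.items():
            if entities & set(request_entities):
                return task_id
        return None

    def resume(self, task_id):
        self.suspended.pop(task_id, None)
        return f"Resume task {task_id}?"   # prompt presented to the user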
The disclosed computer-implemented method may include applying, via a sound reproduction system, sound cancellation that reduces an amplitude of various sound signals. The method further includes identifying, among the sound signals, an external sound whose amplitude is to be reduced by the sound cancellation. The method then includes analyzing the identified external sound to determine whether the identified external sound is to be made audible to a user and, upon determining that the external sound is to be made audible to the user, the method includes modifying the sound cancellation so that the identified external sound is made audible to the user. Various other methods, systems, and computer-readable media are also disclosed.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
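The selective-cancellation idea in the abstract above can be sketched as subtracting everything and then re-admitting sounds the system decides should stay audible. This is a schematic, not the disclosed signal chain: the frame/detector decomposition and both callables are illustrative assumptions.

```python
def process_frame(frame, detector, keep_predicate):
    """Sketch of selective cancellation: external sounds that the
    predicate marks as important (e.g. a siren) are excluded from the
    anti-noise signal so they remain audible to the user. `detector`
    returns (label, component) pairs whose components are the
    cancellable parts of the frame; both callables are assumptions."""
    anti_noise = [-s for s in frame]          # naive full cancellation
    for label, component in detector(frame):
        if keep_predicate(label):             # re-admit this sound
            anti_noise = [a + c for a, c in zip(anti_noise, component)]
    return anti_noise
```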
An example method for execution on a system on a chip (SoC) having a plurality of subsystems includes receiving, by a storage controller from a subsystem of the plurality of subsystems, a command to fetch, from a local memory, task descriptor data comprising access parameters for accessing a storage device, the access parameters including a storage device address; obtaining, by an encryption engine of the SoC, the command to fetch the task descriptor data; determining, by the encryption engine based on an access rule, whether the subsystem has sufficient privilege to access the storage device address; in response to determining that the subsystem has sufficient privilege to access the storage device address, encrypting source data in the local memory according to an encryption key associated with the subsystem; and providing the encrypted source data to the storage controller for writing to the storage device at the storage device address.
G06F 1/16 - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 15/78 - Architectures of general purpose stored program computers comprising a single central processing unit
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
G06F 21/64 - Protecting data integrity, e.g. using checksums, certificates or signatures
G06F 21/79 - Protecting specific internal or peripheral components, in which the protection of a component leads to protection of the entire computer, to assure secure storage of data in semiconductor storage media, e.g. directly-addressable memories
G06T 19/00 - Manipulating 3D models or images for computer graphics
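The privilege-check-then-encrypt flow of the SoC abstract above can be sketched in a few lines. Everything concrete is illustrative: the rule table, the key values, and especially the XOR "cipher", which stands in for whatever real block cipher the encryption engine would use.

```python
from itertools import cycle

ACCESS_RULES = {                # subsystem -> address ranges it may use
    "camera": [(0x1000, 0x2000)],
}
KEYS = {"camera": b"\x5a\xa5\x0f\xf0"}   # per-subsystem keys (toy values)

def handle_write(subsystem, dest_addr, source_data):
    """Check the access rule for the requesting subsystem, then encrypt
    the source data with that subsystem's key before handing it to the
    storage controller. XOR stands in for a real cipher."""
    allowed = any(lo <= dest_addr < hi
                  for lo, hi in ACCESS_RULES.get(subsystem, []))
    if not allowed:
        raise PermissionError(f"{subsystem} may not access {dest_addr:#x}")
    key = KEYS[subsystem]
    encrypted = bytes(b ^ k for b, k in zip(source_data, cycle(key)))
    return encrypted            # forwarded to the storage controller
```

Keying encryption per subsystem means a compromised subsystem cannot read data another subsystem wrote, even through the shared storage controller.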
80.
Methods and apparatus for autocalibration of a wearable electrode sensor system
Methods and systems for calibrating the position and/or orientation of a wearable device configured to be worn on a wrist or forearm of a user are described. The method comprises sensing a plurality of neuromuscular signals from the user using a plurality of sensors arranged on the wearable device, providing the plurality of neuromuscular signals and/or signals derived from the plurality of neuromuscular signals as inputs to one or more trained autocalibration models, determining, based at least in part on the output of the one or more trained autocalibration models, a current position and/or orientation of the wearable device on the user, and generating a control signal based, at least in part, on the current position and/or orientation of the wearable device on the user and the plurality of neuromuscular signals.
A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
A61B 5/06 - Devices, other than using radiation, for detecting or locating foreign bodies
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
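One simple way such autocalibration can work: a rotation of the band on the wrist circularly shifts which electrode sees which muscle, so the shift can be estimated by matching the measured per-channel signal energies against a reference profile. This correlation heuristic is an illustrative stand-in for the trained autocalibration models in the abstract.

```python
def estimate_rotation(channel_rms, reference_rms):
    """Estimate how far the electrode band has rotated on the wrist by
    finding the circular shift of the reference per-channel RMS profile
    that best correlates with the measured profile."""
    n = len(channel_rms)

    def score(shift):
        rotated = reference_rms[shift:] + reference_rms[:shift]
        return sum(a * b for a, b in zip(channel_rms, rotated))

    return max(range(n), key=score)   # best-matching electrode offset
```

The estimated offset can then be used to re-index channels before the neuromuscular signals are fed to downstream control models.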
Methods, systems, and storage media for auto-generating an artificial reality environment based on access to personal user content are disclosed. Exemplary implementations may: receive consent from a user to access user content on a user device, the user content comprising digital media; generate a user profile based at least in part on the user content; determine user preferences based at least in part on the user profile; generate an artificial reality environment based at least in part on the user preferences; and share the artificial reality environment with contacts of the user.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
82.
Systems and methods for spatial update latency compensation for head-tracked audio
A system can include a position sensor configured to output position data of a head wearable display (HWD). The system can include one or more processors configured to identify a first head angle of the HWD using the position sensor, generate an audio signal using the first head angle, identify a second head angle of the HWD using the position sensor, determine an angle error based at least on the first head angle and the second head angle, and apply at least one of a time difference or a level difference to the audio signal based at least on the angle error to adjust the audio signal. The system can include an audio output device configured to output the adjusted audio signal. By adjusting the audio signal using the angle error, the system can correct for long spatial update latencies and reduce the perceptual impact of such latencies for the user.
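The time-difference half of this correction can be sketched with the classic Woodworth approximation of the interaural time difference (ITD). The head radius, sample rate, and single-channel sample shift below are illustrative assumptions; the level-difference (ILD) half of the correction is omitted.

```python
import math

HEAD_RADIUS_M = 0.0875       # average head radius (assumed)
SPEED_OF_SOUND = 343.0       # m/s in air at room temperature

def itd_seconds(angle_error_rad):
    """Woodworth approximation of the interaural time difference for a
    source offset from the median plane by the angle error."""
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (
        angle_error_rad + math.sin(angle_error_rad))

def compensate(samples, angle_error_rad, sample_rate=48_000):
    """Shift one ear's channel by the ITD implied by the angle error,
    approximating the time-difference adjustment described above.
    Output length equals input length; shifted-in samples are zero."""
    shift = round(itd_seconds(angle_error_rad) * sample_rate)
    if shift >= 0:           # delay this channel by `shift` samples
        return [0.0] * shift + samples[:len(samples) - shift]
    return samples[-shift:] + [0.0] * (-shift)
```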
The disclosed system may include a support structure dimensioned for a user's hand. The system may also include transmitting electrodes coupled to a first finger portion of the support structure and may further include receiving electrodes coupled to a second, different finger portion of the support structure. The system may also include a controller that is coupled to the support structure and that is communicatively connected to the transmitting and receiving electrodes. The controller may also be configured to cause the transmitting electrodes to transmit a signal, detect at least some of the transmitted signal at the receiving electrodes and, based on the detected signal, determine that at least two fingers of the user's hand are touching each other. Various other methods, systems, and computer-readable media are also disclosed.
A wrist device for gathering user data for biometric analysis and related methods, systems, and storage media are disclosed. The wrist device may include a base module having a low-power display, a base battery, and sensors. The base module may be configured to operate as a standalone module in a low-power mode. The base module may also be configured to gather data through the sensors. A computing module may be removably coupled with the base module. The computing module may include a high-definition display and a computing battery. The computing module may be configured to analyze the data when coupled to the base module. The computing battery of the computing module may be configured to charge the base battery of the base module when the computing module is coupled to the base module.
A system suppresses howl in a device including microphones and speakers, for example, an artificial reality headset. A speaker of the device presents audio content. The audio content presented by the speaker is received by a microphone of the device, thereby creating a howl in certain situations. The system detects the presence of the howl in a region of the audio content using an adaptive notch filter. The system suppresses the howl by reducing the gain of one or more frequencies of the audio content. The system may detect the presence of the howl by monitoring the flatness of the signal, or based on tonality detection using linear prediction.
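Flatness-based howl detection can be sketched with the spectral flatness measure: the ratio of the geometric to the arithmetic mean of the power spectrum, which approaches 1 for noise-like signals and 0 for a tonal signal such as feedback howl. The naive DFT, frame length, and 0.1 threshold below are illustrative assumptions, not the patented detector.

```python
import cmath
import math

def power_spectrum(frame):
    """Naive DFT power spectrum (fine for a short analysis frame)."""
    n = len(frame)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(frame))) ** 2
            for k in range(1, n // 2)]   # skip the DC bin

def spectral_flatness(frame):
    """Geometric mean over arithmetic mean of the power spectrum."""
    spec = [p + 1e-12 for p in power_spectrum(frame)]  # avoid log(0)
    geo = math.exp(sum(math.log(p) for p in spec) / len(spec))
    arith = sum(spec) / len(spec)
    return geo / arith

def is_howl(frame, threshold=0.1):
    """Flag a frame whose spectrum is strongly tonal; a real system
    would then pull down the gain at the offending frequency."""
    return spectral_flatness(frame) < threshold
```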
A speaker produces acoustic frequencies within a housing that outputs the acoustic frequencies to a port. The produced frequencies travel through a cavity to the port which may have a cavity resonance that amplifies certain frequencies, affecting the frequency sensitivity of the speaker. To mitigate the cavity resonance, the speaker includes a membrane with regions having different breakup frequencies. One region is tuned to break up at a desired bandwidth of the speaker, and another region is tuned to break up at the cavity resonance, mitigating the distortion on frequency response due to the cavity resonance.
H04R 1/28 - Transducer mountings or enclosures designed for specific frequency response; Transducer enclosures modified by provision of mechanical or acoustic impedances, e.g. resonator, damping means
H04R 1/02 - Casings; Cabinets; Mountings therein
87.
OPTICAL MODULATOR AND IMAGE PROJECTOR BASED ON LEAKY-MODE WAVEGUIDE WITH TEMPORAL MULTIPLEXING
A leaky-mode acousto-optical modulator may be used to generate visual images suitable for direct viewing, without image-forming optics. To extend a field of view of the modulator to limits suitable for visual displays, the leaky-mode acousto-optical modulator may be equipped with a switchable beam redirector, e.g. a switchable-angle reflector, providing field of view portions one by one in a time-sequential manner. The field of view portions coalesce into a continuous synthetic field of view suitable for wide-angle visual display applications.
G03B 21/00 - Projectors or projection-type viewers; Accessories therefor
G02F 1/29 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the position or the direction of light beams, i.e. deflection
G02F 1/335 - Acousto-optical deflection devices having an optical waveguide structure
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
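The time-sequential stitching of field-of-view portions in item 87 can be illustrated with simple angular geometry. The helper below is a hypothetical sketch (the function name and degree-based convention are assumptions, not from the disclosure): it computes the center angle the switchable beam redirector would target in each time slot so that the portions tile into one continuous synthetic field of view.

```python
def subfield_centers(total_fov_deg, num_segments):
    """Split a target field of view into contiguous angular segments and
    return the center angle (degrees) for each time-multiplexed slot.
    Segment i covers [-FOV/2 + i*seg, -FOV/2 + (i+1)*seg]."""
    seg = total_fov_deg / num_segments
    return [-total_fov_deg / 2 + seg * (i + 0.5) for i in range(num_segments)]

# Example: a 60-degree synthetic FOV built from three sequential portions.
centers = subfield_centers(60.0, 3)
```

Cycling the redirector through these angles fast enough that the eye integrates the sub-frames yields the coalesced wide-angle display the abstract describes.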
The disclosed system may include a support structure that may be configured to house electronic components. The system may also include a first antenna mounted to the support structure. The first antenna may be configured to provide a wireless intralink on a first frequency to a local mobile electronic device. The system may also include multiple second antennas that are configured to establish wireless interlinks on other frequencies to various external wireless networks. The second antennas may be positioned a specified minimum distance away from the first antenna and may be positioned at an at least partially opposing angle to each other. As such, the second antennas may provide at least a minimum threshold amount of spherical radiation to transmit and receive data using the established wireless interlinks. Various other apparatuses, methods of manufacturing, and mobile electronic devices are also disclosed.
H01Q 5/307 - Arrangements for providing operation on different wavebands - Individual or coupled radiating elements, each element being fed in an unspecified way
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups and - Constructional details or arrangements
H01Q 1/22 - Supports; Mounting means by structural association with other equipment or articles
89.
GEOMETRICAL WAVEGUIDE WITH PARTIAL-COVERAGE BEAM SPLITTERS
A waveguide may include a substrate and an array of beam splitters embedded within the substrate, where each beam splitter within the array of beam splitters does not fully transect the substrate. Various other devices, systems, and methods of manufacture are also disclosed.
G02B 27/10 - Beam-splitting or beam-combining systems
G02B 6/28 - Optical coupling means having data bus means, i.e. plural waveguides interconnected and providing an inherently bidirectional system by mixing and splitting signals
An eye tracking system with in-optical-assembly-plane illumination is described. Side-emitting light emitting diodes (LEDs) aligned with a plane of an optical assembly of a near-eye display device are used to illuminate the eye of a user and generate glints that can be detected by an eye tracking camera. When the optical assembly includes a corrective optical lens or similar element that may distort illumination beams from the LEDs, the distortion is mitigated by using in-package or externally modified LEDs that provide angled beams. In addition to in-package-level mitigations such as reflectors or labels, edge portions of the distorting optical elements may be shaped or complemented with refractive elements to redirect the beams toward the eye.
Methods of assembling a head-mounted display (HMD) may include coupling a first digital projector assembly to an HMD frame, coupling a second digital projector assembly to the HMD frame, and then warping the HMD frame to optically align the first digital projector assembly with the second digital projector assembly. The warped HMD frame may be fixed such that the first digital projector assembly and the second digital projector assembly are optically aligned to within a predetermined threshold. Various other methods and systems are also disclosed.
An optical device includes a first electrode and a medium that includes ferroelectric liquid crystals and chiral dopants. The medium is located adjacent to the first electrode. The optical device may also include a second electrode distinct and separate from the first electrode. The optical device may be used as an optical dimming device, controlling the amount of light passing through the optical device based on a voltage gradient provided to the optical device.
G02F 1/135 - Liquid crystal cells structurally associated with a photoconducting or a ferro-electric layer, the properties of which can be optically or electrically varied
C09K 19/58 - Dopants or charge transfer agents
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour based on liquid crystals, e.g. single liquid crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
93.
CONDUCTIVE ELASTOMERIC FOAM MATERIALS AND METHODS OF USE
Described herein are conductive elastomeric foam materials and methods of making and using the same. The conductive elastomeric foam materials include a polymeric matrix, one or more conductive fillers, and one or more foaming agents. The polymeric matrix can include a thermoset polymer or a thermoplastic polymer. Also described herein are methods of making conductive elastomeric foam materials. Further described herein are molded products including the conductive elastomeric foam materials as described herein and wearable devices including the molded products.
C08J 9/00 - Working-up of macromolecular substances to porous or cellular articles or materials; After-treatment thereof
A61B 5/268 - Detecting, measuring or recording bioelectric or biomagnetic signals of the body or parts thereof - Bioelectric electrodes therefor characterised by the electrode materials containing conductive polymers, e.g. PEDOT:PSS polymers
94.
ARTIFICIAL REALITY SYSTEM HAVING MULTI-BANK, MULTI-PORT DISTRIBUTED SHARED MEMORY
This disclosure describes various examples of a system which uses a multi-bank, multi-port shared memory system that may be implemented as part of a system on a chip. The shared memory system may have particular applicability in the context of an artificial reality system, and may be designed to have distributed or varied latency for one or more memory banks and/or one or more components or subsystems within the system on a chip. The described shared memory system may be logically a single entity, but physically may have multiple memory banks, each accessible by any of a number of components or subsystems. In some examples, the memory system may enable concurrent, common, and/or shared access to memory without requiring, in some situations, full locking or arbitration.
Disclosed herein are systems and methods related to wireless communication. In one aspect, a system includes a first wireless interface configured to communicate at a first frequency band and a second frequency band. In one aspect, the system includes a second wireless interface configured to communicate at a third frequency band and a fourth frequency band. In one aspect, the system includes a switch configured to select either communication at the second frequency band or the third frequency band. In one aspect, the system includes a multi-band filter configured to couple i) the first wireless interface, ii) the second wireless interface, and iii) the switch, to a single antenna.
Disclosed herein are systems and methods related to using restricted target wake time in wireless communication. In one aspect, a first wireless communication device may configure a first field indicating (i) one or more traffic streams that are latency sensitive, and (ii) a direction of each of the one or more traffic streams between the first wireless communication device and a second wireless communication device. Each of the one or more traffic streams is to be communicated during a respective service period of a restricted target wake time (rTWT) schedule. The first wireless communication device may send a message including the first field to the second wireless communication device.
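A field carrying per-stream identity and direction can be sketched as a small bitfield. The layout below is purely illustrative and is not the IEEE 802.11 rTWT element encoding; the function names and the 8-bit-per-direction bitmap are assumptions.

```python
def encode_rtwt_field(tids_uplink, tids_downlink):
    """Pack latency-sensitive traffic stream IDs (TIDs 0-7) and their
    directions into one 16-bit value: low byte = uplink TID bitmap,
    high byte = downlink TID bitmap. Illustrative layout only."""
    up = 0
    for tid in tids_uplink:
        up |= 1 << tid
    down = 0
    for tid in tids_downlink:
        down |= 1 << tid
    return up | (down << 8)

def decode_rtwt_field(field):
    """Recover (uplink TIDs, downlink TIDs) from the packed field."""
    up = [t for t in range(8) if field & (1 << t)]
    down = [t for t in range(8) if (field >> 8) & (1 << t)]
    return up, down
```

The receiving device decodes the field and knows, for each service period in the rTWT schedule, which streams to expect and in which direction they flow.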
A sensor includes a plurality of pixels that each have dedicated compute circuitry within a compute layer. The plurality of pixels includes a first group of pixels and a second group of pixels. The first group of pixels is configured to detect light from a local area that has a first modulation frequency. The second group of pixels is configured to detect light from the local area that has a second modulation frequency. The compute layer is positioned below the plurality of pixels, and includes the compute circuitry for each of the plurality of pixels. The compute layer is configured to determine depth information for the local area using an indirect time-of-flight technique and one or both of the detected light that has the first modulation frequency and the detected light that has the second modulation frequency.
G01S 7/4915 - Time delay measurement, e.g. operational details for pixel components; Phase measurement
G01S 17/32 - Systems determining position data of a target for measuring distance only using transmission of continuous waves, whether amplitude-, frequency-, or phase-modulated, or unmodulated
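The benefit of detecting two modulation frequencies in the sensor above comes from indirect time-of-flight phase unwrapping: each frequency alone only measures depth modulo its ambiguity range c/(2f), but two frequencies together disambiguate over a much longer range. The sketch below illustrates that principle with a brute-force search over wrap counts; it is not the patent's pipeline, and the function names and frequencies are assumptions.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def wrapped_phase(depth, f_mod):
    """Phase (mod 2*pi) a pixel would measure for a round trip to `depth`."""
    return (4 * math.pi * f_mod * depth / C) % (2 * math.pi)

def phase_to_depth(phase, f_mod):
    """Depth implied by a phase, within the ambiguity range c / (2 f)."""
    return C * phase / (4 * math.pi * f_mod)

def resolve_depth(phi1, f1, phi2, f2, max_depth):
    """Dual-frequency unwrapping: try every wrap count at both modulation
    frequencies and keep the pair whose implied depths agree best."""
    r1, r2 = C / (2 * f1), C / (2 * f2)  # per-frequency ambiguity ranges
    best_err, best_depth = float("inf"), None
    for n1 in range(int(max_depth / r1) + 1):
        d1 = phase_to_depth(phi1, f1) + n1 * r1
        for n2 in range(int(max_depth / r2) + 1):
            d2 = phase_to_depth(phi2, f2) + n2 * r2
            if abs(d1 - d2) < best_err:
                best_err, best_depth = abs(d1 - d2), (d1 + d2) / 2
    return best_depth
```

For example, 80 MHz and 60 MHz modulation give individual ambiguity ranges of about 1.87 m and 2.50 m, but combined they resolve depths uniquely out to roughly 7.5 m.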
A wireless communication device may include one or more processors. The one or more processors may transmit one or more first request frames to a wireless communication node and a wireless user device. The one or more first request frames may indicate to the wireless communication node to perform a first transmission to the wireless communication device during a first service period (SP). The one or more first request frames may indicate to the wireless user device to perform a second transmission to the wireless communication device during the first SP. The one or more processors may receive, during the first SP, according to the one or more first request frames, the first transmission from the wireless communication node, and the second transmission from the wireless user device.
A technology for enabling better experiences, e.g., beyond-arm's-length experiences, with fine-tuned interactions in an extended-reality environment can include methods, extended-reality-compatible devices, and/or systems configured to generate, e.g., via an extended-reality-compatible device, a copy of a representation of an object in an extended-reality environment; to initiate control of the copy of the representation of the object according to a schema; and to control the copy of the representation of the object at a first location in the extended-reality environment, e.g., the first location including a location at a beyond-arm's-length distance from a second location in the extended-reality environment, the second location being a location of an avatar or representation of a user and/or a controller of the extended-reality-compatible device.
G06T 19/00 - Manipulating 3D models or images for computer graphics
G06F 3/04815 - Interaction taking place in an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or object
100.
TECHNOLOGY FOR CREATING, REPLICATING AND/OR CONTROLLING AVATARS IN EXTENDED REALITY
A technology for creating, replicating and/or controlling avatars in an extended-reality (ER) environment can include methods, extended-reality-compatible devices, and/or systems configured to generate, e.g., via an ER-compatible device, a copy of a representation of a user, e.g., a primary avatar, and/or an object in an ER environment, to initiate a recording of the graphical representation of the user or object according to a schema; to produce a copy of the recording of the graphical representation of the user as a new graphical representation of the user, e.g., a new avatar, in the ER environment; and to control the new graphical representation of the user at a first location in the ER environment. In examples, the technology enables moving the graphical representation of the user around the ER environment while the new graphical representation of the user performs motion and/or produces sound from the copy of the recording.