A display system may include a wearable display for rendering three-dimensional virtual image content that appears to be located in an environment of a user of the display. The relative positions of the display and one or more eyes of the user may not be in desired positions to receive, or register, image information outputted by the display. For example, the display-to-eye alignment may vary for different users and/or may change over time (e.g., as a user moves or as the display becomes displaced). The wearable device may determine a relative position and/or alignment between the display and the user's eyes by determining whether features of the eye are at certain vertical positions relative to the display. Based on the relative positions, the wearable device may determine if it is properly fitted to the user, and position render camera(s) accordingly to present virtual image content.
Embodiments provide image display systems and methods for camera calibration using a two-sided diffractive optical element (DOE). More specifically, embodiments are directed to determining intrinsic parameters of a camera using a single image obtained using a two-sided DOE. The two-sided DOE has a first pattern on a first surface and a second pattern on a second surface. Each of the first and second patterns may be formed by repeating sub-patterns that align when tiled on each surface. The patterns on the two-sided DOE are formed such that the brightness of the central intensity peak in the image of the pattern formed by the DOE is reduced to a predetermined amount.
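As an illustrative sketch only (not the patented method): if the DOE design supplies known diffraction-order ray directions and the corresponding spot centroids are detected in the single calibration image, pinhole intrinsics can be fit by linear least squares. The function names and the distortion-free pinhole model are assumptions.

```python
import numpy as np

def fit_intrinsics(directions, centroids):
    """Least-squares fit of pinhole intrinsics (fx, fy, cx, cy) from
    known ray directions and their detected spot centroids.

    directions: (N, 3) unit vectors of the DOE diffraction orders
                in the camera frame (known from the DOE design).
    centroids:  (N, 2) detected spot centers in pixels.
    """
    x = directions[:, 0] / directions[:, 2]   # normalized image coords
    y = directions[:, 1] / directions[:, 2]
    ones = np.ones_like(x)
    # u = fx*x + cx and v = fy*y + cy, each solved independently.
    fx, cx = np.linalg.lstsq(np.column_stack([x, ones]),
                             centroids[:, 0], rcond=None)[0]
    fy, cy = np.linalg.lstsq(np.column_stack([y, ones]),
                             centroids[:, 1], rcond=None)[0]
    return fx, fy, cx, cy
```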
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system that images the user's eye and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye. The display system may render virtual image content with a render camera positioned at the determined position of the center of rotation of said eye.
A61B 3/113 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
A61B 3/11 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for measuring interpupillary distance or diameter of pupils
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups …
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, giving the observer of a single two-dimensional [2D] image a perceived impression of depth
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction taking place in an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or the object
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left-right displays
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of vision of the viewer's eyes
5.
MISCALIBRATION DETECTION FOR VIRTUAL REALITY AND AUGMENTED REALITY SYSTEMS
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for performing miscalibration detection. One of the methods includes receiving sensor data from each of multiple sensors of a device in a system configured to provide augmented reality or mixed reality output to a user. Feature values are determined based on the sensor data for a predetermined set of features. The determined feature values are processed using a miscalibration detection model that has been trained, based on examples of captured sensor data from one or more devices, to predict whether a miscalibration condition of one or more of the multiple sensors has occurred. Based on the output of the miscalibration detection model, the system determines whether to initiate recalibration of extrinsic parameters for at least one of the multiple sensors or to bypass recalibration.
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 20/70 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements - Labelling scene content, e.g. deriving syntactic or semantic representations
H04N 13/246 - Image signal generators using stereoscopic image cameras; Calibration of cameras
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left-right displays
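A minimal sketch of the decision flow described in the abstract above, assuming a logistic model with hypothetical pre-trained weights standing in for the trained miscalibration detection model; the feature choices are illustrative only.

```python
import numpy as np

# Hypothetical trained parameters for a logistic miscalibration model.
WEIGHTS = np.array([0.8, -1.2, 2.1])
BIAS = -0.5
THRESHOLD = 0.5

def extract_features(sensor_data):
    """Stand-in feature extraction: summary statistics of a sensor stream
    (at least two samples expected)."""
    return np.array([np.mean(sensor_data), np.std(sensor_data),
                     np.max(np.abs(np.diff(sensor_data)))])

def should_recalibrate(sensor_data):
    """Return True to initiate extrinsic recalibration, False to bypass."""
    f = extract_features(np.asarray(sensor_data, dtype=float))
    p = 1.0 / (1.0 + np.exp(-(WEIGHTS @ f + BIAS)))  # P(miscalibrated)
    return p > THRESHOLD
```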
6.
METHODS, SYSTEMS, AND COMPUTER PROGRAM PRODUCTS FOR ALIGNMENT OF A WEARABLE DEVICE
Methods for aligning an extended-reality (XR) system may present a first target at a closer, first location and a second target at a farther, second location to a user of the XR device, align the first and second targets to each other with an alignment process, and determine a nodal point for an eye of the user based at least in part upon the first and second targets. Aligning an XR system may also spatially register a set of targets in a display portion of a user interface of the XR device comprising an adjustment mechanism used to adjust a relative position of the XR device to a user, trigger execution of a device fit process in response to receiving a device fit check signal, and adjust the relative position of the XR device to the user based on the device fit process.
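One plausible geometric reading of the alignment process, not necessarily the patented one: each near/far alignment defines a sight line through both targets and the eye's nodal point, so repeating the alignment for two or more gaze directions lets the nodal point be estimated as the least-squares point nearest all the sight lines.

```python
import numpy as np

def closest_point_to_lines(points, dirs):
    """Least-squares point nearest a set of 3D lines.

    points: (N, 3) a point on each line (e.g., the near target).
    dirs:   (N, 3) line directions (near target toward far target).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, dirs):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the line
        A += P
        b += P @ p
    return np.linalg.solve(A, b)         # needs >= 2 non-parallel lines
```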
Example techniques are disclosed for increasing the sensitivity of an augmented or virtual reality display system to collecting eye-tracking data for detecting physiological conditions, such as neural processes. An example method includes accessing eye-tracking information associated with a control population and an experimental population, the eye-tracking information reflecting, for each user of the control population and the experimental population, eye-tracking metrics associated with the user; scaling the eye-tracking information based on the eye-tracking information associated with the control population; and determining a sensitivity measure reflecting a distance measure between the control population and experimental population. The sensitivity measure may be utilized to modify physical or operational parameters for the display system and/or the protocol for performing a test.
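A compact sketch of the scaling and distance steps described above, assuming z-score scaling against the control population's statistics and a standardized mean difference (a Cohen's d-like quantity) as the distance measure; the abstract does not fix the exact metric.

```python
import numpy as np

def sensitivity(control, experimental):
    """Scale metrics by control-population statistics, then return a
    standardized distance between the two populations per metric.

    control, experimental: (N, M) arrays of M eye-tracking metrics.
    """
    mu = control.mean(axis=0)
    sigma = control.std(axis=0, ddof=1)
    zc = (control - mu) / sigma           # control becomes ~N(0, 1)
    ze = (experimental - mu) / sigma
    pooled = np.sqrt((zc.var(axis=0, ddof=1) + ze.var(axis=0, ddof=1)) / 2)
    return np.abs(ze.mean(axis=0) - zc.mean(axis=0)) / pooled
```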
Disclosed herein are systems and methods for presenting an audio signal associated with presentation of a virtual object colliding with a surface. The virtual object and the surface may be associated with a mixed reality environment. Generation of the audio signal may be based on at least one of an audio stream from a microphone and a video stream from a sensor. In some embodiments, the collision between the virtual object and the surface is associated with a footstep on the surface.
G06V 20/40 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in video content
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of the groups …, specially adapted for particular use, for comparison or discrimination, for processing of video signals
Systems include three optical elements arranged along an optical axis, each having a different cylinder axis and a variable cylinder refractive power. Collectively, the three elements form a compound optical element having an overall spherical refractive power (SPH), cylinder refractive power (CYL), and cylinder axis (Axis) that can be varied according to a prescription (Rx).
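The combination of three variable cylinders can be worked through with power vectors (M, J0, J45), under which thin-lens cylinder powers simply add; this is standard ophthalmic algebra, shown here as a sketch with illustrative element values rather than the patent's own design.

```python
import numpy as np

def rx_to_power_vector(S, C, axis_deg):
    """Spherocylinder (SPH, CYL, Axis) -> power vector (M, J0, J45)."""
    t = np.radians(axis_deg)
    return np.array([S + C / 2.0,
                     -(C / 2.0) * np.cos(2 * t),
                     -(C / 2.0) * np.sin(2 * t)])

def power_vector_to_rx(v):
    """Power vector (M, J0, J45) -> (SPH, CYL, Axis in degrees)."""
    M, J0, J45 = v
    C = -2.0 * np.hypot(J0, J45)
    S = M - C / 2.0
    axis = np.degrees(np.arctan2(J45, J0) / 2.0) % 180.0
    return S, C, axis

# Three pure cylinders at fixed axes (thin-lens approximation: powers add).
elements = [(0.0, 1.25, 0), (0.0, -0.75, 60), (0.0, 0.50, 120)]
total = sum(rx_to_power_vector(*e) for e in elements)
print(power_vector_to_rx(total))   # resulting overall SPH / CYL / Axis
```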
A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive, or register, image information outputted by the HMD. For example, the HMD-to-eye alignment may vary for different users and may change over time (e.g., as a user moves around and/or the HMD slips or is otherwise displaced). The wearable device may determine a relative position or alignment between the HMD and the user's eyes. Based on the relative positions, the wearable device may determine if it is properly fitted to the user, may provide feedback on the quality of the fit to the user, and may take actions to reduce or minimize effects of any misalignment.
G02B 27/00 - Optical systems or apparatus not provided for by any of the groups …
A61B 3/11 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for measuring interpupillary distance or diameter of pupils
A61B 3/113 - Apparatus for optical examination of the eyes; Apparatus for clinical examination of the eyes of the objective-measurement type, i.e. instruments for examining the eyes independently of the patient's perceptions or reactions, for determining or recording eye movement
G02B 30/00 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects
G02B 30/40 - Optical systems or apparatus for producing three-dimensional [3D] effects, e.g. stereoscopic effects, giving the observer of a single two-dimensional [2D] image a perceived impression of depth
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups … and … - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor, with detection of the orientation or free movement of the device in a three-dimensional [3D] space, e.g. 3D mice, six-degrees-of-freedom [6-DOF] pointing devices using gyroscopes, accelerometers or tilt sensors
G06F 3/04815 - Interaction taking place in an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or the object
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left-right displays
H04N 13/383 - Tracking of viewers for gaze tracking, i.e. with detection of the axis of vision of the viewer's eyes
A method for placing content in an augmented reality system. A notification is received regarding availability of new content to display in the augmented reality system. A confirmation is received that indicates acceptance of the new content. Three-dimensional information that describes the physical environment is provided to an external computing device to enable the external computing device to be used for selecting an assigned location in the physical environment for the new content. Location information is received from the external computing device that indicates the assigned location. Based on the location information, a display location is determined on a display system of the augmented reality system at which to display the new content so that the new content appears to the user as an overlay at the assigned location in the physical environment. The new content is displayed on the display system at the display location.
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/20 - Manipulating three-dimensional [3D] models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
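A hedged sketch of the final step of the method above: mapping the assigned world-space location into display coordinates with a pinhole-style projection. The pose and parameter names are assumptions for illustration, not the patent's API.

```python
import numpy as np

def world_to_display(p_world, T_world_from_display, fx, fy, cx, cy):
    """Project an assigned 3D location into display coordinates.

    T_world_from_display: 4x4 pose of the display in the world frame.
    fx, fy, cx, cy: pinhole-style parameters of the display projection.
    """
    T_display_from_world = np.linalg.inv(T_world_from_display)
    p = T_display_from_world @ np.append(p_world, 1.0)
    x, y, z = p[:3]
    if z <= 0:
        return None                      # behind the viewer; don't render
    return (fx * x / z + cx, fy * y / z + cy)
```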
Systems and methods for matching content elements to surfaces in a spatially organized 3D environment. The method includes receiving content, identifying one or more elements in the content, determining one or more surfaces, matching the one or more elements to the one or more surfaces, and displaying the one or more elements as virtual content onto the one or more surfaces.
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04815 - Interaction taking place in an environment based on metaphors or objects with a three-dimensional display, e.g. changing the user's viewpoint with respect to the environment or the object
G06T 19/20 - Manipulating three-dimensional [3D] models or images for computer graphics - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
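The matching step of the element-to-surface method above can be sketched as a greedy assignment; scoring by surface area alone is an illustrative simplification and all field names are hypothetical.

```python
def match_elements_to_surfaces(elements, surfaces):
    """Greedy matching of content elements to environment surfaces.

    elements: dicts with 'name' and required 'area' (m^2).
    surfaces: dicts with 'id' and available 'area' (m^2).
    """
    assignments = {}
    free = list(surfaces)
    for elem in sorted(elements, key=lambda e: e["area"], reverse=True):
        fitting = [s for s in free if s["area"] >= elem["area"]]
        if not fitting:
            continue                       # element left unplaced
        best = min(fitting, key=lambda s: s["area"] - elem["area"])
        assignments[elem["name"]] = best["id"]
        free.remove(best)                  # one element per surface
    return assignments

print(match_elements_to_surfaces(
    [{"name": "video", "area": 0.5}, {"name": "text", "area": 0.2}],
    [{"id": "wall", "area": 2.0}, {"id": "table", "area": 0.3}]))
# {'video': 'wall', 'text': 'table'}
```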
14.
SECURE EXCHANGE OF CRYPTOGRAPHICALLY SIGNED RECORDS
Systems and methods for securely exchanging cryptographically signed records are disclosed. In one aspect, after receiving a content request, a sender device can send a record to a receiver device (e.g., an agent device) making the request. The record can be sent via a short range link in a decentralized (e.g., peer-to-peer) manner while the devices may not be in communication with a centralized processing platform. The record can comprise a sender signature created using the sender device's private key. The receiver device can verify the authenticity of the sender signature using the sender device's public key. After adding a cryptography-based receiver signature, the receiver device can redeem the record with the platform. Upon successful verification of the record, the platform can perform as instructed by a content of the record (e.g., modifying or updating a user account).
H04L 9/32 - Arrangements for secret or secure communications; Network security protocols, including means for verifying the identity or authority of a user of the system
H04L 9/14 - Arrangements for secret or secure communications; Network security protocols, using a plurality of keys or algorithms
H04L 9/30 - Public key, i.e. the encryption algorithm being computationally infeasible to invert and the users' encryption keys not requiring secrecy
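The sign/verify/countersign flow described above can be illustrated with Ed25519 from the Python cryptography package. The record format and the countersigning convention (signing the record concatenated with the sender signature) are assumptions for the sketch.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

record = b'{"action": "update_account", "amount": 10}'

# Sender signs the record with its private key.
sender_key = Ed25519PrivateKey.generate()
sender_signature = sender_key.sign(record)

# Receiver verifies with the sender's public key, then countersigns.
sender_public = sender_key.public_key()
try:
    sender_public.verify(sender_signature, record)
except InvalidSignature:
    raise SystemExit("record rejected: bad sender signature")

receiver_key = Ed25519PrivateKey.generate()
receiver_signature = receiver_key.sign(record + sender_signature)
# The doubly signed record can later be redeemed with the platform.
```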
A method of processing an acoustic signal is disclosed. According to one or more embodiments, a first acoustic signal is received via a first microphone. The first acoustic signal is associated with a first speech of a user of a wearable headgear unit. A first sensor input is received via a sensor, and a control parameter is determined based on the first sensor input. The control parameter is applied to one or more of the first acoustic signal, the wearable headgear unit, and the first microphone. Determining the control parameter comprises determining, based on the first sensor input, a relationship between the first speech and the first acoustic signal.
A method of fabricating an optical element includes providing a substrate, forming a castable material coupled to the substrate, and casting the castable material using a mold. The method also includes curing the castable material and removing the mold. The optical element comprises a planar region and a clear aperture adjacent the planar region and characterized by an optical power.
Examples of the disclosure describe systems and methods for generating and displaying a virtual companion. In an example method, a first input from an environment of a user is received at a first time via a first sensor. An occurrence of an event in the environment is determined based on the first input. A second input from the user is received via a second sensor, and an emotional reaction of the user is identified based on the second input. An association is determined between the emotional reaction and the event. A view of the environment is presented at a second time later than the first time via a display. A stimulus is presented at the second time via a virtual companion displayed via the display, wherein the stimulus is determined based on the determined association between the emotional reaction and the event.
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups … and … - Constructional details or arrangements
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 21/53 - Monitoring users, programs or devices to maintain the integrity of platforms, e.g. of processors, firmware or operating systems, during program execution, e.g. stack integrity, buffer overflow or preventing unwanted data erasure, by executing in a restricted environment, e.g. sandbox or secure virtual machine
Methods, systems, and apparatus for performing bundle adjustment using epipolar constraints. A method includes receiving image data from a headset for a particular pose. The image data includes a first image from a first camera of the headset and a second image from a second camera of the headset. The method includes identifying at least one key point in a three-dimensional model of an environment at least partly represented in the first image and the second image and performing bundle adjustment. Bundle adjustment is performed by jointly optimizing a reprojection error for the at least one key point and an epipolar error for the at least one key point. Results of the bundle adjustment are used to perform at least one of (i) updating the three-dimensional model, (ii) determining a position of the headset at the particular pose, or (iii) determining extrinsic parameters of the first camera and second camera.
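A sketch of the joint cost suggested by the abstract: residuals stacking the reprojection error in both cameras with an algebraic epipolar residual x2ᵀEx1, where E = [t]×R. The weighting and parameterization are illustrative; such residuals could be fed to a solver like scipy.optimize.least_squares.

```python
import numpy as np

def skew(t):
    """Cross-product matrix [t]x of a 3-vector t."""
    return np.array([[0, -t[2], t[1]],
                     [t[2], 0, -t[0]],
                     [-t[1], t[0], 0]])

def joint_residuals(X, R, t, x1, x2, w_epi=1.0):
    """Residuals jointly penalizing reprojection and epipolar error.

    X:      (3,) key point in the first camera's frame.
    R, t:   rotation/translation of camera 2 relative to camera 1.
    x1, x2: (2,) normalized image observations in cameras 1 and 2.
    """
    r1 = X[:2] / X[2] - x1                # reprojection in camera 1
    X2 = R @ X + t
    r2 = X2[:2] / X2[2] - x2              # reprojection in camera 2
    E = skew(t) @ R                       # essential matrix
    x1h = np.append(x1, 1.0)
    x2h = np.append(x2, 1.0)
    r_epi = w_epi * (x2h @ E @ x1h)       # algebraic epipolar residual
    return np.concatenate([r1, r2, [r_epi]])
```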
A method of presenting a signal to a speech processing engine is disclosed. According to an example of the method, an audio signal is received via a microphone. A portion of the audio signal is identified, and a probability is determined that the portion comprises speech directed by a user of the speech processing engine as input to the speech processing engine. In accordance with a determination that the probability exceeds a threshold, the portion of the audio signal is presented as input to the speech processing engine. In accordance with a determination that the probability does not exceed the threshold, the portion of the audio signal is not presented as input to the speech processing engine.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G10L 15/14 - Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
G10L 15/25 - Speech recognition using non-acoustical features, using the position of the lips, movement of the lips or face analysis
G10L 15/30 - Distributed recognition, e.g. in client-server systems, for mobile telephony applications or networks
21.
INTERAURAL TIME DIFFERENCE CROSSFADER FOR BINAURAL AUDIO RENDERING
Examples of the disclosure describe systems and methods for presenting an audio signal to a user of a wearable head device. In an example, a received first input audio signal is processed to generate a left output audio signal and a right output audio signal presented to ears of the user. Processing the first input audio signal comprises applying a delay process to the first input audio signal to generate a left audio signal and a right audio signal; adjusting gains of the left audio signal and the right audio signal; applying head-related transfer functions (HRTFs) to the left and right audio signals to generate the left and right output audio signals. Applying the delay process to the first input audio signal comprises applying an interaural time delay (ITD) to the first input audio signal, the ITD determined based on the source location.
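A minimal sketch of the delay stage described above, assuming the Woodworth approximation for the ITD and an integer-sample delay; a real renderer would use fractional delays and the HRTF filtering stage, which are omitted here. The constants are assumed typical values.

```python
import numpy as np

SAMPLE_RATE = 48000
HEAD_RADIUS = 0.0875     # meters, assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s

def render_with_itd(mono, azimuth_rad, gain_l=1.0, gain_r=1.0):
    """Split a mono signal into left/right with a source-dependent ITD.

    Uses the Woodworth approximation ITD = (a/c) * (|az| + sin|az|),
    applied here as an integer-sample delay on the far ear.
    """
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (
        abs(azimuth_rad) + np.sin(abs(azimuth_rad)))
    lag = int(round(itd * SAMPLE_RATE))
    delayed = np.concatenate([np.zeros(lag), mono])[: len(mono)]
    if azimuth_rad >= 0:                  # source on the right: delay left
        left, right = delayed, mono
    else:
        left, right = mono, delayed
    return gain_l * left, gain_r * right  # HRTF filtering would follow
```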
A cross reality system enables any of multiple devices to efficiently access previously stored maps. Both stored maps and tracking maps used by portable devices may have any of multiple types of location metadata associated with them. The location metadata may be used to select a set of candidate maps for operations, such as localization or map merge, that involve finding a match between a location defined by location information from a portable device and any of a number of previously stored maps. The types of location metadata may be prioritized for use in selecting the subset. To aid in selection of candidate maps, a universe of stored maps may be indexed based on geo-location information. A cross reality platform may update that index as it interacts with devices that supply geo-location information in connection with location information, and may propagate that geo-location information to devices that do not supply it.
G06F 16/29 - Geographical information databases
G06F 16/907 - Retrieval characterised by the use of metadata, e.g. metadata not derived from the content or metadata generated manually
Disclosed are systems and methods for mixed reality collaboration. A method may include receiving persistent coordinate data; presenting a first virtual session handle to a first user at a first position via a transmissive display of a wearable device, wherein the first position is based on the persistent coordinate data; presenting a virtual object to the first user at a second position via the transmissive display, wherein the second position is based on the first position; receiving location data from a second user, wherein the location data relates a position of the second user to a position of a second virtual session handle; presenting a virtual avatar to the first user at a third position via the transmissive display, wherein the virtual avatar corresponds to the second user, wherein the third position is based on the location data, and wherein the third position is further based on the first position.
An example head-mounted display device includes a light projector and an eyepiece. The eyepiece is arranged to receive light from the light projector and direct the light to a user during use of the device. The eyepiece includes a waveguide having an edge positioned to receive light from the light projector and couple the light into the waveguide. The waveguide includes a first surface and a second surface opposite the first surface. The waveguide includes several different regions, each having different grating structures configured to diffract light according to different sets of grating vectors.
Augmented reality and virtual reality display systems and devices are configured for efficient use of projected light. In some aspects, a display system includes a light projection system and a head-mounted display configured to project light into an eye of the user to display virtual image content. The head-mounted display includes at least one waveguide comprising a plurality of in-coupling regions each configured to receive, from the light projection system, light corresponding to a portion of the user's field of view and to in-couple the light into the waveguide; and a plurality of out-coupling regions configured to out-couple the light out of the waveguide to display the virtual content, wherein each of the out-coupling regions is configured to receive light from different ones of the in-coupling regions. In some implementations, each in-coupling region has a one-to-one correspondence with a unique corresponding out-coupling region.
An eye tracking system can include a first camera configured to capture a first plurality of visual data of a right eye at a first sampling rate. The system can include a second camera configured to capture a second plurality of visual data of a left eye at a second sampling rate. The second plurality of visual data can be captured during different sampling times than the first plurality of visual data. The system can estimate, based on at least some of the first and second pluralities of visual data, visual data of the right or left eye at a sampling time during which that eye's visual data are not being captured. Eye movements of the eye can then be determined based on at least some of the estimated visual data and at least some visual data of the first or second plurality of visual data.
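The cross-eye estimation step can be as simple as interpolating one eye's samples at the other eye's interleaved sampling times; linear interpolation here is an assumption, as the abstract leaves the estimator open.

```python
import numpy as np

def estimate_between_samples(t_left, x_left, t_right):
    """Estimate one eye's data at the other eye's sampling times.

    t_left, x_left: timestamps and measurements (e.g., gaze x) of one eye.
    t_right:        the other eye's (interleaved) sampling times.
    """
    return np.interp(t_right, t_left, x_left)

# Example: cameras staggered by half a frame period.
t_l = np.arange(0, 10, 1.0)        # left eye sampled at t = 0, 1, 2, ...
t_r = t_l + 0.5                    # right eye sampled between them
x_l = np.sin(t_l)
print(estimate_between_samples(t_l, x_l, t_r))
```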
The present invention provides an apparatus (3) comprising a first out-coupling diffractive optical element (10) and a second out-coupling diffractive optical element (20). Each of the first and second out-coupling diffractive optical elements comprises a first region (12a, 22a) having a first repeated diffraction spacing, d1, and a second region (12b, 22b) adjacent to the first region having a second repeated diffraction spacing, d2, different from the first spacing, d1. The first region (12a) of the first out-coupling diffractive optical element (10) is superposed on and aligned with the second region (22b) of the second out-coupling diffractive optical element (20). The second region (12b) of the first out-coupling diffractive optical element (10) is superposed on and aligned with the first region (22a) of the second out-coupling diffractive optical element (20).
A display system can include a head-mounted display configured to project light to an eye of a user to display virtual image content at different amounts of divergence and collimation. The display system can include an inward-facing imaging system, possibly comprising a plurality of cameras, that images the user's eye and glints formed thereon, and processing electronics that are in communication with the inward-facing imaging system and that are configured to obtain an estimate of a center of rotation of the user's eye using cornea data derived from the glint images. The display system may render virtual image content with a render camera positioned at the determined position of the center of rotation of said eye.
A thin transparent layer can be integrated in a head mounted display device and disposed in front of the eye of a wearer. The thin transparent layer may be configured to output light such that light is directed onto the eye to create reflections therefrom that can be used, for example, for glint-based tracking. The thin transparent layer can be configured to reduce obstructions in the field of view of the user.
A method for measuring performance of a head-mounted display module, the method including arranging the head-mounted display module relative to a plenoptic camera assembly so that an exit pupil of the head-mounted display module coincides with a pupil of the plenoptic camera assembly; emitting light from the head-mounted display module while the head-mounted display module is arranged relative to the plenoptic camera assembly; filtering the light at the exit pupil of the head-mounted display module; acquiring, with the plenoptic camera assembly, one or more light field images projected from the head-mounted display module with the filtered light; and determining information about the performance of the head-mounted display module based on the acquired light field images.
A plurality of waveguide display substrates, each waveguide display substrate having a cylindrical portion having a diameter and a planar surface, a curved portion opposite the planar surface defining a nonlinear change in thickness across the substrate and having a maximum height D with respect to the cylindrical portion, and a wedge portion between the cylindrical portion and the curved portion defining a linear change in thickness across the substrate and having a maximum height W with respect to the cylindrical portion. A target maximum height Dt of the curved portion is 10⁻⁷ to 10⁻⁶ times the diameter, D is between about 70% and about 130% of Dt, and W is less than about 30% of Dt.
G02B 6/13 - Integrated optical circuits characterised by the manufacturing method
F21V 8/00 - Use of light guides, e.g. fibre-optic devices, in lighting devices or systems
G02B 6/122 - Basic optical elements, e.g. light-guiding paths
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements, for controlling the direction of light
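The dimensional constraints in the abstract above reduce to simple arithmetic; the sketch below checks a substrate against them, taking the midpoint of the stated Dt range as the target, which is an assumption.

```python
def substrate_within_spec(diameter, D, W):
    """Check the abstract's geometric constraints for one substrate.

    Dt (target curved-portion height) is 1e-7 to 1e-6 times the diameter;
    D must fall within 70%-130% of Dt and W below 30% of Dt. The
    midpoint of the Dt range is used as the target here (an assumption).
    """
    Dt = 5e-7 * diameter               # midpoint of 1e-7..1e-6 * diameter
    return 0.7 * Dt <= D <= 1.3 * Dt and W < 0.3 * Dt

print(substrate_within_spec(diameter=0.15, D=8e-8, W=1e-8))  # True
```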
32.
FRAME-BY-FRAME RENDERING FOR AUGMENTED OR VIRTUAL REALITY SYSTEMS
One embodiment is directed to a user display device comprising a housing frame mountable on the head of the user, a lens mountable on the housing frame, and a projection subsystem coupled to the housing frame to determine a location of appearance of a display object in a field of view of the user based at least in part on at least one of a detection of a head movement of the user and a prediction of a head movement of the user, and to project the display object to the user based on the determined location of appearance of the display object.
G06T 7/70 - Determining position or orientation of objects or cameras
G09G 3/00 - Control arrangements or circuits, of interest only to the display, using visual indicators other than cathode-ray tubes
A dynamically actuable lens includes a substrate having a surface and a metasurface diffractive optical element (DOE) formed on the surface. The metasurface DOE includes a plurality of raised portions and defines a plurality of recesses between adjacent raised portions. The dynamically actuable lens also includes a movable cover overlying the metasurface DOE and comprising a hydrophilic material, a quantity of a fluid disposed on the movable cover, and a drive mechanism coupled to the movable cover. The drive mechanism is configured to move the movable cover toward the metasurface DOE to displace a portion of the quantity of the fluid into the plurality of recesses, thereby rendering the metasurface DOE in an “off” state, and move the movable cover away from the metasurface DOE, causing the portion of the quantity of the fluid to retract from the plurality of recesses, thereby rendering the metasurface DOE in an “on” state.
G02B 26/00 - Optical devices or arrangements for the control of light using movable or deformable optical elements
G02B 26/08 - Optical devices or arrangements for the control of light using movable or deformable optical elements, for controlling the direction of light
Wearable spectroscopy systems and methods for identifying one or more characteristics of a target object are described. Spectroscopy systems may include a light source configured to emit light in an irradiated field of view and an electromagnetic radiation detector configured to receive reflected light from a target object irradiated by the light source. One or more processors of the systems may identify a characteristic of the target object based on a determined level of light absorption by the target object. Some systems and methods may include one or more corrections for scattered and/or ambient light such as applying an ambient light correction, passing the reflected light through an anti-scatter grid, or using a time-dependent variation in the emitted light.
G01J 3/02 - Spectrometry; Spectrophotometry; Monochromators; Measuring colours - Details
F21V 8/00 - Use of light guides, e.g. fibre-optic devices, in lighting devices or systems
G01J 3/42 - Absorption spectrometry; Double-beam spectrometry; Flicker spectrometry; Reflection spectrometry
G01N 21/25 - Colour; Spectral properties, i.e. comparison of the effect of the material on the light at several different wavelengths or wavelength bands
G01N 21/27 - Colour; Spectral properties, i.e. comparison of the effect of the material on the light at several different wavelengths or wavelength bands, using photo-electric detection
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09G 5/37 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators, characterised by the display of individual graphic patterns using a bit-mapped memory - Details of the processing of graphic patterns
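The absorption-based identification in the entry above can be illustrated with a Beer-Lambert-style absorbance that includes the ambient-light correction the abstract mentions; the patent's exact correction may differ, so treat this as a sketch.

```python
import numpy as np

def absorbance(reflected, reference, ambient=0.0):
    """Beer-Lambert-style absorbance with a simple ambient correction.

    reflected: detector reading off the irradiated target.
    reference: reading off a calibration target of known reflectance.
    ambient:   detector reading with the light source off.
    """
    return -np.log10((reflected - ambient) / (reference - ambient))

# Example: target returns 40% of the reference signal after ambient removal.
print(absorbance(reflected=0.45, reference=1.05, ambient=0.05))  # ~0.398
```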
35.
EYEPIECE IMAGING ASSEMBLIES FOR A HEAD MOUNTED DISPLAY
A head mounted display can include a frame, an eyepiece, an image injection device, a sensor array, a reflector, and an off-axis optical element. The frame can be configured to be supported on the head of the user. The eyepiece can be coupled to the frame and configured to be disposed in front of an eye of the user. The eyepiece can include a plurality of layers. The image injection device can be configured to provide image content to the eyepiece for viewing by the user. The sensor array can be integrated in or on the eyepiece. The reflector can be disposed in or on the eyepiece and configured to reflect light received from an object for imaging by the sensor array. The off-axis optical element can be disposed in or on the eyepiece. The off-axis optical element can be configured to receive light reflected from the reflector and direct at least a portion of the light toward the sensor array.
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left-right displays
Augmented reality systems and methods for creating, saving and rendering designs comprising multiple items of virtual content in a three-dimensional (3D) environment of a user. The designs may be saved as a scene, which is built by a user from pre-built sub-components, built components, and/or previously saved scenes. Location information, expressed as a saved scene anchor and position relative to the saved scene anchor for each item of virtual content, may also be saved. Upon opening the scene, the saved scene anchor node may be correlated to a location within the mixed reality environment of the user for whom the scene is opened. The virtual items of the scene may be positioned with the same relationship to that location as they have to the saved scene anchor node. That location may be selected automatically and/or by user input.
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using neural networks
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
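Re-anchoring a saved scene, as described above, is a rigid-transform composition: each item's saved pose relative to the anchor is composed with the anchor's newly chosen world pose. A sketch with hypothetical names:

```python
import numpy as np

def place_scene(anchor_pose_world, relative_poses):
    """Position saved scene items relative to a newly chosen anchor.

    anchor_pose_world: 4x4 pose of the saved-scene anchor in the user's
                       current environment.
    relative_poses:    {item_name: 4x4 pose relative to the anchor},
                       as stored when the scene was saved.
    """
    return {name: anchor_pose_world @ rel
            for name, rel in relative_poses.items()}

identity = np.eye(4)
chair = np.eye(4)
chair[:3, 3] = [1.0, 0.0, -2.0]        # 1 m right, 2 m ahead of the anchor
print(place_scene(identity, {"chair": chair})["chair"][:3, 3])
```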
A display system aligns the location of its exit pupil with the location of a viewer's pupil by changing the location of the portion of a light source that outputs light. The light source may include an array of pixels that output light, thereby allowing an image to be displayed on the light source. The display system includes a camera that captures images of the eye and negatives of the images are displayed by the light source. In the negative image, the dark pupil of the eye is a bright spot which, when displayed by the light source, defines the exit pupil of the display system. The location of the pupil of the eye may be tracked by capturing the images of the eye, and the location of the exit pupil of the display system may be adjusted by displaying negatives of the captured images using the light source.
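The negative-image mechanism described above is a one-line operation for 8-bit frames; displaying this negative makes the dark pupil the bright region that defines the exit pupil. The function name is illustrative.

```python
import numpy as np

def exit_pupil_pattern(eye_image):
    """Negative of the eye camera frame: the dark pupil becomes the
    bright region that the pixelated light source displays, defining
    the exit pupil location. Assumes an 8-bit grayscale frame."""
    return 255 - np.asarray(eye_image, dtype=np.uint8)
```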
Techniques are described for addressing deformations in a virtual or augmented reality headset. In some implementations, cameras in a headset can obtain image data at different times as the headset moves through a series of poses. One or more miscalibration conditions that have occurred as the headset moved through the series of poses can be detected. The series of poses can be divided into groups of poses based on the one or more miscalibration conditions, and bundle adjustment can be performed using a separate set of camera calibration data for each group of poses. The bundle adjustment for the poses in each group is performed using the same set of calibration data for the group. The camera calibration data for each group is estimated jointly with the bundle adjustment estimation for the poses in the group.
A cross reality system enables any of multiple devices to efficiently and accurately access previously persisted maps of very large scale environments and render virtual content specified in relation to those maps. The cross reality system may build a persisted map, which may be in canonical form, by merging tracking maps from the multiple devices. A map merge process determines mergibility of a tracking map with a canonical map and merges the tracking map with the canonical map in accordance with mergibility criteria, such as when the gravity direction of the tracking map aligns with the gravity direction of the canonical map. Refraining from merging maps when the orientation of the tracking map with respect to gravity is not preserved avoids distortions in persisted maps and enables the multiple devices that use the maps to determine their locations to present more realistic and immersive experiences for their users.
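The gravity-based mergibility criterion can be sketched as an angle test between the two maps' gravity directions; the tolerance is an assumed value for illustration.

```python
import numpy as np

def mergible(gravity_tracking, gravity_canonical, max_angle_deg=2.0):
    """Merge only if the two maps' gravity directions agree.

    max_angle_deg is an assumed tolerance, not a value from the patent.
    """
    a = gravity_tracking / np.linalg.norm(gravity_tracking)
    b = gravity_canonical / np.linalg.norm(gravity_canonical)
    angle = np.degrees(np.arccos(np.clip(a @ b, -1.0, 1.0)))
    return angle <= max_angle_deg
```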
Embodiments of this disclosure provide systems and methods for displays. In embodiments, a display system includes a light source configured to emit a first light, a lens configured to receive the first light, and an image generator configured to receive the first light and emit a second light. The display system further includes a plurality of waveguides, where at least two of the plurality of waveguides include an in-coupling grating configured to selectively couple the second light. In some embodiments, the light source can comprise a single-pupil light source having a reflector and a micro-LED array disposed in the reflector.
Disclosed herein are systems and methods for mapping environment information. In some embodiments, the systems and methods are configured for mapping information in a mixed reality environment. In some embodiments, the system is configured to perform a method including scanning an environment including capturing, with a sensor, a plurality of points of the environment; tracking a plane of the environment; updating observations associated with the environment by inserting a keyframe into the observations; determining whether the plane is coplanar with a second plane of the environment; in accordance with a determination that the plane is coplanar with the second plane, performing planar bundle adjustment on the observations associated with the environment; and in accordance with a determination that the plane is not coplanar with the second plane, performing planar bundle adjustment on a portion of the observations associated with the environment.
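The coplanarity determination above can be sketched as a test on plane normals and offsets (planes in n·x = d form); the tolerances are illustrative assumptions.

```python
import numpy as np

def coplanar(n1, d1, n2, d2, angle_tol_deg=1.0, dist_tol=0.01):
    """Test whether two planes n.x = d are effectively the same plane.

    Tolerances are assumed values. Normals may point in opposite
    directions, so the second plane is flipped if needed.
    """
    n1 = n1 / np.linalg.norm(n1)
    n2 = n2 / np.linalg.norm(n2)
    if n1 @ n2 < 0:
        n2, d2 = -n2, -d2
    angle = np.degrees(np.arccos(np.clip(n1 @ n2, -1.0, 1.0)))
    return angle <= angle_tol_deg and abs(d1 - d2) <= dist_tol
```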
A sensory eyewear system for a mixed reality device can facilitate a user's interactions with other people or with the environment. As one example, the sensory eyewear system can recognize and interpret a sign language, and present the translated information to a user of the mixed reality device. The wearable system can also recognize text in the user's environment, modify the text (e.g., by changing the content or display characteristics of the text), and render the modified text to occlude the original text.
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 1/16 - ELECTRIC DIGITAL DATA PROCESSING - Details not covered by groups … and … - Constructional details or arrangements
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 40/58 - Use of machine translation, e.g. for multilingual retrieval, for server-side translation for client devices or for real-time translation
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A method for refining poses includes receiving a plurality of poses and computing a relative pose set by determining a first set of relative poses for a first subset of image frame pairs whose temporal separation between image frames is less than a threshold, and determining a second set of relative poses for a second subset of image frame pairs whose temporal separation between image frames is greater than the threshold.
H04N 23/10 - Cameras or camera modules comprising electronic image sensors; Control thereof, for generating image signals from different wavelengths
G06T 7/33 - Determining transform parameters for the alignment of images, i.e. image registration, using feature-based methods
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
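The threshold-based split of frame pairs described above is straightforward to sketch; the names and example values are hypothetical.

```python
def split_pairs_by_time(frame_times, pairs, threshold):
    """Partition image-frame pairs by temporal separation.

    frame_times: {frame_id: timestamp in seconds}.
    pairs:       iterable of (frame_id_a, frame_id_b).
    Returns (short_separation_pairs, long_separation_pairs).
    """
    short, long_ = [], []
    for a, b in pairs:
        dt = abs(frame_times[a] - frame_times[b])
        (short if dt < threshold else long_).append((a, b))
    return short, long_

times = {0: 0.0, 1: 0.1, 2: 5.0}
print(split_pairs_by_time(times, [(0, 1), (0, 2), (1, 2)], threshold=1.0))
# ([(0, 1)], [(0, 2), (1, 2)])
```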
46.
TUNABLE ATTENUATION OF LIGHT TRANSMISSION ARTIFACTS IN WEARABLE DISPLAYS
A method for displaying an image using a wearable display system including directing display light from a display towards a user through an eyepiece to project images in the user's field of view, determining a relative location between an ambient light source and the eyepiece, and adjusting an attenuation of ambient light from the ambient light source through the eyepiece depending on the relative location between the ambient light source and the eyepiece.
G02F 1/137 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour, based on liquid crystals, e.g. single liquid-crystal display cells, characterised by the electro-optical or magneto-optical effect, e.g. field-induced phase transition, orientation effect, guest-host interaction or dynamic scattering
G02F 1/13363 - Structural association of cells with optical devices, e.g. polarisers or reflectors - Birefringent elements, e.g. for optical compensation
G02F 1/1335 - Structural association of cells with optical devices, e.g. polarisers or reflectors
G02F 1/01 - Devices or arrangements for the control of the intensity, colour, phase, polarisation or direction of light arriving from an independent light source, e.g. switching, gating or modulating; Non-linear optics for the control of the intensity, phase, polarisation or colour
G02F 1/1347 - Arrangement of liquid-crystal layers or cells in which a light beam is modified by the addition of the effects of several layers or cells
Described herein are techniques and technologies to identify encrypted content within a field of view of a user of a VR/AR system and process the encrypted content appropriately. The user of the VR/AR technology may have protected content in their field of view. Encrypted content is mapped to one or more protected surfaces on a display device. Content mapped to a protected surface may be rendered on the display device but prevented from being replicated from the display device.
A display system can include a head-mounted display configured to project light to an eye of a user to display augmented reality image content to the user. The display system can include one or more user sensors configured to sense the user and can include one or more environmental sensors configured to sense surroundings of the user. The display system can also include processing electronics in communication with the display, the one or more user sensors, and the one or more environmental sensors. The processing electronics can be configured to sense a situation involving user focus, determine user intent for the situation, and alter user perception of a real or virtual object within the vision field of the user based at least in part on the user intent and/or sensed situation involving user focus. The processing electronics can be configured to at least one of enhance or de-emphasize the user perception of the real or virtual object within the vision field of the user.
A61B 90/00 - Instruments, implements or accessories specially adapted for surgery or diagnosis and not covered by any of the groups …, e.g. for luxation treatment or for protecting wound edges
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06T 19/00 - Manipulating three-dimensional [3D] models or images for computer graphics
A61B 34/00 - Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
50.
SYSTEMS AND METHODS FOR VIRTUAL AND AUGMENTED REALITY
Disclosed herein are systems and methods for distributed computing and/or networking for mixed reality systems. A method may include capturing an image via a camera of a head-wearable device. Inertial data may be captured via an inertial measurement unit of the head-wearable device. A position of the head-wearable device can be estimated based on the image and the inertial data via one or more processors of the head-wearable device. The image can be transmitted to a remote server. A neural network can be trained based on the image via the remote server. A trained neural network can be transmitted to the head-wearable device.
G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
G06V 20/20 - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING - Scene-specific elements in augmented reality scenes
G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
G06V 40/18 - Eye characteristics, e.g. of the iris
This disclosure describes techniques for device authentication and/or pairing. A display system can comprise a head mountable display, computer memory, and processor(s). In response to receiving a request to authenticate a connection between the display system and a companion device (e.g., controller or other computer device), first data may be determined, the first data based at least partly on audio data spoken by a user. The first data may be sent to an authentication device configured to compare the first data to second data received from the companion device, the second data based at least partly on the audio data. Based at least partly on a correspondence between the first and second data, the authentication device can send a confirmation to the display system to permit communication between the display system and companion device.
H04W 4/80 - Services using short-range communication, e.g. near-field communication, radio-frequency identification or low-energy communication
H04W 8/00 - Network data management
H04M 1/60 - TELEPHONIC COMMUNICATION - Substation equipment, e.g. for use by subscribers, including speech amplifiers
G06F 21/44 - Program or device authentication
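One way to realize the "first data" and "second data" derived from the same spoken audio is a salted digest compared in constant time; this scheme is an assumption, as the abstract leaves the derivation open.

```python
import hashlib
import hmac

def audio_fingerprint(audio_bytes, session_nonce):
    """Derive comparison data from the spoken audio (an assumed scheme:
    a salted SHA-256 digest; the actual derivation is device-specific)."""
    return hashlib.sha256(session_nonce + audio_bytes).digest()

def authenticate(first_data, second_data):
    """Authentication-device check: constant-time comparison of the data
    derived independently by the display system and the companion."""
    return hmac.compare_digest(first_data, second_data)

nonce = b"session-1234"
spoken = b"raw PCM samples of the spoken phrase"
print(authenticate(audio_fingerprint(spoken, nonce),
                   audio_fingerprint(spoken, nonce)))  # True
```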
Various examples of cross-application systems and methods for authoring, transferring, and evaluating rigging control systems for virtual characters are disclosed. Embodiments of a method include the steps or processes of creating, in a first application which implements a first rigging control protocol, a rigging control system description; writing the rigging control system description to a data file; and initiating transfer of the data file to a second application. In such embodiments, the rigging control system description may be defined according to a different second rigging control protocol. The rigging control system description may specify a rigging control input, such as a lower-order rigging element (e.g., a core skeleton for a virtual character), and at least one rule for operating on the rigging control input to produce a rigging control output, such as a higher-order skeleton or other higher-order rigging element.
A cross reality system enables any of multiple devices to efficiently render shared location-based content. The cross reality system may include a cloud-based service that responds to requests from devices to localize with respect to a stored map. The service may return to the device information that localizes the device with respect to the stored map. In conjunction with localization information, the service may provide information about locations in the physical world proximate the device for which virtual content has been provided. Based on information received from the service, the device may render, or stop rendering, virtual content to each of multiple users based on the user's location and specified locations for the virtual content.
A method includes providing a wafer including a first surface grating extending over a first area of a surface of the wafer and a second surface grating extending over a second area of the surface of the wafer; de-functionalizing a portion of the surface grating in at least one of the first surface grating area and the second surface grating area; and singulating an eyepiece from the wafer, the eyepiece including a portion of the first surface grating area and a portion of the second surface grating area. The first surface grating in the eyepiece corresponds to an input coupling grating for a head-mounted display and the second surface grating corresponds to a pupil expander grating for the head-mounted display.
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for camera calibration during bundle adjustment. One of the methods includes maintaining a three-dimensional model of an environment and a plurality of image data clusters that each include data generated from images captured by two or more cameras included in a device. The method includes jointly determining, for a three-dimensional point represented by an image data cluster, (i) newly estimated coordinates for the three-dimensional point for an update to the three-dimensional model or a trajectory of the device, and (ii) newly estimated calibration data that represents the spatial relationship between the two or more cameras.
Structures for forming an optical feature and methods for forming the optical feature are disclosed. In some embodiments, the structure comprises a patterned layer comprising a pattern corresponding to the optical feature; a base layer; and an intermediate layer bonded to the patterned layer and the base layer.
The systems and methods described can include approaches to calibrate head-mounted displays for improved viewing experiences. Some methods include receiving data of a first target image associated with an undeformed state of a first eyepiece of a head-mounted display device; receiving data of a first captured image associated with a deformed state of the first eyepiece of the head-mounted display device; determining a first transformation that maps the first captured image to the first target image; and applying the first transformation to a subsequent image for viewing on the first eyepiece of the head-mounted display device.
G06T 3/00 - Geometric image transformations in the plane of the image
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays, with head-mounted left-right displays
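Assuming the first transformation is modeled as a homography estimated from matched points between the captured and target images (one simple choice; the patent's transformation model may differ), OpenCV covers both estimation and application.

```python
import cv2
import numpy as np

def estimate_correction(captured_pts, target_pts):
    """Estimate a map from the deformed (captured) image to the
    undeformed target image from matched point pairs (>= 4 points)."""
    H, _ = cv2.findHomography(np.float32(captured_pts),
                              np.float32(target_pts), cv2.RANSAC)
    return H

def apply_correction(frame, H):
    """Pre-warp a subsequent frame so it appears undistorted when
    viewed through the deformed eyepiece."""
    h, w = frame.shape[:2]
    return cv2.warpPerspective(frame, H, (w, h))
```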
An audio system and method of spatially rendering audio signals that uses modified virtual speaker panning is disclosed. The audio system may include a fixed number F of virtual speakers, and the modified virtual speaker panning may dynamically select and use a subset P of the fixed virtual speakers. The subset P of virtual speakers may be selected using a low energy speaker detection and culling method, a source geometry-based culling method, or both. One or more processing blocks in the decoder/virtualizer may be bypassed based on the energy level of the associated audio signal or the location of the sound source relative to the user/listener, respectively. In some embodiments, a virtual speaker that is designated as an active virtual speaker at a first time, may also be designated as an active virtual speaker at a second time to ensure the processing completes.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multi-channel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint stereo, intensity coding or matrixing
G10L 25/21 - Speech or voice analysis techniques not restricted to a single one of the groups …, characterised by the type of extracted parameters, the extracted parameters being power information
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
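A sketch of the culling-with-hysteresis selection described in the entry above: speakers above an energy threshold are active, and a speaker active in the previous block remains active so its processing can complete. The threshold and array shapes are illustrative.

```python
import numpy as np

def select_active_speakers(energies, prev_active, threshold=1e-4):
    """Pick the subset P of the F virtual speakers to actually render.

    energies:    (F,) per-speaker signal energy for the current block.
    prev_active: boolean (F,) mask from the previous block; a speaker
                 active at the previous time stays active this time
                 (simple hysteresis, per the abstract).
    """
    active = energies > threshold
    return active | prev_active

prev = np.array([False, True, False, False])
now = np.array([2e-4, 1e-6, 5e-5, 3e-3])
print(select_active_speakers(now, prev))   # [ True  True False  True]
```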
59.
VIRTUAL, AUGMENTED, AND MIXED REALITY SYSTEMS AND METHODS
A virtual, augmented, or mixed reality display system includes a display configured to display virtual, augmented, or mixed reality image data, the display including one or more optical components which introduce optical distortions or aberrations to the image data. The system also includes a display controller configured to provide the image data to the display. The display controller includes memory for storing optical distortion correction information, and one or more processing elements to at least partially correct the image data for the optical distortions or aberrations using the optical distortion correction information.
An augmented reality display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity is selected using a light source that outputs light for different images from different locations, with spatial differences in the locations of the light output providing differences in the paths that the light takes to the eye, which in turn provide different amounts of parallax disparity. Advantageously, the wavefront divergence, and the accommodation cue provided to the eye of the user, may be varied by appropriate selection of parallax disparity, which may be set by selecting the amount of spatial separation between the locations of light output.
G02B 30/24 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p.ex. des effets stéréoscopiques en fournissant des première et seconde images de parallaxe à chacun des yeux gauche et droit d’un observateur du type stéréoscopique impliquant un multiplexage temporel, p.ex. utilisant des obturateurs gauche et droit activés séquentiellement
G02B 30/34 - Stéréoscopes fournissant une paire stéréoscopique d'images séparées correspondant à des vues déplacées parallèlement du même objet, p.ex. visionneuses de diapositives 3D
H04N 13/128 - Ajustement de la profondeur ou de la disparité
H04N 13/383 - Suivi des spectateurs pour le suivi du regard, c. à d. avec détection de l’axe de vision des yeux du spectateur
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
H04N 13/341 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques utilisant le multiplexage temporel
H04N 13/339 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques utilisant le multiplexage spatial
An apparatus configured to be head-worn by a user, includes: a transparent screen configured to allow the user to see therethrough; a sensor system configured to sense a characteristic of a physical object in an environment in which the user is located; and a processing unit coupled to the sensor system, the processing unit configured to: cause the screen to display a user-controllable object, and cause the screen to display an image of a feature that results from a virtual interaction between the user-controllable object and the physical object, so that the feature will appear to be a part of the physical object in the environment or appear to be emanating from the physical object.
G09G 5/377 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation caractérisés par l'affichage de dessins graphiques individuels en utilisant une mémoire à mappage binaire - Détails concernant le traitement de dessins graphiques pour mélanger ou superposer plusieurs dessins graphiques
G09G 5/38 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation caractérisés par l'affichage de dessins graphiques individuels en utilisant une mémoire à mappage binaire avec des moyens pour commander la position de l'affichage
Examples of an imaging system for use with a head mounted display (HMD) are disclosed. The imaging system can include a forward-facing imaging camera and a surface of a display of the HMD can include an off-axis diffractive optical element (DOE) or hot mirror configured to reflect light to the imaging camera. The DOE or hot mirror can be segmented. The imaging system can be used for eye tracking, biometric identification, multiscopic reconstruction of the three-dimensional shape of the eye, etc.
A61B 3/10 - Appareils pour l'examen optique des yeux; Appareils pour l'examen clinique des yeux du type à mesure objective, c. à d. instruments pour l'examen des yeux indépendamment des perceptions ou des réactions du patient
A61B 3/113 - Appareils pour l'examen optique des yeux; Appareils pour l'examen clinique des yeux du type à mesure objective, c. à d. instruments pour l'examen des yeux indépendamment des perceptions ou des réactions du patient pour déterminer ou enregistrer le mouvement de l'œil
A61B 3/14 - Dispositions spécialement adaptées à la photographie de l'œil
A61B 3/12 - Appareils pour l'examen optique des yeux; Appareils pour l'examen clinique des yeux du type à mesure objective, c. à d. instruments pour l'examen des yeux indépendamment des perceptions ou des réactions du patient pour examiner le fond de l'œil, p.ex. ophtalmoscopes
G02B 27/00 - Systèmes ou appareils optiques non prévus dans aucun des groupes ,
A61B 5/11 - Mesure du mouvement du corps entier ou de parties de celui-ci, p.ex. tremblement de la tête ou des mains ou mobilité d'un membre
A61B 5/00 - Mesure servant à établir un diagnostic ; Identification des individus
A61B 5/16 - Dispositifs pour la psychotechnie; Test des temps de réaction
63.
DISPLAY SYSTEM AND METHOD FOR PROVIDING VARIABLE ACCOMMODATION CUES USING MULTIPLE INTRA-PUPIL PARALLAX VIEWS FORMED BY LIGHT EMITTER ARRAYS
A display system is configured to direct a plurality of parallactically-disparate intra-pupil images into a viewer's eye. The parallactically-disparate intra-pupil images provide different parallax views of a virtual object, and impinge on the pupil from different angles. In the aggregate, the wavefronts of light forming the images approximate a continuous divergent wavefront and provide selectable accommodation cues for the user, depending on the amount of parallax disparity between the intra-pupil images. The amount of parallax disparity may be selected using an array of shutters that selectively regulate the entry of image light into an eye. Each opened shutter in the array provides a different intra-pupil image, and the locations of the open shutters provide the desired amount of parallax disparity between the images. In some other embodiments, the images may be formed by an emissive micro-display. Each pixel formed by the micro-display may be formed by one of a group of light emitters, which are at different locations such that the emitted light takes different paths to the eye, the different paths providing different amounts of parallax disparity.
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
G02B 30/24 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p.ex. des effets stéréoscopiques en fournissant des première et seconde images de parallaxe à chacun des yeux gauche et droit d’un observateur du type stéréoscopique impliquant un multiplexage temporel, p.ex. utilisant des obturateurs gauche et droit activés séquentiellement
F21V 8/00 - Utilisation de guides de lumière, p.ex. dispositifs à fibres optiques, dans les dispositifs ou systèmes d'éclairage
G09G 3/00 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques
64.
METHOD AND SYSTEM FOR PATTERNING A LIQUID CRYSTAL LAYER
An optical master is created by using a nanoimprint alignment layer to pattern a liquid crystal layer. The nanoimprint alignment layer and the liquid crystal layer constitute the optical master. The optical master is positioned above a photo-alignment layer. The optical master is illuminated, and light propagating through the nanoimprinted alignment layer and the liquid crystal layer is diffracted and subsequently strikes the photo-alignment layer. The incident diffracted light causes the pattern in the liquid crystal layer to be transferred to the photo-alignment layer. A second liquid crystal layer is deposited onto the patterned photo-alignment layer, which then aligns the molecules of the second liquid crystal layer. The second liquid crystal layer on the patterned photo-alignment layer may be utilized as a replica optical master or as a diffractive optical element for directing light in optical devices such as augmented reality display devices.
G03H 1/02 - Procédés ou appareils holographiques utilisant la lumière, les infrarouges ou les ultraviolets pour obtenir des hologrammes ou pour en obtenir une image; Leurs détails spécifiques - Détails
This disclosure describes a wearable display system configured to project light to the eye(s) of a user to display virtual (e.g., augmented reality) image content in a vision field of the user. The system can include light source(s) that output light, spatial light modulator(s) that modulate the light to provide the virtual image content, and an eyepiece configured to convey the modulated light toward the eye(s) of the user. The eyepiece can include waveguide(s) and a plurality of in-coupling optical elements arranged on or in the waveguide(s) to in-couple the modulated light received from the spatial light modulator(s) into the waveguide(s) to be guided toward the user's eye(s). The spatial light modulator(s) may be movable, and/or may include movable components, to direct different portions of the modulated light toward different ones of the in-coupling optical elements at different times.
In some embodiments, a near-eye display system comprises a stack of waveguides having pillars in a central, active portion of the waveguides. The active portion may include light outcoupling optical elements configured to outcouple image light from the waveguides towards the eye of a viewer. The pillars extend between and separate neighboring ones of the waveguides. The light outcoupling optical elements may include diffractive optical elements that are formed simultaneously with the pillars, for example, by imprinting or casting. The pillars are disposed on one or more major surfaces of each of the waveguides. The pillars may define a distance between two adjacent waveguides of the stack of waveguides. The pillars may be bonded to adjacent waveguides using one or more of the systems, methods, or devices described herein. The bonding provides a high level of thermal stability to the waveguide stack, to resist deformation as temperatures change.
A wearable device may include a head-mounted display (HMD) for rendering a three-dimensional (3D) virtual object which appears to be located in an ambient environment of a user of the display. The relative positions of the HMD and one or more eyes of the user may not be in desired positions to receive image information outputted by the HMD. For example, the HMD-to-eye vertical alignment may be different between the left and right eyes. The wearable device may determine if the HMD is level on the user's head and may then provide the user with a left-eye alignment marker and a right-eye alignment marker. Based on user feedback, the wearable device may determine if there is any left-right vertical misalignment and may take actions to reduce or minimize the effects of any misalignment.
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
F21V 8/00 - Utilisation de guides de lumière, p.ex. dispositifs à fibres optiques, dans les dispositifs ou systèmes d'éclairage
G09G 5/38 - Dispositions ou circuits de commande de l'affichage communs à l'affichage utilisant des tubes à rayons cathodiques et à l'affichage utilisant d'autres moyens de visualisation caractérisés par l'affichage de dessins graphiques individuels en utilisant une mémoire à mappage binaire avec des moyens pour commander la position de l'affichage
Systems and methods for regulating the speed of movement of virtual objects presented by a wearable system are described. The wearable system may present three-dimensional (3D) virtual content that moves, e.g., laterally across the user's field of view and/or in perceived depth from the user. The speed of the movement may follow the profile of an S-curve, with a gradual increase to a maximum speed, and a subsequent gradual decrease in speed until an end point of the movement is reached. The decrease in speed may be more gradual than the increase in speed. This speed curve may be utilized in the movement of virtual objects for eye-tracking calibration. The wearable system may track the position of a virtual object (an eye-tracking target) which moves with a speed following the S-curve. This speed curve allows for rapid movement of the eye-tracking target, while providing a comfortable viewing experience and high accuracy in determining the initial and final positions of the eye as it tracks the target.
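The speed profile described above can be illustrated with a smoothstep-based ramp whose deceleration phase is longer, and therefore more gradual, than its acceleration phase. The parameter values below are illustrative, not taken from the source.

```python
# Sketch of an asymmetric S-curve speed profile for an eye-tracking target.
def target_speed(t: float, v_max: float = 1.0,
                 t_up: float = 0.2, t_hold: float = 0.4, t_down: float = 0.6) -> float:
    """Speed at time t over a move lasting t_up + t_hold + t_down seconds;
    t_down > t_up makes the ramp-down more gradual than the ramp-up."""
    def smoothstep(x: float) -> float:
        x = min(max(x, 0.0), 1.0)
        return x * x * (3.0 - 2.0 * x)      # C1-continuous ease
    if t < t_up:                             # gradual increase to v_max
        return v_max * smoothstep(t / t_up)
    if t < t_up + t_hold:                    # cruise at maximum speed
        return v_max
    return v_max * (1.0 - smoothstep((t - t_up - t_hold) / t_down))
```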
A cross reality system that renders virtual content generated by executing native mode applications may be configured to render web-based content using components that render content from native applications. The system may include a Prism manager that provides Prisms in which content from executing native applications is rendered. For rendering web based content, a browser, accessing the web based content, may be associated with a Prism and may render content into its associated Prism, creating the same immersive experience for the user as when content is generated by a native application. The user may access the web application from the same program launcher menu as native applications. The system may have tools that enable a user to access these capabilities, including by creating for a web location an installable entity that, when processed by the system, results in an icon for the web content in a program launcher menu.
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
G06F 3/0482 - Interaction avec des listes d’éléments sélectionnables, p.ex. des menus
G06F 16/955 - Recherche dans le Web utilisant des identifiants d’information, p.ex. des localisateurs uniformisés de ressources [uniform resource locators - URL]
G06F 16/954 - Navigation, p.ex. en utilisant la navigation par catégories
G06F 3/04817 - Techniques d’interaction fondées sur les interfaces utilisateur graphiques [GUI] fondées sur des propriétés spécifiques de l’objet d’interaction affiché ou sur un environnement basé sur les métaphores, p.ex. interaction avec des éléments du bureau telles les fenêtres ou les icônes, ou avec l’aide d’un curseur changeant de comportement utilisant des icônes
71.
NEURAL NETWORK FOR EYE IMAGE SEGMENTATION AND IMAGE QUALITY ESTIMATION
Systems and methods for eye image segmentation and image quality estimation are disclosed. In one aspect, after receiving an eye image, a device such as an augmented reality device can process the eye image using a convolutional neural network with a merged architecture to generate both a segmented eye image and a quality estimation of the eye image. The segmented eye image can include a background region, a sclera region, an iris region, or a pupil region. In another aspect, a convolutional neural network with a merged architecture can be trained for eye image segmentation and image quality estimation. In yet another aspect, the device can use the segmented eye image to determine eye contours such as a pupil contour and an iris contour. The device can use the eye contours to create a polar image of the iris region for computing an iris code or for biometric authentication. (A minimal sketch of the merged architecture follows the classification entries below.)
G06T 7/194 - Découpage; Détection de bords impliquant une segmentation premier plan-arrière-plan
G06V 10/56 - Extraction de caractéristiques d’images ou de vidéos relative à la couleur
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06V 10/98 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos Évaluation de la qualité des motifs acquis
G06V 40/18 - Caractéristiques de l’œil, p.ex. de l’iris
G06F 18/2413 - Techniques de classification relatives au modèle de classification, p.ex. approches paramétriques ou non paramétriques basées sur les distances des motifs d'entraînement ou de référence
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
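As an illustration of a merged architecture, the sketch below (in PyTorch) shares one convolutional encoder between a per-pixel segmentation head covering the four regions named above and a scalar quality head. The layer sizes are placeholders, not the network from the source.

```python
# Minimal sketch of a shared-trunk, two-head ("merged") network.
import torch
import torch.nn as nn

class MergedEyeNet(nn.Module):
    def __init__(self, n_classes: int = 4):   # background/sclera/iris/pupil
        super().__init__()
        self.encoder = nn.Sequential(          # shared trunk, 4x downsampling
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(         # upsample back to input size
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 4, stride=2, padding=1),
        )
        self.quality_head = nn.Sequential(     # pool features to one score
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1), nn.Sigmoid(),
        )

    def forward(self, eye_image: torch.Tensor):
        # eye_image: (N, 1, H, W) with H, W divisible by 4.
        features = self.encoder(eye_image)
        return self.seg_head(features), self.quality_head(features)
```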
An eyepiece waveguide for an augmented reality display system. The eyepiece waveguide can include an input coupling grating (ICG) region. The ICG region can couple an input beam into the substrate of the eyepiece waveguide as a guided beam. A first combined pupil expander-extractor (CPE) grating region can be formed on or in a surface of the substrate. The first CPE grating region can receive the guided beam, create a first plurality of diffracted beams at a plurality of distributed locations, and out-couple a first plurality of output beams. The eyepiece waveguide can also include a second CPE grating region formed on or in the opposite surface of the substrate. The second CPE grating region can receive the guided beam, create a second plurality of diffracted beams at a plurality of distributed locations, and out-couple a second plurality of output beams.
A method of efficiently and accurately computing a pose of an image with respect to other image information. The image may be acquired with a camera on a portable device and the other information may be a map, such that the computation of pose localizes the device relative to the map. Such a technique may be applied in a cross reality system to enable devices to efficiently and accurately access previously persisted maps. Localizing with respect to a map may enable multiple cross reality devices to render virtual content at locations specified in relation to those maps, providing an enhanced experience for users of the system. The method may be used in other devices and for other purposes, such as for navigation of autonomous vehicles.
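One standard way to realize the pose computation described above, assuming image features have already been matched to persisted 3D map points, is a RANSAC perspective-n-point solve. The sketch below uses OpenCV for illustration; the variable names and the PnP formulation itself are assumptions, not details from the source.

```python
# Sketch: localize a camera image against persisted 3D map points.
import cv2
import numpy as np

def localize(map_points_3d: np.ndarray,   # (N, 3) persisted map landmarks
             image_points_2d: np.ndarray, # (N, 2) matched image keypoints
             K: np.ndarray):              # (3, 3) camera intrinsics
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        map_points_3d.astype(np.float32),
        image_points_2d.astype(np.float32),
        K, distCoeffs=None)
    if not ok:
        raise RuntimeError("localization failed: too few consistent matches")
    R, _ = cv2.Rodrigues(rvec)             # rotation vector -> 3x3 matrix
    return R, tvec                         # device pose relative to the map
```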
A system may comprise a selectively transparent projection device for projecting an image toward an eye of a viewer from a projection device position in space relative to the eye of the viewer, the projection device being capable of assuming a substantially transparent state when no image is projected; an occlusion mask device coupled to the projection device and configured to selectively block light traveling toward the eye from one or more positions opposite of the projection device from the eye of the viewer in an occluding pattern correlated with the image projected by the projection device; and a zone plate diffraction patterning device interposed between the eye of the viewer and the projection device and configured to cause light from the projection device to pass through a diffraction pattern having a selectable geometry as it travels to the eye.
G02B 30/24 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p.ex. des effets stéréoscopiques en fournissant des première et seconde images de parallaxe à chacun des yeux gauche et droit d’un observateur du type stéréoscopique impliquant un multiplexage temporel, p.ex. utilisant des obturateurs gauche et droit activés séquentiellement
G03B 21/00 - Projecteurs ou visionneuses du type par projection; Leurs accessoires
G02B 30/34 - Stéréoscopes fournissant une paire stéréoscopique d'images séparées correspondant à des vues déplacées parallèlement du même objet, p.ex. visionneuses de diapositives 3D
G02B 30/52 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p.ex. des effets stéréoscopiques l’image étant construite à partir d'éléments d'image répartis sur un volume 3D, p.ex. des voxels le volume 3D étant construit à partir d'une pile ou d'une séquence de plans 2D, p.ex. systèmes d'échantillonnage en profondeur
In some embodiments, an interconnect electrically connects a light emitter to wiring on a substrate. The interconnect may be deposited by 3D printing and lies flat on the light emitter and substrate. In some embodiments, the interconnect has a generally rectangular or oval cross-sectional profile and extends above the light emitter to a height of about 50 μm or less, or about 35 μm or less. This small height allows close spacing between an overlying optical structure and the light emitter, thereby providing high efficiency in the injection of light from the light emitter into the optical structure, such as a light pipe.
H01L 33/62 - Dispositions pour conduire le courant électrique vers le corps semi-conducteur ou depuis celui-ci, p.ex. grille de connexion, fil de connexion ou billes de soudure
An apparatus including a set of three illumination sources disposed in a first plane. Each of the set of three illumination sources is disposed at a position in the first plane offset from others of the set of three illumination sources by 120 degrees measured in polar coordinates. The apparatus also includes a set of three waveguide layers disposed adjacent the set of three illumination sources. Each of the set of three waveguide layers includes an incoupling diffractive element disposed at a lateral position offset by 180 degrees from a corresponding illumination source of the set of three illumination sources.
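For concreteness, the 120-degree polar offsets translate to Cartesian positions in the plane as follows; the radius is chosen arbitrarily for illustration.

```python
# Worked example: three sources on a circle, offset by 120 degrees.
import math

r = 1.0  # illustrative radius, arbitrary units
positions = [(r * math.cos(math.radians(a)), r * math.sin(math.radians(a)))
             for a in (0.0, 120.0, 240.0)]
# -> [(1.0, 0.0), (-0.5, 0.866...), (-0.5, -0.866...)]
```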
A scanning micromirror system includes a base having an axis passing therethrough, a plurality of support flexures coupled to the base, and a platform coupled to the base by the plurality of support flexures. The platform has a first side and a second side opposing the first side and is operable to oscillate about the axis. The scanning micromirror system also includes a stress relief layer positioned on the first side of the platform and a reflector positioned on the first side of the platform. The stress relief layer is positioned between the reflector and the platform.
G02B 26/08 - Dispositifs ou dispositions optiques pour la commande de la lumière utilisant des éléments optiques mobiles ou déformables pour commander la direction de la lumière
B81B 3/00 - Dispositifs comportant des éléments flexibles ou déformables, p.ex. comportant des membranes ou des lamelles élastiques
A method of reducing optical artifacts includes injecting a light beam generated by an illumination source into a polarizing beam splitter (PBS), reflecting a spatially defined portion of the light beam from a display panel, reflecting, at an interface in the PBS, the spatially defined portion of the light beam towards a projector lens, passing at least a portion of the spatially defined portion of the light beam through a circular polarizer disposed between the PBS and the projector lens, reflecting, by one or more elements of the projector lens, a return portion of the spatially defined portion of the light beam, and attenuating, at the circular polarizer, the return portion of the spatially defined portion of the light beam.
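The attenuation step can be understood with Jones calculus: the back-reflection flips the handedness of the circularly polarized light, so the return pass through the circular polarizer extinguishes it. The sketch below works this out numerically under a common sign convention, stated in the comments; it is an illustration, not the source's analysis.

```python
# Jones-calculus sketch of ghost-light suppression by a circular polarizer.
# Convention assumed here: back reflection = diag(1, -1), and on the return
# pass a retarder with fast axis at +45 deg acts as one at -45 deg.
import numpy as np

def qwp(theta_deg: float) -> np.ndarray:
    """Quarter-wave plate, fast axis at theta (slow axis retarded 90 deg)."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([1.0, 1.0j]) @ R.T

lp_x = np.array([[1.0, 0.0], [0.0, 0.0]])    # linear polarizer along x
mirror = np.diag([1.0, -1.0])                 # reflection off a lens element

v_in = np.array([1.0, 0.0])                   # image light, x-polarized
outbound = qwp(45) @ lp_x @ v_in              # now circularly polarized
returned = lp_x @ qwp(-45) @ mirror @ outbound
print(np.abs(returned) ** 2)                  # ~[0, 0]: return light blocked
```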
An eyepiece waveguide for augmented reality applications includes a substrate and a set of incoupling diffractive optical elements coupled to the substrate. A first subset of the set of incoupling diffractive optical elements is operable to diffract light into the substrate along a first range of propagation angles and a second subset of the set of incoupling diffractive optical elements is operable to diffract light into the substrate along a second range of propagation angles. The eyepiece waveguide also includes a combined pupil expander diffractive optical element coupled to the substrate.
An eyepiece includes a substrate, an input coupling grating on a first side of the substrate, and a morphed grating comprising characteristics of both a primary grating and a secondary grating on at least the first side of the substrate. The primary grating and the secondary grating may differ in pitch, orientation, and dimensions.
An optical device may include a wedge-shaped light turning element. The optical device can include a first surface that is parallel to a horizontal axis and a second surface opposite to the first surface that is inclined with respect to the horizontal axis by a wedge angle. The optical device may include a light module that includes a plurality of light emitters. The light module can be configured to combine light from the plurality of emitters. The optical device can further include a light input surface that is between the first and the second surfaces and is disposed with respect to the light module to receive light emitted from the plurality of emitters. The optical device may include an end reflector that is disposed on a side opposite the light input surface. The second surface may be inclined such that a height of the light input surface is less than a height of the side opposite the light input surface. The light coupled into the wedge-shaped light turning element may be reflected by the end reflector and/or reflected from the second surface towards the first surface.
G02B 27/14 - Systèmes divisant ou combinant des faisceaux fonctionnant uniquement par réflexion
G03B 21/00 - Projecteurs ou visionneuses du type par projection; Leurs accessoires
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G09G 3/24 - Dispositions ou circuits de commande présentant un intérêt uniquement pour l'affichage utilisant des moyens de visualisation autres que les tubes à rayons cathodiques pour la présentation d'un ensemble de plusieurs caractères, p.ex. d'une page, en composant l'ensemble par combinaison d'éléments individuels disposés en matrice utilisant des sources lumineuses commandées utilisant des filaments incandescents
G02B 30/26 - Systèmes ou appareils optiques pour produire des effets tridimensionnels [3D], p.ex. des effets stéréoscopiques en fournissant des première et seconde images de parallaxe à chacun des yeux gauche et droit d’un observateur du type autostéréoscopique
G02B 27/00 - Systèmes ou appareils optiques non prévus dans aucun des groupes ,
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G02B 5/30 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES Éléments optiques autres que les lentilles Éléments polarisants
F21V 8/00 - Utilisation de guides de lumière, p.ex. dispositifs à fibres optiques, dans les dispositifs ou systèmes d'éclairage
G02F 1/137 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de l'intensité, de la phase, de la polarisation ou de la couleur basés sur des cristaux liquides, p.ex. cellules d'affichage individuelles à cristaux liquides caractérisés par l'effet électro-optique ou magnéto-optique, p.ex. transition de phase induite par un champ, effet d'orientation, interaction entre milieu récepteur et matière additive ou diffusion dynamique
A method of presenting an audio signal to a user of a mixed reality environment is disclosed, the method comprising the steps of detecting a first audio signal in the mixed reality environment, where the first audio signal is a real audio signal; identifying a virtual object intersected by the first audio signal in the mixed reality environment; identifying a listener coordinate associated with the user; determining, using the virtual object and the listener coordinate, a transfer function; applying the transfer function to the first audio signal to produce a second audio signal; and presenting, to the user, the second audio signal.
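The source does not specify the form of the transfer function; purely for illustration, the sketch below models occlusion by an intersected virtual object as a first-order low-pass filter with broadband attenuation.

```python
# Illustrative occlusion-style transfer function (a stand-in, not the
# source's method): muffle the first audio signal to produce the second.
import numpy as np
from scipy.signal import butter, lfilter

def apply_occlusion(signal: np.ndarray, sample_rate: float,
                    cutoff_hz: float = 800.0, gain: float = 0.5) -> np.ndarray:
    """Attenuate high frequencies and overall level, as if the sound were
    heard through the intersected virtual object."""
    b, a = butter(1, cutoff_hz / (sample_rate / 2.0), btype="low")
    return gain * lfilter(b, a, signal)
```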
Disclosed herein are systems and methods for presenting audio content in mixed reality environments. A method may include receiving a first input from an application program; in response to receiving the first input, receiving, via a first service, an encoded audio stream; generating, via the first service, a decoded audio stream based on the encoded audio stream; receiving, via a second service, the decoded audio stream; receiving a second input from one or more sensors of a wearable head device; receiving, via the second service, a third input from the application program, wherein the third input corresponds to a position of one or more virtual speakers; generating, via the second service, a spatialized audio stream based on the decoded audio stream, the second input, and the third input; and presenting, via one or more speakers of the wearable head device, the spatialized audio stream.
In one aspect, an optical device comprises a plurality of waveguides formed over one another and having formed thereon respective diffraction gratings, wherein the respective diffraction gratings are configured to diffract visible light incident thereon into respective waveguides, such that visible light diffracted into the respective waveguides propagates therewithin. The respective diffraction gratings are configured to diffract the visible light into the respective waveguides within respective field of views (FOVs) with respect to layer normal directions of the respective waveguides. The respective FOVs are such that the plurality of waveguides are configured to diffract the visible light within a combined FOV that is continuous and greater than each of the respective FOVs.
F21V 8/00 - Utilisation de guides de lumière, p.ex. dispositifs à fibres optiques, dans les dispositifs ou systèmes d'éclairage
G02B 6/10 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES - Détails de structure de dispositions comprenant des guides de lumière et d'autres éléments optiques, p.ex. des moyens de couplage du type guide d'ondes optiques
G02F 1/29 - Dispositifs ou dispositions pour la commande de l'intensité, de la couleur, de la phase, de la polarisation ou de la direction de la lumière arrivant d'une source lumineuse indépendante, p.ex. commutation, ouverture de porte ou modulation; Optique non linéaire pour la commande de la position ou de la direction des rayons lumineux, c. à d. déflexion
Augmented reality systems and methods for automatically repositioning a virtual object with respect to a destination object in a three-dimensional (3D) environment of a user are disclosed. The systems and methods can automatically attach the target virtual object to the destination object and re-orient the target virtual object based on the affordances of the virtual object or the destination object. The systems and methods can also track the movement of a user and detach the virtual object from the destination object when the user's movement passes a threshold condition.
H04N 13/279 - Générateurs de signaux d’images à partir de modèles 3D d’objets, p.ex. des signaux d’images stéréoscopiques générés par ordinateur les positions des points de vue virtuels étant choisies par les spectateurs ou déterminées par suivi
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
H04N 13/239 - Générateurs de signaux d’images utilisant des caméras à images stéréoscopiques utilisant deux capteurs d’images 2D dont la position relative est égale ou en correspondance à l’intervalle oculaire
G06F 3/04815 - Interaction s’effectuant dans un environnement basé sur des métaphores ou des objets avec un affichage tridimensionnel, p.ex. modification du point de vue de l’utilisateur par rapport à l’environnement ou l’objet
H04N 13/395 - Affichages volumétriques, c. à d. systèmes où l’image est réalisée à partir d’éléments répartis dans un volume avec échantillonnage de la profondeur, c. à d. construction du volume à partir d’un ensemble ou d’une séquence de plans d’image 2D
G02B 30/34 - Stéréoscopes fournissant une paire stéréoscopique d'images séparées correspondant à des vues déplacées parallèlement du même objet, p.ex. visionneuses de diapositives 3D
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
This document describes imaging and visualization systems in which the intent of a group of users in a shared space is determined and acted upon. In one aspect, a method includes identifying, for a group of users in a shared virtual space, a respective objective for each of two or more of the users in the group of users. For each of the two or more users, a respective intent of the user is determined based on inputs from multiple sensors having different input modalities. At least a portion of the multiple sensors are sensors of a device of the user that enables the user to participate in the shared virtual space. A determination is made, based on the respective intent, whether the user is performing the respective objective for the user. Output data is generated and provided based on the respective objectives and respective intents.
Various methods and apparatus are described herein for enabling one or more users to interface with virtual or augmented reality environments. An example system includes a computing network having computer servers interconnected through high-bandwidth interfaces to gateways for processing data and/or for enabling communication of data between the servers and one or more local user interface devices. The servers include memory, processing circuitry, and software for designing and/or controlling virtual worlds, as well as for storing and processing user data and data provided by other components of the system. One or more virtual worlds may be presented to a user through a user device for the user to experience and interact with. A large number of users may each use a device to simultaneously interface with one or more digital worlds, using the device to observe and interact with each other and with objects produced within the digital worlds.
Techniques are disclosed for using and training a descriptor network. An image may be received and provided to the descriptor network. The descriptor network may generate an image descriptor based on the image. The image descriptor may include a set of elements distributed between a major vector comprising a first subset of the set of elements and a minor vector comprising a second subset of the set of elements. The second subset of the set of elements may include more elements than the first subset of the set of elements. A hierarchical normalization may be imposed onto the image descriptor by normalizing the major vector to a major normalization amount and normalizing the minor vector to a minor normalization amount. The minor normalization amount may be less than the major normalization amount. (A minimal sketch of this normalization follows the classification entries below.)
G06F 16/56 - Recherche d’informations; Structures de bases de données à cet effet; Structures de systèmes de fichiers à cet effet de données d’images fixes en format vectoriel
G06V 10/46 - Descripteurs pour la forme, descripteurs liés au contour ou aux points, p.ex. transformation de caractéristiques visuelles invariante à l’échelle [SIFT] ou sacs de mots [BoW]; Caractéristiques régionales saillantes
G06V 10/44 - Extraction de caractéristiques locales par analyse des parties du motif, p.ex. par détection d’arêtes, de contours, de boucles, d’angles, de barres ou d’intersections; Analyse de connectivité, p.ex. de composantes connectées
G06F 18/214 - Génération de motifs d'entraînement; Procédés de Bootstrapping, p.ex. ”bagging” ou ”boosting”
G06V 10/764 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant la classification, p.ex. des objets vidéo
G06V 10/82 - Dispositions pour la reconnaissance ou la compréhension d’images ou de vidéos utilisant la reconnaissance de formes ou l’apprentissage automatique utilisant les réseaux neuronaux
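The hierarchical normalization itself reduces to a few lines. In the sketch below, the split point and the normalization amounts are illustrative; as described above, the minor vector takes the larger share of elements and the smaller normalization budget.

```python
# Sketch of hierarchical normalization of an image descriptor.
import numpy as np

def hierarchical_normalize(descriptor: np.ndarray, n_major: int = 32,
                           major_amount: float = 1.0,
                           minor_amount: float = 0.3) -> np.ndarray:
    # Major vector: first n_major elements; minor vector: the (larger) rest.
    major, minor = descriptor[:n_major], descriptor[n_major:]
    # L2-normalize each part to its own budget (minor_amount < major_amount).
    major = major_amount * major / (np.linalg.norm(major) + 1e-12)
    minor = minor_amount * minor / (np.linalg.norm(minor) + 1e-12)
    return np.concatenate([major, minor])
```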
An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
Disclosed herein are systems and methods for sharing and synchronizing virtual content. A method may include receiving, from a host application via a wearable device comprising a transmissive display, a first data package comprising first data; identifying virtual content based on the first data; presenting a view of the virtual content via the transmissive display; receiving, via the wearable device, first user input directed at the virtual content; generating second data based on the first data and the first user input; sending, to the host application via the wearable device, a second data package comprising the second data, wherein the host application is configured to execute via one or more processors of a computer system remote to the wearable device and in communication with the wearable device.
G06F 30/12 - CAO géométrique caractérisée par des moyens d’entrée spécialement adaptés à la CAO, p.ex. interfaces utilisateur graphiques [UIG] spécialement adaptées à la CAO
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
G06T 19/20 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie Édition d'images tridimensionnelles [3D], p.ex. modification de formes ou de couleurs, alignement d'objets ou positionnements de parties
A mixed reality virtual environment is sharable among multiple users through the use of multiple view modes that are selectable by a presenter. Multiple users may wish to view a common virtual object used for educational purposes, such as a piece of art in a museum, an automobile, a biological specimen, or a chemical compound. The virtual object may be presented in a virtual room to any number of users. A presentation may be controlled by a presenter (e.g., a teacher of a class of students) that leads multiple participants (e.g., students) through information associated with the virtual object. Use of different viewing modes allows individual users to see different virtual content despite being in a shared viewing space or, alternatively, to see the same virtual content in different locations within a shared space.
G09B 5/12 - Matériel à but éducatif à commande électrique avec présentation individuelle d'une information à une pluralité de postes d'élèves différents postes étant capables de présenter des informations différentes simultanément
H04L 12/18 - Dispositions pour la fourniture de services particuliers aux abonnés pour la diffusion ou les conférences
G06F 3/01 - Dispositions d'entrée ou dispositions d'entrée et de sortie combinées pour l'interaction entre l'utilisateur et le calculateur
G06T 19/00 - Transformation de modèles ou d'images tridimensionnels [3D] pour infographie
94.
Display panel or portion thereof with a transitional mixed reality graphical user interface
An eyepiece for an augmented reality display system. The eyepiece can include a waveguide substrate. The waveguide substrate can include an input coupler grating (ICG), an orthogonal pupil expander (OPE) grating, a spreader grating, and an exit pupil expander (EPE) grating. The ICG can couple at least one input light beam into at least a first guided light beam that propagates inside the waveguide substrate. The OPE grating can divide the first guided light beam into a plurality of parallel, spaced-apart light beams. The spreader grating can receive the light beams from the OPE grating and spread their distribution. The spreader grating can include diffractive features oriented at approximately 90° to diffractive features of the OPE grating. The EPE grating can re-direct the light beams from the OPE grating and the spreader grating such that they exit the waveguide substrate.
An optical system comprises an optically transmissive substrate comprising a multilevel metasurface which comprises a grating comprising a plurality of multilevel unit cells. Each unit cell comprises, on a lowermost level, a laterally-elongated first lowermost level nanobeam having a first width and a laterally-elongated second lowermost level nanobeam having a second width larger than the first width. Each unit cell further comprises, on an uppermost level, a laterally-elongated first uppermost level nanobeam above the first lowermost level nanobeam and a laterally-elongated second uppermost level nanobeam above the second lowermost level nanobeam.
H04N 13/344 - Affichage pour le visionnement à l’aide de lunettes spéciales ou de visiocasques avec des visiocasques portant des affichages gauche et droit
G02B 5/30 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES Éléments optiques autres que les lentilles Éléments polarisants
H04N 13/349 - Affichages multi-vues pour afficher au moins trois points de vue géométriques, sans suivi du spectateur
G02B 30/35 - Stéréoscopes fournissant une paire stéréoscopique d'images séparées correspondant à des vues déplacées parallèlement du même objet, p.ex. visionneuses de diapositives 3D utilisant des éléments optiques réfléchissants dans le chemin optique entre les images et les observateurs
A virtual or augmented reality display system that controls power inputs to the display system as a function of image data. The image data is made up of a plurality of image data frames, each with constituent color components of the rendered content and depth planes on which that content is displayed. Light sources, or spatial light modulators that relay illumination from the light sources, may receive signals from a display controller to adjust a power setting of the light source or spatial light modulator, and/or to control the depth of displayed image content, based on control information embedded in an image data frame.
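The embedding format for the control information is not given in the source; the sketch below assumes, purely hypothetically, that the first row of each frame carries a control word, with caller-supplied callbacks standing in for the light-source and depth-plane drivers.

```python
# Hypothetical sketch: read frame-embedded control info, then display
# the remaining rows. The one-row control format is an assumption.
import numpy as np

def read_control_info(frame: np.ndarray) -> tuple:
    """frame: (H, W) or (H, W, C) uint8 image data; row 0 is control data.
    Assumed layout: byte 0 = light-source power, byte 1 = depth plane."""
    control_row = frame[0].reshape(-1)
    return int(control_row[0]), int(control_row[1])

def drive_display(frame: np.ndarray, set_light_power, set_depth_plane) -> np.ndarray:
    # set_light_power / set_depth_plane are hypothetical driver callbacks.
    power, plane = read_control_info(frame)
    set_light_power(power / 255.0)   # scale byte to a [0, 1] drive level
    set_depth_plane(plane)
    return frame[1:]                 # image content excludes the control row
```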
Systems and methods for depth plane selection in display system such as augmented reality display systems, including mixed reality display systems, are disclosed. A display(s) may present virtual image content via image light to an eye(s) of a user. The display(s) may output the image light to the eye(s) of the user, the image light to have different amounts of wavefront divergence corresponding to different depth planes at different distances away from the user. A camera(s) may capture images of the eye(s). An indication may be generated based on obtained images of the eye(s), indicating whether the user is identified. The display(s) may be controlled to output the image light to the eye(s) of the user, the image light to have the different amounts of wavefront divergence based at least in part on the generated indication indicating whether the user is identified.
A display system comprises a waveguide having light incoupling or light outcoupling optical elements formed of a metasurface. The metasurface is a multilevel (e.g., bi-level) structure having a first level defined by spaced apart protrusions formed of a first optically transmissive material and a second optically transmissive material between the protrusions. The metasurface also includes a second level formed by the second optically transmissive material. The protrusions on the first level may be patterned by nanoimprinting the first optically transmissive material, and the second optically transmissive material may be deposited over and between the patterned protrusions. The widths of the protrusions and the spacing between the protrusions may be selected to diffract light, and a pitch of the protrusions may be 10-600 nm.
F21V 8/00 - Utilisation de guides de lumière, p.ex. dispositifs à fibres optiques, dans les dispositifs ou systèmes d'éclairage
G02B 6/00 - OPTIQUE ÉLÉMENTS, SYSTÈMES OU APPAREILS OPTIQUES - Détails de structure de dispositions comprenant des guides de lumière et d'autres éléments optiques, p.ex. des moyens de couplage
G02B 6/293 - Moyens de couplage optique ayant des bus de données, c. à d. plusieurs guides d'ondes interconnectés et assurant un système bidirectionnel par nature en mélangeant et divisant les signaux avec des moyens de sélection de la longueur d'onde