A watermark representing a link to an original video and/or metadata, such as haptic metadata associated with the original video, is embedded in the original video in such a way that a re-recording of the original video still preserves the watermark. The watermark can be used to link to the original video or to the metadata related thereto.
H04N 19/467 - Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with a first user of a plurality of users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with the first user; a second generating unit configured to generate, based at least in part on the current emotion data associated with the first user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data associated with the first user and the current emotion data associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A lens system provides to a user, a high-definition image in which generation of concentric circles is reduced. The lens system has one or more Fresnel lenses. A lens surface of each of the Fresnel lenses has a plurality of grooves that is concentrically formed. Both a pitch which is the distance between two adjacent grooves and the depth of each of the plurality of grooves vary with the distance from an optical axis that passes through the center of the lens system.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 3/04 - Simple or compound lenses with non-spherical faces with continuous faces that are rotationally symmetrical but deviate from a true sphere
G02B 3/08 - Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens
Provided is a cradle for supporting input devices having grips and tracked parts extending from the grips, the cradle including rear support parts that are capable of supporting rear parts of the grips, front support parts that are positioned forward away from the rear support parts and are capable of supporting front parts of the grips of the input devices, and side support parts that are positioned outside in a left-right direction with respect to the rear support parts and the front support parts and are capable of supporting the tracked parts.
Systems and methods are disclosed for determining that a first end-user entity has performed a task within a computer simulation for which a non-fungible token (NFT) is to be provided, where the NFT is associated with a digital asset. Responsive to the determination, the NFT is provided to the first end-user entity so that the digital asset may be used, via the NFT, across plural different computer simulations and/or across plural different computer simulation platforms. Ownership of the NFT may also be subsequently transferred to other end-user entities for their own use across different simulations and/or platforms.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
To stabilize the bit rate across groups of pictures (GOPs), a rate buffer bit controller feedback loop and a proportional-integral-derivative (PID) bit controller feedback loop may be used to maintain at least one video buffer.
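As an illustrative sketch only (not the patented method), a PID bit controller feedback loop of the kind the abstract above mentions can be imagined as nudging a quantization parameter (QP) toward a target buffer fullness. The class name, gains, and fullness target below are all assumptions.

```python
# Sketch of a PID feedback loop for rate control: positive output suggests
# raising QP (coarser quantization, fewer bits) when the buffer is over-full.
# Gains and the target fullness are illustrative assumptions.

class PidRateController:
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, target_fullness=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_fullness
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, buffer_fullness):
        """Return a QP adjustment from the current buffer fullness (0..1)."""
        error = buffer_fullness - self.target
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In this sketch an over-full buffer (fullness above the target) yields a positive adjustment, and an under-full buffer a negative one.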
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with two or more users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with each of the two or more users; a selecting unit configured to select a first user based on at least part of the current emotion data; a second generating unit configured to generate, based at least in part on the current emotion data that is associated with a second user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data and the current emotion data that is associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
To enhance the sensory experience of voice, in some cases at a later time than when the speech was spoken so that emotions and experiences can be relived, vocal sounds captured by a microphone are processed by a computer game controller API. The API plays back the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user, as demanded by the computer game.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
H04R 3/04 - Circuits for transducers for correcting frequency response
9.
SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
A system includes a first image sensor that generates a first image signal by synchronously scanning all pixels at a prescribed timing, a second image sensor including an event-driven type vision sensor that, upon detecting a change in an intensity of incident light on each of the pixels, generates a second image signal asynchronously, an inertial sensor that acquires attitude information on the first image sensor and the second image sensor, a first computation processing device that recognizes a user on the basis of at least the second image signal and calculates coordinate information regarding the user on the basis of at least the second image signal, a second computation processing device that performs coordinate conversion on the coordinate information on the basis of the attitude information, and an image generation device that generates a display image which indicates a condition of the user, on the basis of the converted coordinate information.
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/847 - Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
11.
INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, AND PROGRAM
An information processing device obtains information regarding the position of each fingertip of a user in a real space, and determines contact between a virtual object set within a virtual space and a finger of the user. The information processing device sets the virtual object in a partly deformed state such that a part of the virtual object, the part corresponding to the position of the finger determined to be in contact with the object among the fingers of the user, is located more to a far side from a user side than the finger, and displays the virtual object having the shape set thereto as an image in the virtual space on a display device.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/25 - Output arrangements for video game devices
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
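The deformation described in the entry above, where the touched part of a virtual object is pushed to the far side of the contacting finger, can be sketched as a local depth clamp. The function below is a minimal illustration under assumed geometry (a 1D strip of depth samples, larger values meaning farther from the user); all names are hypothetical.

```python
# Sketch: deform only the region of a virtual surface near the contact point
# so it sits at least as far from the user as the fingertip. The 1D depth
# representation and the radius parameter are illustrative assumptions.

def deform_surface(depths, contact_index, finger_depth, radius=2):
    """depths: list of surface depths along one axis (larger = farther).
    Push samples within `radius` of the contact at least past the finger."""
    out = list(depths)
    for i in range(max(0, contact_index - radius),
                   min(len(out), contact_index + radius + 1)):
        out[i] = max(out[i], finger_depth)
    return out
```

Samples already beyond the finger depth are left unchanged, so only the contacted part of the object appears deformed.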
A processing device includes: a detection unit configured to detect input information representative of a sequence of input signals for a video game that are input by a user using one or more input controls; an identification unit configured to identify one or more input signal variations in dependence upon one or more differences between the detected input information and predetermined input information representative of one or more predetermined sequences of input signals for the video game; a generation unit configured to generate assistance information in dependence upon the one or more identified input signal variations; and a provision unit configured to provide the generated assistance information to the user.
An image generation apparatus increases an adjustment amount of a luminance distribution to a target value B at a time t0 at which an amount of light entering the eyes of a user changes to such a degree that the change has an influence on an action of photoreceptor cells, to thereby cause a head-mounted display to display an image 310b having a luminance increased from that of an original image 310a. The image generation apparatus gradually decreases the adjustment amount of the luminance distribution during a restoration period Δt in such a manner that an image 310c having the original luminance distribution is displayed at a later time t1.
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 9/69 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
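The time profile described in the entry above, where a luminance boost is applied at a time t0 and then gradually removed over a restoration period Δt, can be sketched as a simple decay function. A linear decay is an assumption for illustration; the abstract does not specify the decay shape.

```python
# Sketch: luminance boost applied at time t0 and decayed linearly back to
# zero over a restoration period dt, so the original image is shown again
# at t0 + dt. The linear shape is an illustrative assumption.

def luminance_adjustment(t, t0, dt, target_boost):
    """Boost applied to the displayed image at time t."""
    if t < t0:
        return 0.0                      # before the light change: no boost
    if t >= t0 + dt:
        return 0.0                      # after restoration: original image
    # Linear decay from target_boost at t0 down to 0 at t0 + dt.
    return target_boost * (1.0 - (t - t0) / dt)
```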
14.
CUSTOMIZABLE VIRTUAL REALITY SCENES USING EYE TRACKING
Eye tracking of the wearer of a virtual reality headset is used to customize and personalize VR video. Based on eye tracking, the VR scene may present different types of trees for different gaze directions. As another example, a VR scene can be augmented with additional objects when the gaze is directed at a particular related object. A friend's gaze-dependent personalization may be imported into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
To avoid startling a computer game player immersed in virtual reality for example, active noise cancelation is gradually introduced. As an alternative, ambient noise is gradually increased to conceal loud external sounds. The noise cancelation or ambient noise generation is established according to sound exceeding a background threshold as detected by a microphone. The noise cancelation or ambient noise generation can be established according to images of a noisy object as imaged by a camera.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
16.
ALTERING AUDIO AND/OR PROVIDING NON-AUDIO CUES ACCORDING TO LISTENER'S AUDIO DEPTH PERCEPTION
The 3D audio perception of a listener such as a computer gamer is tested “stereoscopically” and the results input to a source of audio such as a computer game. Audio from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided to alert the listener of 3D audio events.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
17.
METHOD AND SYSTEM FOR AUTO-PLAYING PORTIONS OF A VIDEO GAME
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
18.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
19.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A condition information acquisition section of an image generation device acquires a condition of communication and condition information of a head-mounted display. An image generation section generates a display image including distorted images for a left eye and a right eye. A reduction processing section converts the display image to data in which different regions have different reduction ratios in accordance with the condition of communication, etc., and transmits the data through an output section. An image size restoration section of the head-mounted display restores the transmitted data to the display image in an original size to cause the display image to be displayed by a display section.
A method of improving accessibility for the user operation of a first application on a computer includes the steps of taking one or more measurements of a current user's interaction with an application on the computer, comparing the one or more measurements with expectations derived from measurements from a first corpus of users, characterising one or more needs of the current user based upon the comparison, and modifying at least a first property of the first application responsive to the characterised need or needs.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
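The comparison step in the accessibility entry above, measuring the current user against expectations derived from a corpus of users, can be sketched as a simple outlier test. The z-score criterion and threshold below are illustrative assumptions; the abstract does not specify the statistical method.

```python
import statistics

# Sketch: flag a possible accessibility need when the user's measurement
# (e.g. a reaction time) is a statistical outlier relative to a corpus of
# other users. The z-score test and threshold are illustrative assumptions.

def characterise_need(user_value, corpus_values, z_threshold=2.0):
    """Return True when user_value is more than z_threshold standard
    deviations from the corpus mean, suggesting an adjustment is needed."""
    mean = statistics.fmean(corpus_values)
    stdev = statistics.stdev(corpus_values)
    if stdev == 0:
        return user_value != mean
    return abs((user_value - mean) / stdev) > z_threshold
```

A flagged need would then drive the modification of an application property, such as enlarging UI elements or relaxing input timing.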
22.
GROUP CONTROL OF COMPUTER GAME USING AGGREGATED AREA OF GAZE
Groups of people control a computer game using teamwork. This can be done by eye tracking of each person to detect where each person is looking on screen at objects such as game control objects. The control action of the object looked at by the most people in a “heat map” style of data collection is implemented by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
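The "heat map" voting described in the entry above, where the game implements the control action of the object looked at by the most people, amounts to a majority tally over gaze targets. The sketch below illustrates that tally under assumed names; it is not the patented implementation.

```python
from collections import Counter

# Sketch: tally which on-screen control object each tracked player is
# looking at and return the one chosen by the most people. Object ids and
# the per-player gaze resolution step are illustrative assumptions.

def most_gazed_object(gaze_targets):
    """gaze_targets: iterable of object ids, one per player."""
    counts = Counter(gaze_targets)
    obj, _ = counts.most_common(1)[0]
    return obj
```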
23.
HAPTIC ASSET GENERATION FOR ECCENTRIC ROTATING MASS (ERM) FROM LOW FREQUENCY AUDIO CONTENT
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped to haptic assets that can be output by an ERM of a computer game controller. The haptic output can be in synchronization with play of the audio assets on speakers.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
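The pipeline in the entry above, low-pass filtering audio assets and mapping the result to an eccentric-rotating-mass (ERM) drive level, can be sketched with a one-pole filter. The filter form, cutoff, sample rate, and peak-amplitude mapping below are all illustrative assumptions.

```python
import math

# Sketch: a one-pole low-pass filter keeps only the low-frequency part of
# an audio signal; its rectified peak amplitude is then mapped to an ERM
# motor intensity in [0, 1]. All parameters are illustrative assumptions.

def low_pass(samples, cutoff_hz, sample_rate_hz):
    rc = 1.0 / (2.0 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate_hz
    alpha = dt / (rc + dt)
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)    # one-pole IIR smoothing step
        out.append(y)
    return out

def erm_intensity(filtered_samples):
    """Map the peak rectified amplitude to a motor drive level in [0, 1]."""
    peak = max((abs(s) for s in filtered_samples), default=0.0)
    return min(1.0, peak)
```

Low-frequency content (here, a constant signal) passes through nearly unchanged, while content near the Nyquist rate is strongly attenuated, so only bass-heavy audio drives the motor.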
24.
SYSTEMS AND METHODS FOR INTEGRATING REAL-WORLD CONTENT IN A GAME
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as user generated content (RGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A harassment detection apparatus includes: an executing unit configured to execute a session of a shared environment; an input unit configured to receive biometric data, the biometric data being associated with a plurality of users participating in the executed session of the shared environment; a generating unit configured to generate, based on at least a part of the biometric data, emotion data associated with the plurality of users, the emotion data comprising a valence value and/or an arousal value associated with each of the plurality of users; a detection unit configured to detect, responsive to at least a first part of the emotion data satisfying one or more of a first set of criteria, one or more first users associated with the at least first part of the emotion data; and a modifying unit configured to modify, responsive to the detection of the one or more first users, one or more aspects of the shared environment.
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
An information processing device includes a control unit that performs control to output information regarding an image of a user viewpoint in a virtual space, in which the control unit performs control to switch a display corresponding to a performer included in the image between a live-action image generated based on a captured image of the performer and a character image corresponding to the performer, according to a result of detecting a state of at least one user in the virtual space or a state of the virtual space.
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the first user within a time period of a start of the communication of the one or more words; classify the one or more physiological characteristics using the one or more second signals; based on a classification of the one or more words and a classification of the one or more physiological characteristics, generate an action signal indicating that an action associated with the first user of the online content is to be taken, the action signal indicating a characteristic of the action determined based on a combination of the classification of the one or more words and the classification of the one or more physiological characteristics; and output the action signal.
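The combination step in the entry above, where the action's characteristic is determined from both the word classification and the physiological classification, can be sketched as a lookup over the pair of labels. The labels and the table below are illustrative assumptions, not the apparatus's actual classes.

```python
# Sketch: pick an action severity from the combination of a word
# classification and a physiological classification. The labels and the
# table entries are illustrative assumptions.

ACTION_TABLE = {
    ("toxic", "agitated"): "suspend",
    ("toxic", "calm"): "warn",
    ("borderline", "agitated"): "warn",
    ("borderline", "calm"): "monitor",
}

def action_for(word_class, physio_class):
    """Return the action for a (word, physiology) classification pair."""
    return ACTION_TABLE.get((word_class, physio_class), "none")
```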
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the one or more second users in response to the communicated one or more words; classify the one or more physiological characteristics of the one or more second users using the one or more second signals; determine, based on a classification of the one or more words and a classification of the one or more physiological characteristics of the one or more second users, whether to generate an action signal, the action signal indicating that an action associated with the first user of the online content is to be taken; and when it is determined an action signal is to be generated, generate and output the action signal.
A method includes receiving, at an optimizer server over a network, a plurality of game assets of a video game from a device. The method includes generating at least one combined game asset to represent the plurality of game assets. The method includes sending the at least one combined game asset to the device for use in the video game.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
Provided is a game presenting system having a plurality of game systems, each executing a process of a game in which a plurality of users participate, and a game presenting machine for presenting a situation of the game executed by the game systems. The game presenting machine obtains a motion image related to the game executed by each of the plurality of game systems and produces a game presenting screen image showing, as a list, at least some of the plurality of motion images obtained.
A63F 13/5252 - Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
33.
AI Player Model Gameplay Training and Highlight Review
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user includes creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing access to the video game for game play to the AI player. The access allows the AI player to provide inputs to the video game that substantially mimics a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
An input device for controlling a computing system includes one or more sensors configured to sense a change in weight distribution of a user positioned on the input device in use, and a transmitter configured to transmit a signal based on the sensed change in weight distribution, for use in a virtual joystick input to a computing system.
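The weight-distribution-to-joystick mapping described above can be sketched as follows; the four-corner load-cell layout and the normalization scheme are illustrative assumptions, not details given in the abstract.

```python
def weight_to_joystick(fl, fr, bl, br):
    """Map four corner load-cell readings (front-left, front-right,
    back-left, back-right, e.g. in kg) to a virtual joystick vector in
    [-1, 1] x [-1, 1]. Positive x leans right, positive y leans forward."""
    total = fl + fr + bl + br
    if total <= 0:
        return (0.0, 0.0)  # nobody on the device: neutral stick
    x = ((fr + br) - (fl + bl)) / total  # right-side share minus left-side share
    y = ((fl + fr) - (bl + br)) / total  # front share minus back share
    return (x, y)
```

An even stance yields a neutral stick, while shifting weight to the right half of the device produces a positive x deflection proportional to the imbalance.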
Gaze tracking data representing a user's gaze is analyzed to determine one or more regions of interest. One or more gaze tracking parameters are determined from the gaze tracking data. Adjusted foveation data is determined representing an adjusted size and/or shape of one or more regions of interest in one or more images to be subsequently presented to the user based on the one or more gaze tracking parameters. The compression of the one or more transmitted images is adjusted so that fewer bits are needed to transmit data for portions of an image outside the one or more regions of interest than for portions of the image within the one or more regions of interest. Adjusting compression includes eliminating the region(s) of interest from images that are presented to the user during a saccade or blink.
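As a rough sketch of the per-region bit allocation, assuming a block-based encoder with a tunable quantization parameter (QP) and a circular region of interest around the gaze point (both assumptions for illustration, not details from the abstract):

```python
def block_qp(block_center, gaze_point, roi_radius, qp_fovea=18, qp_periphery=40):
    """Choose an encoder quantization parameter per block: low QP (more
    bits) inside the region of interest around the gaze point, high QP
    (fewer bits) outside it, with a linear ramp in between."""
    dx = block_center[0] - gaze_point[0]
    dy = block_center[1] - gaze_point[1]
    dist = (dx * dx + dy * dy) ** 0.5
    if dist <= roi_radius:
        return qp_fovea
    # ramp over one extra radius, then fully peripheral quality
    t = min((dist - roi_radius) / roi_radius, 1.0)
    return round(qp_fovea + t * (qp_periphery - qp_fovea))
```

Enlarging `roi_radius` when the gaze-tracking parameters indicate low confidence keeps the fovea inside the high-quality region at the cost of more bits.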
Methods and apparatus provide for a head-mounted display to be worn by a user located within a physical environment and for engaging in an interactive experience; an entertainment device including a processing section that executes an interactive application that is manipulated in part through receiving inputs from the user; a communication partner operating to relay video and audio signals outputted from the entertainment device to the head-mounted display; continuously acquiring information of the environment at a predetermined frame rate; at least one camera on the head-mounted display, which captures images of the physical environment, where the communication partner is connected to the entertainment device with wired communication, and the communication partner is connected to the head-mounted display with wireless communication.
To help a computer game player in understanding a computer game, upon pausing the game, visual subtitles may be presented. In addition, or alternatively, Braille representing subtitles may be output as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented. Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down from normal speed.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
Deep learning is used to dynamically adapt virtual humans in metaverse applications. The adaptation can be according to user preferences. In addition or alternatively, virtual humans and pets can be adapted for metaverse applications based on demographics of the user. The user's personal demographics may be used to establish the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
Deep learning techniques such as vector graphics are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder such as a deep neural network such as a recurrent neural network (RNN) or transformer receives vector graphics and generates an encoded output. The encoded output is decoded by a 3D decoder such as another deep neural network that outputs 2D graphics for comparison with the original image. Loss is computed between the original and the output of the 3D decoder. The loss is back propagated to train the vector graphics encoder to generate 3D content.
Methods and apparatus provide for acquiring position information about a head-mounted display; performing information processing using the position information about the head-mounted display; generating and outputting data of an image to be displayed as a result of the information processing; and generating and outputting data of an image of a user guide indicating position information about a user in a real space using the position information about the head-mounted display, where the image of the user guide represents a state of the real space in which the user is physically located, as viewed obliquely.
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
Ambisonics audio such as may be used for computer simulations such as computer games is improved by using multi-order optimizations that frame an optimization problem that minimizes a cost function across a subset of Ambisonics orders for a chosen Ambisonics order “N”. In a simple form, this cost function minimizes error across all orders (0<=n<=N), and additional weighting is applied to emphasize or de-emphasize particular orders. The cost functions and optimization criteria may be different for binaural and speaker outputs.
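For the weighted error minimization across a subset of orders, a minimal sketch of a closed-form weighted least-squares gain is shown below; the single-gain formulation and the variable names are illustrative assumptions, not the patent's actual optimization.

```python
def optimal_gain(targets, signals, weights):
    """Single gain g minimizing sum_n w_n * (t_n - g * s_n)**2 over
    orders n = 0..N, where w_n emphasizes or de-emphasizes order n.
    Closed form: g = sum(w t s) / sum(w s^2)."""
    num = sum(w * t * s for w, t, s in zip(weights, targets, signals))
    den = sum(w * s * s for w, s in zip(weights, signals))
    return num / den if den else 0.0
```

Setting all weights to 1 minimizes error uniformly across orders 0 <= n <= N; raising a particular weight pulls the solution toward matching that order, mirroring the emphasis/de-emphasis described above.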
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
A technique for encoding Ambisonics audio includes inputting audio to multiple Ambisonics encoders producing respective Ambisonics soundfields. Prior to mixing the soundfields, each soundfield is weighted to mitigate artifacts from order-truncation. After weighting, the soundfields are mixed to produce Ambisonics audio.
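The per-order weighting and mixing step might look like the following sketch, assuming ACN channel ordering (so channel i belongs to order floor(sqrt(i))); the weight values themselves (e.g. a max-rE-style taper) are left as inputs, and the per-sample framing is an assumption.

```python
import math

def weight_and_mix(soundfields, order_weights):
    """Apply per-order weights to each Ambisonics soundfield's channels
    (one sample per channel, ACN ordering), then sum the weighted
    soundfields channel-by-channel to produce the mixed output."""
    mixed = None
    for sf in soundfields:
        # channel i carries spherical-harmonic order floor(sqrt(i)) in ACN
        weighted = [c * order_weights[math.isqrt(i)] for i, c in enumerate(sf)]
        mixed = weighted if mixed is None else [m + w for m, w in zip(mixed, weighted)]
    return mixed
```

Weighting before the sum means each contributing soundfield has its higher orders tapered individually, which is the stage at which order-truncation artifacts are mitigated.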
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04R 5/02 - Spatial or constructional arrangements of loudspeakers
H04S 5/00 - Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
44.
SYSTEMS AND METHODS FOR MODIFYING USER SENTIMENT FOR PLAYING A GAME
A method for modifying user sentiment is described. The method includes analyzing behavior of a group of players during a play of a game. The behavior of the group of players is indicative of a sentiment of the group of players during the play of the game. The method includes accessing a non-player character (NPC) during the play of the game. The NPC has a characteristic that influences a change in the sentiment of the group of players. The method includes placing the NPC into one or more scenes of the game during the play of the game for a period of time until the change in the sentiment of the group of players is determined. The change in the sentiment of the group of players is determined by analyzing the behavior of the group of players during said play of the game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
45.
SYSTEMS AND METHODS FOR CONTROLLING DIALOGUE COMPLEXITY IN VIDEO GAMES
A method of controlling the complexity levels of dialogues in a video game includes the steps of: loading a first dialogue from a game database according to a parameter related to complexity level of dialogue in user settings; outputting the first dialogue; accepting a user operation in response to the first dialogue; adjusting the parameter related to complexity level of dialogues in user settings based on the user operation; loading a second dialogue from the game database according to the adjusted parameter related to complexity level of dialogue; and outputting the second dialogue.
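A minimal sketch of the parameter-adjustment step, with hypothetical user operations ("clarify", "skip") and a 1-5 complexity scale as assumptions not stated in the abstract:

```python
def adjust_complexity(level, user_action, step=1, lo=1, hi=5):
    """Adjust the dialogue-complexity parameter from a user operation:
    asking for clarification lowers it, skipping ahead raises it,
    clamped to the [lo, hi] range stored in user settings."""
    if user_action == "clarify":
        level -= step
    elif user_action == "skip":
        level += step
    return max(lo, min(hi, level))
```

The adjusted value then selects which dialogue variant is loaded from the game database for the next exchange.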
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
Interactive display of virtual trophies includes scanning a surface for one or more location anchor points. A trophy rack location is determined using the location anchor points. A trophy rack mesh is applied over an image frame of the surface using the determined trophy rack location. One or more trophy models are displayed over the trophy rack mesh with a display device. Trophy rack layout information is generated from the one or more trophy models and the trophy rack mesh, and finally the trophy rack layout information is stored or transmitted.
A method for protecting personal space in a multi-user virtual environment includes the steps of generating an avatar for a target user in the multi-user virtual environment, determining a relationship score between the target user and a peer user, creating a personal space around the avatar of the target user, wherein the dimensions of the personal space are computed based on the relationship score with the peer user, detecting the peer user's avatar crossing the boundary of the personal space, and applying rules to the peer user to restrict his/her interactions with the target user.
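A sketch of the relationship-score-to-radius computation and the boundary test described above; the linear mapping, the score range [0, 1], and the metre values are illustrative assumptions.

```python
def personal_space_radius(relationship_score, r_min=0.5, r_max=3.0):
    """Radius (metres) of the personal-space boundary: close relations
    (score near 1) get a small radius, strangers (score near 0) a
    large one, interpolated linearly."""
    s = max(0.0, min(1.0, relationship_score))
    return r_max - s * (r_max - r_min)

def crosses_boundary(avatar_pos, peer_pos, radius):
    """True when the peer avatar is inside the personal-space circle."""
    dx = avatar_pos[0] - peer_pos[0]
    dy = avatar_pos[1] - peer_pos[1]
    return (dx * dx + dy * dy) ** 0.5 < radius
```

When `crosses_boundary` fires, the interaction-restriction rules for that peer would be applied.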
An input device includes: a plurality of input members; an upper surface having a right region in which a part of the plurality of input members is disposed, a left region in which another part of the plurality of input members is disposed, and a center region that is a region between the right region and the left region; and a light emitting region formed along an outer edge of the center region. The light emitting region includes a first light emitting portion configured to indicate identification information assigned to a plurality of input devices connected to an information processing apparatus, and a second light emitting portion configured to emit light based on information different from the identification information.
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
F21V 33/00 - Structural combinations of lighting devices with other articles, not otherwise provided for
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
49.
OVERLAPPING RENDERING, STREAMOUT, AND DISPLAY AT A CLIENT OF RENDERED SLICES OF A VIDEO FRAME
A method of cloud gaming is disclosed. The method includes receiving an encoded video frame at a client, wherein a server executes an application to generate a rendered video frame which is then encoded at an encoder at the server as the encoded video frame, wherein the encoded video frame includes one or more encoded slices that are compressed. The method includes decoding the one or more encoded slices at a decoder of the client to generate one or more decoded slices. The method includes rendering the one or more decoded slices for display at the client. The method includes beginning to display the one or more decoded slices that are rendered before the one or more encoded slices are fully received at the client.
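The overlap between slice reception, decoding, and display can be illustrated with a toy schedule; the single sequential decoder, the immediate display of each decoded slice, and the millisecond timings are assumptions made for illustration only.

```python
def display_schedule(slice_arrival_ms, decode_ms):
    """For each slice, display begins as soon as that slice is decoded,
    without waiting for the remaining slices of the frame to arrive.
    Returns (per-slice display times, frame completion time)."""
    display_times = []
    t = 0.0
    for arrival in slice_arrival_ms:
        # decode starts when the slice has arrived and the decoder is free
        t = max(t, arrival) + decode_ms
        display_times.append(t)
    return display_times, display_times[-1]
```

With arrivals at 0, 2, and 4 ms and a 1 ms decode per slice, the last slice is on screen at 5 ms; waiting for the whole frame before decoding would not start work until 4 ms and finish at 7 ms, which is the latency saving the overlap provides.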
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/8547 - Content authoring involving timestamps for synchronizing content
50.
TRACKING HISTORICAL GAME PLAY OF ADULTS TO DETERMINE GAME PLAY ACTIVITY AND COMPARE TO ACTIVITY BY A CHILD, TO IDENTIFY AND PREVENT CHILD FROM PLAYING ON AN ADULT ACCOUNT
Methods and systems for warning of misuse of a user account of an adult user include tracking use of the user account. The interactions at the user account are monitored, and when the content accessed by a user is adult content and the user is determined to be a child, an alert is provided to the adult user informing the adult user that the child is accessing age-inappropriate content.
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
51.
IMAGE DISPLAYING SYSTEM, DISPLAY APPARATUS, AND IMAGE DISPLAYING METHOD
A state information acquisition section of an image generation apparatus acquires state information of the head of a user. An image generation section generates a display image corresponding to a visual field. A down-sampling section down-samples image data and transmits the down-sampled image data from a transmission section. After an up-sampling section of a head-mounted display up-samples the data, a distortion correction section performs correction according to aberration of the eyepiece for each primary color and causes the resulting data to be displayed on a display section.
An information processing device connected to a display device and to a sensor which detects relative positions of a user and the display device is provided. This information processing device acquires information indicating the relative positions of the user and the display device and detected by the sensor, and controls a position or a posture of at least one virtual object as a control target within data of a video on the basis of the acquired information indicating the relative positions of the user and the display device. Thereafter, the information processing device outputs the data of the video generated on the basis of information associated with a virtual space where the virtual object is arranged to the display device, and causes the display device to display the data.
A user accessibility method for a videogame system includes obtaining trophy records for a plurality of games played by the user, generating an accessibility profile responsive to accessibility issues indicated by at least some of the trophy records, and modifying one or more operational parameters of the videogame system in response to the generated accessibility profile.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
56.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Provided is an information processing apparatus including a map generating section that detects a surface of a real object in a three-dimensional space on the basis of a camera image captured with a camera of a head-mounted display and generates map data representing information regarding the detected surface, a position estimating section that collates the map data and the camera image and estimates a position of a user used for execution of an application, and a look-around screen generating section that causes the head-mounted display to display a synthesized image on which an object representing the detected surface of the real object is superimposed on an image of a corresponding surface in the camera image, in a period of generation of the map data.
Disclosed herein is an information processing apparatus including at least one processor that has hardware. The at least one processor generates a first content image in a three-dimensional virtual reality space to be displayed on a head-mounted display, generates a second content image to be displayed on a flat-screen display, and generates a third content image from the first content image and/or the second content image.
A63F 13/26 - Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
A system for evaluating content, the system comprising a data obtaining unit configured to obtain data relating to one or more properties of the content, a processing unit configured to determine an expected contribution of one or more of the properties to a cognitive load for a user, an evaluation unit configured to determine an expected cognitive load associated with the content in dependence upon the expected contributions, and an image generation unit operable to generate an image for display in dependence upon the determined expected cognitive load.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A cognitive load assistance method includes: providing a virtual environment comprising a route to complete a current task, and providing within the virtual environment one or more interactive elements not essential to the current task that may be encountered during normal performance of the current task, receiving an indication that cognitive load assistance is required, and reducing the interactivity of at least a first interactive element not essential to the current task in response to the indication.
G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
G06F 9/451 - Execution arrangements for user interfaces
A system for modifying a virtual environment to be interacted with by a user, the system comprising a location identifying unit configured to identify the locations of one or more events and the locations of one or more corresponding triggers for each of those events, a preference determining unit configured to determine one or more preferences of the user, a relation modifying unit configured to modify the temporal and/or spatial relation between respective parts of one or more pairs of corresponding events and triggers in dependence upon the determined user preferences, and a virtual environment modification unit operable to generate a modified virtual environment, the modified virtual environment comprising the modified temporal and/or spatial relations.
A movable range is defined without an increase in cost. A first joint mechanism includes a base part and a rotary part that rotates relative to the base part. The first joint mechanism includes a regulated part that rotates together with the rotary part, a regulating part that is disposed on the extension of the rotation locus of the regulated part and has a function of regulating rotation of the regulated part relative to the base part within a first movable range, and a movable range defining member that is disposed on either the base part or the rotary part and defines, as a movable range of the regulated part, a second movable range that is narrower than the first movable range.
F16H 1/14 - Toothed gearings for conveying rotary motion without gears having orbital motion involving only two intermeshing members with non-parallel axes comprising conical gears only
Techniques are described for smooth switchover of computer game control. The current state of game input is communicated to a new player assuming control. The new player is allowed time to catch up to the game. New player control is detected, and any errors are communicated to the new player. If there are differences between the old control scheme and that of the new player, they are reconciled. The outgoing player is adjusted to the transition.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
A63F 13/25 - Output arrangements for video game devices
A63F 13/45 - Controlling the progress of the video game
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
An accessibility computer game controller includes a central control button on a round base and peripheral control buttons on the base surrounding the central control button. The peripheral control buttons can have distinct sizes and shapes. Removable button labels can be applied on top of or underneath the buttons to aid in button identification.
An apparatus, for assisting at least a first user in communicating with one or more other users via a network, includes: a storage unit configured to store: phrase data corresponding to one or more phrases, where each phrase comprises one or more words, tag data corresponding to one or more tags, where each tag comprises at least part of one word, and first association data corresponding to one or more associations between one or more of the phrases and one or more of the tags; an input unit configured to receive one or more audio signals from the at least first user; a recognition unit configured to recognise one or more spoken words included within the received audio signals; an evaluation unit configured to evaluate whether a given recognised spoken word corresponds to a given tag; and if so, a transmission unit configured to transmit one or more of the phrases associated with the given tag to one or more of the other users.
There is provided an information processing device including a photographed-image acquisition section that acquires a photographed image taken by a camera mounted on a head-mounted display and a photographing parameter that is adjusted according to brightness with use of the camera, and a play area control section that detects a play area defining a movable range of a user by analyzing the photographed image while changing an analysis condition according to a brightness estimated on the basis of the photographing parameter, and then acquires 3D information regarding a real object.
H04N 23/71 - Circuitry for evaluating the brightness variation
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
H04N 23/72 - Combination of two or more compensation controls
69.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Disclosed herein is an information processing apparatus including an image correction section that corrects a camera image captured by a camera of a head-mounted display, a state estimation section that estimates a state of an actual physical body with use of the corrected camera image, and a calibration section that causes the head-mounted display to display a guide image that represents an extraction situation of feature points from the camera image, the feature points being necessary for calibration of the camera, collects data of the feature points, performs calibration, and updates a correction parameter used by the image correction section.
Methods and apparatus provide for processing information for a head-mounted display for blocking out an outside world from a user's vision when worn by the user to present a video, by carrying out actions comprising: measuring outside world information; detecting whether or not the measured information contains any notification information to be notified to the user, including detection of moving objects as notification information; and notifying the user when notification information is detected.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/147 - Digital output to display device using display panels
G08B 21/02 - Alarms for ensuring the safety of persons
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
G09G 3/20 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/377 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
A dual camera tracking system includes a main imager and an auxiliary imager, the output of which is used to alter an aim and/or focus of the main imager. Both imagers may be mounted on a common housing. In embodiments, the common housing may be a head-mounted display (HMD) for a computer simulation such as a computer game.
H04N 5/232 - Devices for controlling television cameras, e.g. remote control
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/26 - Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
72.
AI STREAMER WITH FEEDBACK TO AI STREAMER BASED ON SPECTATORS
A method is provided, including: executing a session of a video game; executing an artificial intelligence (AI) player that performs gameplay in the session of the video game; streaming video of the AI player's gameplay over a network to one or more spectator devices for viewing by one or more spectators respectively associated to the one or more spectator devices; receiving, over the network from the one or more spectator devices, feedback data indicating reactions of the one or more spectators to the video of the AI player's gameplay; and adjusting the gameplay by the AI player based on the feedback data.
A method is disclosed including setting, at a plurality of devices, a plurality of VSYNC signals to a plurality of VSYNC frequencies, wherein a corresponding device VSYNC signal of a corresponding device is set to a corresponding device VSYNC frequency. The method including sending a plurality of signals between the plurality of devices, which are analyzed and used to adjust the relative timing between corresponding device VSYNC signals of at least two devices.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/8547 - Content authoring involving timestamps for synchronizing content
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
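The cross-device VSYNC adjustment described in the synchronization abstract above can be sketched as follows. This is an illustrative assumption, not the patented implementation: each device's VSYNC is modeled by a frequency and the timestamp of its last pulse, the phase offset between two devices is measured, and one device's phase is nudged toward the other's until the offsets converge.

```python
# Hypothetical sketch: align two devices' VSYNC timing by measuring the
# phase offset between their signals and nudging one device's phase.
# The VsyncDevice class and the gain value are illustrative assumptions.

class VsyncDevice:
    def __init__(self, frequency_hz: float, phase_s: float = 0.0):
        self.frequency_hz = frequency_hz  # current VSYNC frequency
        self.phase_s = phase_s            # timestamp of the last VSYNC pulse

    def period_s(self) -> float:
        return 1.0 / self.frequency_hz

def relative_offset(a: VsyncDevice, b: VsyncDevice) -> float:
    """Phase offset of b relative to a, wrapped into [-period/2, period/2)."""
    period = a.period_s()
    offset = (b.phase_s - a.phase_s) % period
    if offset >= period / 2:
        offset -= period
    return offset

def align(a: VsyncDevice, b: VsyncDevice, gain: float = 0.5) -> None:
    """Shift b's phase toward a's by a fraction of the measured offset."""
    b.phase_s -= gain * relative_offset(a, b)

a = VsyncDevice(60.0, phase_s=0.000)
b = VsyncDevice(60.0, phase_s=0.004)  # 4 ms ahead of device a
for _ in range(10):
    align(a, b)
print(abs(relative_offset(a, b)) < 1e-4)  # offsets converge toward zero
```

In practice the signals exchanged between devices would carry measured timestamps over the network; the wrap into a half-period window above ensures each correction moves in the shorter direction around the VSYNC cycle.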
An exterior member for an input device includes: a housing having an upper surface on which one or more operation members for manipulation by a user are arranged, and a middle portion located between right and left portions of the housing; a first sound hole extending from an exterior, through the exterior member, to an interior of the housing, the first sound hole allowing sound to propagate to a first location within the housing; and a second sound hole extending from the exterior, through the exterior member, to the interior of the housing, the second sound hole allowing sound to propagate to a second location within the housing, wherein a position of the first sound hole and a position of the second sound hole are separated in at least one of a front-rear direction and an up-down direction.
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
75.
AUTOMATED DETECTION OF VISUAL IMPAIRMENT AND ADJUSTMENT OF SETTINGS FOR VISUAL IMPAIRMENT
A method, system, and computer program product for automated visual setting importation is disclosed. A first application running on a first device requests a vision setting for a second application. A vision setting of the first application running on the first device is determined to correspond to the vision setting for the second application. The vision setting for the second application is then applied to the corresponding vision setting of the first application.
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A61B 3/06 - Subjective types, i.e. testing apparatus requiring the active assistance of the patient for testing colour vision
A61B 3/032 - Devices for presenting test symbols or characters, e.g. test chart projectors
A61B 3/024 - Subjective types, i.e. testing apparatus requiring the active assistance of the patient for determining the visual field, e.g. perimeter types
A61B 3/18 - Arrangement of plural eye-testing or -examining apparatus
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/80 - Special adaptations for executing a specific game genre or game mode
Image-based customization comprises extracting feature parameters of a subject in a digital image with one or more neural networks trained with a machine learning algorithm configured to determine feature parameters of the subject. The feature parameters are then applied to a virtual model of the subject.
A method for building an artificial intelligence (AI) model. The method includes accessing data related to monitored behavior of a user. The data is classified, wherein the classes include an objective data class identifying data relevant to a group of users including the user, and a subjective data class identifying data that is specific to the user. Objective data is accessed and relates to monitored behavior of a plurality of users including the user. The method includes providing, as a first set of inputs into a deep learning engine performing AI, the objective data and the subjective data of the user, and a plurality of objective data of the plurality of users. The method includes determining a plurality of learned patterns predicting user behavior when responding to the first set of inputs. The method includes building a local AI model of the user including the plurality of learned patterns.
G06N 3/008 - Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
A system and method provide application assistance with a mobile device, including running an application on a computer system. A challenging application state of the application is detected. Assistance for the challenging application state may be determined, wherein the assistance includes display of one or more assistance frames on the mobile device. Information regarding the determined one or more assistance frames is sent to the mobile device.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/92 - Video game devices specially adapted to be hand-held while playing
Feature discovery includes determining what a user is doing or trying to do with respect to a computer platform from situational awareness information relating to the user's use of the platform. Feature discovery logic is applied to the situational awareness information and personalized user information to determine (a) when to present information to the user regarding a platform feature or features relevant to what the user is doing or trying to do, (b) what information to present to the user regarding the feature(s), and (c) how to best present the information to the user with a user interface. After the user interface presents the information regarding the platform feature(s), the feature discovery logic, personalized user information, or situational awareness information is updated according to the user's response to presentation of the information.
G06F 9/451 - Execution arrangements for user interfaces
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
In customized audio attenuation, a computer system generates audible sounds in one or more frequency ranges from electronic signals. An audiogram for a listener is inferred from the listener's response to the audible sounds, and an attenuation profile is determined from the audiogram. The attenuation profile includes an attenuation level for each of the one or more frequency ranges. Each attenuation level is inversely related to the listener's sensitivity to hearing sounds in the corresponding frequency range. Subsequent signals or data corresponding to subsequent audible sounds in the one or more frequency ranges are generated. The attenuation profile is applied to the subsequent signals to generate attenuated signals, and the attenuated signals are transmitted to an audio transducer.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
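The attenuation-profile step in the customized audio attenuation abstract above can be sketched as follows. The band names, sensitivity scale, and maximum attenuation are illustrative assumptions: attenuation per band is inversely related to sensitivity, so bands the listener barely hears are attenuated the most.

```python
# Illustrative sketch (not the patented implementation): derive per-band
# attenuation levels inversely related to the listener's sensitivity
# (0 = cannot hear, 1 = fully sensitive), then apply them to per-band
# signal amplitudes as dB gain reductions.

def attenuation_profile(sensitivity: dict, max_atten_db: float = 24.0) -> dict:
    """Attenuation (dB) per band, inversely related to sensitivity:
    low sensitivity -> high attenuation, full sensitivity -> none."""
    return {band: max_atten_db * (1.0 - s) for band, s in sensitivity.items()}

def apply_profile(band_amplitudes: dict, profile: dict) -> dict:
    """Scale each band's amplitude by its attenuation, converted from dB."""
    return {band: amp * 10 ** (-profile.get(band, 0.0) / 20.0)
            for band, amp in band_amplitudes.items()}

# Hypothetical audiogram-derived sensitivities per frequency band.
sensitivity = {"low": 0.2, "mid": 1.0, "high": 0.0}
profile = attenuation_profile(sensitivity)
out = apply_profile({"low": 1.0, "mid": 1.0, "high": 1.0}, profile)
print(out["high"] < out["low"] < out["mid"])  # least-heard band cut the most
```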
81.
USE OF AI TO MONITOR USER CONTROLLER INPUTS AND ESTIMATE EFFECTIVENESS OF INPUT SEQUENCES WITH RECOMMENDATIONS TO INCREASE SKILL SET
Methods and systems for providing assistance to a user for playing a video game include identifying attributes of inputs of the user from prior gameplays of the video game. The attributes are analyzed to identify input capabilities of the user. Skills required to progress in the video game are identified, and hints are provided to guide the user to obtain certain ones of the skills. The obtained skills assist the user to progress in the video game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
An entertainment system includes a user terminal, a robot, and a control device. The user terminal causes a display unit to display contents which allow a user to select any one character from a plurality of characters for which personalities different from one another are defined. The control device determines an operation mode of the robot on the basis of a parameter corresponding to a personality of the character selected on the user terminal.
Provided is an information processing device including a captured image acquisition section that acquires data of frames of a currently captured moving image, a crop section that cuts out an image of a specific region from each of the frames arranged in chronological order, and an image analysis section that analyzes the image of the specific region to acquire predetermined information. The crop section moves a cut-out target region in accordance with predetermined rules with respect to a time axis.
A head-mounted display includes a main body with a built-in display; a wearing band extending rearward from the main body and shaped to enclose the head of a user as a whole; a right-side extending section configured to make up a right-side part of the wearing band; a left-side extending section configured to make up a left-side part of the wearing band; a movable section configured to make up a part of a rear side of the wearing band, link a rear section of the right-side extending section with a rear section of the left-side extending section, and be movable relative to the right-side extending section and the left-side extending section in a retracting direction in which the length of the wearing band is decreased and in an extending direction in which the length of the wearing band is increased; and a first manipulating member operable by a user and rotatably provided on the movable section with an axis section that extends in a front-rear direction, wherein a vertical width of the first manipulating member is larger than a vertical width of the movable section.
A control device generates a first image in which, together with an object in the real world appearing in an image captured by a camera of a robot, a virtual object appears, and which presents an exterior appearance of the virtual object while the position of the camera of the robot is set as a viewpoint. An HMD displays the first image generated by the control device. In a case in which the position of the HMD in the height direction changes, the control device generates a second image from which the object in the real world is deleted and which presents the exterior appearance of the virtual object viewed from a new viewpoint according to the change in the position of the HMD in the height direction. The HMD displays the second image in place of the first image.
In a method and system for viewer interaction with streaming media, a scene from a media presentation is displayed with a broadcasting device that includes an operating system level broadcaster interface overlay. The overlay generates an image frame from the scene. The image frame is sent to a viewer device. Viewer interaction parameters are received from a viewing device and a viewer interaction is displayed over a subsequent scene from the media presentation with the broadcaster interface overlay. On a viewing device an image frame from the media presentation is received from the broadcasting device. An operating system level viewer interface overlay is generated over the image frame and the image frame of the media presentation is displayed. Viewer interaction parameters are generated from a viewer interaction with the operating system level viewer interface overlay and sent to the broadcasting device.
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
A player who is interrupted during play of a computer simulation such as a computer game and must attend to other matters (colloquially referred to herein as being “away from keyboard” or “AFK”) is assisted in three phases. A first type of assistance is rendered when entering AFK, as in, at the start of the interruption. A second type of assistance is rendered during AFK, while the player is away. A third type of assistance is rendered when the player returns from AFK to the computer simulation.
Privacy of a conversation between a computer game player and an entity such as a person or virtual assistant is preserved by various techniques so that other players of the computer game who are not part of the conversation cannot apprehend the conversation. The conversation may be voice or text but is not an online chat associated with the computer game.
G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
89.
GESTURE TRAINING FOR SKILL ADAPTATION AND ACCESSIBILITY
A system teaches players gestures, for instance during the introduction of a game, and asks the player to invoke the gesture. Rather than asking the player to repeat over and over until the player succeeds, the game looks for commonality in the player's attempts, and after a small number of attempts, the game can learn how that player interprets the gesture given the player's own ability. The game can then adapt itself to look for that pattern to trigger the action.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
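The commonality-learning idea in the gesture training abstract above can be sketched as follows. The representation is an assumption: each attempt is reduced to a small feature vector, the per-player template is the mean of a few attempts, and future input triggers the action when it falls within a tolerance of that template.

```python
# A minimal sketch of learning a player's own version of a gesture from a
# small number of attempts. Feature vectors and the tolerance are
# illustrative assumptions, not the patented method.

from statistics import mean

def learn_template(attempts: list[list[float]]) -> list[float]:
    """Average a handful of attempts into a personalized gesture template."""
    return [mean(dim) for dim in zip(*attempts)]

def matches(template: list[float], sample: list[float],
            tol: float = 0.3) -> bool:
    """Trigger when the sample lies within Euclidean distance tol of the template."""
    dist = sum((t - s) ** 2 for t, s in zip(template, sample)) ** 0.5
    return dist <= tol

# Three imperfect attempts at the same gesture by one player.
attempts = [[0.9, 0.1], [1.1, 0.0], [1.0, 0.2]]
template = learn_template(attempts)     # roughly [1.0, 0.1]
print(matches(template, [1.05, 0.15]))  # close to this player's pattern
print(matches(template, [0.0, 0.9]))    # a different gesture: no trigger
```

Real gesture input would come from inertial sensors (per the classifications above) and need alignment in time before averaging; the mean-and-tolerance scheme here only illustrates adapting the trigger to one player's pattern.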
Detection of whether a video is a fake video derived from an original video and altered is undertaken using a block chain that either forbids adding to the block chain copies of original videos that have been altered or indicates in the block chain that an altered video has been altered. Image fingerprinting techniques are described for determining whether a video sought to be added to the block chain has been altered.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04L 9/06 - Arrangements for secret or secure communications; Network security protocols the encryption apparatus using shift registers or memories for blockwise coding, e.g. D.E.S. systems
H04N 21/439 - Processing of audio elementary streams
Methods and/or apparatus provide for acquiring a first image obtained by imaging an event taking place in a real space; generating a display image by merging the first image into a second image representing a virtual space that establishes an alternative version by: (i) reproducing real elements that are a focus of the event, (ii) omitting real peripheral elements, and (iii) presenting alternative virtual elements, such that the first image has an inconspicuous border when the display image is presented to a user.
To protect a user's privacy by reducing a malicious developer's ability to eavesdrop on unwitting HMD users by converting signals from a motion sensor in the HMD to speech or speaker recognition, a microphone can record ambient sound and voice, which is subtracted from the motion sensor data before the sensor data is made available to the game developer. Additionally, ANC (active noise cancellation) techniques can be adapted to cancel noise from a motion sensor's data. In another technique, a band pass filter subtracts frequencies within the voice range from the sensor signals. A third technique blends statistical noise into the motion sensor signal before passing it to game developers to obfuscate the user's speech.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
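The band-removal idea in the abstract above can be sketched as follows, under assumed rates and cutoffs: genuine head motion lives at a few hertz while voice-induced vibration sits far higher, so a simple low-pass over the raw sensor samples suppresses the voice band before the data reaches an application.

```python
# Hypothetical sketch of removing voice-band energy from motion sensor
# samples with a single-pole low-pass filter. Sample rate, cutoff, and the
# synthetic signal are illustrative assumptions.

import math

def low_pass(samples: list[float], sample_rate_hz: float,
             cutoff_hz: float = 20.0) -> list[float]:
    """Single-pole low-pass: passes head motion, attenuates voice-band energy."""
    dt = 1.0 / sample_rate_hz
    alpha = dt / (dt + 1.0 / (2 * math.pi * cutoff_hz))
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

# Synthetic sensor trace: 2 Hz head sway plus a 200 Hz "voice" component.
rate = 1000.0
t = [i / rate for i in range(1000)]
raw = [math.sin(2 * math.pi * 2 * ti) + 0.5 * math.sin(2 * math.pi * 200 * ti)
       for ti in t]
clean = low_pass(raw, rate)

def band_energy(sig, freq):
    """Correlation of the signal with a sine at the given frequency."""
    return abs(sum(s * math.sin(2 * math.pi * freq * ti)
                   for s, ti in zip(sig, t)))

# Voice-band energy drops sharply relative to the raw signal.
print(band_energy(clean, 200.0) < 0.2 * band_energy(raw, 200.0))
```

The abstract's other two techniques map naturally onto the same pipeline: subtracting a microphone reference in place of (or before) the filter, or adding statistical noise to the filtered output before release.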
93.
EYE TRACKING FOR ACCESSIBILITY AND VISIBILITY OF CRITICAL ELEMENTS AS WELL AS PERFORMANCE ENHANCEMENTS
A map of a person's spatial vision abilities, including areas of low acuity and areas of high acuity, may be generated from medical records or from a calibration phase. During presentation of a computer simulation such as a computer game, the map is provided to a foveated renderer to optimize which areas should be rendered most crisply. Content placement may also be optimized so that critical game elements, for instance text that must be read, or treasures and special pickups that must be seen clearly, can be moved into regions of the player's field of view in which the person has higher acuity.
A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patient's perceptions or reactions for determining or recording eye movement
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/5378 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for displaying an additional top view, e.g. radar screens or maps
A63F 13/55 - Controlling game characters or game objects based on the game progress
A63F 13/525 - Changing parameters of virtual cameras
94.
TUNABLE FILTERING OF VOICE-RELATED COMPONENTS FROM MOTION SENSOR
To protect a user's privacy by reducing a malicious developer's ability to eavesdrop on unwitting HMD users by converting signals from a motion sensor in the HMD to speech or speaker recognition, a microphone can record ambient sound and voice, which is subtracted from the motion sensor data before the sensor data is made available to the game developer. In another technique, a band pass filter subtracts frequencies within the voice range from the sensor signals. A third technique blends statistical noise into the motion sensor signal before passing it to game developers to obfuscate the user's speech. The amount by which voice components in the motion signal are eliminated or obfuscated can be tuned by a person or an app.
To improve the fidelity of a motion sensor, voice-induced components in signals from the motion sensor as well as haptic-induced components in signals from the motion sensor are canceled prior to outputting the final motion signal to an app requiring knowledge of device motion, such as motion of a HMD for a computer game.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
Haptic support for UI navigation is provided so that in addition to visual cues such as a cursor moving onscreen and audio cues such as sound effects, tactile feedback is provided through haptic generators in an input device such as a computer simulation controller. As the cursor moves right, for example, a haptic generator on the right side of the controller may be activated to generate a tactile sensation on the right side of the controller, and vice versa.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/426 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving on-screen location information, e.g. screen coordinates of an area at which the player is aiming with a light gun
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/04812 - Interaction techniques based on cursor appearance or behaviour, e.g. being affected by the presence of displayed objects
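The direction-to-haptics mapping in the UI navigation abstract above can be sketched as follows. The function name, intensity scale, and two-generator layout are illustrative assumptions: horizontal cursor motion drives the left/right haptic generators so that motion to the right is felt on the right side of the controller.

```python
# Minimal sketch: map a horizontal cursor delta to left/right haptic
# generator intensities on a controller. Names and scales are assumptions.

def haptic_levels(dx: float, max_intensity: float = 1.0) -> dict:
    """Return left/right haptic intensities (0..max) for a cursor delta dx:
    rightward motion activates the right generator, and vice versa."""
    magnitude = min(abs(dx), max_intensity)
    if dx > 0:
        return {"left": 0.0, "right": magnitude}
    if dx < 0:
        return {"left": magnitude, "right": 0.0}
    return {"left": 0.0, "right": 0.0}

print(haptic_levels(0.6))   # rightward motion felt on the right
print(haptic_levels(-0.3))  # leftward motion felt on the left
```

A vertical axis could be handled the same way with front/rear or top/bottom generators, complementing the visual cursor and audio cues the abstract mentions.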
97.
MULTI-USER CROSS-DEVICE SYNCHRONIZATION OF METAVERSE INSTANTIATION
A method including receiving a request to establish a multi-player session for users to enable participation in a metaverse. The method including determining whether an application generating the metaverse is installed on local devices of the users selected for participation in the metaverse by the users. The method including launching a corresponding local instance of the application when the application is installed on a corresponding local device. The method including launching a corresponding cloud instance of the application on a cloud-based streaming server when the application is not installed on the corresponding local device. The method including determining that each instance of the application for the users has been launched, wherein each instance for the users is a local instance or a cloud instance. The method including enabling a start of the multi-player session when the instances of the application for the users have been launched.
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/48 - Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
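The per-user launch decision in item 97's abstract can be sketched as follows. The data shapes and names are illustrative assumptions, not the patented implementation: each user gets a local instance when the application is installed on their device, otherwise a cloud-streamed instance, and the session may start only once every user has an instance.

```python
# Hedged sketch of the local-vs-cloud launch decision and the start gate.

def launch_instances(users: list[str], installed_on: set[str]) -> dict:
    """Assign each user a local instance if the app is installed on their
    device, otherwise a cloud-streamed instance."""
    return {u: ("local" if u in installed_on else "cloud") for u in users}

def can_start(instances: dict, users: list[str]) -> bool:
    """The multi-player session may begin once every user has an instance."""
    return all(u in instances for u in users)

users = ["alice", "bob", "carol"]
instances = launch_instances(users, installed_on={"alice", "carol"})
print(instances["bob"])             # this user streams from the cloud
print(can_start(instances, users))  # all instances launched: session may start
```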
98.
IMPROVING ACCURACY OF INTERACTIONS FOR GAZE-ENABLED AR OBJECTS WHEN IN MOTION
Methods and systems for providing an augmented reality (AR) overlay associated with a real-world object include detecting a gaze target of a user viewing the real-world environment through a pair of AR glasses by tracking a gaze of the user. Position parameters affecting the gaze of the user are tracked, and one or more attributes of the gaze target are selectively corrected to allow the user to maintain their gaze on the gaze target. An AR trigger element associated with the gaze target is triggered in response to the gaze of the user. The AR trigger element provides additional information related to the gaze target selected by the user.
A method performed for evaluating activity in a video game, including: executing a multi-player session of a video game; during the multi-player session, receiving flag event data from a first player device, the flag event data indicating that a first player has flagged a gameplay incident occurring during the multi-player session as potentially inappropriate; responsive to receiving the flag event data, then sending a request to a plurality of second player devices, wherein responsive to said request each of the plurality of second player devices renders a voting interface to obtain voting input from each of a plurality of second players regarding whether the gameplay incident is inappropriate; receiving said voting input from the plurality of second player devices, and responsive to said voting input identifying a threshold amount of the plurality of second players considering the gameplay incident to be inappropriate, then administering a penalty for the gameplay incident.
A63F 13/75 - Enforcing rules, e.g. detecting foul play or generating lists of cheating players
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
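The voting flow in the moderation abstract above reduces to a threshold test, which can be sketched as follows; the vote representation and the threshold value are illustrative assumptions.

```python
# Sketch of the penalty decision: a flagged incident is put to the other
# players, and a penalty is administered when the fraction judging it
# inappropriate meets a threshold. Threshold and data shapes are assumed.

def administer_penalty(votes: list[bool], threshold: float = 0.5) -> bool:
    """True when at least `threshold` of voters deem the incident inappropriate."""
    if not votes:
        return False  # no votes received: no penalty
    return sum(votes) / len(votes) >= threshold

print(administer_penalty([True, True, False, True]))  # 3/4 meets the threshold
print(administer_penalty([False, False, True]))       # 1/3 falls short
```

In the described system, the vote list would be assembled from the voting interfaces rendered on the second player devices, and the abstract's "threshold amount" could equally be an absolute count rather than a fraction.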
100.
LEARNING APPARATUS, MOVING IMAGE GENERATING APPARATUS, METHOD OF GENERATING LEARNED MODEL, MOVING IMAGE GENERATING METHOD, AND PROGRAM
A learning apparatus, a method of generating a learned model, and a program are capable of adequately widening the dynamic ranges of various images in a unified fashion. A training data generating section generates, on the basis of a second-class image, a first-class image associated with the second-class image, by referring to associative data where luminance values in a second dynamic range and luminance values in a first dynamic range are associated with each other. A learning section performs a learning process of a machine learning model by using the first-class image and the second-class image associated with the first-class image.
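The training-pair generation step described above can be sketched as follows. The table values and image shape are made up for illustration: a small associative table maps second-dynamic-range (e.g. SDR) luminance values to first-dynamic-range (e.g. HDR) values, producing the first-class image from a second-class image for use in training.

```python
# Illustrative sketch of generating a first-class (wide dynamic range) image
# from a second-class image via associative luminance data. LUT values are
# illustrative assumptions, not real transfer-function data.

def widen_dynamic_range(image: list[list[int]], lut: dict) -> list[list[float]]:
    """Map each pixel's second-range luminance to its associated
    first-range luminance using the associative data."""
    return [[lut[px] for px in row] for row in image]

# Associative data: SDR code value -> HDR luminance (nits), illustrative only.
lut = {0: 0.0, 128: 100.0, 255: 1000.0}
sdr = [[0, 128], [255, 128]]
hdr = widen_dynamic_range(sdr, lut)
print(hdr[1][0])  # brightest SDR code maps to the widest-range luminance
```

The resulting (first-class, second-class) image pairs are exactly what the learning section would consume when training the machine learning model.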