A watermark representing a link to an original video and/or metadata, such as haptic metadata associated with the original video, is embedded in the original video in such a way that a re-recording of the original video still preserves the watermark. The watermark can be used to link to the original video or to the metadata related thereto.
H04N 19/467 - Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with a first user of a plurality of users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with the first user; a second generating unit configured to generate, based at least in part on the current emotion data associated with the first user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data associated with the first user and the current emotion data associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A lens system provides a user with a high-definition image in which the generation of concentric circles is reduced. The lens system has one or more Fresnel lenses. A lens surface of each of the Fresnel lenses has a plurality of grooves that are concentrically formed. Both the pitch, which is the distance between two adjacent grooves, and the depth of each of the plurality of grooves vary with the distance from an optical axis that passes through the center of the lens system.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 3/04 - Simple or compound lenses with non-spherical faces with continuous faces that are rotationally symmetrical but deviate from a true sphere
G02B 3/08 - Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens
Provided is a cradle for supporting input devices having grips and tracked parts extending from the grips, the cradle including rear support parts that are capable of supporting rear parts of the grips, front support parts that are positioned forward away from the rear support parts and are capable of supporting front parts of the grips of the input devices, and side support parts that are positioned outside in a left-right direction with respect to the rear support parts and the front support parts and are capable of supporting the tracked parts.
Systems and methods are disclosed for determining that a first end-user entity has performed a task within a computer simulation for which a non-fungible token (NFT) is to be provided, where the NFT is associated with a digital asset. Responsive to the determination, the NFT is provided to the first end-user entity so that the digital asset may be used, via the NFT, across plural different computer simulations and/or across plural different computer simulation platforms. Ownership of the NFT may also be subsequently transferred to other end-user entities for their own use across different simulations and/or platforms.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
For stability of a bit rate for groups of pictures (GOPs), a rate buffer bit controller feedback loop and a proportional integral derivative (PID) bit controller feedback loop may be used to maintain at least one video buffer.
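The abstract above names a proportional integral derivative (PID) feedback loop for holding per-GOP bit usage stable around a video buffer target. A minimal sketch of that idea follows; the gain values, buffer model, and function names are illustrative assumptions, not taken from the publication.

```python
# Hypothetical sketch: a PID loop steers each frame's bit budget toward
# a target buffer fullness so that bits per GOP stay stable.
# All constants and names here are assumed for illustration.

def make_pid(kp, ki, kd):
    """Return a PID step function closed over its integral/derivative state."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(error):
        state["integral"] += error
        derivative = error - state["prev_err"]
        state["prev_err"] = error
        return kp * error + ki * state["integral"] + kd * derivative
    return step

def simulate(frame_bits, target_fullness=0.5, buffer_size=100_000):
    """Drain a virtual rate buffer at a constant average rate while the
    PID loop trims each frame's bit budget toward the fullness target."""
    pid = make_pid(kp=0.6, ki=0.05, kd=0.1)
    fullness = target_fullness
    drain = sum(frame_bits) / len(frame_bits)  # constant-rate drain
    budgets = []
    for bits in frame_bits:
        correction = pid(target_fullness - fullness)
        budget = max(1.0, bits * (1.0 + correction))
        budgets.append(budget)
        fullness += (budget - drain) / buffer_size
    return budgets
```

With a constant input the loop sits at equilibrium and passes budgets through unchanged; a burst of large frames drives the correction negative until fullness returns to target.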
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes (302) only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped (304) to haptic assets that can be output (306) by an ERM (208/700) of a computer game controller (206). The haptic output can be in synchronization with play of the audio assets on speakers.
G05G 9/047 - Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
A63F 13/20 - Input arrangements for video game devices
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
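The haptic-asset pipeline described above (low-pass filter, then a mapping module driving an ERM) can be sketched as follows. The single-pole filter, frame size, and clamping are assumptions for illustration only; the publication does not specify them.

```python
# Hedged sketch of the described pipeline: low-pass filter an audio
# asset, then map the surviving low-frequency energy to 0..1 drive
# levels for an eccentric rotating mass (ERM) actuator.
# Filter type, alpha, and frame size are illustrative assumptions.

def low_pass(samples, alpha=0.1):
    """Single-pole IIR low-pass filter; alpha sets the effective cutoff."""
    out, prev = [], 0.0
    for s in samples:
        prev = prev + alpha * (s - prev)
        out.append(prev)
    return out

def map_to_erm(filtered, frame=64):
    """Map each frame's peak amplitude to an ERM drive level in 0..1."""
    levels = []
    for i in range(0, len(filtered), frame):
        chunk = filtered[i:i + frame]
        peak = max(abs(s) for s in chunk)
        levels.append(min(1.0, peak))  # clamp to the actuator's range
    return levels
```

The resulting level sequence can be played back in synchronization with the audio asset, one drive update per frame.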
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with two or more users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with each of the two or more users; a selecting unit configured to select a first user based on at least part of the current emotion data; a second generating unit configured to generate, based at least in part on the current emotion data that is associated with a second user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data and the current emotion data that is associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
To enhance the sensory experience of voice, in some cases at a later time than the speech was spoken to enable reliving emotions and experiences, vocal sounds captured by a microphone are processed by a computer game controller API. The API plays back the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user as demanded by the computer game.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
H04R 3/04 - Circuits for transducers for correcting frequency response
10.
SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
A system includes a first image sensor that generates a first image signal by synchronously scanning all pixels at a prescribed timing, a second image sensor including an event-driven type vision sensor that, upon detecting a change in an intensity of incident light on each of the pixels, generates a second image signal asynchronously, an inertial sensor that acquires attitude information on the first image sensor and the second image sensor, a first computation processing device that recognizes a user on the basis of at least the second image signal and calculates coordinate information regarding the user on the basis of at least the second image signal, a second computation processing device that performs coordinate conversion on the coordinate information on the basis of the attitude information, and an image generation device that generates a display image which indicates a condition of the user, on the basis of the converted coordinate information.
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option to rebuild the game objects. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/847 - Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
To enhance the sensory experience of voice, in some cases at a later time than the speech was spoken (300) to enable reliving emotions and experiences, vocal sounds captured by a microphone are processed (304) by a computer game controller API. The API plays back (306) the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user as demanded by the computer game.
G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option to rebuild the game objects. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
14.
ALTERING AUDIO AND/OR PROVIDING NON-AUDIO CUES ACCORDING TO LISTENER'S AUDIO DEPTH PERCEPTION
The 3D audio perception of a listener such as a computer gamer is tested "stereoscopically" and the results are input to a source of audio such as a computer game. Audio (802) from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered (810) to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided (814) to alert the listener of 3D audio events.
Groups of people control a computer game using teamwork. This can be done by eye tracking (400) of each person to detect where each person is looking on screen at objects such as game control objects. The control action of the object looked at by the most people (404) in a "heat map" style of data collection is implemented (408) by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
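The "heat map" aggregation described above amounts to tallying, per game-control object, how many people's gaze resolves to it, then acting on the most-watched object. A minimal sketch, with invented object names, follows.

```python
from collections import Counter

# Hedged sketch of gaze aggregation: each tracked person's on-screen
# gaze point is assumed to have already been resolved to a control
# object (or None). The object attracting the most gazes wins.

def aggregate_gaze(gaze_targets):
    """Return the control object looked at by the most people,
    or None when no gaze resolved to any object."""
    hits = Counter(t for t in gaze_targets if t is not None)
    if not hits:
        return None
    target, _count = hits.most_common(1)[0]
    return target
```

The game would then implement the control action bound to the returned object, e.g. `aggregate_gaze(["door", "door", "lever", None])` selects `"door"`.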
Eye tracking (1100) of the wearer of a virtual reality headset is used to customize/personalize (1102) VR video. Based on eye tracking, the VR scene may present different types of trees (302, 304, 306) for different types of gaze directions. As another example, based on gaze direction, a VR scene can be augmented with additional objects (502) based on gaze direction at a particular related object. A friend's gaze-dependent personalization may be imported (1104) into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
17.
INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, AND PROGRAM
An information processing device obtains information regarding the position of each fingertip of a user in a real space, and determines contact between a virtual object set within a virtual space and a finger of the user. The information processing device sets the virtual object in a partly deformed state such that a part of the virtual object, the part corresponding to the position of the finger determined to be in contact with the object among the fingers of the user, is located more to a far side from a user side than the finger, and displays the virtual object having the shape set thereto as an image in the virtual space on a display device.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/25 - Output arrangements for video game devices
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
A processing device includes: a detection unit configured to detect input information representative of a sequence of input signals for a video game that are input by a user using one or more input controls; an identification unit configured to identify one or more input signal variations in dependence upon one or more differences between the detected input information and predetermined input information representative of one or more predetermined sequences of input signals for the video game; a generation unit configured to generate assistance information in dependence upon the one or more identified input signal variations; and a provision unit configured to provide the generated assistance information to the user.
An image generation apparatus increases an adjustment amount of a luminance distribution to a target value B at a time t0 at which an amount of light entering the eyes of a user changes to such a degree that the change has an influence on an action of photoreceptor cells, to thereby cause a head-mounted display to display an image 310b having a luminance increased from that of an original image 310a. The image generation apparatus gradually decreases the adjustment amount of the luminance distribution during a restoration period Δt in such a manner that an image 310c having the original luminance distribution is displayed at a later time t1.
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 9/69 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
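The restoration schedule described above, raising the luminance adjustment to a target value B at time t0 and then easing it back to zero over the restoration period Δt, can be sketched as a simple time function. The linear decay is an assumption; the abstract says only that the adjustment "gradually decreases".

```python
# Illustrative sketch of the luminance-restoration schedule: jump the
# adjustment amount to target_b at t0, then ramp it linearly back to
# zero over the restoration period dt. The linear shape is an assumed
# choice, not stated in the publication.

def adjustment(t, t0, dt, target_b):
    """Luminance adjustment amount applied to the display image at time t."""
    if t < t0:
        return 0.0            # before the light change: no adjustment
    if t >= t0 + dt:
        return 0.0            # restoration complete: original luminance
    # Linear decay from target_b at t0 down to 0 at t0 + dt.
    return target_b * (1.0 - (t - t0) / dt)
```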
20.
CUSTOMIZABLE VIRTUAL REALITY SCENES USING EYE TRACKING
Eye tracking of the wearer of a virtual reality headset is used to customize/personalize VR video. Based on eye tracking, the VR scene may present different types of trees for different types of gaze directions. As another example, based on gaze direction, a VR scene can be augmented with additional objects based on gaze direction at a particular related object. A friend's gaze-dependent personalization may be imported into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
To avoid startling a computer game player immersed in virtual reality for example, active noise cancelation is gradually introduced. As an alternative, ambient noise is gradually increased to conceal loud external sounds. The noise cancelation or ambient noise generation is established according to sound exceeding a background threshold as detected by a microphone. The noise cancelation or ambient noise generation can be established according to images of a noisy object as imaged by a camera.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
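The gradual introduction of noise cancellation described above can be sketched as a gain that eases in while the microphone reports sound over the background threshold and eases back out otherwise. The threshold value and step count below are assumptions for illustration.

```python
# Hedged sketch of the gradual cancellation ramp: when the measured
# ambient level crosses a background threshold, the cancellation gain
# eases toward full strength over several steps instead of snapping on,
# avoiding a startling transition for the immersed player.

def ramp_gain(levels, threshold=0.3, steps=5):
    """Return one cancellation gain (0..1) per measured ambient level."""
    gain = 0.0
    out = []
    for level in levels:
        if level > threshold:
            gain = min(1.0, gain + 1.0 / steps)   # ease in
        else:
            gain = max(0.0, gain - 1.0 / steps)   # ease back out
        out.append(round(gain, 6))
    return out
```

The same ramp could drive a masking ambient-noise generator instead of an anti-phase canceller.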
22.
ALTERING AUDIO AND/OR PROVIDING NON-AUDIO CUES ACCORDING TO LISTENER'S AUDIO DEPTH PERCEPTION
The 3D audio perception of a listener such as a computer gamer is tested “stereoscopically” and the results input to a source of audio such as a computer game. Audio from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided to alert the listener of 3D audio events.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
23.
METHOD AND SYSTEM FOR AUTO-PLAYING PORTIONS OF A VIDEO GAME
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
24.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
25.
SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing circuit for processing event signals generated by an event-based vision sensor (EVS), the signal processing circuit comprising a memory for storing program code and a processor for executing operations according to the program code, wherein the operations include: detecting at least one line segment or curve formed by a set of in-block positions of event signals generated in blocks into which an EVS detection area is divided; and correcting at least one of a first line segment or a first curve detected in a first block, or a second line segment or a second curve detected in a second block adjacent to the first block, so that a first end point of the first line segment or the first curve overlaps a second end point of the second line segment or the second curve.
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/497 - Partially or entirely replaying previous game actions
27.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A condition information acquisition section of an image generation device acquires a condition of communication and condition information of a head-mounted display. An image generation section generates a display image including distorted images for a left eye and a right eye. A reduction processing section converts the display image to data in which different regions have different reduction ratios in accordance with the condition of communication, etc., and transmits the data through an output section. An image size restoration section of the head-mounted display restores the transmitted data to the display image in an original size to cause the display image to be displayed by a display section.
A method of improving accessibility for the user operation of a first application on a computer includes the steps of taking one or more measurements of a current user's interaction with an application on the computer, comparing the one or more measurements with expectations derived from measurements from a first corpus of users, characterising one or more needs of the current user based upon the comparison, and modifying at least a first property of the first application responsive to the characterised need or needs.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
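The comparison step in the accessibility method above, measuring a current user's interaction and comparing it with expectations derived from a corpus of prior users, can be sketched as a deviation test. The z-score formulation and the 2.0 cutoff are assumptions for illustration; the publication does not specify the statistic used.

```python
from statistics import mean, stdev

# Hypothetical sketch: score one of the current user's interaction
# measurements (e.g. a reaction time) against a corpus of prior users,
# and flag a candidate accessibility need when it deviates strongly.
# The z-score test and cutoff are illustrative assumptions.

def flag_need(current_value, corpus_values, cutoff=2.0):
    """Return True when the measurement deviates from the corpus
    expectation by more than `cutoff` sample standard deviations."""
    mu = mean(corpus_values)
    sigma = stdev(corpus_values)
    if sigma == 0:
        return current_value != mu
    return abs(current_value - mu) / sigma > cutoff
```

A flagged need would then drive the final step of the method, modifying a property of the application (e.g. input timing windows) for that user.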
30.
GROUP CONTROL OF COMPUTER GAME USING AGGREGATED AREA OF GAZE
Groups of people control a computer game using teamwork. This can be done by eye tracking of each person to detect where each person is looking on screen at objects such as game control objects. The control action of the object looked at by the most people in a “heat map” style of data collection is implemented by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
31.
HAPTIC ASSET GENERATION FOR ECCENTRIC ROTATING MASS (ERM) FROM LOW FREQUENCY AUDIO CONTENT
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped to haptic assets that can be output by an ERM of a computer game controller. The haptic output can be in synchronization with play of the audio assets on speakers.
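The filter-then-map pipeline described above can be sketched as follows. The filter coefficient, the RMS envelope, and the 0–255 drive range are assumptions for illustration; the patent does not specify them:

```python
import math

def lowpass(samples, alpha=0.1):
    """Single-pole IIR low-pass filter: passes low-frequency content
    (alpha near 0 gives stronger smoothing, i.e. a lower cutoff)."""
    out, y = [], 0.0
    for x in samples:
        y += alpha * (x - y)
        out.append(y)
    return out

def erm_intensity(samples):
    """Map the RMS level of the low-passed audio to an ERM drive
    value in 0..255 (assumes samples are normalized to [-1, 1])."""
    low = lowpass(samples)
    rms = math.sqrt(sum(s * s for s in low) / len(low))
    return min(255, int(rms * 255))
```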
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
32.
SYSTEMS AND METHODS FOR INTEGRATING REAL-WORLD CONTENT IN A GAME
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as real-world generated content (RGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
33.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
34.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
35.
SYSTEMS AND METHODS FOR INTEGRATING REAL-WORLD CONTENT IN A GAME
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as real-world generated content (RGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A harassment detection apparatus includes: an executing unit configured to execute a session of a shared environment; an input unit configured to receive biometric data, the biometric data being associated with a plurality of users participating in the executed session of the shared environment; a generating unit configured to generate, based on at least a part of the biometric data, emotion data associated with the plurality of users, the emotion data comprising a valence value and/or an arousal value associated with each of the plurality of users; a detection unit configured to detect, responsive to at least a first part of the emotion data satisfying one or more of a first set of criteria, one or more first users associated with the at least first part of the emotion data; and a modifying unit configured to modify, responsive to the detection of the one or more first users, one or more aspects of the shared environment.
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
An information processing device includes a control unit that performs control to output information regarding an image of a user viewpoint in a virtual space, in which the control unit performs control to switch a display corresponding to a performer included in the image between a live-action image generated based on a captured image of the performer and a character image corresponding to the performer, according to a result of detecting a state of at least one user in the virtual space or a state of the virtual space.
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
40.
RAPID GENERATION OF 3D HEADS WITH NATURAL LANGUAGE
Two-dimensional images are converted (302) to a 3D neural radiance field (NeRF), which is modified (402) based on text input to resemble the type of character demanded by the text. An open-source "CLIP" model scores (404) how well an image matches a line of text to produce a final 3D NeRF, which may be converted (408) to a polygonal mesh and imported into a computer simulation such as a computer game.
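CLIP-style scoring compares an image embedding with a text embedding by cosine similarity, and the best-scoring candidate is kept. A minimal stand-in for that selection step (the toy vectors here are hypothetical, not real CLIP outputs):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def best_match(candidate_embeddings, text_embedding):
    """Return the index of the candidate image embedding that best
    matches the text embedding (higher cosine = better match)."""
    scores = [cosine(e, text_embedding) for e in candidate_embeddings]
    return max(range(len(scores)), key=scores.__getitem__)
```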
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the first user within a time period of a start of the communication of the one or more words; classify the one or more physiological characteristics using the one or more second signals; based on a classification of the one or more words and a classification of the one or more physiological characteristics, generate an action signal indicating that an action associated with the first user of the online content is to be taken, the action signal indicating a characteristic of the action determined based on a combination of the classification of the one or more words and the classification of the one or more physiological characteristics; and output the action signal.
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the one or more second users in response to the communicated one or more words; classify the one or more physiological characteristics of the one or more second users using the one or more second signals; determine, based on a classification of the one or more words and a classification of the one or more physiological characteristics of the one or more second users, whether to generate an action signal, the action signal indicating that an action associated with the first user of the online content is to be taken; and when it is determined an action signal is to be generated, generate and output the action signal.
A method including receiving from a device over a network at an optimizer server a plurality of game assets of a video game. The method including generating at least one combined game asset to represent the plurality of game assets. The method including sending the at least one combined game asset to the device for use in the video game.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
Deep learning techniques such as vector graphics (300) are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder (302), such as a deep neural network (e.g., a recurrent neural network (RNN) or a transformer), receives (400) vector graphics and generates (402) an encoded output. The encoded output is decoded (404) by a 3D decoder, such as another deep neural network, that outputs 2D graphics for comparison with the original image. Loss is computed (408) between the original image and the output of the 3D decoder. The loss is back-propagated (410) to train the vector graphics encoder to generate 3D content.
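The encode-decode-compare-backpropagate loop described above can be illustrated with a deliberately tiny one-parameter model (the real encoder and decoder are deep networks; everything below is an illustrative stand-in, not the patented system):

```python
def train_autoencoder(xs, lr=0.01, epochs=200):
    """Toy 1-parameter encoder/decoder trained by gradient descent.

    encoder: z = w * x   (stands in for the vector-graphics encoder)
    decoder: y = v * z   (stands in for the 3D decoder's 2D output)
    loss:    (y - x)^2, back-propagated to update w and v.
    """
    w, v = 0.5, 0.5
    for _ in range(epochs):
        for x in xs:
            y = v * (w * x)          # forward pass: encode, then decode
            err = y - x              # reconstruction error vs. original
            gw = 2 * err * v * x     # d(loss)/dw
            gv = 2 * err * w * x     # d(loss)/dv
            w -= lr * gw             # back-propagate: update encoder
            v -= lr * gv             # back-propagate: update decoder
    return w, v

# After training, decode(encode(x)) approximates x, i.e. w * v ~= 1
```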
Deep learning is used to dynamically adapt virtual humans (300) in metaverse applications. The adaptation can be according to user preferences (400). In addition or alternatively, virtual humans and pets (302) can be adapted for metaverse applications based on demographics (408) of the user. The user's personal demographics may be used to establish (410) the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
To help a computer game player in understanding a computer game, upon pausing (300, 500) the game, visual subtitles may be presented (304). In addition, or alternatively, Braille representing subtitles may be output (506) as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented (510). Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down (310) from normal speed.
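The Braille-as-vibration idea above can be sketched by flattening each character's six-dot Braille cell into an on/off pulse sequence. The dot patterns below follow standard Braille numbering for a few letters only; the mapping table and function are illustrative, not from the patent:

```python
# Braille cells are 2x3 dot patterns; a raised dot becomes a vibration
# pulse (1) and a flat dot a pause (0), ordered as dots 1..6.
BRAILLE = {
    "a": (1, 0, 0, 0, 0, 0),   # dot 1
    "b": (1, 1, 0, 0, 0, 0),   # dots 1, 2
    "c": (1, 0, 0, 1, 0, 0),   # dots 1, 4
}

def subtitle_to_vibrations(text):
    """Flatten subtitle text into an on/off vibration sequence for a
    controller touch pad (characters without a mapping are skipped)."""
    seq = []
    for ch in text.lower():
        seq.extend(BRAILLE.get(ch, ()))
    return seq
```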
A method including receiving from a device over a network at an optimizer server a plurality of game assets of a video game. The method including generating at least one combined game asset to represent the plurality of game assets. The method including sending the at least one combined game asset to the device for use in the video game.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
48.
SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing circuit which processes an event signal generated by an event-based vision sensor (EVS) and which comprises a memory for storing a program code and a processor for executing an operation according to the program code. The operation includes dividing an EVS detection region into blocks and detecting, from the event signals generated in a block, a relationship between positions within the block: a first method is used if the ratio of the eigenvalues of the variance-covariance matrix of the positions exceeds a threshold value, and a second method different from the first method is used if the ratio of the eigenvalues does not exceed the threshold value.
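The eigenvalue-ratio test above can be computed in closed form for the 2x2 variance-covariance matrix of event positions. A sketch (the threshold value and method names are hypothetical; the patent does not specify them):

```python
import math

def eigen_ratio(points):
    """Ratio (major/minor) of the eigenvalues of the 2x2
    variance-covariance matrix of event positions (x, y)."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points) / n
    syy = sum((y - my) ** 2 for _, y in points) / n
    sxy = sum((x - mx) * (y - my) for x, y in points) / n
    # Closed-form eigenvalues of [[sxx, sxy], [sxy, syy]]
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    root = math.sqrt(max(tr * tr - 4 * det, 0.0))
    lam_major, lam_minor = (tr + root) / 2, (tr - root) / 2
    return lam_major / lam_minor if lam_minor > 1e-12 else float("inf")

def detect(points, threshold=10.0):
    """Choose the detection method by eigenvalue ratio: a large ratio
    means the events are nearly collinear (strongly anisotropic)."""
    return "first_method" if eigen_ratio(points) > threshold else "second_method"
```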
A game presenting system includes a plurality of game systems, each executing a process of a game in which a plurality of users participate, and a game presenting machine for presenting a situation of the game executed by each game system. The game presenting machine obtains a motion image related to the game executed by each of the plurality of game systems and produces a game presenting screen image showing, as a list, at least some of the plurality of motion images obtained.
A63F 13/5252 - Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
51.
AI PLAYER MODEL GAMEPLAY TRAINING AND HIGHLIGHT REVIEW
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user include creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing the AI player with access to the video game for game play. The access allows the AI player to provide inputs to the video game that substantially mimic a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
An input device for controlling a computing system includes one or more sensors configured to sense a change in weight distribution of a user positioned on the input device in use, and a transmitter configured to transmit a signal based on the sensed change in weight distribution, for use in a virtual joystick input to a computing system.
Gaze tracking data representing a user's gaze is analyzed to determine one or more regions of interest. One or more gaze tracking parameters are determined from the gaze tracking data. Adjusted foveation data is determined representing an adjusted size and/or shape of one or more regions of interest in one or more images to be subsequently presented to the user based on the one or more gaze tracking parameters. The compression of the one or more transmitted images is adjusted so that fewer bits are needed to transmit data for portions of an image outside the one or more regions of interest than for portions of the image within the one or more regions of interest. Adjusting compression includes eliminating the region(s) of interest from images that are presented to the user during the saccade or blink.
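Foveated compression as described above amounts to assigning coarser quantization (fewer bits) to blocks outside the region of interest. A minimal sketch, assuming a circular region of interest and illustrative quantization-parameter values (none of these specifics are from the patent):

```python
def quantization_map(blocks_w, blocks_h, roi_center, roi_radius,
                     qp_inside=10, qp_outside=40):
    """Per-block quantization parameters: blocks whose center lies
    within the region of interest keep fine quantization (more bits);
    blocks outside it get coarse quantization (fewer bits)."""
    cx, cy = roi_center
    qp = []
    for by in range(blocks_h):
        row = []
        for bx in range(blocks_w):
            inside = (bx - cx) ** 2 + (by - cy) ** 2 <= roi_radius ** 2
            row.append(qp_inside if inside else qp_outside)
        qp.append(row)
    return qp
```

Adjusting the region's size or shape from the gaze tracking parameters then only requires recomputing this map before encoding each frame.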
Methods and apparatus provide for: a head-mounted display to be worn by a user located within a physical environment for engaging in an interactive experience; an entertainment device including a processing section that executes an interactive application that is manipulated in part through receiving inputs from the user; a communication partner operating to relay video and audio signals output from the entertainment device to the head-mounted display; continuously acquiring information of the environment at a predetermined frame rate; and at least one camera on the head-mounted display, which captures images of the physical environment, where the communication partner is connected to the entertainment device with wired communication and to the head-mounted display with wireless communication.
To help a computer game player in understanding a computer game, upon pausing the game, visual subtitles may be presented. In addition, or alternatively, Braille representing subtitles may be output as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented. Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down from normal speed.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
Deep learning is used to dynamically adapt virtual humans in metaverse applications. The adaptation can be according to user preferences. In addition or alternatively, virtual humans and pets can be adapted for metaverse applications based on demographics of the user. The user's personal demographics may be used to establish the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
Deep learning techniques such as vector graphics are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder such as a deep neural network such as a recurrent neural network (RNN) or transformer receives vector graphics and generates an encoded output. The encoded output is decoded by a 3D decoder such as another deep neural network that outputs 2D graphics for comparison with the original image. Loss is computed between the original and the output of the 3D decoder. The loss is back propagated to train the vector graphics encoder to generate 3D content.
A method for modifying user sentiment is described. The method includes analyzing behavior of a group of players during a play of a game. The behavior of the group of players is indicative of a sentiment of the group of players during the play of the game. The method includes accessing a non-player character (NPC) during the play of the game. The NPC has a characteristic that influences a change in the sentiment of the group of players. The method includes placing the NPC into one or more scenes of the game during the play of the game for a period of time until the change in the sentiment of the group of players is determined. The change in the sentiment of the group of players is determined by analyzing the behavior of the group of players during said play of the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
59.
AI PLAYER MODEL GAMEPLAY TRAINING AND HIGHLIGHT REVIEW
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user include creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing the AI player with access to the video game for game play. The access allows the AI player to provide inputs to the video game that substantially mimic a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/86 - Watching games played by other players
Methods and apparatus provide for acquiring position information about a head-mounted display; performing information processing using the position information about the head-mounted display; generating and outputting data of an image to be displayed as a result of the information processing; and generating and outputting data of an image of a user guide indicating position information about a user in a real space using the position information about the head-mounted display, where the image of the user guide represents a state of the real space in which the user is physically located, as viewed obliquely.
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
Ambisonics audio such as may be used for computer simulations such as computer games is improved by using multi-order optimizations that frame an optimization problem that minimizes a cost function across a subset of Ambisonics orders for a chosen Ambisonics order “N”. In a simple form, this cost function minimizes error across all orders (0 ≤ n ≤ N), and additional weighting is applied to emphasize or de-emphasize particular orders. The cost functions and optimization criteria may be different for binaural and speaker outputs.
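Under assumed notation (the abstract names no symbols), the weighted multi-order objective can be written as:

$$ J = \sum_{n=0}^{N} w_n \, \lVert e_n \rVert^2 $$

where $e_n$ is the reconstruction error contributed by Ambisonics order $n$ and $w_n$ is a per-order weight: setting all $w_n = 1$ recovers the simple form that minimizes error uniformly across all orders, while larger $w_n$ emphasizes order $n$ in the optimization.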
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
A technique for encoding Ambisonics audio includes inputting audio to multiple Ambisonics encoders producing respective Ambisonics soundfields. Prior to mixing the soundfields, each soundfield is weighted to mitigate artifacts from order-truncation. After weighting, the soundfields are mixed to produce Ambisonics audio.
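The weight-then-mix step described above can be sketched as follows, with each soundfield represented as a list of per-channel sample lists. The per-channel weight layout (e.g., tapering higher-order channels) is an assumption for illustration:

```python
def mix_soundfields(soundfields, weights):
    """Weight each Ambisonics soundfield per channel, then sum.

    soundfields: list of soundfields, each a list of channel sample lists
                 (all soundfields share the same channel/sample counts).
    weights:     one weight list per soundfield, one weight per channel,
                 e.g. tapering higher-order channels to mitigate
                 order-truncation artifacts before mixing.
    """
    n_ch = len(soundfields[0])
    n_smp = len(soundfields[0][0])
    mixed = [[0.0] * n_smp for _ in range(n_ch)]
    for sf, w in zip(soundfields, weights):
        for ch in range(n_ch):
            for i in range(n_smp):
                mixed[ch][i] += w[ch] * sf[ch][i]
    return mixed
```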
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04R 5/02 - Spatial or constructional arrangements of loudspeakers
H04S 5/00 - Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
64.
SYSTEMS AND METHODS FOR MODIFYING USER SENTIMENT FOR PLAYING A GAME
A method for modifying user sentiment is described. The method includes analyzing behavior of a group of players during a play of a game. The behavior of the group of players is indicative of a sentiment of the group of players during the play of the game. The method includes accessing a non-player character (NPC) during the play of the game. The NPC has a characteristic that influences a change in the sentiment of the group of players. The method includes placing the NPC into one or more scenes of the game during the play of the game for a period of time until the change in the sentiment of the group of players is determined. The change in the sentiment of the group of players is determined by analyzing of the behavior of the group of players during said play of the game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
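The method's control flow can be sketched as a loop: analyze the group's behavior to get a baseline sentiment, place the NPC, and keep it in the scene until repeated analysis shows the sentiment has changed. The callbacks and sentiment labels below are stand-ins, not part of the patent.

```python
# Hypothetical control loop for the described method: an NPC with a
# sentiment-influencing characteristic stays in the scene(s) until
# behavior analysis detects a change in group sentiment.

def run_npc_intervention(read_group_sentiment, place_npc, remove_npc,
                         max_ticks=100):
    baseline = read_group_sentiment()      # sentiment before intervention
    place_npc()                            # insert the NPC into active scenes
    for tick in range(max_ticks):
        current = read_group_sentiment()   # analyze behavior each tick
        if current != baseline:            # change in sentiment determined
            remove_npc()
            return tick, current
    remove_npc()                           # period of time elapsed; give up
    return max_ticks, baseline
```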
Provided is an operation device (10) capable of expressing a pseudo weight. This operation device (10) includes: a plurality of link shafts (SF); a plurality of node mechanism units (ND) that form a grid with the plurality of link shafts (SF), each of the node mechanism units (ND) respectively holding one end of two or more link shafts (SF) among the plurality of link shafts (SF) in a manner that allows changing of the orientations of the two or more link shafts (SF); a placement table (90) on which the plurality of node mechanism units (ND) are placed; and a pulling mechanism (80) that pulls the node mechanism units (ND) in a direction for returning to a predetermined reference position on the placement table (90).
Interactive display of virtual trophies includes scanning a surface for one or more location anchor points. A trophy rack location is determined using the location anchor points. A trophy rack mesh is applied over an image frame of the surface using the determined trophy rack location. One or more trophy models are displayed over the trophy rack mesh with a display device. Trophy rack layout information is generated from the one or more trophy models and the trophy rack mesh, and finally the trophy rack layout information is stored or transmitted.
A technique for encoding Ambisonics audio includes inputting audio to multiple Ambisonics encoders producing respective Ambisonics soundfields. Prior to mixing the soundfields, each soundfield is weighted to mitigate artifacts from order-truncation. After weighting, the soundfields are mixed to produce Ambisonics audio. Accordingly, an apparatus includes at least one processor configured with instructions which are executable to receive mono audio sources with direction and target Ambisonics order respectively and send respective mono audio with respective direction to a respective Ambisonics encoder to cause the encoder to output a respective soundfield of respective Ambisonics order.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04S 3/00 - Systems employing more than two channels, e.g. quadraphonic
H04N 21/2368 - Multiplexing of audio and video streams
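The weighting-then-mixing step can be illustrated with a toy model. One standard fact used here is that a full 3D Ambisonics soundfield of order N has (N+1)^2 channels; everything else (the per-order gain vectors, the function names) is an invented sketch of the idea of weighting each encoder's soundfield per order before summing fields of different orders into one mix.

```python
# Illustrative sketch (not the patented method): soundfields encoded at
# different Ambisonics orders are weighted per order to soften
# order-truncation artifacts, then mixed channel-wise at the output order.

def channels(order):
    # 3D Ambisonics channel count for a given order
    return (order + 1) ** 2

def order_of_channel(ch):
    # invert channel index -> Ambisonics order (ACN-style numbering)
    n = 0
    while channels(n) <= ch:
        n += 1
    return n

def weight_soundfield(field, per_order_gain):
    # apply one gain per order to every channel belonging to that order
    return [s * per_order_gain[order_of_channel(i)] for i, s in enumerate(field)]

def mix(fields, out_order):
    mixed = [0.0] * channels(out_order)
    for f in fields:
        for i, s in enumerate(f):   # lower-order fields fill only leading channels
            mixed[i] += s
    return mixed
```

A first-order field weighted with a taper on its order-1 channels can then be mixed with a zero-order field into a single first-order output.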
68.
SYSTEMS AND METHODS FOR CONTROLLING DIALOGUE COMPLEXITY IN VIDEO GAMES
A method of controlling the complexity levels of dialogues in a video game includes the steps of: loading a first dialogue from a game database according to a parameter related to complexity level of dialogue in user settings; outputting the first dialogue; accepting a user operation in response to the first dialogue; adjusting the parameter related to complexity level of dialogues in user settings based on the user operation; loading a second dialogue from the game database according to the adjusted parameter related to complexity level of dialogue; and outputting the second dialogue.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
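The load-adjust-load cycle can be sketched as follows. The dialogue database, complexity levels, and user-operation signals are invented for the example; the patent does not specify them.

```python
# Minimal sketch of the described loop: a complexity parameter in user
# settings selects the dialogue text, and the player's response nudges
# the parameter before the next dialogue is loaded. Data is illustrative.

DIALOGUE_DB = {
    1: "Press the button.",
    2: "Press the button to open the gate.",
    3: "Actuate the illuminated button to disengage the gate mechanism.",
}

def load_dialogue(level):
    # clamp to the levels the database actually contains
    return DIALOGUE_DB[max(1, min(level, max(DIALOGUE_DB)))]

def adjust_complexity(level, user_operation):
    if user_operation == "requested_hint":      # player struggled: simplify
        return max(1, level - 1)
    if user_operation == "answered_quickly":    # player breezed through: enrich
        return min(max(DIALOGUE_DB), level + 1)
    return level
```

A session would then call `load_dialogue`, output it, collect the operation, call `adjust_complexity`, and load the next line with the adjusted parameter.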
Interactive display of virtual trophies includes scanning a surface for one or more location anchor points. A trophy rack location is determined using the location anchor points. A trophy rack mesh is applied over an image frame of the surface using the determined trophy rack location. One or more trophy models are displayed over the trophy rack mesh with a display device. Trophy rack layout information is generated from the one or more trophy models and the trophy rack mesh, and finally the trophy rack layout information is stored or transmitted.
Ambisonics audio, such as may be used for computer simulations such as computer games, is improved by using multi-order optimizations that frame an optimization problem minimizing a cost function (602) across a subset of Ambisonics orders for a chosen Ambisonics order "N". In a simple form, this cost function minimizes error across all orders (0 <= n <= N), and additional weighting (604) is applied to emphasize or de-emphasize particular orders. The cost functions and optimization criteria may be different for binaural and speaker outputs.
G10L 19/00 - Speech or audio signal analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
G06F 16/60 - Information retrieval; Database structures therefor; File system structures therefor of audio data
71.
SYSTEMS AND METHODS OF PROTECTING PERSONAL SPACE IN MULTI-USER VIRTUAL ENVIRONMENT
A method for protecting personal space in a multi-user virtual environment includes the steps of generating an avatar for a target user in the multi-user virtual environment, determining a relationship score between the target user and a peer user, creating a personal space around the avatar of the target user, wherein the dimensions of the personal space are computed based on the relationship score with the peer user, detecting the peer user's avatar crossing the boundary of the personal space, and applying rules to the peer user to restrict his/her interactions with the target user.
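The relationship-score-to-dimensions step can be illustrated numerically. The radius formula, score range, and restriction rules below are assumptions chosen for the sketch; the patent only says the dimensions are computed from the relationship score.

```python
# Hypothetical numbers for the described scheme: the personal-space radius
# shrinks as the relationship score between target and peer grows, and a
# peer crossing the boundary gets interaction restrictions applied.

def personal_space_radius(relationship_score, max_radius=3.0, min_radius=0.5):
    # score in [0, 1]: strangers get the full bubble, close friends almost none
    score = max(0.0, min(1.0, relationship_score))
    return max_radius - score * (max_radius - min_radius)

def rules_for_peer(distance, radius):
    if distance >= radius:
        return []                                  # outside the bubble
    return ["mute_voice", "block_gestures"]        # example restrictions
```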
An input device includes: a plurality of input members; an upper surface having a right region in which a part of the plurality of input members is disposed, a left region in which another part of the plurality of input members is disposed, and a center region that is a region between the right region and the left region; and a light emitting region formed along an outer edge of the center region. The light emitting region includes a first light emitting portion configured to indicate identification information assigned to a plurality of input devices connected to an information processing apparatus, and a second light emitting portion configured to emit light based on information different from the identification information.
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
F21V 33/00 - Structural combinations of lighting devices with other articles, not otherwise provided for
G06F 3/0338 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of limited linear or angular displacement of an operating part of the device from a neutral position, e.g. isotonic or isometric joysticks
73.
OVERLAPPING RENDERING, STREAMOUT, AND DISPLAY AT A CLIENT OF RENDERED SLICES OF A VIDEO FRAME
A method of cloud gaming is disclosed. The method including receiving an encoded video frame at a client, wherein a server executes an application to generate a rendered video frame which is then encoded at an encoder at the server as the encoded video frame, wherein the encoded video frame includes one or more encoded slices that are compressed. The method including decoding the one or more encoded slices at a decoder of the client to generate one or more decoded slices. The method including rendering the one or more decoded slices for display at the client. The method including beginning to display the one or more decoded slices that are rendered before the one or more encoded slices are fully received at the client.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
H04L 67/10 - Protocols in which an application is distributed across nodes in the network
H04L 67/1095 - Replication or mirroring of data, e.g. scheduling or transport for data synchronisation between network nodes
H04L 67/131 - Protocols for games, networked simulations or virtual reality
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
H04N 21/8547 - Content authoring involving timestamps for synchronizing content
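The overlap the abstract describes (displaying decoded slices before the whole frame has arrived) can be shown with a toy event trace. Decode and scan-out are elided to log entries; the function and event names are invented for illustration.

```python
# Toy illustration of overlapped receive/decode/display: each encoded
# slice is decoded and handed to the display as soon as it arrives, so
# display of a frame begins before the last slice has been received.

from collections import deque

def receive_decode_display(incoming_slices):
    log = []                        # interleaved event trace
    pending = deque(incoming_slices)
    while pending:
        s = pending.popleft()
        log.append(f"recv:{s}")
        log.append(f"decode:{s}")   # decode this slice immediately
        log.append(f"display:{s}")  # scan out without waiting for the rest
    return log
```

In the trace, `display:0` precedes `recv:1`, which is the point of the technique: display latency is hidden behind the remaining network transfer.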
74.
TRACKING HISTORICAL GAME PLAY OF ADULTS TO DETERMINE GAME PLAY ACTIVITY AND COMPARE TO ACTIVITY BY A CHILD, TO IDENTIFY AND PREVENT CHILD FROM PLAYING ON AN ADULT ACCOUNT
Methods and systems for warning of misuse of a user account of an adult user include tracking use of the user account. The interactions at the user account are monitored, and when the content accessed by a user is adult content and the user is determined to be a child, an alert is provided to the adult user informing the adult user that the child is accessing age-inappropriate content.
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
Provided is an operation device (10) capable of expressing haptic perception. The operation device (10) comprises: a plurality of link shafts (SF); a plurality of node mechanism parts (ND) forming a lattice shape with the plurality of link shafts (SF), each of the node mechanism parts (ND) holding one end of at least two or more link shafts (SF) among the plurality of link shafts (SF) so that it is possible to change the orientation of the two or more link shafts (SF); and a vibration unit that vibrates the operation device (10) according to the state of at least one of the plurality of node mechanism parts (ND).
A state information acquisition section of an image generation apparatus acquires state information of the head of a user. An image generation section generates a display image corresponding to a visual field. A down-sampling section down-samples image data and transmits the down-sampled image data from a transmission section. A distortion correction section of a head-mounted display performs, after an up-sampling section up-samples the data, correction according to aberration of the eyepiece for each primary color, and causes the resulting data to be displayed on a display section.
An information processing device connected to a display device and to a sensor which detects relative positions of a user and the display device is provided. This information processing device acquires information indicating the relative positions of the user and the display device and detected by the sensor, and controls a position or a posture of at least one virtual object as a control target within data of a video on the basis of the acquired information indicating the relative positions of the user and the display device. Thereafter, the information processing device outputs the data of the video generated on the basis of information associated with a virtual space where the virtual object is arranged to the display device, and causes the display device to display the data.
G06T 7/70 - Determining position or orientation of objects or cameras
78.
TRACKING HISTORICAL GAME PLAY OF ADULTS TO DETERMINE GAME PLAY ACTIVITY AND COMPARE TO ACTIVITY BY A CHILD, TO IDENTIFY AND PREVENT CHILD FROM PLAYING ON AN ADULT ACCOUNT
Methods and systems for warning of misuse of a user account of an adult user include tracking use of the user account. The interactions at the user account are monitored, and when the content accessed by a user is adult content and the user is determined to be a child, an alert is provided to the adult user informing the adult user that the child is accessing age-inappropriate content.
A user accessibility method for a videogame system includes obtaining trophy records for a plurality of games played by the user, generating an accessibility profile responsive to accessibility issues indicated by at least some of the trophy records, and modifying one or more operational parameters of the videogame system in response to the generated accessibility profile.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
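The trophy-records-to-profile flow can be sketched with invented data. The issue tags, the rule that unearned trophies indicate an issue, and the parameter names are all assumptions for the example; the patent only states that the profile is generated from accessibility issues indicated by trophy records and then drives operational parameters.

```python
# Sketch of the described flow under invented data: trophies whose records
# hint at an accessibility issue feed a profile, and the profile flips
# system-level parameters. Tags and parameters are illustrative only.

ISSUE_TO_SETTING = {
    "fast_reaction_required": ("input_timing_window", "extended"),
    "small_text":             ("ui_text_size", "large"),
}

def build_accessibility_profile(trophy_records):
    # assume an unearned trophy tagged with an issue suggests that issue
    issues = {r["issue"] for r in trophy_records
              if not r["earned"] and r.get("issue")}
    return sorted(issues)

def apply_profile(profile):
    # map profile issues to operational-parameter changes
    return dict(ISSUE_TO_SETTING[i] for i in profile if i in ISSUE_TO_SETTING)
```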
82.
AI STREAMER WITH FEEDBACK TO AI STREAMER BASED ON SPECTATORS
A method is provided, including: executing a session of a video game; executing an artificial intelligence (AI) player that performs gameplay in the session of the video game; streaming video of the AI player's gameplay over a network to one or more spectator devices for viewing by one or more spectators respectively associated to the one or more spectator devices; receiving, over the network from the one or more spectator devices, feedback data indicating reactions of the one or more spectators to the video of the AI player's gameplay; and adjusting the gameplay by the AI player based on the feedback data.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/86 - Watching games played by other players
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
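The feedback loop can be reduced to a single adjustment function. The "aggressiveness" parameter, the cheer/boo events, and the step size are invented for the sketch; the patent only says gameplay is adjusted based on spectator feedback data.

```python
# Hypothetical feedback rule: spectator reactions to the AI player's
# stream are aggregated, and a play-style parameter is nudged toward
# whatever the audience responds to, clamped to [0, 1].

def adjust_ai_style(aggressiveness, reactions):
    # reactions: feedback events received from spectator devices
    cheers = reactions.count("cheer")
    boos = reactions.count("boo")
    step = 0.1 * (cheers - boos)
    return max(0.0, min(1.0, aggressiveness + step))
```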
83.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Provided is an information processing apparatus including a map generating section that detects a surface of a real object in a three-dimensional space on the basis of a camera image captured with a camera of a head-mounted display and generates map data representing information regarding the detected surface, a position estimating section that collates the map data and the camera image and estimates a position of a user used for execution of an application, and a look-around screen generating section that causes the head-mounted display to display a synthesized image on which an object representing the detected surface of the real object is superimposed on an image of a corresponding surface in the camera image, in a period of generation of the map data.
An accessibility computer game controller includes a central control button (402) on a round base (400) and peripheral control buttons (404) on the base surrounding the central control button. The peripheral control buttons can have distinct sizes and shapes. Removable button labels (1112) can be applied on top of or underneath the buttons to aid in button identification.
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/245 - Constructional details thereof, e.g. game controllers with detachable joystick handles specially adapted to a particular type of game, e.g. steering wheels
A63F 13/90 - Constructional details or arrangements of video game devices not provided for in groups or , e.g. housing, wiring, connections or cabinets
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/20 - Input arrangements for video game devices
A63F 13/92 - Video game devices specially adapted to be hand-held while playing
A63F 13/98 - Accessories, i.e. detachable arrangements optional for the use of the video game device, e.g. grip supports of game controllers
A dual camera tracking system includes a main imager (200) and an auxiliary imager (204) the output of which is used to alter an aim and/or focus of the main imager. Both imagers may be mounted on a common housing (210). In embodiments, the common housing may be a head-mounted display (HMD) for a computer simulation such as a computer game.
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
H04N 23/67 - Focus control based on electronic image sensor signals
H04N 23/69 - Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
H04N 23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
Disclosed herein is an information processing apparatus including at least one processor that has hardware. The at least one processor generates a first content image in a three-dimensional virtual reality space to be displayed on a head-mounted display, generates a second content image to be displayed on a flat-screen display, and generates a third content image from the first content image and/or the second content image.
A63F 13/26 - Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
A system for evaluating content, the system comprising a data obtaining unit configured to obtain data relating to one or more properties of the content, a processing unit configured to determine an expected contribution of one or more of the properties to a cognitive load for a user, an evaluation unit configured to determine an expected cognitive load associated with the content in dependence upon the expected contributions, and an image generation unit operable to generate an image for display in dependence upon the determined expected cognitive load.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A cognitive load assistance method includes: providing a virtual environment comprising a route to complete a current task, and providing within the virtual environment one or more interactive elements not essential to the current task that may be encountered during normal performance of the current task, receiving an indication that cognitive load assistance is required, and reducing the interactivity of at least a first interactive element not essential to the current task in response to the indication.
G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
G06F 9/451 - Execution arrangements for user interfaces
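The core step of the method, demoting non-essential interactive elements when assistance is requested, can be sketched over an assumed element schema. The dictionary shape and flag names are invented for the example.

```python
# Sketch under assumed data shapes: when an assistance signal arrives,
# interactive elements not essential to the current task become
# non-interactive scenery, trimming cognitive load along the route.

def reduce_nonessential_interactivity(elements, assistance_required):
    # elements: [{"name": ..., "essential": bool, "interactive": bool}, ...]
    if not assistance_required:
        return elements
    return [
        {**e, "interactive": e["interactive"] and e["essential"]}
        for e in elements
    ]
```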
A system for modifying a virtual environment to be interacted with by a user, the system comprising a location identifying unit configured to identify the locations of one or more events and the locations of one or more corresponding triggers for each of those events, a preference determining unit configured to determine one or more preferences of the user, a relation modifying unit configured to modify the temporal and/or spatial relation between respective parts of one or more pairs of corresponding events and triggers in dependence upon the determined user preferences, and a virtual environment modification unit operable to generate a modified virtual environment, the modified virtual environment comprising the modified temporal and/or spatial relations.
A movable range is defined without an increase in cost. A first joint mechanism includes a base part, and a rotary part that rotates relative to the base part. The first joint mechanism includes a regulated part that rotates together with the rotary part, a regulating part that is disposed on the extension of the rotation locus of the regulated part and has a function of regulating rotation of the regulated part relative to the base part within a first movable range, and a movable range defining member that is disposed on either the base part or the rotary part and defines, as a movable range of the regulated part, a second movable range that is narrower than the first movable range.
F16H 1/14 - Toothed gearings for conveying rotary motion without gears having orbital motion involving only two intermeshing members with non-parallel axes comprising conical gears only
Techniques are described for smooth switchover of computer game control. The current state of game input is communicated to a new player assuming control. The new player is allowed time to catch up to the game. New player control is detected, and any errors are communicated to the new player. If there are differences between the old control scheme and that of the new player, they are reconciled. The outgoing player is adjusted to the transition.
A63F 13/422 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle automatically for the purpose of assisting the player, e.g. automatic braking in a driving game
A63F 13/25 - Output arrangements for video game devices
A63F 13/45 - Controlling the progress of the video game
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
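The switchover steps above can be sketched as a toy handover routine. The event names, the button-remapping representation of "reconciling control schemes," and the catch-up tick count are all invented for illustration.

```python
# Hypothetical handover: send the current input state to the incoming
# player, grant a catch-up window, then reconcile control-scheme
# differences by remapping old buttons to the new player's buttons.

def hand_over_control(input_state, old_to_new_buttons, catch_up_ticks=2):
    events = [("send_state", dict(input_state))]           # current state out
    events += [("catch_up", t) for t in range(catch_up_ticks)]
    # reconcile differences between the two control schemes
    remapped = {old_to_new_buttons.get(b, b): v
                for b, v in input_state.items()}
    events.append(("control_detected", remapped))
    events.append(("notify_outgoing_player",))             # adjust outgoing player
    return events, remapped
```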
An accessibility computer game controller includes a central control button on a round base and peripheral control buttons on the base surrounding the central control button. The peripheral control buttons can have distinct sizes and shapes. Removable button labels can be applied on top of or underneath the buttons to aid in button identification.
An estimation processing unit 220 derives posture information indicating the posture of an HMD worn on the head of a user. A game image generation unit 230 uses the posture information relating to the HMD to generate a content image of a three-dimensional virtual reality space displayed on the HMD. A system image generation unit 240 generates a system image for configuring settings relating to a camera image to be distributed together with the content image in a state where the HMD is worn on the head of the user.
G06F 3/04847 - Interaction techniques to control parameter settings, e.g. interaction with sliders or dials
A63F 13/21 - Input arrangements for video game devices characterised by their sensors, purposes or types
A63F 13/25 - Output arrangements for video game devices
A63F 13/525 - Changing parameters of virtual cameras
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/377 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory - Details of the operation on graphic patterns for mixing or overlaying two or more graphic patterns
G09G 5/38 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory with means for controlling the display position
Techniques are described for smooth switchover of computer game control. The current state of game input is communicated (1800) to a new player assuming control. The new player is allowed time (1802) to catch up to the game. New player control is detected (1804), and any errors are communicated (1806) to the new player. If there are differences between the old control scheme and that of the new player, they are reconciled (1808). The outgoing player is adjusted (1810) to the transition.
A63F 13/23 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
A63F 13/25 - Output arrangements for video game devices
A63F 13/45 - Controlling the progress of the video game
An apparatus, for assisting at least a first user in communicating with one or more other users via a network, includes: a storage unit configured to store: phrase data corresponding to one or more phrases, where each phrase comprises one or more words, tag data corresponding to one or more tags, where each tag comprises at least part of one word, and first association data corresponding to one or more associations between one or more of the phrases and one or more of the tags; an input unit configured to receive one or more audio signals from the at least first user; a recognition unit configured to recognise one or more spoken words included within the received audio signals; an evaluation unit configured to evaluate whether a given recognised spoken word corresponds to a given tag; and if so, a transmission unit configured to transmit one or more of the phrases associated with the given tag to one or more of the other users.
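The apparatus can be reduced to a lookup for illustration: recognized words are checked against stored tags (each tag being at least part of one word), and phrases associated with a matched tag are transmitted. The speech recognizer is stubbed out, and the phrase/tag data is invented for the demo.

```python
# Illustrative reduction of the described apparatus: tag data maps to
# stored phrases, and a recognized spoken word that contains a tag causes
# the associated phrases to be sent. Associations here are invented.

PHRASES = {1: "Good game, everyone!", 2: "Need healing over here."}
TAGS = {"gg": [1], "heal": [2]}             # tag -> associated phrase ids

def phrases_for_spoken_word(word):
    sent = []
    for tag, phrase_ids in TAGS.items():
        if tag in word.lower():             # tag is at least part of the word
            sent.extend(PHRASES[p] for p in phrase_ids)
    return sent
```

A transmission unit would then forward the returned phrases to the other users instead of the raw audio, which is the assistive point of the design.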
There is provided an information processing device including a photographed-image acquisition section that acquires a photographed image taken by a camera mounted on a head-mounted display and a photographing parameter that is adjusted according to brightness with use of the camera, and a play area control section that detects a play area defining a movable range of a user by analyzing the photographed image, while changing an analysis condition according to a brightness estimated on the basis of the photographing parameter, and then acquiring 3D information regarding a real object.
H04N 23/71 - Circuitry for evaluating the brightness variation
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
G06V 20/20 - Scenes; Scene-specific elements in augmented reality scenes
H04N 23/72 - Combination of two or more compensation controls
100.
INFORMATION PROCESSING APPARATUS AND INFORMATION PROCESSING METHOD
Disclosed herein is an information processing apparatus including an image correction section that corrects a camera image captured by a camera of a head-mounted display, a state estimation section that estimates a state of an actual physical body with use of the corrected camera image, and a calibration section that causes the head-mounted display to display a guide image that represents an extraction situation of feature points from the camera image, the feature points being necessary for calibration of the camera, collects data of the feature points, performs calibration, and updates a correction parameter used by the image correction section.