A noise cancellation system comprising an audio identification unit configured to identify audio to which a noise cancellation process is to be applied, an output generation unit configured to generate one or more noise cancellation signals in dependence upon the identified audio, and two or more audio output units each configured to reproduce a respective subset of the generated one or more noise cancellation signals, such that the two or more audio output units are configured to produce different outputs.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A graphics rendering apparatus, comprising: a first receiving unit configured to receive mesh data comprising a mesh that defines a geometry of a virtual element that is comprised within a virtual environment of a video game; a second receiving unit configured to receive one or more input signals; an animation unit configured to cause the virtual element to perform a given predefined action in response to one or more of the input signals; a determination unit configured to determine whether the virtual element has performed the given predefined action at least a threshold number of times; an identification unit configured to identify one or more predefined mesh deformations associated with the given predefined action if the virtual element has performed the given predefined action at least the threshold number of times; a deformation unit configured to deform one or more subsets of the mesh in accordance with one or more of the predefined mesh deformations, and at least in part thereby generate subsequent mesh data comprising a subsequent mesh that defines a subsequent geometry of the virtual element; and a rendering unit configured to render the virtual element based on the subsequent mesh data.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
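As a rough illustration of the count-gated deformation described in the abstract above, the following Python sketch tracks how often a virtual element performs a given action and, once past an assumed threshold, applies predefined vertex offsets to subsets of its mesh. All names, the threshold value, and the offset-based deformation model are illustrative assumptions, not details from the application.

```python
# Hypothetical sketch of action-count gated mesh deformation.
# THRESHOLD, DEFORMATIONS, and the offset model are illustrative only.
from dataclasses import dataclass, field

THRESHOLD = 100  # assumed threshold number of action repetitions

@dataclass
class VirtualElement:
    mesh: list                                  # vertex positions [(x, y, z), ...]
    action_counts: dict = field(default_factory=dict)

# Predefined deformations keyed by action: each entry maps a vertex subset
# to a positional offset (e.g., wear on the grip vertices of a sword).
DEFORMATIONS = {
    "swing_sword": [({0, 1, 2}, (0.0, -0.01, 0.0))],
}

def perform_action(element: VirtualElement, action: str) -> None:
    # Count the action; once the threshold is reached, deform the mesh subsets.
    element.action_counts[action] = element.action_counts.get(action, 0) + 1
    if element.action_counts[action] >= THRESHOLD:
        for subset, (dx, dy, dz) in DEFORMATIONS.get(action, []):
            element.mesh = [
                (x + dx, y + dy, z + dz) if i in subset else (x, y, z)
                for i, (x, y, z) in enumerate(element.mesh)
            ]
```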
A medical observation system includes a white light source that emits white light, an infrared light source that emits infrared light, a light source controller that performs control to alternately repeat a first mode and a second mode in chronological order, wherein in the first mode the white light source is caused to emit the white light, and in the second mode the infrared light source is caused to emit the infrared light and the white light source is caused to emit light with a wavelength from the green to blue wavelength bands, and an imaging unit that captures a subject. A medical observation system is therefore provided that can perform normal observation and infrared observation simultaneously without hindering the operation of an operator.
A61B 1/06 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor with illuminating arrangements
A61B 1/00 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
A61B 1/04 - Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
A force sense presentation device 1 includes a base body 2 that is gripped by a hand of a user, a movable unit 3 having a finger placement portion 5 on which the user places a fingertip while the base body 2 is gripped, and a moving mechanism that moves the movable unit 3 with respect to the base body 2. The base body 2 is held by two or more fingers. The finger placement portion 5 has a finger engagement portion 4 provided upright thereon in a direction different from a moving direction of the movable unit 3.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0354 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of 2D relative movements between the device, or an operating part thereof, and a plane or surface, e.g. 2D mice, trackballs, pens or pucks
The number of light sources that emit light to indicate identification information of an input device is reduced. An input device (10) includes a first light source (S1), a second light source (S2), a first illumination part (E1) that is illuminated by light from the first light source (S1) in order to indicate identification information allocated to the input device, two second illumination parts (E2) that are placed at positions different from a position of the first illumination part (E1) and that are illuminated by light from the second light source in order to indicate the identification information, and a light guide member (50) that guides the light from the second light source (S2) to the two second illumination parts (E2).
Provided are a moving image reception device, a control method, and a program capable of reducing the sense of discomfort felt by a user when a moving image generated by a moving image transmission device in response to an operation on the moving image reception device is displayed on the moving image reception device. An operation data transmission unit transmits operation data corresponding to an input operation of the user to a cloud server. An image data reception unit receives, from the cloud server, interval data representing the time from reception of the operation data to the start of generation of a frame image based on that operation data in the cloud server. A transmission timing control unit controls the timing of transmission of the next operation data on the basis of the interval data.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
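A minimal sketch of how a client might act on the reported interval data, assuming the goal is to hold the next operation data just long enough that it arrives as the server becomes ready to generate a frame. The pacing rule and the one-millisecond margin are assumptions; the abstract does not specify a formula.

```python
# Illustrative client-side pacing based on server-reported interval data.
import time

class OperationSender:
    def __init__(self, send_fn):
        self.send = send_fn
        self.wait_before_send = 0.0  # seconds to hold the next operation data

    def on_interval_data(self, interval_s: float) -> None:
        # interval_s: server-reported time from receiving operation data to
        # starting frame generation. A large interval means we sent too early,
        # so delay the next send by roughly that amount (assumed rule).
        self.wait_before_send = max(0.0, interval_s - 0.001)

    def submit(self, operation_data: bytes) -> None:
        time.sleep(self.wait_before_send)
        self.send(operation_data)
```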
Provided is an input device that enables a user to quickly recognize functions assigned to operation buttons. A controller (2000) has: a main body (2100); a plurality of second operation buttons (2120, 3200) that are attached to the main body and have an upper surface portion that can be pressed downward; and mark members (2400, 3400) that can be attached to and detached from the plurality of second operation buttons. Each mark member has a top portion (2401, 3401) on which a mark is indicated for the user to recognize the functions assigned to the plurality of second operation buttons.
An electronic device and method for multiscale inter-prediction for dynamic point cloud compression is provided. The electronic device receives a set of reference point cloud frames and a current point cloud frame. The electronic device generates reference frame data comprising a feature set for each reference point cloud frame and a first set of features for the current point cloud frame. The electronic device predicts a second set of features for the current point cloud frame, using a first neural network predictor, based on the reference frame data. The electronic device computes a set of residual features based on the first set of features and the second set of features. The electronic device generates a set of quantized residual features based on the set of residual features and a bitstream of encoded point cloud data for the current 3D point cloud frame based on the set of quantized residual features.
In the current implementation of V-DMC, the (u, v) coordinates are generated using Microsoft UV Atlas and they, together with the 3D positions and the topology, are carried in the base mesh sub-bitstream. High-level syntax structures described herein support projection-based atlas map generation and the means to derive the (u, v) coordinates on the decoder side using V3C syntax structure extensions. In comparison with previous implementations, and in order to preserve the current V3C geometry bitstream concept, a separate sub-bitstream, referred to herein as the vertex property sub-bitstream, is used to carry displacement information.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
A method for cloud gaming including receiving a request to instantiate an instance of a video game for a game play of a player. The method including establishing a cloud based game engine for executing game logic of the video game in the instance of the video game. The method including assembling microservices for the cloud based game engine to instantiate the instance of the video game. The method including establishing communication between the cloud based game engine and each of the microservices over a communication fabric. The method including executing the game logic in the instance of the video game using the cloud based game engine based on controller input associated with the game play. The method including monitoring demand for computing resources while executing the instance of the video game. The method including adjusting an allocation of computing resources for the set of microservices based on the demand.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/48 - Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
11.
DISPLAY UNIT, METHOD OF MANUFACTURING DISPLAY UNIT, AND ELECTRONIC APPARATUS
A display unit includes a plurality of pixels, a reflector layer, and an auxiliary electrode. Each of the plurality of pixels has a first electrode, an organic layer, and a second electrode in this order. The organic layer and the second electrode are provided on the first electrode. The organic layer includes a light-emitting layer. The reflector layer has a light-reflecting surface around each of the pixels. The auxiliary electrode is provided on the reflector layer and is projected from an upper end of the light-reflecting surface. The auxiliary electrode has a portion which is exposed from the organic layer, and the exposed portion is covered with the second electrode.
An image generation device 200 includes a source image generation unit 203 that generates a source image to which no distortion is provided, an HMD image generation unit 204a that generates an HMD image to be displayed on a head-mounted display, in reference to the source image, and a mirroring image generation unit 204b that generates a mirroring image for mirroring of the HMD image on a flat plate type display, in reference to the source image.
This technology relates to a transmission apparatus, a reception apparatus, and a data processing method for easily implementing circuits on the receiving side.
There is provided a transmission apparatus including a processing section configured to process a stream with respect to each of multiple PLPs included in a broadcast signal, the stream being constituted by packets, the processing section further causing a header of each of the packets to include mapping information mapped to identification information identifying the PLP to which each of the packets belongs. This technology may be applied to data transmission methods such as the IP transmission method or the MPEG2-TS method, for example.
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/2381 - Adapting the multiplex stream to a specific network, e.g. an IP [Internet Protocol] network
H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
14.
MULTISCALE INTER-PREDICTION FOR DYNAMIC POINT CLOUD COMPRESSION
An electronic device and method for multiscale inter-prediction for dynamic point cloud compression is provided. The electronic device receives a set of reference point cloud frames and a current point cloud frame. The electronic device generates reference frame data comprising a feature set for each reference point cloud frame and a first set of features for the current point cloud frame. The electronic device predicts a second set of features for the current point cloud frame, using a first neural network predictor, based on the reference frame data. The electronic device computes a set of residual features based on the first set of features and the second set of features. The electronic device generates a set of quantized residual features based on the set of residual features and a bitstream of encoded point cloud data for the current 3D point cloud frame based on the set of quantized residual features.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/105 - Selection of the reference unit for prediction within a chosen coding or prediction mode, e.g. adaptive choice of position and number of pixels used for prediction
H04N 19/30 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using hierarchical techniques, e.g. scalability
H04N 19/503 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
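The residual-feature pipeline in the abstract can be pictured with the short sketch below: a stand-in predictor (here a plain average, not the patented neural network) produces the second feature set, the residual is the difference from the first set, and uniform quantization yields the quantized residuals that would feed an entropy coder. Feature extraction and bitstream formation are out of scope.

```python
# Illustrative residual-feature pipeline; the predictor is a stub.
import numpy as np

def predict_features(reference_feature_sets: list) -> np.ndarray:
    # Stand-in for the first neural network predictor: a simple average of
    # the reference feature sets (an assumption, not the claimed network).
    return np.mean(reference_feature_sets, axis=0)

def encode_current_frame(current_features: np.ndarray,
                         reference_feature_sets: list,
                         step: float = 0.5) -> np.ndarray:
    predicted = predict_features(reference_feature_sets)    # second feature set
    residual = current_features - predicted                 # residual features
    quantized = np.round(residual / step).astype(np.int32)  # quantized residuals
    return quantized  # would feed an entropy coder to form the bitstream
```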
Provided are an input device that facilitates the work of mounting an operating button, and an operating button. A controller (2000) comprises: a first magnetic structure which includes at least two magnetic poles that are provided in at least one of a button receiving portion (2301) and an operating button (2301) and are spaced apart in a first direction; and a second magnetic structure which includes a magnetic body (2152) or a magnet that is provided in the other of the button receiving portion and the operating button, and that is opposed to the at least two magnetic poles. The operating button comprises an engagement portion (2151) that is positioned in a second direction intersecting the first direction with respect to the at least two magnetic poles, and that is engaged with the button receiving portion.
Provided is a novel input device that enables comfortable operation for a user who has difficulty feeling comfortable when operating conventional input devices. A controller (2000) comprises: a body (2100); a first operation button (2110) attached to the body; and a second operation button (2120E) attached to the body at a position different from the position of the first operation button. The first operation button has an upper surface part serving as an operation part with which the user performs an operation. The second operation button has an upper surface part (2123E) serving as an operation part with which the user performs an operation, a portion of the upper surface part having an overhang (2132E) that covers a portion of the upper surface part of the first operation button.
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
G05G 1/01 - Arrangements of two or more controlling members with respect to one another
G05G 1/02 - Controlling members for hand-actuation by linear movement, e.g. push buttons
H01H 13/705 - Switches having rectilinearly-movable operating part or parts adapted for pushing or pulling in one direction only, e.g. push-button switch having a plurality of operating members associated with different sets of contacts, e.g. keyboard with contacts carried by or formed from layers in a multilayer structure, e.g. membrane switches characterised by construction, mounting or arrangement of operating parts, e.g. push-buttons or keys
H01H 21/00 - Switches operated by an operating part in the form of a pivotable member acted upon directly by a solid body, e.g. by a hand
17.
METERING BITS IN VIDEO ENCODER TO ACHIEVE TARGETED DATA RATE AND VIDEO QUALITY LEVEL
For stability of a bit rate for groups of pictures (GOPs), a rate buffer bit controller feedback loop and a proportional integral derivative (PID) bit controller feedback loop (700) may be used to maintain at least one video buffer.
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/152 - Data rate or code amount at the encoder output by measuring the fullness of the transmission buffer
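For readers unfamiliar with PID-style rate control, the following generic sketch shows a proportional integral derivative loop steering a quantization parameter toward a target buffer fullness. The gains, the 0-51 QP range, and the update rule are illustrative assumptions, not the claimed controller.

```python
# Generic PID controller sketch for metering encoder bits via QP adjustment.
class PIDBitController:
    def __init__(self, kp=0.5, ki=0.05, kd=0.1, target_fullness=0.5):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.target = target_fullness
        self.integral = 0.0
        self.prev_error = 0.0

    def update_qp(self, qp: int, buffer_fullness: float) -> int:
        error = buffer_fullness - self.target      # overfull -> positive error
        self.integral += error
        derivative = error - self.prev_error
        self.prev_error = error
        adjustment = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Raising QP spends fewer bits, draining the buffer (illustrative rule).
        return max(0, min(51, qp + round(adjustment * 10)))
```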
A watermark representing a link to an original video and/or metadata, such as haptic metadata associated with the original video, is embedded in the original video in such a way that a re-recording of the original video still preserves the watermark. The watermark can be used to link to the original video or to the metadata related thereto.
H04N 19/467 - Embedding additional information in the video signal during the compression process characterised by the embedded information being invisible, e.g. watermarking
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with a first user of a plurality of users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with the first user; a second generating unit configured to generate, based at least in part on the current emotion data associated with the first user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data associated with the first user and the current emotion data associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A lens system provides a user with a high-definition image in which the generation of concentric circles is reduced. The lens system has one or more Fresnel lenses. A lens surface of each of the Fresnel lenses has a plurality of concentrically formed grooves. Both the pitch, which is the distance between two adjacent grooves, and the depth of each of the plurality of grooves vary with the distance from an optical axis that passes through the center of the lens system.
G02B 13/18 - Optical objectives specially designed for the purposes specified below with lenses having one or more non-spherical faces, e.g. for reducing geometrical aberration
G02B 3/04 - Simple or compound lenses with non-spherical faces with continuous faces that are rotationally symmetrical but deviate from a true sphere
G02B 3/08 - Simple or compound lenses with non-spherical faces with discontinuous faces, e.g. Fresnel lens
Provided is a cradle for supporting input devices having grips and tracked parts extending from the grips, the cradle including rear support parts capable of supporting rear parts of the grips, front support parts that are positioned forward of the rear support parts and are capable of supporting front parts of the grips of the input devices, and side support parts that are positioned outside the rear support parts and the front support parts in a left-right direction and are capable of supporting the tracked parts.
Systems and methods are disclosed for determining that a first end-user entity has performed a task within a computer simulation for which a non-fungible token (NFT) is to be provided, where the NFT is associated with a digital asset. Responsive to the determination, the NFT is provided to the first end-user entity so that the digital asset may be used, via the NFT, across plural different computer simulations and/or across plural different computer simulation platforms. Ownership of the NFT may also be subsequently transferred to other end-user entities for their own use across different simulations and/or platforms.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
There is provided an image processing apparatus and method that make it possible to suppress degradation of encoding efficiency. In the case where the primary transform, a transform process applied to a prediction residual that is the difference between an image and a prediction image of that image, is to be skipped, the secondary transform, a transform process applied to the primary transform coefficients obtained by the primary transform of the prediction residual, is also skipped. The present disclosure can be applied, for example, to an image processing apparatus, an image encoding apparatus, an image decoding apparatus, and so forth.
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/517 - Processing of motion vectors by encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
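The coupling of the two skips can be expressed compactly; in the sketch below, a skipped primary transform short-circuits the secondary transform as well, so the prediction residual passes straight to quantization. The transform callables are hypothetical stand-ins for DCT-like stages.

```python
# Sketch of the skip coupling: no primary transform implies no secondary transform.
import numpy as np

def transform_block(residual: np.ndarray,
                    primary_skip: bool,
                    apply_primary,
                    apply_secondary) -> np.ndarray:
    if primary_skip:
        # No primary transform, therefore no secondary transform either;
        # the residual goes straight to quantization.
        return residual
    coeffs = apply_primary(residual)   # primary transform coefficients
    return apply_secondary(coeffs)     # secondary transform on those coefficients
```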
24.
METERING BITS IN VIDEO ENCODER TO ACHIEVE TARGETED DATA RATE AND VIDEO QUALITY LEVEL
For stability of a bit rate for groups of pictures (GOPs), a rate buffer bit controller feedback loop and a proportional integral derivative (PID) bit controller feedback loop may be used to maintain at least one video buffer.
An electronic device and method for generation of reflectance maps for relightable 3D models is disclosed. The electronic device acquires multi-view image data that includes a set of images of an object and generates a 3D mesh of the object based on the multi-view image data. The electronic device obtains a set of motion-corrected images based on a minimization of a rigid motion associated with the object between images of the set of images and generates texture maps in a UV space based on the set of motion-corrected images and the 3D mesh. The electronic device obtains specular and diffuse reflectance maps based on a separation of specular and diffuse reflectance components from the texture maps, and obtains a relightable 3D model of the object based on the specular and diffuse reflectance maps.
G01B 11/245 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
G01B 11/25 - Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. moiré fringes, on the object
G06T 7/55 - Depth or shape recovery from multiple images
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes (302) only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped (304) to haptic assets that can be output (306) by an ERM (208/700) of a computer game controller (206). The haptic output can be in synchronization with play of the audio assets on speakers.
G05G 9/047 - Manually-actuated control mechanisms provided with one single controlling member co-operating with two or more controlled members, e.g. selectively, simultaneously the controlling member being movable in different independent ways, movement in each individual way actuating one controlled member only in which movement in two or more ways can occur simultaneously the controlling member being movable by hand about orthogonal axes, e.g. joysticks
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/038 - Control and interface arrangements therefor, e.g. drivers or device-embedded control circuitry
A63F 13/20 - Input arrangements for video game devices
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
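A hedged sketch of the described mapping, assuming a one-pole low-pass filter as the frequency gate and a simple envelope-to-duty mapping for the ERM drive; the 200 Hz cutoff and the 0-255 drive range are illustrative, not from the application.

```python
# Low-frequency audio -> ERM haptic drive sketch with a one-pole low-pass filter.
import math

def lowpass_to_erm(samples: list, sample_rate: int,
                   cutoff_hz: float = 200.0) -> list:
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = dt / (rc + dt)
    y = 0.0
    drive = []
    for x in samples:
        y += alpha * (x - y)                        # one-pole low-pass filter
        drive.append(min(255, int(abs(y) * 255)))   # envelope -> ERM duty value
    return drive
```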
An affective gaming system includes: an administrator unit configured to host a session of a video game; a receiving unit configured to receive biometric data associated with two or more users participating in the session of a video game; a first generating unit configured to generate, based on at least part of the biometric data, current emotion data associated with each of the two or more users; a selecting unit configured to select a first user based on at least part of the current emotion data; a second generating unit configured to generate, based at least in part on the current emotion data that is associated with a second user, target emotion data associated with the first user; and a modifying unit configured to modify, responsive to the difference between the target emotion data and the current emotion data that is associated with the first user, one or more aspects of the video game that are specific to the first user.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
To enhance the sensory experience of voice, in some cases at a later time than when the speech was spoken so as to enable reliving emotions and experiences, vocal sounds captured by a microphone are processed by a computer game controller API. The API plays back the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user, as demanded by the computer game.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/87 - Communicating with other players during game play, e.g. by e-mail or chat
H04R 3/04 - Circuits for transducers for correcting frequency response
30.
SYSTEM, INFORMATION PROCESSING METHOD, AND INFORMATION PROCESSING PROGRAM
A system includes a first image sensor that generates a first image signal by synchronously scanning all pixels at a prescribed timing, a second image sensor including an event-driven type vision sensor that, upon detecting a change in an intensity of incident light on each of the pixels, generates a second image signal asynchronously, an inertial sensor that acquires attitude information on the first image sensor and the second image sensor, a first computation processing device that recognizes a user on the basis of at least the second image signal and calculates coordinate information regarding the user on the basis of at least the second image signal, a second computation processing device that performs coordinate conversion on the coordinate information on the basis of the attitude information, and an image generation device that generates a display image which indicates a condition of the user, on the basis of the converted coordinate information.
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option to rebuild the game objects. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
A63F 13/847 - Cooperative playing, e.g. requiring coordinated actions from several players to achieve a common goal
H04L 9/00 - Arrangements for secret or secure communications; Network security protocols
32.
EFFICIENT MAPPING COORDINATE CREATION AND TRANSMISSION
A method is disclosed to generate (u,v) coordinates at the decoder side by using parameters of orthographic projection functions, transmitted via an atlas bitstream. With the parameters for orthographic projection, the decoder is able to efficiently generate (u,v) coordinates and avoid their expensive coding.
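Under the assumption that the transmitted parameters reduce to a projection axis plus a per-patch offset and scale, decoder-side (u, v) generation can be sketched as below; the actual V3C syntax extensions carry more structure than this toy parameter set.

```python
# Illustrative decoder-side (u, v) generation by orthographic projection:
# drop the chosen axis and map the remaining coordinates into atlas space.
def generate_uv(vertices, axis: int, offset=(0.0, 0.0), scale=1.0):
    keep = [i for i in range(3) if i != axis]   # axes retained by the projection
    return [((v[keep[0]] - offset[0]) * scale,
             (v[keep[1]] - offset[1]) * scale) for v in vertices]

# e.g. project onto the XY plane (drop z, axis index 2):
uv = generate_uv([(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)], axis=2)
```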
The generation of a texture map using orthographic projections is performed in a fast and efficient manner. A method is described herein that generates texture maps in significantly less time and also allows the maps to exploit the correlation between the content of different frames in time. The texture mapping can be used for automatic generation of volumetric content or for more efficient compression of dynamic meshes. The texture map generation described herein includes ways to generate a texture atlas using orthographic projections. A novel stretch metric for orthographic projections is described, and a merging algorithm is devised to optimally cluster triangles into a single patch. Additionally, packing techniques that aim to optimize size and temporal stability can be used for the mesh patches.
The 3D audio perception of a listener such as a computer gamer is tested "stereoscopically" and the results input to a source of audio such as a computer game. Audio (802) from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered (810) to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided (814) to alert the listener of 3D audio events.
Groups of people control a computer game using teamwork. This can be done by eye tracking (400) of each person to detect which on-screen objects, such as game control objects, each person is looking at. The control action of the object looked at by the most people (404), in a "heat map" style of data collection, is implemented (408) by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
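The vote-by-gaze mechanism might look like the following sketch, which counts how many tracked people are looking at each on-screen object rectangle and returns the most-watched one; the object rectangles and the hit test are illustrative assumptions.

```python
# "Heat map" style vote: act on the object watched by the most people.
from collections import Counter

def most_watched_object(gaze_points, objects):
    # objects: {name: (x0, y0, x1, y1)} screen rectangles
    votes = Counter()
    for gx, gy in gaze_points:   # one gaze point per tracked person
        for name, (x0, y0, x1, y1) in objects.items():
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                votes[name] += 1
                break
    return votes.most_common(1)[0][0] if votes else None
```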
Eye tracking (1100) of the wearer of a virtual reality headset is used to customize/personalize (1102) VR video. Based on eye tracking, the VR scene may present different types of trees (302, 304, 306) for different gaze directions. As another example, a VR scene can be augmented with additional objects (502) based on gaze direction at a particular related object. A friend's gaze-dependent personalization may be imported (1104) into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
Methods and systems for reconstructing a game world of a video game include tracking the status of game objects in the game world to detect wear on one or more game objects exceeding a predefined threshold. An option to rebuild the one or more game objects is provided to a user, and tools to rebuild the one or more game objects are provided in response to the user selecting the option to rebuild the game objects. The rebuilt game objects are used during subsequent gameplay of the video game.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
To enhance the sensory experience of voice, in some cases at a later time than when the speech was spoken (300) so as to enable reliving emotions and experiences, vocal sounds captured by a microphone are processed (304) by a computer game controller API. The API plays back (306) the vocal sounds at a later time in haptic format on the controller. The vocal sounds may be computer game dialogue, party chat, or vocal sounds of the user, as demanded by the computer game.
G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G10L 25/00 - Speech or voice analysis techniques not restricted to a single one of groups
41 - Education, entertainment, sporting and cultural services
Goods & Services
ENTERTAINMENT SERVICES IN THE NATURE OF AN ON-GOING REALITY TELEVISION SERIES; ENTERTAINMENT SERVICES IN THE NATURE OF AN ON-GOING REALITY TELEVISION SERIES PROVIDED THROUGH CABLE, SATELLITE, AND INTERNET TRANSMISSION; PROVIDING NON-DOWNLOADABLE TELEVISION PROGRAMS VIA VIDEO-ON-DEMAND SERVICE; ENTERTAINMENT SERVICES, NAMELY, DISTRIBUTION OF ONGOING REALITY TELEVISION SERIES; PROVIDING A WEBSITE FEATURING ENTERTAINMENT INFORMATION; PROVIDING INFORMATION ABOUT A TELEVISION SERIES VIA A WEBSITE
To avoid startling a computer game player immersed in virtual reality, for example, active noise cancelation is gradually introduced. As an alternative, ambient noise is gradually increased to conceal loud external sounds. The noise cancelation or ambient noise generation is established according to sound exceeding a background threshold as detected by a microphone. The noise cancelation or ambient noise generation can also be established according to images of a noisy object as imaged by a camera.
G10K 11/178 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound by electro-acoustically regenerating the original acoustic waves in anti-phase
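A minimal sketch of the gradual introduction, assuming a linear ramp of the cancelation gain once the microphone level crosses a background threshold; the step count and linear shape are assumptions, since the abstract says only that the cancelation is introduced gradually.

```python
# Ramp the noise-cancelation gain from 0 to 1 instead of switching on abruptly.
def anc_gain_steps(mic_level: float, threshold: float, steps: int = 20) -> list:
    if mic_level <= threshold:
        return [0.0]                             # background sound: no cancelation
    return [i / steps for i in range(1, steps + 1)]  # gradual ramp to full gain
```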
41.
ALTERING AUDIO AND/OR PROVIDING NON-AUDIO CUES ACCORDING TO LISTENER'S AUDIO DEPTH PERCEPTION
The 3D audio perception of a listener such as a computer gamer is tested “stereoscopically” and the results input to a source of audio such as a computer game. Audio from the source of audio (such as a head-mounted display of a computer game system or speaker outputting audio from a game console) may be altered to account for the listener's measured 3D audio acuity. In addition, or alternatively, visual or haptic cues may be provided to alert the listener of 3D audio events.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
42.
CUSTOMIZABLE VIRTUAL REALITY SCENES USING EYE TRACKING
Eye tracking of the wearer of a virtual reality headset is used to customize/personalize VR video. Based on eye tracking, the VR scene may present different types of trees for different gaze directions. As another example, a VR scene can be augmented with additional objects based on gaze direction at a particular related object. A friend's gaze-dependent personalization may be imported into the wearer's system to increase companionship and user engagement. Customized options can be recorded and sold to other players.
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
43.
INFORMATION PROCESSING DEVICE, CONTROL METHOD OF INFORMATION PROCESSING DEVICE, AND PROGRAM
An information processing device obtains information regarding the position of each fingertip of a user in a real space and determines contact between a virtual object set within a virtual space and a finger of the user. The information processing device sets the virtual object in a partly deformed state such that the part of the virtual object corresponding to the position of the finger determined to be in contact with the object, among the fingers of the user, is located farther from the user than that finger, and displays the virtual object having the set shape as an image in the virtual space on a display device.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/25 - Output arrangements for video game devices
A63F 13/52 - Controlling the output signals based on the game progress involving aspects of the displayed game scene
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 5/36 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the display of individual graphic patterns using a bit-mapped memory
A processing device includes: a detection unit configured to detect input information representative of a sequence of input signals for a video game that are input by a user using one or more input controls; an identification unit configured to identify one or more input signal variations in dependence upon one or more differences between the detected input information and predetermined input information representative of one or more predetermined sequences of input signals for the video game; a generation unit configured to generate assistance information in dependence upon the one or more identified input signal variations; and a provision unit configured to provide the generated assistance information to the user.
An image generation apparatus increases an adjustment amount of a luminance distribution to a target value B at a time t0 at which the amount of light entering the eyes of a user changes to such a degree that the change affects the response of the photoreceptor cells, thereby causing a head-mounted display to display an image 310b having a luminance increased from that of an original image 310a. The image generation apparatus then gradually decreases the adjustment amount of the luminance distribution during a restoration period Δt such that an image 310c having the original luminance distribution is displayed at a later time t1.
G06V 10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
G06V 40/18 - Eye characteristics, e.g. of the iris
H04N 9/69 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits for modifying the colour signals by gamma correction
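One way to picture the described schedule: the adjustment jumps to the target value B at t0 and then decays to zero across the restoration period Δt, so the original luminance distribution returns at t1. The linear decay in this sketch is an assumption; the abstract says only that the amount decreases gradually.

```python
# Luminance adjustment schedule: B at t0, linearly decaying to 0 at t1 = t0 + delta_t.
def luminance_adjustment(t: float, t0: float, delta_t: float, B: float) -> float:
    if t < t0 or t >= t0 + delta_t:
        return 0.0                        # before t0 and from t1 on: no adjustment
    return B * (1.0 - (t - t0) / delta_t)  # assumed linear decay over delta_t
```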
46.
METHOD AND SYSTEM FOR AUTO-PLAYING PORTIONS OF A VIDEO GAME
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
47.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
48.
ORTHOATLAS: TEXTURE MAP GENERATION FOR DYNAMIC MESHES USING ORTHOGRAPHIC PROJECTIONS
The generation of a texture map using orthographic projections is performed in a fast and efficient manner. A method is described herein that generates texture maps in significantly less time and also allows the maps to exploit the correlation between the content of different frames in time. The texture mapping can be used for automatic generation of volumetric content or for more efficient compression of dynamic meshes. The texture map generation described herein includes ways to generate a texture atlas using orthographic projections. A novel stretch metric for orthographic projections is described, and a merging algorithm is devised to optimally cluster triangles into a single patch. Additionally, packing techniques that aim to optimize size and temporal stability can be used for the mesh patches.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
49.
METHOD AND SYSTEM FOR AUTO-PLAYING PORTIONS OF A VIDEO GAME
A method for providing an auto-play mode option to a user during gameplay of a video game includes accessing, by a server, a user play model, which incorporates extracted features related to gameplay by the user and classification of the extracted features. The accessing of the model is triggered at a current time during gameplay. The method also includes identifying, by the server, predicted interactive activity that is predicted to occur ahead of the current time of gameplay. The method further includes identifying, by the server, at least part of the predicted interactive activity to be anticipated grinding content (AGC). The method also includes providing a notification, by the server, to a display screen of a user device, where the notification identifies the AGC in upcoming gameplay and provides the user with an option to use the auto-play mode during gameplay of the AGC.
A63F 13/5375 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen for graphically or textually suggesting an action, e.g. by displaying an arrow indicating a turn in a driving game
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/497 - Partially or entirely replaying previous game actions
50.
SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing circuit for processing event signals generated by an event-based vision sensor (EVS), the signal processing circuit comprising a memory for storing program code and a processor for executing operations according to the program code, wherein the operations include: detecting at least one line segment or curve formed by a set of in-block positions of event signals generated in blocks into which an EVS detection area is divided; and correcting at least one of a first line segment or a first curve detected in a first block, or a second line segment or a second curve detected in a second block adjacent to the first block, so that a first end point of the first line segment or the first curve overlaps a second end point of the second line segment or the second curve.
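A rough sketch of the endpoint correction, assuming segments are represented by their two end points in detection-area coordinates and that nearly coincident end points from adjacent blocks are snapped to their midpoint; the tolerance and the midpoint rule are illustrative choices.

```python
# Snap nearly coincident end points of segments from adjacent blocks so that
# the first end point overlaps the second, stitching the detections together.
def stitch_segments(seg_a, seg_b, tol: float = 2.0):
    # seg = ((x_start, y_start), (x_end, y_end)) in detection-area coordinates
    (a0, a1), (b0, b1) = seg_a, seg_b
    dx, dy = a1[0] - b0[0], a1[1] - b0[1]
    if dx * dx + dy * dy <= tol * tol:
        mid = ((a1[0] + b0[0]) / 2, (a1[1] + b0[1]) / 2)
        return (a0, mid), (mid, b1)   # end points now overlap at the midpoint
    return seg_a, seg_b               # too far apart: leave both uncorrected
```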
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as user generated content (RGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
52.
GROUP CONTROL OF COMPUTER GAME USING AGGREGATED AREA OF GAZE
Groups of people control a computer game using teamwork. This can be done by eye tracking of each person to detect which on-screen objects, such as game control objects, each person is looking at. The control action of the object looked at by the most people, in a “heat map” style of data collection, is implemented by the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
53.
HAPTIC ASSET GENERATION FOR ECCENTRIC ROTATING MASS (ERM) FROM LOW FREQUENCY AUDIO CONTENT
Computer game developers can implicitly create haptic assets from audio assets. A low pass filter passes only audio assets with frequencies less than a threshold to a mapping module. The audio assets are then mapped to haptic assets that can be output by an ERM of a computer game controller. The haptic output can be in synchronization with play of the audio assets on speakers.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/24 - Constructional details thereof, e.g. game controllers with detachable joystick handles
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/533 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game for prompting the player, e.g. by displaying a game menu
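One plausible reading of entry 53, sketched in Python with SciPy; the 80 Hz cutoff, the 100 Hz haptic frame rate, and the envelope-to-duty-cycle mapping are all assumptions, not details from the source:

    import numpy as np
    from scipy.signal import butter, lfilter

    def audio_to_erm(audio, sr, cutoff_hz=80.0, haptic_rate=100):
        """audio: mono float samples in [-1, 1]. Returns one ERM drive value
        (0-255 PWM duty) per haptic frame, aligned with the audio timeline."""
        b, a = butter(4, cutoff_hz / (sr / 2), btype="low")  # pass low frequencies only
        low = lfilter(b, a, audio)
        frame = sr // haptic_rate
        n = len(low) // frame
        env = np.abs(low[: n * frame]).reshape(n, frame).max(axis=1)  # per-frame envelope
        return np.clip(env / max(float(env.max()), 1e-12) * 255, 0, 255).astype(np.uint8)

    drive = audio_to_erm(np.sin(2 * np.pi * 40 * np.arange(48000) / 48000), 48000)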
54.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
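A loose sketch of the fallback loop described in entry 54; the session object and the send_sms callable are stand-ins for whatever cloud-session API and alternative channel (text message or companion app push) the platform actually provides:

    import time

    def monitor_session(session, send_sms, poll_s=5):
        """Poll a cloud game session; when streaming connectivity to the client
        is lost, push session updates to the player's secondary device."""
        while session.is_active():
            if not session.client_connected():
                send_sms(session.player_phone,
                         "Connection lost. Session state: " + session.summary())
            time.sleep(poll_s)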
A condition information acquisition section of an image generation device acquires a condition of communication and condition information of a head-mounted display. An image generation section generates a display image including distorted images for a left eye and a right eye. A reduction processing section converts the display image to data in which different regions have different reduction ratios in accordance with the condition of communication, etc., and transmits the data through an output section. An image size restoration section of the head-mounted display restores the transmitted data to the display image in an original size to cause the display image to be displayed by a display section.
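A toy version of the region-dependent reduction described in the abstract above, with the band layout and ratios assumed; the real device would choose regions and ratios from the condition of communication:

    import numpy as np

    def reduce_regions(img, peripheral_ratio):
        """img: HxWxC uint8 frame. The central half is kept at full resolution;
        the top and bottom peripheral bands are subsampled by peripheral_ratio."""
        h = img.shape[0]
        r = max(1, int(peripheral_ratio))
        top, mid, bot = img[: h // 4], img[h // 4 : 3 * h // 4], img[3 * h // 4 :]
        return [top[::r, ::r], mid, bot[::r, ::r]]  # bands transmitted at different sizes

On the head-mounted display side, each band would be upscaled back to its original size (the restoration step) before display.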
A method of improving accessibility for the user operation of a first application on a computer includes the steps of taking one or more measurements of a current user's interaction with an application on the computer, comparing the one or more measurements with expectations derived from measurements from a first corpus of users, characterising one or more needs of the current user based upon the comparison, and modifying at least a first property of the first application responsive to the characterised need or needs.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
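The comparison step of the abstract above might look like the following sketch, where the z-score threshold and the metric names are invented for illustration:

    def characterise_needs(user_metrics, corpus_stats, z_thresh=2.0):
        """user_metrics: {metric: value}; corpus_stats: {metric: (mean, std)}.
        Returns the metrics on which this user deviates notably from the corpus."""
        needs = {}
        for metric, value in user_metrics.items():
            mean, std = corpus_stats[metric]
            z = (value - mean) / std if std else 0.0
            if abs(z) > z_thresh:
                needs[metric] = z
        return needs

    needs = characterise_needs({"button_press_ms": 900}, {"button_press_ms": (350, 120)})
    # A large positive z-score on press duration might, say, lengthen input time slots.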
57.
TEXT MESSAGE OR APP FALLBACK DURING NETWORK FAILURE IN A VIDEO GAME
A method for managing gameplay of a video game is provided, including: executing a session of a video game by a cloud gaming resource; streaming video generated by the session over a network to a client device associated with a player of the video game, to enable gameplay of the session by the player; detecting a loss of network connectivity between the client device and the session; and, responsive to detecting the loss of network connectivity, initiating transmission of updates regarding the session, via an alternative communication channel, to a secondary device associated with the player.
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
58.
SYSTEMS AND METHODS FOR APPLYING A MODIFICATION MICROSERVICE TO A GAME INSTANCE
A method for implementing a modification microservice with a game cloud system is described. The method includes executing a game instance of a game. The game instance is executed using a plurality of microservices assembled for the game instance. The method further includes accessing a modification microservice engineered to be executed with the game instance. The modification microservice adds a compute capability to the game instance. The modification microservice is executed outside of a server system in which the plurality of microservices is assembled for the game instance. Also, the modification microservice is accessed by one or more application programming interface (API) calls that obtain results data from said execution of the modification microservice. The one or more API calls are managed via a modification interface that manages the access to the modification microservice and use of the results data by the game instance.
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
59.
SYSTEMS AND METHODS FOR INTEGRATING REAL-WORLD CONTENT IN A GAME
A method for integration of real-world content into a game is described. The method includes receiving a request to play the game and accessing overlay multimodal data generated from a portion of real-world multimodal data received as user generated content (UGC). The overlay multimodal data relates to authored multimodal data generated for the game. The method includes replacing the authored multimodal data in one or more scenes of the game with the overlay multimodal data.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
60.
EFFICIENT MAPPING COORDINATE CREATION AND TRANSMISSION
A method is disclosed to generate (u,v) coordinates at the decoder side by using parameters of orthographic projection functions transmitted via an atlas bitstream. With the parameters for orthographic projection, the decoder is able to efficiently generate (u,v) coordinates, avoiding the cost of coding them explicitly.
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/46 - Embedding additional information in the video signal during the compression process
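Because an orthographic projection makes texture coordinates an affine function of vertex position, only a few parameters need to be carried in the atlas bitstream. A hedged sketch of the decoder side, with all parameter names assumed:

    import numpy as np

    def generate_uv(vertices, axis_u, axis_v, offset, scale):
        """vertices: Nx3 positions. axis_u, axis_v: 3-vectors spanning the
        projection plane. offset, scale: 2-vectors signalled in the atlas
        bitstream. Returns Nx2 (u,v) coordinates for the patch."""
        v = np.asarray(vertices, dtype=float)
        uv = np.stack([v @ np.asarray(axis_u, dtype=float),
                       v @ np.asarray(axis_v, dtype=float)], axis=1)
        return (uv - offset) * scale  # normalise into the atlas patch

    uv = generate_uv([[0, 0, 0], [1, 0, 2]], [1, 0, 0], [0, 1, 0], [0, 0], [0.5, 0.5])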
A harassment detection apparatus includes: an executing unit configured to execute a session of a shared environment; an input unit configured to receive biometric data, the biometric data being associated with a plurality of users participating in the executed session of the shared environment; a generating unit configured to generate, based on at least a part of the biometric data, emotion data associated with the plurality of users, the emotion data comprising a valence value and/or an arousal value associated with each of the plurality of users; a detection unit configured to detect, responsive to at least a first part of the emotion data satisfying one or more of a first set of criteria, one or more first users associated with the at least first part of the emotion data; and a modifying unit configured to modify, responsive to the detection of the one or more first users, one or more aspects of the shared environment.
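An assumed illustration of the detection step in the abstract above; the valence/arousal ranges and the criteria themselves are examples only:

    def detect_users(emotion, valence_max=-0.5, arousal_min=0.7):
        """emotion: {user_id: {"valence": v, "arousal": a}}, valence in [-1, 1],
        arousal in [0, 1]. Returns the users meeting the first set of criteria."""
        return [uid for uid, e in emotion.items()
                if e["valence"] <= valence_max and e["arousal"] >= arousal_min]

    flagged = detect_users({"u1": {"valence": -0.8, "arousal": 0.9},
                            "u2": {"valence": 0.2, "arousal": 0.4}})
    # The modifying unit might then, for example, mute chat between "u1" and others.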
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
An Image Activated Cell Sorting (IACS) classification workflow includes: employing a neural network-based feature encoder (or extractor) to extract features of cell images; automatically clustering cells based on the extracted features; identifying which cluster(s) to sort based on the cell images; fine-tuning a classification network based on the selected cluster(s); and, once refined, using the classification network to sort cells for real-time live sorting.
G06F 18/2415 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
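A loose end-to-end sketch of the workflow above, with stand-in components (k-means for the clustering, logistic regression in place of the fine-tuned classification network):

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    def iacs_pipeline(features, n_clusters, keep_clusters):
        """features: NxD array from the neural feature encoder. keep_clusters:
        cluster ids an operator picked after inspecting example cell images.
        Returns a classifier separating 'sort' cells from the rest."""
        labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
        y = np.isin(labels, keep_clusters).astype(int)  # 1 = sort this cell
        return LogisticRegression(max_iter=1000).fit(features, y)

    rng = np.random.default_rng(0)
    clf = iacs_pipeline(rng.normal(size=(200, 16)), n_clusters=4, keep_clusters=[1, 3])
    # clf.predict(new_features) would then gate real-time live sorting.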
An information processing device includes a control unit that performs control to output information regarding an image of a user viewpoint in a virtual space, in which the control unit performs control to switch a display corresponding to a performer included in the image between a live-action image generated based on a captured image of the performer and a character image corresponding to the performer, according to a result of detecting a state of at least one user in the virtual space or a state of the virtual space.
Two-dimensional images are converted (302) to a 3D neural radiance field (NeRF), which is modified (402) based on text input to resemble the type of character demanded by the text. An open-source "CLIP" model scores (404) how well an image matches a line of text to produce a final 3D NeRF, which may be converted (408) to a polygonal mesh and imported into a computer simulation such as a computer game.
G06N 3/006 - Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
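The scoring step could be reproduced with the open-source CLIP package (github.com/openai/CLIP); the NeRF rendering and update machinery is out of scope here, so this sketch only computes the image-text match score:

    import torch
    import clip
    from PIL import Image

    model, preprocess = clip.load("ViT-B/32", device="cpu")  # keep everything on CPU

    def clip_score(render: Image.Image, prompt: str) -> float:
        """Cosine similarity between a rendered NeRF view and the text prompt."""
        with torch.no_grad():
            img = model.encode_image(preprocess(render).unsqueeze(0))
            txt = model.encode_text(clip.tokenize([prompt]))
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        return float((img @ txt.T).item())

    # Higher scores mean the current NeRF better matches the demanded character.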
A cloud-based gaming system generates first and second instances of a virtual world of an online game for first and second players, respectively. First and second video streams of the first and second instances of the virtual world, respectively, are transmitted to the first and second players, respectively. The second video stream includes a ghosted version of a feature within the first instance of the virtual world. A request is received from the second player to merge the first and second instances of the virtual world. With the first player's approval, a merged instance of the virtual world is automatically generated by the cloud-gaming system as a combination of the first and second instances of the virtual world. Third and fourth video streams of the merged instance of the virtual world are transmitted to the first and second players, respectively, in lieu of the first and second video streams, respectively.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/47 - Controlling the progress of the video game involving branching, e.g. choosing one of several possible scenarios at a given point in time
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
67.
GAME ASSET OPTIMIZATION OVER NETWORK AT OPTIMIZER SERVER
A method includes receiving, from a device over a network at an optimizer server, a plurality of game assets of a video game. The method includes generating at least one combined game asset to represent the plurality of game assets. The method includes sending the at least one combined game asset to the device for use in the video game.
A63F 13/57 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the first user within a time period of a start of the communication of the one or more words; classify the one or more physiological characteristics using the one or more second signals; based on a classification of the one or more words and a classification of the one or more physiological characteristics, generate an action signal indicating that an action associated with the first user of the online content is to be taken, the action signal indicating a characteristic of the action determined based on a combination of the classification of the one or more words and the classification of the one or more physiological characteristics; and output the action signal.
A data processing apparatus includes circuitry configured to: receive a first signal indicative of one or more words communicated from a first user of online content to one or more second users of online content; classify the one or more words using the first signal; receive one or more second signals indicative of one or more physiological characteristics of the one or more second users in response to the communicated one or more words; classify the one or more physiological characteristics of the one or more second users using the one or more second signals; determine, based on a classification of the one or more words and a classification of the one or more physiological characteristics of the one or more second users, whether to generate an action signal, the action signal indicating that an action associated with the first user of the online content is to be taken; and when it is determined an action signal is to be generated, generate and output the action signal.
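The combination logic in both variants above can be pictured as a joint lookup over the two classifications; the class labels and action table below are invented for illustration, where the real apparatus may use learned classifiers instead:

    ACTIONS = {
        ("abusive", "distressed"):    "suspend",
        ("abusive", "neutral"):       "warn",
        ("borderline", "distressed"): "warn",
    }

    def action_signal(word_class, physio_class):
        """Returns the action characteristic, or None if no action is generated."""
        return ACTIONS.get((word_class, physio_class))

    print(action_signal("abusive", "distressed"))  # "suspend"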
Deep learning techniques such as vector graphics (300) are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder (302), such as a deep neural network (e.g., a recurrent neural network (RNN) or transformer), receives (400) vector graphics and generates (402) an encoded output. The encoded output is decoded (404) by a 3D decoder, such as another deep neural network, that outputs 2D graphics for comparison with the original image. Loss is computed (408) between the original and the output of the 3D decoder. The loss is backpropagated (410) to train the vector graphics encoder to generate 3D content.
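A skeleton of the training loop described above, in PyTorch, with toy tensor shapes and plain MLPs standing in for the RNN/transformer encoder and the 3D decoder:

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
    decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 128))
    opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

    for step in range(100):
        vg = torch.randn(32, 128)      # stand-in for tokenized vector graphics
        target = vg                    # stand-in for the original 2D image
        encoded = encoder(vg)          # encoded 3D representation
        rendered = decoder(encoded)    # decoded back to 2D for comparison
        loss = nn.functional.mse_loss(rendered, target)
        opt.zero_grad()
        loss.backward()                # loss is backpropagated to train the encoder
        opt.step()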
Deep learning is used to dynamically adapt virtual humans (300) in metaverse applications. The adaptation can be according to user preferences (400). In addition or alternatively, virtual humans and pets (302) can be adapted for metaverse applications based on demographics (408) of the user. The user's personal demographics may be used to establish (410) the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
To help a computer game player understand a computer game, upon pausing (300, 500) the game, visual subtitles may be presented (304). In addition, or alternatively, Braille representing the subtitles may be output (506) as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented (510). Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down (310) from normal speed.
A method includes receiving, from a device over a network at an optimizer server, a plurality of game assets of a video game. The method includes generating at least one combined game asset to represent the plurality of game assets. The method includes sending the at least one combined game asset to the device for use in the video game.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
74.
SIGNAL PROCESSING CIRCUIT, SIGNAL PROCESSING METHOD, AND PROGRAM
Provided is a signal processing circuit which processes an event signal generated by an event-based vision sensor (EVS) and which comprises a memory for storing a program code and a processor for executing an operation according to the program code. The operation includes detecting a relationship between positions within a block, from event signals generated in blocks obtained by dividing an EVS detection region, using a first method if the ratio of the eigenvalues of the variance-covariance matrix of the positions exceeds a threshold value, and using a second method, different from the first method, if the ratio does not exceed the threshold value.
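The selection criterion of entry 74 reduces to an eigenvalue-ratio test on the 2x2 variance-covariance matrix of in-block event positions; the threshold below is illustrative:

    import numpy as np

    def choose_method(positions, ratio_thresh=10.0):
        """positions: Nx2 in-block event coordinates. Returns "first" when one
        eigenvalue dominates (points are strongly line-like), else "second"."""
        cov = np.cov(np.asarray(positions, dtype=float).T)
        w = np.sort(np.linalg.eigvalsh(cov))  # ascending eigenvalues
        ratio = w[1] / w[0] if w[0] > 0 else np.inf
        return "first" if ratio > ratio_thresh else "second"

    print(choose_method([[0, 0], [1, 1.02], [2, 1.98], [3, 3.01]]))  # "first"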
An Image Activated Cell Sorting (IACS) classification workflow includes: employing a neural network-based feature encoder (or extractor) to extract features of cell images; automatically clustering cells based on the extracted features; identifying which cluster(s) to sort based on the cell images; fine-tuning a classification network based on the selected cluster(s); and, once refined, using the classification network to sort cells for real-time live sorting.
G06V 20/69 - Microscopic objects, e.g. biological cells or cellular parts
G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
Provided is a game presenting system having a plurality of game systems, each executing processing of a game in which a plurality of users participate, and a game presenting machine for presenting a situation of the games executed by the game systems. The game presenting machine obtains a motion image related to the game executed by each of the plurality of game systems and produces a game presenting screen image showing, as a list, at least some of the plurality of motion images obtained.
A63F 13/5252 - Changing parameters of virtual cameras using two or more virtual cameras concurrently or sequentially, e.g. automatically switching between fixed virtual cameras when a character changes room or displaying a rear-mirror view in a car-driving game
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
85.
AI PLAYER MODEL GAMEPLAY TRAINING AND HIGHLIGHT REVIEW
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user include creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing the AI player with access to the video game for game play. The access allows the AI player to provide inputs to the video game that substantially mimic a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
An apparatus comprising circuitry configured to perform a transport channel processing chain, the transport channel processing chain comprising a sub-carrier puncturing function, the sub-carrier puncturing function comprising puncturing, in each subframe of a composite transmission time interval, a set of subcarriers from at least one mapped physical resource block.
A terminal device for use with a wireless telecommunications network, the terminal device comprising: storage configured to store ancillary information not essential to every connection between the terminal device and the wireless telecommunications network; a controller configured to produce data indicative of the stored ancillary information; a transmitter configured to transmit the produced data to the wireless telecommunication network; and a receiver configured to receive an indication from the wireless telecommunication network to transmit the ancillary information to the wireless telecommunication network, wherein in response to the indication, the transmitter is configured to transmit the ancillary information.
To help a computer game player understand a computer game, upon pausing the game, visual subtitles may be presented. In addition, or alternatively, Braille representing the subtitles may be output as a series of vibrations on a touch pad of the controller. When the person's finger reaches the edge of the touch pad, a new series of Braille subtitles may be presented. Depending on where the player is in reading the subtitles and how fast the player reads them, the game video may be slowed down from normal speed.
A63F 13/285 - Generating tactile feedback signals via the game input device, e.g. force feedback
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/49 - Saving the game status; Pausing or ending the game
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
G09B 21/00 - Teaching, or communicating with, the blind, deaf or mute
Deep learning is used to dynamically adapt virtual humans in metaverse applications. The adaptation can be according to user preferences. In addition or alternatively, virtual humans and pets can be adapted for metaverse applications based on demographics of the user. The user's personal demographics may be used to establish the costume, skin color, emotion, voice, and behavior of the virtual humans. Similar considerations may be used to adapt virtual pets to the user's experience of the metaverse.
Deep learning techniques such as vector graphics are used to create 3D content and assets for metaverse applications. Vector graphics is a scalable format that provides rich 3D content. A vector graphics encoder, such as a deep neural network (e.g., a recurrent neural network (RNN) or transformer), receives vector graphics and generates an encoded output. The encoded output is decoded by a 3D decoder, such as another deep neural network, that outputs 2D graphics for comparison with the original image. Loss is computed between the original and the output of the 3D decoder. The loss is backpropagated to train the vector graphics encoder to generate 3D content.
Gaze tracking data representing a user's gaze is analyzed to determine one or more regions of interest. One or more gaze tracking parameters are determined from the gaze tracking data. Adjusted foveation data is determined, representing an adjusted size and/or shape of one or more regions of interest in one or more images to be subsequently presented to the user, based on the one or more gaze tracking parameters. Compression of the one or more transmitted images is adjusted so that fewer bits are needed to transmit data for portions of an image outside the one or more regions of interest than for portions within them. Adjusting the compression may include eliminating the region(s) of interest from images presented to the user during a saccade or blink.
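In codec terms the adjustment might be expressed as per-block quantization, as in this assumed sketch (circular regions of interest, H.264/HEVC-style QP values):

    def block_qp(block_center, rois, base_qp=40, roi_qp=20, in_saccade=False):
        """Quantization parameter for one block: low QP (more bits) inside a
        region of interest, high QP (fewer bits) outside; during a saccade or
        blink every block falls back to the coarse peripheral QP."""
        if in_saccade:
            return base_qp
        bx, by = block_center
        for cx, cy, radius in rois:
            if (bx - cx) ** 2 + (by - cy) ** 2 <= radius ** 2:
                return roi_qp
        return base_qp

    print(block_qp((100, 100), [(110, 95, 50)]))  # 20: inside the region of interest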
Methods and apparatus provide for a head-mounted display to be worn by a user located within a physical environment and for engaging in an interactive experience; an entertainment device including a processing section that executes an interactive application that is manipulated in part through receiving inputs from the user; a communication partner operating to relay video and audio signals outputted from the entertainment device to the head-mounted display; continuously acquiring information of the environment at a predetermined frame rate; at least one camera on the head-mounted display, which captures images of the physical environment, where the communication partner is connected to the entertainment device with wired communication, and the communication partner is connected to the head-mounted display with wireless communication.
An input device for controlling a computing system includes one or more sensors configured to sense a change in weight distribution of a user positioned on the input device in use, and a transmitter configured to transmit a signal based on the sensed change in weight distribution, for use in a virtual joystick input to a computing system.
A method for modifying user sentiment is described. The method includes analyzing behavior of a group of players during a play of a game. The behavior of the group of players is indicative of a sentiment of the group of players during the play of the game. The method includes accessing a non-player character (NPC) during the play of the game. The NPC has a characteristic that influences a change in the sentiment of the group of players. The method includes placing the NPC into one or more scenes of the game during the play of the game for a period of time until the change in the sentiment of the group of players is determined. The change in the sentiment of the group of players is determined by analyzing the behavior of the group of players during said play of the game.
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
96.
AI PLAYER MODEL GAMEPLAY TRAINING AND HIGHLIGHT REVIEW
Methods and systems for engaging an AI player of a user to play a video game on behalf of the user include creating the AI player for the user using at least some of the attributes of the user, training the AI player using inputs provided by the user during game play of the video game, and providing the AI player with access to the video game for game play. The access allows the AI player to provide inputs to the video game that substantially mimic a play style of the user. Control of the game play of the video game can be transitioned to the user at any time during the game play of the AI player. The user can also control the game play of the AI player from a video recording of the game play.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/493 - Resuming a game, e.g. after pausing, malfunction or power failure
A63F 13/497 - Partially or entirely replaying previous game actions
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/86 - Watching games played by other players
Methods and apparatus provide for acquiring position information about a head-mounted display; performing information processing using the position information about the head-mounted display; generating and outputting data of an image to be displayed as a result of the information processing; and generating and outputting data of an image of a user guide indicating position information about a user in a real space using the position information about the head-mounted display, where the image of the user guide represents a state of the real space in which the user is physically located, as viewed obliquely.
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
Ambisonics audio, such as may be used for computer simulations including computer games, is improved by multi-order optimizations that frame an optimization problem minimizing a cost function across a subset of Ambisonics orders for a chosen Ambisonics order N. In a simple form, this cost function minimizes error across all orders (0 ≤ n ≤ N), and additional weighting is applied to emphasize or de-emphasize particular orders. The cost functions and optimization criteria may differ for binaural and speaker outputs.
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/005 - Correction of errors induced by the transmission channel, if related to the coding algorithm
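As a hedged illustration (notation assumed, not taken from the source), the multi-order optimization above can be framed as minimizing a per-order-weighted cost

    J(\mathbf{g}) = \sum_{n=0}^{N} w_n \, \bigl\| \mathbf{e}_n(\mathbf{g}) \bigr\|^2 ,

where e_n(g) is the reproduction error contributed by Ambisonics order n under decode parameters g, and the weights w_n emphasize or de-emphasize particular orders; setting every w_n = 1 recovers the simple form that minimizes error across all orders 0 ≤ n ≤ N.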
A technique for encoding Ambisonics audio includes inputting audio to multiple Ambisonics encoders producing respective Ambisonics soundfields. Prior to mixing the soundfields, each soundfield is weighted to mitigate artifacts from order-truncation. After weighting, the soundfields are mixed to produce Ambisonics audio.
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04R 5/02 - Spatial or constructional arrangements of loudspeakers
H04S 5/00 - Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
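A small sketch of the weight-then-mix step described in the abstract above, with ACN channel ordering and the per-order taper assumed (in ACN ordering, channel c belongs to order floor(sqrt(c))):

    import numpy as np

    def mix_soundfields(fields, order_weights):
        """fields: list of (channels, samples) ACN-ordered HOA arrays of equal
        shape. order_weights: one gain per Ambisonics order, tapering high
        orders to mitigate order-truncation artifacts. Returns the mix."""
        gains = np.array([order_weights[int(np.sqrt(c))]
                          for c in range(fields[0].shape[0])])[:, None]
        return sum(f * gains for f in fields)

    a = np.random.randn(9, 48000)  # two 2nd-order (9-channel) soundfields
    b = np.random.randn(9, 48000)
    mixed = mix_soundfields([a, b], order_weights=[1.0, 0.9, 0.6])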