The present invention provides a video game development platform. More specifically, aspects of the invention relate to components of applications such as video games, including source code, graphics, sounds, and animations, as well as a marketplace where any of these components can be traded for currency, tokens, or credits, or given to other users. These components can then be combined, using game editing and creation tools, to make video games. Users can create and edit games, either of their own or based on other users' preexisting games, and can share their games with others. Game components may be bought, sold, traded, or otherwise distributed through an online marketplace.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/71 - Game security or game management aspects using secure communication between game devices and game servers, e.g. by encrypting game data or authenticating players
2.
DISTRIBUTED TRAINING FOR MACHINE LEARNING OF AI CONTROLLED VIRTUAL ENTITIES ON VIDEO GAME CLIENTS
Systems and methods for utilizing a video game console to monitor the player's video game, detect when a particular gameplay situation occurs during the player's video game experience, and collect game state data corresponding to how the player reacts to the particular gameplay situation or an effect of the reaction. In some cases, the video game console can receive an exploratory rule set to apply during the particular gameplay situation. In some cases, the video game console can trigger the particular gameplay situation. A system can receive the game state data from many video game consoles and train a rule set based on the game state data. Advantageously, the system can save computational resources by utilizing the players' video game experience to train the rule set.
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
Embodiments of the systems and methods disclosed herein provide a request distribution system in which a request for resources may be executed by a plurality of workers. Upon receiving a request for resources from a user computing system, the request distribution system may select a subset of workers from the plurality of workers to execute the request within a time limit. Once the workers generate a plurality of outputs, each output associated with a quality level, the request distribution system may transmit the output associated with the highest quality level to the user computing system.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/71 - Game security or game management aspects using secure communication between game devices and game servers, e.g. by encrypting game data or authenticating players
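As an illustrative sketch (not the patented implementation), the core distribution step above — fan a request out to a subset of workers, then return the highest-quality output produced within the time limit — could look like the following. The worker callable interface `(request, time_limit) -> (output, quality)` is an assumption made for the example.

```python
import random

def distribute_request(request, workers, subset_size, time_limit):
    """Send a request to a subset of workers and return the
    highest-quality output produced within the time limit.

    Each worker is a callable returning (output, quality) --
    a hypothetical interface used only for illustration.
    """
    chosen = random.sample(workers, min(subset_size, len(workers)))
    results = []
    for worker in chosen:
        output, quality = worker(request, time_limit)
        if output is not None:  # worker finished within the time limit
            results.append((quality, output))
    if not results:
        return None
    # Transmit the output associated with the highest quality level.
    return max(results)[1]

# Toy workers producing outputs at different quality levels.
def low_quality(req, t):
    return (f"{req}@low", 1)

def high_quality(req, t):
    return (f"{req}@high", 3)
```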
Targeting testing code based on failure paths can improve hit rates and reduce memory consumption. Aggregating failure signatures into clusters can help to identify additional tests to perform. Further, the signature clusters can be used to automate testing of a video game by, for example, identifying tests that test elements of the video game that are common to the signatures within a cluster and automatically executing the tests without user involvement. The results of the tests can be used to modify the video game state. The process of testing and modifying the video game can be performed iteratively until a signature for the video game no longer matches the cluster.
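A minimal sketch of the signature-aggregation idea above (the real system's signature format is unspecified; here a signature is assumed, purely for illustration, to be the tail of a failure path):

```python
def signature(failure_path):
    # Reduce a failure path (a sequence of frames or calls) to a
    # hashable signature -- here, the last three entries.
    return tuple(failure_path[-3:])

def cluster_failures(failure_paths):
    # Group failure paths whose signatures match into clusters.
    clusters = {}
    for path in failure_paths:
        clusters.setdefault(signature(path), []).append(path)
    return clusters
```

Clusters with many members point at common failing elements, which is where additional tests would be targeted.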
Systems and methods are disclosed for training a machine learning model to control an in-game character or other entity in a video game in a manner that aims to imitate how a particular player would control the character or entity. A generic behavior model that is trained without respect to the particular player may be obtained and then customized based on observed gameplay of the particular player. The customization training process may include freezing at least a subset of layers or levels in the generic model, then generating one or more additional layers or levels that are trained using gameplay data for the particular player.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
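The customization step above — freeze the generic model's layers, then append new trainable layers for the particular player — can be sketched framework-free as follows. The `Layer`/`BehaviorModel` classes are stand-ins invented for the example, not the disclosed model architecture.

```python
class Layer:
    def __init__(self, name):
        self.name = name
        self.trainable = True

class BehaviorModel:
    def __init__(self, layers):
        self.layers = layers

def personalize(generic_model, num_new_layers):
    # Freeze the generic layers so the shared behavior is preserved...
    for layer in generic_model.layers:
        layer.trainable = False
    # ...then append fresh layers to be trained on the particular
    # player's observed gameplay data.
    new_layers = [Layer(f"player_{i}") for i in range(num_new_layers)]
    return BehaviorModel(generic_model.layers + new_layers)
```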
Various aspects of the subject technology relate to systems, methods, and machine-readable media for automated detection of emergent behaviors in interactive agents of an interactive environment. The disclosed system represents an artificial intelligence based entity that utilizes a trained machine-learning-based clustering algorithm to group users together based on similarities in behavior. The clusters are processed based on a determination of the type of activity of the clustered users. In order to identify new categories of behavior and to label those new categories, a separate cluster analysis is performed using interaction data obtained at a subsequent time. The additional cluster analysis determines any change in behavior between the clustered sets and/or change in the number of users in a cluster. The identification of emergent user behavior enables the subject system to treat users differently based on their in-game behavior and to adapt in near real-time to changes in their behavior.
Various aspects of the subject technology relate to systems, methods, and machine-readable media for motion capture. The method includes obtaining a first video with at least one actor, the first video including a first set of movements of the at least one actor. The method also includes obtaining a second video with the at least one actor, the second video including a second set of movements of the at least one actor, the second set of movements correlating with the first set of movements. The method also includes combining the first video with the second video to obtain a combined video, the combined video including the first set of movements and the second set of movements, the first set of movements displayed as outlines.
Systems and methods can automatically detect bad contrast ratios in a rendered frame of a game application. A frame in the game application can be divided into a plurality of pixel regions, with each pixel region having multiple pixels. The systems and methods can calculate the luminance of the pixels in a pixel region and calculate a contrast ratio for the pixel region based on the luminance of those pixels. The contrast ratio of the pixel region can be used to determine whether it meets a threshold contrast ratio. The color of the pixel region can be automatically changed to a predefined color to indicate that the contrast ratio fails to meet the threshold contrast ratio.
G09G 3/36 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes for presentation of an assembly of a number of characters, e.g. a page, by composing the assembly by combination of individual elements arranged in a matrix by control of light from an independent source using liquid crystals
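A sketch of the per-region computation above, assuming (as one plausible choice, not necessarily the disclosed one) the WCAG-style relative-luminance and contrast-ratio formulas and a magenta flag color:

```python
def luminance(rgb):
    # Relative luminance of an sRGB pixel (WCAG-style linearization).
    def lin(c):
        c /= 255.0
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = rgb
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

def region_contrast_ratio(region):
    # Contrast ratio between the brightest and darkest pixels in a region.
    lums = [luminance(p) for p in region]
    return (max(lums) + 0.05) / (min(lums) + 0.05)

def flag_low_contrast(regions, threshold=4.5, flag_color=(255, 0, 255)):
    # Recolor regions whose contrast ratio falls below the threshold;
    # regions that pass are left unchanged (None).
    return [flag_color if region_contrast_ratio(r) < threshold else None
            for r in regions]
```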
9.
Interactive agents for user engagement in an interactive environment
Various aspects of the subject technology relate to systems, methods, and machine-readable media for interactive computer-operated agents for user engagement in an interactive environment. Computer-operated agents are introduced to help populate a session and are configured to maximize engagement rates among users associated with user-controlled agents. During these interactions, engagement metrics are collected that indicate different interaction rates at different times by the computer-operated agents. Popular computer-operated agents (with relatively high interaction rates) can be kept in circulation, while less popular computer-operated agents (with relatively lower interaction rates) can be retained in circulation for diversity or purged from circulation. In the disclosed system, for each instance that a user-controlled agent interacts with a computer-operated agent, a log of behavior data from that interaction can be monitored and collected to generate and/or adjust behavior models that provide the behavioral response distribution for a given computer-operated agent.
Embodiments of the systems and methods described herein provide an editor hub that can host a virtual environment and allow multiple game developer systems to connect to it. The editor hub can manage all change requests from connected game developers and apply the change requests to the runtime version of the data. The connected game developers can receive the same cached build results from the build pipeline, which can allow simultaneous updates for a plurality of game developers working together on the same virtual content.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04L 29/06 - Communication control; Communication processing characterised by a protocol
Various aspects of the subject technology relate to systems, methods, and machine-readable media for real-time localization. The disclosed system provides for real-time localization of an online product that is released to a client in its native language. The online product may include a native layer, where content in the native language is provided, as well as a localization layer, where the content that has been localized can be provided for display. The localization layer may be superimposed on the native layer such that the content of the native layer is obscured from display. The disclosed system includes determining the native language and detecting the user's locale so that the disclosed system can provide a localized version of the native content. The client calls a backend translation engine of a server to request the localized version, and the backend translation engine can respond with the localized version in real-time.
Systems and methods are disclosed for universal body movement translation and character rendering. Motion data from a source character can be translated and used to direct movement of a target character model in a way that respects the anatomical differences between the two characters. The movement of biomechanical parts in the source character can be converted into normalized values based on defined constraints associated with the source character, and those normalized values can be used to inform the animation of movement of biomechanical parts in a target character based on defined constraints associated with the target character.
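The normalization idea above — map each joint value into a normalized range using the source character's constraints, then map it back out using the target character's constraints — can be sketched as follows (the joint names and limit tuples are illustrative, not from the disclosure):

```python
def normalize(value, lo, hi):
    # Map a joint value within [lo, hi] to a normalized value in [0, 1].
    return (value - lo) / (hi - lo)

def denormalize(t, lo, hi):
    # Map a normalized value in [0, 1] back into [lo, hi].
    return lo + t * (hi - lo)

def retarget(source_pose, source_limits, target_limits):
    # Translate each joint value through normalized space so the target
    # character moves proportionally within its own anatomical limits.
    target_pose = {}
    for joint, value in source_pose.items():
        t = normalize(value, *source_limits[joint])
        target_pose[joint] = denormalize(t, *target_limits[joint])
    return target_pose
```

A source elbow bent halfway through its range thus produces a target elbow bent halfway through its (possibly different) range, respecting the anatomical differences between the two characters.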
The present disclosure provides a system for a game application host system and game application that can determine the hardware characteristics of a user computing system for use during online matchmaking in a multiplayer game application. The game application can include a hardware analysis module that can evaluate the user computing system to determine the speed and operational characteristics of the hardware. The hardware characteristics can be used for matchmaking by a matchmaking module of the host application system to select hosts and users for a game match. The hardware analysis module can run tests, such as a data throughput analysis and a processing analysis, to evaluate and rate the user computing system. The ratings can be incorporated into the matchmaking analysis along with other matchmaking characteristics, such as latency, player skill level, geographical location, and other existing matchmaking characteristics, in order to select users for game matches.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/34 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using peer-to-peer connections
H04L 29/06 - Communication control; Communication processing characterised by a protocol
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
14.
Content aggregation and automated assessment of network-based platforms
In some embodiments, the present disclosure provides a content aggregation and assessment computing system that can be configured to host a network-based content platform. For example, generated content can accumulate value based on defined metrics. The system can automatically track the submitted content's value over time. The accumulated value may be associated with a user's profile based on pre-defined criteria. The accumulated value may be used to calculate a ranking for the user profile. The user profile ranking may correspond to increased status and/or privileges in the online community and access to secured portions of the platform.
Sequence predictors may be used to predict one or more entries in a musical sequence. The predicted entries in the musical sequence enable a virtual musician to continue playing a musical score based on the predicted entries when the occurrence of latency causes a first computing system hosting a first virtual musician to not receive entries or timing information for entries being performed in the musical sequence by a second computing system hosting a second virtual musician. The sequence predictors may be generated using a machine learning model generation system that uses historical performances of musical scores to generate the sequence predictor. Alternatively, or in addition, earlier portions of a musical score may be used to train the model generation system to obtain a prediction model that can predict later portions of the musical score.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
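One simple way to realize a sequence predictor of the kind described above is a bigram model trained on historical performances: when latency hides the remote virtual musician's next entry, the local system predicts the most likely continuation. This is a sketch under that assumption, not the disclosed machine learning model.

```python
from collections import Counter, defaultdict

def train_predictor(scores):
    # Count note-to-note transitions (bigrams) across historical
    # performances of musical scores.
    model = defaultdict(Counter)
    for score in scores:
        for prev, nxt in zip(score, score[1:]):
            model[prev][nxt] += 1
    return model

def predict_next(model, note):
    # Predict the most likely next entry; None if the note is unseen.
    options = model.get(note)
    return options.most_common(1)[0][0] if options else None
```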
Systems and methods are disclosed for enabling a player of a video game to designate custom voice utterances to control an in-game character. One or more machine learning models may learn in-game character actions associated with each of a number of player-defined utterances based on player demonstration of desired character actions. During execution of an instance of a video game, current game state information may be provided to the one or more trained machine learning models based on an indication that a given utterance was spoken by the player. A system may then cause one or more in-game actions to be performed by a non-player character in the instance of the video game based on output of the one or more machine learning models.
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
17.
Universal facial expression translation and character rendering system
Systems and methods for universal facial expression translation and character rendering. An example method includes obtaining a three-dimensional face model of a face of a virtual character. The three-dimensional face model is presented in a user interface, with facial characteristics of the three-dimensional face model adjustable in the user interface. Definitions of facial shapes of the virtual character are obtained, with each facial shape being associated with a facial shape identifier. A facial shape identifier indicates a type of adjustment of facial characteristics. A facial shape represents the three-dimensional face model of the virtual character with facial characteristics according to associated facial shape identifiers. The facial shapes are stored in a database as being associated with the character. User input specifying one or more facial shape identifiers is received. The three-dimensional face model is rendered with facial characteristics adjusted according to the one or more specified facial shape identifiers.
Methods for providing contextual video recommendations within a video game are provided. In one aspect, a method includes executing an application that uses a rendering engine. The method also includes determining that a video recommendation threshold has been met. The method also includes providing a current contextual state of the application to a server such that the server selects a video from a plurality of videos based on the provided current contextual state and an index, wherein the index includes output from a vision model applied on the plurality of videos, and wherein the vision model is trained on footage generated by the rendering engine. The method also includes receiving a reference to the selected video from the server. The method also includes providing for display, via the reference, the selected video within a user interface of the application. Systems and machine-readable media are also provided.
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
A system can manage distribution of computing jobs among a plurality of third-party network or cloud computing providers to maximize utilization of available computing resources purchased or otherwise obtained by an entity. The system can determine a dependency relationship between jobs and distribute the jobs among the network computing providers based at least in part on the dependency relationship between the jobs. Moreover, the system can use machine learning algorithms to generate one or more prediction algorithms to predict future computing resource usage demands for performing a set of scheduled and unscheduled jobs. Based at least in part on the resource prediction, the dependency relationship between jobs, service level agreements with network computing service providers, and job resource requirements, the system can determine an improved or optimal distribution of jobs among the network computing service providers that satisfies or best satisfies one or more objective functions to maximize resource utilization.
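The dependency-aware distribution step above must order jobs so that none runs before the jobs it depends on. A minimal sketch of that step using Kahn's topological sort (the cost-optimizing objective functions and provider selection described above are omitted):

```python
from collections import deque

def schedule(jobs, deps):
    """Order jobs so each runs only after its dependencies.

    `deps` maps a job to the set of jobs it depends on.
    Raises ValueError on a dependency cycle.
    """
    indegree = {j: len(deps.get(j, ())) for j in jobs}
    dependents = {j: [] for j in jobs}
    for j, ds in deps.items():
        for d in ds:
            dependents[d].append(j)
    ready = deque(j for j in jobs if indegree[j] == 0)
    order = []
    while ready:
        j = ready.popleft()
        order.append(j)
        for k in dependents[j]:
            indegree[k] -= 1
            if indegree[k] == 0:
                ready.append(k)
    if len(order) != len(jobs):
        raise ValueError("dependency cycle detected")
    return order
```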
A video compression system and method may be used to compress video data using both resolution compression and texture compression. The compression may involve converting the video format from a first format to a second format and then performing resolution compression across blocks of pixels within each frame of the video. The resolution compressed data may then be arranged as data triplets spanning three consecutive frames of the video. The data triplets may be texture compressed using ETC or other texture compression techniques. The compressed video may be part of other applications, such as a video to be played within a video game. A client device may be able to decompress the compressed video by reversing the texture compression, reversing the resolution compression, and performing a format conversion to generate uncompressed video data that can be used to play the video.
H04N 19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
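The triplet-packing step above — arranging resolution-compressed data across three consecutive frames before texture compression — can be sketched as a reversible interleaving. The per-frame data layout here is a flat list of samples, an assumption made for the example.

```python
def to_triplets(frames):
    # Arrange resolution-compressed frame data as triplets spanning
    # three consecutive frames, ready for texture compression.
    assert len(frames) % 3 == 0, "pad the video to a multiple of 3 frames"
    triplets = []
    for i in range(0, len(frames), 3):
        a, b, c = frames[i], frames[i + 1], frames[i + 2]
        # Interleave corresponding samples from the three frames.
        triplets.append(list(zip(a, b, c)))
    return triplets

def from_triplets(triplets):
    # Reverse the packing to recover the per-frame data (the first
    # step of decompression on the client device).
    frames = []
    for t in triplets:
        a, b, c = zip(*t)
        frames.extend([list(a), list(b), list(c)])
    return frames
```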
21.
SYSTEMS AND METHODS FOR RAY-TRACED SHADOWS OF TRANSPARENT OBJECTS
Rendering shadows of transparent objects using ray tracing in real-time is disclosed. For each pixel in an image, a ray is launched towards the light source. If the ray intersects a transparent object, lighting information (e.g., color, brightness) is accumulated for the pixel. A new ray is launched from the point of intersection, either towards the light source or in a direction based on reflection/refraction from the surface. Ray tracing continues recursively, accumulating lighting information at each transparent object intersection. Ray tracing terminates when a ray intersects an opaque object, indicating a dark shadow. Ray tracing also terminates when a ray exits the scene without intersecting an object, where the accumulated lighting information is used to render a shadow for the pixel location. Soft shadows can be rendered using the disclosed technique by launching a plurality of rays in different directions based on a size of the light source.
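The recursive accumulation described above can be sketched as a shadow-factor function: trace toward the light, attenuate through each transparent hit, terminate dark on an opaque hit, and return the accumulated transmittance when the ray reaches the light. The `scene.intersect` API and the `Surface`/`ListScene` classes are invented for the example; a real renderer would use its own ray-tracing interface.

```python
class Surface:
    def __init__(self, opaque, transmittance=1.0):
        self.opaque = opaque
        self.transmittance = transmittance

class ListScene:
    # Toy scene: a fixed list of surfaces hit in order along the ray.
    def __init__(self, surfaces):
        self.surfaces = list(surfaces)

    def intersect(self, origin, light):
        if not self.surfaces:
            return None  # ray exits the scene / reaches the light
        return ((0.0, 0.0, 0.0), self.surfaces.pop(0))

def shadow_factor(scene, origin, light, accumulated=1.0, depth=0, max_depth=8):
    """Recursively trace a shadow ray toward the light.

    Returns 0.0 for a hard shadow (opaque blocker) or the accumulated
    transmittance through transparent objects otherwise.
    """
    if depth >= max_depth:
        return accumulated
    hit = scene.intersect(origin, light)
    if hit is None:
        # Ray reached the light: shade with what was accumulated.
        return accumulated
    point, surface = hit
    if surface.opaque:
        return 0.0  # opaque object intersected: fully dark shadow
    # Transparent surface: accumulate its contribution and continue
    # from the intersection point toward the light.
    return shadow_factor(scene, point, light,
                         accumulated * surface.transmittance,
                         depth + 1, max_depth)
```

Soft shadows would average this factor over a plurality of rays launched in different directions based on the light source's size.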
09 - Scientific and electric apparatus and instruments
41 - Education; entertainment
42 - Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.
Goods & Services
Downloadable computer game software via a global computer network and wireless devices; Recorded computer game software via a global computer network and wireless devices; Downloadable video game software; Recorded video game software. Entertainment services, namely, providing an online computer game; Provision of information relating to electronic computer games provided via the Internet; Production of video and computer game software. Design and development of interactive, computer, video and electronic game software; Video game development services; Video game programming development services; Designing and developing computer game software and video game software for use with computers, video game program systems and computer networks; Computer programming of video and computer games.
09 - Scientific and electric apparatus and instruments
41 - Education; entertainment
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software. Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet.
09 - Scientific and electric apparatus and instruments
41 - Education; entertainment
42 - Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.
Goods & Services
Downloadable computer game software via a global computer network and wireless devices; Recorded computer game software via a global computer network and wireless devices; Downloadable video game software; Recorded video game software. Entertainment services, namely, providing an online computer game; Provision of information relating to electronic computer games provided via the Internet; Production of video and computer game software. Design and development of interactive, computer, video and electronic game software; Video game development services; Video game programming development services; Designing and developing computer game software and video game software for use with computers, video game program systems and computer networks; Computer programming of video and computer games.
42 - Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.
Goods & Services
Design and development of interactive, computer, video and electronic game software; Video game development services; video game programming development services; Designing and developing computer game software and video game software for use with computers, video game program systems and computer networks; Computer programming of video and computer games
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
Entertainment services, namely, providing an online computer game; Provision of information relating to electronic computer games provided via the Internet; Production of video and computer game software
Entertainment services, namely, providing an online computer game; provision of information relating to electronic computer games provided via the Internet
Entertainment services, namely, providing an online computer game; Provision of information relating to electronic computer games provided via the Internet; Production of video and computer game software
42 - Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.
Goods & Services
Design and development of interactive, computer, video and electronic game software; Video game development services; video game programming development services; Designing and developing computer game software and video game software for use with computers, video game program systems and computer networks; Computer programming of video and computer games
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
09 - Scientific and electric apparatus and instruments
41 - Education; entertainment
42 - Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.
Goods & Services
(1) Computer video games; Downloadable computer games; Downloadable electronic games for use with mobile telephones, handheld computers and tablet computers; Video games (1) Online video gaming services; Provision of information relating to electronic computer games provided via the Internet; Production of video games and computer game software
(2) Design and development of interactive, computer, video and electronic game software; Video game development services; video game programming development services; Designing and developing computer game software and video game software for use with computers, video game program systems and computer networks; Computer programming of video and computer games
Entertainment services, namely, providing an on-line computer game; provision of information relating to electronic computer games provided via the Internet
09 - Scientific and electric apparatus and instruments
Goods & Services
Downloadable computer game software; recorded computer game software; downloadable computer game software via a global computer network and wireless devices; downloadable video game software; recorded video game software
Methods and systems for presenting incentivized hierarchical gameplay are provided. In one aspect, a method includes receiving a request to form a relationship between a first and a second virtual entity in an electronic simulation environment, the first virtual entity comprising multiple third virtual entities, at least one of which is in a hierarchical relationship with another third virtual entity. The method also includes determining whether the relationship between the first and the second virtual entity can be formed. The method also includes forming the relationship. The method also includes associating a set of attributes of the first virtual entity with the second virtual entity. The method also includes increasing a set of strength attributes or decreasing a set of weakness attributes of the first virtual entity as a result of the relationship being formed. The method also includes transmitting a message indicating failure to form the relationship.
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
38.
System and method for providing promotions to users during idle time
A63F 13/53 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game
A63F 13/44 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment involving timing of operations, e.g. performing an action within a time slot
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
39.
Method and apparatus for splitting three-dimensional volumes
Apparatuses and methods pertaining to computer handling of three-dimensional volumes are disclosed. One such method comprises obtaining data representing an input set of one or more three-dimensional volumes; selecting a first three-dimensional volume from among the input set of three-dimensional volumes; identifying a concavity in the first three-dimensional volume, the concavity having a region of deepest concavity; splitting the first three-dimensional volume along a split plane or intersection loop contacting or intersecting the region of deepest concavity, so as to provide plural three-dimensional volumes; and providing data representing an output set of two or more three-dimensional volumes.
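The splitting step can be illustrated on a point-sampled volume: once a plane through the region of deepest concavity is known, partitioning is a signed-distance test. This is a minimal sketch of only the split; the concavity detection that chooses the plane is assumed, and the voxel representation is an illustrative simplification.

```python
import numpy as np

def split_volume(points, plane_point, plane_normal):
    """Split a point-sampled volume along a plane through the region of
    deepest concavity (plane_point), returning the two output volumes."""
    n = np.asarray(plane_normal, dtype=float)
    n /= np.linalg.norm(n)
    # Signed distance of each sample point from the split plane.
    side = (np.asarray(points, dtype=float) - plane_point) @ n
    return points[side >= 0], points[side < 0]

# An L-shaped (concave) set of voxel centers, split at the inner corner:
pts = np.array([[x, y, 0] for x in range(4) for y in range(4)
                if not (x >= 2 and y >= 2)])
upper, lower = split_volume(pts, plane_point=[2, 0, 0], plane_normal=[1, 0, 0])
```

The concave L-shape yields two convex pieces, which is the usual motivation for splitting at the deepest concavity.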
Embodiments of the present disclosure comprise a content streaming system 120 that can stream content assets within the game environment. The content streaming system 120 can include a decoding module 124 that can decode content prior to streaming. The content streaming system 120 can stream the content asset to the environmental element during runtime without dynamically rendering the content asset during runtime; the decoded content asset can be applied as a texture to elements within the game environment. A content management module 122 can prioritize the order in which the content assets are decoded and manage the process of streaming the content asset to the environmental elements during runtime. The content management module 122 can also dynamically select the content asset with an appropriate resolution to stream to the environmental element based on runtime gameplay information.
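The prioritization role of the content management module can be sketched as a priority queue over pending decode jobs. The asset names and priority values below are illustrative, not from the patent.

```python
import heapq

def decode_in_priority_order(assets):
    """Sketch of the content-management step: decode assets in priority
    order (lower number = needed sooner in the scene), then hand each
    decoded asset off for streaming as a texture on its element."""
    heap = [(priority, name) for name, priority in assets.items()]
    heapq.heapify(heap)
    order = []
    while heap:
        _, name = heapq.heappop(heap)
        order.append(name)  # decode here, then stream to the environmental element
    return order

# A billboard visible now decodes before a distant stadium screen:
order = decode_in_priority_order({"stadium_screen": 5, "billboard": 1, "jumbotron": 3})
```

In practice the priorities would be derived from runtime gameplay information (visibility, distance), which is also what would drive the resolution selection the abstract mentions.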
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
H04N 19/127 - Prioritisation of hardware or computational resources
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
41.
DYNAMIC MODIFICATIONS OF SINGLE PLAYER AND MULTIPLAYER MODE IN A VIDEO GAME
A video game includes a single player mode where completion of storyline objectives advances the single player storyline. The video game also includes a multiplayer mode where a plurality of players can play on an instance of a multiplayer map. Storyline objectives from the single player mode are selected and made available for completion to players in the multiplayer mode, and the single player storylines can be advanced by players completing respective storyline objectives while playing in the multiplayer mode. Combinations of storyline objectives are selected from pending storyline objectives for players connecting to a multiplayer game for compatibility with multiplayer maps. Constraints can be used to determine compatibility.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/48 - Starting a game, e.g. activating a game device or waiting for other players to join a multiplayer session
An example method of efficient binary resource distribution to client computing devices connected to content delivery networks (CDNs) comprises: receiving, by a client computing device, build metadata associated with a new build of a software product, wherein the new build comprises a plurality of binary resources; identifying, based on the build metadata, a subset of reusable locally stored binary resources comprised by a current build of the software product, wherein each binary resource of the subset matches a corresponding binary resource of the new build; and responsive to identifying a binary resource of the new build that does not have a corresponding reusable locally stored binary resource in the current build, downloading at least a part of the binary resource from a CDN.
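The identify-reusable / download-missing logic can be sketched by comparing content digests from the build metadata against locally stored bytes. This is a minimal sketch: the metadata shape (name → SHA-256 digest) is an assumption, and real build metadata would also carry sizes, chunk offsets, and so on.

```python
import hashlib

def plan_downloads(new_build_metadata, local_files):
    """Given build metadata mapping resource name -> expected digest, and
    the locally stored bytes of the current build, return the reusable
    subset and the resources that must be fetched from the CDN."""
    reusable, to_download = {}, []
    for name, digest in new_build_metadata.items():
        local = local_files.get(name)
        if local is not None and hashlib.sha256(local).hexdigest() == digest:
            reusable[name] = local     # matches the new build: keep it
        else:
            to_download.append(name)   # missing or changed: fetch from CDN
    return reusable, to_download

meta = {"engine.bin": hashlib.sha256(b"v2").hexdigest(),
        "textures.pak": hashlib.sha256(b"same").hexdigest()}
local = {"engine.bin": b"v1", "textures.pak": b"same"}
reusable, to_download = plan_downloads(meta, local)
```

Only the changed resource is downloaded, which is the bandwidth saving the method targets.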
Systems and methods are provided for real-time audio generation for electronic games based on personalized music preferences. An example method includes requesting listening history information from one or more music streaming platforms, the listening history information indicating, at least, music playlists that the user created or to which the user is subscribed. A style preference associated with the user is determined based on the listening history information. A musical cue associated with an electronic game is accessed, the musical cue being associated with music to be output by the electronic game based on a game state of the electronic game. Personalized music is generated utilizing one or more machine learning models based on the musical cue and the style preference, wherein the system is configured to provide the personalized music for inclusion in the electronic game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
Systems described herein may automatically and dynamically adjust the amount and type of computing resources usable to execute, process, or perform various tasks associated with a video game. Using one or more machine learning algorithms, a prediction model can be generated that uses the historical and/or current user interaction data obtained by monitoring the users playing the video game. Based on the historical and/or current user interaction data, future user interactions likely to be performed in the future can be predicted. Using the predictions of the users' future interactions, the amount and type of computing resources maintained in the systems can be adjusted such that a proper balance between reducing the consumption of computing resources and reducing the latency experienced by the users of the video game is achieved and maintained.
A63F 13/358 - Adapting the game course according to the network or server load, e.g. for reducing latency due to different connection speeds between clients
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
Embodiments of the systems and methods described herein provide a virtual object aging system. The virtual object aging system can utilize artificial intelligence to modify virtual objects within a video game to age and/or deteriorate for a certain time period. The virtual object aging system can be used to determine erosion, melting ice, and/or other environmental effects on virtual objects within the game. The virtual object aging system can apply aging, rust, weathering, and/or other effects that cause persistent change to object meshes and textures.
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06N 3/04 - Architecture, e.g. interconnection topology
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
Embodiments of the systems and methods described herein provide a three dimensional reconstruction system that can receive an image from a camera, and then utilize machine learning algorithms to identify objects in the image. The three dimensional reconstruction system can identify a geolocation of a user, identify features of the surrounding area, such as structures or geographic features, and reconstruct the scene including the identified features. The three dimensional reconstruction system can generate three dimensional object data for the features and/or objects, modify the three dimensional objects, arrange the objects in a scene, and render a two dimensional view of the scene.
Systems and methods for generating a customized virtual animal character are disclosed. A system may obtain video data or other media depicting a real animal, and then may provide the obtained media to one or more machine learning models configured to learn visual appearance and behavior information regarding the particular animal depicted in the video or other media. The system may then generate a custom visual appearance model and a custom behavior model corresponding to the real animal, which may subsequently be used to render, within a virtual environment of a video game, a virtual animal character that resembles the real animal in appearance and in-game behavior.
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
48.
VIRTUAL CHARACTER GENERATION FROM IMAGE OR VIDEO DATA
Systems and methods for generating a customized virtual character are disclosed. A system may obtain video data or other media depicting a real person, and then may provide the obtained media to one or more machine learning models configured to learn visual appearance and behavior information regarding the particular person depicted in the video or other media. The system may then generate a custom visual appearance model and a custom behavior model corresponding to the real person, which may subsequently be used to render, within a virtual environment of a video game, a virtual character that resembles the real person in appearance and in-game behavior.
A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
G06T 17/20 - Wire-frame description, e.g. polygonalisation or tessellation
G06N 3/04 - Architecture, e.g. interconnection topology
Systems and methods are disclosed for converting a player-controlled character or virtual entity in a video game to at least temporarily be under emulated control when certain criteria are met, such as when the player's device has lost its network connection to a game server. The character or virtual entity may continue to behave in the game in a manner that emulates or mimics play of the actual player until the end of the game session or until the underlying connection problem or other issue is resolved, such that other players participating in the game session have the same or similar gameplay experience as they would have had if the disconnected player had continued to play.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
The present disclosure provides a state stream game engine for a video game application. The state stream game engine can decouple the simulation of a video game application from the rendering of the video game application. The simulation of the video game is handled by a simulation engine. The rendering of the video game is handled by a presentation engine. The data generated by the simulation engine can be communicated to the presentation engine using a state stream.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
51.
OPTIMIZED TEST CASE SELECTION FOR QUALITY ASSURANCE TESTING OF VIDEO GAMES
A test case selection system and method uses a test selection model to select test cases from a library of test cases to be used for quality assurance (QA) testing of a software application to maximize the chances of finding bugs from executing the selected test cases. The test case selection model may be a machine learning based regression model trained using outcomes of previous QA testing. In some cases, the test case selection system may provide periodic and/or continuous refinement of the test case selection model from one QA testing run to the next. The model refinements may include updating weights associated with the test case selection model in the form of a regression model. Additionally, the test case selection system may provide performance analytics between a test case selection model-based selection of test cases and random selection of test cases.
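A weighted-regression selection of the kind described can be sketched as scoring each test case by a weighted sum of its features and keeping the top scorers. The feature names (`churn_overlap`, `past_bug_yield`, `runtime_cost`) and weight values are illustrative assumptions; the patent does not specify them.

```python
def select_test_cases(test_features, weights, k):
    """Score each test case with a linear regression model (weighted sum
    of its features) and select the top-k, approximating the model-based
    selection described in the abstract."""
    def score(features):
        return sum(weights.get(f, 0.0) * v for f, v in features.items())
    ranked = sorted(test_features, key=lambda t: score(test_features[t]),
                    reverse=True)
    return ranked[:k]

# Weights of this shape would be refined from one QA run to the next.
weights = {"churn_overlap": 2.0, "past_bug_yield": 1.5, "runtime_cost": -0.5}
tests = {
    "test_login":    {"churn_overlap": 0.9, "past_bug_yield": 0.7, "runtime_cost": 0.2},
    "test_renderer": {"churn_overlap": 0.1, "past_bug_yield": 0.2, "runtime_cost": 0.8},
    "test_payments": {"churn_overlap": 0.8, "past_bug_yield": 0.9, "runtime_cost": 0.2},
}
selected = select_test_cases(tests, weights, k=2)
```

Comparing the bug yield of this selection against a random selection of the same size gives the performance analytics the abstract mentions.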
Embodiments of the present application provide a network-based game modification system. The network-based game modification system can provide users with access to game application source data and editing tools through a game editor client. The game editor client can provide an interface through which developers can expose public source data while preventing access to private game source data. Additionally, the game editor client can provide an interface for access to some of the developers' tools.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
G06F 9/451 - Execution arrangements for user interfaces
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/537 - Controlling the output signals based on the game progress involving additional visual information provided to the game scene, e.g. by overlay to simulate a head-up display [HUD] or displaying a laser sight in a shooting game using indicators, e.g. showing the condition of a game character on screen
Embodiments of the systems and methods disclosed herein provide a sponsor matching system in which players and sponsors can be matched. Upon a match based at least in part on stored sponsorship criteria and/or player preferences, a first sponsor can select a set of players to receive permission to select an advertisement associated with the first sponsor. Once a first player of the selected players selects an advertisement and an advertisement placement location associated with the first sponsor, the sponsor matching system can generate game rendering instructions for a first player system associated with the first player.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
A63F 13/61 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor using advertising information
54.
LOW LATENCY CACHE SYNCHRONIZATION IN DISTRIBUTED DATABASES
An example distributed database includes a first instance and a second instance. The first instance is configured to: responsive to performing, within a scope of a database update transaction, a first database update operation, invalidate a cache entry residing in the first database cache maintained by the first instance, wherein the first database update operation is reflected by a transaction log maintained by the first instance; perform, within the scope of the database update transaction, a second database update operation to insert an identifier of the cache entry into a predetermined table of the distributed database, wherein the second database update operation is reflected by the transaction log; and responsive to committing the database update transaction, transmit the transaction log to the second instance. The second instance is configured, responsive to receiving the transaction log, to: perform the first database update operation specified by the transaction log; and invalidate the cache entry in the second database cache maintained by the second instance.
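The two-instance scheme can be sketched in miniature: an update invalidates the local cache and is appended to a transaction log, and shipping that log lets the replica apply the same update and the same invalidation. Everything here is a simplified stand-in — the "predetermined table" is a plain list, and no real database or replication protocol is involved.

```python
class DatabaseInstance:
    """Minimal sketch of one instance of the distributed database."""
    def __init__(self):
        self.rows, self.cache, self.log = {}, {}, []
        self.invalidated_entries = []      # stands in for the predetermined table

    def update(self, key, value):
        self.rows[key] = value             # first database update operation
        self.cache.pop(key, None)          # invalidate the local cache entry
        self.log.append(("update", key, value))
        self.invalidated_entries.append(key)   # second database update operation
        self.log.append(("invalidate", key))

    def apply_log(self, log):
        """Replay a received transaction log: apply updates and
        invalidate the corresponding entries in this instance's cache."""
        for entry in log:
            if entry[0] == "update":
                _, key, value = entry
                self.rows[key] = value
            else:
                self.cache.pop(entry[1], None)

primary, replica = DatabaseInstance(), DatabaseInstance()
replica.cache["player:1"] = "stale score"
primary.update("player:1", "score=42")
replica.apply_log(primary.log)             # commit ships the log to the replica
```

Carrying the invalidation in the same log as the data update is what keeps the replica's cache from serving stale entries between replication and a separate invalidation message.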
G06F 16/17 - File systems; File servers - Details of further file system functions
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
The present disclosure provides a system and method for updating a game application during runtime of a game application. The game application is executed on a client computing device using application code that includes a function store. During runtime of the game application, a function asset is received and stored in the function store. The function asset includes either precompiled code or code written in a scripting language and includes a version identifier. To execute a particular game function of the game application, the function asset is identified from other function assets in the function store based at least in part on its version identifier, and the game function is executed using the function asset.
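The version-based lookup in the function store can be sketched as follows. Function assets are modeled as plain callables standing in for precompiled or scripted code, and all names are illustrative.

```python
def execute_game_function(function_store, name, version):
    """Sketch of runtime function selection: the function store holds
    multiple assets per game function, each tagged with a version
    identifier; the matching asset is looked up and executed."""
    for asset in function_store:
        if asset["name"] == name and asset["version"] == version:
            return asset["code"]()
    raise KeyError(f"no asset for {name} v{version}")

store = [
    {"name": "damage_calc", "version": "1.0", "code": lambda: 10},
    # A newer asset received and stored during runtime, no restart needed:
    {"name": "damage_calc", "version": "1.1", "code": lambda: 12},
]
result = execute_game_function(store, "damage_calc", "1.1")
```

Because selection happens at call time by version identifier, a newly received function asset takes effect without relaunching the game application, which is the point of the runtime-update scheme.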
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
G06F 8/71 - Version control ; Configuration management
Embodiments presented herein use an audio-based authentication system for pairing a user account with an audio-based periphery computing system. The audio-based authentication system allows a user to interface with the periphery device through a user computing device. The user can utilize a previously authenticated user account on the user computing device in order to facilitate the pairing of the audio-based periphery computing system with the user account.
A63F 13/25 - Output arrangements for video game devices
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
Some embodiments of the present disclosure include a video and audio sentiment analysis system. The video and audio sentiment analysis system can capture video and audio of workflows while a game developer is working in a game development tool. The video and audio sentiment analysis system can use speech-to-text transcription to log requests and suggest help for a game developer. The video and audio sentiment analysis system can capture recordings for a time period before an error occurs to provide the support team with a recording of the steps that led to the concern. The video and audio sentiment analysis system can package the video stream, transcription of audio, and user interface recordings for the development team such that the support team can replay the scenario of the user to get a full picture of the user's actions and concerns.
Embodiments of the present application provide a phased streaming system and process using a dynamic video game client. The dynamic video game client can utilize a state stream game engine in combination with a game application streaming service to provide users with the ability to begin playing games quickly on a wide range of devices.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
59.
Personalized real-time audio generation based on user physiological response
Systems and methods are provided for personalized real-time audio generation based on user physiological response. An example method includes obtaining hearing information associated with a user's hearing capabilities, the hearing information indicating one or more constraints on the user's hearing, and the hearing information being determined based on one or more hearing tests performed by the user; requesting listening history information from one or more music streaming platforms, the listening history information indicating, at least, music playlists that the user created or to which the user is subscribed; determining, based on the listening history information, a style preference associated with the user; generating, utilizing one or more machine learning models, personalized music based on the hearing information and the style preference, wherein the personalized music comports with the constraints, and wherein the system is configured to provide the personalized music for output via a user device of the user.
G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
H04H 60/46 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for recognising users' preferences
A computer-implemented method for generating a machine-learned model to generate facial position data based on audio data comprises training a conditional variational autoencoder having an encoder and decoder. The training comprises receiving a set of training data items, each training data item comprising a facial position descriptor and an audio descriptor; processing one or more of the training data items using the encoder to obtain distribution parameters; sampling a latent vector from a latent space distribution based on the distribution parameters; processing the latent vector and the audio descriptor using the decoder to obtain a facial position output; calculating a loss value based at least in part on a comparison of the facial position output and the facial position descriptor of at least one of the one or more training data items; and updating parameters of the conditional variational autoencoder based at least in part on the calculated loss value.
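The encode/sample/decode/loss sequence above can be sketched with toy linear encoder and decoder weights. This is a numerical illustration only: the dimensions are arbitrary, the real model would use deep networks, and the parameter-update step (which would use automatic differentiation) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "networks": facial descriptor has 8 dims, latent 2, audio 3.
W_enc = rng.normal(size=(8, 4))      # face -> (mu, logvar), 2 dims each
W_dec = rng.normal(size=(2 + 3, 8))  # (latent, audio descriptor) -> face

def training_step(face, audio):
    """One CVAE step from the abstract: encode to distribution
    parameters, sample a latent via the reparameterization trick, decode
    conditioned on the audio descriptor, and compute the loss as
    reconstruction error plus KL divergence."""
    params = face @ W_enc
    mu, logvar = params[:2], params[2:]
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=2)   # reparameterize
    output = np.concatenate([z, audio]) @ W_dec          # facial position output
    recon = np.mean((output - face) ** 2)                # compare with descriptor
    kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

loss = training_step(face=rng.normal(size=8), audio=rng.normal(size=3))
```

At inference time only the decoder is needed: sampling a latent and conditioning on a new audio descriptor yields facial position data for speech animation.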
A video game test system can determine an objective measure of elapsed time between interaction with a video game controller and the occurrence of a particular event within the video game. This objective measure enables a tester to determine whether a video game is objectively operating slowly or just feels slow to the tester, and may indicate the existence of coding errors that may affect execution speed, but not cause visible errors. The system may obtain the objective measure of elapsed time by simulating a user's interaction with the video game. Further, the system may identify data embedded into a frame of an animation by the video game source code to identify the occurrence of a corresponding event. The system can then measure the elapsed time between the simulated user interaction and the occurrence or triggering of the corresponding event.
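The measurement described above can be sketched as scanning captured frames for the marker the game code embeds when the event fires. The frame structure (a timestamp plus a set of embedded markers) and all names are illustrative assumptions.

```python
def measure_input_to_event_latency(frames, event_marker, input_time):
    """Sketch of the test-system measurement: find the first frame whose
    embedded metadata contains the event marker, and return the elapsed
    time since the simulated controller input."""
    for frame in frames:
        if event_marker in frame["embedded"]:
            return frame["timestamp"] - input_time
    return None  # event never observed: possible hang or dropped input

input_time = 100.000                     # simulated button press (seconds)
frames = [
    {"timestamp": 100.016, "embedded": set()},
    {"timestamp": 100.033, "embedded": set()},
    {"timestamp": 100.050, "embedded": {"door_opened"}},
]
latency = measure_input_to_event_latency(frames, "door_opened", input_time)
```

Because the marker is written by the game code itself rather than inferred from pixels, the measure stays objective even when the event produces no visible change.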
A63F 13/31 - Communication aspects specific to video games, e.g. between several handheld game devices at close range
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
62.
TOXICITY-BASED CHAT PERSONALIZATION WITHIN VIDEO GAMES
Using user-specific prediction models, it is possible to present an individualized view of messages generated by users playing a shared instance of a video game. Further, users with different subjective views of what is offensive may be presented with different forms or annotations of a message. By personalizing the views of messages generated by users, it is possible to reduce or eliminate the toxic environment that sometimes forms when players, who may be strangers to each other and may be located in disparate locations, play a shared instance of a video game. Further, the user-specific prediction models may be adapted to filter or otherwise annotate other undesirable messages that may not be offensive, such as a message generated by one user in a video game that includes a solution to an in-game puzzle that another user may not desire to read as it may spoil the challenge for the user.
An application test system can determine an objective measure of elapsed time between interaction with a user interface device and the occurrence of a particular event within the application, such as a video game. This objective measure enables a tester to determine whether an application is objectively operating slowly or just feels slow to the tester, and may indicate the existence of coding errors that may affect execution speed, but not cause visible errors. The system may obtain the objective measure of elapsed time by simulating a user's interaction with the application. Further, the system may identify data embedded into a frame of an animation by the application source code to identify the occurrence of a corresponding event. The system can then measure the elapsed time between the simulated user interaction and the occurrence or triggering of the corresponding event.
G06F 11/36 - Preventing errors by testing or debugging of software
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
64.
Computer architecture for asset management and delivery
Methods for reducing network bandwidth usage by managing data file assets in a bundle of data file assets requested for download are provided. In one aspect, a method includes receiving a manifest associated with a bundle including one or more assets. The manifest includes information regarding the one or more assets. The method includes comparing the information with locally stored assets. The method includes determining, based on the comparison, at least one asset to be requested. The method includes sending a request for the at least one asset. The method includes receiving, in response to the request, the at least one asset. Systems and machine-readable media are also provided.
The present disclosure provides embodiments of a particle-based inverse kinematic analysis system. The inverse kinematic system can utilize a neural network, also referred to as a deep neural network, which utilizes machine learning processes in order to create poses that are more life-like and realistic. The system can generate prediction models using motion capture data. The motion capture data can be aggregated and analyzed in order to train the neural network. The neural network can determine rules and constraints that govern how joints and connectors of a character model move in order to create realistic motion of the character model within the game application.
Approaches for secondary-game-mode sessions based on primary-game-mode arrangements of user-controlled elements are provided. Actions by user-controlled elements of a first user or other game-space elements in a primary game mode of a game space may be managed. A session request for a session in a secondary game mode of the game space may be received from the first user. A first session for the first user may be executed in the secondary game mode such that: (i) the first session involves artificial-intelligence-controlled elements as opponents against the user-controlled elements; (ii) an arrangement of the user-controlled elements at a beginning of the first session is the same as an arrangement of the user-controlled elements in the primary game mode at a time of the session request; and (iii) impacts on the user-controlled elements during the first session in the secondary game mode are not reflected in the primary game mode.
A63F 13/85 - Providing additional services to players
A63F 13/35 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers
A63F 13/56 - Computing the motion of game characters with respect to other game characters, game objects or elements of the game scene, e.g. for simulating the behaviour of a group of virtual soldiers or for path finding
A63F 13/335 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using wide area network [WAN] connections using Internet
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
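The session semantics described above — start from a copy of the primary-mode arrangement, pit it against AI opponents, and keep impacts out of the primary mode — can be sketched as follows. This is a minimal illustration; the function and field names are ours, not the patent's.

```python
import copy

def start_secondary_session(primary_state):
    """Start a secondary-mode session whose element arrangement matches
    the primary mode at request time. A deep copy ensures that impacts
    during the session never flow back to the primary mode."""
    return {"elements": copy.deepcopy(primary_state["elements"]),
            "opponents": "ai_controlled"}

primary = {"elements": [{"id": "knight", "hp": 100}, {"id": "archer", "hp": 80}]}
session = start_secondary_session(primary)

# (ii) Arrangement at session start matches the primary mode...
assert session["elements"] == primary["elements"]

# (iii) ...but damage taken in the secondary session is not reflected back.
session["elements"][0]["hp"] -= 40
assert primary["elements"][0]["hp"] == 100
```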
Various aspects of the subject technology relate to systems, methods, and machine-readable media for controlling movement in a video game. The method includes rotating a camera angle in a virtual world to change a viewpoint of a character. The method also includes populating a movement control interface with selections corresponding to points of interest in the virtual world in response to rotating the camera angle, the selections changing based on the viewpoint. The method also includes selecting a point of interest from the movement control interface by toggling a selection corresponding to the point of interest. The method also includes moving the character to the point of interest in the virtual world.
A63F 13/525 - Changing parameters of virtual cameras
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
A63F 13/23 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
68.
Systems and methods for separable foreground and background rendering
Various aspects of the subject technology relate to systems, methods, and machine-readable media for streaming a game. The method includes receiving a background stream rendered on a server and including a server time stamp, the background stream rendered at a first resolution and displayed at a second resolution, the first resolution larger than the second resolution. The method also includes receiving a foreground stream rendered on the server. The method also includes receiving control input from a player at a current time, the control input controlling a camera angle of the game. The method also includes determining a difference in the camera angle intended by the player between the current time and the server time stamp. The method also includes adjusting a display output based on the difference by shifting a focal point of the background stream according to the control input by utilizing extra rendered pixels.
H04N 13/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details thereof
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
Embodiments of systems presented herein may identify users to include in a match plan. A parameter model may be generated to predict the retention time of a set of users. From a queue of waiting users, a set of potential teammates and/or opponents may be selected. User information for the set of teammates and/or opponents may be provided to the parameter model to generate a predicted retention time. The set of teammates and/or opponents may be approved if the predicted retention time meets a predetermined threshold. Advantageously, by creating a match plan based on retention rates, the engagement and/or retention level for a number of users may be improved compared to existing multiplayer matching systems.
A63F 13/73 - Authorising game programs or game devices, e.g. checking authenticity
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
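The approval step above — feed the candidate set to the parameter model and accept only if predicted retention meets a threshold — can be sketched as below. The stand-in model and all names are illustrative assumptions, not the trained model the abstract describes.

```python
def approve_match(candidates, predict_retention, threshold_minutes=30.0):
    """Approve a proposed set of teammates/opponents only if the
    parameter model's predicted retention time meets the threshold.
    `predict_retention` stands in for the trained parameter model."""
    return predict_retention(candidates) >= threshold_minutes

# Toy stand-in model: predicted retention shrinks as the skill spread grows.
def toy_model(users):
    skills = [u["skill"] for u in users]
    return 60.0 - (max(skills) - min(skills))

balanced = [{"skill": 50}, {"skill": 55}, {"skill": 48}]
lopsided = [{"skill": 10}, {"skill": 90}]
assert approve_match(balanced, toy_model)      # spread 7  -> 53 min, approved
assert not approve_match(lopsided, toy_model)  # spread 80 -> rejected
```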
70.
Collision Detection and Resolution in Virtual Environments
A non-transitory computer readable storage medium storing computer program code that, when executed by a processing device, causes the processing device to perform operations comprising: determining a first representative point, wherein the first representative point represents a first geometric primitive; determining a second representative point, wherein the second representative point represents a second geometric primitive; determining an initial distance between the first representative point and the second representative point; calculating a first displacement based on a velocity of the first representative point; calculating a second displacement based on a velocity of the second representative point; determining a separating direction between the first representative point and the second representative point; projecting the first displacement along the separating direction; projecting the second displacement along the separating direction; calculating a predicted minimum distance between the first representative point and the second representative point based on the projection of the first displacement along the separating direction, the projection of the second displacement along the separating direction and the initial distance between the first representative point and the second representative point; and in response to the predicted minimum distance being less than a threshold distance, generating a collision constraint preventing penetration between the first geometric primitive and the second geometric primitive.
A63F 13/577 - Simulating properties, behaviour or motion of objects in the game world, e.g. computing tyre load in a car race game using determination of contact between game characters or objects, e.g. to avoid collision between virtual racing cars
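The predicted-minimum-distance test in the claim above can be sketched as follows: project each displacement onto the unit separating direction and combine the projections with the initial distance. This is a minimal illustration under our own conventions (a time step `dt`, points as coordinate lists); it is not the patent's implementation.

```python
import math

def predicted_min_distance(p1, v1, p2, v2, dt=1.0):
    """Predict the distance at the end of the step along the separating
    direction: project each displacement (velocity * dt) onto the unit
    vector from p1 to p2 and combine with the initial distance."""
    sep = [b - a for a, b in zip(p1, p2)]
    dist = math.sqrt(sum(c * c for c in sep))
    unit = [c / dist for c in sep]
    d1 = sum(u * c * dt for u, c in zip(unit, v1))  # p1's motion along sep
    d2 = sum(u * c * dt for u, c in zip(unit, v2))  # p2's motion along sep
    return dist + d2 - d1

# Two points 10 apart, closing at a combined 6 units per step:
d = predicted_min_distance([0, 0, 0], [4, 0, 0], [10, 0, 0], [-2, 0, 0])
assert d == 4.0
# Predicted distance below the threshold -> generate a collision constraint.
assert d < 5.0
```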
71.
Systems and methods for ray-traced shadows of transparent objects
Rendering shadows of transparent objects using ray tracing in real-time is disclosed. For each pixel in an image, a ray is launched towards the light source. If the ray intersects a transparent object, lighting information (e.g., color, brightness) is accumulated for the pixel. A new ray is launched from the point of intersection, either towards the light source or in a direction based on reflection/refraction from the surface. Ray tracing continues recursively, accumulating lighting information at each transparent object intersection. Ray tracing terminates when a ray intersects an opaque object, indicating a dark shadow. Ray tracing also terminates when a ray exits the scene without intersecting an object, where the accumulated lighting information is used to render a shadow for the pixel location. Soft shadows can be rendered using the disclosed technique by launching a plurality of rays in different directions based on a size of the light source.
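The termination and accumulation rules above can be sketched with a toy model in which a shadow ray is reduced to the ordered list of objects it hits. The `(kind, transmittance)` representation is our illustrative assumption, not the patent's data model.

```python
def trace_shadow(objects_along_ray):
    """Accumulate lighting along a shadow ray per the described rules:
    a transparent hit attenuates the light and continues, an opaque hit
    terminates with a dark shadow, and exiting the scene keeps the
    accumulated lighting for the pixel."""
    light = 1.0
    for kind, transmittance in objects_along_ray:
        if kind == "opaque":
            return 0.0           # dark shadow: ray never reaches the light
        light *= transmittance   # accumulate through the transparent hit
    return light                 # ray exited the scene toward the light

assert trace_shadow([]) == 1.0                      # fully lit pixel
assert trace_shadow([("transparent", 0.5)]) == 0.5  # tinted shadow
assert trace_shadow([("transparent", 0.5), ("opaque", 0.0)]) == 0.0
```

Soft shadows, as the abstract notes, would average the result of many such rays launched in directions sampled over the light source's extent.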
Disclosed is a hybrid approach to rendering transparent or translucent objects, which combines object-space ray tracing with texture-space parametrization and integration. Transparent or translucent objects are first parameterized using two textures: (1) a texture that stores the surface normal at each location on the transparent or translucent object, and (2) a texture that stores the world space coordinates at each location on the transparent or translucent object. Ray tracing can then be used to streamline and unify the computation of light transport inside thick mediums, such as transparent or translucent objects, with the rest of the scene. For each valid (e.g., visible) location on the surface of a transparent or translucent object, the disclosed embodiments trace one or more rays through such objects and compute the resulting lighting in an order-independent fashion. The results are stored in a texture, which is then applied during the final lighting stage.
Responders and requesters can be assigned a score while waiting. Requesters and responders are matched via an auction type system, where requesters with the best requester scores are matched with responders with the best responder scores. The requester scores can be boosted while the requester is waiting for a response if the requester performs certain actions. The requester score and responder score can increase over time. The requester score and responder score can also be based on other variables.
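The auction-style pairing described above can be sketched by sorting both sides by score and zipping them together, best with best. The record shapes and names are illustrative, not from the abstract.

```python
def match_by_score(requesters, responders):
    """Auction-style pairing: the best-scored requester is matched with
    the best-scored responder, and so on down both sorted lists."""
    rq = sorted(requesters, key=lambda x: x["score"], reverse=True)
    rs = sorted(responders, key=lambda x: x["score"], reverse=True)
    return [(a["name"], b["name"]) for a, b in zip(rq, rs)]

requesters = [{"name": "r1", "score": 40}, {"name": "r2", "score": 90}]
responders = [{"name": "s1", "score": 70}, {"name": "s2", "score": 20}]
assert match_by_score(requesters, responders) == [("r2", "s1"), ("r1", "s2")]
```

The score boosts mentioned in the abstract would simply raise a waiting requester's `score` before the next matching pass.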
A video compression system and method may be used to compress video data using both resolution compression and texture compression. The compression may involve converting the video format from a first format to a second format and then performing resolution compression across blocks of pixels within each frame of the video. The resolution compressed data may then be arranged as data triplets spanning three consecutive frames of the video. The data triplets may be texture compressed using ETC or other texture compression techniques. The compressed video may be part of other applications, such as a video to be played within a video game. A client device may be able to decompress the compressed video by reversing the texture compression, reversing the resolution compression, and performing a format conversion to generate uncompressed video data that can be used to play the video.
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/156 - Availability of hardware or computational resources, e.g. encoding based on power-saving criteria
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
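The triplet arrangement described above — resolution-compressed data regrouped across three consecutive frames before texture compression — can be sketched as below. This shows only the grouping step; the ETC texture-compression stage itself is out of scope, and the frame representation is our assumption.

```python
def pack_triplets(frames):
    """Arrange resolution-compressed frame data as triplets spanning three
    consecutive frames, ready for a texture compressor such as ETC."""
    triplets = []
    for i in range(0, len(frames) - 2, 3):
        # One value from each of three consecutive frames per position.
        triplets.append(list(zip(frames[i], frames[i + 1], frames[i + 2])))
    return triplets

frames = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10], [11, 12]]
t = pack_triplets(frames)
assert t[0] == [(1, 3, 5), (2, 4, 6)]
assert t[1] == [(7, 9, 11), (8, 10, 12)]
```

Decompression, as the abstract notes, reverses each stage in turn: texture decompression, triplet unpacking, resolution upscaling, then format conversion.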
Systems and methods are disclosed herein for using machine learning to automatically modify unstructured scripts with speech tags for a context in which the speech is to be spoken so that the speech can be synthesized to sound more realistic and more contextually appropriate. The systems and methods can be dynamically applied. Training context tags and corresponding structured training scripts are used to train the machine learning system to generate an AI model. The AI model can be used in different ways, and a feedback system is described.
G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
Methods for matching online users in a networked interactive entertainment simulation are provided. In one aspect, a method includes receiving a user request for a user for joining an online session of the simulation. The user request is associated with a set of criteria for matching the user with other online users. An available population of users and a moving average of elapsed time to match for other users are determined. The set of criteria is adjusted based on the available population and the moving average of elapsed time. Finding other online users matching the adjusted set of criteria is initiated. The online session is started based on found online users. Systems and machine-readable media are also provided.
A63F 13/795 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for providing a buddy list
G06Q 10/06 - Resources, workflows, human or project management, e.g. organising, planning, scheduling or allocating time, human or machine resources; Enterprise planning; Organisational models
G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
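The criteria-adjustment step above can be sketched as widening the match window when the available population is small or the moving average of time-to-match runs long. The thresholds and criterion fields are illustrative assumptions, not values from the abstract.

```python
def adjust_criteria(criteria, population, avg_wait, target_wait=30.0, min_pop=100):
    """Widen the matching criteria when the available population is small
    or the moving average of elapsed time to match exceeds the target."""
    adjusted = dict(criteria)
    if population < min_pop or avg_wait > target_wait:
        adjusted["skill_window"] = criteria["skill_window"] * 2
        adjusted["max_ping_ms"] = criteria["max_ping_ms"] + 50
    return adjusted

strict = {"skill_window": 100, "max_ping_ms": 80}
# Healthy population and short waits: criteria unchanged.
assert adjust_criteria(strict, population=500, avg_wait=12.0) == strict
# Sparse population: relax the window so a session can still start.
relaxed = adjust_criteria(strict, population=40, avg_wait=12.0)
assert relaxed == {"skill_window": 200, "max_ping_ms": 130}
```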
Embodiments of a system and method for dynamically selecting a communication technology based at least in part on the success in forming a peer-to-peer connection for playing an instance of a video game are disclosed. Further, the systems may dynamically select a communication technology based on the quality of service of an established communication connection between two or more computing systems corresponding to two or more users attempting to play the instance of the video game. In some embodiments, the identification of a communication technology may occur during a gaming session and the communication technology used at the start of the game play session may be transitioned to another communication technology enabling the maintenance of a level of quality of service.
A63F 13/34 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers using peer-to-peer connections
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
Embodiments of systems and methods described herein disclose the use of megatextures to specify blend maps for different instances of an object within a game environment. Each blend map may specify a blending between two or more different versions of the object. In some embodiments, the two or more different versions may correspond to different visual appearances associated with the object (for example, an undamaged object and a damaged object). The blend map for an instance of the object may be dynamically updated based on one or more actions within the game environment, allowing for the visual appearance of the object instance to change within the game environment in response to various actions.
A63F 13/20 - Input arrangements for video game devices
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
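The per-instance blend map described above can be sketched as a weight grid that in-game actions update, with each texel interpolated between two versions of the object. The texel format and damage model are our illustrative assumptions.

```python
def blend_texel(undamaged, damaged, weight):
    """Per-texel blend between two versions of an object (e.g. undamaged
    vs damaged), driven by a blend-map weight in [0, 1]."""
    return tuple(round(u * (1 - weight) + d * weight)
                 for u, d in zip(undamaged, damaged))

# A blend map for one object instance, updated by an in-game action:
blend_map = [[0.0, 0.0], [0.0, 0.0]]

def apply_damage(bm, x, y, amount):
    bm[y][x] = min(1.0, bm[y][x] + amount)

apply_damage(blend_map, 1, 0, 0.5)
clean, charred = (200, 180, 160), (40, 30, 20)
assert blend_texel(clean, charred, blend_map[0][0]) == clean           # untouched
assert blend_texel(clean, charred, blend_map[0][1]) == (120, 105, 90)  # half damaged
```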
A computerized method operable on a computer system for compositing data streams to generate a playable composite stream includes receiving a plurality of independent data streams that are included in a broadcast stream. The independent data streams include a video stream and a metadata stream. The metadata stream includes a plurality of user selectable graphics metadata for a plurality of graphics options. The computerized method further includes receiving a user selection for at least one of the graphics options; and compositing the at least one graphics option with the video stream to generate a composite video stream, which includes the at least one graphics option and the video stream.
A63F 13/86 - Watching games played by other players
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/8545 - Content authoring for generating interactive applications
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
H04N 21/8549 - Creating video summaries, e.g. movie trailer
A63F 13/23 - Input arrangements for video game devices for interfacing with the game device, e.g. specific interfaces between game controller and console
A63F 13/26 - Output arrangements for video game devices having at least one additional display device, e.g. on the game controller or outside a game booth
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/54 - Controlling the output signals based on the game progress involving acoustic signals, e.g. for simulating revolutions per minute [RPM] dependent engine sounds in a driving game or reverberation against a virtual wall
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
Various aspects of the subject technology relate to systems, methods, and machine-readable media for automated real-time engagement in an interactive environment. The disclosed system provides for producing a series of game challenges that replicate scenarios of a real-life event, and soliciting users to engage the series of game challenges to win an in-game reward. In some aspects, an extraction server engine obtains a live feed from a service provider of the real-life event and converts, on a periodic basis, the live feed into a parsed feed with a predetermined number of events. The extraction server engine then feeds its output to a trained neural network, which then selects a subset of the predetermined number of events. The trained neural network feeds the selections to a game server engine, which then feeds the selected events as in-game event challenges to a game client for presentation to an end user.
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A63F 13/816 - Athletics, e.g. track-and-field sports
Systems and methods for enhanced training of machine learning systems based on automatically generated visually realistic gameplay. An example method includes obtaining electronic game data that includes rendered images and associated annotation information, the annotation information identifying features included in the rendered images to be learned, and the electronic game data being generated by a video game associated with a particular sport. Machine learning models are trained based on the obtained electronic game data, with training including causing the machine learning models to output annotation information based on associated input of a rendered image. Real-world gameplay data is obtained, with the real-world gameplay data being images of real-world gameplay of the particular sport. The obtained real-world gameplay data is analyzed based on the trained machine learning models. Analyzing includes extracting features from the real-world gameplay data using the machine learning models.
A63F 13/55 - Controlling game characters or game objects based on the game progress
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
An example method of live migration of distributed databases may include implementing a first database access mode with respect to a distributed database to be migrated from an original set of storage servers to a destination set of storage servers, wherein, in the first database access mode, database read requests are routed to the original set of storage servers and database update requests are routed to both the original set of storage servers and the destination set of storage servers. The method may further include copying a plurality of records of the distributed database from the original set of storage servers to the destination set of storage servers. The method may further include switching to a second database access mode, in which database read requests are routed to the destination set of storage servers and database update requests are routed to both the original set of storage servers and the destination set of storage servers. The method may further include switching to a post-migration database access mode, in which database read and update requests are routed to the destination set of storage servers.
G06F 16/27 - Replication, distribution or synchronisation of data between databases or within a distributed database system; Distributed database system architectures therefor
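The three access modes described above reduce to a routing rule: reads follow the migration phase while updates are dual-written until migration completes. A minimal sketch, with mode labels of our own choosing:

```python
def route(request, mode):
    """Route a database request per the migration phase: mode 1 reads from
    the original set and dual-writes; mode 2 reads from the destination and
    still dual-writes; post-migration uses only the destination set."""
    if request == "read":
        return ["original"] if mode == 1 else ["destination"]
    # Update requests:
    if mode in (1, 2):
        return ["original", "destination"]   # dual-write during migration
    return ["destination"]                   # post-migration

assert route("read", 1) == ["original"]
assert route("update", 1) == ["original", "destination"]
assert route("read", 2) == ["destination"]
assert route("update", 2) == ["original", "destination"]
assert route("read", "post") == ["destination"]
assert route("update", "post") == ["destination"]
```

Dual-writing through both phases is what lets the record copy proceed live: any record updated after it was copied is re-applied on the destination as well.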
84.
SYSTEM AND METHOD FOR ALTERING PERCEPTION OF VIRTUAL CONTENT IN A VIRTUAL SPACE
The disclosure relates to systems and methods for altering perception of virtual or game content in a virtual space based on one or more attribute levels. The perception of some virtual or game content may not be altered. Thus, the depiction of some content is altered while other content is not. A system may alter the depiction of game content based on attributes of an entity and/or based on which entity is to perceive the game content. The different depictions of game content may be provided to the same entity at different times and/or different perceptions of game content may be provided to different entities. Thus, a rich interface may be provided that differentially depicts game content based on attribute levels and/or the entity that is to perceive the game content.
Systems presented herein may automatically and dynamically modify a video game being played by a user based at least in part on a determined or predicted emotional state of a user. Using one or more machine learning algorithms, a parameter function can be generated that uses sensory and/or biometric data obtained by monitoring a user playing a video game. Based on the sensory and/or biometric data, an emotional state of the user can be predicted. For example, it can be determined whether a user is likely to be bored, happy, or frightened while playing the video game. Based at least in part on the determination of the user's emotional state, the video game can be modified to improve positive feelings and reduce negative feelings occurring in response to the video game.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G10L 15/18 - Speech classification or search using natural language modelling
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
An identity authenticator receives a first authentication credential from a first application at a first computing device. The identity authenticator then determines that the first authentication credential is associated with a second authentication credential for the first application at a second computing device based on a stored authentication identity. The identity authenticator then provides a stored execution state for the first application to the first computing device, wherein the stored execution state is associated, based on the stored authentication identity, with at least one of the first authentication credential or the second authentication credential.
An example method of photometric image processing may comprise: receiving a plurality of images of a three-dimensional object, wherein the plurality of images has been acquired by a plurality of cameras using a plurality of illumination and polarization patterns; performing color calibration of the plurality of images to produce a plurality of color-calibrated images; generating, using the plurality of color-calibrated images, a polygonal mesh simulating geometry of the three-dimensional object; producing a plurality of partial UV maps by projecting the plurality of color-calibrated images onto the polygonal mesh; generating a plurality of masks, wherein each mask of the plurality of masks is associated with a camera of the plurality of cameras, wherein the mask defines a UV space region that is covered by a field of view of the camera; blending, using the plurality of masks, the plurality of partial UV maps; and generating one or more texture maps representing the three-dimensional object.
An example method of multi-character animation comprises: identifying a single scene origin in an animation scene of an interactive video game; aligning, to the single scene origin, each game character of a plurality of game characters associated with the animation scene; generating, with respect to the single scene origin, a respective animation for each game character of the plurality of game characters; and causing each game character of the plurality of game characters to be displayed, using the respective animation, in the interactive video game.
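The alignment step above — expressing every character relative to a single scene origin — can be sketched as a coordinate translation. The data shapes are our illustrative assumptions.

```python
def align_to_scene_origin(characters, scene_origin):
    """Express every character's position relative to the single scene
    origin, so per-character animations can be generated in one shared
    frame of reference."""
    ox, oy, oz = scene_origin
    return {name: (x - ox, y - oy, z - oz)
            for name, (x, y, z) in characters.items()}

chars = {"hero": (12.0, 0.0, 5.0), "guard": (10.0, 0.0, 4.0)}
aligned = align_to_scene_origin(chars, scene_origin=(10.0, 0.0, 4.0))
assert aligned == {"hero": (2.0, 0.0, 1.0), "guard": (0.0, 0.0, 0.0)}
```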
Systems and methods are provided for enhanced real-time audio generation via a virtualized orchestra. An example method includes receiving, from a user device, a request to generate output associated with a musical score. Actions associated with virtual musicians with respect to respective instruments are simulated based on one or more machine learning models, with the simulated actions being associated with a virtual musician and indicative of an expected playing style during performance of the musical score. Output audio to be provided to the user device is generated, with the output audio being generated based on the simulated actions.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
A method, system and computer program code for analyzing a video stream, the method comprising: receiving a sequence of communication packets associated with a frame and an indication to a frame number; retrieving slices associated with the frame from the sequence of communication packets until a missing or corrupted slice, or an end of the frame is encountered; subject to no missing or corrupted slice encountered, providing the slices associated with the frame to a handling unit; and subject to a missing or corrupted slice encountered: skipping data from a beginning of the missing or corrupted slice; and resuming retrieving the slices subject to the end of the frame not being encountered.
H04N 19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
H04N 19/65 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using error resilience
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 19/174 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a slice, e.g. a line of blocks or a group of blocks
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
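The retrieval flow above — deliver a frame's slices to the handler only when none is missing or corrupted, otherwise skip from the bad slice onward — can be sketched as below. The `(status, data)` slice representation stands in for the packet parser and is our assumption.

```python
def process_frame(slices):
    """Retrieve slices for one frame: hand the complete frame to the
    handler only if no slice is missing or corrupted; otherwise skip
    the remainder of this frame."""
    collected = []
    for status, data in slices:
        if status != "ok":            # missing or corrupted slice
            return None               # skip from here to end of frame
        collected.append(data)
    return collected                  # end of frame: hand off to handler

assert process_frame([("ok", b"a"), ("ok", b"b")]) == [b"a", b"b"]
assert process_frame([("ok", b"a"), ("corrupt", None), ("ok", b"c")]) is None
```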
Financial sponsorship of charitable organizations; Charitable services, namely arranging charitable fundraising and community service activities; all of the aforesaid relating to the promotion of physical and mental health, safety, and fair play in the video gaming community. Electronic publishing services, namely, publication of guides, newsletters, and blogs; Entertainment services, namely, arranging, conducting, and organizing events, programs, contests, and competitions in the field of electronic computer games; all of the aforesaid relating to the promotion of physical and mental health, safety, and fair play in the video gaming community.
Providing information promoting awareness of physical and mental health, safety, and fair play in the video gaming community. Financial sponsorship of charitable organizations; Charitable services, namely, arranging charitable fundraising and community service activities; all of the aforesaid relating to the promotion of physical and mental health, safety, and fair play in the video gaming community. Electronic publishing services, namely, publication of guides, newsletters, and blogs; Entertainment services, namely, arranging, conducting, and organizing events, programs, contests, and competitions in the field of electronic computer games; all of the aforesaid relating to the promotion of physical and mental health, safety, and fair play in the video gaming community.
Entertainment services, namely, providing temporary use of non-downloadable game software (Terms too vague in the opinion of the International Bureau – Rule 13 (2) (b) of the Common Regulations); provision of information relating to electronic computer games provided via the Internet.
Systems and methods are provided for enhancements for musical composition applications. An example method includes receiving information identifying initiation of a music composition application, the music composition application being executed via a user device of a user, with the received information indicating a genre associated with a musical score being created via the music composition application. One or more constraints associated with the genre are determined, with the constraints indicating one or more features learned based on analyzing music associated with the genre. Musical elements specified by the user are received via the music composition application. Musical score updates are determined based on the musical elements and genre. The determined musical score updates are provided to the user device.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
G06N 3/04 - Architecture, e.g. interconnection topology
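The genre-constraint idea in the abstract above can be sketched in a few lines. This is a toy illustration only, not the patented method: the constraint table and function names are hypothetical, and a real system would learn the constraints from analyzed music rather than hard-code them.

```python
# Hypothetical genre constraints: allowed scale degrees (0 = tonic),
# standing in for features a real system would learn from genre corpora.
GENRE_CONSTRAINTS = {
    "blues": {0, 3, 5, 6, 7, 10},
    "pop": {0, 2, 4, 5, 7, 9, 11},
}

def suggest_updates(genre, melody):
    """Propose score updates: snap out-of-genre notes (MIDI numbers)
    to the nearest allowed scale degree for the given genre."""
    allowed = sorted(GENRE_CONSTRAINTS[genre])
    updates = []
    for note in melody:
        degree = note % 12
        if degree not in allowed:
            # Nearest allowed degree, measured around the octave circle.
            nearest = min(allowed,
                          key=lambda d: min(abs(d - degree), 12 - abs(d - degree)))
            updates.append((note, note - degree + nearest))
    return updates

# C (60) and G (67) fit the blues constraint set; C-sharp (61) does not.
assert suggest_updates("blues", [60, 61, 67]) == [(61, 60)]
```

A learned model would replace the lookup table, but the update step, comparing user-entered elements against genre constraints and proposing corrections, has the same shape.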
95.
Systems and methods for multi-user editing of virtual content
Embodiments of the systems and methods described herein provide an editor hub that can host a virtual environment and allow multiple game developer systems to connect. The editor hub can manage all change requests from connected game developers and apply the change requests to the runtime version of the data. The connected game developers can receive the same cached build results from the build pipeline, allowing simultaneous updates for a plurality of game developers working together on the same virtual content.
A63F 13/60 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
H04L 29/06 - Communication control; Communication processing characterised by a protocol
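The hub described above can be sketched as a single authority that serializes change requests and hands every connected client the same cached build. This is a minimal illustration of the idea, not the patented implementation; the class and method names are invented.

```python
# Minimal sketch of an editor hub: changes from any connected developer
# are applied in arrival order to one runtime copy of the data, and all
# clients receive the identical cached build result.

class EditorHub:
    def __init__(self):
        self.runtime_data = {}   # shared runtime version of the virtual content
        self.build_cache = None  # cached build result, invalidated on change

    def apply_change(self, developer, key, value):
        """Serialize a change request into the runtime data."""
        self.runtime_data[key] = (developer, value)
        self.build_cache = None

    def build(self):
        """Return the cached build; rebuild only if the data changed."""
        if self.build_cache is None:
            self.build_cache = dict(self.runtime_data)
        return self.build_cache

hub = EditorHub()
hub.apply_change("dev_a", "tree_01", "moved")
hub.apply_change("dev_b", "rock_02", "scaled")
# Both developers see the same object graph from the build pipeline.
assert hub.build() is hub.build()
```

Serializing changes through one hub is what makes "same cached build for all clients" cheap: there is exactly one history of edits to build from.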
96.
ARTIFICIAL INTELLIGENCE BASED CUSTOMER SERVICE ASSESSMENT
A support assessment system and method generate metrics to assess the quality, effectiveness, process adherence, or the like of customer support interactions. These metrics may be generated based at least in part on one or more support assessment models to provide objective measures of the customer support interactions. The support assessment models may be trained on training data derived from a set of support conversations and an indication of the metrics that are to result from those support conversations. The support assessment models may be any variety of machine learning models, such as neural network models. The objective measures generated by the support assessment models may further be used to recommend process changes, add or discontinue products or services, make assessments of customer support resources, and/or generate customer support training materials.
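The train-on-labeled-conversations step can be illustrated with a deliberately tiny model. The abstract names neural networks; this sketch substitutes a bag-of-words perceptron so it stays self-contained, and the vocabulary and labels are invented for illustration.

```python
# Toy support-assessment model: a bag-of-words perceptron trained on
# conversations labeled with the metric they should produce
# (1 = adhered to the support process, 0 = did not).

def featurize(text, vocab):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

def train_perceptron(samples, labels, vocab, epochs=20):
    weights, bias = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, label in zip(samples, labels):
            x = featurize(text, vocab)
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            err = label - pred
            weights = [w + err * xi for w, xi in zip(weights, x)]
            bias += err
    return weights, bias

vocab = ["sorry", "refund", "greeting", "ticket"]
convos = ["greeting sorry refund ticket", "refund refund", "greeting ticket", "sorry"]
labels = [1, 0, 1, 0]
w, b = train_perceptron(convos, labels, vocab)

def score(text):
    """Objective measure for a new conversation."""
    return 1 if sum(wi * xi for wi, xi in zip(w, featurize(text, vocab))) + b > 0 else 0
```

The training signal is exactly what the abstract describes: pairs of conversations and the metric they should yield; only the model family differs.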
Sequence predictors may be used to predict one or more entries in a musical sequence. The predicted entries in the musical sequence enable a virtual musician to continue playing a musical score based on the predicted entries when the occurrence of latency causes a first computing system hosting a first virtual musician to not receive entries or timing information for entries being performed in the musical sequence by a second computing system hosting a second virtual musician. The sequence predictors may be generated using a machine learning model generation system that uses historical performances of musical scores to generate the sequence predictor. Alternatively, or in addition, earlier portions of a musical score may be used to train the model generation system to obtain a prediction model that can predict later portions of the musical score.
G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
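The sequence-predictor idea above, filling in a remote virtual musician's delayed entries from historical performances, can be sketched with a bigram (Markov) predictor. This is a hedged stand-in for the machine-learning models the abstract describes; the data and function names are invented.

```python
# Bigram predictor trained on historical performances: when latency
# delays the remote musician's next entry, predict it from the last
# note heard so the local virtual musician can keep playing.

from collections import Counter, defaultdict

def train_bigram(performances):
    counts = defaultdict(Counter)
    for seq in performances:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(model, last_note):
    """Most likely continuation; fall back to repeating the last note."""
    if last_note in model:
        return model[last_note].most_common(1)[0][0]
    return last_note

history = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]]
model = train_bigram(history)
assert predict_next(model, 62) == 64  # 62 -> 64 occurs twice, 62 -> 60 once
```

A production system would condition on more context (timing, the score itself), but the contract is the same: given what has been heard, emit a plausible next entry until the real one arrives.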
42 - Scientific and technological services and research and design relating thereto; industrial analysis and research services; design and development of computer hardware and software.
Goods & Services
Design and development of interactive, computer, video and electronic game software; video game development services; video game programming development services; designing and developing computer game software and video game software for use with computers, video game program systems and computer networks; computer programming of video and computer games.
Embodiments of the systems and methods described herein provide a game terrain generation system that can generate height field data from a sketch of graphical inputs from a user via a graphical user interface. The game terrain generation system can use a model, such as a trained neural network, to apply macro and micro topological features on top of the height field data to generate game terrain data. The game terrain generation system can identify boundaries between different styles of terrain and generate transitions between the styles to create a more realistic terrain boundary.
A63F 13/63 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by the player, e.g. authoring using a level editor
A63F 13/85 - Providing additional services to players
G06T 19/20 - Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
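The boundary-transition step in the terrain abstract can be illustrated in one dimension. The patent applies a trained neural network; this toy uses a hand-written linear cross-fade, and all names and values are invented.

```python
# Toy style-boundary transition: cross-fade two 1-D height rows across a
# blend zone so the generated terrain changes gradually at the boundary
# instead of stepping abruptly from one style to the other.

def blend_boundary(heights_a, heights_b, blend_width):
    """Linearly interpolate from style A (left) to style B (right)."""
    n = len(heights_a)
    out = []
    for i in range(n):
        # Weight ramps from 1 (pure style A) down to 0 (pure style B).
        t = min(max((n - 1 - i) / max(blend_width, 1), 0.0), 1.0)
        out.append(t * heights_a[i] + (1 - t) * heights_b[i])
    return out

mountains = [9.0] * 8  # style A: high terrain
plains = [1.0] * 8     # style B: flat terrain
row = blend_boundary(mountains, plains, blend_width=4)
# Heights step down smoothly through the transition zone.
assert row == [9.0, 9.0, 9.0, 9.0, 7.0, 5.0, 3.0, 1.0]
```

In 2-D the same idea applies per boundary pixel, with the blend weight derived from distance to the identified style boundary.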
Using voice recognition, a user can interact with a companion application to control a video game from a mobile device. Advantageously, the user can interact with the companion application when the video game is unavailable due to, for example, the user's location. Moreover, machine learning may be used to facilitate generating voice responses to user utterances that are predicted to improve or maintain a user's level of engagement with the companion application, or its corresponding video game.
A63F 13/215 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising means for detecting acoustic signals, e.g. using a microphone
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
A63F 13/424 - Processing input control signals of video game devices, e.g. signals generated by the player or derived from the environment by mapping the input signals into game commands, e.g. mapping the displacement of a stylus on a touch screen to the steering angle of a virtual vehicle involving acoustic input signals, e.g. by using the results of pitch or rhythm extraction or voice recognition
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
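The utterance-to-command mapping implied by the abstract above can be sketched as a simple intent matcher. The intents, command names, and fallback behavior here are all hypothetical; a real companion app would use a trained model and richer dialogue handling.

```python
# Hypothetical companion-app intent matcher: map a recognized utterance
# to a game command, with a fallback response for unmatched speech
# (where a learned model could generate an engagement-preserving reply).

INTENTS = {
    "check inventory": "INVENTORY_QUERY",
    "start match": "QUEUE_MATCH",
    "claim reward": "CLAIM_DAILY_REWARD",
}

def to_command(utterance):
    """Return the game command for the first matching known intent."""
    text = utterance.lower().strip()
    for phrase, command in INTENTS.items():
        if phrase in text:
            return command
    return "FALLBACK_RESPONSE"

assert to_command("Please check inventory for me") == "INVENTORY_QUERY"
assert to_command("what's the weather") == "FALLBACK_RESPONSE"
```

Keyword matching is the crudest possible stand-in for speech-intent classification, but it shows where the voice-recognition output enters and where a game command leaves.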