A gesture-recognition (GR) device is disclosed that includes a capacitive touch sensor panel and a controller. The capacitive touch sensor panel comprises a plurality of sensing pads arranged in a cylindrical pattern inside a handle of the GR device and detects a multi-factor touch assertion at a set of sensing pads of the plurality of sensing pads. The controller transmits a driving signal to each of the plurality of sensing pads for the detection of the multi-factor touch assertion, generates an assertion signal, determines a signal sequence based on the assertion signal, and converts a current inactive state of the GR device to an active state based on a validation of the determined signal sequence corresponding to the multi-factor touch assertion and an inferred user intent.
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
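The sequence-validation step in the abstract above can be sketched as a small state machine: the controller logs which sensing pads are asserted each sample and converts the device to its active state only when the recent signal sequence matches a valid grip pattern. The pad identifiers, grip pattern, and sample threshold below are illustrative assumptions, not details from the publication.

```python
# Hypothetical sketch of the inactive-to-active state conversion described
# above. Pad names and thresholds are invented for illustration.

GRIP_PADS = {"P2", "P3", "P4", "P5"}   # pads a full-hand grip should cover
MIN_SAMPLES = 3                        # consecutive samples required to validate

def validate_sequence(samples):
    """Return True when every recent sample shows a full-grip assertion."""
    if len(samples) < MIN_SAMPLES:
        return False
    return all(GRIP_PADS <= set(s) for s in samples[-MIN_SAMPLES:])

class GRDevice:
    def __init__(self):
        self.state = "inactive"
        self.samples = []              # rolling log of asserted pad sets

    def on_assertion(self, asserted_pads):
        """Record one assertion signal and update the device state."""
        self.samples.append(asserted_pads)
        if self.state == "inactive" and validate_sequence(self.samples):
            self.state = "active"      # multi-factor assertion validated
        return self.state
```

A brief single-pad touch never activates the device; only a sustained multi-pad assertion does, which is one way to read the "multi-factor" and "inferred user intent" conditions.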
2.
NEMESIS CHARACTERS, NEMESIS FORTS, SOCIAL VENDETTAS AND FOLLOWERS IN COMPUTER GAMES
Methods for managing non-player characters and power centers in a computer game are based on character hierarchies and individualized correspondences between each character's traits or rank and events that involve other non-player characters or objects. Players may share power centers, character hierarchies, non-player characters, and related quests involving the shared objects with other players playing separate and unrelated game instances over a computer network, with the outcome of the quests reflected in the different games. Various configurations of game machines are used to implement the methods.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
3.
SYSTEM AND METHOD FOR GENERATING VIDEO CONTENT WITH HUE-PRESERVATION IN VIRTUAL PRODUCTION
A system is provided for generating video content with hue-preservation in virtual production. The system comprises a memory for storing instructions and a processor configured to execute the instructions. Based on the executed instructions, the processor is further configured to control a saturation of scene linear data based on mapping of a first color gamut corresponding to a first encoding format of raw data to a second color gamut corresponding to a defined color space. The processor is further configured to determine a standard dynamic range (SDR) video content in the defined color space based on the scene linear data. Based on a scaling factor that is applied to three primary color values that describe the first color gamut, hue of the SDR video content is preserved.
H04N 9/64 - Circuits for processing colour signals
H04N 9/67 - Circuits for processing colour signals for matrixing
H04N 9/73 - Colour balance circuits, e.g. white balance circuits or colour temperature control
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
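The hue-preservation property described above follows from hue depending only on the ratios between the three primary values: applying one common scaling factor changes brightness while leaving the hue angle unchanged. A minimal illustration of that property (not the publication's actual processing pipeline):

```python
import colorsys

def scale_primaries(rgb, k):
    """Apply one common scaling factor to all three primary color values."""
    return tuple(c * k for c in rgb)

# Hue in HSV depends only on the ratios between R, G, and B, so a uniform
# scale changes brightness but leaves the hue angle untouched.
h_orig = colorsys.rgb_to_hsv(0.8, 0.4, 0.2)[0]
h_scaled = colorsys.rgb_to_hsv(*scale_primaries((0.8, 0.4, 0.2), 0.5))[0]
```

Per-channel (non-uniform) gamut-mapping curves, by contrast, alter the channel ratios and therefore shift hue, which is the failure mode a common scaling factor avoids.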
4.
CONTROL OF SOCIAL ROBOT BASED ON PRIOR CHARACTER PORTRAYAL
A method and apparatus for controlling a social robot includes providing a set of quantitative personality trait values, also called a “personality profile,” to a decision engine of the social robot. The personality profile is derived from a character portrayed in a fictional work or dramatic performance, or by a real-life person (any one of these sometimes referred to herein as a “source character”). The decision engine controls social responses of the social robot to environmental stimuli, based in part on the set of personality trait values. The social robot thereby behaves in a manner consistent with the personality profile for the profiled source character.
G06N 3/008 - Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
5.
System and method for controlling digital cinematic content based on emotional state of characters
Provided is a system for controlling digital cinematic content based on emotional state of characters. A focus on one or more computer-controlled characters appearing in digital cinematic content is determined based on emotion indicators of a first user actively interacting with at least the one or more computer-controlled characters. A set of emotion indicators is inferred for each of the one or more computer-controlled characters based on one or more criteria and multifactor feedback loops are created. A story line of the digital cinematic content and behavioural characteristics of the one or more computer-controlled characters are controlled to achieve a target emotional arc of the first user based on the multifactor feedback loops.
An entertainment system provides data to a common screen (e.g., cinema screen) and personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices each configured for providing immersive output (e.g., a virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and shared audio system.
Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.
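One conventional parallel-rig disparity relation illustrates how the convergence distance and camera interaxial parameter mentioned above interact to place objects in front of or behind the screen plane. The formula is a standard stereography relation under a sensor-shift convergence model, not necessarily the publication's exact method:

```python
def screen_disparity(focal, interaxial, z_conv, z):
    """Disparity of a point at depth z for a parallel stereo rig whose
    sensors are shifted to converge at z_conv (the zero-disparity plane).
    Positive values place the point behind the screen, negative in front."""
    return focal * interaxial * (1.0 / z_conv - 1.0 / z)
```

Increasing the interaxial separation scales all disparities up (a stronger 3D effect), while moving the convergence distance shifts where the scene sits relative to the screen, which is why a 2D-to-3D tool exposes both as interactive parameters.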
A portal device for a video game includes a pad with different zones that can be illuminated by selectable colors, a toy sensor (e.g., an RFID tag sensor) associated with each zone, a controller and a communications port for communicating with a video game process executing on a game machine. The colors of each zone can be configured to one or a combination of three primary colors during game play, based on the game process. The portal device reacts to placement of tagged toys on zones and the color of the zones during play and provides sensor data to the game process. The game process controls the game state in part based on data from the portal device and in part on other user input.
A63F 9/24 - Games using electronic circuits not otherwise provided for
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
A computer-implemented method in conjunction with mixed reality gear (e.g., a headset) includes imaging a real scene encompassing a user wearing a mixed reality output apparatus. The method includes determining data describing a real context of the real scene, based on the imaging; for example, identifying or classifying objects, lighting, sound or persons in the scene. The method includes selecting a set of content including content enabling rendering of at least one virtual object from a content library, based on the data describing a real context, using various selection algorithms. The method includes rendering the virtual object in the mixed reality session by the mixed reality output apparatus, optionally based on the data describing a real context (“context parameters”). An apparatus is configured to perform the method using hardware, firmware, and/or software.
A63F 13/217 - Input arrangements for video game devices characterised by their sensors, purposes or types using environment-related information, i.e. information generated otherwise than by the player, e.g. ambient temperature or humidity
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A method for matching mouth shape and movement in digital video to alternative audio includes deriving a sequence of facial poses including mouth shapes for an actor from a source digital video. Each pose in the sequence of facial poses corresponds to a middle position of each audio sample. The method further includes generating an animated face mesh based on the sequence of facial poses and the source digital video, transferring tracked expressions from the animated face mesh or the target video to the source video, and generating a rough output video that includes transfers of the tracked expressions. The method further includes generating a finished video at least in part by refining the rough video using a parametric autoencoder trained on mouth shapes in the animated face mesh or the target video. One or more computers may perform the operations of the method.
A system is provided for generating video content with hue-preservation in virtual production. The system comprises a memory for storing instructions and a processor configured to execute the instructions. Based on the executed instructions, the processor is further configured to control a saturation of scene linear data based on mapping of a first color gamut corresponding to a first encoding format of raw data to a second color gamut corresponding to a defined color space. The processor is further configured to determine a standard dynamic range (SDR) video content in the defined color space based on the scene linear data. Based on a scaling factor that is applied to three primary color values that describe the first color gamut, hue of the SDR video content is preserved.
H04N 9/64 - Circuits for processing colour signals
H04N 9/73 - Colour balance circuits, e.g. white balance circuits or colour temperature control
H04N 9/67 - Circuits for processing colour signals for matrixing
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
12.
CONTROLLING PROGRESS OF AUDIO-VIDEO CONTENT BASED ON SENSOR DATA OF MULTIPLE USERS, COMPOSITE NEURO-PHYSIOLOGICAL STATE AND/OR CONTENT ENGAGEMENT POWER
Provided is a system for controlling progress of audio-video content based on sensor data of multiple users, composite neuro-physiological state (CNS) and/or content engagement power (CEP). Sensor data is received from sensors positioned on an electronic device of a first user to sense neuro-physiological responses of the first user and of second users that are in the field-of-view (FOV) of the sensors. Based on the sensor data and at least one of a CNS value for a social interaction application and a CEP value for immersive content, recommendations of action items for the first user are predicted. Content of a feedback loop, created based on the sensor data, the CNS value, the CEP value, and the predicted recommendations, is rendered on an output unit of the electronic device during play of the at least one of the social interaction application and the immersive content experience. Progress of the social interaction and immersive content experience is controlled by the first user based on the predicted recommendations.
Methods, apparatus and systems for geometric matching of virtual reality (VR) or augmented reality (AR) output contemporaneously with video output formatted for display on a 2D screen include a determination of value sets that when used in image processing cause an off-screen angular field of view of the at least one of the AR output object or the VR output object to have a fixed relationship to at least one of the angular field of view of the onscreen object or of the 2D screen. The AR/VR output object is outputted to an AR/VR display device and the user experience is improved by the geometric matching between objects observed on the AR/VR display device and corresponding objects appearing on the 2D screen.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
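The fixed angular relationship described above can be sketched with the standard subtended-angle formula: an object of width w at distance d subtends 2·atan(w/2d), and the off-screen AR/VR object is sized so its angular field of view keeps a fixed ratio to the on-screen object's. The function names and the ratio parameter are illustrative assumptions:

```python
import math

def angular_fov(width, distance):
    """Angle subtended at the viewer by an object of given width at distance."""
    return 2.0 * math.atan(width / (2.0 * distance))

def ar_width_for_match(screen_width, screen_distance, ar_distance, ratio=1.0):
    """Width for the off-screen AR object so its angular field of view keeps
    a fixed ratio to the on-screen object's angular field of view."""
    theta = ratio * angular_fov(screen_width, screen_distance)
    return 2.0 * ar_distance * math.tan(theta / 2.0)
```

With a ratio of 1 and equal distances the AR object matches the on-screen object exactly, which is the degenerate case of the geometric matching the abstract describes.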
A digital content package includes first content comprising a video feature such as a motion picture or the like, and a user-selectable application configured to operate as follows. When activated using an icon from a menu screen, the application records an identifier for scenes (discrete portions) of the first content that are selected by a user to generate a playlist. The user may select the scenes by indicating a start and end of each scene. The application saves the playlist locally, then uploads it to a server. Via a user account at the server, a user may publish the playlist to a user-created distribution list, webpage, or other electronic publication, and modify the playlist by deleting or reordering scenes.
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/482 - End-user interface for program selection
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
15.
Perfless and cadenceless scanning and digitization of motion picture film
Motion picture film is scanned by a high-resolution, continuous sprocketless scanner, producing a first sequence of digital images each representing a plurality of motion picture frames and perforations (perfs) of the film input. The first sequence of images is processed using a processor running an analysis and extraction algorithm, producing a second sequence of images each including a single, edge-stabilized frame of the motion picture.
A method and apparatus for controlling a social robot includes operating an electronic output device based on social interactions between the social robot and a user. The social robot utilizes an algorithm or other logical solution process to infer a user mental state, for example a mood or desire, based on observation of the social interaction. Based on the inferred mental state, the social robot causes an action of the electronic output device to be selected. Actions may include, for example, playing a selected video clip, brewing a cup of coffee, or adjusting window blinds.
A computer-generated scene is generated as background for a live action set, for display on a panel of light emitting diodes (LEDs). Characteristics of light output by the LED panel are controlled such that the computer-generated scene rendered on the LED panel, when captured by a motion picture camera, has high fidelity to the original computer-generated scene. Consequently, the scene displayed on the screen more closely simulates the rendered scene from the viewpoint of the camera. Thus, a viewpoint captured by the camera appears more realistic and/or truer to the creative intent.
A computer-implemented method for providing electronic games for play by a group of users in two or more moving vehicles. The method includes maintaining data structures of media program data, user profile data and vehicle profile data, receiving user and vehicle state information, identifying a group of users based on contemporaneous presence in two or more vehicles or common participation in a game or other group experience for related trips at different times, and selecting, configuring or creating a media program for play at media players. An apparatus or system is configured to perform the method, and related operations.
A63F 13/355 - Performing operations on behalf of clients with restricted processing capabilities, e.g. servers transform changing game scene into an MPEG-stream for transmitting to a mobile phone or a thin client
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
H04W 4/46 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for vehicle-to-vehicle communication [V2V]
H04W 4/21 - Services signalling; Auxiliary data signalling, i.e. transmitting data via a non-traffic channel for social networking applications
A method for preparing digital image data from an analog image input by scanning, and reducing visibility of the scanning noise, may include estimating a visibility of scanning noise, and a number of scanning samples needed to reduce scanning noise to below a visible threshold. Related methods include scanning, by an analog-to-digital image scanner, an analog image for multiple iterations, resulting in digital image data for each of the iterations; calculating a noise statistic for individual pixels of digital image data across the iterations; determining true values of individual pixels of the digital image data based on the noise statistic for each of the individual pixels and generating scanner noise reduced digital image data wherein pixels are assigned their respective ones of the true values; and saving the scanner noise reduced digital image data in a computer memory.
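The multi-iteration scanning approach above amounts to per-pixel statistics across repeated scans: the noise statistic estimates per-pixel variability, and the true value is recovered from the sample distribution. A stdlib-only sketch under the simplifying assumptions of zero-mean noise and flattened pixel lists (the publication does not specify these choices):

```python
import statistics

def denoise_scans(scans):
    """Estimate each pixel's true value as the mean across repeated scans of
    the same analog image, and report per-pixel noise as the sample standard
    deviation. `scans` is a list of equal-length flattened pixel lists."""
    true_img, noise = [], []
    for pixel_samples in zip(*scans):           # same pixel across iterations
        true_img.append(statistics.fmean(pixel_samples))
        noise.append(statistics.stdev(pixel_samples))
    return true_img, noise
```

The per-pixel noise statistic also answers the abstract's first question, namely how many scanning samples are needed: averaging n scans reduces zero-mean noise by a factor of roughly the square root of n, so n can be chosen to push the residual below a visibility threshold.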
A system is provided for generating video content with hue-preservation in virtual production. A processor determines data in a scene-based encoding format based on raw data received in a pre-defined format. The raw data includes a computer-generated background rendered on a rendering panel and a foreground object. Based on the data in the scene-based encoding format, scene linear data is determined. A saturation of the scene linear data is controlled when a first color gamut corresponding to the pre-defined format is mapped to a second color gamut corresponding to a display-based encoding color space. Based on the scene linear data, a standard dynamic range (SDR) video content in the display-based encoding color space is determined. Hue of the SDR video content is preserved, when the rendering panel is in-focus or out-of-focus, based on a scaling factor that is applied to three primary color values that describe the first color gamut.
H04N 9/64 - Circuits for processing colour signals
H04N 9/73 - Colour balance circuits, e.g. white balance circuits or colour temperature control
H04N 9/67 - Circuits for processing colour signals for matrixing
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
Systems and computer-implemented methods are disclosed for providing social entertainment experiences in a moving vehicle via an apparatus that simulates human social behavior relevant to a journey undertaken by the vehicle, for displaying human-perceivable exterior communication on the moving vehicle to neighboring vehicles and/or pedestrians, and for providing a modular travel experience.
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/4788 - Supplemental services, e.g. displaying phone caller identification or shopping application communicating with other users, e.g. chatting
22.
HETEROGENOUS GEOMETRY CACHING FOR REAL-TIME SIMULATED FLUIDS
A method for simulating fluid surfaces in real-time in response to user input includes detecting interactive conditions triggering insertion of a heterogeneous mesh sequence in a 3D model sequence for rendering, fetching successive members of the heterogeneous mesh sequence from a computer memory, inserting the successive members in corresponding representations of the 3D model sequence in a computer memory, and rendering successive video frames from the representations of the 3D model sequence, each including a successive member of the heterogeneous mesh sequence. A related method for generating a compact heterogeneous mesh sequence for use in rendering corresponding frames of video includes generating a heterogeneous mesh sequence modeling the response of a fluid surface to physical forces, the heterogeneous mesh sequence characterized by position values represented in computer memory by not less than 12 bytes for each vertex thereof, transforming the heterogeneous mesh sequence into the compact heterogeneous mesh sequence, at least in part by quantizing the position values to not greater than four bytes, and storing the compact heterogeneous mesh sequence in a computer memory for use in real-time rendering.
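The 12-byte-to-4-byte vertex quantization described above (three 32-bit floats down to a single 32-bit word) can be sketched by normalizing each coordinate to the mesh's bounding box and packing three 10-bit values into one integer. This 10/10/10 packing is one plausible scheme, not necessarily the publication's:

```python
def quantize_vertex(p, box_min, box_max, bits=10):
    """Pack an (x, y, z) float position into one 32-bit integer: each axis
    is normalized to the mesh bounding box and stored in `bits` bits."""
    scale = (1 << bits) - 1
    packed = 0
    for i, v in enumerate(p):
        n = (v - box_min[i]) / (box_max[i] - box_min[i])   # 0..1 in the box
        q = round(min(max(n, 0.0), 1.0) * scale)
        packed |= q << (i * bits)
    return packed

def dequantize_vertex(packed, box_min, box_max, bits=10):
    """Recover an approximate (x, y, z) position from the packed integer."""
    scale = (1 << bits) - 1
    return tuple(
        box_min[i] + (((packed >> (i * bits)) & scale) / scale)
        * (box_max[i] - box_min[i])
        for i in range(3)
    )
```

The worst-case position error is half a quantization step per axis (bounding-box extent / 1023 / 2 at 10 bits), which is typically below visible tolerance for a fluid surface while cutting the cached sequence to a third of its size.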
A method for real-time shadow rendering using cached shadow maps and deferred shading by a video processor of a game console or the like includes, for at least each key frame of video output, determining a viewpoint for a current key frame based on user input, filtering a texel of a frame-specific shadow map based on a dynamic mask wherein the texel is filtered, for a shadowed light, from a static shadow map and a dynamic shadow map or from the static shadow map only, based on the dynamic mask value for the texel, and rendering the current key frame based on the frame-specific shadow map and a deferred-shadow rendering algorithm. The method enables efficient rendering of thousands of shadowed lights in large environments by consumer-grade game consoles.
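The per-texel mask selection above reduces, in the simplest depth-map reading, to choosing between the cached static shadow map alone and a combination with the per-frame dynamic map. A schematic sketch of that selection (real shadow mapping operates on GPU depth textures; the nearest-occluder combination is an assumption about how the two maps merge):

```python
def filter_texel(static_depth, dynamic_depth, mask):
    """Per-texel shadow-map lookup: where the dynamic mask is set, combine
    the cached static map with the per-frame dynamic map (nearest occluder
    wins); elsewhere reuse the static map alone and skip dynamic work."""
    return min(static_depth, dynamic_depth) if mask else static_depth
```

The saving comes from the unmasked texels: for lights whose shadow casters did not move this frame, the cached static map answers the lookup with no dynamic shadow-map render at all, which is what makes thousands of shadowed lights tractable on a consumer game console.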
An automatic flagging of sensitive portions of a digital dataset for media production includes receiving the digital dataset comprising at least one of audio data, video data, or audio-video data for producing at least one media program. A processor identifies sensitive portions of the digital dataset likely to be in one or more defined content classifications, based at least in part on comparing unclassified portions of the digital dataset with classified portions of the prior media production using an algorithm, and generates a plurality of sensitivity tags each signifying a sensitivity assessment for a corresponding one of the sensitive portions. The processor may save the plurality of sensitivity tags each correlated to its corresponding one of the sensitive portions in a computer memory for use by a media production or localization team.
Methods and apparatus for personalizing a vehicle with a sensory output device include receiving, by one or more processors, a signal indicating an identity or passenger profile of a detected passenger in or boarding the vehicle, accessing preference data and geographic location data for the passenger, and selecting sensory content for delivery to the passenger in the vehicle based on the preference data and geographic location data. Methods and apparatus for producing video customized for a preference profile of a person or cohort include associating each of stored video clips with a set of characteristic parameters relating to user-perceivable characteristics, receiving user profile data relating to the person or cohort, selecting video clips from the data structure based at least partly on the user profile data, and automatically producing a video including the preferred video clips.
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/458 - Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules
H04N 21/8549 - Creating video summaries, e.g. movie trailer
A gesture-recognition (GR) device made to be held or worn by a user includes an electronic processor configured by program instructions in memory to recognize a gesture. The device or a cooperating system may match a gesture identifier to an action identifier for one or more target devices in a user's environment, enabling control of the target devices by user movement of the GR device in three-dimensional space.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
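The gesture-to-action matching described above is essentially a lookup keyed on a gesture identifier and a target-device identifier. All identifiers in this sketch are hypothetical, invented only to illustrate the mapping:

```python
# Illustrative mapping table; gesture, device, and action identifiers are
# assumptions, not values from the publication.
GESTURE_ACTIONS = {
    ("circle", "lamp"): "toggle_power",
    ("flick_up", "tv"): "volume_up",
}

def resolve_action(gesture_id, target_device):
    """Match a recognized gesture identifier to an action identifier for a
    target device in the user's environment; None when unmapped."""
    return GESTURE_ACTIONS.get((gesture_id, target_device))
```

Keying on the (gesture, device) pair lets the same movement in three-dimensional space mean different things for different target devices, which is the point of performing the match per target.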
27.
SCALABLE SIMULATION AND AUTOMATED TESTING OF MOBILE VIDEOGAMES
A method for evaluating performance of a video game by a computing device. The method includes a harness application that is independent of the device's execution context, and an agent application that simulates a player's actions.
Motion picture film is scanned by a high-resolution, continuous sprocketless scanner, producing a first sequence of digital images each representing a plurality of motion picture frames and perforations (perfs) of the film input. The first sequence of images is processed using a processor running an analysis and extraction algorithm, producing a second sequence of images each including a single, edge-stabilized frame of the motion picture.
A gesture-recognition (GR) device is disclosed that includes a capacitive touch sensor panel and a controller. The capacitive touch sensor panel comprises a plurality of sensing pads arranged in a cylindrical pattern inside a handle of the GR device and detects a multi-factor touch assertion at a set of sensing pads of the plurality of sensing pads. The controller transmits a driving signal to each of the plurality of sensing pads for the detection of the multi-factor touch assertion, generates an assertion signal, determines a signal sequence based on the assertion signal, and converts a current inactive state of the GR device to an active state based on a validation of the determined signal sequence corresponding to the multi-factor touch assertion and an inferred user intent.
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
A63F 9/24 - Games using electronic circuits not otherwise provided for
Provided is a gesture-recognition (GR) device that includes a printed circuit board (PCB) and a controller. The circuit board has an aspect ratio exceeding a threshold value that corresponds to at least a 70 percent difference between length and width of the PCB. The PCB includes a first unit and a second unit. The first unit corresponds to a base unit to be grasped by hand of a user. The second unit corresponds to an elongate unit that extends outward from the first unit. The second unit is characterized by a minimal wand form factor. A rigid strength of the second unit is based on at least a shape of an outer shell and a structural attribute of the second unit. The controller controls illumination of a plurality of light sources mounted on the second unit of the circuit board based on assertion signals.
A63F 9/24 - Games using electronic circuits not otherwise provided for
G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
G06V 40/20 - Movements or behaviour, e.g. gesture recognition
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
G06F 3/041 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
G06F 3/044 - Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means by capacitive means
A63H 30/04 - Electrical arrangements using wireless transmission
31.
MANAGING STATES OF A GESTURE RECOGNITION DEVICE AND AN INTERACTIVE CASING
Provided is a system that includes an interactive casing and a GR device. The interactive casing receives a first signal based on activation of a masked electrical switch by release of a magnetic assertion when a lid member of the interactive casing is disengaged from a base member of the interactive casing. A first system state of the interactive casing is converted to a second system state. Audio-visual feedback is generated and a second signal is communicated to the GR device based on the conversion. Based on the received second signal, a first device state of the GR device is converted to a second device state. Power levels of a first power storage device of the interactive casing and a second power storage device of the GR device are maintained during the first system state and the first device state, respectively.
Provided is a gesture recognition (GR) device that includes a circuit board on which a plurality of light sources are mounted. A first light source is side-mounted at a tip of a second unit of the circuit board, and a set of second light sources is mounted at right angles on top and bottom surfaces of the second unit. A first pair from the set of second light sources is positioned adjacent to the side-mounted first light source. The plurality of light sources are controlled to generate multiple lighting effects for the tip based on assertion signals generated at a first unit of the circuit board. A first lighting effect corresponds to a directional beam generated by the first light source. A set of second lighting effects, which remains unblocked by the side-mounted first light source, corresponds to a multi-color illumination generated by the set of second light sources.
H05B 47/13 - Controlling the light source in response to determined parameters by determining the presence or movement of objects or living beings by using passive infrared detectors
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/03 - Arrangements for converting the position or the displacement of a member into a coded form
G06F 3/0346 - Pointing devices displaced or positioned by the user; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
33.
REAL-TIME ROUTE CONFIGURING OF ENTERTAINMENT CONTENT
A computer-implemented method for producing real-time media content customized for a travel event in a dynamic travel route between an origin and a destination of a connected vehicle. The method includes maintaining media production data, receiving information on a travel event including route information, producing media content based on the travel event and travel route information, and streaming the media content to connected cars and servers along the travel route.
A method for converting a source video content constrained to a first color space to a video content constrained to a second color space using an artificial intelligence machine-learning algorithm based on a creative profile.
Systems and methods described herein are configured to enhance the understanding and experience of news and live events in real-time. The systems and methods leverage a distributed network of professional and amateur journalists/correspondents using technology to create unique experiences and/or provide views and perspectives different from experiences, views and/or perspectives provided by existing newscasts and/or sportscasts.
Multiple different versions of media content are contained in a single package of audio-video media content, using compression algorithms that reduce storage and bandwidth required for storing multiple full-resolution versions of the media. Portions of individual frames are replaced during playback so that only the pixels that differ between versions need to be stored.
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/8543 - Content authoring using a description language, e.g. MHEG [Multimedia and Hypermedia information coding Expert Group] or XML [eXtensible Markup Language]
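The multi-version packaging abstract above stores only the pixels that differ between versions and replaces them during playback. A minimal sketch of that diff-and-patch idea follows; the frame representation and patch format are invented for illustration.

```python
# Minimal sketch of storing only the pixels that differ between two
# versions of a frame, then reapplying them at playback time.
# Frames are plain nested lists; the (row, col, value) patch format
# is an illustrative assumption.

def diff_patch(base, alt):
    """Return [(row, col, value)] for pixels where alt differs from base."""
    return [(r, c, alt[r][c])
            for r in range(len(base))
            for c in range(len(base[0]))
            if alt[r][c] != base[r][c]]

def apply_patch(base, patch):
    """Rebuild the alternate version from the base frame plus the patch."""
    frame = [row[:] for row in base]
    for r, c, v in patch:
        frame[r][c] = v
    return frame

base = [[0, 0, 0], [0, 0, 0]]
alt  = [[0, 9, 0], [0, 0, 7]]
patch = diff_patch(base, alt)   # only the two differing pixels are stored
```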
A method for transforming extended video data for display in virtual reality processes digital extended video data for display on a center screen and two auxiliary screens of a real extended video cinema. The method includes accessing, by a computer executing a rendering application, data that defines virtual screens including a center screen and auxiliary screens, wherein tangent lines to each of the auxiliary screens at their respective centers of area intersect with a tangent line to the center screen at its center of area at equal angles in a range of 75 to 105 degrees. The method includes preparing virtual extended video data at least in part by rendering the digital extended video on corresponding ones of the virtual screens; and saving the virtual extended video data in a computer memory. A corresponding playback method and apparatus display the processed data in virtual reality.
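The screen-geometry constraint above (tangent lines meeting at equal angles in a 75 to 105 degree range) can be checked with a small top-down 2D sketch; the coordinate frame and helper names are illustrative assumptions, not the claimed method.

```python
# Top-down 2D sketch of the virtual screen layout: the center screen's
# tangent line runs along the x-axis, and each auxiliary screen's
# tangent line meets it at an equal angle theta in [75, 105] degrees.
import math

def aux_screen_direction(theta_deg, side):
    """Unit direction of an auxiliary screen's tangent line; `side` is
    +1 for the right screen and -1 for the left screen."""
    t = math.radians(theta_deg)
    return (math.cos(t), side * math.sin(t))

def intersection_angle(d1, d2):
    """Angle in degrees between two tangent-line directions."""
    dot = d1[0] * d2[0] + d1[1] * d2[1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot))))

center = (1.0, 0.0)
left = aux_screen_direction(90.0, -1)
right = aux_screen_direction(90.0, +1)
```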
A computer-implemented method for transforming audio-video data includes automatically detecting substantially all discrete human-perceivable messages encoded in the audio-video data, determining a semantic encoding for each of the detected messages, assigning a time code to each of the encodings correlated to specific frames of the audio-video data, and recording a data structure relating each time code to a corresponding one of the semantic encodings in a recording medium. The method may further include converting extracted recorded vocal instances from the audio-video data into a text data, generating a dubbing list comprising the text data and the time code, assigning a set of annotations corresponding to the one or more vocal instances specifying one or more creative intents, generating the scripting data comprising the dubbing list and the set of annotations, and other optional operations. An apparatus may be programmed to perform the method by executable instructions for the foregoing operations.
H04N 21/439 - Processing of audio elementary streams
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
A method for receiving and processing sonic messaging by a mixed reality (xR) device from acoustic output of a media player playing content for a two-dimensional (2D) screen or projection surface, to coordinate the rendering and playing of an xR output in the xR device with the playing of the 2D content at the media player.
An automated testing framework and associated tools for executable applications such as games focus on integration testing, wherein users create data-driven tests by using test modules and configuration data as building blocks. The tools facilitate cooperation between coders and non-technical Quality Assurance (QA) staff in creating automated tests by simplifying the user interface for configuring tests. Components of the tools simulate user interactions with the application under test, for example, gamepad button presses. The tools are also capable of skipping portions of gameplay or other interactive activity and directly jumping into a desired game mode during automated testing, among other functions.
A method and system for automated voice casting compares candidate voice samples from candidate speakers in a target language with a primary voice sample from a primary speaker in a primary language. Utterances in the audio samples of the candidate speakers and the primary speaker are identified and typed, and voice samples are generated that meet applicable utterance type criteria. A neural network is used to generate an embedding for the voice samples. A voice sample can include groups of different utterance types, with embeddings generated for each utterance group in the voice sample and then combined in a weighted form wherein the resulting embedding emphasizes selected utterance types. Similarities between embeddings for the candidate voice samples relative to the primary voice sample are evaluated and used to select a candidate speaker that is a vocal match.
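The weighted-embedding comparison described in this abstract can be sketched as follows. The per-utterance-type vectors, weights, and speaker names are toy stand-ins for a neural network's output, not the claimed system.

```python
# Sketch: combine per-utterance-type embeddings with weights that
# emphasize selected utterance types, then rank candidate speakers by
# cosine similarity to the primary speaker's combined embedding.
import math

def combine(embeddings, weights):
    """Weighted sum of per-utterance-type embedding vectors."""
    dim = len(next(iter(embeddings.values())))
    out = [0.0] * dim
    for utype, vec in embeddings.items():
        w = weights.get(utype, 0.0)
        for i, x in enumerate(vec):
            out[i] += w * x
    return out

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

weights = {"shout": 0.7, "whisper": 0.3}   # emphasize shouted utterances
primary = combine({"shout": [1.0, 0.0], "whisper": [0.0, 1.0]}, weights)
candidates = {
    "speaker_a": combine({"shout": [1.0, 0.1], "whisper": [0.1, 1.0]}, weights),
    "speaker_b": combine({"shout": [0.0, 1.0], "whisper": [1.0, 0.0]}, weights),
}
best = max(candidates, key=lambda name: cosine(candidates[name], primary))
```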
A processor provides a simulated three-dimensional (3D) environment for a game or virtual reality (VR) experience, including controlling a characteristic parameter of a 3D object or character based on at least one of: an asynchronous event in a second game, feedback from multiple synchronous users of the VR experience, or a function driven by one or more variables reflecting a current state of at least one of the 3D environment, the game, or the VR experience.
A63F 13/525 - Changing parameters of virtual cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04L 29/06 - Communication control; Communication processing characterised by a protocol
A63F 13/28 - Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
43.
Nemesis characters, nemesis forts, social vendettas and followers in computer games
Methods for managing non-player characters and power centers in a computer game are based on character hierarchies and individualized correspondences between each character's traits or rank and events that involve other non-player characters or objects. Players may share power centers, character hierarchies, non-player characters, and related quests involving the shared objects with other players playing separate and unrelated game instances over a computer network, with the outcome of the quests reflected in the different games. Various configurations of game machines are used to implement the methods.
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/69 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor by enabling or updating specific game elements, e.g. unlocking hidden features, items, levels or versions
A63F 13/798 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories for assessing skills or for ranking players, e.g. for generating a hall of fame
44.
Cinematic mastering for virtual reality and augmented reality
An entertainment system provides data to a common screen (e.g., cinema screen) and personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices each configured for providing immersive output (e.g., a virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and shared audio system.
A method for matching mouth shape and movement in digital video to alternative audio includes deriving a sequence of facial poses including mouth shapes for an actor from a source digital video. Each pose in the sequence of facial poses corresponds to a middle position of each audio sample. The method further includes generating an animated face mesh based on the sequence of facial poses and the source digital video, transferring tracked expressions from the animated face mesh or the target video to the source video, and generating a rough output video that includes transfers of the tracked expressions. The method further includes generating a finished video at least in part by refining the rough video using a parametric autoencoder trained on mouth shapes in the animated face mesh or the target video. One or more computers may perform the operations of the method.
Methods, apparatus and systems for geometric matching of virtual reality (VR) or augmented reality (AR) output contemporaneously with video output formatted for display on a 2D screen include a determination of value sets that when used in image processing cause an off-screen angular field of view of the at least one of the AR output object or the VR output object to have a fixed relationship to at least one of the angular field of view of the onscreen object or of the 2D screen. The AR/VR output object is outputted to an AR/VR display device and the user experience is improved by the geometric matching between objects observed on the AR/VR display device and corresponding objects appearing on the 2D screen.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
An apparatus and method for configuring a customized tour includes providing a list of tour subject indicators with a relevance value for locations to be toured, receiving user selection data regarding the subject indicators, and configuring a tour route based on an aggregate relevance score calculated for the tour subject indicators and locations indicated by one or more users. The method and apparatus may further include defining at least one tour route record comprising an ordered list of locations that satisfies at least a minimum aggregate relevance score constraint and a maximum tour duration constraint and saving information defining the at least one tour route record in a computer memory for use in delivering a corresponding tour.
G06Q 10/04 - Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
G06F 16/901 - Indexing; Data structures therefor; Storage structures
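The tour-configuration abstract above selects an ordered list of locations subject to a minimum aggregate relevance score and a maximum duration. The greedy value-per-minute heuristic and the location data below are illustrative assumptions, not the claimed algorithm.

```python
# Sketch: assemble a tour route that maximizes aggregate relevance
# under a duration budget, then verify the minimum-score constraint.

def build_route(locations, max_duration, min_score):
    """locations: list of (name, relevance, minutes). Returns an
    ordered list of names, or None if no qualifying route exists
    under this heuristic."""
    ranked = sorted(locations, key=lambda l: l[1] / l[2], reverse=True)
    route, score, duration = [], 0.0, 0
    for name, relevance, minutes in ranked:
        if duration + minutes <= max_duration:
            route.append(name)
            score += relevance
            duration += minutes
    return route if score >= min_score else None

locations = [
    ("museum", 9.0, 60),
    ("old_bridge", 4.0, 15),
    ("market", 3.0, 30),
    ("viewpoint", 5.0, 20),
]
route = build_route(locations, max_duration=100, min_score=15.0)
```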
A digital content package includes first content comprising a video feature such as a motion picture or the like, and a user-selectable application configured to operate as follows. When activated using an icon on a menu screen, the application records an identifier for scenes (discrete portions) of the first content that are selected by a user to generate a playlist. The user may select the scenes by indicating a start and end of each scene. The application saves the playlist locally, then uploads it to a server. Via a user account at the server, a user may publish the playlist to a user-created distribution list, webpage, or other electronic publication, and modify the playlist by deleting or reordering scenes.
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/482 - End-user interface for program selection
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
49.
Methods for controlling scene, camera and viewing parameters for altering perception of 3D imagery
Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.
H04N 19/70 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
H04N 19/119 - Adaptive subdivision aspects e.g. subdivision of a picture into rectangular or non-rectangular coding blocks
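The stereography abstract above adjusts horizontal shift, convergence distance, and camera interaxial to influence depth perception. A worked sketch of how those parameters interact follows, assuming parallel cameras converged by horizontal image translation; the symbol names and numeric values are illustrative, not taken from the patent.

```python
# Sketch: signed on-screen disparity for a parallel-camera rig with
# horizontal image translation. i = interaxial separation, f = focal
# length, c = convergence distance, z = object depth (all in metres,
# values invented for illustration).

def screen_disparity(i, f, c, z):
    """Zero at the convergence distance, positive (behind-screen)
    for z > c, negative (in-front-of-screen) for z < c."""
    return f * i * (1.0 / c - 1.0 / z)

# Objects at the convergence distance sit exactly on the screen plane:
on_screen = screen_disparity(i=0.065, f=0.05, c=3.0, z=3.0)
behind    = screen_disparity(i=0.065, f=0.05, c=3.0, z=10.0)
in_front  = screen_disparity(i=0.065, f=0.05, c=3.0, z=1.5)
```

Widening the interaxial scales all disparities, which is one way such a tool exaggerates or flattens perceived depth.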
A systematic method of introducing obfuscating “organic” noise to a user's content engagement history leverages a recommender system by creating a public history on a client device which is a superset of the user's true engagement history. The method builds up the superset history over time through a client's interaction with the recommender system by simulating organic growth in a user's actual engagement history. The organic superset prevents an adversary with access to the underlying recommendation model from readily distinguishing between signal and noise in a user's query and obfuscates the user's engagement history with the recommender system.
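The obfuscation method in this abstract grows a public history that is a superset of the user's true engagement history. A minimal sketch of that superset property follows; the sampling policy is an illustrative stand-in for the recommender-driven organic growth described above.

```python
# Sketch: the public history contains every true engagement item plus
# noise items drawn from the catalog, so an adversary cannot readily
# separate signal from noise. Item names and the one-step sampling
# policy are invented for illustration.
import random

def grow_public_history(true_history, catalog, noise_per_step, rng):
    """Return a public history = true history plus sampled noise items."""
    public = list(true_history)
    candidates = [item for item in catalog if item not in public]
    public += rng.sample(candidates, min(noise_per_step, len(candidates)))
    return public

rng = random.Random(7)
true_history = ["item_3", "item_8"]
catalog = [f"item_{n}" for n in range(20)]
public = grow_public_history(true_history, catalog, noise_per_step=5, rng=rng)
```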
Applications for a Content Engagement Power (CEP) value include generating a script, text, or other communication content, controlling playout of communication content based on neuro-physiological data, gathering neuro-physiological data correlated to consumption of communication content, and rating effectiveness of personal communications. The CEP is computed based on neuro-physiological sensor data processed to express engagement with content along multiple dimensions such as valence, arousal, and dominance. An apparatus is configured to perform the method using hardware, firmware, and/or software.
A61B 5/16 - Devices for psychotechnics; Testing reaction times
A61B 5/04 - Measuring bioelectric signals of the body or parts thereof
A63F 13/352 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers - Details of game servers involving special game server arrangements, e.g. regional servers connected to a national server or a plurality of servers managing partitions of the game world
G10L 15/04 - Segmentation; Word boundary detection
52.
SOCIAL INTERACTIVE APPLICATIONS USING BIOMETRIC SENSOR DATA FOR DETECTION OF NEURO-PHYSIOLOGICAL STATE
Applications for a Composite Neuro-physiological State (CNS) value include using the value as an indicator of participant emotional state in computer games and other social interaction applications. The CNS is computed based on biometric sensor data processed to express player engagement with content, game play, and other participants along multiple dimensions such as valence, arousal, and dominance. An apparatus is configured to perform the method using hardware, firmware, and/or software.
A61B 5/16 - Devices for psychotechnics; Testing reaction times
G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
A61B 5/055 - Detecting, measuring or recording for diagnosis by means of electric currents or magnetic fields; Measuring using microwaves or radio waves involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
A61B 5/04 - Measuring bioelectric signals of the body or parts thereof
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
A63F 13/212 - Input arrangements for video game devices characterised by their sensors, purposes or types using sensors worn by the player, e.g. for measuring heart beat or leg activity
A63F 13/79 - Game security or game management aspects involving player-related data, e.g. identities, accounts, preferences or play histories
53.
Transforming audio content for subjective fidelity
A method or apparatus for delivering audio programming such as music to listeners may include identifying, capturing and applying a listener's audiometric profile to transform audio content so that the listener hears the content similarly to how the content was originally heard by a creative producer of the content. An audio testing tool may be implemented as a software application to identify and capture the listener's audiometric profile. A signal processor may operate an algorithm used for processing source audio content, obtaining an identity and an audiometric reference profile of the creative producer from metadata associated with the content. The signal processor may then provide audio output based on a difference between the listener's and the creative producer's audiometric profiles.
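The profile-difference idea in this abstract can be sketched as per-band compensation gains derived from the two audiometric profiles. The band layout and threshold values (in dB HL) below are invented for illustration, not drawn from the patent.

```python
# Sketch: derive per-band gains from the difference between the
# listener's audiometric profile and the creative producer's
# reference profile, boosting bands where the listener's hearing
# threshold is higher (worse) than the producer's.

def compensation_gains(producer_profile, listener_profile):
    """Per-band gain in dB for an equalization stage."""
    return {band: listener_profile[band] - producer_profile[band]
            for band in producer_profile}

producer = {"250Hz": 5.0, "1kHz": 5.0, "4kHz": 10.0}
listener = {"250Hz": 5.0, "1kHz": 15.0, "4kHz": 40.0}
gains = compensation_gains(producer, listener)
```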
Video conversion technology, in which a first stream of video content is accessed and multiple, different layers are extracted from the first stream of the video content. Each of the multiple, different layers are separately processed to convert the multiple, different layers into modified layers that each have a higher resolution. The modified layers are reassembled into a second stream of the video content that has a higher resolution than the first stream of the video content.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
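The layered conversion pipeline above (extract layers, upscale each separately, reassemble) can be sketched end to end. Nearest-neighbour doubling stands in for whatever per-layer upscaler a real pipeline would use, and the mask-based layer split is an illustrative assumption.

```python
# Sketch: split a frame into layers, upscale each layer on its own,
# then composite the layers back into a higher-resolution frame.

def split_layers(frame, mask):
    """Separate a frame into foreground/background layers by mask."""
    fg = [[p if m else 0 for p, m in zip(prow, mrow)]
          for prow, mrow in zip(frame, mask)]
    bg = [[0 if m else p for p, m in zip(prow, mrow)]
          for prow, mrow in zip(frame, mask)]
    return fg, bg

def upscale2x(layer):
    """Nearest-neighbour 2x upscale of a single layer."""
    out = []
    for row in layer:
        wide = [p for p in row for _ in (0, 1)]
        out += [wide, list(wide)]
    return out

def reassemble(fg, bg):
    """Composite the upscaled layers back into one frame."""
    return [[f if f else b for f, b in zip(frow, brow)]
            for frow, brow in zip(fg, bg)]

frame = [[1, 2], [3, 4]]
mask  = [[True, False], [False, True]]
fg, bg = split_layers(frame, mask)
result = reassemble(upscale2x(fg), upscale2x(bg))
```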
A portal device for a video game includes a pad with different zones that can be illuminated by selectable colors, a toy sensor (e.g., an RFID tag sensor) associated with each zone, a controller and a communications port for communicating with a video game process executing on a game machine. The colors of each zone can be configured to one or a combination of three primary colors during game play, based on the game process. The portal device reacts to placement of tagged toys on zones and the color of the zones during play and provides sensor data to the game process. The game process controls the game state in part based on data from the portal device and in part on other user input.
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 9/24 - Games using electronic circuits not otherwise provided for
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
A game modification engine modifies configuration settings affecting game play and the user experience in computer games after initial publication of the game, based on device level and game play data associated with a user or cohort of users and on machine-learned relationships between input data and a use metric for the game. The modification is selected to improve performance of the game as measured by the use metric. The modification may be tailored for a user cohort. The game modification engine may define the cohort automatically based on correlations discovered in the input data relative to a defined use metric.
A63F 13/67 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor adaptively or by learning from player actions, e.g. skill level adjustment or by storing successful combat sequences for re-use
A63F 13/77 - Game security or game management aspects involving data related to game devices or game servers, e.g. configuration data, software version or amount of memory
A playback device includes a port configured to receive content from an external memory device, a device memory residing in the device, and a controller programmed to execute instructions that cause the controller to read a data pattern from a defined region in the external memory device and determine whether the read data pattern correlates to an expected data pattern to a predetermined level, wherein the expected data pattern is derived at least in part from a defect map of the defined region.
A method for automatically flagging sensitive portions of a digital dataset for media production includes receiving the digital dataset comprising at least one of audio data, video data, or audio-video data for producing at least one media program. A processor identifies sensitive portions of the digital dataset likely to be in one or more defined content classifications, based at least in part on comparing unclassified portions of the digital dataset with classified portions of prior media productions using an algorithm, and generates a plurality of sensitivity tags, each signifying a sensitivity assessment for a corresponding one of the sensitive portions. The processor may save the plurality of sensitivity tags, each correlated to its corresponding one of the sensitive portions, in a computer memory for use by a media production or localization team.
Methods and apparatus control scrolling of graphical content arranged in a sequence of panels by a computer responsive to a user input device. A method includes sensing, by the computer, a direction and length of continuous cursor movement along a first axis of the user input device. The method further includes progressing display of the graphical content through the sequence of panels based on the direction and length of the continuous cursor movement. The user input device may be a touchscreen or a touchpad and the computer may determine the cursor location based on a one-finger touch registered by the user input device. The method enables the user to scroll through panels of graphical content using a reduced number of finger taps and smaller movements, reducing hand and finger fatigue.
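The panel-progression behaviour in this abstract can be sketched as a mapping from the signed length of a continuous cursor movement to a panel index. The pixels-per-panel threshold and the clamping at the ends of the sequence are illustrative assumptions.

```python
# Sketch: advance through a sequence of panels by the direction and
# length of one continuous cursor movement, so a single long drag
# replaces many finger taps. Threshold value is invented.

def next_panel(current, movement_px, panel_count, px_per_panel=120):
    """Return the new panel index after a continuous movement of
    movement_px pixels along the scroll axis (signed)."""
    step = int(movement_px / px_per_panel)   # truncates toward zero
    return max(0, min(panel_count - 1, current + step))

# One long downward drag advances several panels without extra taps:
p = next_panel(current=0, movement_px=370, panel_count=10)
# Dragging the other way moves back, clamped at the first panel:
q = next_panel(current=1, movement_px=-500, panel_count=10)
```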
Methods and apparatus for improving automatic selection and timing of messages by a machine or system of machines include an inductive computational process driven by log-level network data from mobile devices and other network-connected devices, optionally in addition to traditional application-level data from cookies or the like. The methods and apparatus may be used, for example, to improve or optimize effectiveness of automatically-generated electronic communications with consumers and potential consumers for achieving a specified target.
A computer-implemented method for obtaining a digital representation of user engagement with audio-video content includes playing digital data comprising audio-video content by an output device that outputs audio-video output based on the digital data and receiving sensor data from at least one sensor positioned to sense an involuntary response of one or more users while engaged with the audio-video output. The method further includes determining a Content Engagement Power (CEP) value, based on the sensor data and recording the digital representation of CEP in a computer memory. An apparatus is configured to perform the method using hardware, firmware, and/or software.
H04N 21/8545 - Content authoring for generating interactive applications
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/8541 - Content authoring involving branching, e.g. to different story endings
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
A61B 5/16 - Devices for psychotechnics; Testing reaction times
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09B 19/00 - Teaching not covered by other main groups of this subclass
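The CEP abstract above reduces involuntary-response sensor data to a single engagement value along dimensions such as valence, arousal, and dominance. A minimal sketch of one such reduction follows; the per-user baseline, the weights, and the sample values are illustrative assumptions, not the patented computation.

```python
# Sketch: Content Engagement Power (CEP) as a weighted mean deviation
# of multi-dimensional sensor readings from a per-user baseline.

def cep(samples, baseline, weights):
    """samples: list of dicts with valence/arousal/dominance readings.
    Returns the weighted mean deviation from the user's baseline."""
    dims = ("valence", "arousal", "dominance")
    total = 0.0
    for s in samples:
        total += sum(weights[d] * (s[d] - baseline[d]) for d in dims)
    return total / len(samples)

baseline = {"valence": 0.0, "arousal": 0.2, "dominance": 0.1}
weights  = {"valence": 0.5, "arousal": 0.3, "dominance": 0.2}
samples = [
    {"valence": 0.6, "arousal": 0.7, "dominance": 0.3},
    {"valence": 0.4, "arousal": 0.5, "dominance": 0.2},
]
score = cep(samples, baseline, weights)
```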
62.
Production and control of cinematic content responsive to user emotional state
A computer-implemented method for providing cinematic content to a user via a computer-controlled media player includes accessing, by a processor, a content package including a targeted emotional arc and a collection of digital objects each associated with codes indicating an emotional profile of each digital object; playing digital objects selected from the content package, thereby outputting an audio-video signal for display by an output device; receiving sensor data from at least one sensor positioned to sense a biometric feature of a user watching the output device; determining a value of one or more emotional state variables based on the sensor data; and selecting the digital objects for the playing based on the one or more emotional state variables, a recent value of the targeted emotional arc, and the one or more codes indicating an emotional profile. An apparatus is configured to perform the method using hardware, firmware, and/or software.
H04H 60/33 - Arrangements for monitoring the users' behaviour or opinions
H04H 60/56 - Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups H04H 60/29 - H04H 60/54
H04N 7/10 - Adaptations for transmission by electrical cable
H04N 7/025 - Systems for transmission of digital non-picture data, e.g. of text during the active part of a television frame
H04N 21/8545 - Content authoring for generating interactive applications
H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/8541 - Content authoring involving branching, e.g. to different story endings
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
A61B 5/16 - Devices for psychotechnics; Testing reaction times
G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G09B 19/00 - Teaching not covered by other main groups of this subclass
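As an illustrative sketch only (the function name, the distance metric, and the emotion axes are assumptions, not taken from the patent), the selection step of entry 62 can be modeled as choosing the digital object whose emotional profile best moves the sensed user state toward the targeted emotional arc:

```python
import math

def select_next_object(objects, target_arc_value, user_state):
    """Pick the digital object whose emotional profile best steers the
    user toward the targeted arc value, given the current sensed state.

    objects: list of (name, profile), where profile maps an emotion
             axis to a value, e.g. {"valence": 0.4, "arousal": 0.8}.
    target_arc_value, user_state: dicts over the same axes.
    """
    def distance(profile):
        # Euclidean distance between the arc target and the state the
        # user is predicted to reach after experiencing this object.
        return math.sqrt(sum(
            (target_arc_value[axis] - (user_state[axis] + profile[axis])) ** 2
            for axis in target_arc_value))
    return min(objects, key=lambda item: distance(item[1]))

clips = [
    ("calm_scene",  {"valence": 0.1, "arousal": -0.3}),
    ("chase_scene", {"valence": -0.2, "arousal": 0.6}),
]
# The arc calls for high arousal and the user is currently calm,
# so the high-arousal clip is selected.
choice = select_next_object(clips,
                            {"valence": 0.0, "arousal": 0.7},
                            {"valence": 0.1, "arousal": 0.0})
```

A real implementation would weight the axes and fold in the biometric sensor model; the sketch only shows the shape of the selection loop.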
Methods for managing digital content include authenticating a user account identifier from a client device over a computer network and, based on the authenticating, registering a telephone number for at least one wireless mobile device in a registry identified with the user account as a pre-authorized identifier for accessing digital content licensed for use with the client device. The methods include maintaining a library of digital content identified with the user account for access by the at least one wireless mobile device, and initiating streaming of the digital content to the at least one wireless mobile device without requiring user authentication from the at least one wireless mobile device, based on the registering of the telephone number as the pre-authorized identifier. An apparatus for performing the method comprises a processor coupled to a memory, the memory holding instructions for performing steps of the method as summarized above.
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/254 - Management at additional data server, e.g. shopping server or rights management server
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
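A minimal sketch of the pre-authorized-number flow described above (class and method names are assumptions for illustration): a phone number registered under an already-authenticated account may start streams from the account's library without any per-device login.

```python
class ContentService:
    """Toy model of the registry-based authorization described in the
    abstract: account authentication happens once; registered numbers
    then stream without further user authentication."""

    def __init__(self):
        self.registry = {}   # account_id -> set of pre-authorized numbers
        self.library = {}    # account_id -> set of licensed titles

    def register_number(self, account_id, phone_number):
        # In the described method this is called only after the account
        # identifier has been authenticated from the client device.
        self.registry.setdefault(account_id, set()).add(phone_number)

    def start_stream(self, account_id, phone_number, title):
        # No per-device authentication: the registered number suffices.
        if phone_number not in self.registry.get(account_id, set()):
            raise PermissionError("number not pre-authorized")
        if title not in self.library.get(account_id, set()):
            raise KeyError("title not in account library")
        return f"streaming {title} to {phone_number}"
```

The sketch omits license checks, token expiry, and transport details that a production service would require.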
A specification defining allowable luma and chroma code-values is applied in a region-of-interest encoding method of a mezzanine compression process. The method may include analyzing an input image to determine regions or areas within each image frame that contain code-values near the allowable limits set by the specification. The region-of-interest method may then comprise compressing those regions with higher precision than the other regions of the image, whose code-values are not close to the legal limits.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/64 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
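The region-of-interest analysis above can be sketched as a per-block scan that flags blocks whose code-values fall within a margin of the legal limits; flagged blocks would then be encoded at higher precision. The function name, the block size, and the margin are illustrative assumptions, not values from the patent.

```python
def precision_map(frame, lo, hi, margin, block=8):
    """Return per-block flags: True means the block contains code-values
    within `margin` of the allowable limits [lo, hi] and should be
    compressed at higher precision; False means normal precision.

    frame: 2D list of integer code-values (one component plane).
    """
    rows, cols = len(frame), len(frame[0])
    flags = []
    for by in range(0, rows, block):
        row_flags = []
        for bx in range(0, cols, block):
            vals = [frame[y][x]
                    for y in range(by, min(by + block, rows))
                    for x in range(bx, min(bx + block, cols))]
            # Near-limit test against both ends of the legal range.
            near_limit = min(vals) <= lo + margin or max(vals) >= hi - margin
            row_flags.append(near_limit)
        flags.append(row_flags)
    return flags
```

For example, with a 10-bit legal range of roughly 64-940 (an assumption about the governing specification), a block touching code-value 64 would be flagged while a mid-range block would not.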
65.
Cinematic mastering for virtual reality and augmented reality
An entertainment system provides data to a common screen (e.g., cinema screen) and personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices each configured for providing immersive output (e.g., a virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and shared audio system.
An improved video streaming method and system reduce video stream playback dropouts when video data flow from a server is degraded or interrupted. Two adaptive bitrate video streams of the content are operated in parallel, with video quality that can be adjusted independently for each. The first stream receives data blocks at a rate generally concurrent with the video playback. The second stream receives data blocks in advance of the playback point, and these blocks are buffered. The best available block is used during playback. Buffered second-stream blocks can be used to continue playback when the network connection is compromised or interrupted.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2381 - Adapting the multiplex stream to a specific network, e.g. an IP [Internet Protocol] network
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
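The "best available block" rule of the dual-stream method above can be sketched as follows (the data layout and the quality comparison are assumptions for illustration): at each playback index, prefer the live stream's block when it exists at equal or better quality, and otherwise fall back to the pre-fetched buffer.

```python
def pick_block(index, live_blocks, buffered_blocks):
    """Choose the best available block for playback position `index`.

    live_blocks / buffered_blocks: dicts mapping block index to a
    (quality, data) tuple. The live stream is preferred when it has the
    block at equal or better quality; otherwise the pre-fetched buffer
    keeps playback running through network degradation.
    """
    live = live_blocks.get(index)
    buffered = buffered_blocks.get(index)
    if live and buffered:
        return live if live[0] >= buffered[0] else buffered
    return live or buffered  # None means neither stream has it: a stall
```

In practice both streams would adapt their bitrates independently, so the comparison above runs with continuously changing quality values.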
67.
Transforming audio content for subjective fidelity
A method or apparatus for delivering audio programming such as music to listeners may include identifying, capturing and applying a listener's audiometric profile to transform audio content so that the listener hears the content similarly to how the content was originally heard by a creative producer of the content. An audio testing tool may be implemented as a software application to identify and capture the listener's audiometric profile. A signal processor may operate an algorithm for processing source audio content, obtaining an identity and an audiometric reference profile of the creative producer from metadata associated with the content. The signal processor may then provide audio output based on a difference between the listener's and the creative producer's audiometric profiles.
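The profile-difference step can be sketched as a per-band gain computation (the band layout, the dB values, and the sign convention are assumptions, not from the patent): bands where the listener hears less than the producer did are boosted, and bands where the listener hears more are cut.

```python
def compensation_gains(producer_profile, listener_profile):
    """Per-band gain in dB that makes the listener's perceived response
    approximate the producer's, computed as the difference between the
    two audiometric profiles.

    Profiles map band centre (Hz) -> hearing sensitivity in dB, where
    more negative means poorer hearing in that band (an assumed
    convention for this sketch).
    """
    return {band: producer_profile[band] - listener_profile[band]
            for band in producer_profile}

producer = {250: 0.0, 1000: 0.0, 4000: -5.0}
listener = {250: 0.0, 1000: 5.0, 4000: -20.0}
# Listener hears 1 kHz better than the producer (cut) and 4 kHz much
# worse (boost); 250 Hz matches and needs no change.
gains = compensation_gains(producer, listener)
```

A real signal path would clamp the gains and smooth them across bands before applying them to an equalizer.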
A method for transforming extended video data for display in virtual reality processes digital extended video data for display on a center screen and two auxiliary screens of a real extended video cinema. The method includes accessing, by a computer executing a rendering application, data that defines virtual screens including a center screen and auxiliary screens, wherein tangent lines to each of the auxiliary screens at their respective centers of area intersect with a tangent line to the center screen at its center of area at equal angles in a range of 75 to 105 degrees. The method includes preparing virtual extended video data at least in part by rendering the digital extended video on corresponding ones of the virtual screens; and saving the virtual extended video data in a computer memory. A corresponding playback method and apparatus display the processed data in virtual reality.
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
H04N 13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
A digital still image is processed using motion-adding algorithms that are provided with an original still image and a set of motionizing parameters. Output of the motion-adding algorithms includes a motionized digital image suitable for display by any digital image display device. The motionized digital image may be used in place of a still image in any context in which a still image would be used, for example in an ebook, e-zine, digital graphic novel, website, picture or poster, or user interface.
A method and apparatus for controlling a social robot includes operating an electronic output device based on social interactions between the social robot and a user. The social robot utilizes an algorithm or other logical solution process to infer a user mental state, for example a mood or desire, based on observation of the social interaction. Based on the inferred mental state, the social robot causes an action of the electronic output device to be selected. Actions may include, for example, playing a selected video clip, brewing a cup of coffee, or adjusting window blinds.
Methods, apparatus and systems for geometric matching of virtual reality (VR) or augmented reality (AR) output, contemporaneously with video output formatted for display on a 2D screen, include determining value sets that, when used in image processing, cause an off-screen angular field of view of the AR output object or the VR output object to have a fixed relationship to the angular field of view of the on-screen object or of the 2D screen. The AR/VR output object is output to an AR/VR display device, and the user experience is improved by the geometric matching between objects observed on the AR/VR display device and corresponding objects appearing on the 2D screen.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
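The geometric relationship underlying the matching above reduces to equating subtended angles. A minimal sketch (function names and parameters are assumptions for illustration): the angular size of an object of width w at distance d is 2·atan(w / 2d), so a virtual object placed at a different distance can be sized to subtend exactly the same angle as its on-screen counterpart.

```python
import math

def angular_size(width, distance):
    """Angular field of view (radians) subtended by an object of the
    given width seen face-on from the given distance."""
    return 2.0 * math.atan(width / (2.0 * distance))

def matched_virtual_width(screen_obj_width, screen_distance, virtual_distance):
    """Width for the AR/VR object, placed at virtual_distance, that
    subtends the same angle as the on-screen object does from the
    viewer's position."""
    half_angle = math.atan(screen_obj_width / (2.0 * screen_distance))
    return 2.0 * virtual_distance * math.tan(half_angle)
```

Doubling the placement distance doubles the required width, keeping the angular field of view fixed, which is the invariant the claimed value sets maintain.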
Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.
Video conversion technology in which a first stream of video content is accessed and multiple, different layers are extracted from the first stream of the video content. Each of the multiple, different layers is separately processed to convert it into a modified layer having a higher resolution. The modified layers are reassembled into a second stream of the video content that has a higher resolution than the first stream of the video content.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
76.
Production and packaging of entertainment data for virtual reality
An augmented reality (AR) output device or virtual reality (VR) output device is worn by a user and includes one or more sensors positioned to detect actions performed by the user of the immersive output device. A processor provides a data signal configured for the AR or VR output device, causing the immersive output device to provide AR or VR output via a stereographic display device. The data signal encodes audio-video data. The processor controls the pace of scripted events defined by a narrative in the AR output or the VR output, based on output from the one or more sensors indicating actions performed by the user of the AR or VR output device. The audio-video data may be packaged in a non-transitory computer-readable medium with additional content that is coordinated with the defined narrative and is configured for providing an alternative output, such as 2D video output or stereoscopic 3D output.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/356 - Image reproducers having separate monoscopic and stereoscopic modes
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
77.
Method and apparatus for generating media presentation content with environmentally modified audio components
An apparatus is described for generating a presentation from content having original audio and video components. An environment detector is configured to output an environment-type signal indicating a particular detected environment. An acoustics memory is configured to output selected acoustic characteristics indicative of the environment identified by the environment-type signal. An audio processor receives the audio components and the acoustic characteristics and modifies the original audio components to produce modified audio components based on the selected acoustic characteristics. The presentation, including the modified audio components, is output.
H04N 13/161 - Encoding, multiplexing or demultiplexing different image signal components
H04S 7/00 - Indicating arrangements; Control arrangements, e.g. balance control
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 9/802 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving processing of the sound signal
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
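The detector-memory-processor chain above can be sketched as a lookup followed by a simple signal modification. The environment names, parameter values, and the single-tap echo are all illustrative assumptions; the patent's acoustic characteristics would be far richer.

```python
# Hypothetical acoustic characteristics keyed by environment type,
# standing in for the "acoustics memory".
ACOUSTICS = {
    "cathedral": {"reverb_s": 4.5, "hf_damping": 0.6},
    "car":       {"reverb_s": 0.2, "hf_damping": 0.9},
    "outdoor":   {"reverb_s": 0.0, "hf_damping": 0.3},
}

def modify_audio(samples, environment_type):
    """Toy stand-in for the audio processor: mix a delayed copy of the
    signal back in, scaled by the looked-up reverb time, so the output
    carries the character of the detected environment."""
    params = ACOUSTICS[environment_type]
    wet = min(params["reverb_s"] / 5.0, 1.0)    # crude wet/dry ratio
    delay = 3                                   # samples, arbitrary
    out = list(samples)
    for i in range(delay, len(out)):
        out[i] += wet * samples[i - delay]
    return out
```

With a zero-reverb environment the signal passes through unchanged, which is the degenerate case of the lookup.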
78.
Control of social robot based on prior character portrayal
A method and apparatus for controlling a social robot includes providing a set of quantitative personality trait values, also called a “personality profile,” to a decision engine of the social robot. The personality profile is derived from a portrayal of a character in a fictional work or dramatic performance, or from a real-life person (any one of these sometimes referred to herein as a “source character”). The decision engine controls social responses of the social robot to environmental stimuli, based in part on the set of personality trait values. The social robot thereby behaves in a manner consistent with the personality profile of the profiled source character.
G06N 3/00 - Computing arrangements based on biological models
G06N 3/008 - Artificial life, i.e. computing arrangements simulating life based on physical entities controlled by simulated intelligence so as to replicate intelligent life forms, e.g. based on robots replicating pets or humans in their appearance or behaviour
79.
Method and apparatus for color difference transform
Efficient image compression for video data characterized by a non-neutral dominant white point is achieved by transforming the input video signal into a de-correlated video signal based on a color difference encoding transform, wherein the color difference encoding transform is adapted based on the dominant white point using an algorithm. The adapting algorithm is designed for optimizing low-entropy output when the white point is other than a neutral or equal-energy value. Decompression is handled conversely.
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 1/64 - Colour picture communication systems - Details therefor, e.g. coding or decoding means therefor
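The white-point adaptation in entry 79 can be sketched as normalizing the input by the dominant white point before forming luma and color-difference channels, so that the white point itself encodes to zero color difference (low entropy). The luma weights below are the standard Rec. 709 coefficients; their use here, and the transform structure, are assumptions for illustration rather than the patent's algorithm.

```python
def encode_color_difference(r, g, b, white_point=(1.0, 1.0, 1.0)):
    """De-correlate an RGB sample into a luma-like channel and two
    difference channels, after normalising by the dominant white point
    so the white point maps to zero color difference."""
    wr, wg, wb = white_point
    rn, gn, bn = r / wr, g / wg, b / wb
    y = 0.2126 * rn + 0.7152 * gn + 0.0722 * bn   # Rec. 709 luma weights
    return y, rn - y, bn - y

def decode_color_difference(y, dr, db, white_point=(1.0, 1.0, 1.0)):
    """Inverse of encode_color_difference ("decompression is handled
    conversely")."""
    wr, wg, wb = white_point
    rn, bn = dr + y, db + y
    gn = (y - 0.2126 * rn - 0.0722 * bn) / 0.7152
    return rn * wr, gn * wg, bn * wb
```

A sample equal to the white point produces near-zero difference channels, which is the low-entropy property the adapting algorithm targets.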
80.
Mixed reality system for context-aware virtual object rendering
A computer-implemented method in conjunction with mixed reality gear (e.g., a headset) includes imaging a real scene encompassing a user wearing a mixed reality output apparatus. The method includes determining data describing a real context of the real scene, based on the imaging; for example, identifying or classifying objects, lighting, sound or persons in the scene. The method includes selecting a set of content including content enabling rendering of at least one virtual object from a content library, based on the data describing a real context, using various selection algorithms. The method includes rendering the virtual object in the mixed reality session by the mixed reality output apparatus, optionally based on the data describing a real context (“context parameters”). An apparatus is configured to perform the method using hardware, firmware, and/or software.
A63F 13/217 - Input arrangements for video game devices characterised by their sensors, purposes or types using environment-related information, i.e. information generated otherwise than by the player, e.g. ambient temperature or humidity
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/211 - Input arrangements for video game devices characterised by their sensors, purposes or types using inertial sensors, e.g. accelerometers or gyroscopes
A63F 13/25 - Output arrangements for video game devices
81.
Immersive virtual reality production and playback for storytelling content
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
H04N 13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
82.
Biometric feedback in production and playback of video content
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
H04N 13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
83.
Social and procedural effects for computer-generated environments
A sensor coupled to an AR/VR headset detects an eye convergence distance. A processor adjusts a focus distance for a virtual camera that determines rendering of a three-dimensional (3D) object for a display device of the headset, based on at least one of the eye convergence distance or a directed focus of attention for the at least one of the VR content or the AR content.
A63F 13/525 - Changing parameters of virtual cameras
G06T 19/00 - Manipulating 3D models or images for computer graphics
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04L 29/06 - Communication control; Communication processing characterised by a protocol
A63F 13/28 - Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
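The convergence-based focus adjustment of entry 83 rests on simple eye geometry: the gaze lines of two eyes separated by the interpupillary distance (IPD) cross at a distance fixed by the vergence angle. A minimal sketch (the blend function and its `bias` parameter are assumptions for illustration):

```python
import math

def convergence_distance(ipd_m, vergence_angle_rad):
    """Distance (metres) at which the two eyes' gaze lines cross, from
    the interpupillary distance and the measured vergence angle."""
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def focus_for_camera(convergence_m, attention_m=None, bias=0.5):
    """Blend the sensed convergence distance with a directed focus of
    attention when the content supplies one, yielding the focus
    distance for the virtual rendering camera."""
    if attention_m is None:
        return convergence_m
    return (1.0 - bias) * convergence_m + bias * attention_m
```

For a typical 63 mm IPD, a vergence angle of about 3.6 degrees corresponds to a focus distance of roughly one metre.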
A portal device for a video game includes a pad with different zones that can be illuminated in selectable colors, a toy sensor (e.g., an RFID tag sensor) associated with each zone, a controller, and a communications port for communicating with a video game process executing on a game machine. The color of each zone can be configured to one of, or a combination of, three primary colors during game play, based on the game process. The portal device reacts to placement of tagged toys on zones and to the color of the zones during play, and provides sensor data to the game process. The game process controls the game state in part based on data from the portal device and in part on other user input.
A63F 9/24 - Games using electronic circuits not otherwise provided for
A63F 13/00 - Video games, i.e. games using an electronically generated display having two or more dimensions
A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
A63F 13/214 - Input arrangements for video game devices characterised by their sensors, purposes or types for locating contacts on a surface, e.g. floor mats or touch pads
A63F 13/65 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition
G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
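The zone-plus-sensor behavior of the portal device can be sketched as follows (the class shape and the event dictionary are assumptions, not from the patent): each zone carries a configurable combination of primary colors, and placing a tagged toy yields a sensor event that reports the toy's tag together with the zone's current colors to the game process.

```python
PRIMARIES = {"red", "green", "blue"}

class Zone:
    """One illuminated pad zone with an RFID toy sensor (sketch)."""

    def __init__(self, zone_id):
        self.zone_id = zone_id
        self.colors = set()      # one of, or a combination of, primaries
        self.tag = None          # RFID tag of the toy placed on the zone

    def set_colors(self, colors):
        # Colors are driven by the game process during play.
        assert set(colors) <= PRIMARIES
        self.colors = set(colors)

    def place_toy(self, tag):
        # Sensor event reported to the game process: which toy, on
        # which zone, under which current zone colors.
        self.tag = tag
        return {"zone": self.zone_id, "tag": tag,
                "colors": sorted(self.colors)}
```

The game process would consume these events alongside other user input to update the game state.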
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/388 - Volumetric displays, i.e. systems where the image is built up from picture elements distributed through a volume
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
86.
Immersive virtual reality production and playback for storytelling content
Methods for digital content production and playback of an immersive stereographic video work provide or enhance interactivity of immersive entertainment using various different playback and production techniques. “Immersive stereographic” may refer to virtual reality, augmented reality, or both. The methods may be implemented using specialized equipment for immersive stereographic playback or production. Aspects of the methods may be encoded as instructions in a computer memory, executable by one or more processors of the equipment to perform the aspects.
H04N 13/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details thereof
H04N 13/117 - Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation the virtual viewpoint locations being selected by the viewers or determined by viewer tracking
H04N 13/25 - Image signal generators using stereoscopic image cameras using image signals from one sensor to control the characteristics of another sensor
H04N 13/344 - Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
87.
Transforming audio content for subjective fidelity
A method or apparatus for delivering audio programming such as music to listeners may include identifying, capturing and applying a listener's audiometric profile to transform audio content so that the listener hears the content similarly to how the content was originally heard by a creative producer of the content. An audio testing tool may be implemented as a software application to identify and capture the listener's audiometric profile. A signal processor may operate an algorithm for processing source audio content, obtaining an identity and an audiometric reference profile of the creative producer from metadata associated with the content. The signal processor may then provide audio output based on a difference between the listener's and the creative producer's audiometric profiles.
A specification defining allowable luma and chroma code-values is applied in a region-of-interest encoding method of a mezzanine compression process. The method may include analyzing an input image to determine regions or areas within each image frame that contain code-values near the allowable limits specified by the specification. The region-of-interest method may then compress those regions with higher precision than the other regions of the image whose code-values are not close to the legal limits.
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/64 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel
89.
Generation and use of user-selected scenes playlist from distributed digital content
A digital content package includes first content comprising a video feature such as a motion picture or the like, and a user-selectable application configured to operate as follows. When activated from an icon on a menu screen, the application records an identifier for scenes (discrete portions) of the first content that are selected by a user to generate a playlist. The user may select the scenes by indicating a start and end of each scene. The application saves the playlist locally, then uploads it to a server. Via a user account at the server, a user may publish the playlist to a user-created distribution list, webpage, or other electronic publication, and modify the playlist by deleting or reordering scenes.
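The scene-playlist operations described above (mark a scene by start and end, then delete or reorder) can be sketched with a small data model. The class and field names below are illustrative, not taken from the patent.

```python
# Hypothetical sketch of the scene-playlist model: each scene is a
# (start, end) span of the first content, and the playlist supports the
# delete and reorder edits described in the abstract before publication.

class ScenePlaylist:
    def __init__(self, content_id):
        self.content_id = content_id
        self.scenes = []          # list of (start_sec, end_sec) tuples

    def mark_scene(self, start, end):
        """Record a scene by its start and end time within the content."""
        if end <= start:
            raise ValueError("scene end must follow its start")
        self.scenes.append((start, end))

    def delete_scene(self, index):
        del self.scenes[index]

    def reorder(self, new_order):
        """Rearrange scenes; new_order lists current indices in desired order."""
        self.scenes = [self.scenes[i] for i in new_order]

pl = ScenePlaylist("feature-001")
pl.mark_scene(120, 180)
pl.mark_scene(600, 645)
pl.mark_scene(30, 55)
pl.reorder([2, 0, 1])
print(pl.scenes)  # [(30, 55), (120, 180), (600, 645)]
```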
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/482 - End-user interface for program selection
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
Methods for managing digital content include authenticating a user account identifier from a client device over a computer network, registering a telephone number for at least one wireless mobile device in a registry identified with the user account based on the authenticating, as a pre-authorized identifier for accessing digital content licensed for use with the client device. The methods include maintaining a library of digital content identified with the user account for access by the at least one wireless mobile device, and initiating streaming of the digital video content to the at least one wireless mobile device without requiring user authentication from the at least one wireless mobile device, based on the registering of the telephone number as the pre-authorized identifier. An apparatus for performing the method comprises a processor coupled to a memory, the memory holding instructions for performing steps of the method as summarized above.
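The registration-then-streaming flow above can be sketched briefly. The function and variable names are illustrative assumptions; the point shown is only that a telephone number registered under an authenticated account later acts as a pre-authorized identifier, so streaming requests matching it need no further login.

```python
# Minimal sketch (illustrative names) of the pre-authorization registry:
# an authenticated account registers a phone number; later access checks
# consult the registry instead of prompting for credentials again.

registry = {}   # account_id -> set of pre-authorized phone numbers

def register_number(account_id, phone, authenticated):
    """Add a phone number to the account's registry; requires a prior
    successful authentication of the account from the client device."""
    if not authenticated:
        raise PermissionError("account must be authenticated to register")
    registry.setdefault(account_id, set()).add(phone)

def may_stream(account_id, phone):
    """True if this number was pre-authorized for the account's library."""
    return phone in registry.get(account_id, set())

register_number("acct-42", "+15551234567", authenticated=True)
print(may_stream("acct-42", "+15551234567"))  # True: no further login needed
print(may_stream("acct-42", "+15559999999"))  # False
```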
H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
H04N 21/254 - Management at additional data server, e.g. shopping server or rights management server
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
Video conversion technology, in which a first stream of video content is accessed and multiple, different layers are extracted from the first stream of the video content. Each of the multiple, different layers are separately processed to convert the multiple, different layers into modified layers that each have a higher resolution. The modified layers are reassembled into a second stream of the video content that has a higher resolution than the first stream of the video content.
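The extract/upscale/reassemble pipeline above can be sketched in miniature. Representing a layer as a 2-D list and upscaling with 2x nearest-neighbour duplication are illustrative stand-ins for whatever layer decomposition and resolution-enhancement the patent actually contemplates.

```python
# Sketch of the layered up-conversion pipeline (illustrative only): split a
# frame into layers, upscale each layer independently, then reassemble into
# a single higher-resolution frame. Zero pixels mark layer transparency here.

def upscale2x(layer):
    """2x nearest-neighbour upscaling of a 2-D pixel list."""
    out = []
    for row in layer:
        wide = [p for p in row for _ in (0, 1)]   # duplicate each pixel
        out.append(wide)
        out.append(list(wide))                    # duplicate each row
    return out

def convert_frame(layers):
    """layers: dict layer_name -> 2-D pixel list for one frame.
    Each layer is upscaled separately, then composited in name order."""
    modified = {name: upscale2x(layer) for name, layer in layers.items()}
    first = next(iter(modified.values()))
    h, w = len(first), len(first[0])
    frame = [[0] * w for _ in range(h)]
    for name in sorted(modified):                 # background first by name
        for y in range(h):
            for x in range(w):
                frame[y][x] = modified[name][y][x] or frame[y][x]
    return frame

background = [[1, 1], [1, 1]]
foreground = [[0, 9], [0, 0]]
print(convert_frame({"a_background": background, "b_foreground": foreground}))
```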
G09G 5/02 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
G09G 5/00 - Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
A digital media distribution device that includes an encoder, a decoder coupled to the encoder, and a transcoder coupled to the decoder. The encoder is configured to encode input data that is received by the digital media distribution device into a first data format. The decoder is configured to decode output data to be output by the digital media distribution device. The transcoder is configured to convert the encoded input data from the first data format into a second data format. The digital media distribution device is configured to be coupled to a computer network.
H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
H04N 7/16 - Analogue secrecy systems; Analogue subscription systems
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 12/50 - Circuit switching systems, i.e. systems in which the path is physically permanent during the communication
H04Q 11/00 - Selecting arrangements for multiplex systems
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/61 - Network physical structure; Signal processing
H04N 21/233 - Processing of audio elementary streams
H04N 19/157 - Assigned coding mode, i.e. the coding mode being predefined or preselected to be further used for selection of another element or parameter
H04N 19/40 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video transcoding, i.e. partial or full decoding of a coded input stream followed by re-encoding of the decoded output stream
H04N 21/60 - Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client; Communication details between server and client
93.
Method and apparatus for generating 3D audio positioning using dynamically optimized audio 3D space perception cues
A first audio/video encoder receives an input and encodes said input into audio-visual content having visual objects and audio objects, said audio objects being disposed at a location corresponding to a spatial position, said encoder using encoding coefficients for said encoding.
H04N 9/87 - Regeneration of colour television signals
G11B 27/30 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording on the same track as the main recording
G11B 27/034 - Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
H04N 19/597 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
G10L 19/20 - Vocoders using multiple modes using sound class specific coding, hybrid encoders or object based coding
Mathematical relationships between the scene geometry, camera parameters, and viewing environment are used to control stereography to obtain various results influencing the viewer's perception of 3D imagery. The methods may include setting a horizontal shift, convergence distance, and camera interaxial parameter to achieve various effects. The methods may be implemented in a computer-implemented tool for interactively modifying scene parameters during a 2D-to-3D conversion process, which may then trigger the re-rendering of the 3D content on the fly.
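The scene-geometry relationship gestured at above can be made concrete with the standard stereography parallax model (this is textbook geometry, not a formula quoted from the patent): with sensor-shifted parallel cameras, a point at depth Z projects with image parallax p(Z) = f·t·(1/Zc − 1/Z), where t is the camera interaxial separation, f the focal length, and Zc the convergence distance. Parallax is zero at the convergence distance, positive (behind the screen) beyond it, and negative (in front of the screen) closer in.

```python
# Worked sketch of the parallax model above; all numeric values (65 mm
# interaxial, 35 mm focal length, 5 m convergence) are example inputs only.

def screen_parallax(depth, interaxial, focal, convergence):
    """Image-plane parallax for a point at `depth`: zero at the convergence
    distance, positive (behind-screen) beyond it, negative in front."""
    return focal * interaxial * (1.0 / convergence - 1.0 / depth)

# Choosing the convergence distance places a chosen object on the screen
# plane, one of the effects an interactive 2D-to-3D tool can expose.
p_far = screen_parallax(depth=100.0, interaxial=0.065, focal=0.035, convergence=5.0)
p_on  = screen_parallax(depth=5.0,  interaxial=0.065, focal=0.035, convergence=5.0)
print(p_far, p_on)  # distant object: positive parallax; converged object: 0.0
```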
The present invention pertains to an apparatus and method for adding a graphic element, such as a subtitle, to selected locations of the frames in a 3D movie. The authoring tool receives a depth map indicating the position of various objects in the frames of 3D content along a Z-axis. The authoring device then designates a position for at least one additional graphic element in at least some of the frames, these positions being determined in relation either to the positions of the objects or the position of the screen along said Z-axis. An encoder uses parameters from the authoring tool to reauthor the 3D movie by adding the graphic content to the positions designated by the parameters.
H04N 9/82 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
An augmented reality (AR) output device or virtual reality (VR) output device is worn by a user, and includes one or more sensors positioned to detect actions performed by a user of the immersive output device. A processor provides a data signal configured for the AR or VR output device, causing the immersive output device to provide AR output or VR output via a stereographic display device. The data signal encodes audio-video data. The processor controls a pace of scripted events defined by a narrative in the one of the AR output or the VR output, based on output from the one or more sensors indicating actions performed by a user of the AR or VR output device. The audio-video data may be packaged in a non-transitory computer-readable medium with additional content that is coordinated with the defined narrative and is configured for providing an alternative output, such as 2D video output or the stereoscopic 3D output.
G06T 19/00 - Manipulating 3D models or images for computer graphics
H04N 13/356 - Image reproducers having separate monoscopic and stereoscopic modes
G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
G06F 3/0487 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
98.
Cinematic mastering for virtual reality and augmented reality
An entertainment system provides data to a common screen (e.g., cinema screen) and personal immersive reality devices. For example, a cinematic data distribution server communicates with multiple immersive output devices each configured for providing immersive output (e.g., a virtual reality output) based on a data signal. Each of the multiple immersive output devices is present within eyesight of a common display screen. The server configures the data signal based on digital cinematic master data that includes immersive reality data. The server transmits the data signal to the multiple immersive output devices contemporaneously with each other, and optionally contemporaneously with providing a coordinated audio-video signal for output via the common display screen and shared audio system.
A processor provides a simulated three-dimensional (3D) environment for a game or virtual reality (VR) experience, including controlling a characteristic parameter of a 3D object or character based on at least one of: an asynchronous event in a second game, feedback from multiple synchronous users of the VR experience, or on a function driven by one or more variables reflecting a current state of at least one of the 3D environment, the game or the VR experience. In another aspect, a sensor coupled to an AR/VR headset detects an eye convergence distance. A processor adjusts a focus distance for a virtual camera that determines rendering of a three-dimensional (3D) object for a display device of the headset, based on at least one of the eye convergence distance or a directed focus of attention for the at least one of the VR content or the AR content.
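The convergence-driven focus adjustment in the second aspect can be illustrated with simple vergence trigonometry (an assumed model, not quoted from the patent): if the eye tracker reports a vergence angle, the distance to the point where the two gaze rays cross follows from the user's interpupillary distance (IPD), and the virtual camera's focus can be servoed toward that distance.

```python
import math

# Illustrative vergence geometry; the 63 mm IPD and smoothing factor are
# example assumptions, not parameters from the patent.

def convergence_distance(vergence_angle_rad, ipd_m=0.063):
    """Distance to the point where the two gaze rays cross."""
    return (ipd_m / 2.0) / math.tan(vergence_angle_rad / 2.0)

def adjust_focus(current_focus, vergence_angle_rad, smoothing=0.5):
    """Move the virtual-camera focus part of the way toward the gaze
    convergence distance (smoothing avoids jarring focus jumps)."""
    target = convergence_distance(vergence_angle_rad)
    return current_focus + smoothing * (target - current_focus)

# A gaze converging on a point 1.5 m away subtends this vergence angle:
angle = 2.0 * math.atan((0.063 / 2.0) / 1.5)
print(round(convergence_distance(angle), 3))  # 1.5
```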
A63F 13/525 - Changing parameters of virtual cameras
A63F 13/28 - Output arrangements for video game devices responding to control signals received from the game device for affecting ambient conditions, e.g. for vibrating players' seats, activating scent dispensers or affecting temperature or light
A63F 13/30 - Interconnection arrangements between game servers and game devices; Interconnection arrangements between game devices; Interconnection arrangements between game servers
A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
G06T 19/00 - Manipulating 3D models or images for computer graphics
G09G 3/00 - Control arrangements or circuits, of interest only in connection with visual indicators other than cathode-ray tubes
H04L 29/06 - Communication control; Communication processing characterised by a protocol
100.
Region-of-interest encoding enhancements for variable-bitrate mezzanine compression
A specification defining allowable luma and chroma code-values is applied in a region-of-interest encoding method of a mezzanine compression process. The method may include analyzing an input image to determine regions or areas within each image frame that contain code-values near the allowable limits specified by the specification. The region-of-interest method may then compress those regions with higher precision than the other regions of the image whose code-values are not close to the legal limits.
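The region-selection step described above can be sketched as a block classifier. The legal range (64-940, the conventional 10-bit "limited range" luma bounds), the margin, and the block size are illustrative assumptions; the patent's actual specification and thresholds are not reproduced here.

```python
# Hedged sketch of region-of-interest selection: blocks whose luma
# code-values approach the allowable limits are flagged for
# higher-precision compression; other blocks use normal precision.

LEGAL_MIN, LEGAL_MAX, MARGIN = 64, 940, 16   # illustrative 10-bit bounds

def block_needs_precision(block):
    """True if any code-value in the block is within MARGIN of a limit."""
    return any(v <= LEGAL_MIN + MARGIN or v >= LEGAL_MAX - MARGIN
               for row in block for v in row)

def classify_blocks(frame, block=2):
    """Yield (x, y, high_precision) for each block tile of the frame."""
    for y in range(0, len(frame), block):
        for x in range(0, len(frame[0]), block):
            tile = [row[x:x + block] for row in frame[y:y + block]]
            yield (x, y, block_needs_precision(tile))

frame = [[512, 500, 930, 935],
         [505, 510, 938, 920],
         [512, 512, 500, 505],
         [70,  512, 500, 505]]
print([hp for _, _, hp in classify_blocks(frame)])
# only the tiles containing near-limit values (938, 70) are flagged
```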
H04N 7/50 - involving transform and predictive coding
H04N 19/64 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding using sub-band based transform, e.g. wavelets characterised by ordering of coefficients or of bits for transmission
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/136 - Incoming video signal characteristics or properties
H04N 19/14 - Coding unit complexity, e.g. amount of activity or edge presence estimation
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/17 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object
H04N 19/12 - Selection from among a plurality of transforms or standards, e.g. selection between discrete cosine transform [DCT] and sub-band transform or selection between H.263 and H.264
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/182 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a pixel