The disclosed computer-implemented method includes applying transport protocol heuristics to selective acknowledgement (SACK) messages received at a network adapter from a network node. The transport protocol heuristics identify threshold values for operational functions that are performed when processing the SACK messages. The method further includes determining, by applying the transport protocol heuristics to the SACK messages received from the network node, that the threshold values for the transport protocol heuristics have been reached. In response to determining that the threshold values have been reached, the method includes identifying the network node as a security threat and taking remedial actions to mitigate the security threat. Various other methods, systems, and computer-readable media are also disclosed.
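The threshold-based heuristic logic described above lends itself to a short sketch. The Python below is a minimal illustration, not the claimed implementation: the particular counters (duplicate SACKs, total SACK-block volume), the threshold values, and the blocking remediation are assumptions made for the example.

```python
# Illustrative sketch only: per-node SACK counters compared against
# configurable heuristic thresholds; counter names, limits, and the
# remediation step are assumptions.
from collections import defaultdict

THRESHOLDS = {
    "dup_sacks": 200,        # duplicate SACKs observed in one window (assumed)
    "sack_blocks": 5_000,    # total SACK blocks processed (assumed)
}

class SackMonitor:
    def __init__(self, thresholds=THRESHOLDS):
        self.thresholds = thresholds
        self.counters = defaultdict(lambda: defaultdict(int))
        self.blocked = set()

    def on_sack(self, node_id, num_blocks, is_duplicate):
        c = self.counters[node_id]
        c["sack_blocks"] += num_blocks
        if is_duplicate:
            c["dup_sacks"] += 1
        if any(c[name] >= limit for name, limit in self.thresholds.items()):
            self.mitigate(node_id)

    def mitigate(self, node_id):
        # Remedial action is a placeholder: e.g. drop the connection or
        # rate-limit further SACK processing for this node.
        self.blocked.add(node_id)

monitor = SackMonitor()
for _ in range(250):
    monitor.on_sack("10.0.0.7", num_blocks=4, is_duplicate=True)
assert "10.0.0.7" in monitor.blocked
```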
The disclosed computer-implemented method includes establishing a coalescing service configured to combine queries received at the coalescing service. The method further includes instantiating, within the coalescing service, multiple execution windows to which the received queries are to be assigned, where each execution window has an assigned deadline within which to execute. The method also includes analyzing a first query among the received queries to identify characteristics of the first query. The method then includes assigning the first query to a first execution window among the execution windows according to the identified characteristics. Then, upon detecting the occurrence of a specified trigger for at least one of the queries in the first execution window, the method includes executing those queries, including the first query, that are assigned to the first execution window. Various other methods, systems, and computer-readable media are also disclosed.
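A minimal sketch of the windowing flow described above may help. In the Python below, the window names, deadlines, the cost-based assignment heuristic, and the urgency trigger are all illustrative assumptions rather than the disclosed design.

```python
# Illustrative sketch: assign incoming queries to deadline-bound execution
# windows and run every query in a window when a trigger fires.
import time

class ExecutionWindow:
    def __init__(self, name, deadline_s):
        self.name = name
        self.deadline = time.monotonic() + deadline_s
        self.queries = []

class CoalescingService:
    def __init__(self):
        # Window names and deadlines are assumptions for illustration.
        self.windows = [ExecutionWindow("fast", 0.05), ExecutionWindow("bulk", 1.0)]

    def assign(self, query):
        # Characteristic used here (estimated cost) is an assumed heuristic.
        window = self.windows[0] if query["est_cost"] < 10 else self.windows[1]
        window.queries.append(query)
        return window

    def maybe_execute(self):
        now = time.monotonic()
        for window in self.windows:
            # Trigger: window deadline reached, or any query marked urgent.
            if window.queries and (now >= window.deadline
                                   or any(q.get("urgent") for q in window.queries)):
                batch, window.queries = window.queries, []
                self._execute_batch(window, batch)

    def _execute_batch(self, window, batch):
        # A real service would combine the queries into one backend call.
        print(f"window={window.name} executing {len(batch)} coalesced queries")

svc = CoalescingService()
svc.assign({"sql": "SELECT 1", "est_cost": 2})
svc.assign({"sql": "SELECT 2", "est_cost": 3, "urgent": True})
svc.maybe_execute()
```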
An online distributed computer system with methodologies for distributed trace aggregation and targeted distributed tracing. In one aspect, the disclosed distributed tracing technologies improve on existing distributed tracing technologies by providing to application developers and site operations personnel a more holistic and comprehensive insight into the behavior of the online distributed computer system, in the form of computed span metric aggregates displayed in a graphical user interface, thereby making it easier for such personnel to diagnose problems in the system and to support and maintain the system. In another aspect, the disclosed distributed tracing technologies improve on existing distributed tracing technologies by facilitating targeted tracing of initiator requests.
A playback application seamlessly advances playback of an interactive media title in response to user selections in a manner that minimizes latency and preserves user immersion in a narrative. The playback application buffers an interstitial segment included in the interactive media title and feeds portions of the interstitial segment to a media player only when those portions are needed for display. When the user selects an option displayed during the interstitial segment, the playback application begins buffering a subsequent media segment and stops feeding portions of the interstitial segment to the media player. The playback application starts feeding blocks of the subsequent media segment to the media player and then seamlessly advances playback to the subsequent media segment.
One embodiment of the present disclosure sets forth a technique for generating translation suggestions. The technique includes receiving a sequence of source-language subtitle events associated with a content item, where each source-language subtitle event includes a different textual string representing a corresponding portion of the content item, generating a unit of translatable text based on a textual string included in at least one source-language subtitle event from the sequence, translating, via software executing on a machine, the unit of translatable text into target-language text, generating, based on the target-language text, at least one target-language subtitle event associated with a portion of the content item corresponding to the at least one source-language subtitle event, and generating, for display, a subtitle presentation template that includes the at least one target-language subtitle event.
Various embodiments of the present application set forth a computer-implemented method for accessing data comprising receiving, by a first storage controller at a first spoke network and from an entity remote to the first spoke network, a message identifying a first content item, where the first content item is identified based on a task that is to be performed by accessing the first content item, determining, by the first storage controller, a first storage partition that stores the first content item, where the first storage partition is included in a tiered group of storage partitions accessible by the first spoke network, retrieving, by the first storage controller from the first storage partition, the first content item, and causing, by the first storage controller, the first content item to be transmitted to a second spoke network for storage in a second storage partition accessible by the second spoke network.
Various embodiments disclose a method for maintaining file versions in volatile memory. The method includes storing, in volatile memory for at least a first portion of a first sync interval, a first version of a file that is not modifiable during the at least the first portion of the first sync interval. The method also includes storing, in volatile memory for at least a second portion of the first sync interval, a second version of the file that is modifiable during the at least the second portion of the first sync interval. The method also includes, subsequent to the first sync interval, replacing, in nonvolatile memory, a third version of the file with the first version of the file stored in volatile memory. Further, the method includes marking the second version of the file as not modifiable during at least a first portion of a second sync interval.
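The double-buffering pattern described above can be sketched compactly. In the following Python, the class and attribute names are assumptions; the point is only the rotation of the three versions at the sync-interval boundary.

```python
# Illustrative sketch: one in-memory version stays frozen during a sync
# interval, another stays writable; at the interval boundary the frozen
# copy replaces the nonvolatile copy and the roles rotate.
class VersionedFile:
    def __init__(self, initial_bytes: bytes):
        self.durable = initial_bytes          # "third version" on nonvolatile storage
        self.frozen = initial_bytes           # read-only during the sync interval
        self.live = bytearray(initial_bytes)  # modifiable during the sync interval

    def write(self, data: bytes):
        # Writes during an interval only touch the modifiable version.
        self.live.extend(data)

    def end_sync_interval(self):
        # Replace the nonvolatile copy with the frozen in-memory version...
        self.durable = self.frozen
        # ...then mark the just-modified version as not modifiable for the
        # next interval and start a fresh writable copy.
        self.frozen = bytes(self.live)
        self.live = bytearray(self.frozen)

f = VersionedFile(b"v0")
f.write(b"+a")
f.end_sync_interval()       # durable == b"v0", frozen == b"v0+a"
f.write(b"+b")
f.end_sync_interval()       # durable == b"v0+a", frozen == b"v0+a+b"
print(f.durable, f.frozen)
```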
In various embodiments, a predictive assignment application computes a forecasted amount of processor use for each workload included in a set of workloads using a trained machine-learning model. Based on the forecasted amounts of processor use, the predictive assignment application computes a performance cost estimate associated with an estimated level of cache interference arising from executing the set of workloads on a set of processors. Subsequently, the predictive assignment application determines processor assignment(s) based on the performance cost estimate. At least one processor included in the set of processors is subsequently configured to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s). Advantageously, because the predictive assignment application generates the processor assignment(s) based on the forecasted amounts of processor use, the predictive assignment application can reduce interference in a non-uniform memory access (NUMA) microprocessor instance.
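As a rough illustration of the assignment step, the sketch below forecasts per-workload processor use with a stand-in model and scores candidate assignments with an assumed pairwise interference cost; both the stand-in model and the cost formula are assumptions, not the disclosed method.

```python
# Illustrative sketch: score candidate processor assignments by an assumed
# cache-interference cost derived from forecasted CPU use, then pick the
# cheapest assignment.
from itertools import product

def forecast_cpu_use(workload):
    # Stand-in for a trained ML model; returns predicted fraction of a core.
    return workload["history_mean"]

def interference_cost(assignment, forecasts):
    # Assumed cost: workloads sharing a processor interfere in proportion
    # to the product of their forecasted CPU use.
    cost = 0.0
    by_cpu = {}
    for wl, cpu in assignment.items():
        by_cpu.setdefault(cpu, []).append(forecasts[wl])
    for loads in by_cpu.values():
        for i in range(len(loads)):
            for j in range(i + 1, len(loads)):
                cost += loads[i] * loads[j]
    return cost

def best_assignment(workloads, cpus):
    forecasts = {w["name"]: forecast_cpu_use(w) for w in workloads}
    names = list(forecasts)
    best, best_cost = None, float("inf")
    for choice in product(cpus, repeat=len(names)):
        assignment = dict(zip(names, choice))
        cost = interference_cost(assignment, forecasts)
        if cost < best_cost:
            best, best_cost = assignment, cost
    return best, best_cost

workloads = [{"name": "encode", "history_mean": 0.9},
             {"name": "api", "history_mean": 0.3},
             {"name": "batch", "history_mean": 0.7}]
print(best_assignment(workloads, cpus=[0, 1]))
```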
Various embodiments disclose a computer-implemented method that includes receiving, subsequent to a first font file being stored in read-only memory, a first font patch file for storage in read-write memory, where each of the first font file and the first font patch file is associated with a first font and includes a different set of glyphs used to render characters for display, and a first set of glyphs included in the first font file is static, determining that a first text string includes a first set of characters to be rendered, retrieving, from at least one of the first font file and the first font patch file depending on whether a first glyph is included in the first set of glyphs, the first glyph corresponding to a first character included in the first set of characters, and rendering a portion of the first text string using the first glyph.
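The glyph resolution described above reduces to a simple fallback lookup, sketched below; the example glyph tables and the static-first ordering are assumptions made for illustration.

```python
# Illustrative sketch: resolve each character's glyph from the static font
# file first, then fall back to the patch file stored in read-write memory.
STATIC_FONT = {"A": "glyph_A", "B": "glyph_B"}        # read-only font file (assumed contents)
FONT_PATCH = {"€": "glyph_euro", "B": "glyph_B_v2"}   # font patch file (assumed contents)

def glyph_for(char):
    # Lookup order is an assumption: the static set is never rewritten,
    # while newly delivered glyphs come from the patch file.
    if char in STATIC_FONT:
        return STATIC_FONT[char]
    return FONT_PATCH.get(char)

def render(text):
    return [glyph_for(c) for c in text if glyph_for(c) is not None]

print(render("AB€"))   # ['glyph_A', 'glyph_B', 'glyph_euro']
```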
In various embodiments, a proxy application processes requests associated with a network-based service. In operation, the proxy application determines that a first request received from a client application indicates that a response to the first request can be offloaded from a server machine. Prior to transmitting the first request to the server machine, the proxy application transmits a response to the first request to the client application. The response indicates that the server machine has successfully processed the first request. Advantageously, upon receiving the response, the client application is able to initiate a second request irrespective of the server machine.
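A minimal sketch of the offload path may clarify the sequencing. In the Python below, the header used to mark a request as offloadable, the 202 status code, and the queue-based forwarding are assumptions; only the ordering (acknowledge the client first, forward to the origin afterwards) reflects the abstract.

```python
# Illustrative sketch: if a request is marked offloadable, the proxy
# acknowledges success to the client before forwarding it to the origin.
import queue
import threading

upstream_queue = queue.Queue()

def forward_worker():
    while True:
        request = upstream_queue.get()
        # Stand-in for the real call to the server machine.
        print("forwarding to origin:", request["path"])
        upstream_queue.task_done()

threading.Thread(target=forward_worker, daemon=True).start()

def handle(request):
    if request["headers"].get("x-offload-ok") == "true":
        upstream_queue.put(request)                   # deliver to the server later
        return {"status": 202, "body": "accepted"}    # client can move on now
    # Non-offloadable requests wait for the origin as usual (omitted here).
    return {"status": 200, "body": "forwarded synchronously"}

print(handle({"path": "/log", "headers": {"x-offload-ok": "true"}}))
upstream_queue.join()
```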
In various embodiments, an iterative encoding application generates shot encode points based on a first set of encoding points and a first shot sequence associated with a media title. The iterative encoding application performs convex hull operations across the shot encode points to generate a first convex hull. Subsequently, the iterative encoding application generates encoded media sequences based on the first convex hull and a second convex hull that is associated with both a second shot sequence associated with the media title and a second set of encoding points. The iterative encoding application determines a first optimized encoded media sequence and a second optimized encoded media sequence from the encoded media sequences based on, respectively, a first target metric value and a second target metric value for a media metric. Portions of the optimized encoded media sequences are subsequently streamed to endpoint devices during playback of the media title.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
The disclosed computer-implemented method may include receiving, as an input, segmented video scenes, where each video scene includes a specified length of video content. The method may further include scanning the video scenes to identify objects within each video scene and also determining a relative importance value for the identified objects. The relative importance value may include an indication of which objects are to be included in a cropped version of the video scene. The method may also include generating a video crop that is to be applied to the video scene such that the resulting cropped version of the video scene includes those identified objects that are to be included based on the relative importance value. The method may also include applying the generated video crop to the video scene to produce the cropped version of the video scene. Various other methods, systems, and computer-readable media are also disclosed.
H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for selecting a ROI [Region Of Interest], e.g. for requesting a higher resolution version of a selected region
H04N 21/485 - End-user interface for client configuration
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
13.
MACHINE LEARNING TECHNIQUES FOR DETERMINING QUALITY OF USER EXPERIENCE
In various embodiments, a quality of experience (QoE) prediction application computes a visual quality score associated with a stream of encoded video content. The QoE prediction application also determines a rebuffering duration associated with the stream of encoded video content. Subsequently, the QoE prediction application computes an overall QoE score associated with the stream of encoded video content based on the visual quality score, the rebuffering duration, and an exponential QoE model. The exponential QoE model is generated using a plurality of subjective QoE scores and a linear regression model. The overall QoE score indicates a quality level of a user experience when viewing reconstructed video content derived from the stream of encoded video content.
H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
14.
TECHNIQUES FOR INCREASING THE ISOLATION OF WORKLOADS WITHIN A MULTIPROCESSOR INSTANCE
In various embodiments, an isolation application determines processor assignment(s) based on a performance cost estimate. The performance cost estimate is associated with an estimated level of cache interference arising from executing a set of workloads on a set of processors. Subsequently, the isolation application configures at least one processor included in the set of processors to execute at least a portion of a first workload that is included in the set of workloads based on the processor assignment(s). Advantageously, because the isolation application generates the processor assignment(s) based on the performance cost estimate, the isolation application can reduce interference in a non-uniform memory access (NUMA) microprocessor instance.
An encoding engine encodes a video sequence to provide optimal quality for a given bitrate. The encoding engine cuts the video sequence into a collection of shot sequences. Each shot sequence includes video frames captured from a particular capture point. The encoding engine resamples each shot sequence across a range of different resolutions, encodes each resampled sequence with a range of quality parameters, and then upsamples each encoded sequence to the original resolution of the video sequence. For each upsampled sequence, the encoding engine computes a quality metric and generates a data point that includes the quality metric and the resample resolution. The encoding engine collects all such data points and then computes the convex hull of the resultant data set. Based on all convex hulls across all shot sequences, the encoding engine determines an optimal collection of shot sequences for a range of bitrates.
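The per-shot convex hull step in the preceding abstract can be illustrated with a short sketch. The Python below computes the upper rate-quality frontier for one shot from (bitrate, quality) data points; the sample points are fabricated for the example, and the hull routine is a generic monotone-chain scan rather than the engine's actual code.

```python
# Illustrative sketch: build the rate-quality convex hull for one shot from
# (bitrate, quality) data points produced by the encode/upsample/measure loop.
def upper_convex_hull(points):
    """Keep only points on the upper frontier (best quality per bitrate),
    using a monotone-chain style scan over bitrate-sorted points."""
    pts = sorted(points)                       # sort by bitrate, then quality
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # Drop the middle point if it lies on or below the chord to p.
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Fabricated (bitrate in kbps, quality score) samples for one shot sequence.
shot_points = [(400, 62.0), (800, 74.5), (1200, 79.0), (1600, 80.5), (800, 70.0)]
print(upper_convex_hull(shot_points))
```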
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/192 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
The disclosed computer-implemented method may include accessing a string of text that includes characters written in a first language. The method may next include translating the text string into different languages using machine translation. The method may next include identifying, among the translated text strings, a shortest string and a longest string. The method may also include calculating a customized string length adjustment ratio for adjusting the length of the accessed text string based on the shortest translated string length and the longest translated string length. Furthermore, the method may include dynamically applying the calculated customized string length adjustment ratio to the accessed text string, so that the length of the accessed text string may be dynamically adjusted according to the customized string length adjustment ratio. The method may also include presenting the adjusted text string in the user interface. Various other methods, systems, and computer-readable media are also disclosed.
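The ratio computation above lends itself to a small worked sketch. In the Python below, the way the shortest-string and longest-string ratios are blended, and the truncation/padding used to apply the ratio, are assumed heuristics rather than the disclosed formula.

```python
# Illustrative sketch: derive a length-adjustment ratio from the shortest and
# longest machine translations of a source string, then apply it.
def length_adjustment_ratio(source, translations):
    shortest = min(translations, key=len)
    longest = max(translations, key=len)
    low = len(shortest) / len(source)
    high = len(longest) / len(source)
    # Assumed blend: bias toward the expansion case so UI layouts get stressed.
    return round(0.25 * low + 0.75 * high, 2)

def adjust(source, ratio):
    target_len = max(1, int(len(source) * ratio))
    if target_len <= len(source):
        return source[:target_len]
    return source + "·" * (target_len - len(source))   # pad to simulate expansion

source = "Continue watching"
translations = ["Weiterschauen", "Reprendre la lecture", "続きを見る"]
ratio = length_adjustment_ratio(source, translations)
print(ratio, adjust(source, ratio))
```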
Various embodiments of the present application set forth a computer-implemented method for accessing data comprising identifying a first set of read operations occurring during a first time period, where each read operation included in the set of read operations is associated with retrieving a different portion of at least one object from a storage system, determining a byte density associated with the set of read operations, where the byte density indicates a size of contiguous portions of the at least one object that were retrieved during the first time period, and determining, based on the byte density, a pre-buffering block size for a read operation during a second period, where the pre-buffering block size specifies a size of a portion of at least one object that is to be retrieved from the storage system.
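A compact sketch of the density-to-block-size mapping described above follows; the density formula, the 0.6 threshold, and the 256 KB / 8 MB block sizes are assumptions chosen for illustration.

```python
# Illustrative sketch: estimate how contiguous recent reads were ("byte
# density") and scale the next period's pre-buffering block size accordingly.
def byte_density(read_ops):
    """read_ops: list of (offset, length) pairs for one object in one period."""
    if not read_ops:
        return 0.0
    ops = sorted(read_ops)
    span = (ops[-1][0] + ops[-1][1]) - ops[0][0]
    retrieved = sum(length for _, length in ops)
    return min(1.0, retrieved / span) if span else 1.0

def pre_buffer_block_size(density, small=256 * 1024, large=8 * 1024 * 1024):
    # Dense, sequential access -> fetch large contiguous blocks ahead of time;
    # sparse, random access -> keep pre-buffering small.
    return large if density >= 0.6 else small

reads = [(0, 1_000_000), (1_000_000, 1_000_000), (2_000_000, 500_000)]
d = byte_density(reads)
print(d, pre_buffer_block_size(d))
```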
A playback application seamlessly advances playback of an interactive media title in response to user selections in a manner that minimizes latency and preserves user immersion in a narrative. The playback application buffers an interstitial segment included in the interactive media title and feeds portions of the interstitial segment to a media player only when those portions are needed for display. When the user selects an option displayed during the interstitial segment, the playback application begins buffering a subsequent media segment and stops feeding portions of the interstitial segment to the media player. The playback application starts feeding blocks of the subsequent media segment to the media player and then seamlessly advances playback to the subsequent media segment.
The disclosed apparatus may include a rack-side support structure dimensioned to hold a two-sided port interface with a rack-side mating end and an adapter-side mating end. The rack-side mating end may be configured to interface with supply cables, and the adapter-side mating end may be configured to interface with an opposite adapter-side mating end of another port interface. The apparatus may also include a device-side support structure dimensioned to hold a two-sided port interface including an opposing adapter-side mating end and a device-side mating end. The opposing adapter-side mating end may be configured to interface with the adapter-side mating end of the rack-side's port interface, and the device-side mating end may interface with cables that connect to the electronic devices. The rack-side support structure may be configured to interlock with the device-side support structure to connect to the electronic devices. Various other methods, systems, and computer-readable media are also disclosed.
In various embodiments, an interpolation-based encoding application encodes a first subsequence included in a media title at each encoding point included in a first set of encoding points to generate encoded subsequences. Subsequently, the interpolation-based encoding application performs interpolation operation(s) based on the encoded subsequences to estimate a first media metric value associated with a first encoding point that is not included in the first set of encoding points. The interpolation-based encoding application then generates an encoding recipe based on the encoded subsequences and the first media metric value. The encoding recipe specifies a different encoding point for each subsequence included in the media title. After determining that the encoding recipe specifies the first encoding point for the first subsequence, the interpolation-based encoding application encodes the first subsequence at the first encoding point to generate at least a portion of an encoded version of the media title.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 19/587 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal sub-sampling or interpolation, e.g. decimation or subsequent interpolation of pictures in a video sequence
25.
INTERACTIVE INTERFACE FOR IDENTIFYING DEFECTS IN VIDEO CONTENT
The disclosed computer-implemented method may include accessing defect identification data that identifies defects in frames of video content. The method may also include generating, as part of the interactive user interface, an interactive element that presents the frames of video content. The method may further include generating, as part of the interactive user interface, another interactive element that presents selectable metadata items associated with the identified defects in the frames of video content. At least one of the selectable metadata items may include an associated user interface action. Then, upon receiving an input selecting one of the selectable metadata items, the method may include performing the associated user interface action. Various other methods, systems, and computer-readable media are also disclosed.
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
G06F 3/0482 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance interaction with lists of selectable items, e.g. menus
The disclosed computer-implemented method may include accessing a pre-rendered multimedia item. The pre-rendered multimedia item may have branching logic associated with it, where the branching logic includes branching points that direct non-sequential playback of the pre-rendered multimedia item. The method may also include initializing playback of the pre-rendered multimedia item and accessing, at the branching points, various trigger conditions that direct playback order of different segments of the pre-rendered multimedia item. The method may then include updating, based on the trigger conditions, at least some portion of custom state data. The method may further include playing back the segments of the pre-rendered multimedia item according to the updated state data. Various other methods, systems, and computer-readable media are also disclosed.
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
27.
Display screen or portion thereof with animated graphical user interface
One embodiment of the present invention sets forth a technique for generating one or more hash data structures. The technique includes generating a hash data structure having entries that correspond to a plurality of content servers, and, for each file included in a first plurality of files, allocating the file to one or more content servers included in the plurality of content servers by comparing a hash value associated with the file to one or more of the entries. The technique further includes comparing a network bandwidth utilization of a first content server to a network bandwidth utilization associated with one or more other content servers included in the plurality of content servers to generate a result, and modifying, based on the result, a first number of the entries that are associated with the first content server to generate a biased hash data structure.
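One plausible reading of the biased hash data structure above is a consistent-hash ring whose per-server entry counts are adjusted when a server's bandwidth utilization diverges from its peers; the sketch below follows that reading, with the entry counts and the 10% reduction chosen arbitrarily for illustration.

```python
# Illustrative sketch: allocate files to content servers by comparing file
# hashes to ring entries, then bias the ring by shrinking the entry count of
# an over-utilized server.
import bisect
import hashlib

def h(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

def build_ring(entries_per_server):
    ring = []
    for server, count in entries_per_server.items():
        for i in range(count):
            ring.append((h(f"{server}#{i}"), server))
    return sorted(ring)

def allocate(ring, filename):
    keys = [k for k, _ in ring]
    idx = bisect.bisect(keys, h(filename)) % len(ring)
    return ring[idx][1]

entries = {"server-a": 100, "server-b": 100, "server-c": 100}
ring = build_ring(entries)

# Suppose server-a's measured network bandwidth utilization is well above the
# other servers: give it fewer entries so future files skew elsewhere.
entries["server-a"] = int(entries["server-a"] * 0.9)
biased_ring = build_ring(entries)

print(allocate(ring, "title-123.mp4"), allocate(biased_ring, "title-123.mp4"))
```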
In various embodiments, a buffer-based encoding application generates a first convex hull of subsequence encode points based on multiple encoding points and a first subsequence associated with a media title. The buffer-based encoding application then generates a first global convex hull of media encode points based on a transmission buffer constraint, the first convex hull, and a second global convex hull of media encode points. Notably, the second global convex hull is associated with a portion of the media title that occurs before the first subsequence in a playback order for the media title. Subsequently, the buffer-based encoding application selects a first media encode point included in the first global convex hull based on a media metric and determines a first encoded media sequence based on the selected media encode point. The first encoded media sequence is subsequently streamed to an endpoint device during playback of the media title.
H04N 21/231 - Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers or prioritizing data for deletion
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
30.
LINEAR ON-SCREEN KEYBOARD WITH FLOATING UTILITY KEYS
A computer-implemented method includes causing a linear on-screen keyboard that includes an array of input keys and a focus indicator to be displayed, wherein navigation of the focus indicator to an input key in the array enables a selection of a character corresponding to the input key; and, upon determining that the focus indicator has navigated to a first input key in the array, causing one or more utility keys to be displayed proximate to the first input key.
G06F 3/0488 - Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
G06F 3/0482 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance interaction with lists of selectable items, e.g. menus
The disclosed computer-implemented method includes receiving an indication that cache data is to be copied from an originating cluster having a specified number of replica nodes to a destination cluster having an arbitrary number of replica nodes. The method further includes copying the cache data to a cache dump and creating a log that identifies where the cache data is stored in the cache dump. The method further includes copying the cache data from the cache dump to the replica nodes of the destination cluster. The copying includes writing the copied data in a distributed manner, such that at least a portion of the copied data is distributed over each of the replica nodes in the destination cluster. Various other methods, systems, and computer-readable media are also disclosed.
G06F 12/0895 - Caches characterised by their organisation or structure of parts of caches, e.g. directory or tag array
G06F 9/38 - Concurrent instruction execution, e.g. pipeline, look ahead
G06F 3/06 - Digital input from, or digital output to, record carriers
G06F 12/0891 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches using clearing, invalidating or resetting means
G06F 12/0837 - Cache consistency protocols with software control, e.g. non-cacheable data
32.
Techniques for encoding a media title while constraining bitrate variations
In various embodiments, a subsequence-based encoding application generates a first set of subsequence encode points based on multiple encoding points and a first subsequence included in a set of subsequences that are associated with a media title. Notably, each subsequence encode point is associated with a different encoded subsequence. The subsequence-based encoding application then performs convex hull operation(s) across the first set of subsequence encode points to generate a first convex hull. The subsequence-based encoding application then generates an encode list that includes multiple subsequence encode points based on multiple convex hulls, including the first convex hull. Subsequently, the subsequence-based encoding application performs filtering operation(s) on the encode list based on a variability constraint associated with a media metric to generate an upgrade candidate list. Finally, the subsequence-based encoding application generates an encoded media sequence based on the upgrade candidate list and the first convex hull.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/87 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
33.
Techniques for encoding a media title while constraining quality variations
In various embodiments, a subsequence-based encoding application generates a first convex hull of subsequence encode points based on multiple encoding points and a first subsequence included in a set of subsequences that are associated with a media title. The subsequence-based encoding application then generates a first encode list that includes multiple subsequence encode points based on the first convex hull. Notably, each subsequence encode point included in the first encode list is associated with a different subsequence. The subsequence-based encoding application selects a first subsequence encode point included in the first encode list based on a first variability constraint that is associated with a media metric. The subsequence-based encoding application then replaces the first subsequence encode point included in the first encode list with a second subsequence encode point to generate a second encode list. Finally, the subsequence-based encoding application generates an encoded media sequence based on the second encode list.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/176 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a block, e.g. a macroblock
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
H04N 19/87 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving scene cut or scene change detection in combination with video compression
34.
TECHNIQUES FOR IDENTIFYING SYNCHRONIZATION ERRORS IN MEDIA TITLES
A neural network system is trained to identify one or more portions of a media title where synchronization errors are likely to be present. The neural network system is trained based on a first set of media titles where synchronization errors are present and a second set of media titles where synchronization errors are absent. The second set of media titles can be generated by introducing synchronization errors into a set of media titles that otherwise lack synchronization errors. Via training, the neural network system learns to identify specific visual features included in one or more video frames and corresponding audio features that should be played back in synchrony with the associated visual features. Accordingly, when presented with a media title that includes synchronization errors, the neural network system can indicate the specific frames where synchronization errors are likely to be present.
G11B 27/36 - Monitoring, i.e. supervising the progress of recording or reproducing
G10L 25/57 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for processing of video signals
G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
G06K 9/62 - Methods or arrangements for recognition using electronic means
G06N 3/04 - Architecture, e.g. interconnection topology
One embodiment of the present invention sets forth a method for updating content stored in a cache residing at an internet service provider (ISP) location that includes receiving popularity data associated with a first plurality of content assets, where the popularity data indicate the popularity of each content asset in the first plurality of content assets across a user base that spans multiple geographic regions, generating a manifest that includes a second plurality of content assets based on the popularity data and a geographic location associated with the cache, where each content asset included in the manifest is determined to be popular among users proximate to the geographic location or users with preferences similar to users proximate to the geographic location, and transmitting the manifest to the cache, where the cache is configured to update one or more content assets stored in the cache based on the manifest.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/237 - Communication with additional data server
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/61 - Network physical structure; Signal processing
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
The disclosed computer-implemented method may include generating a three-dimensional (3D) feature map for a digital image using a fully convolutional network (FCN). The 3D feature map may be configured to identify features of the digital image and identify an image region for each identified feature. The method may also include generating a region composition graph that includes the identified features and image regions. The region composition graph may be configured to model mutual dependencies between features of the 3D feature map. The method may further include performing a graph convolution on the region composition graph to determine a feature aesthetic value for each node according to the weightings in the node's weighted connecting segments, and calculating a weighted average for each node's feature aesthetic value to provide a combined level of aesthetic appeal for the digital image. Various other methods, systems, and computer-readable media are also disclosed.
G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
The disclosed computer-implemented method includes determining that audio quality is to be adjusted for a multimedia streaming connection over which audio data and video data are being streamed to a content player. The audio data is streamed at a specified audio quality level and the video data is streamed at a specified video quality level. The method also includes determining that a specified minimum video quality level is to be maintained while adjusting the audio quality level. Still further, the method includes dynamically adjusting the audio quality level of the multimedia streaming connection while maintaining the video quality level of the multimedia streaming connection at or above the specified minimum video quality level. Various other methods, systems, and computer-readable media are also disclosed.
G11B 20/00 - Signal processing not specific to the method of recording or reproducing; Circuits therefor
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/8358 - Generation of protective data, e.g. certificates involving watermark
H04N 1/32 - Circuits or arrangements for control or supervision between transmitter and receiver
In various embodiments, a subsequence-based encoding application generates subsequences based on a source sequence associated with a media title. The subsequence-based encoding application then encodes both a first subsequence and a second subsequence across each of multiple configured encoders and at least one rate control value to generate, respectively, a first set of encoded subsequences and a second set of encoded subsequences. Notably, each configured encoder is associated with a combination of an encoder and a configuration, and at least two configured encoders are different from one another. Subsequently, the subsequence-based encoding application generates encoded media sequences based on the first set of encoded subsequences and the second set of encoded subsequences. Finally, the subsequence-based encoding application selects a first encoded media sequence from the encoded media sequences based on a first target value for a media metric; the first encoded media sequence is subsequently streamed to a first endpoint device during playback of the media title.
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
In various embodiments, a training application generates a preference prediction model based on an interaction matrix and a closed-form solution for minimizing a Lagrangian. The interaction matrix reflects interactions between users and items, and the Lagrangian is formed based on a constrained optimization problem associated with the interaction matrix. A service application generates a first application interface that is to be presented to the user. The service application computes predicted score(s) using the preference prediction model, where each predicted score predicts a preference of the user for a different item. The service application then determines a first item from the items to present to the user via an interface element included in the application interface. Subsequently, the service application causes a representation of the first item to be displayed via the interface element included in the application interface.
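The closed-form item-item formulation below is one well-known model of this general shape (L2-regularized least squares with a zero-diagonal constraint enforced via a Lagrangian); whether it matches the claimed preference prediction model is an assumption, and the toy interaction matrix is fabricated.

```python
# Illustrative sketch of a closed-form item-item preference model fit from a
# user-by-item interaction matrix, followed by per-user score prediction.
import numpy as np

def fit_item_item(X, lam=100.0):
    """X: user-by-item interaction matrix (binary). Returns item-item weights B."""
    G = X.T @ X + lam * np.eye(X.shape[1])
    P = np.linalg.inv(G)
    B = -P / np.diag(P)            # closed-form minimizer of the Lagrangian
    np.fill_diagonal(B, 0.0)       # items do not recommend themselves
    return B

def predicted_scores(X, B, user):
    return X[user] @ B             # one predicted preference score per item

X = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 1, 1]], dtype=float)
B = fit_item_item(X, lam=1.0)
scores = predicted_scores(X, B, user=0)
best_item = int(np.argmax(np.where(X[0] == 0, scores, -np.inf)))
print(scores, "recommend item:", best_item)
```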
A computer-implemented method of displaying video content includes, based on an input to transition playback of a video content item from a first media player that is instantiated in a user interface to a second media player that is instantiated in the user interface, determining a current value of a first state descriptor associated with the first media player; setting a value of a second state descriptor associated with the second media player to match the current value of the first state descriptor; and after setting the value of the second state descriptor, causing the second media player to begin playback of the video content item, wherein the second media player begins playing the video content item based on the value of the second state descriptor.
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/482 - End-user interface for program selection
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/485 - End-user interface for client configuration
H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
A method includes receiving, with a computing system, a video item. The method further includes identifying a first set of features within a first frame of the video item. The method further includes identifying, with the computing system, a second set of features within a second frame of the video item, the second frame being subsequent to the first frame. The method further includes determining, with the computing system, differences between the first set of features and the second set of features. The method further includes assigning a clip category to a clip extending between the first frame and the second frame based on the differences.
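A toy sketch of the feature-difference comparison described above follows; the quadrant-brightness features, the thresholds, and the three category labels are assumptions, not the disclosed feature set.

```python
# Illustrative sketch: compare coarse per-frame features and label the clip
# between two frames by how much they differ.
import numpy as np

def frame_features(frame):
    """Very coarse stand-in features: mean brightness per image quadrant."""
    h, w = frame.shape[:2]
    quads = [frame[:h//2, :w//2], frame[:h//2, w//2:],
             frame[h//2:, :w//2], frame[h//2:, w//2:]]
    return np.array([q.mean() for q in quads])

def clip_category(first, second):
    diff = np.abs(frame_features(first) - frame_features(second)).mean()
    if diff < 5:
        return "static"
    if diff < 40:
        return "continuous-action"
    return "scene-change"

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 255, (180, 320), dtype=np.uint8)
frame_b = np.clip(frame_a.astype(int) + 3, 0, 255).astype(np.uint8)
print(clip_category(frame_a, frame_b))   # small feature differences -> "static"
```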
An apparatus for minimizing installation footprints of expansion cards may include one or more expansion cards that include a short edge, a long edge that is longer than the short edge and is substantially perpendicular to the short edge, and an edge connector disposed on the short edge. The apparatus may also include an expansion-card frame dimensioned to 1) guide an expansion card toward a printed circuit board of a computing device at a substantially vertical orientation such that the short edge of the expansion card is disposed proximate the printed circuit board of the computing device and the long edge of the expansion card extends away from the printed circuit board and 2) removably couple the edge connector disposed on the short edge of the expansion card to the printed circuit board of the computing device. Various other apparatuses, systems, and methods are also disclosed.
Techniques for adaptive metric collection, metric storage, and alert thresholds are described. In an approach, a metric collector computer processes metrics as a collection of key/value pairs. The key/value pairs represent the dimensionality of the metrics and allow for semantic queries on the metrics based on keys. In an approach, a storage controller computer maintains a storage system with multiple storage tiers ranked by speed of access. The storage controller computer stores policy data that specifies the rules by which metric records are stored across the multiple storage tiers. Periodically, the storage controller computer moves database records to higher or lower tiers based on the policy data. In an approach, a metric collector, in response to receiving a new metric, generates a predicted metric value based on previously recorded metric values and measures the deviation of the new metric value from the predicted value to determine whether an alert is appropriate.
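The alerting piece can be sketched with a simple stand-in predictor; the EWMA model and the 50% deviation tolerance below are assumptions, not the described prediction method.

```python
# Illustrative sketch: predict the next value of a metric from its recent
# history and alert when the newly received value deviates too far.
def predict_next(history, alpha=0.3):
    ewma = history[0]
    for value in history[1:]:
        ewma = alpha * value + (1 - alpha) * ewma
    return ewma

def should_alert(history, new_value, tolerance=0.5):
    predicted = predict_next(history)
    deviation = abs(new_value - predicted) / max(abs(predicted), 1e-9)
    return deviation > tolerance, predicted

history = [100, 102, 98, 101, 99]          # e.g. requests/sec for one key set
alert, predicted = should_alert(history, new_value=240)
print(f"predicted={predicted:.1f} alert={alert}")
```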
In one embodiment of the present invention, a shading engine enables multiple versions of dependencies to coexist in an executable software application. During the software build process, the shading engine dynamically renames transitive dependencies of the software application to disambiguated names. The shading engine performs this renaming at both the library and class level. Notably, the shading engine does not rename the first-order dependencies of the software application. Consequently, the code of the software application and interfaces between the software application and the first-order library dependencies of the software application are not modified. Notably, the shading engine efficiently and accurately shades the transitive dependencies without manual intervention. By contrast, primarily manually-based conventional approaches to dependency management are time consuming and susceptible to errors.
One embodiment sets forth a technique in which a similarity score between two digital items is computed based on interaction histories associated with global users and interaction histories associated with local users. Global counts indicating the number of interactions associated with each unique pair of digital items are weighted based on a mixing rate. The weighted global counts are then combined with local counts to compute total counts. An effective interaction probability indicating the likelihood of a user interacting with one digital item in the pair of digital items after interacting with the other digital item in the pair is computed based on the total counts. The effective interaction probability is then corrected for noise, resulting in a similarity score indicating the similarity between the pair of digital items.
G06Q 30/02 - Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
Techniques are disclosed for multiplexing a dynamic bit-rate video stream with an audio stream received by a client device in a manner that allows the resulting multiplexed stream to be played back without disruption, despite dynamic changes in the bit rate of the video stream that may occur. A content server may stream both a video stream and an audio stream to a client device for playback. The client device may multiplex the video and audio streams prior to them being presented to a playback engine for decoding and playback to a user.
H04N 19/167 - Position within a video image, e.g. region of interest [ROI]
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 19/46 - Embedding additional information in the video signal during the compression process
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
H04N 19/164 - Feedback from the receiver or from the transmission channel
H04N 19/61 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using transform coding in combination with predictive coding
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 21/462 - Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end or controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/2368 - Multiplexing of audio and video streams
The disclosed apparatus may include a rack-side support structure dimensioned to hold a two-sided port interface with a rack-side mating end and an adapter-side mating end. The rack-side mating end may be configured to interface with supply cables, and the adapter-side mating end may be configured to interface with an opposite adapter-side mating end of another port interface. The apparatus may also include a device-side support structure dimensioned to hold a two-sided port interface including an opposing adapter-side mating end and a device-side mating end. The opposing adapter-side mating end may be configured to interface with the adapter-side mating end of the rack-side's port interface, and the device-side mating end may interface with cables that connect to the electronic devices. The rack-side support structure may be configured to interlock with the device-side support structure to connect to the electronic devices. Various other methods, systems, and computer-readable media are also disclosed.
A playback application is configured to dynamically generate topology for an interactive media title. The playback application obtains an initial topology and also collects various data associated with a user interacting with the interactive media title. The playback application then modifies the initial topology, based on the collected data, to generate a dynamic topology tailored to the user. The dynamic topology describes the set of choices available to the user during playback as well as which options can be selected by the user when making a given choice. In addition, the playback application also selectively buffers different portions of the interactive media title, based on the collected data, in anticipation of the user selecting particular options for available choices.
H04N 21/8541 - Content authoring involving branching, e.g. to different story endings
H04N 21/8545 - Content authoring for generating interactive applications
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/475 - End-user interface for inputting end-user data, e.g. PIN [Personal Identification Number] or preference data
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
49.
DYNAMIC TOPOLOGY GENERATION FOR BRANCHING NARRATIVES
A playback application is configured to dynamically generate topology for an interactive media title. The playback application obtains an initial topology and also collects various data associated with a user interacting with the interactive media title. The playback application then modifies the initial topology, based on the collected data, to generate a dynamic topology tailored to the user. The dynamic topology describes the set of choices available to the user during playback as well as which options can be selected by the user when making a given choice. In addition, the playback application also selectively buffers different portions of the interactive media title, based on the collected data, in anticipation of the user selecting particular options for available choices.
H04N 21/8541 - Content authoring involving branching, e.g. to different story endings
H04N 21/8545 - Content authoring for generating interactive applications
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/475 - End-user interface for inputting end-user data, e.g. PIN [Personal Identification Number] or preference data
In various embodiments, a forensic scoping application analyzes host instances in order to detect anomalies. The forensic scoping application acquires a snapshot for each host instance included in an instance group. Each snapshot represents a current operational state of the associated host instance. Subsequently, the forensic scoping application performs clustering operation(s) based on the snapshots to generate a set of clusters. The forensic scoping application determines that a first cluster in the set of clusters is associated with fewer host instances than at least a second cluster in the set of clusters. Based on the first cluster, the forensic scoping application determines that a first host instance included in the instance group is operating in an anomalous fashion. Advantageously, efficiently determining host instances that are operating in an anomalous fashion during a security attack can reduce the amount of damage caused by the security attack.
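As a minimal sketch of the clustering step described above, assuming each snapshot has already been reduced to a numeric feature vector and using k-means purely for illustration (the abstract does not name a clustering algorithm), the hosts that land in the smallest cluster are flagged as anomalous:

```python
# Sketch only: the snapshot feature vectors, cluster count, and k-means itself are
# assumptions, not details taken from the abstract above.
import numpy as np
from sklearn.cluster import KMeans

def find_anomalous_hosts(snapshots: dict[str, np.ndarray], n_clusters: int = 3) -> list[str]:
    """snapshots maps a host-instance id to a numeric vector of its current state."""
    host_ids = list(snapshots.keys())
    features = np.vstack([snapshots[h] for h in host_ids])

    # Cluster the operational-state snapshots of the instance group.
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(features)

    # The cluster with the fewest members is treated as the anomalous one.
    counts = np.bincount(labels, minlength=n_clusters)
    smallest = int(np.argmin(counts))
    return [h for h, lbl in zip(host_ids, labels) if lbl == smallest]
```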
A computer-implemented method includes receiving a request from a client computing device for a first shot included in a media title being streamed to the client computing device for playback; in response to the request, sending the first shot to the client computing device for playback; and sending a representative image for at least one of the first shot and a second shot included in the media title, wherein the first shot comprises a first sequence of video frames that is included in the media title and captured continuously from a first point of capture, and the second shot comprises a second sequence of video frames that is included in the media title and captured continuously from a second point of capture.
H04N 21/6587 - Control parameters, e.g. trick play commands or viewpoint selection
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
G06F 17/30 - Information retrieval; Database structures therefor
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
52.
TECHNIQUES FOR DETERMINING AN UPPER BOUND ON VISUAL QUALITY OVER A COMPLETED STREAMING SESSION
In various embodiments, a hindsight application computes a hindsight metric value for evaluation of a video rate selection algorithm. The hindsight application determines a first encoding option associated with a source chunk of a media title based on a network throughput trace and a buffer trellis. The hindsight application determines that the first encoding option is associated with a buffered duration range. The buffered duration range is also associated with a second encoding option that is stored in the buffer trellis. After determining that the first encoding option is associated with a higher visual quality than the second encoding option, the hindsight application stores the first encoding option instead of the second encoding option in the buffer trellis to generate a modified buffer trellis. Finally, the hindsight application computes a hindsight metric value associated with a sequence of encoded chunks of the media title based on the modified buffer trellis.
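A minimal sketch of the buffer-trellis update described above; the quantization of buffered duration into fixed-width ranges and the fields of an encoding option are illustrative assumptions rather than details taken from the abstract:

```python
# Sketch only: bucket width and the EncodingOption fields are assumptions.
from dataclasses import dataclass

@dataclass
class EncodingOption:
    chunk_index: int
    bitrate_kbps: float
    visual_quality: float       # e.g., a VMAF-like score
    buffered_duration_s: float

def update_trellis(trellis: dict[tuple[int, int], EncodingOption],
                   option: EncodingOption,
                   bucket_s: float = 2.0) -> None:
    """Keep, per (chunk, buffered-duration range), only the highest-quality option."""
    bucket = int(option.buffered_duration_s // bucket_s)
    key = (option.chunk_index, bucket)
    incumbent = trellis.get(key)
    # Replace the stored option only if the new one has higher visual quality.
    if incumbent is None or option.visual_quality > incumbent.visual_quality:
        trellis[key] = option
```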
A method includes receiving, with a computing system, data representing a video item into a buffer. The method further includes outputting the video item from the buffer to a display system. The method further includes determining that utilization of the buffer falls below a predetermined threshold. The method further includes, in response to determining that the utilization of the buffer falls below the predetermined threshold, determining that there is a specified rebuffering point within a predetermined time frame. The method further includes pausing, with the computing system, the video item at the specified rebuffering point in response to determining that there is the specified rebuffering point within the predetermined time frame.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
54.
FEATURE GENERATION FOR ONLINE/OFFLINE MACHINE LEARNING
A system for utilizing models derived from offline historical data in online applications is provided. The system includes a processor and a memory storing machine-readable instructions for determining a set of contexts of usage data and, for each context within the set of contexts, collecting service data from services supporting the media streaming service and storing that service data in a database. The system performs an offline testing process by fetching service data for a defined context from the database, generating a first set of feature vectors based on the fetched service data, and providing the first set to a machine-learning module. The system also performs an online testing process by fetching active service data from the services supporting the media streaming service, generating a second set of feature vectors based on the fetched active service data, and providing the second set to the machine-learning module.
In various embodiments, an encoder comparison application compares the performance of different configured encoders. In operation, the encoder comparison application generates a first global convex hull of video encode points based on a first configured encoder and a set of subsequences included in a source video sequence. Each video encode point is associated with a different encoded version of the source video sequence. The encoder comparison application also generates a second global convex hull of video encode points based on a second configured encoder and the subsequences. Subsequently, the encoder comparison application computes a performance value for an encoding comparison metric based on the first global convex hull and the second global convex hull. Notably, the performance value estimates a difference in performance between the first configured encoder and the second configured encoder.
H04N 19/196 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding being specially adapted for the computation of encoding parameters, e.g. by averaging previously computed encoding parameters
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
56.
TECHNIQUES FOR EFFICIENTLY ACCESSING VALUES SPANNING SLABS OF MEMORY
In various embodiments, a memory pool application implements composite arrays via a memory pool that includes a first slab and a second slab. First, the memory pool application assigns the first slab and the second slab to a composite array. The memory pool application then modifies a final data word included in the first slab to store a first portion of a specified value and a leading data word included in the second slab to store a second portion of the specified value. The memory pool application copies the leading data word of the second slab to a duplicate data word included in the first slab. Subsequently, the memory pool application performs an unaligned read operation on the first slab based on a specified offset to retrieve a first word stored in memory and extracts the specified value from the first word based on the specified offset and a specified number of bits.
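The following simplified sketch illustrates the duplicate-word idea described above, so that a value straddling the two slabs can be recovered with a single unaligned read confined to the first slab; the 8-byte word size, little-endian packing, and slab layout are assumptions, not details from the abstract:

```python
# Sketch only: word size, endianness, and slab sizes are illustrative assumptions.
WORD = 8  # bytes per data word

class CompositeArray:
    def __init__(self, words_per_slab: int = 4):
        # Each slab carries one extra "duplicate" word after its final data word.
        self.slab1 = bytearray((words_per_slab + 1) * WORD)
        self.slab2 = bytearray((words_per_slab + 1) * WORD)
        self.words_per_slab = words_per_slab

    def write_spanning_value(self, value: int, bit_offset: int, num_bits: int) -> None:
        """Store `value` so it straddles the final word of slab1 and the leading word
        of slab2, then mirror slab2's leading word into slab1's duplicate word.
        Assumes bit_offset + num_bits <= 128."""
        final_start = (self.words_per_slab - 1) * WORD
        # Treat the final word of slab1 plus the leading word of slab2 as one
        # 16-byte little-endian region and drop the value in at bit_offset ...
        region = int.from_bytes(self.slab1[final_start:final_start + WORD] +
                                self.slab2[0:WORD], "little")
        mask = (1 << num_bits) - 1
        region = (region & ~(mask << bit_offset)) | ((value & mask) << bit_offset)
        packed = region.to_bytes(2 * WORD, "little")
        self.slab1[final_start:final_start + WORD] = packed[:WORD]
        self.slab2[0:WORD] = packed[WORD:]
        # ... and copy slab2's leading word into slab1's duplicate word.
        self.slab1[self.words_per_slab * WORD:] = self.slab2[0:WORD]

    def read_spanning_value(self, bit_offset: int, num_bits: int) -> int:
        """Unaligned read that touches only slab1 (final word plus duplicate word)."""
        final_start = (self.words_per_slab - 1) * WORD
        region = int.from_bytes(self.slab1[final_start:final_start + 2 * WORD], "little")
        return (region >> bit_offset) & ((1 << num_bits) - 1)

# Example: a 20-bit value written across the 64-bit slab boundary is read back
# from slab1 alone.
arr = CompositeArray()
arr.write_spanning_value(0xABCDE, bit_offset=52, num_bits=20)
assert arr.read_spanning_value(52, 20) == 0xABCDE
```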
In various embodiments, a hindsight application computes a total download size for a sequence of encoded chunks associated with a media title for evaluation of at least one aspect of a video streaming service. The hindsight application computes a feasible download end time associated with a source chunk of the media title based on a network throughput trace and a subsequent feasible download end time associated with a subsequent source chunk of the media title. The hindsight application then selects an encoded chunk associated with the source chunk based on the network throughput trace, the feasible download end time, and a preceding download end time associated with a preceding source chunk of the media title. Subsequently, the hindsight application computes the total download size based on the number of encoded bits included in the selected encoded chunk. The total download size correlates to an upper bound on visual quality.
A system for assessing deployments in a network-based media system is provided herein. The system includes a data storage system storing observation vectors, each observation vector being associated with an outcome indicator, and a processing device in communication with the data storage system to receive and store observation vectors and associated outcome indicators. The processing device performs operations including communicating with an endpoint device of a user to obtain information associated with the endpoint device; and transmitting an instance of a variable user interface to the endpoint device for presentation to the user via the endpoint device based on the stored observation vectors, the stored associated outcome indicators, and the obtained information associated with the endpoint device. Related systems and methods are also disclosed.
G06Q 30/02 - Marketing, e.g. market research and analysis, surveying, promotions, advertising, buyer profiling, customer management or rewards; Price estimation or determination
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
59.
Relationship-based search and recommendations via authenticated negatives
One embodiment of the present invention sets forth techniques for generating recommendation sets for a first client device. A recommendation system receives, from the first client device, a first selection of a first recommended item included in a plurality of recommended items. The recommendation system identifies a second recommended item included in the plurality of recommended items that has not been selected. The recommendation system retrieves an authenticated negative item from a plurality of authenticated negative items. The recommendation system stores one or more entries in a log file comprising a plurality of entries, based on at least one of the first recommended item, the second recommended item, and the authenticated negative item. One advantage of the disclosed techniques is that the use of authenticated negative examples, also referred to herein as authenticated negative items, provides a more relevant set of recommendations for the user.
The disclosed computer-implemented method may include establishing a header policy that is to be applied at a metadata proxy. The header policy may indicate that specified header information is to be included in each metadata service request sent to a metadata service. The method may also include accessing the established header policy at the metadata proxy, where the metadata proxy is configured to intercept metadata service requests and check the intercepted requests for the specified header information. The method may further include determining, at the metadata proxy, that the metadata service request does not include the specified header information and, in response to the determination, preventing the metadata service request from being passed to the metadata service. Various other methods, systems, and computer-readable media are also disclosed.
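A minimal sketch of the proxy-side check described above; the specific header names and the request representation are hypothetical:

```python
# Sketch only: header names are made up for illustration; the abstract does not
# specify which header information the policy requires.
REQUIRED_HEADERS = {"x-metadata-client-id", "x-metadata-request-reason"}

def proxy_metadata_request(request_headers: dict[str, str]) -> bool:
    """Return True if the intercepted metadata service request may be forwarded."""
    present = {name.lower() for name in request_headers}
    missing = REQUIRED_HEADERS - present
    if missing:
        # Header policy violated: drop the request instead of passing it on.
        print(f"blocking metadata request, missing headers: {sorted(missing)}")
        return False
    return True
```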
The disclosed computer-implemented method may include mapping an internal network to identify various nodes of the internal network. The method may further include determining where at least some of the internal network nodes identified in the mapping are located. The method may also include receiving a request for metadata service information from an application hosted on a cloud server instance. The method may then include providing a response to the received request for metadata service information if the determined location of the requesting node is approved or preventing a response to the received request for metadata service information if the determined location of the requesting node is not approved. Various other methods, systems, and computer-readable media are also disclosed.
The disclosed computer-implemented method may include initializing a server instance using a specified network address and an associated set of credentials, logging the network address of the initialized server instance as well as the associated set of credentials in a data log, analyzing network service requests to determine that a second server instance with a different network address is requesting a network service using the same set of credentials, accessing the data log to determine whether the second server instance is using a network address that is known to be valid within the network and, upon determining that the second server instance is not using a known network address, preventing the second server instance from performing specified tasks within the network. Various other methods, systems, and computer-readable media are also disclosed.
Various embodiments of the disclosure provide techniques for detecting anomalies across one or more components within a distributed computing system. An anomaly detection system retrieves event data associated with a real-time stream of events generated by one or more components within a distributed computing system. The anomaly detection system computes a failure metric based on the event data. The anomaly detection system determines that the failure metric exceeds a dynamically adjustable trigger condition. The anomaly detection system generates an alert associated with the failure metric.
Various embodiments of the invention disclosed herein provide techniques for performing distributed anti-entropy repair procedures across a plurality of nodes in a distributed database network. A node included in a plurality of nodes within the distributed database network determines, before all other nodes included in the plurality of nodes, that a first anti-entropy repair procedure has ended. The node determines that a second anti-entropy repair procedure is ready to begin. The node generates a schedule for executing one or more operations associated with the second anti-entropy repair procedure. The node writes the schedule to a shared repair schedule data structure to initiate the second anti-entropy repair procedure across multiple nodes included in the plurality of nodes. Each of the nodes included in the plurality of nodes then performs a node repair based on the schedule.
In various embodiments, a bootstrapping training subsystem performs sampling operation(s) on a training database that includes subjective scores to generate resampled datasets. For each resampled dataset, the bootstrapping training subsystem performs machine learning operation(s) to generate a different bootstrap perceptual quality model. The bootstrapping training subsystem then uses the bootstrap perceptual quality models to quantify the accuracy of a perceptual quality score generated by a baseline perceptual quality model for a portion of encoded video content. Advantageously, relative to prior art solutions in which the accuracy of a perceptual quality score is unknown, the bootstrap perceptual quality models enable developers and software applications to draw more valid conclusions and/or more reliably optimize encoding operations based on the perceptual quality score.
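A minimal sketch of the bootstrapping idea described above, with an ordinary least-squares model standing in for the perceptual quality model (the abstract does not prescribe a model family):

```python
# Sketch only: the least-squares model and 95% percentile band are assumptions.
import numpy as np

def bootstrap_quality_interval(features, subjective_scores, query, n_models=200, seed=0):
    """Train bootstrap models on resampled datasets and return the spread of their
    predictions for a single encoded-video feature vector `query`."""
    rng = np.random.default_rng(seed)
    features = np.asarray(features, dtype=float)          # shape (n_clips, n_features)
    scores = np.asarray(subjective_scores, dtype=float)   # shape (n_clips,)
    predictions = []
    for _ in range(n_models):
        idx = rng.integers(0, len(scores), size=len(scores))   # resample with replacement
        X, y = features[idx], scores[idx]
        coef, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
        predictions.append(float(np.r_[query, 1.0] @ coef))
    lo, hi = np.percentile(predictions, [2.5, 97.5])
    return lo, hi   # an accuracy band around the baseline model's score for `query`
```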
In various embodiments, an encoding metric comparison application computes a first set of quality scores associated with a test encoding configuration based on a set of bootstrap quality models. Each bootstrap quality model is trained based on a different subset of a training database. The encoding metric comparison application computes a second set of quality scores associated with a reference encoding configuration based on the set of bootstrap quality models. Subsequently, the encoding metric comparison application generates a distribution of bootstrap values for an encoding comparison metric based on the first set of quality scores and the second set of quality scores. The distribution quantifies an accuracy of a baseline value for the encoding comparison metric generated by a baseline quality model.
An endpoint device outputs frames of test media during a testing procedure. Each frame of test media includes a test pattern. A test module coupled to the endpoint device samples the test pattern and transmits sample data to a media test engine. The media test engine decodes a binary number from the test pattern and then converts the binary number to an integer value that is associated with the corresponding frame. The media test engine then analyzes sequences of these integer values to identify playback errors associated with the endpoint device.
H04N 19/89 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
H04L 12/24 - Arrangements for maintenance or administration
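A minimal sketch of the decoding and sequence analysis described in the test-pattern abstract above; the assumption that the pattern encodes the frame index as a row of light and dark cells, and the sampling threshold, are illustrative:

```python
# Sketch only: cell layout and luminance threshold are assumptions.
def decode_frame_index(cell_samples: list[float], threshold: float = 0.5) -> int:
    """Convert sampled pattern cells (0..1 luminance) into the frame's integer index."""
    bits = "".join("1" if s > threshold else "0" for s in cell_samples)
    return int(bits, 2)

def find_playback_errors(frame_indices: list[int]) -> list[str]:
    """Scan the decoded sequence of integer values for repeated or skipped frames."""
    errors = []
    for prev, cur in zip(frame_indices, frame_indices[1:]):
        if cur == prev:
            errors.append(f"repeated frame {cur}")
        elif cur > prev + 1:
            errors.append(f"skipped frames {prev + 1}..{cur - 1}")
        elif cur < prev:
            errors.append(f"out-of-order frame {cur} after {prev}")
    return errors
```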
In an embodiment, a data processing method provides an improvement in efficient use of computer memory and comprises using a computer, creating in computer memory a glyph memory area that is configured to store a plurality of cached glyphs; using the computer, receiving a request from an application to use a particular glyph; in response to the request, determining whether the particular glyph is in the glyph memory area; in response to determining that the particular glyph is not in the glyph memory area: attempting to store a bitmap of the particular glyph to a next location in the glyph memory area; in response to determining that the next location is not available a first time, reclaiming space in the glyph memory area in an amount sufficient to store the bitmap; attempting a second time to store the bitmap in the next location in the glyph memory area; in response to determining that the next location is not available a second time, clearing the glyph memory area of all previously stored glyphs and storing the bitmap in the glyph memory area.
G06F 12/0875 - Addressing of a memory level in which the access to the desired data or data block requires associative addressing means, e.g. caches with dedicated cache, e.g. instruction or stack
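A minimal sketch of the glyph-caching strategy described in the abstract above, with a byte-budgeted dictionary standing in for the glyph memory area; evicting the least recently used glyphs first is an assumption, since the method only requires reclaiming enough space:

```python
# Sketch only: capacity accounting and the LRU reclamation order are assumptions.
from collections import OrderedDict

class GlyphCache:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.used = 0
        self.glyphs: "OrderedDict[str, bytes]" = OrderedDict()   # glyph id -> bitmap

    def get(self, glyph_id: str, render_bitmap) -> bytes:
        if glyph_id in self.glyphs:
            self.glyphs.move_to_end(glyph_id)        # mark as recently used
            return self.glyphs[glyph_id]
        bitmap = render_bitmap(glyph_id)
        if not self._store(glyph_id, bitmap):        # first attempt to store
            self._reclaim(len(bitmap))               # free just enough space
            if not self._store(glyph_id, bitmap):    # second attempt
                self.glyphs.clear()                  # clear all previously stored glyphs
                self.used = 0
                self._store(glyph_id, bitmap)
        return bitmap

    def _store(self, glyph_id: str, bitmap: bytes) -> bool:
        if self.used + len(bitmap) > self.capacity:
            return False                             # next location not available
        self.glyphs[glyph_id] = bitmap
        self.used += len(bitmap)
        return True

    def _reclaim(self, needed: int) -> None:
        while self.glyphs and self.capacity - self.used < needed:
            _, evicted = self.glyphs.popitem(last=False)   # evict oldest entries first
            self.used -= len(evicted)
```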
69.
Techniques for modeling temporal distortions when predicting perceptual video quality
In various embodiments, a prediction application computes a quality score for re-constructed video content that is derived from source video content. The prediction application generates a frame difference matrix based on two frames included in the re-constructed video content. The prediction application then generates a first entropy matrix based on the frame difference matrix and a first scale. Subsequently, the prediction application computes a first value for a first temporal feature based on the first entropy matrix and a second entropy matrix associated with both the source video content and the first scale. The prediction application computes a quality score for the re-constructed video content based on the first value, a second value for a second temporal feature associated with a second scale, and a machine learning model that is trained using subjective quality scores. The quality score indicates a level of visual quality associated with streamed video content.
In various embodiments, an ensemble prediction application computes a quality score for re-constructed video content that is derived from source video content. The ensemble prediction application computes a first quality score for the re-constructed video content based on a first set of values for a first set of features and a first model that associates the first set of values with the first quality score. The ensemble prediction application computes a second quality score for the re-constructed video content based on a second set of values for a second set of features and a second model that associates the second set of values with the second quality score. Subsequently, the ensemble prediction application determines an overall quality score for the re-constructed video content based on the first quality score and the second quality score. The overall quality score indicates a level of visual quality associated with streamed video content.
In various embodiments, a subtitle application generates a subtitle list for a trailer. In operation, the subtitle application performs matching operation(s) between trailer audio associated with a trailer and source audio associated with an audiovisual program. The subtitle application then maps a subtitle associated with the source audio from a source timeline associated with the source audio to a trailer timeline associated with the trailer audio to generate a mapped subtitle. Subsequently, the subtitle application generates a trailer subtitle list based on the mapped subtitle and at least one additional mapped subtitle. Because the subtitle application generates the trailer subtitle list based on audio comparisons, the subtitle application ensures that the proper subtitles are included in the trailer subtitle list without requiring a subtitler to view the trailer.
A data processing method comprises receiving title interaction data, wherein the title interaction data specifies an order in which users interacted with a plurality of titles; generating a plurality of statistical models, each statistical model of the plurality of statistical models specifying a plurality of probabilities, wherein the plurality of probabilities represent, for each first title of the plurality of titles and each second title of the plurality of titles, a likelihood that a user will interact with the first title and then next interact with the second title; refining the plurality of statistical models based on the title interaction data; determining a plurality of weight values corresponding to the plurality of statistical models for a particular user; and identifying, for the particular user, one or more recommended titles of the plurality of titles based on the plurality of weight values and the plurality of statistical models.
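A minimal sketch of the statistical-model mixture described above, representing each model as a first-order transition table and assuming the per-user weights have already been determined:

```python
# Sketch only: first-order transition tables and precomputed per-user weights
# are assumptions about how the plurality of statistical models is represented.
from collections import defaultdict

def build_transition_model(interaction_orders: list[list[str]]) -> dict:
    """Count first-title -> second-title transitions and normalize to probabilities."""
    counts: dict[str, dict[str, float]] = defaultdict(lambda: defaultdict(float))
    for order in interaction_orders:
        for first, second in zip(order, order[1:]):
            counts[first][second] += 1.0
    return {first: {second: c / sum(nxt.values()) for second, c in nxt.items()}
            for first, nxt in counts.items()}

def recommend(models: list[dict], weights: list[float],
              current_title: str, top_k: int = 3) -> list[str]:
    """Blend each model's next-title probabilities using the particular user's weights."""
    blended: dict[str, float] = defaultdict(float)
    for model, w in zip(models, weights):
        for title, p in model.get(current_title, {}).items():
            blended[title] += w * p
    return sorted(blended, key=blended.get, reverse=True)[:top_k]
```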
In various embodiments, a subtitle conformance application causes modifications to a subtitle list based on changes associated with an audiovisual program. In operation, the subtitle conformance application performs comparison operation(s) between versions of a subtitle template to identify changes to subtitles associated with the audiovisual program. The subtitle conformance application then determines a mapping between a first change included in the changes and a subtitle list associated with the audiovisual program. Finally, the subtitle conformance application causes the subtitle list to be modified based on the first change and the mapping. Advantageously, the subtitle conformance application enables productive development of subtitles to begin before the audiovisual program is finalized.
In various embodiments, a subtitle timing application detects timing errors between subtitles and shot changes. In operation, the subtitle timing application determines that a temporal edge associated with a subtitle does not satisfy a timing guideline based on a shot change. The shot change occurs within a sequence of frames of an audiovisual program. The subtitle timing application then determines a new temporal edge that satisfies the timing guideline relative to the shot change. Subsequently, the subtitle timing application causes a modification to a temporal location of the subtitle within the sequence of frames based on the new temporal edge. Advantageously, the modification to the subtitle improves a quality of a viewing experience for a viewer. Notably, by automatically detecting timing errors, the subtitle timing application facilitates proper and efficient re-scheduling of subtitles that are not optimally timed with shot changes.
H04N 5/14 - Picture signal circuitry for video frequency region
H04N 9/82 - Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback the individual colour picture signal components being recorded simultaneously only
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
H04N 5/445 - Receiver circuitry for displaying additional information
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
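A minimal sketch of the timing-guideline check described in the subtitle-timing abstract above; the specific guideline (a subtitle edge must either land exactly on a shot change or stay at least a minimum number of frames away from it) is an illustrative assumption:

```python
# Sketch only: the min_gap guideline and the snap-or-push repair are assumptions.
def conform_edge(edge_frame: int, shot_changes: list[int], min_gap: int = 12) -> int:
    """Return a new temporal edge that satisfies the guideline; the original edge
    is returned unchanged when it already complies."""
    for cut in shot_changes:
        distance = edge_frame - cut
        if distance == 0 or abs(distance) >= min_gap:
            continue
        # Too close to the cut: either snap onto the cut or push the edge out to
        # min_gap frames away, whichever moves the subtitle edge less.
        snapped = cut
        pushed = cut + min_gap if distance > 0 else cut - min_gap
        return snapped if abs(snapped - edge_frame) <= abs(pushed - edge_frame) else pushed
    return edge_frame
```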
During a Transmission Control Protocol (“TCP”) session, a sending endpoint computer monitors amounts of data sent and patterns of data loss as data is sent to a receiving endpoint computer. The sending endpoint computer periodically determines whether data is being sent below, at or above path capacity, based on the monitored amounts of data sent and patterns of data loss observed. The sending endpoint computer periodically dynamically adjusts the rate at which data is sent to the receiving endpoint computer, in response to the determinations whether data is being sent below, at or above path capacity.
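A minimal sketch of the periodic rate adjustment described above; the loss-rate thresholds and multiplicative step sizes are illustrative assumptions, not heuristics taken from the abstract:

```python
# Sketch only: thresholds and adjustment factors are assumptions.
def adjust_send_rate(current_rate_bps: float,
                     bytes_sent: int,
                     bytes_lost: int,
                     low_loss: float = 0.001,
                     high_loss: float = 0.02) -> float:
    """Periodically raise or lower the sending rate based on observed data loss."""
    loss_rate = bytes_lost / bytes_sent if bytes_sent else 0.0
    if loss_rate < low_loss:
        return current_rate_bps * 1.10   # sending below path capacity: probe upward
    if loss_rate > high_loss:
        return current_rate_bps * 0.70   # sending above path capacity: back off
    return current_rate_bps              # roughly at capacity: hold steady
```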
One embodiment of the present invention sets forth a technique for displaying scenes included in media assets. The technique includes selecting a first scene included in a first video asset based on one or more preferences and metadata associated with multiple scenes. The first video asset is one of multiple video assets, and each scene included in the multiple scenes is included in one of the video assets included in the multiple video assets. The technique further includes displaying the first scene within a first portion of a display area.
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
G06F 3/0482 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance interaction with lists of selectable items, e.g. menus
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
In various embodiments, a shot collation application causes multiple encoding instances to encode a source video sequence that includes at least two shot sequences. The shot collation application assigns a first shot sequence to a first chunk. Subsequently, the shot collation application determines that a second shot sequence does not meet a collation criterion with respect to the first chunk. Consequently, the shot collation application assigns the second shot sequence or a third shot sequence derived from the second shot sequence to a second chunk. The shot collation application causes a first encoding instance to independently encode each shot sequence assigned to the first chunk. Similarly, the shot collation application causes a second encoding instance to independently encode each shot sequence assigned to the second chunk. Finally, a chunk assembler combines the resulting first encoded chunk and second encoded chunk to generate an encoded video sequence.
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/436 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation using parallelised computational arrangements
H04N 19/115 - Selection of the code volume for a coding unit prior to coding
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 19/146 - Data rate or code amount at the encoder output
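A minimal sketch of the shot-collation step described in the abstract above; using a maximum total chunk duration as the collation criterion and splitting over-long shots into derived pieces are illustrative assumptions:

```python
# Sketch only: the duration-based collation criterion is an assumption.
def collate_shots(shot_durations_s: list[float], max_chunk_s: float = 60.0) -> list[list[float]]:
    """Assign shot sequences to chunks; each chunk is later encoded by one instance."""
    chunks: list[list[float]] = [[]]
    for shot in shot_durations_s:
        # Split a shot that on its own violates the criterion into smaller pieces.
        pieces = [shot] if shot <= max_chunk_s else [max_chunk_s] * int(shot // max_chunk_s)
        if shot > max_chunk_s and shot % max_chunk_s:
            pieces.append(shot % max_chunk_s)
        for piece in pieces:
            if sum(chunks[-1]) + piece > max_chunk_s and chunks[-1]:
                chunks.append([])        # shot does not meet the collation criterion
            chunks[-1].append(piece)
    return chunks
```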
78.
Automatic detection of preferences for subtitles and dubbing
In an approach, a server computer receives a request from a client computer specifying particular content for a particular user, wherein the particular content is associated with an original audio language. In response to receiving the request, the server computer selects a preferred audio language and a preferred subtitle language for the particular content based on a particular record of a preference database. The server computer returns asset identifying data that the client computer uses to obtain a stream of the particular content using the preferred audio language and the preferred subtitle language from a content delivery network (CDN) or other asset location. The server computer receives a message from the client computer that identifies a presented audio language and a presented subtitle language that were presented to the particular user while the particular content streamed. In response to a determination that the presented audio language differs from the preferred audio language or that the presented subtitle language differs from the preferred subtitle language, the server computer updates the particular record in the preference database.
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/4722 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for requesting additional data associated with the content
G06F 17/27 - Automatic analysis, e.g. parsing, orthograph correction
H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies
H04N 21/485 - End-user interface for client configuration
H04N 21/439 - Processing of audio elementary streams
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G06F 16/78 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
H04L 29/06 - Communication control; Communication processing characterised by a protocol
One embodiment of the present invention sets forth a method for updating content stored in a cache residing at an internet service provider (ISP) location that includes receiving popularity data associated with a first plurality of content assets, where the popularity data indicate the popularity of each content asset in the first plurality of content assets across a user base that spans multiple geographic regions, generating a manifest that includes a second plurality of content assets based on the popularity data and a geographic location associated with the cache, where each content asset included in the manifest is determined to be popular among users proximate to the geographic location or users with preferences similar to users proximate to the geographic location, and transmitting the manifest to the cache, where the cache is configured to update one or more content assets stored in the cache based on the manifest.
G06F 15/167 - Interprocessor communication using a common memory, e.g. mailbox
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
G06F 16/957 - Browsing optimisation, e.g. caching or content distillation
H04N 21/218 - Source of audio or video content, e.g. local disk arrays
H04N 21/222 - Secondary servers, e.g. proxy server or cable television Head-end
H04N 21/237 - Communication with additional data server
H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
H04N 21/61 - Network physical structure; Signal processing
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
A technique for translating text strings includes receiving a source language text string from an application, determining that a translated text string that includes a translation in a target language of the source language text string is not available for use by the application, transmitting the source language text string to a translation service for translation, receiving the translated text string from the translation service, and causing the translated text string to be available for use by the application.
In various embodiments, a defective pixel detection application automatically detects defective pixels in video content. In operation, the defective pixel detection application computes a first set of pixel intensity gradients based on a first frame of video content and a first neighborhood of pixels associated with a first pixel. The defective pixel detection application also computes a second set of pixel intensity gradients based on the first frame and a second neighborhood of pixels associated with the first pixel. Subsequently, the defective pixel detection application computes a statistical distance between the first set of pixel intensity gradients and the second set of pixel intensity gradients. The defective pixel detection application then determines that the first pixel is defective based on the statistical distance.
One embodiment of the present invention sets forth a technique for compressing a forwarding table. The technique includes selecting, from a listing of network prefixes, a plurality of network prefixes that are within a range of a subnet. The technique further includes sorting the plurality of network prefixes to generate one or more subgroups of network prefixes and selecting a first subgroup of network prefixes included in the one or more subgroups of network prefixes. The technique further includes generating a synthetic supernet based on the first subgroup of network prefixes.
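A minimal sketch of the supernet synthesis described above, using Python's ipaddress module; treating every selected prefix as a single subgroup is a simplification of the sorting and subgroup-selection steps in the abstract:

```python
# Sketch only: single-subgroup handling and the example prefixes are assumptions.
import ipaddress

def smallest_common_supernet(nets: list[ipaddress.IPv4Network]) -> ipaddress.IPv4Network:
    """Shrink the prefix length until one network covers every prefix in the group."""
    candidate = nets[0]
    while not all(n.subnet_of(candidate) for n in nets):
        candidate = candidate.supernet()
    return candidate

def synthesize(prefixes: list[str], within: str = "10.0.0.0/8") -> ipaddress.IPv4Network:
    """Select the prefixes that fall within the subnet, sort them, and cover them
    with a single synthetic supernet."""
    subnet = ipaddress.ip_network(within)
    selected = sorted(n for p in prefixes
                      if (n := ipaddress.ip_network(p)).subnet_of(subnet))
    return smallest_common_supernet(selected)

# Example: these four /24 prefixes compress into the synthetic supernet 10.1.4.0/22.
print(synthesize(["10.1.4.0/24", "10.1.5.0/24", "10.1.6.0/24", "10.1.7.0/24"]))
```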
In various embodiments, a reconstruction application generates reconstructed video content that includes synthesized film grain. The reconstruction application performs scaling operation(s) on first unit noise based on a piecewise linear scaling function and the brightness component of the decoded video content to generate a brightness component of synthesized film grain. The reconstruction application then generates a brightness component of reconstructed video content based on the brightness component of the synthesized film grain and the brightness component of the decoded video content. Finally, the reconstruction application performs operation(s) related to saving the reconstructed video content to a file and/or further processing the reconstructed video content. Advantageously, the synthesized film grain reliably represents the film grain included in source video content from which the decoded video content was derived.
G06T 3/40 - Scaling of a whole image or part thereof
G06T 7/90 - Determination of colour characteristics
G06T 3/20 - Linear translation of a whole image or part thereof, e.g. panning
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 19/44 - Decoders specially adapted therefor, e.g. video decoders which are asymmetric with respect to the encoder
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
84.
Scalable techniques for executing custom algorithms on media items
In various embodiments, a workflow engine executes a custom algorithm on a media item. In operation, the workflow engine generates split specifications based on a split function included in a container image. Each split specification is associated with a different portion of the media item. Subsequently, the workflow engine generates map output files based on the split specifications and a map function included in the container image. The workflow engine then generates one or more final output file(s) based on the map output files and a collect function included in the container image. The final output file(s) are subsequently used to perform at least one of an evaluation operation, a modification operation, or a representation operation with respect to the media item.
Techniques are disclosed for improving user experience of multimedia streaming over computer networks. More specifically, techniques presented herein reduce (or eliminate) latency in playback start time for streaming digital media content resulting from digital rights management (DRM) authorizations. A streaming media client (e.g., a browser, set-top box, mobile telephone or tablet “app”) may request a “fast-expiring” license for titles the streaming media client predicts a user is likely to begin streaming. A fast-expiring license is a DRM license (and associated decryption key) which is valid for only a very limited time after being used for playback. During the validity period of such a license, the client device requests a “normal” or “regular” license to continue accessing the title after the fast-expiring license expires.
G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
H04N 21/8355 - Generation of protective data, e.g. certificates involving usage data, e.g. number of copies or viewings allowed
H04N 21/482 - End-user interface for program selection
H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
H04N 21/4405 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving video stream decryption
H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
H04N 21/6334 - Control signals issued by server directed to the network components or client directed to client for authorisation, e.g. by transmitting a key
86.
Encoding techniques for optimizing distortion and bitrate
An encoding engine encodes a video sequence to provide optimal quality for a given bitrate. The encoding engine cuts the video sequence into a collection of shot sequences. Each shot sequence includes video frames captured from a particular capture point. The encoding engine resamples each shot sequence across a range of different resolutions, encodes each resampled sequence with a range of quality parameters, and then upsamples each encoded sequence to the original resolution of the video sequence. For each upsampled sequence, the encoding engine computes a quality metric and generates a data point that includes the quality metric and the resample resolution. The encoding engine collects all such data points and then computes the convex hull of the resultant data set. Based on all convex hulls across all shot sequences, the encoding engine determines an optimal collection of shot sequences for a range of bitrates.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 21/238 - Interfacing the downstream path of the transmission network, e.g. adapting the transmission rate of a video stream to network bandwidth; Processing of multiplex streams
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/147 - Data rate or code amount at the encoder output according to rate distortion criteria
H04N 19/192 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the adaptation method, adaptation tool or adaptation type used for the adaptive coding the adaptation method, adaptation tool or adaptation type being iterative or recursive
H04N 19/59 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving spatial sub-sampling or interpolation, e.g. alteration of picture size or resolution
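A minimal sketch of the per-shot convex hull computation described in the encoding abstract above, over (bitrate, quality) data points; the sample values are made up for illustration:

```python
# Sketch only: each point is (bitrate_kbps, quality) for one encode of a shot
# sequence; the upper convex hull keeps only encodes on the rate-quality envelope.
def upper_convex_hull(points: list[tuple[float, float]]) -> list[tuple[float, float]]:
    """Monotone-chain upper hull over (bitrate, quality) points sorted by bitrate."""
    pts = sorted(set(points))
    hull: list[tuple[float, float]] = []
    for p in pts:
        # Pop the last hull point while it lies on or below the chord from the
        # point before it to the new point (cross product >= 0), since such an
        # encode offers no rate-quality benefit.
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Example: the 1500 kbps encode falls below the convex envelope and is dropped.
print(upper_convex_hull([(500, 70.0), (1500, 74.0), (3000, 90.0), (6000, 95.0)]))
```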
87.
ENCODING TECHNIQUES FOR OPTIMIZING DISTORTION AND BITRATE
A shot analyzer varies the resolution when generating encoded video sequences for streaming. The shot analyzer generates a first encoded video sequence based on a first resolution and a source video sequence that is associated with a video title. The shot analyzer then determines a first encoded shot sequence from multiple encoded shot sequences included in the first encoded video sequence based on quality metric(s). The first encoded shot sequence is associated with a first shot sequence included in the source video sequence. Subsequently, the shot analyzer generates a second encoded shot sequence based on a second resolution and the first shot sequence. The shot analyzer generates a second encoded video sequence based on the first encoded video sequence and the second encoded shot sequence. At least a first portion of the second encoded video sequence is subsequently streamed to an endpoint device during playback of the video title.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
A sequence analyzer compares different episodes of an episodic serial to identify portions of a current episode of the serial that have already been played back to a user. Those portions may include introductory material such as credits, or a recap section that includes content from previous episodes. The sequence analyzer parses previous episodes of the serial and selects a representative frame for each shot sequence. The sequence analyzer then generates a fingerprint for each shot sequence based on the associated representative frame. The sequence analyzer compares fingerprints associated with a current episode of the serial to fingerprints associated with one or more previous episodes of the serial to identify shot sequences that have already been played. The user may then skip those repeated sequences via a playback interface.
H04L 29/06 - Communication control; Communication processing characterised by a protocol
G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object or an image, setting a parameter value or selecting a range
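A minimal sketch of the fingerprinting and comparison described in the sequence-analyzer abstract above; the average-hash fingerprint over a downscaled grayscale frame and the Hamming-distance threshold are illustrative stand-ins for whatever fingerprint the system actually uses:

```python
# Sketch only: the average-hash fingerprint and matching threshold are assumptions.
import numpy as np

def fingerprint(frame_gray: np.ndarray, size: int = 8) -> int:
    """Hash a representative frame: block-average to size x size, threshold at the mean."""
    h, w = frame_gray.shape
    small = frame_gray[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    bits = (small > small.mean()).flatten()
    return int("".join("1" if b else "0" for b in bits), 2)

def already_played(current: list[int], previous: set[int], max_hamming: int = 5) -> list[bool]:
    """Flag shots of the current episode whose fingerprints match a previous episode."""
    def close(a: int, b: int) -> bool:
        return bin(a ^ b).count("1") <= max_hamming
    return [any(close(fp, prev) for prev in previous) for fp in current]
```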
One embodiment of the present invention sets forth a technique for correcting color values. The technique includes downsampling first color space values to generate downsampled color space values and upsampling the downsampled color space values to generate second color space values. The technique further includes modifying at least one component value included in the downsampled color space values based on a first component value included in the first color space values, a second component value included in the second color space values, and an approximation of a nonlinear transfer function.
H04N 9/64 - Circuits for processing colour signals
H04N 19/132 - Sampling, masking or truncation of coding units, e.g. adaptive resampling, frame skipping, frame interpolation or high-frequency transform coefficient masking
H04N 19/85 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression
H04N 9/68 - Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
H04N 9/77 - Circuits for processing the brightness signal and the chrominance signal relative to each other, e.g. adjusting the phase of the brightness signal relative to the colour signal, correcting differential gain or differential phase
H04N 19/186 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a colour or a chrominance component
H04N 19/10 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
H04N 19/86 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving reduction of coding artifacts, e.g. of blockiness
An endpoint device outputs frames of test media during a testing procedure. Each frame of test media includes a test pattern. A test module coupled to the endpoint device samples the test pattern and transmits sample data to a media test engine. The media test engine decodes a binary number from the test pattern and then converts the binary number to an integer value that is associated with the corresponding frame. The media test engine then analyzes sequences of these integer values to identify playback errors associated with the endpoint device.
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 19/89 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using pre-processing or post-processing specially adapted for video compression involving methods or arrangements for detection of transmission errors at the decoder
H04L 29/06 - Communication control; Communication processing characterised by a protocol
One embodiment includes acceleration systems that operate as intermediaries between an API processing system and its clients to reduce API call roundtrip latencies. The acceleration systems are a network of interconnected systems that are distributed across the globe. A given acceleration system establishes a network connection with a given client and receives a request for processing an API call over the connection. The programming function associated with the API call is configured in the API processing system. The acceleration system facilitates the processing of the API call over an established connection with the API processing system.
One embodiment of the invention sets forth a mechanism for encoding video streams associated with the same digital content such that switch points staggered across two video streams occur at every offset temporal distance. The offset temporal distance is less than the distance between two consecutive key frames in a given video stream. This enables a content player to switch to a video stream having a playback quality up or down one level from a current video stream at the offset temporal distance from the most recently played key frame. In effect, the content player does not wait the entire key frame temporal distance before switching.
H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
H04N 21/2365 - Multiplexing of several video streams
H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
H04N 21/233 - Processing of audio elementary streams
H04N 19/114 - Adapting the group of pictures [GOP] structure, e.g. number of B-frames between two anchor frames
H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
H04N 19/172 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being an image region, e.g. an object the region being a picture, frame or field
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
A method and system for discovering and testing security assets is provided. Based on source definition data describing sources to monitor on one or more computer networks, an example system scans the sources to identify security assets. The system analyzes the security assets to identify characteristics of the server-based applications. The system stores database records describing the security assets and the identified characteristics. The system queries the database records to select, based at least on the identified characteristics, one or more target assets, from the security assets, on which to conduct one or more security tests. Responsive to selecting the one or more target assets, the system conducts the one or more security tests on the one or more target assets. The system identifies one or more security vulnerabilities at the one or more target assets based on the conducted one or more security tests.
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
A method includes receiving, with a computing system, data representing a video item into a buffer. The method further includes outputting the video item from the buffer to a display system. The method further includes determining that utilization of the buffer falls below a predetermined threshold. The method further includes, in response to determining that the utilization of the buffer falls below the predetermined threshold, determining that there is a specified rebuffering point within a predetermined time frame. The method further includes pausing, with the computing system, the video item at the specified rebuffering point in response to determining that there is the specified rebuffering point within the predetermined time frame.
H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
H04N 21/433 - Content storage operation, e.g. storage operation in response to a pause request or caching operations
H04N 21/61 - Network physical structure; Signal processing
95.
OPTIMIZING ENCODING OPERATIONS WHEN GENERATING ENCODED VERSIONS OF A MEDIA TITLE
In various embodiments, a sequence-based encoding application partitions a set of shot sequences associated with a media title into multiple clusters based on at least one feature that characterizes media content and/or encoded media content associated with the media title. The clusters include at least a first cluster and a second cluster. The sequence-based encoding application encodes a first shot sequence using a first operating point to generate a first encoded shot sequence. The first shot sequence and the first operating point are associated with the first cluster. By contrast, the sequence-based encoding application encodes a second shot sequence using a second operating point to generate a second encoded shot sequence. The second shot sequence and the second operating point are associated with the second cluster. Subsequently, the sequence-based encoding application generates an encoded media sequence based on the first encoded shot sequence and the second encoded shot sequence.
H04N 19/42 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals - characterised by implementation details or hardware specially adapted for video compression or decompression, e.g. dedicated software implementation
H04N 19/179 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a scene or a shot
H04N 19/142 - Detection of scene cut or scene change
H04N 19/154 - Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
H04N 19/177 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the coding unit, i.e. the structural portion or semantic portion of the video signal being the object or the subject of the adaptive coding the unit being a group of pictures [GOP]
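A minimal sketch of per-cluster operating points for the sequence-based encoding described above, assuming shot sequences characterized by a single complexity feature, a simple threshold-based clustering, and placeholder encode parameters; the actual feature set, clustering method, and encoder are not specified in the abstract.

```python
from dataclasses import dataclass

@dataclass
class Shot:
    index: int
    complexity: float  # single illustrative feature characterizing the shot

# Illustrative operating points per cluster; values are not from the abstract.
OPERATING_POINTS = {
    "low_complexity":  {"resolution": "1920x1080", "crf": 27},
    "high_complexity": {"resolution": "1920x1080", "crf": 22},
}

def cluster_of(shot: Shot) -> str:
    # Simple threshold-based clustering on the single feature; a real system
    # could cluster on many content and encoded-content features.
    return "high_complexity" if shot.complexity > 0.5 else "low_complexity"

def encode_shot(shot: Shot, point: dict) -> str:
    # Placeholder for invoking an encoder with the cluster's operating point.
    return f"shot{shot.index}@{point['resolution']}/crf{point['crf']}"

def encode_media_sequence(shots: list[Shot]) -> list[str]:
    # Each shot sequence is encoded with the operating point of its cluster,
    # and the encoded shots are assembled into the encoded media sequence.
    return [encode_shot(s, OPERATING_POINTS[cluster_of(s)]) for s in shots]

shots = [Shot(0, 0.2), Shot(1, 0.8), Shot(2, 0.3)]
print(encode_media_sequence(shots))
```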
96.
Distributed traffic management system and techniques
Approaches, techniques, and mechanisms are disclosed for implementing a distributed firewall. In an embodiment, many different computer assets police incoming messages based on local policy data. This local policy data is synchronized with global policy data. The global policy data is generated by one or more separate analyzers. Each analyzer has access to message logs, or information derived therefrom, for groups of computer assets, and is thus able to generate policies based on intelligence from an entire group as opposed to an isolated asset. Among other effects, some of the approaches, techniques, and mechanisms may be effective even in computing environments with limited supervision over the attack surface, and/or computing environments in which assets may need to make independent decisions with respect to how incoming messages should be handled, on account of latency and/or unreliability in connections to other system components.
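A minimal sketch of the local/global policy split described above, assuming a hypothetical Analyzer that derives block rules from aggregated message logs and assets that keep policing on their local copy between synchronizations; the aggregation rule shown is invented for illustration.

```python
import time

class Analyzer:
    """Derives global policy from message logs aggregated across assets."""
    def __init__(self):
        self.global_policy = {"blocked_sources": set()}

    def ingest_logs(self, logs: list[dict]) -> None:
        # Illustrative rule: block any source seen failing auth on 3+ assets,
        # i.e. intelligence from the whole group rather than one asset.
        failures: dict[str, set] = {}
        for entry in logs:
            if entry["event"] == "auth_failure":
                failures.setdefault(entry["source"], set()).add(entry["asset"])
        for source, assets in failures.items():
            if len(assets) >= 3:
                self.global_policy["blocked_sources"].add(source)

class Asset:
    """One computer asset policing incoming messages with local policy data."""
    def __init__(self, name: str):
        self.name = name
        self.local_policy = {"blocked_sources": set()}
        self.last_sync = 0.0

    def sync(self, analyzer: Analyzer) -> None:
        # Synchronize local policy with global policy when reachable; between
        # syncs the asset decides independently on possibly stale data.
        self.local_policy["blocked_sources"] = set(
            analyzer.global_policy["blocked_sources"])
        self.last_sync = time.time()

    def allow(self, message: dict) -> bool:
        return message["source"] not in self.local_policy["blocked_sources"]

analyzer = Analyzer()
analyzer.ingest_logs([
    {"event": "auth_failure", "source": "10.0.0.9", "asset": a}
    for a in ("web-1", "web-2", "api-1")
])
asset = Asset("web-1")
asset.sync(analyzer)
print(asset.allow({"source": "10.0.0.9"}))  # False: blocked via group intelligence
```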
One embodiment of the present invention sets forth a technique for generating one or more hash data structures. The technique includes generating a hash data structure having entries that correspond to a plurality of content servers, and, for each file included in a first plurality of files, allocating the file to one or more content servers included in the plurality of content servers by comparing a hash value associated with the file to one or more of the entries. The technique further includes comparing a network bandwidth utilization of a first content server to a network bandwidth utilization associated with one or more other content servers included in the plurality of content servers to generate a result, and modifying a first number of the entries associated with the first content server based on the result to generate a biased hash data structure.
G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
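A minimal sketch of a bandwidth-biased hash data structure in the spirit of the abstract above, assuming hashlib-based placement on a sorted entry table and an illustrative rule that shrinks a server's entry count when its bandwidth utilization exceeds the group mean; the abstract does not fix the hashing scheme or the exact biasing rule.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

def build_table(entries_per_server: dict[str, int]) -> list[tuple[int, str]]:
    """Hash data structure: sorted (hash, server) entries; more entries for
    a server means more files are allocated to it."""
    table = [(_hash(f"{server}-{i}"), server)
             for server, count in entries_per_server.items()
             for i in range(count)]
    return sorted(table)

def allocate(table: list[tuple[int, str]], filename: str) -> str:
    """Allocate a file by comparing its hash value to the table entries."""
    idx = bisect.bisect(table, (_hash(filename), "")) % len(table)
    return table[idx][1]

def rebias(entries_per_server: dict[str, int],
           utilization: dict[str, float]) -> dict[str, int]:
    """Illustrative biasing rule: servers above the mean bandwidth
    utilization lose 20% of their entries so fewer files land on them."""
    mean = sum(utilization.values()) / len(utilization)
    return {s: max(1, int(n * 0.8)) if utilization[s] > mean else n
            for s, n in entries_per_server.items()}

entries = {"cs-a": 100, "cs-b": 100, "cs-c": 100}
table = build_table(entries)
biased = build_table(rebias(entries, {"cs-a": 0.9, "cs-b": 0.4, "cs-c": 0.5}))
print(allocate(table, "movie-123.mp4"), allocate(biased, "movie-123.mp4"))
```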
A security application manages the security and reliability of networked applications executing on a collection of interacting computing elements within a distributed computing architecture. The security application monitors various classes of resources utilized by the collection of nodes within the distributed computing architecture and determines whether utilization of a class of resources is approaching a pre-determined maximum limit. The security application performs a vulnerability scan of a networked application to determine whether the networked application is prone to a risk of intentional or inadvertent breach by an external application. The security application scans a distributed computing architecture for the existence of access control lists (ACLs), and stores ACL configurations and configuration changes in a database. The security application scans a distributed computing architecture for the existence of security certificates, places newly discovered security certificates in a database, and deletes outdated security certificates. Advantageously, security and reliability are improved in a distributed computing architecture.
G06F 16/28 - Databases characterised by their database models, e.g. relational or object models
G06F 21/45 - Structures or tools for the administration of authentication
G06F 21/57 - Certifying or maintaining trusted computer platforms, e.g. secure boots or power-downs, version controls, system software checks, secure updates or assessing vulnerabilities
H04L 9/32 - Arrangements for secret or secure communication including means for verifying the identity or authority of a user of the system
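A minimal sketch of the monitoring, ACL-tracking, and certificate-housekeeping behavior described above, assuming an in-memory sqlite3 database, hypothetical inputs, and illustrative per-class limits; the abstract does not name specific resource classes or scan mechanisms.

```python
import sqlite3
from datetime import datetime, timezone

# Illustrative per-class resource limits; the abstract does not fix values.
MAX_LIMITS = {"connections": 10_000, "open_files": 50_000}
WARN_FRACTION = 0.9  # utilization at which a class is "approaching" its limit

def check_resources(usage: dict[str, int]) -> list[str]:
    """Return resource classes whose utilization approaches the max limit."""
    return [cls for cls, value in usage.items()
            if value >= WARN_FRACTION * MAX_LIMITS[cls]]

def record_acls(db: sqlite3.Connection, acls: dict[str, str]) -> None:
    """Store ACL configurations; each changed value is recorded as a new row."""
    for name, config in acls.items():
        db.execute("INSERT INTO acl_history VALUES (?, ?, ?)",
                   (name, config, datetime.now(timezone.utc).isoformat()))

def refresh_certificates(db: sqlite3.Connection, certs: dict[str, str]) -> None:
    """Insert newly discovered certificates and delete outdated ones."""
    now = datetime.now(timezone.utc).isoformat()
    for name, expires in certs.items():
        db.execute("INSERT OR REPLACE INTO certs VALUES (?, ?)", (name, expires))
    db.execute("DELETE FROM certs WHERE expires < ?", (now,))

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE acl_history (acl TEXT, config TEXT, seen TEXT)")
db.execute("CREATE TABLE certs (name TEXT PRIMARY KEY, expires TEXT)")
print(check_resources({"connections": 9_500, "open_files": 1_000}))
record_acls(db, {"edge-acl": "allow 10.0.0.0/8"})
refresh_certificates(db, {"api.example.com": "2020-01-01T00:00:00+00:00"})
```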
In one embodiment of the present invention, a quality trainer and quality calculator collaborate to establish a consistent perceptual quality metric via machine learning. In a training phase, the quality trainer leverages machine intelligence techniques to create a perceptual quality model that combines objective metrics to optimally track a subjective metric assigned during viewings of training videos. Subsequently, the quality calculator applies the perceptual quality model to values for the objective metrics for a target video, thereby generating a perceptual quality score for the target video. In this fashion, the perceptual quality model judiciously fuses the objective metrics for the target video based on the visual feedback processed during the training phase. Since the contribution of each objective metric to the perceptual quality score is determined based on empirical data, the perceptual quality score is a more accurate assessment of observed video quality than conventional objective metrics.
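A minimal sketch of the metric-fusion idea described above, using an ordinary least-squares fit over numpy arrays; the abstract's perceptual quality model is learned with machine-intelligence techniques that need not be linear regression, and the metric values below are invented for illustration.

```python
import numpy as np

# Training data (invented for illustration): rows are training videos, columns
# are objective metric values, and y holds subjective scores collected during
# viewings of those training videos.
X_train = np.array([[0.95, 0.90],
                    [0.80, 0.70],
                    [0.60, 0.55],
                    [0.40, 0.30]])
y_train = np.array([92.0, 75.0, 58.0, 35.0])

# Training phase: fit a simple linear fusion of the objective metrics to the
# subjective metric (a stand-in for the learned perceptual quality model).
A = np.hstack([X_train, np.ones((len(X_train), 1))])  # add intercept term
weights, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def perceptual_quality_score(objective_metrics: np.ndarray) -> float:
    """Apply the trained model to a target video's objective metric values."""
    return float(np.append(objective_metrics, 1.0) @ weights)

# Scoring phase: fuse the objective metrics measured for a target video.
print(round(perceptual_quality_score(np.array([0.85, 0.78])), 1))
```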