Gracenote, Inc.

United States of America


1-92 of 92 patent results for Gracenote, Inc. (World - WIPO)
Date
  • 2023 (1)
  • 2022 (7)
  • 2021 (12)
  • 2020 (32)
  • 2019 (1)
  • …
IPC Class
  • H04N 21/81 - Monomedia components thereof (16)
  • G06F 17/30 - Information retrieval; Database structures therefor (12)
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs (12)
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments (10)
  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware (9)
  • …

1.

RADIO HEAD UNIT WITH DYNAMICALLY UPDATED TUNABLE CHANNEL LISTING

      
Application Number US2022047219
Publication Number 2023/069581
Status In Force
Filing Date 2022-10-20
Publication Date 2023-04-27
Owner GRACENOTE, INC. (USA)
Inventor
  • Jeyachandran, Suresh
  • Fasching, Damon
  • Tanaka, Hidenori
  • Vetriselvi, Kaviarasu
  • Raul, Samit

Abstract

In one aspect, an example method includes (i) encountering, by a media playback device of a vehicle, a trigger to update a list of currently tunable radio stations; (ii) based on encountering the trigger to update the list of currently tunable radio stations, updating, by the media playback device, the list of currently tunable radio stations using a location of the vehicle and radio station contour data stored in a local database of the media playback device; and (iii) displaying, by the media playback device, a station list using the list of currently tunable radio stations.
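
As a rough illustration only (the publication does not disclose implementation details here), the following Python sketch shows one way the location-plus-contour check described above could look, assuming a simplified local database of circular station contours; all names, fields, and radii are hypothetical.

    import math

    # Hypothetical local-database rows: (station_id, contour_center_lat, contour_center_lon, contour_radius_km)
    STATIONS = [
        ("KQED-FM", 37.77, -122.42, 80.0),
        ("KEXP-FM", 47.61, -122.33, 70.0),
    ]

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two latitude/longitude points, in kilometres.
        r = 6371.0
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp = math.radians(lat2 - lat1)
        dl = math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))

    def update_tunable_stations(vehicle_lat, vehicle_lon, stations=STATIONS):
        # Keep only stations whose (simplified, circular) contour contains the vehicle.
        return [sid for sid, lat, lon, radius_km in stations
                if haversine_km(vehicle_lat, vehicle_lon, lat, lon) <= radius_km]

    # On a trigger (e.g. the vehicle has moved far enough), rebuild and display the list:
    print(update_tunable_stations(37.80, -122.27))  # -> ['KQED-FM']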

IPC Classes

  • H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/482 - End-user interface for program selection
  • H04N 21/61 - Network physical structure; Signal processing

2.

SEPARATING MEDIA CONTENT INTO PROGRAM SEGMENTS AND ADVERTISEMENT SEGMENTS

      
Application Number US2022013240
Publication Number 2022/186910
Status In Force
Filing Date 2022-01-21
Publication Date 2022-09-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Hodges, Todd, J.
  • Schmidt, Andreas
  • Gupta, Sharmishtha

Abstract

In one aspect, an example method includes (i) extracting, by a computing system, features from media content; (ii) generating, by the computing system, repetition data for respective portions of the media content using the features, with repetition data for a given portion including a list of other portions of the media content matching the given portion; (iii) determining, by the computing system, transition data for the media content; (iv) selecting, by the computing system, a portion within the media content using the transition data; (v) classifying, by the computing system, the portion as either an advertisement segment or a program segment using repetition data for the portion; and (vi) outputting, by the computing system, data indicating a result of the classifying for the portion.
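
The classification step above turns on how often a portion repeats elsewhere in the catalog. A minimal sketch of that idea, with an assumed threshold and data layout that are not taken from the publication:

    def classify_portion(matching_portions, ad_repetition_threshold=5):
        # Toy classifier: a portion matched by many other portions of the catalog is
        # treated as an advertisement, otherwise as program content.
        # `matching_portions` is the repetition data for one portion: a list of
        # (media_id, start_seconds) tuples for other portions that match it.
        distinct_media = {media_id for media_id, _ in matching_portions}
        return "advertisement" if len(distinct_media) >= ad_repetition_threshold else "program"

    print(classify_portion([("ep1", 10.0), ("ep2", 95.5), ("ep3", 12.0),
                            ("ep4", 300.0), ("ep5", 40.0)]))  # -> advertisement
    print(classify_portion([("ep1", 10.0)]))                  # -> program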

IPC Classes

  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream

3.

METHODS AND APPARATUS TO FINGERPRINT AN AUDIO SIGNAL

      
Application Number US2022015442
Publication Number 2022/186952
Status In Force
Filing Date 2022-02-07
Publication Date 2022-09-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Topchy, Alexander
  • Nielsen, Christen V.
  • Davis, Jeremey M.

Abstract

Methods, apparatus, systems, and articles of manufacture to fingerprint an audio signal are disclosed. An example apparatus disclosed herein includes an audio segmenter to divide an audio signal into a plurality of audio segments, a bin normalizer to normalize a first audio segment of the plurality to thereby create a first normalized audio segment, a subfingerprint generator to generate a first subfingerprint from the first normalized audio segment, the first subfingerprint including a first portion corresponding to a location of an energy extremum in the first normalized audio segment, a portion strength evaluator to determine a likelihood of the first portion to change, and a portion replacer to, in response to determining the likelihood does not satisfy a threshold, replace the first portion with a second portion to thereby generate a second subfingerprint.

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 19/025 - Detection of transients or attacks for time/frequency resolution switching

4.

IDENTIFYING AND LABELING SEGMENTS WITHIN VIDEO CONTENT

      
Application Number US2022013239
Publication Number 2022/177693
Status In Force
Filing Date 2022-01-21
Publication Date 2022-08-25
Owner GRACENOTE, INC. (USA)
Inventor
  • Garg, Amanmeet
  • Gupta, Sharmishtha
  • Schmidt, Andreas
  • Balasuriya, Lakshika
  • Vartakavi, Aneesh

Abstract

In one aspect, an example method includes (i) obtaining fingerprint repetition data for a portion of video content, with the fingerprint repetition data including a list of other portions of video content matching the portion of video content and respective reference identifiers for the other portions of video content; (ii) identifying the portion of video content as a program segment rather than an advertisement segment based at least on a number of unique reference identifiers within the list of other portions of video content relative to a total number of reference identifiers within the list of other portions of video content; (iii) determining that the portion of video content corresponds to a program specified in an electronic program guide using a timestamp of the portion of video content; and (iv) storing an indication of the portion of video content in a data file for the program.
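
The key signal above is the ratio of unique reference identifiers to total reference identifiers in the match list. The sketch below illustrates one plausible reading (matches concentrated in a few references look like re-aired program content, matches spread over many distinct references look like an advertisement); the direction of the test and the threshold are assumptions, not taken from the publication.

    def is_program_segment(reference_ids, uniqueness_threshold=0.5):
        # reference_ids: one reference identifier per matching portion in the
        # fingerprint repetition data for the portion under test.
        if not reference_ids:
            return True  # no repetition evidence; default to program content
        ratio = len(set(reference_ids)) / len(reference_ids)
        return ratio < uniqueness_threshold

    print(is_program_segment(["show_a"] * 8 + ["show_b"] * 2))        # True  (program segment)
    print(is_program_segment(["slot_%d" % i for i in range(10)]))     # False (advertisement segment)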

IPC Classes

  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/8547 - Content authoring involving timestamps for synchronizing content
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream

5.

AUDIO CONTENT RECOGNITION METHOD AND SYSTEM

      
Application Number US2021063300
Publication Number 2022/146674
Status In Force
Filing Date 2021-12-14
Publication Date 2022-07-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Berrian, Alexander
  • Hodges, Todd
  • Coover, Robert
  • Wilkinson, Matthew
  • Rafii, Zafar

Abstract

A method implemented by a computing system comprises generating, by the computing system, a fingerprint comprising a plurality of bin samples associated with audio content. Each bin sample is specified within a frame of the fingerprint and is associated with one of a plurality of non-overlapping frequency ranges and a value indicative of a magnitude of energy associated with a corresponding frequency range. The computing system removes, from the fingerprint, a plurality of bin samples associated with a frequency sweep in the audio content.

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

6.

COVER SONG IDENTIFICATION METHOD AND SYSTEM

      
Application Number US2021063318
Publication Number 2022/146677
Status In Force
Filing Date 2021-12-14
Publication Date 2022-07-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Liu, Xiaochen
  • Renner, Joseph
  • Morris, Joshua
  • Hodges, Todd
  • Coover, Robert
  • Rafii, Zafar

Abstract

A cover song identification method implemented by a computing system comprises receiving, by a computing system and from a user device, harmonic pitch class profile (HPCP) information that specifies one or more HPCP features associated with target audio content. A major chord profile feature and a minor chord profile feature associated with the target audio content are derived from the HPCP features. Machine learning logic of the computing system determines, based on the major chord profile feature and the minor chord profile feature, a relatedness between the target audio content and each of a plurality of audio content items specified in records of a database. Each audio content item is associated with cover song information. Cover song information associated with an audio content item having a highest relatedness to the target audio content is communicated to the user device.
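
A minimal sketch of the chord-profile derivation and relatedness comparison described above, using triad templates over a 12-bin HPCP vector and cosine similarity as a stand-in for the learned relatedness score; the actual model and feature definitions are not specified here.

    import numpy as np

    def chord_profiles(hpcp):
        # Correlate a 12-bin HPCP vector with major and minor triad templates rooted
        # on each pitch class, giving 12-dimensional major/minor chord profiles.
        hpcp = np.asarray(hpcp, dtype=float)
        major, minor = np.zeros(12), np.zeros(12)
        for root in range(12):
            major[root] = hpcp[root] + hpcp[(root + 4) % 12] + hpcp[(root + 7) % 12]
            minor[root] = hpcp[root] + hpcp[(root + 3) % 12] + hpcp[(root + 7) % 12]
        return major, minor

    def relatedness(profile_a, profile_b):
        # Cosine similarity as an illustrative stand-in for the ML relatedness score.
        a, b = np.concatenate(profile_a), np.concatenate(profile_b)
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    target = chord_profiles(np.random.rand(12))
    reference_items = {"song_1": chord_profiles(np.random.rand(12)),
                       "song_2": chord_profiles(np.random.rand(12))}
    best = max(reference_items, key=lambda k: relatedness(target, reference_items[k]))
    print("closest cover-song candidate:", best)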

IPC Classes

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/632 - Query formulation
  • G06F 16/638 - Presentation of query results
  • G06F 16/61 - Indexing; Data structures therefor; Storage structures
  • G06Q 50/18 - Legal services; Handling legal documents
  • G06N 20/00 - Machine learning

7.

SYSTEM AND METHOD FOR PODCAST REPETITIVE CONTENT DETECTION

      
Application Number US2021050311
Publication Number 2022/076137
Status In Force
Filing Date 2021-09-14
Publication Date 2022-04-14
Owner GRACENOTE, INC. (USA)
Inventor
  • Garg, Amanmeet
  • Vartakavi, Aneesh

Abstract

In one aspect, a method includes detecting a fingerprint match between query fingerprint data representing at least one audio segment within podcast content and reference fingerprint data representing known repetitive content within other podcast content, detecting a feature match between a set of audio features across multiple time-windows of the podcast content, and detecting a text match between at least one query text sentences from a transcript of the podcast content and reference text sentences, the reference text sentences comprising text sentences from the known repetitive content within the other podcast content. The method also includes responsive to the detections, generating sets of labels identifying potential repetitive content within the podcast content. The method also includes selecting, from the sets of labels, a consolidated set of labels identifying segments of repetitive content within the podcast content, and responsive to selecting the consolidated set of labels, performing an action.

IPC Classes

  • G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
  • G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
  • G10L 25/90 - Pitch determination of speech signals
  • G10L 17/14 - Use of phonemic categorisation or speech recognition prior to speaker recognition or verification
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

8.

SYSTEM AND METHOD FOR MULTI-MODAL PODCAST SUMMARIZATION

      
Application Number US2021041010
Publication Number 2022/015585
Status In Force
Filing Date 2021-07-09
Publication Date 2022-01-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Garg, Amanmeet
  • Vartakavi, Aneesh
  • Morris, Joshua

Abstract

In one aspect, a method includes receiving podcast content, generating a transcript of at least a portion of the podcast content, and parsing the podcast content to (i) identify audio segments within the podcast content, (ii) determine classifications for the audio segments, (iii) identify audio segment offsets, and (iv) identify sentence offsets. The method also includes based on the audio segments, the classifications, the audio segment offsets, and the sentence offsets, dividing the generated transcript into text sentences and, from among the text sentences of the divided transcript, selecting a group of text sentences for use in generating an audio summary of the podcast content. The method also includes based on timestamps at which the group of text sentences begin in the podcast content, combining portions of audio in the podcast content that correspond to the group of text sentences to generate an audio file representing the audio summary.
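
The final assembly step, combining the audio that corresponds to the selected sentences into a summary, can be pictured with this short sketch; the sentence timestamps, sample rate, and audio handling are illustrative assumptions.

    import numpy as np

    def build_audio_summary(audio, sample_rate, selected_sentences):
        # audio: 1-D array of PCM samples for the whole podcast.
        # selected_sentences: list of (start_seconds, end_seconds) for the chosen sentences.
        pieces = []
        for start_s, end_s in selected_sentences:
            a, b = int(start_s * sample_rate), int(end_s * sample_rate)
            pieces.append(audio[a:b])
        return np.concatenate(pieces) if pieces else np.empty(0, dtype=audio.dtype)

    sr = 16_000
    podcast = np.random.randn(60 * sr).astype(np.float32)   # one minute of stand-in audio
    summary = build_audio_summary(podcast, sr, [(2.0, 5.5), (20.0, 24.0), (41.0, 45.0)])
    print(summary.shape)  # (184000,) -> 11.5 seconds of summary audio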

IPC Classes

  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
  • G10L 15/26 - Speech to text systems
  • G10L 25/78 - Detection of presence or absence of voice signals
  • G06F 16/65 - Clustering; Classification

9.

TRANSITION DETECTOR NEURAL NETWORK

      
Application Number US2021026651
Publication Number 2021/207648
Status In Force
Filing Date 2021-04-09
Publication Date 2021-10-14
Owner GRACENOTE, INC. (USA)
Inventor
  • Renner, Joseph
  • Vartakavi, Aneesh
  • Coover, Robert

Abstract

In one aspect, an example method includes (i) extracting a sequence of audio features from a portion of a sequence of media content; (ii) extracting a sequence of video features from the portion of the sequence of media content; (iii) providing the sequence of audio features and the sequence of video features as an input to a transition detector neural network that is configured to classify whether or not a given input includes a transition between different content segments; (iv) obtaining from the transition detector neural network classification data corresponding to the input; (v) determining that the classification data is indicative of a transition between different content segments; and (vi) based on determining that the classification data is indicative of a transition between different content segments, outputting transition data indicating that the portion of the sequence of media content includes a transition between different content segments.
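
A minimal sketch of the interface such a transition detector could expose, written with PyTorch; the GRU-over-concatenated-features architecture and the dimensions below are assumptions, not the network disclosed in the publication.

    import torch
    import torch.nn as nn

    class TransitionDetector(nn.Module):
        # Consumes a window of audio features and a window of video features and emits
        # the probability that the window contains a transition between content segments.
        def __init__(self, audio_dim=64, video_dim=128, hidden=32):
            super().__init__()
            self.rnn = nn.GRU(audio_dim + video_dim, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, audio_feats, video_feats):
            # audio_feats: (batch, time, audio_dim); video_feats: (batch, time, video_dim)
            x = torch.cat([audio_feats, video_feats], dim=-1)
            _, last_hidden = self.rnn(x)
            return torch.sigmoid(self.head(last_hidden[-1]))

    detector = TransitionDetector()
    p = detector(torch.randn(1, 50, 64), torch.randn(1, 50, 128))
    print("transition probability:", float(p))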

IPC Classes

  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology

10.

KEYFRAME EXTRACTOR

      
Application Number US2021026276
Publication Number 2021/207426
Status In Force
Filing Date 2021-04-07
Publication Date 2021-10-14
Owner GRACENOTE, INC. (USA)
Inventor Schmidt, Andreas

Abstract

In one aspect, an example method includes (i) determining a blur delta that quantifies a difference between a level of blurriness of a first frame of a video and a level of blurriness of a second frame of the video, wherein the second frame is subsequent to and adjacent to the first frame; (ii) determining a contrast delta that quantifies a difference between a contrast of the first frame and a contrast of the second frame; (iii) determining a fingerprint distance between a first image fingerprint of the first frame and a second image fingerprint of the second frame; (iv) determining a keyframe score using the blur delta, the contrast delta, and the fingerprint distance; (v) based on the keyframe score, determining that the second frame is a keyframe; and (vi) outputting data indicating that the second frame is a keyframe.
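
The keyframe decision combines three per-frame-pair quantities into one score. A toy sketch follows; the weighted sum, the weights, and the threshold are all assumptions for illustration, since the abstract only says the score is determined "using" the three quantities.

    def keyframe_score(blur_delta, contrast_delta, fingerprint_distance,
                       weights=(0.3, 0.3, 0.4)):
        # Illustrative weighted combination of the three deltas.
        w_blur, w_contrast, w_fp = weights
        return w_blur * blur_delta + w_contrast * contrast_delta + w_fp * fingerprint_distance

    def is_keyframe(blur_delta, contrast_delta, fingerprint_distance, threshold=0.5):
        return keyframe_score(blur_delta, contrast_delta, fingerprint_distance) >= threshold

    print(is_keyframe(0.1, 0.05, 0.2))  # False: adjacent frames are similar
    print(is_keyframe(0.8, 0.6, 0.9))   # True: abrupt visual change, likely keyframe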

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/46 - Extraction of features or characteristics of the image
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • G06T 7/37 - Determination of transform parameters for the alignment of images, i.e. image registration using transform domain methods

11.

SELECTION OF VIDEO FRAMES USING A MACHINE LEARNING PREDICTOR

      
Application Number US2020057303
Publication Number 2021/150282
Status In Force
Filing Date 2020-10-26
Publication Date 2021-07-29
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Christensen, Casper, Lützhøft

Abstract

Example systems and methods of selection of video frames using a machine learning (ML) predictor program are disclosed. The ML predictor program may generate predicted cropping boundaries for any given input image. Training raw images, each associated with a respective set of training master images indicative of cropping characteristics for the raw image, may be input to the ML predictor, and the ML predictor program may be trained to predict cropping boundaries for a raw image based on the expected cropping boundaries associated with its training master images. At runtime, the trained ML predictor program may be applied to a sequence of video image frames to determine, for each respective video image frame, a respective score corresponding to a highest statistical confidence associated with one or more subsets of cropping boundaries predicted for that video image frame. Information indicative of the video image frame having the highest score may be stored or recorded.

IPC Classes

12.

AUTOMATED CROPPING OF IMAGES USING A MACHINE LEARNING PREDICTOR

      
Application Number US2020057322
Publication Number 2021/150284
Status In Force
Filing Date 2020-10-26
Publication Date 2021-07-29
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Christensen, Casper Lützhøft

Abstract

Example systems and methods of automated cropping of images using a machine learning (ML) predictor program are disclosed. The ML predictor program may generate predicted cropping boundaries for any given input image. Training raw images, each associated with a respective set of training master images indicative of cropping characteristics for the raw image, may be input to the ML predictor, and the ML predictor program may be trained to predict cropping boundaries for a raw image based on the expected cropping boundaries associated with its training master images. At runtime, the trained ML predictor program may be applied to runtime raw images in order to generate respective sets of runtime cropping boundaries corresponding to different cropped versions of each runtime raw image. The runtime raw images may be stored with information indicative of the respective sets of runtime cropping boundaries.

IPC Classes

13.

METHODS AND APPARATUS TO GENERATE RECOMMENDATIONS BASED ON ATTRIBUTE VECTORS

      
Application Number US2020060846
Publication Number 2021/108166
Status In Force
Filing Date 2020-11-17
Publication Date 2021-06-03
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Rancel Gil, Carmen Yaiza
  • Gopakumar, Anjana
  • Cramer, Jason Timothy

Abstract

Methods and apparatus are disclosed to generate a recommendation, including an attribute vector aggregator to form a resultant attribute vector based on an input set of attribute vectors, the set of attribute vectors containing at least one of a media attribute vector, an attendee attribute vector, an artist attribute vector, an event attribute vector, or a venue attribute vector, and a recommendation generator, the recommendation generator including: a vector comparator to perform a comparison between an input attribute vector and other attribute vectors and a recommendation compiler to create one or more recommendations of at least one of media, an artist, an event, or a venue based on the comparison.

IPC Classes

  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
  • G06Q 50/10 - Services

14.

METHODS AND APPARATUS TO FINGERPRINT AN AUDIO SIGNAL VIA EXPONENTIAL NORMALIZATION

      
Application Number US2020061077
Publication Number 2021/108186
Status In Force
Filing Date 2020-11-18
Publication Date 2021-06-03
Owner GRACENOTE, INC. (USA)
Inventor
  • Berrian, Alexander
  • Wilkinson, Matthew James
  • Coover, Robert

Abstract

Methods, apparatus, systems, and articles of manufacture are disclosed to fingerprint an audio signal via exponential normalization. An example apparatus includes an audio segmenter to divide an audio signal into a plurality of audio segments including a first audio segment and a second audio segment, the first audio segment including a first time-frequency bin, the second audio segment including a second time-frequency bin, a mean calculator to determine a first exponential mean value associated with the first time frequency bin based on a first magnitude of the audio signal associated with the first time frequency bin and a second exponential mean value associated with the second time frequency bin based on a second magnitude of the audio signal associated with the second time frequency bin and the first exponential mean value. The example apparatus further includes a bin normalizer to normalize the first time-frequency bin based on the second exponential mean value and a fingerprint generator to generate a fingerprint of the audio signal based on the normalized first time-frequency bins.
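
A rough sketch of the exponential (running-mean) normalization named in the title, applied across the time frames of a magnitude spectrogram; the smoothing factor and the final peak-picking step are assumptions made for illustration.

    import numpy as np

    def exponential_normalize(spectrogram, alpha=0.9):
        # spectrogram: (num_frames, num_bins) magnitudes. Each frame is normalized by
        # the exponential mean accumulated over the preceding frames, and the mean is
        # then updated with the current frame.
        mean = np.maximum(spectrogram[0].astype(float), 1e-12)
        normalized = np.empty_like(spectrogram, dtype=float)
        for t, frame in enumerate(spectrogram):
            normalized[t] = frame / mean                      # normalize by the mean so far
            mean = np.maximum(alpha * mean + (1.0 - alpha) * frame, 1e-12)  # fold frame in
        return normalized

    spec = np.abs(np.random.randn(100, 64))
    norm = exponential_normalize(spec)
    # A fingerprint could then be built from, e.g., the peak bin per normalized frame:
    fingerprint = norm.argmax(axis=1)
    print(fingerprint[:10])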

IPC Classes

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 19/025 - Detection of transients or attacks for time/frequency resolution switching

15.

METHODS AND APPARATUS TO CONTROL LIGHTING EFFECTS

      
Application Number US2020061263
Publication Number 2021/108212
Status In Force
Filing Date 2020-11-19
Publication Date 2021-06-03
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus Kurt
  • Coover, Robert
  • Rafii, Zafar
  • Vartakavi, Aneesh
  • Schmidt, Andreas
  • Hodges, Todd

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to adjust device control information. The example apparatus comprises a light drive waveform generator to obtain metadata corresponding to media and generate device control information based on the metadata, the device control information to inform a lighting device to enable consecutive light pulses; an effect engine to apply an attack parameter and a decay parameter to consecutive light pulses corresponding to the device control information, the attack parameter and the decay parameter based on the metadata to affect a shape of the consecutive light pulses; and a color timeline generator to generate color information based on the metadata, the color information to inform the lighting device to change a color state.

IPC Classes

  • H05B 47/10 - Controlling the light source
  • H05B 47/19 - Controlling the light source by remote control via wireless transmission
  • G06Q 50/10 - Services
  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware

16.

METHODS AND APPARATUS FOR AUDIO EQUALIZATION BASED ON VARIANT SELECTION

      
Application Number US2020062360
Publication Number 2021/108664
Status In Force
Filing Date 2020-11-25
Publication Date 2021-06-03
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Renner, Joseph
  • Summers, Cameron A.

Abstract

Methods, apparatus, systems and articles of manufacture for audio equalization based on variant selection are disclosed. An example apparatus includes a processor to obtain training data, the training data including a plurality of reference audio signals each associated with a variant of music, and to organize the training data into a plurality of entries based on the plurality of reference audio signals; a training model executor to execute a neural network model using the training data; and a model trainer to train the neural network model by updating at least one weight corresponding to one of the entries in the training data when the neural network model does not satisfy a training threshold.

IPC Classes

  • H04R 3/04 - Circuits for transducers for correcting frequency response
  • B60K 35/00 - Arrangement or adaptations of instruments
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus
  • G06F 9/54 - Interprogram communication
  • G06N 3/08 - Learning methods
  • G06N 20/00 - Machine learning

17.

OBTAINING ARTIST IMAGERY FROM VIDEO CONTENT USING FACIAL RECOGNITION

      
Application Number US2020051414
Publication Number 2021/061511
Status In Force
Filing Date 2020-09-18
Publication Date 2021-04-01
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Vartakavi, Aneesh

Abstract

An example method may include applying an automated face detection program implemented on a computing device to a plurality of training digital images associated with a particular TV program to identify a sub-plurality of the training digital images, each containing a single face of a particular person associated with the particular TV program. A set of feature vectors determined for the sub-plurality may be used to train a computational model of a face recognition program for recognizing the particular person in any given digital image. The face recognition program and the computational model may be applied to a runtime digital image associated with the particular TV program to recognize the particular person in the runtime digital image, together with geometric coordinates. The runtime digital image may be stored together with information identifying the particular person and corresponding geometric coordinates of the particular person in the runtime digital image.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 20/00 - Machine learning

18.

METHODS AND APPARATUS TO IDENTIFY MEDIA

      
Application Number US2020049463
Publication Number 2021/046392
Status In Force
Filing Date 2020-09-04
Publication Date 2021-03-11
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Hong, Yongju

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to identify media. An example method includes: in response to a query, generating an adjusted sample media fingerprint by applying an adjustment to a sample media fingerprint; comparing the adjusted sample media fingerprint to a reference media fingerprint; and in response to the adjusted sample media fingerprint matching the reference media fingerprint, transmitting information associated with the reference media fingerprint and the adjustment.

IPC Classes

  • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
  • H04N 21/233 - Processing of audio elementary streams
  • H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

19.

METHOD AND SYSTEM FOR USE OF NETWORK AFFILIATION AS BASIS TO DETERMINE CHANNEL RENDERED BY CONTENT PRESENTATION DEVICE

      
Application Number US2020042742
Publication Number 2021/016168
Status In Force
Filing Date 2020-07-20
Publication Date 2021-01-28
Owner GRACENOTE, INC. (USA)
Inventor
  • Sunku, Raghavendra
  • Lee, Jaehyung
  • Debelair, Virginie
  • Dunker, Peter

Abstract

A computing system detects a channel multi-match with non-matching programs, based on fingerprint-based ACR analysis of digital fingerprint data representing a channel rendered by a content presentation device. The system then responsively determines a channel rendered by the device through a process including (a) determining that channels of the multi-match group are all affiliate channels of the same network as each other and (b) determining, as the channel, which affiliate channel of that network serves a location of the content presentation device. The system then uses the determined channel as a basis for carrying out of at least one channel-specific operation, such as recording audience-measurement data and/or invoking dynamic content modification.
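
A toy sketch of the affiliation-based disambiguation step described above, with a hypothetical channel-metadata layout; the channel call signs and market codes are illustrative only.

    # Hypothetical channel metadata: channel -> (network, set of market codes served)
    CHANNELS = {
        "WABC": ("ABC", {"new_york"}),
        "KABC": ("ABC", {"los_angeles"}),
        "KGO":  ("ABC", {"san_francisco"}),
    }

    def disambiguate_by_affiliation(multi_match, device_market):
        # If every channel in the multi-match group belongs to the same network,
        # pick the affiliate whose service area covers the device's location.
        networks = {CHANNELS[ch][0] for ch in multi_match}
        if len(networks) != 1:
            return None  # not a single-network multi-match; fall back to other methods
        for ch in multi_match:
            if device_market in CHANNELS[ch][1]:
                return ch
        return None

    print(disambiguate_by_affiliation(["WABC", "KABC", "KGO"], "san_francisco"))  # -> KGO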

IPC Classes

  • H04N 21/2668 - Creating a channel for a dedicated end-user group, e.g. by inserting targeted commercials into a video stream based on end-user profiles
  • H04N 21/458 - Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04H 60/37 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID
  • H04H 60/33 - Arrangements for monitoring the users' behaviour or opinions

20.

METHOD AND SYSTEM FOR USE OF EARLIER AND/OR LATER SINGLE-MATCH AS BASIS TO DISAMBIGUATE CHANNEL MULTI-MATCH WITH NON-MATCHING PROGRAMS

      
Application Number US2020042743
Publication Number 2021/016169
Status In Force
Filing Date 2020-07-20
Publication Date 2021-01-28
Owner GRACENOTE, INC. (USA)
Inventor
  • Sunku, Raghavendra
  • Lee, Jaehyung
  • Debelair, Virginie
  • Dunker, Peter

Abstract

A computing system detects a channel multi-match with non-matching programs, based on fingerprint-based ACR analysis of digital fingerprint data representing a channel rendered by a content presentation device. The system then responsively performs disambiguation based at least in part on detecting an earlier single-channel match and/or a later single-channel match, the disambiguation establishing that the channel rendered by the content presentation device is the single known channel. And based on the disambiguation, the system then uses the single known channel as a basis for carrying out of at least one channel-specific operation, such as recording audience-measurement data and/or invoking dynamic content modification.

IPC Classes

  • H04N 21/2668 - Creating a channel for a dedicated end-user group, e.g. by inserting targeted commercials into a video stream based on end-user profiles
  • H04N 21/458 - Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04N 21/81 - Monomedia components thereof
  • H04H 60/37 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying segments of broadcast information, e.g. scenes or extracting programme ID

21.

METHODS AND APPARATUS TO IMPROVE DETECTION OF AUDIO SIGNATURES

      
Application Number US2020038114
Publication Number 2020/263649
Status In Force
Filing Date 2020-06-17
Publication Date 2020-12-30
Owner GRACENOTE, INC. (USA)
Inventor Rafii, Zafar

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to improve detection of audio signatures. An example apparatus includes a TDOA determiner to determine a first time difference of arrival for a first audio sensor of a meter and a second audio sensor of the meter, and a second time difference of arrival for the first audio sensor and a third audio sensor of the meter, a TDOA matcher to determine a match by comparing the first time difference of arrival to a first virtual source time difference of arrival and a second virtual source time difference of arrival, responsive to determining that the first time difference of arrival matches the first virtual source time difference of arrival, identify a first virtual source location as the location of a media presentation device, and remove an audio recording of the second audio sensor to reduce a computational burden on the processor.
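
A minimal sketch of matching measured time differences of arrival (TDOAs) against precomputed virtual-source TDOAs, the core comparison described above; the geometry, numeric values, and tolerance are illustrative assumptions.

    def match_virtual_source(measured_tdoas, virtual_sources, tolerance_s=1e-4):
        # measured_tdoas: (tdoa_sensor1_sensor2, tdoa_sensor1_sensor3) in seconds.
        # virtual_sources: mapping of candidate location -> expected TDOA pair.
        # Returns the first candidate whose expected TDOAs match within tolerance.
        for location, expected in virtual_sources.items():
            if all(abs(m - e) <= tolerance_s for m, e in zip(measured_tdoas, expected)):
                return location
        return None

    virtual_sources = {
        "tv_wall":    (0.00021, -0.00013),
        "side_table": (-0.00008, 0.00030),
    }
    print(match_virtual_source((0.000205, -0.000128), virtual_sources))  # -> tv_wall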

IPC Classes

  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware

22.

METHODS AND SYSTEMS FOR SCOREBOARD TEXT REGION DETECTION

      
Application Number US2020014873
Publication Number 2020/154555
Status In Force
Filing Date 2020-01-24
Publication Date 2020-07-30
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho

Abstract

A computing system automatically detects, within a digital video frame, a video frame region that depicts a textual expression of a scoreboard. The computing system (a) engages in an edge-detection process to detect edges of at least scoreboard image elements depicted by the digital video frame, with at least some of these edges being of the textual expression and defining alphanumeric shapes; (b) applies pattern-recognition to identify the alphanumeric shapes; (c) establishes a plurality of minimum bounding rectangles each bounding a respective one of the identified alphanumeric shapes; (d) establishes, based on at least two of the minimum bounding rectangles, a composite shape that encompasses the identified alphanumeric shapes that were bounded by the at least two minimum bounding rectangles; and (e) based on the composite shape occupying a particular region, deems the particular region to be the video frame region that depicts the textual expression.
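
Step (d) of the process, establishing a composite shape from per-character minimum bounding rectangles, can be sketched as a simple rectangle union; the coordinates below are illustrative.

    def composite_region(bounding_rects):
        # Union of per-character minimum bounding rectangles, each given as
        # (x, y, width, height); the result is the region deemed to depict the
        # scoreboard's textual expression.
        if not bounding_rects:
            return None
        x0 = min(x for x, y, w, h in bounding_rects)
        y0 = min(y for x, y, w, h in bounding_rects)
        x1 = max(x + w for x, y, w, h in bounding_rects)
        y1 = max(y + h for x, y, w, h in bounding_rects)
        return (x0, y0, x1 - x0, y1 - y0)

    # Rectangles around the recognized characters of "24:17"
    print(composite_region([(100, 40, 12, 18), (114, 40, 12, 18),
                            (128, 40, 6, 18), (136, 40, 12, 18), (150, 40, 12, 18)]))
    # -> (100, 40, 62, 18)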

IPC Classes

  • H04N 21/81 - Monomedia components thereof
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
  • H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for selecting a ROI [Region Of Interest], e.g. for requesting a higher resolution version of a selected region
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/32 - Aligning or centering of the image pick-up or image-field

23.

METHODS AND SYSTEMS FOR EXTRACTING SPORT-RELATED INFORMATION FROM DIGITAL VIDEO FRAMES

      
Application Number US2020014874
Publication Number 2020/154556
Status In Force
Filing Date 2020-01-24
Publication Date 2020-07-30
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho

Abstract

A computing system automatically extracts, from a digital video frame, scoreboard information including a first team name, a second team name, a first score, and a second score. The computing system (a) detects, within the digital video frame, a plurality of frame regions based on each detected frame region depicting text; (b) selects, from the detected frame regions, a set of frame regions based on the frame regions of the selected set cooperatively having a geometric arrangement that corresponds with a candidate geometric arrangement of the scoreboard information; (c) recognizes characters respectively within each of the frame regions of the selected set of frame regions; (d) based at least on the recognized characters in the frame regions of the selected set, detects the scoreboard information; and (e) records the detected scoreboard information.

IPC Classes

  • H04N 21/81 - Monomedia components thereof
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for selecting a ROI [Region Of Interest], e.g. for requesting a higher resolution version of a selected region
  • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
  • G06K 9/32 - Aligning or centering of the image pick-up or image-field
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

24.

METHODS AND SYSTEMS FOR DETERMINING ACCURACY OF SPORT-RELATED INFORMATION EXTRACTED FROM DIGITAL VIDEO FRAMES

      
Application Number US2020014875
Publication Number 2020/154557
Status In Force
Filing Date 2020-01-24
Publication Date 2020-07-30
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho

Abstract

A computing system determines accuracy of sport-related information extracted from a time sequence of digital video frames that represent a sport event, the extracted sport-related information including an attribute that changes over the time sequence. The computing system (a) detects, based on the extracted sport-related information, a pattern of change of the attribute over the time sequence and (b) makes a determination of whether the detected pattern is an expected pattern of change associated with the sport event. If the determination is that the detected pattern is the expected pattern, then, responsive to making the determination, the computing system takes a first action that corresponds to the sport-related information being accurate. Whereas, if the determination is that the detected pattern is not the expected pattern, then, responsive to making the determination, the computing system takes a second action that corresponds to the sport-related information being inaccurate.
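
One concrete instance of an "expected pattern of change" is a game clock that should only count down from frame to frame. The sketch below uses that as an assumed example of the accuracy check and of the two resulting actions; the attribute choice and action names are illustrative.

    def pattern_is_expected(clock_values_seconds):
        # A game clock extracted from successive frames should be non-increasing;
        # any increase suggests an OCR/extraction error.
        return all(b <= a for a, b in zip(clock_values_seconds, clock_values_seconds[1:]))

    def act_on_extraction(clock_values_seconds):
        if pattern_is_expected(clock_values_seconds):
            return "record sport data"          # first action: data deemed accurate
        return "flag frames for re-extraction"  # second action: data deemed inaccurate

    print(act_on_extraction([720, 719, 718, 717]))  # record sport data
    print(act_on_extraction([720, 719, 748, 717]))  # flag frames for re-extraction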

IPC Classes

  • H04N 21/81 - Monomedia components thereof
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
  • H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for selecting a ROI [Region Of Interest], e.g. for requesting a higher resolution version of a selected region
  • G06K 9/32 - Aligning or centering of the image pick-up or image-field
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

25.

METHODS AND SYSTEMS FOR SPORT DATA EXTRACTION

      
Application Number US2020014871
Publication Number 2020/154553
Status In Force
Filing Date 2020-01-24
Publication Date 2020-07-30
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho
  • Cremer, Markus Kurt Peter

Abstract

A computing system engages in digital image processing of received video frames to generate sport data that indicates a score and/or a time associated with a sport event. The digital image processing includes: (i) identifying a first frame region of the video frames based on the first frame region depicting a scoreboard; (ii) executing a first procedure that analyzes the identified first frame region to detect, within the identified first frame region, second frame region(s) based on the second frame region(s) depicting text of the scoreboard; (iii) in response to detecting the second frame region(s), executing a second procedure to recognize the text in at least one of the second frame region(s); and (iv) based at least on the recognizing of the text, generating the sport data. In response to completing the digital image processing, the computing system then carries out an action based on the generated sport data.

IPC Classes

  • H04N 21/81 - Monomedia components thereof
  • H04N 21/4728 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content for selecting a ROI [Region Of Interest], e.g. for requesting a higher resolution version of a selected region
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/32 - Aligning or centering of the image pick-up or image-field

26.

METHODS AND SYSTEMS FOR SCOREBOARD REGION DETECTION

      
Application Number US2020014872
Publication Number 2020/154554
Status In Force
Filing Date 2020-01-24
Publication Date 2020-07-30
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho

Abstract

A computing system automatically detects, in a sequence of video frames, a video frame region that depicts a scoreboard. The video frames of the sequence depict image elements including (i) scoreboard image elements that are unchanging across the video frames of the sequence and (ii) other image elements that change across the video frames of the sequence. Given this, the computing system (a) receives the sequence, (b) engages in an edge-detection process to detect, in the video frames of the sequence, a set of edges of the depicted image elements, (c) identifies a subset of the detected set of edges based on each edge of the subset being unchanging across the video frames of the sequence, and (d) detects, based on the edges of the identified subset, the video frame region that depicts the scoreboard.

IPC Classes

27.

GENERATION OF MEDIA STATION PREVIEWS USING A SECONDARY TUNER

      
Application Number US2019068932
Publication Number 2020/142426
Status In Force
Filing Date 2019-12-30
Publication Date 2020-07-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Qin, John M.
  • Jeyachandran, Suresh
  • Fasching, Damon P.

Abstract

In one aspect, an example method includes (i) while a media playback device of a vehicle is playing back content received on a first channel, generating, by the media playback device, a query fingerprint using second content received on a second channel; (ii) sending, by the media playback device, the query fingerprint to a server that maintains a reference database containing a plurality of reference fingerprints; (iii) receiving, by the media playback device from the server, identifying information corresponding to a reference fingerprint of the plurality of reference fingerprints that matches the query fingerprint; and (iv) while the media playback device is playing back the first content received on the first channel, providing, by the media playback device for display, at least a portion of the identifying information.

IPC Classes

  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/426 - Internal components of the client
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/482 - End-user interface for program selection
  • H04N 21/8549 - Creating video summaries, e.g. movie trailer
  • H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests

28.

GENERATION OF MEDIA STATION PREVIEWS USING A REFERENCE DATABASE

      
Application Number US2019068521
Publication Number 2020/142339
Status In Force
Filing Date 2019-12-26
Publication Date 2020-07-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Fearn, Pat, D.
  • Jeyachandran, Suresh
  • Fasching, Damon, P.
  • Sherman, Mark, W.

Abstract

In one aspect, an example method includes (i) while a media playback device of a vehicle is playing back content received on a first channel, sending, by the media playback device to a server, a preview request, the preview request identifying a second channel that is different from the first channel; (ii) receiving, by the media playback device from the server, a response to the preview request, the response including identifying information corresponding to content being provided on the second channel; and (iii) while the media playback device is playing back the content received on the first channel, providing, by the media playback device for display, at least a portion of the identifying information corresponding to content being provided on the second channel.

IPC Classes

  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/426 - Internal components of the client
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 
  • H04N 21/482 - End-user interface for program selection
  • H04N 21/858 - Linking data to content, e.g. by linking an URL to a video object or by creating a hotspot
  • H04N 21/239 - Interfacing the upstream path of the transmission network, e.g. prioritizing client requests

29.

DETECTION OF MEDIA PLAYBACK LOUDNESS LEVEL AND CORRESPONDING ADJUSTMENT TO AUDIO DURING MEDIA REPLACEMENT EVENT

      
Application Number US2019059882
Publication Number 2020/101951
Status In Force
Filing Date 2019-11-05
Publication Date 2020-05-22
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus, K.
  • Merchant, Shashank
  • Vartakavi, Aneesh

Abstract

In one aspect, an example method includes (i) presenting first media content from a first source; (ii) encountering a trigger to switch from presenting the first media content from the first source to presenting second media content from a second source; (iii) determining a first loudness level of the first media content; (iv) determining a second loudness level of the second media content; (v) based on a difference between the first loudness level and the second loudness level, adjusting a loudness level of the second media content so as to generate modified media content having a third loudness level that is different from the second loudness level; and (vi) responsive to encountering the trigger, presenting the modified media content having the third loudness level.
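
The loudness-matching step can be pictured as computing a gain from the difference between the two measured loudness levels and applying it to the replacement audio; the LUFS-style figures and the plain PCM scaling below are illustrative assumptions.

    def replacement_gain_db(first_loudness_lufs, second_loudness_lufs):
        # Gain (in dB) to apply to the replacement (second) content so that its
        # loudness lands at the level of the content it replaces.
        return first_loudness_lufs - second_loudness_lufs

    def apply_gain(samples, gain_db):
        # Scale PCM samples by the computed gain.
        scale = 10.0 ** (gain_db / 20.0)
        return [s * scale for s in samples]

    gain = replacement_gain_db(-24.0, -17.0)   # replacement content is 7 dB louder
    print(gain)                                # -7.0 -> attenuate by 7 dB
    print(apply_gain([0.5, -0.25], gain))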

IPC Classes

  • H04N 21/485 - End-user interface for client configuration
  • H04N 21/439 - Processing of audio elementary streams

30.

DETECTION OF VOLUME ADJUSTMENTS DURING MEDIA REPLACEMENT EVENTS USING LOUDNESS LEVEL PROFILES

      
Application Number US2019061633
Publication Number 2020/102633
Status In Force
Filing Date 2019-11-15
Publication Date 2020-05-22
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus, K.
  • Merchant, Shashank
  • Coover, Robert
  • Hodges, Todd, J.
  • Morris, Joshua, Ernest

Abstract

In one aspect, an example method includes (i) determining, by a playback device, a loudness level of first media content that the playback device is receiving from a first source; (ii) comparing, by the playback device, the determined loudness level of the first media content with a reference loudness level indicated by a loudness level profile for the first media content; (iii) determining, by the playback device, a target volume level for the playback device based on a difference between the determined loudness level of the first media content and the reference loudness level; and (iv) while the playback device presents second media content from a second source in place of the first media content, adjusting, by the playback device, a volume of the playback device toward the target volume level.

IPC Classes

  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/485 - End-user interface for client configuration
  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content

31.

DETECTION OF MUTE AND COMPENSATION THEREFOR DURING MEDIA REPLACEMENT EVENT

      
Application Number US2019054798
Publication Number 2020/101819
Status In Force
Filing Date 2019-10-04
Publication Date 2020-05-22
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus, K.
  • Merchant, Shashank

Abstract

In one aspect, an example method includes (i) presenting, by a playback device, first media content from a first source; (ii) encountering, by the playback device, a trigger to switch from presenting the first media content from the first source to presenting second media content from a second source; (iii) determining, by the playback device, that the playback device is presenting the first media content from the first source in a muted state; and (iv) responsive to encountering the trigger, and based on the determining that the playback device is presenting the first media content from the first source in a muted state, presenting, by the playback device, the second media content from the second source in the muted state.

IPC Classes

  • G11B 27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G10L 25/93 - Discriminating between voiced and unvoiced parts of speech signals

32.

MONITORING LOUDNESS LEVEL DURING MEDIA REPLACEMENT EVENT USING SHORTER TIME CONSTANT

      
Application Number US2019061632
Publication Number 2020/102632
Status In Force
Filing Date 2019-11-15
Publication Date 2020-05-22
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus, K.
  • Merchant, Shashank
  • Coover, Robert
  • Hodges, Todd, J.
  • Morris, Joshua, Ernest

Abstract

In one aspect, an example method includes (i) determining, by a playback device, a first loudness level of a first portion of first media content from a first source while the playback device presents the first media content, with the first portion having a first length; (ii) switching, by the playback device, from presenting the first media content from the first source to presenting second media content from a second source; (iii) based on the switching, determining, by the playback device, second loudness levels of second portions of the first media content while the playback device presents the second media content, with the second portions having a second length that is shorter than the first length; and (iv) while the playback device presents the second media content, adjusting, by the playback device, a volume of the playback device based on one or more of the second loudness levels.

IPC Classes

  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/485 - End-user interface for client configuration
  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content

33.

METHODS AND APPARATUS TO ADJUST AUDIO PLAYBACK SETTINGS BASED ON ANALYSIS OF AUDIO CHARACTERISTICS

      
Application Number US2019057736
Publication Number 2020/086771
Status In Force
Filing Date 2019-10-23
Publication Date 2020-04-30
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Summers, Cameron Aubrey
  • Hodges, Todd
  • Renner, Joseph
  • Cremer, Markus

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to adjust audio playback settings based on analysis of audio characteristics. Example apparatus disclosed herein include an equalization (EQ) model query generator to generate a query to a neural network, the query including a representation of a sample of an audio signal; an EQ filter settings analyzer to: access a plurality of audio playback settings determined by the neural network based on the query; and determine a filter coefficient to apply to the audio signal based on the plurality of audio playback settings; and an EQ adjustment implementor to apply the filter coefficient to the audio signal in a first duration.

IPC Classes

  • G11B 31/00 - Arrangements for the associated working of recording or reproducing apparatus with related apparatus
  • G11B 20/10 - Digital recording or reproducing
  • G10L 19/26 - Pre-filtering or post-filtering
  • G06N 3/02 - Neural networks
  • H03G 5/02 - Manually-operated control

34.

METHODS AND APPARATUS FOR EFFICIENT MEDIA INDEXING

      
Application Number US2019049747
Publication Number 2020/051332
Status In Force
Filing Date 2019-09-05
Publication Date 2020-03-12
Owner GRACENOTE, INC. (USA)
Inventor
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Coover, Robert
  • Dimitriou, Konstantinos Antonios

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for efficient media indexing. An example method disclosed herein includes selecting a first hash seed value based on a first entropy value calculated for a first bucket distribution resulting from use of the first hash seed value to store data in a first hash table, selecting a second hash seed value to be used in combination with the first hash seed value based on a second entropy value calculated on a second bucket distribution resulting from use of the first hash seed value in combination with the second hash seed value, and storing data in the first hash table based on the first hash seed value and a second hash table based on the second hash seed value.
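
A minimal sketch of entropy-guided hash-seed selection as described above; Python's built-in hash stands in for the real hash function, and the bucket count and candidate seeds are assumptions.

    import math
    from collections import Counter

    def bucket_entropy(keys, seed, num_buckets=64):
        # Shannon entropy (in bits) of the bucket distribution produced by hashing
        # `keys` with a given seed; higher entropy means a more uniform spread.
        counts = Counter(hash((seed, k)) % num_buckets for k in keys)
        total = len(keys)
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    def select_seed(keys, candidate_seeds, num_buckets=64):
        # Pick the candidate seed whose bucket distribution has the highest entropy.
        return max(candidate_seeds, key=lambda s: bucket_entropy(keys, s, num_buckets))

    keys = [f"fingerprint_{i}" for i in range(10_000)]
    best = select_seed(keys, candidate_seeds=range(16))
    print("selected hash seed:", best)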

IPC Classes  ?

  • G06F 16/61 - Indexing; Data structures therefor; Storage structures
  • G06F 16/41 - Indexing; Data structures therefor; Storage structures
  • G06F 16/901 - Indexing; Data structures therefor; Storage structures

35.

SYSTEMS, METHODS, AND APPARATUS TO IMPROVE MEDIA IDENTIFICATION

      
Application Number US2019049357
Publication Number 2020/051148
Status In Force
Filing Date 2019-09-03
Publication Date 2020-03-12
Owner GRACENOTE, INC. (USA)
Inventor
  • Scott, Jeffrey
  • Wilkinson, Matthew James
  • Coover, Robert
  • Merchant, Shashank

Abstract

Methods, apparatus, systems, and articles of manufacture are disclosed to improve media identification. An example apparatus includes a hash handler to generate a first set of reference matches by performing hash functions on a subset of media data associated with media to generate hashed media data based on a first bucket size, a candidate determiner to identify a second set of reference matches that include ones of the first set, the second set including ones having first quantities of hits that did not satisfy a threshold, determine second quantities of hits for ones of the second set by matching ones to the hash tables based on a second bucket size, and identify one or more candidate matches based on at least one of (1) ones of the first set or (2) ones of the second set, and a report generator to generate a report including a media identification.

IPC Classes  ?

  • G06F 21/10 - Protecting distributed programs or content, e.g. vending or licensing of copyrighted material
  • H04N 21/83 - Generation or processing of protective or descriptive data associated with content; Content structuring
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal

36.

METHODS AND APPARATUS TO FINGERPRINT AN AUDIO SIGNAL VIA NORMALIZATION

      
Application Number US2019049953
Publication Number 2020/051451
Status In Force
Filing Date 2019-09-06
Publication Date 2020-03-12
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Rafii, Zafar

Abstract

Methods, apparatus, systems, and articles of manufacture are disclosed to fingerprint audio via mean normalization. An example apparatus for audio fingerprinting includes a frequency range separator to transform an audio signal into a frequency domain, the transformed audio signal including a plurality of time-frequency bins including a first time-frequency bin, an audio characteristic determiner to determine a first characteristic of a first group of time-frequency bins of the plurality of time-frequency bins, the first group of time-frequency bins surrounding the first time-frequency bin and a signal normalizer to normalize the audio signal to thereby generate normalized energy values, the normalizing of the audio signal including normalizing the first time-frequency bin by the first characteristic. The example apparatus further includes a point selector to select one of the normalized energy values and a fingerprint generator to generate a fingerprint of the audio signal using the selected one of the normalized energy values.
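
The neighborhood-normalization step described here can be sketched briefly: normalize each time-frequency bin by the mean of a surrounding block, then keep the strongest normalized bins as fingerprint points. This is an assumption-level Python outline, not the claimed apparatus.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def spectrogram(audio, frame=1024, hop=512):
        """Magnitude spectrogram built from windowed FFT frames (freq x time)."""
        win = np.hanning(frame)
        frames = [audio[i:i + frame] * win
                  for i in range(0, len(audio) - frame, hop)]
        return np.abs(np.fft.rfft(np.asarray(frames), axis=1)).T

    def normalized_peaks(spec, neighborhood=(9, 9), top_k=32):
        """Normalize each bin by the mean of its surrounding bins, keep top peaks."""
        local_mean = uniform_filter(spec, size=neighborhood) + 1e-9
        norm = spec / local_mean
        idx = np.argsort(norm, axis=None)[-top_k:]
        return np.column_stack(np.unravel_index(idx, norm.shape))  # (freq, time) pairs

    audio = np.random.randn(16_000)               # stand-in for one second of audio
    peaks = normalized_peaks(spectrogram(audio))  # bins from which a fingerprint is built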

IPC Classes  ?

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 19/025 - Detection of transients or attacks for time/frequency resolution switching
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

37.

METHODS AND APPARATUS FOR DYNAMIC VOLUME ADJUSTMENT VIA AUDIO CLASSIFICATION

      
Application Number US2019050080
Publication Number 2020/051544
Status In Force
Filing Date 2019-09-06
Publication Date 2020-03-12
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus
  • Coover, Robert
  • Scherf, Steven D.
  • Summers, Cameron Aubrey

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for dynamic volume adjustment via audio classification. Example methods include analyzing, with a neural network trained model, a parameter of an audio signal associated with a first volume level to determine a classification group associated with the audio signal, determining an input volume of the audio signal, the determination based on the classification group associated with the audio signal, applying a gain value to the audio signal, the gain value based on the classification group and the input volume, the gain value to modify the first volume level to a second volume level, and applying a compression value to the audio signal, the compression value to modify the second volume level to a third volume level that satisfies a target volume threshold.
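
A compact way to picture the gain-then-compression chain is below; the class labels, gain table, threshold, and ratio are illustrative placeholders rather than values from the disclosure.

    import numpy as np

    # Illustrative per-class gain targets (dB); the real classes and values would
    # come from the trained model and its tuning, not from this sketch.
    CLASS_GAIN_DB = {"speech": 6.0, "music": 0.0, "effects": -3.0}

    def apply_gain_and_compression(audio, audio_class, threshold_db=-12.0, ratio=4.0):
        """Apply a class-dependent gain, then compress samples above a threshold."""
        gained = audio * 10.0 ** (CLASS_GAIN_DB.get(audio_class, 0.0) / 20.0)
        level_db = 20.0 * np.log10(np.abs(gained) + 1e-9)
        over = np.maximum(level_db - threshold_db, 0.0)
        reduction_db = over * (1.0 - 1.0 / ratio)   # simple static compression curve
        return gained * 10.0 ** (-reduction_db / 20.0)

    out = apply_gain_and_compression(np.random.randn(16_000) * 0.2, "speech")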

IPC Classes  ?

38.

DYNAMIC PLAYOUT OF TRANSITION FRAMES WHILE TRANSITIONING BETWEEN PLAYOUT OF MEDIA STREAMS

      
Application Number US2019035996
Publication Number 2020/036667
Status In Force
Filing Date 2019-06-07
Publication Date 2020-02-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Seo, Chung, Won
  • Lee, Seunghyeong

Abstract

When a device is playing out a first media stream, the device determines a target time for beginning playout of a second media stream in place of the first media stream. The device then starts a stream-transition process that is expected to take anywhere from a minimum expected transition duration to a maximum expected transition duration, and the device starts the transition process in advance of the determined target time by the maximum expected transition duration, to help ensure timely starting of playout of the second media stream. Further, for an uncertainty period that extends from the minimum expected transition duration after the starting to the maximum expected transition duration after the starting, the device generates and plays a sequence of transition frames to help mask transition from the first media stream to the second media stream.

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream

39.

DYNAMIC REDUCTION IN PLAYOUT OF REPLACEMENT CONTENT TO HELP ALIGN END OF REPLACEMENT CONTENT WITH END OF REPLACED CONTENT

      
Application Number US2019036001
Publication Number 2020/036668
Status In Force
Filing Date 2019-06-07
Publication Date 2020-02-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Seo, Chung Won
  • Lee, Seunghyeong

Abstract

When a device is playing out a media stream, the device determines a target time at which the device is to start playing out replacement content in place of a portion of the media stream. However, the device then detects that a starting time when the device starts playout of the replacement content is delayed from the target time by a delay period of duration P. In response, the device then reduces its playout of the replacement content by duration P, to help playout of the replacement content end at a desired time. For instance, the device could seek forward in playout of the replacement content by duration P and/or could remove one or more other portions of the replacement content to total a reduction of duration P.

IPC Classes  ?

  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/81 - Monomedia components thereof

40.

VEHICLE-BASED MEDIA SYSTEM WITH AUDIO AD AND VISUAL CONTENT SYNCHRONIZATION FEATURE

      
Application Number US2019043078
Publication Number 2020/028101
Status In Force
Filing Date 2019-07-23
Publication Date 2020-02-06
Owner GRACENOTE, INC. (USA)
Inventor Modi, Nisarg, A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying visual content based at least on the identified reference audio content; and (f) outputting, via a user interface of the vehicle-based media system, the identified visual content.
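
The "threshold extent of similarity" match used here and in the related vehicle-system filings can be pictured with a small lookup sketch; the cosine-similarity measure, threshold, and catalog structure are assumptions made for illustration.

    import numpy as np

    def best_match(query_fp, reference_catalog, threshold=0.8):
        """Return the reference whose fingerprint is most similar to the query,
        provided the similarity clears the threshold; otherwise return None."""
        best_id, best_score = None, -1.0
        for ref_id, ref_fp in reference_catalog.items():
            score = float(np.dot(query_fp, ref_fp) /
                          (np.linalg.norm(query_fp) * np.linalg.norm(ref_fp) + 1e-12))
            if score > best_score:
                best_id, best_score = ref_id, score
        return best_id if best_score >= threshold else None

    catalog = {"ad-001": np.random.rand(128), "ad-002": np.random.rand(128)}
    match = best_match(np.random.rand(128), catalog)   # drives which visual content to show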

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
  • G10L 21/10 - Transforming into visible information

41.

VEHICLE-BASED MEDIA SYSTEM WITH AUDIO AD AND NAVIGATION-RELATED ACTION SYNCHRONIZATION FEATURE

      
Application Number US2019043083
Publication Number 2020/028103
Status In Force
Filing Date 2019-07-23
Publication Date 2020-02-06
Owner GRACENOTE, INC. (USA)
Inventor Modi, Nisarg, A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying a geographic location associated with the identified reference audio content; and (f) based at least on the identified geographic location associated with the identified reference audio content, outputting, via the user interface of the vehicle-based media system, a prompt to navigate to the identified geographic location.

IPC Classes  ?

  • B60R 16/03 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for supply of electrical power to vehicle subsystems
  • B60R 16/023 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for transmission of signals between vehicle parts or subsystems
  • B60K 35/00 - Arrangement or adaptations of instruments

42.

VEHICLE-BASED MEDIA SYSTEM WITH AUDIO ADVERTISEMENT AND EXTERNAL-DEVICE ACTION SYNCHRONIZATION FEATURE

      
Application Number US2019043085
Publication Number 2020/028104
Status In Force
Filing Date 2019-07-23
Publication Date 2020-02-06
Owner GRACENOTE, INC. (USA)
Inventor Modi, Nisarg, A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying a computational action based at least on the identified reference audio content; and (f) sending, via a network interface of the vehicle-based media system, an instruction that causes an external computing device to perform the identified computational action.

IPC Classes  ?

  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/436 - Interfacing a local distribution network, e.g. communicating with another STB or inside the home
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/81 - Monomedia components thereof

43.

AUDIO PLAYOUT REPORT FOR RIDE-SHARING SESSION

      
Application Number US2019043088
Publication Number 2020/028106
Status In Force
Filing Date 2019-07-23
Publication Date 2020-02-06
Owner GRACENOTE, INC. (USA)
Inventor Modi, Nisarg, A.

Abstract

In one aspect, an example method to be performed by a computing device includes (a) determining that a ride-sharing session is active; (b) in response to determining the ride-sharing session is active, using a microphone of the computing device to capture audio content; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (d) determining that the ride-sharing session is inactive; and (e) outputting an indication of the identified reference audio content.

IPC Classes  ?

44.

TAGGING AN IMAGE WITH AUDIO-RELATED METADATA

      
Application Number US2019043089
Publication Number 2020/028107
Status In Force
Filing Date 2019-07-23
Publication Date 2020-02-06
Owner GRACENOTE, INC. (USA)
Inventor
  • Modi, Nisarg A.
  • Hamilton, Brian T.

Abstract

In one aspect, an example method to be performed by a computing device includes (a) receiving a request to use a camera of the computing device; (b) in response to receiving the request, (i) using a microphone of the computing device to capture audio content and (ii) using the camera of the computing device to capture an image; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; and (d) outputting an indication of the identified reference audio content while displaying the captured image.

IPC Classes  ?

  • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06T 1/00 - General purpose image data processing

45.

DYNAMIC CONTROL OF FINGERPRINTING RATE TO FACILITATE TIME-ACCURATE REVISION OF MEDIA CONTENT

      
Application Number US2019035961
Publication Number 2020/018190
Status In Force
Filing Date 2019-06-07
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Thielen, Kurt, R.
  • Merchant, Shashank, C.
  • Dunker, Peter
  • Cremer, Markus, K.
  • Scherf, Steven, D.

Abstract

A computing system identifies a media stream being received by a client, based on fingerprint matching conducted with query fingerprints generated by the client at a frame rate. The computing system then causes the client to increase the frame rate, in order to facilitate establishment by the computing system of synchronous lock between true time within the media stream and client time according to a clock of the client. The computing system then uses the established synchronous lock as a basis to map a true-time point at which a content revision should be performed in the media stream to a client-time point at which the client should perform the content revision. And the computing system causes the client to perform the content revision at the determined client-time point.

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
  • H04N 21/2662 - Controlling the complexity of the video stream, e.g. by scaling the resolution or bitrate of the video stream based on the client capabilities
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

46.

ESTABLISHMENT AND USE OF TIME MAPPING BASED ON INTERPOLATION USING LOW-RATE FINGERPRINTING, TO HELP FACILITATE FRAME-ACCURATE CONTENT REVISION

      
Application Number US2019035974
Publication Number 2020/018193
Status In Force
Filing Date 2019-06-07
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Dunker, Peter
  • Cremer, Markus, K.
  • Merchant, Shashank, C.
  • Thielen, Kurt, R.

Abstract

A media client ascertains a plurality of matching points between (i) query fingerprints representing a media stream being received by the client and (ii) reference fingerprints, each identified matching point defining a respective match between a query fingerprint that is time stamped with client time defined according to a clock of the client and a reference fingerprint that is time stamped with true time defined according to a timeline within a known media stream. Further, the client performs linear regression based on the timestamps of the ascertained plurality of matching points, to establish a mapping between true time and client time. The client then uses the established mapping as a basis to determine a client-time point at which the client should perform an action with respect to the media stream being received by the client. And the client performs the action at the determined client-time point.
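
The linear-regression mapping between client time and true time reduces to a one-line least-squares fit. The sketch below uses numpy's polyfit on illustrative timestamp pairs and is not the client's actual implementation.

    import numpy as np

    # Matching points: (true_time, client_time) pairs taken from fingerprint matches.
    true_t   = np.array([100.02, 102.01, 104.05, 106.03, 108.04])
    client_t = np.array([10.0, 12.0, 14.0, 16.0, 18.0])

    slope, intercept = np.polyfit(true_t, client_t, 1)   # client ≈ slope * true + intercept

    def client_time_for(true_time_point):
        """Map a true-time action point (e.g. a content revision) to client time."""
        return slope * true_time_point + intercept

    act_at = client_time_for(110.0)   # when, by the client clock, to act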

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
  • H04N 21/8547 - Content authoring involving timestamps for synchronizing content

47.

MODIFYING PLAYBACK OF REPLACEMENT CONTENT RESPONSIVE TO DETECTION OF REMOTE CONTROL SIGNALS THAT MODIFY OPERATION OF THE PLAYBACK DEVICE

      
Application Number US2019040546
Publication Number 2020/018287
Status In Force
Filing Date 2019-07-03
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Thielen, Kurt, R.
  • Merchant, Shashank
  • Dunker, Peter
  • Cremer, Markus, K.
  • Seo, Chungwon
  • Lee, Seunghyeong
  • Scherf, Steven, D.

Abstract

In one aspect, an example method includes (i) providing, by a playback device, replacement media content for display; (ii) determining, by the playback device, that a remote control transmitted to the playback device an instruction configured to cause a modification to operation of the playback device while the playback device displays the replacement media content; (iii) determining, by the playback device based on the instruction, an overlay that the playback device is configured to provide for display in conjunction with the modification; (iv) determining, by the playback device, a region within a display of the playback device corresponding to the overlay; and (v) modifying, by the playback device, a transparency of the region such that the overlay is visible through the replacement media content when the playback device provides the overlay for display.

IPC Classes  ?

  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/81 - Monomedia components thereof

48.

DYNAMIC CONTROL OF FINGERPRINTING RATE TO FACILITATE TIME-ACCURATE REVISION OF MEDIA CONTENT

      
Application Number US2019035955
Publication Number 2020/018189
Status In Force
Filing Date 2019-06-07
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Thielen, Kurt, R.
  • Merchant, Shashank, C.
  • Dunker, Peter
  • Cremer, Markus, K.
  • Scherf, Steven, D.

Abstract

A computing system identifies a media stream being received by a client, based on fingerprint matching conducted with query fingerprints generated by the client at a frame rate. The computing system then causes the client to increase the frame rate, in order to facilitate establishment by the computing system of synchronous lock between true time within the media stream and client time according to a clock of the client. The computing system then uses the established synchronous lock as a basis to map a true-time point at which a content revision should be performed in the media stream to a client-time point at which the client should perform the content revision. And the computing system causes the client to perform the content revision at the determined client-time point.

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

49.

ADVANCED PREPARATION FOR CONTENT REVISION BASED ON EXPECTED LATENCY IN OBTAINING NEW CONTENT

      
Application Number US2019035967
Publication Number 2020/018191
Status In Force
Filing Date 2019-06-07
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Dunker, Peter
  • Cremer, Markus, K.
  • Merchant, Shashank, C.
  • Thielen, Kurt, R.

Abstract

When a media client is receiving a media stream, the media client determines an upcoming time point at which the media client is to perform a content revision involving insertion (e.g., substitution or overlaying) of new content. The media client further determines an advanced time point when the media client should initiate a process of acquiring the new content, setting the advanced time point sufficiently in advance of the upcoming content-revision time point to enable the media client to obtain at least enough of the new content to be able to start the content revision on time. In an example implementation, the media client could determine the advanced time point by predicting how long the content-acquisition process will take, based on consideration of past instances of content acquisition, possibly correlated with operational factors such as content source, processor load, memory load, network speed, and time of day.
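
Predicting how far in advance to start fetching can be sketched as a high percentile over past acquisition durations plus a margin; the 95th-percentile choice and the margin are assumptions for illustration, and the abstract's other operational factors (processor load, network speed, time of day) are omitted here.

    import numpy as np

    def advanced_start_time(revision_time, past_fetch_durations, margin=0.25):
        """Start acquiring content early enough that the revision can begin on time.

        Uses a high percentile of past content-acquisition durations, plus a small
        safety margin, as the predicted fetch duration."""
        predicted = float(np.percentile(past_fetch_durations, 95))
        return revision_time - (predicted + margin)

    history = [1.8, 2.1, 1.6, 2.4, 3.0, 1.9]      # seconds taken by past fetches
    start_fetch_at = advanced_start_time(600.0, history)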

IPC Classes  ?

  • H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists

50.

ESTABLISHMENT AND USE OF TIME MAPPING BASED ON INTERPOLATION USING LOW-RATE FINGERPRINTING, TO HELP FACILITATE FRAME-ACCURATE CONTENT REVISION

      
Application Number US2019035973
Publication Number 2020/018192
Status In Force
Filing Date 2019-06-07
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Dunker, Peter
  • Cremer, Markus, K.
  • Merchant, Shashank, C.
  • Thielen, Kurt, R.

Abstract

A computing system identifies multiple matching points between (i) query fingerprints representing a media stream being received by a client and (ii) reference fingerprints, each identified matching point defining a respective match between a query fingerprint timestamped with client time defined according to a clock of the client and a reference fingerprint timestamped with true time defined according to a timeline within a known media stream. Further, the computing system performs linear regression based on the timestamps of the matching points, to establish a mapping between true time and client time. The computing system then uses the mapping to determine a client-time point at which the client should perform a content revision or other action with respect to the media stream being received by the client. And the computing system causes the client to perform the content revision or other action at the determined client-time point.

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/242 - Synchronization processes, e.g. processing of PCR [Program Clock References]
  • H04N 21/8547 - Content authoring involving timestamps for synchronizing content

51.

MODIFYING PLAYBACK OF REPLACEMENT CONTENT RESPONSIVE TO DETECTION OF REMOTE CONTROL SIGNALS THAT CONTROL A DEVICE PROVIDING VIDEO TO THE PLAYBACK DEVICE

      
Application Number US2019040550
Publication Number 2020/018288
Status In Force
Filing Date 2019-07-03
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Thielen, Kurt, R.
  • Merchant, Shashank
  • Dunker, Peter
  • Cremer, Markus, K.
  • Seo, Chungwon
  • Lee, Seunghyeong
  • Scherf, Steven, D.

Abstract

In one aspect, an example method includes (i) providing, by a playback device, replacement media content for display; (ii) determining, by the playback device, that while the playback device is displaying the replacement media content a remote control transmitted an instruction to a media device that provides media content to the playback device; (iii) determining, by the playback device, a playback-modification action corresponding to the instruction and the media device; and (iv) modifying, by the playback device, playback of the replacement media content in accordance with the playback-modification action.

IPC Classes  ?

  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/81 - Monomedia components thereof

52.

MODIFYING PLAYBACK OF REPLACEMENT CONTENT BASED ON CONTROL MESSAGES

      
Application Number US2019040553
Publication Number 2020/018289
Status In Force
Filing Date 2019-07-03
Publication Date 2020-01-23
Owner GRACENOTE, INC. (USA)
Inventor
  • Thielen, Kurt, R.
  • Dunker, Peter
  • Cremer, Markus, K.
  • Scherf, Steven, D.
  • Merchant, Shashank

Abstract

In one aspect, an example method includes (i) identifying, by a playback device, a media device based on a control message received from the media device by way of an audio and/or video interface, where the media device provides media content to the playback device; (ii) providing, by the playback device, replacement media content for display; (iii) determining, by the playback device, that while the playback device is displaying the replacement media content a remote control transmitted an instruction to the identified media device; (iv) determining, by the playback device, a playback-modification action corresponding to the instruction and the identified media device; and (v) modifying, by the playback device, playback of the replacement media content in accordance with the playback-modification action.

IPC Classes  ?

  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/4363 - Adapting the video stream to a specific local network, e.g. a IEEE 1394 or Bluetooth® network

53.

METHODS AND APPARATUS FOR VOLUME ADJUSTMENT

      
Application Number US2019012524
Publication Number 2019/136371
Status In Force
Filing Date 2019-01-07
Publication Date 2019-07-11
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Scott, Jeffrey
  • Cremer, Markus K.
  • Vartakavi, Aneesh

Abstract

Apparatus, systems, articles of manufacture, and methods for volume adjustment are disclosed herein. An example method includes identifying media represented in an audio signal, accessing metadata associated with the media in response to identifying the media in the audio signal, determining, based on the metadata, an average volume for the media, and adjusting an output volume of the audio signal based on an average gain value, the average gain value determined based on the average volume for the media.

IPC Classes  ?

  • H03G 7/00 - Volume compression or expansion in amplifiers

54.

DETECTING AND RESPONDING TO RENDERING OF INTERACTIVE VIDEO CONTENT

      
Application Number US2018032202
Publication Number 2018/231393
Status In Force
Filing Date 2018-05-11
Publication Date 2018-12-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Lee, Dewey, Ho
  • Merchant, Shashank, C.
  • Cremer, Markus, K.

Abstract

A computing system obtains a fingerprint of video content being rendered by a video presentation device, including a first portion representing a pre-established video segment and a second portion representing a dynamically-defined video segment. While obtaining the query fingerprint, the computing system (a) detects a match between the first portion of the query fingerprint and a reference fingerprint that represents the pre-established video segment, (b) based on the detecting of the match, identifies the video content being rendered, (c) after identifying the video content being rendered, applies a trained neural network to at least the second portion of the query fingerprint, and (d) detects, based on the applying of the neural network, that rendering of the identified video content continues. And responsive to at least the detecting that rendering of the identified video content continues, the computing system then takes associated action.

IPC Classes  ?

  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream

55.

MUSIC SERVICE WITH MOTION VIDEO

      
Application Number US2018025392
Publication Number 2018/183841
Status In Force
Filing Date 2018-03-30
Publication Date 2018-10-04
Owner GRACENOTE, INC. (USA)
Inventor Cremer, Markus, K.

Abstract

Techniques of providing motion video content along with audio content are disclosed. In some example embodiments, a computer-implemented system is configured to perform operations comprising: receiving primary audio content; determining that at least one reference audio content satisfies a predetermined similarity threshold based on a comparison of the primary audio content with the at least one reference audio content; for each one of the at least one reference audio content, identifying motion video content based on the motion video content being stored in association with the one of the at least one reference audio content and not stored in association with the primary audio content; and causing the identified motion video content to be displayed on a device concurrently with a presentation of the primary audio content on the device.

IPC Classes  ?

  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware

56.

GENERATING A VIDEO PRESENTATION TO ACCOMPANY AUDIO

      
Application Number US2018025397
Publication Number 2018/183845
Status In Force
Filing Date 2018-03-30
Publication Date 2018-10-04
Owner GRACENOTE, INC. (USA)
Inventor
  • Harron, Wilson
  • Summers, Cameron, Aubrey

Abstract

Example methods and systems for generating a video presentation to accompany audio are described. The video presentation to accompany the audio track is generated from one or more video sequences. In some example embodiments, the video sequences are divided into video segments that correspond to discontinuities between frames. Video segments are concatenated to form a video presentation to which the audio track is added. In some example embodiments, only video segments having a duration equal to an integral number of beats of music in the audio track are used to form the video presentation. In these example embodiments, transitions between video segments in the video presentation that accompanies the audio track are aligned with the beats of the music.
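
The beat-alignment constraint can be pictured as keeping only candidate segments whose length is close to a whole number of beats; the tempo, tolerance, and segment list below are illustrative assumptions.

    def beat_aligned_segments(segment_durations, tempo_bpm, tolerance=0.05):
        """Keep segments whose duration is close to an integral number of beats."""
        beat = 60.0 / tempo_bpm
        keep = []
        for duration in segment_durations:
            beats = duration / beat
            if round(beats) >= 1 and abs(beats - round(beats)) <= tolerance:
                keep.append(duration)
        return keep

    # Segment durations (seconds) found at shot discontinuities, against a 120 BPM track.
    usable = beat_aligned_segments([1.0, 1.47, 2.0, 3.02, 0.2], 120.0)   # -> [1.0, 2.0, 3.02]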

IPC Classes  ?

  • H04N 21/2368 - Multiplexing of audio and video streams
  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering

57.

MATCHING AUDIO FINGERPRINTS

      
Application Number US2016044041
Publication Number 2017/222569
Status In Force
Filing Date 2016-07-26
Publication Date 2017-12-28
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Scott, Jeffrey
  • Dimitriou, Konstantinos Antonios

Abstract

A matching machine accesses a query fingerprint that includes query sub-fingerprints that have been generated from query segments of a portion of query audio. After selecting reference sub-fingerprints for comparison to the query sub-fingerprints, the matching machine identifies a best-matching subset of the reference sub-fingerprints by evaluating total matches between the query sub-fingerprints and different subsets of the reference sub-fingerprints. The matching machine then generates a count vector that stores the total counts mapped to respective offsets from a reference point in the reference sub-fingerprints. The matching machine determines a maximum count among the total counts and classifies the reference sub-fingerprints as a match with the query sub-fingerprints based on the maximum count.
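
The count-vector step, where matches are tallied per time offset and the tallest tally decides the classification, can be sketched as follows; the data layout and the example values are assumptions made for illustration.

    from collections import Counter

    def offset_match_score(query_hits, reference_hits):
        """Tally query/reference matches by time offset; a tall peak at one offset
        suggests the query audio aligns with the reference at that offset."""
        counts = Counter()
        for sub_fp, q_time in query_hits:
            for r_time in reference_hits.get(sub_fp, []):
                counts[r_time - q_time] += 1
        if not counts:
            return None, 0
        offset, peak = counts.most_common(1)[0]
        return offset, peak

    reference_hits = {0xA1: [12, 40], 0xB2: [13], 0xC3: [14, 90]}
    query_hits = [(0xA1, 2), (0xB2, 3), (0xC3, 4)]
    offset, peak = offset_match_score(query_hits, reference_hits)   # -> (10, 3)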

IPC Classes  ?

  • H04N 7/16 - Analogue secrecy systems; Analogue subscription systems

58.

MEDIA CHANNEL IDENTIFICATION WITH VIDEO MULTI-MATCH DETECTION AND DISAMBIGUATION BASED ON AUDIO FINGERPRINT

      
Application Number US2017019908
Publication Number 2017/151591
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Seo, Chung, Won
  • Kwon, Youngmoo
  • Lee, Jaehyung

Abstract

Disclosed are methods and systems to help disambiguate channel identification in a scenario where a video fingerprint of media content matches multiple reference video fingerprints corresponding respectively with multiple different channels. Given such a multi-match situation, an entity could disambiguate based on an audio component of the media content, such as by further determining that an audio fingerprint of the media content at issue matches an audio fingerprint of just one of the multiple channels, thereby establishing that that is the channel on which the media content being rendered by the media presentation device is arriving.
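
The disambiguation step amounts to a second, audio-only comparison restricted to the channels that tied on video. In the sketch below the fingerprint matcher is a hypothetical stand-in passed in as a callable, not the system's actual comparison.

    def disambiguate_by_audio(video_candidates, query_audio_fp, audio_refs, matches):
        """Among channels whose video fingerprints all matched, keep only those whose
        audio reference also matches the query audio fingerprint; `matches` is any
        callable returning True when two fingerprints match."""
        survivors = [ch for ch in video_candidates
                     if matches(query_audio_fp, audio_refs.get(ch))]
        return survivors[0] if len(survivors) == 1 else None

    channel = disambiguate_by_audio(
        ["ABC-East", "ABC-West"],
        query_audio_fp="fp-query",
        audio_refs={"ABC-East": "fp-query", "ABC-West": "fp-other"},
        matches=lambda a, b: a == b,
    )                                              # -> "ABC-East"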

IPC Classes  ?

  • H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream
  • H04N 21/458 - Scheduling content for creating a personalised stream, e.g. by combining a locally stored advertisement with an incoming stream; Updating operations, e.g. for OS modules
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]

59.

MEDIA CHANNEL IDENTIFICATION WITH MULTI-MATCH DETECTION AND DISAMBIGUATION BASED ON LOCATION

      
Application Number US2017019915
Publication Number 2017/151596
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Seo, Chung, Won
  • Kwon, Youngmoo
  • Lee, Jaehyung

Abstract

Disclosed are methods and systems involving location-based disambiguation of media channel identification, in a scenario where a fingerprint of media content being rendered by a media presentation device matches multiple reference fingerprints corresponding respectively with multiple different media channels. Upon detecting such a multi-match situation, a server or other entity will use a location of the media presentation device as a basis to disambiguate between the matching reference fingerprints and thus to determine the channel on which the media content being rendered by the media presentation device is arriving.

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/2368 - Multiplexing of audio and video streams
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs

60.

MEDIA CHANNEL IDENTIFICATION WITH VIDEO MULTI-MATCH DETECTION AND DISAMBIGUATION BASED ON TIME OF BROADCAST

      
Application Number US2017019949
Publication Number 2017/151614
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Seo, Chung, Won
  • Kwon, Youngmoo
  • Lee, Jaehyung

Abstract

Disclosed herein are methods and systems to help disambiguate channel identification in a scenario where fingerprint data of media content being rendered by a media presentation device matches multiple reference fingerprints corresponding respectively with multiple different channels. Upon detecting such a multi-match, a server or other entity will perform disambiguation based at least in part on a comparison of time of broadcast of the media content being rendered by the media presentation device with time of broadcast of the media content represented by the reference fingerprints. The server or other entity will thereby determine the channel on which the media content being rendered by the media presentation device is arriving, so as to facilitate taking channel-specific action.

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/2385 - Channel allocation; Bandwidth allocation
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs

61.

MEDIA CHANNEL IDENTIFICATION AND ACTION WITH MULTI-MATCH DETECTION AND DISAMBIGUATION BASED ON MATCHING WITH DIFFERENTIAL REFERENCE-FINGERPRINT FEATURE

      
Application Number US2017019974
Publication Number 2017/151633
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Lee, Jaehyung
  • Lee, Dewey Ho
  • Cremer, Markus

Abstract

A computing system compares various reference fingerprints each representing a reference media stream broadcast on a different respective known channel, and the computing system determines that a plurality of the reference fingerprints match each other, thus defining a multi-match group of the matching reference fingerprints. In response, the computing system identifies a fingerprint feature that could define a distinction between the reference fingerprints, and the computing system resolves the multi-match based on the identified feature, thereby determining the channel carrying the media stream being rendered by the media presentation device. And the server then takes channel-specific action based on the determined channel.

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

62.

MEDIA CHANNEL IDENTIFICATION AND ACTION WITH MULTI-MATCH DETECTION BASED ON REFERENCE STREAM COMPARISON

      
Application Number US2017019979
Publication Number 2017/151636
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Seo, Chung Won
  • Kwon, Youngmoo
  • Lee, Jaehyung

Abstract

A computing system will compare various reference fingerprints each representing a reference media stream broadcast on a different respective known channel, and the computing system will determine that a plurality of the reference fingerprints match each other, thus defining a multi-match group of the matching reference fingerprints. Further, the computing system will determine that a query fingerprint representing a media stream being rendered by a media presentation device matches the multi-match group, thus raising a question of which channel is carrying the media stream that is being rendered by the media presentation device. By considering one or more attributes of the query fingerprint, the server may then disambiguate and thereby determine the channel at issue, and the server may in turn take channel-specific action.

IPC Classes  ?

  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
  • H04N 21/482 - End-user interface for program selection

63.

MEDIA CHANNEL IDENTIFICATION WITH MULTI-MATCH DETECTION AND DISAMBIGUATION BASED ON SINGLE-MATCH

      
Application Number US2017019946
Publication Number 2017/151611
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Lee, Jaehyung
  • Seo, Chung, Won
  • Kwon, Youngmoo

Abstract

Disclosed herein are methods and systems to help disambiguate channel identification in a scenario where fingerprint data of media content matches multiple reference fingerprints corresponding respectively with multiple different channels. Upon detecting such a multi-match, a server or other entity will perform disambiguation based on a determination that a segment of the fingerprint data matches a reference fingerprint corresponding with just a single channel, such as a reference fingerprint representing commercial or news programming content specific to just the single channel. The server or other entity will thereby determine the channel on which the media content being rendered by the media presentation device is arriving, so as to facilitate taking channel-specific action.

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/2385 - Channel allocation; Bandwidth allocation
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream

64.

METHOD AND SYSTEM FOR DETECTING AND RESPONDING TO CHANGING OF MEDIA CHANNEL

      
Application Number US2017020003
Publication Number 2017/151654
Status In Force
Filing Date 2017-02-28
Publication Date 2017-09-08
Owner GRACENOTE, INC. (USA)
Inventor
  • Lee, Jaehyung
  • Lee, Dewey, Ho

Abstract

A computing system receives from a media presentation device a query fingerprint stream representing media content being presented by the media presentation device, where the query fingerprint stream has been determined to represent a first channel. The computing system then detects that a threshold mismatch exists between the received query fingerprint stream and a reference fingerprint stream representing the first channel, thus indicating a likelihood that the media presentation device has transitioned from presenting the first channel to presenting a second channel. Responsive to detecting the threshold mismatch, the system thus discontinues channel-specific action with respect to the first channel. For instance, the system could discontinue superimposing of first-channel-specific content on the presented media content and perhaps start superimposing of second-channel-specific content instead.
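
The threshold-mismatch test can be sketched as a rolling mismatch rate over recent fingerprint frames; the window size and threshold below are illustrative assumptions.

    from collections import deque

    class ChannelChangeDetector:
        """Flags a likely channel change when too many recent query fingerprint
        frames fail to match the reference stream for the assumed channel."""

        def __init__(self, window=30, threshold=0.6):
            self.recent = deque(maxlen=window)
            self.threshold = threshold

        def observe(self, frame_matched):
            self.recent.append(bool(frame_matched))
            mismatch_rate = 1.0 - sum(self.recent) / len(self.recent)
            return (len(self.recent) == self.recent.maxlen
                    and mismatch_rate >= self.threshold)

    detector = ChannelChangeDetector()
    flags = [detector.observe(m) for m in [True] * 20 + [False] * 25]   # ends with True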

IPC Classes  ?

  • H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs

65.

GENERATING AND DISTRIBUTING PLAYLISTS WITH RELATED MUSIC AND STORIES

      
Application Number US2016066943
Publication Number 2017/120008
Status In Force
Filing Date 2016-12-15
Publication Date 2017-07-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Sharma, Rishabh
  • Cremer, Markus

Abstract

An embodiment may involve, based on a profile associated with a client device, selecting an audio file containing music. Based on an attribute of the audio file containing the music, an audio file containing a story may be selected. A playlist for the client device may be generated, where the playlist includes (i) a reference to the audio file containing the music, and (ii) a reference to the audio file containing the story. A server device may transmit the playlist to the client device over a wide area network. Reception of the playlist at the client device may cause an audio player application to retrieve and play out each of the audio file containing the music and the audio file containing the story.

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

66.

GENERATING AND DISTRIBUTING PLAYLISTS WITH MUSIC AND STORIES HAVING RELATED MOODS

      
Application Number US2016066961
Publication Number 2017/120009
Status In Force
Filing Date 2016-12-15
Publication Date 2017-07-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Sharma, Rishabh
  • Cremer, Markus

Abstract

An embodiment may involve a server device obtaining an audio file containing a story. The server device may determine a mood of the story. The server device may select an audio file containing music, where the audio file containing the music is associated with a music attribute that is indicative of the mood. The server device may generate a playlist for the client device, where the playlist includes (i) a reference to the audio file containing the music, and (ii) a reference to the audio file containing the story. The server device may transmit the playlist, over a wide area network, to the client device. Reception of the playlist at the client device may cause an audio player application to retrieve and play out each audio file therein.

IPC Classes  ?

  • G06Q 50/10 - Services
  • G10L 13/08 - Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
  • G10L 15/26 - Speech to text systems
  • H04W 4/00 - Services specially adapted for wireless communication networks; Facilities therefor

67.

COMPUTING SYSTEM WITH CHANNEL-CHANGE-BASED TRIGGER FEATURE

      
Application Number US2017012336
Publication Number 2017/120337
Status In Force
Filing Date 2017-01-05
Publication Date 2017-07-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Harron, Wilson
  • Dimitriou, Konstantinos Antonios

Abstract

In one aspect, an example method includes (i) receiving, by a computing system, media content; (ii) generating, by the computing system, a fingerprint of the received media content; (iii) determining, by the computing system, that a channel-change operation was performed; (iv) responsive to determining that the channel-change operation was performed, transmitting, by the computing system, the generated fingerprint to a content identification server to identify the received media content; and (v) performing an action based on the identified media content.

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

68.

COMPUTING SYSTEM WITH CONTENT-CHARACTERISTIC-BASED TRIGGER FEATURE

      
Application Number US2017012338
Publication Number 2017/120339
Status In Force
Filing Date 2017-01-05
Publication Date 2017-07-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Harron, Wilson
  • Dimitriou, Konstantinos, Antonios

Abstract

In one aspect, an example method includes (i) receiving, by a computing system, media content; (ii) generating, by the computing system, a fingerprint of a portion of the received media content; (iii) determining, by the computing system, that the received media content has a predefined characteristic; (iv) responsive to determining that the received media content has the predefined characteristic, transmitting, by the computing system, the generated fingerprint to a content identification server to identify the portion of the received media content; and (v) performing an action based on the identified portion of media content.

IPC Classes  ?

  • H04N 21/235 - Processing of additional data, e.g. scrambling of additional data or processing content descriptors
  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
  • H04N 21/435 - Processing of additional data, e.g. decrypting of additional data or reconstructing software from modules extracted from the transport stream

69.

RESPONDING TO REMOTE MEDIA CLASSIFICATION QUERIES USING CLASSIFIER MODELS AND CONTEXT PARAMETERS

      
Application Number US2016068898
Publication Number 2017/117234
Status In Force
Filing Date 2016-12-28
Publication Date 2017-07-06
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus K.
  • Popp, Phillip
  • Summers, Cameron Aubrey
  • Cramer, Jason

Abstract

A neural network-based classifier system can receive a query including a media signal and, in response, provide an indication that a particular received query corresponds to a known media type or media class. The neural network-based classifier system can select and apply various models to facilitate media classification. In an example embodiment, classifying a media query includes accessing digital media data and a context parameter from a first device. A model for use with the network-based classifier system can be selected based on the context parameter. In an example embodiment, the network-based classifier system provides a media type probability index for the digital media data using the selected model and spectral features corresponding to the digital media data. In an example embodiment, the digital media data includes an audio or video signal sample.

IPC Classes  ?

  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

70.

DYNAMIC VIDEO OVERLAYS

      
Application Number US2016067250
Publication Number 2017/106695
Status In Force
Filing Date 2016-12-16
Publication Date 2017-06-22
Owner GRACENOTE, INC. (USA)
Inventor
  • Lee, Dewey Ho
  • Harron, Wilson
  • Pearce, David Henry
  • Dunker, Peter
  • Cremer, Markus K.
  • Scherf, Steven D.
  • Li, Sherman Ling Fung
  • Dimitriou, Konstantinos Antonios

Abstract

A client device accesses a video input stream from an intermediate device for display. The client device analyzes the video input stream to determine that the video input stream matches a template corresponding to a screen portion. Based on the video input stream matching the template, a video output stream is generated and caused to be presented on a display. In some example embodiments, the analysis is performed while the client device is replacing video content received from a content source via the intermediate device. For example, commercials transmitted from a national content provider to a smart TV via a set-top box may be replaced with targeted commercials. During the replacement, menus generated by the set-top box may be detected and the replacement video altered by the smart TV to include the menus.

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/81 - Monomedia components thereof

71.

AUDIO MATCHING BASED ON HARMONOGRAM

      
Application Number US2015067814
Publication Number 2016/109500
Status In Force
Filing Date 2015-12-29
Publication Date 2016-07-07
Owner GRACENOTE, INC. (USA)
Inventor Rafii, Zafar

Abstract

In an example context of identifying live audio, an audio processor machine accesses audio data that represents a query sound and creates a spectrogram from the audio data. Each segment of the spectrogram represents a different time slice in the query sound. For each time slice, the audio processor machine determines one or more dominant frequencies and an aggregate energy value that represents a combination of all the energy for that dominant frequency and its harmonics. The machine creates a harmonogram by representing these aggregate energy values at these dominant frequencies in each time slice. The harmonogram thus may represent the strongest harmonic components within the query sound. The machine can identify the query sound by comparing its harmonogram to other harmonograms of other sounds and may respond to a user's submission of the query sound by providing an identifier of the query sound to the user.
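
The aggregate-energy idea, i.e., summing a dominant frequency's energy with that of its harmonics within each time slice, can be sketched per spectrogram column; the FFT framing and the number of harmonics are illustrative assumptions.

    import numpy as np

    def harmonogram_column(magnitudes, sample_rate, frame_size, num_harmonics=5):
        """For one spectrogram time slice, return (dominant_freq_hz, aggregate_energy),
        where the aggregate sums energy at the dominant bin and its harmonics."""
        energies = magnitudes ** 2
        k0 = int(np.argmax(energies[1:]) + 1)       # skip the DC bin
        agg = sum(energies[k0 * h] for h in range(1, num_harmonics + 1)
                  if k0 * h < len(energies))
        return k0 * sample_rate / frame_size, float(agg)

    sr, frame = 16_000, 2048
    t = np.arange(frame) / sr
    tone = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 440 * t)
    mags = np.abs(np.fft.rfft(tone * np.hanning(frame)))
    freq, energy = harmonogram_column(mags, sr, frame)   # freq lands near 220 Hz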

IPC Classes  ?

  • G10L 11/00 - Determination or detection of speech or audio characteristics not restricted to a single one of groups G10L 15/00 - G10L 21/00

72.

MACHINE-LED MOOD CHANGE

      
Application Number US2015067889
Publication Number 2016/109553
Status In Force
Filing Date 2015-12-29
Publication Date 2016-07-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Dimaria, Peter C.
  • Gubman, Michael
  • Cremer, Markus K.
  • Summers, Cameron Aubrey
  • Tronel, Gregoire

Abstract

A machine is configured to identify a media file that, when played to a user, is likely to modify an emotional or physical state of the user to or towards a target emotional or physical state. The machine accesses play counts that quantify playbacks of media files for the user. The playbacks may be locally performed or detected by the machine from ambient sound. The machine accesses arousal scores of the media files and determines a distribution of the play counts over the arousal scores. The machine uses one or more relative maxima in the distribution in selecting a target arousal score for the user based on contextual data that describes an activity of the user. The machine selects one or more media files based on the target arousal score. The machine may then cause the selected media file to be played to the user.
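
A toy version of the arousal-distribution step might look like the following: play counts are binned over arousal scores, relative maxima are located, and the maximum nearest a contextual target is chosen; all scores, counts, and bin settings are fabricated for illustration.

    import numpy as np

    def target_arousal(play_counts, arousal_scores, contextual_arousal, bins=10):
        hist, edges = np.histogram(arousal_scores, bins=bins, range=(0.0, 1.0),
                                   weights=play_counts)
        centers = (edges[:-1] + edges[1:]) / 2
        # Relative maxima: bins larger than both neighbours.
        peaks = [i for i in range(1, bins - 1)
                 if hist[i] > hist[i - 1] and hist[i] > hist[i + 1]]
        if not peaks:
            peaks = [int(np.argmax(hist))]
        return min((centers[i] for i in peaks),
                   key=lambda c: abs(c - contextual_arousal))

    rng = np.random.default_rng(0)
    scores = rng.random(200)                 # arousal score per media file
    counts = rng.integers(0, 50, size=200)   # play count per media file
    target = target_arousal(counts, scores, contextual_arousal=0.3)
    selected = np.argsort(np.abs(scores - target))[:5]   # 5 closest media files
    print(round(float(target), 2), selected)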

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

73.

BROADCAST PROFILING SYSTEM

      
Application Number US2015068088
Publication Number 2016/109682
Status In Force
Filing Date 2015-12-30
Publication Date 2016-07-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus K.
  • Sharma, Rishabh
  • Chien, Michael Yeehua
  • Jeyachandran, Suresh
  • Quinn, Paul Emmanuel

Abstract

Systems and methods are presented for receiving, by a server computer, broadcast data for a content station, determining that the broadcast data comprises a change in content for the content station, determining identifying information associated with the broadcast data, analyzing the identifying information associated with the broadcast data to determine characteristics of the broadcast data, storing the identifying information and the characteristics of the broadcast data, incrementing persona characteristics in a datastore for the content station with the characteristics of the broadcast data, and generating a profile of the content station based on the persona characteristics in the datastore of the content station.
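
One way to picture the persona-increment step (purely illustrative, with made-up characteristic labels) is a per-station counter that is updated on each content change and normalized into a profile:

    from collections import Counter, defaultdict

    persona = defaultdict(Counter)   # station -> characteristic counts

    def on_content_change(station, characteristics):
        persona[station].update(characteristics)

    def station_profile(station):
        counts = persona[station]
        total = sum(counts.values()) or 1
        return {k: round(v / total, 2) for k, v in counts.most_common()}

    on_content_change("KQXZ", ["rock", "1990s", "high-energy"])
    on_content_change("KQXZ", ["rock", "2000s", "high-energy"])
    print(station_profile("KQXZ"))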

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs

74.

CONTENT-BASED ASSOCIATION OF DEVICE TO USER

      
Application Number US2015023430
Publication Number 2016/018472
Status In Force
Filing Date 2015-03-30
Publication Date 2016-02-04
Owner GRACENOTE, INC. (USA)
Inventor
  • Jeffrey, Michael
  • Scherf, Steven, D.
  • Cremer, Markus, K.

Abstract

Example methods and systems for content-based association of a device to a user are presented. In an example method, data corresponding to each of a plurality of items of content stored within a user device are accessed. A device identifier for the user device is generated based on the data. The device identifier is transmitted from the user device to a service device to associate the user device with a user.
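
A minimal sketch of a content-derived device identifier, assuming each stored item already has a string fingerprint: hash the sorted fingerprints so the identifier is stable regardless of enumeration order.

    import hashlib

    def device_identifier(item_fingerprints):
        digest = hashlib.sha256()
        for fp in sorted(item_fingerprints):      # order-independent
            digest.update(fp.encode("utf-8"))
        return digest.hexdigest()[:16]

    items = ["trackA:3f9c", "trackB:81aa", "photo1:c2d4"]
    print(device_identifier(items))              # same set of items -> same ID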

IPC Classes  ?

  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

75.

TEXT DETECTION IN VIDEO

      
Application Number US2015032618
Publication Number 2015/183914
Status In Force
Filing Date 2015-05-27
Publication Date 2015-12-03
Owner GRACENOTE, INC. (USA)
Inventor
  • Zhu, Irene
  • Harron, Wilson
  • Cremer, Markus K.

Abstract

Techniques of detecting text in video are disclosed. In some embodiments, a portion of video content can be identified as having text. Text within the identified portion of the video content can be identified. A category for the identified text can be determined. In some embodiments, a determination is made as to whether the video content satisfies at least one predetermined condition, and the portion of video content is identified as having text in response to a determination that the video content satisfies the predetermined condition(s). In some embodiments, the predetermined condition(s) comprises at least one of a minimum level of clarity, a minimum level of contrast, and a minimum level of content stability across multiple frames. In some embodiments, additional information corresponding to the video content is determined based on the identified text and the determined category.
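
The predetermined conditions could be approximated with simple per-frame statistics, as in the hedged sketch below; the contrast, sharpness, and stability measures and their thresholds are stand-ins, not the patent's definitions.

    import numpy as np

    def passes_preconditions(frames, min_contrast=40.0, min_sharpness=5.0,
                             max_instability=8.0):
        cur = frames[-1].astype(float)
        contrast = cur.std()                              # crude contrast proxy
        sharpness = np.abs(np.diff(cur, axis=1)).mean()   # horizontal gradient energy
        instability = np.mean([np.abs(cur - f.astype(float)).mean()
                               for f in frames[:-1]]) if len(frames) > 1 else 0.0
        return contrast >= min_contrast and sharpness >= min_sharpness \
            and instability <= max_instability

    rng = np.random.default_rng(0)
    stable = [rng.integers(0, 255, (120, 320), dtype=np.uint8)] * 3
    print(passes_preconditions(stable))   # stable, high-contrast region -> True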

IPC Classes  ?

  • G06K 9/34 - Segmentation of touching or overlapping patterns in the image field

76.

VIDEO FINGERPRINTING

      
Application Number US2015027117
Publication Number 2015/167901
Status In Force
Filing Date 2015-04-22
Publication Date 2015-11-05
Owner GRACENOTE, INC. (USA)
Inventor
  • Harron, Wilson
  • Wilkinson, Matthew James

Abstract

A query fingerprint of a set of frames of video content captured at a client device may be generated. Multiple patches of the set of frames of video content may be selected and a value calculated for each of the selected multiple patches. The value for each patch may be indicated as a single bit along with an additional 1-bit value to indicate whether the patch value is weak. A database of known reference fingerprints may be queried using the generated query fingerprint. Matches between the query fingerprint and the reference fingerprints may be identified. Weak bits may be given reduced weight in identifying the match of fingerprints. Based on the matches, an identifier for the video content may be returned to the client device. The client device may use the received identifier to access the supplemental content.
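
A compact sketch of the patch/weak-bit scheme under assumed parameters (a 4x4 patch grid, median thresholding, and a fixed weak margin): each patch yields a value bit plus a weak flag, and weak positions are discounted when fingerprints are compared.

    import numpy as np

    def frame_fingerprint(frame, grid=4, weak_margin=2.0):
        h, w = frame.shape
        patches = np.array([frame[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid].mean()
                            for i in range(grid) for j in range(grid)])
        median = np.median(patches)
        value_bits = patches > median                  # one bit per patch
        weak_bits = np.abs(patches - median) < weak_margin   # "weak" flag per patch
        return value_bits, weak_bits

    def distance(query, reference):
        (qv, qw), (rv, _) = query, reference
        usable = ~qw                                   # discount weak query bits
        return int(np.count_nonzero(qv[usable] != rv[usable]))

    rng = np.random.default_rng(0)
    frame = rng.integers(0, 255, (360, 640)).astype(float)
    noisy = frame + rng.normal(0, 3, frame.shape)
    print(distance(frame_fingerprint(noisy), frame_fingerprint(frame)))  # typically small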

IPC Classes  ?

  • G06K 9/64 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix
  • G06F 17/30 - Information retrieval; Database structures therefor

77.

TARGETED AD REDISTRIBUTION

      
Application Number US2015019955
Publication Number 2015/138601
Status In Force
Filing Date 2015-03-11
Publication Date 2015-09-17
Owner GRACENOTE, INC. (USA)
Inventor
  • Brenner, Vadim
  • Cremer, Markus K.

Abstract

A user may view a video. The user may share a portion or the entirety of the video. The user may provide tags or comments with the shared video. The tags or comments may indicate a mood associated with the shared video. The server receiving the request to share the video may have metadata associated with the video. Additional metadata may be stored associated with smaller portions of the video. As an example, metadata may indicate the mood of a scene. The server may embed one or more advertisements into the shared video. The one or more advertisements may be targeted. Selection of advertisements may be based on a match between advertisement metadata and a user profile of the sharing user, a user profile of the receiving user, tags or comments provided by the sharing user, and metadata for the video or the clip.

IPC Classes  ?

  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

78.

MODIFYING OPERATIONS BASED ON ACOUSTIC AMBIENCE CLASSIFICATION

      
Application Number US2014071105
Publication Number 2015/102921
Status In Force
Filing Date 2014-12-18
Publication Date 2015-07-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Jeyachandran, Suresh
  • Brenner, Vadim
  • Cremer, Markus K.

Abstract

Methods and systems for modification of electronic system operation based on acoustic ambience classification are presented. In an example method, at least one audio signal present in a physical environment of a user is detected. The at least one audio signal is analyzed to extract at least one audio feature from the audio signal. The audio signal is classified based on the audio feature to produce at least one classification of the audio signal. Operation of an electronic system interacting with the user in the physical environment is modified based on the classification of the audio signal.
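
As a rough sketch of the feature-extraction and classification steps, the following uses two common ambience features and hand-set thresholds in place of a trained classifier; the class labels and cutoffs are assumptions.

    import numpy as np

    def extract_features(samples):
        rms = float(np.sqrt(np.mean(samples ** 2)))                # energy
        zcr = float(np.mean(np.abs(np.diff(np.sign(samples)))) / 2)  # zero-crossing rate
        return rms, zcr

    def classify_ambience(samples):
        rms, zcr = extract_features(samples)
        if rms < 0.02:
            return "quiet"
        return "speech-like" if zcr < 0.15 else "noisy"

    rng = np.random.default_rng(0)
    print(classify_ambience(0.5 * rng.standard_normal(16000)))    # "noisy"
    print(classify_ambience(0.001 * rng.standard_normal(16000)))  # "quiet"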

IPC Classes  ?

  • G10L 25/93 - Discriminating between voiced and unvoiced parts of speech signals

79.

INTERACTIVE PROGRAMMING GUIDE

      
Application Number US2014072977
Publication Number 2015/103384
Status In Force
Filing Date 2014-12-31
Publication Date 2015-07-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Harron, Wilson
  • Herrada, Oscar Celma
  • Zhu, Irene
  • Cremer, Markus K.

Abstract

Techniques of providing an interactive programming guide with a personalized lineup are disclosed. In some embodiments, a profile is accessed, and a personalized lineup is determined based on the profile. The personalized lineup may comprise a corresponding media content identification assigned to each one of a plurality of sequential time slots, where each media content identification identifies media content for the corresponding time slot. A first interactive programming guide may be caused to be displayed on a first media content device associated with the profile, where the first interactive programming guide comprises the personalized lineup.

IPC Classes  ?

  • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
  • H04N 7/03 - Subscription systems therefor
  • H04N 21/20 - Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof

80.

MEDIA SERVICE

      
Application Number US2014066428
Publication Number 2015/094558
Status In Force
Filing Date 2014-11-19
Publication Date 2015-06-25
Owner GRACENOTE, INC. (USA)
Inventor
  • Dimaria, Peter C.
  • Mink, Barnabas
  • Herrada, Oscar Celma
  • Gubman, Michael

Abstract

A machine may form all or part of a network-based system configured to provide media service to one or more user devices. The machine may be configured to define a station library within a larger collection of media files. In particular, the machine may access metadata that describes the media files included in the collection and access a seed that forms the basis on which the station library is to be defined. The machine may generate a station list of media files from the metadata and based on the seed, and enable a human editor to modify the machine-generated station list according to a human-contributed input. The machine may then modify the station list based on the submitted input and configure a media service to provide one or more user devices with a data stream that includes media files selected from the modified station list.

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

81.

AUTHORIZING DEVICES BASED ON IDENTIFYING CONTENT DISTRIBUTOR

      
Application Number US2013076201
Publication Number 2014/107311
Status In Force
Filing Date 2013-12-18
Publication Date 2014-07-10
Owner GRACENOTE, INC. (USA)
Inventor
  • Gordon, Donald F.
  • Cremer, Markus K.
  • Dunker, Peter

Abstract

Methods and systems to authorize devices and/or perform other actions based on identifying content distributors are described. In some example embodiments, the methods and systems access video content playing at a client device, calculate fingerprints of a portion of the video content, identify a distributor of the video content based on the fingerprints, and perform an action in response to the identification of the distributor of the video content, such as actions to authorize the client device or other associated devices (e.g., second screens) to receive content from the distributor, actions to present sponsored content to the client device or associated devices, and so on.

IPC Classes  ?

  • H04L 9/32 - Arrangements for secret or secure communications; Network security protocols including means for verifying the identity or authority of a user of the system

82.

USER PROFILE BASED ON CLUSTERING TIERED DESCRIPTORS

      
Application Number US2013055576
Publication Number 2014/042826
Status In Force
Filing Date 2013-08-19
Publication Date 2014-03-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Popp, Phillip
  • Chen, Ching-Wei
  • Dimaria, Peter C.
  • Cremer, Markus K.

Abstract

A user of a network-based system may correspond to a user profile that describes the user. The user profile may describe the user using one or more descriptors of items that correspond to the user (e.g., items owned by the user, items liked by the user, or items rated by the user). In some situations, such a user profile may be characterized as a "taste profile" that describes an array or distribution of one or more tastes, preferences, or habits of the user. Accordingly, the user profile machine within the network-based system may generate the user profile by accessing descriptors of items that correspond to the user, clustering one or more of the descriptors, and generating the user profile based on one or more clusters of the descriptors.
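
A toy illustration of descriptor clustering for a taste profile, assuming two descriptor tiers (coarse genre, finer style): group descriptors by the coarse tier and summarize each cluster by its size and most common fine descriptors.

    from collections import Counter, defaultdict

    # (tier-1 descriptor, tier-2 descriptor) for items that correspond to the user.
    item_descriptors = [
        ("rock", "indie rock"), ("rock", "classic rock"), ("rock", "indie rock"),
        ("jazz", "bebop"), ("jazz", "vocal jazz"),
        ("electronic", "house"),
    ]

    clusters = defaultdict(Counter)
    for tier1, tier2 in item_descriptors:
        clusters[tier1][tier2] += 1

    profile = {
        tier1: {"weight": sum(c.values()), "top": c.most_common(2)}
        for tier1, c in clusters.items()
    }
    print(profile)   # e.g. rock dominates this user's profile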

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

83.

MEDIA SOURCE IDENTIFICATION

      
Application Number US2012051745
Publication Number 2013/032787
Status In Force
Filing Date 2012-08-21
Publication Date 2013-03-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Raesig, Tassilo
  • Heider, Frank
  • Krauss, Dietmar

Abstract

A server machine and a first device (e.g., a television) are configured to access a stream of media (e.g., a broadcast channel) from a media source (e.g., a broadcaster). The server machine generates a representation (e.g., a fingerprint) of the stream of media and stores the representation. The first device plays the stream of media and generates an analog signal based on the stream of media. A second device (e.g., a mobile device of a user) is configured to receive the analog signal and generate a representation of the analog signal. The second device provides the representation of the analog signal to the server machine, which may compare the representation of the stream of media to the representation of the analog signal. Based on the comparison, the server machine may provide an identifier of the media source to the second device.

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]

84.

METHODS AND APPARATUS FOR DETERMINING A MOOD PROFILE ASSOCIATED WITH MEDIA DATA

      
Application Number US2010037665
Publication Number 2010/151421
Status In Force
Filing Date 2010-06-07
Publication Date 2010-12-29
Owner GRACENOTE, INC. (USA)
Inventor
  • Chen, Ching-Wei
  • Lee, Kyogu
  • Dimaria, Peter C.
  • Cremer, Markus K.

Abstract

In an embodiment, a method is provided for determining a mood profile of media data. In this method, mood is determined across multiple elements of mood for the media data to create a mood profile associated with the media data. In some embodiments, the mood profile is then used to determine congruencies between one or more pieces of media data.

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor
  • G01H 1/00 - Measuring vibrations in solids by using direct conduction to the detector

85.

APPARATUS AND METHOD FOR DETERMINING A PROMINENT TEMPO OF AN AUDIO WORK

      
Application Number US2010033753
Publication Number 2010/129693
Status In Force
Filing Date 2010-05-05
Publication Date 2010-11-11
Owner GRACENOTE, INC. (USA)
Inventor
  • Chen, Ching-Wei
  • Lee, Kyogu
  • Dimaria, Peter, C.
  • Cremer, Markus, K.

Abstract

The prominent tempo of audio data is determined by detecting a plurality of beat rates of the audio data. One or more audio data characteristics are used to filter through the beat rates to determine the prominent tempo. Systems, methods, and apparatuses to determine the prominent tempo are discussed herein.
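
One plausible reading of the beat-rate detection and filtering, sketched with assumed parameters: take the autocorrelation of an onset-strength envelope as the set of candidate beat rates, then keep only candidates in a typical tempo range before choosing the strongest.

    import numpy as np

    def prominent_tempo(onset_env, frame_rate, lo_bpm=60, hi_bpm=180):
        env = onset_env - onset_env.mean()
        ac = np.correlate(env, env, mode="full")[len(env) - 1:]   # lags 0..N-1
        lags = np.arange(len(ac))
        bpm = np.zeros(len(ac))
        bpm[1:] = 60.0 * frame_rate / lags[1:]
        mask = (bpm >= lo_bpm) & (bpm <= hi_bpm)     # keep plausible beat rates
        best_lag = lags[mask][np.argmax(ac[mask])]
        return 60.0 * frame_rate / best_lag

    frame_rate = 100                                  # onset-envelope frames/second
    t = np.arange(30 * frame_rate)
    env = (np.sin(2 * np.pi * 2.0 * t / frame_rate) > 0.95).astype(float)  # 2 Hz pulses
    print(round(prominent_tempo(env, frame_rate)))    # -> 120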

IPC Classes  ?

86.

RECOGNITION OF VIDEO CONTENT

      
Application Number US2009041383
Publication Number 2009/132084
Status In Force
Filing Date 2009-04-22
Publication Date 2009-10-29
Owner GRACENOTE, INC. (USA)
Inventor
  • Scherf, Steven, D.
  • Funk, Gregory, Allan

Abstract

A method and system are provided for recognizing video content represented by temporally segmented video content. An example system includes a communication module and a search and match module. The communication module may be configured to receive a source table of contents (TOC) related to the temporally segmented video content. The source TOC may include one or more titles and a source playback length. The search and match module may be configured to interrogate a video products database with the source TOC to determine one or more match results, utilizing a fuzzy matching technique.
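
A hedged sketch of TOC-based fuzzy matching, using title similarity plus a playback-length tolerance; the scoring weights, tolerance, and catalog entries are invented for the example and are not the patented technique.

    from difflib import SequenceMatcher

    def toc_score(source_titles, source_length, candidate_titles, candidate_length,
                  length_tolerance=120):
        # Average best title similarity, plus a bonus if total lengths roughly agree.
        title_sim = sum(
            max(SequenceMatcher(None, s.lower(), c.lower()).ratio()
                for c in candidate_titles)
            for s in source_titles
        ) / len(source_titles)
        length_ok = abs(source_length - candidate_length) <= length_tolerance
        return title_sim + (0.2 if length_ok else 0.0)

    source = (["Pilot", "The Heist", "Fallout"], 7500)
    catalog = {
        "Show A Season 1 Disc 1": (["Pilot", "Heist", "The Fallout"], 7450),
        "Show B Season 2 Disc 3": (["Opening", "Finale"], 5400),
    }
    best = max(catalog, key=lambda k: toc_score(*source, *catalog[k]))
    print(best)   # "Show A Season 1 Disc 1"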

IPC Classes  ?

87.

SYNTHESIZING A PRESENTATION OF A MULTIMEDIA EVENT

      
Application Number US2008077843
Publication Number 2009/042858
Status In Force
Filing Date 2008-09-26
Publication Date 2009-04-02
Owner GRACENOTE, INC. (USA)
Inventor
  • Roberts, Dale T.
  • Cook, Randall E.
  • Cremer, Markus K.

Abstract

Example embodiments of a media synchronization system and method for synthesizing a presentation of a multimedia event are generally described herein. In some example embodiments, the media synchronization system includes a media ingestion module to access a plurality of media clips received from a plurality of client devices, a media analysis module to determine a temporal relation between a first media clip from the plurality of media clips and a second media clip from the plurality of media clips, and a content creation module to align the first media clip and the second media clip based on the temporal relation, and to combine the first media clip and the second media clip to generate the presentation.
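
The temporal-relation step can be illustrated with a simple cross-correlation of the two clips' signals; the offset convention, signal lengths, and padding-based alignment below are assumptions made for the sketch.

    import numpy as np

    def estimate_offset(sig_a, sig_b):
        """Positive result: sig_b starts that many samples after sig_a."""
        a = sig_a - sig_a.mean()
        b = sig_b - sig_b.mean()
        corr = np.correlate(a, b, mode="full")
        return int(np.argmax(corr)) - (len(b) - 1)

    rng = np.random.default_rng(0)
    event = rng.standard_normal(4000)                 # shared "event" audio
    clip_a = event[:3000]
    clip_b = event[500:3500]                          # recorded 500 samples later
    offset = estimate_offset(clip_a, clip_b)
    print(offset)                                     # -> 500
    # Crude alignment before combining (assumes a non-negative offset).
    aligned_b = np.pad(clip_b, (offset, 0))[:len(clip_a)]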

IPC Classes  ?

  • G11B 7/085 - Disposition or mounting of heads or light sources relatively to record carriers with provision for moving the light beam into, or out of, its operative position

88.

DYNAMIC MIXED MEDIA PACKAGE

      
Application Number US2008062524
Publication Number 2008/137756
Status In Force
Filing Date 2008-05-02
Publication Date 2008-11-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Roberts, Dale, T.
  • Cremer, Markus, K.
  • Mantle, Michael, W.
  • White, Stephen, Helling
  • Theeuwes, Marc

Abstract

It has been discovered that a dynamic mixed media package with a mechanism for dynamic modification/update provides a media experience to users that exceeds the experience offered by individual media files. A dynamic mixed media package accommodates various types of media and allows for additional media and modifications of existing media. Additional media includes media generated by consumers, such as media derived from a seed media. A seed media is marked and assembled with supplemental media into a package. The seed media is marked to allow performance of various operations, such as identification of the seed media during the lifetime of the package and attribution when the seed media is incorporated into consumer generated derivative media.

IPC Classes  ?

89.

UNIFIED FORMAT OF DIGITAL CONTENT METADATA

      
Application Number US2008050130
Publication Number 2008/086104
Status In Force
Filing Date 2008-01-03
Publication Date 2008-07-17
Owner GRACENOTE, INC. (USA)
Inventor
  • Sumrall, Harry
  • Williams, Richard
  • San Filippo, Scott

Abstract

A method and system to provide a unified format for digital content metadata are described. The system may include a module to obtain source data associated with media content; a module to identify, based on the source data, the media content; an extractor to obtain metadata associated with the identified content; and a converter to format the obtained metadata according to the popular music format. The popular music format is a three-field format, where the fields store the title of the album, the title of the track, and the name of the artist.
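
Read as a three-field record, the unified format could be sketched like this (the field and source key names are hypothetical):

    from dataclasses import dataclass

    @dataclass
    class UnifiedTrackMetadata:
        album_title: str
        track_title: str
        artist_name: str

    def from_source(raw: dict) -> UnifiedTrackMetadata:
        # Source key names here are assumptions for illustration.
        return UnifiedTrackMetadata(
            album_title=raw.get("release", ""),
            track_title=raw.get("recording", ""),
            artist_name=raw.get("performer", ""),
        )

    print(from_source({"release": "Album X", "recording": "Track 1", "performer": "Artist Y"}))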

IPC Classes  ?

  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

90.

METHOD AND SYSTEM FOR MEDIA NAVIGATION

      
Application Number US2007006131
Publication Number 2007/103583
Status In Force
Filing Date 2007-03-09
Publication Date 2007-09-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Dimaria, Peter C.
  • Cremer, Markus K.
  • Brenner, Vadim
  • Roberts, Dale T.

Abstract

A method and system for media navigation. A descriptor hierarchy may be accessed. The descriptor hierarchy may include at least one category list. One or more media descriptors may be accessed for a plurality of media items. The plurality of media items may be accessible from a plurality of sources. The one or more media descriptors may be mapped to the at least one category list. The navigation may be processed through a user interface to enable selection of the plurality of media items from the plurality of sources.

IPC Classes  ?

  • G06F 7/00 - Methods or arrangements for processing data by operating upon the order or content of the data handled

91.

METHOD AND SYSTEM TO CONTROL OPERATION OF A PLAYBACK DEVICE

      
Application Number US2006032722
Publication Number 2007/022533
Status In Force
Filing Date 2006-08-21
Publication Date 2007-02-22
Owner GRACENOTE, INC. (USA)
Inventor
  • Brenner, Vadim
  • Dimaria, Peter, C.
  • Roberts, Dale, T.
  • Mantle, Michael, W.
  • Orme, Michael, W.

Abstract

Media metadata is accessible for a plurality of media items (See Figure 12). The media metadata includes a number of strings to identify information regarding the media items (See Figure 12). Phonetic metadata is associated with the number of strings of the media metadata (See Figure 12). Each portion of the phonetic metadata is stored in an original language of the string (See Figure 12).

IPC Classes  ?

  • G06F 17/30 - Information retrieval; Database structures therefor

92.

NETWORK-BASED DATA COLLECTION, INCLUDING LOCAL DATA ATTRIBUTES, ENABLING MEDIA MANAGEMENT WITHOUT REQUIRING A NETWORK CONNECTION

      
Application Number US2005035843
Publication Number 2006/041928
Status In Force
Filing Date 2005-10-06
Publication Date 2006-04-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Mantle, Michael, W.
  • Hamilton, Brian, T.
  • Dimaria, Peter, C.

Abstract

Consistent user experience of playlist capabilities, despite differences in available resources and on-line connectivity, is provided. Data embedded in a playback device compensates for lack of connectivity. For compactness, embedded data can be targeted to geographic region(s) by selecting metadata for recordings containing audio using statistics on playback of the recordings in many geographic regions. The statistics and corresponding metadata are segregated by the geographic regions. Then a portion of the corresponding metadata is selected for at least one of the geographic regions based on the statistics. By using statistics that indicate popularity of recordings within geographic regions based on the frequency of playback or requests for information about a recording when it is played, the portion of the corresponding metadata that is selected can be tailored for individual geographic regions. To ensure that subregions and genres are not totally excluded, the portion selected may not be solely based on popularity.
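
A simplified sketch of region-targeted selection: rank recordings by per-region playback statistics, but reserve at least one slot per genre so subregions and genres are not excluded solely on popularity; the budget, counts, and genres are illustrative.

    from collections import defaultdict

    def select_for_region(stats, budget=4, reserved_per_genre=1):
        # stats: list of (recording_id, genre, play_count) for one region.
        by_genre = defaultdict(list)
        for rec, genre, plays in sorted(stats, key=lambda s: -s[2]):
            by_genre[genre].append(rec)
        selected = []
        for genre, recs in by_genre.items():                  # genre coverage first
            selected.extend(recs[:reserved_per_genre])
        for rec, _, _ in sorted(stats, key=lambda s: -s[2]):  # then pure popularity
            if rec not in selected and len(selected) < budget:
                selected.append(rec)
        return selected[:budget]

    region_stats = [("r1", "pop", 900), ("r2", "pop", 800), ("r3", "folk", 40),
                    ("r4", "jazz", 30), ("r5", "pop", 700)]
    print(select_for_region(region_stats))   # folk and jazz included despite low counts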

IPC Classes  ?

  • H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
  • G06F 17/30 - Information retrieval; Database structures therefor
  • G06Q 30/00 - Commerce
  • G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel