Gracenote, Inc.

United States of America


1-100 of 403 for Gracenote, Inc.
Aggregations
IP Type
        Patent 385
        Trademark 18
Jurisdiction
        United States 296
        World 92
        Europe 9
        Canada 6
Date
New (last 4 weeks) 8
2024 April (MTD) 4
2024 March 6
2024 February 10
2024 January 1
IPC Class
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content 44
G06F 3/16 - Sound input; Sound output 44
H04N 21/81 - Monomedia components thereof 42
G06F 17/30 - Information retrieval; Database structures therefor 37
H04N 21/439 - Processing of audio elementary streams 35
NICE Class
41 - Education, entertainment, sporting and cultural services 17
42 - Scientific, technological and industrial services, research and design 11
09 - Scientific and electric apparatus and instruments 9
Status
Pending 73
Registered / In Force 330

1.

Predictive Measurement of End-User Consumption of Scheduled Multimedia Transmissions

      
Application Number 18136095
Status Pending
Filing Date 2023-04-18
First Publication Date 2024-04-11
Owner Gracenote, Inc. (USA)
Inventor
  • Kazemi Rad, Melissa
  • Kedar, Tal
  • Zanotto, Matteo
  • Mooney, Garrett
  • Chakraborty, Sritanu
  • Edwards, Dominic Bryan

Abstract

Methods and systems for determining projected amounts of viewing time of a TV program by end-users are disclosed. Data including end-user type, a TV program descriptor, TV network, and start time of transmission may be received. End-users may be identified by end-user type. A machine-learning model applied to the data and viewing history data may generate parameters for determining how much of the TV program each end-user is expected to view during a sequence of time intervals. For each end-user, the parameters may be applied to determine temporal-fraction values of the TV program the end-user is expected to view during each time interval, and, for each time interval, conditioning values used to condition the determination for the next time interval. Projected subtotals of viewing time may be determined based on the temporal-fraction values. A projected total amount of viewing time of the TV program may then be determined.
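
As an illustration of the interval-by-interval rollout this abstract describes, here is a minimal Python sketch. The model interface `predict_fraction`, the initial conditioning value, and the toy model are assumptions for illustration only; the patent's actual parameters and conditioning scheme are not specified here.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical model interface: returns (fraction_viewed, next_conditioning_value).
PredictFn = Callable[[Dict, int, float], Tuple[float, float]]

def project_viewing_time(end_users: List[Dict],
                         interval_minutes: float,
                         num_intervals: int,
                         predict_fraction: PredictFn) -> float:
    """Roll out temporal-fraction predictions per end-user and per interval,
    conditioning each interval's prediction on the previous one, then sum
    projected subtotals into a projected total viewing time (minutes)."""
    total_minutes = 0.0
    for user in end_users:
        conditioning = 0.0  # assumed initial conditioning value
        subtotal = 0.0
        for t in range(num_intervals):
            fraction, conditioning = predict_fraction(user, t, conditioning)
            subtotal += fraction * interval_minutes  # projected subtotal for this interval
        total_minutes += subtotal
    return total_minutes

# Toy stand-in for the trained model: interest tapers off over time.
def toy_model(user, t, conditioning):
    fraction = max(0.0, user["base_interest"] - 0.1 * t + 0.2 * conditioning)
    fraction = min(fraction, 1.0)
    return fraction, fraction

users = [{"type": "adult_25_54", "base_interest": 0.8},
         {"type": "adult_18_24", "base_interest": 0.5}]
print(project_viewing_time(users, interval_minutes=15, num_intervals=4,
                           predict_fraction=toy_model))
```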

IPC Classes  ?

  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/24 - Monitoring of processes or resources, e.g. monitoring of server load, available bandwidth or upstream requests
  • H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data

2.

Methods and Systems for Determining Accuracy of Sport-Related Information Extracted from Digital Video Frames

      
Application Number 18499799
Status Pending
Filing Date 2023-11-01
First Publication Date 2024-04-11
Owner Gracenote, Inc. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho

Abstract

A computing system determines accuracy of sport-related information extracted from a time sequence of digital video frames that represent a sport event, the extracted sport-related information including an attribute that changes over the time sequence. The computing system (a) detects, based on the extracted sport-related information, a pattern of change of the attribute over the time sequence and (b) makes a determination of whether the detected pattern is an expected pattern of change associated with the sport event. If the determination is that the detected pattern is the expected pattern, then, responsive to making the determination, the computing system takes a first action that corresponds to the sport-related information being accurate. Whereas, if the determination is that the detected pattern is not the expected pattern, then, responsive to making the determination, the computing system takes a second action that corresponds to the sport-related information being inaccurate.
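
A small sketch of the pattern check described above, assuming for illustration that the changing attribute is a game clock read from successive frames and that the expected pattern is a non-increasing clock; the attribute, pattern, and actions are stand-ins, not the patent's specifics.

```python
from typing import Callable, List

def clock_is_non_increasing(values: List[int]) -> bool:
    """Detected pattern check: a game clock extracted from successive frames
    should count down (repeats allowed while the clock is stopped)."""
    return all(b <= a for a, b in zip(values, values[1:]))

def validate_extraction(extracted_clock: List[int],
                        on_accurate: Callable[[], None],
                        on_inaccurate: Callable[[], None]) -> None:
    # (a) detect the pattern of change, (b) compare against the expected pattern
    if clock_is_non_increasing(extracted_clock):
        on_accurate()      # first action: treat the extracted info as accurate
    else:
        on_inaccurate()    # second action: e.g. flag the frames for re-extraction

validate_extraction([720, 718, 718, 715],
                    on_accurate=lambda: print("accurate"),
                    on_inaccurate=lambda: print("re-check extraction"))
```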

IPC Classes  ?

3.

Transition Detector Neural Network

      
Application Number 18539758
Status Pending
Filing Date 2023-12-14
First Publication Date 2024-04-04
Owner Gracenote, Inc. (USA)
Inventor
  • Renner, Joseph
  • Vartakavi, Aneesh
  • Coover, Robert

Abstract

In one aspect, an example method includes (i) extracting a sequence of audio features from a portion of a sequence of media content; (ii) extracting a sequence of video features from the portion of the sequence of media content; (iii) providing the sequence of audio features and the sequence of video features as an input to a transition detector neural network that is configured to classify whether or not a given input includes a transition between different content segments; (iv) obtaining from the transition detector neural network classification data corresponding to the input; (v) determining that the classification data is indicative of a transition between different content segments; and (vi) based on determining that the classification data is indicative of a transition between different content segments, outputting transition data indicating that the portion of the sequence of media content includes a transition between different content segments.

IPC Classes  ?

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06F 18/24 - Classification techniques
  • G06F 18/2413 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
  • G06N 3/049 - Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
  • G06N 3/08 - Learning methods
  • G06V 10/80 - Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/81 - Monomedia components thereof

4.

Station Library Creation for a Media Service

      
Application Number 18529501
Status Pending
Filing Date 2023-12-05
First Publication Date 2024-04-04
Owner Gracenote, Inc. (USA)
Inventor
  • Dimaria, Peter C.
  • Silverman, Andrew

Abstract

A machine may form all or part of a network-based system configured to provide media service to one or more user devices. The machine may be configured to define a station library within a larger collection of media files. In particular, the machine may access metadata that describes a seed that forms the basis on which the station library is to be defined. The machine may determine a genre composition for the station library based on the metadata. The machine may generate a list of media files from the metadata based on a relevance of each media file to the station library. The machine may determine the relevance of each media file based on a similarity of the media file to the genre composition of the station library as well as a comparison of metadata describing the media file to the accessed metadata that describes the seed.

IPC Classes  ?

  • G06F 16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
  • H04N 21/854 - Content authoring

5.

Monitoring Icon Status in a Display from an External Device

      
Application Number 18386729
Status Pending
Filing Date 2023-11-03
First Publication Date 2024-03-28
Owner Gracenote, Inc. (USA)
Inventor Dimitriou, Konstantinos Antonio

Abstract

Systems and methods for monitoring an icon in an external display device are disclosed. Images of an icon displayed in a display device may be continually captured as video frames by a video camera of an icon monitoring system. While operating in a first mode, video frames may be continually analyzed to determine if the captured image matches an active template icon known to match the captured image of the icon. While the captured image matches the active template icon, operating in the first mode continues. Upon detecting a failed match to the active template icon, the system starts operating in a second mode to search among known template icons for a new match. Upon finding a new match, the active template icon may be updated to the new match, and operation switches back to the first mode. Times of transitions between the first and second modes may be recorded.
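
A minimal sketch of the two-mode monitoring loop, assuming a toy pixel-difference similarity measure and an arbitrary match threshold; `IconMonitor`, its threshold, and the similarity function are illustrative inventions, not the patented matcher.

```python
import time
from typing import Dict, List, Optional

import numpy as np

MATCH_THRESHOLD = 0.95  # assumed similarity threshold

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Toy similarity: 1 - mean absolute pixel difference (images in [0, 1])."""
    return 1.0 - float(np.mean(np.abs(a - b)))

class IconMonitor:
    """First mode: verify each frame against the active template.
    Second mode: search all known templates for a new match.
    Times of transitions between the two modes are recorded."""

    def __init__(self, templates: Dict[str, np.ndarray], active: str):
        self.templates = templates
        self.active = active
        self.transitions: List[float] = []

    def process_frame(self, frame: np.ndarray) -> Optional[str]:
        if similarity(frame, self.templates[self.active]) >= MATCH_THRESHOLD:
            return self.active                        # stay in the first mode
        self.transitions.append(time.time())          # enter the second mode
        for name, template in self.templates.items():
            if similarity(frame, template) >= MATCH_THRESHOLD:
                self.active = name                    # update the active template
                self.transitions.append(time.time())  # back to the first mode
                return name
        return None                                   # no known template matched

icons = {"live": np.ones((8, 8)), "replay": np.zeros((8, 8))}
monitor = IconMonitor(icons, active="live")
print(monitor.process_frame(np.zeros((8, 8))))  # switches to "replay"
```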

IPC Classes  ?

  • G06N 3/082 - Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06T 7/11 - Region-based segmentation
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries
  • G06V 20/40 - Scenes; Scene-specific elements in video content

6.

Machine Learning Systems and Methods for Predicting End-User Consumption of Future Multimedia Transmissions

      
Application Number 18370792
Status Pending
Filing Date 2023-09-20
First Publication Date 2024-03-28
Owner Gracenote, Inc. (USA)
Inventor
  • Sereday, Scott John
  • Zhuang, Cathy
  • Ban, Shufang
  • Dan, Oana Monica

Abstract

Methods and systems for predicting audience ratings are disclosed. A database of television (TV) viewing data may include program records for a multiplicity of existing TV programs. A system may receive a training plurality of program records from the TV viewing data, and for each program record a most similar TV program based on content characteristics may be identified. A synthetic program record may be constructed by merging features of each record and its most similar record. Audience performance metrics may be omitted from synthetic records. An aggregate of the training plurality of program records and the synthetic program records may be used to train a machine-learning (ML) model to predict audience performance metrics of new or hypothetical TV programs not yet available for viewing and/or not yet transmitted or streamed.
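
A rough sketch of the synthetic-record construction step, assuming program records are dictionaries with a few content characteristics and audience metrics; the feature names, the shared-characteristic similarity, and the averaging merge are illustrative assumptions.

```python
from typing import Dict, List

CONTENT_KEYS = ["genre", "network", "daypart"]   # assumed content characteristics
METRIC_KEYS = ["rating", "share"]                # audience performance metrics

def most_similar(record: Dict, candidates: List[Dict]) -> Dict:
    """Pick the candidate sharing the most content characteristics."""
    def overlap(other):
        return sum(record[k] == other[k] for k in CONTENT_KEYS)
    return max((c for c in candidates if c is not record), key=overlap)

def synthesize(record: Dict, similar: Dict) -> Dict:
    """Merge two records' content features; omit audience metrics so the
    synthetic record mimics a program that has not yet aired."""
    merged = {k: record[k] for k in CONTENT_KEYS}
    merged["duration_min"] = (record["duration_min"] + similar["duration_min"]) / 2
    merged["synthetic"] = True
    return merged  # deliberately carries none of METRIC_KEYS

records = [
    {"genre": "drama", "network": "A", "daypart": "prime", "duration_min": 60, "rating": 2.1, "share": 9},
    {"genre": "drama", "network": "A", "daypart": "late",  "duration_min": 30, "rating": 1.2, "share": 5},
    {"genre": "news",  "network": "B", "daypart": "prime", "duration_min": 30, "rating": 3.0, "share": 12},
]
training_set = records + [synthesize(r, most_similar(r, records)) for r in records]
print(len(training_set), training_set[-1])
```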

IPC Classes  ?

  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk

7.

Logo Recognition in Images and Videos

      
Application Number 18507560
Status Pending
Filing Date 2023-11-13
First Publication Date 2024-03-21
Owner Gracenote, Inc. (USA)
Inventor
  • Pereira, Jose Pio
  • Brocklehurst, Kyle
  • Kulkarni, Sunil Suresh
  • Wendt, Peter

Abstract

Accurate detection of logos in media content on media presentation devices is addressed. Logos and products are detected in media content produced in retail deployments using a camera. Logo recognition uses saliency analysis, segmentation techniques, and stroke analysis to segment likely logo regions. Logo recognition may suitably employ feature extraction, signature representation, and logo matching. These three approaches make use of neural-network-based classification and optical character recognition (OCR). One method for OCR recognizes individual characters and then performs string matching. Another OCR method uses segment-level character recognition with N-gram matching. Synthetic image generation for training of a neural net classifier and utilizing transfer learning features of neural networks are employed to support fast addition of new logos for recognition.

IPC Classes  ?

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06F 18/24 - Classification techniques
  • G06T 7/11 - Region-based segmentation
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
  • G06T 7/60 - Analysis of geometric attributes
  • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects

8.

Methods and Apparatus for Efficient Media Indexing

      
Application Number 18511616
Status Pending
Filing Date 2023-11-16
First Publication Date 2024-03-14
Owner Gracenote, Inc. (USA)
Inventor
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Coover, Robert
  • Dimitriou, Konstantinos Antonios

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for efficient media indexing. An example method disclosed herein includes means for initiating a list of hash seeds, the list of hash seeds including at least a first hash seed value and a second hash seed value among other hash seed values, means for generating to generate a first bucket distribution based on the first hash seed value and a first hash function and generate a second bucket distribution based on the second hash seed value used in combination with the first hash seed value, means for determining to determine a first entropy value of the first bucket distribution, wherein data associated with the first bucket distribution is stored in a first hash table and determine a second entropy value of the second bucket distribution.
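
The abstract's core idea is that a hash seed yielding a more even bucket distribution (higher entropy) indexes fingerprints more efficiently. Here is a small Python sketch of that criterion; the seeded hash, bucket count, and seed candidates are assumptions for illustration.

```python
import hashlib
import math
from collections import Counter
from typing import Iterable, List

NUM_BUCKETS = 16  # assumed hash-table bucket count

def bucket(item: str, seed: int) -> int:
    """Seeded hash of an item mapped to a bucket index."""
    digest = hashlib.sha256(f"{seed}:{item}".encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

def distribution(items: Iterable[str], seed: int) -> Counter:
    return Counter(bucket(item, seed) for item in items)

def entropy(dist: Counter) -> float:
    """Shannon entropy of the bucket distribution; higher means a more even
    spread of entries across the hash table's buckets."""
    total = sum(dist.values())
    return -sum((n / total) * math.log2(n / total) for n in dist.values() if n)

def pick_best_seed(items: List[str], seeds: List[int]) -> int:
    return max(seeds, key=lambda s: entropy(distribution(items, s)))

items = [f"fingerprint-{i}" for i in range(1000)]
best = pick_best_seed(items, seeds=[1, 7, 42, 1337])
print("best seed:", best, "entropy:", round(entropy(distribution(items, best)), 3))
```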

IPC Classes  ?

  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 16/41 - Indexing; Data structures therefor; Storage structures

9.

Vehicle-Based Media System with Audio Ad and Visual Content Synchronization Feature

      
Application Number 18463901
Status Pending
Filing Date 2023-09-08
First Publication Date 2024-03-07
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying visual content based at least on the identified reference audio content; and (f) outputting, via a user interface of the vehicle-based media system, the identified visual content.

IPC Classes  ?

  • H04H 20/62 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast for transportation systems, e.g. in vehicles
  • G01C 21/36 - Input/output arrangements for on-board computers
  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
  • G06Q 30/0241 - Advertisements
  • G06Q 30/0251 - Targeted advertisements
  • G10L 15/26 - Speech to text systems
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • H04N 21/41 - Structure of client; Structure of client peripherals
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
  • H04W 4/02 - Services making use of location information
  • H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

10.

Synthesizing A Presentation From Multiple Media Clips

      
Application Number 18506986
Status Pending
Filing Date 2023-11-10
First Publication Date 2024-03-07
Owner Gracenote, Inc. (USA)
Inventor
  • Roberts, Dale T.
  • Cook, Randall E.
  • Cremer, Markus K.

Abstract

In an example implementation, a method is described. The implementation accesses first and second media clips. The implementation also matches a first fingerprint of the first media clip with a second fingerprint of the second media clip and determines an overlap of the first media clip with the second media clip. The implementation also, based on the overlap, merges the first and second media clips into a group of overlapping media clips, transmits, to a client device, data identifying the group of overlapping media clips and specifying a synchronization of the first media clip with the second media clip, and generates for display on a display device of the client computing device, a graphical user interface that identifies the group of overlapping media clips, specifies the synchronization of the first media clip with the second media clip, and allows access to, and manipulation of, the first and second media clips.
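
A minimal sketch of the fingerprint-matching and overlap step, assuming each clip's fingerprint is a per-frame sequence of hash values and that the overlap is taken as the most common alignment offset between matching values; the data shapes and voting threshold are illustrative assumptions.

```python
from collections import Counter
from typing import Dict, List, Optional

def find_offset(fp_a: List[int], fp_b: List[int], min_matches: int = 3) -> Optional[int]:
    """Return the most common alignment offset (in frames) between matching
    fingerprint values of clip A and clip B, or None if too few values match."""
    positions_b: Dict[int, List[int]] = {}
    for j, value in enumerate(fp_b):
        positions_b.setdefault(value, []).append(j)
    offsets = Counter(i - j
                      for i, value in enumerate(fp_a)
                      for j in positions_b.get(value, []))
    if not offsets:
        return None
    offset, votes = offsets.most_common(1)[0]
    return offset if votes >= min_matches else None

# Toy fingerprints: clip B starts 4 frames into clip A, so the two clips overlap.
fp_a = [11, 23, 42, 57, 61, 70, 88, 90]
fp_b = [61, 70, 88, 90, 12, 19]
offset = find_offset(fp_a, fp_b)
if offset is not None:
    group = {"clips": ["A", "B"], "sync_offset_frames": offset}
    print("merged group:", group)  # data a client GUI could use to line the clips up
```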

IPC Classes  ?

  • G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
  • G11B 27/036 - Insert-editing
  • G11B 27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
  • G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
  • G11B 27/34 - Indicating arrangements
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
  • H04N 21/218 - Source of audio or video content, e.g. local disk arrays
  • H04N 21/2743 - Video hosting of uploaded data from client
  • H04N 21/8549 - Creating video summaries, e.g. movie trailer

11.

Selection of Video Frames Using a Machine Learning Predictor

      
Application Number 18234682
Status Pending
Filing Date 2023-08-16
First Publication Date 2024-02-29
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Christensen, Casper Lützhøft

Abstract

Example systems and methods of selection of video frames using a machine learning (ML) predictor program are disclosed. The ML predictor program may generate predicted cropping boundaries for any given input image. Training raw images associated with respective sets of training master images indicative of cropping characteristics for the training raw image may be input to the ML predictor, and the ML predictor program trained to predict cropping boundaries for raw image based on expected cropping boundaries associated training master images. At runtime, the trained ML predictor program may be applied to a sequence of video image frames to determine for each respective video image frame a respective score corresponding to a highest statistical confidence associated with one or more subsets of cropping boundaries predicted for the respective video image frame. Information indicative of the respective video image frame having the highest score may be stored or recorded.

IPC Classes  ?

  • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
  • G06F 18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06T 7/174 - Segmentation; Edge detection involving the use of two or more images
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/776 - Validation; Performance evaluation
  • G06V 20/40 - Scenes; Scene-specific elements in video content

12.

Vehicle-Based Media System with Audio Ad and Navigation-Related Action Synchronization Feature

      
Application Number 18466648
Status Pending
Filing Date 2023-09-13
First Publication Date 2024-02-29
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying a geographic location associated with the identified reference audio content; and (f) based at least on the identified geographic location associated with the identified reference audio content, outputting, via the user interface of the vehicle-based media system, a prompt to navigate to the identified geographic location.

IPC Classes  ?

  • H04H 20/62 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast for transportation systems, e.g. in vehicles
  • G01C 21/36 - Input/output arrangements for on-board computers
  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
  • G06Q 30/0241 - Advertisements
  • G06Q 30/0251 - Targeted advertisements
  • G10L 15/26 - Speech to text systems
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • H04N 21/41 - Structure of client; Structure of client peripherals
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
  • H04W 4/02 - Services making use of location information
  • H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

13.

Audiovisual Content Curation System

      
Application Number 18462715
Status Pending
Filing Date 2023-09-07
First Publication Date 2024-02-29
Owner Gracenote, Inc. (USA)
Inventor
  • Dimaria, Peter C.
  • Silverman, Andrew

Abstract

Systems and methods are provided for filtering at least one media content catalog based on criteria for a station library to generate a first list of candidate tracks for the station library, combining a similarity score and a popularity score for each track of the first list of candidate tracks to generate a total score for each track of the first list of candidate tracks, generating a list of top ranked tracks for the first genre, and returning the list of top ranked tracks of the first genre as part of the station library.
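
A short sketch of the score-combination and ranking step described above; the weights, field names, and candidate data are illustrative assumptions.

```python
from typing import Dict, List

def rank_candidates(tracks: List[Dict], top_n: int = 3,
                    w_similarity: float = 0.7, w_popularity: float = 0.3) -> List[Dict]:
    """Combine a similarity score and a popularity score into a total score
    and return the top-ranked tracks for the station library."""
    for track in tracks:
        track["total"] = (w_similarity * track["similarity"]
                          + w_popularity * track["popularity"])
    return sorted(tracks, key=lambda t: t["total"], reverse=True)[:top_n]

candidates = [
    {"title": "Track 1", "similarity": 0.91, "popularity": 0.40},
    {"title": "Track 2", "similarity": 0.75, "popularity": 0.95},
    {"title": "Track 3", "similarity": 0.60, "popularity": 0.20},
]
for track in rank_candidates(candidates):
    print(track["title"], round(track["total"], 3))
```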

IPC Classes  ?

  • G06F 16/61 - Indexing; Data structures therefor; Storage structures
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

14.

Audio Fingerprinting

      
Application Number 18500764
Status Pending
Filing Date 2023-11-02
First Publication Date 2024-02-29
Owner Gracenote, Inc. (USA)
Inventor
  • Han, Jinyu
  • Coover, Robert

Abstract

A machine may be configured to generate one or more audio fingerprints of one or more segments of audio data. The machine may access audio data to be fingerprinted and divide the audio data into segments. For any given segment, the machine may generate a spectral representation from the segment; generate a vector from the spectral representation; generate an ordered set of permutations of the vector; generate an ordered set of numbers from the permutations of the vector; and generate a fingerprint of the segment of the audio data, which may be considered a sub-fingerprint of the audio data. In addition, the machine or a separate device may be configured to determine a likelihood that candidate audio data matches reference audio data.
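
The abstract does not spell out the spectral representation or the permutation scheme, so the following sketch makes two assumptions: a coarse band-energy vector as the spectral representation, and a min-hash-style rule (record the permuted position of the strongest band under each seeded permutation) as the ordered set of numbers forming a sub-fingerprint.

```python
import numpy as np

def spectral_vector(segment: np.ndarray, num_bands: int = 32) -> np.ndarray:
    """Coarse spectral representation: magnitude spectrum pooled into bands."""
    spectrum = np.abs(np.fft.rfft(segment))
    bands = np.array_split(spectrum, num_bands)
    return np.array([band.mean() for band in bands])

def sub_fingerprint(segment: np.ndarray, num_permutations: int = 8,
                    seed: int = 0) -> np.ndarray:
    """Ordered set of numbers derived from seeded permutations of the spectral
    vector: for each permutation, record where the strongest band lands."""
    vector = spectral_vector(segment)
    rng = np.random.default_rng(seed)
    numbers = []
    for _ in range(num_permutations):
        perm = rng.permutation(len(vector))
        numbers.append(int(np.argmax(vector[perm])))
    return np.array(numbers, dtype=np.int32)

# Toy audio: one second of a 440 Hz tone at 8 kHz, divided into 4 segments.
sr = 8000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)
segments = np.array_split(audio, 4)
fingerprint = np.stack([sub_fingerprint(s) for s in segments])
print(fingerprint)
```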

IPC Classes  ?

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal

15.

INSERTING INFORMATION INTO PLAYING CONTENT

      
Application Number 18503110
Status Pending
Filing Date 2023-11-06
First Publication Date 2024-02-29
Owner GRACENOTE, INC. (USA)
Inventor
  • Brenner, Vadim
  • Cremer, Markus K.

Abstract

Example methods and systems for inserting information into playing content are described. In some example embodiments, the methods and systems may identify a break in content playing via a playback device, select an information segment representative of information received by the playback device to present during the identified break, and insert the information segment into the content playing via the playback device upon an occurrence of the identified break.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G11B 27/036 - Insert-editing
  • G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier

16.

MACHINE-LED MOOD CHANGE

      
Application Number 18503399
Status Pending
Filing Date 2023-11-07
First Publication Date 2024-02-29
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Dimaria, Peter C.
  • Gubman, Michael
  • Cremer, Markus K.
  • Summers, Cameron Aubrey
  • Tronel, Gregoire

Abstract

A machine is configured to identify a media file that, when played to a user, is likely to modify an emotional or physical state of the user to or towards a target emotional or physical state. The machine accesses play counts that quantify playbacks of media files for the user. The playbacks may be locally performed or detected by the machine from ambient sound. The machine accesses arousal scores of the media files and determines a distribution of the play counts over the arousal scores. The machine uses one or more relative maxima in the distribution in selecting a target arousal score for the user based on contextual data that describes an activity of the user. The machine selects one or more media files based on the target arousal score. The machine may then cause the selected media file to be played to the user.
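
A small sketch of the distribution-and-maxima selection described above, assuming play counts are binned over arousal scores and the relative maximum nearest the activity's nominal arousal becomes the target; the bin count, the activity mapping, and the toy library are illustrative.

```python
import numpy as np

def relative_maxima(counts: np.ndarray) -> np.ndarray:
    """Indices of bins whose play count exceeds both neighbours."""
    return np.array([i for i in range(1, len(counts) - 1)
                     if counts[i] > counts[i - 1] and counts[i] > counts[i + 1]])

def pick_target_arousal(arousal_scores, play_counts, activity_arousal, bins=10):
    """Distribution of play counts over arousal scores; the relative maximum
    closest to the activity's nominal arousal becomes the target score."""
    counts, edges = np.histogram(arousal_scores, bins=bins, range=(0, 1),
                                 weights=play_counts)
    centres = (edges[:-1] + edges[1:]) / 2
    peaks = relative_maxima(counts)
    if len(peaks) == 0:
        return activity_arousal
    return float(centres[peaks[np.argmin(np.abs(centres[peaks] - activity_arousal))]])

# Toy library: per-track arousal score and how often the user played the track.
arousal = np.array([0.10, 0.15, 0.20, 0.55, 0.60, 0.62, 0.90])
plays   = np.array([  30,   25,   10,   40,   55,   50,    5])
target = pick_target_arousal(arousal, plays, activity_arousal=0.7)  # e.g. "workout"
best_track = int(np.argmin(np.abs(arousal - target)))
print("target arousal:", target, "-> play track index", best_track)
```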

IPC Classes  ?

17.

MODIFICATION OF ELECTRONIC SYSTEM OPERATION BASED ON ACOUSTIC AMBIENCE CLASSIFICATION

      
Application Number 18497362
Status Pending
Filing Date 2023-10-30
First Publication Date 2024-02-22
Owner Gracenote, Inc. (USA)
Inventor
  • Jeyachandran, Suresh
  • Brenner, Vadim
  • Cremer, Markus K.

Abstract

Methods and systems for modification of electronic system operation based on acoustic ambience classification are presented. In an example method, at least one audio signal present in a physical environment of a user is detected. The at least one audio signal is analyzed to extract at least one audio feature from the audio signal. The audio signal is classified based on the audio feature to produce at least one classification of the audio signal. Operation of an electronic system interacting with the user in the physical environment is modified based on the classification of the audio signal.

IPC Classes  ?

  • G10L 15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise or of stress induced speech
  • H03G 3/00 - Gain control in amplifiers or frequency changers
  • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use
  • H03G 3/30 - Automatic control in amplifiers having semiconductor devices

18.

Audio Playout Report for Ride-Sharing Session

      
Application Number 18499648
Status Pending
Filing Date 2023-11-01
First Publication Date 2024-02-22
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a computing device includes (a) determining that a ride-sharing session is active; (b) in response to determining the ride-sharing session is active, using a microphone of the computing device to capture audio content; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (d) determining that the ride-sharing session is inactive; and (e) outputting an indication of the identified reference audio content.

IPC Classes  ?

  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G06F 3/16 - Sound input; Sound output
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

19.

Synchronizing Streaming Media Content Across Devices

      
Application Number 18500864
Status Pending
Filing Date 2023-11-02
First Publication Date 2024-02-22
Owner Gracenote, Inc. (USA)
Inventor
  • Jeyachandran, Suresh
  • Tsai, Roger
  • Quinn, Paul Emmanuel
  • Cremer, Markus K.

Abstract

Methods, apparatus, and systems are disclosed for synchronizing streaming media content. An example apparatus includes a storage device, and a processor to execute instructions to identify a first source streaming broadcast media to a first computing device based on an audio fingerprint of audio associated with the broadcast media, identify sources broadcasting the broadcast media streaming to the first computing device, the sources available to a second computing device including the processor, select a second source of the identified sources for streaming the broadcast media to the second computing device, the second source different than the first source, detect termination of the streaming of the broadcast media on the first computing device, the termination corresponding to a termination time of the broadcast media, and automatically start, by using the selected second source, streaming of the broadcast media to the second computing device at the termination time.

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/2187 - Live feed
  • H04H 60/40 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast time
  • H04H 60/58 - Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups or of audio
  • H04H 60/65 - Arrangements for services using the result of monitoring, identification or recognition covered by groups or for using the result on users' side
  • H04L 65/611 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast

20.

OBTAINING ARTIST IMAGERY FROM VIDEO CONTENT USING FACIAL RECOGNITION

      
Application Number 18244086
Status Pending
Filing Date 2023-09-08
First Publication Date 2024-02-22
Owner Gracenote, Inc. (USA)
Inventor
  • Scott, Jeffrey
  • Vartakavi, Aneesh

Abstract

An example method may include receiving, at a computing device, a digital image associated with a particular media content program, the digital image containing one or more faces of particular people associated with the particular media content program. A computer-implemented automated face recognition program may be applied to the digital image to recognize, based on at least one feature vector from a prior-determined set of feature vectors, one or more of the particular people in the digital image, together with respective geometric coordinates for each of the one or more detected faces. At least a subset of the prior-determined set of feature vectors may be associated with a respective one of the particular people. The digital image may be stored in non-transitory computer-readable memory, together with information assigning respective identities to the recognized particular people and associating geometric coordinates in the digital image with each respective assigned identity.

IPC Classes  ?

  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

21.

Methods and Apparatus for Harmonic Source Enhancement

      
Application Number 18499845
Status Pending
Filing Date 2023-11-01
First Publication Date 2024-02-22
Owner Gracenote, Inc. (USA)
Inventor Rafii, Zafar

Abstract

Methods and apparatus for harmonic source enhancement are disclosed herein. An example apparatus includes an interface to receive a media signal. The example apparatus also includes a harmonic source enhancer to determine a magnitude spectrogram of audio corresponding to the media signal; generate a time-frequency mask based on the magnitude spectrogram; and apply the time-frequency mask to the magnitude spectrogram to enhance a harmonic source of the media signal.
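
The abstract does not describe how the time-frequency mask is built, so the sketch below assumes a common harmonic/percussive-separation heuristic: harmonic energy is smooth across time, so a soft mask favouring the time-direction sliding median of the magnitude spectrogram is applied. Everything here (frame sizes, median widths, mask formula) is an assumption, not the patented method.

```python
import numpy as np

def magnitude_spectrogram(signal: np.ndarray, frame: int = 512, hop: int = 256) -> np.ndarray:
    """Magnitude spectrogram from windowed, framed FFTs (bins x frames)."""
    window = np.hanning(frame)
    frames = [signal[i:i + frame] * window
              for i in range(0, len(signal) - frame + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

def sliding_median(x: np.ndarray, width: int, axis: int) -> np.ndarray:
    """Median over a sliding window along one axis (edge-padded)."""
    pad = [(0, 0)] * x.ndim
    pad[axis] = (width // 2, width // 2)
    padded = np.pad(x, pad, mode="edge")
    windows = [np.take(padded, list(range(i, i + width)), axis=axis)
               for i in range(x.shape[axis])]
    return np.stack([np.median(w, axis=axis) for w in windows], axis=axis)

def harmonic_enhance(spec: np.ndarray, width: int = 9) -> np.ndarray:
    """Soft time-frequency mask favouring components smooth in time (harmonic)
    over components smooth in frequency (percussive), applied to the spectrogram."""
    harmonic = sliding_median(spec, width, axis=1)    # median across frames
    percussive = sliding_median(spec, width, axis=0)  # median across bins
    mask = harmonic ** 2 / (harmonic ** 2 + percussive ** 2 + 1e-12)
    return mask * spec

sr = 8000
t = np.arange(2 * sr) / sr
audio = np.sin(2 * np.pi * 220 * t) + 0.5 * np.random.default_rng(0).standard_normal(len(t))
spec = magnitude_spectrogram(audio)
print(spec.shape, harmonic_enhance(spec).shape)
```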

IPC Classes  ?

  • G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
  • G06F 3/16 - Sound input; Sound output
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

22.

METHODS AND APPARATUS FOR PLAYBACK USING PRE-PROCESSED INFORMATION AND PERSONALIZATION

      
Application Number 18462252
Status Pending
Filing Date 2023-09-06
First Publication Date 2024-02-15
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Summers, Cameron Aubrey
  • Renner, Joseph
  • Cremer, Markus
  • Mansfield, Warren

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for playback using pre-processed profile information and personalization. Example apparatus disclosed herein include a synchronizer to, in response to receiving a media signal to be played on a playback device, access an equalization (EQ) profile corresponding to the media signal; an EQ personalization manager to generate a personalized EQ setting; and an EQ adjustment implementor to modify playback of the media signal on the playback device based on a blended equalization generated from the EQ profile and the personalized EQ setting.

IPC Classes  ?

  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H03G 5/16 - Automatic control
  • H04R 3/04 - Circuits for transducers for correcting frequency response
  • G06N 3/08 - Learning methods
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G06F 3/16 - Sound input; Sound output
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
  • H04N 9/87 - Regeneration of colour television signals
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 

23.

GENERATION OF MEDIA STATION PREVIEWS USING A REFERENCE DATABASE

      
Application Number 18467272
Status Pending
Filing Date 2023-09-14
First Publication Date 2024-02-15
Owner Gracenote, Inc. (USA)
Inventor
  • Fearn, Pat D.
  • Jeyachandran, Suresh
  • Fasching, Damon P.
  • Sherman, Mark W.

Abstract

In one aspect, an example method includes (i) while a media playback device of a vehicle is playing back content received on a first channel, sending, by the media playback device to a server, a preview request, the preview request identifying a second channel that is different from the first channel; (ii) receiving, by the media playback device from the server, a response to the preview request, the response including identifying information corresponding to content being provided on the second channel; and (iii) while the media playback device is playing back the content received on the first channel, providing, by the media playback device for display, at least a portion of the identifying information corresponding to content being provided on the second channel.

IPC Classes  ?

  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/2387 - Stream processing in response to a playback request from an end-user, e.g. for trick-play
  • H04N 21/278 - Content descriptor database or directory service for end-user access
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/439 - Processing of audio elementary streams

24.

Music Service with Motion Video

      
Application Number 18452836
Status Pending
Filing Date 2023-08-21
First Publication Date 2024-02-08
Owner Gracenote, Inc. (USA)
Inventor Cremer, Markus K.

Abstract

Techniques of providing motion video content along with audio content are disclosed. In some example embodiments, a computer-implemented system is configured to perform operations comprising: receiving primary audio content; determining that at least one reference audio content satisfies a predetermined similarity threshold based on a comparison of the primary audio content with the at least one reference audio content; for each one of the at least one reference audio content, identifying motion video content based on the motion video content being stored in association with the one of the at least one reference audio content and not stored in association with the primary audio content; and causing the identified motion video content to be displayed on a device concurrently with a presentation of the primary audio content on the device.

IPC Classes  ?

  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 

25.

METHODS AND APPARATUS FOR DYNAMIC VOLUME ADJUSTMENT VIA AUDIO CLASSIFICATION

      
Application Number 18453792
Status Pending
Filing Date 2023-08-22
First Publication Date 2024-02-08
Owner Gracenote, Inc. (USA)
Inventor
  • Cremer, Markus
  • Coover, Robert
  • Scherf, Steven D.
  • Summers, Cameron Aubrey

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for dynamic volume adjustment via audio classification. Example apparatus include at least one memory; instructions; and at least one processor to execute the instructions to: analyze, with a neural network, a parameter of an audio signal associated with a first volume level to determine a classification group associated with the audio signal; determine an input volume of the audio signal; determine a classification gain value based on the classification group; determine an intermediate gain value as an intermediate between the input volume and the classification gain value by applying a first weight to the input volume and a second weight to the classification gain value; apply the intermediate gain value to the audio signal, the intermediate gain value to modify the first volume level to a second volume level; and apply a compression value to the audio signal, the compression value to modify the second volume level to a third volume level that satisfies a target volume threshold.
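
The gain arithmetic in this abstract lends itself to a short worked example. The sketch below uses invented per-class gains, weights, units, and a simple ratio-based compression step purely for illustration; none of these numbers come from the patent.

```python
# Assumed per-class target gains and blending weights (units are illustrative).
CLASS_GAIN = {"speech": 6.0, "music": 0.0, "advert": -4.0}
W_INPUT, W_CLASS = 0.4, 0.6   # first weight (input volume), second weight (class gain)
TARGET_VOLUME = -16.0         # target volume the compression step aims for

def intermediate_gain(input_volume: float, classification: str) -> float:
    """Weighted intermediate between the measured input volume and the gain
    value associated with the audio's classification group."""
    return W_INPUT * input_volume + W_CLASS * CLASS_GAIN[classification]

def compress(level: float, ratio: float = 2.0) -> float:
    """Simple compression toward the target: reduce the excess above the
    target volume by the given ratio."""
    excess = level - TARGET_VOLUME
    return TARGET_VOLUME + excess / ratio if excess > 0 else level

input_volume = -10.0                          # first volume level (measured)
gain = intermediate_gain(input_volume, "speech")
second_level = input_volume + gain            # after applying the intermediate gain
third_level = compress(second_level)          # after applying the compression value
print(round(gain, 2), round(second_level, 2), round(third_level, 2))
```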

IPC Classes  ?

  • G06F 3/16 - Sound input; Sound output
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination

26.

METHODS AND APPARATUS FOR VOLUME ADJUSTMENT

      
Application Number 18486816
Status Pending
Filing Date 2023-10-13
First Publication Date 2024-02-01
Owner GRACENOTE, INC. (USA)
Inventor
  • Coover, Robert
  • Scott, Jeffrey
  • Cremer, Markus K.
  • Vartakavi, Aneesh

Abstract

Apparatus, systems, articles of manufacture, and methods for volume adjustment are disclosed herein. An example method includes collecting data corresponding to a volume of an audio signal as the audio signal is output through a device, when an average volume of the audio signal does not satisfy a volume threshold for a specified timespan, determining a difference between the average volume and a desired volume, and applying a gain to the audio signal to adjust the volume of the audio signal to the desired volume, the gain determined based on the difference between the average volume and the desired volume.

IPC Classes  ?

  • H03G 3/30 - Automatic control in amplifiers having semiconductor devices
  • H03F 3/183 - Low-frequency amplifiers, e.g. audio preamplifiers with semiconductor devices only
  • H04R 3/00 - Circuits for transducers
  • H03G 7/00 - Volume compression or expansion in amplifiers
  • H03G 3/02 - Manually-operated control

27.

Use of Mismatched Query Fingerprint as Basis to Validate Media Identification

      
Application Number 17814294
Status Pending
Filing Date 2022-07-22
First Publication Date 2024-01-25
Owner Gracenote, Inc. (USA)
Inventor
  • Colton, John
  • Fasching, Damon
  • Jeyachandran, Suresh
  • Lee, Dong-In
  • Villariba, Kathyrene

Abstract

A method for controlling presentation of metadata regarding media. A system could generate query fingerprints representing media content being presented, the media content having been identified as being a first media-content item. The system could further detect a threshold mismatch comprising at least one of the query fingerprints not matching any of first reference fingerprints known to represent the first media-content item. In response, the system could engage in new media identification, establishing that the media content is a second media-content item, and could obtain both second reference fingerprints known to represent the second media-content item and metadata regarding the second media-content item. Further, the system could validate the new identification as a condition precedent to presenting the obtained metadata, the validating including comparing the at least one query fingerprint that did not match any of the first reference fingerprints with the obtained second reference fingerprints.

IPC Classes  ?

  • G06F 16/483 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 3/16 - Sound input; Sound output
  • G06F 16/432 - Query formulation
  • G06F 16/438 - Presentation of query results

28.

GRACENOTE NEXUS AUTO

      
Serial Number 98294135
Status Pending
Filing Date 2023-12-01
Owner Gracenote, Inc.
NICE Classes  ? 41 - Education, entertainment, sporting and cultural services

Goods & Services

Entertainment services, namely, providing information related to media, including music, music artists, entertainers, radio, podcasts, movies, television, video, and sports via a global communication network and providing an online database accessible to facilitate search and discovery of media, and to provide information related to media for presentation in connection with an end user playing media, selecting media for playback, or requesting information related to media.

29.

SELECTING BALANCED CLUSTERS OF DESCRIPTIVE VECTORS

      
Application Number 18348200
Status Pending
Filing Date 2023-07-06
First Publication Date 2023-11-02
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Dimaria, Peter C.
  • Cremer, Markus K.
  • Popp, Phillip

Abstract

A clustering machine can cluster descriptive vectors in a balanced manner. The clustering machine calculates distances between pairs of descriptive vectors and generates clusters of vectors arranged in a hierarchy. The clustering machine determines centroid vectors of the clusters, such that each cluster is represented by its corresponding centroid vector. The clustering machine calculates a sum of inter-cluster vector distances between pairs of centroid vectors, as well as a sum of intra-cluster vector distances between pairs of vectors in the clusters. The clustering machine calculates multiple scores of the hierarchy by varying a scalar and calculating a separate score for each scalar. The calculation of each score is based on the two sums previously calculated for the hierarchy. The clustering machine may select or otherwise identify a balanced subset of the hierarchy by finding an extremum in the calculated scores.
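
A compact sketch of the scoring step, assuming the score combines the two sums named in the abstract as inter-centroid spread minus a scalar-weighted intra-cluster spread; the exact formula, candidate cuts, and data are illustrative assumptions.

```python
from itertools import combinations
from typing import List

import numpy as np

def score(clusters: List[np.ndarray], scalar: float) -> float:
    """Assumed score combining the two sums from the abstract: the sum of
    distances between cluster centroids, minus a scalar-weighted sum of
    distances between vector pairs inside each cluster."""
    centroids = [c.mean(axis=0) for c in clusters]
    inter = sum(np.linalg.norm(a - b) for a, b in combinations(centroids, 2))
    intra = sum(np.linalg.norm(a - b)
                for c in clusters for a, b in combinations(c, 2))
    return inter - scalar * intra

rng = np.random.default_rng(0)
vectors = np.vstack([rng.normal(0, 0.2, (5, 3)), rng.normal(3, 0.2, (5, 3))])

# Two candidate subsets ("cuts") of a toy hierarchy: split in two, or keep one cluster.
candidates = {"two_clusters": [vectors[:5], vectors[5:]], "one_cluster": [vectors]}

# Vary the scalar, score every candidate, and keep the cut at the extremum (maximum).
for scalar in (0.1, 0.5, 1.0):
    best = max(candidates, key=lambda name: score(candidates[name], scalar))
    print(f"scalar={scalar}: best cut -> {best}")
```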

IPC Classes  ?

  • G06F 16/41 - Indexing; Data structures therefor; Storage structures

30.

INTERACTIVE PROGRAMMING GUIDE

      
Application Number 18348824
Status Pending
Filing Date 2023-07-07
First Publication Date 2023-11-02
Owner GRACENOTE, INC. (USA)
Inventor
  • Harron, Wilson
  • Herrada, Oscar Celma
  • Zhu, Irene
  • Cremer, Markus K.

Abstract

Techniques of providing an interactive programming guide with a personalized lineup are disclosed. In some embodiments, a profile is accessed, and a personalized lineup is determined based on the profile. The personalized lineup may include a corresponding media content identification assigned to each one of a plurality of sequential time slots, where each media content identification identifies media content for the corresponding time slot. A first interactive programming guide may be caused to be displayed on a first media content device associated with the profile, where the first interactive programming guide includes the personalized lineup.

IPC Classes  ?

  • H04N 21/482 - End-user interface for program selection
  • H04N 21/2668 - Creating a channel for a dedicated end-user group, e.g. by inserting targeted commercials into a video stream based on end-user profiles
  • H04N 21/431 - Generation of visual interfaces; Content or additional data rendering
  • H04N 21/475 - End-user interface for inputting end-user data, e.g. PIN [Personal Identification Number] or preference data
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 
  • H04N 21/454 - Content filtering, e.g. blocking advertisements
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
  • H04N 21/472 - End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification or for manipulating displayed content
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments

31.

METHODS AND APPARATUS TO IDENTIFY MEDIA THAT HAS BEEN PITCH SHIFTED, TIME SHIFTED, AND/OR RESAMPLED

      
Application Number 18347363
Status Pending
Filing Date 2023-07-05
First Publication Date 2023-11-02
Owner GRACENOTE, Inc. (USA)
Inventor
  • Coover, Robert
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Hong, Yongju

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to identify media that has been pitch shifted, time shifted, and/or resampled. An example apparatus includes: memory; instructions in the apparatus; and processor circuitry to execute the instructions to: transmit a fingerprint of an audio signal and adjusting instructions to a central facility to facilitate a query, the adjusting instructions identifying at least one of a pitch shift, a time shift, or a resample ratio; obtain a response including an identifier for the audio signal and information corresponding to how the audio signal was adjusted; and change the adjusting instructions based on the information.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/632 - Query formulation
  • G06F 16/65 - Clustering; Classification
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 18/22 - Matching criteria, e.g. proximity measures

32.

CLASSIFYING SEGMENTS OF MEDIA CONTENT USING CLOSED CAPTIONING

      
Application Number 18348817
Status Pending
Filing Date 2023-07-07
First Publication Date 2023-11-02
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Balasuriya, Lakshika
  • Ko, Chin-Ting

Abstract

In one aspect, an example method includes (i) retrieving, from a text index, closed captioning repetition data for a segment of a sequence of media content; (ii) generating features using the closed captioning repetition data; (iii) providing the features as input to a classification model, wherein the classification model is configured to output classification data indicative of a likelihood of the features being characteristic of a program segment; (iv) obtaining the classification data output by the classification model; (v) determining a prediction of whether the segment is a program segment using the classification data; and (vi) storing the prediction for the segment in a database.

IPC Classes  ?

  • G06F 40/20 - Natural language analysis
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • G11B 27/34 - Indicating arrangements

33.

AUDIO CONTENT RECOGNITION METHOD AND SYSTEM

      
Application Number 18335618
Status Pending
Filing Date 2023-06-15
First Publication Date 2023-10-12
Owner Gracenote, Inc. (USA)
Inventor
  • Berrian, Alexander
  • Hodges, Todd J.
  • Coover, Robert
  • Wilkinson, Matthew James
  • Rafii, Zafar

Abstract

A method implemented by a computing system comprises generating, by the computing system, a fingerprint comprising a plurality of bin samples associated with audio content. Each bin sample is specified within a frame of the fingerprint and is associated with one of a plurality of non-overlapping frequency ranges and a value indicative of a magnitude of energy associated with a corresponding frequency range. The computing system removes, from the fingerprint, a plurality of bin samples associated with a frequency sweep in the audio content.
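
A rough sketch of the sweep-removal idea, assuming each bin sample is a (frame, band, magnitude) triple and that a frequency sweep appears as the per-frame strongest band advancing by a constant non-zero step over several consecutive frames; the detection rule and thresholds are assumptions for illustration.

```python
from typing import List, NamedTuple

class BinSample(NamedTuple):
    frame: int        # frame index within the fingerprint
    band: int         # index of a non-overlapping frequency range
    magnitude: float  # energy magnitude in that range

def sweep_frames(samples: List[BinSample], min_run: int = 4) -> set:
    """Frames that look like a frequency sweep: the strongest band per frame
    advances by the same non-zero step for at least `min_run` frames."""
    strongest = {}
    for s in samples:
        if s.frame not in strongest or s.magnitude > strongest[s.frame].magnitude:
            strongest[s.frame] = s
    frames = sorted(strongest)
    steps = [strongest[b].band - strongest[a].band if b - a == 1 else None
             for a, b in zip(frames, frames[1:])]
    swept, i = set(), 0
    while i < len(steps):
        j = i
        while (j < len(steps) and steps[j] is not None
               and steps[j] != 0 and steps[j] == steps[i]):
            j += 1
        if j - i >= min_run - 1:
            swept.update(frames[i:j + 1])   # frames spanned by the run of equal steps
        i = j if j > i else i + 1
    return swept

def remove_sweep(samples: List[BinSample]) -> List[BinSample]:
    swept = sweep_frames(samples)
    return [s for s in samples if s.frame not in swept]

# Toy fingerprint: frames 2-6 carry a rising sweep (strongest band climbs by 1 per frame).
fp = [BinSample(f, b, m) for f, b, m in
      [(0, 3, 0.9), (1, 3, 0.8), (2, 1, 1.0), (3, 2, 1.0), (4, 3, 1.0),
       (5, 4, 1.0), (6, 5, 1.0), (7, 3, 0.7)]]
print([s.frame for s in remove_sweep(fp)])   # frames 2-6 removed
```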

IPC Classes  ?

  • G10L 25/54 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for retrieval
  • G10L 25/27 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique
  • G10L 25/72 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for transmitting results of analysis
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal

34.

System and Method to Identify Programs and Commercials in Video Content via Unsupervised Static Content Identification

      
Application Number 18193962
Status Pending
Filing Date 2023-03-31
First Publication Date 2023-10-12
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Findelair, Arthur

Abstract

In one aspect, an example method includes (i) determining, by a computing system, a mean image of a set of frames of video content; (ii) extracting, by the computing system, a reference template of static content from the mean image; (iii) identifying, by the computing system, the extracted reference template of static content in a frame of the set of frames of the video content; (iv) labeling a segment within the video content as either a program segment or an advertisement segment based on the identifying of the extracted reference template of static content in the frame of the video content; and (v) generating data identifying the labeled segment.
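
A small numpy sketch of the mean-image and static-template idea, assuming static content shows up as low pixel variance across frames and that template presence is checked by mean absolute difference over the static mask; the thresholds and toy video are illustrative assumptions.

```python
import numpy as np

def static_template(frames: np.ndarray, var_threshold: float = 0.01):
    """Mean image over the frame set plus a mask of 'static' pixels whose
    variance across frames is low (e.g. a channel bug or score banner)."""
    mean_image = frames.mean(axis=0)
    mask = frames.var(axis=0) < var_threshold
    return mean_image, mask

def template_present(frame: np.ndarray, mean_image: np.ndarray,
                     mask: np.ndarray, diff_threshold: float = 0.05) -> bool:
    """A frame can be labelled a program segment when the static reference
    template is found in it, i.e. masked pixels stay close to the mean image."""
    if not mask.any():
        return False
    return float(np.abs(frame - mean_image)[mask].mean()) < diff_threshold

rng = np.random.default_rng(0)
frames = rng.random((50, 36, 64))   # toy grayscale video, values in [0, 1]
frames[:, :4, :8] = 0.9             # a static logo region present in every frame
mean_image, mask = static_template(frames)

program_frame = frames[0]           # still carries the static logo
ad_frame = rng.random((36, 64))     # logo absent
print(template_present(program_frame, mean_image, mask),
      template_present(ad_frame, mean_image, mask))
```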

IPC Classes  ?

  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs

35.

Method and System for Generating Podcast Metadata to Facilitate Searching and Recommendation

      
Application Number 18194260
Status Pending
Filing Date 2023-03-31
First Publication Date 2023-10-12
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Lützhøft Christensen, Casper

Abstract

A method and system for computer-based generation of podcast metadata, to facilitate operations such as searching for and recommending podcasts based on the generated metadata. In an example method, a computing system obtains a text representation of a podcast episode and obtains person data defining a list of person names such as celebrity names. The computing system then correlates the person data with the text representation to find a match between a listed person name and a text string in the text representation. Further, the computing system predicts a named-entity span in the text representation and determines that the predicted named-entity span matches a location of the text string in the text representation of the podcast episode, and based on this determination, the computing system generates and outputs metadata that associates the person name with the podcast episode.
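
A short sketch of the correlation step, assuming the transcript, person list, and predicted named-entity spans are already in hand (the span prediction itself is elided); the matching rule shown (case-insensitive substring whose location must fall inside a predicted span) is an illustrative simplification.

```python
from typing import Dict, List, Tuple

def find_person_mentions(transcript: str,
                         person_names: List[str],
                         entity_spans: List[Tuple[int, int]]) -> List[Dict]:
    """Correlate the person list with the transcript: a name becomes metadata
    only when its text occurrence falls inside a predicted named-entity span."""
    text = transcript.lower()
    metadata = []
    for name in person_names:
        start = text.find(name.lower())
        if start == -1:
            continue
        end = start + len(name)
        if any(s <= start and end <= e for s, e in entity_spans):
            metadata.append({"person": name, "char_span": (start, end)})
    return metadata

transcript = "Today our guest is Jane Example, who talks about scoring films."
person_list = ["Jane Example", "John Nobody"]
predicted_spans = [(19, 31)]   # span a named-entity model predicted over "Jane Example"
print(find_person_mentions(transcript, person_list, predicted_spans))
```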

IPC Classes  ?

  • G06F 16/335 - Filtering based on additional data, e.g. user or group profiles
  • G06F 16/383 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/35 - Clustering; Classification

36.

Automated Generation of Banner Images

      
Application Number 18206571
Status Pending
Filing Date 2023-06-06
First Publication Date 2023-10-12
Owner Gracenote, Inc. (USA)
Inventor Vartakavi, Aneesh

Abstract

Example systems and methods for automated generation of banner images are disclosed. A program identifier associated with a particular media program may be received by a system, and used for accessing a set of iconic digital images and corresponding metadata associated with the particular media program. The system may select a particular iconic digital image for placing a banner of text associated with the particular media program, by applying an analytical model of banner-placement criteria to the iconic digital images. The system may apply another analytical model for banner generation to the particular iconic image to determine (i) dimensions and placement of a bounding box for containing the text, (ii) segmentation of the text for display within the bounding box, and (iii) selection of font, text size, and font color for display of the text. The system may store the particular iconic digital image and banner metadata specifying the banner.

IPC Classes  ?

  • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
  • H04N 21/81 - Monomedia components thereof
  • G06N 20/00 - Machine learning
  • H04N 21/485 - End-user interface for client configuration

37.

DYNAMIC CONTENT DELIVERY BASED ON VEHICLE NAVIGATIONAL ATTRIBUTES

      
Application Number 18305668
Status Pending
Filing Date 2023-04-24
First Publication Date 2023-10-12
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus K.
  • Jeyachandran, Suresh
  • Quinn, Paul Emmanuel
  • Tsai, Roger

Abstract

Systems and methods are disclosed for dynamic content delivery based on vehicle navigational attributes. An example apparatus includes at least one memory, machine readable instructions, and processor circuitry to execute the machine readable instructions to at least obtain navigational attributes from an electronic device of a vehicle via a network, determine a relevancy score for respective ones of first sporting event data items based on the navigational attributes, based on a determination that the navigational attributes correspond to a driving condition, identify a second sporting event data item of the first sporting event data items based on a relevancy score of the second sporting event data item corresponding to the driving condition, and transmit the second sporting event data item to the electronic device of the vehicle to cause the second sporting event data item to be presented.

IPC Classes  ?

  • G01C 21/26 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network
  • H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
  • H04L 67/50 - Network services
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06F 16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
  • H04W 4/024 - Guidance services
  • G01C 21/36 - Input/output arrangements for on-board computers
  • H04L 67/52 - Network services specially adapted for the location of the user terminal
  • H04L 67/306 - User profiles
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • G06F 16/2457 - Query processing with adaptation to user needs
  • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
  • H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]

38.

METHODS AND APPARATUS FOR AUDIO EQUALIZATION

      
Application Number 18186725
Status Pending
Filing Date 2023-03-20
First Publication Date 2023-09-28
Owner GRACENOTE, INC. (USA)
Inventor
  • Renner, Joseph
  • Coover, Robert
  • Cremer, Markus
  • Summers, Cameron Aubrey

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for audio equalization. Example instructions disclosed herein cause one or more processors to at least: detect an irregularity in a frequency representation of an audio signal in response to a change in volume between a set of frequency values exceeding a threshold; and adjust a volume at a first frequency value of the set of frequency values to reduce the irregularity.
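
A minimal sketch of the thresholded-jump idea, assuming the frequency representation is an array of per-band levels in dB; the function name and the 12 dB threshold are placeholders.

```python
import numpy as np

def smooth_irregularities(band_levels_db, threshold_db=12.0):
    """Pull back any band whose level jumps more than `threshold_db` relative to the
    previous band, reducing the detected irregularity."""
    out = np.asarray(band_levels_db, dtype=float).copy()
    for i in range(1, len(out)):
        change = out[i] - out[i - 1]
        if abs(change) > threshold_db:
            out[i] = out[i - 1] + np.sign(change) * threshold_db
    return out
```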

IPC Classes  ?

  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H03G 5/16 - Automatic control
  • H04R 3/04 - Circuits for transducers for correcting frequency response
  • G06N 3/08 - Learning methods
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G06F 3/16 - Sound input; Sound output
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
  • H04N 9/87 - Regeneration of colour television signals
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 

39.

Automated video segmentation

      
Application Number 18141216
Grant Number 11948361
Status In Force
Filing Date 2023-04-28
First Publication Date 2023-09-14
Grant Date 2024-04-02
Owner Gracenote, Inc. (USA)
Inventor
  • Dimitriou, Konstantinos Antonio
  • Garg, Amanmeet

Abstract

Methods and systems for automated video segmentation are disclosed. A sequence of video frames having video segments of contextually-related sub-sequences may be received. Each frame may be labeled according to segment and segment class. A video graph may be constructed in which each node corresponds to a different frame, and each edge connects a different pair of nodes, and is associated with a time between video frames and a similarity metric of the connected frames. An artificial neural network (ANN) may be trained to predict both labels for the nodes and clusters of the nodes corresponding to predicted membership among the segments, using the video graph as input to the ANN, and ground-truth clusters of ground-truth labeled nodes. The ANN may be further trained to predict segment classes of the predicted clusters, using the segment classes as ground truths. The trained ANN may be configured for application to runtime video sequences.
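
A minimal sketch of the video-graph construction under stated assumptions: `frame_features` are hypothetical per-frame embeddings, `times` are sorted timestamps, edges join frames within a fixed time window, and cosine similarity stands in for the unspecified similarity metric.

```python
import numpy as np

def build_video_graph(frame_features, times, max_gap_s=5.0):
    """Each node is a frame; an edge joins two frames whose timestamps are within
    `max_gap_s` seconds and carries the time difference and a cosine similarity."""
    edges = []
    for i in range(len(frame_features)):
        for j in range(i + 1, len(frame_features)):
            dt = times[j] - times[i]
            if dt > max_gap_s:
                break
            a, b = frame_features[i], frame_features[j]
            sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
            edges.append((i, j, dt, sim))
    return edges
```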

IPC Classes  ?

  • G06V 20/00 - Scenes; Scene-specific elements
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/22 - Matching criteria, e.g. proximity measures
  • G06N 3/045 - Combinations of networks
  • G06V 10/426 - Graphical representations
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs

40.

Methods and Systems for Scoreboard Text Region Detection

      
Application Number 18196310
Status Pending
Filing Date 2023-05-11
First Publication Date 2023-09-07
Owner Gracenote, Inc. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit Umesh
  • Lee, Dewey Ho

Abstract

A computing system engages in digital image processing of received video frames to generate sport data that indicates a score and/or a time associated with a sport event. The digital image processing includes: (i) identifying a first frame region of the video frames based on the first frame region depicting a scoreboard; (ii) executing a first procedure that analyzes the identified first frame region to detect, within the identified first frame region, second frame region(s) based on the second frame region(s) depicting text of the scoreboard; (iii) in response to detecting the second frame region(s), executing a second procedure to recognize the text in at least one of the second frame region(s); and (iv) based at least on the recognizing of the text, generating the sport data. In response to completing the digital image processing, the computing system then carries out an action based on the generated sport data.

IPC Classes  ?

41.

Methods and Systems for Scoreboard Text Region Detection

      
Application Number 18310333
Status Pending
Filing Date 2023-05-01
First Publication Date 2023-08-24
Owner Gracenote, Inc. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit
  • Lee, Dewey Ho

Abstract

A computing system automatically detects, within a digital video frame, a video frame region that depicts a textual expression of a scoreboard. The computing system (a) engages in an edge-detection process to detect edges of at least scoreboard image elements depicted by the digital video frame, with at least some of these edges being of the textual expression and defining alphanumeric shapes; (b) applies pattern-recognition to identify the alphanumeric shapes; (c) establishes a plurality of minimum bounding rectangles each bounding a respective one of the identified alphanumeric shapes; (d) establishes, based on at least two of the minimum bounding rectangles, a composite shape that encompasses the identified alphanumeric shapes that were bounded by the at least two minimum bounding rectangles; and (e) based on the composite shape occupying a particular region, deems the particular region to be the video frame region that depicts the textual expression.
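
A minimal sketch of the bounding-rectangle and composite-shape steps using OpenCV contours on a pre-computed binary edge image; the alphanumeric pattern-recognition filter described in the abstract is not reproduced here.

```python
import cv2

def composite_text_region(edge_image):
    """Bound each connected shape in a binary edge image with a minimum bounding
    rectangle and merge the rectangles into one composite (x, y, w, h) region."""
    contours, _ = cv2.findContours(edge_image, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    boxes = [cv2.boundingRect(c) for c in contours]
    if not boxes:
        return None
    x0 = min(x for x, _, _, _ in boxes)
    y0 = min(y for _, y, _, _ in boxes)
    x1 = max(x + w for x, _, w, _ in boxes)
    y1 = max(y + h for _, y, _, h in boxes)
    return x0, y0, x1 - x0, y1 - y0
```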

IPC Classes  ?

  • H04N 21/2187 - Live feed
  • G06T 7/13 - Edge detection
  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

42.

GENERATION OF MEDIA STATION PREVIEWS USING A SECONDARY TUNER

      
Application Number 18301869
Status Pending
Filing Date 2023-04-17
First Publication Date 2023-08-10
Owner GRACENOTE, INC. (USA)
Inventor
  • Qin, John M.
  • Jeyachandran, Suresh
  • Fasching, Damon P.

Abstract

In one aspect, an example method includes (i) while a media playback device of a vehicle is playing back content received on a first channel, generating, by the media playback device, a query fingerprint using second content received on a second channel; (ii) sending, by the media playback device, the query fingerprint to a server that maintains a reference database containing a plurality of reference fingerprints; (iii) receiving, by the media playback device from the server, identifying information corresponding to a reference fingerprint of the plurality of reference fingerprints that matches the query fingerprint; and (iv) while the media playback device is playing back the first content received on the first channel, providing, by the media playback device for display, at least a portion of the identifying information.

IPC Classes  ?

  • G05B 19/42 - Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • H04N 5/50 - Tuning indicators; Automatic tuning control

43.

Methods and Apparatus to Segment Audio and Determine Audio Segment Similarities

      
Application Number 18298044
Status Pending
Filing Date 2023-04-10
First Publication Date 2023-08-03
Owner Gracenote, Inc. (USA)
Inventor Mccallum, Matthew

Abstract

Methods, apparatus, and systems are disclosed to segment audio and determine audio segment similarities. An example apparatus includes at least one memory storing instructions and processor circuitry to execute instructions to at least select an anchor index beat of digital audio, identify a first segment of the digital audio based on the anchor index beat to analyze, the first segment having at least two beats and a respective center beat, concatenate time-frequency data of the at least two beats and the respective center beat to form a matrix of the first segment, generate a first deep feature based on the first segment, the first deep feature indicative of a descriptor of the digital audio, and train internal coefficients to classify the first deep feature as similar to a second deep feature based on the descriptor of the first deep feature and a descriptor of a second deep feature.

IPC Classes  ?

  • G10L 15/04 - Segmentation; Word boundary detection
  • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
  • G10L 15/16 - Speech classification or search using artificial neural networks

44.

Methods and Apparatus to Improve Detection of Audio Signatures

      
Application Number 18298178
Status Pending
Filing Date 2023-04-10
First Publication Date 2023-08-03
Owner Gracenote, Inc. (USA)
Inventor Rafii, Zafar

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to improve detection of audio signatures. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to: determine a first time difference of arrival for a first audio sensor of a meter and a second audio sensor of the meter based on a first audio recording from the first audio sensor and a second audio recording from the second audio sensor; determine a second time difference of arrival for the first audio sensor and a third audio sensor of the meter based on the first audio recording and a third audio recording from the third audio sensor; determine a match by comparing the first time difference of arrival to i) a first virtual source time difference of arrival and ii) a second virtual source time difference of arrival; in response to determining that the first time difference of arrival matches the first virtual source time difference of arrival, identify a first virtual source location as the location of a media presentation device presenting media; and remove the second audio recording to reduce a computational burden on the processor.
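
The time-difference-of-arrival estimation and matching can be sketched as follows, using plain cross-correlation (other weightings such as GCC-PHAT would work equally well); `tolerance_s` is a placeholder.

```python
import numpy as np
from scipy.signal import correlate

def estimate_tdoa_seconds(recording_a, recording_b, sample_rate):
    """Estimate how many seconds recording_a lags (positive) or leads (negative)
    recording_b from the peak of their full cross-correlation."""
    corr = correlate(recording_a, recording_b, mode="full")
    lag_samples = int(np.argmax(corr)) - (len(recording_b) - 1)
    return lag_samples / sample_rate

def matches_virtual_source(measured_tdoa, virtual_tdoa, tolerance_s=1e-3):
    """Compare a measured TDOA with the TDOA expected for a virtual source."""
    return abs(measured_tdoa - virtual_tdoa) <= tolerance_s
```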

IPC Classes  ?

  • G01S 5/22 - Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
  • H04R 3/00 - Circuits for transducers
  • G01S 5/24 - Position of single direction-finder fixed by determining direction of a plurality of spaced sources of known location
  • H04R 1/40 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers

45.

Inserting information into playing content

      
Application Number 18084252
Grant Number 11853355
Status In Force
Filing Date 2022-12-19
First Publication Date 2023-07-20
Grant Date 2023-12-26
Owner Gracenote, Inc. (USA)
Inventor
  • Brenner, Vadim
  • Cremer, Markus K.

Abstract

Example methods and systems for inserting information into playing content are described. In some example embodiments, the methods and systems may identify a break in content playing via a playback device, select an information segment representative of information received by the playback device to present during the identified break, and insert the information segment into the content playing via the playback device upon an occurrence of the identified break.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
  • G11B 27/036 - Insert-editing

46.

Methods and apparatus for volume adjustment

      
Application Number 18074169
Grant Number 11824507
Status In Force
Filing Date 2022-12-02
First Publication Date 2023-07-06
Grant Date 2023-11-21
Owner Gracenote, Inc. (USA)
Inventor
  • Coover, Robert
  • Scott, Jeffrey
  • Cremer, Markus K.
  • Vartakavi, Aneesh

Abstract

Apparatus, systems, articles of manufacture, and methods for volume adjustment are disclosed herein. An example method includes collecting data corresponding to a volume of an audio signal as the audio signal is output through a device, when an average volume of the audio signal does not satisfy a volume threshold for a specified timespan, determining a difference between the average volume and a desired volume, and applying a gain to the audio signal to adjust the volume of the audio signal to the desired volume, the gain determined based on the difference between the average volume and the desired volume.
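
A minimal sketch of the averaging, threshold, and gain arithmetic; the target level, tolerance, and function name are placeholders.

```python
import numpy as np

def gain_toward_target_db(samples, target_db=-20.0, tolerance_db=3.0):
    """Return the dB gain needed to move the signal's average level back to the
    target when it strays outside the tolerance, otherwise 0."""
    rms = np.sqrt(np.mean(np.square(np.asarray(samples, dtype=np.float64))))
    average_db = 20.0 * np.log10(max(float(rms), 1e-12))
    difference = target_db - average_db
    return difference if abs(difference) > tolerance_db else 0.0
```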

IPC Classes  ?

  • H03G 3/30 - Automatic control in amplifiers having semiconductor devices
  • H03F 3/183 - Low-frequency amplifiers, e.g. audio preamplifiers with semiconductor devices only
  • H03G 7/00 - Volume compression or expansion in amplifiers
  • H04R 3/00 - Circuits for transducers
  • H03G 3/02 - Manually-operated control

47.

Audio Identification During Performance

      
Application Number 18165107
Status Pending
Filing Date 2023-02-06
First Publication Date 2023-06-15
Owner Gracenote, Inc. (USA)
Inventor
  • Roberts, Dale T.
  • Coover, Bob
  • Marcantonio, Nicola
  • Cremer, Markus K.

Abstract

Methods and apparatus for audio identification during a performance are disclosed herein. An example apparatus includes at least one memory and at least one processor to transform a segment of audio into a log-frequency spectrogram based on a constant Q transform using a logarithmic frequency resolution, transform the log-frequency spectrogram into a binary image, each pixel of the binary image corresponding to a time frame and frequency channel pair, each frequency channel representing a corresponding quarter tone frequency channel in a range from C3-C8, generate a matrix product of the binary image and a plurality of reference fingerprints, normalize the matrix product to form a similarity matrix, select an alignment of a line in the similarity matrix that intersects one or more bins in the similarity matrix with the largest calculated Hamming similarities, and select a reference fingerprint based on the alignment.
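
The matrix-product similarity step can be sketched as below, assuming binary fingerprints laid out as frequency-by-time arrays; the constant-Q front end, alignment search, and reference selection are omitted.

```python
import numpy as np

def hamming_similarity_matrix(query_bits, reference_bits):
    """Fraction of agreeing bits for every (query frame, reference frame) pair,
    computed via matrix products over a binary frequency-by-time fingerprint."""
    q = np.asarray(query_bits, dtype=np.float32)
    r = np.asarray(reference_bits, dtype=np.float32)
    n_bits = q.shape[0]
    agree = q.T @ r + (1 - q).T @ (1 - r)   # matches on ones plus matches on zeros
    return agree / n_bits
```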

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

48.

Methods and Systems for Scoreboard Region Detection

      
Application Number 18150670
Status Pending
Filing Date 2023-01-05
First Publication Date 2023-05-18
Owner Gracenote, Inc. (USA)
Inventor
  • Scott, Jeffrey
  • Cremer, Markus Kurt Peter
  • Parekh, Nishit
  • Lee, Dewey Ho

Abstract

A computing system automatically detects, in a sequence of video frames, a video frame region that depicts a scoreboard. The video frames of the sequence depict image elements including (i) scoreboard image elements that are unchanging across the video frames of the sequence and (ii) other image elements that change across the video frames of the sequence. Given this, the computing system (a) receives the sequence, (b) engages in an edge-detection process to detect, in the video frames of the sequence, a set of edges of the depicted image elements, (c) identifies a subset of the detected set of edges based on each edge of the subset being unchanging across the video frames of the sequence, and (d) detects, based on the edges of the identified subset, the video frame region that depicts the scoreboard.
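
A minimal sketch of the persistent-edge idea, assuming BGR frames, OpenCV's Canny detector, and a placeholder persistence fraction; the described system's region detection is more elaborate.

```python
import cv2
import numpy as np

def persistent_edge_bbox(frames, persistence=0.95):
    """Edges present in (nearly) every frame are treated as scoreboard edges;
    return the (x, y, w, h) bounding box of those persistent edge pixels, or None."""
    if not frames:
        return None
    counts = None
    for frame in frames:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        edges = (cv2.Canny(gray, 100, 200) > 0).astype(np.int32)
        counts = edges if counts is None else counts + edges
    persistent = counts >= persistence * len(frames)
    rows, cols = np.where(persistent)
    if rows.size == 0:
        return None
    return (int(cols.min()), int(rows.min()),
            int(cols.max() - cols.min()), int(rows.max() - rows.min()))
```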

IPC Classes  ?

  • G06T 7/13 - Edge detection
  • H04N 21/2187 - Live feed
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 20/62 - Text, e.g. of license plates, overlay texts or captions on TV images
  • G06V 10/44 - Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
  • G06V 10/75 - Image or video pattern matching; Proximity measures in feature spaces using context analysis; Selection of dictionaries

49.

Vehicle-based media system with audio ad and visual content synchronization feature

      
Application Number 18156307
Grant Number 11929823
Status In Force
Filing Date 2023-01-18
First Publication Date 2023-05-18
Grant Date 2024-03-12
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying visual content based at least on the identified reference audio content; and (f) outputting, via a user interface of the vehicle-based media system, the identified visual content.

IPC Classes  ?

  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • G01C 21/36 - Input/output arrangements for on-board computers
  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
  • G06Q 30/0241 - Advertisements
  • G06Q 30/0251 - Targeted advertisements
  • G10L 15/26 - Speech to text systems
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • H04H 20/62 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast for transportation systems, e.g. in vehicles
  • H04N 21/41 - Structure of client; Structure of client peripherals
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04N 21/81 - Monomedia components thereof
  • H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
  • H04W 4/02 - Services making use of location information
  • H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]

50.

Audio playout report for ride-sharing session

      
Application Number 18156311
Grant Number 11837250
Status In Force
Filing Date 2023-01-18
First Publication Date 2023-05-18
Grant Date 2023-12-05
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A

Abstract

In one aspect, an example method to be performed by a computing device includes (a) determining that a ride-sharing session is active; (b) in response to determining the ride-sharing session is active, using a microphone of the computing device to capture audio content; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (d) determining that the ride-sharing session is inactive; and (e) outputting an indication of the identified reference audio content.

IPC Classes  ?

  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 3/16 - Sound input; Sound output

51.

MODIFYING PLAYBACK OF CONTENT USING PRE-PROCESSED PROFILE INFORMATION

      
Application Number 18149415
Status Pending
Filing Date 2023-01-03
First Publication Date 2023-05-11
Owner Gracenote, Inc. (USA)
Inventor
  • Brenner, Vadim
  • Cremer, Markus K.
  • Becker, Michael

Abstract

Example methods and systems for modifying the playback of content using pre-processed profile information are described. Example instructions, when executed, cause at least one processor to access a media stream that includes media and a profile of equalization parameters, the media stream provided to a device via a network, the profile of equalization parameters included in the media stream selected based on a comparison of a reference fingerprint to a query fingerprint generated based on the media, the profile of equalization parameters including an equalization parameter for the media; and modify playback of the media based on the equalization parameter specified in the accessed profile.

IPC Classes  ?

  • H04H 60/47 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for recognising genres
  • H04R 3/04 - Circuits for transducers for correcting frequency response
  • H04H 60/58 - Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups or of audio
  • H04H 60/65 - Arrangements for services using the result of monitoring, identification or recognition covered by groups or for using the result on users' side
  • H04N 21/233 - Processing of audio elementary streams
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • H04N 21/266 - Channel or content management, e.g. generation and management of keys and entitlement messages in a conditional access system or merging a VOD unicast channel into a multicast channel
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/432 - Content retrieval operation from a local storage medium, e.g. hard-disk
  • H04N 21/654 - Transmission by server directed to the client
  • H04N 21/81 - Monomedia components thereof
  • G06F 3/16 - Sound input; Sound output

52.

Methods and Apparatus For Determining A Mood Profile Associated With Media Data

      
Application Number 18086865
Status Pending
Filing Date 2022-12-22
First Publication Date 2023-04-27
Owner Gracenote, Inc. (USA)
Inventor
  • Chen, Ching-Wei
  • Lee, Kyogu
  • Dimaria, Peter C.
  • Cremer, Markus K.

Abstract

An example method involves comparing a primary element of a first piece of audio data to a primary element of a second piece of audio data; based on the comparing of the primary elements, determining that the first and second pieces of audio data have the same predominant mood category; in response to determining that the first and second pieces of audio data have the same predominant mood category, comparing a first mood score of the primary element of the first piece of audio data to a second mood score of the primary element of the second piece of audio data; determining that an output of the comparison of the two mood scores exceeds a threshold value; and in response to determining that the output of the comparison of the two mood scores exceeds the threshold value, providing an indicator to an application.

IPC Classes  ?

  • G06F 16/2457 - Query processing with adaptation to user needs
  • G06F 16/438 - Presentation of query results
  • G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments

53.

Radio Head Unit with Dynamically Updated Tunable Channel Listing

      
Application Number 17969407
Status Pending
Filing Date 2022-10-19
First Publication Date 2023-04-27
Owner Gracenote, Inc. (USA)
Inventor
  • Jeyachandran, Suresh
  • Fasching, Damon P.
  • Tanaka, Hidenori
  • Vetriselvi, Kaviarasu
  • Raul, Samit

Abstract

In one aspect, an example method includes (i) encountering, by a media playback device of a vehicle, a trigger to update a list of currently tunable radio stations; (ii) based on encountering the trigger to update the list of currently tunable radio stations, updating, by the media playback device, the list of currently tunable radio stations using a location of the vehicle and radio station contour data stored in a local database of the media playback device; and (iii) displaying, by the media playback device, a station list using the list of currently tunable radio stations.

IPC Classes  ?

  • H04H 60/73 - Systems specially adapted for using specific information, e.g. geographical or meteorological information using meta-information
  • H04H 40/18 - Arrangements characterised by circuits or components specially adapted for receiving
  • H04H 60/41 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast space, i.e. broadcast channels, broadcast stations or broadcast areas

54.

RADIO HEAD UNIT WITH DYNAMICALLY UPDATED TUNABLE CHANNEL LISTING

      
Application Number US2022047219
Publication Number 2023/069581
Status In Force
Filing Date 2022-10-20
Publication Date 2023-04-27
Owner GRACENOTE, INC. (USA)
Inventor
  • Jeyachandran, Suresh
  • Fasching, Damon
  • Tanaka, Hidenori
  • Vetriselvi, Kaviarasu
  • Raul, Samit

Abstract

In one aspect, an example method includes (i) encountering, by a media playback device of a vehicle, a trigger to update a list of currently tunable radio stations; (ii) based on encountering the trigger to update the list of currently tunable radio stations, updating, by the media playback device, the list of currently tunable radio stations using a location of the vehicle and radio station contour data stored in a local database of the media playback device; and (iii) displaying, by the media playback device, a station list using the list of currently tunable radio stations.

IPC Classes  ?

  • H04N 21/438 - Interfacing the downstream path of the transmission network originating from a server, e.g. retrieving MPEG packets from an IP network
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/482 - End-user interface for program selection
  • H04N 21/61 - Network physical structure; Signal processing

55.

Unified Representation Learning of Media Features for Diverse Tasks

      
Application Number 17502347
Status Pending
Filing Date 2021-10-15
First Publication Date 2023-04-20
Owner Gracenote, Inc. (USA)
Inventor
  • Garg, Amanmeet
  • Gesiriech, Gannon

Abstract

Methods and systems are disclosed for generating general feature vectors (GFVs), each simultaneously constructed for separate tasks of image reconstruction and fingerprint-based image discrimination. The computing system may include machine-learning-based components configured for extracting GFVs from images, signal processing for both transmission and reception and recovery of the extracted GFVs, generating reconstructed images from the recovered GFVs, and discriminating between fingerprints generated from the recovered GFVs and query fingerprints generated from query GFVs. A set of training images may be received at the computing system. In each of one or more training iterations over the set of training images, the components may be jointly trained with each training image of the set by minimizing a joint loss function computed as a sum of losses due to signal processing and recovery, image reconstruction, and fingerprint discrimination. The trained components may be configured for runtime implementation among one or more computing devices.

IPC Classes  ?

  • G06N 20/20 - Ensemble learning
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means

56.

Computing System With DVE Template Selection And Video Content Item Generation Feature

      
Application Number 18063764
Status Pending
Filing Date 2022-12-09
First Publication Date 2023-04-06
Owner Gracenote, Inc. (USA)
Inventor
  • Roberts, Dale T.
  • Gubman, Michael

Abstract

In one aspect, an example method includes (i) receiving a first group of video content items; (ii) identifying from among the first group of video content items, a second group of video content items having a threshold extent of similarity with each other; (iii) determining a quality score for each video content item of the second group; (iv) identifying from among the second group of video content items, a third group of video content items each having a quality score that exceeds a quality score threshold; and (v) based on the identifying of the third group, transmitting at least a portion of at least one video content item of the identified third group to a digital video-effect (DVE) system, wherein the system is configured for using the at least the portion of the at least one video content item of the identified third group to generate a video content item.

IPC Classes  ?

  • G11B 27/036 - Insert-editing
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • H04N 21/854 - Content authoring

57.

Methods and apparatus for harmonic source enhancement

      
Application Number 18052481
Grant Number 11847998
Status In Force
Filing Date 2022-11-03
First Publication Date 2023-03-23
Grant Date 2023-12-19
Owner Gracenote, Inc. (USA)
Inventor Rafii, Zafar

Abstract

Methods and apparatus for harmonic source enhancement are disclosed herein. An example apparatus includes an interface to receive a media signal. The example apparatus also includes a harmonic source enhancer to determine a magnitude spectrogram of audio corresponding to the media signal; generate a time-frequency mask based on the magnitude spectrogram; and apply the time-frequency mask to the magnitude spectrogram to enhance a harmonic source of the media signal.
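
One common way to realise the masking step is a median-filter harmonic/percussive mask, sketched below with SciPy; the patent does not mandate this particular mask.

```python
import numpy as np
from scipy.signal import stft, istft, medfilt2d

def enhance_harmonic_source(audio, sample_rate, kernel=17):
    """Build a soft time-frequency mask favouring components that are smooth along
    time (harmonic) over those smooth along frequency (percussive), then apply it
    to the spectrogram and resynthesise."""
    _, _, Z = stft(audio, fs=sample_rate, nperseg=2048)
    magnitude = np.abs(Z)
    harmonic = medfilt2d(magnitude, kernel_size=[1, kernel])    # median along time
    percussive = medfilt2d(magnitude, kernel_size=[kernel, 1])  # median along frequency
    mask = harmonic ** 2 / (harmonic ** 2 + percussive ** 2 + 1e-12)
    _, enhanced = istft(Z * mask, fs=sample_rate, nperseg=2048)
    return enhanced
```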

IPC Classes  ?

  • H03G 5/00 - Tone control or bandwidth control in amplifiers
  • H04R 1/32 - Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
  • G10K 11/175 - Methods or devices for protecting against, or for damping, noise or other acoustic waves in general using interference effects; Masking sound
  • G06F 3/16 - Sound input; Sound output
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band

58.

Machine-led mood change

      
Application Number 18070214
Grant Number 11853645
Status In Force
Filing Date 2022-11-28
First Publication Date 2023-03-23
Grant Date 2023-12-26
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Dimaria, Peter C.
  • Gubman, Michael
  • Cremer, Markus K.
  • Summers, Cameron Aubrey
  • Tronel, Gregoire

Abstract

A machine is configured to identify a media file that, when played to a user, is likely to modify an emotional or physical state of the user to or towards a target emotional or physical state. The machine accesses play counts that quantify playbacks of media files for the user. The playbacks may be locally performed or detected by the machine from ambient sound. The machine accesses arousal scores of the media files and determines a distribution of the play counts over the arousal scores. The machine uses one or more relative maxima in the distribution in selecting a target arousal score for the user based on contextual data that describes an activity of the user. The machine selects one or more media files based on the target arousal score. The machine may then cause the selected media file to be played to the user.
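
A minimal sketch of the play-count distribution and peak selection, assuming per-track play counts and arousal scores; the bin count and peak rule are placeholders, and the contextual-data step is omitted.

```python
import numpy as np

def select_target_arousal(play_counts, arousal_scores, increase=True, n_bins=10):
    """Histogram the user's play counts over arousal scores, find local maxima, and
    pick the nearest peak above (or below) the current dominant arousal level."""
    hist, edges = np.histogram(arousal_scores, bins=n_bins, weights=play_counts)
    centers = (edges[:-1] + edges[1:]) / 2.0
    current = centers[int(np.argmax(hist))]
    peaks = [i for i in range(1, n_bins - 1)
             if hist[i] >= hist[i - 1] and hist[i] >= hist[i + 1]]
    candidates = [centers[i] for i in peaks
                  if (centers[i] > current if increase else centers[i] < current)]
    return min(candidates, key=lambda c: abs(c - current)) if candidates else current
```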

IPC Classes  ?

59.

Synchronizing streaming media content across devices

      
Application Number 17992020
Grant Number 11856253
Status In Force
Filing Date 2022-11-22
First Publication Date 2023-03-16
Grant Date 2023-12-26
Owner Gracenote, Inc. (USA)
Inventor
  • Jeyachandran, Suresh
  • Tsai, Roger
  • Quinn, Paul Emmanuel
  • Cremer, Markus K.

Abstract

Methods, apparatus, and systems are disclosed for synchronizing streaming media content. An example apparatus includes a storage device, and a processor to execute instructions to identify a first source streaming broadcast media to a first computing device based on an audio fingerprint of audio associated with the broadcast media, identify sources broadcasting the broadcast media streaming to the first computing device, the sources available to a second computing device including the processor, select a second source of the identified sources for streaming the broadcast media to the second computing device, the second source different than the first source, detect termination of the streaming of the broadcast media on the first computing device, the termination corresponding to a termination time of the broadcast media, and automatically start, by using the selected second source, streaming of the broadcast media to the second computing device at the termination time.

IPC Classes  ?

  • H04N 7/16 - Analogue secrecy systems; Analogue subscription systems
  • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronizing decoder's clock; Client middleware
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/2187 - Live feed
  • H04H 60/40 - Arrangements for identifying or recognising characteristics with a direct linkage to broadcast information or to broadcast space-time, e.g. for identifying broadcast stations or for identifying users for identifying broadcast time or space for identifying broadcast time
  • H04H 60/58 - Arrangements characterised by components specially adapted for monitoring, identification or recognition covered by groups or of audio
  • H04H 60/65 - Arrangements for services using the result of monitoring, identification or recognition covered by groups or for using the result on users' side
  • H04L 65/611 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for multicast or broadcast

60.

Audio fingerprinting

      
Application Number 18049882
Grant Number 11854557
Status In Force
Filing Date 2022-10-26
First Publication Date 2023-03-09
Grant Date 2023-12-26
Owner Gracenote, Inc. (USA)
Inventor
  • Han, Jinyu
  • Coover, Robert

Abstract

A machine may be configured to generate one or more audio fingerprints of one or more segments of audio data. The machine may access audio data to be fingerprinted and divide the audio data into segments. For any given segment, the machine may generate a spectral representation from the segment; generate a vector from the spectral representation; generate an ordered set of permutations of the vector; generate an ordered set of numbers from the permutations of the vector; and generate a fingerprint of the segment of the audio data, which may be considered a sub-fingerprint of the audio data. In addition, the machine or a separate device may be configured to determine a likelihood that candidate audio data matches reference audio data.
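
The permutation step can be sketched MinHash-style, assuming a fixed set of index permutations shared by all segments; the binarisation rule shown is a placeholder.

```python
import numpy as np

def permutation_subfingerprint(spectral_vector, permutations):
    """Derive an ordered set of numbers from fixed permutations of a segment's
    spectral vector: for each permutation, record the position of the first
    above-median value."""
    active = np.asarray(spectral_vector) > np.median(spectral_vector)
    numbers = []
    for perm in permutations:
        hits = np.flatnonzero(active[perm])
        numbers.append(int(hits[0]) if hits.size else len(perm))
    return numbers

# Hypothetical usage: the same fixed permutations must be reused for every segment.
# permutations = [np.random.default_rng(i).permutation(len(vector)) for i in range(32)]
```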

IPC Classes  ?

  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal

61.

Multiple stage indexing of audio content

      
Application Number 18050326
Grant Number 11954149
Status In Force
Filing Date 2022-10-27
First Publication Date 2023-03-09
Grant Date 2024-04-09
Owner Gracenote, Inc. (USA)
Inventor
  • Dimaria, Peter C.
  • Cremer, Markus K.
  • Mink, Barnabas
  • Koshio, Tanji
  • Tsuji, Kei

Abstract

Techniques of content unification are disclosed. In some example embodiments, a computer-implemented method comprises: determining a plurality of clusters based on a comparison of a plurality of audio content using a first matching criteria, each cluster of the plurality of clusters comprising at least two audio content from the plurality of audio content; for each cluster of the plurality of clusters, determining a representative audio content for the cluster from the at least two audio content of the cluster; loading the corresponding representative audio content of each cluster into an index; matching the query audio content to one of the representative audio contents using the first matching criteria; determining the corresponding cluster of the matched representative audio content; and identifying a match between the query audio content and at least one of the audio content of the cluster of the matched representative audio content based on a comparison using a second matching criteria.
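
A minimal sketch of the two-stage lookup, with `coarse_score` and `fine_match` standing in for the first and second matching criteria.

```python
def two_stage_match(query, clusters, coarse_score, fine_match):
    """First match the query against one representative per cluster, then search only
    the winning cluster's members with the stricter second criterion. Each cluster is
    assumed to be a dict with 'representative' and 'members' keys."""
    best_cluster = max(clusters, key=lambda c: coarse_score(query, c["representative"]))
    return [item for item in best_cluster["members"] if fine_match(query, item)]
```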

IPC Classes  ?

  • G06F 16/61 - Indexing; Data structures therefor; Storage structures
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

62.

Tagging an image with audio-related metadata

      
Application Number 18055699
Grant Number 11941048
Status In Force
Filing Date 2022-11-15
First Publication Date 2023-03-09
Grant Date 2024-03-26
Owner Gracenote, Inc. (USA)
Inventor
  • Modi, Nisarg A.
  • Hamilton, Brian T.

Abstract

In one aspect, an example method to be performed by a computing device includes (a) receiving a request to use a camera of the computing device; (b) in response to receiving the request, (i) using a microphone of the computing device to capture audio content and (ii) using the camera of the computing device to capture an image; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; and (d) outputting an indication of the identified reference audio content while displaying the captured image.

IPC Classes  ?

  • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/783 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • H04N 23/61 - Control of cameras or camera modules based on recognised objects

63.

Audiovisual content curation system

      
Application Number 17974095
Grant Number 11803588
Status In Force
Filing Date 2022-10-26
First Publication Date 2023-03-02
Grant Date 2023-10-31
Owner Gracenote, Inc. (USA)
Inventor
  • Dimaria, Peter C.
  • Silverman, Andrew

Abstract

Systems and methods are provided for filtering at least one media content catalog based on criteria for a station library to generate a first list of candidate tracks for the station library, combining a similarity score and a popularity score for each track of the first list of candidate tracks to generate a total score for each track of the first list of candidate tracks, generating a list of top ranked tracks for the first genre, and returning the list of top ranked tracks of the first genre as part of the station library.
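
A minimal sketch of the score combination and ranking; the equal weighting is a placeholder, since the abstract does not specify how the similarity and popularity scores are combined.

```python
def rank_station_candidates(candidates, similarity_weight=0.5):
    """Combine each candidate track's similarity and popularity scores into a total
    score and return the tracks ranked for the station library."""
    w = similarity_weight
    return sorted(candidates,
                  key=lambda t: w * t["similarity"] + (1 - w) * t["popularity"],
                  reverse=True)
```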

IPC Classes  ?

  • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
  • G06F 16/61 - Indexing; Data structures therefor; Storage structures
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

64.

Methods and systems for automatically generating backdrop imagery for a graphical user interface

      
Application Number 17462375
Grant Number 11915429
Status In Force
Filing Date 2021-08-31
First Publication Date 2023-03-02
Grant Date 2024-02-27
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Scott, Jeffrey

Abstract

In one aspect, an example method for generating a candidate image for use as backdrop imagery for a graphical user interface is disclosed. The method includes receiving a raw image and determining an edge image from the raw image using edge detection. The method also includes identifying a candidate region of interest (ROI) in the raw image based on the candidate ROI enclosing a portion of the edge image having edge densities exceeding a threshold edge density. The method also includes manipulating the raw image relative to a backdrop imagery canvas for a graphical user interface based on a location of the candidate ROI within the raw image. The method also includes generating, based on the manipulating, a set of candidate backdrop images in which at least a portion of the candidate ROI occupies a preselected area of the backdrop imagery canvas, and storing the set of candidate backdrop images.
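
A minimal sketch of the edge-density ROI search, assuming a BGR raw image, OpenCV's Canny detector, and a fixed placeholder ROI size; the canvas manipulation and candidate-image generation steps are omitted.

```python
import cv2
import numpy as np

def densest_edge_roi(image, roi_width=256, roi_height=256):
    """Slide a fixed-size window over the edge image (via a normalised box filter)
    and return the (x, y, w, h) window with the highest edge density."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200).astype(np.float32) / 255.0
    density = cv2.boxFilter(edges, -1, (roi_width, roi_height))  # mean edge density per window
    cy, cx = np.unravel_index(int(np.argmax(density)), density.shape)
    x = max(0, cx - roi_width // 2)
    y = max(0, cy - roi_height // 2)
    return x, y, roi_width, roi_height
```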

IPC Classes  ?

  • G06T 7/13 - Edge detection
  • G06T 3/40 - Scaling of a whole image or part thereof
  • G06T 7/149 - Segmentation; Edge detection involving deformable models, e.g. active contour models
  • G06V 10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]

65.

Automated video segmentation

      
Application Number 17475551
Grant Number 11769328
Status In Force
Filing Date 2021-09-15
First Publication Date 2023-03-02
Grant Date 2023-09-26
Owner Gracenote, Inc. (USA)
Inventor
  • Dimitriou, Konstantinos Antonio
  • Garg, Amanmeet

Abstract

Methods and systems for automated video segmentation are disclosed. A sequence of video frames having video segments of contextually-related sub-sequences may be received. Each frame may be labeled according to segment and segment class. A video graph may be constructed in which each node corresponds to a different frame, and each edge connects a different pair of nodes, and is associated with a time between video frames and a similarity metric of the connected frames. An artificial neural network (ANN) may be trained to predict both labels for the nodes and clusters of the nodes corresponding to predicted membership among the segments, using the video graph as input to the ANN, and ground-truth clusters of ground-truth labeled nodes. The ANN may be further trained to predict segment classes of the predicted clusters, using the segment classes as ground truths. The trained ANN may be configured for application to runtime video sequences.

IPC Classes  ?

  • G06V 20/00 - Scenes; Scene-specific elements
  • G06V 20/40 - Scenes; Scene-specific elements in video content
  • G06V 10/426 - Graphical representations
  • G06F 18/22 - Matching criteria, e.g. proximity measures
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06N 3/045 - Combinations of networks
  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs

66.

Methods and Apparatus for Audio Equalization Based on Variant Selection

      
Application Number 18048615
Status Pending
Filing Date 2022-10-21
First Publication Date 2023-02-23
Owner Gracenote, Inc. (USA)
Inventor
  • Coover, Robert
  • Renner, Joseph
  • Summers, Cameron A.

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for audio equalization based on variant selection. An example apparatus includes a processor to obtain training data, the training data including a plurality of reference audio signals each associated with a variant of music and organize the training data into a plurality of entries based on the plurality of reference audio signals, a training model executor to execute a neural network model using the training data, and a model trainer to train the neural network model by updating at least one weight corresponding to one of the entries in the training data when the neural network model does not satisfy a training threshold.

IPC Classes  ?

67.

METHODS AND APPARATUS TO GENERATE RECOMMENDATIONS BASED ON ATTRIBUTE VECTORS

      
Application Number 17779538
Status Pending
Filing Date 2020-11-17
First Publication Date 2023-01-26
Owner GRACENOTE, INC. (USA)
Inventor
  • Vartakavi, Aneesh
  • Rancel Gil, Carmen Yaiza
  • Gopakumar, Anjana
  • Cramer, Jason Timothy

Abstract

Methods and apparatus are disclosed to generate a recommendation, including an attribute vector aggregator to form a resultant attribute vector based on an input set of attribute vectors, the set of attribute vectors containing at least one of a media attribute vector, an attendee attribute vector, an artist attribute vector, an event attribute vector, or a venue attribute vector, and a recommendation generator, the recommendation generator including: a vector comparator to perform a comparison between an input attribute vector and other attribute vectors and a recommendation compiler to create one or more recommendations of at least one of media, an artist, an event, or a venue based on the comparison.

IPC Classes  ?

  • G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
  • G06F 16/638 - Presentation of query results
  • G06Q 30/06 - Buying, selling or leasing transactions
  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

68.

Generating and distributing a replacement playlist

      
Application Number 17944535
Grant Number 11921779
Status In Force
Filing Date 2022-09-14
First Publication Date 2023-01-19
Grant Date 2024-03-05
Owner Gracenote, Inc. (USA)
Inventor
  • Sharma, Rishabh
  • Cremer, Markus

Abstract

An embodiment may involve a server device transmitting, over a wide area network, a first playlist with a first duration to a client device. Possibly while the client device is playing out a current audio file of a first plurality of audio files in the playlist, the server device may receive an instruction from the client device and generate a second playlist. The second playlist may include references to a second plurality of audio files, where playout of the second plurality of audio files may have a duration that is less than the duration of the playout of the first plurality of audio files. The server device may transmit, over the wide area network, the second playlist to the client device. Reception of the second playlist at the client device may cause the audio player application to retrieve and play out the second plurality of audio files.

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
  • G06F 16/638 - Presentation of query results
  • G06F 16/635 - Filtering based on additional data, e.g. user or group profiles
  • G06F 16/951 - Indexing; Web crawling techniques
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04L 67/01 - Protocols
  • H04L 67/60 - Scheduling or organising the servicing of application requests, e.g. requests for application data transmissions using the analysis and optimisation of the required network resources
  • G06F 16/9538 - Presentation of query results
  • G06F 3/16 - Sound input; Sound output
  • G10L 13/00 - Speech synthesis; Text to speech systems
  • H04L 67/06 - Protocols specially adapted for file transfer, e.g. file transfer protocol [FTP]
  • H04L 67/10 - Protocols in which an application is distributed across nodes in the network
  • H04N 21/45 - Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies 

69.

AUTOMATED COVER SONG IDENTIFICATION

      
Application Number 17946915
Status Pending
Filing Date 2022-09-16
First Publication Date 2023-01-12
Owner Gracenote, Inc. (USA)
Inventor
  • Cremer, Markus K.
  • Rafii, Zafar
  • Coover, Robert
  • Seetharaman, Prem

Abstract

Example systems and methods for automated cover song identification are disclosed. An example apparatus includes at least one memory, machine-readable instructions, and one or more processors to execute the machine-readable instructions to at least execute a constant Q transform on time slices of first audio data to output constant Q transformed time slices, binarize the constant Q transformed time slices to output binarized and constant Q transformed time slices, execute a two-dimensional Fourier transform on time windows within the binarized and constant Q transformed time slices to output two-dimensional Fourier transforms of the time windows, generate a reference data structure based on a sequential order of the two-dimensional Fourier transforms, store the reference data structure in a database, and identify a query data structure associated with query audio data as a cover rendition of the first audio data based on a comparison of the query and reference data structures.
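
A minimal sketch of the reference-feature pipeline, assuming librosa for the constant-Q transform; the window length and binarisation rule are placeholders, and the data-structure comparison used for identification is omitted.

```python
import numpy as np
import librosa

def cover_song_reference(audio, sample_rate, window_frames=100):
    """Constant-Q transform, per-band binarisation, then the 2-D Fourier transform
    magnitude of successive time windows, which is insensitive to shifts along the
    pitch axis (e.g. key transposition)."""
    cqt = np.abs(librosa.cqt(audio, sr=sample_rate))
    binary = (cqt > np.median(cqt, axis=1, keepdims=True)).astype(np.float32)
    windows = []
    for start in range(0, binary.shape[1] - window_frames + 1, window_frames):
        patch = binary[:, start:start + window_frames]
        windows.append(np.abs(np.fft.fft2(patch)))
    return np.stack(windows) if windows else np.empty((0, cqt.shape[0], window_frames))
```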

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

70.

METHODS AND APPARATUS TO CONTROL LIGHTING EFFECTS

      
Application Number 17780938
Status Pending
Filing Date 2020-11-19
First Publication Date 2022-12-29
Owner Gracenote, Inc. (USA)
Inventor
  • Cremer, Markus Kurt
  • Coover, Robert
  • Rafii, Zafar
  • Vartakavi, Aneesh
  • Schmidt, Andreas
  • Hodges, Todd

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to adjust device control information. The example apparatus comprises a light drive waveform generator to obtain metadata corresponding to media and generate device control information based on the metadata, the device control information to inform a lighting device to enable consecutive light pulses; an effect engine to apply an attack parameter and a decay parameter to consecutive light pulses corresponding to the device control information, the attack parameter and the decay parameter based on the metadata to affect a shape of the consecutive light pulses; and a color timeline generator to generate color information based on the metadata, the color information to inform the lighting device to change a color state.

IPC Classes  ?

  • H05B 45/20 - Controlling the colour of the light
  • F21S 10/02 - Lighting devices or systems producing a varying lighting effect changing colours
  • H05B 47/105 - Controlling the light source in response to determined parameters

71.

INDEXING FINGERPRINTS

      
Application Number 17901726
Status Pending
Filing Date 2022-09-01
First Publication Date 2022-12-29
Owner Gracenote, Inc. (USA)
Inventor Wilkinson, Matthew James

Abstract

Example methods and systems for indexing fingerprints are described. Fingerprints may be made up of sub-fingerprints, each of which corresponds to a frame of the media, which is a smaller unit of time than the fingerprint. In some example embodiments, multiple passes are performed. For example, a first pass may be performed that compares the sub-fingerprints of the query fingerprint with every thirty-second sub-fingerprint of the reference material to identify likely matches. In this example, a second pass is performed that compares the sub-fingerprints of the query fingerprint with every fourth sub-fingerprint of the likely matches to provide a greater degree of confidence. A third pass may be performed that uses every sub-fingerprint of the most likely matches, to help distinguish between similar references or to identify with greater precision the timing of the match. Each of these passes is amenable to parallelization.
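The multi-pass idea in this abstract can be sketched in a few lines of Python, assuming each sub-fingerprint is a 32-bit integer compared by Hamming distance. The strides (32, 4, 1) follow the example given above; the candidate counts and scoring are illustrative.

def hamming(a, b):
    """Bitwise Hamming distance between two 32-bit sub-fingerprints."""
    return bin(int(a) ^ int(b)).count("1")

def pass_score(query, reference, stride):
    """Mean Hamming distance using every `stride`-th sub-fingerprint
    (lower means a closer match)."""
    q = list(query)[::stride]
    r = list(reference)[::stride]
    pairs = list(zip(q, r))
    return sum(hamming(a, b) for a, b in pairs) / max(len(pairs), 1)

def multi_pass_match(query, references):
    # Pass 1: cheap screen comparing every 32nd sub-fingerprint.
    survivors = sorted(references, key=lambda ref: pass_score(query, ref, 32))[:10]
    # Pass 2: every 4th sub-fingerprint of the likely matches.
    survivors = sorted(survivors, key=lambda ref: pass_score(query, ref, 4))[:3]
    # Pass 3: every sub-fingerprint, for the final decision and timing.
    return min(survivors, key=lambda ref: pass_score(query, ref, 1))

Because each pass scores candidates independently, the per-reference comparisons can be distributed across workers, which is the parallelization noted in the abstract.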

IPC Classes  ?

  • G06F 16/44 - Browsing; Visualisation therefor
  • G06F 16/41 - Indexing; Data structures therefor; Storage structures
  • G06F 16/48 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

72.

Machine-control of a device based on machine-detected transitions

      
Application Number 17888438
Grant Number 11935507
Status In Force
Filing Date 2022-08-15
First Publication Date 2022-12-29
Grant Date 2024-03-19
Owner GRACENOTE, INC. (USA)
Inventor
  • Jeffrey, Michael
  • Cremer, Markus K.
  • Lee, Dong-In

Abstract

Apparatus, methods, and systems that operate to provide interactive streaming content identification and processing are disclosed. An example apparatus includes a classifier to determine an audio characteristic value representative of an audio characteristic in audio; a transition detector to detect a transition between a first category and a second category by comparing the audio characteristic value to a threshold value among a set of threshold values, the set of threshold values corresponding to the first category and the second category; and a context manager to control a device to switch from a first fingerprinting algorithm to a second fingerprinting algorithm different than the first fingerprinting algorithm, responsive to the detected transition between the first category and the second category.
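A minimal sketch of the classify-and-switch behaviour described above is given below: a per-frame audio characteristic value is compared against a threshold to pick a category, and a detected category transition switches which fingerprinting algorithm is applied. The single threshold and the two fingerprint callables are placeholders, not components from the patent.

def make_transition_controller(threshold, music_fingerprint, speech_fingerprint):
    """Returns a per-frame processor plus shared state that counts
    detected category transitions (illustrative two-category case)."""
    state = {"category": None, "transitions": 0}

    def process(frame, characteristic_value):
        category = "music" if characteristic_value >= threshold else "speech"
        if state["category"] is not None and category != state["category"]:
            state["transitions"] += 1  # transition detected: algorithm switches
        state["category"] = category
        fingerprint = music_fingerprint if category == "music" else speech_fingerprint
        return fingerprint(frame)

    return process, state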

IPC Classes  ?

  • G10H 1/00 - ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE - Details of electrophonic musical instruments
  • H04L 65/612 - Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio for unicast
  • H04M 1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04N 21/439 - Processing of audio elementary streams
  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs

73.

Vehicle-based media system with audio ad and navigation-related action synchronization feature

      
Application Number 17879736
Grant Number 11799574
Status In Force
Filing Date 2022-08-02
First Publication Date 2022-12-08
Grant Date 2023-10-24
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying a geographic location associated with the identified reference audio content; and (f) based at least on the identified geographic location associated with the identified reference audio content, outputting, via the user interface of the vehicle-based media system, a prompt to navigate to the identified geographic location.

IPC Classes  ?

  • H04H 20/26 - Arrangements for switching distribution systems
  • G01C 21/36 - Input/output arrangements for on-board computers
  • H04H 20/62 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast for transportation systems, e.g. in vehicles
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • H04W 4/02 - Services making use of location information
  • H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
  • H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • G06Q 30/0251 - Targeted advertisements
  • G06Q 30/0241 - Advertisements
  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates
  • G10L 15/26 - Speech to text systems
  • H04N 21/41 - Structure of client; Structure of client peripherals
  • H04N 21/81 - Monomedia components thereof

74.

BROADCAST PROFILING SYSTEM

      
Application Number 17872781
Status Pending
Filing Date 2022-07-25
First Publication Date 2022-11-10
Owner Gracenote, Inc. (USA)
Inventor
  • Cremer, Markus K.
  • Sharma, Rishabh
  • Chien, Michael Yeehua
  • Jeyachandran, Suresh
  • Quinn, Paul Emmanuel

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for a broadcast profiling system. An example apparatus includes a memory storing instructions, and a processor configured to execute the instructions stored in the memory to compare a preference included in a user profile with a portion of a content station profile to determine whether the preference included in the user profile satisfies a threshold difference from the portion of the content station profile, in response to the threshold difference being satisfied, generate a station recommendation for a user associated with the user profile, and transmit an instruction to a device associated with the user, the instruction including the station recommendation, the instruction configured to cause a radio pre-set to be adjusted.
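A small Python sketch of the profile comparison is shown below, assuming both the user profile and each content station profile are genre-to-weight dictionaries. Cosine similarity and the 0.75 cutoff are illustrative stand-ins for the threshold test in the abstract.

import math

def recommend_stations(user_profile, station_profiles, threshold=0.75):
    """Recommend stations whose profile is close enough to the user's
    preferences; a client could adjust a radio pre-set accordingly."""
    def cosine(a, b):
        keys = set(a) | set(b)
        dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    return [station for station, profile in station_profiles.items()
            if cosine(user_profile, profile) >= threshold]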

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/638 - Presentation of query results

75.

Methods and apparatus to identify media that has been pitch shifted, time shifted, and/or resampled

      
Application Number 17866272
Grant Number 11748403
Status In Force
Filing Date 2022-07-15
First Publication Date 2022-11-03
Grant Date 2023-09-05
Owner Gracenote, Inc. (USA)
Inventor
  • Coover, Robert
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Hong, Yongju

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to identify media that has been pitch shifted, time shifted, and/or resampled. An example apparatus includes: memory; instructions in the apparatus; and processor circuitry to execute the instructions to: transmit a fingerprint of an audio signal and adjusting instructions to a central facility to facilitate a query, the adjusting instructions identifying at least one of a pitch shift, a time shift, or a resample ratio; obtain a response including an identifier for the audio signal and information corresponding to how the audio signal was adjusted; and change the adjusting instructions based on the information.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/632 - Query formulation
  • G06F 16/65 - Clustering; Classification
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 18/22 - Matching criteria, e.g. proximity measures

76.

Classifying segments of media content using closed captioning

      
Application Number 17861112
Grant Number 11736744
Status In Force
Filing Date 2022-07-08
First Publication Date 2022-10-27
Grant Date 2023-08-22
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Balasuriya, Lakshika
  • Ko, Chin-Ting

Abstract

In one aspect, an example method includes (i) retrieving, from a text index, closed captioning repetition data for a segment of a sequence of media content; (ii) generating features using the closed captioning repetition data; (iii) providing the features as input to a classification model, wherein the classification model is configured to output classification data indicative of a likelihood of the features being characteristic of a program segment; (iv) obtaining the classification data output by the classification model; (v) determining a prediction of whether the segment is a program segment using the classification data; and (vi) storing the prediction for the segment in a database.
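The sketch below illustrates steps (ii) through (v) with scikit-learn, assuming the closed-captioning repetition data for a segment reduces to a list of repetition counts; the features, tiny training set, and logistic-regression model are illustrative choices rather than the patented classification model.

import numpy as np
from sklearn.linear_model import LogisticRegression

def caption_features(repetition_counts):
    """Turn a segment's closed-captioning repetition counts into a small
    feature vector (mean, max, fraction of repeated lines)."""
    reps = np.asarray(repetition_counts, dtype=float)
    if reps.size == 0:
        reps = np.zeros(1)
    return np.array([reps.mean(), reps.max(), float((reps > 0).mean())])

# Hypothetical labelled segments: 1 = program segment, 0 = advertisement.
X = np.array([caption_features(r) for r in [[0, 0, 1], [5, 7, 6], [0, 1, 0], [8, 9, 9]]])
y = np.array([1, 0, 1, 0])
model = LogisticRegression().fit(X, y)

def predict_is_program(repetition_counts):
    prob = model.predict_proba([caption_features(repetition_counts)])[0, 1]
    return prob >= 0.5, prob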

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • G11B 27/34 - Indicating arrangements
  • G06F 40/20 - Natural language analysis
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • G06V 20/40 - Scenes; Scene-specific elements in video content

77.

Synthesizing a presentation from multiple media clips

      
Application Number 17854201
Grant Number 11862198
Status In Force
Filing Date 2022-06-30
First Publication Date 2022-10-20
Grant Date 2024-01-02
Owner Gracenote, Inc. (USA)
Inventor
  • Roberts, Dale T.
  • Cook, Randall E.
  • Cremer, Markus K.

Abstract

In an example implementation, a method is described. The implementation accesses first and second media clips. The implementation also matches a first fingerprint of the first media clip with a second fingerprint of the second media clip and determines an overlap of the first media clip with the second media clip. The implementation also, based on the overlap, merges the first and second media clips into a group of overlapping media clips, transmits, to a client device, data identifying the group of overlapping media clips and specifying a synchronization of the first media clip with the second media clip, and generates for display on a display device of the client computing device, a graphical user interface that identifies the group of overlapping media clips, specifies the synchronization of the first media clip with the second media clip, and allows access to, and manipulation of, the first and second media clips.
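One way to picture the overlap determination is the sketch below: given pairs of matched fingerprint frame indices between two clips, the dominant frame-index difference gives the relative time offset used to synchronize and merge them. The frame duration constant and the voting scheme are assumptions for illustration.

from collections import Counter

def estimate_overlap(matched_frames, frame_seconds=0.128):
    """Given (clip_a_frame, clip_b_frame) index pairs where fingerprints
    matched, return the most-voted time offset of clip B relative to clip A
    and the number of supporting matches."""
    diffs = Counter(b - a for a, b in matched_frames)
    if not diffs:
        return None
    offset_frames, votes = diffs.most_common(1)[0]
    return offset_frames * frame_seconds, votes

# Example: clip B overlaps clip A starting about 6.4 s in (3 supporting matches).
print(estimate_overlap([(10, 60), (11, 61), (12, 62), (40, 5)]))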

IPC Classes  ?

  • G11B 27/031 - Electronic editing of digitised analogue information signals, e.g. audio or video signals
  • G11B 27/10 - Indexing; Addressing; Timing or synchronising; Measuring tape travel
  • G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
  • G11B 27/34 - Indicating arrangements
  • H04N 5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects
  • H04N 21/218 - Source of audio or video content, e.g. local disk arrays
  • H04N 21/2743 - Video hosting of uploaded data from client
  • H04N 21/8549 - Creating video summaries, e.g. movie trailer
  • H04N 7/173 - Analogue secrecy systems; Analogue subscription systems with two-way working, e.g. subscriber sending a programme selection signal
  • G11B 27/036 - Insert-editing
  • G06F 16/40 - Information retrieval; Database structures therefor; File system structures therefor of multimedia data, e.g. slideshows comprising image and additional audio data
  • G06F 16/958 - Organisation or management of web site content, e.g. publishing, maintaining pages or automatic linking
  • G06F 40/103 - Formatting, i.e. changing of presentation of documents
  • G06F 40/106 - Display of layout of documents; Previewing
  • G06F 40/174 - Form filling; Merging
  • G06F 40/177 - Editing, e.g. inserting or deleting using ruled lines
  • G06F 40/14 - Tree-structured documents
  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions

78.

Methods and apparatus for audio equalization based on variant selection

      
Application Number 17846913
Grant Number 11902760
Status In Force
Filing Date 2022-06-22
First Publication Date 2022-10-13
Grant Date 2024-02-13
Owner Gracenote, Inc. (USA)
Inventor
  • Coover, Robert
  • Renner, Joseph
  • Summers, Cameron A.

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for audio equalization based on variant selection. An example apparatus to equalize audio includes at least one memory, machine readable instructions, and processor circuitry to at least one of instantiate or execute the machine readable instructions to train a neural network model to apply a first audio equalization profile to first audio associated with a first variant of media, and apply a second audio equalization profile to second audio associated with a second variant of media. The processor circuitry is to at least one of instantiate or execute the machine readable instructions to at least one of dispatch or execute the neural network model.

IPC Classes  ?

  • H04R 3/04 - Circuits for transducers for correcting frequency response
  • G06N 3/08 - Learning methods
  • G06N 20/00 - Machine learning
  • B60K 35/00 - Arrangement or adaptations of instruments
  • G06F 9/54 - Interprogram communication
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

79.

METHODS AND APPARATUS TO IDENTIFY MEDIA

      
Application Number 17833717
Status Pending
Filing Date 2022-06-06
First Publication Date 2022-09-22
Owner Gracenote, Inc. (USA)
Inventor
  • Coover, Robert
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Hong, Yongju

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed to identify media. An example method includes: in response to a query, generating an adjusted sample media fingerprint by applying an adjustment to a sample media fingerprint; comparing the adjusted sample media fingerprint to a reference media fingerprint; and in response to the adjusted sample media fingerprint matching the reference media fingerprint, transmitting information associated with the reference media fingerprint and the adjustment.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06F 16/632 - Query formulation
  • G06F 16/65 - Clustering; Classification
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually

80.

SEPARATING MEDIA CONTENT INTO PROGRAM SEGMENTS AND ADVERTISEMENT SEGMENTS

      
Application Number US2022013240
Publication Number 2022/186910
Status In Force
Filing Date 2022-01-21
Publication Date 2022-09-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Hodges, Todd, J.
  • Schmidt, Andreas
  • Gupta, Sharmishtha

Abstract

In one aspect, an example method includes (i) extracting, by a computing system, features from media content; (ii) generating, by the computing system, repetition data for respective portions of the media content using the features, with repetition data for a given portion including a list of other portions of the media content matching the given portion; (iii) determining, by the computing system, transition data for the media content; (iv) selecting, by the computing system, a portion within the media content using the transition data; (v) classifying, by the computing system, the portion as either an advertisement segment or a program segment using repetition data for the portion; and (vi) outputting, by the computing system, data indicating a result of the classifying for the portion.
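The sketch below shows steps (iv) and (v) in simplified form, assuming transition times split the content into candidate portions and that a portion repeating often elsewhere is more likely an advertisement. The repeat threshold and that directionality are illustrative assumptions, not taken from the record.

def segment_and_classify(transitions, repetition_data, ad_repeat_threshold=3):
    """Label each transition-bounded portion using how many other portions
    of the content it matches (its repetition list)."""
    labels = []
    for start, end in zip(transitions, transitions[1:]):
        matches = repetition_data.get((start, end), [])
        label = "advertisement" if len(matches) >= ad_repeat_threshold else "program"
        labels.append(((start, end), label))
    return labels

# Example: three portions; the middle one repeats five times elsewhere.
print(segment_and_classify(
    [0.0, 30.0, 60.0, 90.0],
    {(30.0, 60.0): ["p1", "p2", "p3", "p4", "p5"]}))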

IPC Classes  ?

  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
  • H04N 21/236 - Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator ] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream

81.

METHODS AND APPARATUS TO FINGERPRINT AN AUDIO SIGNAL

      
Application Number US2022015442
Publication Number 2022/186952
Status In Force
Filing Date 2022-02-07
Publication Date 2022-09-09
Owner GRACENOTE INC. (USA)
Inventor
  • Topchy, Alexander
  • Nielsen, Christen V.
  • Davis, Jeremey M.

Abstract

Methods, apparatus, systems, and articles of manufacture to fingerprint an audio signal. An example apparatus disclosed herein includes an audio segmenter to divide an audio signal into a plurality of audio segments, a bin normalizer to normalize the second audio segment to thereby create a first normalized audio segment, a subfingerprint generator to generate a first subfingerprint from the first normalized audio segment, the first subfingerprint including a first portion corresponding to a location of an energy extremum in the normalized second audio segment, a portion strength evaluator to determine a likelihood of the first portion to change, and a portion replacer to, in response to determining the likelihood does not satisfy a threshold, replace the first portion with a second portion to thereby generate a second subfingerprint.
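A compact sketch of the per-segment portion logic is given below: the segment's bin energies are normalized, the location of the energy extremum is encoded, and a weaker, less stable extremum is replaced by the runner-up location. The 1.2 stability margin and 5-bit encoding are illustrative, not the patented threshold test.

import numpy as np

def subfingerprint(segment_bins, n_location_bits=5):
    """Encode the location of the energy extremum of a normalized audio
    segment, falling back to the second-best location when the top one is
    judged likely to change."""
    energies = np.asarray(segment_bins, dtype=float)
    normalized = energies / (energies.sum() + 1e-12)

    order = np.argsort(normalized)[::-1]
    best, runner_up = int(order[0]), int(order[1])

    # "Portion strength": is the extremum clearly above the runner-up?
    stable = normalized[best] >= 1.2 * normalized[runner_up]
    location = best if stable else runner_up
    return format(location, f"0{n_location_bits}b")

print(subfingerprint([0.1, 0.9, 0.15, 0.05]))  # -> '00001'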

IPC Classes  ?

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 19/025 - Detection of transients or attacks for time/frequency resolution switching

82.

Separating Media Content into Program Segments and Advertisement Segments

      
Application Number 17496297
Status Pending
Filing Date 2021-10-07
First Publication Date 2022-09-08
Owner Gracenote, Inc. (USA)
Inventor
  • Hodges, Todd J.
  • Schmidt, Andreas
  • Gupta, Sharmishtha

Abstract

In one aspect, an example method includes (i) extracting, by a computing system, features from media content; (ii) generating, by the computing system, repetition data for respective portions of the media content using the features, with repetition data for a given portion including a list of other portions of the media content matching the given portion; (iii) determining, by the computing system, transition data for the media content; (iv) selecting, by the computing system, a portion within the media content using the transition data; (v) classifying, by the computing system, the portion as either an advertisement segment or a program segment using repetition data for the portion; and (vi) outputting, by the computing system, data indicating a result of the classifying for the portion.

IPC Classes  ?

  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/81 - Monomedia components thereof

83.

Methods and apparatus to fingerprint an audio signal

      
Application Number 17192592
Grant Number 11798577
Status In Force
Filing Date 2021-03-04
First Publication Date 2022-09-08
Grant Date 2023-10-24
Owner Gracenote, Inc. (USA)
Inventor
  • Topchy, Alexander
  • Nielsen, Christen V.
  • Davis, Jeremey M.

Abstract

Methods, apparatus, systems, and articles of manufacture to fingerprint an audio signal. An example apparatus disclosed herein includes an audio segmenter to divide an audio signal into a plurality of audio segments, a bin normalizer to normalize the second audio segment to thereby create a first normalized audio segment, a subfingerprint generator to generate a first subfingerprint from the first normalized audio segment, the first subfingerprint including a first portion corresponding to a location of an energy extremum in the normalized second audio segment, a portion strength evaluator to determine a likelihood of the first portion to change, and a portion replacer to, in response to determining the likelihood does not satisfy a threshold, replace the first portion with a second portion to thereby generate a second subfingerprint.

IPC Classes  ?

  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G11B 27/28 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording

84.

Music Release Disambiguation using Multi-Modal Neural Networks

      
Application Number 17203631
Status Pending
Filing Date 2021-03-16
First Publication Date 2022-09-08
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Dimitriou, Konstantinos Antonio

Abstract

Methods and systems for disambiguating musical artist names are disclosed. Musical-artist-release records (MARRs) may be input to a multi-modal artificial neural network (ANN). Each MARR may be associated with a musical release of an artist, and may include a release ID and an artist ID, and release data in categories including music media content and metadata categories including sub-definitive musician name of the artist and release subcategories. All n-tuples of MARRs may be formed, and for each n-tuple, the ANN may be applied concurrently to each MARR to generate a release feature vector (RFV) that includes a set of sub-feature vectors, each characterizing a different category of release data. For each n-tuple, the ANN may be trained to cluster in a multi-dimensional RFV space RFVs of the same artist ID, and to separate RFVs of different artist IDs. The MARRs and their RFVs may be stored in a release database.

IPC Classes  ?

85.

IDENTIFYING AND LABELING SEGMENTS WITHIN VIDEO CONTENT

      
Application Number US2022013239
Publication Number 2022/177693
Status In Force
Filing Date 2022-01-21
Publication Date 2022-08-25
Owner GRACENOTE, INC. (USA)
Inventor
  • Garg, Amanmeet
  • Gupta, Sharmishtha
  • Schmidt, Andreas
  • Balasuriya, Lakshika
  • Vartakavi, Aneesh

Abstract

In one aspect, an example method includes (i) obtaining fingerprint repetition data for a portion of video content, with the fingerprint repetition data including a list of other portions of video content matching the portion of video content and respective reference identifiers for the other portions of video content; (ii) identifying the portion of video content as a program segment rather than an advertisement segment based at least on a number of unique reference identifiers within the list of other portions of video content relative to a total number of reference identifiers within the list of other portions of video content; (iii) determining that the portion of video content corresponds to a program specified in an electronic program guide using a timestamp of the portion of video content; and (iv) storing an indication of the portion of video content in a data file for the program.
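Step (ii) boils down to a ratio of unique reference identifiers to total identifiers in the portion's match list, as in the sketch below. The 0.8 cutoff and which side of it means "program" versus "advertisement" are assumptions for illustration; the abstract only states that the ratio drives the decision.

def classify_portion(matching_reference_ids, threshold=0.8):
    """Return a label and the unique-identifier ratio for a portion of
    video content, based on its fingerprint repetition data."""
    total = len(matching_reference_ids)
    if total == 0:
        return "program", 0.0
    unique_ratio = len(set(matching_reference_ids)) / total
    label = "program" if unique_ratio >= threshold else "advertisement"
    return label, unique_ratio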

IPC Classes  ?

  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • H04N 21/8547 - Content authoring involving timestamps for synchronizing content
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/434 - Disassembling of a multiplex stream, e.g. demultiplexing audio and video streams or extraction of additional data from a video stream; Remultiplexing of multiplex streams; Extraction or processing of SI; Disassembling of packetised elementary stream

86.

Vehicle-based media system with audio ad and visual content synchronization feature

      
Application Number 17734845
Grant Number 11581969
Status In Force
Filing Date 2022-05-02
First Publication Date 2022-08-18
Grant Date 2023-02-14
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a vehicle-based media system includes (a) receiving audio content; (b) causing one or more speakers to output the received audio content; (c) using a microphone of the vehicle-based media system to capture the output audio content; (d) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (e) identifying visual content based at least on the identified reference audio content; and (f) outputting, via a user interface of the vehicle-based media system, the identified visual content.

IPC Classes  ?

  • H04N 21/414 - Specialised client platforms, e.g. receiver in car or embedded in a mobile appliance
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • H04N 21/41 - Structure of client; Structure of client peripherals
  • H04N 21/81 - Monomedia components thereof
  • H04H 20/62 - Arrangements specially adapted for specific applications, e.g. for traffic information or for mobile receivers for local area broadcast, e.g. instore broadcast for transportation systems, e.g. in vehicles
  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • H04W 4/02 - Services making use of location information
  • H04W 4/44 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P] for communication between vehicles and infrastructures, e.g. vehicle-to-cloud [V2C] or vehicle-to-home [V2H]
  • G01C 21/36 - Input/output arrangements for on-board computers
  • H04R 3/12 - Circuits for transducers for distributing signals to two or more loudspeakers
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G10L 15/26 - Speech to text systems
  • G06Q 30/0251 - Targeted advertisements
  • G06Q 30/0241 - Advertisements
  • G06Q 30/0207 - Discounts or incentives, e.g. coupons or rebates

87.

Generation of media station previews using a secondary tuner

      
Application Number 17737732
Grant Number 11644824
Status In Force
Filing Date 2022-05-05
First Publication Date 2022-08-18
Grant Date 2023-05-09
Owner GRACENOTE, INC. (USA)
Inventor
  • Qin, John M.
  • Jeyachandran, Suresh
  • Fasching, Damon P.

Abstract

In one aspect, an example method includes (i) while a media playback device of a vehicle is playing back content received on a first channel, generating, by the media playback device, a query fingerprint using second content received on a second channel; (ii) sending, by the media playback device, the query fingerprint to a server that maintains a reference database containing a plurality of reference fingerprints; (iii) receiving, by the media playback device from the server, identifying information corresponding to a reference fingerprint of the plurality of reference fingerprints that matches the query fingerprint; and (iv) while the media playback device is playing back the first content received on the first channel, providing, by the media playback device for display, at least a portion of the identifying information.

IPC Classes  ?

  • G05B 19/42 - Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
  • G06F 3/0481 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
  • H04N 5/50 - Tuning indicators; Automatic tuning control

88.

IDENTIFYING AND LABELING SEGMENTS WITHIN VIDEO CONTENT

      
Application Number 17401656
Status Pending
Filing Date 2021-08-13
First Publication Date 2022-08-18
Owner Gracenote, Inc. (USA)
Inventor
  • Garg, Amanmeet
  • Gupta, Sharmishtha
  • Schmidt, Andreas
  • Balasuriya, Lakshika
  • Vartakavi, Aneesh

Abstract

In one aspect, an example method includes (i) obtaining fingerprint repetition data for a portion of video content, with the fingerprint repetition data including a list of other portions of video content matching the portion of video content and respective reference identifiers for the other portions of video content; (ii) identifying the portion of video content as a program segment rather than an advertisement segment based at least on a number of unique reference identifiers within the list of other portions of video content relative to a total number of reference identifiers within the list of other portions of video content; (iii) determining that the portion of video content corresponds to a program specified in an electronic program guide using a timestamp of the portion of video content; and (iv) storing an indication of the portion of video content in a data file for the program.

IPC Classes  ?

  • H04N 21/44 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs
  • H04N 21/8352 - Generation of protective data, e.g. certificates involving content or source identification data, e.g. UMID [Unique Material Identifier]
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

89.

Classifying segments of media content using closed captioning

      
Application Number 17359433
Grant Number 11418821
Status In Force
Filing Date 2021-06-25
First Publication Date 2022-08-11
Grant Date 2022-08-16
Owner Gracenote, Inc. (USA)
Inventor
  • Vartakavi, Aneesh
  • Balasuriya, Lakshika
  • Ko, Chin-Ting

Abstract

In one aspect, an example method includes (i) retrieving, from a text index, closed captioning repetition data for a segment of a sequence of media content; (ii) generating features using the closed captioning repetition data; (iii) providing the features as input to a classification model, wherein the classification model is configured to output classification data indicative of a likelihood of the features being characteristic of a program segment; (iv) obtaining the classification data output by the classification model; (v) determining a prediction of whether the segment is a program segment using the classification data; and (vi) storing the prediction for the segment in a database.

IPC Classes  ?

  • H04N 21/234 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs
  • G11B 27/34 - Indicating arrangements
  • G06F 40/20 - Natural language analysis
  • H04N 21/845 - Structuring of content, e.g. decomposing content into time segments
  • G06V 20/40 - Scenes; Scene-specific elements in video content

90.

Dynamic content delivery based on vehicle navigational attributes

      
Application Number 17734760
Grant Number 11674808
Status In Force
Filing Date 2022-05-02
First Publication Date 2022-08-11
Grant Date 2023-06-13
Owner GRACENOTE, INC. (USA)
Inventor
  • Cremer, Markus K.
  • Jeyachandran, Suresh
  • Quinn, Paul Emmanuel
  • Tsai, Roger

Abstract

Systems and methods are disclosed for dynamic content delivery based on vehicle navigational attributes. An example apparatus includes at least one memory, machine readable instructions, and processor circuitry to execute the machine readable instructions to at least obtain navigational attributes from an electronic device of a vehicle via a network, determine a relevancy score for respective ones of first sporting event data items based on the navigational attributes, based on a determination that the navigational attributes correspond to a driving condition, identify a second sporting event data item of the first sporting event data items based on a relevancy score of the second sporting event data item corresponding to the driving condition, and transmit the second sporting event data item to the electronic device of the vehicle to cause the second sporting event data item to be presented.

IPC Classes  ?

  • G06F 15/16 - Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
  • G01C 21/26 - Navigation; Navigational instruments not provided for in groups specially adapted for navigation in a road network
  • H04L 67/306 - User profiles
  • H04L 67/12 - Protocols specially adapted for proprietary or special-purpose networking environments, e.g. medical networks, sensor networks, networks in vehicles or remote metering networks
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06F 16/9535 - Search customisation based on user profiles and personalisation
  • G06F 16/9537 - Spatial or temporal dependent retrieval, e.g. spatiotemporal queries
  • G06F 16/2457 - Query processing with adaptation to user needs
  • H04W 4/021 - Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
  • H04W 4/024 - Guidance services
  • H04W 4/40 - Services specially adapted for particular environments, situations or purposes for vehicles, e.g. vehicle-to-pedestrians [V2P]
  • G01C 21/36 - Input/output arrangements for on-board computers
  • H04L 67/52 - Network services specially adapted for the location of the user terminal
  • H04L 67/50 - Network services

91.

Automated generation of banner images

      
Application Number 17478898
Grant Number 11711593
Status In Force
Filing Date 2021-09-18
First Publication Date 2022-08-11
Grant Date 2023-07-25
Owner Gracenote, Inc. (USA)
Inventor Vartakavi, Aneesh

Abstract

Example systems and methods for automated generation of banner images are disclosed. A program identifier associated with a particular media program may be received by a system, and used for accessing a set of iconic digital images and corresponding metadata associated with the particular media program. The system may select a particular iconic digital image for placing a banner of text associated with the particular media program, by applying an analytical model of banner-placement criteria to the iconic digital images. The system may apply another analytical model for banner generation to the particular iconic image to determine (i) dimensions and placement of a bounding box for containing the text, (ii) segmentation of the text for display within the bounding box, and (iii) selection of font, text size, and font color for display of the text. The system may store the particular iconic digital image and banner metadata specifying the banner.

IPC Classes  ?

  • H04N 21/84 - Generation or processing of descriptive data, e.g. content descriptors
  • H04N 21/81 - Monomedia components thereof
  • H04N 21/485 - End-user interface for client configuration
  • G06N 20/00 - Machine learning

92.

Audio playout report for ride-sharing session

      
Application Number 17729624
Grant Number 11581011
Status In Force
Filing Date 2022-04-26
First Publication Date 2022-08-04
Grant Date 2023-02-14
Owner Gracenote, Inc. (USA)
Inventor Modi, Nisarg A.

Abstract

In one aspect, an example method to be performed by a computing device includes (a) determining that a ride-sharing session is active; (b) in response to determining the ride-sharing session is active, using a microphone of the computing device to capture audio content; (c) identifying reference audio content that has at least a threshold extent of similarity with the captured audio content; (d) determining that the ride-sharing session is inactive; and (e) outputting an indication of the identified reference audio content.

IPC Classes  ?

  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G06F 16/68 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 3/16 - Sound input; Sound output

93.

AUDIO CONTENT RECOGNITION METHOD AND SYSTEM

      
Application Number US2021063300
Publication Number 2022/146674
Status In Force
Filing Date 2021-12-14
Publication Date 2022-07-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Berrian, Alexander
  • Hodges, Todd
  • Coover, Robert
  • Wilkinson, Matthew
  • Rafii, Zafar

Abstract

A method implemented by a computing system comprises generating, by the computing system, a fingerprint comprising a plurality of bin samples associated with audio content. Each bin sample is specified within a frame of the fingerprint and is associated with one of a plurality of non-overlapping frequency ranges and a value indicative of a magnitude of energy associated with a corresponding frequency range. The computing system removes, from the fingerprint, a plurality of bin samples associated with a frequency sweep in the audio content.

IPC Classes  ?

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content

94.

COVER SONG IDENTIFICATION METHOD AND SYSTEM

      
Application Number US2021063318
Publication Number 2022/146677
Status In Force
Filing Date 2021-12-14
Publication Date 2022-07-07
Owner GRACENOTE, INC. (USA)
Inventor
  • Liu, Xiaochen
  • Renner, Joseph
  • Morris, Joshua
  • Hodges, Todd
  • Coover, Robert
  • Rafii, Zafar

Abstract

A cover song identification method implemented by a computing system comprises receiving, by a computing system and from a user device, harmonic pitch class profile (HPCP) information that specifies one or more HPCP features associated with target audio content. A major chord profile feature and a minor chord profile feature associated with the target audio content are derived from the HPCP features. Machine learning logic of the computing system determines, based on the major chord profile feature and the minor chord profile feature, a relatedness between the target audio content and each of a plurality of audio content items specified in records of a database. Each audio content item is associated with cover song information. Cover song information associated with an audio content item having a highest relatedness to the target audio content is communicated to the user device.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/632 - Query formulation
  • G06F 16/638 - Presentation of query results
  • G06F 16/61 - Indexing; Data structures therefor; Storage structures
  • G06Q 50/18 - Legal services; Handling legal documents
  • G06N 20/00 - Machine learning

95.

COVER SONG IDENTIFICATION METHOD AND SYSTEM

      
Application Number 17335657
Status Pending
Filing Date 2021-06-01
First Publication Date 2022-06-30
Owner Gracenote, Inc. (USA)
Inventor
  • Liu, Xiaochen
  • Renner, Joseph P.
  • Morris, Joshua E.
  • Hodges, Todd J.
  • Coover, Robert
  • Rafii, Zafar

Abstract

A cover song identification method implemented by a computing system comprises receiving, by a computing system and from a user device, harmonic pitch class profile (HPCP) information that specifies one or more HPCP features associated with target audio content. A major chord profile feature and a minor chord profile feature associated with the target audio content are derived from the HPCP features. Machine learning logic of the computing system determines, based on the major chord profile feature and the minor chord profile feature, a relatedness between the target audio content and each of a plurality of audio content items specified in records of a database. Each audio content item is associated with cover song information. Cover song information associated with an audio content item having a highest relatedness to the target audio content is communicated to the user device.
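The chord-profile derivation can be pictured with the numpy sketch below, which correlates an averaged 12-bin HPCP with major and minor triad templates in all twelve rotations; the templates, plain correlation, and cosine-similarity relatedness are illustrative stand-ins for the trained machine-learning logic described above.

import numpy as np

def chord_profile_features(hpcp):
    """Derive 12-dimensional major and minor chord profile features from
    HPCP data shaped as (frames, 12) or a single 12-bin vector."""
    hpcp = np.asarray(hpcp, dtype=float).reshape(-1, 12).mean(axis=0)
    major = np.array([1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0], dtype=float)  # root, 3rd, 5th
    minor = np.array([1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0], dtype=float)
    major_profile = np.array([np.dot(hpcp, np.roll(major, k)) for k in range(12)])
    minor_profile = np.array([np.dot(hpcp, np.roll(minor, k)) for k in range(12)])
    return major_profile, minor_profile

def relatedness(query_hpcp, candidate_hpcp):
    """Cosine similarity of concatenated chord-profile features; a trained
    model would replace this in a full system."""
    qa, qb = chord_profile_features(query_hpcp)
    ca, cb = chord_profile_features(candidate_hpcp)
    q, c = np.concatenate([qa, qb]), np.concatenate([ca, cb])
    return float(np.dot(q, c) / (np.linalg.norm(q) * np.linalg.norm(c) + 1e-9))

Ranking database items by this relatedness score and returning the cover song information of the top item mirrors the final step of the method.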

IPC Classes  ?

  • G10L 25/90 - Pitch determination of speech signals
  • G10L 19/022 - Blocking, i.e. grouping of samples in time; Choice of analysis windows; Overlap factoring
  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G06F 16/632 - Query formulation
  • G06N 20/00 - Machine learning
  • G06N 3/08 - Learning methods

96.

Audio content recognition method and system

      
Application Number 17315820
Grant Number 11727953
Status In Force
Filing Date 2021-05-10
First Publication Date 2022-06-30
Grant Date 2023-08-15
Owner Gracenote, Inc. (USA)
Inventor
  • Berrian, Alexander
  • Hodges, Todd J.
  • Coover, Robert
  • Wilkinson, Matthew James
  • Rafii, Zafar

Abstract

A method implemented by a computing system comprises generating, by the computing system, a fingerprint comprising a plurality of bin samples associated with audio content. Each bin sample is specified within a frame of the fingerprint and is associated with one of a plurality of non-overlapping frequency ranges and a value indicative of a magnitude of energy associated with a corresponding frequency range. The computing system removes, from the fingerprint, a plurality of bin samples associated with a frequency sweep in the audio content.
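The removal step can be approximated as in the sketch below, where a frequency sweep appears as a long run of peak bins that drift steadily across consecutive frames; the drift and run-length limits are illustrative choices, and `bin_samples` is assumed to be a list of (frame, frequency_bin, energy) tuples.

def remove_sweep_bins(bin_samples, max_drift=2, min_run=8):
    """Drop bin samples that form a long, steadily drifting run of peak
    frequencies across consecutive frames (a sweep-like pattern)."""
    by_frame = sorted(bin_samples)
    sweep = set()
    run = [by_frame[0]] if by_frame else []
    for prev, cur in zip(by_frame, by_frame[1:]):
        drifting = (cur[0] == prev[0] + 1 and
                    0 < abs(cur[1] - prev[1]) <= max_drift)
        if drifting:
            run.append(cur)
        else:
            if len(run) >= min_run:
                sweep.update(run)
            run = [cur]
    if len(run) >= min_run:
        sweep.update(run)
    return [s for s in bin_samples if s not in sweep]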

IPC Classes  ?

  • G10L 19/018 - Audio watermarking, i.e. embedding inaudible data in the audio signal
  • G10L 25/54 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for retrieval
  • G10L 25/27 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique
  • G10L 25/72 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for transmitting results of analysis
  • G10L 25/18 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
  • G10L 19/028 - Noise substitution, e.g. substituting non-tonal spectral components by noisy source

97.

Methods and apparatus for efficient media indexing

      
Application Number 17688632
Grant Number 11874814
Status In Force
Filing Date 2022-03-07
First Publication Date 2022-06-23
Grant Date 2024-01-16
Owner Gracenote, Inc. (USA)
Inventor
  • Wilkinson, Matthew James
  • Scott, Jeffrey
  • Coover, Robert
  • Dimitriou, Konstantinos Antonios

Abstract

Methods, apparatus, systems and articles of manufacture are disclosed for efficient media indexing. An example method disclosed herein includes means for initiating a list of hash seeds, the list of hash seeds including at least a first hash seed value and a second hash seed value among other hash seed values, means for generating to generate a first bucket distribution based on the first hash seed value and a first hash function and generate a second bucket distribution based on the second hash seed value used in combination with the first hash seed value, means for determining to determine a first entropy value of the first bucket distribution, wherein data associated with the first bucket distribution is stored in a first hash table and determine a second entropy value of the second bucket distribution.
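The seed-selection idea can be sketched as below: each candidate hash seed is scored by the Shannon entropy of the bucket distribution it produces over the fingerprint keys, and the seed giving the flattest (highest-entropy) distribution is kept. Using SHA-1 and 1024 buckets here is an illustrative choice, not the patented hash function.

import hashlib
import math
from collections import Counter

def bucket_entropy(keys, seed, n_buckets=1024):
    """Shannon entropy of the bucket distribution produced by hashing each
    key together with a candidate seed (higher = more even)."""
    counts = Counter(
        int.from_bytes(hashlib.sha1(f"{seed}:{k}".encode()).digest()[:4], "big") % n_buckets
        for k in keys)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def pick_best_seed(keys, candidate_seeds, n_buckets=1024):
    """Keep the seed whose bucket distribution has the highest entropy."""
    return max(candidate_seeds, key=lambda s: bucket_entropy(keys, s, n_buckets))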

IPC Classes  ?

  • G06F 16/22 - Indexing; Data structures therefor; Storage structures
  • G06F 16/41 - Indexing; Data structures therefor; Storage structures
  • G06F 7/58 - Random or pseudo-random number generators

98.

Inserting information into playing content

      
Application Number 17526949
Grant Number 11531699
Status In Force
Filing Date 2021-11-15
First Publication Date 2022-06-02
Grant Date 2022-12-20
Owner GRACENOTE, INC. (USA)
Inventor
  • Brenner, Vadim
  • Cremer, Markus K.

Abstract

Example methods and systems for inserting information into playing content are described. In some example embodiments, the methods and systems may identify a break in content playing via a playback device, select an information segment representative of information received by the playback device to present during the identified break, and insert the information segment into the content playing via the playback device upon an occurrence of the identified break.

IPC Classes  ?

  • G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
  • G11B 27/11 - Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information not detectable on the record carrier
  • G11B 27/036 - Insert-editing

99.

Logo recognition in images and videos

      
Application Number 17672963
Grant Number 11861888
Status In Force
Filing Date 2022-02-16
First Publication Date 2022-06-02
Grant Date 2024-01-02
Owner Gracenote, Inc. (USA)
Inventor
  • Pereira, Jose Pio
  • Brocklehurst, Kyle
  • Kulkarni, Sunil Suresh
  • Wendt, Peter

Abstract

Accurate detection of logos in media content on media presentation devices is addressed. Logos and products are detected in media content produced in retail deployments using a camera. Logo recognition uses saliency analysis, segmentation techniques, and stroke analysis to segment likely logo regions. Logo recognition may suitably employ feature extraction, signature representation, and logo matching. These three approaches make use of neural network based classification and optical character recognition (OCR). One method for OCR recognizes individual characters then performs string matching. Another OCR method uses segment level character recognition with N-gram matching. Synthetic image generation for training of a neural net classifier and utilizing transfer learning features of neural networks are employed to support fast addition of new logos for recognition.

IPC Classes  ?

  • G06T 7/60 - Analysis of geometric attributes
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06T 7/11 - Region-based segmentation
  • G06V 10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
  • G06F 18/24 - Classification techniques
  • G06T 7/33 - Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
  • G06V 10/50 - Extraction of image or video features by summing image-intensity values; Projection analysis

100.

System And Method For Multi-Modal Podcast Summarization

      
Application Number 17677186
Status Pending
Filing Date 2022-02-22
First Publication Date 2022-06-02
Owner Gracenote, Inc. (USA)
Inventor
  • Garg, Amanmeet
  • Vartakavi, Aneesh
  • Morris, Joshua Ernest

Abstract

In one aspect, a method includes receiving podcast content, generating a transcript of at least a portion of the podcast content, and parsing the podcast content to (i) identify audio segments within the podcast content, (ii) determine classifications for the audio segments, (iii) identify audio segment offsets, and (iv) identify sentence offsets. The method also includes based on the audio segments, the classifications, the audio segment offsets, and the sentence offsets, dividing the generated transcript into text sentences and, from among the text sentences of the divided transcript, selecting a group of text sentences for use in generating an audio summary of the podcast content. The method also includes based on timestamps at which the group of text sentences begin in the podcast content, combining portions of audio in the podcast content that correspond to the group of text sentences to generate an audio file representing the audio summary.
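The final assembly step can be pictured with the numpy sketch below, which concatenates the audio spans corresponding to the selected sentences' timestamps into one summary waveform; sentence selection itself (transcription, segmentation, classification, ranking) is assumed to have happened upstream.

import numpy as np

def assemble_audio_summary(audio, sample_rate, selected_sentences):
    """Concatenate the (start_seconds, end_seconds) spans of the chosen
    sentences from the full podcast waveform into a summary waveform."""
    pieces = []
    for start_s, end_s in selected_sentences:
        lo = int(start_s * sample_rate)
        hi = int(end_s * sample_rate)
        pieces.append(audio[lo:hi])
    return np.concatenate(pieces) if pieces else np.array([], dtype=audio.dtype)

# Example with 1 s of silence at 16 kHz and two chosen sentences.
audio = np.zeros(16000, dtype=np.float32)
summary = assemble_audio_summary(audio, 16000, [(0.10, 0.35), (0.60, 0.90)])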

IPC Classes  ?

  • G10L 15/26 - Speech to text systems
  • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit