Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
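The metadata-driven voiceprint selection described above can be illustrated with a minimal sketch. The feature vectors, the `VOICEPRINTS` store, the `agent_id` metadata key, and the similarity threshold are all hypothetical stand-ins, not the patented implementation:

```python
import math

# Hypothetical store of acoustic voiceprint models, one per known speaker.
VOICEPRINTS = {
    "agent_007": [0.9, 0.1, 0.3],
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def diarize(segments, metadata, threshold=0.8):
    """Label each audio segment as the identified speaker or 'other'."""
    # Select the voiceprint model via the call's metadata.
    model = VOICEPRINTS[metadata["agent_id"]]
    return ["agent" if cosine(f, model) >= threshold else "other"
            for f in segments]

labels = diarize([[0.9, 0.1, 0.3], [0.1, 0.9, 0.2]],
                 {"agent_id": "agent_007"})
```

The key point is that the model is chosen by metadata lookup before any acoustic scoring takes place.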
16 - Paper, cardboard and goods made from these materials
18 - Leather and imitations of leather
25 - Clothing; footwear; headgear
42 - Scientific, technological and industrial services, research and design
Goods & Services
Stickers; notepads. Tote bags; shoulder bags; all-purpose reusable carrying bags. Clothing, namely, shirts, t-shirts, sweatshirts. Providing temporary use of on-line non-downloadable computer software for use in customer relationship management (CRM); providing temporary use of on-line non-downloadable computer software for use in customer relationship management (CRM) software applications that include self-service and automated customer engagement via intelligent personal assistant devices.
By formalizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. The ontology is built on the premise that meaningful terms are detected in the corpus and then classified according to specific semantic concepts, or entities. Once the main terms are defined, direct relations or linkages can be formed between these terms and their associated entities. Then, the relations are grouped into themes, which are groups or abstracts that contain synonymous relations. The disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The ontology programming automatically trains itself to understand the domain or environment of the communication data by processing and analyzing a defined corpus of communication data.
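The term, relation, and theme pipeline described above can be sketched as follows. The corpus, term list, synonym table, and proximity window are invented for illustration; the actual ontology programming is far richer:

```python
# Toy corpus and term vocabulary (hypothetical).
corpus = ["cancel my account", "close my account", "reset my password"]
TERMS = {"cancel", "close", "reset", "account", "password"}
SYNONYMS = {"cancel": "terminate", "close": "terminate"}

def extract_relations(sentences, window=2):
    """A relation is a pair of terms appearing within `window` words."""
    rels = set()
    for s in sentences:
        words = s.split()
        for i, w in enumerate(words):
            for v in words[i + 1:i + 1 + window]:
                if w in TERMS and v in TERMS:
                    rels.add((w, v))
    return rels

def group_into_themes(relations):
    """A theme groups relations whose first terms are synonymous."""
    themes = {}
    for a, b in relations:
        themes.setdefault((SYNONYMS.get(a, a), b), set()).add((a, b))
    return themes

rels = extract_relations(corpus)
themes = group_into_themes(rels)
```

Here "cancel my account" and "close my account" yield distinct relations that collapse into one theme, mirroring how synonymous relations are grouped.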
Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
A method of zoning a transcription of audio data includes separating the transcription of audio data into a plurality of utterances. A probability that each word in an utterance is a meaning unit boundary is calculated. The utterance is split into two new utterances at a word with a maximum calculated probability. At least one of the two new utterances that is shorter than a maximum utterance threshold is identified as a meaning unit.
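The recursive splitting step can be sketched as below. The per-word boundary probabilities and the utterance-length threshold are invented values; in practice the probabilities would come from a trained model:

```python
MAX_UTTERANCE_LEN = 4  # hypothetical maximum utterance threshold, in words

def zone(words, boundary_prob):
    """Recursively split `words` into meaning units at the word with the
    maximum boundary probability, until each piece is under the threshold."""
    if len(words) <= MAX_UTTERANCE_LEN:
        return [words]
    # Pick the interior word with the maximum calculated boundary probability.
    split_at = max(range(1, len(words)), key=lambda i: boundary_prob[i])
    return (zone(words[:split_at], boundary_prob[:split_at]) +
            zone(words[split_at:], boundary_prob[split_at:]))

words = ["thanks", "for", "calling", "how", "can", "i", "help"]
probs = [0.0, 0.1, 0.2, 0.9, 0.1, 0.1, 0.2]
units = zone(words, probs)
```

The high boundary score before "how" splits the utterance into two meaning units.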
A faster and more streamlined system for providing summary and analysis of large amounts of communication data is described. System and methods are disclosed that employ an ontology to automatically summarize communication data and present the summary to the user in a form that does not require the user to listen to the communication data. In one embodiment, the summary is presented as written snippets, or short fragments, of relevant communication data that capture the meaning of the data relating to a search performed by the user. Such snippets may be based on theme and meaning unit identification.
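Snippet presentation around a user's search hit might look like the following sketch. The transcript, query term, and window size are illustrative; the disclosed system additionally uses theme and meaning unit identification:

```python
def snippets(transcript, query, window=3):
    """Return short fragments of the transcript around each hit for `query`,
    so the user need not listen to the underlying audio."""
    words = transcript.split()
    out = []
    for i, w in enumerate(words):
        if w == query:
            out.append(" ".join(words[max(0, i - window):i + window + 1]))
    return out

result = snippets("i would like to cancel my subscription today please",
                  "cancel")
```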
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Computer hardware and software for use in the fields of customer service and engagement, customer and employee support, employee and operations management, and compliance and security management incorporating workforce engagement software, namely, software for workforce forecasting and scheduling, knowledge and employee assistance, quality assurance, operational insights and analytics, and operational and performance management; software for self-service, namely, intelligent virtual assistants, web and mobile self-service, and social communities; software for experience management, namely, software for capturing and correlating voice, video, email, text, chat, social digital, and survey interactions with customers for the purpose of improving customer service and experience; software for enterprise recording to enhance regulatory compliance and/or minimize fraud, namely, omnichannel recording of voice, text, screen, and/or video, compliance recording, voice biometrics and authentication, and real-time analysis of caller behavior and related call parameters to detect potential fraud. Technology consulting services in the fields of business enterprise solutions, computer software, hardware, and cloud deployment, self-service and automation, telecommunications, digital security and surveillance, computer and telecommunication networks and multimedia.
Methods, systems, and computer readable media for automated transcription model adaptation include obtaining audio data from a plurality of audio files. The audio data is transcribed to produce at least one audio file transcription which represents a plurality of transcription alternatives for each audio file. Speech analytics are applied to each audio file transcription. A best transcription is selected from the plurality of transcription alternatives for each audio file. Statistics from the selected best transcription are calculated. An adapted model is created from the calculated statistics.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
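The adaptation loop described above can be sketched as: pick the best-scoring alternative per file, pool word statistics, and derive an adapted model. The file names, alternatives, and confidence scores are invented, and a unigram model stands in for whatever model the system actually adapts:

```python
from collections import Counter

# Hypothetical transcription alternatives with confidence scores.
alternatives = {
    "call1.wav": [("i want to cancel", 0.9), ("i want two cancel", 0.4)],
    "call2.wav": [("cancel my plan", 0.8), ("cancel me plan", 0.3)],
}

def adapt(alts):
    counts = Counter()
    for options in alts.values():
        # Select the best transcription from the alternatives.
        best_text, _ = max(options, key=lambda t: t[1])
        # Accumulate statistics from the selected best transcription.
        counts.update(best_text.split())
    total = sum(counts.values())
    # The adapted model here is a simple unigram distribution.
    return {w: c / total for w, c in counts.items()}

model = adapt(alternatives)
```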
An embodiment of the method of processing communication data to identify one or more themes within the communication data includes identifying terms in a set of communication data, wherein a term is a word or short phrase, and defining relations in the set of communication data based on the terms, wherein the relation is a pair of terms that appear in proximity to one another. The method further includes identifying themes in the set of communication data based on the relations, wherein a theme is a group of one or more relations that have similar meanings, and storing the terms, the relations, and the themes in the database.
Systems, methods, and media for the automated removal of private information are provided herein. In an example implementation, a method for automatic removal of private information may include: receiving a transcript of communication data; applying a private information rule to the transcript in order to identify private information in the transcript; tagging the identified private information with a tag comprising an identification of the private information; applying a compliance rule to the tagged transcript in order to evaluate a compliance of the transcript with privacy standards; removing the identified private information from the transcript to produce a redacted transcript; and storing the redacted transcript.
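The tag-then-remove flow can be sketched with a single rule. The card-number pattern and the tag format are illustrative, not the full rule set or compliance evaluation:

```python
import re

# One hypothetical private-information rule: a 16-digit card number.
PRIVATE_RULES = {"card_number": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b")}

def redact(transcript):
    """Tag matches of each private-information rule, then remove them."""
    tags = []
    redacted = transcript
    for name, rule in PRIVATE_RULES.items():
        for m in rule.finditer(transcript):
            # Tag the identified private information with its rule name.
            tags.append((name, m.group()))
        # Remove the identified private information from the transcript.
        redacted = rule.sub(f"[REDACTED:{name}]", redacted)
    return redacted, tags

out, tags = redact("my card is 4111 1111 1111 1111 thanks")
```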
Voice activity detection (VAD) is an enabling technology for a variety of speech-based applications. Herein disclosed is a robust VAD algorithm that is also language independent. Rather than classifying short segments of the audio as either “speech” or “silence”, the VAD as disclosed herein employs a soft-decision mechanism. The VAD outputs a speech-presence probability, which is based on a variety of characteristics.
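The soft-decision idea can be sketched as below: instead of a hard speech/silence label, each frame gets a probability. Mapping frame energy through a logistic curve is an illustrative choice, not the patented combination of characteristics:

```python
import math

def frame_energy(frame):
    # Mean squared amplitude of the frame's samples.
    return sum(x * x for x in frame) / len(frame)

def speech_presence_prob(frame, threshold=0.01, slope=500.0):
    """Map frame energy to a speech-presence probability in (0, 1)
    via a logistic curve (hypothetical threshold and slope)."""
    return 1.0 / (1.0 + math.exp(-slope * (frame_energy(frame) - threshold)))

quiet = [0.001] * 160    # near-silent frame
loud = [0.5, -0.5] * 80  # high-energy frame
p_quiet = speech_presence_prob(quiet)
p_loud = speech_presence_prob(loud)
```

Downstream applications can then threshold or accumulate these probabilities as they see fit, which is the practical benefit of a soft decision.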
Disclosed herein are methods of diarizing audio data using first-pass blind diarization and second-pass blind diarization that generate speaker statistical models, wherein the first pass-blind diarization is on a per-frame basis and the second pass-blind diarization is on a per-word basis, and methods of creating acoustic signatures for a common speaker based only on the statistical models of the speakers in each audio session.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
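The two-pass structure can be sketched as follows: a per-frame first pass assigns each frame to a speaker cluster, and a per-word second pass relabels each word by majority vote over its frames. The one-dimensional features and the split point are invented simplifications:

```python
def first_pass(frames, split=0.5):
    """Per-frame pass: crude two-cluster assignment on a scalar feature."""
    return [0 if f < split else 1 for f in frames]

def second_pass(words, frame_labels):
    """Per-word pass: each word takes the majority label of its frames."""
    labels = []
    for start, end in words:  # word boundaries as frame index ranges
        chunk = frame_labels[start:end]
        labels.append(1 if sum(chunk) * 2 > len(chunk) else 0)
    return labels

frames = [0.1, 0.2, 0.9, 0.8, 0.9, 0.1]
word_labels = second_pass([(0, 2), (2, 5), (5, 6)], first_pass(frames))
```

The second pass smooths frame-level noise because a word's label is decided by all of its frames at once.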
13.
System and method of video capture and search optimization for creating an acoustic voiceprint
Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
14.
System and method for determining the compliance of agent scripts
Systems and methods of script identification in audio data. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made whether the script text occurred in the audio data.
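The decode-and-match step can be sketched as below, with the script model simplified to normalized text matching. The utterances and script text are illustrative:

```python
def normalize(text):
    # Lowercase and collapse whitespace for robust matching.
    return " ".join(text.lower().split())

def script_occurred(utterances, script_text):
    """Return True if the script text occurs in any decoded utterance."""
    target = normalize(script_text)
    return any(target in normalize(u) for u in utterances)

utterances = ["Hello, thank you for calling ACME support",
              "how may I help"]
found = script_occurred(utterances, "thank you for calling")
```

A real system would decode the utterances against the script model rather than comparing strings, but the per-utterance check is the same shape.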
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/00 - Speaker identification or verification
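The linguistic-labeling idea can be sketched as: build a word model from transcripts heuristically attributed to agents, then label each turn of a new transcript by how strongly it matches that model. All data, the unigram model, and the threshold are invented for illustration:

```python
from collections import Counter

# Hypothetical transcripts selected by a heuristic as agent speech.
AGENT_TRANSCRIPTS = ["thank you for calling how may i help you",
                     "is there anything else i can help you with"]

def build_model(transcripts):
    """Create a linguistic model (unigram frequencies) from transcripts."""
    counts = Counter(w for t in transcripts for w in t.split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def label_turns(turns, model, threshold=0.5):
    """Label a turn 'agent' if most of its words appear in the agent model."""
    labels = []
    for turn in turns:
        words = turn.split()
        hits = sum(1 for w in words if w in model)
        labels.append("agent" if hits / len(words) > threshold
                      else "customer")
    return labels

model = build_model(AGENT_TRANSCRIPTS)
labels = label_turns(["how may i help you", "my router is broken"], model)
```

Agent speech is formulaic across calls, which is why a model built from selected transcripts can separate it from customer speech.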
19.
System and method of diarization and labeling of audio data
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
20.
System and method of diarization and labeling of audio data
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
22.
System and method of diarization and labeling of audio data
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/00 - Speaker identification or verification
24.
Diarization using linguistic labeling with segmented and clustered diarized textual transcripts
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/00 - Speaker identification or verification
A method of zoning a transcription of audio data includes separating the transcription of audio data into a plurality of utterances. A probability that each word in an utterance is a meaning unit boundary is calculated. The utterance is split into two new utterances at a word with a maximum calculated probability. At least one of the two new utterances that is shorter than a maximum utterance threshold is identified as a meaning unit.
Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 17/00 - Speaker identification or verification
28.
Diarization using linguistic labeling to create and apply a linguistic model
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
29.
Diarization using textual and audio speaker labeling
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
A system and method for distinguishing the sentiment of utterances in a dialog is disclosed. The system utilizes a lexicon that is expanded from a seed using unsupervised machine learning. What results is a sentiment classifier that may be optimized for a variety of environments (e.g., conversation, chat, email, etc.), each of which may communicate sentiment differently.
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/197 - Probabilistic grammars, e.g. word n-grams
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
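Seed-lexicon expansion can be sketched as follows: words that co-occur with seed sentiment words inherit their polarity, approximating the unsupervised expansion described above. The seed, corpus, and scoring are invented simplifications:

```python
# Hypothetical seed lexicon: word -> polarity.
SEED = {"great": 1, "terrible": -1}
corpus = ["great service really helpful",
          "terrible wait really annoying"]

def expand(seed, sentences):
    """Words co-occurring with seed words inherit the sentence polarity."""
    lexicon = dict(seed)
    for s in sentences:
        words = s.split()
        polarity = sum(seed.get(w, 0) for w in words)
        for w in words:
            if w not in lexicon and polarity != 0:
                lexicon[w] = 1 if polarity > 0 else -1
    return lexicon

def classify(utterance, lexicon):
    """Sum lexicon polarities over the utterance's words."""
    score = sum(lexicon.get(w, 0) for w in utterance.split())
    return ("positive" if score > 0
            else "negative" if score < 0 else "neutral")

lexicon = expand(SEED, corpus)
sentiment = classify("helpful and really great", lexicon)
```

Because the expansion runs over a corpus from one environment (conversation, chat, email, and so on), the resulting classifier is tuned to how that environment expresses sentiment.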
Systems and methods of automated ontology development operate on a corpus of communication data. The corpus includes communication data from a plurality of interactions and is processed. A plurality of terms are extracted from the corpus, each term being a plurality of words that identifies a single concept within the corpus. An ontology is automatedly generated from the extracted terms.
By formalizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. The ontology is built on the premise that meaningful terms are detected in the corpus and then classified according to specific semantic concepts, or entities. Once the main terms are defined, direct relations or linkages can be formed between these terms and their associated entities. Then, the relations are grouped into themes, which are groups or abstracts that contain synonymous relations. The disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The ontology programming automatically trains itself to understand the domain or environment of the communication data by processing and analyzing a defined corpus of communication data.
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcribed audio data to label a portion of the transcribed audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcribed customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
A faster and more streamlined system for providing summary and analysis of large amounts of communication data is described. System and methods are disclosed that employ an ontology to automatically summarize communication data and present the summary to the user in a form that does not require the user to listen to the communication data. In one embodiment, the summary is presented as written snippets, or short fragments, of relevant communication data that capture the meaning of the data relating to a search performed by the user. Such snippets may be based on theme and meaning unit identification.
Voice activity detection (VAD) is an enabling technology for a variety of speech-based applications. Herein disclosed is a robust VAD algorithm that is also language independent. Rather than classifying short segments of the audio as either “speech” or “silence”, the VAD as disclosed herein employs a soft-decision mechanism. The VAD outputs a speech-presence probability, which is based on a variety of characteristics.
A video stream from a webcam or video telephone is received. The video stream can be analyzed in real-time as it is being received or can be recorded and stored for later analysis. Information within the video streams can be extracted and processed by a facial and video content recognition engine and the information derived therefrom can be stored as metadata. The metadata can be used for enriching the call content recorded by a recorder. The information derived from the video streams can be used to solve business and legal issues.
Embodiments described herein provide systems and methods for sharing encoder output of video streams. In a particular embodiment, a method provides determining video profiles for each of a plurality of devices. The method further provides determining if two or more of the video profiles are similar by determining if parameters associated with each video profile differ by less than a threshold value. The method further provides transmitting a video stream encoded in a single format to the devices if they have similar profiles and transmitting video streams encoded in different formats to the devices if they do not have similar profiles.
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
H04N 7/52 - Systems for transmission of a pulse code modulated video signal with one or more other pulse code modulated signals, e.g. an audio signal or a synchronizing signal
H04N 7/24 - Systems for the transmission of television signals using pulse code modulation
H04N 21/4402 - Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to MPEG-4 scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
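The profile-similarity test described above can be sketched as follows: two device profiles are "similar" when every shared parameter differs by less than a threshold, in which case one encoded stream can serve both devices. The parameter names, values, and 15% threshold are illustrative:

```python
def profiles_similar(p1, p2, threshold=0.15):
    """True if every parameter differs by less than `threshold` (relative)."""
    return all(abs(p1[k] - p2[k]) / max(p1[k], p2[k]) < threshold
               for k in p1)

def plan_encodes(profiles, threshold=0.15):
    """Group devices so each group shares a single encoded stream."""
    groups = []
    for dev, prof in profiles.items():
        for group in groups:
            if profiles_similar(profiles[group[0]], prof, threshold):
                group.append(dev)  # reuse this group's encoder output
                break
        else:
            groups.append([dev])   # needs its own encode
    return groups

profiles = {
    "phone": {"bitrate": 1000, "fps": 30},
    "tablet": {"bitrate": 1100, "fps": 30},  # within 15% of phone
    "tv": {"bitrate": 8000, "fps": 60},      # far outside the threshold
}
groups = plan_encodes(profiles)
```

Sharing one encode across similar profiles saves encoder capacity at the cost of serving some devices a slightly mismatched stream, which is the trade-off the threshold controls.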
A system and method for distinguishing the sentiment of utterances in a dialog is disclosed. The system utilizes a lexicon that is expanded from a seed using unsupervised machine learning. What results is a sentiment classifier that may be optimized for a variety of environments (e.g., conversation, chat, email, etc.), each of which may communicate sentiment differently.
G10L 15/18 - Speech classification or search using natural language modelling
G10L 15/197 - Probabilistic grammars, e.g. word n-grams
G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
Disclosed herein are methods of diarizing audio data using first-pass blind diarization and second-pass blind diarization that generate speaker statistical models, wherein the first pass-blind diarization is on a per-frame basis and the second pass-blind diarization is on a per-word basis, and methods of creating acoustic signatures for a common speaker based only on the statistical models of the speakers in each audio session.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
Embodiments described herein provide systems and methods for sharing encoder output of video streams. In a particular embodiment, a method provides determining video profiles for each of a plurality of devices. The method further provides determining if two or more of the video profiles are similar by determining if parameters associated with each video profile differ by less than a threshold value. The method further provides transmitting a video stream encoded in a single format to the devices if they have similar profiles and transmitting video streams encoded in different formats to the devices if they do not have similar profiles.
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
H04N 21/2343 - Processing of video elementary streams, e.g. splicing of video streams or manipulating MPEG-4 scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
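The profile-similarity test described above reduces to a per-parameter threshold comparison. The parameter names and threshold values below are illustrative assumptions, not taken from the patent:

```python
def similar_profiles(p1, p2, thresholds):
    # Two video profiles are "similar" if every compared parameter
    # differs by less than its threshold; similar devices can then
    # share a single encoded stream.
    return all(abs(p1[k] - p2[k]) < thresholds[k] for k in thresholds)

a = {"width": 1280, "height": 720, "bitrate_kbps": 2500}
b = {"width": 1280, "height": 720, "bitrate_kbps": 2700}
th = {"width": 1, "height": 1, "bitrate_kbps": 500}
shared = similar_profiles(a, b, th)  # only bitrate differs, by 200 < 500
```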
Disclosed herein are methods of diarizing audio data using first-pass blind diarization and second-pass blind diarization that generate speaker statistical models, wherein the first-pass blind diarization is on a per-frame basis and the second-pass blind diarization is on a per-word basis, and methods of creating acoustic signatures for a common speaker based only on the statistical models of the speakers in each audio session.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
In a method of diarization of audio data, audio data is segmented into a plurality of utterances. Each utterance is represented as an utterance model representative of a plurality of feature vectors. The utterance models are clustered. A plurality of speaker models are constructed from the clustered utterance models. A hidden Markov model is constructed of the plurality of speaker models. A sequence of identified speaker models is decoded.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
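The clustering stage of the diarization above can be illustrated with a greedy sketch: each utterance is reduced to a mean feature vector, vectors near an existing cluster centroid join that cluster (a stand-in speaker model), and the decoded speaker sequence is the per-utterance cluster label. The patented method uses per-utterance models and a hidden Markov model; this shows only the intuition, with an assumed distance threshold.

```python
import math

def diarize(utterance_vectors, distance_threshold):
    # Greedy online clustering: assign each utterance to the nearest
    # centroid if close enough, otherwise start a new speaker cluster.
    centroids, labels = [], []
    for v in utterance_vectors:
        best, best_d = None, None
        for i, c in enumerate(centroids):
            d = math.dist(v, c)
            if best_d is None or d < best_d:
                best, best_d = i, d
        if best is not None and best_d < distance_threshold:
            labels.append(best)
        else:
            centroids.append(list(v))
            labels.append(len(centroids) - 1)
    return labels

# Two speakers alternating, represented by well-separated feature vectors
seq = diarize([(1.0, 1.0), (9.0, 9.0), (1.2, 0.9), (8.8, 9.1)], 3.0)
```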
Systems, methods, and media for the automated removal of private information are provided herein. In an example implementation, a method for automatic removal of private information may include: receiving a transcript of communication data; applying a private information rule to the transcript in order to identify private information in the transcript; tagging the identified private information with a tag comprising an identification of the private information; applying a compliance rule to the tagged transcript in order to evaluate the compliance of the transcript with privacy standards; removing the identified private information from the transcript to produce a redacted transcript; and storing the redacted transcript.
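The tag-then-remove pipeline can be sketched with pattern-based rules. The two rules below (card number, SSN) are illustrative assumptions; a real deployment would use a vetted rule set:

```python
import re

# Illustrative private-information rules (assumed, not from the patent)
RULES = {
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(transcript):
    # First tag each match with an identification of the private
    # information, then remove it to produce the redacted transcript.
    tags = []
    redacted = transcript
    for name, pattern in RULES.items():
        for match in pattern.finditer(transcript):
            tags.append((name, match.group()))
        redacted = pattern.sub(f"[{name} REDACTED]", redacted)
    return redacted, tags

text = "My card is 4111 1111 1111 1111 and my SSN is 123-45-6789."
clean, found = redact(text)
```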
Systems and methods automatedly evaluate a transcription quality. Audio data is obtained. The audio data is segmented into a plurality of utterances with a voice activity detector operating on a computer processor. The plurality of utterances are transcribed into at least one word lattice with a large vocabulary continuous speech recognition system operating on the processor. A minimum Bayes risk decoder is applied to the at least one word lattice to create at least one confusion network. At least one conformity ratio is calculated from the at least one confusion network.
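One plausible reading of a conformity ratio over a confusion network: the fraction of network slots whose best word clearly dominates its alternatives. The abstract does not pin down the exact formula, so the posterior threshold and the ratio definition below are assumptions.

```python
def conformity_ratio(confusion_network, confident=0.8):
    # Each slot maps candidate words to posterior probabilities.
    # A slot "conforms" when its top posterior reaches the threshold.
    ok = sum(1 for slot in confusion_network if max(slot.values()) >= confident)
    return ok / len(confusion_network)

cn = [
    {"hello": 0.95, "hollow": 0.05},   # confident slot
    {"world": 0.55, "word": 0.45},     # ambiguous slot
]
ratio = conformity_ratio(cn)  # one of two slots is confident
```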
Methods, systems, and computer readable media for automated transcription model adaptation include obtaining audio data from a plurality of audio files. The audio data is transcribed to produce at least one audio file transcription which represents a plurality of transcription alternatives for each audio file. Speech analytics are applied to each audio file transcription. A best transcription is selected from the plurality of transcription alternatives for each audio file. Statistics from the selected best transcription are calculated. An adapted model is created from the calculated statistics.
G06F 16/683 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
Systems, methods, and media for the application of funnel analysis using desktop analytics and textual analytics to map and analyze the flow of customer service interactions. In an example implementation, the method includes: defining at least one flow that is representative of a series of events comprising at least one speech event, at least one Data Processing Activity (DPA) event, and at least one Computer Telephone Integration (CTI) event; receiving customer service interaction data comprising communication data, DPA metadata, and CTI metadata; applying the at least one flow to the customer service interaction data; determining if the customer service interaction data meets the at least one flow; and producing an automated indication based upon the determination.
H04M 1/64 - Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
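Determining whether interaction data "meets" a flow amounts to checking that the flow's events (speech, DPA, CTI) occur in order within the interaction's event stream. A subsequence-match sketch, with assumed event names:

```python
def flow_met(events, flow):
    # True if every step of the flow appears in the interaction's
    # events, in order (other events may occur in between).
    it = iter(events)
    return all(step in it for step in flow)

met = flow_met(
    [("speech", "greeting"), ("dpa", "open_crm"), ("cti", "transfer")],
    [("speech", "greeting"), ("cti", "transfer")],
)
```

The `step in it` idiom consumes the iterator, so each flow step must be found after the previous one, which is exactly the ordered-sequence requirement.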
An embodiment of the method of processing communication data to identify one or more themes within the communication data includes identifying terms in a set of communication data, wherein a term is a word or short phrase, and defining relations in the set of communication data based on the terms, wherein the relation is a pair of terms that appear in proximity to one another. The method further includes identifying themes in the set of communication data based on the relations, wherein a theme is a group of one or more relations that have similar meanings, and storing the terms, the relations, and the themes in the database.
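The relation-defining step above (a relation is a pair of terms in proximity) can be sketched directly. The window size is an assumption; the method does not state one:

```python
from itertools import combinations

def find_relations(tokens, terms, window=3):
    # A relation is a pair of known terms whose positions differ by
    # at most `window` tokens.
    positions = [(i, t) for i, t in enumerate(tokens) if t in terms]
    relations = set()
    for (i, a), (j, b) in combinations(positions, 2):
        if a != b and j - i <= window:
            relations.add((a, b))
    return relations

tokens = "please reset my password before the billing call".split()
rels = find_relations(tokens, {"reset", "password", "billing"})
# "reset"/"password" are 2 apart and "password"/"billing" 3 apart: both
# within the window; "reset"/"billing" are 5 apart and are not related
```

Themes would then be formed by grouping relations with similar meanings, e.g. by clustering relation pairs with a semantic similarity measure.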
The disclosed solution uses machine learning-based methods to improve the knowledge extraction process in a specific domain or business environment. By formulizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. Based on the self-training mechanism developed by the inventors, the ontology programming automatically trains itself to understand the business environment by processing and analyzing a defined corpus of communication data. For example, the disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The disclosed system and method further relates to leveraging the ontology to assess a dataset and conduct a funnel analysis to identify patterns, or sequences of events, in the dataset.
A method of evaluating scripts in an interpersonal communication includes monitoring a customer service interaction. At least one portion of a script is identified. At least one script requirement is determined. A determination is made whether the at least one portion of the script meets the at least one script requirement. An alert is generated indicative of a non-compliant script.
Systems and methods of automated adaptation of a language model for transcription of audio data include obtaining audio data. The audio data is transcribed with a language model to produce a plurality of audio file transcriptions. A quality of the plurality of audio file transcriptions is evaluated. At least one best transcription from a plurality of audio file transcriptions is selected based upon the evaluated quality. Statistics are calculated from the selected at least one best transcription from the plurality of audio file transcriptions. The language model is modified from the calculated statistics.
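The select-best-then-recompute-statistics loop can be illustrated with a toy unigram stand-in for the full language model. The quality threshold and unigram representation are assumptions:

```python
from collections import Counter

def adapt_language_model(unigram_counts, transcriptions, quality_scores,
                         min_quality=0.9):
    # Keep only transcriptions whose evaluated quality is high enough,
    # accumulate their word statistics, and fold them into the model.
    adapted = Counter(unigram_counts)
    for text, score in zip(transcriptions, quality_scores):
        if score >= min_quality:
            adapted.update(text.lower().split())
    return adapted

lm = Counter({"account": 5, "balance": 3})
new = adapt_language_model(lm, ["account balance please", "uh um noise"],
                           [0.95, 0.4])
```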
In a method of diarization of audio data, audio data is segmented into a plurality of utterances. Each utterance is represented as an utterance model representative of a plurality of feature vectors. The utterance models are clustered. A plurality of speaker models are constructed from the clustered utterance models. A hidden Markov model is constructed of the plurality of speaker models. A sequence of identified speaker models is decoded.
G10L 17/06 - Decision making techniques; Pattern matching strategies
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
A video stream from a webcam or video telephone is received. The video stream can be analyzed in real-time as it is being received or can be recorded and stored for later analysis. Information within the video streams can be extracted and processed by a facial and video content recognition engine and the information derived therefrom can be stored as metadata. The metadata can be used for enriching the call content recorded by a recorder. The information derived from the video streams can be used to solve business and legal issues.
Systems and methods automatedly evaluate a transcription quality. Audio data is obtained. The audio data is segmented into a plurality of utterances with a voice activity detector operating on a computer processor. The plurality of utterances are transcribed into at least one word lattice with a large vocabulary continuous speech recognition system operating on the processor. A minimum Bayes risk decoder is applied to the at least one word lattice to create at least one confusion network. At least one conformity ratio is calculated from the at least one confusion network.
A system and method of separating speakers in an audio file includes obtaining an audio file. The audio file is transcribed into at least one text file by a transcription server. Homogenous speech segments are identified within the at least one text file. The audio file is segmented into homogenous audio segments that correspond to the identified homogenous speech segments. The homogenous audio segments of the audio file are separated into a first speaker audio file and a second speaker audio file. The first speaker audio file and the second speaker audio file are transcribed to produce a diarized transcript.
G10L 25/78 - Detection of presence or absence of voice signals
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G10L 17/06 - Decision making techniques; Pattern matching strategies
58.
System and method for determining the compliance of agent scripts
Systems and methods of script identification in obtained audio data. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made whether the script text occurred in the audio data.
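The occurrence determination can be sketched as an in-order coverage check: the fraction of script words found, in order, across the decoded utterances must reach a coverage threshold. This is a simplified stand-in for decoding with a script model, and the threshold is an assumption:

```python
def script_occurred(utterances, script_words, coverage=0.8):
    # Walk the decoded words and advance through the script in order;
    # count how much of the script was matched.
    spoken = " ".join(utterances).lower().split()
    i, hits = 0, 0
    for w in spoken:
        if i < len(script_words) and w == script_words[i]:
            hits += 1
            i += 1
    return hits / len(script_words) >= coverage

said = script_occurred(["thank you for calling", "acme support"],
                       "thank you for calling acme support".split())
```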
A method for developing an ontology for processing communication data, wherein the ontology is a structural representation of language elements and the relationships between those language elements within the domain, includes providing a training set of communication data and processing the training set of communication data to identify terms within the training set of communication data, wherein a term is a word or short phrase. The method further includes utilizing the terms to identify relations within the training set of communication data, wherein a relation is a pair of terms that appear in proximity to one another. Finally, the terms and the relations are stored in a database.
Disclosed herein are methods of diarizing audio data using first-pass blind diarization and second-pass blind diarization that generate speaker statistical models, wherein the first-pass blind diarization is on a per-frame basis and the second-pass blind diarization is on a per-word basis, and methods of creating acoustic signatures for a common speaker based only on the statistical models of the speakers in each audio session.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
Disclosed herein are methods of diarizing audio data using first-pass blind diarization and second-pass blind diarization that generate speaker statistical models, wherein the first-pass blind diarization is on a per-frame basis and the second-pass blind diarization is on a per-word basis, and methods of creating acoustic signatures for a common speaker based only on the statistical models of the speakers in each audio session.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
G10L 25/84 - Detection of presence or absence of voice signals for discriminating voice from noise
A method for expanding an initial ontology via processing of communication data, wherein the initial ontology is a structural representation of language elements comprising a set of entities, a set of terms, a set of term-entity associations, a set of entity-association rules, a set of abstract relations, and a set of relation instances. A method for extracting a set of significant phrases and a set of significant phrase co-occurrences from an input set of documents further includes utilizing the terms to identify relations within the training set of communication data, wherein a relation is a pair of terms that appear in proximity to one another.
A method for converting speech to text in a speech analytics system is provided. The method includes receiving audio data containing speech made up of sounds from an audio source, processing the sounds with a phonetic module resulting in symbols corresponding to the sounds, and processing the symbols with a language module and occurrence table resulting in text. The method also includes determining a probability of correct translation for each word in the text, comparing the probability of correct translation for each word in the text to the occurrence table, and adjusting the occurrence table based on the probability of correct translation for each word in the text.
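The final step, adjusting the occurrence table from per-word translation confidence, can be sketched as a simple running update. The update rule and learning rate below are assumptions; the abstract says only that the table is adjusted:

```python
def adjust_occurrence_table(table, words, probabilities, rate=0.1):
    # Nudge each word's occurrence weight toward its observed
    # probability of correct translation.
    for word, p in zip(words, probabilities):
        old = table.get(word, 0.5)
        table[word] = old + rate * (p - old)
    return table

table = adjust_occurrence_table({"invoice": 0.5}, ["invoice"], [0.9])
# 0.5 + 0.1 * (0.9 - 0.5) = 0.54
```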
The disclosed solution uses machine learning-based methods to improve the knowledge extraction process in a specific domain or business environment. By formulizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. Based on the self-training mechanism developed by the inventors, the ontology programming automatically trains itself to understand the business environment by processing and analyzing a defined corpus of communication data. For example, the disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The disclosed system and method further relates to leveraging the ontology to assess a dataset and conduct a funnel analysis to identify patterns, or sequences of events, in the dataset.
A video system comprises a camera, an enclosure, and an interface system integrated with the enclosure. The interface is configured to receive, on a link external to the enclosure, video captured by the camera and transfer the video on a link internal to the enclosure. The video system further includes a conversion module and a processing system. The conversion module is configured to receive the video on the internal link and transfer the video for delivery to the processing system on another internal link which has a different characteristic impedance than the first internal link. The processing system is configured to receive the video on the other internal link and encode the video. The video system further includes a monitoring system which is configured to display and store the video.
Machine learning-based methods improve the knowledge extraction process in a specific domain or business environment, and then provide that extracted knowledge in a word cloud user interface display capable of summarizing and conveying a vast amount of information to a user very quickly. Based on the self-training mechanism developed by the inventors, the ontology programming automatically trains itself to understand the domain or environment of the communication data by processing and analyzing a defined corpus of communication data. The developed ontology can be applied to process a dataset of communication information to create a word cloud that provides a quick view into the content of the dataset, including information about the language used by participants in the communications, such as identifying for a user key phrases and terms, the frequency of those phrases, the originator of the terms or phrases, and the confidence levels of such identifications.
Systems, methods, and media for developing ontologies and analyzing communication data are provided herein. In an example implementation, the method includes: identifying terms in a set of communication data; identifying a list of possible relations of the identified terms; scoring the possible relations according to a set of predefined merits; ranking the possible relations into a list of possible relations in descending order according to their score; and tagging relations in the set of communication data. The relations may be tagged by identifying the possible relations in the communication data in an order corresponding with the list of possible relations. Lower-ranking possible relations that conflict with higher-ranking relations are not tagged. The conflicts may be determined by a predefined set of conflict criteria.
Systems, methods, and media for the application of funnel analysis using desktop analytics and textual analytics to map and analyze the flow of customer service interactions. In an example implementation, the method includes: defining at least one flow that is representative of a series of events comprising at least one speech event, at least one Data Processing Activity (DPA) event, and at least one Computer Telephone Integration (CTI) event; receiving customer service interaction data comprising communication data, DPA metadata, and CTI metadata; applying the at least one flow to the customer service interaction data; determining if the customer service interaction data meets the at least one flow; and producing an automated indication based upon the determination.
H04M 1/64 - Automatic arrangements for answering calls; Automatic arrangements for recording messages for absent subscribers; Arrangements for recording conversations
G06Q 10/06 - Resources, workflows, human or project management; Enterprise or organisation planning; Enterprise or organisation modelling
Systems, methods, and media for the automated removal of private information are provided herein. In an example implementation, a method for automatic removal of private information may include: receiving a transcript of communication data; applying a private information rule to the transcript in order to identify private information in the transcript; tagging the identified private information with a tag comprising an identification of the private information; applying a compliance rule to the tagged transcript in order to evaluate the compliance of the transcript with privacy standards; removing the identified private information from the transcript to produce a redacted transcript; and storing the redacted transcript.
A faster and more streamlined system for providing summary and analysis of large amounts of communication data is described. System and methods are disclosed that employ an ontology to automatically summarize communication data and present the summary to the user in a form that does not require the user to listen to the communication data. In one embodiment, the summary is presented as written snippets, or short fragments, of relevant communication data that capture the meaning of the data relating to a search performed by the user. Such snippets may be based on theme and meaning unit identification.
By formulizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. The ontology is built on the premise that meaningful terms are detected in the corpus and then classified according to specific semantic concepts, or entities. Once the main terms are defined, direct relations or linkages can be formed between these terms and their associated entities. Then, the relations are grouped into themes, which are groups or abstracts that contain synonymous relations. The disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The ontology programming automatically trains itself to understand the domain or environment of the communication data by processing and analyzing a defined corpus of communication data.
The disclosed solution uses machine learning-based methods to improve the knowledge extraction process in a specific domain or business environment. By formulizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. Based on the self-training mechanism developed by the inventors, the ontology programming automatically trains itself to understand the business environment by processing and analyzing a defined corpus of communication data. For example, the disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. The disclosed system and method further relates to leveraging the ontology to assess a dataset and conduct a funnel analysis to identify patterns, or sequences of events, in the dataset.
A method of operating a speech processing system is provided. The method includes translating a portion of a speech record into a plurality of possible words associated with a plurality of contexts, and determining a plurality of correctness values based on a plurality of probabilities that each of the plurality of possible words is correct for each of the plurality of contexts. The method also includes determining which of the plurality of possible words is a correct translation of the portion of the speech record based on the plurality of correctness values.
The disclosed solution uses machine learning-based methods to improve the knowledge extraction process in a specific domain or business environment. By formulizing a specific company's internal knowledge and terminology, the ontology programming accounts for linguistic meaning to surface relevant and important content for analysis. For example, the disclosed ontology programming adapts to the language used in a specific domain, including linguistic patterns and properties, such as word order, relationships between terms, and syntactical variations. Based on the self-training mechanism developed by the inventors, the ontology programming automatically trains itself to understand the domain or environment of the communication data by processing and analyzing a defined corpus of communication data.
Methods, systems, and computer readable media for automated transcription model adaptation include obtaining audio data from a plurality of audio files. The audio data is transcribed to produce at least one audio file transcription which represents a plurality of transcription alternatives for each audio file. Speech analytics are applied to each audio file transcription. A best transcription is selected from the plurality of transcription alternatives for each audio file. Statistics from the selected best transcription are calculated. An adapted model is created from the calculated statistics.
Systems and methods of automated adaptation of a language model for transcription of audio data include obtaining audio data. The audio data is transcribed with a language model to produce a plurality of audio file transcriptions. A quality of the plurality of audio file transcriptions is evaluated. At least one best transcription from a plurality of audio file transcriptions is selected based upon the evaluated quality. Statistics are calculated from the selected at least one best transcription from the plurality of audio file transcriptions. The language model is modified from the calculated statistics.
Systems and methods of script identification in obtained audio data. The audio data is segmented into a plurality of utterances. A script model representative of a script text is obtained. The plurality of utterances are decoded with the script model. A determination is made whether the script text occurred in the audio data.
Voice activity detection (VAD) is an enabling technology for a variety of speech based applications. Herein disclosed is a robust VAD algorithm that is also language independent. Rather than classifying short segments of the audio as either “speech” or “silence”, the VAD as disclosed herein employs a soft-decision mechanism. The VAD outputs a speech-presence probability, which is based on a variety of characteristics.
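A soft-decision VAD can be illustrated by mapping each frame's energy to a probability rather than a hard label. The logistic shape, the single energy feature, and the parameters below are illustrative assumptions; the disclosed VAD combines a variety of characteristics:

```python
import math

def speech_presence_probability(frame_energies, noise_floor, spread=2.0):
    # Map frame energy (dB) to a speech-presence probability via a
    # logistic curve centred slightly above the noise floor, instead
    # of emitting a hard speech/silence decision per frame.
    probs = []
    for e in frame_energies:
        x = (e - noise_floor - spread) / spread
        probs.append(1.0 / (1.0 + math.exp(-x)))
    return probs

# A quiet frame near the noise floor and a loud speech frame
p_silence, p_speech = speech_presence_probability([-60.0, -30.0],
                                                  noise_floor=-55.0)
```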
Systems and methods automatedly evaluate a transcription quality. Audio data is obtained. The audio data is segmented into a plurality of utterances with a voice activity detector operating on a computer processor. The plurality of utterances are transcribed into at least one word lattice with a large vocabulary continuous speech recognition system operating on the processor. A minimum Bayes risk decoder is applied to the at least one word lattice to create at least one confusion network. At least one conformity ratio is calculated from the at least one confusion network.
In a method of diarization of audio data, audio data is segmented into a plurality of utterances. Each utterance is represented as an utterance model representative of a plurality of feature vectors. The utterance models are clustered. A plurality of speaker models are constructed from the clustered utterance models. A hidden Markov model is constructed of the plurality of speaker models. A sequence of identified speaker models is decoded.
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
A method of evaluating scripts in an interpersonal communication includes monitoring a customer service interaction. At least one portion of a script is identified. At least one script requirement is determined. A determination is made whether the at least one portion of the script meets the at least one script requirement. An alert is generated indicative of a non-compliant script.
A computer-implemented method for text processing includes processing at least one body of text including semantic elements. Syntactic roles of the respective semantic elements are recognized, and the body of text is segmented into individual semantic constructs. At least one selected semantic element is removed from the semantic constructs, so as to produce respective nuclear semantic constructs. Occurrence frequencies of the respective nuclear semantic constructs are computed. An action is invoked with respect to one or more of the nuclear semantic constructs whose occurrence frequencies meet a predefined condition.
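The nuclear-construct step above can be sketched directly: strip selected semantic elements from each construct, then count the resulting nuclear forms and act on the frequent ones. The stop-list and frequency condition are illustrative assumptions:

```python
from collections import Counter

def nuclear_constructs(constructs, remove=frozenset({"please", "my"}),
                       min_freq=2):
    # Removing the selected elements collapses surface variants of the
    # same construct onto one nuclear form, whose frequency is counted.
    nuclei = Counter()
    for c in constructs:
        nucleus = tuple(w for w in c.lower().split() if w not in remove)
        nuclei[nucleus] += 1
    return {n: f for n, f in nuclei.items() if f >= min_freq}

frequent = nuclear_constructs(["reset my password", "please reset password",
                               "close my account"])
# both reset variants collapse to ("reset", "password"), frequency 2
```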
Systems and methods of automated ontology development include a corpus of communication data. The corpus of communication data includes communication data from a plurality of interactions and is processed. A plurality of terms are extracted from the corpus. Each term of the plurality is a plurality of words that identify a single concept within the corpus. An ontology is automatedly generated from the extracted terms.
Embodiments disclosed herein provide systems and methods for processing events using existing computer event capture modules. In a particular embodiment, a method provides a primary event module communicating with an operating system to detect application events generated by user input and processing the application events to determine if a primary event has occurred. The method further provides a secondary event module communicating with the primary event module to obtain an indication of the application events detected by the primary event module and processing the application events to determine if a secondary event has occurred.
G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
G06F 9/48 - Program initiating; Program switching, e.g. by interrupt
Systems and methods of diarization of audio files use an acoustic voiceprint model. A plurality of audio files are analyzed to arrive at an acoustic voiceprint model associated with an identified speaker. Metadata associated with an audio file is used to select an acoustic voiceprint model. The selected acoustic voiceprint model is applied in a diarization to identify audio data of the identified speaker.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
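The metadata-driven selection and application of a voiceprint can be sketched with a toy nearest-vector stand-in for the acoustic model. The metadata key, feature representation, and distance threshold are illustrative assumptions:

```python
import math

def diarize_with_voiceprint(voiceprints, metadata, segments, threshold=2.0):
    # Use the call metadata to pick the identified speaker's voiceprint,
    # then label each segment whose feature vector lies within the
    # threshold of that voiceprint as the identified speaker.
    model = voiceprints[metadata["agent_id"]]
    return ["agent" if math.dist(seg, model) < threshold else "other"
            for seg in segments]

prints = {"A42": (1.0, 1.0)}
labels = diarize_with_voiceprint(prints, {"agent_id": "A42"},
                                 [(1.1, 0.9), (8.0, 8.0)])
```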
Systems and methods of diarization using linguistic labeling include receiving a set of diarized textual transcripts. At least one heuristic is automatedly applied to the diarized textual transcripts to select transcripts likely to be associated with an identified group of speakers. The selected transcripts are analyzed to create at least one linguistic model. The linguistic model is applied to transcripted audio data to label a portion of the transcripted audio data as having been spoken by the identified group of speakers. Still further embodiments of diarization using linguistic labeling may serve to label agent speech and customer speech in a recorded and transcripted customer service interaction.
G10L 17/00 - Speaker identification or verification
G10L 17/02 - Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
87.
Systems and methods for enhancing recorded or intercepted calls using information from a facial recognition engine
A video stream from a webcam or video telephone is received. The video stream can be analyzed in real-time as it is being received or can be recorded and stored for later analysis. Information within the video streams can be extracted and processed by a facial and video content recognition engine and the information derived therefrom can be stored as metadata. The metadata can be used for enriching the call content recorded by a recorder. The information derived from the video streams can be used to solve business and legal issues.
A method for converting speech to text in a speech analytics system is provided. The method includes receiving audio data containing speech made up of sounds from an audio source, processing the sounds with a phonetic module resulting in symbols corresponding to the sounds, and processing the symbols with a language module and occurrence table resulting in text. The method also includes determining a probability of correct translation for each word in the text, comparing the probability of correct translation for each word in the text to the occurrence table, and adjusting the occurrence table based on the probability of correct translation for each word in the text.
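The feedback loop between translation confidence and the occurrence table can be illustrated with a toy table. The symbol sequences, candidate words, counts, and reinforcement threshold below are all hypothetical; the abstract does not specify the adjustment rule.

```python
# Toy occurrence table mapping phonetic symbol sequences to candidate
# words with occurrence counts (symbols and words are illustrative).
occurrence_table = {
    "HH-EH-L-OW": {"hello": 10, "hollow": 2},
}

def decode(symbols):
    """Pick the most frequent word for a symbol sequence and return
    (word, probability of correct translation)."""
    candidates = occurrence_table[symbols]
    total = sum(candidates.values())
    word = max(candidates, key=candidates.get)
    return word, candidates[word] / total

def reinforce(symbols, word, confidence, threshold=0.8):
    """Adjust the occurrence table: reinforce a translation whose
    estimated probability of correctness clears the threshold."""
    if confidence >= threshold:
        occurrence_table[symbols][word] += 1
```

Each confident decode nudges the table further toward that translation, which is one plausible reading of "adjusting the occurrence table based on the probability of correct translation."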
09 - Scientific and electric apparatus and instruments
42 - Scientific, technological and industrial services, research and design
Goods & Services
Telecommunication surveillance hardware and software for interception, filtering and analyzing data and voice communications; telecommunication surveillance hardware and software for interception, filtering and analyzing communications on social networks, mobile applications, chat applications, webmail services, voice over IP and video over IP; software graphical user interface used by communications service providers to assure compliance with telecommunications and electronic privacy and security laws, regulations and requirements; software graphical user interface used by law enforcement to manage lawful interception and data retention requests; software for crawling websites and gathering data for analysis; cellular, wireless, and mobility communication monitoring and management apparatus comprised of computer software and hardware for the purpose of identifying, intercepting, controlling, and locating wireless phones and handheld devices; portable camera for use in surveillance; cellular, wireless, and mobility communication monitoring and management apparatus comprised of computer software and hardware for the purpose of locating maritime vessels. Computer security consultancy; computer services, namely, on-line scanning, detecting, quarantining and eliminating of viruses, worms, spyware, adware, malware and unauthorized data and programs on computers and electronic devices; computer virus protection services; design and development of software and hardware for computer virus protection services.
A system and method of separating speakers in an audio file includes obtaining an audio file. The audio file is transcribed into at least one text file by a transcription server. Homogeneous speech segments are identified within the at least one text file. The audio file is segmented into homogeneous audio segments that correspond to the identified homogeneous speech segments. The homogeneous audio segments of the audio file are separated into a first speaker audio file and a second speaker audio file. The first speaker audio file and the second speaker audio file are transcribed to produce a diarized transcript.
G10L 25/78 - Detection of presence or absence of voice signals
G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
G10L 17/06 - Decision making techniques; Pattern matching strategies
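The segmentation-and-separation steps can be sketched as follows, assuming word-level transcript entries of the form `(word, start, end, speaker)`; that input shape and the two-speaker labels are illustrative assumptions.

```python
from itertools import groupby

def homogeneous_segments(words):
    """Merge consecutive words by the same speaker into homogeneous
    segments of (speaker, start, end)."""
    segs = []
    for spk, run in groupby(words, key=lambda w: w[3]):
        run = list(run)
        segs.append((spk, run[0][1], run[-1][2]))
    return segs

def split_by_speaker(segments):
    """Separate homogeneous segments into two per-speaker tracks
    (time ranges that would be cut from the original audio file)."""
    a = [(s, e) for spk, s, e in segments if spk == "A"]
    b = [(s, e) for spk, s, e in segments if spk == "B"]
    return a, b
```

The returned time ranges correspond to the homogeneous audio segments that would be copied into the first and second speaker audio files before re-transcription.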
91.
Systems and methods of automatically scheduling a workforce
Systems and methods of workforce scheduling are disclosed. One example embodiment, among others, comprises a computer-implemented method of scheduling workers. Each worker is associated with one of a set of flexibility classifications, which include non-flex-time and at least one flex-time. The method includes generating a set of shift instances to cover forecasted demand over a planning period, and assigning the shift instances to the set of workers by iterating through each of the workers to assign at least a portion of the shift instances to a selected one of the workers. The assignment is such that the total hours assigned to the selected worker depend on a number associated with the classification of the selected worker.
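A greedy version of the assignment loop might look like the sketch below. The hour caps per classification and the shift-instance format are invented for illustration; the abstract only requires that assigned hours depend on a number associated with the worker's classification.

```python
# Illustrative weekly-hour caps per flexibility classification.
HOUR_CAPS = {"non_flex": 40, "flex": 30}

def assign_shifts(shift_instances, workers):
    """Greedily assign shift instances (each with a length in hours)
    so no worker exceeds the cap for their classification."""
    assigned = {w["name"]: [] for w in workers}
    hours = {w["name"]: 0 for w in workers}
    for shift in shift_instances:
        for w in workers:
            if hours[w["name"]] + shift["hours"] <= HOUR_CAPS[w["class"]]:
                assigned[w["name"]].append(shift["id"])
                hours[w["name"]] += shift["hours"]
                break
    return assigned
```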
09 - Scientific and electric apparatus and instruments
Goods & Services
Cellular, wireless, and mobility communication monitoring and management apparatus comprised of computer software and hardware for the purpose of identifying, intercepting, controlling, and locating wireless phones and handheld devices.
09 - Scientific and electric apparatus and instruments
Goods & Services
Cellular, wireless, and mobility communication monitoring and management apparatus comprised of computer software and hardware for the purpose of identifying, intercepting, controlling, and locating wireless phones and handheld devices.
94.
System and method for controlling the long term generation rate of compressed data
The present invention comprises a system and method for controlling the rate a data encoder generates compressed data. The system and method are preferably implemented as program code stored and executed by a processor or computer that is interfaced to standard variable or constant bit rate encoders known in the art. The system preferably encodes and compresses video signals received from a camera, and controls the rate at which the compressed data is generated by the encoder so that storage capacity reserved for the compressed data will not be exceeded. The device preferably takes advantage of periods when the data generation rate is low to increase the quality of video data generated during periods of high activity.
H04N 7/12 - Systems in which the television signal is transmitted via one channel or a plurality of parallel channels, the bandwidth of each channel being less than the bandwidth of the television signal
H04N 9/885 - Signal drop-out compensation the signal being a composite colour television signal using a digital intermediate memory
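The budget-tracking behavior described above (spend savings from quiet periods on higher quality during high activity) can be illustrated with a simple feedback controller; the quality scale, step size, and per-frame budget below are arbitrary assumptions, not the patented control law.

```python
def rate_control(frame_sizes, budget_per_frame, q=50, q_min=10, q_max=90):
    """Adjust a quality setting frame by frame so cumulative output
    tracks the reserved storage budget: lower quality when ahead of
    budget, raise it when quiet periods leave budget unspent."""
    history = []
    spent = 0
    for i, size in enumerate(frame_sizes, start=1):
        spent += size
        budget = i * budget_per_frame
        if spent > budget:       # over budget: reduce quality
            q = max(q_min, q - 5)
        elif spent < budget:     # under budget: raise quality
            q = min(q_max, q + 5)
        history.append(q)
    return history
```

A real encoder would feed `q` back into the quantizer for the next frame; here the returned history just shows the controller's direction of adjustment.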
09 - Scientific and electric apparatus and instruments
Goods & Services
Cellular, wireless, and mobility communication monitoring and management apparatus comprised of computer software and hardware for the purpose of identifying, intercepting, controlling, and locating wireless phones and handheld devices.
96.
Systems and methods for extracting media from network traffic having unknown protocols
Methods and systems for analyzing network traffic. An analysis system receives network traffic, which complies with a certain protocol. The received network traffic carries a data item, which may be of value to an analyst. In order to access the data item in question, the analysis system automatically identifies the media type of the data item, by processing the network traffic irrespective of the protocol. The analysis system identifies the media type irrespective of the protocol in order to avoid the computational complexity involved in decoding the protocol.
H04J 3/16 - Time-division multiplex systems in which the time allocation to individual channels within a transmission cycle is variable, e.g. to accommodate varying complexity of signals, to vary number of channels transmitted
09 - Scientific and electric apparatus and instruments
Goods & Services
Cellular, wireless, and mobility communication monitoring and management apparatus comprised of computer software and hardware for the purpose of identifying, intercepting, controlling, and locating wireless phones and handheld devices.
98.
Methods and devices for archiving recorded interactions and retrieving stored recorded interactions
At least one contact between at least one server and at least one user is archived. The contact includes a recorded interaction between the user and the server, e.g., a recorded interaction between a customer and a customer service agent via the server. The contact is associated with a contact folder in a local storage. A portion of the contact is selected to be archived, and the time to archive the selected portion is determined. The selected portion of the contact is archived in an extended storage at the determined time. Archiving includes copying at least the content from the associated contact folder in the local storage and forwarding at least the copied content to an extended storage. A portion of the archived contact or the entire archived contact may be restored by determining that the contact is archived and retrieving the contact or the portion of the contact from the extended storage.
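The schedule-then-copy-then-restore flow can be sketched with in-memory stores standing in for the local and extended storage; the store interfaces and time representation are illustrative assumptions.

```python
def archive_contact(contact, local_store, extended_store, archive_at, now):
    """Copy a contact's content from local storage to extended storage
    once the scheduled archive time has been reached."""
    if now < archive_at:
        return False  # not yet time to archive
    extended_store[contact] = local_store[contact]
    return True

def restore_contact(contact, extended_store):
    """Retrieve an archived contact, or raise if it was never archived."""
    if contact not in extended_store:
        raise KeyError(f"{contact} is not archived")
    return extended_store[contact]
```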
A method for automatically providing a plurality of additional database query terms comprising receiving a first query term from a user, receiving a plurality of characters from the user, wherein the plurality of characters is only a portion of a second query term, and selecting a set of records from a database based on the first query term, wherein the database comprises records which comprise text translated from audio. The method also determines a plurality of additional query terms based on the plurality of characters, and, for at least one of the plurality of additional query terms, processes at least a portion of the set of records to determine a relevance of the additional query term. Finally, the method includes displaying at least one of the plurality of additional query terms to the user for selection based on the relevance of at least one of the plurality of additional query terms.
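The complete-then-rank idea can be sketched as below, using naive substring matching over the transcribed records; the vocabulary source and relevance measure (co-occurrence count with the first term) are illustrative choices.

```python
def suggest_terms(first_term, prefix, records, vocabulary):
    """Complete `prefix` from a vocabulary, then rank completions by
    how often they occur in records already matching the first query
    term (naive substring matching, for illustration)."""
    matching = [r for r in records if first_term in r]
    completions = [w for w in vocabulary if w.startswith(prefix)]
    relevance = {
        w: sum(1 for r in matching if w in r) for w in completions
    }
    return sorted(completions, key=lambda w: -relevance[w])
```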
A computer-implemented method for processing a plurality of data items includes defining a set of one or more categories having a corresponding set of conditions that associate the data items with the categories. A sub-categorization request, requesting to divide a category from among the categories into lower-level categories, is accepted from a user. The data items associated with the category are processed responsively to the sub-categorization request, so as to automatically suggest the lower-level categories.
The automatically-suggested lower-level categories are presented to the user, and direction with respect to the automatically-suggested lower-level categories is accepted from the user. A hierarchical structure representing the categories is constructed responsively to the direction, by dividing the category into the lower-level categories. Output based on the hierarchical structure is presented to the user.
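One naive way to auto-suggest lower-level categories, sketched below, is to seed sub-groups from the most distinctive terms remaining after the parent condition is removed; real systems would use clustering, and the term-based seeding here is purely an illustrative stand-in.

```python
from collections import Counter, defaultdict

def suggest_subcategories(items, parent_term, top_n=2):
    """Suggest lower-level categories for items already matching
    `parent_term`, grouping by the most frequent remaining terms
    (a naive stand-in for real clustering)."""
    term_counts = Counter()
    for item in items:
        term_counts.update(w for w in item.split() if w != parent_term)
    seeds = [t for t, _ in term_counts.most_common(top_n)]
    groups = defaultdict(list)
    for item in items:
        words = set(item.split())
        seed = next((s for s in seeds if s in words), "other")
        groups[seed].append(item)
    return dict(groups)
```

The keys of the returned mapping would be presented to the user as suggested lower-level categories, with the user's direction then fixing the final hierarchical structure.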