Affectiva, Inc.

United States of America


1-89 of 89 for Affectiva, Inc.

Aggregations
IP Type
        Patent 84
        Trademark 5
Jurisdiction
        United States 76
        World 13
Date
        2023 1
        2022 2
        2021 6
        2020 18
        2019 12
IPC Class
        G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints 43
        A61B 5/16 - Devices for psychotechnics; Testing reaction times 36
        A61B 5/00 - Measuring for diagnostic purposes; Identification of persons 28
        G06K 9/62 - Methods or arrangements for recognition using electronic means 22
        G06Q 30/02 - Marketing; Price estimation or determination; Fundraising 22
NICE Class
        35 - Advertising and business services 2
        42 - Scientific, technological and industrial services, research and design 2
        41 - Education, entertainment, sporting and cultural services 1
Status
        Pending 8
        Registered / In Force 81

1.

DIRECTED CONTROL TRANSFER WITH AUTONOMOUS VEHICLES

      
Application Number 17962570
Status Pending
Filing Date 2022-10-10
First Publication Date 2023-02-02
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mishra, Taniya
  • Zeilman, Andrew Todd
  • Zijderveld, Gabriele

Abstract

Techniques for cognitive analysis for directed control transfer with autonomous vehicles are described. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
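The decision logic this abstract describes can be sketched in a few lines. Everything below — the signal names, weights, and the 0.6 fitness threshold — is a hypothetical illustration, not taken from the filing:

```python
# Hypothetical sketch of directed control transfer: score an occupant's
# cognitive state, evaluate their condition, and decide whether control
# moves between the vehicle and the individual. Names and thresholds are
# illustrative assumptions.

def cognitive_score(state_info):
    """Collapse cognitive state information into a single scoring metric."""
    # Illustrative weighting of a few example signals (each 0.0-1.0).
    weights = {"alertness": 0.5, "attention": 0.3, "calmness": 0.2}
    return sum(weights[k] * state_info.get(k, 0.0) for k in weights)

def transfer_decision(vehicle_mode, score, fit_threshold=0.6):
    """Decide who should hold control, given the vehicle's state of
    operation and the occupant's cognitive scoring metric."""
    fit_to_drive = score >= fit_threshold
    if vehicle_mode == "autonomous" and fit_to_drive:
        return "offer_control_to_individual"
    if vehicle_mode == "manual" and not fit_to_drive:
        return "transfer_control_to_vehicle"
    return "no_change"
```

A drowsy manual driver (low score) would trigger a handoff to the vehicle; a fit occupant of an autonomous vehicle could be offered control back.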

IPC Classes  ?

  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles
  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

2.

NEURAL NETWORK TRAINING WITH BIAS MITIGATION

      
Application Number 17482501
Status Pending
Filing Date 2021-09-23
First Publication Date 2022-03-31
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Bhattacharya, Sneha
  • Mishra, Taniya
  • Ranjalkar, Shruti

Abstract

Techniques for machine learning based on neural network training with bias mitigation are disclosed. Facial images for a neural network configuration and a neural network training dataset are obtained. The training dataset is associated with the neural network configuration. The facial images are partitioned into multiple subgroups, wherein the subgroups represent demographics with potential for biased training. A multifactor key performance indicator (KPI) is calculated per image. The calculating is based on analyzing performance of two or more image classifier models. The neural network configuration and the training dataset are promoted to a production neural network, wherein the promoting is based on the KPI. The KPI identifies bias in the training dataset. Promotion of the neural network configuration and the neural network training dataset is based on identified bias. Identified bias precludes promotion to the production neural network, while identified non-bias allows promotion to the production neural network.
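The promotion gate described above can be sketched as follows. The averaging KPI and the 0.05 tolerance are illustrative assumptions; the patent's multifactor KPI is not specified in this listing:

```python
# Illustrative sketch of bias-gated promotion: a per-subgroup KPI is
# computed from per-image classifier scores, and a training dataset is
# promoted to production only when no demographic subgroup lags the
# others by more than a tolerance (identified non-bias).

def subgroup_kpi(scores_by_subgroup):
    """Average the per-image KPI scores within each demographic subgroup."""
    return {g: sum(s) / len(s) for g, s in scores_by_subgroup.items()}

def promote(scores_by_subgroup, max_gap=0.05):
    """Allow promotion only if the KPI gap between the best- and
    worst-performing subgroups stays within max_gap."""
    kpis = subgroup_kpi(scores_by_subgroup)
    return max(kpis.values()) - min(kpis.values()) <= max_gap
```

Identified bias (a large gap) precludes promotion; a uniform KPI across subgroups allows it.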

IPC Classes

3.

NEURAL NETWORK SYNTHESIS ARCHITECTURE USING ENCODER-DECODER MODELS

      
Application Number 17458639
Status Pending
Filing Date 2021-08-27
First Publication Date 2022-03-03
Owner Affectiva, Inc. (USA)
Inventor
  • Mishra, Taniya
  • Banerjee, Sandipan
  • Joshi, Ajjen Das

Abstract

Disclosed techniques include neural network architecture using encoder-decoder models. A facial image is obtained for processing on a neural network. The facial image includes unpaired facial image attributes. The facial image is processed through a first encoder-decoder pair and a second encoder-decoder pair. The first encoder-decoder pair decomposes a first image attribute subspace. The second encoder-decoder pair decomposes a second image attribute subspace. The first encoder-decoder pair outputs a first image transformation mask based on the first image attribute subspace. The second encoder-decoder pair outputs a second image transformation mask based on the second image attribute subspace. The first image transformation mask and the second image transformation mask are concatenated to enable downstream processing. The concatenated transformation masks are processed on a third encoder-decoder pair and a resulting image is output. The resulting image eliminates a paired training data requirement.
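The data flow in this abstract (two mask-producing pairs, concatenation, a third pair) can be shown structurally. The callables below are trivial stand-ins for the trained encoder-decoder networks, which this listing does not specify:

```python
# Structural sketch: two encoder-decoder pairs each emit a transformation
# mask for one attribute subspace, the masks are concatenated, and a third
# encoder-decoder produces the resulting image. The "networks" here are
# toy callables, not learned models.

def make_pair(attribute):
    """Return a toy encoder-decoder that tags each pixel with its attribute."""
    def pair(image):
        return [(attribute, px) for px in image]  # per-pixel "mask"
    return pair

def synthesize(image, pair_a, pair_b, pair_c):
    mask_a = pair_a(image)           # first attribute subspace
    mask_b = pair_b(image)           # second attribute subspace
    concatenated = mask_a + mask_b   # stand-in for channel concatenation
    return pair_c(concatenated)      # third pair yields the resulting image
```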

IPC Classes

  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

4.

COGNITIVE STATE VEHICLE NAVIGATION BASED ON IMAGE PROCESSING AND MODES

      
Application Number 17378817
Status Pending
Filing Date 2021-07-19
First Publication Date 2021-11-04
Owner Affectiva, Inc. (USA)
Inventor
  • Fouad, Maha Amr Mohamed
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Turcot, Panu James

Abstract

Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data and mode data for the vehicle. The updated information is provided for vehicle control. The mode data is configurable based on a mode setting. The mode data is weighted based on additional information.

IPC Classes

  • B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G07C 5/02 - Registering or indicating driving, working, idle, or waiting time only

5.

VEHICLE MANIPULATION WITH CONVOLUTIONAL IMAGE PROCESSING

      
Application Number 17327813
Status Pending
Filing Date 2021-05-24
First Publication Date 2021-09-09
Owner Affectiva, Inc. (USA)
Inventor
  • Turcot, Panu James
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad

Abstract

Disclosed embodiments provide for vehicle manipulation with convolutional image processing. The convolutional image processing uses a multilayered analysis engine. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers, including convolutional layers and hidden layers. Further images are evaluated using the multilayered analysis engine. The evaluating provides a cognitive state analysis. The further images include facial image data from one or more persons present in a second vehicle. Manipulation data is provided to the second vehicle based on evaluating the further images. An additional plurality of images of one or more occupants of one or more additional vehicles is obtained. The additional images provide opted-in, crowdsourced image training. The crowdsourced image training enables retraining the multilayered analysis engine.

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • B60W 60/00 - Drive control systems specially adapted for autonomous road vehicles

6.

Synthetic data for neural network training using vectors

      
Application Number 17136083
Grant Number 11769056
Status In Force
Filing Date 2020-12-29
First Publication Date 2021-07-01
Grant Date 2023-09-26
Owner Affectiva, Inc. (USA)
Inventor
  • Banerjee, Sandipan
  • El Kaliouby, Rana
  • Joshi, Ajjen Das
  • Kyal, Survi
  • Mishra, Taniya

Abstract

Machine learning is performed using synthetic data for neural network training using vectors. Facial images are obtained for a neural network training dataset. Facial elements from the facial images are encoded into vector representations of the facial elements. A generative adversarial network (GAN) generator is trained to provide one or more synthetic vectors based on the one or more vector representations, wherein the one or more synthetic vectors enable avoidance of discriminator detection in the GAN. The training a GAN further comprises determining a generator accuracy using the discriminator. The generator accuracy can enable a classifier, where the classifier comprises a multi-layer perceptron. Additional synthetic vectors are generated in the GAN, wherein the additional synthetic vectors avoid discriminator detection. A machine learning neural network is trained using the additional synthetic vectors. The training a machine learning neural network further includes using the one or more synthetic vectors.
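The generator/discriminator loop in this abstract can be sketched with deliberately trivial stand-ins. The mean-based discriminator, the sampling range, and the accuracy measure are all assumptions for illustration; real training would use learned networks:

```python
# Toy sketch of the GAN workflow: a generator proposes synthetic vectors,
# a discriminator tries to flag them, generator accuracy is the fraction
# that evade detection, and only evading vectors are kept to augment
# neural network training.

import random

def discriminator(vec, real_mean=1.0, tol=0.25):
    """Flag a vector as synthetic if its mean is far from the real data's."""
    mean = sum(vec) / len(vec)
    return abs(mean - real_mean) > tol   # True => detected as fake

def generate_synthetic(n, dim=4, seed=0):
    """Stand-in generator: sample vectors near the real distribution."""
    rng = random.Random(seed)
    return [[rng.uniform(0.6, 1.4) for _ in range(dim)] for _ in range(n)]

def augmenting_vectors(n=100):
    """Keep only synthetic vectors that avoid discriminator detection,
    and report generator accuracy as measured by the discriminator."""
    vecs = generate_synthetic(n)
    kept = [v for v in vecs if not discriminator(v)]
    accuracy = len(kept) / len(vecs)
    return kept, accuracy
```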

IPC Classes

  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06N 3/08 - Learning methods
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06N 3/045 - Combinations of networks
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

7.

In-vehicle drowsiness analysis using blink rate

      
Application Number 17118654
Grant Number 11318949
Status In Force
Filing Date 2020-12-11
First Publication Date 2021-06-24
Grant Date 2022-05-03
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Joshi, Ajjen Das
  • Kyal, Survi
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Disclosed techniques include in-vehicle drowsiness analysis using blink rate. Video of an individual is obtained within a vehicle using an image capture device. The video is analyzed using one or more processors to detect a blink event based on a classifier for a blink that was determined. Using temporal analysis, the blink event is determined by identifying that eyes of the individual are closed for a frame in the video. Using the blink event and one or more other blink events, blink-rate information is determined using the one or more processors. Based on the blink-rate information, a drowsiness metric is calculated using the one or more processors. The vehicle is manipulated based on the drowsiness metric. A blink duration of the individual for the blink event is evaluated. The blink-rate information is compensated. The compensating is based on demographic information for the individual.
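The blink-rate pipeline above can be sketched end to end. The frame rate, the 40 blinks/minute saturation point, and the demographic compensation factor are illustrative assumptions, not values from the patent:

```python
# Minimal sketch: per-frame eye-closed flags (as a blink classifier might
# emit) are grouped into blink events, a blink rate is derived, and a
# drowsiness metric is calculated with optional demographic compensation.

def blink_events(closed_flags):
    """Group consecutive eye-closed frames into blink events,
    returning each event's duration in frames."""
    events, run = [], 0
    for closed in closed_flags:
        if closed:
            run += 1
        elif run:
            events.append(run)
            run = 0
    if run:
        events.append(run)
    return events

def drowsiness_metric(closed_flags, fps=30.0, demographic_factor=1.0):
    """Blinks per minute, compensated, mapped to a 0-1 drowsiness metric."""
    events = blink_events(closed_flags)
    minutes = len(closed_flags) / fps / 60.0
    rate = (len(events) / minutes) * demographic_factor if minutes else 0.0
    return min(rate / 40.0, 1.0)   # saturate at an assumed 40 blinks/min
```

The returned event durations also support the blink-duration evaluation the abstract mentions.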

IPC Classes

  • B60Q 1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
  • B60W 40/09 - Driving style or behaviour
  • B60W 30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers

8.

DEEP LEARNING IN SITU RETRAINING

      
Application Number 17078133
Status Pending
Filing Date 2020-10-23
First Publication Date 2021-04-29
Owner Affectiva, Inc. (USA)
Inventor
  • Turcot, Panu James
  • Mavadati, Seyedmohammad

Abstract

Deep learning in situ retraining uses deep learning nodes to provide a human perception state on a user device. A plurality of images including facial data is obtained for human perception state analysis. A server device trains a set of weights on a set of layers for deep learning that implements the analysis, where the training is performed with a first set of training data. A subset of weights is deployed on deep learning nodes on a user device, where the deploying enables at least part of the human perception state analysis. An additional set of weights is retrained on the user device, where the additional set of weights is trained using a second set of training data. A human perception state based on the subset of the set of weights, the additional set of weights, and input images obtained by the user device is provided on the user device.

IPC Classes

9.

Vehicular in-cabin facial tracking using machine learning

      
Application Number 16928274
Grant Number 11935281
Status In Force
Filing Date 2020-07-14
First Publication Date 2021-01-07
Grant Date 2024-03-19
Owner Affectiva, Inc. (USA)
Inventor
  • Senechal, Thibaud
  • El Kaliouby, Rana
  • Turcot, Panu James
  • Mohamed, Mohamed Ezzeldin Abdelmonem Ahmed

Abstract

Vehicular in-cabin facial tracking is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. A set of seating locations for the vehicle interior is determined. The set is based on the images. The set of seating locations is scanned for performing facial detection for each of the seating locations using a facial detection model. A view of a detected face is manipulated. The manipulation is based on a geometry of the vehicle interior. Cognitive state data of the detected face is analyzed. The cognitive state data analysis is based on additional images of the detected face. The cognitive state data analysis uses the view that was manipulated. The cognitive state data analysis is promoted to a using application. The using application provides vehicle manipulation information to the vehicle. The manipulation information is for an autonomous vehicle.

IPC Classes

  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06N 20/00 - Machine learning
  • G06T 3/00 - Geometric image transformation in the plane of the image
  • G06T 7/73 - Determining position or orientation of objects or cameras using feature-based methods
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

10.

Vehicle interior object management

      
Application Number 17005374
Grant Number 11887383
Status In Force
Filing Date 2020-08-28
First Publication Date 2020-12-17
Grant Date 2024-01-30
Owner Affectiva, Inc. (USA)
Inventor
  • Turcot, Panu James
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mohamed, Mohamed Ezzeldin Abdelmonem Ahmed
  • Zeilman, Andrew Todd
  • Zijderveld, Gabriele

Abstract

Vehicle interior object management uses analysis for detection of an object within a vehicle. The object can include a cell phone, a computing device, a briefcase, a wallet, a purse, or luggage. The object can include a child or a pet. A distance between an occupant and the object can be calculated. The object can be within a reachable distance of the occupant. Two or more images of a vehicle interior are collected using imaging devices within the vehicle. The images are analyzed to detect an object within the vehicle. The object is classified. A level of interaction is estimated between an occupant of the vehicle and the object within the vehicle. The object can be determined to have been left behind once the occupant leaves the vehicle. A control element of the vehicle is changed based on the classifying and the level of interaction.
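The distance and interaction estimates described here can be sketched briefly. The planar coordinates and the 0.7 m reach radius are assumptions for illustration:

```python
# Illustrative sketch: detected objects are classified, the occupant-to-
# object distance is calculated, an interaction level is estimated, and a
# left-behind check runs once the occupant leaves the vehicle.

import math

def distance(occupant_xy, object_xy):
    """Euclidean distance between the occupant and the object."""
    return math.dist(occupant_xy, object_xy)

def interaction_level(occupant_xy, object_xy, reach=0.7):
    """1.0 at zero distance, falling to 0.0 at the edge of reach."""
    d = distance(occupant_xy, object_xy)
    return max(0.0, 1.0 - d / reach)

def left_behind(objects_before_exit, objects_after_exit):
    """Objects still detected after the occupant leaves were left behind."""
    return [o for o in objects_after_exit if o in objects_before_exit]
```

A high interaction level or a non-empty left-behind list would then drive the change to a vehicle control element.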

IPC Classes

  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions

11.

Remote computing analysis for cognitive state data metrics

      
Application Number 16934069
Grant Number 11430561
Status In Force
Filing Date 2020-07-21
First Publication Date 2020-11-05
Grant Date 2022-08-30
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Picard, Rosalind Wright
  • Sadowsky, Richard Scott

Abstract

Remote computing analysis for cognitive state data metrics is performed. Cognitive state data from a plurality of people is collected as they interact with a rendering. The cognitive state data includes video facial data collected on one or more local devices from the plurality of people. Information is uploaded to a remote server. The information includes the cognitive state data. A facial expression metric based on a plurality of image classifiers is calculated for each individual within the plurality of people. Cognitive state information is generated for each individual, based on the facial expression metric for each individual. The cognitive state information for each individual within the plurality of people who interacted with the rendering is aggregated. The aggregation is based on the facial expression metric for each individual. The cognitive state information that was aggregated is displayed on at least one of the one or more local devices.
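The per-individual metric and the aggregation step can be sketched as below. The simple averaging and the summary statistics are assumptions; the listing does not specify how the classifier outputs are combined:

```python
# Sketch of the aggregation described in the abstract: a facial expression
# metric is computed per individual from several image classifiers, and
# the results are aggregated across everyone who viewed the rendering.

def facial_expression_metric(classifier_outputs):
    """Combine multiple image-classifier scores into one metric."""
    return sum(classifier_outputs) / len(classifier_outputs)

def aggregate(metrics_by_person):
    """Aggregate per-individual metrics across the plurality of people."""
    metrics = list(metrics_by_person.values())
    return {"mean": sum(metrics) / len(metrics),
            "max": max(metrics), "min": min(metrics)}
```

The aggregated summary is what would be sent back for display on the local devices.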

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
  • G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
  • G16H 40/63 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

12.

DISTRIBUTED ANALYSIS FOR COGNITIVE STATE METRICS

      
Application Number 16928154
Status Pending
Filing Date 2020-07-14
First Publication Date 2020-10-29
Owner Affectiva, Inc. (USA)
Inventor
  • Sadowsky, Richard Scott
  • El Kaliouby, Rana
  • Picard, Rosalind Wright
  • Wilder-Smith, Oliver Orion
  • Turcot, Panu James
  • Zheng, Zhihong

Abstract

Distributed analysis for cognitive state metrics is performed. Data for an individual is captured into a computing device. The data provides information for evaluating a cognitive state of the individual. The data for the individual is uploaded to a web server. A cognitive state metric for the individual is calculated. The cognitive state metric is based on the data that was uploaded. Analysis from the web server is received by the computing device. The analysis is based on the data for the individual and the cognitive state metric for the individual. An output that describes a cognitive state of the individual is rendered at the computing device. The output is based on the analysis that was received. The cognitive states of other individuals are correlated to the cognitive state of the individual. Other sources of information are aggregated. The information is used to analyze the cognitive state of the individual.

IPC Classes

  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons

13.

Robotic control using profiles

      
Application Number 16914546
Grant Number 11484685
Status In Force
Filing Date 2020-06-29
First Publication Date 2020-10-15
Grant Date 2022-11-01
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Krupat, Jason

Abstract

Techniques for robotic control using profiles are disclosed. Cognitive state data for an individual is obtained. A cognitive state profile for the individual is learned using the cognitive state data that was obtained. Further cognitive state data for the individual is collected. The further cognitive state data is compared with the cognitive state profile. Stimuli are provided by a robot to the individual based on the comparing. The robot can be a smart toy. The cognitive state data can include facial image data for the individual. The further cognitive state data can include audio data for the individual. The audio data can be voice data. The voice data augments the cognitive state data. Cognitive state data for the individual is obtained using another robot. The cognitive state profile is updated based on input from either of the robots.
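The learn-compare-respond loop in this abstract (shared with several of the vehicle-profile filings below) can be sketched as follows. The mean baseline, the deviation measure, and the stimulus labels are illustrative assumptions:

```python
# Sketch of the profile workflow: a cognitive state profile is learned
# from collected data, further data is compared against it, and the robot
# selects stimuli based on the comparison.

def learn_profile(samples):
    """Learn a per-individual baseline as the mean of each observed signal."""
    keys = samples[0].keys()
    return {k: sum(s[k] for s in samples) / len(samples) for k in keys}

def compare(profile, observation):
    """Mean absolute deviation of the new observation from the profile."""
    return sum(abs(observation[k] - profile[k]) for k in profile) / len(profile)

def choose_stimulus(profile, observation, threshold=0.2):
    """Provide a calming stimulus when the individual departs from baseline."""
    return "calming" if compare(profile, observation) > threshold else "neutral"
```

Updates from a second robot would simply feed more samples into `learn_profile`.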

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61M 21/00 - Other devices or methods to cause a change in the state of consciousness; Devices for producing or ending sleep by mechanical, optical, or acoustical means, e.g. for hypnosis
  • G10L 17/00 - Speaker identification or verification
  • B25J 11/00 - Manipulators not otherwise provided for
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06N 3/08 - Learning methods
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

14.

Vehicular in-cabin sensing using machine learning

      
Application Number 16833828
Grant Number 11823055
Status In Force
Filing Date 2020-03-30
First Publication Date 2020-10-01
Grant Date 2023-11-21
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mohamed, Mohamed Ezzeldin Abdelmonem Ahmed
  • Turcot, Panu James
  • Zeilman, Andrew Todd
  • Zijderveld, Gabriele

Abstract

Vehicular in-cabin sensing is performed using machine learning. In-cabin sensor data of a vehicle interior is collected. The in-cabin sensor data includes images of the vehicle interior. An occupant is detected within the vehicle interior. The detecting is based on identifying an upper torso of the occupant, using the in-cabin sensor data. The imaging is accomplished using a plurality of imaging devices within a vehicle interior. The occupant is located within the vehicle interior, based on the in-cabin sensor data. An additional occupant within the vehicle interior is detected. A human perception metric for the occupant is analyzed, based on the in-cabin sensor data. The detecting, the locating, and/or the analyzing are performed using machine learning. The human perception metric is promoted to a using application. The human perception metric includes a mood for the occupant and a mood for the vehicle. The promoting includes input to an autonomous vehicle.

IPC Classes

  • G06N 20/00 - Machine learning
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06V 40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
  • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06F 18/25 - Fusion techniques
  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

15.

Media manipulation using cognitive state metric analysis

      
Application Number 16900026
Grant Number 11700420
Status In Force
Filing Date 2020-06-12
First Publication Date 2020-10-01
Grant Date 2023-07-11
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Burke, Melissa Sue
  • Dreisch, Andrew Edwin
  • Page, Graham John
  • Turcot, Panu James
  • Kodra, Evan

Abstract

Data on a user interacting with a media presentation is collected at a client device. The data includes facial image data of the user. The facial image data is analyzed to extract cognitive state content of the user. One or more emotional intensity metrics are generated. The metrics are based on the cognitive state content. The media presentation is manipulated, based on the emotional intensity metrics and the cognitive state content. An engagement score for the media presentation is provided. The engagement score is based on the emotional intensity metric. A facial expression metric and a cognitive state metric are generated for the user. The manipulating includes optimization of the previously viewed media presentation. The optimization changes various aspects of the media presentation, including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.
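The engagement scoring and length optimization described here can be sketched briefly. The averaging rule and the 0.3 trim floor are assumptions for illustration, not the patented method:

```python
# Illustrative sketch: emotional intensity metrics gathered over a media
# presentation roll up into an engagement score, and low-intensity
# segments become candidates for shortening during optimization.

def engagement_score(intensity_timeline):
    """Average per-sample emotional intensity (0.0-1.0) over the viewing."""
    return sum(intensity_timeline) / len(intensity_timeline)

def segments_to_trim(intensity_by_segment, floor=0.3):
    """Flag presentation segments whose intensity falls below a floor,
    as candidates for shortening during optimization."""
    return [seg for seg, i in intensity_by_segment.items() if i < floor]
```

Character selection, music selection, and ad placement would be further knobs driven by the same per-segment metrics.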

IPC Classes

  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04N 21/258 - Client or end-user data management, e.g. managing client capabilities, user preferences or demographics or processing of multiple end-users preferences to derive collaborative data
  • H04N 21/262 - Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission or generating play-lists
  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

16.

Convolutional computing using multilayered analysis engine

      
Application Number 16895071
Grant Number 11657288
Status In Force
Filing Date 2020-06-08
First Publication Date 2020-09-24
Grant Date 2023-05-23
Owner Affectiva, Inc. (USA)
Inventor
  • Turcot, Panu James
  • El Kaliouby, Rana
  • Mcduff, Daniel

Abstract

Disclosed embodiments provide for deep convolutional neural network computing. The convolutional computing is accomplished using a multilayered analysis engine. The multilayered analysis engine includes a deep learning network using a convolutional neural network (CNN). The multilayered analysis engine is used to analyze multiple images in a supervised or unsupervised learning process. Multiple images are provided to the multilayered analysis engine, and the multilayered analysis engine is trained with those images. A subject image is then evaluated by the multilayered analysis engine. The evaluation is accomplished by analyzing pixels within the subject image to identify a facial portion and identifying a facial expression based on the facial portion. The results of the evaluation are output. The multilayered analysis engine is retrained using a second plurality of images.

IPC Classes

  • G06F 18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

17.

Multimodal machine learning for vehicle manipulation

      
Application Number 16852627
Grant Number 11704574
Status In Force
Filing Date 2020-04-20
First Publication Date 2020-07-30
Grant Date 2023-07-18
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mavadati, Seyedmohammad
  • Mishra, Taniya
  • Peacock, Timothy
  • Turcot, Panu James

Abstract

Techniques for machine-trained analysis for multimodal machine learning vehicle manipulation are described. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels. The trained weights cover both the audio information and the video information and are trained simultaneously. The learning facilitates cognitive state analysis of the audio information and the video information. A computing device within a vehicle captures further information and analyzes the further information using trained weights. The further information that is analyzed enables vehicle manipulation. The further information can include only video data or only audio data. The further information can include a cognitive state metric.
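The idea of one weight set trained simultaneously over both channels can be shown with a minimal fused model. The perceptron update and the toy features are assumptions standing in for the multilayered convolutional system:

```python
# Sketch of the multimodal notion: contemporaneous audio and video
# features are fused into one vector so a single weight set covers both
# modalities and is trained simultaneously.

def fuse(audio_features, video_features):
    """Concatenate contemporaneous audio and video features."""
    return audio_features + video_features

def train(samples, labels, epochs=20, lr=0.1):
    """Train one weight vector spanning both modalities (perceptron rule)."""
    dim = len(samples[0])
    w = [0.0] * dim
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            for i in range(dim):
                w[i] += lr * (y - pred) * x[i]
    return w

def predict(w, x):
    """Classify a fused audio+video feature vector."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
```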

IPC Classes

  • G06N 3/08 - Learning methods
  • G06N 3/084 - Backpropagation, e.g. using gradient descent
  • G06N 3/088 - Non-supervised learning, e.g. competitive learning
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06V 10/774 - Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

18.

Vehicle manipulation with crowdsourcing

      
Application Number 16852638
Grant Number 11511757
Status In Force
Filing Date 2020-04-20
First Publication Date 2020-07-30
Grant Date 2022-11-29
Owner Affectiva, Inc. (USA)
Inventor
  • Zijderveld, Gabriele
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad

Abstract

Vehicle manipulation is performed using crowdsourced data. A camera within a vehicle is used to collect cognitive state data, including facial data, on a plurality of occupants in a plurality of vehicles. A first computing device is used to learn a plurality of cognitive state profiles for the plurality of occupants, based on the cognitive state data. The cognitive state profiles include information on an absolute time or a trip duration time. Voice data is collected and is used to augment the cognitive state data. A second computing device is used to capture further cognitive state data on an individual occupant in an individual vehicle. A third computing device is used to compare the further cognitive state data with the cognitive state profiles that were learned. The individual vehicle is manipulated based on the comparing of the further cognitive state data.
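
The comparison step in this abstract can be sketched simply: per-occupant cognitive state profiles learned from crowdsourced samples become mean feature vectors, and fresh data is matched to the nearest one. This is an illustrative assumption about the profile representation and distance metric, not the patented method.

```python
import numpy as np

def learn_profiles(samples_by_profile):
    """Average each profile's crowdsourced samples into a mean vector."""
    return {name: np.mean(s, axis=0) for name, s in samples_by_profile.items()}

def nearest_profile(profiles, observation):
    """Return the profile whose mean is closest to the new observation."""
    return min(profiles, key=lambda n: np.linalg.norm(profiles[n] - observation))

# Invented two-feature cognitive state samples for two profiles.
samples = {
    "calm":   np.array([[0.1, 0.2], [0.0, 0.3]]),
    "drowsy": np.array([[0.9, 0.8], [1.0, 0.7]]),
}
profiles = learn_profiles(samples)
match = nearest_profile(profiles, np.array([0.95, 0.75]))
```

The matched profile would then gate the vehicle manipulation (for example, cabin alerts for a "drowsy" match).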

IPC Classes

  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G05D 1/02 - Control of position or course in two dimensions
  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G10L 17/00 - Speaker identification or verification
  • G06F 16/9536 - Search customisation based on social or collaborative filtering

19.

Vehicular cognitive data collection with multiple devices

      
Application Number 16819357
Grant Number 11587357
Status In Force
Filing Date 2020-03-16
First Publication Date 2020-07-16
Grant Date 2023-02-21
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. At least one face in the facial image data is partially occluded. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The partial occluding includes a time basis of occluding. The partial occluding includes an image basis of occluding. The cognitive state data from multiple sources is tagged.

IPC Classes

  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G16H 20/40 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mechanical, radiation or invasive therapies, e.g. surgery, laser therapy, dialysis or acupuncture
  • G16H 80/00 - ICT specially adapted for facilitating communication between medical practitioners or patients, e.g. for collaborative diagnosis, therapy or health monitoring
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/0533 - Measuring galvanic skin response
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • G06Q 30/0251 - Targeted advertisements

20.

FILE SYSTEM MANIPULATION USING MACHINE LEARNING

      
Application Number 16828154
Status Pending
Filing Date 2020-03-24
First Publication Date 2020-07-16
Owner Affectiva, Inc. (USA)
Inventor
  • Pitre, Boisy G.
  • El Kaliouby, Rana
  • Kashef, Youssef

Abstract

File system manipulation using machine learning is described. Access to a machine learning system is obtained. A connection between a file system and an application is structured. The connection is managed through an application programming interface (API). The connection provides two-way data transfer through the API between the application and the file system. The connection provides distribution of one or more data files through the API. The connection provides enablement of processing of the one or more data files. The processing uses classifiers running on the machine learning system. Data files are retrieved from the file system connected through the interface. The file system is network-connected to the application through the interface. The data files comprise image data of one or more people. Cognitive state analysis is performed by the machine learning system. The application programming interface is generated by a software development kit (SDK).

IPC Classes

21.

Live streaming analytics within a shared digital environment

      
Application Number 16829743
Grant Number 11887352
Status In Force
Filing Date 2020-03-25
First Publication Date 2020-07-16
Grant Date 2024-01-30
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Page, Graham John
  • Zijderveld, Gabriele

Abstract

Analytics are used for live streaming based on analysis within a shared digital environment. An interactive digital environment is accessed, where the interactive digital environment is a shared digital environment for a plurality of participants. The participants include presenters and viewers. A plurality of images is obtained from a first set of participants within the plurality of participants involved in the interactive digital environment. Cognitive state content is analyzed within the plurality of images for the first set of participants within the plurality of participants. Results of the analyzing cognitive state content are provided to a second set of participants within the plurality of participants. The obtaining and the analyzing are accomplished on a device local to a participant such that images of the first set of participants are not transmitted to a non-local device. The analyzing cognitive state content is augmented with evaluation of audio information.

IPC Classes

  • G06V 10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
  • H04L 12/18 - Arrangements for providing special services to substations for broadcast or conference
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • H04L 67/131 - Protocols for games, networked simulations or virtual reality
  • G06V 10/82 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

22.

Emoji manipulation using machine learning

      
Application Number 16823404
Grant Number 11393133
Status In Force
Filing Date 2020-03-19
First Publication Date 2020-07-09
Grant Date 2022-07-19
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Fouad, May Amr
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Mcduff, Daniel

Abstract

A machine learning system is accessed. The machine learning system is used to translate content into a representative icon. The machine learning system is used to manipulate emoji. The machine learning system is used to process an image of an individual. The machine learning processing includes identifying a face of the individual. The machine learning processing includes classifying the face to determine facial content using a plurality of image classifiers. The classifying includes generating confidence values for a plurality of action units for the face. The facial content is translated into a representative icon. The translating the facial content includes summing the confidence values for the plurality of action units. The representative icon comprises an emoji. A set of emoji can be imported. The representative icon is selected from the set of emoji. The emoji selection is based on emotion content analysis of the face.
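
The translation step the abstract names, summing classifier confidence values over facial action units per candidate emoji, can be sketched in a few lines. The action-unit names, confidence values, and emoji-to-AU map below are invented for illustration.

```python
# Per-AU confidence values as a facial-expression classifier might emit.
AU_CONFIDENCES = {
    "AU6_cheek_raiser": 0.8,
    "AU12_lip_corner_puller": 0.9,
    "AU4_brow_lowerer": 0.1,
}

# Hypothetical mapping from each emoji to the action units it expresses.
EMOJI_ACTION_UNITS = {
    "smile_emoji": ["AU6_cheek_raiser", "AU12_lip_corner_puller"],
    "frown_emoji": ["AU4_brow_lowerer"],
}

def select_emoji(confidences, emoji_aus):
    """Sum AU confidences per emoji; pick the highest-scoring icon."""
    scores = {emoji: sum(confidences.get(au, 0.0) for au in aus)
              for emoji, aus in emoji_aus.items()}
    return max(scores, key=scores.get), scores

icon, scores = select_emoji(AU_CONFIDENCES, EMOJI_ACTION_UNITS)
```

Here strong cheek-raiser and lip-corner-puller evidence sums to the top score, so the smile emoji is selected as the representative icon.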

IPC Classes

  • G06T 11/00 - 2D [Two Dimensional] image generation
  • G06F 3/04817 - Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance using icons
  • G06T 7/20 - Analysis of motion
  • G06T 13/80 - 2D animation, e.g. using sprites
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G06F 3/0482 - Interaction with lists of selectable items, e.g. menus

23.

Autonomous vehicle control using heart rate collection based on video imagery

      
Application Number 16729730
Grant Number 11151610
Status In Force
Filing Date 2019-12-30
First Publication Date 2020-04-30
Grant Date 2021-10-19
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Bhatkar, Viprali
  • Haering, Niels
  • Kashef, Youssef
  • Osman, Ahmed Adel

Abstract

Video of one or more vehicle occupants is obtained and analyzed. Heart rate information is determined from the video. The heart rate information is used in cognitive state analysis. The heart rate information and resulting cognitive state analysis are correlated to stimuli, such as digital media, which is consumed or with which a vehicle occupant interacts. The heart rate information is used to infer cognitive states. The inferred cognitive states are used to output a mood measurement. The cognitive states are used to modify the behavior of a vehicle. The vehicle is an autonomous or semi-autonomous vehicle. Training is employed in the analysis. Machine learning is engaged to facilitate the training. Near-infrared image processing is used to obtain the video. The analysis is augmented by audio information obtained from the vehicle occupant.
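
A common way to determine heart rate from facial video, consistent with but not necessarily identical to this patent, is to track a per-frame intensity trace of the face region and take the dominant frequency in a plausible pulse band. The synthetic trace and band limits below are assumptions for illustration.

```python
import numpy as np

def estimate_heart_rate_bpm(trace, fps):
    """Estimate pulse from a per-frame face-intensity trace via FFT."""
    trace = trace - np.mean(trace)                 # remove the DC offset
    freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
    power = np.abs(np.fft.rfft(trace))
    band = (freqs >= 0.7) & (freqs <= 4.0)         # 42-240 bpm pulse band
    peak_hz = freqs[band][np.argmax(power[band])]  # dominant frequency
    return peak_hz * 60.0

fps = 30
t = np.arange(0, 10, 1.0 / fps)                    # 10 s of video frames
# Synthetic stand-in for a face-region signal: 1.2 Hz pulse (72 bpm).
trace = 0.5 + 0.02 * np.sin(2 * np.pi * 1.2 * t)
bpm = estimate_heart_rate_bpm(trace, fps)
```

In practice the trace would come from averaging skin pixels per frame (visible or near-infrared), with motion and illumination artifacts filtered out before the spectral step.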

IPC Classes

  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • G10L 15/26 - Speech to text systems

24.

Electronic display viewing verification

      
Application Number 16726647
Grant Number 11430260
Status In Force
Filing Date 2019-12-24
First Publication Date 2020-04-30
Grant Date 2022-08-30
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Langeveld, Nicholas
  • Mcduff, Daniel
  • Mavadati, Seyedmohammad

Abstract

Techniques for performing viewing verification using a plurality of classifiers are disclosed. Images of an individual may be obtained concurrently with an electronic display presenting one or more images. Image classifiers for facial and head pose analysis may be obtained. The images of the individual may be analyzed to identify a face of the individual in one of the plurality of images. A viewing verification metric may be calculated using the image classifiers and a verified viewing duration of the screen images by the individual may be calculated based on the plurality of images and the analyzing. Viewing verification can involve determining whether the individual is in front of the screen, facing the screen, and gazing at the screen. A viewing verification metric can be generated in order to determine a level of interest of the individual in particular media and images.
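
The verified-viewing-duration idea can be sketched directly from the abstract's three conditions: a frame counts only when the individual is in front of the screen, facing it, and gazing at it. The per-frame boolean flags below stand in for real face and head-pose classifier outputs.

```python
def verified_viewing_seconds(frames, fps):
    """frames: iterable of (in_front, facing, gazing) booleans per frame.

    A frame contributes to the verified duration only when all three
    viewing conditions hold simultaneously.
    """
    verified = sum(1 for in_front, facing, gazing in frames
                   if in_front and facing and gazing)
    return verified / fps

# 45 fully verified frames, then 15 frames where the head turns away.
frames = [(True, True, True)] * 45 + [(True, False, True)] * 15
duration = verified_viewing_seconds(frames, fps=30)
```

The resulting duration (1.5 s of the 2 s clip) could then feed a viewing verification metric for gauging interest in the displayed media.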

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

25.

Drowsiness mental state analysis using blink rate

      
Application Number 16685071
Grant Number 10867197
Status In Force
Filing Date 2019-11-15
First Publication Date 2020-04-02
Grant Date 2020-12-15
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Kyal, Survi
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Drowsiness mental state analysis is performed using blink rate. Video is obtained of an individual or group. The individual or group can be within a vehicle. The video is analyzed to detect a blink event based on a classifier, where the blink event is determined by identifying that eyes are closed for a frame in the video. A blink duration is evaluated for the blink event. Blink-rate information is determined using the blink event and one or more other blink events. The evaluating can include evaluating blinking for a group of people. The blink-rate information is compensated to determine drowsiness, based on the temporal distribution mapping of the blink-rate information. Mental states of the individual are inferred for the blink event based on the blink event, the blink duration of the individual, and the blink-rate information that was compensated. The compensating is biased based on demographic information of the individual.
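
The blink pipeline the abstract outlines, detecting blink events from frames where the eyes are closed, evaluating each event's duration, and deriving blink-rate information, can be sketched as follows. The per-frame eye-closed flags stand in for a trained classifier's output.

```python
def blink_events(eye_closed, fps):
    """Group consecutive closed-eye frames into blink durations (seconds)."""
    events, run = [], 0
    for closed in eye_closed:
        if closed:
            run += 1
        elif run:
            events.append(run / fps)  # a blink event just ended
            run = 0
    if run:
        events.append(run / fps)      # blink still open-ended at clip end
    return events

fps = 30
# 10 s of frames containing two brief blinks (4 and 5 closed frames).
flags = ([False] * 100 + [True] * 4 +
         [False] * 100 + [True] * 5 + [False] * 91)
events = blink_events(flags, fps)
blinks_per_minute = len(events) * 60 / (len(flags) / fps)
```

Compensating the resulting blink-rate information (for context, temporal distribution, or demographics, as the abstract describes) would then happen downstream of this event detection.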

IPC Classes

  • B60Q 1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • B60K 28/06 - Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver responsive to incapacity of driver
  • B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

26.

Image analysis using a semiconductor processor for facial evaluation in vehicles

      
Application Number 16678180
Grant Number 11410438
Status In Force
Filing Date 2019-11-08
First Publication Date 2020-03-05
Grant Date 2022-08-09
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mishra, Taniya
  • Pitre, Boisy G.
  • Turcot, Panu James
  • Zeilman, Andrew Todd

Abstract

Analysis for convolutional processing is performed using logic encoded in a semiconductor processor. The semiconductor chip evaluates pixels within an image of a person in a vehicle, where the analysis identifies a facial portion of the person. The facial portion of the person can include facial landmarks or regions. The semiconductor chip identifies one or more facial expressions based on the facial portion. The facial expressions can include a smile, frown, smirk, or grimace. The semiconductor chip classifies the one or more facial expressions for cognitive response content. The semiconductor chip evaluates the cognitive response content to produce cognitive state information for the person. The semiconductor chip enables manipulation of the vehicle based on communication of the cognitive state information to a component of the vehicle.

IPC Classes

  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • B60K 28/02 - Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions

27.

Multidevice multimodal emotion services monitoring

      
Application Number 16587579
Grant Number 11073899
Status In Force
Filing Date 2019-09-30
First Publication Date 2020-01-23
Grant Date 2021-07-27
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mavadati, Seyedmohammad
  • Mishra, Taniya
  • Peacock, Timothy
  • Poulin, Gregory
  • Turcot, Panu James

Abstract

Techniques for multidevice, multimodal emotion services monitoring are disclosed. An expression to be detected is determined. The expression relates to a cognitive state of an individual. Input on the cognitive state of the individual is obtained using a device local to the individual. Monitoring for the expression is performed. The monitoring uses a background process on a device remote from the individual. An occurrence of the expression is identified. The identification is performed by the background process. Notification that the expression was identified is provided. The notification is provided from the background process to a device distinct from the device running the background process. The expression is defined as a multimodal expression. The multimodal expression includes image data and audio data from the individual. The notification enables emotion services to be provided. The emotion services augment messaging, social media, and automated help applications.

IPC Classes

  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • H04L 12/58 - Message switching systems
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

28.

Mental state analysis using blink rate within vehicles

      
Application Number 16126615
Grant Number 10482333
Status In Force
Filing Date 2018-09-10
First Publication Date 2019-11-19
Grant Date 2019-11-19
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Mental state analysis is performed using blink rate within vehicles. Video is obtained of an individual or a group within a vehicle. The video is analyzed to detect a blink event based on a classifier, where the blink event is determined by identifying that eyes are closed for a frame in the video. A blink duration is evaluated for the blink event. Blink-rate information is determined using the blink event and one or more other blink events. The evaluating can include evaluating blinking for a group of people. The blink-rate information is compensated using the processors for a context. Mental states of the individual are inferred for the blink event, where the mental states are based on the blink event, the blink duration of the individual, and the blink-rate information that was compensated. A difference in blinking between the individual and the remainder of a group can be determined.

IPC Classes

  • B60Q 1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • B60K 28/06 - Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver responsive to incapacity of driver
  • B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

29.

Vehicle manipulation using cognitive state engineering

      
Application Number 16429022
Grant Number 11292477
Status In Force
Filing Date 2019-06-02
First Publication Date 2019-09-19
Grant Date 2022-04-05
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Turcot, Panu James
  • Zeilman, Andrew Todd
  • Mishra, Taniya

Abstract

Vehicle manipulation uses cognitive state engineering. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A computing device is used to analyze the images to determine a cognitive state. Audio information from the occupant is obtained and the analyzing is augmented based on the audio information. The cognitive state is mapped to a loading curve, where the loading curve represents a continuous spectrum of cognitive state loading variation. The vehicle is manipulated, based on the mapping to the loading curve, where the manipulating uses cognitive state alteration engineering. The manipulating includes changing vehicle occupant sensory stimulation. Additional images of additional occupants of the vehicle are obtained and analyzed to determine additional cognitive states. Additional cognitive states are used to adjust the mapping. A cognitive load is estimated based on eye gaze tracking.
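
The loading-curve mapping can be illustrated with a small sketch. The logistic curve shape, the band thresholds, and the intervention names below are assumptions; the patent only specifies that the curve represents a continuous spectrum of cognitive state loading.

```python
import math

def loading_curve(score, midpoint=0.5, steepness=10.0):
    """Map a 0-1 cognitive state score onto a continuous 0-1 loading value."""
    return 1.0 / (1.0 + math.exp(-steepness * (score - midpoint)))

def stimulation_adjustment(load):
    """Choose a sensory-stimulation change from the loading value."""
    if load < 0.3:
        return "increase_stimulation"  # under-loaded, e.g. drowsy occupant
    if load > 0.7:
        return "reduce_stimulation"    # overloaded occupant
    return "no_change"

action = stimulation_adjustment(loading_curve(0.9))  # high cognitive load
```

Because the curve is continuous, small changes in the analyzed cognitive state move the occupant smoothly along it rather than between hard categories, and the vehicle's response bands can be tuned independently of the state analysis.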

IPC Classes

  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06N 3/08 - Learning methods
  • G06N 5/04 - Inference or reasoning models
  • G06N 20/00 - Machine learning

30.

Vehicle video recommendation via affect

      
Application Number 16408552
Grant Number 10911829
Status In Force
Filing Date 2019-05-10
First Publication Date 2019-08-29
Grant Date 2021-02-02
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Turcot, Panu James

Abstract

Techniques are disclosed for vehicle video recommendation via affect. A first media presentation is played to a vehicle occupant. The playing is accomplished using a video client. Cognitive state data for the vehicle occupant is captured, where the cognitive state data includes video facial data from the vehicle occupant during the first media presentation playing. The first media presentation is ranked, on an analysis server, relative to another media presentation based on the cognitive state data which was captured for the vehicle occupant. The ranking is determined for the vehicle occupant. The cognitive state data which was captured for the vehicle occupant is correlated, on the analysis server, to cognitive state data collected from other people who experienced the first media presentation. One or more further media presentation selections are recommended to the vehicle occupant, based on the ranking and the correlating.

IPC Classes

  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
  • H04N 21/4223 - Cameras
  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/06 - Buying, selling or leasing transactions
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons

31.

Cognitive state based vehicle manipulation using near-infrared image processing

      
Application Number 16289870
Grant Number 10922567
Status In Force
Filing Date 2019-03-01
First Publication Date 2019-06-27
Grant Date 2021-02-16
Owner Affectiva, Inc. (USA)
Inventor
  • Mahmoud, Abdelrahman N.
  • El Kaliouby, Rana
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Cognitive state-based vehicle manipulation uses near-infrared image processing. Images of a vehicle occupant are obtained using imaging devices within a vehicle. The images include facial data of the vehicle occupant. The images include visible light-based images and near-infrared based images. A classifier is trained based on the visible light content of the images to determine cognitive state data for the vehicle occupant. The classifier is modified based on the near-infrared image content. The modified classifier is deployed for analysis of additional images of the vehicle occupant, where the additional images are near-infrared based images. The additional images are analyzed to determine a cognitive state. The vehicle is manipulated based on the cognitive state that was analyzed. The cognitive state is rendered on a display located within the vehicle.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06N 7/00 - Computing arrangements based on specific mathematical models
  • H04N 5/00 - PICTORIAL COMMUNICATION, e.g. TELEVISION - Details of television systems
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G10L 25/78 - Detection of presence or absence of voice signals
  • H04N 5/247 - Arrangement of television cameras
  • H04N 5/33 - Transforming infrared radiation
  • G06N 3/00 - Computing arrangements based on biological models
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • B60R 11/00 - Arrangements for holding or mounting articles, not otherwise provided for
  • B60R 11/02 - Arrangements for holding or mounting articles, not otherwise provided for for radio sets, television sets, telephones, or the like; Arrangement of controls thereof
  • H04N 7/18 - Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
  • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use

32.

Audio analysis learning with video data

      
Application Number 16272054
Grant Number 10573313
Status In Force
Filing Date 2019-02-11
First Publication Date 2019-06-06
Grant Date 2020-02-25
Owner Affectiva, Inc. (USA)
Inventor
  • Mishra, Taniya
  • El Kaliouby, Rana

Abstract

Audio analysis learning is performed using video data. Video data is obtained, on a first computing device, wherein the video data includes images of one or more people. Audio data is obtained, on a second computing device, which corresponds to the video data. A face within the video data is identified. A first voice, from the audio data, is associated with the face within the video data. The face within the video data is analyzed for cognitive content. Audio features corresponding to the cognitive content of the video data are extracted. The audio data is segmented to correspond to an analyzed cognitive state. An audio classifier is learned, on a third computing device, based on the analyzing of the face within the video data. Further audio data is analyzed using the audio classifier.

IPC Classes

  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • G10L 25/90 - Pitch determination of speech signals
  • G10L 15/18 - Speech classification or search using natural language modelling
  • G10L 21/055 - Time compression or expansion for synchronising with other signals, e.g. video signals
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • B60R 16/037 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for occupant comfort
  • B60W 50/10 - Interpretation of driver requests or demands
  • G10L 15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G10L 21/0356 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for synchronising with other signals, e.g. video signals

33.

Avatar image animation using translation vectors

      
Application Number 16206051
Grant Number 10628985
Status In Force
Filing Date 2018-11-30
First Publication Date 2019-06-06
Grant Date 2020-04-21
Owner Affectiva, Inc. (USA)
Inventor
  • Mishra, Taniya
  • Reichenbach, George Alexander
  • El Kaliouby, Rana

Abstract

Techniques are described for image generation for avatar image animation using translation vectors. An avatar image is obtained for representation on a first computing device. An autoencoder is trained, on a second computing device comprising an artificial neural network, to generate synthetic emotive faces. A plurality of translation vectors is identified corresponding to a plurality of emotion metrics, based on the training. A bottleneck layer within the autoencoder is used to identify the plurality of translation vectors. A subset of the plurality of translation vectors is applied to the avatar image, wherein the subset represents an emotion metric input. The emotion metric input is obtained from facial analysis of an individual. An animated avatar image is generated for the first computing device, based on the applying, wherein the animated avatar image is reflective of the emotion metric input and the avatar image includes vocalizations.
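
The translation-vector mechanism can be sketched with latent-space arithmetic. This is a hypothetical illustration under stated assumptions: a trained autoencoder's bottleneck assigns each face a latent code, and the difference between the mean code of emotive faces and neutral faces gives a translation vector that can be scaled by an emotion-metric intensity and added to any avatar's code before decoding. The encoder/decoder themselves are omitted here; the codes are hand-picked toy vectors.

```python
def mean_code(codes):
    """Average a list of latent codes dimension-wise."""
    return [sum(dim) / len(codes) for dim in zip(*codes)]

def translation_vector(neutral_codes, emotive_codes):
    """Direction in latent space from neutral faces toward an emotion."""
    n, e = mean_code(neutral_codes), mean_code(emotive_codes)
    return [ei - ni for ni, ei in zip(n, e)]

def apply_emotion(avatar_code, vector, intensity):
    """intensity in [0, 1] comes from facial analysis of the individual."""
    return [a + intensity * v for a, v in zip(avatar_code, vector)]

# Toy 2-D bottleneck codes standing in for autoencoder output.
smile = translation_vector(neutral_codes=[[0.0, 0.0], [0.2, 0.0]],
                           emotive_codes=[[1.0, 0.5], [1.2, 0.5]])
print(apply_emotion([0.5, 0.5], smile, intensity=0.5))  # → [1.0, 0.75]
```

Scaling the vector by the measured emotion metric is what lets one avatar image express a continuum of intensities.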

IPC Classes

  • G06T 13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G10L 15/00 - Speech recognition
  • G10L 15/06 - Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
  • G10L 15/16 - Speech classification or search using artificial neural networks
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • G10L 13/027 - Concept to speech synthesisers; Generation of natural phrases from machine-based concepts
  • G10L 25/78 - Detection of presence or absence of voice signals

34.

Cognitive state vehicle navigation based on image processing

      
Application Number 16261905
Grant Number 11067405
Status In Force
Filing Date 2019-01-30
First Publication Date 2019-05-30
Grant Date 2021-07-20
Owner Affectiva, Inc. (USA)
Inventor
  • Fouad, Maha Amr Mohamed
  • Cabot, Chilton Lyons
  • El Kaliouby, Rana
  • Handford, Forest Jay

Abstract

Image-based analysis techniques are used for cognitive state vehicle navigation, including an autonomous or a semi-autonomous vehicle. Images including facial data of a vehicle occupant are obtained using an in-vehicle imaging device. The vehicle occupant can be an operator of or a passenger within the vehicle. A first computing device is used to analyze the images to determine occupant cognitive state data. The analysis can occur at various times along a vehicle travel route. The cognitive state data is mapped to location data along the vehicle travel route. Information about the vehicle travel route is updated based on the cognitive state data. The updated information is provided for vehicle control. The updated information is rendered on a second computing device. The updated information includes road ratings for segments of the vehicle travel route. The updated information includes an emotion metric for vehicle travel route segments.
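
The mapping of cognitive state data to route locations can be sketched as a bucketing-and-averaging step. This is a hypothetical illustration: the segment scheme (10 km buckets), the comfort scale, and the sample values are assumptions, not from the patent.

```python
from collections import defaultdict

def rate_route(samples, segment_of):
    """samples: (location, comfort_score) pairs collected along the trip;
    segment_of: maps a location to a route-segment id.
    Returns the mean rating per segment of the travel route."""
    buckets = defaultdict(list)
    for location, score in samples:
        buckets[segment_of(location)].append(score)
    return {seg: sum(s) / len(s) for seg, s in buckets.items()}

# Locations are kilometre marks; every 10 km is one segment.
trip = [(1.0, 0.9), (4.0, 0.8), (12.0, 0.3), (18.0, 0.4)]
ratings = rate_route(trip, segment_of=lambda km: int(km // 10))
print(ratings)  # segment 0 ≈ 0.85, segment 1 ≈ 0.35
```

Per-segment aggregates like these are what the abstract calls road ratings and emotion metrics for route segments.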

IPC Classes

  • G01C 21/34 - Route searching; Route guidance
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • G06N 3/00 - Computing arrangements based on biological models
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/08 - Learning methods
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06N 7/00 - Computing arrangements based on specific mathematical models
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state

35.

Directed control transfer for autonomous vehicles

      
Application Number 16234762
Grant Number 11465640
Status In Force
Filing Date 2018-12-28
First Publication Date 2019-05-23
Grant Date 2022-10-11
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mishra, Taniya
  • Zeilman, Andrew Todd
  • Zijderveld, Gabriele

Abstract

Techniques are described for cognitive analysis for directed control transfer for autonomous vehicles. In-vehicle sensors are used to collect cognitive state data for an individual within a vehicle which has an autonomous mode of operation. The cognitive state data includes infrared, facial, audio, or biosensor data. One or more processors analyze the cognitive state data collected from the individual to produce cognitive state information. The cognitive state information includes a subset or summary of cognitive state data, or an analysis of the cognitive state data. The individual is scored based on the cognitive state information to produce a cognitive scoring metric. A state of operation is determined for the vehicle. A condition of the individual is evaluated based on the cognitive scoring metric. Control is transferred between the vehicle and the individual based on the state of operation of the vehicle and the condition of the individual.
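
The transfer logic can be sketched as a small decision function. This is a hypothetical sketch: the threshold value, the state names, and the return labels are illustrative assumptions; the patent's cognitive scoring metric could gate the handover in either direction as shown.

```python
ALERTNESS_THRESHOLD = 0.6  # assumed minimum score for the individual to drive

def transfer_decision(vehicle_state, cognitive_score):
    """vehicle_state: 'autonomous' or 'manual' (who holds control now).
    Returns who should hold control after evaluating the occupant."""
    fit_to_drive = cognitive_score >= ALERTNESS_THRESHOLD
    if vehicle_state == "manual" and not fit_to_drive:
        return "vehicle"      # take over from an impaired driver
    if vehicle_state == "autonomous" and fit_to_drive:
        return "individual"   # occupant is fit to accept a handover
    # Otherwise leave control where it is.
    return "individual" if vehicle_state == "manual" else "vehicle"

print(transfer_decision("manual", 0.3))      # → vehicle
print(transfer_decision("autonomous", 0.9))  # → individual
```

The point the abstract makes is that both the vehicle's state of operation and the occupant's condition feed the decision, which this two-input function mirrors.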

IPC Classes

  • B60W 50/08 - Interaction between the driver and the control system
  • B60W 40/09 - Driving style or behaviour
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • G06N 3/00 - Computing arrangements based on biological models
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 7/00 - Computing arrangements based on specific mathematical models
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • G06N 3/08 - Learning methods
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06V 20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
  • G06V 40/16 - Human faces, e.g. facial parts, sketches or expressions
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • B60W 50/00 - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT - Details of control systems for road vehicle drive control not related to the control of a particular sub-unit
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • A61B 5/0533 - Measuring galvanic skin response
  • A61B 5/024 - Measuring pulse rate or heart rate

36.

Sporadic collection of affect data within a vehicle

      
Application Number 16208211
Grant Number 10779761
Status In Force
Filing Date 2018-12-03
First Publication Date 2019-05-09
Grant Date 2020-09-22
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad

Abstract

Mental state analysis uses sporadic collection of affect data within a vehicle. Mental state data of a vehicle occupant is collected within a vehicle on an intermittent basis. The mental state data includes facial image data and the facial image data is collected intermittently across a plurality of devices within the vehicle. The mental state data further includes audio information. Processors are used to interpolate mental state data between the intermittent collections. Analysis of the mental state data is obtained on the vehicle occupant, where the analysis of the mental state data includes analyzing the facial image data. An output is rendered based on the analysis of the mental state data. The rendering includes communicating by a virtual assistant, communicating with a navigation component, and manipulating the vehicle. The mental state data is translated into an emoji.
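
Filling the gaps between sporadic samples can be sketched with simple linear interpolation. This is a hypothetical illustration: the timestamps, the "smile intensity" signal, and linear interpolation itself are assumptions standing in for whatever estimator the processors actually use.

```python
def interpolate(samples, t):
    """samples: sorted (timestamp, value) pairs from sporadic collection.
    Returns the value at time t, linearly interpolated between the
    nearest neighbouring samples (clamped at the ends)."""
    if t <= samples[0][0]:
        return samples[0][1]
    if t >= samples[-1][0]:
        return samples[-1][1]
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

# Smile intensity sampled at seconds 0, 10 and 30; estimate second 20.
readings = [(0, 0.2), (10, 0.8), (30, 0.4)]
print(interpolate(readings, 20))  # ≈ 0.6
```

Interpolation is what lets downstream analysis treat the intermittently collected data as a continuous signal.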

IPC Classes

  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06N 7/00 - Computing arrangements based on specific mathematical models
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • G06N 3/00 - Computing arrangements based on biological models
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/01 - Measuring temperature of body parts

37.

Vehicle content recommendation using cognitive states

      
Application Number 16211592
Grant Number 10897650
Status In Force
Filing Date 2018-12-06
First Publication Date 2019-04-11
Grant Date 2021-01-19
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Turcot, Panu James
  • Zeilman, Andrew Todd
  • Zijderveld, Gabriele

Abstract

Content manipulation uses cognitive states for vehicle content recommendation. One or more images of a vehicle occupant are obtained using imaging devices within a vehicle. The one or more images include facial data of the vehicle occupant. A content ingestion history of the vehicle occupant is obtained, where the content ingestion history includes one or more audio or video selections. A first computing device is used to analyze the one or more images to determine a cognitive state of the vehicle occupant. The cognitive state is correlated to the content ingestion history using a second computing device. One or more further audio or video selections are recommended to the vehicle occupant, based on the cognitive state, the content ingestion history, and the correlating. The analyzing can be compared with additional analyzing performed on additional vehicle occupants. The additional vehicle occupants can be in the same vehicle as the first occupant or different vehicles.
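
The recommendation step can be sketched as a nearest-match over unseen content. This is a hypothetical illustration: the mood vectors, the (valence, arousal) state representation, and the distance-based ranking are assumptions; the patent's correlation with ingestion history could take many forms.

```python
def recommend(current_state, history, catalog, k=1):
    """history: set of titles already ingested; catalog: {title: mood_vector}.
    Rank unseen titles by squared distance between their mood vector and
    the occupant's current cognitive state, e.g. (valence, arousal)."""
    def dist(title):
        return sum((a - b) ** 2 for a, b in zip(current_state, catalog[title]))
    unseen = [t for t in catalog if t not in history]
    return sorted(unseen, key=dist)[:k]

catalog = {"calm playlist": (0.6, 0.2), "action film": (0.4, 0.9),
           "comedy": (0.9, 0.6)}
# A tense occupant (low valence, high arousal) who already heard the playlist:
print(recommend((0.3, 0.8), history={"calm playlist"}, catalog=catalog))
# → ['action film']
```

Excluding already-ingested items is the role the content ingestion history plays in the abstract.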

IPC Classes

  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
  • H04N 21/4223 - Cameras
  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/06 - Buying, selling or leasing transactions
  • G06N 3/00 - Computing arrangements based on biological models
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use
  • G08G 1/04 - Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology

38.

Personal emotional profile generation for vehicle manipulation

      
Application Number 16173160
Grant Number 10796176
Status In Force
Filing Date 2018-10-29
First Publication Date 2019-03-07
Grant Date 2020-10-06
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Zijderveld, Gabriele

Abstract

Personal emotional profile generation uses cognitive state analysis for vehicle manipulation. Cognitive state data is obtained from an individual. The cognitive state data is extracted, using one or more processors, from facial images of an individual captured as they respond to stimuli within a vehicle. The cognitive state data extracted from facial images is analyzed to produce cognitive state information. The cognitive state information is categorized, using one or more processors, against a personal emotional profile for the individual. The vehicle is manipulated, based on the cognitive state information, the categorizing, and the stimuli. The personal emotional profile is generated by comparing the cognitive state information of the individual with cognitive state norms from a plurality of individuals and is based on cognitive state data for the individual that is accumulated over time. The cognitive state information is augmented based on audio data collected from within the vehicle.
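
The comparison against cognitive state norms can be sketched as a per-state z-score. This is a hypothetical illustration: the state names, the readings, and the z-score formulation are assumptions standing in for the patent's profile generation.

```python
from statistics import mean, stdev

def personal_profile(individual_readings, population_readings):
    """Both args: {state_name: [readings accumulated over time]}.
    Returns a z-score per state showing how the individual deviates
    from the population norm."""
    profile = {}
    for state, values in individual_readings.items():
        norm = population_readings[state]
        mu, sigma = mean(norm), stdev(norm)
        profile[state] = (mean(values) - mu) / sigma if sigma else 0.0
    return profile

population = {"frustration": [0.2, 0.4, 0.3, 0.5, 0.1]}
me = {"frustration": [0.7, 0.8]}
profile = personal_profile(me, population)
print(profile)  # roughly 2.8 standard deviations above the norm
```

A profile expressed in deviations from the norm is what allows the same raw reading to mean different things for different individuals.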

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06N 3/08 - Learning methods
  • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use

39.

Multimodal machine learning for emotion metrics

      
Application Number 16127618
Grant Number 10628741
Status In Force
Filing Date 2018-09-11
First Publication Date 2019-01-10
Grant Date 2020-04-21
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mavadati, Seyedmohammad
  • Mishra, Taniya
  • Peacock, Timothy
  • Turcot, Panu James

Abstract

Techniques are described for machine-trained analysis for multimodal machine learning. A computing device captures a plurality of information channels, wherein the plurality of information channels includes contemporaneous audio information and video information from an individual. A multilayered convolutional computing system learns trained weights using the audio information and the video information from the plurality of information channels, wherein the trained weights cover both the audio information and the video information and are trained simultaneously, and wherein the learning facilitates emotional analysis of the audio information and the video information. A second computing device captures further information and analyzes the further information using trained weights to provide an emotion metric based on the further information. Additional information is collected with the plurality of information channels from a second individual and learning the trained weights factors in the additional information. The further information can include only video data or audio data.
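
The "trained simultaneously across modalities" idea can be sketched with early fusion. This is a hypothetical sketch: a one-unit logistic model stands in for the patent's multilayered convolutional system, and the feature layout, learning rate, and labels are assumptions. The point it preserves is that one weight vector covers concatenated audio and video features and all weights are updated together.

```python
import math

def train(samples, epochs=500, lr=0.5):
    """samples: ((audio_feats, video_feats), label) pairs, label 0 or 1."""
    dim = len(samples[0][0][0]) + len(samples[0][0][1])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for (audio, video), y in samples:
            x = list(audio) + list(video)            # fuse the channels
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))           # predicted emotion metric
            g = p - y                                # logistic-loss gradient
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def emotion_metric(model, audio, video):
    w, b = model
    z = sum(wi * xi for wi, xi in zip(w, list(audio) + list(video))) + b
    return 1.0 / (1.0 + math.exp(-z))

# Tiny stand-in features: one number per channel (e.g. pitch, smile level).
data = [(((0.9,), (0.8,)), 1), (((0.1,), (0.2,)), 0),
        (((0.8,), (0.9,)), 1), (((0.2,), (0.1,)), 0)]
model = train(data)
print(emotion_metric(model, (0.85,), (0.9,)))  # close to 1
```

Because the weights span both channels, the trained model can still score inputs where one modality is weak, matching the abstract's note that further information can be video-only or audio-only.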

IPC Classes

  • G06N 3/08 - Learning methods
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • G10L 21/0356 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for synchronising with other signals, e.g. video signals
  • G10L 25/30 - Speech or voice analysis techniques not restricted to a single one of groups characterised by the analysis technique using neural networks
  • G06N 20/10 - Machine learning using kernel methods, e.g. support vector machines [SVM]

40.

Cognitive state evaluation for vehicle navigation

      
Application Number 15975007
Grant Number 10922566
Status In Force
Filing Date 2018-05-09
First Publication Date 2018-11-15
Grant Date 2021-02-16
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Disclosed embodiments provide cognitive state evaluation for vehicle navigation. The cognitive state evaluation is accomplished using a computer, where the computer can perform learning using a neural network such as a deep neural network (DNN) or a convolutional neural network (CNN). Images including facial data are obtained of a first occupant of a first vehicle. The images are analyzed to determine cognitive state data. Layers and weights are learned for the deep neural network. Images of a second occupant of a second vehicle are collected and analyzed to determine additional cognitive state data. The additional cognitive state data is analyzed, and the second vehicle is manipulated. A second imaging device is used to collect images of a person outside the second vehicle to determine cognitive state data. The second vehicle can be manipulated based on the cognitive state data of the person outside the vehicle.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
  • G06N 3/08 - Learning methods
  • B60W 50/14 - Means for informing the driver, warning the driver or prompting a driver intervention
  • G06N 5/04 - Inference or reasoning models
  • B60W 50/08 - Interaction between the driver and the control system
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/107 - Measuring physical dimensions, e.g. size of the entire body or parts thereof
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
  • G06N 3/02 - Neural networks
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/0531 - Measuring skin impedance
  • A61B 5/01 - Measuring temperature of body parts
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs

41.

Image analysis for emotional metric evaluation

      
Application Number 16017037
Grant Number 10869626
Status In Force
Filing Date 2018-06-25
First Publication Date 2018-10-25
Grant Date 2020-12-22
Owner Affectiva, Inc. (USA)
Inventor
  • Krupat, Jason
  • El Kaliouby, Rana
  • Radice, Jason
  • Cabot, Chilton Lyons

Abstract

Techniques are described for image analysis and representation for emotional metric threshold generation. A client device is used to collect image data of a user interacting with a media presentation, where the image data includes facial images of the user. One or more processors are used to analyze the image data to extract emotional content of the facial images. One or more emotional intensity metrics are determined based on the emotional content. The one or more emotional intensity metrics are stored into a digital storage component. The one or more emotional intensity metrics, obtained from the digital storage component, are coalesced into a summary emotional intensity metric. The summary emotional intensity metric is represented.
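
The coalescing step can be sketched as a simple aggregation. This is a hypothetical illustration: a peak-weighted mean is one plausible way to collapse stored intensity metrics into a summary metric; the weighting and the sample values are assumptions.

```python
def summarize(intensities, peak_weight=0.5):
    """Coalesce per-moment emotional intensity metrics into one summary
    metric by blending the mean intensity with the peak intensity."""
    avg = sum(intensities) / len(intensities)
    return (1 - peak_weight) * avg + peak_weight * max(intensities)

session = [0.1, 0.2, 0.9, 0.3, 0.1]   # brief strong reaction mid-session
print(summarize(session))  # ≈ 0.61
```

Blending in the peak keeps a short, intense reaction from being washed out by an otherwise flat session, which a plain mean would do.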

IPC Classes

  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • A61B 5/01 - Measuring temperature of body parts
  • G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
  • G06Q 10/10 - Office automation; Time management
  • A61B 3/113 - Objective types, i.e. instruments for examining the eyes independent of the patients perceptions or reactions for determining or recording eye movement
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body

42.

Image analysis for two-sided data hub

      
Application Number 15918122
Grant Number 10401860
Status In Force
Filing Date 2018-03-12
First Publication Date 2018-07-12
Grant Date 2019-09-03
Owner Affectiva, Inc. (USA)
Inventor
  • Krupat, Jason
  • El Kaliouby, Rana
  • Radice, Jason
  • Zijderveld, Gabriele
  • Cabot, Chilton Lyons

Abstract

Image analysis is performed for a two-sided data hub. Data reception on a first computing device is enabled by an individual and a content provider. Cognitive state data including facial data on the individual is collected on a second computing device. The cognitive state data is analyzed on a third computing device and the analysis is provided to the individual. The cognitive state data is evaluated and the evaluation is provided to the content provider. A mood dashboard is displayed to the individual based on the analyzing. The individual opts in to enable data reception for the individual. The content provider provides content via a website.

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
  • G05D 1/02 - Control of position or course in two dimensions
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/46 - Extraction of features or characteristics of the image
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G10L 15/18 - Speech classification or search using natural language modelling
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • G10L 25/90 - Pitch determination of speech signals
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages

43.

Vehicle manipulation using convolutional image processing

      
Application Number 15910385
Grant Number 11017250
Status In Force
Filing Date 2018-03-02
First Publication Date 2018-07-05
Grant Date 2021-05-25
Owner Affectiva, Inc. (USA)
Inventor
  • Turcot, Panu James
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad

Abstract

Disclosed embodiments provide for vehicle manipulation using convolutional image processing. The convolutional image processing is accomplished using a computer, where the computer can include a multilayered analysis engine. The multilayered analysis engine can include a convolutional neural network (CNN). The computer is initialized for convolutional processing. A plurality of images is obtained using an imaging device within a first vehicle. A multilayered analysis engine is trained using the plurality of images. The multilayered analysis engine includes multiple layers that include convolutional layers and hidden layers. The multilayered analysis engine is used for cognitive state analysis. The evaluating provides a cognitive state analysis. Further images are analyzed using the multilayered analysis engine. The further images include facial image data from one or more persons present in a second vehicle. Voice data is collected to augment the cognitive state analysis. Manipulation data is provided to the second vehicle based on the evaluating.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G01C 21/34 - Route searching; Route guidance
  • G06K 9/66 - Methods or arrangements for recognition using electronic means using simultaneous comparisons or correlations of the image signals with a plurality of references, e.g. resistor matrix references adjustable by an adaptive method, e.g. learning
  • G06N 3/08 - Learning methods
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G06K 9/46 - Extraction of features or characteristics of the image
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers

44.

Vehicular cognitive data collection using multiple devices

      
Application Number 15886275
Grant Number 10592757
Status In Force
Filing Date 2018-02-01
First Publication Date 2018-06-07
Grant Date 2020-03-17
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Vehicle cognitive data is collected using multiple devices. A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and cognitive states inferred from reactions to the tasks and activities. A first computing device within a vehicle obtains cognitive state data which is collected on an occupant of the vehicle from multiple sources, wherein the multiple sources include at least two sources of facial image data. A second computing device generates analysis of the cognitive state data which is collected from the multiple sources. A third computing device renders an output which is based on the analysis of the cognitive state data. The cognitive state data from multiple sources is tagged. The cognitive state data from the multiple sources is aggregated. The cognitive state data is interpolated when collection is intermittent. The cognitive state analysis is interpolated when the cognitive state data is intermittent.
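The interpolation step the abstract mentions (filling in cognitive state data when collection is intermittent) can be sketched as linear interpolation between known samples; the function name and the use of None for gaps are hypothetical choices:

```python
def interpolate_gaps(samples):
    """Linearly fill None gaps between known samples; leading and
    trailing gaps have no anchor on one side and are left as None."""
    out = list(samples)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        for i in range(a + 1, b):
            t = (i - a) / (b - a)  # fractional position within the gap
            out[i] = out[a] + t * (out[b] - out[a])
    return out
```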

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G10L 15/26 - Speech to text systems
  • B60W 50/10 - Interpretation of driver requests or demands
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination

45.

Audio analysis learning using video data

      
Application Number 15861855
Grant Number 10204625
Status In Force
Filing Date 2018-01-04
First Publication Date 2018-05-24
Grant Date 2019-02-12
Owner Affectiva, Inc. (USA)
Inventor
  • Mishra, Taniya
  • El Kaliouby, Rana

Abstract

Audio analysis learning is performed using video data. Video data is obtained, on a first computing device, wherein the video data includes images of one or more people. Audio data is obtained, on a second computing device, which corresponds to the video data. A face is identified within the video data. A first voice, from the audio data, is associated with the face within the video data. The face within the video data is analyzed for cognitive content. Audio features are extracted corresponding to the cognitive content of the video data. The audio data is segmented to correspond to an analyzed cognitive state. An audio classifier is learned, on a third computing device, based on the analyzing of the face within the video data. Further audio data is analyzed using the audio classifier.
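The cross-modal training the abstract describes (labels derived from facial analysis of the video used to supervise an audio classifier) can be sketched with a nearest-centroid classifier; the feature vectors and label names below are hypothetical:

```python
from statistics import mean

def train_centroids(features, labels):
    """Nearest-centroid 'audio classifier': group audio feature vectors by
    the cognitive-state label derived from the corresponding video frames,
    then average each group into a centroid."""
    by_label = {}
    for f, lab in zip(features, labels):
        by_label.setdefault(lab, []).append(f)
    return {lab: [mean(dim) for dim in zip(*fs)] for lab, fs in by_label.items()}

def classify(centroids, feature):
    """Assign further audio data to the label of the nearest centroid."""
    def dist(lab):
        return sum((a - b) ** 2 for a, b in zip(feature, centroids[lab]))
    return min(centroids, key=dist)
```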

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • B60W 50/10 - Interpretation of driver requests or demands
  • G10L 15/02 - Feature extraction for speech recognition; Selection of recognition unit
  • G10L 15/18 - Speech classification or search using natural language modelling
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • G10L 15/25 - Speech recognition using non-acoustical features using position of the lips, movement of the lips or face analysis
  • G10L 25/51 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • G10L 25/90 - Pitch determination of speech signals
  • B60R 16/037 - Electric or fluid circuits specially adapted for vehicles and not otherwise provided for; Arrangement of elements of electric or fluid circuits specially adapted for vehicles and not otherwise provided for electric for occupant comfort
  • G10L 21/055 - Time compression or expansion for synchronising with other signals, e.g. video signals
  • G10L 21/0356 - Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude for synchronising with other signals, e.g. video signals

46.

Vehicle manipulation using occupant image analysis

      
Application Number 15875644
Grant Number 10627817
Status In Force
Filing Date 2018-01-19
First Publication Date 2018-05-24
Grant Date 2020-04-21
Owner Affectiva, Inc. (USA)
Inventor
  • Zijderveld, Gabriele
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad

Abstract

Vehicle manipulation is performed using occupant image analysis. A camera within a vehicle is used to collect cognitive state data including facial data, on an occupant of a vehicle. A cognitive state profile is learned, on a first computing device, for the occupant based on the cognitive state data. The cognitive state profile includes information on absolute time. The cognitive state profile includes information on trip duration time. Voice data is collected and the cognitive state data is augmented with the voice data. Further cognitive state data is captured, on a second computing device, on the occupant while the occupant is in a second vehicle. The further cognitive state data is compared, on a third computing device, with the cognitive state profile that was learned for the occupant. The second vehicle is manipulated based on the comparing of the further cognitive state data.
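The comparison between the further cognitive state data and the learned profile could, for example, be a vector similarity check that flags the second vehicle for manipulation when the occupant's current state diverges from the profile; the cosine measure, threshold, and feature encoding here are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def should_adapt(profile_vec, observed_vec, threshold=0.8):
    """Trigger an adaptation when the observed state is dissimilar
    from the learned profile (threshold is illustrative)."""
    return cosine_similarity(profile_vec, observed_vec) < threshold
```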

IPC Classes

  • G05D 1/00 - Control of position, course, altitude, or attitude of land, water, air, or space vehicles, e.g. automatic pilot
  • G06N 3/04 - Architecture, e.g. interconnection topology
  • G08G 1/0967 - Systems involving transmission of highway information, e.g. weather, speed limits
  • G10L 25/48 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • B60W 40/08 - Estimation or calculation of driving parameters for road vehicle drive control systems not related to the control of a particular sub-unit related to drivers or passengers
  • G10L 15/18 - Speech classification or search using natural language modelling
  • G10L 25/63 - Speech or voice analysis techniques not restricted to a single one of groups specially adapted for particular use for comparison or discrimination for estimating an emotional state
  • G10L 15/22 - Procedures used during a speech recognition process, e.g. man-machine dialog
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • G05D 1/02 - Control of position or course in two dimensions
  • G08G 1/0962 - Arrangements for giving variable traffic instructions having an indicator mounted inside the vehicle, e.g. giving voice messages
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G08G 1/01 - Detecting movement of traffic to be counted or controlled
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06N 20/00 - Machine learning
  • G10L 25/90 - Pitch determination of speech signals

47.

Individual data sharing across a social network

      
Application Number 15720301
Grant Number 10799168
Status In Force
Filing Date 2017-09-29
First Publication Date 2018-02-08
Grant Date 2020-10-13
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Sadowsky, Richard Scott
  • Wilder-Smith, Oliver Orion

Abstract

Facial image data is collected from an individual to provide mental state data using a first web-enabled computing device. The mental state data is analyzed to produce mental state information using a second web-enabled computing device. The mental state information is shared across a social network using a third web-enabled computing device. The mental state data is also collected from the individual through capture of sensor information. The mental state data is also collected from the individual through capture of audio data. The individual elects to share the mental state information across the social network. The mental state data may be collected over a period of time and analyzed to determine a mood of the individual. The mental state information is translated into a representative icon for sharing, which may include an emoji. An image of the individual is shared along with the mental state information.

IPC Classes

  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • H04L 29/08 - Transmission control procedure, e.g. data link level control procedure
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • G06Q 10/10 - Office automation; Time management
  • G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
  • G16H 40/63 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation

48.

Mental state analysis using blink rate for vehicles

      
Application Number 15670791
Grant Number 10074024
Status In Force
Filing Date 2017-08-07
First Publication Date 2017-11-23
Grant Date 2018-09-11
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman N.
  • Mavadati, Seyedmohammad
  • Turcot, Panu James

Abstract

Mental state analysis is performed by obtaining video of an individual as the individual interacts with a computer, either by performing various operations, such as driving a vehicle or being a passenger in a vehicle, or by consuming a media presentation. The video is analyzed to determine eye-blink information on the individual, such as eye-blink rate or eye-blink duration. The blink-rate information is compensated for a context. Blinking for a group of people of which the individual is a part is evaluated. Mental states of the individual are inferred for a blink event based on the event itself, the blink duration of the individual, the difference in blinking between the individual and the remainder of the group, and the compensated blink-rate information. The blink-rate information and associated mental states can be used to modify an advertisement, a media presentation, a digital game, or vehicle controls.
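The blink-rate computation and context compensation the abstract describes can be sketched as follows; the resting baseline of 17 blinks per minute and the function names are illustrative assumptions, not values from the patent:

```python
def blink_rate(blink_timestamps, window_seconds):
    """Blinks per minute over an observation window."""
    return len(blink_timestamps) * 60.0 / window_seconds

def compensate(rate, context_baseline, population_baseline=17.0):
    """Rescale an observed rate by how the context (e.g. driving versus
    reading) shifts baseline blinking, so rates from different contexts
    become comparable."""
    return rate * population_baseline / context_baseline
```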

IPC Classes

  • B60Q 1/00 - Arrangement of optical signalling or lighting devices, the mounting or supporting thereof or circuits therefor
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • B60K 28/06 - Safety devices for propulsion-unit control, specially adapted for, or arranged in, vehicles, e.g. preventing fuel supply or ignition in the event of potentially dangerous conditions responsive to conditions relating to the driver responsive to incapacity of driver
  • B60R 11/04 - Mounting of cameras operative during drive; Arrangement of controls thereof relative to the vehicle
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/18 - Devices for psychotechnics; Testing reaction times for vehicle drivers
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

49.

Mental state mood analysis using heart rate collection based on video imagery

      
Application Number 15589959
Grant Number 10517521
Status In Force
Filing Date 2017-05-08
First Publication Date 2017-08-24
Grant Date 2019-12-31
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Bhatkar, Viprali
  • Haering, Niels
  • Kashef, Youssef
  • Osman, Ahmed Adel

Abstract

Video of one or more people is obtained and analyzed. Heart rate information is determined from the video. The heart rate information is used in mental state analysis. The heart rate information and resulting mental state analysis are correlated to stimuli, such as digital media, which is consumed or with which a person interacts. The heart rate information is used to infer mental states. The inferred mental states are used to output a mood measurement. The mental state analysis, based on the heart rate information, is used to optimize digital media or modify a digital game. Training is employed in the analysis. Machine learning is engaged to facilitate the training.
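Recovering heart rate from video generally means finding the dominant periodicity in a skin-region intensity trace over time. A minimal sketch using a naive DFT (this stands in for, and is not, any method claimed in the patent):

```python
import math

def dominant_frequency(signal, fps):
    """Pick the strongest frequency in a mean-pixel-intensity trace via a
    naive DFT; with facial video, a peak near 0.7-4 Hz suggests heart rate."""
    n = len(signal)
    centered = [x - sum(signal) / n for x in signal]  # remove DC component
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):
        re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(centered))
        im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(centered))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n  # Hz; multiply by 60 for beats per minute
```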

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • A61B 5/026 - Measuring blood flow
  • A61B 5/0295 - Measuring blood flow using plethysmography, i.e. measuring the variations in the volume of a body part as modified by the circulation of blood therethrough, e.g. impedance plethysmography
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

50.

Analytics for livestreaming based on image analysis within a shared digital environment

      
Application Number 15444544
Grant Number 11056225
Status In Force
Filing Date 2017-02-28
First Publication Date 2017-06-15
Grant Date 2021-07-06
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Deal, Jr., James Henry
  • Handford, Forest Jay
  • Turcot, Panu James
  • Zijderveld, Gabriele

Abstract

Analytics are used for live streaming based on image analysis within a shared digital environment. A group of images is obtained from a group of participants involved in an interactive digital environment. The interactive digital environment can be a shared digital environment. The interactive digital environment can be a gaming environment. Emotional content within the group of images is analyzed for a set of participants within the group of participants. Results of the analyzing of the emotional content within the group of images are provided to a second set of participants within the group of participants. The analyzing emotional content includes identifying an image of an individual, identifying a face of the individual, determining facial regions, and performing content evaluation based on applying image classifiers.

IPC Classes

  • A63F 13/86 - Watching games played by other players
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04N 21/4223 - Cameras
  • H04N 21/2187 - Live feed
  • H04N 21/478 - Supplemental services, e.g. displaying phone caller identification or shopping application
  • H04N 21/233 - Processing of audio elementary streams
  • H04N 21/422 - Input-only peripherals, e.g. global positioning system [GPS]
  • A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
  • A63F 13/92 - Video game devices specially adapted to be hand-held while playing
  • A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
  • G06K 9/46 - Extraction of features or characteristics of the image
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
  • G16H 20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
  • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
  • G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

51.

Image analysis using sub-sectional component evaluation to augment classifier usage

      
Application Number 15395750
Grant Number 11232290
Status In Force
Filing Date 2016-12-30
First Publication Date 2017-04-20
Grant Date 2022-01-25
Owner Affectiva, Inc. (USA)
Inventor
  • Mcduff, Daniel
  • El Kaliouby, Rana

Abstract

Images are analyzed using sub-sectional component evaluation in order to augment classifier usage. An image of an individual is obtained. The face of the individual is identified, and regions within the face are determined. The individual is evaluated to be within a sub-sectional component of a population based on a demographic or based on an activity. An evaluation of content of the face is performed based on the individual being within a sub-sectional component of a population. The sub-sectional component of a population is used for disambiguating among content types for the content of the face. A Bayesian framework that includes a conditional probability is used to perform the evaluation of the content of the face, and the evaluation is further based on a prior event that occurred.
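The Bayesian framework with a conditional probability can be sketched with Bayes' rule, where the prior encodes the sub-sectional component of the population (demographic or activity); the content types and probabilities below are hypothetical:

```python
def posterior(prior, likelihoods, evidence_label):
    """Bayes' rule over content types: P(type | evidence) is proportional to
    P(evidence | type) * P(type). The prior can reflect the sub-sectional
    component of the population the individual belongs to."""
    unnorm = {t: prior[t] * likelihoods[t][evidence_label] for t in prior}
    z = sum(unnorm.values())  # normalizing constant
    return {t: p / z for t, p in unnorm.items()}
```

For example, with an even prior over two content types and a facial cue that is three times as likely under one of them, the posterior disambiguates toward that type.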

IPC Classes

  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G06K 9/46 - Extraction of features or characteristics of the image
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • G06K 9/32 - Aligning or centering of the image pick-up or image-field
  • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
  • A61B 5/1171 - Identification of persons based on the shapes or appearances of their bodies or parts thereof
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G16H 40/67 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for remote operation

52.

Video recommendation via affect

      
Application Number 15357585
Grant Number 10289898
Status In Force
Filing Date 2016-11-21
First Publication Date 2017-03-09
Grant Date 2019-05-14
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman
  • Turcot, Panu James

Abstract

Analysis of mental state data is provided to enable video recommendations via affect. Analysis and recommendation are made for socially shared live-stream video. Video response is evaluated based on viewing and sampling various videos. Data is captured for viewers of a video, where the data includes facial information and/or physiological data. Facial and physiological information is gathered for a group of viewers. In some embodiments, demographic information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/06 - Buying, selling or leasing transactions
  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/4223 - Cameras
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies

53.

Affect usage within a gaming context

      
Application Number 15012246
Grant Number 10843078
Status In Force
Filing Date 2016-02-01
First Publication Date 2016-05-26
Grant Date 2020-11-24
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Turcot, Panu James
  • Handford, Forest Jay
  • Bender, Daniel
  • Picard, Rosalind Wright
  • Sadowsky, Richard Scott
  • Wilder-Smith, Oliver Orion

Abstract

Mental state data is collected as a person interacts with a game played on a machine. The mental state data includes facial data, where the facial data includes facial regions or facial landmarks. The mental state data can include physiological data and actigraphy data. The mental state data is analyzed to produce mental state information. Mental state data and/or mental state information can be shared across a social network or a gaming community. The affect of the person interacting with the game can be represented to the social network or gaming community in the form of an avatar. Recommendations based on the affect resulting from the analysis can be made to the person interacting with the game. Mental states are analyzed locally or via a web service. Based on the results of the analysis, the game with which the person is interacting is modified.

IPC Classes

  • A63F 13/58 - Controlling game characters or game objects based on the game progress by computing conditions of game characters, e.g. stamina, strength, motivation or energy level
  • A63F 13/655 - Generating or modifying game content before or while executing the game program, e.g. authoring tools specially adapted for game development or game-integrated level editor automatically by game devices or servers from real world data, e.g. measurement in live racing competition by importing photos, e.g. of the player
  • A63F 13/825 - Fostering virtual characters
  • A63F 13/213 - Input arrangements for video game devices characterised by their sensors, purposes or types comprising photodetecting means, e.g. cameras, photodiodes or infrared cells
  • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
  • G07F 17/32 - Coin-freed apparatus for hiring articles; Coin-freed facilities or services for games, toys, sports, or amusements
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training
  • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
  • G16H 40/63 - ICT specially adapted for the management or administration of healthcare resources or facilities; ICT specially adapted for the management or operation of medical equipment or devices for the operation of medical equipment or devices for local operation

54.

Sporadic collection with mobile affect data

      
Application Number 14961279
Grant Number 10143414
Status In Force
Filing Date 2015-12-07
First Publication Date 2016-03-24
Grant Date 2018-12-04
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Bender, Daniel
  • Kodra, Evan
  • Nowak, Oliver Ernst
  • Sadowsky, Richard Scott

Abstract

An individual can exhibit one or more mental states when reacting to a stimulus. A camera or other monitoring device can be used to collect, on an intermittent basis, mental state data including facial data. The mental state data can be interpolated between the intermittent collecting. The facial data can be obtained from a series of images of the individual where the images are captured intermittently. A second face can be identified, and the first face and the second face can be tracked.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/01 - Measuring temperature of body parts
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

55.

Image analysis using a semiconductor processor for facial evaluation

      
Application Number 14947789
Grant Number 10474875
Status In Force
Filing Date 2015-11-20
First Publication Date 2016-03-17
Grant Date 2019-11-12
Owner Affectiva, Inc. (USA)
Inventor
  • Pitre, Boisy G
  • El Kaliouby, Rana
  • Turcot, Panu James

Abstract

Image analysis for facial evaluation is performed using logic encoded in a semiconductor processor. The semiconductor chip analyzes video images that are captured using one or more cameras and evaluates them to identify one or more persons in the videos. When a person is identified, the semiconductor chip locates the face of the evaluated person in the video. Facial regions of interest are extracted and differences in the regions of interest in the face are identified. The semiconductor chip uses classifiers to map facial regions for emotional response content and evaluate the emotional response content to produce an emotion score. The classifiers provide gender, age, or ethnicity with an associated probability. Localization logic within the chip is used to localize a second face when one is present in the video. The one or more faces are tracked, and identifiers for the faces are provided.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • G06T 7/11 - Region-based segmentation
  • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
  • G16H 30/40 - ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
  • G16H 50/30 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for individual health risk assessment
  • G16H 20/70 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to mental therapies, e.g. psychological therapy or autogenous training

56.

Facial tracking with classifiers

      
Application Number 14848222
Grant Number 10614289
Status In Force
Filing Date 2015-09-08
First Publication Date 2016-01-07
Grant Date 2020-04-07
Owner Affectiva, Inc. (USA)
Inventor
  • Senechal, Thibaud
  • El Kaliouby, Rana
  • Turcot, Panu James

Abstract

Concepts for facial tracking with classifiers are disclosed. One or more faces are detected and tracked in a series of video frames that include at least one face. Video is captured and partitioned into the series of frames. A first video frame is analyzed using classifiers trained to detect the presence of at least one face in the frame. The classifiers are used to initialize locations for a first set of facial landmarks for the first face. The locations of the facial landmarks are refined using localized information around the landmarks, and a rough bounding box that contains the facial landmarks is estimated. The future locations of the facial landmarks detected in the first video frame are estimated for a future video frame. The detection of the facial landmarks and the estimation of their future locations are insensitive to rotation, orientation, scaling, or mirroring of the face.
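Two of the steps the abstract describes, estimating a rough bounding box around detected landmarks and predicting their locations in a future frame, can be illustrated with a minimal sketch. This is a hypothetical toy, not Affectiva's patented tracker; the constant-velocity prediction is one simple choice of motion model.

```python
# Sketch: bounding box around facial landmarks plus a constant-velocity
# prediction of where those landmarks will be in the next frame.

def bounding_box(landmarks):
    """Return (x_min, y_min, x_max, y_max) enclosing all landmark points."""
    xs = [x for x, _ in landmarks]
    ys = [y for _, y in landmarks]
    return (min(xs), min(ys), max(xs), max(ys))

def predict_next(prev, curr):
    """Constant-velocity estimate: next = curr + (curr - prev), per landmark."""
    return [(2 * cx - px, 2 * cy - py)
            for (px, py), (cx, cy) in zip(prev, curr)]

frame1 = [(10, 10), (20, 10), (15, 18)]   # landmarks in frame t-1
frame2 = [(12, 11), (22, 11), (17, 19)]   # landmarks in frame t
print(bounding_box(frame2))               # (12, 11, 22, 19)
print(predict_next(frame1, frame2))       # [(14, 12), (24, 12), (19, 20)]
```

A real tracker would refine the prediction with localized appearance information around each landmark, as the abstract notes.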

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06K 9/62 - Methods or arrangements for recognition using electronic means
  • G16H 50/70 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for mining of medical data, e.g. analysing previous cases of other patients
  • G16H 20/30 - ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
  • G16H 50/50 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for simulation or modelling of medical disorders
  • G16H 50/20 - ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems

57.

Video recommendation using affect

      
Application Number 14821896
Grant Number 09503786
Status In Force
Filing Date 2015-08-10
First Publication Date 2015-12-03
Grant Date 2016-11-22
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Mahmoud, Abdelrahman
  • Turcot, Panu James

Abstract

Analysis of mental states is provided to enable data analysis pertaining to video recommendation based on affect. Analysis and recommendation can be for socially shared livestream video. Video response may be evaluated based on viewing and sampling various videos. Data is captured for viewers of a video where the data includes facial information and/or physiological data. Facial and physiological information may be gathered for a group of viewers. In some embodiments, demographics information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.
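The ranking step mentioned at the end of the abstract can be sketched simply: aggregate per-viewer affect scores per video and sort. The data and function names here are hypothetical illustrations, not the patented recommendation method.

```python
# Sketch: rank videos by the mean affect score of their viewers,
# one simple way to turn captured reactions into a recommendation order.
from statistics import mean

def rank_videos(responses):
    """responses: {video_id: [per-viewer affect scores]} -> ids, best first."""
    return sorted(responses, key=lambda v: mean(responses[v]), reverse=True)

responses = {
    "cat_clip": [0.9, 0.8, 0.85],
    "lecture": [0.2, 0.4, 0.3],
    "trailer": [0.6, 0.7, 0.65],
}
print(rank_videos(responses))  # ['cat_clip', 'trailer', 'lecture']
```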

IPC Classes

  • H04H 60/33 - Arrangements for monitoring the users' behaviour or opinions
  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
  • H04N 21/442 - Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed or the storage space available from the internal hard disk
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/06 - Buying, selling or leasing transactions
  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/4223 - Cameras
  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints

58.

Mental state analysis for norm generation

      
Application Number 14598067
Grant Number 09959549
Status In Force
Filing Date 2015-01-15
First Publication Date 2015-05-21
Grant Date 2018-05-01
Owner Affectiva, Inc. (USA)
Inventor
  • Kodra, Evan
  • El Kaliouby, Rana
  • Peacock, Timothy
  • Poulin, Gregory

Abstract

Mental state data is gathered from a plurality of people and analyzed in order to determine mental state information. Metrics are generated based on the mental state information gathered as the people view media presentations. Norms, defined as the quantitative measures of the mental states of a plurality of people as they view the media presentation, are determined based on the mental state information metrics. The norms can be determined based on various viewer criteria including country of residence, demographic group, or device type on which the media presentation is viewed. Responses to new media are then compared against norms to determine the effectiveness of the new media presentations.
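The norm-building and comparison the abstract describes can be illustrated as a standardized-distance check: compute a reference group's mean and spread for a metric, then score a new media response against it. The metric name and threshold-free z-score framing are assumptions for illustration only.

```python
# Sketch: build a norm (mean and spread) from per-viewer affect metrics,
# then measure how far a new response sits from that norm.
from statistics import mean, stdev

def build_norm(scores):
    return {"mean": mean(scores), "stdev": stdev(scores)}

def compare_to_norm(score, norm):
    """Standardized distance of a new response from the norm."""
    return (score - norm["mean"]) / norm["stdev"]

reference = [0.4, 0.5, 0.6, 0.5]   # hypothetical smile metric, prior viewers
norm = build_norm(reference)
print(round(compare_to_norm(0.7, norm), 2))  # ≈ 2.45
```

In practice, separate norms would be kept per viewer criterion (country, demographic group, device type), as the abstract notes.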

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons

59.

MENTAL STATE ANALYSIS USING AN APPLICATION PROGRAMMING INTERFACE

      
Application Number US2014051290
Publication Number 2015/023952
Status In Force
Filing Date 2014-08-15
Publication Date 2015-02-19
Owner AFFECTIVA, INC. (USA)
Inventor
  • Pitre, Boisy, G.
  • El Kaliouby, Rana
  • Kashef, Youssef

Abstract

A mobile device is emotionally enabled using an application programming interface (API) in order to infer a user's emotions and make the emotions available for sharing. Images of an individual or individuals are captured and sent through the API. The images are evaluated to determine the individual's mental state. Mental state analysis is output to an app running on the device on which the API resides for further sharing, analysis, or transmission. A software development kit (SDK) can be used to generate the API or to otherwise facilitate the emotional enablement of a mobile device and the apps that run on the device.

IPC Classes

  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)

60.

Biosensor with electrodes and pressure compensation

      
Application Number 14325263
Grant Number 08965479
Status In Force
Filing Date 2014-07-07
First Publication Date 2014-10-30
Grant Date 2015-02-24
Owner Affectiva, Inc. (USA)
Inventor
  • Wilder-Smith, Oliver Orion
  • Picard, Rosalind Wright

Abstract

A biosensor is described which can obtain physiological data from an individual. The biosensor may collect electrodermal activity, skin temperature, and other information. The biosensor may be attached to the body through the use of a garment which may be fastened in multiple locations on the human body. The biosensor has replaceable electrodes which may be interchanged. The electrodes contact the body without having any wires or leads external to the sensor.

IPC Classes

  • A61B 5/04 - Measuring bioelectric signals of the body or parts thereof
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/01 - Measuring temperature of body parts
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body

61.

Personal emotional profile generation

      
Application Number 14328554
Grant Number 10111611
Status In Force
Filing Date 2014-07-10
First Publication Date 2014-10-30
Grant Date 2018-10-30
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • England, Avril

Abstract

The mental state of an individual is obtained in order to generate an emotional profile for the individual. The individual's mental state is derived from an analysis of the individual's facial and physiological information. The emotional profiles of other individuals are correlated with that of the first individual for comparison. Various categories of emotional profiles are defined based upon the correlation. The emotional profile of the individual or group of individuals is rendered for display, used to provide feedback, to recommend activities for the individual, or to provide information about the individual.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons

62.

MENTAL STATE WELL BEING MONITORING

      
Application Number US2014029951
Publication Number 2014/145228
Status In Force
Filing Date 2014-03-15
Publication Date 2014-09-18
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Bender, Daniel, Abraham

Abstract

The mental state of an individual is obtained to determine their well-being status. The mental state is derived from an analysis of facial information and physiological information of an individual. The well-being status of other individuals is correlated to the well-being status of the first individual. The well-being status of the individual or group of individuals is rendered for display. The well-being status of an individual is used to provide feedback and to recommend activities for the individual.

IPC Classes

63.

MENTAL STATE ANALYSIS USING HEART RATE COLLECTION BASED VIDEO IMAGERY

      
Application Number US2014029926
Publication Number 2014/145204
Status In Force
Filing Date 2014-03-15
Publication Date 2014-09-18
Owner AFFECTIVA, INC. (USA)
Inventor
  • Kashef, Youssef
  • El Kaliouby, Rana
  • Osman, Ahmed, Adel
  • Haering, Niels
  • Bhatkar, Viprali

Abstract

Video of one or more people is obtained and analyzed. Heart rate information is determined from the video and the heart rate information is used in mental state analysis. The heart rate information and resulting mental state analysis are correlated to stimuli, such as digital media which is consumed or with which a person interacts. The heart rate information is used to infer mental states. The mental state analysis, based on the heart rate information, can be used to optimize digital media or modify a digital game.

IPC Classes

  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/024 - Measuring pulse rate or heart rate
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)

64.

Mental state data tagging for data collected from multiple sources

      
Application Number 14214704
Grant Number 09646046
Status In Force
Filing Date 2014-03-15
First Publication Date 2014-07-17
Grant Date 2017-05-09
Owner Affectiva, Inc. (USA)
Inventor
  • Sadowsky, Richard Scott
  • El Kaliouby, Rana

Abstract

Mental state data useful for determining mental state information on an individual, such as video of an individual's face, is captured. Additional data that is helpful in determining the mental state information, such as contextual information, is also determined. The data and additional data allow interpretation of individual mental state information. The additional data is tagged to the mental state data, and at least some of the mental state data, along with the tagged data, can be sent to a web service where it is used to produce further mental state information.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • G06F 17/30 - Information retrieval; Database structures therefor
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism

65.

Mental state analysis using heart rate collection based on video imagery

      
Application Number 14214719
Grant Number 09642536
Status In Force
Filing Date 2014-03-15
First Publication Date 2014-07-17
Grant Date 2017-05-09
Owner Affectiva, Inc. (USA)
Inventor
  • Kashef, Youssef
  • El Kaliouby, Rana
  • Osman, Ahmed Adel
  • Haering, Niels
  • Bhatkar, Viprali

Abstract

Video of one or more people is obtained and analyzed. Heart rate information is determined from the video and the heart rate information is used in mental state analysis. The heart rate information and resulting mental state analysis are correlated to stimuli, such as digital media which is consumed or with which a person interacts. The heart rate information is used to infer mental states. The mental state analysis, based on the heart rate information, can be used to optimize digital media or modify a digital game.
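The core signal-processing step, recovering heart rate from a per-frame measurement of the face, can be illustrated with a frequency-domain sketch. This is an assumption-laden toy (a naive DFT on a synthetic signal), not the patented pipeline; a real system would extract the signal from skin-color changes in a face region and filter it.

```python
# Sketch: estimate heart rate from a per-frame signal by finding the
# dominant frequency with a naive discrete Fourier transform.
import math

def dominant_frequency(signal, fps):
    """Return the frequency (Hz) with the largest DFT magnitude."""
    n = len(signal)
    m = sum(signal) / n
    centered = [s - m for s in signal]          # remove the DC component
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        re = sum(c * math.cos(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        im = sum(c * math.sin(2 * math.pi * k * i / n) for i, c in enumerate(centered))
        mag = re * re + im * im
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * fps / n

fps = 30
# Synthetic 1.0 Hz (60 bpm) pulse signal sampled at 30 fps for 3 seconds.
signal = [math.sin(2 * math.pi * 1.0 * t / fps) for t in range(90)]
bpm = dominant_frequency(signal, fps) * 60
print(bpm)  # 60.0
```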

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/024 - Measuring pulse rate or heart rate
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb

66.

Mental state analysis using blink rate

      
Application Number 14214918
Grant Number 09723992
Status In Force
Filing Date 2014-03-15
First Publication Date 2014-07-17
Grant Date 2017-08-08
Owner Affectiva, Inc. (USA)
Inventor
  • Senechal, Thibaud
  • El Kaliouby, Rana
  • Haering, Niels

Abstract

Mental state analysis is performed by obtaining video of an individual as the individual interacts with a computer, either by performing various operations or by consuming a media presentation. The video is analyzed to determine eye-blink information on the individual, such as eye-blink rate or eye-blink duration. A mental state of the individual is then inferred based on the eye-blink information. The blink-rate information and associated mental states can be used to modify an advertisement, a media presentation, or a digital game.
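The blink-rate determination can be sketched as threshold-crossing on a per-frame eye-openness measure (for example, an eye aspect ratio). The threshold and frame rate here are hypothetical choices for illustration, not values from the patent.

```python
# Sketch: count blinks from a per-frame eye-openness signal and derive
# a blink rate; a blink is a contiguous run of frames below the threshold.

def count_blinks(openness, threshold=0.2):
    blinks, closed = 0, False
    for value in openness:
        if value < threshold and not closed:
            blinks += 1          # entered a closed-eye run
            closed = True
        elif value >= threshold:
            closed = False       # eye reopened
    return blinks

# 10 seconds of frames at 30 fps with three dips below the threshold.
openness = ([0.3] * 100 + [0.1] * 3 + [0.3] * 100 +
            [0.05] * 4 + [0.3] * 90 + [0.1] * 3)
blinks = count_blinks(openness)
rate_per_minute = blinks * 60 / (len(openness) / 30)
print(blinks, rate_per_minute)  # 3 18.0
```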

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb

67.

OPTIMIZING MEDIA BASED ON MENTAL STATE ANALYSIS

      
Application Number US2013067891
Publication Number 2014/105266
Status In Force
Filing Date 2013-10-31
Publication Date 2014-07-03
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Burke, Melissa Sue
  • Dreisch, Andrew Edwin
  • Turcot, Panu James
  • Kodra, Evan

Abstract

Mental state data is collected from a group of people as they view a media presentation, such as an advertisement, a television show, or a movie. The mental state data is analyzed to produce mental state information, such as inferred mental states, facial expressions, or valence. The mental state information is used to automatically optimize the previously viewed media presentation. The optimization may change various aspects of the media presentation including the length of different portions of the media presentation, the overall length of the media presentation, character selection, music selection, advertisement placement, and brand reveal time.

IPC Classes

  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)

68.

COLLECTION OF AFFECT DATA FROM MULTIPLE MOBILE DEVICES

      
Application Number US2013078380
Publication Number 2014/106216
Status In Force
Filing Date 2013-12-30
Publication Date 2014-07-03
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Bender, Daniel, Abraham
  • Kodra, Evan
  • Nowak, Oliver, Ernst
  • Sadowsky, Richard, Scott
  • Senechal, Thibaud
  • Turcot, Panu, James

Abstract

A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and mental states inferred from these performances. Multiple devices, including mobile devices, can observe and record or transmit a user's mental state data. The mental state data collected from the multiple devices can be used to analyze the mental states of the user. The mental state data can be in the form of facial expressions, electrodermal activity, movements, or other detectable manifestations. Multiple cameras on the multiple devices can be usefully employed to collect facial data. An output can be rendered based on an analysis of the mental state data.

IPC Classes

  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons

69.

SPORADIC COLLECTION OF MOBILE AFFECT DATA

      
Application Number US2013066991
Publication Number 2014/066871
Status In Force
Filing Date 2013-10-26
Publication Date 2014-05-01
Owner AFFECTIVA, INC. (USA)
Inventor
  • Bender, Daniel
  • El Kaliouby, Rana
  • Kodra, Evan
  • Nowak, Oliver, Ernst
  • Sadowsky, Richard, Scott

Abstract

A user may react to an interaction by exhibiting a mental state. A camera or other monitoring device can be used to capture one or more manifestations of the user's mental state, such as facial expressions, electrodermal activity, or movements. However, there may be conditions where the monitoring device is not able to detect the manifestation continually. Thus, various capabilities and implementations are described where the mental state data is collected on an intermittent basis, analyzed, and an output rendered based on the analysis of the mental state data.

IPC Classes

  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)

70.

Collection of affect data from multiple mobile devices

      
Application Number 14144413
Grant Number 09934425
Status In Force
Filing Date 2013-12-30
First Publication Date 2014-04-24
Grant Date 2018-04-03
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Bender, Daniel Abraham
  • Kodra, Evan
  • Nowak, Oliver Ernst
  • Sadowsky, Richard Scott
  • Senechal, Thibaud
  • Turcot, Panu James

Abstract

A user interacts with various pieces of technology to perform numerous tasks and activities. Reactions can be observed and mental states inferred from these performances. Multiple devices, including mobile devices, can observe and record or transmit a user's mental state data. The mental state data collected from the multiple devices can be used to analyze the mental states of the user. The mental state data can be in the form of facial expressions, electrodermal activity, movements, or other detectable manifestations. Multiple cameras on the multiple devices can be usefully employed to collect facial data. An output can be rendered based on an analysis of the mental state data.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism

71.

Sporadic collection of mobile affect data

      
Application Number 14064136
Grant Number 09204836
Status In Force
Filing Date 2013-10-26
First Publication Date 2014-02-20
Grant Date 2015-12-08
Owner Affectiva, Inc. (USA)
Inventor
  • Bender, Daniel
  • El Kaliouby, Rana
  • Kodra, Evan
  • Nowak, Oliver Ernst
  • Sadowsky, Richard Scott

Abstract

A user may react to an interaction by exhibiting a mental state. A camera or other monitoring device can be used to capture one or more manifestations of the user's mental state, such as facial expressions, electrodermal activity, or movements. However, there may be conditions where the monitoring device is not able to detect the manifestation continually. Thus, various capabilities and implementations are described where the mental state data is collected on an intermittent basis, analyzed, and an output rendered based on the analysis of the mental state data.

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/00 - Measuring for diagnostic purposes ; Identification of persons
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising

72.

Facial analysis to detect asymmetric expressions

      
Application Number 14031907
Grant Number 10108852
Status In Force
Filing Date 2013-09-19
First Publication Date 2014-01-16
Grant Date 2018-10-23
Owner Affectiva, Inc. (USA)
Inventor
  • Senechal, Thibaud
  • El Kaliouby, Rana

Abstract

A system and method for facial analysis to detect asymmetric expressions is disclosed. A series of facial images is collected, and an image from the series is evaluated with a classifier. The image is then flipped to create a flipped image, and the flipped image is evaluated with the same classifier. The results of evaluating the original image and the flipped image are compared. Asymmetric features such as a wink, a raised eyebrow, a smirk, or a wince are identified. These asymmetric features are associated with mental states such as skepticism, contempt, condescension, repugnance, disgust, disbelief, cynicism, pessimism, doubt, suspicion, and distrust.
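The flip-and-compare idea is concrete enough to sketch: run a classifier on an image and on its horizontal mirror, and treat the score difference as an asymmetry cue (a symmetric face scores the same either way). The "classifier" below is a deliberately trivial stand-in, not Affectiva's model.

```python
# Sketch: score an image and its left-right mirror with the same
# classifier; a large score difference signals facial asymmetry.

def flip_horizontal(image):
    """Mirror a 2D grayscale image (list of rows) left-to-right."""
    return [row[::-1] for row in image]

def smirk_score(image):
    # Hypothetical stand-in classifier: responds to intensity on the
    # right half of the image.
    half = len(image[0]) // 2
    return sum(sum(row[half:]) for row in image)

def asymmetry(image):
    return abs(smirk_score(image) - smirk_score(flip_horizontal(image)))

symmetric = [[1, 2, 2, 1], [0, 3, 3, 0]]
lopsided = [[1, 1, 2, 4], [0, 0, 3, 5]]
print(asymmetry(symmetric), asymmetry(lopsided))  # 0 12
```

The same comparison works with any classifier whose output is comparable across the two orientations.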

IPC Classes

  • G06K 9/00 - Methods or arrangements for reading or recognising printed or written characters or for recognising patterns, e.g. fingerprints
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00;data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q;healthcare informatics G16H)
  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • A61B 5/0205 - Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/053 - Measuring electrical impedance or conductance of a portion of the body
  • A61B 5/08 - Measuring devices for evaluating the respiratory organs
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb

73.

AFFECT BASED EVALUATION OF ADVERTISEMENT EFFECTIVENESS

      
Application Number US2012068496
Publication Number 2013/086357
Status In Force
Filing Date 2012-12-07
Publication Date 2013-06-13
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • England, Avril
  • Picard, Rosalind, Wright
  • Sapiro, Brent
  • Zeng, Zhihong

Abstract

Analysis of mental states is provided in order to enable data analysis pertaining to affect-based evaluation of advertisement effectiveness. Advertisements can have various objectives, including entertainment, education, awareness, persuasion, startling, or a drive to action. Data, including facial information, is captured for an individual viewer or group of viewers. Physiological information may also be gathered for the viewer or group of viewers. In some embodiments, demographics information is collected and used as a criterion for rendering the mental states of the viewers in a graphical format. In some embodiments, data captured from an individual viewer or group of viewers is used to optimize an advertisement.

IPC Classes

  • G06Q 30/02 - Marketing; Price estimation or determination; Fundraising
  • G06T 7/00 - Image analysis

74.

AFFDEX

      
Serial Number 85891297
Status Registered
Filing Date 2013-03-31
Registration Date 2013-11-12
Owner Affectiva, Inc.
NICE Classes 35 - Advertising and business services

Goods & Services

Advertising and business management consultancy; Advertising and marketing; Advertising and marketing consultancy; Advertising consultation; Advice in the field of business management and marketing; Advice on the analysis of consumer buying habits and needs provided with the help of sensory, quality and quantity-related data; Analysis of advertising response; Analysis of market research data and statistics; Audience rating determination for radio and television broadcasts; Brand imagery consulting services; Business advice and analysis of markets; Business consultancy; Business consultation; Business consultation and management regarding marketing activities and launching of new products; Business marketing consulting services; Business marketing services; Business research; Collection of market research information; Conducting business and market research surveys; Conducting marketing studies; Consultancy and advisory services in the field of business strategy; Consumer marketing research and consulting related thereto; Consumer research; Development of marketing strategies and concepts; Market analysis; Market analysis and research services; Market assessment services; Market opinion polling studies; Market reports and studies; Market research; Market research and business analyses; Market research and market intelligence services; Market research consultation; Market research services; Market research studies; Marketing analysis services; Marketing consulting; Marketing research services; Marketing services, namely, conducting consumer tracking behavior research and consumer trend analysis; Marketing services, namely, consumer marketing research; Personality testing for business purposes; Providing consumer information in the field of demonstrative data from physiological expressions and signals; Providing information in the field of marketing; Providing statistical information; Provision of market research information; Provision of marketing reports; Psychological testing for the selection of personnel; Public opinion polling; Public opinion surveys; Statistical analysis and reporting services for business purposes; Statistical evaluations of marketing data

75.

AFFDEX

      
Serial Number 85891304
Status Registered
Filing Date 2013-03-31
Registration Date 2013-11-12
Owner Affectiva, Inc.
NICE Classes 42 - Scientific, technological and industrial services, research and design

Goods & Services

Compiling data for research purposes in the field of medical science and medical consultancy; Computer hardware development; Computer software development; Computer software development in the field of physiological expressions and signals; Design and development of computer hardware; Design and development of computer hardware and software; Design and development of computer software; Design and development of software and hardware for physiological expressions and signals; Design and testing for new product development; Industrial research in the field of physiological expressions and signals; Innovation consulting services, namely, advising others in the areas of product development; Laboratory research in the field of physiological expressions and signals; Medical and scientific research in the field of medical imaging; Medical and scientific research in the field of physiological expressions and signals; Medical and scientific research information in the field of physiological expressions and signals; Product development consultation; Product research; Product research and development; Research and development and consultation related thereto in the field of physiological expressions and signals; Research and development in the field of physiological expressions and signals; Research and development of new products; Research and development of new products for others; Research and development of technology in the field of physiological expressions and signals; Research in the field of physiological expressions and signals; Scientific and technological services, namely, research and design in the field of physiological expressions and signals; Scientific research; Scientific research and development; Scientific research consulting in the field of physiological expressions and signals; Scientific research in the field of physiological expressions and signals; Scientific research services for others in the field of sensory perceptions; Scientific study and research in the fields of physiological expressions and signals; Software development consulting in the field of physiological expressions and signals

76.

ANALYSIS OF PHYSIOLOGY BASED ON ELECTRODERMAL ACTIVITY

      
Application Number US2012056772
Publication Number 2013/044183
Status In Force
Filing Date 2012-09-22
Publication Date 2013-03-28
Owner AFFECTIVA, INC. (USA)
Inventor
  • Picard, Rosalind Wright
  • El Kaliouby, Rana
  • Sadowsky, Richard Scott
  • Wilder-Smith, Oliver Orion

Abstract

Disclosed are computer-implemented techniques for analyzing physiology based on electrodermal activity. Initially, an individual's autonomic data is captured into a computer system. The autonomic data provides information for evaluating the physiology of the individual. The autonomic data is captured through at least one sensor. Analysis, based on the autonomic data captured on the individual, is received from a web service. An output related to physiology is rendered based on the analysis which was received.

IPC Classes

  • A61B 5/04 - Measuring bioelectric signals of the body or parts thereof
  • A61B 5/01 - Measuring temperature of body parts
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/0452 - Detecting specific parameters of the electrocardiograph cycle

77.

CLINICAL ANALYSIS USING ELECTRODERMAL ACTIVITY

      
Application Number US2012056774
Publication Number 2013/044185
Status In Force
Filing Date 2012-09-22
Publication Date 2013-03-28
Owner AFFECTIVA, INC. (USA)
Inventor
  • Picard, Rosalind, Wright
  • El Kaliouby, Rana
  • Sadowsky, Richard, Scott
  • Wilder-Smith, Oliver, Orion

Abstract

Computer-implemented techniques for clinical analysis using electrodermal activity are disclosed. Initially, an individual's electrodermal activity data is captured into a computer system. The electrodermal activity data is captured through a sensor. This electrodermal activity data provides information on the physiology of the individual. Analysis is received from a web service, wherein the analysis is based on the electrodermal activity data captured on the individual. An output, related to physiology, is rendered based on the analysis which was received.

IPC Classes

78.

Method for biosensor usage with pressure compensation

      
Application Number 13674325
Grant Number 08396530
Status In Force
Filing Date 2012-11-12
First Publication Date 2013-03-12
Grant Date 2013-03-12
Owner AFFECTIVA, INC. (USA)
Inventor
  • Wilder-Smith, Oliver Orion
  • Picard, Rosalind Wright
  • Zhang, Tao

Abstract

A biosensor is described which can obtain physiological data from an individual. The biosensor may collect electrodermal activity, skin temperature, and other information. The biosensor may be attached to the body through the use of a garment which may be fastened in multiple locations on the human body. The biosensor includes compensation for electrodermal activity measurements based on the amount of pressure applied to the electrodes that are in contact with the skin. As pressure is increased, the electrodermal activity values typically increase. By compensating for the influence that pressure changes have on electrodermal activity, more accurate analysis of physiology, and therefore mental state analysis, may be performed.

IPC Classes

  • A61B 5/04 - Measuring bioelectric signals of the body or parts thereof
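
The abstract above describes compensating EDA readings for electrode pressure but gives no formula. A minimal sketch, assuming a hypothetical linear pressure-sensitivity coefficient `k` and baseline contact pressure (both invented for illustration, not taken from the patent):

```python
def compensate_eda(eda_microsiemens, pressure_kpa, baseline_kpa=5.0, k=0.08):
    """Remove the pressure-induced inflation from a raw EDA reading.

    Assumes (hypothetically) that skin conductance grows linearly with
    electrode pressure above a baseline; readings taken at higher
    pressure are scaled back down toward the baseline value.
    """
    correction = 1.0 + k * (pressure_kpa - baseline_kpa)
    return eda_microsiemens / correction

# At baseline pressure the reading passes through unchanged;
# at a higher pressure the same raw value is deflated before analysis.
at_baseline = compensate_eda(4.2, pressure_kpa=5.0)
at_high_pressure = compensate_eda(4.2, pressure_kpa=10.0)
```

A real device would calibrate `k` per electrode; the linear model is simply the most basic choice consistent with the observation that EDA values typically increase with pressure.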

79.

VIDEO RECOMMENDATION BASED ON AFFECT

      
Application Number US2012026805
Publication Number 2012/158234
Status In Force
Filing Date 2012-02-27
Publication Date 2012-11-22
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Sadowsky, Richard Scott
  • Wilder-Smith, Oliver Orion
  • Picard, Rosalind Wright
  • Bahgat, May

Abstract

Analysis of mental states is provided to enable data analysis pertaining to video recommendation based on affect. Video response may be evaluated based on viewing and sampling various videos. Data is captured for viewers of a video where the data includes facial information and/or physiological data. Facial and physiological information may be gathered for a group of viewers. In some embodiments, demographics information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.

IPC Classes

80.

Video recommendation based on affect

      
Application Number 13406068
Grant Number 09106958
Status In Force
Filing Date 2012-02-27
First Publication Date 2012-08-30
Grant Date 2015-08-11
Owner Affectiva, Inc. (USA)
Inventor
  • El Kaliouby, Rana
  • Sadowsky, Richard Scott
  • Picard, Rosalind Wright
  • Wilder-Smith, Oliver Orion
  • Bahgat, May

Abstract

Analysis of mental states is provided to enable data analysis pertaining to video recommendation based on affect. Video response may be evaluated based on viewing and sampling various videos. Data is captured for viewers of a video where the data includes facial information and/or physiological data. Facial and physiological information may be gathered for a group of viewers. In some embodiments, demographics information is collected and used as a criterion for visualization of affect responses to videos. In some embodiments, data captured from an individual viewer or group of viewers is used to rank videos.

IPC Classes

  • H04H 60/33 - Arrangements for monitoring the users' behaviour or opinions
  • H04N 21/466 - Learning process for intelligent management, e.g. learning user preferences for recommending movies
  • H04N 21/25 - Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication or learning user preferences for recommending movies
  • H04N 21/4223 - Cameras
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06Q 30/06 - Buying, selling or leasing transactions
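
The two records above use viewer affect data to rank videos, though no concrete ranking rule is published in these abstracts. One plausible sketch (the video IDs and scores are invented), ordering videos by the mean affect score captured from their viewers:

```python
from statistics import mean

def rank_videos(responses):
    """Return video IDs ordered best-first by mean viewer affect score.

    responses maps a video ID to the list of per-viewer affect scores
    (e.g. smile intensity in [0, 1]) captured while watching it.
    """
    return sorted(responses, key=lambda vid: mean(responses[vid]), reverse=True)

responses = {
    "ad_a": [0.9, 0.7, 0.8],   # mean 0.8
    "ad_b": [0.2, 0.4, 0.3],   # mean 0.3
    "ad_c": [0.6, 0.5, 0.7],   # mean 0.6
}
ranking = rank_videos(responses)   # "ad_a" first, "ad_b" last
```

A production system would likely weight by demographics and normalize per-viewer baselines, as the abstract's mention of demographic criteria suggests; the mean is only the simplest aggregate.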

81.

Using affect within a gaming context

      
Application Number 13366648
Grant Number 09247903
Status In Force
Filing Date 2012-02-06
First Publication Date 2012-05-31
Grant Date 2016-02-02
Owner Affectiva, Inc. (USA)
Inventor
  • Bender, Daniel
  • El Kaliouby, Rana
  • Picard, Rosalind Wright
  • Sadowsky, Richard Scott
  • Turcot, Panu James
  • Wilder-Smith, Oliver Orion

Abstract

Mental state data is collected as a person interacts with a game machine. Analysis is performed on this data and mental state information and affect are shared across a social network. The affect of a person can be represented to the social network or gaming community in the form of an avatar. Recommendations can be based on the affect of the person. Mental states can be analyzed by web services which may, in turn, modify the game.

IPC Classes

  • G06F 17/00 - Digital computing or data processing equipment or methods, specially adapted for specific functions
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • G06F 19/00 - Digital computing or data processing equipment or methods, specially adapted for specific applications (specially adapted for specific functions G06F 17/00; data processing systems or methods specially adapted for administrative, commercial, financial, managerial, supervisory or forecasting purposes G06Q; healthcare informatics G16H)

82.

SHARING AFFECT ACROSS A SOCIAL NETWORK

      
Application Number US2011060900
Publication Number 2012/068193
Status In Force
Filing Date 2011-11-16
Publication Date 2012-05-24
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Sadowsky, Richard, Scott
  • Wilder-Smith, Oliver, Orion

Abstract

Mental state information is collected from an individual through video capture or capture of sensor information. The sensor information can be of electrodermal activity, accelerometer readings, skin temperature, or other characteristics. The mental state information may be collected over a period of time and analyzed to determine a mood of the individual. An individual may share their mental state information across a social network. The individual may be asked to elect whether to share their mental state information before it is shared.

IPC Classes

  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism

83.

MEASURING AFFECTIVE DATA FOR WEB-ENABLED APPLICATIONS

      
Application Number US2011054125
Publication Number 2012/044883
Status In Force
Filing Date 2011-09-30
Publication Date 2012-04-05
Owner AFFECTIVA, INC. (USA)
Inventor
  • El Kaliouby, Rana
  • Picard, Rosalind Wright
  • Sadowsky, Richard Scott

Abstract

Mental state information is collected as a person interacts with a rendering such as a website or video. The mental state information is collected through video capture or capture of sensor information. The sensor information can be of electrodermal activity, accelerometer readings, skin temperature, or other characteristics. The mental state information is uploaded to a server and aggregated with other people's information so that collective mental states are associated with the rendering. The aggregated mental state information is displayed through a visual representation such as an avatar.

IPC Classes

  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
  • G06Q 30/00 - Commerce
  • G06T 7/40 - Analysis of texture
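
The aggregation step in the abstract above, where one viewer's uploaded mental state data is combined with other people's for the same rendering, can be sketched as follows (the tuple data model is an assumption for illustration, not an Affectiva format):

```python
from collections import defaultdict

def aggregate_by_rendering(uploads):
    """Collapse per-viewer samples into one collective score per rendering.

    uploads is an iterable of (rendering_id, viewer_id, engagement)
    tuples; returns rendering_id -> mean engagement over all viewers.
    """
    totals = defaultdict(lambda: [0.0, 0])   # rendering_id -> [sum, count]
    for rendering_id, _viewer, engagement in uploads:
        totals[rendering_id][0] += engagement
        totals[rendering_id][1] += 1
    return {rid: total / count for rid, (total, count) in totals.items()}

collective = aggregate_by_rendering([
    ("site_home", "viewer1", 0.5),
    ("site_home", "viewer2", 0.7),
    ("video_x",   "viewer1", 0.2),
])
```

The resulting per-rendering score is what a visual representation such as an avatar would then display; any per-viewer weighting or time alignment is outside this sketch.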

84.

MENTAL STATE ANALYSIS USING WEB SERVICES

      
Application Number US2011039282
Publication Number 2011/156272
Status In Force
Filing Date 2011-06-06
Publication Date 2011-12-15
Owner AFFECTIVA, INC. (USA)
Inventor
  • Sadowsky, Richard, Scott
  • El Kaliouby, Rana
  • Picard, Rosalind, Wright
  • Wilder-Smith, Oliver, Orion
  • Turcot, Panu, James
  • Zeng, Zhihong

Abstract

Analysis of mental states is provided using web services to enable data analysis. Data is captured for an individual where the data includes facial information and physiological information. Analysis is performed on a web service and the analysis is received. The mental states of other people may be correlated to the mental state for the individual. Other sources of information may be aggregated where the information may be used to analyze the mental state of the individual. Analysis of the mental state of the individual or group of individuals is rendered for display.

IPC Classes

  • G06Q 50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism

85.

Biosensor module with leadless contacts

      
Application Number 12905560
Grant Number 08774893
Status In Force
Filing Date 2010-10-15
First Publication Date 2011-04-21
Grant Date 2014-07-08
Owner Affectiva, Inc. (USA)
Inventor
  • Wilder-Smith, Oliver Orion
  • Picard, Rosalind Wright

Abstract

A biosensor is described which can obtain physiological data from an individual. The biosensor may collect electrodermal activity, skin temperature, and other information. The biosensor may be attached to the body through the use of a garment which may be fastened in multiple locations on the human body. The biosensor has replaceable electrodes which may be interchanged. The electrodes contact the body without having any wires or leads external to the sensor.

IPC Classes

  • A61B 5/04 - Measuring bioelectric signals of the body or parts thereof
  • A61B 5/024 - Measuring pulse rate or heart rate
  • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
  • A61B 5/16 - Devices for psychotechnics; Testing reaction times
  • A61B 5/0245 - Measuring pulse rate or heart rate using sensing means generating electric signals
  • A61B 5/11 - Measuring movement of the entire body or parts thereof, e.g. head or hand tremor or mobility of a limb
  • A61B 5/01 - Measuring temperature of body parts

86.

Biosensor with pressure compensation

      
Application Number 12905636
Grant Number 08311605
Status In Force
Filing Date 2010-10-15
First Publication Date 2011-04-21
Grant Date 2012-11-13
Owner Affectiva, Inc. (USA)
Inventor
  • Wilder-Smith, Oliver Orion
  • Picard, Rosalind Wright
  • Zhang, Tao

Abstract

A biosensor is described which can obtain physiological data from an individual. The biosensor may collect electrodermal activity, skin temperature, and other information. The biosensor may be attached to the body through the use of a garment which may be fastened in multiple locations on the human body. The biosensor includes compensation for electrodermal activity measurements based on the amount of pressure applied to the electrodes that are in contact with the skin. As pressure is increased, the electrodermal activity values typically increase. By compensating for the influence that pressure changes have on electrodermal activity, more accurate analysis of physiology, and therefore mental state analysis, may be performed.

IPC Classes

  • A61B 5/04 - Measuring bioelectric signals of the body or parts thereof

87.

AFFECTIVA

      
Serial Number 77776833
Status Registered
Filing Date 2009-07-08
Registration Date 2011-02-08
Owner Affectiva, Inc.
NICE Classes 35 - Advertising and business services

Goods & Services

Business management consulting and business marketing consulting services featuring demonstrative data from physiological expressions and signals

88.

AFFECTIVA

      
Serial Number 77776854
Status Registered
Filing Date 2009-07-08
Registration Date 2011-02-08
Owner Affectiva, Inc. ()
NICE Classes 41 - Education, entertainment, sporting and cultural services

Goods & Services

Education and training services, namely, providing instructional, tutorial and training classes and seminars in the field of measuring, monitoring and evaluating demonstrative data derived from physiological expressions and signals

89.

AFFECTIVA

      
Serial Number 77776880
Status Registered
Filing Date 2009-07-08
Registration Date 2010-10-12
Owner Affectiva, Inc.
NICE Classes 42 - Scientific, technological and industrial services, research and design

Goods & Services

Scientific research for others in the field of physiological expressions and signals; Industrial research for others in the field of physiological expressions and signals; Product research and development for others featuring sensor apparatus for measuring demonstrative physiological expressions and signals; Development of computer software for others for use in measuring, monitoring, and evaluating demonstrative data of physiological expressions and signals