Artificial Emotional Intelligence

September 28, 2017 | Author: Anmol Garg | Category: Artificial Intelligence

Anmol Garg Rahul Singhal Ved Ratn Dixit Astha Agarwal

Abstract

❖ Introduction
❖ Fields Under Artificial Emotional Intelligence
❖ Basic Models for Emotion Recognition
❖ Facial Emotion Recognition – Problems and Solutions
❖ Emotion Synthesis – Through Speech or Expressions
❖ Applications and Future Prospects
❖ References

Introduction

Starting with some definitions...

❖ Emotion: A natural, instinctive state of mind deriving from one's circumstances, mood, or relationships with others.

❖ Emotional Intelligence: The ability to identify, assess, and control the emotions of oneself, of others, and of groups.

❖ Artificial Intelligence: The study and design of intelligent agents that perceive their environment and take actions accordingly to achieve certain objectives.

More Definitions

We focus on Artificial Emotional Intelligence: Artificial Intelligence + Emotional Intelligence.

❖ Affective Computing: The study and development of systems and devices that can recognize, interpret, process, and simulate human emotions.

Areas of Affective Computing
➢ Detecting and recognizing emotions
➢ Synthesis of emotions by machines

Why Model Emotions?

❖ A main goal of AI is to produce intelligent systems that act and reason like human beings.

❖ Emotions are an important aspect of human rational thinking.

❖ We therefore need an emotional component in intelligent systems.

❖ Examples can be seen in the fields of robotics, diagnosis, vision, learning, business, etc. Modeling emotions is system-specific and might not always be required (for example, in an air traffic control system).

Modeling Emotions: Some Basic Models

❖ Using HMMs: Probabilities are employed to change from one emotional state to another (see the sketch just below).
➢ Input: a set of observations
➢ Output: a set of probabilities for each state
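As a concrete illustration, here is a minimal sketch of such an HMM: the forward algorithm turns a sequence of observations into a probability for each emotional state. The states, transition matrix A, emission matrix B, and observation symbols are all illustrative assumptions, not values from the slides.

```python
import numpy as np

# Minimal HMM sketch: the forward algorithm turns a sequence of
# observations into a probability for each emotional state. All
# numbers below are illustrative assumptions, not from the slides.
states = ["happy", "neutral", "sad"]
A = np.array([[0.7, 0.2, 0.1],    # state transition probabilities
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
B = np.array([[0.6, 0.3, 0.1],    # emission probabilities for three
              [0.3, 0.4, 0.3],    # observation symbols
              [0.1, 0.3, 0.6]])   # (0=smile, 1=flat, 2=frown)
pi = np.array([1.0, 1.0, 1.0]) / 3  # uniform initial distribution

def forward(observations):
    """Returns P(current state | observations so far) for each state."""
    alpha = pi * B[:, observations[0]]
    alpha /= alpha.sum()
    for obs in observations[1:]:
        alpha = (alpha @ A) * B[:, obs]
        alpha /= alpha.sum()      # normalize into a probability vector
    return dict(zip(states, alpha))

print(forward([0, 0, 1, 2, 2]))   # "sad" should dominate by the end
```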

❖ Rule-Based Models: Use signal processing and pattern recognition systems to transform a human signal into a script that can be processed.

Before applying affective computing techniques:
➢ Identify the emotions relevant to the application
➢ Recognize, express, and model those emotions
➢ Strategize how to respond to and use those emotions

Recognizing Emotions

A variety of inputs can be used to recognize emotions:

❖ Visual information processing (facial expressions)
❖ Body movement analysis (body posture and gestures)
❖ Lexical analysis of text input
❖ Voice signals
❖ Keystroke patterns and other input devices
❖ Physiological measurements (heart rate, body temperature)

Recognizing Emotions from Facial Expressions

❖ Six primary emotions (Anger, Disgust, Fear, Happiness, Sadness, Surprise), plus a Neutral state.

Source: http://www.kasrl.org/jaffe.html

Wait! What if?

Source: http://abbastudios.blogspot.in/

Emotion Recognition Techniques

❖ Technique 1:
➢ Feature extraction: Principal Component Analysis (PCA)
➢ Emotion recognition: PCA reconstruction

❖ Technique 2:
➢ Feature extraction: spatial feature analysis using the Active Shape Model (ASM)
➢ Emotion recognition: Radial Basis Function Network (RBFN)

Principal Component Analysis

❖ A tool to find an orthogonal coordinate system (basis) of smaller dimension than the current one, such that the correlation between different axes is minimized.

❖ Goal: reduce the dimensionality of the data while retaining as much as possible of the variation present in the original dataset.

❖ Why PCA? Given facial expression features for a particular emotion, we need to find basis feature vectors so that any input image can be represented as a combination of these bases. The dimension of the basis should be practical for computation and analysis.

❖ These basis vectors are referred to as eigenfaces in the facial expression analysis field.

PCA – Feature Extraction (figure slides 1/3–3/3)
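The extraction step shown in those figure slides can be sketched in a few lines of NumPy. This is a minimal illustration under assumed shapes (N flattened training images of dimension D), not the authors' exact implementation:

```python
import numpy as np

# Minimal eigenface sketch under assumed shapes (N flattened training
# images of dimension D); not the authors' exact implementation.
def compute_eigenfaces(images, k):
    """images: (N, D) matrix of flattened faces. Returns the mean face
    and the top-k principal components ("eigenfaces")."""
    mean_face = images.mean(axis=0)
    centered = images - mean_face
    # SVD of the centered data: rows of Vt are the principal directions.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return mean_face, Vt[:k]                    # shapes (D,), (k, D)

def project(image, mean_face, eigenfaces):
    """Coordinates of an image in the eigenface basis (the feature vector)."""
    return eigenfaces @ (image - mean_face)

# Toy usage: random data standing in for 50 images of 64x64 pixels.
rng = np.random.default_rng(0)
faces = rng.normal(size=(50, 64 * 64))
mean_face, eigfaces = compute_eigenfaces(faces, k=10)
features = project(faces[0], mean_face, eigfaces)   # 10-D feature vector
```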

Emotion Recognition – PCA Reconstruction

❖ Use the training data to compute the eigenspace of each class.

❖ For a test image, project it onto the eigenspace of each class independently, then derive a reconstructed image from each eigenspace.

❖ Measure the similarity (mean squared error) between the original image and each reconstructed image; the class with the lowest error gives the predicted emotion.
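A sketch of this decision rule, reusing compute_eigenfaces and the per-class eigenspaces from the previous snippet; the MSE criterion follows the slide, while the function names and structure are illustrative assumptions:

```python
import numpy as np

# Sketch of the reconstruction-based classifier described above.
def reconstruct(image, mean_face, eigenfaces):
    """Project an image into a class eigenspace and map it back."""
    coeffs = eigenfaces @ (image - mean_face)
    return mean_face + eigenfaces.T @ coeffs

def classify(image, class_models):
    """class_models: {emotion: (mean_face, eigenfaces)}, built per class
    with compute_eigenfaces. Returns the emotion whose eigenspace
    reconstructs the image with the lowest mean squared error."""
    errors = {
        emotion: np.mean((image - reconstruct(image, mean, E)) ** 2)
        for emotion, (mean, E) in class_models.items()
    }
    return min(errors, key=errors.get)
```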

Active Shape Model

❖ Aims to automatically locate landmark points that define the shape of any statistically modeled object in an image.

❖ Landmark points: points of interest that lie along the shape boundaries of facial features such as the eyes, lips, nose, mouth, and eyebrows.

Training ASM

❖ Build a statistical facial model from a training set of images with manually annotated landmarks.

❖ Generate 2D profiles of the grey-level intensities in the region around each landmark point.

Source: [7]

ASM Testing

❖ The Viola-Jones face detector [10] is used to locate the face in an image (details skipped; not the focus here).

❖ Until convergence:
➢ The mean face is scaled, rotated, and translated using a similarity transform to roughly fit on top of the face in the test image (a sketch of this step follows below).
➢ Landmarks are repeatedly moved to the locations whose profiles best match the mean profile for that landmark.

❖ The vector of coordinates of the finally obtained landmarks is the input to the RBFN.
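A minimal sketch of the similarity-transform step (orthogonal Procrustes with uniform scale). The landmark arrays and the least-squares fit are assumptions for illustration, not the paper's exact procedure:

```python
import numpy as np

# Align the mean shape's landmarks onto landmarks found in the test
# image: the least-squares similarity transform (scale + rotation +
# translation), computed via orthogonal Procrustes analysis.
def similarity_align(mean_shape, target):
    """mean_shape, target: (L, 2) arrays of landmark coordinates.
    Returns mean_shape scaled, rotated, and translated onto target."""
    mu_s, mu_t = mean_shape.mean(axis=0), target.mean(axis=0)
    S, T = mean_shape - mu_s, target - mu_t       # center both shapes
    U, sigma, Vt = np.linalg.svd(S.T @ T)
    d = np.sign(np.linalg.det(U @ Vt))            # -1 if a reflection
    D = np.diag([1.0, d])                         # must be undone
    R = U @ D @ Vt                                # optimal rotation
    scale = (sigma * np.diag(D)).sum() / (S ** 2).sum()
    return scale * S @ R + mu_t                   # aligned landmarks
```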

Radial Basis Function Network

❖ Radial Basis Function (RBF): a real-valued function whose value depends only on the distance from a center, e.g. Gaussian functions (center = mean).

source: http://teaching.sociology.ul.ie

source: http://www2.math.umd.edu

source: http://www.cs.bham.ac.uk/~jxb/INC/l15.pdf
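For reference, the Gaussian RBF named above can be written in one line; argument names here are illustrative:

```python
import numpy as np

# Gaussian RBF: activation depends only on the distance of x from
# the center c, falling off with width sigma (illustrative sketch).
def gaussian_rbf(x, c, sigma):
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))
```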

RBFN Architecture

❖ A three-layered network.

❖ Classification works by measuring the input's similarity to a set of prototypes, which implement a set of radial basis functions.

❖ Prototype(s): representative neuron(s) of each class.
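A sketch of the forward pass through such a three-layer network. The shapes (H hidden prototypes, C emotion classes) and the Gaussian activations are assumptions consistent with the slides, not the authors' exact architecture:

```python
import numpy as np

# Sketch of the three-layer RBFN forward pass: input -> Gaussian RBF
# hidden layer (one prototype per hidden unit) -> linear output layer.
def rbfn_forward(x, centers, sigmas, W):
    """x: (D,) feature vector; centers: (H, D) prototypes; sigmas: (H,)
    widths; W: (H, C) output weights. Returns C class activations."""
    d2 = np.sum((centers - x) ** 2, axis=1)   # squared distance to each prototype
    hidden = np.exp(-d2 / (2 * sigmas ** 2))  # closeness-based activations
    return hidden @ W                         # linear output layer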

RBFN Training & Characteristics

Training has two stages (implementation-specific details later):

❖ The parameters of the basis functions (hidden units) are determined using unsupervised learning.

❖ The second (output) weight layer is then calculated, creating a linear mapping from the hidden-layer activations to the target output patterns.

Why RBFN?

Classification of an image into an emotion is defined by its closeness to an ideal face showing that emotion.

❖ Sigmoid-based neuron: threshold-based activation function.
❖ Radial basis neuron: closeness-based activation function.

Since we need to judge on the basis of closeness rather than a threshold, an RBFN is the better choice.

RBFN Training Details (1/2)

❖ Hidden-unit parameters (u and sigma of each hidden-layer neuron):
➢ Run a separate K-means clustering on each class of emotion, so that the number of hidden-layer neurons = K * number_of_classes.
■ u = the cluster centers
■ sigma = the average distance between all points in a cluster and the cluster center

RBFN Training Details (2/2)

❖ Weights:
➢ Weights exist only between the hidden layer and the output layer.
➢ Each output neuron gives one linear equation between the expected and observed output for each training image (expected output: 1 for the correct category, 0 for the others).
➢ #equations = #inputs * #output_neurons
➢ #variables = #weights
➢ For a sufficiently large number of inputs, #equations > #weights, so we apply a best-fit (least squares) method to calculate the weights; see the sketch below.

Testing
➢ Input is given in the form of a feature vector.
➢ The output neuron with the maximum value dictates the inferred class.
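A sketch of that best-fit step: with the hidden activations fixed, the output weights solve an overdetermined linear system by least squares. The matrix names are assumptions:

```python
import numpy as np

# Second training stage: solve the overdetermined linear system for
# the hidden-to-output weights (the "best fit" mentioned above).
def train_output_weights(H, Y):
    """H: (N, num_hidden) hidden-layer activations for N training images.
    Y: (N, num_classes) targets, 1 for the correct emotion, 0 otherwise.
    Returns the (num_hidden, num_classes) output weight matrix."""
    W, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W
```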



Demo

Results

PCA Reconstruction (test set results, %):

Data type   Top One Match   Top Two Match
Face        78.42           89.21
Lips        69.06           83.45
Eyes        73.38           87.77

RBFN with ASM (recognition results, %):

Basic Emotion   Result (%)
Happy           93.2
Sad             91.2
Fear            87.4
Angry           89.8
Surprise        90.8
Disgust         92.0

Average: 90.73%

Emotion Synthesis

❖ Emotion synthesis is mainly done through speech, and is then called emotional speech synthesis.

❖ The goal is to build machines that not only appear to "have" emotions, but actually have mechanisms analogous to human emotions.

❖ In a machine that "has" emotions, the synthesis model decides its emotional state, which then influences subsequent behavior.

❖ The ability to synthesize emotions by reasoning about them, i.e. to know that certain conditions tend to produce certain affective states, is important.

Projects and Applications

❖ Autistic people: artificial recognition and synthesis of emotion can help them communicate with others.

❖ Call centers: customer emotion recognition through speech, used to review and improve the efficiency of employees.

❖ Expression Glasses: a wearable device that allows any viewer to visualize the confusion and interest levels of the wearer.

❖ User feedback through the Sentic Mouse: a modified computer mouse that includes a directional pressure sensor to aid recognition of emotional valence (liking/attraction vs. disliking/avoidance).

❖ Orpheus, the affective CD player: a digital music delivery system that plays music based on your current mood and your listening preferences.

Future Prospects

❖ Scientific ability involves not merely subject-area training, but also the ability to judge the validity of results and a baseline curiosity.

❖ Civic understanding provides a feel for how the social contract actually works when united with effective public problem-solving skills.

❖ Global intelligence includes the use of cultural IQ and diplomatic skills to overcome polarization and to identify and act for the common good.

❖ Self-knowledge is a critical skill: it lies at the center of social intelligence and involves the essential quality of empathy in an increasingly crowded world.

References

[1] Endang Setyati, Yoyon K. Suprapto: "Facial Emotional Expressions Recognition Based on Active Shape Model and Radial Basis Function Network", CISMA, IEEE, July 2012.

[2] Juan Martínez-Miranda, Arantza Aldea: "Emotions in human and artificial intelligence", Computers in Human Behavior, vol. 2, 2005.

[3] Daw-Tung Lin, Jam Chen: "Facial Expressions Classification with Hierarchical Radial Basis Function Networks", Proc. Int. Conf. on Neural Information Processing (ICONIP), vol. 3, 1999.

[4] Matthew Turk, Alex Pentland: "Eigenfaces for Recognition", Journal of Cognitive Neuroscience, 1991.

[5] J. E. Cahn: "The Generation of Affect in Synthesized Speech", Journal of the American Voice I/O Society, vol. 8, 1990.

[6] Utsav Prabhu, Keshav Seshadhri: "Facial Recognition Using Active Shape Models, Local Patches and Support Vector Machines".

Thank You!

Source: www.xkcd.com
