EEG-Based Cross-Subject Emotion Recognition
This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/JSEN.2018.2883497, IEEE Sensors Journal

IEEE SENSOR JOURNAL
Abstract—Human emotion is a physical or psychological process which is triggered either consciously or unconsciously due to perception of any object or situation. The electroencephalogram (EEG) signals can be used to record ongoing neuronal activities in the brain to get information about the human emotional state. These complicated neuronal activities in the brain cause non-stationary behavior of the EEG signals. Thus, emotion recognition using EEG signals is a challenging study and it requires advanced signal processing techniques to extract the hidden information of emotions from EEG signals. Due to the poor generalizability of features from EEG signals across subjects, recognizing cross-subject emotion has been difficult. Thus, our aim is to comprehensively investigate the channel specific nature of EEG signals and to provide an effective method based on the flexible analytic wavelet transform (FAWT) for recognition of emotion. FAWT decomposes the EEG signal into different sub-band signals. Further, we applied information potential (IP) to extract the features from the decomposed sub-band signals of the EEG signal. The extracted feature values were smoothed and fed to the random forest and support vector machine (SVM) classifiers that classified the emotions. The proposed method is applied to two different publicly available databases, namely the SJTU emotion EEG dataset (SEED) and the database for emotion analysis using physiological signals (DEAP). The proposed method has shown better performance for human emotion classification as compared to the existing methods. Moreover, it yields channel specific subject classification of emotion EEG signals when exposed to the same stimuli.

Index Terms—Human emotions, EEG, FAWT, Random forest, SVM.

I. INTRODUCTION

EMOTIONS play a vital role in human life and are one of the crucial features of humans [1]. Everyday activities like communication, decision-making, etc., are highly affected by emotional behavior. For decades, brain-computer interfaces (BCI) [2] have been one of the emerging and interesting bio-medical engineering research fields that allow human beings to control external devices using their brain waves. To achieve precise and natural interaction, computers and robots must possess the ability of emotion processing [3], [4]. The study of emotions has drawn the attention of researchers from various disciplines like psychology, bio-medical science, neuroscience, etc. In the field of computer science, emotion study is inclined towards the development of applications such as task workload assessment and vigilance of the operator [5], [6]. An automated emotion recognition system makes the computer interface more user-friendly, effective, and enjoyable. The approaches to recognize human emotions vary from facial images, gesture, and speech signals to other physiological signals [7]. An inherent ambiguity exists in the recognition of emotions using facial images, gesture, or speech signals because it might be a pretended emotion, not the real one. To resolve this ambiguity, emotion recognition using electroencephalogram (EEG) signals has gained significant attention of researchers due to its accurate assessment of the emotions and objective evaluation in comparison with facial expression and gesture based techniques [8]. It has been proven that EEG signals can be helpful in effectively identifying the different emotions [9]–[12]. For effective medical care, the consideration of emotional state is important [13], [14]. The process of recognition of emotion requires suitable signal processing techniques, feature extraction, and machine learning based classifiers for automated classification.

Several techniques for automated classification of human emotion using EEG signals have been proposed in the literature [15]–[22]. The technique based on the discrete wavelet transform (DWT) is used in [15] to extract features from the EEG signals for emotion recognition. Features like energy and entropy are computed from the wavelet coefficients of the emotion EEG signals, and the fuzzy c-means and fuzzy k-means clustering algorithms are used for classification. In [16], the authors presented a method for user-independent emotion recognition based on EEG signals, gaze distance, and pupillary response. The reported classification accuracy is 68.5% for three valence labels and 76.4% for three arousal labels using a modality fusion strategy and support vector machine (SVM). The EEG signals pertaining to emotions of happiness and sadness are classified using common spatial patterns (CSP) and a linear-SVM classifier. The authors also presented a strategy to choose an optimal frequency band, and the gamma band is found suitable for EEG-based emotion classification [18]. Three time-frequency distributions, namely Hilbert-Huang, Zhao-Atlas-Marks, and the spectrogram, are used to compute features based on a time-windowing approach for discrimination between music appraisal responses [19]. Fast Fourier transform (FFT) based features are extracted and classification is performed by employing a classifier based on Bayes theorem and the perceptron convergence algorithm [20]. Differential entropy based features are computed from the EEG signals for emotion recognition. These features are found appropriate for recognition of emotion categories, namely, positive, neutral, and negative [21]. In another work, the differential entropy computed in different frequency bands is related to EEG rhythms. The beta and gamma rhythms are found most effective for emotion recognition [22]. Recently,

V. Gupta, M.D. Chopda, and R.B. Pachori are with the Discipline of Electrical Engineering, Indian Institute of Technology Indore, Indore, 453552 India. e-mails: vipingupta@iiti.ac.in, ee150002013@iiti.ac.in, and pachori@iiti.ac.in
     1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
[Fig. 1 block diagram: Emotion EEG signals → FAWT decomposition → Feature extraction using IP → Feature smoothing → Classifier → Classified emotion]
Fig. 1: Block diagram representation of the proposed methodology for the automated classification of the emotion EEG signals.
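Before the pipeline in Fig. 1 is applied, each recording is segmented into one-second epochs taken from the last 30 seconds of a trial, as described in Section II. A minimal sketch of that segmentation, assuming SEED's 200 Hz post-downsampling rate (the function and variable names are illustrative, not from the paper):

```python
def extract_epochs(signal, fs=200, last_seconds=30, epoch_seconds=1):
    """Slice the last `last_seconds` of a single-channel recording into
    non-overlapping epochs of `epoch_seconds` each."""
    tail = signal[-last_seconds * fs:]          # keep only the final 30 s
    step = epoch_seconds * fs                   # samples per epoch
    return [tail[i:i + step] for i in range(0, len(tail), step)]

# a synthetic 4-minute SEED-style trial sampled at 200 Hz
trial = [0.0] * (4 * 60 * 200)
epochs = extract_epochs(trial)
print(len(epochs), len(epochs[0]))  # 30 epochs of 200 samples each
```

Each epoch is then processed independently per channel, which is what enables the channel specific analysis described above.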
the authors investigated 18 different kinds of linear and non-linear features, out of which nine are time-frequency domain features and the others are dynamical system features, from EEG measurements, studied the different aspects which are important for cross-subject emotion recognition, e.g., different EEG channels, and achieved average classification accuracies of 59.06% and 83.33% on the database for emotion analysis using physiological signals (DEAP) and the SJTU emotion EEG dataset (SEED), respectively [23].

In this paper, we have focused on channel specific features and developed an identification system based on signal processing techniques for automated emotion recognition using EEG signals, and provided channel specific analysis across subjects. For this purpose, the emotion EEG signals are first decomposed using the flexible analytic wavelet transform (FAWT) method. The FAWT based decomposition of the EEG signal results in sub-band signals. The FAWT method has many advantages over the conventional DWT method, such as flexibility in the selection of parameters (fractional sampling, quality factor, dilation, and redundancy). Moreover, the FAWT provides a platform for analysis of the transient and oscillatory nature of the signal. It should be noted that, with these above mentioned specific features, the FAWT can also be implemented using an iterative filter bank approach like DWT. The information potential (IP) estimator is used to extract the feature values from the different sub-band signals. These feature values are smoothed and fed to the random forest and SVM classifiers separately, which classify the emotion EEG signals. The block diagram of the proposed automated emotion classification system is shown in Fig. 1.

The rest of the paper is organised as follows: In Section II, the details about the datasets are provided. The proposed methodology is explained in Section III, followed by results and discussions in Section IV. Section V provides the conclusion of the paper.

II. DATASETS

The datasets used in this work are SEED and DEAP, which are publicly available online for research purposes [22], [24], [25]. The SEED dataset consists of EEG signals recorded from 15 subjects (7 males and 8 females). Each participant contributed to the experiment thrice at an interval of one week or longer. The emotion EEG signals were collected by showing fifteen Chinese film clips for positive, neutral, and negative emotions. These films contain both scene and audio to elicit strong emotions in the subject. Every emotion contains five film clips, each 4 minutes long, in one experiment. The subjects' emotional reactions were recorded through a questionnaire after watching each emotion film clip. The 62-channel electrode cap was used for recording the EEG signals according to the international 10-20 system at a 1000 Hz sampling rate. The recorded EEG signals were preprocessed by down-sampling to 200 Hz followed by a band pass filter between 0.5 Hz and 70 Hz to remove noise and artifacts. The detailed information related to the dataset can be found in [24].

The SEED database contains recordings from 62 channels. In [22], the authors investigated the appropriate number of channels for emotion EEG signal classification, and it was observed that 12 channels were most effective for classification of emotions. These channels are as follows: C5, C6, CP5, CP6, FT7, FT8, P7, P8, T7, T8, TP7, and TP8. We have considered each channel separately and extracted one second epochs from the last 30 seconds of the recorded EEG signals. The authors in [25] have suggested the use of the last 30 seconds of each trial (video) for emotion identification from EEG signals. On the other hand, human emotions normally fall in the duration of 0.5-4 seconds [26]. It should be noted that the suitable selection of the duration is an important factor in the identification of human emotions from EEG signals. The selection of too long or too short a duration may lead to misclassification of human emotions. For these reasons, the optimal duration of one second has been suggested in [27], [28] for identification of human emotions.

In this work, we have also studied the DEAP emotion database, which consists of recordings of 32 subjects; the recording from each subject contains 32 EEG and 8 peripheral signals corresponding to 40 channels. These EEG signals were recorded by showing 40 pre-selected music videos, each with a duration of 60 seconds and a baseline recording of 3 seconds duration. The sampling frequency of these recorded EEG signals is 128 Hz. The detailed information about the database can be found in [25]. The channels T7, T8, CP5, CP6, P7, and P8 are considered in this work because these channels are more suitable for recognition of emotions, as suggested in [22].

III. METHODOLOGY

A. Flexible Analytic Wavelet Transform

FAWT [29], [30] is an advanced form of DWT that serves as an effective method for analyzing bio-medical signals [31], [32]. The time-frequency covering is one of the salient features of FAWT. The FAWT contains Hilbert transform pairs of atoms that make it suitable for analysis of signals which contain oscillations. The Q-factor (QF), number of decomposition levels (J), and redundancy (r) are the input parameters for
              Fig. 2: Plots of (a) an epoch from positive emotion EEG signal and (b)-(n) its corresponding reconstructed sub-bands (SS1 −SS13 )
              obtained using FAWT decomposition.
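Feature extraction then applies the information potential (IP) estimator to each sub-band signal such as SS1–SS13 above. This excerpt does not define the estimator, so the following is a hedged sketch of the standard kernel form of the IP, V(x) = (1/N²) Σᵢ Σⱼ Gσ(xᵢ − xⱼ), using a Gaussian kernel with an assumed width σ (not a value taken from the paper):

```python
import math

def information_potential(x, sigma=1.0):
    """Kernel estimate of the information potential:
    V(x) = (1/N^2) * sum_{i,j} G_sigma(x_i - x_j).
    sigma is an assumed kernel width, not a value from the paper."""
    n = len(x)
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))  # Gaussian kernel height
    total = 0.0
    for xi in x:
        for xj in x:
            total += norm * math.exp(-((xi - xj) ** 2) / (2.0 * sigma ** 2))
    return total / (n * n)

# identical samples give V = G_sigma(0) = 1/(sigma*sqrt(2*pi)) ≈ 0.3989
print(round(information_potential([0.0, 0.0, 0.0]), 4))
```

Tightly clustered sub-band samples yield a larger IP value than widely spread ones, which is what makes it usable as a per-sub-band feature.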
              Fig. 3: Plots of (a) an epoch from neutral emotion EEG signal and (b)-(n) its corresponding reconstructed sub-bands (SS1 −SS13 )
              obtained using FAWT decomposition.
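Before classification, the per-epoch feature values are smoothed (see Fig. 1). The exact smoothing scheme is not specified in this excerpt, so the following is only a simple causal moving-average stand-in; the window size is an assumed choice:

```python
def smooth(values, window=5):
    """Causal moving average over a feature sequence; `window` is an
    assumed size, not a value taken from the paper."""
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)          # clip the window at the start
        seg = values[lo:i + 1]
        out.append(sum(seg) / len(seg))
    return out

# an alternating feature sequence is pulled toward its local mean
print(smooth([0.0, 10.0, 0.0, 10.0, 0.0], window=3))
```

Smoothing of this kind suppresses epoch-to-epoch jitter in the feature values before they reach the random forest and SVM classifiers.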
FAWT. The QF for an oscillatory pulse can be expressed as [29]:

    QF = ω₀ / Δω,    (1)

where ω₀ is the central frequency and Δω is the bandwidth of the signal. Thus, QF is the controlling parameter for the number of oscillations in the mother wavelet. The redundancy controls the time localization of the wavelet. FAWT provides the facility to specify the dilation factor, QF, and redundancy through the adjustable parameters e, f, g, h, and β. The parameters e and f are used for up and down sampling of the high pass channel, while g and h are used for up and down sampling of the low pass channel, respectively. The β is a positive constant which gives a measure of QF and can be expressed as [29]:

    β = 2 / (QF + 1)    (2)

As per the definition of FAWT, the parameters e, f, g, h, and β control the number of oscillations in the wavelet. For a specific QF, the generated wavelets for different decomposition levels will have the same number of oscillations. The shape of these wavelets will change with the variation of the FAWT parameters [29]. Fractional sampling can also be performed using these FAWT parameters in the low and high pass channels. Implementation of J-level decomposition using FAWT is done by an iterative filter bank comprising high pass and low pass channels at every iteration level. The high pass and low pass channels of the filter bank separate the positive and negative frequencies, respectively. The frequency response corresponding to the high
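As a quick illustration of equations (1) and (2), the mapping between QF and β can be sketched in a few lines of Python. This is our own minimal sketch, not code from the paper; the function names are ours.

```python
def quality_factor(omega_0: float, delta_omega: float) -> float:
    """QF = omega_0 / delta_omega, equation (1)."""
    return omega_0 / delta_omega

def beta_from_qf(qf: float) -> float:
    """beta = 2 / (QF + 1), equation (2); a larger QF gives a smaller beta."""
    return 2.0 / (qf + 1.0)

# The QF values explored later in this paper are 3, 4, 5, and 6:
for qf in (3, 4, 5, 6):
    print(f"QF = {qf}: beta = {beta_from_qf(qf):.4f}")
```

Note that β decreases monotonically with QF, which is why the QF constraint of equation (7) below bounds β from both sides.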
1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
              Fig. 4: Plots of (a) an epoch from negative emotion EEG signal and (b)-(n) its corresponding reconstructed sub-bands (SS1 −SS13 )
              obtained using FAWT decomposition.
pass filter is expressed as [29]:

    H(ω) = (ef)^(1/2),                                  |ω| < ω_p
           (ef)^(1/2) θ((ω − ω_p)/(ω_s − ω_p)),         ω_p ≤ ω ≤ ω_s
           (ef)^(1/2) θ((π − (ω − ω_p))/(ω_s − ω_p)),   −ω_s ≤ ω ≤ −ω_p
           0,                                           |ω| ≥ ω_s          (3)

and the low pass filter frequency response is expressed as [29]:

    G(ω) = (gh)^(1/2) θ((π − (ω − ω₀))/(ω₁ − ω₀)),      ω₀ ≤ ω < ω₁
           (gh)^(1/2),                                  ω₁ < ω < ω₂
           (gh)^(1/2) θ((ω − ω₂)/(ω₃ − ω₂)),            ω₂ ≤ ω ≤ ω₃
           0,                                           ω ∈ [0, ω₀) ∪ (ω₃, 2π)    (4)

where ω_p = ((1 − β)π + ε)/e; ω_s = π/f; ω₀ = ((1 − β)π + ε)/g; ω₁ = eπ/(fg); ω₂ = (π − ε)/g; ω₃ = (π + ε)/g; and ε ≤ ((e − f + βf)/(e + f))π.

The θ(ω) can be given by [29]:

    θ(ω) = [1 + cos(ω)][2 − cos(ω)]^(1/2) / 2,  for ω ∈ [0, π]    (5)

For perfect reconstruction, the following condition must be satisfied [29]:

    |θ(π − ω)|² + |θ(ω)|² = 1    (6)

The constraint for selecting the QF parameter is expressed as:

    1 − e/f ≤ β ≤ g/h    (7)

The redundancy parameter r can be expressed as:

    r ≈ (g/h) · 1/(1 − e/f)    (8)

Thus, the selection of parameter r is subjected to the following constraint:

    r > β/(1 − e/f)    (9)

Figs. 2, 3, and 4 show the plots of the epochs and their corresponding reconstructed sub-band signals (SS1–SS13) obtained from FAWT decomposition for positive, neutral, and negative emotion EEG signals, respectively. These epochs correspond to the last one second extracted from the FT7 channel (first session of the first subject) of the SEED database. It should be noted that SS1 to SS13 denote the first to thirteenth reconstructed sub-band signals (SS) in decreasing order of frequency. These components are well behaved and suitable for feature extraction for the classification of human emotion EEG signals. These obtained SS show the outcome of the FAWT based analysis.

In this work, we have used a fixed value of the dilation factor (e/f = 3/4), as suggested in [33] for EEG signal classification. On the basis of this fixed dilation factor, we have chosen the values of the parameters QF and r for the FAWT decomposition subject to the constraints expressed in equations (7) and (9), respectively. The selected range of values for the QF parameter is (3, 4, 5, and 6), and for the r parameter it is (3, 4, 5, 6, 7, and 8). The value of J is selected from the range (5, 6, 7, 8, 9, 10, 11, and 12) because J = 12 is the maximum possible decomposition level using FAWT with these parameter values for EEG signals of length 200 samples [29].

The FAWT has been successfully applied for identification of atrial fibrillation electrocardiogram (ECG) signals [34], myocardial infarction ECG signals [31], coronary artery disease [35], [36], and focal EEG signals [33]. For the FAWT decomposition method, a MATLAB toolbox is available at http://web.itu.edu.tr/ibayram/AnDWT/.

B. Feature Extraction using Information Potential

The IP is a kernel based non-parametric estimator to evaluate Renyi's quadratic entropy. For a random variable X, the IP of X is expressed as [35], [37]:

    Î(X) = (1/N²) Σ_{i=1}^{N} Σ_{j=1}^{N} k_σ(x_j, x_i)    (10)
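The double sum of equation (10) can be sketched directly with NumPy. This is an illustrative sketch only: a Gaussian kernel of width σ is assumed here, while the kernel actually used is specified in [35], [37], and the sample data is a random stand-in for a reconstructed sub-band.

```python
import numpy as np

def information_potential(x: np.ndarray, sigma: float = 0.5) -> float:
    """IP estimate of equation (10): (1/N^2) * sum_i sum_j k_sigma(x_j, x_i).

    A Gaussian kernel is assumed here; see [35], [37] for the estimator details.
    """
    diff = x[:, None] - x[None, :]                       # N x N pairwise differences
    k = np.exp(-diff ** 2 / (2.0 * sigma ** 2)) / (sigma * np.sqrt(2.0 * np.pi))
    return float(k.sum()) / (len(x) ** 2)                # average over all N^2 pairs

# Example: IP of one sub-band epoch of length 200 (random stand-in data)
rng = np.random.default_rng(42)
sub_band = rng.normal(size=200)
print(information_potential(sub_band))
```

Because every pair (i, j) is averaged, a tightly clustered signal yields a large IP value, while a widely spread signal yields a small one, which is what makes IP usable as a sub-band feature.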
Fig. 5: Plots of (a) raw feature values and (b) smoothed feature values using a moving average filter.
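The smoothing shown in Fig. 5(b) can be sketched with a simple moving average. This is a hypothetical sketch: the window length below is our assumption, as the exact value is not restated in this section.

```python
import numpy as np

def moving_average(values: np.ndarray, window: int = 5) -> np.ndarray:
    """Smooth a feature sequence with a simple moving average filter.

    The window length is illustrative, not the paper's stated choice."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

# Example: smoothing a noisy IP feature curve over 30 epochs, as in Fig. 5
epochs = np.arange(30)
raw = 0.08 + 0.02 * np.sin(epochs / 5.0) \
      + 0.01 * np.random.default_rng(0).normal(size=30)
smooth = moving_average(raw)
```

With `mode="same"` the output keeps the input length, at the cost of edge effects in the first and last few epochs.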
              Fig. 6: Plot of average classification accuracies on SEED database for different channels with respect to J values at QF=3 and
              r=3 using random forest classifier.
              Fig. 7: Plot of average classification accuracies on SEED database for different channels with respect to QF values at J=12
              and r=3 using random forest classifier.
and this variation can be seen in Fig. 8. Therefore, we have selected r=3 for the FAWT decomposition in our methodology. Table I shows the average classification accuracies achieved with the selected FAWT parameters across channels, obtained with the random forest and SVM classifiers on the SEED database. It can be observed from Table I that the highest average classification accuracies across channels are obtained with the random forest classifier in comparison to the SVM classifier for the SEED database. It can also be observed from Table I that the channels FT7, FT8, T7, T8, C5, and TP7 have shown higher
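The selected setting (dilation factor e/f = 3/4, QF = 5, r = 3) can be checked against the constraints of equations (7) and (9), and the smoothness function θ of equation (5) can be verified to satisfy the perfect-reconstruction condition of equation (6). A minimal sketch, with our own helper names; g/h = 3/4 is derived from equation (8) for r ≈ 3 and is our illustrative choice, not a value quoted from the paper:

```python
import math

def theta(w: float) -> float:
    """theta(w) = (1 + cos w) * sqrt(2 - cos w) / 2 for w in [0, pi], eq. (5)."""
    return (1.0 + math.cos(w)) * math.sqrt(2.0 - math.cos(w)) / 2.0

# Perfect-reconstruction condition, eq. (6): |theta(pi - w)|^2 + |theta(w)|^2 = 1
for k in range(101):
    w = math.pi * k / 100.0
    assert abs(theta(math.pi - w) ** 2 + theta(w) ** 2 - 1.0) < 1e-12

def check_fawt_params(e: int, f: int, g: int, h: int, qf: float, r: float) -> bool:
    """Verify the QF constraint, eq. (7), and the r constraint, eq. (9)."""
    beta = 2.0 / (qf + 1.0)                       # eq. (2)
    qf_ok = (1.0 - e / f) <= beta <= (g / h)      # eq. (7)
    r_ok = r > beta / (1.0 - e / f)               # eq. (9)
    return qf_ok and r_ok

# Selected setting: e/f = 3/4, QF = 5, r = 3; g/h = 3/4 follows from eq. (8).
print(check_fawt_params(e=3, f=4, g=3, h=4, qf=5, r=3))  # True
```

Here β = 1/3 lies inside [1 − e/f, g/h] = [0.25, 0.75] and r = 3 exceeds β/(1 − e/f) = 4/3, so the chosen parameters are admissible.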
              Fig. 8: Plot of average classification accuracies on SEED database for different channels with respect to r values at J=12 and
              QF=5 using random forest classifier.
TABLE I: Average classification accuracies (%) across channels for J=12, QF=5, and r=3 on SEED and DEAP databases.

Database  Classification problem       Classifier                 FT7    FT8    T7     T8     C5     C6     TP7    TP8    CP5    CP6    P7     P8
SEED      Positive/negative/neutral    Random forest              91.53  90.63  93.46  92.84  91.06  89.32  91.22  89.47  89.47  87.82  89.63  89.33
SEED      Positive/negative/neutral    SVM (polynomial kernel)    79.56  77.78  83.50  81.21  79.75  76.63  78.99  75.71  75.17  71.96  71.69  73.68
SEED      Positive/negative/neutral    SVM (RBF kernel)           66.11  66.28  70.85  71.07  65.42  61.73  66.28  61.33  58.55  54.07  58.49  58.06
DEAP      HA/LA                        Random forest              -      -      80.53  80.42  -      -      -      -      79.66  79.39  80.21  79.49
DEAP      HV/LV                        Random forest              -      -      80.64  80.15  -      -      -      -      79.64  79.85  79.73  79.95
DEAP      HVHA/HVLA/LVLA/LVHA          Random forest              -      -      72.07  71.70  -      -      -      -      70.99  70.92  71.77  71.11
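The classifier comparison of Table I can be sketched with scikit-learn. This is an illustrative pipeline only: the feature matrix below is synthetic stand-in data, and reproducing the reported accuracies would require the actual IP features extracted from the SEED/DEAP recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 13))        # 13 IP features (SS1-SS13) per epoch (synthetic)
y = rng.integers(0, 3, size=300)      # positive / neutral / negative labels (synthetic)

# The three classifiers compared for SEED in Table I:
classifiers = {
    "Random forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM (polynomial kernel)": SVC(kernel="poly"),
    "SVM (RBF kernel)": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```

On random labels all three hover near chance level; with the real channel-wise IP features the random forest is the configuration that yields the accuracies reported in Table I.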
average classification accuracies across channels obtained with the SEED database using the random forest classifier in comparison to the other channels. Thus, we can clearly say that these channels are more efficient for cross-subject recognition of emotion with FAWT decomposition using EEG signals. The proposed methodology with the selected FAWT parameters along with the random forest classifier has also been tested on the DEAP database to study the effectiveness of the proposed method. The selection of the random forest classifier for the DEAP database is based on its better classification performance on the SEED database as compared to the SVM classifier. For the DEAP database, J = 11 is the maximum possible decomposition level due to the signal length of 128 samples. Table I also shows the average classification accuracies across channels obtained with the DEAP database using the random forest classifier. The higher average classification accuracies across channels for the common channels T7 and T8 on the DEAP database can be seen from Table I. The proposed methodology obtained average classification accuracies of 90.48% for positive/neutral/negative, 79.95% for high arousal (HA)/low arousal (LA), 79.99% for high valence (HV)/low valence (LV), and 71.43% for HVHA/HVLA/LVLA/LVHA emotion classification using EEG signals. Our proposed methodology outperforms the methodology proposed in [23], which gives average classification accuracies of 83.33% on the SEED database and 59.06% on the DEAP database.

                                  V. CONCLUSION

In this work, we have presented a new method for the cross-subject classification of emotion EEG signals. The proposed method explores the FAWT for identification of human emotions. The effect of variation in the FAWT parameters has been studied in this work. The IP feature values of the SS obtained using FAWT decomposition have been found useful for classification of emotion EEG signals. On increasing the decomposition level (J) and the QF parameter, the average classification accuracies increase. The classification accuracies achieved with the random forest classifier are higher than those of the SVM classifier. It has been shown that our method achieves higher classification accuracies in comparison to the existing method for cross-subject, channel-specific classification of emotion EEG signals. Cross-subject classification using the channel-specific nature of EEG can provide insight into the emotional sensitivity of different persons across brain regions when similar stimuli are presented.
                                                   REFERENCES

 [1] S. M. Alarcao and M. J. Fonseca, “Emotions recognition using EEG signals: A survey,” IEEE Transactions on Affective Computing, pp. 1–1, 2018.
 [2] F. Nijboer, F. O. Morin, S. P. Carmien, R. A. Koene, E. Leon, and U. Hoffmann, “Affective brain-computer interfaces: Psychophysiological markers of emotion in healthy persons and in persons with amyotrophic lateral sclerosis,” in 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops. IEEE, 2009, pp. 1–11.
 [3] L. Pessoa and R. Adolphs, “Emotion processing and the amygdala: From a ‘low road’ to ‘many roads’ of evaluating biological significance,” Nature Reviews Neuroscience, vol. 11, no. 11, p. 773, 2010.
 [4] W. Zheng, W. Liu, Y. Lu, B. Lu, and A. Cichocki, “EmotionMeter: A multimodal framework for recognizing human emotions,” IEEE Transactions on Cybernetics, pp. 1–13, 2018.
 [5] L.-C. Shi and B.-L. Lu, “EEG-based vigilance estimation using extreme learning machines,” Neurocomputing, vol. 102, pp. 135–143, 2013.
 [6] W.-L. Zheng and B.-L. Lu, “A multimodal approach to estimating vigilance using EEG and forehead EOG,” Journal of Neural Engineering, vol. 14, no. 2, p. 026017, 2017.
 [7] S. Jerritta, M. Murugappan, R. Nagarajan, and K. Wan, “Physiological signals based human emotion recognition: A review,” in 2011 IEEE 7th International Colloquium on Signal Processing and its Applications (CSPA). IEEE, 2011, pp. 410–415.
 [8] G. L. Ahern and G. E. Schwartz, “Differential lateralization for positive and negative emotion in the human brain: EEG spectral analysis,” Neuropsychologia, vol. 23, no. 6, pp. 745–755, 1985.
 [9] D. Sammler, M. Grigutsch, T. Fritz, and S. Koelsch, “Music and emotion: Electrophysiological correlates of the processing of pleasant and unpleasant music,” Psychophysiology, vol. 44, no. 2, pp. 293–304, 2007.
[10] G. G. Knyazev, J. Y. Slobodskoj-Plusnin, and A. V. Bocharov, “Gender differences in implicit and explicit processing of emotional facial expressions as revealed by event-related theta synchronization,” Emotion, vol. 10, no. 5, p. 678, 2010.
[11] D. Mathersul, L. M. Williams, P. J. Hopkinson, and A. H. Kemp, “Investigating models of affect: Relationships among EEG alpha asymmetry, depression, and anxiety,” Emotion, vol. 8, no. 4, p. 560, 2008.
[12] V. Bajaj and R. B. Pachori, “Human emotion classification from EEG signals using multiwavelet transform,” in 2014 International Conference on Medical Biometrics. IEEE, 2014, pp. 125–130.
[13] C. Doukas and I. Maglogiannis, “Intelligent pervasive healthcare systems,” in Advanced Computational Intelligence Paradigms in Healthcare-3. Springer, 2008, pp. 95–115.
[14] P. C. Petrantonakis and L. J. Hadjileontiadis, “A novel emotion elicitation index using frontal brain asymmetry for enhanced EEG-based emotion recognition,” IEEE Transactions on Information Technology in Biomedicine, vol. 15, no. 5, pp. 737–746, 2011.
[15] M. Murugappan, M. Rizon, R. Nagarajan, S. Yaacob, I. Zunaidi, and D. Hazry, “EEG feature extraction for classifying emotions using FCM and FKM,” International Journal of Computers and Communications, vol. 1, no. 2, pp. 21–25, 2007.
[16] M. Soleymani, M. Pantic, and T. Pun, “Multimodal emotion recognition in response to videos,” in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII). IEEE, 2015, pp. 491–497.
[17] V. Bajaj and R. B. Pachori, Detection of Human Emotions Using Features Based on the Multiwavelet Transform of EEG Signals. Cham: Springer International Publishing, 2015, pp. 215–240.
[18] M. Li and B.-L. Lu, “Emotion classification based on gamma-band EEG,” in 2009 Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2009). IEEE, 2009, pp. 1223–1226.
[19] S. K. Hadjidimitriou and L. J. Hadjileontiadis, “EEG-based classification of music appraisal responses using time-frequency analysis and familiarity ratings,” IEEE Transactions on Affective Computing, vol. 99, no. 1, p. 1, 2013.
[20] H. J. Yoon and S. Y. Chung, “EEG-based emotion estimation using Bayesian weighted-log-posterior function and perceptron convergence algorithm,” Computers in Biology and Medicine, vol. 43, no. 12, pp. 2230–2237, 2013.
[21] R.-N. Duan, J.-Y. Zhu, and B.-L. Lu, “Differential entropy feature for EEG-based emotion classification,” in 2013 6th International IEEE/EMBS Conference on Neural Engineering (NER). IEEE, 2013, pp. 81–84.
[22] W.-L. Zheng and B.-L. Lu, “Investigating critical frequency bands and channels for EEG-based emotion recognition with deep neural networks,” IEEE Transactions on Autonomous Mental Development, vol. 7, no. 3, pp. 162–175, 2015.
[23] X. Li, D. Song, P. Zhang, Y. Zhang, Y. Hou, and B. Hu, “Exploring EEG features in cross-subject emotion recognition,” Frontiers in Neuroscience, vol. 12, p. 162, 2018.
[24] W.-L. Zheng, J.-Y. Zhu, and B.-L. Lu, “Identifying stable patterns over time for emotion recognition from EEG,” IEEE Transactions on Affective Computing, 2017.
[25] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and I. Patras, “DEAP: A database for emotion analysis using physiological signals,” IEEE Transactions on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
[26] R. W. Levenson, “Emotion and the autonomic nervous system: A prospectus for research on autonomic specificity,” in Social Psychophysiology and Emotion: Theory and Clinical Applications, H. L. Wagner, Ed., pp. 17–42.
[27] N. Jatupaiboon, S. Pan-ngum, and P. Israsena, “Real-time EEG-based happiness detection system,” The Scientific World Journal, vol. 2013, no. 618649, p. 12, 2013.
[28] R. Sharma, “Automated identification systems based on advanced signal processing techniques applied on EEG signals,” Ph.D. dissertation, Discipline of Electrical Engineering, Indian Institute of Technology Indore, Indore, India, 2017.
[29] İ. Bayram, “An analytic wavelet transform with a flexible time-frequency covering,” IEEE Transactions on Signal Processing, vol. 61, no. 5, pp. 1131–1142, 2013.
[30] C. Zhang, B. Li, B. Chen, H. Cao, Y. Zi, and Z. He, “Weak fault signature extraction of rotating machinery using flexible analytic wavelet transform,” Mechanical Systems and Signal Processing, vol. 64, pp. 162–187, 2015.
[31] M. Kumar, R. B. Pachori, and U. R. Acharya, “Automated diagnosis of myocardial infarction ECG signals using sample entropy in flexible analytic wavelet transform framework,” Entropy, vol. 19, no. 9, p. 488, 2017.
[32] M. Sharma, R. B. Pachori, and U. R. Acharya, “A new approach to characterize epileptic seizures using analytic time-frequency flexible wavelet transform and fractal dimension,” Pattern Recognition Letters, vol. 94, pp. 172–179, 2017.
[33] V. Gupta, T. Priya, A. K. Yadav, R. B. Pachori, and U. R. Acharya, “Automated detection of focal EEG signals using features extracted from flexible analytic wavelet transform,” Pattern Recognition Letters, vol. 94, pp. 180–188, 2017.
[34] M. Kumar, R. B. Pachori, and U. R. Acharya, “Automated diagnosis of atrial fibrillation ECG signals using entropy features extracted from flexible analytic wavelet transform,” Biocybernetics and Biomedical Engineering, vol. 38, no. 3, pp. 564–573, 2018.
[35] M. Kumar, R. B. Pachori, and U. R. Acharya, “Characterization of coronary artery disease using flexible analytic wavelet transform applied on ECG signals,” Biomedical Signal Processing and Control, vol. 31, pp. 301–308, 2017.
[36] M. Kumar, R. B. Pachori, and U. R. Acharya, “An efficient automated technique for CAD diagnosis using flexible analytic wavelet transform and entropy features extracted from HRV signals,” Expert Systems with Applications, vol. 63, pp. 165–172, 2016.
[37] D. Xu and D. Erdogmus, “Renyi entropy, divergence and their nonparametric estimators,” in Information Theoretic Learning. Springer, 2010, pp. 47–102.
[38] M. Brennan, M. Palaniswami, and P. Kamen, “Do existing measures of Poincaré plot geometry reflect nonlinear features of heart rate variability?” IEEE Transactions on Biomedical Engineering, vol. 48, no. 11, pp. 1342–1347, 2001.
[39] C. Cortes and V. Vapnik, “Support-vector networks,” Machine Learning, vol. 20, no. 3, pp. 273–297, Sep. 1995.
[40] L. Fraiwan, K. Lweesy, N. Khasawneh, H. Wenz, and H. Dickhaus, “Automated sleep stage identification system based on time-frequency analysis of a single EEG channel and random forest classifier,” Computer Methods and Programs in Biomedicine, vol. 108, no. 1, pp. 10–19, 2012.
[41] A. Nishad, R. B. Pachori, and U. R. Acharya, “Application of TQWT based filter-bank for sleep apnea screening using ECG signals,” Journal of Ambient Intelligence and Humanized Computing, May 2018.
[42] R. Sharma, R. B. Pachori, and A. Upadhyay, “Automatic sleep stages classification based on iterative filtering of electroencephalogram signals,” Neural Computing and Applications, vol. 28, no. 10, pp. 2959–2978, Oct. 2017.
     1558-1748 (c) 2018 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
[43] A. Bhattacharyya and R. B. Pachori, “A multivariate approach for patient-specific EEG seizure detection using empirical wavelet transform,” IEEE Transactions on Biomedical Engineering, vol. 64, no. 9, pp. 2003–2015, Sept. 2017.
[44] V. Joshi, R. B. Pachori, and A. Vijesh, “Classification of ictal and seizure-free EEG signals using fractional linear prediction,” Biomedical Signal Processing and Control, vol. 9, pp. 1–5, 2014.
[45] A. K. Tiwari, R. B. Pachori, V. Kanhangad, and B. K. Panigrahi, “Automated diagnosis of epilepsy using key-point-based local binary pattern of EEG signals,” IEEE Journal of Biomedical and Health Informatics, vol. 21, no. 4, pp. 888–896, July 2017.
[46] A. H. Khandoker, D. T. H. Lai, R. K. Begg, and M. Palaniswami, “Wavelet-based feature extraction for support vector machines for screening balance impairments in the elderly,” IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, no. 4, pp. 587–597, Dec. 2007.
[47] R. Kohavi et al., “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in 14th International Joint Conference on Artificial Intelligence, vol. 14, no. 2. Montreal, Canada, 1995, pp. 1137–1145.
[48] M. Hall, E. Frank, G. Holmes, B. Pfahringer, P. Reutemann, and I. H. Witten, “The WEKA data mining software: An update,” ACM SIGKDD Explorations Newsletter, vol. 11, no. 1, pp. 10–18, 2009.

Ram Bilas Pachori received the B.E. degree with honours in Electronics and Communication Engineering from Rajiv Gandhi Technological University, Bhopal, India, in 2001, and the M.Tech. and Ph.D. degrees in Electrical Engineering from the Indian Institute of Technology (IIT) Kanpur, Kanpur, India, in 2003 and 2008, respectively. He worked as a Postdoctoral Fellow at the Charles Delaunay Institute, University of Technology of Troyes, Troyes, France, during 2007-2008. He served as an Assistant Professor at the Communication Research Center, International Institute of Information Technology, Hyderabad, India, during 2008-2009, and as an Assistant Professor at the Discipline of Electrical Engineering, IIT Indore, Indore, India, during 2009-2013. He worked as an Associate Professor at the Discipline of Electrical Engineering, IIT Indore, during 2013-2017, where he has been working as a Professor since 2017. He was a Visiting Scholar at the Intelligent Systems Research Center, Ulster University, Northern Ireland, UK, during December 2014. He is an Associate Editor of the Biomedical Signal Processing and Control journal and an Editor of the IETE Technical Review journal. He is a Senior Member of IEEE and a Fellow of IETE. He has more than 150 publications, which include journal papers, conference papers, books, and book chapters. His publications have around 3300 citations, an h-index of 30, and an i10-index of 69 (Google Scholar, November 2018). He has served as a reviewer for more than 75 journals and on the scientific committees of various national and international conferences. His research interests are in the areas of biomedical signal processing, non-stationary signal processing, speech signal processing, signal processing for communications, computer-aided medical diagnosis, and signal processing for mechanical systems.