6th International Conference on Humanoid, Nanotechnology, Information Technology, Communication and Control, Environment and Management (HNICEM)
The Institute of Electrical and Electronics Engineers Inc. (IEEE) – Philippine Section
12-14 November 2013, Henry Sy Hall, De La Salle University, Manila, Philippines

A Neural Network Based EEG Temporal Pattern Sonification

Roy Francis Navea
Department of Electronics and Communications Engineering
De La Salle University, Manila, Philippines
[email protected]

Elmer P. Dadios, PhD
Department of Manufacturing Engineering and Management
De La Salle University, Manila, Philippines
[email protected]

Abstract—This paper presents a technique to provide an acoustic representation of electroencephalogram (EEG) data using neural networks. The sample EEG data consist of actual random left- and right-hand movements recorded, with eyes closed, from a 21-year-old, right-handed male with no known medical conditions. In addition to the actual EEG data, an EEG signal simulator was used to generate random EEG signals. Pre-processing was done using short-time Fourier transform (STFT) and singular value decomposition (SVD) techniques. A neural network (NN) based system was used to sonify the EEG data into acoustic sound in the C5-B5 octave.

Keywords—EEG, sonification, short-time Fourier transform, singular value decomposition, neural network

I. INTRODUCTION

An EEG signal is a measurement of the currents that flow during synaptic excitations of the dendrites of many pyramidal neurons in the cerebral cortex. When brain cells (neurons) are activated, synaptic currents are produced within the dendrites. This current generates a magnetic field measurable by magnetoencephalogram (MEG) machines and a secondary electrical field over the scalp measurable by EEG systems [1]. Classical techniques in the analysis of EEG data are event-related potentials (ERP) and coherence studies [2]. Sonification of EEG signals is a means of assisting and accelerating data inspection, pattern classification and exploratory data analysis [3]. Early work on sonification was done by Mayer-Kress [4], who mapped activation directly to the pitches of musical instruments, which allowed only short signal segments to be presented in a reasonable time. In another approach [5], the raw EEG signal was played without modification, or with frequency modulation applied to shift it into a suitable frequency range, for the purpose of real-time monitoring. Three sonification methods were proposed in [3], improving the way sonification was done for EEG signals.

II. TEMPORAL PATTERN AND SONIFICATION

A. Temporal Pattern

A temporal pattern is a segment of a signal that recurs frequently in the whole temporal signal sequence. The patterns of body movement represent the habits of a person; the patterns of music represent melodic phrases. Such patterns encode the characteristics of the original temporal sequence and can be used for data summarization and pattern detection [6]. This paper looks at segmented parts of the EEG data series and performs sonification for pattern detection and EEG monitoring.

B. Sonification

Sonification is the generation of artificial sounds controlled by data, or by parameters extracted from data [7]. It has been defined as a subtype of auditory display that uses non-speech audio to represent information. As elaborated in [8], sonification is the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation. Sonification seeks to translate relationships in data or information into sounds that exploit the auditory perceptual abilities of human beings, such that the data relationships become comprehensible.
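As a minimal illustration of this kind of parameter mapping (a generic sketch, not the method of this paper), the function below maps each value of a data series linearly onto a frequency range, so that larger values sound higher; the range endpoints `f_lo` and `f_hi` are illustrative choices:

```python
def sonify(series, f_lo=261.63, f_hi=987.77):
    """Map each value of a data series linearly onto a frequency range
    (here C4 to B5); a synthesizer would then play one tone per value."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0  # avoid division by zero for a constant series
    return [f_lo + (x - lo) / span * (f_hi - f_lo) for x in series]

freqs = sonify([0.0, 0.5, 1.0])
print([round(f, 2) for f in freqs])  # [261.63, 624.7, 987.77]
```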

III. NEURAL NETWORKS

Neural networks are computing systems made up of a large number of simple, highly interconnected processing elements (nodes, or artificial neurons) that mimic the structure and operation of the biological nervous system. Neural networks learn from examples: they can be trained with known instances of a problem to “acquire” knowledge about it. Once appropriately designed and trained, the network can be put to effective use solving unknown or untrained instances of the problem. A common neural network is made up of multiple layers: an input layer, a number of hidden layers and an output layer. Each layer is composed of computing elements, the neurons. Each input to a neuron is scaled by a weight, and the weighted sum of the inputs is passed through an activation function to obtain the output. In addition to its inputs, each neuron also has a bias [1].
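The weighted-sum-and-activation step described above can be sketched as a single artificial neuron. The paper does not specify which activation function its network uses, so the logistic (sigmoid) function here is an assumption:

```python
import math

def neuron_output(inputs, weights, bias):
    """Weighted sum of the inputs plus the bias, passed through a
    logistic (sigmoid) activation function."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Example: a single neuron with three inputs
y = neuron_output([0.5, -1.0, 2.0], [0.4, 0.3, 0.1], bias=0.1)
print(round(y, 4))  # 0.5498
```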

IV. EEG DATA SOURCE AND PROCESSING

Table I shows the 4-bit output combinations and the respective keys to be played.

As in [9, 10], the EEG data used in this study were taken from a 21-year-old, right-handed male with no known medical conditions. The EEG consists of actual random movements of the left and right hand, recorded with eyes closed. The data are composed of 19 rows, one per electrode, in the order FP1, FP2, F3, F4, C3, C4, P3, P4, O1, O2, F7, F8, T3, T4, T5, T6, FZ, CZ and PZ. The recording was done at 500 Hz using a Neurofax EEG system with a daisy-chain montage, and the data were exported with a common reference using Eemagine EEG. The data set consists of three right-hand forward movements, three right-hand backward movements, three left-hand forward movements and three left-hand backward movements. The EEG data processing sequence is shown in Fig. 1.

TABLE I. NN OUTPUTS AND KEY NOTES

4-Bit Output   Key (5)     4-Bit Output   Key (5)
0000           C           0110           F#/Gb
0001           C#/Db       0111           G
0010           D           1000           G#/Ab
0011           D#/Eb       1001           A
0100           E           1010           A#/Bb
0101           F           1011           B
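Table I can be expressed as a lookup table. The frequency computation below (equal temperament with A4 = 440 Hz) is an illustrative addition, not something specified in the paper:

```python
# 4-bit NN output -> key note in the C5-B5 octave, per Table I
KEY_MAP = {
    "0000": "C",     "0001": "C#/Db", "0010": "D",     "0011": "D#/Eb",
    "0100": "E",     "0101": "F",     "0110": "F#/Gb", "0111": "G",
    "1000": "G#/Ab", "1001": "A",     "1010": "A#/Bb", "1011": "B",
}
NOTE_ORDER = list(KEY_MAP.values())  # semitone index 0..11 above C5
C5_HZ = 523.2511                     # equal temperament, A4 = 440 Hz

def key_for(bits):
    """Key note for a 4-bit output string; None for the four unused codes."""
    return KEY_MAP.get(bits)

def frequency_for(bits):
    """Equal-tempered frequency of the mapped note, or None if unused."""
    key = KEY_MAP.get(bits)
    if key is None:
        return None
    return C5_HZ * 2 ** (NOTE_ORDER.index(key) / 12)

print(key_for("1001"), round(frequency_for("1001"), 1))  # A 880.0
```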

Fig. 1. EEG Data Processing Sequence

Each signal is 3008 samples long. The signals taken from the subject are divided into 16-sample windows using the STFT, giving a total of 188 windows or segments, each in a frequency-domain representation. As in [11], SVD was used to reduce each complex matrix to a single vector. These vectors served as the input to the neural network for the classification step. The neural network used in this study is a three-layer network with input, hidden and output layers in a 19-20-4 topology. The input layer has 19 nodes, one for each of the 19 EEG channels. The output layer has 4 nodes, each outputting a binary value, 0 or 1. The 4-bit output has 16 combinations, enough to cover the 12 tones of the C5 to B5 octave; of the 16 combinations, only 12 are used.

The regression values obtained in training the network are Rtrain = 0.99987, Rvalidation = 0.99986, Rtest = 0.99985 and Roverall = 0.99987, as shown in Fig. 2. Since the user selects one channel at a time, once the network is trained the input reduces to one, indicating the chosen channel. The best validation performance of the network is 6.7591 × 10^-5 at 374 epochs.

Fig. 2. NN Regression Values

Fig. 3 shows the validation performance plot of the neural network.
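The windowing and SVD reduction might be sketched as follows. This is an assumption-laden sketch: the STFT is taken as FFT magnitudes over rectangular 16-sample windows, and the singular values themselves are used as the single feature vector, since the paper does not state exactly which SVD component it keeps:

```python
import numpy as np

def preprocess(signal, win=16):
    """Segment a 3008-sample EEG channel into 16-sample windows, take the
    FFT magnitude of each (an STFT with a rectangular window), and reduce
    the segment-by-frequency matrix to one vector of singular values."""
    n = len(signal) // win                  # 3008 // 16 = 188 segments
    segments = signal[: n * win].reshape(n, win)
    spectra = np.abs(np.fft.rfft(segments, axis=1))   # frequency-domain windows
    return np.linalg.svd(spectra, compute_uv=False)   # compact feature vector

features = preprocess(np.random.randn(3008))
print(features.shape)  # (9,)
```

The shape (9,) follows from the real FFT of a 16-sample window producing 9 frequency bins, so the 188 × 9 spectral matrix has 9 singular values.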


Fig. 3. NN Validation Performance Plot

A simple user input interface, shown in Fig. 4, was created to select the channel for sonification. The left-hand and right-hand data were divided into forward and backward movements, and the user chooses which channel to sonify. For simulating other EEG signals, a random EEG generator was used. Each channel, for both the actual and the random EEG data, has a distinct code which the user types on the input line. The program, which runs in Matlab, then plays the tones of the keys that correspond to the output of the neural network, as shown in Table I.

Fig. 4. User Input Interface

In this study, the sound is played based on the runtime clock of the program. The “beeps” are heard as the loop executes, until it reaches the maximum number of data points to be sonified.

V. RESULTS AND DISCUSSION

The results of the sonification using neural networks are shown in Table II.

TABLE II. SONIFICATION RESULTS

EEG Data Source   Sonification Rate
LF                100%
LB                100%
RF                100%
RB                100%
Random            99.61%

The unified graphical representation of the results for the LF, LB, RF and RB EEG signals is in Fig. 5. It shows 100% sonification of the EEG signals.

Fig. 5. Sonification Results for LF, LB, RF and RB EEG Signals

The system was found to properly sonify the EEG data used in [9, 10]. The ratings were marked against a sonification process that does not use a neural network. Using the actual EEG data from the subject, the neural network based sonification rate was found to be 100%: all of the processed discrete values of the EEG signal were sonified by the neural network. Using the random EEG signal generator, the sonification rate was 99.61%; Fig. 6 shows the result for each random EEG channel. The values that were not sonified are attributed to the randomness of the generated EEG patterns with respect to the body movements considered.

Fig. 6. Sonification Results for Random Signals

These percentages were obtained as averages over all 19 EEG channels.

VI. CONCLUSION

This study demonstrated a sonification process for EEG data sets using neural networks. An algorithm based on the STFT and SVD was presented, together with the designed neural network, to perform the conversion of non-audio data into audio. The sonification results show a high percentage of data conversion for the EEG data used in the study: using real EEG data the sonification rate was found to be 100%, while using simulated EEG signals it was 99.61%. It is recommended that an algorithm for the analysis of the obtained audio data be developed to interpret the behavior of the EEG signal in relation to the body movements considered. Other parameters may also be used to control the duration of the tones being played, providing additional information about the behavior of the EEG signal.

REFERENCES

[1] B. Chambayil, R. Singla and R. Jha, "EEG eye blink classification using neural network," World Congress on Engineering, vol. 1, 2010.
[2] E. Niedermeyer and F. H. Lopes da Silva, Eds., Electroencephalography: Basic Principles, Clinical Applications and Related Fields, 4th ed., Philadelphia: Lippincott Williams & Wilkins, 1999.
[3] T. Hermann, P. Meinicke, H. Bekel, H. Ritter, H. M. Müller and S. Weiss, "Sonifications for EEG data analysis," International Conference on Auditory Display, Kyoto, Japan, 2002.
[4] G. Mayer-Kress, "Sonification of multiple electrode human scalp electroencephalogram," poster presentation at ICAD '94, http://www.ccsr.uiuc.edu/People/gmk/Projects/EEGSound/, 1994.
[5] E. Jovanov, D. Starcevic, A. Samardzic, A. Marsh and T. Obrenovic, "EEG analysis in a telemedical virtual world," Future Generation Computer Systems, no. 15, pp. 255-263, 1999.
[6] P. Hong and S. Huang, "Automatic temporal pattern extraction and association," ICPR, 2002.
[7] T. Hinterberger and G. Baier, "POSER: Parametric Orchestral Sonification of EEG in Real-time for the self-regulation of brain states," International Workshop on Interactive Sonification, Bielefeld, January 2004.
[8] G. Kramer, B. N. Walker, T. Bonebright, P. Cook, J. Flowers, N. Miner et al., The Sonification Report: Status of the Field and Research Agenda, Santa Fe, NM: International Community for Auditory Display (ICAD), 1999.
[9] A. Delorme, G. Rousselet, M. Mace and M. Fabre-Thorpe, "Interaction of bottom-up and top-down processing in the fast visual analysis of natural scenes," Cognitive Brain Research, vol. 19, pp. 103-113, 2004.
[10] A. Delorme, S. Makeig, M. Fabre-Thorpe and T. Sejnowski, "From single-trial EEG to brain area dynamics," Neurocomputing, vol. 44-46, pp. 1057-1064, 2002.
[11] S. Somnugpong, S. Phimoltares and A. Maneeroj, "Iris identification system based on Fourier coefficients and singular value decomposition," International Conference on Machine Vision, 2012.
