Patient-Specific Seizure Detection from Intra-cranial EEG Using High Dimensional Clustering


Haimonti Dutta, David Waltz, Karthik M. Ramasamy, Phil Gross, Ansaf Salleb-Aouissi, Hatim Diab, Manoj Pooleery
The Center for Computational Learning Systems (CCLS), Columbia University, New York, NY.
{haimonti@ccls, waltz@ccls, km2580@, phil@ccls, ansaf@ccls, hdiab@ccls, manoj@ccls}.columbia.edu

Catherine A. Schevon and Ronald Emerson
Columbia University Medical Center (CUMC), Columbia University, New York, NY.
{cas2044, rge2}@columbia.edu

Note: An abstract of this work was published at the 4th International Workshop on Seizure Prediction, Kansas City, MO, 2009. This research was done by Karthik Ramasamy and Phil Gross while they were affiliated with CCLS.

Abstract—Automatic seizure detection is becoming popular in modern epilepsy monitoring units since it assists diagnostic monitoring and reduces the manual review of large volumes of EEG recordings. In this paper, we describe the application of machine learning algorithms for building patient-specific seizure detectors on multiple frequency bands of intra-cranial electroencephalogram (iEEG) recorded by a dense Micro Electrode Array (MEA). The MEA is capable of recording at a very high sampling rate (30 kHz), producing an avalanche of time series data. We explore subsets of this data to build seizure detectors: we discuss several methods for extracting univariate and bivariate features from the channels and study the effectiveness of high dimensional clustering algorithms such as K-means and subspace clustering for constructing the model. Future work involves the design of more robust seizure detectors using other features and non-parametric clustering techniques, detection of artifacts, and understanding the generalization properties of the models.

Keywords: seizure detection, clustering, K-means, subspace clustering.

I. INTRODUCTION

Epilepsy is a chronic neurological disorder characterized by recurrent, unprovoked seizures that manifest in a variety of ways, including emotional or behavioral disturbances, convulsive movements, and loss of awareness. Approximately 65% of epilepsy patients become seizure free with anti-epileptic medications, while another 8-10% benefit from surgical resection of the region of the brain from which seizures originate ([13], [10]). To pinpoint the regions to be removed by surgery and accurately identify focal points, patients are subjected to long-term invasive electroencephalogram (EEG) monitoring.


In a clinical trial initiated at the Columbia University Medical School by two of our co-authors (Drs. Schevon and Emerson), standard electrode arrays for recording EEG are supplemented with a specialized Micro Electrode Array (MEA) measuring 4 mm × 4 mm, with 96 channels recording signals. The discovery of "atoms" of epileptogenic activity in this data leads us to believe that these may in fact be bio-markers for seizure prediction. The goal of our research is to analyze large volumes of EEG data obtained from long-term monitoring using statistical and machine learning techniques. Thus far, clinical neurophysiologists have spent long hours manually examining the collected data, screening it visually for artifacts, detecting anomalies and annotating them. This is extremely time and labor intensive.

In this paper, we study the problem of identifying whether a time-series snippet belongs to a seizure or not; this is referred to as the seizure detection problem. Features are extracted from very high-dimensional time series data and the task of the learning algorithm is to assign each instance to one of two clusters: seizure vs. non-seizure. We compare the performance of the clustering algorithms to a known gold standard data set made available to us with human annotations.

This paper is organized as follows: Section II provides related work; Section III describes the micro-electrode array implantation, data collection procedure, artifact removal and feature extraction; Section IV studies clustering algorithms; Section V illustrates the cluster validity measures used; Section VI provides extensive empirical evaluation; and finally Section VII concludes the paper.

II. RELATED WORK

Some of the earliest seizure detection algorithms were developed in the 1970s ([17]). Later, in the 1990s, Gotman ([6]) developed a monitoring system for epileptic seizures which was evaluated on 24 surface EEG recordings and 44 intracerebral recordings of patients at the Montreal Neurological Institute. Klatchko et al. [9] built a generic detector by representing seizures as connected components in a digraph. Wilson et al. [18] used a matching pursuit based algorithm called Reveal to detect seizures.

The Matching Pursuit algorithm converts the EEG into a sum of overlapping "atoms", called Gabor atoms, which can be thought of as describing the time-frequency evolution of an independent component of the EEG waveform. Finally, Independent Component Analysis (ICA) based techniques for seizure detection have been studied by Hoeve et al. [7], who cluster portions of abnormal EEG signals into physiologically relevant clusters and use the number of underlying components and their activity as a measure for seizure detection.

III. DATA COLLECTION USING THE MICRO ELECTRODE ARRAY (MEA)

To collect data for our experiments, subjects are recruited from patients with medically intractable focal epilepsy undergoing intracranial EEG (iEEG) monitoring at the Columbia University Medical Center/New York-Presbyterian Hospital. The study is approved by the Institutional Review Board of the Columbia University Medical Center and informed consent is obtained from each patient prior to the procedure.

A. Data Pre-processing

Artifact Removal: Channels at which recording quality is compromised are identified by a clinical neurophysiologist by visual inspection of the iEEG traces. Power-frequency curves at each channel are also inspected for non-physiologic power spikes at fixed frequencies, such as 60 Hz (A/C power noise) and harmonics of 2000 Hz (caused by the clinical recording system's A/D converter). These are subsequently removed by signal processing using a notch filter.

Normalization: EEG is the amplified differential signal between a source (the "electrode") and a reference, usually chosen to be as silent as possible so as not to introduce artifacts into the recording. This is necessary because brain signals are tiny compared to those in the background. There are several ways to mitigate the effects of a noisy reference [16] (a brief sketch of these operations follows below): (1) Average referencing: compute the average of all the EEG signals and subtract this average from each one before processing. (2) Bipolar montage: take adjacent channel pairs and subtract one from the other, thus creating a set of bipolar signals to use for computation. (3) Subtract the signal at each channel from a known ground reference. All of the above normalization procedures are used in different experiments in the seizure detection task.
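The notch filtering and the three referencing schemes above are simple array operations. The following NumPy/SciPy sketch illustrates one possible implementation; the function names, channel layout, pair list, and filter quality factor are our own illustrative assumptions, not the clinical pipeline used in the study.

```python
import numpy as np
from scipy.signal import iirnotch, filtfilt

def remove_line_noise(x, fs=30000.0, freq=60.0, q=30.0):
    """Notch out power-line noise at `freq` Hz; repeat for harmonics as needed."""
    b, a = iirnotch(freq, q, fs)
    return filtfilt(b, a, x, axis=-1)

def average_reference(signals):
    """(1) Average referencing: subtract the across-channel mean from every channel.
    `signals` has shape (n_channels, n_samples)."""
    return signals - signals.mean(axis=0, keepdims=True)

def bipolar_montage(signals, pairs):
    """(2) Bipolar montage: difference of adjacent channel pairs.
    `pairs` is a list of (i, j) channel-index tuples."""
    return np.stack([signals[i] - signals[j] for i, j in pairs])

def ground_reference(signals, ground):
    """(3) Reference against a known ground channel (difference of each channel
    and the ground signal; the sign convention does not affect the features used here)."""
    return signals - ground
```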

B. EEG Data from Multiple Frequency Bands

EEG can be divided into bands by frequency. The divisions come from a large body of observations about the role of brain oscillations in these different bands in cognitive function and in epileptic pathology. The bands are Low Delta (0.1-2 Hz), High Delta (2-4 Hz), Theta (4-8 Hz), Alpha (8-12 Hz), Beta (12-25 Hz), Low Gamma (25-55 Hz), Mid Gamma (65-80 Hz), High Gamma (80-150 Hz) and Fast Ripples (200-500 Hz). In all our experiments, the raw EEG signal is broken down into these nine frequency bands before further processing.

C. Feature Extraction

There are a number of features that can be used to characterize EEGs; a detailed review is presented by Mormann et al. [13]. Typically, these features are extracted using a sliding window approach. Given an EEG time series x[m] of length T, a "sliding window" of data is built by sampling w contiguous positions from the series. The features are extracted from each sliding window instead of the entire sequence of length T. Features extracted from a single channel are called univariate, while those involving two or more channels are called bivariate or multivariate, respectively. For our experiments, we have used the following univariate and bivariate features.

Univariate features: (1) Power of the signal: P[n] = log( (1/w) Σ_{m=n-w+1}^{n} x[m]^2 ). (2) Mean Curve Length: CL[n] = log( (1/w) Σ_{m=n-w+2}^{n} |x[m] - x[m-1]| ). We applied logarithmic scaling for feature normalization as suggested by Gardner et al. [5]. Since there are 96 channels (minus 10 channels with known recording problems) and 9 frequency bands, we ended up with a feature set of size 86 × 9 = 774. The signals were normalized by subtracting them from the ground reference.

Bivariate features: (1) Mean Phase Coherence (MPC): Phase synchronization is a measurement of the coherence of phase across cortical sites, independent of amplitude variations. This measure is intended to detect processing over a cortical network, which is thought to be one of the mechanisms of seizure genesis. We adapt the definition of mean phase coherence as presented by Mormann et al. [14].
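The windowed features above are straightforward to compute once a band-filtered signal is available. The following NumPy/SciPy sketch shows one possible implementation of the log-power, mean curve length and Hilbert-transform-based mean phase coherence features; the window length, step size and function names are illustrative assumptions rather than the exact pipeline used in the study (for a 5-second window at 30 kHz, w would be 150,000 samples).

```python
import numpy as np
from scipy.signal import hilbert

def window_starts(n_samples, w, step):
    """Start indices of contiguous sliding windows of length w."""
    return range(0, n_samples - w + 1, step)

def log_power(x, w, step):
    """P[n] = log( (1/w) * sum_{m=n-w+1..n} x[m]^2 ), one value per window."""
    return np.array([np.log(np.sum(x[s:s + w] ** 2) / w)
                     for s in window_starts(len(x), w, step)])

def mean_curve_length(x, w, step):
    """CL[n] = log( (1/w) * sum_{m=n-w+2..n} |x[m] - x[m-1]| ), one value per window."""
    return np.array([np.log(np.sum(np.abs(np.diff(x[s:s + w]))) / w)
                     for s in window_starts(len(x), w, step)])

def mean_phase_coherence(x1, x2, w, step):
    """Bivariate MPC between two channels: magnitude of the mean phasor of the
    instantaneous phase difference (Hilbert-transform formulation of Mormann et al. [14])."""
    d = np.angle(hilbert(x1)) - np.angle(hilbert(x2))
    return np.array([np.abs(np.mean(np.exp(1j * d[s:s + w])))
                     for s in window_starts(len(d), w, step)])
```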

IV. CLUSTERING ALGORITHMS

In this section we describe the clustering algorithms that we use to detect seizures.

The K-Means Algorithm: One of the oldest and most commonly used clustering algorithms is the K-means algorithm ([11], [12]). Assume we are given an integer K and a set of N data points X ⊂ R^d; the goal is to partition X into K clusters, K < N. This can be achieved by choosing K centroids C_1, C_2, ..., C_K so as to minimize the potential function φ = Σ_{x∈X} min_{c∈C} Dist(x, c), where Dist is a distance function (such as squared Euclidean, L1 norm, cosine, etc.). The basic steps of the algorithm are as follows: arbitrarily choose K initial centroids C_1, C_2, ..., C_K from X; for each i ∈ {1, 2, ..., K}, set the cluster C_i to be the set of all points in X that are closer to centroid C_i than they are to centroid C_j for all j ≠ i; for each i ∈ {1, 2, ..., K}, set the cluster centroid C_i = (1/|C_i|) Σ_{x∈C_i} x; the last two steps are repeated until the process stabilizes and there are no new cluster assignments.

Subspace Clustering: Subspace clustering ([15]) seeks to find clusters in a lower dimensional space. There are two classes of subspace clustering algorithms: (1) top-down approaches, which eliminate irrelevant dimensions starting from the full high dimensional space, and (2) bottom-up approaches, which build on existing dimensions one at a time. The algorithm we use for our experiments (Algorithm IV-.1 below) follows a top-down approach. A key decision is which dimension(s) are deemed relevant and by what measure. A commonly used strategy is to first apply a dimension reduction algorithm (such as Principal Component Analysis or Singular Value Decomposition) and keep only those dimensions on which a large portion of the variance of the data is captured. Another technique, prevalent in the statistics literature ([19], [20]), projects the data onto the minor (last) principal components; such techniques identify data points that deviate sharply from the "correlation" structure of the data and cluster them together. In this case, subspace clustering is done by projecting the high dimensional data onto the last few principal components instead of the ones that capture the maximum variance.

Algorithm IV-.1: The Subspace Algorithm
Require: r, the dimension to which the data will be projected.
1. Compute the m × m covariance matrix Cov(O) of the data set O. Estimate the eigenvectors of the covariance matrix to obtain the principal components.
2. Project the data onto the r (last / first) principal components.
3. Perform a k-means clustering with a suitable distance metric (such as Euclidean, cosine, etc.) and a choice of starting points as cluster centroids.
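A compact Python sketch of Algorithm IV-.1 is given below. It uses an eigendecomposition of the covariance matrix for the projection and scikit-learn's KMeans (which implements only the squared-Euclidean variant; the cosine and L1-norm runs reported later would require a custom Lloyd-style loop). Function and parameter names are ours, not the authors'.

```python
import numpy as np
from sklearn.cluster import KMeans

def subspace_cluster(X, r, use_last=False, n_clusters=2, random_state=0):
    """Sketch of Algorithm IV-.1: project the instance-by-feature matrix X onto r
    principal components (the first r, or the last r for the minor-component variant)
    and run k-means in that subspace."""
    Xc = X - X.mean(axis=0)                       # step 1: center, then covariance
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]             # reorder so column 0 is the top PC
    eigvecs = eigvecs[:, order]
    comps = eigvecs[:, -r:] if use_last else eigvecs[:, :r]
    Z = Xc @ comps                                # step 2: project onto the chosen PCs
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
    return km.fit_predict(Z)                      # step 3: k-means in the subspace
```

Here subspace_cluster(X, r=2) roughly corresponds to the "PC12" setting reported in the tables, and use_last=True to the "Last PCs" setting; the paper selects the last components that still capture a minimum fraction of the variance, which this sketch simplifies to the literal last r.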

V. TESTING VALIDITY OF CLUSTERS

In order to measure the quality of the clusters produced by the k-means and subspace algorithms, we compare the clusters they produce to human annotated data marking each instance as seizure or non-seizure. This procedure allows us to quantitatively measure how useful the cluster labels are when compared to the expert annotated class labels. The external cluster-validity measure used in this work was first suggested by Dom [3] and is equivalent to the mutual information when cluster labels and class labels are exactly the same. Let a data set D have n instances O_1, O_2, ..., O_n which we want to partition into K clusters. Let K = {1, 2} be the set of cluster labels and C = {1, 2} be the expert annotated class labels assigned to the objects in D. (We consider only two classes since we are interested in seizure / non-seizure detection; extension of this validity measure to multiple classes is straightforward.)

Consider a two-dimensional contingency table H = h(c, k), where h(c, k) is the number of objects labelled with class c that are assigned to cluster k by the algorithm. If the clustering is perfect, H is a square matrix with only one non-zero element per row / column. The marginals are defined as h(c) = Σ_k h(c, k) and h(k) = Σ_c h(c, k). Since in our experiments the number of clusters is known a priori, the cluster-validity measure is essentially the empirical mutual information Î(C, K) = Ĥ(C) - Ĥ(C|K), where Ĥ(C) = -Σ_{c=1}^{|C|} (h(c)/n) log(h(c)/n) and Ĥ(C|K) = -Σ_{c=1}^{|C|} Σ_{k=1}^{|K|} (h(c, k)/n) log(h(c, k)/h(k)).
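Written out directly, this measure is only a few lines of NumPy. The paper used a MATLAB implementation for this computation, so the function below is merely an illustrative equivalent that transcribes the formulas above.

```python
import numpy as np

def empirical_mutual_information(class_labels, cluster_labels):
    """Cluster-validity measure of Section V: I(C;K) = H(C) - H(C|K),
    computed from the contingency table h(c, k)."""
    c_lab = np.asarray(class_labels)
    k_lab = np.asarray(cluster_labels)
    n = len(c_lab)
    classes, clusters = np.unique(c_lab), np.unique(k_lab)
    h = np.array([[np.sum((c_lab == c) & (k_lab == k)) for k in clusters]
                  for c in classes], dtype=float)          # contingency table h(c, k)
    hc, hk = h.sum(axis=1), h.sum(axis=0)                  # marginals h(c), h(k)
    H_C = -np.sum((hc / n) * np.log(hc / n))               # H(C); assumes every class occurs
    hk_full = np.broadcast_to(hk, h.shape)
    mask = h > 0                                           # skip empty cells (0 * log 0 := 0)
    H_C_given_K = -np.sum((h[mask] / n) * np.log(h[mask] / hk_full[mask]))
    return H_C - H_C_given_K
```

The "Ideal" columns of Tables I and II correspond to evaluating this quantity with the cluster labels equal to the class labels, in which case it reduces to Ĥ(C).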

VI. EXPERIMENTAL RESULTS

The data used for our experiments is obtained during the monitoring phase before surgery. This phase can last for several days at a time, and the time series data is stored in files corresponding to recordings of every half hour. At the 30 kHz sampling rate, each half-hour chunk of data for a single channel contains 30,000 × 30 (mins) × 60 (secs) = 54,000,000 sample points; with 96 channels, the total is 54,000,000 × 96 ≈ 5.2 billion sample points. Signal processing of each half-hour chunk of data (for all 96 channels) on a single-core (1.8 GHz, 8 GB RAM) machine takes approximately 11 hours. Since we are limited by the computing power of the available systems, we studied two patients only: the first patient had 3 seizures while the second had 6 seizures during the time they were monitored. These correspond to 9 half-hour chunks of data. A seizure is likely to last approximately one to two minutes at most; the recordings for the remaining time in a file correspond to non-seizure instances (including artifacts). The total number of instances obtained from each file is (30 × 60)/5 = 360, since we use a sliding window of size 5 seconds. Of these instances, only those during a seizure (roughly 12-20) are assigned to cluster 1 and all others (approximately 340 instances) are labeled 2 from the human annotations. For a given seizure of a patient, we extracted three different features: power, mean curve length and mean phase coherence. We experimented with three different clustering configurations: (1) K-means, (2) subspace clustering with projection on the first few significant principal components, and (3) subspace clustering with projection on the last few significant principal components. The distance metrics used in each case are squared Euclidean, cosine and L1-norm. It is possible to question why this task is posed as unsupervised learning and not a supervised one [1]; we would like to stress that this real-life data is not completely annotated and the generation of labels is painstakingly expensive. Part of the goal of this exploratory data analysis was to be able to automate the process of label generation. We have simplified the problem to study only seizure / non-seizure clusters, but it would not be surprising to find a larger number of unknown clusters in the data. We hope to study this phenomenon in future work.

We analyze data from each patient separately. This is motivated by the fact that treatment for each patient has to be geared to the particular type of epilepsy he or she has; thus, even though merging data from different patients produces more data to study, we cannot do this if our results are to be meaningful to clinical neurophysiologists.

Figures 1(a), 1(b) and 1(c) indicate the percentage of variance captured by the first 10 principal components in data sets generated from a seizure of patient 1 using the Power, MCL and MPC features. For the power feature, a large part (approximately 99.92%) of the variance is captured by the first PC. The y-axis in Figure 1(a) uses a logarithmic scale and clearly illustrates that the second, third and subsequent PCs capture much lower variance than the first. For the MCL feature, approximately 76% of the variance is captured by the first PC, but there is a more gradual drop in the percentage of variance across the other PCs. For MPC, only 28% is captured by the first PC, and the second through tenth PCs capture a significant portion of the variance. This indicates that each feature exhibits very different properties and is likely to produce different results when used for modeling seizure detectors. Furthermore, for subspace clustering, the choice of which subspaces to project the data onto has to be tuned carefully based on this information. For experiments requiring a choice of the first few significant PCs, we tried different dimensions and picked the ones that gave the highest cluster validity measure. Similarly, when projecting data onto the last few significant PCs, we chose those which captured at least 1% of the variance of the data.

Table I illustrates the results for the first patient. The column labelled "Ideal" shows the value of the mutual information if all the automated and manual labels were exactly identical. (The MATLAB implementation of mutual information was obtained from http://www.cs.rug.nl/~rudy/matlab/.) Interestingly, K-means clustering with the cosine metric and the bivariate feature MPC gives the best performance in all three seizures for this patient. In two of the three seizures, subspace clustering with projection on the first two principal components is competitive as well.

Figure 2 shows the percentage of variance captured by the first ten PCs for each feature in a seizure experienced by the second patient. As before, the first PC captures more than 95% of the variance if the power feature is used, whereas only slightly more than 10% is captured if the MPC feature is used. Table II shows the clustering results for this patient. In general, the univariate features MCL and Power seem to be better performers than MPC.

In this case, the cluster validity measure is closest to the ideal when using the subspace method with projection on the last two PCs. While there was a clear winner in the results for the first patient, a similar trend is not noticeable for the second patient. This is explained by the fact that the first patient had a larger number of seizures while in the sleep state, whereas the second patient had seizures in both wake and sleep states. Our future research is directed towards identifying the different states a patient is in when a seizure occurs, using non-parametric clustering techniques such as expectation maximization.

VII. CONCLUSION

A clinical trial initiated at the Columbia University Medical School has collected large volumes of intra-cranial EEG recordings. In this paper, we describe the challenges faced in mining this data. The goal is to develop seizure detector(s) by using univariate and bivariate features and applying well-known time-series clustering algorithms such as K-means and subspace clustering. We compare the automated clustering results to manual annotations of seizure / non-seizure using mutual information as the cluster validity measure. Our results indicate that measures capable of detecting activity directly tied to seizure generation are the most likely to succeed. In future work, we will examine the performance of non-parametric clustering algorithms and a wider selection of univariate and bivariate features.

ACKNOWLEDGMENT

Funding for this work is provided by the Research Initiatives in Science and Engineering (RISE) grant from Columbia University and NSF award IIS-0916186.

REFERENCES

[1] H. Dutta, D. Waltz, A. Salleb-Aouissi, C. A. Schevon, and R. Emerson. Designing Patient-Specific Seizure Detectors from Multiple Frequency Bands of Intracranial EEG Using Support Vector Machines. In Proceedings of the Workshop on Data Mining for Healthcare Management, 14th Pacific-Asia Conference on Knowledge Discovery and Data Mining, June 2010.
[2] K. Beyer, R. Ramakrishnan, U. Shaft, and J. Goldstein. When is nearest neighbor meaningful? In Proceedings of the International Conference on Database Theory, 1999.
[3] B. E. Dom. An information-theoretic external cluster-validity measure. IBM Research Technical Report RJ-10219, October 2001.
[4] Freiburg Seizure Prediction Project. https://epilepsy.uni-freiburg.de/freiburg-seizure-prediction-project.
[5] A. Gardner, A. M. Krieger, G. Vachtsevanos, and B. Litt. One-class novelty detection for seizure analysis from intracranial EEG. Journal of Machine Learning Research, 7:1025-1044, June 2006.
[6] J. Gotman. Automatic seizure detection: improvements and evaluation. Electroencephalography and Clinical Neurophysiology, 76:317-324, 1990.
[7] M. J. Hoeve, B. J. van der Zwaag, M. van Burik, C. H. Slump, and R. Jones. Detecting epileptic seizure activity in the EEG by independent component analysis. In 14th Annual Workshop on Circuits, Systems and Signal Processing (ProRISC), pages 373-378, November 2003.
[8] I. T. Jolliffe. Principal Component Analysis. Springer Verlag, 2002.
[9] A. Klatchko, G. Raviv, W. R. S. Webber, and R. P. Lesser. Enhancing the detection of seizures with a clustering algorithm. Electroencephalography and Clinical Neurophysiology, 106:52-63, 1998.
[10] B. Litt and J. Echauz. Prediction of epileptic seizures. The Lancet Neurology, 1:22-30, May 2002.
[11] S. Lloyd. Least squares quantization in PCM. Bell Telephone Laboratories Paper, 1957.
[12] J. MacQueen. Some methods for classification and analysis of multivariate observations. In Proceedings of the 5th Berkeley Symposium, pages 281-297, 1967.
[13] F. Mormann, R. G. Andrzejak, C. E. Elger, and K. Lehnertz. Seizure prediction: the long and winding road. Brain, 130:314-333, 2007.
[14] F. Mormann, K. Lehnertz, P. David, and C. E. Elger. Mean phase coherence as a measure for phase synchronization and its application to the EEG of epilepsy patients. Physica D, 144:358-369, April 2000.
[15] L. Parsons, E. Haque, and H. Liu. Subspace clustering for high dimensional data: a review. SIGKDD Explorations Newsletter, 6(1):90-105, 2004.
[16] A. J. Rowan and E. Tolunsky. Primer of EEG: With a Mini-Atlas, chapter 1. Elsevier Science, USA, 2003.
[17] S. S. Viglione, V. A. Ordon, and F. Risch. A methodology for detecting ongoing changes in the EEG prior to clinical seizures. In 21st Western Institute on Epilepsy, 1970.
[18] S. B. Wilson, M. L. Scheuer, R. G. Emerson, and A. J. Gabor. Seizure detection: evaluation of the Reveal algorithm. Clinical Neurophysiology, 115:2280-2291, 2004.
[19] R. Gnanadesikan and J. Kettenring. Robust Estimates, Residuals, and Outlier Detection with Multiresponse Data. Biometrics, 28:81-124, 1972.
[20] D. Hawkins and P. Fatti. Exploring Multivariate Data Using the Minor Principal Components. The Statistician, 33:325-338, 1984.

Table I. Cluster-validity measures obtained using K-means and subspace clustering (with projection on the first two PCs and on the last PCs) under the squared Euclidean, cosine and L1-norm distance metrics, for the three seizures experienced by patient 1. The "PC12" columns represent subspace clustering with projection on the 1st and 2nd principal components; "Last PCs" refers to projection on the principal components that capture at least 0.1% of the variance.

                |      Squared Euclidean        |            Cosine             |            L1-Norm            |
Sz  Feature     | KMeans    PC12      Last PCs  | KMeans    PC12      Last PCs  | KMeans    PC12      Last PCs  | Ideal
1   Power       | 2.05E-04  2.05E-04  0.0063    | 0.0437    0.0563    4.16E-07  | 0.0578    5.57E-05  0.0013    | 0.1554
1   MCL         | 0.0925    0.1057    2.28E-04  | 0.1074    0.0946    5.47E-05  | 0.1057    0.0925    4.15E-04  | 0.1554
1   MPC         | 0.1074    0.1074    4.17E-04  | 0.1212    0.1212    1.35E-04  | 0.0946    0.1074    2.05E-04  | 0.1554
2   Power       | 5.20E-04  5.20E-04  0.0018    | 0.0022    0.0022    0.0011    | 7.42E-05  7.42E-05  2.94E-04  | 0.1903
2   MCL         | 0.1175    0.1175    0.0031    | 0.106     0.0949    7.04E-05  | 0.1175    0.1175    4.18E-05  | 0.1903
2   MPC         | 0.1296    0.1296    3.03E-06  | 0.1424    0.1424    2.17E-04  | 0.0949    0.1175    0.0023    | 0.1903
3   Power       | 1.78E-05  1.77E-05  0.0587    | 0.0678    0.0678    6.32E-04  | 9.66E-04  0.001     0.0033    | 0.2224
3   MCL         | 0.1498    0.1498    0.0412    | 0.1383    0.1166    0.0541    | 0.1498    0.1498    1.49E-04  | 0.2224
3   MPC         | 0.1619    0.1619    1.81E-04  | 0.1746    0.0381    9.67E-05  | 0.1383    0.1619    2.15E-05  | 0.2224

[Figure 1 (panels: (a) Power on a log scale, (b) MCL, (c) MPC): Percentage of variance captured by the first 10 principal components in data sets generated from a seizure of patient 1 using the Power, MCL and MPC features.]


[Figure 2 (panels: (a) Power on a log scale, (b) MCL, (c) MPC): Percentage of variance captured by the first 10 principal components in data sets generated from a seizure of patient 2 using the Power, MCL and MPC features. Approximately 99% of the variance is captured by the first PC for the Power feature, 54% for MCL, and only 11.25% for MPC.]

Table II. Cluster-validity measures obtained using K-means and subspace clustering (with projection on the first two PCs and on the last PCs) under the squared Euclidean, cosine and L1-norm distance metrics, for the six seizures experienced by patient 2.

                |      Squared Euclidean        |            Cosine             |            L1-Norm            |
Sz  Feature     | KMeans    PC12      Last PCs  | KMeans    PC12      Last PCs  | KMeans    PC12      Last PCs  | Ideal
1   Power       | 1.35E-04  1.35E-04  0.0018    | 0.0068    5.17E-04  3.19E-04  | 3.08E-03  3.08E-03  1.29E-02  | 0.1903
1   MCL         | 0.0107    0.0107    0.0011    | 7.92E-04  0.0028    0.0031    | 4.75E-04  0.0015    9.86E-06  | 0.1903
1   MPC         | 0.002     0.0019    1.53E-07  | 0.0052    0.0307    7.03E-05  | 0.0019    0.0019    1.12E-04  | 0.1903
2   Power       | 0.0161    0.0089    6.06E-04  | 0.0026    2.76E-06  0.0076    | 2.75E-04  2.71E-02  9.77E-04  | 0.1732
2   MCL         | 2.78E-05  2.08E-04  0.0015    | 0.0329    0.0563    3.87E-04  | 4.90E-03  3.32E-05  8.27E-04  | 0.1732
2   MPC         | 0.0104    0.0101    4.33E-05  | 0.0048    0.0032    0.0016    | 0.0055    6.50E-04  5.66E-04  | 0.1732
3   Power       | 0.0084    0.0108    0.0084    | 0.0837    0.0062    0.0044    | 0.0541    0.0385    3.80E-03  | 0.3927
3   MCL         | 4.74E-04  2.21E-05  1.54E-05  | 3.41E-02  1.75E-02  7.67E-04  | 1.73E-04  1.54E-04  8.40E-03  | 0.3927
3   MPC         | 0.0428    0.0344    0.0011    | 0.031     0.0314    0.0019    | 0.0168    3.14E-02  3.30E-03  | 0.3927
4   Power       | 0.0072    0.0072    2.83E-04  | 0.0286    0.0156    0.0067    | 0.0072    0.0067    0.0104    | 0.1169
4   MCL         | 0.0236    0.0236    0.0232    | 0.0025    0.0013    0.0173    | 0.0207    0.0241    0.0072    | 0.1169
4   MPC         | 0.0186    0.0196    1.60E-06  | 0.0188    0.0184    0.003     | 0.0141    0.0154    3.27E-05  | 0.1169
5   Power       | 3.09E-05  3.09E-05  2.00E-03  | 2.20E-02  8.40E-04  3.60E-03  | 7.19E-04  6.72E-04  6.70E-03  | 0.0732
5   MCL         | 0.0326    4.60E-04  2.38E-04  | 0.037     0.0019    1.77E-04  | 2.95E-02  2.68E-04  1.37E-04  | 0.0732
5   MPC         | 0.0055    0.0054    7.17E-05  | 2.17E-04  0.0053    8.83E-05  | 3.67E-04  7.10E-04  6.61E-04  | 0.0732
6   Power       | 3.89E-05  3.89E-05  6.73E-04  | 1.40E-02  1.30E-03  2.60E-03  | 4.60E-03  4.20E-03  5.28E-05  | 0.0732
6   MCL         | 6.40E-05  2.23E-05  2.50E-03  | 1.47E-02  1.42E-02  4.30E-03  | 9.26E-04  1.47E-02  3.40E-03  | 0.0732
6   MPC         | 5.89E-04  4.41E-04  2.10E-04  | 5.00E-03  3.60E-03  7.17E-05  | 4.64E-05  5.70E-06  1.16E-04  | 0.0732

