Efficient temporal decomposition of local field potentials


2011 IEEE International Workshop on Machine Learning for Signal Processing September 18-21, 2011, Beijing, China

EFFICIENT TEMPORAL DECOMPOSITION OF LOCAL FIELD POTENTIALS

Austin J. Brockmeier and José C. Príncipe

Babak Mahmoudi and Justin C. Sanchez

University of Florida Electrical and Computer Engineering Gainesville, FL 32611 USA

University of Miami Biomedical Engineering Coral Gables, FL 33146 USA

ABSTRACT

Local field potentials (LFPs) arise from dendritic currents that are summed by the brain tissue's impedance. By assuming that the rhythms present in LFPs result from the coordinated activity of sparse and transient neural assemblies transformed by the neural tissue, we propose to recover these neural assembly sources using independent component analysis on segments of a single LFP channel. The corresponding source signals and the set of temporal filters that operate on them constitute an efficient time-frequency decomposition of the LFP. This decomposition has the potential to identify sources that are more statistically dependent with stimuli or single-cell activity than the raw signal. In this work we show preliminary results on a synthetic dataset and a real dataset recorded from a rat's nucleus accumbens during a reward-administering experiment. Compared with standard time-frequency analysis, this computational model for LFP analysis is entirely data-driven because the filters, which form the basis for the decomposition, are estimated directly from the data.

Index Terms: local field potential, multiscale neural signal analysis, statistical decoding, ICA

This work was supported in part by the University of Florida Graduate School Fellowship and DARPA Contract N66001-10-C-2008.

1. INTRODUCTION

The electrical potentials within neural tissue can be recorded via invasive electrodes as local field potentials (LFPs). These electric potentials are the result of the combined ionic currents produced by the activity of the surrounding neurons, converted by the electrical impedance of the physical medium of the brain [1]. Observable fluctuations are driven by interacting neurons and are implicated in communication between brain areas and ultimately in behavior [2]. Analyzing the neural response to a specific stimulus or condition is a common objective of neuroscience research. As LFPs represent the combined activity of ensembles of neurons, the raw signal may show poor dependence on stimuli or single-cell activity [3].


The combination of the stimulus-evoked and ongoing activity potentials can be approximated by a linear summation within a wide dynamic range of voltages [4]. With this perspective, it is feasible that some spectral [5] or spatial [3] decomposition may be able to find separate source signals that are statistically dependent with the variable of interest. Decomposing the LFP back into its constituent sources can be used to explore the function of brain dynamics. The proven machinery of independent component analysis is an applicable approach and has been applied in similar research to decompose multi-electrode electroencephalograms (EEGs) [6, 7, 8, 9]. In [3], Makarov et al. used ICA across multi-electrode LFP signals recorded from collinear electrodes. The authors advance the idea that under experimental conditions the neuronal sources of LFPs can be considered independent in time [3]. Along similar lines, we propose using independent component analysis (ICA) [10] to decompose a single LFP channel into its constituent sources, but instead of finding a spatial demixing matrix, we use windows in time from a single channel for the decomposition. An algorithm for the single-channel ICA problem for biophysical signals [11, 12] has been applied to EEG and ECG data [13, 14]. The proposed algorithm is developed along similar lines; the improvement in this work is that the ICA is performed in the frequency domain, reducing the number of redundant components by approximately half.

1.1. LFP Model

LFP signals are an integrated representation of microscopic neural activity at much longer time scales and wider spatial scales (sometimes referred to as mesoscopic brain activity). Accordingly, we propose that LFPs contain the result of coordinated activity emanating from latent neural cell populations, mixed with unsynchronized neural activity acting as additive noise. We assume this coordinated neural activity occurs in short, transient bursts (sometimes referred to as spindles in the neurophysiology literature). Since the bursts are transient, their distribution is sparse in time; they scarcely overlap. Given their locality in space and asynchrony in time, these cell populations may be considered independent. Thus, we consider these latent cell populations to be sparse independent sources. The LFP collected at a given electrode is the result of the summation of the activity of these sources, filtered by the impedance of the biological medium.

[Figure 1: block diagram in which several sparse source activity signals each pass through a temporal filter and are summed to form the LFP.]

Fig. 1. Model of LFPs as the linear combination of sparse source activity (coordinated neuron activity).
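As an illustration only, the generative model of Fig. 1 can be sketched in a few lines of numpy; the number of sources, event rate, filter shapes, and noise level below are arbitrary assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, n_src = 100_000, 64, 4             # samples, filter length, number of sources (illustrative)

# Sparse source activity: isolated unit events at a low rate (idealized coordinated bursts).
event_rate = 0.002                        # expected events per sample (assumption)
sources = (rng.random((n_src, N)) < event_rate).astype(float)

# One temporal filter per source; smoothed random FIR filters stand in for the unknown
# assembly/impedance filters of the model.
filters = np.stack([np.convolve(rng.standard_normal(T), np.hanning(9), mode="same")
                    for _ in range(n_src)])

# LFP = instantaneous sum of the filtered sources plus unsynchronized background noise.
lfp = sum(np.convolve(s, h)[:N] for s, h in zip(sources, filters))
lfp = lfp + 0.05 * rng.standard_normal(N)
```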

Based on this discussion, a simplified model of the LFP is the instantaneous sum of independent components, each filtered by a unique temporal filter, with added high-frequency noise, as seen in Fig. 1. The peak of the impulse response of a filter may be interpreted as the occurrence of activity from a particular cell population. The objective is to decompose LFP signals into a descriptive set of filters and source signals representing the activity of latent cell populations. This sparse encoding of a signal has been used successfully in computational perception to explain processing in the auditory system [15, 16]. We assume the temporal filters can be approximated by finite impulse response (FIR) filters in a multiple-input single-output (MISO) system. As a first approach, we propose using independent component analysis to decompose the potential on a single channel and to analyze the resulting filters, as was done for natural sounds in [15] and biomedical signals in [12]. In this work, we formulate the problem, with the necessary assumptions, so that off-the-shelf algorithms for independent component analysis are applicable.

2. METHOD

The formulation we propose for decomposing a single LFP signal into a set of mixing filters and source signals is designed so that tested implementations of popular ICA algorithms can be used for the estimation. Since there is only one channel, the input is formed by multiple windows of the original time series, where the different lags in each window correspond to the dimensions. Directly using this formulation poses three specific challenges: a phase ambiguity, a tradeoff between resolution and the number of components (controlled by the window length), and the choice of distribution for the independent sources. To alleviate part of the phase ambiguity, where shifting the location of the window in time would correspond to a different source, we work on the complex coefficients of the discrete Fourier transform (DFT) of the window of samples. In the frequency domain, delays become multiplications with complex exponentials, which for delays less than the window length correspond to circular shifts.

We restrict the time-domain filters to have real coefficients. By the properties of the DFT we need only roughly half of the coefficients, since the remainder are conjugate symmetric. As for the source distributions, we use a deterministic approach with a fixed nonlinearity (G1 in [17]). We use publicly available code for non-circular complex ICA [18], which extends complex FastICA [17] to non-circular sources and is a fixed-point algorithm like the popular FastICA [19]. By working in the frequency domain and preserving phase, we gain two advantages over previous single-channel decomposition work: the independent components are invariant to small shifts, and the number of independent components for filters of length T is T/2 + 1 versus T for [11, 12]; there is therefore less redundancy in the extracted components, which would otherwise require post-processing by clustering. In our testing, apart from the reduced number of components, the temporal aspects and frequency content of the frequency-domain and time-domain decompositions do not differ significantly.

2.1. ICA on Discrete Fourier Transform Coefficients

A single channel of LFP data is a series of real values x(1), x(2), . . . , x(N). We propose that this series is the result of filtering sparse independent sources as in Fig. 1. Let [x(t) x(t+1) · · · x(t+T−1)]^T be a column vector of length T corresponding to a window of samples starting at time t. By working in the frequency domain, temporal shifts of the window much smaller than the window length T can still correspond to the same filter. To avoid discontinuities of the filters at the edges of the windows, we use a non-rectangular windowing function. The discrete Fourier transform (DFT) of a window starting at time t is

\hat{X}_t(k) = \sum_{\tau=0}^{T-1} \omega(\tau)\, x(t+\tau)\, e^{-j 2\pi k \tau / T}, \qquad k = 0, \ldots, T-1 \qquad (1)

where j = √−1 and ω(·) is a windowing function. We use a 25% tapered cosine window, a Tukey window [20], because our primary aim is to resolve the time-domain features, not spectral estimation. Since we want purely real filters in the time domain, we can remove some coefficients. The original signal is real valued, so the coefficients of the DFT are conjugate symmetric, X̂_t(k) = X̂_t*(T−k). Assuming that T is even, we need use only the first T/2 + 1 coefficients, X_t(k) = X̂_t(k), k = 0, . . . , T/2. Define a set of vectors X = {X_t}, t ∈ T, where the indexes in T are drawn from 0, 1, . . . , N−T without replacement. These vectors are treated as the input to any complex ICA algorithm. The dimension of the vectors is T/2 + 1, and each dimension corresponds to a DFT coefficient. We decompose the vector of DFT coefficients X_t, resulting from a window of T samples starting at time t, into n complex source coefficients S_1(t), . . . , S_n(t):

S_i(t) = W_i^T X_t, \qquad (2)

where W_i is a row of the demixing matrix W = [W_1 · · · W_n]^T. In general, W is optimized over the set of samples such that the source coefficients are as statistically independent as possible.

2.2. Mixing Matrix: Frequency and Time Representations

From the set X an ICA algorithm estimates the demixing matrix. Each of the n rows of the demixing matrix has entries only for the first T/2 + 1 DFT coefficients, where n ≤ T/2 + 1 is the number of components estimated. If n = T/2 + 1, the demixing matrix is square and the mixing matrix is A = W^{-1} = [A_1 · · · A_n], so that

X_t = A\,[S_1(t)\; S_2(t)\; \cdots\; S_n(t)]^T. \qquad (3)
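To make the data preparation concrete, here is a minimal numpy/scipy sketch, under the stated assumptions, of building the set of windowed DFT coefficient vectors (eq. 1, keeping the first T/2 + 1 coefficients) and of applying an estimated complex demixing matrix as in (2)-(3). The estimation of W itself (the non-circular complex ICA of [18]) is not reproduced here, and the helper names dft_features and apply_demixing are mine, not the paper's.

```python
import numpy as np
from scipy.signal.windows import tukey

def dft_features(x, T=64, n_windows=10_000, rng=None):
    """Random T-sample windows of x, Tukey-windowed, reduced to the first T//2+1 DFT coefficients (eq. 1)."""
    rng = np.random.default_rng() if rng is None else rng
    w = tukey(T, alpha=0.25)                          # 25% tapered cosine window [20]
    starts = rng.choice(len(x) - T + 1, size=n_windows, replace=False)
    windows = np.stack([w * x[t:t + T] for t in starts])
    # rfft of a real window returns exactly the first T//2 + 1 complex coefficients.
    return np.fft.rfft(windows, axis=1)               # shape (n_windows, T//2 + 1)

def apply_demixing(X, W):
    """Source coefficients (eq. 2) and mixing matrix (eq. 3) from a complex demixing matrix W, shape (n, T//2+1)."""
    S = X @ W.T                                       # S[t, i] = W_i^T X_t
    A = np.linalg.inv(W) if W.shape[0] == W.shape[1] else np.linalg.pinv(W)
    return S, A                                       # columns A_i are the frequency-domain mixing filters
```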

The time-domain representation of the mixing or demixing filters requires the inverse discrete Fourier transform. Since we restrict the time domain to be real, we multiply all of the coefficients by the conjugate of the DC component, A_i(0), use only the real portion of the coefficient at the Nyquist frequency, A_i(T/2), and use conjugate symmetry to obtain the remaining complex coefficients, forming the frequency-domain mixing filter Â_i(0), . . . , Â_i(T−1):

\hat{A}_i(k) = \begin{cases}
A_i^*(0)\,A_i(0), & k = 0 \\
A_i^*(0)\,A_i(k), & k = 1, \ldots, T/2 - 1 \\
\Re\{A_i^*(0)\,A_i(k)\}, & k = T/2 \\
\left[A_i^*(0)\,A_i(T-k)\right]^*, & k = T/2 + 1, \ldots, T-1
\end{cases} \qquad (4)

Then the time-domain representation of the mixing filter a_i for component i ∈ {1, . . . , n} is

a_i(\tau) = \frac{1}{T} \sum_{k=0}^{T-1} \hat{A}_i(k)\, e^{j 2\pi k \tau / T}, \qquad \tau = 0, \ldots, T-1. \qquad (5)
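A numpy sketch of the conversion in (4)-(5), assuming a single frequency-domain mixing column A_i with T/2 + 1 complex entries; the helper name to_time_domain is mine, not the paper's. The same routine converts rows W_i of the demixing matrix into time-domain demixing filters w_i.

```python
import numpy as np

def to_time_domain(Ai):
    """Real time-domain filter from the T//2+1 complex coefficients of one mixing column, per eqs. (4)-(5)."""
    half = np.conj(Ai[0]) * np.asarray(Ai, dtype=complex)    # rotate all coefficients by the conjugate of the DC term
    half[-1] = half[-1].real                                 # keep only the real part at the Nyquist bin, k = T/2
    A_hat = np.concatenate([half, np.conj(half[-2:0:-1])])   # conjugate symmetry for k = T/2+1, ..., T-1
    return np.fft.ifft(A_hat).real                           # eq. (5); imaginary part is ~0 by construction
```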

The same approach can be used to convert the rows of the demixing matrix W back to time-domain demixing filters w_i, i ∈ {1, . . . , n}. Each source signal s_i(t) is the original signal filtered with the respective time-domain demixing filter w_i, as in (6). The independent component process xs_i(t) is simply the source signal filtered with the mixing filter a_i, as in (7):

s_i(t) = \sum_{\tau=0}^{T-1} w_i(\tau)\, x(t-\tau) \qquad (6)

xs_i(t) = \sum_{\tau=0}^{T-1} a_i(\tau)\, s_i(t-\tau) \qquad (7)
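Given the time-domain filters, (6) and (7) are plain causal FIR filtering operations; a minimal sketch, assuming 1-D numpy arrays x, wi, and ai (the function name is mine):

```python
import numpy as np

def source_and_component(x, wi, ai):
    """Source signal s_i (eq. 6) and independent component process xs_i (eq. 7) for one component."""
    si = np.convolve(x, wi)[:len(x)]     # s_i(t) = sum_tau w_i(tau) x(t - tau)
    xsi = np.convolve(si, ai)[:len(x)]   # xs_i(t) = sum_tau a_i(tau) s_i(t - tau)
    return si, xsi
```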

3. RESULTS

The results cover a synthetic dataset created to test the basic performance of the method and an LFP recording from the nucleus accumbens of an awake rat during a reward-administering experiment.

3.1. Synthetic Testset

A synthetic dataset was created with specified mixing filters and source distributions. The filters are a combination of Haar and Daubechies wavelet packets, ensuring that the synthetic model covers the frequency space with filters of varying temporal characteristics. The source distributions are independent of one another but are temporally correlated binary processes. The correlation is imparted by taking a Bernoulli process with success rate p, keeping every 10th success, and setting the rest to failures (a code sketch of this construction follows Table 1). In this framework, the successes represent coordinated events from the latent cell populations, and the success rate p controls the source density/sparsity. An N = 750,000 sample signal was created from 32 sources with wavelet packet filters; the filters are shown in black in Fig. 2. The method was applied to 10,000 windows of length T = 64, which allows up to n = 33 unique components. In this experiment the sparsity of the random source distributions was varied. For illustration, we assume the signal was sampled at 100 Hz; the expected event rate of a single source is then 10p Hz, and the combined rate across all sources is 320p Hz. We tested convergence for different source sparsities by varying p. Since the true filters are known, we match each true filter to the closest filter in the estimated set and then compute the mean and standard deviation of these best-fit cross-correlations; the results are shown in Table 1. For p = 0.01, the filters that best matched the true filters and the remaining filters are shown in Fig. 2. Table 1 shows that the method succeeds when the sources are relatively sparse; as the density increases, the filters are more likely to be poorly estimated. At p = 0.15 and 0.5, the cross-correlation was not much better than that of filters estimated from Gaussian white-noise input: 0.420 ± 0.035.

p       rate      mean    st. dev.
0.005   1.6 Hz    0.854   0.064
0.05    16 Hz     0.836   0.074
0.1     32 Hz     0.799   0.092
0.15    48 Hz     0.431   0.056
0.5     160 Hz    0.442   0.048

Table 1. Mean and standard deviation of the cross-correlation between the true filters and the best-matched estimated filters across different sparsities controlled by p. Assuming a 100 Hz sampling rate, each source has a rate of 10p Hz, and the sources combined have a rate of 320p Hz.
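The decimated Bernoulli construction used for the synthetic sources might be implemented as follows; this is a sketch, with sparse_source a hypothetical helper name and the seed handling an assumption.

```python
import numpy as np

def sparse_source(N, p, keep_every=10, rng=None):
    """Bernoulli(p) event train in which only every `keep_every`-th success is kept (Sec. 3.1)."""
    rng = np.random.default_rng() if rng is None else rng
    successes = rng.random(N) < p
    idx = np.flatnonzero(successes)[keep_every - 1::keep_every]   # keep every 10th success
    s = np.zeros(N)
    s[idx] = 1.0
    return s

# e.g. 32 independent sources of length 750,000 with p = 0.005
sources = np.stack([sparse_source(750_000, 0.005) for _ in range(32)])
```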

[Figure 2: four panels showing the aligned filters (impulse responses, time in seconds) and their frequency-domain amplitudes (frequency in Hz) for the 32 synthetic filters.]

Fig. 2. Synthetic dataset: Impulse and frequency responses of the filters, with the true filters shown in black. Aligned best matches for p = 0.005 are shown in blue in the top panels. The remaining filters that were not best matches are shown in red or green/dotted in the bottom panels.
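One plausible reading of the best-match cross-correlation used for Table 1 (Sec. 3.1) is sketched below; the unit-norm normalization and the maximum over relative shifts are assumptions about details the paper does not spell out.

```python
import numpy as np

def best_match_scores(true_filters, est_filters):
    """For each true filter, the highest normalized cross-correlation over all estimated filters and shifts."""
    scores = []
    for f in true_filters:
        f = f / np.linalg.norm(f)
        best = 0.0
        for g in est_filters:
            g = g / np.linalg.norm(g)
            best = max(best, np.abs(np.correlate(f, g, mode="full")).max())
        scores.append(best)
    return np.mean(scores), np.std(scores)
```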


3.2. Rat Dataset

Microwire neuronal data were recorded chronically from the nucleus accumbens of a rat. Before implantation, the rat was trained in a two-lever choice task: by pressing the target lever, cued by LEDs, the rat received a water reward. Each trial was initiated by the rat with a nose poke. Here the rat simply watches a robotic arm move toward one of two levers; the lever on each trial is chosen pseudo-randomly. If the robotic arm moves toward the target lever, indicated by an LED, the rat receives the water reward; if the robotic arm moves to the wrong lever, no reward is given and the rat receives a negative tone. Thus, the robotic arm replaces the rat's own lever pressing, and the rat can identify upcoming reward based on the robot movement. The data used in the analysis came from a single day's recording in which the same target lever was always cued. We analyze the first 80 trials: on 40 trials the robot moved to the correct target lever and the rat received the reward; on 40 trials the robot moved to the wrong lever and the rat received the negative tone.

The raw LFP was low-pass filtered with a 2nd-order Butterworth filter with a 190 Hz cutoff and recorded at 381.5 Hz; the recording has N = 228,881 samples. A digital notch filter was used to remove 60 Hz noise, and, because of the finite window length used (T = 128), the signal was also filtered with a digital 1st-order Butterworth high-pass filter with cutoff (381.5/128) Hz = 2.98 Hz. For each channel, 10,000 random windows within the trials were supplied to estimate the independent components. Fig. 3 shows the filters and frequency profiles of a single channel. To see whether the sources carried experimentally useful information, we analyzed the difference in power for each source between the rewarding and non-rewarding trials approaching the end of the trial. The power was computed in a 1 s window moved in 100 ms increments. Fig. 4 shows the time series of the normalized power for all components and for the LFP. Two components that showed opposite trends are highlighted in Fig. 5.
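A sketch of the preprocessing chain described above using scipy.signal; the notch quality factor and the use of zero-phase filtering (filtfilt) are assumptions not stated in the paper.

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 381.5                      # sampling rate (Hz)
T = 128                         # window length used for the decomposition

def preprocess(lfp):
    """60 Hz notch plus 1st-order Butterworth high-pass at FS/T = 2.98 Hz, as described in Sec. 3.2."""
    b_notch, a_notch = iirnotch(w0=60.0, Q=30.0, fs=FS)       # Q = 30 is an assumption
    x = filtfilt(b_notch, a_notch, lfp)
    b_hp, a_hp = butter(1, FS / T, btype="highpass", fs=FS)
    return filtfilt(b_hp, a_hp, x)
```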

[Figure 3: impulse responses ("Filters", time in seconds) and frequency-domain amplitudes (frequency in Hz) for the 65 estimated mixing filters, components 1-33 and 34-65.]

Fig. 3. Rat dataset: Impulse and frequency response of mixing filters with length T = 128 (the sampling rate is 381.5Hz). Each curve is normalized by its maximum magnitude.

[Figure 4: difference in source power (rewarding minus non-rewarding), computed in 1 s windows, plotted against time from trial end (s) for the LFP and all independent components, with IC15 and IC52 highlighted.]

Fig. 4. Rat dataset: Windowed power for the independent components and the LFP. Power is computed in a 1 s window incremented by 100 ms. The impulse and frequency responses of the labeled independent components are shown in Fig. 5. Non-rewarding trials, which received a tone at time 0 s, were 6 s long; rewarding trials were 3.9-5.5 s long with a mean duration of 4.47 s.
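The windowed power traces in Fig. 4 could be computed along these lines; this is a sketch in which trial alignment, averaging over trials, and normalization are assumptions.

```python
import numpy as np

FS = 381.5  # sampling rate (Hz)

def windowed_power(sig, fs=FS, win_s=1.0, step_s=0.1):
    """Mean squared amplitude in 1 s windows advanced in 100 ms steps."""
    win, step = int(win_s * fs), int(step_s * fs)
    starts = np.arange(0, len(sig) - win + 1, step)
    return np.array([np.mean(sig[t:t + win] ** 2) for t in starts])

def reward_power_difference(rewarding_trials, nonrewarding_trials):
    """Trial-averaged windowed power of one source signal, rewarding minus non-rewarding.

    Assumes the trial segments have been aligned and cropped to a common length.
    """
    p_r = np.mean([windowed_power(tr) for tr in rewarding_trials], axis=0)
    p_n = np.mean([windowed_power(tr) for tr in nonrewarding_trials], axis=0)
    return p_r - p_n
```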

[Figure 5: impulse responses and frequency-domain amplitudes of the mixing filters for components 15 and 52.]



Fig. 5. Rat dataset: The impulse and frequency response of mixing filters for the independent components that showed large and opposite deviations approaching the reward/tone.

4. DISCUSSION

The method presented in this work is a computational model for decomposing LFPs into sparse cell-population sources. By assuming a sparse input to the filters arising from the coordinated activity of latent cell populations, an efficient time-frequency representation of the original signal can be estimated solely from the signal statistics. The current formulation is such that complex ICA algorithms can be used to estimate the filter coefficients, but the user is left to select the window length T and the nonlinearity. A larger T allows more resolution in longer FIR filters and increases the possible number of independent components. The second parameter is the choice of nonlinearity, which relates to the source distributions; here we used the G1 nonlinearity from [17], which is robust to outliers and was shown to perform well for non-circular super-Gaussian (sparse) sources [18].

The method's success may also rely on certain bounds on the distribution of the sources. The sources have to be sufficiently sparse: the method starts to break down for less sparse sources (see the results for p = 0.15 and 0.5 in Table 1). This assumption implies that the original source signals are formed from isolated events in time, idealized as sparse delta functions. On the other hand, if a single source is very sparse, its contributions to the overall signal may be missed in the subset of samples used for estimation. In addition, the deterministic ICA approach has no model for noise in the system.

The decomposition in this paper may prove a useful tool for exploring network oscillations in the brain. The method reveals sources with rich structure in the time-frequency plane, as seen in Fig. 3, that change with the task (Fig. 4). The exact role that network oscillations such as LFPs play in the brain is still being discussed [5, 21]. With this decomposition, any dependence between the constituent source signals and known single-unit activity (neuron action potentials) should be more apparent [3]. Standard data mining techniques can be applied to find these correlations, but this is left for future work.

5. CONCLUSION

We have presented a formulation to estimate the temporal filters that form the basis of local field potentials. This decomposition has the potential to isolate components that are more clearly dependent on stimuli or single-cell activity. The model assumes biologically plausible cell populations whose collective coordination is both sparse and independent in time. With idealized assumptions on the filters (non-causal finite impulse response), complex ICA algorithms are used to estimate the filter coefficients. Results on a synthetic dataset and a real neural dataset corroborate that the estimated filters are characteristic of the underlying generation process.

6. REFERENCES

[1] C. Bédard, H. Kröger, and A. Destexhe, "Model of low-pass filtering of local field potentials in brain tissue," Phys. Rev. E, vol. 73, no. 5, pp. 051911, May 2006.

[2] X. Wang, "Neurophysiological and computational principles of cortical rhythms in cognition," Physiological Reviews, vol. 90, no. 3, pp. 1195-1268, 2010.

[3] V.A. Makarov, J. Makarova, and O. Herreras, "Disentanglement of local field potential sources by independent component analysis," Journal of Computational Neuroscience, vol. 29, no. 3, pp. 445-457, 2010.

[4] A. Arieli, A. Sterkin, A. Grinvald, and A. Aertsen, "Dynamics of ongoing activity: explanation of the large variability in evoked cortical responses," Science, vol. 273, no. 5283, pp. 1868-1871, 1996.

[5] G. Buzsáki and A. Draguhn, "Neuronal oscillations in cortical networks," Science, vol. 304, no. 5679, pp. 1926, 2004.

[6] A.J. Bell and T.J. Sejnowski, "An information-maximization approach to blind separation and blind deconvolution," Neural Computation, vol. 7, no. 6, pp. 1129-1159, 1995.

[7] S. Makeig, M. Westerfield, T.P. Jung, S. Enghoff, J. Townsend, E. Courchesne, and T.J. Sejnowski, "Dynamic brain sources of visual evoked responses," Science, vol. 295, no. 5555, pp. 690, 2002.

[8] J. Anemüller, T.J. Sejnowski, and S. Makeig, "Complex independent component analysis of frequency-domain electroencephalographic data," Neural Networks, vol. 16, no. 9, pp. 1311-1323, 2003.

[9] C.J. James and C.W. Hesse, "Independent component analysis for biomedical signals," Physiological Measurement, vol. 26, pp. R15, 2005.

[10] P. Comon, "Independent component analysis, a new concept?," Signal Processing, vol. 36, no. 3, pp. 287-314, 1994.

[11] C.J. James and D. Lowe, "Extracting multisource brain activity from a single electromagnetic channel," Artificial Intelligence in Medicine, vol. 28, no. 1, pp. 89-104, 2003.

[12] M.E. Davies and C.J. James, "Source separation using single channel ICA," Signal Processing, vol. 87, no. 8, pp. 1819-1832, 2007.


[13] C.J. James and M.E. Davies, "Contrasting spatial, temporal and spatio-temporal ICA applied to ictal EEG recordings," in Engineering in Medicine and Biology Society, 2008. EMBS 2008. 30th Annual International Conference of the IEEE, Aug. 2008, pp. 3336-3339.

[14] F. Lucena, A.K. Barros, J.C. Príncipe, and N. Ohnishi, "Statistical coding and decoding of heartbeat intervals," PLoS ONE, vol. 6, no. 6, pp. e20227, 2011.

[15] M.S. Lewicki, "Efficient coding of natural sounds," Nature Neuroscience, vol. 5, no. 4, pp. 356-363, 2002.

[16] E.C. Smith and M.S. Lewicki, "Efficient auditory coding," Nature, vol. 439, no. 7079, pp. 978-982, 2006.

[17] E. Bingham and A. Hyvärinen, "A fast fixed-point algorithm for independent component analysis of complex valued signals," International Journal of Neural Systems, vol. 10, no. 1, pp. 1-8, 2000.

[18] M. Novey and T. Adali, "On extending the complex FastICA algorithm to noncircular sources," IEEE Transactions on Signal Processing, vol. 56, no. 5, pp. 2148-2154, 2008.

[19] A. Hyvärinen, "Fast and robust fixed-point algorithms for independent component analysis," IEEE Transactions on Neural Networks, vol. 10, no. 3, pp. 626-634, 1999.

[20] F.J. Harris, "On the use of windows for harmonic analysis with the discrete Fourier transform," Proceedings of the IEEE, vol. 66, no. 1, pp. 51-83, Jan. 1978.

[21] T.J. Sejnowski and O. Paulsen, "Network oscillations: Emerging computational principles," The Journal of Neuroscience, vol. 26, no. 6, pp. 1673-1676, 2006.
