Three-dimensional finite-difference bidomain modeling of homogeneous cardiac tissue on a data-parallel computer


IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 44, NO. 2, FEBRUARY 1997

Communications

Hasan I. Saleheen, Paul D. Claessen, and Kwong T. Ng*

Abstract—In this paper, a data-parallel computer is used to provide the memory and the reduction in computer time needed for solving large finite-difference bidomain problems. The finite-difference grid is mapped effectively to the processors of the parallel computer, simply by mapping one node to one (virtual) processor. Implemented on the Connection Machines (CM's) CM-200 and CM-5, the data-parallel finite-difference algorithm has allowed the solution of finite-difference bidomain problems with over 2 million nodes. Details on the algorithm are presented together with computational performance results.

Index Terms—Bidomain, biological tissues, biomedical computing, connection machine, data-parallel algorithm, finite difference methods.

Manuscript received January 11, 1995; revised August 2, 1996. This work was supported in part by the National Institutes of Health (NIH) under Grant R01-HL-44747. Asterisk indicates corresponding author. H. I. Saleheen is with Concurrent Technologies Corporation, Johnstown, PA 15904 USA. P. D. Claessen is with the Department of Chemical, Bio and Materials Engineering, Arizona State University, Tempe, AZ 85287 USA. *K. T. Ng is with the Klipsch School of Electrical and Computer Engineering, New Mexico State University, Las Cruces, NM 88003 USA (e-mail: [email protected]). Publisher Item Identifier S 0018-9294(97)00295-4.

I. INTRODUCTION

Various authors have used the bidomain theory to study cardiac tissue behavior [1]–[8]. As coupled nonlinear partial differential equations are involved in bidomain models, analytical solutions are only available for a limited number of simple geometries [1]. In general, the equations need to be solved using numerical techniques instead. One major technique that has been used to solve the bidomain equations is the finite-difference method, where the spatial derivatives are approximated with finite-difference expressions [4], [8], [9]. The finite-difference approach has been chosen mainly for its simplicity and low memory requirements. However, an adequate three-dimensional (3-D) discretization of a cardiac tissue with realistic dimensions often requires a large number of finite-difference nodes. This can lead to a large number of unknowns that require a large amount of memory and processor power in the solution process. Thus, even with the lower computational requirements of the finite-difference technique, finite-difference bidomain studies have been limited by conventional computer resources.

In this paper, a new approach is described in which a data-parallel computer is used to provide the high computational speed and large memory required for finite-difference bidomain modeling of large 3-D systems. The Connection Machines (CM's) CM-200 and CM-5 from Thinking Machines Corporation, Cambridge, MA, serve as the platform for parallel implementation. The modeled system can have a conducting medium surrounding the cardiac tissue, e.g., a tissue-and-bath configuration. In addition, the bidomain model of the cardiac tissue is not required to have an equal anisotropy ratio for the intracellular and interstitial conductivities.

However, both the cardiac tissue and the volume conductor are assumed to be homogeneous. An explicit finite-difference technique [2], [4] is used to solve the bidomain equations for the cardiac tissue and Laplace's equation for the conducting medium. Spatial discretization is achieved with the finite-difference method, and the transmembrane potential is updated with the forward Euler method. At each time step, the appropriate finite-difference equations are solved with the Jacobi iterative technique [9]. While developed on the CM's, the algorithm described in this paper can be implemented efficiently on any data-parallel computer operating in the single-instruction-multiple-data (SIMD) mode. The finite-difference technique is a general numerical method that may be implemented effectively on different kinds of parallel computers [10]. In particular, as described in Section III-B, the uniformity of the finite-difference grid allows it to be mapped efficiently to the processors on a data-parallel computer by simply mapping one node to one (virtual) processor. The explicit finite-difference approach does limit the time step size that can be used due to stability requirements. However, it requires less memory than implicit schemes. The Jacobi technique is chosen because it allows efficient parallelization and has a low memory requirement. In general, the parallelized Jacobi technique requires less memory than the parallelized versions of some other more robust iterative techniques such as the conjugate gradient method [10]. Finally, it should be noted that the bidomain model of interest here is different from the monodomain [3] and cellular network [11] models implemented previously on the CM's.

II. THE BIDOMAIN EQUATIONS

The general bidomain-conductor configuration consists of a cardiac tissue and a volume conductor, e.g., bath, as shown in Fig. 1. The space outside the tissue and bath is assumed to be occupied by an insulating material. The bidomain representing the cardiac tissue consists of two overlapping regions: the intracellular (i) and the interstitial (o) regions. In the tissue region, the intracellular ($\vec{J}_i$) and the interstitial ($\vec{J}_o$) current densities are given by

$$\vec{J}_i = -\bar{\bar{\sigma}}_i \nabla \phi_i = -\left(\sigma_{ix}\frac{\partial \phi_i}{\partial x}\hat{x} + \sigma_{iy}\frac{\partial \phi_i}{\partial y}\hat{y} + \sigma_{iz}\frac{\partial \phi_i}{\partial z}\hat{z}\right) \qquad (1)$$

$$\vec{J}_o = -\bar{\bar{\sigma}}_o \nabla \phi_o = -\left(\sigma_{ox}\frac{\partial \phi_o}{\partial x}\hat{x} + \sigma_{oy}\frac{\partial \phi_o}{\partial y}\hat{y} + \sigma_{oz}\frac{\partial \phi_o}{\partial z}\hat{z}\right) \qquad (2)$$

where φ_i is the intracellular potential, φ_o is the interstitial potential, $\bar{\bar{\sigma}}_i$ is the intracellular conductivity tensor, $\bar{\bar{\sigma}}_o$ is the interstitial conductivity tensor, and σ_ix,y,z and σ_ox,y,z are the x, y, and z components of the intracellular and interstitial conductivities, respectively. It is assumed that the tissue fibers are all oriented in the same direction and one of the principal axes in the global Cartesian coordinate system is parallel to the fibers, so the conductivity tensors are uniform and diagonal. The tissue is stimulated by intracellular (I_si) and interstitial (I_so) current sources.

Fig. 1. Three-dimensional tissue-bath (bidomain-conductor) configuration used in the numerical studies.

Continuity of the transmembrane, intracellular, and interstitial currents then gives the following equations:

$$\frac{\partial V_m}{\partial t} = -\frac{1}{C_m}\left[\frac{1}{\beta}\left(I_{so} + \sigma_{ox}\frac{\partial^2 \phi_o}{\partial x^2} + \sigma_{oy}\frac{\partial^2 \phi_o}{\partial y^2} + \sigma_{oz}\frac{\partial^2 \phi_o}{\partial z^2}\right) + I_{ion}\right] \qquad (3)$$

$$\sigma_{ix}\frac{\partial^2 V_m}{\partial x^2} + \sigma_{iy}\frac{\partial^2 V_m}{\partial y^2} + \sigma_{iz}\frac{\partial^2 V_m}{\partial z^2} + (\sigma_{ix}+\sigma_{ox})\frac{\partial^2 \phi_o}{\partial x^2} + (\sigma_{iy}+\sigma_{oy})\frac{\partial^2 \phi_o}{\partial y^2} + (\sigma_{iz}+\sigma_{oz})\frac{\partial^2 \phi_o}{\partial z^2} = -(I_{si}+I_{so}) \qquad (4)$$

where C_m is the transmembrane capacitance per unit area, β is the ratio of the total membrane surface area to the total tissue volume, V_m is the transmembrane potential (V_m = φ_i − φ_o), and I_ion is the total ionic current (per unit area) that can be determined from different models [12]–[15].
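The current-continuity statement behind (3) and (4) is not written out above; as a brief sketch (using the sign conventions implied by (3) and (4), not equations reproduced from the paper), the intracellular and interstitial current balances can be written as

$$\nabla \cdot \left(\bar{\bar{\sigma}}_i \nabla \phi_i\right) = \beta\left(C_m \frac{\partial V_m}{\partial t} + I_{ion}\right) - I_{si}$$

$$\nabla \cdot \left(\bar{\bar{\sigma}}_o \nabla \phi_o\right) = -\beta\left(C_m \frac{\partial V_m}{\partial t} + I_{ion}\right) - I_{so}.$$

Solving the second relation for ∂V_m/∂t reproduces (3), while adding the two relations and substituting φ_i = V_m + φ_o (with the diagonal conductivity tensors) reproduces (4).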

Finally, assuming a uniform conductivity, the potential in the volume conductor or extracellular region is governed by Laplace's equation

$$\sigma_{ex}\frac{\partial^2 \phi_e}{\partial x^2} + \sigma_{ey}\frac{\partial^2 \phi_e}{\partial y^2} + \sigma_{ez}\frac{\partial^2 \phi_e}{\partial z^2} = 0 \qquad (5)$$

where φ_e is the extracellular potential and σ_ex,y,z designates the x, y, and z components of the extracellular conductivity. Equations (3)–(5) are solved with the numerical procedure described in the next section, subjected to appropriate boundary conditions. For a boundary next to an insulating material, the normal component of any current is zero. This condition can be represented by the Neumann boundary condition. At an interface between a tissue and a conductor, the potential and current boundary conditions suggested by Tung [1] are used.

III. NUMERICAL IMPLEMENTATION

Equations (3)–(5) are solved together with the boundary conditions on a rectangular grid using an explicit finite-difference method. The finite-difference equations are presented below together with a brief description of the CM and the data-parallel implementation of the numerical algorithm.

A. Explicit Finite-Difference Solutions

Equation (3) can be expressed in the form ∂V_m/∂t = F_1(V_m, φ_o), where F_1 only consists of spatial derivatives. Following the forward Euler technique [2], the time derivative of V_m is approximated by a first-order forward finite-difference expression at each node (i, j, k), which leads to

$$(V_m)^{n+1}_{i,j,k} = (V_m)^n_{i,j,k} + (F_1)^n_{i,j,k}\,\Delta t \qquad (6)$$

where i, j, and k are node indices in the x, y, and z directions, respectively, n is the time step index, and Δt is the time step size. The spatial derivatives of φ_o in the function F_1 can now be approximated by second-order center finite-difference expressions to give the following finite-difference equation at each node (i, j, k):

$$(V_m)^{n+1}_{i,j,k} = (V_m)^n_{i,j,k} - \frac{\Delta t}{C_m}\left\{\frac{1}{\beta}\left[(I_{so})^n_{i,j,k} + FD1\right] + (I_{ion})^n_{i,j,k}\right\} \qquad (7)$$

where FD1 represents a summation of second-order center finite-difference approximations to the second-order partial derivatives of φ_o in the x, y, and z directions, respectively, and it is given by

$$FD1 = \sigma_{ox}\frac{(\phi_o)^n_{i+1,j,k} + (\phi_o)^n_{i-1,j,k} - 2(\phi_o)^n_{i,j,k}}{\Delta x^2} + \sigma_{oy}\frac{(\phi_o)^n_{i,j+1,k} + (\phi_o)^n_{i,j-1,k} - 2(\phi_o)^n_{i,j,k}}{\Delta y^2} + \sigma_{oz}\frac{(\phi_o)^n_{i,j,k+1} + (\phi_o)^n_{i,j,k-1} - 2(\phi_o)^n_{i,j,k}}{\Delta z^2} \qquad (8)$$

where Δx, Δy, and Δz are the node spacings in the x, y, and z directions, respectively. Equation (7) can be used to update the nodal values of V_m at each time step, using the values at the previous time step. The ionic current term I_ion in (7) contains gating parameters that are governed by a set of first-order ordinary differential equations. These parameters are also updated via the forward Euler method.
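To make the explicit update concrete, the following NumPy sketch advances V_m by one forward Euler step on the interior nodes of a regular grid, per (7) and (8). It is an illustrative sketch rather than the paper's CM-Fortran code: the function and array names are assumptions, the ionic current is passed in as a precomputed array, and boundary nodes are assumed to be handled by the separate boundary-condition update.

```python
import numpy as np

def fd1(phi_o, sig_ox, sig_oy, sig_oz, dx, dy, dz):
    """FD1 of (8): second-order centered differences of phi_o, weighted by
    the interstitial conductivities, evaluated at the interior nodes."""
    return (sig_ox * (phi_o[2:, 1:-1, 1:-1] + phi_o[:-2, 1:-1, 1:-1]
                      - 2.0 * phi_o[1:-1, 1:-1, 1:-1]) / dx**2
          + sig_oy * (phi_o[1:-1, 2:, 1:-1] + phi_o[1:-1, :-2, 1:-1]
                      - 2.0 * phi_o[1:-1, 1:-1, 1:-1]) / dy**2
          + sig_oz * (phi_o[1:-1, 1:-1, 2:] + phi_o[1:-1, 1:-1, :-2]
                      - 2.0 * phi_o[1:-1, 1:-1, 1:-1]) / dz**2)

def euler_vm_step(vm, phi_o, i_so, i_ion, cm, beta, dt,
                  sig_ox, sig_oy, sig_oz, dx, dy, dz):
    """One forward Euler update of the transmembrane potential, per (7).
    Boundary nodes are left for the boundary-condition update."""
    vm_new = vm.copy()
    vm_new[1:-1, 1:-1, 1:-1] -= (dt / cm) * (
        (i_so[1:-1, 1:-1, 1:-1]
         + fd1(phi_o, sig_ox, sig_oy, sig_oz, dx, dy, dz)) / beta
        + i_ion[1:-1, 1:-1, 1:-1])
    return vm_new
```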

Similarly, for (4) and (5), the spatial derivatives can be replaced by second-order center finite-difference approximations, leading to the following finite-difference equations for φ_o and φ_e:

$$(\phi_o)^n_{i,j,k} = \frac{FD2 + FD3 + (I_{si} + I_{so})^n_{i,j,k}}{2\left(\dfrac{\sigma_{ix}+\sigma_{ox}}{\Delta x^2} + \dfrac{\sigma_{iy}+\sigma_{oy}}{\Delta y^2} + \dfrac{\sigma_{iz}+\sigma_{oz}}{\Delta z^2}\right)} \qquad (9)$$

$$(\phi_e)^n_{i,j,k} = \frac{FD4}{2\left(\dfrac{\sigma_{ex}}{\Delta x^2} + \dfrac{\sigma_{ey}}{\Delta y^2} + \dfrac{\sigma_{ez}}{\Delta z^2}\right)} \qquad (10)$$

where

$$FD2 = (\sigma_{ix}+\sigma_{ox})\frac{(\phi_o)^n_{i+1,j,k} + (\phi_o)^n_{i-1,j,k}}{\Delta x^2} + (\sigma_{iy}+\sigma_{oy})\frac{(\phi_o)^n_{i,j+1,k} + (\phi_o)^n_{i,j-1,k}}{\Delta y^2} + (\sigma_{iz}+\sigma_{oz})\frac{(\phi_o)^n_{i,j,k+1} + (\phi_o)^n_{i,j,k-1}}{\Delta z^2} \qquad (11)$$

$$FD3 = \sigma_{ix}\frac{(V_m)^n_{i+1,j,k} + (V_m)^n_{i-1,j,k} - 2(V_m)^n_{i,j,k}}{\Delta x^2} + \sigma_{iy}\frac{(V_m)^n_{i,j+1,k} + (V_m)^n_{i,j-1,k} - 2(V_m)^n_{i,j,k}}{\Delta y^2} + \sigma_{iz}\frac{(V_m)^n_{i,j,k+1} + (V_m)^n_{i,j,k-1} - 2(V_m)^n_{i,j,k}}{\Delta z^2} \qquad (12)$$


$$FD4 = \sigma_{ex}\frac{(\phi_e)^n_{i+1,j,k} + (\phi_e)^n_{i-1,j,k}}{\Delta x^2} + \sigma_{ey}\frac{(\phi_e)^n_{i,j+1,k} + (\phi_e)^n_{i,j-1,k}}{\Delta y^2} + \sigma_{ez}\frac{(\phi_e)^n_{i,j,k+1} + (\phi_e)^n_{i,j,k-1}}{\Delta z^2}. \qquad (13)$$

The finite-difference equations are solved as follows. All unknown variables are assigned appropriate values at the beginning of a simulation (t = 0). Then at each subsequent time step, the ionic current is first calculated at each node in the bidomain using the previous value of the transmembrane potential V_m, which is updated next with (7). Using the updated values of V_m, the systems of finite-difference equations represented by (9) and (10) are solved for φ_o and φ_e with the Jacobi iterative technique. The final value of φ_o or φ_e from the previous time step is used as the initial guess value for the current step at each node. Each nodal value is then updated at each iteration with the corresponding finite-difference equation, using the present values for the right-hand side of the equation. For an explicit finite-difference method, the time step size used is typically small and the unknown variables' values do not change substantially from one time step to the next. Thus, values from the previous time step provide a good initial guess and reduce the number of iterations required. The Jacobi procedure is terminated when the residual is smaller than a certain value. In this study, a static, normalized, global sum of the squared residual (SSR) is used and is of the form

$$\mathrm{SSR} = \frac{\sum_i \left(u_i^{pre} - u_i^{cur}\right)^2}{\sum_i \left(u_i^{pre}\right)^2} \qquad (14)$$

where u is either φ_o or φ_e, and u_i^{pre} and u_i^{cur} represent the unknown values of node i at the previous and current iterations, respectively. Finally, the unknown variables' values at boundary nodes are updated using second-order forward or backward finite-difference approximations of the boundary conditions. Second-order rather than first-order finite-difference approximations are used to maintain the second-order accuracy of the algorithm.
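As an illustration of the Jacobi sweep and the SSR test in (14), the following NumPy sketch solves the simpler of the two systems, (10) with FD4 from (13), on the interior nodes; the φ_o system (9) follows the same pattern with FD2, FD3, and the source term added to the numerator. The names, the interior-only slicing, and the fixed iteration cap are illustrative assumptions, not the paper's CM-Fortran code.

```python
import numpy as np

def jacobi_phi_e(phi_e, sig_ex, sig_ey, sig_ez, dx, dy, dz,
                 max_iter=20, ssr_tol=1e-6):
    """Jacobi solution of (10) on interior nodes with the SSR test of (14).
    Boundary values of phi_e are assumed to be set by the separate
    boundary-condition update and are left untouched here."""
    denom = 2.0 * (sig_ex / dx**2 + sig_ey / dy**2 + sig_ez / dz**2)
    for _ in range(max_iter):
        prev = phi_e.copy()
        # FD4 of (13), built entirely from the previous iterate (Jacobi).
        fd4 = (sig_ex * (prev[2:, 1:-1, 1:-1] + prev[:-2, 1:-1, 1:-1]) / dx**2
             + sig_ey * (prev[1:-1, 2:, 1:-1] + prev[1:-1, :-2, 1:-1]) / dy**2
             + sig_ez * (prev[1:-1, 1:-1, 2:] + prev[1:-1, 1:-1, :-2]) / dz**2)
        phi_e[1:-1, 1:-1, 1:-1] = fd4 / denom
        # Normalized global sum of squared residual, (14); small guard in
        # case the field is identically zero at start-up.
        ssr = np.sum((prev - phi_e) ** 2) / max(np.sum(prev ** 2), 1e-30)
        if ssr < ssr_tol:
            break
    return phi_e
```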

B. Data-Parallel Implementation

The CM is a massively parallel computer that achieves its supercomputing performance from the simultaneous operation of many processors. A detailed description of the machine can be found in [16]–[18]. The first production model in the CM series is the CM-2, followed by the CM-200 that differs from the CM-2 mainly in processor speed. The most recent version of the CM is the CM-5. Both the CM-2 and CM-200 can have up to 64 k (1 k = 1024) bit-serial processors. Each processor can have as much as 1 Mb of memory, and every 32 processors can share a floating point unit. When the number of data elements or array size exceeds the total number of processors, each physical processor can emulate several virtual processors, sharing its computational resources among them. Each array element can then be assigned to a virtual processor. The CM-5 uses the more sophisticated SPARC Cypress (RISC) processors rather than the simple bit-serial processors used in the CM-2/CM-200. Each processor (commonly referred to as a processor node) can have up to 32 megabytes of memory with four vector units, and a fully configured CM-5 can have up to 1024 processors. All the CM's can operate in a SIMD mode where all processors are synchronized to perform the same instruction at any particular time. In addition, they all offer faster communications between a processor and its nearest neighbors.

In the data-parallel approach, each member of a data set is assigned to a (virtual) processor. During the solution process, different processors perform the same operation or instruction on their corresponding data elements simultaneously, leading to an increase in the number of instructions performed per second. The SIMD architecture is naturally suited for such an approach. For high parallel efficiency, the data item should be chosen such that the mapping of the problem domain to the processor array allows as many processors to work on the problem concurrently as possible. In addition, data-parallel computation usually requires interprocessor communication that is absent in the serial approach. Since communication is generally time consuming, it should be kept to a minimum, and where needed, performed as efficiently as possible. To minimize communication, the data item should be chosen such that different processors are as independent as possible during the computation.

As shown previously, for each of the unknown variables V_m, φ_o, or φ_e, the finite-difference discretization leads to basically the same equation for every node, which expresses the variable at a particular node in terms of variables at the same node or neighboring nodes, e.g., (9). Thus, it is natural to use the node as the data item and map one node to one (virtual) processor. Each instruction can then be performed on all the processors simultaneously to update the unknown quantities at every node. Each processor is responsible for storing the variables and performing all arithmetic operations associated with its node. It acquires the values of variables at other nodes through interprocessor communication. Since each nodal equation only involves variables at a particular node or its neighboring nodes, only the faster nearest-neighbor communication needs to be used in the solution process, leading to higher parallel efficiency.
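To make the one-node-per-(virtual)-processor picture concrete, the sketch below writes the FD1 stencil of (8) as whole-array operations, which is the style the data-parallel implementation relies on; here np.roll stands in for a CSHIFT-style nearest-neighbor shift on the CM. The periodic wrap-around at the edges and the names are assumptions for illustration; in the actual model, edge nodes are overwritten by the boundary-condition update.

```python
import numpy as np

def shift(a, offset, axis):
    # Stand-in for a nearest-neighbor (CSHIFT-style) communication: every
    # array element ("virtual processor") reads the value held by its
    # neighbor along one grid axis.
    return np.roll(a, offset, axis=axis)

def fd1_whole_array(phi_o, sig_ox, sig_oy, sig_oz, dx, dy, dz):
    # FD1 of (8) evaluated at every node with the same instructions,
    # using only nearest-neighbor shifts (periodic at the edges here;
    # the model itself replaces edge values via the boundary conditions).
    c = phi_o
    return (sig_ox * (shift(c, +1, 0) + shift(c, -1, 0) - 2.0 * c) / dx**2
          + sig_oy * (shift(c, +1, 1) + shift(c, -1, 1) - 2.0 * c) / dy**2
          + sig_oz * (shift(c, +1, 2) + shift(c, -1, 2) - 2.0 * c) / dz**2)
```

Every node executes the same arithmetic and needs only its six face neighbors, which is why the one-node-one-processor mapping keeps the communication on the fast nearest-neighbor network.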

IV. RESULTS AND DISCUSSION

The explicit finite-difference method described above is implemented in CM-Fortran [18] and simulations are performed on the CM-200 and CM-5. All of the computations are performed using double precision (64-b) arithmetic. Several ionic current models are implemented, including: 1) the original Beeler–Reuter (BR) model [12]; 2) the Beeler–Reuter/Drouhard–Roberge (BRDR) model that includes the modifications suggested by Drouhard and Roberge in the BR model [13]; 3) the Ebihara–Johnson (EJ) model [14]; 4) the Beeler–Reuter/Ebihara–Johnson (BREJ) model, which replaces the fast inward sodium current in the BR model with that given in the EJ model [14]; 5) the Ebihara–Johnson/Spach–Kootsey (EJSK) model that incorporates the suggestions of Spach and Kootsey [15] in the EJ model.

The basic 3-D configuration for numerical study consists of a bath on top of a layer of cardiac tissue, as shown in Fig. 1. The extracellular potential on the top face in the bath is set to be zero. Conductivities for the tissue are assigned according to experimental values obtained for canine myocardial tissue [19]: σ_ix = 3.44 mS/cm, σ_iy = σ_iz = 0.596 mS/cm, σ_ox = 1.17 mS/cm, and σ_oy = σ_oz = 0.802 mS/cm. The bath is assumed to be isotropic with a conductivity of 20 mS/cm while the surface-to-volume ratio β and transmembrane capacitance C_m are 2000/cm and 1 μF/cm², respectively. The cardiac tissue is excited by a rectangular current pulse that is above the threshold value and 1 ms in duration. This stimulus is applied across either one full face of the tissue or a 2 × 2 × 2-node region located at a corner just underneath the tissue-bath interface. The node spacings Δx, Δy, and Δz are 32.4 μm, 30.0 μm, and 30.0 μm, respectively, and the time step size Δt is 2 μs. Twenty Jacobi iterations are used in each of the iterative solutions at every time step and this is found to be sufficient to give a SSR that is smaller than 10⁻⁶ in the numerical simulations for both the two-dimensional (2-D) planar and 3-D stimulations.

A planar stimulus is used so that the 3-D configuration can be reduced to a 2-D model for which previous simulation data are available for the verification of our results [4] and [8]. The simulation results do demonstrate a bending of the transmembrane potential wavefront near the tissue-bath interface and a variation in the action potential shape along the thickness of the tissue.

A. Computational Performance Results

A performance evaluation of the algorithm is presented in which the CPU time and million floating point operations (Mflop) ratings are calculated for different problem sizes. The Mflop rating is calculated by counting the total number of floating point operations, including addition, subtraction, multiplication, division, and exponential operations, within a time frame measured with the CM timing facility. The time frame consists of the time required to update all variables for one simulated time step, which consists of calculating an ionic current, updating the transmembrane potential, iteratively solving the interstitial potential, iteratively solving the extracellular potential, and enforcing the boundary conditions. The relative number of operations required for each of these steps (except update of boundary values) is shown in Fig. 2. Here, BRDR represents the relative number of operations needed to calculate the ionic current from the BRDR model (excluding all operations common to all membrane models), "common" represents the relative number of operations common to all ionic current models (including the update of transmembrane potential), and "phio iteration" and "phie iteration" represent the relative number of operations needed in the iterative solution of interstitial and extracellular potentials, respectively. An equal number of nodes in the bath and tissue is assumed. The number of operations required to calculate the ionic current (corresponding to BRDR in Fig. 2) is shown in Fig. 3 for different membrane models.

Fig. 2. Relative number of operations required in each portion of the program.

Fig. 3. Relative number of operations required for each ionic current model.

A comparison of the solution time for different bidomain sizes on the CM-200 with 32-k processors, the CM-5 with 64 processor nodes, and the CM-5 with 256 processor nodes is presented in Fig. 4. The BRDR model is used for the ionic current, 20 iterations are used in the Jacobi scheme, and each simulation is performed for 3000 time steps (6 ms). There are 128 nodes in the x or z direction in both the tissue and bath. The total number of nodes in the bath is held constant at 256 k, i.e., 16 nodes in the y direction, while the number of tissue nodes in the y direction is varied to give the total number of nodes in Fig. 4. This represents a worst-case scenario since the tissue (bidomain) requires more CPU time than the bath (volume conductor) to complete the calculations for a node. While the CM-200 requires roughly the same solution time as the CM-5 with 64 nodes for small problems, as the problem size increases, the CM-5 shows a better performance. As expected, the CM-5 with 256 processor nodes offers the best computational performance, requiring 32 min to obtain a solution for the largest model with 2.4 million nodes. (For a typical node spacing of 30 μm used in the numerical studies, the tissue size for this model is 3.8 × 3.8 × 3.8 mm.) In addition, it should be noted that the solution times shown in Fig. 4 will in general increase with the number of operations required for calculating the ionic current at each node, as given in Fig. 3 for different membrane models.

Fig. 4. CPU time for the CM's versus number of nodes for 3000 time steps (6 ms).

The Mflop rates for different problem sizes considered in Fig. 4 are shown in Fig. 5. As one can see, the Mflop performance also varies with the problem size. The CM-5 with 256 processor nodes is 74–264% faster than the CM-200 for the model sizes considered. In addition, the difference in performance between the CM-5 with 256 nodes and the other two less powerful platforms becomes more significant as the problem size increases, because the 256-node partition provides more computational resources and hence is more efficient in handling larger problems. The performance of the less powerful platforms also begins to level off for larger model sizes due to the saturation of their processing power. However, the 256-node partition is not four times faster than the one with 64 nodes because the speedup provided by a parallel computer generally does not increase linearly with the number of processors, which is basically a consequence of Amdahl's law [10].

Fig. 5. CPU Mflops for the CM's versus number of nodes.

Besides providing computational speedup, the data-parallel computer also offers a large memory for the solution of large problems. The memory used in a problem depends on the number of bath nodes versus the number of tissue nodes, as the latter ones consume more memory per node, due to the larger number of unknown variables involved in the tissue and the extra memory required for the ionic current model. The actual memory required for an ionic current model depends on the number of gating parameters used. Specifically, the BR model needs 144 bytes/node, the BRDR model requires slightly less at 120 bytes/node, and the EJ and EJSK models both use 48 bytes/node. Assuming the BRDR ionic current model, each tissue node consumes 8 times more memory than a bath node, and a tissue-bath system with one million nodes requires 240 megabytes of memory when half of the nodes are in the tissue and half of them are in the bath. Thus, a typical fully-configured CM-5 with 512 processor nodes and 16 Gbytes of memory can handle a half-tissue-half-bath type system with 67 million nodes or a tissue with 38 million nodes.
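The memory figures quoted above can be checked with a few lines of arithmetic. The sketch below reproduces them under stated assumptions: 240 Mbytes per million nodes for a half-tissue/half-bath split, a tissue node taking eight times the memory of a bath node (BRDR model), and 16 Gbytes read as 16 × 10⁹ bytes.

```python
# Rough check of the memory estimates quoted above (BRDR ionic model).
bytes_per_half_half_node = 240e6 / 1e6        # 240 Mbytes per 10^6 nodes -> 240 bytes/node
bath_node = 2 * bytes_per_half_half_node / 9  # solve (b + 8*b)/2 = 240 for the bath-node size b
tissue_node = 8 * bath_node                   # tissue node uses 8x the bath memory

total_memory = 16e9                           # 16 Gbytes on a fully configured CM-5
print(total_memory / bytes_per_half_half_node)  # ~6.7e7 nodes, half tissue / half bath
print(total_memory / tissue_node)               # ~3.8e7 nodes for an all-tissue model
```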

V. CONCLUSION

An explicit finite-difference solution of the bidomain-conductor equations for a homogeneous cardiac tissue and bath has been implemented on a data-parallel computer. The parallel nature of the finite-difference equations enables the fine-grain parallelism inherent in the CM-200 and CM-5 to be fully exploited by mapping each node to a (virtual) processor. The one-node-one-processor mapping allows the same operations to be performed concurrently on all processors and the use of fast nearest-neighbor communication in the solution process, leading to high parallel efficiency. A computational speed close to 2 Gflops can be achieved for a model with 2.4 million nodes (with 89% of the nodes in the tissue) on the CM-5 with 256 processors. It is expected that the computational speed will increase further as the model size or the number of processors is increased. For a fully-configured CM-5 with 16 Gbytes of memory, solution of problems with model sizes up to 67 million nodes (with 50% tissue nodes and the BRDR model) is feasible.

ACKNOWLEDGMENT

Computational resources for the study have been provided by the National Center for Supercomputing Applications (NCSA), Army High Performance Computing Research Center (AHPCRC), Thinking Machines Corporation, and Los Alamos National Laboratory. The authors would like to acknowledge R. A. Hart for his help in the initial compilation of different ionic current models and the consulting staff at NCSA for their help on computing issues. They would also like to thank Dr. C. S. Henriquez, Dr. L. J. Leon, and Dr. A. E. Pollard for their helpful discussions.

REFERENCES

[1] L. Tung, "A bidomain model for describing ischemic myocardial D.C. potentials," Ph.D. dissertation, Massachusetts Institute of Technology, Cambridge, MA, 1978.

[2] B. J. Roth, "Action potential propagation in a thick strand of cardiac muscle," Circ. Res., vol. 68, pp. 162–173, 1991.
[3] M. G. Fishler and N. V. Thakor, "Temporal variability of defibrillation threshold in a two-dimensional supercomputer model of 'ventricular fibrillation'," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 1992, vol. 14, pp. 626–627.
[4] A. E. Pollard, N. Hooke, and C. S. Henriquez, "Cardiac propagation simulation," in High-Performance Computing in Biomedical Research, T. C. Pilkington et al., Eds. Boca Raton, FL: CRC, 1993.
[5] C. S. Henriquez, "Simulating the electrical behavior of cardiac tissue using the bidomain model," Crit. Rev. Biomed. Eng., vol. 21, pp. 1–77, 1993.
[6] C. S. Henriquez and A. A. Papazoglou, "Conduction in a 3D bidomain representation of cardiac tissue with unequal anisotropy," in Proc. Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 1993, vol. 15, pp. 748–749.
[7] B. J. Roth and J. P. Wikswo, Jr., "Electrical stimulation of cardiac tissue: A bidomain model with active membrane properties," IEEE Trans. Biomed. Eng., vol. 41, pp. 232–240, 1994.
[8] P. D. Claessen, "Finite difference bidomain modeling of cardiac tissue on a data parallel computer," M.Sc. thesis, New Mexico State Univ., Las Cruces, NM, 1994.
[9] G. D. Smith, Numerical Solution of Partial Differential Equations: Finite Difference Methods, 3rd ed. London: Clarendon, 1985.
[10] V. Kumar, A. Grama, A. Gupta, and G. Karypis, Introduction to Parallel Computing. Redwood City, CA: Benjamin/Cummings, 1994.
[11] R. Winslow, A. Varghese, D. Noble, J. C. Denyer, and A. Kimball, "Modeling large SA node–atrial cell networks on a massively parallel computer," J. Physiol., vol. 446, p. 242P, 1992.
[12] G. W. Beeler and H. Reuter, "Reconstruction of the action potential of ventricular myocardial fibers," J. Physiol., vol. 268, pp. 177–210, 1977.
[13] J. Drouhard and F. A. Roberge, "Revised formulation of the Hodgkin–Huxley representation of the sodium current in cardiac cells," Comput. Biomed. Res., vol. 20, pp. 333–350, 1987.
[14] L. Ebihara and E. A. Johnson, "Fast sodium current in cardiac muscle: A quantitative description," Biophys. J., vol. 32, pp. 779–790, 1980.
[15] M. S. Spach and J. M. Kootsey, "Relating the sodium current and conductance to the shape of transmembrane and extracellular potentials by simulation: Effects of propagation boundaries," IEEE Trans. Biomed. Eng., vol. BME-32, pp. 743–755, 1985.
[16] Thinking Machines Corporation, Cambridge, MA, Connection Machine Model CM-2 Technical Summary, 1989.
[17] Thinking Machines Corporation, Cambridge, MA, Connection Machine CM-5 Technical Summary, 1992.
[18] Thinking Machines Corporation, Cambridge, MA, CM Fortran User's Guide, 1991.
[19] D. E. Roberts and A. M. Scher, "Effect of tissue anisotropy on extracellular potential fields in canine myocardium in situ," Circ. Res., vol. 50, pp. 342–351, 1982.
