Second Order Methods for Solving Non-Linear Equations (Métodos de Segundo Orden para Resolver Ecuaciones No Lineales)


INTEVEP, S.A. Centro de Investigación y Apoyo Tecnológico. Filial de PDVSA. Los Teques, Edo. Miranda. VENEZUELA.

METODOS DE SEGUNDO ORDEN PARA RESOLVER ECUACIONES NO LINEALES. ANDRES L. GRANADOS M. (EPPR/322).

Departamento de Producción (EPPR). Sección Manejo de Crudos (EPPR/3). Unidad de Transporte de Crudos (EPPR/32).

PROYECTO (STE) No. 6555. REPORTE TECNICO No. INT-EPPR/322-91-0002.

JUNIO, 1991.

CONFERENCE ON NUMERICAL METHODS IN ENGINEERING PROBLEMS. INTEVEP Headquarters. Los Teques, July 15-17. 1991. Edo. Miranda, Venezuela.

SECOND ORDER METHODS FOR SOLVING NON-LINEAR EQUATIONS ANDRES L. GRANADOS M. INTEVEP, S.A. Production Department (EPPR/32). Venezuela Government Research Facility for Petroleum Industry. Los Teques, Estado Miranda. Venezuela.

ABSTRACT.-

Several methods for solving non-linear equations are developed in this work. Both single non-linear equations (n = 1) and systems of non-linear equations (n > 1) are treated. For single non-linear equations, bracketing methods and open methods are studied. For systems of non-linear equations only open or iterative methods are studied. The second order methods for solving non-linear equations are, in all cases, based on the Taylor series expansion of the function or functions around a point x near a root r of the homogeneous equation f(x) = 0, up to the second order term, where x may be an n-dimensional variable or an array of n variables, and f(x) may be an n-dimensional function or an array of n functions depending on x. The second order methods for systems of non-linear equations are similar to the Newton-Raphson method and, in one case, are an extension of Richmond's method (which applies to single non-linear equations) to n-dimensional non-linear equations. For these last methods three types of relaxation factors are used. The first one is the same relaxation factor that is used in the Newton-Raphson method, and is named the "principal relaxation factor". The second one is the relaxation factor applied to the internal iterative process, which is carried out in every step of the principal (or external) iterative process, and is named the "internal relaxation factor". The third one produces the effect of a weight factor between the pure Newton-Raphson method and the pure second order method, and is named the "secondary relaxation factor". A well-appreciated characteristic of the open second order methods is that they may also be used to find local maxima or minima of the evaluated functions. Finally, the convergence and the stability are studied and evaluated only for the open methods, and a significant improvement is noted from the analysis and from the fractal representation of some examples.


INTRODUCTION

This work is composed of two main parts: Bracketing Methods and Open Methods. However, more emphasis is placed on the second part. The first part only treats algorithms for solving single non-linear equations. The second part treats algorithms for solving both single non-linear equations and systems of non-linear equations. In each part, the classical known methods are introduced first as a framework to understand more easily the second order methods, especially as applied to systems of equations. All the known methods found in the bibliography, few of which are of second order and only for single equations, are well referenced. The remaining methods are original to this work. The principal contributions can be extracted from the second order methods for systems of equations. At the end of the second part, as a novelty, fractal techniques are introduced for the study of the stability of the algorithms used in systems of two non-linear equations. Several fractal maps were generated, showing a constellation of curious figures, similar to octopuses and squids.

BRACKETING METHODS

BOLZANO THEOREM

All the bracketing methods are based on the Bolzano theorem, which is stated as:

Bolzano Theorem. Let y = f(x) be a continuous function of the type f : IR → IR, defined in the closed interval [a, b]. If it is satisfied that f(a)·f(b) ≤ 0, then there exists at least one value r ∈ [a, b] such that f(r) = 0.

The value r thus found is named a "root" of the function y = f(x), or, in a similar way, is also named the solution of the homogeneous equation f(x) = 0. The Bolzano theorem predicts at least one root of the function y = f(x) in the closed interval [a, b], if such function is continuous, but it does not predict the number of these roots.

ORDER ZERO AND ORDER ONE METHODS

The most well-known bracketing methods are the "Method of Halving the Interval" and the "Method of Linear Interpolation". They are named, respectively, the first of order zero and the second of order one, according to the auxiliary function that is used to find an estimation of the root in the closed interval. In both cases the method attempts to find an intermediate value c that is assumed to be an estimation of the root in that closed interval. For the method of halving the interval the intermediate value is calculated by

c = (a + b)/2    (1)

For the method of linear interpolation, or REGULA FALSI method [1], as it is also known, the intermediate value is calculated using an interpolation with a straight line between the points (a, f(a)) and (b, f(b)), finding that the intersection of such line with the straight line f(x) = 0 occurs for a value of x equal to

c = a − f(a) (b − a)/[f(b) − f(a)] = b − f(b) (b − a)/[f(b) − f(a)] = [a f(b) − b f(a)]/[f(b) − f(a)]    (2)

The intermediate value c thus found divides the closed interval [a, b] in two subintervals, one of them satisfying the Bolzano theorem, but not the other one. The subinterval that does not satisfy the conditions of the theorem is discarded and the other one is conserved. The subinterval that is conserved will be divided again, and so on successively. This is done until the interval containing the root of the function is so small that any intermediate value in such interval can be said to be the root within an approximation error, which is calculated as the size of the interval. This quantity should be less than a tolerance imposed at the beginning of the algorithm, which is named the tolerance for the local error, ε_max. It also should be verified that the absolute values of the function at the extreme points of the smallest interval, which are opposite to each other in sign, are less than another tolerance also imposed on the method at the beginning, which is named the tolerance for the global deviation, d_max. The local error for bracketing methods is defined as

ε_k = c_k − c_{k−1}    (3)

being c_k, c_{k−1} the intermediate values for the subintervals in two consecutive subdivisions. The global error for bracketing methods is defined as

e_k = c_k − r    (4)

being r the root to which the method converges when k → ∞. The global deviation for the method is defined as

d_k = f(c_k)    (5)

and it should tend to zero if c_k tends to r.

SECOND ORDER METHOD

For the bracketing second order methods, the intermediate value c, found either by the method of halving the interval or by the method of linear interpolation, is used as a third point. As a result there are three points (a, f(a)), (b, f(b)) and (c, f(c)) through which a parabola, or second order polynomial, of the form

P_2(x) = f[a] + (x − a) f[a,b] + (x − a)(x − b) f[a,b,c]    (6)

can be drawn,

where the symbol f[·] is called the divided difference and is defined in a recurrent manner employing the following expressions [2],

f[x_0] = f(x_0)    (7.a)

f[x_1, x_0] = (f[x_1] − f[x_0])/(x_1 − x_0)    (7.b)

f[x_2, x_1, x_0] = (f[x_2, x_1] − f[x_1, x_0])/(x_2 − x_0)    (7.c)

f[x_3, x_2, x_1, x_0] = (f[x_3, x_2, x_1] − f[x_2, x_1, x_0])/(x_3 − x_0)    (7.d)

f[x_n, x_{n−1}, ..., x_1, x_0] = (f[x_n, x_{n−1}, ..., x_2, x_1] − f[x_{n−1}, x_{n−2}, ..., x_1, x_0])/(x_n − x_0)    (7.e)

being f[x_n, x_{n−1}, ..., x_1, x_0] = f[x_0, x_1, ..., x_{n−1}, x_n] ∀n ∈ IN.
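As an illustration (not part of the original report, whose routines were distributed in FORTRAN), the recurrence (7.a)-(7.e) and the parabola (6) can be sketched in a few lines of Python; the function name divided_differences and the sample data are assumptions made here for the example.

    def divided_differences(xs, ys):
        """Leading divided differences f[x0], f[x0,x1], ..., f[x0,...,xn]
        built with the recurrence (7.a)-(7.e)."""
        n = len(xs)
        table = list(ys)                       # zeroth-order differences f[xi]
        coeffs = [table[0]]
        for order in range(1, n):
            # next-order differences, overwriting the working table
            table = [(table[i + 1] - table[i]) / (xs[i + order] - xs[i])
                     for i in range(n - order)]
            coeffs.append(table[0])
        return coeffs

    # Parabola P2(x) of equation (6) through (a, f(a)), (b, f(b)), (c, f(c))
    f = lambda x: x**3 - 2.0
    a, b, c = 1.0, 2.0, 1.5
    f_a, f_ab, f_abc = divided_differences([a, b, c], [f(a), f(b), f(c)])
    P2 = lambda x: f_a + (x - a) * f_ab + (x - a) * (x - b) * f_abc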

Consequently an additional intermediate value c can be calculated for the second order method by making an interpolation with the parabola (6) which passes through the three aforementioned points, finding that the intersection of such parabola with the straight line f(x) = 0 can be obtained by solving the following polynomial equation

α x² + β x + γ = 0    (8)

where

α = f[a,b,c]
β = f[a,b] − (a + b) f[a,b,c]
γ = f[a] − a f[a,b] + a b f[a,b,c]

The solutions of this polynomial equation can be calculated by

c = [−β ± √(β² − 4αγ)]/(2α)    (9)

and an example of this problem is presented in figure 1. The expression (9) contains two solutions for c in the problem stated; however, there exists only one valid value that pertains to the interval [a, b]. This restriction solves the problem and finally the solution (9) can be expressed in the following form

c = c̄ − δ + √( δ² + ∆(∆/4 − δ) − ζ ) Sign(δ)    (10)

where

∆ = b − a
f[a,b] = [f(b) − f(a)]/∆
c̄ = (a + b)/2 ,    c = c̄  or  c = a − f(a)/f[a,b]
f[a,b,c] = {f[a,b] − [f(b) − f(c)]/(b − c)}/(a − c)
δ = ½ f[a,b]/f[a,b,c]
ζ = f(a)/f[a,b,c]
Sign(δ) = δ/|δ|

The discriminant β² − 4αγ in the solution of the equation (8) always has a positive value due to the condition f(a)·f(b) ≤ 0 that forces the parabola, containing the points (a, f(a)) and

(b, f (b)), to intersect the line f (x) = 0 in two points that represent two different real solutions of the equation (8), one of which pertains to the closed interval [a, b]. The unique case where the aforementioned discriminant can be zero is when in some of the extremes of the interval [a, b] there exists a minimum or a maximum of the parabolic function, otherwise the discriminant always has a positive value. This guarantees that the square root of the expression (10) is always defined in the set of positive real numbers.
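A minimal Python sketch of one step of this bracketing scheme, following expression (10) and the definitions above, is given next; the function name, the interval-update rule by the Bolzano sign test, and the guard on the radicand are assumptions made here, not taken from the report.

    import math

    def second_order_bracket_step(f, a, b, use_regula_falsi=True):
        """One step of the second order bracketing method, expression (10).
        Returns the refined bracket (a, b) together with the new estimate."""
        fa, fb = f(a), f(b)
        dab = (fb - fa) / (b - a)                      # f[a,b]
        # third point c: regula falsi (2) or the midpoint (1)
        c = a - fa / dab if use_regula_falsi else 0.5 * (a + b)
        fc = f(c)
        dabc = (dab - (fb - fc) / (b - c)) / (a - c)   # f[a,b,c]
        delta = 0.5 * dab / dabc
        zeta = fa / dabc
        big_delta = b - a
        cbar = 0.5 * (a + b)
        rad = delta**2 + big_delta * (big_delta / 4.0 - delta) - zeta
        c_new = cbar - delta + math.copysign(math.sqrt(max(rad, 0.0)), delta)
        # keep the subinterval that still satisfies the Bolzano condition
        if fa * f(c_new) <= 0.0:
            return a, c_new, c_new
        return c_new, b, c_new

    # Example: refine the bracket [1, 2] for f(x) = x^3 - 2
    a, b, c = second_order_bracket_step(lambda x: x**3 - 2.0, 1.0, 2.0)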


Figure 1. Bracketing method of second order showing the parabolic interpolation.


The second order bracketing algorithm, abridged in expression (10), can be interpreted as a modification of the conventional order zero and order one bracketing methods, where the value c obtained by such methods is modified in order to find another value c by the expression (10). The method thus modified eliminates completely the problem of slow unilateral convergence, which is a characteristic drawback of the method of linear interpolation. The method of linear interpolation has been modified before by other authors in order to avoid the problem of slow unilateral convergence; however, these modified methods have a poor theoretical base, though in practice they work well enough. Their costs of computer calculations are not significantly higher than for the unmodified method, but the convergence, though more rapid, is still linear. On the other hand, the second order bracketing method has a rich theoretical base and a good performance, the cost of computer calculations being just a little higher than that necessary for the other modified methods; this is compensated by the quadratic convergence that is characteristic of the second order methods. It should be noticed that the second order bracketing method is unstable and could fail if the convergence directs the method to a root where the divided difference f[a,b,c] tends to be small in absolute value, since, as can be seen in the expressions of δ and ζ, this quantity is in the denominator of the fractions. It should be remembered that the above mentioned divided difference is proportional to the second order derivative in the interval [a, b]. Therefore, this problem could occur only in the case that a root of multiplicity three is obtained, which is seldom found in practice. The multiplicity m of a root is established, for a continuous function with continuous derivatives up to the order m, if

f(r) = f'(r) = f''(r) = f'''(r) = ··· = f^{(m−1)}(r) = 0 ,    f^{(m)}(r) ≠ 0    (11)

As has been seen, the bracketing methods form a set of iterative processes which begin with two estimated values that define a closed interval where the user of the method is sure to find a root of the equation to be solved. The open methods, conversely, form a set of iterative processes which begin with only one estimated initial value, and the user of the method cannot be sure of obtaining a root of the equation to be solved. The following pages deal with these last methods. Another bracketing method, similar to the aforementioned one, is the algorithm developed in the 1960s by van Wijngaarden, Dekker, and others at the Mathematical Center in Amsterdam (see reference [3]), and later improved by Brent [4]. This algorithm uses three prior points to fit an inverse quadratic function (x as a quadratic function of y), whose value of x at y = 0, obtained with an inverse quadratic interpolation, is taken as the next estimate of the root r.

OPEN METHODS

TAYLOR SERIES

All the open methods are theoretically based on Taylor series expansions of the function whose roots are to be determined. These expansions in Taylor series are made in the neighbourhood of an arbitrary point x and such series are evaluated for an assumed value of the root r near the arbitrary point. This reasoning is applied both for single non-linear equations and for n-dimensional non-linear equations. In each case the function treated is the function defined by the homogeneous equation f(x) = 0, in the scalar or in the vector case. If a scalar function f : IR → IR is defined, the expansion in Taylor series up to the term of second order is expressed as

f(r) = f(x) + (r − x) f'(x) + ½ (r − x)² f''(x) + O(|e|³)    (12)

where e = x − r is the global error of the value x related to the root r, and, by definition of a root, f(r) ≡ 0. If we have an n-dimensional function f : IR^n → IR^n, the expansion in Taylor series up to the term of second order is expressed as

f(r) = f(x) + (r − x)·∇f(x) + ½ (r − x)(r − x) : ∇∇f(x) + O(‖e‖³)    (13)

where e = x − r is the global error of the value x related to the root r, and, by definition of a root, f (r) ≡ 0.

NEWTON’S METHOD

Newton's method is based on an expansion in Taylor series, as defined before, but up to the first order term

f(r) = f(x) + (r − x) f'(x) + O(|e|²) = 0    (14)

Solving for r in this expression we obtain

r = x − [f(x) + O(|e|²)]/f'(x)    (15)

If we do not take into consideration the term O(|e|²), then the preceding expression does not offer the value of r, but a value that could be near to r. According to this reasoning, an iterative procedure of the form

x^{k+1} = x^k − f(x^k)/f'(x^k)    (16)

can be applied.

This iterative procedure is used, beginning with an initial estimated value x^0, in a successive manner until the local error

ε_k = x^k − x^{k−1}    (17)

is, in absolute value, less than the tolerance for the local error ε_max. In other words, until

|ε_k| < ε_max    (18)

Also the iterative procedure must be applied until the global deviation

d_k = f(x^k)    (19)

is, in absolute value, less than the tolerance for the global deviation d_max. In other words, until

|d_k| < d_max    (20)

Newton's method is normally relaxed in the form

x^{k+1} = x^k − ω f(x^k)/f'(x^k)    (21)

where the relaxation factor ω can be ω > 1 to overrelax the method or ω < 1 to underrelax the method.
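A minimal sketch of the relaxed iteration (21) with the stopping tests (18) and (20), written here in Python for illustration (the function name and default tolerances are assumptions, not values prescribed by the report):

    def newton(f, df, x0, omega=1.0, eps_max=1e-10, d_max=1e-10, kmax=100):
        """Relaxed Newton iteration (21) stopped by criteria (18) and (20)."""
        x = x0
        for _ in range(kmax):
            x_new = x - omega * f(x) / df(x)
            if abs(x_new - x) < eps_max and abs(f(x_new)) < d_max:
                return x_new
            x = x_new
        return x

    # Cube root of 2, slightly underrelaxed
    root = newton(lambda x: x**3 - 2.0, lambda x: 3.0 * x**2, x0=1.0, omega=0.9)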

NEWTON-RAPHSON METHOD

The Newton-Raphson method is an extension of Newton's method to n-dimensional functions [5]. In a similar way to the case of scalar functions in one dimension, this method is based on an expansion in Taylor series, such as the expression (13), but up to the term of first order

f(r) = f(x) + (r − x)·∇f(x) + O(‖e‖²)    (22)

Solving this equation for r it is obtained

r = x − {[∇f(x)]*}⁻¹ · [f(x) + O(‖e‖²)]    (22′)

Without considering the term O(‖e‖²) and following a reasoning similar to the preceding method, an iterative procedure of the form

x^{k+1} = x^k − [J_f(x^k)]⁻¹ · f(x^k)    (23)

can be applied,

where [J_f(x^k)] = [∇f(x^k)]* is the jacobian matrix of the function f evaluated at the point x^k.

However, it is more practical to write the expression (23) as

x^{k+1} = x^k + z^k    (24)

where z^k is the solution of the system of linear equations

[J_f(x^k)]·z^k = −f(x^k)    (25)

The established iterative procedure is applied, beginning with an estimated initial value of the root named x^0, in a successive form, until

‖ε^k‖ < ε_max    (26.a)

and

‖d^k‖ < d_max    (26.b)

where ε^k and d^k are, respectively, the local error and the global deviation, defined, in a similar way as in expressions (17) and (19), in the following way

ε^k = x^k − x^{k−1}    (27.a)

d^k = f(x^k)    (27.b)

The norm ‖·‖ used in this analysis is the euclidean norm, defined as

‖x‖ = √( Σ_{i=1}^{n} x_i² )        ‖A‖ = √( Σ_{i=1}^{n} Σ_{j=1}^{m} a_ij² )    (27.c)

The Newton-Raphson method may be relaxed in the form

x^{k+1} = x^k + ω z^k    (28)

where ω is the relaxation factor, similar to the expression (21).
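The following Python sketch illustrates the relaxed Newton-Raphson procedure (24)-(25) and (28); the function names, tolerances and the test system (the cubic roots of unity, used later in the stability examples) are choices made here for illustration and are not part of the report.

    import numpy as np

    def newton_raphson(f, jac, x0, omega=1.0, eps_max=1e-10, d_max=1e-10, kmax=100):
        """Relaxed Newton-Raphson method for a system f(x) = 0, following
        equations (24), (25) and (28); jac returns the n-by-n jacobian matrix."""
        x = np.asarray(x0, dtype=float)
        for _ in range(kmax):
            z = np.linalg.solve(jac(x), -f(x))   # linear system (25)
            x_new = x + omega * z                # relaxed update (28)
            if np.linalg.norm(x_new - x) < eps_max and np.linalg.norm(f(x_new)) < d_max:
                return x_new
            x = x_new
        return x

    # Cubic roots of unity written as a real 2x2 system
    def f3(v):
        x, y = v
        return np.array([x * (x**2 - 3 * y**2) - 1.0, y * (3 * x**2 - y**2)])

    def j3(v):
        x, y = v
        return np.array([[3 * x**2 - 3 * y**2, -6 * x * y],
                         [6 * x * y, 3 * x**2 - 3 * y**2]])

    print(newton_raphson(f3, j3, [0.6, 0.7]))    # converges to one of the three roots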

RICHMOND’S METHOD

Richmond's method is based on an expansion in Taylor series for scalar functions, up to the second order term [6], such as is stated in the expression (12). Reorganizing that expression by factoring out (r − x), the following is obtained

[f'(x) + ½ (r − x) f''(x)](r − x) = −f(x) − O(|e|³)    (29)

Without taking into account the term O(|e|³), and then changing r by x^{k+1} and x by x^k, an iterative procedure can be applied in the form

x^{k+1} = x^k − [f'(x^k) + ½ z^k f''(x^k)]⁻¹ f(x^k)    (30)

where z^k is the local error ε^{k+1} obtained with Newton's method (16) for the iteration k, thus

z^k = −f(x^k)/f'(x^k)    (31)

It should be noticed in the expression (29) that the quantity (r − x) between the brackets is replaced by an estimation offered by Newton's method (as in the expression (15)), instead of a more precise value (x^{k+1} − x^k). This feature transforms the expression (29), which is a second order expression, into Richmond's method, which is almost a second order method. The criteria to stop this method are similar to those for Newton's method, offered by the expressions (18) and (20).
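As a hedged illustration of expressions (30)-(31) (the names below are mine; the report's own implementation is not reproduced here), Richmond's iteration for a single equation can be sketched as:

    def richmond(f, df, d2f, x0, eps_max=1e-12, kmax=50):
        """Richmond's iteration (30) with the Newton correction z of (31)."""
        x = x0
        for _ in range(kmax):
            z = -f(x) / df(x)                              # equation (31)
            x_new = x - f(x) / (df(x) + 0.5 * z * d2f(x))  # equation (30)
            if abs(x_new - x) < eps_max:
                return x_new
            x = x_new
        return x

    # Cube root of 2 using f, f' and f''
    root = richmond(lambda x: x**3 - 2.0, lambda x: 3 * x**2, lambda x: 6 * x, x0=1.0)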

SECOND ORDER METHODS

SINGLE NON-LINEAR EQUATIONS. The second order method, as its name indicates, is based on an expansion in Taylor series up to the second order term. For single scalar functions the series (12) is used. Eliminating the term O(|e|³) and making the change of r by x^{k+1} and x by x^k (following the same reasoning as in the preceding methods), the next expression is obtained

f(x^k) + (x^{k+1} − x^k) f'(x^k) + ½ (x^{k+1} − x^k)² f''(x^k) = 0    (32)

which represents a second order polynomial equation in (x^{k+1} − x^k). Finally, solving for x^{k+1} in the equation (32) it is obtained

x^{k+1} = x^k − [ f'(x^k) ∓ √( [f'(x^k)]² − 2 f(x^k) f''(x^k) ) ] / f''(x^k)    (33)

As must be observed, there exist two solutions of the equation (32). A graphic example of this may be observed in figure 2, where this method is compared with Newton's method. In order to resolve the ambiguity of the double sign, the value of x^{k+1} nearest to x^k is always selected. This is done by modifying the equation (33) in the form

x^{k+1} = x^k − [ B^k − √(D^k) Sign(B^k) ] / C^k    (34)

where

A^k = f(x^k)
B^k = f'(x^k) ≅ f[x^k, x^{k−1}] + (x^k − x^{k−1}) f[x^k, x^{k−1}, x^{k−2}]
C^k = f''(x^k) ≅ 2 f[x^k, x^{k−1}, x^{k−2}]
D^k = (B^k)² − 2 A^k C^k

Figure 2. Open method of second order compared with the Newton’s method.


If D^k ≤ 0 then its value is assumed to be zero. This little additional modification of the algorithm mentioned before produces the effect that it is now possible to find local minimum or maximum points in the neighbourhood of x^k, if there are no roots nearby. If the divided difference approximations for the derivatives in the expression (34) are used, then Muller's method [7] is obtained. For roots of multiplicity m = 2 the aforementioned algorithm does not show the same problem as Newton's method, but for roots with multiplicity m ≥ 3, C^k in the denominator of (34) is zero and therefore l'Hôpital's rule must be applied

lim_{x^k→r} B^k/C^k = lim_{x^k→r} f'(x^k)/f''(x^k) = lim_{x^k→r} f^{(m−1)}(x^k)/f^{(m)}(x^k)

lim_{x^k→r} A^k/C^k = lim_{x^k→r} f(x^k)/f''(x^k) = lim_{x^k→r} f^{(m−2)}(x^k)/f^{(m)}(x^k)

In other words, when the iterative procedure is near a root of multiplicity m, it is convenient to redefine A^k = f^{(m−2)}(x^k), B^k = f^{(m−1)}(x^k) and C^k = f^{(m)}(x^k). This method may also be relaxed, similarly to Newton's method, in the form

x^{k+1} = x^k − ω [ B^k − √(D^k) Sign(B^k) ] / C^k    (35)

The criteria to stop the calculation procedure in this case are similar to those used for the Newton’s method, offered by the expressions (18) and (20).
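A compact Python sketch of the open iteration (34)-(35) is given below; it uses analytic derivatives for B^k and C^k (with divided-difference estimates it becomes the Muller-like variant mentioned above), and the fallback to a plain Newton step when C^k vanishes is an assumption of this sketch, not the redefinition of A^k, B^k, C^k near a multiple root described in the report.

    import math

    def second_order_scalar(f, df, d2f, x0, omega=1.0, eps_max=1e-12, kmax=50):
        """Open second order iteration (34)-(35) for a single equation f(x) = 0."""
        x = x0
        for _ in range(kmax):
            A, B, C = f(x), df(x), d2f(x)
            D = max(B * B - 2.0 * A * C, 0.0)      # D^k <= 0 is taken as zero
            if C == 0.0:
                step = A / B                       # sketch's fallback: Newton step
            else:
                step = (B - math.copysign(math.sqrt(D), B)) / C
            x_new = x - omega * step
            if abs(x_new - x) < eps_max:
                return x_new
            x = x_new
        return x

    root = second_order_scalar(lambda x: x**3 - 2.0, lambda x: 3 * x**2, lambda x: 6 * x, 1.0)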

SYSTEMS OF NON-LINEAR EQUATIONS. For n-dimensional functions, the development of the second order methods is based on the expansion in Taylor series (13). If the term O(‖e‖³) is eliminated, then r is no longer a root and it is converted into an estimated value x^{s+1}, x being converted into an estimated value x^s. Thus an iterative procedure can be applied where

f(x^s) + (x^{s+1} − x^s)·∇f(x^s) + ½ (x^{s+1} − x^s)(x^{s+1} − x^s) : ∇∇f(x^s) = 0    (36)

Additionally, in this expression the term x^{s+1} − x^s can be factored out in the form

[∇f(x^s) + ½ (x^{s+1} − x^s)·∇∇f(x^s)]* · (x^{s+1} − x^s) = −f(x^s)    (37)

and making the corresponding variable changes we have

[J_f(x^s) + ½ H_f(x^s)·z^s]·z^s = −f(x^s)    (38)

where J_f(x^s) = [∇f(x^s)]* is the jacobian matrix, with elements J_ij = ∂f_i/∂x_j, and H_f(x^s) = {∇[∇f(x^s)]*}* is the hessian tensor, with elements H_ijk = ∂²f_i/(∂x_j ∂x_k).

As may be observed, it is impossible to solve the expression (38) for z^s, and thus to find x^{s+1}, in a direct form. Therefore, an iterative procedure of "fixed-point" type is applied, where

z^{s,p+1} = −[J_f(x^s) + ½ H_f(x^s)·z^{s,p}]⁻¹ · f(x^s)    (39)

Here the superscript s indicates the "principal iterative process" (the index s in the expression (38)), and the superscript p indicates the "internal iterative process". For the iterative process (39), the initial value z^{s,0} is taken as an estimated value obtained by the Newton-Raphson method. This is,

z^{s,0} = −[J_f(x^s)]⁻¹·f(x^s)    (40)

In the expressions (39) and (40) the values of z^{s,p+1} and z^{s,0}, respectively, are obtained by solving the systems of linear equations written in the form [·]z = −f. Once the value of z^s is obtained, x^{s+1} is calculated from

x^{s+1} = x^s + z^s    (41)

The iterative process in s, which is called the principal iterative process, continues until the stopping criteria are fulfilled. The criteria to stop the iterative process are exactly the same as for the Newton-Raphson method, expressed in the equations (26) and (27).

The iterative process in p, which is called the internal iterative process, is executed for each step or iteration s, until the local error in z, which is defined as

∆z^{s,p+1} = (z^{s,p+1} − z^{s,p}) = −[J_f(x^s) + ½ H_f(x^s)·z^{s,p}]⁻¹·f(x^s) − z^{s,p}    (42)

is less than an imposed tolerance ε_{z,max} for the local error. This is,

‖∆z^{s,p+1}‖ < ε_{z,max}    (43)

The second order method algorithm presented in the expressions (39)-(43) may be relaxed with three relaxation factors in the form

∆z^{s,p+1} = −ω_z {[J_f(x^s) + ½ ω_h H_f(x^s)·z^{s,p}]⁻¹ · f(x^s) + z^{s,p}}    (44)

z^{s,p+1} = z^{s,p} + ∆z^{s,p+1}    (45)

x^{s+1} = x^s + ω z^s    (46)

where

ω_z = relaxation factor for the internal iterative process.
ω_h = secondary relaxation factor. If ω_h = 0 then the method is converted into the Newton-Raphson method and the internal iterative process in p converges after only one iteration. (Normally ω_h ≤ 1).
ω = principal relaxation factor. If ω > 1 the method is overrelaxed and if ω < 1 the method is underrelaxed.
z^s = z^{s,p+1} when ‖∆z^{s,p+1}‖ < ε_{z,max}.

The secondary relaxation factor ω_h may be automatically adjusted after each iteration, in the same way as the relaxation factor in the Marquardt algorithm [8] is adjusted. The value of ω_h is diminished from unity in the form

ω_h = C_h⁻ ω_h        C_h⁻ = 0.9 ∼ 0.99    (47)

until the iterative procedure presented in the equations (44) and (45) is completely convergent, from the beginning with the initial value (40) until the condition (43) is finally fulfilled. This means that

‖∆z^{s,p+1}‖ < ‖∆z^{s,p}‖        ∀p = 0, 1, 2, 3, ...    (48)

The value of ‖∆z^{s,p}‖ for p = 0 is selected as

‖∆z^{s,0}‖ = ‖z^{s,0}‖    (49)

which is a limit case of ∆z^{s,p}, when the single equation problem has to be solved and D^k in the equation (34) is zero. This means that the parabola has its vertex just at the point (x^{k+1}, 0). The initial condition (49) also can be interpreted as the limit case in which, during the first iteration in p, the solution z^{s,0} is changed from a linear solution for x^{s+1} (with the Newton-Raphson method (40)) into an extreme parabolic solution for x^{s+1} (with the second order method (44)-(45)), which is the farthest possible solution (from x^s) within the internal iterative process. The condition (48) also may be modified in order to take into account every term, in the form

|∆z_i^{s,p+1}| < |∆z_i^{s,p}|        ∀p = 0, 1, 2, 3, ...    ∀i = 1, 2, 3, ..., n    (50.a)

until

|∆z_i^{s,p+1}| < ε_{z,max}        ∀i = 1, 2, 3, ..., n    (50.b)

If it occurs that the internal iterative procedure in p is completely convergent for one iteration in s, then for the following iteration (s + 1) the factor is taken as

ω_h = C_h⁺ ω_h        C_h⁺ ≈ 1/C_h⁻    (51)

without letting ω_h exceed unity. Remember that for ω_h = 1 the second order method is obtained without any internal relaxation. Here it is convenient to emphasize the fact that the Newton-Raphson method finds approximate solutions converging to a root by projecting planes tangent to the surfaces of the functions. The second order method finds the approximate solutions, also converging to a root, by projecting curved parabolic surfaces tangent to the surfaces of the functions and with the same curvature. The relaxation factor ω_h modifies that curvature, making it smoother. The principal relaxation factor ω may be automatically adjusted, in each iteration, in a similar way as it is applied in the steepest descent method [9] to d^{s+1} = ‖f(x^{s+1})‖ (remember that f(r) = 0). If it occurs that

‖f(x^{s+1})‖ < ‖f(x^s)‖    (52)

then the method descends in d^s and therefore the function is approaching a minimum (in the upper quadrant), a maximum (in the lower quadrant) or a root, and the procedure is locally convergent. When there is no convergence, then, for the next iteration in s, the value of ω is diminished, beginning with unity, in the form

ω = C⁻ ω        C⁻ = 0.9 ∼ 0.99    (53.a)

until the convergence is verified in ‖f(x^s)‖. When there is convergence, then, for the next iteration (s + 1), the value of ω is increased in the form

ω = C⁺ ω        C⁺ ≈ 1/C⁻    (53.b)

This automatically controls the convergence, thus always driving the method to a root, to a maximum or to a minimum under the conditions stated above. The procedure is applied until the conditions (26.a) and (26.b) are satisfied, similarly to the Newton-Raphson method. The relaxation factor for the internal iterative process ω_z is selected in such a way that the process is always underrelaxed, in order to give more stability to the internal iterative process. The condition (52) may be modified to be considered term by term, in the form

|f_i(x^{s+1})| < |f_i(x^s)|        ∀i = 1, 2, 3, ..., n    (54)
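A condensed Python sketch of the algorithm (39)-(46) follows. It keeps the three relaxation factors but leaves out the automatic adjustment of ω and ω_h described in (47)-(53); the function and argument names (second_order_system, p_max, eps_z, etc.) are assumptions of this sketch and do not come from the report, whose routines were written in FORTRAN.

    import numpy as np

    def second_order_system(f, jac, hes, x0, omega=1.0, omega_h=1.0, omega_z=0.5,
                            p_max=6, eps_max=1e-10, d_max=1e-10, eps_z=1e-12, s_max=100):
        """Second order method for systems with a fixed-point internal process.
        hes(x) must return the hessian tensor H[i, j, k] = d2 f_i / (dx_j dx_k)."""
        x = np.asarray(x0, dtype=float)
        for _ in range(s_max):                               # principal process in s
            J, H, fx = jac(x), hes(x), f(x)
            z = np.linalg.solve(J, -fx)                      # starting value (40)
            for _ in range(p_max):                           # internal process in p
                Jp = J + 0.5 * omega_h * np.einsum('ijk,k->ij', H, z)
                dz = -omega_z * (np.linalg.solve(Jp, fx) + z)    # correction (44)
                z = z + dz                                       # update (45)
                if np.linalg.norm(dz) < eps_z:                   # criterion (43)
                    break
            x_new = x + omega * z                            # principal update (46)
            if np.linalg.norm(x_new - x) < eps_max and np.linalg.norm(f(x_new)) < d_max:
                return x_new
            x = x_new
        return x

Limiting the internal process through p_max reflects the report's remark that the user may fix the number of internal iterations in order to reduce the cost of computer calculations.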

If the second order method carries out only one iteration in p, then Richmond's method (30) is extended to be used for systems of n-dimensional non-linear equations. For very complicated functions, it is convenient to express the partial derivatives of the jacobian matrix and the hessian tensor in a numerically approximated form, as indicated in the following expressions

[J_f(x^s)]_ij = (∂f_i/∂x_j)^s ≅ [ f_i(x_1^s, x_2^s, x_3^s, ..., x_j^s, ..., x_n^s) − f_i(x_1^s, x_2^s, x_3^s, ..., x_j^{s−1}, ..., x_n^s) ] / (x_j^s − x_j^{s−1})    (55)

[H_f(x^s)]_ijk = ∂²f_i/(∂x_j ∂x_k) ≅ { [∂f_i/∂x_k]^s − [∂f_i/∂x_k]_j^s } / (x_j^s − x_j^{s−1})        (j ≠ k)    (56.a)

[H_f(x^s)]_ijj = ∂²f_i/∂x_j² ≅ 2 { [∂f_i/∂x_j]^s − [∂f_i/∂x_j]_j^s } / (x_j^s − x_j^{s−2})        (j = k)    (56.b)

where

[∂f_i/∂x_k]_j^s = [ f_i(x_1^s, x_2^s, .., x_j^{s−1}, .., x_k^s, .., x_n^s) − f_i(x_1^s, x_2^s, .., x_j^{s−1}, .., x_k^{s−1}, .., x_n^s) ] / (x_k^s − x_k^{s−1})        (j ≠ k)    (57.a)

[∂f_i/∂x_j]_j^s = [ f_i(x_1^s, x_2^s, x_3^s, ..., x_j^{s−1}, ..., x_n^s) − f_i(x_1^s, x_2^s, x_3^s, ..., x_j^{s−2}, ..., x_n^s) ] / (x_j^{s−1} − x_j^{s−2})        (j = k)    (57.b)
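For illustration only, the sketch below builds these approximations in their perturbation form (x^{s−1} replaced by x^s − h, as discussed next), which avoids storing previous iterates; the helper names and the index convention H[i, j, k] are assumptions of the sketch.

    import numpy as np

    def approx_jacobian(f, x, h=1e-6):
        """Backward-difference approximation of J_ij = df_i/dx_j, in the spirit of (55)."""
        n = len(x)
        fx = f(x)
        J = np.empty((n, n))
        for j in range(n):
            xp = np.array(x, dtype=float)
            xp[j] -= h
            J[:, j] = (fx - f(xp)) / h
        return J

    def approx_hessian(f, x, h=1e-4):
        """Differences of perturbed jacobians, in the spirit of (56)-(57):
        H[i, j, k] approximates d2 f_i / (dx_j dx_k)."""
        n = len(x)
        J0 = approx_jacobian(f, x, h)
        H = np.empty((n, n, n))
        for j in range(n):
            xp = np.array(x, dtype=float)
            xp[j] -= h
            H[:, j, :] = (J0 - approx_jacobian(f, xp, h)) / h
        return H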

It should be noticed that the partial first order derivatives have been calculated with expressions whose error is of the order of ‖e^s‖, but the partial second order derivatives have been calculated with expressions whose error is of the order of ‖e^s‖². This has been done in this way to simplify the calculations. When these approximated formulas are used for the partial first order derivatives in the Newton-Raphson method, the method is called the "method of the secant plane". Similarly, when these approximated formulas are used for the first and second order partial derivatives in the second order method, the method is called the "method of the secant paraboloid". The derivatives in the expressions (55), (56) and (57) may be calculated in a different manner by applying the method of perturbations. This can be done by substituting x^{s−1} by (x^s − h) and x^{s−2} by (x^s − 2h) in the expressions mentioned before, where h is assumed to be smaller than ε_max to avoid the instability produced in the neighbourhood of the root. Remember that h divides the expressions of the derivatives in the method of perturbations. A different way to improve the second order method is by using the Newton-Raphson method, instead of the fixed-point method, to solve the equation (38) for z^s. This can be done by defining a new function

F(z^{s,p}) = f(x^s) + [J_f(x^s) + ½ ω_h H_f(x^s)·z^{s,p}]·z^{s,p}    (58)

that is supposed to be zero at its solution for z^{s,p}, and where x^s is supposed to be constant for the internal iterative process. Then, the problem can be solved using the Newton-Raphson method for the function F(z^{s,p}). This is

z^{s,p+1} = z^{s,p} − ω_z [J_F(z^{s,p})]⁻¹ · F(z^{s,p})    (59)

where

J_F(z^{s,p}) = [J_f(x^s)] + ω_h [H_f(x^s)]·z^{s,p}    (60)

and where ω_h and ω_z are, as in the equations (44) and (45) and as aforementioned, the secondary relaxation factor and the relaxation factor for the internal iterative process, respectively. The initial value z^{s,0} for the iterative procedure (59) is again the value given by (40). The equations (45) and (46) continue to be valid here and the criterion to stop the iterative process is the relation (43), but the local error in z is now

∆z^{s,p+1} = (z^{s,p+1} − z^{s,p}) = −ω_z [J_F(z^{s,p})]⁻¹ · F(z^{s,p})    (42′)

∆z^{s,p+1} = −ω_z [J_f(x^s) + ω_h H_f(x^s)·z^{s,p}]⁻¹ · { f(x^s) + [J_f(x^s) + ½ ω_h H_f(x^s)·z^{s,p}]·z^{s,p} }    (44′)

In all cases for the second order methods, the internal iterative procedure may run as many internal iterations in p as the user of the method wants, in order to reduce the cost of computer calculations. However, there is a minimum number of internal iterations in p to be processed in order to guarantee the stability of the complete method. The cost of computer calculations for the iterative procedure (44)-(45) is lower than the cost for the iterative procedure (59)-(60); nevertheless, the convergence of the latter is quadratic while the convergence of the former is linear. If only one iteration in p is carried out and the expressions (44) and (44′) are used in the equation (45), and this result is then also substituted in the equation (46), the next two algorithms are obtained, respectively

x^{s+1} = x^s − ω ω_z [J_f(x^s) + ½ ω_h H_f(x^s)·z^s]⁻¹ · f(x^s) + ω(1 − ω_z) z^s    (61.a)

x^{s+1} = x^s − ω [J_f(x^s) + ω_h H_f(x^s)·z^s]⁻¹ · [ f(x^s) − ½ (2 − ω_z) ω_h H_f(x^s) : z^s z^s ]    (61.b)

where

z^s = −[J_f(x^s)]⁻¹ · f(x^s)    (40′)

These two algorithms, described by the expressions (61.a) and (61.b), represent, respectively, the extensions of Richmond's method (30) and of the second order method (34) to systems of non-linear equations. Nevertheless, the equation (61.b) is only an approximation of the equation (34). Notice that if ω_z = 1, one term disappears in the equation (61.a). It should also be noted that the main difference between the equations (61.a) and (61.b) is the second factor of the second term on the right side. The additional effort and cost to calculate this factor for the equation (61.b) is not excessive, compared with the equation (61.a).

It is interesting to observe that the expression (61.b) is equivalent to the Newton-Raphson method (23), but with both the jacobian matrix and the function vector perturbed. Similarly, the expression (61.a) is the result of weighting, with the factor ω_z, the solutions of the Newton-Raphson method with the jacobian matrix perturbed and with the simple jacobian matrix as in (23). The algorithmic equations (61.a) and (61.b) may be expressed, similarly to the relaxed Newton-Raphson method (28), by defining a relaxation matrix W(x^s) for each iteration s, such that

x^{s+1} = x^s + [W(x^s)]·z^s    (46′)

where the matrix W(x^s) for the algorithms (61.a) and (61.b) is, respectively,

W(x^s) = ω ω_z [J_f(x^s) + ½ ω_h H_f(x^s)·z^s]⁻¹ · [J_f(x^s)] + ω(1 − ω_z)[I]    (61.a′)

W(x^s) = ω [J_f(x^s) + ω_h H_f(x^s)·z^s]⁻¹ · [ J_f(x^s) + ½ (2 − ω_z) ω_h H_f(x^s)·z^s ]    (61.b′)

If the procedure to obtain the expressions (61.a′) and (61.b′) is repeated iteratively p times, then the algorithm (46′) is obtained again as

x^{s+1} = x^s + W_{p+1}(x^s)·z^s        W_{p+1}(x^s) = ω S_{p+1}        z^{s,p+1} = S_{p+1}·z^{s,0}    (46′′)

but the relaxation matrix W_{p+1}(x^s) is now computed from a series of matrices S_{p+1}, p = 0, 1, 2, ..., which are calculated respectively with the following formulae

S_{p+1} = [J_f(x^s) + ½ ω_h H_f(x^s)·z^{s,p}]⁻¹ · { J_f(x^s)·[ω_z I + (1 − ω_z) S_p] + ½ (1 − ω_z) ω_h H_f(x^s) : z^{s,p} S_p }    (61.a′′)

S_{p+1} = [J_f(x^s) + ω_h H_f(x^s)·z^{s,p}]⁻¹ · { J_f(x^s)·[ω_z I + (1 − ω_z) S_p] + ½ (2 − ω_z) ω_h H_f(x^s) : z^{s,p} S_p }    (61.b′′)

where

z^s = z^{s,0} = −[J_f(x^s)]⁻¹·f(x^s)        S_0 = I    (40′′)

All this procedure may be verified as an exercise. When the second order methods are applied to find a minimum or a maximum of systems of equations, as an optimization program, the algorithm should be treated adequately. This is because the jacobian matrix tends to be singular and the initial value for the internal iterative process, z^{s,0}, can no longer be calculated with the solution of the linear system of equations (40), or Newton-Raphson method. In this case, it is recommended to assume another convenient small initial value for z^{s,0}, which may be the tolerance ε_max in each component with an appropriate sign. Finally, it is convenient to specify that the second order methods fall within the group of Successive Linear Programming (SLP) methods. This is because the second order methods are by themselves non-linear and are used to solve non-linear problems, but in each step (main iterations) the algorithm has to solve a linear iterative procedure maintaining some properties constant (jacobian matrix and hessian tensor), which linearizes the mentioned procedure.

CONVERGENCE

All the open methods [9] have the form

x^{s+1} = g(x^s)    (62)

and, if the iterative procedure is convergent to a root r, then it is satisfied that

r = g(r)    (63)

Equation (63) can now be subtracted from equation (62), and the right side of the result can be evaluated using an expansion in Taylor series up to the first order term, in the following form

x^{s+1} − r = (x^s − r)·∇g(r) + O(‖e^s‖²)    (64)

e^{s+1} = e^s·∇g(r) + O(‖e^s‖²)    (64′)

where e^s is the "global error" of the iteration s, defined as

e^s = x^s − r    (65)

From the expression (64), the last term can be truncated and therefore the derivatives must be evaluated at an intermediate point q. This is

e^{s+1} = [J_g(q)]·e^s    (66)

where [J_g(q)] = [∇g(q)]* is the jacobian matrix of the function g(x) evaluated at q, and q ∈ B(r, ‖e^s‖), that is, the n-dimensional sphere with center at r and radius ‖e^s‖.

Taking now the norm of the expression (66) and applying the Cauchy-Schwarz inequality, it is obtained that

‖e^{s+1}‖ ≤ ‖J_g(q)‖ · ‖e^s‖    (67)

This last expression means that

If ‖J_g(q)‖ < 1, the iterative procedure is convergent.
If ‖J_g(q)‖ > 1, the iterative procedure is divergent.

In other words, the iterative process (62) converges if the function g(x) is a contraction in a subset of IR^n containing r, and the initial value x^0 is selected in a neighbourhood also contained in the mentioned subset. Similarly to the procedure used to obtain (66), the right side of (62) minus (63) can be evaluated, but using an expansion in Taylor series up to the second order term for the function g(x) in the neighbourhood of r. The result of this is

e^{s+1} = [J_g(r)]·e^s + [H_g(r)] : e^s e^s + O(‖e‖³)    (68)

where J_g(r) = [∇g(r)]* is the jacobian matrix of the function g(x) evaluated at r, and H_g(r) = {∇[∇g(r)]*}* is the hessian tensor of the function g(x) evaluated at r. When the functions and their derivatives are bounded in the neighbourhood of r, this last expression means that

If [J_g(r)] ≠ 0, the iterative procedure is convergent in a linear form.
If [J_g(r)] = 0, the iterative procedure is convergent in a quadratic form.
If [J_g(r)] = 0 and H_g(r) = 0, the iterative procedure is convergent in a cubic form.

Observing the expression (66), it is obvious that the fixed-point method generally has a linear convergence. It can be demonstrated [5][9] that for the Newton-Raphson method [J_g(r)] = 0 and, therefore, it has a quadratic convergence. It has not been demonstrated, but it can be assumed, that the second order methods have a cubic (p → ∞) or an almost cubic (p < ∞) convergence. Nevertheless, it is convenient to say that all the iterative open methods are based on the convergence criterion expressed by the inequality (67) in the neighbourhood of a root. It is evident that the criterion of convergence (67) may also be applied to the internal iterative processes (45) and (59). This is because the mentioned iterative processes are "fixed-point" type processes similar to (62), but in z instead of x and in iterations p instead of s.

STABILITY

All procedures that can be defined in the form of the iterative equation (62) can be considered as a dynamical system [10]. The complexity of the system depends on how complex the function g(x) is; even for the simplest cases the behaviour of such dynamical systems may be apparently unpredictable. The sequences x^0, x^1, x^2, x^3, ..., x^s, x^{s+1}, ..., x^∞ in the aforementioned dynamical systems may turn into chaotic processes. The fixed-point method, the Newton-Raphson method and the second order methods are examples of these complex chaotic dynamical systems. The solution of the equation (62) is obtained when the iterative process finds a fixed point as in (63). Suppose that the function is defined as g : IR^n → IR^n and that every time the iterative process begins a different initial point x^0 is selected. Imagine that every initial point x^0 in a subset of the space IR^n is marked, for example with different colors, depending on the fixed point r to which the iterative process (62) converges. Thus, if the iterative process is deterministic, the subset of the space will be colored with different colors in different well-determined zones. If the iterative process is chaotic, the mentioned subset of the space will be colored with different colors in different zones forming strange n-dimensional figures where the colored zones are not well delimited. Such figures are known as "Fractals" [11]. If in these fractal figures one tries to zoom in on the boundary of a monocolored zone, the same structure is found repeatedly within the fractal structure up to infinity. According to B. B. Mandelbrot, as it is mentioned in [11], a subset A of the space is called "fractal" provided its Hausdorff dimension D_H(A) is not an integer. Intuitively, D_H(A) measures the growth of the number of sets of diameter ε needed to cover A, when ε → 0. More precisely, if A ⊂ IR^n, let N(ε) be the minimum number of n-dimensional balls of diameter ε needed to cover A. Then, if N(ε) increases like

N(ε) ∝ ε^{−D_H}        ε → 0    (69)

it can be said that A has a Hausdorff dimension D_H. It is not difficult to show that the Hausdorff dimension can be obtained by

D_H = lim_{ε→0} log[N(ε)] / log(k/ε)    (70)

N(ε → 0) = k ε^{−D_H}    (69′)

where k is the proportionality constant in (69).

For example, a monocolor rectangle with sides a and b can be covered with circles of diameter ε = a/n = b/m, ordered in n lines by m rows, and the Hausdorff dimension is then obtained as

D_H = lim_{n,m→∞} log(n m) / log[ k′ √( a b / ((a/n)(b/m)) ) ] = 2    (71)

where k′ is a proportionality constant. For a square the result is trivial and it is still the same. For a fractal figure the result is not an integer, as it was mentioned, but the procedure is the same. From now on the term "fractal map" will be used to name a complete fractal representation of a specific dynamical system, the term "fractal figure" will be used to name the intricate configuration of colors and forms in the boundary of color zones, and the big color zones will be named "basins". The points in a fractal map may be marked with different colors, but they can be marked in any other form, for example with characters; nevertheless, the term "color" will continue to be used. Finally, a different Hausdorff fractal dimension should be calculated for each color within the fractal map. Let us now focus our attention on a specific problem in IR². This problem consists of solving, using both the Newton-Raphson method and the second order methods, the following systems of equations

f(z) = z³ − 1 = 0    ≡    { x(x² − 3y²) − 1 = 0 ,    y(3x² − y²) = 0 }    (72)

f(z) = z⁴ − 1 = 0    ≡    { (x² − y²)² − 4x²y² − 1 = 0 ,    4xy(x² − y²) = 0 }    (73)

The equations (72) and (73) represent, respectively, the equations for the cubic and the quartic roots of 1 in the complex plane, where z = x + iy ∈ ℂ. Both equations, in the set of complex numbers, represent a system of two non-linear equations in the set of real numbers.

numbers, represent a system of two non-linear equations in the set of real numbers. Here, two different second order methods to solve the systems of equations shall be distinguished. The first one is the algorithmic method based on the expressions (44) and (45) for the internal iterative process, which will be named “Richmond’s method” due to its similarity to Richmond’s method for single non-linear equations (30). The other one is based on the expressions (58) and (59) for the internal iterative process, which will be named simply “Second Order method”. 26

Figures 3 and 4 show the fractal maps of the Newton-Raphson method for the solution of the equations (72) and (73), respectively. Additionally, figure 3 shows a detail of the first quadrant of the fractal figure. Figure 5, at the left side, also shows a detail of the first quadrant of the fractal map for equation (73). Notice that every colored basin surrounds a different root of the equations. The roots of the equation (72) are +1 and (−1 ± i√3)/2, and the roots of the equation (73) are ±1 and ±i. The fractal figure is formed with combinations of different colors depending on the root to which the system converges, beginning with the colored point or the initial point. The legend of each fractal map contains information about the quantity of points (NN,MM) and the intervals ([XMIN,XMAX],[YMIN,YMAX]) in the x-axis and y-axis, respectively. It also specifies the number of iterations (KM,LM) of the principal and internal iterative processes, and the relaxation factors (W,WH,WZ). Finally, at the bottom there are the minimum, averaged and maximum (KMIN,QMED,KMAX) number of principal iterations performed, and the Hausdorff fractal dimension (FRAC) averaged for all the colors and for the subset of the plane shown. The Hausdorff dimensions calculated here are only approximations, due to the limited number of points used, and are numerically obtained by

D_H ≅ 2 log[N(COLOR)] / log[N_T(COLOR)]    (74)

where N(COLOR) is the number of points of color "COLOR" that are totally surrounded by points of the same color, and N_T(COLOR) is the total number of points of color "COLOR". Notice that the Hausdorff fractal dimension averaged over all the colors, named from now on simply the "fractal dimension", is approximately 2, but a little smaller. If the map were a collection of colored polygons, then the fractal dimension would be exactly 2. The smaller the fractal figure is within the map, the closer the fractal dimension is to 2. The closer the fractal dimension is to 2, the more stable the behavior of the system. The chaotic process occurring in the systems described above is more or less stable depending on the location of the initial point of the iterative process. When the iterative process begins at a point colored with a specific color, the consecutive iterations jump from a point of that specific color to another point with the same color, until a root in the same color basin is reached. This process is chaotic if the initial point is located within the fractal figure, and may be unstable if a jump is made outside the fractal map and falls on another point really far from the root. This process is absolutely stable if the initial point is located in an attraction basin which contains a root.
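A direct reading of estimate (74) can be coded on the integer map produced by the previous sketch; the helper below is only an illustration under the assumption that "totally surrounded" means that the four nearest neighbours share the point's color.

    import numpy as np

    def hausdorff_estimate(colour_map, colour):
        """Approximate fractal dimension of one color basin following estimate (74):
        D_H ~ 2 log N(COLOR) / log N_T(COLOR)."""
        m = (colour_map == colour)
        interior = (m[1:-1, 1:-1] & m[:-2, 1:-1] & m[2:, 1:-1]
                    & m[1:-1, :-2] & m[1:-1, 2:])
        n_total = int(m.sum())
        n_interior = int(interior.sum())
        if n_total < 2 or n_interior < 1:
            return float('nan')
        return 2.0 * np.log(n_interior) / np.log(n_total)

Averaging this value over the colors of a map corresponds to the averaged fractal dimension (FRAC) quoted in the map legends.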

Figure 5 shows the influence of the automatic selective relaxation for the principal iterative process, established in the expressions (52) and (53), applied to the Newton-Raphson method. The map on the right side shows the improvement obtained with this procedure. The fractal dimension is higher for this procedure than for the standard Newton-Raphson method, the fractal figure is slightly smaller, and these features make the procedure more stable. Additionally, after testing some cases it was found that the condition (54) is not desirable unless the method drives the points to a root monotonically. Figure 6 shows the small influence of the number of internal iterations on Richmond's method. There, six iterations and twelve iterations are compared, both with automatic selective relaxation for the internal iterative process. The fractal dimension is a little closer to 2 for twelve iterations than for six. Figure 7 shows the influence of the automatic selective relaxation on Richmond's method. The upper-left map shows the influence of the internal selective relaxation with the condition (48). The upper-right map shows the influence of the same relaxation but with the condition (50.a). The lower-left map shows the influence of the automatic selective relaxation for both the principal and the internal iterative processes with the condition (50.a), and this map is considered to be better than the former two. The lower-right map is included here only for comparison; it was obtained by means of the second order method with only three iterations and only with selective relaxation for the principal iterative process. It is similar to the previous map but implies less effort. Figure 8 shows the influence of the number of internal iterations on the second order method. There are one, two, three and five iterations shown, and it is evident that the higher the number of iterations, the closer to 2 the fractal dimension and the smaller the fractal figure. Figure 9 shows three, six and ten internal iterations for the second order method, but with selective relaxation in both the principal and internal iterative processes and with the condition (50.a). The Newton-Raphson first quadrant map is shown only for comparison purposes. The improvement is really remarkable. The smaller the fractal figure, the higher the stability of the procedure and the closer the fractal dimension is to 2. Finally, figures 10 and 11 show the last procedure (second order method with six internal iterations and with automatic selective relaxation in both the principal and the internal iterative processes and the condition (50.a)) for the systems (72) and (73), respectively. The improvement for the system (72) is higher due to the smaller order of the polynomial complex equation. Both maps


represent the region [−1, 1]×[−1, 1] in the complex plane. It should be noticed that the proximity of the fractal dimension to 2 and the stability of the systems are notable with the procedure used, despite the increase in the number of principal iterations performed (averaged over the whole fractal map), which is approximately six times higher than for the standard Newton-Raphson method. The conclusion of this analysis is that the second order methods offer more stability in the resolution of single non-linear equations or systems of non-linear equations than the classical methods, such as the Newton-Raphson method, and, most of the time, it is possible to find a root (or a minimum or a maximum); however, the effort of calculation and the cost are greater. Even so, it is worthwhile for highly non-linear equations. In the case of single non-linear equations the number of total iterations is, on average, reduced by the second order methods, for both the bracketing and the open methods. On the other hand, in the case of systems of non-linear equations, the second order methods increase, on average, the number of principal iterations. Nevertheless, the increase in the number of principal iterations is not so high, and this is compensated by the stability reached by the second order algorithmic procedure. In summary, the higher stability is the main feature that makes the second order methods attractive.


Figure 3. Fractal representation of Newton-Raphson method for the cubic equation.


Figure 4. Fractal representation of Newton-Raphson method for the biquadratic equation.


Figure 5. Influence of the automatic principal relaxation in the Newton-Raphson method.

Figure 6. Influence of the number of internal iterations in the Richmond’s method.


Figure 7. Automatic convergence for the Richmond’s method.


Figure 8. Influence of the number of internal iterations in the Second Order method.


Figure 9. Automatic convergence for the Second Order method.


Figure 10. Fractal representation of Second Order method for the cubic equation.


Figure 11. Fractal representation of Second Order method for the biquadratic equation.


REFERENCES

[1] Gerald, C. F. Applied Numerical Analysis. 2nd Edition. Addison-Wesley, 1970.
[2] Carnahan, B.; Luther, H. A.; Wilkes, J. O. Applied Numerical Methods. John Wiley & Sons, 1969.
[3] Press, W. H.; Flannery, B. P.; Teukolsky, S. A.; Vetterling, W. T. Numerical Recipes. The Art of Scientific Computing. Cambridge University Press, 1986.
[4] Brent, R. P. Algorithms for Minimization without Derivatives. Prentice-Hall, 1973.
[5] Burden, R. L.; Faires, J. D. Numerical Analysis. 3rd Edition. PWS, Boston, 1985.
[6] Gundersen, T. "Numerical Aspects of the Implementation of Cubic Equations of State in Flash Calculation Routines". Computers and Chemical Engineering, Vol. 6, No. 3, 1982, pp. 245-255.
[7] Muller, D. E. "A Method of Solving Algebraic Equations Using an Automatic Computer". Mathematical Tables and Other Aids to Computation (MTAC), Vol. 10, 1956, pp. 208-215.
[8] Marquardt, D. "An Algorithm for Least Squares Estimation of Non-Linear Parameters". SIAM J. Appl. Math., Vol. 11, 1963, pp. 431-441.
[9] Ortega, J. M.; Rheinboldt, W. C. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, 1970.
[10] Devaney, R. L. An Introduction to Chaotic Dynamical Systems. Addison-Wesley, 1987.
[11] Peitgen, H.-O.; Richter, P. H. The Beauty of Fractals. Images of Complex Dynamical Systems. Springer-Verlag, 1986.
[12] Mandelbrot, B. B. The Fractal Geometry of Nature. Updated and Augmented Edition. W. H. Freeman and Company, New York, 1983.


APPENDICES

Appendix A. Listing of the program "FRACTAL" to generate the fractal representations of f(z) = z³ − 1 = 0 and f(z) = z⁴ − 1 = 0. Intentionally, the code has been erased; any request for the FORTRAN code should be addressed directly to the author at the following e-mail: [email protected]
Appendix B. Listing of the subroutine "RIGRA" to solve systems of non-linear equations using the second order methods (the same applies for the codes of these routines).
Appendix C. Listing of the subroutine "MATRIX" to solve systems of linear equations using Gauss elimination with normalized pivoting (the same applies for the codes of these routines).

