A novel local stochastic linearization method via two extremum entropy principles


International Journal of Non-Linear Mechanics 37 (2002) 785–800

Giuseppe Ricciardi^a,*, Isaac Elishakoff^b

^a Dipartimento di Costruzioni e Tecnologie Avanzate, University of Messina, Salita Sperone 31, 98166 Villaggio S. Agata, Messina, Italy
^b Department of Mechanical Engineering, Florida Atlantic University, 777 Glades Road, Boca Raton, FL 33431-0991, USA

Abstract

The classical Gaussian stochastic linearization method for non-linear random vibration problems is reinterpreted on the basis of the maximum entropy principle. Starting from this theoretical result, the maximum entropy principle is used to formulate a local stochastic linearization method, based on the substitution of the original non-linear system by an equivalent locally linear one. The expressions of the equivalent coefficients are derived. The equivalence of this method with a non-Gaussian closure based on the maximum entropy method for stochastic dynamics is demonstrated. In addition, an alternative stochastic linearization method is proposed, based on the minimum cross-entropy principle. Numerical applications show the superiority of the two proposed local stochastic linearization methods over the Gaussian one. © 2002 Published by Elsevier Science Ltd.

Keywords: Random vibrations; Entropy principles; Local stochastic linearization

1. Introduction

The Gaussian stochastic linearization (GSL) method is the most widely used approximate technique in the study of non-linear random vibration problems. Indeed, there are over 400 papers, and apparently a single monograph on this subject by Roberts and Spanos [1]. For review papers, the reader may consult Socha and Soong [2], Schuëller and Pradlwarter [3], and Elishakoff [4].

* Corresponding author. Tel.: +39-090-395022; fax: +39-090-6765618. E-mail address: [email protected] (G. Ricciardi).

The main idea of stochastic linearization consists in the replacement of the original non-linear system by an equivalent linear one, whose determination is performed by minimizing the difference between the forces in the two systems in a least mean square sense. It can be easily applied to problems dealing with systems with many degrees of freedom, without limitation regarding the type of non-linearity. It is well known that there is an equivalence between the Gaussian stochastic linearization method and the moment equation method applied in conjunction with the Gaussian closure technique. This is due to the shared hypothesis of Gaussianity of the response process, as a result of which both methods lead to the same resolving equations and, consequently, to the same results. Unfortunately, Gaussian stochastic linearization leads to reasonably good results only for small non-linearities. When the system exhibits a strongly non-linear behavior, the Gaussian stochastic

0020-7462/02/$ - see front matter © 2002 Published by Elsevier Science Ltd. PII: S0020-7462(01)00099-3


linearization may lead to erroneous results, especially when some specific features of the response process are of interest, such as stability analysis [5] or reliability estimation. In these cases alternative methods ought to be used, as evidenced in the literature. A central method appears to be the moment equation method in conjunction with a non-Gaussian closure technique [6–9]. In general, the quality of the results depends on the type of closure technique adopted. New methods to solve random vibration problems have been proposed by Chang [10], by Sobczyk and Trebicki [11,12], and by Ricciardi and Lacquaniti [13]. These approaches are inspired by Shannon's information theory [14] and by the consequent reformulation of statistical mechanics, based on the entropy optimization principles provided by Jaynes [15] and Kullback [16]. The maximum entropy principle [15] and the minimum cross-entropy principle [16] have each been applied to the analysis of a great variety of physical problems, such as time-dependent problems, linear numerical analysis, image processing, queuing theory, spectral analysis, etc. In spite of the long debate concerning the conceptual foundation and the rationale of entropy optimization problems outside the traditional domain of statistical thermodynamics [17], the large number of papers demonstrates their success in practical applications.
If one focuses attention on the probability density function of a random variable, it is well known that its knowledge allows one to evaluate all possible statistical moments. On the contrary, if only a finite number of moments is available, the problem of determining the probability density function has no unique solution. This constitutes the classical moment problem. The maximum entropy principle [15] offers a procedure for selecting, among all the possible solutions, the one that is optimal. It appears that in the face of incomplete knowledge about the probability density function, one ought to try to make a best guess; that is, the solution ought to be consistent with the available information in terms of moments. In addition, to determine the probability density function all (and only) the available information ought to be employed, without making any assumption that one does not actually possess [18]. Following a common terminology,

the solution is noncommittal with respect to missing information, or the solution is the least biased probability density function compatible with the available information [19]. When additional information is postulated in terms of an approximate a priori probability density function, the minimum cross-entropy principle [16] allows one to select the best guess by introducing a measure of statistical distance (or discrepancy, or cross-entropy) between two probability density functions [20]. The minimum cross-entropy principle can be regarded as a generalization of the maximum entropy principle to the case in which an a priori probability density function is available.
In this study, the maximum entropy principle is employed in order to furnish a new interpretation of the Gaussian stochastic linearization method. An improved stochastic linearization method is proposed, following a recent approach by Alaoui Ismaili and Bernard [21]. They suggested that the linearization criterion can be carried out locally in the state space of the random response process, and proposed the local stochastic linearization (LSL) method as a valid alternative to the Gaussian stochastic linearization method. This approach consists in substituting the original non-linear system with an equivalent locally linear one. Note that the locally linear system is a non-linear one. This is in contrast to the classical linearization, which replaces the original system by a globally linear one. The main aim of this study is the derivation of the equivalent coefficients of the locally linear system. Moreover, the equivalence of this method with the maximum entropy principle is demonstrated. Then, an alternative local stochastic linearization is proposed, by using the minimum cross-entropy principle, starting from the a priori probabilistic character postulated in terms of the approximate probability density function of the response process derived by the classical Gaussian stochastic linearization.

2. The extremum entropy principles: a brief review

The extremum entropy principles allow one to determine the probability density function p(x) of a


random variable X when partial probabilistic information is given in terms of the following M averages:

$$\int_a^b \varphi_j(x)\,p(x)\,dx = \mu_j \qquad (j = 1, 2, \ldots, M), \tag{1}$$

where [a, b] is the interval of definition of p(x). Clearly, there exist an infinite number of functions p(x) that satisfy Eqs. (1). Thus, a unique reconstruction of p(x) is impossible. This represents a classical undetermined inverse problem. The maximum entropy principle, or Jaynes principle, offers a criterion for selecting the probability density function that is optimal in some sense. According to Jaynes, the least biased density function compatible with information (1) is the one that maximizes the following (Shannon's entropy) functional [15,19]:

$$H[\tilde p(x)] = -\int_a^b \tilde p(x)\,\log[\tilde p(x)]\,dx \tag{2}$$

under the constraints imposed by Eqs. (1) and the appropriate normalization condition

$$\int_a^b p(x)\,dx = 1 \tag{3}$$

that should be fulfilled identically. By using the method of Lagrange multipliers, the extended unconstrained Lagrangian functional is expressed as

$$L_1[\tilde p(x); \lambda_k] = -\int_a^b \tilde p(x)\,\log[\tilde p(x)]\,dx - (\lambda_0 - 1)\left(\int_a^b \tilde p(x)\,dx - 1\right) - \sum_{j=1}^{M} \lambda_j \left(\int_a^b \varphi_j(x)\,\tilde p(x)\,dx - \mu_j\right), \tag{4}$$

where the λ_j are the Lagrange multipliers. Note that, instead of λ_0, the quantity λ_0 − 1 is used as the first Lagrange multiplier, as a matter of mathematical convenience. The stationarity condition of the extended unconstrained functional, ∂L_1[\tilde p; λ_k]/∂\tilde p = 0, yields

$$p(x) = \exp\left(-\lambda_0 - \sum_{j=1}^{M} \lambda_j \varphi_j(x)\right) = \frac{1}{Z_H} \exp\left(-\sum_{j=1}^{M} \lambda_j \varphi_j(x)\right), \tag{5}$$

where Z_H = exp(λ_0). Expression (5) is referred to as a maximum entropy probability density function. It constitutes the least biased probability density function compatible with the constraints given by Eqs. (1). By taking into account the normalization condition and by substituting Eq. (5) into Eq. (3), we obtain

$$Z_H = \exp(\lambda_0) = Z_H(\lambda_1, \lambda_2, \ldots, \lambda_M) = \int_a^b \exp\left(-\sum_{j=1}^{M} \lambda_j \varphi_j(x)\right) dx. \tag{6}$$

The latter quantity turns out to depend upon all the other Lagrange multipliers λ_j (j = 1, 2, …, M). The function Z_H(λ_1, λ_2, …, λ_M) is called the partition function. The Lagrange multipliers λ_j (j = 1, 2, …, M) can be determined as the solution of the following system of non-linear equations, obtained by substituting Eq. (5) into Eqs. (1) and taking into account Eq. (6):

$$\frac{1}{Z_H(\lambda_1, \lambda_2, \ldots, \lambda_M)} \int_a^b \varphi_j(x)\,\exp\left(-\sum_{h=1}^{M} \lambda_h \varphi_h(x)\right) dx = \mu_j \qquad (j = 1, 2, \ldots, M). \tag{7}$$

In the general case an analytical solution of Eqs. (7) is impossible and a numerical solution is called for.

In addition to the probabilistic information given by Eqs. (1), let us assume that an a priori probability density function q(x) ≅ p(x) is postulated, which approximates (even if crudely) the exact probability density function p(x). The minimum cross-entropy principle, or Kullback principle, states that the true


density p(x) minimizes the following Kullback–Leibler cross-entropy functional [19,20]

$$D[\tilde p(x); q(x)] = \int_a^b \tilde p(x)\,\log\frac{\tilde p(x)}{q(x)}\,dx \tag{8}$$

consistent with constraints (1). Eq. (8) introduces a measure of cross-entropy (or discrepancy) between the two probability densities. It is instructive to consider the case in which no information is given in terms of an a priori probability density. Then, as the a priori probability density the uniform density can be chosen, that is, the maximum entropy probability density when there are no constraints: q(x) = u(x) = 1/(b − a), defined in the range [a, b]. In this case, it can be easily shown that the cross-entropy functional becomes [19]

$$D[\tilde p(x); u(x)] = \log(b - a) - H[\tilde p(x)]. \tag{9}$$

Thus, when the a priori density is not specified, p(x) is chosen so as to maximize Shannon's measure, subject to p(x) satisfying the given constraints. It turns out that the Kullback principle reduces to the Jaynes principle. By introducing the extended unconstrained Lagrangian functional

$$L_2[\tilde p(x); q(x); \lambda_k] = \int_a^b \tilde p(x)\,\log\frac{\tilde p(x)}{q(x)}\,dx - (\lambda_0 - 1)\left(\int_a^b \tilde p(x)\,dx - 1\right) - \sum_{j=1}^{M} \lambda_j \left(\int_a^b \varphi_j(x)\,\tilde p(x)\,dx - \mu_j\right) \tag{10}$$

and by imposing the stationarity condition ∂L_2[\tilde p; q; λ_k]/∂\tilde p = 0, we obtain

$$p(x) = \exp\left(-\lambda_0 - \sum_{k=1}^{M} \lambda_k \varphi_k(x)\right) q(x) = \frac{1}{Z_D} \exp\left(-\sum_{k=1}^{M} \lambda_k \varphi_k(x)\right) q(x), \tag{11}$$

where Z_D = exp(λ_0). Eq. (11) is the general form of the minimum cross-entropy probability density function and constitutes the least biased probability density function with minimum distance (cross-entropy) from the a priori density function q(x), compatible with the constraints given by Eqs. (1). Note that the exponential function appearing in Eq. (11) can be considered as a corrective term used to find the a posteriori probability density function p(x) starting from the knowledge of the a priori probability density function q(x). By taking into account the normalization condition, we arrive at

$$Z_D = \exp(\lambda_0) = Z_D(\lambda_1, \lambda_2, \ldots, \lambda_M) = \int_a^b \exp\left(-\sum_{j=1}^{M} \lambda_j \varphi_j(x)\right) q(x)\,dx, \tag{12}$$

where Z_D(λ_1, λ_2, …, λ_M) is the partition function. The Lagrange multipliers λ_j (j = 1, 2, …, M) can be determined by solving the following system of non-linear equations, obtained by substituting Eq. (11) into Eqs. (1) and taking into account Eq. (12):

$$\frac{1}{Z_D(\lambda_1, \lambda_2, \ldots, \lambda_M)} \int_a^b \varphi_j(x)\,\exp\left(-\sum_{h=1}^{M} \lambda_h \varphi_h(x)\right) q(x)\,dx = \mu_j \qquad (j = 1, 2, \ldots, M). \tag{13}$$

It is instructive now to recall briefly the essentials of the stochastic linearization method, for our aim is to improve it by using entropy principles.

3. Gaussian stochastic linearization and the Gaussian closure technique

For the sake of simplicity, let us consider the following oscillator with a non-linear restoring force:

$$\ddot X + \beta \dot X + f(X) = \sqrt{2\beta}\,W(t), \tag{14}$$

where β > 0 is the damping coefficient and W(t) is a zero-mean white noise process with correlation function E[W(t)W(t + τ)] = d δ(τ), d being the intensity of the white noise and δ(τ) the Dirac delta function. The analysis will be confined to the case in which f(x) is an odd function of x.
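As a numerical sanity check on systems of the form of Eq. (14), the stationary response statistics can be estimated by direct Monte Carlo simulation. The sketch below uses a simple Euler–Maruyama scheme; the Duffing-type restoring force f(x) = x + εx³ and the values of β, d and the step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Monte Carlo sketch of Eq. (14) via the Euler-Maruyama scheme.
# The restoring force f(x) = x + eps*x**3 is a hypothetical example;
# beta, d, dt and the number of steps are assumed test values.
rng = np.random.default_rng(0)

def simulate_variance(f, beta=1.0, d=1.0, dt=1e-3, n_steps=500_000):
    """Estimate the stationary displacement variance of
    X'' + beta*X' + f(X) = sqrt(2*beta)*W(t), E[W(t)W(t+s)] = d*delta(s)."""
    x, v = 0.0, 0.0
    burn = n_steps // 10                      # discard the transient
    acc, n = 0.0, 0
    sigma = np.sqrt(2.0 * beta * d * dt)      # std of the noise increment
    for k in range(n_steps):
        dW = sigma * rng.standard_normal()
        # explicit Euler step; old (x, v) used on the right-hand side
        x, v = x + v * dt, v + (-beta * v - f(x)) * dt + dW
        if k >= burn:
            acc += x * x
            n += 1
    return acc / n

eps = 1.0
var = simulate_variance(lambda x: x + eps * x**3)
print(f"estimated stationary variance: {var:.3f}")
```

For the linear case f(x) = kx the estimate should approach the exact value σ² = d/k; for the hardening spring above it falls below that value, as expected.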


According to the stochastic linearization method, an approximate solution of Eq. (14) can be obtained by considering the following linearized system:

$$\ddot X + \beta \dot X + k_{eq} X = \sqrt{2\beta}\,W(t). \tag{15}$$

The equivalent stiffness k_{eq} is found by minimizing the difference between the two systems in a least-squares sense, that is,

$$\frac{d}{d k_{eq}}\,E\{[f(X) - k_{eq} X]^2\} = 0. \tag{16}$$

The following expression for k_{eq} is obtained:

$$k_{eq} = E[f(X)X]/E[X^2]. \tag{17}$$

Eq. (17) contains averages that cannot be evaluated, since the exact probability density function of the response process utilized in Eq. (17) is unknown. The Gaussian stochastic linearization consists in an approximate evaluation of these averages by assuming the Gaussian probability density function of the linearized system (15), i.e.

$$p_G(x) = \frac{1}{\sqrt{2\pi}\,\sigma_G}\,\exp[-x^2/(2\sigma_G^2)], \tag{18}$$

σ_G² being the variance of displacement. Then, Eq. (17) becomes

$$k_G = E_G[f(X)X]/\sigma_G^2, \tag{19}$$

where $E_G[g(X)] = \int_{-\infty}^{+\infty} g(x)\,p_G(x)\,dx$. For the linearized system (15) with k_{eq} = k_G, it can be easily shown that the equivalent variance σ_G² is given by

$$\sigma_G^2 = d/k_G. \tag{20}$$

By combining Eqs. (19) and (20), we obtain

$$E_G[f(X)X] = d, \tag{21}$$

which constitutes a non-linear algebraic equation for σ_G².

An alternative approach to the above method is given by the moment equation method in conjunction with the Gaussian closure technique. This approach is based on the theory of Markov processes, for which the joint probability density function p = p(x, ẋ) of the differential equation (14)


is the solution of the following forward Fokker–Planck equation:

$$\frac{\partial p}{\partial t} = -\dot x\,\frac{\partial p}{\partial x} + \frac{\partial}{\partial \dot x}\{[\beta \dot x + f(x)]\,p\} + \beta d\,\frac{\partial^2 p}{\partial \dot x^2}. \tag{22}$$

This equation fully characterizes the solution of the differential equation (14). Nevertheless, from Eq. (22) partial information about some averages of relevant quantities of the response process can be obtained. Indeed, let φ = φ(X, Ẋ) be a differentiable function of X and Ẋ. The differential equation governing the evolution of the average of φ(X, Ẋ) can be determined by multiplying Eq. (22) by φ(x, ẋ) and integrating over the state space. By performing some integrations by parts, the following equation is obtained:

$$\frac{\partial}{\partial t}\,E[\varphi] = E\!\left[\dot x\,\frac{\partial \varphi}{\partial x}\right] - E\!\left[[\beta \dot x + f(x)]\,\frac{\partial \varphi}{\partial \dot x}\right] + \beta d\,E\!\left[\frac{\partial^2 \varphi}{\partial \dot x^2}\right]. \tag{23}$$

In the stationary case, by setting Ė[φ] = 0, we obtain an algebraic equation that must be satisfied by the resulting averages. The moment equation approach in conjunction with the Gaussian closure technique deals with a particular choice of the function φ(X, Ẋ). Indeed, by setting φ(X, Ẋ) = X^j Ẋ^h, with j + h = 2, we obtain the following second-order moment equations (in the stationary state):

$$E[X \dot X] = 0,$$
$$-2\beta E[\dot X^2] - 2E[f(X)\dot X] + 2\beta d = 0, \tag{24}$$
$$E[\dot X^2] - \beta E[X \dot X] - E[f(X)X] = 0.$$

By taking into account the statistical independence of the processes X and Ẋ, it can be easily shown that E[Ẋ²] = d, and Eqs. (24) reduce to the single equation

$$E[f(X)X] = d. \tag{25}$$

This equation cannot be solved, since the unknown average E[f(X)X] appears in it. To obtain a solution, a closure technique is needed to approximate


this average in some way. A simple closure technique is the Gaussian one, whereby the Gaussian probability density function given by Eq. (18) is assumed in order to approximate this average. By comparing Eqs. (21) and (25), with E[g(X)] = E_G[g(X)], the equivalence of the Gaussian stochastic linearization and the Gaussian closure technique is evidenced, which constitutes a well-known result.

In what follows, the maximum entropy principle for stochastic dynamic problems is presented. It will be shown that in a particular case this method is equivalent to the Gaussian closure technique and, moreover, to Gaussian stochastic linearization. This will be a starting point for improving the Gaussian stochastic linearization results.

4. Maximum entropy method in stochastic dynamics

The maximum entropy principle provides a criterion to construct an approximation to the true probability density function from available information given in terms of averages. This criterion can be used in conjunction with the moment equation method in order to obtain an approximate solution of the non-linear system (14) [22]. To this end, let us consider the differential equations of the moments up to a given order n, obtained by setting φ(X, Ẋ) = X^j Ẋ^h in Eq. (23), with j + h ≤ n, according to the moment equation approach:

$$\dot E[X^j \dot X^h] = j E[X^{j-1} \dot X^{h+1}] - h\beta E[X^j \dot X^h] - h E[f(X)\,X^j \dot X^{h-1}] + h(h-1)\,\beta d\,E[X^j \dot X^{h-2}]. \tag{26}$$

It is well known that this system of differential equations (or algebraic equations in the stationary state) is not closed with respect to the moments. In general, moments of order greater than n or some unknown averages appear, due to the term E[f(X) X^j Ẋ^{h−1}]. Thus, it is not possible to solve this system of equations directly. The maximum entropy principle suggests constructing an approximate solution of this system by assuming the least biased probability density function under the constraints given by the moments up to nth order. This appears to represent a rational closure method for the moment equations. Indeed, by assuming the probability density in the form of Eq. (5), all moments and averages appearing in the moment equations can be expressed as functions of the Lagrange multipliers which, in turn, depend on the selected moments through Eqs. (7). Then the moment equations become a closed system of (non-linear) equations, whose unknowns are the Lagrange multipliers or the selected moments. Numerous applications of this method can be found in Chang [10] and Sobczyk and Trebicki [11,12].

Referring to zero-mean random processes defined in the range (−∞, +∞), it is of special interest to consider the case of a single constraint specified by the variance. Setting M = 1 and φ_1(x) = x²:

$$E[X^2] = \int_a^b x^2\,p(x)\,dx = \sigma^2. \tag{27}$$

In this case, the Lagrange multiplier λ_1 and Z_H can be evaluated in closed form as the solution of Eqs. (7) and (6):

$$\lambda_1 = 1/(2\sigma^2), \qquad Z_H = \sqrt{2\pi}\,\sigma. \tag{28}$$

The maximum entropy probability density consistent with constraint (27) is the Gaussian one with zero mean and variance σ². Then, the maximum entropy approach applied to stochastic dynamic problems considers the second-order moment differential equations (24) and performs a closure according to the maximum entropy probability density. A natural question arises: is there an equivalence between the maximum entropy approach and the Gaussian closure? The latter can be interpreted as the closure which assumes the maximum entropy probability density under the constraint given by Eq. (27) in order to close the moment equations up to second order. In a similar manner, the Gaussian stochastic linearization can be reinterpreted in the following novel fashion: among all stochastic linearizations that can be obtained by different choices of the probability density for the evaluation of the averages appearing in Eq. (17), the Gaussian stochastic linearization is the one that assumes the maximum


entropy probability density function under the constraint given by Eq. (27). In other words, the Gaussian stochastic linearization is the one that assumes the least biased probability density consistent with the single constraint on the variance, in order to evaluate the averages appearing in expression (17) for the equivalent coefficient k_{eq}. It is evident that the three methods are equivalent and lead to the same results.
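The Gaussian closure is easy to exercise numerically. The sketch below solves the GSL equation (21), E_G[f(X)X] = d, for the stationary variance by quadrature and root finding; the cubic restoring force f(x) = x + εx³ is a hypothetical example, and the closed-form comparison value follows from the Gaussian moment relation E_G[X⁴] = 3σ⁴.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Solve the GSL equation E_G[f(X) X] = d (Eq. (21)) for the variance.
# f(x) = x + eps*x**3 is a hypothetical example nonlinearity; the
# averages are taken under the zero-mean Gaussian density (18).

def gsl_variance(f, d=1.0):
    def residual(var):
        sig = np.sqrt(var)
        # E_G[f(X) X] computed by quadrature under N(0, var)
        integrand = lambda x: (f(x) * x * np.exp(-x**2 / (2 * var))
                               / (np.sqrt(2 * np.pi) * sig))
        return quad(integrand, -np.inf, np.inf)[0] - d
    return brentq(residual, 1e-6, 10.0)

eps = 1.0
var_g = gsl_variance(lambda x: x + eps * x**3)
# For this f, Eq. (21) reads var + 3*eps*var**2 = d, so in closed form:
var_exact = (-1 + np.sqrt(1 + 12 * eps)) / (6 * eps)
print(var_g, var_exact)
```

The quadrature-based root agrees with the closed-form solution of the quadratic equation, illustrating that the GSL reduces Eq. (21) to a simple algebraic equation for σ_G² once the Gaussian density is assumed.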

5. Maximum entropy approach for the local stochastic linearization

The above-mentioned considerations suggest the use of the maximum entropy principle to obtain a more accurate solution by considering additional constraints. To accomplish this aim, the following two constraints are considered:

$$E[|X|] = \int_a^b |x|\,p(x)\,dx = \eta, \qquad E[X^2] = \int_a^b x^2\,p(x)\,dx = \sigma^2, \tag{29}$$

where, in addition to Eq. (27), a constraint E[|X|] = η is introduced. The maximum entropy probability density resulting from the constrained maximization problem is given by

$$p(x) = \exp(-\lambda_0 - \lambda_1 |x| - \lambda_2 x^2) = \frac{1}{Z_H}\,\exp(-\lambda_1 |x| - \lambda_2 x^2), \tag{30}$$

where λ_0, λ_1 and λ_2 are the Lagrange multipliers and Z_H = exp(λ_0) = Z_H(λ_1, λ_2) is the partition function

$$Z_H(\lambda_1, \lambda_2) = \int_{-\infty}^{\infty} \exp[-\lambda_1 |x| - \lambda_2 x^2]\,dx = \sqrt{\frac{\pi}{\lambda_2}}\,\exp\!\left(\frac{\lambda_1^2}{4\lambda_2}\right)\left[1 - \mathrm{erf}\!\left(\frac{\lambda_1}{2\sqrt{\lambda_2}}\right)\right]. \tag{31}$$

In the definition of the partition function, the normalization condition has been taken into account. The Lagrange multipliers λ_1 and λ_2 can be evaluated as solutions of the following system of non-linear equations, determined by substituting Eq. (30) into the constraint Eqs. (29):

$$\frac{1}{\lambda_2\,Z_H(\lambda_1, \lambda_2)} - \frac{\lambda_1}{2\lambda_2} = \eta,$$
$$\frac{1}{2\lambda_2} + \frac{\lambda_1^2}{4\lambda_2^2} - \frac{\lambda_1}{2\lambda_2^2\,Z_H(\lambda_1, \lambda_2)} = \sigma^2. \tag{32}$$

The application of the maximum entropy principle in stochastic dynamics requires the knowledge of some average quantities that can be obtained from Eq. (23). In this case, we consider the following differential equations:

$$\dot E[|X|] = E[\dot X\,\mathrm{sign}(X)] = 0,$$
$$\dot E[\dot X\,\mathrm{sign}(X)] = 2E[\dot X^2 \delta(X)] - \beta E[\dot X\,\mathrm{sign}(X)] - E[f(X)\,\mathrm{sign}(X)] = 0,$$
$$\dot E[X^2] = 2E[X \dot X] = 0, \tag{33}$$
$$\dot E[X \dot X] = E[\dot X^2] - \beta E[X \dot X] - E[f(X)X] = 0,$$
$$\dot E[\dot X^2] = -2\beta E[\dot X^2] - 2E[f(X)\dot X] + 2\beta d = 0.$$

It can be easily shown that these equations reduce to the following two:

$$E[f(X)\,\mathrm{sign}(X)] = 2d\,E[\delta(X)], \tag{34a}$$
$$E[f(X)X] = d. \tag{34b}$$

Note that Eq. (34b) coincides with Eq. (25). The averages appearing in Eqs. (34) can be expressed in terms of the selected averages η and σ². Indeed, they can be evaluated by assuming the probability density function (30) in terms of the Lagrange multipliers λ_1 and λ_2 which, in turn, depend on η and σ² according to the non-linear relationships (32). This procedure represents the maximum entropy closure technique. Alternatively, if all the averages appearing in Eqs. (34) are evaluated by simply considering Eq. (30) as a function of λ_1 and λ_2, the system of Eqs. (34) will be closed with respect to the Lagrange multipliers.


The maximum entropy probability density function (30) can be rewritten in the following form:

$$p(x) = N\,\exp\!\left[-\frac{1}{d}\left(\gamma |x| + \frac{1}{2} k x^2\right)\right], \tag{35}$$

where N = N(γ, k) is a normalization constant and γ and k are given by

$$N(\gamma, k) = [Z_H(\gamma/d,\, k/(2d))]^{-1}, \qquad \gamma = d\lambda_1, \qquad k = 2d\lambda_2. \tag{36}$$

It can be easily shown that the probability density (35) is the stationary solution for the pair of differential equations first considered by Alaoui Ismaili and Bernard [21]:

$$\ddot X + \beta \dot X + kX + \gamma = \sqrt{2\beta}\,W(t), \qquad X > 0,$$
$$\ddot X + \beta \dot X + kX - \gamma = \sqrt{2\beta}\,W(t), \qquad X < 0. \tag{37}$$

In other words, the maximum entropy probability density consistent with the constraints given by Eqs. (29) is the solution of an oscillator with locally linear behavior. If γ ≡ 0, Eqs. (37) and (15) coincide: they reduce to the equation of the global linearization, i.e., Eqs. (34) reduce to Eq. (25). Here, in essence, we perform a non-linearization, by involving an additional parameter γ. This fact suggests replacing the original non-linear system by the following one:

$$\ddot X + \beta \dot X + kX + \gamma\,\mathrm{sign}(X) = \sqrt{2\beta}\,W(t), \qquad \forall X. \tag{38}$$

Alaoui Ismaili and Bernard [21] minimize the cross-entropy between the two systems. Here we follow another path. By minimizing the mean square error between the two systems given by Eqs. (14) and (38),

$$(\partial/\partial\gamma)\,E\{[f(X) - kX - \gamma\,\mathrm{sign}(X)]^2\} = 0,$$
$$(\partial/\partial k)\,E\{[f(X) - kX - \gamma\,\mathrm{sign}(X)]^2\} = 0, \tag{39}$$

we obtain

$$E[f(X)\,\mathrm{sign}(X)] = k\,E[|X|] + \gamma,$$
$$E[f(X)X] = k\,E[X^2] + \gamma\,E[|X|], \tag{40}$$

which lead to the following expressions for the coefficients of the equivalent locally linear system:

$$\gamma_L = \frac{E[f(X)X]\,E[|X|] - E[f(X)\,\mathrm{sign}(X)]\,E[X^2]}{(E[|X|])^2 - E[X^2]},$$
$$k_L = \frac{E[f(X)X] - E[f(X)\,\mathrm{sign}(X)]\,E[|X|]}{E[X^2] - (E[|X|])^2}. \tag{41}$$

In analogy with the Gaussian stochastic linearization, the averages appearing in these equations can be approximately evaluated by assuming the probability density function (35). This method will be referred to hereinafter as the local stochastic linearization method.

For theoretical and numerical purposes, it is relevant to verify whether the proposed local stochastic linearization is equivalent to the previously presented maximum entropy approach. To this end, let us consider the algebraic moment equations resulting from the application of Eq. (23) in the stationary state to the equivalent locally linear system. These equations reduce to

$$k_L\,E[|X|] + \gamma_L = 2d\,N(\gamma_L, k_L),$$
$$k_L\,E[X^2] + \gamma_L\,E[|X|] = d. \tag{42}$$

By substituting Eqs. (42) into Eqs. (40), we obtain Eqs. (34), where E[δ(X)] = N(γ_L, k_L). Therefore, the local stochastic linearization method turns out to be equivalent to the maximum entropy method in conjunction with the moment equation method, when the constraints are given in terms of the averages E[|X|] = η and E[X²] = σ². This result can be used to evaluate the averages appearing in Eqs. (41). Indeed, this problem is equivalent to numerically solving the non-linear equations (34) resulting from the application of the maximum entropy method.

6. An alternative local stochastic linearization by the minimum cross-entropy principle

It has been shown that the minimum cross-entropy principle allows determining the a posteriori probability density consistent with the


assigned constraints and having minimum distance from the a priori probability density. The minimum cross-entropy principle applied to random vibration problems avails itself of the moment equation approach and, like the maximum entropy principle, it also constitutes a rational closure method for the moment equations. However, the minimum cross-entropy principle takes into account additional probabilistic information in terms of an a priori probability density function. An application of this method can be found in Ricciardi and Lacquaniti [13]. This fact suggests assuming as the a priori probability density function the one that results from the application of a stochastic linearization method. In this respect, it is interesting to note that any one of several stochastic linearization methods can be used: Gaussian stochastic linearization, stochastic linearization based on energy criteria [23], or another stochastic linearization based on the moment approach [24].

Let k_* be the equivalent coefficient deriving from the application of a stochastic linearization method. The a priori probability density function is given as

$$q(x) = \frac{1}{\sqrt{2\pi}\,\sigma_*}\,\exp\!\left(-\frac{x^2}{2\sigma_*^2}\right), \tag{43}$$

σ_*² = d/k_* being the a priori variance. It can be easily shown that if the constraints are E[|X|] = η and E[X²] = σ², the minimum cross-entropy probability density reduces to the form of the maximum entropy probability density, and the two approaches are equivalent. On the contrary, of particular interest is the case in which solely the constraint E[|X|] = η is considered:

$$E[|X|] = \int_a^b |x|\,p(x)\,dx = \eta. \tag{44}$$

The minimum cross-entropy probability density function resulting from the optimization problem is

$$p(x) = \exp(-\lambda_0 - \lambda_1 |x|)\,q(x) = \frac{1}{Z_D(\lambda_1)}\,\frac{1}{\sqrt{2\pi}\,\sigma_*}\,\exp\!\left(-\lambda_1 |x| - \frac{x^2}{2\sigma_*^2}\right), \tag{45}$$

where λ_1 is the single Lagrange multiplier needed and Z_D = exp(λ_0) = Z_D(λ_1) is given as

$$Z_D(\lambda_1) = \int_{-\infty}^{\infty} \exp(-\lambda_1 |x|)\,q(x)\,dx = \exp\!\left(\frac{\lambda_1^2 \sigma_*^2}{2}\right)\left[1 - \mathrm{erf}\!\left(\frac{\lambda_1 \sigma_*}{\sqrt{2}}\right)\right]. \tag{46}$$

The Lagrange multiplier λ_1 can be evaluated as the solution of the following non-linear equation, determined by substituting Eq. (45) into the constraint Eq. (44):

$$\sqrt{\frac{2}{\pi}}\,\frac{\sigma_*}{Z_D(\lambda_1)} - \lambda_1 \sigma_*^2 = \eta. \tag{47}$$

In this case, only the first of the moment equations (34) is considered:

$$E[f(X)\,\mathrm{sign}(X)] = 2d\,E[\delta(X)]. \tag{48}$$

The averages appearing in Eq. (48) can be expressed in terms of the selected average E[|X|] = η. Indeed, they can be evaluated by assuming the probability density (45), which depends on the Lagrange multiplier λ_1; the latter, in turn, depends on η according to the non-linear relationship (47). This procedure constitutes the minimum cross-entropy closure technique, assuming E[|X|] = η as a moment constraint and the a priori probability density (43). If the averages appearing in Eq. (48) are evaluated by considering Eq. (45) as a function of λ_1, then Eq. (48) can be solved with respect to λ_1.

It can be easily shown that this method is equivalent to substituting the original non-linear system (14) by the following locally linear one:

$$\ddot X + \beta \dot X + k_* X + \gamma\,\mathrm{sign}(X) = \sqrt{2\beta}\,W(t), \qquad \forall X, \tag{49}$$

where γ = dλ_1. By minimizing the mean square error between the two systems given by Eqs. (14) and (49),

$$(\partial/\partial\gamma)\,E\{[f(X) - k_* X - \gamma\,\mathrm{sign}(X)]^2\} = 0, \tag{50}$$

we obtain

$$E[f(X)\,\mathrm{sign}(X)] = k_*\,E[|X|] + \gamma \tag{51}$$


and the equivalent coefficient γ_{L*} is given as

$$\gamma_{L*} = E[f(X)\,\mathrm{sign}(X)] - k_*\,E[|X|]. \tag{52}$$

The equivalence between this local stochastic linearization and the minimum cross-entropy method can be easily verified by considering that the differential equation for the average E[sign(X)Ẋ] for the linearized system (49) reduces to the following algebraic equation in the stationary state:

$$k_*\,E[|X|] + \gamma = 2d\,E[\delta(X)]. \tag{53}$$

If the coe2cient ' is evaluated according to Eq. (52), Eq. (53) reduces to Eq. (48). 7. Numerical applications 7.1. Power-law spring oscillator Let us consider the following non-linear oscillator with power-law spring: (54) X; + X˙ + g) |X |) sign(X ) = 2W (t); where g) is a positive constant, while the parameter ) ¿ 0 (and ) = 1) deFnes the degree of non-linearity of the restoring force and the features of the oscillator. Indeed, if 0 ¡ ) ¡ 1 or ) ¿ 1 the oscillator exhibits softening or hardening behavior, respectively. Note that Roberts and Spanos [1] considered such an oscillator by the Gaussian stochastic linearization method. The exact marginal probability density function of Eq. (54) is known. It is given by     *) 1=()+1) −1 ) + 2 1 + p); ex (x) = 2 )+1 )+1   )+1 |x| × exp −*) ; (55) )+1 where *) = g) =d and +(y) is the gamma function. The exact variance of displacement can be obtained by integration as follows:  2=()+1)     )+1 3 1 + +−1 : !);2 ex = *) )+1 )+1 (56)

It is instructive to consider the cases in which the parameter α tends to some limiting values. For α → 0 the probability density (55) becomes

    p_{0,ex}(x) = (δ₀/2) exp(−δ₀|x|),   (57)

and the variance is given by

    σ²_{0,ex} = 2/δ₀².   (58)

For α = 1 system (54) is linear, with Gaussian probability density and variance σ²_{1,ex} = 1/δ₁. For α tending to infinity, the probability density tends to be uniformly distributed in the range (−1, +1), with variance

    σ²_{∞,ex} = 1/3.   (59)

The Gaussian stochastic linearization method leads to the following approximate variance:

    σ²_{α,G} = [√π / (2^{(α+1)/2} δ_α Γ(1 + α/2))]^{2/(α+1)}.   (60)

The proposed local stochastic linearization based on the maximum entropy (MaxEnt) principle has been applied to approximate the variance of Eq. (54). The non-linear equations to be solved are

    g_α E[|X|^α] = 2d N(γ, k),   g_α E[|X|^{α+1}] = d,   (61)

where the averages appearing on the left-hand sides can be expressed in terms of γ and k by assuming the probability density of Eq. (35). It can easily be shown that

    E[|X|] = [2d N(γ, k) − γ]/k.   (62)

Moreover, for integer α ≥ 2 the following recursive formula, in which the parameter d appears, holds:

    E[|X|^α] = {(α − 1) d E[|X|^{α−2}] − γ E[|X|^{α−1}]}/k.   (63)

In Figs. 1 and 2 the variance σ²_{α,L} obtained by the proposed LSL method and the variance σ²_{α,G} obtained by the GSL are compared with the exact solution, in the ranges 1 < α < 10 and 0 < α < 1, respectively. The superior accuracy of the LSL approach is evident for each value of the parameter


Fig. 1. Variance of the power-law spring oscillator for 1 < α < 10: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by the maximum entropy (MaxEnt) principle compared with the exact solution.

Fig. 2. Variance of the power-law spring oscillator for 0 < α < 1: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by the maximum entropy (MaxEnt) principle compared with the exact solution.

α examined. Moreover, it is worth noting the high accuracy of the LSL in the range 0 < α < 1. This fact depends on the particular features of the probability density function of Eq. (35) assumed to close the moment equations. Indeed, for α → 0 the LSL leads to the exact variance σ²_{0,L} = σ²_{0,ex}, attained for the values γ = g₀ and k = 0. In this case, the probability density (35) reduces to the exact one given by Eq. (57). On the contrary, the GSL leads to the following limiting value:

    σ²_{0,G} = (π/2)/δ₀² ≅ 1.571/δ₀².   (64)
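The two limiting figures quoted in this section, a vanishing error at α = 1 and a 21.46% error as α → 0, follow directly from Eqs. (56), (60) and (64); indeed (2 − π/2)/2 ≈ 0.2146. A short numerical check, with δ_α set to 1 since its value cancels from the percentage error:

```python
import math

def exact_variance(alpha, delta):
    # Eq. (56)
    s = 1.0 / (alpha + 1.0)
    return ((alpha + 1.0) / delta) ** (2.0 * s) \
        * math.gamma(3.0 * s) / math.gamma(s)

def gsl_variance(alpha, delta):
    # Eq. (60): Gaussian stochastic linearization estimate
    base = math.sqrt(math.pi) / (2.0 ** ((alpha + 1.0) / 2.0)
                                 * delta * math.gamma(1.0 + alpha / 2.0))
    return base ** (2.0 / (alpha + 1.0))

def percent_error(alpha, delta=1.0):
    # Eq. (65) applied to the GSL estimate
    ex = exact_variance(alpha, delta)
    return abs(ex - gsl_variance(alpha, delta)) / ex * 100.0

print(percent_error(1.0))    # linear case: the GSL is exact
print(percent_error(1e-4))   # alpha -> 0: approaches 21.46%
```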


Fig. 3. Percentage error for the variance of the power-law spring oscillator for 1 < α < 10: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by the maximum entropy (MaxEnt) principle.

Fig. 4. Percentage error for the variance of the power-law spring oscillator for 0 < α < 1: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by the maximum entropy (MaxEnt) principle.

In Figs. 3 and 4 the percentage errors e_{α,·} (· = G, L) in the ranges 1 < α < 5 and 0 < α < 1 are portrayed, where

    e_{α,·}(%) = |σ²_{α,ex} − σ²_{α,·}| / σ²_{α,ex} × 100,   · = G, L.   (65)

Here the preferable accuracy of the LSL method is clearly seen. In particular, in the range 1 < α < 5 the percentage error of the LSL results is almost one-half of that given by the GSL. Moreover, in the range 0 < α < 1 the percentage error of the LSL results is less than 3%; on the contrary, the GSL always leads to higher percentage errors, with a maximum value of 21.46% attained for α → 0.

In Figs. 5 and 6, the probability density functions evaluated with the LSL method and with the GSL method are compared with the exact solutions, for two values of the parameter α (3 and 1/2, respectively). These results confirm that, in the case of the oscillator with power-law spring, the LSL yields better results than the GSL. This assertion can be extended to oscillators with any type of non-linearity, the LSL taking into account additional probabilistic information with respect to the GSL.

Fig. 5. Marginal probability density function of displacement of the power-law spring oscillator for α = 3: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by the maximum entropy (MaxEnt) principle compared with the exact solution.

Fig. 6. Marginal probability density function of displacement of the power-law spring oscillator for α = 1/2: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by the maximum entropy (MaxEnt) principle compared with the exact solution.

The oscillator with power-law spring has also been studied by the proposed LSL approach based on the minimum cross-entropy (MinxEnt) principle. In this case, the only non-linear equation to be solved is

    g_α E[|X|^α] = 2d N(γ, k*),   (66)

where the average E[|X|^α] can be expressed in terms of γ by assuming the probability density (45), with λ₁ = γ/d. For integer α ≥ 2, Eqs. (62) and (63) still hold, provided k = k*. The results relative to values α < 1 are not reported, being less accurate than those given by the GSL. This fact can be easily explained: the proposed minimum cross-entropy approach does not take into account information on the second-order moment, which reduces the computational effort to the solution of a single non-linear equation. As mentioned earlier, the good performance of the LSL method based on the maximum entropy principle in the range 0 < α < 1 is due to the fact that the approximate density (37) can represent the exact one well, having γ and k as free parameters. Indeed, as α decreases from 1 to 0, the parameter k decreases from the value g₁ to 0, while the parameter γ increases from 0 to the value g₀, and the approximate probability density tends to the exact one, Eq. (57). This specific feature of the LSL method based on the maximum entropy principle is not exhibited by the LSL method based on the minimum cross-entropy principle, which assumes a fixed value k = k* derived from a stochastic linearization. Nevertheless, the results obtained for α > 1 are of particular interest, as shown later. First, an a priori probability density function derived by a Gaussian stochastic linearization has been postulated, i.e. k* = k_G. Then, the LSL method based on the minimum cross-entropy principle has been applied, assuming an a priori probability density derived by the stochastic linearization criterion proposed by Kasakov [25]. This criterion requires that the mean square values of the non-linear restoring force f(X) and of the equivalent linear restoring force k_eq X be equal, that is E[f²(X)] = E[(k_eq X)²], which leads to the following expression for the equivalent spring coefficient:

    k_eq = √(E[f²(X)] / E[X²]).   (67)
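For a zero-mean Gaussian response, the standard absolute-moment formula E[|X|^p] = σ^p 2^{p/2} Γ((p+1)/2)/√π gives closed forms for both equivalent stiffnesses of the power-law spring f(x) = g_α|x|^α sign(x): the usual mean-square coefficient k = E[Xf(X)]/E[X²] and the Kasakov coefficient of Eq. (67). The sketch below (parameter values are arbitrary) illustrates that the two criteria coincide at α = 1 and that, by the Cauchy-Schwarz inequality, the Kasakov value is never the smaller one.

```python
import math

def abs_moment(p, sigma):
    # E[|X|^p] for X ~ N(0, sigma^2)
    return sigma ** p * 2.0 ** (p / 2.0) \
        * math.gamma((p + 1.0) / 2.0) / math.sqrt(math.pi)

def k_mean_square(alpha, g, sigma):
    # Usual criterion: k = E[X f(X)] / E[X^2]
    return g * abs_moment(alpha + 1.0, sigma) / sigma ** 2

def k_kasakov(alpha, g, sigma):
    # Eq. (67): k = sqrt(E[f(X)^2] / E[X^2])
    return math.sqrt(g * g * abs_moment(2.0 * alpha, sigma)) / sigma

print(k_mean_square(1.0, 2.0, 1.3), k_kasakov(1.0, 2.0, 1.3))  # both 2.0
print(k_mean_square(3.0, 1.0, 1.0), k_kasakov(3.0, 1.0, 1.0))  # 3.0, sqrt(15)
```

For a hardening spring (α > 1) the Kasakov stiffness is therefore strictly larger, which is consistent with the KSL curves lying apart from the GSL ones in Figs. 7 and 8.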


Fig. 7. Variance of the power-law spring oscillator for 1 < α < 10: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by applying the minimum cross-entropy (MinxEnt) principle with a priori p.d.f. given by Gaussian stochastic linearization (GSL) and by Kasakov stochastic linearization (KSL), compared with the exact solution.

Fig. 8. Percentage error for the variance of the power-law spring oscillator for 1 < α < 10: Gaussian stochastic linearization (GSL) results and local stochastic linearization (LSL) results by applying the minimum cross-entropy (MinxEnt) principle with a priori p.d.f. given by Gaussian stochastic linearization (GSL) and by Kasakov stochastic linearization (KSL).

In Fig. 7 the variance σ²_{α,L*} obtained by the LSL method based on the minimum cross-entropy principle and the variance σ²_{α,G} obtained by the GSL approach are compared with the exact solution, for the range 1 < α < 10. In Fig. 8 the percentage errors e_{α,·} (· = G, L*) are reported, for the range 1 < α < 5. From these figures it appears that for high non-linearity (α > 2) the proposed LSL method is more accurate than the GSL, and for α ≅ 3 the percentage error of the LSL method is negligible. Thus, for this value of α the LSL constitutes a true linearization, in the terminology of Kozin [5]. On the contrary, the GSL is more accurate for weak non-linearity (α < 2). It has to be remarked that the LSL with a priori probability density derived by the Kasakov stochastic linearization (KSL) tends to enlarge the range where the percentage error of the LSL is smaller than that associated with the GSL. Moreover, for high non-linearity (α > 5), the KSL-based version leads to percentage errors smaller than those given by the LSL with a priori probability density derived by the Gaussian stochastic linearization.

7.2. Two-wells Duffing oscillator

Let us consider the following two-wells Duffing oscillator:

    Ẍ + βẊ − X + εX³ = √(2β) W(t),   (68)

where ε is a positive constant, β is the damping coefficient and W(t) is a white noise with intensity d. This system has three equilibrium positions: one position of unstable equilibrium at X = 0 and two positions of stable equilibrium at ±1/√ε. The corresponding potential energy function has two potential wells, symmetrically disposed about the origin, with minima at ±1/√ε. The exact marginal probability density function of displacement is

    p_ex(x) = N exp[−(1/d)(−x²/2 + εx⁴/4)],   (69)

where N is a normalization constant. For low white noise excitation (d sufficiently small), the response of this oscillator tends to stay within one of the potential wells. In this case, the probability density (69) becomes bi-modal, exhibiting two symmetric bumps. For a high level of excitation, these bumps tend to disappear. Such an oscillator has been considered by Roberts and Spanos [1] by the Gaussian stochastic linearization, and by Alaoui Ismaili and Bernard [21] by the stochastic local linearization (in a different version than that proposed in this study). The Gaussian stochastic linearization leads to the following approximate value of the variance of displacement:

    σ²_G = [1 + √(1 + 12εd)]/(6ε).   (70)

Fig. 9. Variance of the two-wells Duffing oscillator versus the intensity of the white noise d: Gaussian stochastic linearization (GSL) results, local stochastic linearization (LSL) results by applying the maximum entropy (MaxEnt) principle and by following the approach proposed in Ref. [21], compared with the exact solution.

Fig. 10. Percentage error for the variance of the two-wells Duffing oscillator versus the intensity of the white noise d: Gaussian stochastic linearization (GSL) results, local stochastic linearization (LSL) results by applying the maximum entropy (MaxEnt) principle and by following the approach proposed in Ref. [21].

The local stochastic linearization method based on the maximum entropy principle will now be compared with the Gaussian stochastic linearization method and with the local stochastic linearization method proposed by Alaoui Ismaili and Bernard [21]. In Fig. 9 the approximate variances derived by the GSL, by the LSL following the approach of Alaoui Ismaili and Bernard, and by the proposed LSL based on the maximum entropy principle are compared with the exact solution, for various values of the intensity of the white noise. The analysis has been conducted by fixing ε at 10. In Fig. 10 the corresponding percentage errors with respect to the exact solution are reported. It appears that the GSL leads to results that are very far from the exact ones. On the contrary, both LSL approaches are capable of representing quite well the specific features of the considered oscillator for low levels of the excitation. In particular, for d → 0 the response of the oscillator tends to become deterministic, with the probability density characterized by two spikes located at ±1/√ε. In this case, the variance of the response process tends to the limit value 1/ε, which is correctly predicted by the LSL methods. On the contrary, the GSL method tends to the limit value 1/(3ε), with attendant percentage error reaching 66 2/3%. Note that the LSL method proposed by Alaoui Ismaili and Bernard [21] leads to better results than those associated with the LSL based on the maximum entropy principle. This fact can be easily explained: the former approach requires the knowledge of the exact probability density function, in order to minimize the cross-entropy between the original non-linear system and the locally linear one. Nevertheless, their method leads to approximate results. On the contrary, if the averages appearing in expressions (41) for the equivalent coefficients γ_L and k_L are evaluated by assuming the exact probability density function, the proposed LSL approach leads to the exact results for the averages E[|X|] and E[X²]. In Figs. 11(a)-(c) the marginal probability density function of displacement is shown, for different values of the intensity of the white noise. From this figure the good performance of the LSL methods is evidenced, especially for very low excitation. In particular, for small values of d, the probability density function tends to become equivalent to a normalized combination of two Gaussian probability densities, with variance d/k_L ≅ −d/(√ε γ_L), centered about the two stable equilibria located at −1/√ε and +1/√ε, according to the asymptotic analysis performed by Alaoui Ismaili and Bernard [21].


Fig. 11. Marginal probability density function of displacement of the two-wells Duffing oscillator for different values of the intensity of the white noise d: Gaussian stochastic linearization (GSL) results, local stochastic linearization (LSL) results by applying the maximum entropy (MaxEnt) principle and by following the approach proposed in Ref. [21], compared with the exact solution: (a) d = 0.01; (b) d = 0.005; (c) d = 0.002.
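The limiting behavior discussed for this oscillator can be checked numerically by integrating the exact density (69) and evaluating Eq. (70) for the case ε = 10 studied in the figures, with the small intensity d = 0.002 of Fig. 11(c); the quadrature grid is an ad hoc numerical choice of this sketch.

```python
import math

def duffing_exact_variance(eps, d, half_width=1.0, n=200001):
    # Variance of the exact density (69),
    # p(x) proportional to exp(-(-x^2/2 + eps*x^4/4)/d), by Riemann summation.
    h = 2.0 * half_width / (n - 1)
    norm = m2 = 0.0
    for i in range(n):
        x = -half_width + i * h
        w = math.exp((x * x / 2.0 - eps * x ** 4 / 4.0) / d)
        norm += w
        m2 += x * x * w
    return m2 / norm

def duffing_gsl_variance(eps, d):
    # Eq. (70)
    return (1.0 + math.sqrt(1.0 + 12.0 * eps * d)) / (6.0 * eps)

eps, d = 10.0, 0.002
print(duffing_exact_variance(eps, d))  # close to the limit 1/eps = 0.1
print(duffing_gsl_variance(eps, d))    # close to the limit 1/(3*eps)
```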

8. Conclusions

A new look at the Gaussian stochastic linearization method for non-linear systems under white noise input has been provided, and the equivalence of this method with the maximum entropy principle under second-order moment constraints has been highlighted. A local stochastic linearization method has been proposed, based on the substitution of the original non-linear system with an equivalent locally linear one, whose coefficients have been derived in a classical way by minimizing the mean square error between the two systems. The equivalence of the proposed local stochastic linearization method with the maximum entropy principle, with an "ad hoc" choice of moment constraints, has been evidenced. In addition, an alternative version of the local stochastic linearization method has been proposed, based on the minimum cross-entropy principle.

The numerical analyses performed on the power-law spring oscillator confirm that the two proposed local stochastic linearization methods lead to better results than those given by the Gaussian stochastic linearization method. The advantages of using the local stochastic linearization method based on the maximum entropy principle have been pointed out in the study of the two-wells Duffing oscillator. In contrast to the Gaussian stochastic linearization method, the proposed method is capable of representing the specific features of this system, which has three equilibrium positions.

Acknowledgements

This paper was written when Giuseppe Ricciardi served as a Visiting Professor at Florida Atlantic University. His work was supported by the


Ministero della Università e della Ricerca Scientifica e Tecnologica of Italy, Cofinanziamento MURST 2000 (Prof. Fabrizio Vestroni, Coordinator). The work by Isaac Elishakoff was supported in part by the NASA Glenn Research Center (Dr. C.C. Chamis, Program Director). These supports are gratefully appreciated.

References

[1] J.B. Roberts, P.D. Spanos, Random Vibration and Statistical Linearization, Wiley, Chichester, 1991.
[2] L. Socha, T.T. Soong, Linearization in analysis of nonlinear stochastic systems, Appl. Mech. Rev. 44 (1991) 399–422.
[3] G.I. Schuëller, H.J. Pradlwarter, Advances in the stochastic structural dynamics under the perspective of reliability estimation, in: L. Fryba, L. Naprstek (Eds.), Structural Dynamics EURODYN-99, Balkema, Rotterdam, 1999, pp. 267–272.
[4] I. Elishakoff, Stochastic linearization technique: a new interpretation and a selective review, Shock Vibr. Dig. 32 (2000) 179–188.
[5] F. Kozin, The method of statistical linearization for non-linear stochastic vibrations, in: F. Ziegler, G.I. Schuëller (Eds.), IUTAM Symposium on Nonlinear Stochastic Dynamic Systems, Springer, Berlin, 1988, pp. 45–56.
[6] A.N. Malakhov, Cumulant Analysis of Random Non-Gaussian Processes and their Transformations, Sovietskoe Radio, Moscow, 1978 (in Russian).
[7] V.V. Bolotin, Random Vibration of Elastic Systems, Martinus Nijhoff, The Hague, 1984.
[8] W.F. Wu, Y.K. Lin, Cumulant-neglect closure for nonlinear oscillators under parametric and external excitations, Int. J. Nonlinear Mech. 19 (1984) 349–362.
[9] M. Pawleta, L. Socha, Cumulant-neglect closure of nonstationary solutions of stochastic systems, J. Appl. Mech. (ASME) 57 (1990) 776–779.
[10] R.J. Chang, Maximum entropy approach for stationary response of non-linear stochastic oscillator, J. Appl. Mech. (ASME) 58 (1991) 266–271.
[11] K. Sobczyk, J. Trebicki, Maximum entropy principle in stochastic dynamics, Probab. Eng. Mech. 5 (3) (1990) 102–110.
[12] K. Sobczyk, J. Trebicki, Maximum entropy principle and non-linear stochastic oscillators, Physica A 193 (1993) 448–468.
[13] G. Ricciardi, S. Lacquaniti, Stationary p.d.f. of non-linear oscillators and the minimum cross-entropy principle, Proceedings of the Euromech 413 Colloquium on "Stochastic Dynamics of Non-linear Mechanical Systems" (CD-ROM), 12–14 June 2000, Palermo, Italy, 2000.
[14] A.I. Khinchin, Mathematical Foundations of Information Theory, Dover, New York, 1957.
[15] E.T. Jaynes, Information theory and statistical mechanics, Phys. Rev. 106 (1957) 620–630.
[16] S. Kullback, Information Theory and Statistics, Wiley, New York, 1959.
[17] E.T. Jaynes, Where do we stand on maximum entropy? in: R.D. Levine, M. Tribus (Eds.), The Maximum Entropy Formalism, Massachusetts Institute of Technology, 1979, pp. 15–118.
[18] A. Katz, Statistical Mechanics, Freeman, San Francisco, 1967.
[19] J.N. Kapur, H.K. Kesavan, Entropy Optimization Principles with Applications, Academic Press, London, 1992.
[20] S. Kullback, R.A. Leibler, On information and sufficiency, Ann. Math. Statist. 22 (1951) 79–86.
[21] M. Alaoui Ismaili, P. Bernard, Asymptotic analysis and linearization of the randomly perturbed two-wells Duffing oscillator, Probab. Eng. Mech. 12 (3) (1997) 171–178.
[22] P. Hick, G. Stevens, Approximate solutions to the cosmic ray transport equation: the maximum entropy method, Astron. Astrophys. 172 (1987) 350–358.
[23] X.T. Zhang, I. Elishakoff, R.Ch. Zhang, A stochastic linearization technique based on minimum mean square deviation of potential energies, in: Y.K. Lin, I. Elishakoff (Eds.), Stochastic Structural Dynamics – New Theoretical Developments, Springer, Berlin, 1991, pp. 327–338.
[24] I. Elishakoff, Multiple combinations of the stochastic linearization criteria by the moment approach, J. Sound Vibr. 237 (3) (2000) 550–559.
[25] I.E. Kasakov, An approximate method for the statistical investigation of nonlinear systems (in Russian), Tr. Voenno-Vozdushnoi Inzh. Akad. imeni Prof. N.E. Zhukovskogo 399 (1954) 1–52.
