Restricted simplicial decomposition with side constraints

June 13, 2017 | Author: Angel Marín | Categories: Applied Mathematics, Networks, Numerical Analysis and Computational Mathematics



Restricted Simplicial Decomposition with Side Constraints. Angel Marín, Departamento de Matemática Aplicada, Universidad Politécnica de Madrid, E.T.S. Ingenieros Aeronáuticos, Plaza Cardenal Cisneros, 3, Madrid 28040, Spain

Restricted Simplicial Decomposition with Side Constraints is a price decomposition method designed for large-scale nonlinear problems with a set of linear constraints with special structure and an additional set of linear side constraints. The resultant algorithm iterates by solving a linear subproblem subject to the set of structured constraints and a small nonlinear master problem whose feasible region is defined by the intersection of a simplex and the set of side constraints. The number of variables of the master problem is controlled by a parameter named r. The conditions required for the parameter r and the rules for dropping extreme points of the simplex are established for global convergence of the algorithm. Computational results are presented for nonlinear network problems with side constraints. © 1995 John Wiley & Sons, Inc.

1. INTRODUCTION

In large convex optimization models with linear constraints, it is common that some constraints have special structures, so that, if the other constraints are not considered, the resultant model can be efficiently solved by algorithms adapted to the special structures. The constraints with structure are called the easy ones and the others the difficult ones or side constraints. Networks with side constraints are clear examples of models where the two types of constraints are present. In this case, the easy constraints are the network constraints, which typically are unicommodity or multicommodity. The unicommodity network constraints appear, for instance, in problems of electricity distribution (Gascón et al. [8]) or in problems of optimal scheduling of a power system consisting of hydro and thermal units (Oliveira et al. [25]). A combination of network constraints with a large number of different side constraints is used in these instances. Multicommodity network constraints appear, for instance, in transport planning or in telecommunication

NETWORKS, Vol. 26 (1995) 199-215 © 1995 John Wiley & Sons, Inc.

networks. In traffic assignment problems, in order to formulate user equilibrium, the models are defined using nonlinear objective functions with multicommodity constraints, but, frequently, it is also necessary to consider arc capacities. These models are formulated, e.g., when safety or traffic control is studied (Gartner et al. [7] and Marín and Pablos [22]). Heuristics or extensions of the Frank and Wolfe method have been defined to study these models (Daganzo [2], Hearn and Ribera [10], and Marín [20]). In medium-term planning of transmission networks, it is necessary to design the network so that the capacity available on transmission sections will be utilized optimally. The multicommodity minimal-cost flow model with network, reliability, and capacity constraints may be solved with this type of method (Marín [21]). The forestry-cutting problem, where a large number of woodlands are to be scheduled for cutting over an n-year plan, can be formulated by a generalized transportation model with upper-bounded slack variables: By fixing some constraints, it is possible to obtain subproblems that are transportation models (Lasdon [19]).

CCC 0028-3045/95/040199-17

The constrained matrix problem which arises in the estimation of social accounting matrices is characterized by transportation constraints. Nagurney and Eydeland [24] formulated the problem as a nonlinear network flow problem with transportation-type constraints, where, if the sum of the row or column elements is not considered, a subproblem is obtained with a structure very similar to the one governing the spatial price equilibrium. Dantzig and Wolfe [3] were the first authors who used the scheme of price decomposition to separate the conditions of a linear problem. Geoffrion [9] described this type of method as an inner linearization. In a nonlinear context, Holloway [13] extended the Frank and Wolfe method, which uses only information of the last generated extreme point (line search) to determine the next iterate, to take into account all the previously generated linear solutions (simplicial decomposition). Von Hohenbalken [31] proved the finite convergence of the simplicial decomposition method for pseudoconvex functions and polytopes. The extension of the simplicial decomposition taking into account side constraints in a nonlinear problem has been studied by Rutenberg [27] and Shen Hsia [28]. Dembo et al. [4] and Kennington and Whisman [16] developed linear price decomposition to obtain the separation between network and side constraints. Hearn et al. [11, 12] developed a restricted extension of the simplicial decomposition, which uses only a small number of extreme points in each iteration of a master problem. They proved the finite and global convergence of the method, supposing that the above number is greater than the dimension of the optimal face of the feasible set. Ventura and Hearn [30] extended restricted simplicial decomposition to convex constrained problems, proving finite and global convergence. Kontogiorgis [18] solved block-angular problems with side constraints by decomposition using alternating direction methods, which can be thought of as block Gauss-Seidel iterative schemes for an augmented Lagrangian that exploit the block structure. Other methods developed for nonlinear network problems which consider network and side constraints simultaneously can be found in Bertsekas [1], Escudero [5], Kamesan and Meyer [14], and Toint and Tuyttens [29].

Problem formulation, notation, and the proposed algorithm are described first. Second, global convergence is proved, with special attention to the existence of a feasible descent direction. Some conditions have been necessary: The number of retained extreme points must be greater than the number of difficult conditions plus one, a feasible basis of the master problem must exist, and a new criterion is needed to determine the vector which leaves the basis. Taking into account that the proposed algorithm needs initial data impossible to obtain in applied problems, a heuristic method is defined to start the algorithm. To study the applicability of the proposed method, the network context is used because of its many applications. Computational experience is given for unicommodity and multicommodity networks, with equality and inequality side constraints. Termination criteria have been defined with different measures of the proximity to the optimal solution.

2. THE PROBLEM

The following problem is considered:

    (P1)  Min f(x)
          s.t.  A_1 x = b_1,
                A_2 x = b_2,
                l ≤ x ≤ u,

where A_1 is an m × n matrix, A_2 is an m' × n matrix, b_1 ∈ ℝ^m, b_2 ∈ ℝ^{m'}, x, l, u ∈ ℝ^n, and f(x) is a continuously differentiable pseudoconvex function. The linear condition set F can be decomposed into F_2, the set of easy conditions:

    F_2 = {x : A_2 x = b_2, l ≤ x ≤ u},

where l and u are the lower and upper bounds, and F_1, the set of difficult conditions:

    F_1 = {x : A_1 x = b_1}.

Let W_p be the set of retained extreme points of the set F_2 and W_x be a set that may be empty or contain one of the iterates. The set W is defined by

    W = W_p ∪ W_x.

Let H(W) be the convex hull of W, i.e.,

    H(W) = {x : x = Σ_{i=0}^{p} α(z^i) z^i;  Σ_{i=0}^{p} α(z^i) = 1;  α(z^i) ≥ 0, with z^i ∈ W},

where α(z^i) is the weight for z^i. Let H(W^{k+1}) be the convex hull of W^{k+1} at iteration k. F_1(k), a subset of F_1, is defined by

    F_1(k) = {x ∈ H(W^{k+1}) : A_1 x = b_1}.
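To make the decomposition concrete, the following sketch builds a toy instance of (P1) and checks membership in the easy set F_2 and the difficult set F_1 separately. The network, matrices, and all numbers are invented for illustration and are not from the paper:

```python
import numpy as np

# Toy instance: a 4-arc network (easy constraints A2 x = b2 with bounds)
# plus one linear side constraint A1 x = b1 coupling two arcs.
# Arcs: 0:(s->a), 1:(s->b), 2:(a->t), 3:(b->t)
A2 = np.array([  # node-arc incidence rows for nodes s, a, b, t
    [ 1.0,  1.0,  0.0,  0.0],   # s: total outflow
    [-1.0,  0.0,  1.0,  0.0],   # a: flow conservation
    [ 0.0, -1.0,  0.0,  1.0],   # b: flow conservation
    [ 0.0,  0.0, -1.0, -1.0],   # t: total inflow
])
b2 = np.array([4.0, 0.0, 0.0, -4.0])    # ship 4 units from s to t
l, u = np.zeros(4), np.full(4, 3.0)     # arc capacity bounds

A1 = np.array([[1.0, -1.0, 0.0, 0.0]])  # side constraint: x0 = x1
b1 = np.array([0.0])

def in_F2(x):  # easy conditions: network flow + bounds
    return bool(np.allclose(A2 @ x, b2) and np.all(l <= x) and np.all(x <= u))

def in_F1(x):  # difficult (side) conditions
    return bool(np.allclose(A1 @ x, b1))

x = np.array([2.0, 2.0, 2.0, 2.0])
print(in_F2(x) and in_F1(x))  # x lies in F = F1 ∩ F2
```

A point such as (3, 1, 3, 1) satisfies the network constraints but violates the side constraint, which is exactly the situation the decomposition exploits.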

Other elements of the notation of the proposed algorithm will be:

    p = |W_p|: the cardinality of the set W_p.
    ∇f(x): the gradient of f(x) at x.
    B: the basis matrix of the master problem.
    r: the number of retained extreme points.
    xy: the inner product of the vectors x and y.
    u: the dual variable w.r.t. A_1 x = b_1, defined by

        (u, u') = (∇f(x) Z) B^{-1},

where u' is the dual variable w.r.t. Σ_{i=0}^{p} α(z^i) = 1 and Z is a matrix whose columns are extreme points of the set F_2.

3. RESTRICTED SIMPLICIAL DECOMPOSITION WITH SIDE CONSTRAINTS ALGORITHM

Restricted Simplicial Decomposition with Side Constraints (SCRSD) is a decomposition method based on the interchange of prices (Lagrange multipliers) and primal solution information between a nonlinear master problem and a linear subproblem. The master problem (MP) finds the optimum over the intersection of the convex hull of a previous iterate and a subset of extreme points generated by the subproblem with the side constraints. Note that the feasible solution set of MP(k) is the set F_1(k) defined above. The MP yields the side-constraint prices, which are used by the subproblem (SP) to generate the extreme points. Thus, the SCRSD algorithm is defined by:

STEP 0. Set r ≥ m + 1, assuming that m < n. Set |W_p^0| ∈ [m + 1, r]. Set x^0 ∈ F with x^0 ∈ H(W_p^0), where H(W_p^0) is a simplex formed by p extreme points of F_2. Let u^0 be the dual variable w.r.t. A_1 x = b_1 at x^0. Set W_x^0 = {x^0}, k = 0, and go to Step 1.

STEP 1. Solve SP(k):

    Min [∇f(x^k) − u^k A_1](y − x^k),  s.t.  A_2 y = b_2,  y ∈ [l, u].

Let y^k be an optimal solution to SP(k). If [∇f(x^k) − u^k A_1](y^k − x^k) ≥ 0, then x^k is an optimal solution of P1: stop. Otherwise, go to Step 2.

STEP 2. If |W_p^k| < r: Set W_p^{k+1} = W_p^k ∪ {y^k} and W_x^{k+1} = W_x^k. If |W_p^k| = r: Obtain the extreme point z^j which leaves W_p^k by the criterion

    j ∈ arg min {α^k(z^i)/s_i^k : s_i^k > 0, ∀i = 1, ..., m + 1},  where  s^k = B_k^{-1}(A_1 y^k, 1)^T

and B_k is the basis of the previous master problem. Set W_x^{k+1} = {x^k} and W_p^{k+1} = W_p^k ∪ {y^k} − {z^j}.

STEP 3. Set W^{k+1} = W_x^{k+1} ∪ W_p^{k+1}, and solve MP(k):

    Min  f(Σ_{i=0}^{p} α(z^i) z^i)
    s.t. (A_1 z^0, ..., A_1 z^p)(α(z^0), ..., α(z^p))^T = b_1,
         Σ_{i=0}^{p} α(z^i) = 1,
         α(z^i) ≥ 0, ∀i = 0, ..., p,

where the z^i are extreme points belonging to the set W^{k+1}. Let x^{k+1} be an optimal solution of MP(k), and let u^{k+1} be a set of optimal dual variables w.r.t. the first set of constraints of MP(k). Discard every z^i with zero weight from W^{k+1}, increase k by 1, and go back to Step 1.

4. GLOBAL CONVERGENCE

The following definition and properties of simplices from Rockafellar [26] will be necessary: Let {z^0, z^1, ..., z^p} be p + 1 distinct points in ℝ^n, with p ≤ n, where the vectors z^1 − z^0, ..., z^p − z^0 are linearly independent. Then, the set H(W) = H(z^0, z^1, ..., z^p), the convex hull of {z^0, z^1, ..., z^p}, is called a p-simplex in ℝ^n. Since H(W) is always contained in a manifold of dimension p, a p-simplex is said to have dimension p. If x is an element of a p-simplex H(W), then x can be uniquely expressed as a convex combination of the points z^0, z^1, ..., z^p defining H(W), i.e.,

    x = Σ_{i=0}^{p} α(z^i) z^i;  Σ_{i=0}^{p} α(z^i) = 1;  α(z^i) ≥ 0, ∀i = 0, 1, ..., p,

and α(z^0), ..., α(z^p) are unique. If x is an element of a p-simplex H(W) and the convex weight α(z^j), for some j ∈ {0, 1, ..., p}, is positive in the expression of x as a convex combination of z^0, z^1, ..., z^p, then H(z^0, z^1, ..., z^{j−1}, x, z^{j+1}, ..., z^p) is also a p-simplex. If H(z^0, z^1, ..., z^p) is a p-simplex, then H(z^0, z^1, ..., z^{j−1}, z^{j+1}, ..., z^p), for some j ∈ {0, 1, ..., p}, is a (p − 1)-simplex. To prove global convergence, we first show, after some initial lemmas, that, given an optimal solution x^k of the master problem, there always exists a feasible descent direction.
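As a concrete illustration, the SCRSD iteration described above can be sketched in Python. This is a simplified, hedged sketch, not the paper's implementation: the master problem is solved with a general NLP solver, the side-constraint prices are estimated by a least-squares fit of the master stationarity conditions (in place of the exact basis formula (u, u') = (∇f(x)Z)B^{-1}), and the min-ratio dropping rule is replaced by discarding the retained point with the smallest master weight. The problem data in the usage example are invented:

```python
import numpy as np
from scipy.optimize import linprog, minimize

def scrsd(f, grad, A1, b1, A2, b2, l, u, W0, r=6, max_iter=30, tol=1e-7):
    """Simplified SCRSD sketch: easy set F2 = {A2 x = b2, l <= x <= u};
    side constraints A1 x = b1 are handled inside the master problem."""
    W = [np.asarray(z, float) for z in W0]   # retained extreme points of F2

    def solve_master(W):
        # MP(k): min f(Z a)  s.t.  A1 (Z a) = b1, sum(a) = 1, a >= 0
        Z = np.column_stack(W)
        p = Z.shape[1]
        cons = [{"type": "eq", "fun": lambda a: A1 @ (Z @ a) - b1},
                {"type": "eq", "fun": lambda a: np.sum(a) - 1.0}]
        res = minimize(lambda a: f(Z @ a), np.full(p, 1.0 / p),
                       jac=lambda a: Z.T @ grad(Z @ a),
                       bounds=[(0.0, 1.0)] * p, constraints=cons,
                       method="SLSQP")
        return Z @ res.x, res.x, Z

    x = None
    for _ in range(max_iter):
        x, a, Z = solve_master(W)
        # Estimate the side-constraint price u from MP stationarity:
        # [A1 z_i, 1] (u, u') ~ z_i . grad f(x) over the retained points.
        M = np.column_stack([(A1 @ Z).T, np.ones(Z.shape[1])])
        uk = np.linalg.lstsq(M, Z.T @ grad(x), rcond=None)[0][:-1]
        # SP(k): priced linear subproblem over the easy set F2 only.
        c = grad(x) - A1.T @ uk
        y = linprog(c, A_eq=A2, b_eq=b2, bounds=list(zip(l, u))).x
        if c @ (y - x) >= -tol:                 # Step-1 optimality test
            break
        if any(np.allclose(y, w) for w in W):   # no new extreme point
            break
        W.append(y)
        if len(W) > r:                          # simplified dropping rule
            W.pop(int(np.argmin(a)))
    return x

# Usage: min ||x - (2,2,2)||^2 over {sum(x) = 3, 0 <= x <= 3} with the
# side constraint x_1 = 0; the optimum is (0, 1.5, 1.5).
t = np.array([2.0, 2.0, 2.0])
A1, b1 = np.array([[1.0, 0.0, 0.0]]), np.array([0.0])
A2, b2 = np.array([[1.0, 1.0, 1.0]]), np.array([3.0])
W0 = [np.array([3.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0]),
      np.array([0.0, 0.0, 3.0])]
x_opt = scrsd(lambda x: np.sum((x - t) ** 2), lambda x: 2.0 * (x - t),
              A1, b1, A2, b2, np.zeros(3), np.full(3, 3.0), W0)
```

The design choice to keep the side constraints out of the subproblem is the point of the method: the SP remains a pure linear program over the structured set F_2, while the coupling constraints only appear in the small master problem over the simplex weights.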

Lemma 4.1. x* is an optimal solution to (P1), where f(x) is a pseudoconvex function, if and only if

    ∇f(x*)(x − x*) ≥ 0, ∀x ∈ F.

Proof: This is a standard nonlinear programming result.

Lemma 4.2. x* is an optimal solution to (P1) if and only if

    [∇f(x*) − u*A_1](x − x*) ≥ 0, ∀x ∈ F_2,

where u* is a vector of optimal Lagrange multipliers corresponding to A_1 x = b_1.

Proof: If x* is an optimal solution to (P1), the Karush-Kuhn-Tucker (KKT) conditions must hold. The following problem (P2) has the same solution as (P1), since both have the same KKT conditions:

    (P2)  Min {f(x) − u*A_1 x}, ∀x ∈ F_2;

therefore, by Lemma 4.1, x* satisfies the following condition:

    [∇f(x*) − u*A_1](x − x*) ≥ 0, ∀x ∈ F_2.

Conversely, the argument follows the reverse order.

Lemma 4.3. Given x̄ ∈ arg min {f(x) : x ∈ H(W), A_1 x = b_1}, let ū be a vector of optimal Lagrange multipliers to A_1 x̄ = b_1. Then, if for some z^j ∈ W, (∇f(x̄) − ūA_1)(z^j − x̄) > 0, we have α(z^j) = 0.

Proof: (By contradiction.) Assuming that α(z^j) > 0, define z as

    z = x̄ + [α(z^j)/(1 − α(z^j))](x̄ − z^j).

z is an element of H(W), because

    z = Σ_{i=0, i≠j}^{p} [α(z^i)/(1 − α(z^j))] z^i  and  Σ_{i=0, i≠j}^{p} α(z^i)/(1 − α(z^j)) = (1 − α(z^j))/(1 − α(z^j)) = 1.

The following problem MP1(k) has the same solution as that of MP(k), since both have the same KKT conditions:

    MP1(k)  Min {f(x) − ūA_1 x}, ∀x ∈ H(W).

Therefore, d = z − x̄ is a feasible direction for MP1(k). However,

    (∇f(x̄) − ūA_1)d = (∇f(x̄) − ūA_1)(z − x̄) = [α(z^j)/(1 − α(z^j))](∇f(x̄) − ūA_1)(x̄ − z^j) < 0,

i.e., d is a feasible descent direction for MP1(k) and x̄ is not optimal, which is a contradiction.

The following lemma and theorem ensure that the sets H(W^k) generated by the SCRSD algorithm are simplices.

Lemma 4.4. Let H(W) = H(z^0, z^1, ..., z^p) be a p-simplex which is also a subset of the convex feasible set F_2. Furthermore, let x̄ ∈ arg min {f(x) : x ∈ H(W), A_1 x = b_1}, with ū the corresponding vector of optimal Lagrange multipliers, and assume

    x̄ = Σ_{i=0}^{p} α(z^i) z^i,  Σ_{i=0}^{p} α(z^i) = 1,  α(z^i) > 0, ∀i = 0, 1, ..., p.

If x̄ is not a minimizer of f(x) over F, then there exists a y ∈ F_2 such that (∇f(x̄) − ūA_1)(y − x̄) < 0 and the following set is a (p + 1)-simplex: H(z^0, z^1, ..., z^p, y).

Proof: (By contradiction.) Assume that H(z^0, z^1, ..., z^p, y) is not a (p + 1)-simplex, i.e., the vectors z^1 − z^0, ..., z^p − z^0, y − z^0 are linearly dependent. In other words, there exist coefficients α'(z^1), ..., α'(z^p) which satisfy

    y − z^0 = Σ_{i=1}^{p} α'(z^i)(z^i − z^0).

Then,

    y = Σ_{i=0}^{p} α'(z^i) z^i,  where  α'(z^0) = 1 − Σ_{i=1}^{p} α'(z^i).

Moreover, it follows from the optimality of x̄ that

    (∇f(x̄) − ūA_1)(z^i − x̄) ≥ 0, ∀i = 0, 1, ..., p,

and, from Lemma 4.3, as α(z^i) > 0, ∀i = 0, 1, ..., p,

    (∇f(x̄) − ūA_1)(z^i − x̄) = 0, ∀i = 0, 1, ..., p.

Since (y − x̄) is a descent direction over F_2, we have

    [∇f(x̄) − ūA_1](y − x̄) < 0,

and, by Lemma 4.2, (y − x̄) is also a descent direction over F:

    [∇f(x̄) − ūA_1](y − x̄) < 0.

However,

    [∇f(x̄) − ūA_1](y − x̄) = [∇f(x̄) − ūA_1](Σ_{i=0}^{p} α'(z^i) z^i − x̄) = Σ_{i=0}^{p} α'(z^i)(∇f(x̄) − ūA_1)(z^i − x̄) = 0.

This is a contradiction, and H(z^0, z^1, ..., z^p, y) must be a (p + 1)-simplex.

Theorem 4.1. In the SCRSD algorithm, the set H(W^k) is a simplex for all k ≥ 0.

Proof: (By induction.) H(W^k) is a simplex at the start of Step 1 when k = 0: W_p^0 = {z^1, ..., z^p} and W_x^0 = {x^0}; therefore, W^0 = {x^0, z^1, ..., z^p} is a p-simplex. Assuming that H(W^k) is a p-simplex for k ≥ 0, it must be shown that H(W^{k+1}) is also a simplex. Assume, without loss of generality, that W_p^k = {z^1, ..., z^p}, W_x^k = {x'}, and W^k = W_p^k ∪ W_x^k defines a p-simplex. The weights of these elements in the representation of x^k as their convex combination are positive; otherwise, they would be discarded at the end of Step 3. Since [∇f(x^k) − u^k A_1](y^k − x^k) ≥ 0 implies that the algorithm must stop and W^{k+1} is not generated, it is assumed below that [∇f(x^k) − u^k A_1](y^k − x^k) < 0. In Step 2, if no element has to be discarded (|W_p^k| < r), then, from Lemma 4.4, the newly defined W^{k+1} produces a (p + 1)-simplex. If an element of W^k has to be discarded, the new updated set defines a (p − 1)-simplex, and the additional element generates a p-simplex. At the end of Step 3, all zero-weighted elements of W^{k+1} are discarded, and the properties of the simplices guarantee that the remaining elements still define a simplex. Since the set W^{k+1} at the end of Step 3 is the same as the one at the beginning of Step 1, the theorem is proved.

A feasible descent direction will be characterized by the set of points which belong to the feasible set of the master problem, F_1(k), at iteration k, but do not belong to the convex hull of the extreme points, H(W^k), at iteration k − 1. Lemma 4.5 establishes the representation of the points of the above set as a function of the extreme points.

Lemma 4.5. Any point belonging to F_1(k) = {x ∈ H(W^{k+1}) : A_1 x = b_1} and not to H(W^k) can be written as

    x = Σ_{i=0}^{p(k+1)−1} α(z^i) z^i + α(y^k) y^k,  with α(y^k) > 0 and α(z^i) ≥ 0, ∀i = 0, 1, ..., p(k + 1) − 1,

where p(k) is the cardinality of the set W^k.

Proof: (By contradiction.) Assuming that there exists an x ∈ F_1(k) such that x ∉ H(W^k) and for which α(z^{p(k+1)}) = α(y^k) = 0 in the representation of x as a convex combination of the elements of W^{k+1}, there are two possibilities: (i) x is a convex combination of the extreme points retained from W^k, or (ii) x is a convex combination of those points and of x', where x' is the last optimal solution of the master problem not discarded so far. As x^k ∈ H(W^k) = H(x', z^1, ..., z^{p(k)}), it admits a representation as a convex combination of the elements of W^k; then, in both cases, x admits a representation of type (ii), or, what is the same, x ∈ H(W^k), because α(y^k) = 0, against the hypothesis. Conversely, also by contradiction, if x belongs to H(W^k) and to H(W^{k+1}), obviously, α(y^k) will be zero, and this is a contradiction.
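The unique-representation property used throughout this section can be checked numerically: for a p-simplex, the convex weights of a point are the unique solution of a small linear system stacking the vertices and the normalization row. A brief illustration with invented points:

```python
import numpy as np

# Columns are the vertices z^0, z^1, z^2 of a 2-simplex in R^3
# (affinely independent, invented data).
Z = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]]).T

# A point of the simplex with known weights (0.2, 0.3, 0.5).
x = 0.2 * Z[:, 0] + 0.3 * Z[:, 1] + 0.5 * Z[:, 2]

# Solve [Z; 1^T] a = [x; 1] for the (unique) convex weights a.
A = np.vstack([Z, np.ones((1, Z.shape[1]))])
b = np.concatenate([x, [1.0]])
a, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(a, 6))  # recovers the weights (0.2, 0.3, 0.5)
```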

Lemma 4.6. Let x^k be an optimal solution of the master problem MP(k − 1); then, x^k is also a feasible solution of the master problem MP(k).

Proof: Two cases may be distinguished: (i) If |W_p^k| < r, then H(W^k) ⊂ H(W^{k+1}) and, so, F_1(k − 1) ⊂ F_1(k). (ii) If |W_p^k| = r, then x^k ∈ W^{k+1}, and, as A_1 x^k = b_1, then x^k ∈ F_1(k).

Theorem 4.2. If x^k is the solution of the master problem MP(k − 1), and it is not the optimal solution to problem P1, and if the SCRSD algorithm generates a feasible set F_1(k) which has a point that does not belong to H(W^k), then a feasible descent direction is defined, so that the new solution x^{k+1} of the master problem MP(k) satisfies f(x^{k+1}) < f(x^k).

Proof: As x^k is a feasible solution to MP(k) and x^{k+1} is an optimal solution to MP(k), f(x^{k+1}) ≤ f(x^k). Suppose that f(x^{k+1}) = f(x^k). By Lemma 4.6, x^k ∈ F_1(k), and, as x^k is then also an optimal solution to MP(k),

    ∇f(x^k)(x − x^k) ≥ 0, ∀x ∈ F_1(k);

as, in general,

    u^k A_1(x − x^k) = 0, ∀x ∈ F_1(k),

then

    (∇f(x^k) − u^k A_1)(x − x^k) ≥ 0, ∀x ∈ F_1(k). (4.1)

For all x ∈ F_1(k),

    x = Σ_{i=0}^{p(k+1)} α(z^i) z^i,  with  Σ_{i=0}^{p(k+1)} α(z^i) = 1. (4.2)

Then, by (4.1) and (4.2),

    Σ_{i=0}^{p(k+1)} α(z^i)(∇f(x^k) − u^k A_1)(z^i − x^k) ≥ 0. (4.3)

Using Lemma 4.3, it is known, with respect to MP(k − 1), that, if

    (∇f(x^k) − u^k A_1)(z^i − x^k) > 0,

then

    α(z^i) = 0, ∀z^i ∈ W^k,

so that

    α(z^i)(∇f(x^k) − u^k A_1)(z^i − x^k) = 0, ∀z^i ∈ W^k. (4.4)

Taking, from Step 2 of the algorithm, both definitions of W^{k+1}, we consider only the second one, because its analysis includes the first case. So we can put

    Σ_{i=0}^{p(k+1)} α(z^i)(∇f(x^k) − u^k A_1)(z^i − x^k)
    = Σ_{i=1, i≠j}^{p(k+1)} α(z^i)(∇f(x^k) − u^k A_1)(z^i − x^k)
      + α(x^k)(∇f(x^k) − u^k A_1)(x^k − x^k) + α(y^k)(∇f(x^k) − u^k A_1)(y^k − x^k). (4.5)

The first member of (4.5) will be nonnegative by (4.3), so the same can be said for the second one. The first two terms of the second member will be zero, the first of them by (4.4). So, the third term is nonnegative:

    α(y^k)(∇f(x^k) − u^k A_1)(y^k − x^k) ≥ 0. (4.6)

But, by Lemma 4.5, taking into account the hypothesis, it can be said that

    α(y^k) > 0. (4.7)

So, (4.6) and (4.7) give us

    (∇f(x^k) − u^k A_1)(y^k − x^k) ≥ 0. (4.8)

But (4.8) is a contradiction, since y^k is an optimal solution to SP(k).

In the above Theorem 4.2, we showed the existence of a feasible descent direction if there exists an x ∈ F_1(k) with x ∉ H(W^k); so now we prove this condition. The proof is decomposed into a set of lemmas and corollaries. In the next two lemmas, reference to the iteration number will be discarded. The following Lemma 4.7 shows that, if the new extreme point y is added to the convex combination of the old extreme points, under certain conditions, the new convex combination is nonempty.

Lemma 4.7. Let W be the set of extreme points, consisting of the old extreme points, z^0, z^1, ..., z^p, and the new one, y, with p + 1 ≥ m + 1. Without loss of generality, it is assumed that B is a matrix whose columns are a feasible basis of the master problem, given by the set of linearly independent vectors {(A_1 z^1, 1)^T, ..., (A_1 z^{m+1}, 1)^T}. In addition, let us define the set C_1 = {x ∈ ℝ^n : x ∈ H(W), A_1 x = b_1} and the set C_2 = {x ∈ ℝ^n : x ∈ H(W'), A_1 x = b_1}, where W' = W ∪ {y} − {z^j} and z^j is obtained by the following criterion:

    j ∈ arg min {ᾱ(z^i)/s_i : s_i > 0, ∀i = 1, ..., m + 1},  where  s = B^{-1}(A_1 y, 1)^T,

and ᾱ(z^i), ∀i = 1, ..., m + 1, are the weights of a point x̄ ∈ C_1 in a convex combination of the elements of W satisfying ᾱ(z^i) > 0, ∀i = 1, ..., p, and ᾱ(y) = 0. If such a point exists, there exists a point that belongs to C_2 and whose weights satisfy α(z^i) ≥ 0, ∀i = 1, ..., j − 1, j + 1, ..., p; α(y) > 0; α(z^j) = 0.

Proof: A direction, d, is feasible in C_2 with respect to an x̄ ∈ C_1 if and only if

    x̄ + ρd ∈ H(W'),  A_1(x̄ + ρd) = b_1,  for all ρ ∈ [0, ρ̄], with ρ̄ > 0. (4.9)

The points of that direction d can be expressed by

    x̄ + ρd = Σ_{i=1}^{p} (ᾱ(z^i) + Δα(z^i)) z^i + Δα(y) y. (4.10)

Following the classical decomposition of linear constraints, condition (4.9) can be written

    B Δα_B + N Δα_N + S Δα_S = 0,

where B is the basic matrix cited in the hypothesis; N, the nonbasic matrix; and S, the superbasic matrix of the master problem, where, in our case, S = (A_1 y, 1)^T. A feasible direction will be associated with the choice Δα_N = 0, Δα_S = Δα(y) = ρ > 0. So, defining s as

    s = B^{-1}(A_1 y, 1)^T,

then the first basic component which takes the value zero will be given by

    ρ̄ = min {ᾱ(z^i)/s_i : s_i > 0, ∀i = 1, ..., m + 1},

where some s_i must be positive, because the condition of (4.10) imposes that

    Σ_{i=1}^{m+1} s_i = 1.

Taking ρ equal to ρ̄ in (4.10) yields

    x̄ + ρ̄d = Σ_{i=1}^{m+1} (ᾱ(z^i) − s_i ρ̄) z^i + Σ_{i=m+2}^{p} ᾱ(z^i) z^i + ρ̄ y,

with ρ̄ the weight with respect to y. So, at least a basic component j takes the value zero, and the final point belongs to C_2, since it satisfies the condition (4.10) and its weight vector has the values

    α(z^i) ≥ 0, ∀i = 1, ..., j − 1, j + 1, ..., p;  α(z^j) = 0;  α(y) > 0.

The assumption that B is nonsingular does not mean loss of generality, because we can, in a preprocessing step, eliminate the dependent linear conditions in the master problem. The next two corollaries are simple extensions of the proof of the above Lemma 4.7, and they are given without proof:

Corollary 4.1. Lemma 4.7 is valid if ρ̄ is taken small enough. In this case, the weights of x ∈ C_2 will be α(z^i) ≥ 0, ∀i = 1, ..., p; α(y) > 0, where α(z^j) is not zero.

Corollary 4.2. Lemma 4.7 is valid if an additional extreme point x^k, a linear combination of z^1, ..., z^p, is added together with y. In this case, the weights of x ∈ C_2 will be α(z^i) ≥ 0, ∀i = 0, 1, ..., j − 1, j + 1, ..., p; α(z^j) = 0; α(y) > 0.

Lemma 4.8. Let T_i = {x ∈ H(W^i) : A_1 x = b_1}, ∀i = 1, 2, where W^1 and W^2 are defined as the following sets: W^1 = {x^1, z^1, ..., z^p}, with H(W^1) a p-simplex, and W^2 = W^1 − {x^1} = {z^1, ..., z^p}. If there exists an x^2 ≠ x^1 with x^2 ∈ T_1, then the set T_2 is not empty.

Proof: Since x^1, x^2 ∈ T_1, every x = x^1 + ρ(x^2 − x^1) with ρ ∈ ℝ_+ satisfies A_1 x = b_1. As x^1 and x^2 ∈ T_1, then x^1, x^2 ∈ H(W^1), and, as, by Theorem 4.1, H(W^1) is a p-simplex, x^1 and x^2 can be written as a unique convex combination of the points of W^1:

    x^1 = Σ_{i=1}^{p} α^1(z^i) z^i + α^1(x^1) x^1,

and, as the representation is unique, we will have

    α^1(z^i) = 0, ∀i = 1, ..., p;  α^1(x^1) = 1.

Similarly, x^2 can be expressed by

    x^2 = Σ_{i=1}^{p} α^2(z^i) z^i + α^2(x^1) x^1,  with  Σ_{i=1}^{p} α^2(z^i) + α^2(x^1) = 1.

The points between x^1 and x^2 can be written as

    x = x^1 + ρ(x^2 − x^1).

Choosing ρ equal to ρ̄, where

    ρ̄ = 1/(1 − α^2(x^1)),

then ρ̄ ∈ ℝ_+, since α^2(x^1) ∈ [0, 1[. Given that, if α^2(x^1) = 1, then α^2(z^i) = 0, ∀i = 1, ..., p, and x^2 = x^1, against the hypothesis. So, we can choose the point x̄ defined by

    x̄ = Σ_{i=1}^{p} [α^2(z^i)/(1 − α^2(x^1))] z^i;

this point will satisfy x̄ ∈ H(W^2), since

    α^2(z^i)/(1 − α^2(x^1)) ≥ 0, ∀i = 1, ..., p,

and

    Σ_{i=1}^{p} α^2(z^i)/(1 − α^2(x^1)) = 1,

and, as x̄ is a point between x^1 and x^2, it satisfies, by definition, A_1 x̄ = b_1, so x̄ ∈ T_2.

Corollary 4.3. The weights ᾱ(z^i) of the above x̄ ∈ T_2 satisfy ᾱ(z^i) ≥ α^2(z^i), ∀i = 1, ..., p, where the α^2(z^i) are the weights of x^2 ∈ T_1.

Proof: The relation between ᾱ and α^2 is given by

    ᾱ(z^i) = α^2(z^i)/(1 − α^2(x^1)), ∀i = 1, ..., p,

where, if α^2(z^i) = 0, then ᾱ(z^i) = 0, and, if α^2(z^i) > 0, then ᾱ(z^i) > α^2(z^i), since 1 − α^2(x^1) ∈ ]0, 1].

Theorem 4.3. Let W^k be the set of extreme points at the end of iteration k − 1. Set p ≥ m + 1. Let a basis of the master problem be given by the following set of linearly independent vectors: {(A_1 z^1, 1)^T, ..., (A_1 z^{m+1}, 1)^T}. Set z^j to be the vector that leaves the basis by the criterion indicated in Lemma 4.7. Then, there exists an x ∈ F_1(k), x ∉ H(W^k).

Proof: Set y^k to be an optimal solution to SP(k). Consider two cases:

1. If |W_p^k| < r, then W_x^{k+1} = W_x^k and W_p^{k+1} = W_p^k ∪ {y^k}. Let x^k be the optimal solution to MP(k − 1) after discarding its weights of value zero. It belongs to the set C_1(k) defined in Lemma 4.7, and its weights in W^{k+1} will be

    α^k(z^i) > 0, ∀i = 0, 1, ..., p, and α^k(y^k) = 0.

Then, by Corollary 4.1 of Lemma 4.7, there exists an x ∈ C_2 with α(y^k) > 0 in W^{k+1}. As C_2 ⊂ F_1(k), by Lemma 4.6, x ∈ F_1(k), and, as α(y^k) > 0, by Lemma 4.5, x ∉ H(W^k).

2. If |W_p^k| = r, there are two cases:

(a) If W_x^k = ∅, then W_p^k = {z^1, ..., z^p}, with p = r ≥ m + 1, and W^{k+1} = {x^k, z^1, ..., z^{j−1}, z^{j+1}, ..., z^p, y^k}. Then, by Corollary 4.2 of Lemma 4.7 and with the same argument as in case 1, we have an x ∈ F_1(k) with x ∉ H(W^k).

(b) If W_x^k = {x^1}, where x^1 is the extreme point from a previous master problem which still has not been discarded by a weight equal to zero: At the end of iteration k − 1, the extreme points of W^k will be

    W_p^k = {z^1, ..., z^p},  W_x^k = {x^1},

and, with the changes of Step 2 at iteration k, we have

    W^{k+1} = {x^k, z^1, ..., z^{j−1}, z^{j+1}, ..., z^p, y^k}.

The proof is divided into two parts: First, since x^k ∈ F_1(k − 1) with α(y^k) = 0, by Lemma 4.7, and with the same argument as in case 1, the existence of an x^s ∈ F_1(k) can be proved with the following weights:

    α^s(z^i) ≥ 0, ∀i = 0, 1, ..., j − 1, j + 1, ..., p;  α^s(z^j) = 0;  α^s(y^k) > 0.

Second, x^s ∈ F_1(k) with x^s ≠ x^1, because x^1 ∈ F_1(k − 1), and, by Lemma 4.4, H(W^{k+1}) is a p-simplex. By Lemma 4.8, there exists an x̄ ∈ F_1(k) with ᾱ(x^1) = 0, and, as α^s(y^k) > 0, by Corollary 4.3, ᾱ(y^k) > 0. As ᾱ(y^k) > 0, by Lemma 4.5, it can be concluded that x̄ ∉ H(W^k).

Lemma 4.9. The SCRSD algorithm generates a sequence of dual variables, {u^k}, w.r.t. A_1 x = b_1, which has at least a convergent subsequence.

Proof: The dual variables can be expressed by

    (u^k, u'_k) = [∇f(x^k) Z^k] B_k^{-1}, ∀k,

and, since for spaces of finite dimension all the norms are equivalent,

    ‖(u^k, u'_k)‖ ≤ ‖∇f(x^k)‖ ‖Z^k‖ ‖B_k^{-1}‖, (4.12)

but ∇f(x^k) is a bounded function, because f is a continuously differentiable function defined on the compact set F; Z^k is the matrix whose columns are the extreme points, and they are a finite number; and B_k^{-1} is the inverse matrix of a basis of the master problem, of which it is known there are only a finite number. As the norms of the right member of the inequality (4.12) are bounded, the norm of (u^k, u'_k) is also bounded.

Since {u^k, u'_k} is a bounded sequence, there exists a compact set I in ℝ^{m+1} with {u^k, u'_k} ⊂ I, and, by the Bolzano-Weierstrass theorem, any infinite subset of I has an accumulation point, so the above sequence {u^k, u'_k} has a convergent subsequence.

Theorem 4.4. Let {x^k}, {y^k}, and {u^k} be three sequences generated by the SCRSD algorithm. Then, there cannot exist a subsequence defined by K with the following four properties:

(i) x^k → x^∞, k ∈ K.
(ii) y^k → y^∞, k ∈ K.
(iii) u^k → u^∞, k ∈ K.
(iv) (∇f(x^∞) − u^∞ A_1)(y^∞ − x^∞) < 0.

Proof: (By contradiction.) Assume that subsequences with the four properties above exist, i.e., there exists a Γ > 0 such that

    (∇f(x^∞) − u^∞ A_1)(y^∞ − x^∞) < −Γ.

Since f(x) is continuously differentiable, there must exist a k_0 ∈ K sufficiently large so that

    (∇f(x^k) − u^k A_1)(y^k − x^k) < −Γ/2, ∀k ≥ k_0, (4.13)

and a δ_1 > 0 such that

    [∇f(x^k + γ(y^k − x^k)) − u^{k+1} A_1](y^k − x^k) < −Γ/4, ∀γ ∈ [0, δ_1], ∀k ≥ k_0, k ∈ K.

Using the KKT conditions of the master problem at iteration k, it can be shown that, if x^{k+1} and u^{k+1} are optimal primal and dual solutions of the master problem, then x^{k+1} is also an optimal solution to the equivalent problem of minimizing f(x) − u^{k+1} A_1 x over the feasible set of MP(k). Therefore, there exists a δ_2 > 0 such that

    f(x^{k+1}) − u^{k+1} A_1 x^{k+1} ≤ f(x^k + γ(y^k − x^k)) − u^{k+1} A_1 (x^k + γ(y^k − x^k)), ∀γ ∈ [0, δ_2], ∀k ∈ K. (4.14)

Set δ = min {δ_1, δ_2}, and, applying the mean value theorem to (f(x) − u^{k+1} A_1 x), there must exist a β ∈ ]0, 1[ such that

    f(x^k + γ(y^k − x^k)) − u^{k+1} A_1 (x^k + γ(y^k − x^k))
    = f(x^k) − u^{k+1} A_1 x^k + [∇f(x^k + βγ(y^k − x^k)) − u^{k+1} A_1] γ (y^k − x^k). (4.15)

Then, the results stated in (4.13), (4.14), and (4.15) yield, for γ ∈ [0, δ],

    f(x^{k+1}) − u^{k+1} A_1 x^{k+1} ≤ f(x^k) − u^{k+1} A_1 x^k − Γγ/4, ∀k ≥ k_0, k ∈ K. (4.16)

Since f(x) is continuous and {f(x^k)} is monotonically decreasing, as shown in Theorems 4.2 and 4.3, f(x^∞), the limit of {f(x^k)}, exists. Then, taking the limit over k in (4.16) yields

    0 ≤ −Γγ/4.

But this is a contradiction, because Γ and γ are positive.

Theorem 4.5. If f(x) is continuously differentiable, the SCRSD algorithm either terminates at a solution or generates a sequence {x^k} for which any subsequential limit is a solution.

Proof: If the SCRSD algorithm terminates, the current iterate x^k must satisfy the stopping criterion in Step 1. By Lemma 4.2, x^k must be a solution. When a sequence {x^k} is generated, Theorem 4.4 guarantees that the limit of every convergent subsequence is a solution.

The global convergence of the SCRSD algorithm has been proved assuming the existence of a feasible basis of the master problem, the use of the above criterion for dropping vectors from the basis, and the condition that the number of retained extreme points must be greater than the number of difficult conditions plus one.

5. HEURISTIC INITIALIZATION

The SCRSD algorithm needs an initial feasible solution to P1, x0, expressed as a convex combination of p extreme points of F2 defining a simplex, and a vector of Lagrange multipliers, v0, corresponding to A1x = b1, at x0. All this information is difficult to obtain in a real problem with thousands of variables and constraints. The proposed heuristic starts from an SP solution, obtained by solving SP1(k):

Min ck y, ∀ y ∈ F2,

where ck is an arbitrary vector. Let zk be an optimal solution of SP1(k). The next extreme point must be located in the subspace opposite to the one where zk lies relative to the hyperplanes defined by A1x ≤ b1. The function δ determines the position of zk relative to the side constraints: δ(zk) is a vector whose components indicate, for each i = 1, ..., p, on which side of the i-th side-constraint hyperplane zk lies.

To obtain the new extreme point, the following SP2(k) is defined:

Min −M δ(zk) A1 (y − zk), ∀ y ∈ F2,

where M is a large penalty. If there exists an SP-feasible solution in the intersection of the subspaces opposite to those containing zk, that solution will be chosen; if not, an optimal solution of SP close to that intersection is chosen. The heuristic initialization thus defines at each iteration linear cost coefficients that penalize one component of the optimization vector with −M, the rest being zero. Afterward, a new extreme point of F2 is obtained by defining a new SP and a new δ relative to the previous extreme point. Repeated or linearly dependent extreme points are, of course, discarded. Taking the above considerations into account, the heuristic initialization is defined by

ck = −M, cj = 0, ∀ j = 1, ..., n, j ≠ k.

do for each k = 1, ..., n
    Solve SP1(k): let yk be an optimal solution.
    Evaluate δ(yk).
    Define ck = −M δ(yk) A1.
    Solve SP2(k): let zk be an optimal solution.
    yk and zk are the new extreme points.
    Discard unneeded extreme points.
enddo

The algorithm finishes when the necessary number of extreme points of F2 has been generated. The rest of the initial data are obtained when the MP is solved and an optimal solution x0 is obtained. Though this algorithm is a heuristic, for the test problems with inequality side constraints it has yielded the necessary initial extreme points in only a few seconds.

6. COMPUTATIONAL CONSIDERATIONS
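As a concrete illustration, the initialization heuristic of the previous section can be sketched as follows. This is a minimal sketch on a toy feasible set, using SciPy's linprog; the toy data and the sign convention chosen for δ are assumptions, not taken from the paper.

```python
# Minimal sketch of the SCRSD initialization heuristic on a toy polytope
# F2 = {y : y1 + y2 + y3 = 2, 0 <= y <= 1}, with one side constraint.
# The sign convention for delta is an assumed choice: +1 on the feasible
# side of A1 y <= b1, so SP2 pushes toward the opposite side.
import numpy as np
from scipy.optimize import linprog

M = 1e3                                     # large penalty
A2, b2 = np.ones((1, 3)), np.array([2.0])   # "easy" constraints defining F2
A1, b1 = np.array([[1.0, 2.0, 0.0]]), np.array([2.0])  # side constraints

def solve_sp(c):
    """Linear subproblem: min c'y over F2 (returns a vertex of F2)."""
    res = linprog(c, A_eq=A2, b_eq=b2, bounds=[(0, 1)] * 3, method="highs")
    return res.x

def delta(z):
    """Position of z relative to each side-constraint hyperplane."""
    return np.where(A1 @ z <= b1, 1.0, -1.0)

points = []
for k in range(3):
    c = np.zeros(3); c[k] = -M                 # SP1(k): penalize component k
    y_k = solve_sp(c)
    z_k = solve_sp(-M * delta(y_k) @ A1)       # SP2(k): opposite subspace
    for p in (y_k, z_k):                       # discard repeated points
        if not any(np.allclose(p, q) for q in points):
            points.append(p)
```

After the loop, `points` holds distinct extreme points of F2 spread on both sides of the side-constraint hyperplane, which is the raw material the MP needs for its initial simplex.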

The computational experiments with the SCRSD algorithm have been run on randomly generated networks of different sizes. The generated problems are defined by pseudoconvex functions, unicommodity or multicommodity networks, and side constraints. The unicommodity constraints have been generated using the NETGEN code (Klingman et al. [17]); the multicommodity constraints have been generated by my own code, taking NETGEN as a base. Other parameters of the problem not included in the original NETGEN are the demand, the flow upper bound, the coefficients of the arc cost functions, and the terms of the side constraints. All of these have been generated using a random number generator from a discrete uniform distribution U{a, a + 1, ..., b}, where a and b are lower and upper integer bounds chosen to obtain compatible constraint sets.

Equality and inequality side constraints have been considered. A problem with inequality side constraints, A1x ≤ b1, can be reduced to the equality case by introducing slack variables, which can be used as part of the initial basis of the MP, so fewer extreme points need to be generated. Since the above heuristic initialization method has difficulty obtaining the indicated initial data for problems with equality side constraints, the test problems have been run using only inequality constraints.

The size of the randomly generated networks is given by the following values: the number of arcs (nu); the number of nodes (nn); the number of demands (or commodities) (nd) or the number of sources and sinks (nss), in the multicommodity and unicommodity cases, respectively; the number of side constraints (nsc); and the number of active side constraints (nasc). The nasc is obtained a posteriori, once the optimal solution is found, by counting the number of slack variables equal to zero. Problems with active side constraints are generated by initially taking sufficiently large components of the vector b1; then, when the optimal solution is obtained, the slack variables determine the changes in b1 needed to obtain a problem with active side constraints.

The subproblems are linear models that exploit the easy constraints or, in general, constraints whose particular structure can be exploited. This is the case for the unicommodity network constraints: when A2x = b2 is defined by a node-arc incidence matrix, with x bounded, it is possible to use a primal simplex method that exploits the network structure, e.g., the NETFLO code (Kennington and Helgason [15]). The other problems have subproblems with multicommodity network constraints. There the linear system is different for each commodity, and the vector x is considered unbounded, so the problem can be decomposed by commodities, each of which can be solved by efficient shortest path algorithms, e.g., the L2QUE code (Gallo and Pallottino [6]).

There is a wide range of methods for solving the MP available in computer packages. Since the master problem must be solved repeatedly, it is important that a superlinearly convergent method be used, and this requires the storage of an adequate approximation of the Hessian matrix, so a reduced gradient algorithm has been used.
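The reduction of inequality side constraints to the equality case mentioned above can be sketched in a few lines; the numerical data here are a toy example, not taken from the test problems.

```python
# Sketch: converting inequality side constraints A1 x <= b1 into the
# equality form [A1 | I](x, s) = b1 with slack variables s >= 0.
import numpy as np

A1 = np.array([[1.0, 2.0, 0.0],
               [0.0, 1.0, 1.0]])    # p = 2 side constraints (toy data)
b1 = np.array([2.0, 1.5])

p, n = A1.shape
A1_eq = np.hstack([A1, np.eye(p)])  # augmented matrix [A1 | I]

x = np.array([1.0, 0.25, 0.5])      # a point satisfying A1 x <= b1
s = b1 - A1 @ x                     # slack values; s >= 0 iff x is feasible
xs = np.concatenate([x, s])

# the augmented point satisfies the equality system exactly
assert np.allclose(A1_eq @ xs, b1)
```

The slacks s can then serve directly as part of an initial basis for the MP, which is why fewer extreme points need to be generated in the inequality case.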

The MINOS code (Murtagh and Saunders [23]) has been used to solve the MP. The default values of NETFLO, L2QUE, and MINOS are employed to finish the SP and the MP.

Two main types of stopping criteria have been used in the SCRSD code: the optimal-solution criterion and the lower-bound criterion. The former refers to the nonexistence of descent directions, so, as Lemma 4.2 established, xk is the optimal solution if it is verified that

[∇f(xk) − vk A1](y − xk) ≥ 0, ∀ y ∈ F2.

It is also possible to know in each iteration a lower bound on the optimal value of the objective function, which characterizes the proximity of the iterate to the optimal solution. This permits an alternative stopping criterion, obtained by characterizing the above proximity using the lower bound (LB) of the objective function, defined by the sum of the objective values of MP(k) and SP(k):

LBk = fMP(xk) + fSP(yk).

As the sequence {LBk} is not necessarily monotone increasing, it is advisable to use the largest value obtained so far:

LBk ← max{LBi, ∀ i = 1, ..., k}.

The LB criterion then stops the SCRSD algorithm when the relative error (RE) is less than ε, a small positive real number close to zero:

RE = [fMP(xk) − LBk]/LBk ≤ ε.

TABLE I. SCRSD code results with unicommodity constraints (for each problem and each value of r: iterations and CPU time in seconds to reach RE ≤ 0.01 and RE ≤ 0.0001).

TABLE II. SCRSD code results with multicommodity constraints (for each problem and each value of r: iterations and CPU time in seconds to reach RE ≤ 0.01 and RE ≤ 0.001).

Fig. 1. R450SC15, SCRSD computational results (RE versus CPU time in seconds, for r = 20, 25, 30, 35, 45).

Fig. 2. R450SC15, SCRSD computational results (RE versus CPU time in seconds, for r = 45, 50, 55, 65, 75).
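The lower-bound stopping rule can be sketched as a short routine; the iteration history below is invented for illustration, and the absolute value in the denominator is an implementation choice to guard against negative bounds.

```python
# Sketch of the LB stopping criterion: LB_k is the MP value plus the SP
# value, a running maximum is kept because {LB_k} need not be monotone,
# and the run stops once the relative error RE drops below eps.
def should_stop(f_mp_hist, f_sp_hist, eps=1e-4):
    """f_mp_hist[k], f_sp_hist[k]: objective values of MP(k) and SP(k)."""
    best_lb = float("-inf")
    for f_mp, f_sp in zip(f_mp_hist, f_sp_hist):
        lb = f_mp + f_sp                       # LB_k = f_MP(x_k) + f_SP(y_k)
        best_lb = max(best_lb, lb)             # LB_k <- max{LB_i : i <= k}
        re = (f_mp - best_lb) / abs(best_lb)   # relative error
        if re <= eps:
            return True, re
    return False, re

# invented history: MP values decrease while the lower bound tightens
stop, re = should_stop([10.0, 9.0, 8.5], [-3.0, -0.6, -0.05])
```

With a looser tolerance such as eps = 0.01 the same history already triggers the stop, which mirrors the role of ε in the tables: a larger ε means fewer iterations and less CPU time.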

7. COMPUTATIONAL TESTS

A personal computer (an 80386-class machine at 28 MHz with 4 Mbit of internal memory) and a Fortran compiler have been used to run the computational tests. Unicommodity problems up to nu = 900, nn = 450, nss = 450, and nsc = 30 and multicommodity problems up to nu = 700, nn = 300, nd = 317, and nsc = 20 have been tested, these limits being fixed by the memory needed to solve the MP. The multicommodity problems in particular can be considered large, given that nu and nn must be counted per commodity; thus the problem R700SC20, with 317 commodities and 700 arcs, has a total of 221,900 variables. In each problem name, two numbers can be identified: the first is the number of arcs and the second the number of side constraints.

Fig. 3. R900SC30, SCRSD computational results (RE versus CPU time in seconds, for r = 35, 45, 55).

Fig. 4. R500SC30, SCRSD computational results (RE versus CPU time in seconds, for several values of r).

Using workstations or more sophisticated machines, the above limits can be increased, and the CPU times of the computational results could easily be divided by 10 or more.

Each problem was solved by the SCRSD algorithm with different values of r. The previously defined relative error has been used to measure the proximity of the iterate to the optimum. Tables I and II give the CPU time in seconds and the iterations necessary to achieve relative errors of 0.01 and 0.0001 for unicommodity networks and of 0.01 and 0.001 for multicommodity networks. Figures 1-5 display the computational behavior of the SCRSD algorithm for five problems of different sizes with different values of r, where the x-axis represents the CPU time in seconds and the y-axis represents the RE. The values were obtained with 100 iterations of SCRSD, with the exception of R900SC30, for which 200 iterations were used. The computational behavior displayed in the figures has been difficult to generalize, owing to its dependence on the structure of the problem and the initial solution. The best values of r are frequently in the low and medium range, but in some cases, such as the R50SC20 and R100SC20 problems, the best run was obtained with r > 34.

Fig. 5. R700SC20, SCRSD computational results (RE versus CPU time in seconds, for r = 25, 30, 35, 40, 45, 54).

In comparing these results with those obtained using general-purpose codes, which have no capacity to exploit the special structures of the problem, we observe the inability of MINOS to solve the large problems. MINOS could only solve up to problem R450SC40 for unicommodity networks and up to problem R28SC8 for multicommodity networks; the problems R900SC30, R500SC30, and R700SC20 cannot be solved directly by MINOS. On problem R450SC40, MINOS obtained the optimal objective value of 6035127.9 in 464 iterations and 182 seconds of CPU time, which is the same optimal value that SCRSD obtained with r > 45; with r = 45, the value 6035109.9 was obtained. The performance of the SCRSD algorithm improves with the size and difficulty of the problem: for problem R700SC20, with 700 arcs and 317 commodities (demands), SCRSD has no competitors.

Several extensions can be developed, one being the application of SCRSD to special structures other than unicommodity and multicommodity networks; for example, models with generalized transportation subproblems, as suggested in the introduction, are of interest. An additional improvement to SCRSD would be the definition of an exact initialization method. This could be important, since the computational results are very sensitive to the initial solution.

Thanks to Robert R. Meyer, who gave feedback on drafts of this paper. This research was supported by CAICYT (PASS0045), Spanish Government.

REFERENCES

[1] D. P. Bertsekas, Projected Newton methods for optimization problems with simple constraints. SIAM J. Control Optim. 20 (1982) 221-246.
[2] C. F. Daganzo, On the traffic assignment problem with flow dependent costs. Transp. Res. 11 (1977) 433-441.
[3] G. B. Dantzig and P. Wolfe, Decomposition principle for linear programs. Oper. Res. 8(1) (1960) 101-111.
[4] R. S. Dembo, P. Henaff, and S. Spera, NLPNET User's Guide, version 3.2. Yale University, New Haven (1988).
[5] L. F. Escudero, A motivation for using the truncated-Newton approach in a very large-scale nonlinear network problem. Math. Program. Study 26 (1986) 240-244.
[6] G. Gallo and S. Pallottino, Shortest path algorithms. Ann. Oper. Res. 13 (1988) 3-79.
[7] N. Gartner, S. B. Gershwin, and J. D. Little, Pilot study of computer-based urban traffic management. Transp. Res. 14B (1980) 203-217.
[8] V. Gascon, A. Benchakroun, and J. A. Ferland, Electricity distribution planning model: A network design approach solving the master problem of the Benders decomposition method. Internal Report, Université de Montréal (1992).
[9] A. M. Geoffrion, Elements of large-scale mathematical programming. Manag. Sci. 16 (1970) 652-691.
[10] D. W. Hearn and J. Ribera, Convergence of the Frank-Wolfe method for certain variable traffic assignment problems. Transp. Res. 15B (1981) 437-442.
[11] D. W. Hearn, S. Lawphongpanich, and J. A. Ventura, Finiteness in restricted simplicial decomposition. Oper. Res. Lett. 4 (1985) 125-130.
[12] D. W. Hearn, S. Lawphongpanich, and J. A. Ventura, Restricted simplicial decomposition: Computation and extensions. Math. Program. Study 31 (1987) 99-118.
[13] C. A. Holloway, An extension of the Frank-Wolfe method of feasible directions. Math. Program. 6 (1974) 14-27.
[14] P. V. Kamesan and R. R. Meyer, Multipoint methods for nonlinear networks. Computer Sciences Department, Technical Report No. 468, University of Wisconsin-Madison (1982).
[15] J. L. Kennington and R. V. Helgason, Algorithms for Network Programming. Wiley, New York (1980).
[16] J. L. Kennington and J. A. Whisman, NETSIDE User's Guide. Technical Report 86-OR-01, Department of Operations Research, Southern Methodist University (1988).
[17] D. Klingman, A. Napier, and J. Stutz, NETGEN: A program for generating large scale capacitated assignment, transportation and minimum cost flow network problems. Manag. Sci. 20(5) (1974) 814-821.
[18] S. A. Kontogiorgis, Alternating directions methods for the parallel solution of large-scale block-structured optimization problems. Computer Sciences Department, Technical Report No. 94-13, University of Wisconsin-Madison (1994).
[19] L. S. Lasdon, Optimization Theory for Large Systems. Macmillan, New York (1970).
[20] A. Marin, Extension of the Frank-Wolfe to solve convex separable nonlinear networks with lower and upper bounded flow. IV Latin-Iberian-American Congress on Operations Research and Systems Engineering, Rio de Janeiro (1988).
[21] A. Marin, Price-directive decomposition applied to telecommunication network planning with side constraints. Discussion circle, 13th International Teletraffic Congress, Copenhagen (1991).
[22] A. Marin and P. Pablos, Flow optimization with traffic safety constraints. Symposium on Traffic Safety Theory and Research Methods, Amsterdam (1988).
[23] B. A. Murtagh and M. A. Saunders, MINOS 5.1 User's Guide. Technical Report 83-20R, Stanford University (1983).
[24] A. Nagurney and A. Eydeland, A parallel network equilibration algorithm for a class of constrained matrix problems. Transp. Sci. 26(1) (1992) 59-68.
[25] P. Oliveira, S. McKee, and C. Coles, Optimal scheduling of a hydro-thermal power generation system. Eur. J. Oper. Res. 71 (1993) 334-340.
[26] R. T. Rockafellar, Convex Analysis. Princeton University Press, Princeton, NJ (1970).
[27] D. P. Rutenberg, Generalized networks, generalized upper bounding and decomposition of the convex simplex method. Manag. Sci. 16(5) (1970) 388-401.
[28] W. Shen Hsia, On Rutenberg's decomposition method. Manag. Sci. 21(1) (1974) 10-12.
[29] Ph. L. Toint and D. Tuyttens, On large scale nonlinear network optimization. Math. Program. 48 (1990) 125-159.
[30] J. A. Ventura and D. W. Hearn, Restricted simplicial decomposition for convex constrained problems. Math. Program. 59 (1993) 71-85.
[31] B. Von Hohenbalken, Simplicial decomposition in nonlinear programming algorithms. Math. Program. 13 (1977) 49-68.

Received February 20, 1992 Accepted April 10, 1995
