Optimization of Temporal Processes: A Model Predictive Control Approach


IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, VOL. 13, NO. 1, FEBRUARY 2009


Zhe Song and Andrew Kusiak, Member, IEEE

Abstract—A dynamic predictive-control model of a nonlinear and temporal process is considered. Evolutionary computation and data-mining algorithms are integrated for solving the model: data-mining algorithms learn dynamic equations from process data, and evolutionary algorithms then solve the optimization problem guided by the knowledge extracted by the data-mining algorithms. Several properties of the optimization model are discussed in detail, in particular the selection of regressors, time delays, prediction and control horizons, and weights. The concepts proposed in this paper are illustrated with an industrial case study of a combustion process.

Index Terms—Data mining, evolutionary strategy, model predictive control, nonlinear temporal process, optimization.

NOMENCLATURE

x  Vector of controllable variables of a process.
x_i  ith controllable variable.
v  Vector of noncontrollable variables of a process.
v_j  jth noncontrollable variable.
y  Performance variable of a process.
S  Search space of x.
f  Function capturing the mapping between (x, v) and y.
t  Sampling time stamp.
Maximum possible time delays for x, v, and y.
Sets of time delay constants selected for the corresponding variables x, v, and y under the performance variable y.
Lower and upper bounds of each time-delay set.
Minimum and maximum time delays for the controllable variables in a given set.
Set of all controllable variables.
Two subsets of the set of all controllable variables.
Prediction horizon.
Control horizon.
Reference value for y.
Cost function associated with the prediction horizon, the control horizon, and the reference value.
ith weight constant.
Positive semi-definite matrices.
Number of different data mining algorithms used to identify the process function.
Process function identified by data mining algorithm k.
Offspring size.
Parent size, equal to the initial population size.
Solution matrix of the ith individual.
Mutation matrix of the ith individual.
Normal distribution.

Manuscript received September 06, 2007; revised December 23, 2007. First published May 02, 2008; current version published January 30, 2009. This work was supported in part by the Iowa Energy Center under Grant IEC 04-06. The authors are with the Department of Mechanical and Industrial Engineering, University of Iowa, Iowa City, IA 52242-1527 USA (e-mail: [email protected]). Digital Object Identifier 10.1109/TEVC.2008.920680

I. INTRODUCTION
OPTIMIZING nonlinear and nonstationary processes presents challenges for traditional solution approaches. A process can be represented as a triplet (x, v, y), where x is a vector of controllable variables, v is a vector of noncontrollable (measurable) variables, and y is a system performance variable; a greater value of y indicates better performance. The value of the performance variable changes in response to the controllable and noncontrollable variables, which are considered in this research as input variables. The underlying relationship is represented as y = f(x, v), where f is a function capturing the process, and it may change over time. Finding optimal control settings for optimizing y can be formulated as a single-objective optimization problem with constraints (1).

1089-778X/$25.00 © 2008 IEEE Authorized licensed use limited to: The University of Iowa. Downloaded on January 30, 2009 at 10:32 from IEEE Xplore. Restrictions apply.

max_{x ∈ S} f(x, v)    (1)
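As a concrete illustration of model (1), the sketch below maximizes a learned process function over a box-shaped search space by random sampling, with the current noncontrollable variables held fixed. The function name, the box bounds, and the use of random search (rather than the evolutionary strategy developed later in the paper) are illustrative assumptions, not the paper's method.

```python
import random

def random_search(f, bounds, v, n_iter=1000, seed=0):
    """Model (1) sketch: maximize the process function f(x, v) over a
    feasible box S, with the noncontrollable variables v held fixed."""
    rng = random.Random(seed)
    best_x, best_y = None, float("-inf")
    for _ in range(n_iter):
        x = [rng.uniform(lo, hi) for lo, hi in bounds]  # sample x in S
        y = f(x, v)
        if y > best_y:
            best_x, best_y = x, y
    return best_x, best_y
```

In practice f is not known analytically, which is exactly why the paper turns to data-mining models and evolutionary search.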


In model (1), the feasible search space, the noncontrollable variables, and the underlying function are, in many industrial applications, time-dependent. Finding optimal control settings for nonlinear and temporal processes therefore poses several challenges.
1) It is difficult to derive analytic models describing the process function. For example, modeling the relationship between combustion process efficiency and the input variables is not trivial. Thus, it is difficult to solve model (1) with traditional optimization techniques.
2) The process function is nonstationary, and updating it is necessary in practical applications. For example, a combustor ages over time, and regular maintenance and repair change the combustor's properties, thus impacting the combustion process.

Recent advances in evolutionary computation (EC) and data mining (DM) present an opportunity to model and optimize complex systems using operational data. In recent years, data-mining algorithms have proved effective in applications such as manufacturing, marketing, and combustion processes [3], [15], [20], [24]. Many research and application projects of EC have been successful [1], [2], [7], [8], [10], [18], [23], [28]. The results reported in the literature offer a promising direction for solving complex problems that are difficult to handle with traditional analytical methods. In this paper, DM and EC are combined to optimize a nonlinear and temporal process: a predictive control model is developed to control the process, the dynamic equations included in the model are learnt by data-mining algorithms, and various properties of the model predictive control problem are analyzed.

II. DYNAMIC MODELING

A process can be considered a dynamic multi-input-single-output (MISO) system. Assume the current value of the performance variable can be determined from the previous system status, i.e., from past values of the performance, controllable, and noncontrollable variables up to the maximum possible time delays considered for the corresponding variables.

The dynamic MISO model can be extracted from historical process data by data-mining algorithms. Although many system identification algorithms are available, most of them assume a certain system structure [31]. Data mining offers many efficient algorithms for extracting predictive models from large amounts of data. To obtain an accurate dynamic model that generalizes well enough to optimize the process, selection of appropriate predictors is important, and data mining offers algorithms for this task as well. For example, the boosting tree algorithm [12], [13] can be used to determine each predictor's importance, and the wrapper technique can be combined with a genetic random search to determine the best set of predictors [11], [26]. For the performance variable, a regressor (predictor) selection algorithm selects a set of important predictors among the candidate time-lagged variables.
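The dynamic MISO model maps time-lagged values of the performance, controllable, and noncontrollable variables to the current performance value. A minimal sketch of assembling such lagged predictors from historical series follows; the function name and the example delay sets are illustrative assumptions, not from the paper.

```python
def lagged_features(y, x, v, dy, dx, dv):
    """Build (features, target) pairs for a dynamic MISO model of the form
    y(t) = f(y(t-i), x(t-j), v(t-k)) over the given delay sets.
    y, x, v are equal-length series; dy, dx, dv are ascending delay lists."""
    start = max(dy + dx + dv)          # earliest t with all lags available
    rows, targets = [], []
    for t in range(start, len(y)):
        row = ([y[t - i] for i in dy] +
               [x[t - j] for j in dx] +
               [v[t - k] for k in dv])
        rows.append(row)
        targets.append(y[t])
    return rows, targets
```

Any regression learner (boosting trees, neural networks, etc.) can then be trained on the resulting rows and targets.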

For ease of discussion, different index sets of time delays need to be defined.

Definition 1: For the response variable, a set of time delays related to its own previous values is composed of integers selected from the admissible delay range and arranged in ascending sequence. Similarly, a set of time delays is selected for the predictors related to each controllable variable, and a set for the predictors related to each noncontrollable variable. In total, there is one such set per input and response variable.

Based on Definition 1, the process can be rewritten as a dynamic model in which the process function is expanded by enumerating all elements of the corresponding time-delay sets.

III. MODEL PREDICTIVE CONTROL

The concept of model predictive control (MPC) is used to determine optimal control settings of the process; however, the optimization is performed by an evolutionary algorithm. The MPC has proven effective in industrial applications [16], [21], [22]. Illustrative applications of the MPC solved by EC algorithms are discussed in [14], and a computational intelligence algorithm for solving the MPC is presented in [27]. However, the research reported in the literature indicates that the relationship between controllable and noncontrollable variables in the MPC has not been fully analyzed; in particular, the weights used in the MPC are important and deserve more research.

The basic elements of the MPC are as follows [11], [30].
1) Sample the response, controllable, and noncontrollable variables of the process.
2) Build a model with the data collected in the previous step.
3) Use the process model to predict its future behavior over a prediction horizon when a control action is applied along a control horizon.
4) Calculate the optimal control sequence that minimizes some cost function.
5) Apply the controllable settings and repeat the procedure at the next sampling time.

Suppose that at sampling time t an attempt is made to find an optimal control sequence of the selected controllable variables which minimizes some cost function. Assume the prediction horizon is given.
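The five MPC steps above can be sketched as a generic receding-horizon loop; the plant model, the sequence optimizer, and the state representation are hypothetical stand-ins supplied by the caller.

```python
def receding_horizon_control(plant_step, optimize_sequence, state, n_steps):
    """Steps 3-5 of the MPC recipe: at each sampling time, compute an
    optimal control sequence over the horizon, apply only its first
    element to the plant, then re-plan from the new state."""
    applied = []
    for _ in range(n_steps):
        u_seq = optimize_sequence(state)  # step 4: optimize over the horizon
        u = u_seq[0]                      # step 5: apply the first move only
        state = plant_step(state, u)      # plant advances; loop re-plans
        applied.append(u)
    return state, applied
```

Re-planning at every sampling time is what lets the controller absorb model error and newly observed noncontrollable variables.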


A reference signal for the performance variable is also defined; its value is usually greater (better) than the current performance at any time. Like the MPC, the optimization model (1) can be rewritten as shown in (2) at the bottom of the page, where a positive weight is associated with each prediction step and two positive semi-definite matrices penalize the control moves. Solving model (2) brings the system's current performance toward the desired reference. In some applications, the constraints could be relaxed to specific ones. In this research, the dynamic models are constructed by data-mining algorithms, and an EC algorithm is used to solve model (2), as analytical models are not available. To further analyze the MPC problem, suppose that at sampling time t the current system status and all historical system information are available. Based on the dynamic equation, some relationships among the input process variables need to be clarified.

Definition 2: For a dynamic equation, define the smallest and the largest time delay of the controllable variables, and the smallest and the largest time delay of the noncontrollable variables.

Observation 1: If there exists a prediction step at which the required time index of a noncontrollable variable exceeds the current time, there is not sufficient information about the noncontrollable variables to optimize the performance by finding the optimal control settings at sampling time t. To explain Observation 1, consider an illustrative example: at sampling time t, the value of a controllable variable at time t is to be determined, but the noncontrollable variable's value at the same time is not yet known, so it is difficult to find the optimal control value.

Definition 3: Let the set of all controllable variables be given. For a response variable, define the subset of controllable variables that appear among its predictors; this subset is obviously contained in the set of all controllable variables. Also define the subset of controllable variables whose time delays allow their values to be modified at the current sampling time.

Observation 2: At sampling time t, one can only modify the controllable variables whose time indices have not yet occurred; this subset is called the actionable variable set. Observation 2 offers hints for selecting predictors and the corresponding time delays so that all controllable variables can be used to optimize the performance. As an example: at sampling time t, the performance at a future step is optimized by modifying the values of controllable variables with time indices later than t; values with earlier indices have already happened, so the performance is optimized with the not-yet-applied settings only. Based on Observation 2, the decision vector in model (2) is composed of the variables belonging to the actionable variable set.

Observation 3: There exist a lower and an upper bound for the prediction horizon and for the control horizon in model (2), and the two horizons must satisfy a consistency equation. Consider the dynamic model of the example: as the prediction step increases, the time index of a noncontrollable variable eventually exceeds the information available; since future noncontrollable values are not known, there is no way to accurately predict the performance beyond that point. It is also obvious that the earliest predictions are already determined by historical values, so the control action can only start at the first step that can still be changed. The control horizon starts from that step; in the example, its upper bound is 4.

IV. SOLVING THE MPC WITH AN EVOLUTIONARY COMPUTATION ALGORITHM

The MPC is usually solved by nonlinear programming algorithms. In this study, it is solved with an evolutionary computation algorithm. Robustness of the optimal solutions is important in many applications; in this paper, the robustness of model (2) is ensured by using several data-mining algorithms and combining their predictions. Different definitions of robustness can be found in [9] and [25]. Using N different data-mining algorithms to learn the dynamic equation, model (2) can be rewritten into model (3).
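The robustness device of model (3), averaging the predictions of several learned models, can be sketched in a few lines; the model list is a hypothetical stand-in for the paper's N data-mining models.

```python
def ensemble_predict(models, features):
    """Model (3)'s robustness device: the process output is taken as the
    average of the predictions of N different data-mining models, so no
    single mis-trained model dominates the fitness evaluation."""
    preds = [m(features) for m in models]
    return sum(preds) / len(preds)
```

During optimization this averaged prediction replaces the single process function inside the cost.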

(2)


See (3) at the bottom of the page. In model (3), N different data-mining algorithms are used to construct the process functions, and the final prediction of the performance variable is the average of their predictions. Assume the prediction and control horizons satisfy all the conditions discussed before. The portion of each prediction determined by historical values is fixed irrespective of the candidate control values; as a result, the objective function can be calculated for each individual solution and used as its fitness value in the evolutionary algorithm.

Observation 4: The optimal solution of model (2) or (3) may decrease the performance variable at the next sampling time when the optimal value of the first control move is implemented at sampling time t. Maintaining the corresponding weight relatively larger than the other weights assures that the performance at the next sampling time is not decreased by changing the controllable variables to their optimal values.

Each individual solution can be represented as a matrix: recall that the candidate control settings form a vector with time indices running from t+1 to the end of the control horizon.

In this paper, evolutionary strategies (ES) [10] are selected to solve model (3).

Definition 4: Let λ be the offspring size, and μ be both the number of offspring selected and the initial population size. Individuals in the parent population are numbered from 1 to μ. Individuals in the offspring population are numbered from 1 to λ.

Definition 5: The general form of the ith individual in the evolutionary strategy is defined as a pair of matrices (4): a solution matrix holding the candidate control settings over the control horizon, and a mutation matrix of the same size. Each element of the mutation matrix is used as a standard deviation of a normal distribution with zero mean.

(4)

The basic steps of the evolutionary strategy algorithm are [10] as follows.

Algorithm 1
1: Initialize μ individuals (candidate solutions) to form the initial parent population.
2: Repeat until a stopping criterion is satisfied.
2.1: Select and recombine parents from the parent population to generate λ offspring (children).
2.2: Mutate the λ children.
2.3: Select the best μ children based on fitness function values.
2.4: Use the selected μ children as parents for the next generation.
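Algorithm 1 can be sketched as a generic (μ, λ) evolutionary strategy skeleton; the fitness, initializer, recombination, and mutation callables are placeholders to be filled with the paper's operators, and two parents per child follow Section IV-B.

```python
import random

def evolution_strategy(fitness, init, recombine, mutate, mu, lam, generations):
    """(mu, lambda)-ES skeleton of Algorithm 1 (minimization): recombine
    parents into lambda children, mutate them, keep the mu best children
    as the next parent population, and repeat."""
    parents = [init() for _ in range(mu)]
    for _ in range(generations):
        children = []
        for _ in range(lam):
            pair = random.sample(parents, 2)   # step 2.1: two random parents
            children.append(mutate(recombine(pair)))  # steps 2.1-2.2
        children.sort(key=fitness)             # step 2.3: rank by fitness
        parents = children[:mu]                # step 2.4: best mu survive
    return min(parents, key=fitness)
```

With λ/μ fixed (the selection pressure), larger populations mainly trade computation time for per-generation progress.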

(3)

A. Mutation

An individual is mutated by following (5) and (6), with the mutation matrix mutated first and the solution matrix mutated next.

(5)

In (5), one random perturbation is drawn from a normal distribution with zero mean and is generated specifically for each matrix entry, while another is drawn once per individual and shared by all entries; "∘" is the Hadamard matrix product [29].

(6)

In (6), a matrix of the same size as the solution matrix is generated, with each element drawn from a normal distribution with a mean of 0 and the corresponding standard deviation taken from the mutated mutation matrix.

B. Selection and Recombination of Parents

Definition 6: Let SelectedParents be an index set composed of n unique, randomly selected indices from 1 to μ. SelectedParents changes every time it is generated. The value n is usually much smaller than μ (frequently n is 2).

To generate λ children, parents are selected from the parent population and recombined λ times. Each time, n parents are selected randomly to produce one child by using (7).

(7)

A discrete recombination operator [10] was also applied in this research; however, it did not perform as well as the intermediary recombination operator used in (7).

C. Children Selection

The selection process in the evolutionary strategy algorithm is simple. Once the λ children are generated, the best μ of them are selected based on the fitness function value.

V. INDUSTRIAL CASE STUDY

To validate the concepts introduced in this paper, industrial data were collected from Boiler 11 (fluidized-bed combustion) of the University of Iowa Power Plant (UI PP). The boiler burns coal and biomass (oat hulls); the coal-to-oat-hull ratio changes depending on the availability of oat hulls, and the load demand of Boiler 11 changes in time. Based on the domain expertise, finding the optimal air flow is the first choice. Boiler 11 has two air flow inputs, the primary and the secondary air flow. Fig. 1 illustrates the proposed optimization framework. The boiler combustion process generates data streams stored in a data (PI) historian. The data streams are periodically used by the data-mining algorithms to update the process models with new data. The evolutionary strategy algorithm then determines the optimal air flow settings, which can be either recommended to the boiler operators or directly input as set points of the air flow controllers.

Fig. 1. Illustration of the proposed boiler combustion process optimization by data mining and evolutionary strategy algorithms.

A. Data Set, Sampling Frequency, and Predictor Selection

From the UI PP data historian, 5868 data points were sampled at 5-min intervals. Data set 1 in Table I is the total data set composed of 5868 data points, starting from "2/1/07 2:50 AM" and continuing to "2/21/07 11:45 AM." During this time period, the boiler operations could be described as normal. Data set 1 was divided into two data sets: data set 2, consisting of 4715 data points, was used to extract models by the data-mining algorithms, and data set 3 was used to test the models learnt from data set 2.

TABLE I
DATA SET DESCRIPTION

Considering the noise in the industrial data, data set 1 was denoised by the moving-average approach with a lag of 4. All the data points were scaled into the interval [0, 1]. In this paper, boiler efficiency is heuristically modeled as a function of the coal and primary air ratio [coal flow (klbs/h)/primary air flow (klbs/h)], the coal and secondary air ratio [coal flow (klbs/h)/secondary air flow (klbs/h)], the coal and oat hull ratio [coal flow (klbs/h)/oat hull flow (klbs/h)], and coal and oat hull quality (BTU/lb); other variables could be considered based on the application context. Using the coal and primary air ratio, the primary air flow can be determined from the current coal flow; similarly, the secondary air flow can be determined from the current coal flow and the coal and secondary air ratio. The coal and oat hull ratio is assumed to be a noncontrollable variable from a practical point of view.

The maximum time delays are assumed to be 9. In the context of 5-min sampling intervals, 45 min is thus the assumed maximum time delay: for example, if the operator changes the primary air flow right now, it would take at most 45 min to observe that this change has some effect on boiler efficiency. After running the boosting tree algorithm [12], [13] on data set 2, the importance of each predictor was calculated. Considering Observations 1-3, 11 predictors were selected (Table II). Then, the current boiler efficiency can be written as


(8)
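The preprocessing described above, a moving average with a lag of 4 followed by scaling into [0, 1], can be sketched as below. A trailing (causal) moving average is assumed here, since the exact windowing is not stated; shorter windows are used at the start so the output length matches the input.

```python
def moving_average(series, lag=4):
    """Denoise a series with a trailing moving average over `lag` points
    (shorter windows at the start, so output length equals input length)."""
    return [sum(series[max(0, i - lag + 1):i + 1]) /
            len(series[max(0, i - lag + 1):i + 1])
            for i in range(len(series))]

def scale_unit(series):
    """Linearly scale a series into the interval [0, 1]."""
    lo, hi = min(series), max(series)
    return [(s - lo) / (hi - lo) for s in series]
```

Scaling all variables to a common range keeps the weighted terms of the MPC cost comparable in magnitude.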


Fig. 2. Predicted values and observed values of the first 200 data points of data set 3.

TABLE II PREDICTORS USED IN THE DYNAMIC EQUATION OF BOILER EFFICIENCY

B. Learning Dynamic Equations From the Process Data

In this paper, five different data-mining algorithms were used to learn the dynamic equation (8) from data set 2: the classification and regression tree algorithm (C&R Tree) [6], the boosting tree regression algorithm [12], [13], the random forests regression algorithm [5], the back-propagation feedforward neural network (NN), and the radial basis function (RBF) algorithm [4], [17]. The extracted models were tested using data set 3. Table III summarizes the models' prediction accuracy on data set 3, and Fig. 2(a)-(e) shows the first 200 predicted and observed values of data set 3. The NN performed best among the five algorithms; random forests performed the worst. In summary, all the algorithms made high-quality predictions on the testing data and captured the system dynamics. However, updating the learned models with new data points is necessary for a temporal process. The modeling task is simplified by using data-mining algorithms.

TABLE III
PREDICTION ACCURACY OF DIFFERENT ALGORITHMS FOR DATA SET 3
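The specific accuracy metrics of Table III are not recoverable from this copy, so the sketch below uses two common choices, mean absolute error and Pearson correlation between observed and predicted series, as illustrative stand-ins for scoring the five models on test data.

```python
import math

def mean_absolute_error(observed, predicted):
    """Average absolute deviation between observed and predicted values."""
    return sum(abs(o - p) for o, p in zip(observed, predicted)) / len(observed)

def pearson_correlation(observed, predicted):
    """Linear correlation between observed and predicted series."""
    n = len(observed)
    mo, mp = sum(observed) / n, sum(predicted) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(observed, predicted))
    so = math.sqrt(sum((o - mo) ** 2 for o in observed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in predicted))
    return cov / (so * sp)
```

Scoring each learned model on a held-out test set, as done here with data set 3, is what justifies averaging them in model (3).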


C. Evolutionary Strategy Algorithm

Based on (8) and Observations 2 and 3, the prediction horizon is set to 9 and the control horizon to 4. Thus, an individual in the evolutionary strategy is represented as two 2 × 5 matrices, a solution matrix and a mutation matrix, with one row per actionable variable.

(9)

Model (3) can then be expressed as shown in (10) at the bottom of the page, where only the terms that depend on the candidate control settings need to be calculated. The reference value for the boiler efficiency is limited to a range determined by analyzing the historical data distribution of the efficiency in this case study.

The ES parameters τ and τ' are determined by the heuristics τ' ∝ 1/√(2n) and τ ∝ 1/√(2√n), with n the number of actionable variables [10]. The lower and upper bounds for the standard deviation values are set to 0.005 and 0.1, respectively. Two parents are selected to generate one child. For the initial population at sampling time t, each column of the solution matrix is generated by drawing random numbers uniformly between the corresponding variable bounds; similarly, each column of the mutation matrix is generated by drawing random numbers uniformly between the standard deviation bounds. Although research [10], [19] suggests a higher selection pressure (the ratio of offspring size to parent population size), numerous experiments were conducted, and it was found that the selected configuration performs well and that 50 generations are enough for the algorithm to converge to local optima. The weights are set as constants, since the boiler efficiency is always between 0 and 1. The weight vector plays an important role based on Observation 4; different weights are discussed later.

Fig. 3. ES solving model (10) for different offspring sizes at sampling time "2/18/2007 2:05 AM"; the objective function value is the best individual's fitness value at each generation.

Fig. 4. ES solving model (10) for different offspring sizes at sampling time "2/21/2007 5:05 AM"; the objective function value is the best individual's fitness value at each generation.

At sampling time "2/18/2007 2:05 AM," model (10) was solved with the ES. Fig. 3 shows that the objective function value of the best individual in each generation decreases as the number of generations grows; the selection pressure is important for the ES to converge to optimal values. Fig. 4 illustrates another example of solving model (10) at sampling time "2/21/2007 5:05 AM." One of the tested offspring sizes achieves the best performance in terms of the computational time and the objective function value; for another the ES converges to a higher value of the objective function, and for yet another the ES appears to be unstable. The graph in Fig. 5 demonstrates that increasing the population size while keeping the selection pressure fixed does not significantly affect the performance of the ES in optimizing model (10).

(10)
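The population initialization described above can be sketched as follows; the 2 × 5 matrix shape follows (9), while the variable bounds passed in are hypothetical placeholders for the historical operating ranges.

```python
import random

def init_individual(var_bounds, sigma_lo=0.005, sigma_hi=0.1, horizon=5):
    """One ES individual per (9): a solution matrix (one row per actionable
    variable, one column per control step) drawn uniformly within the
    variable bounds, plus a same-shaped matrix of mutation standard
    deviations drawn uniformly within [sigma_lo, sigma_hi]."""
    solution = [[random.uniform(lo, hi) for _ in range(horizon)]
                for lo, hi in var_bounds]
    sigmas = [[random.uniform(sigma_lo, sigma_hi) for _ in range(horizon)]
              for _ in var_bounds]
    return solution, sigmas
```

Bounding the standard deviations keeps the self-adapted mutation steps from collapsing to zero or wandering outside the scaled [0, 1] data range.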


TABLE IV
EXPERIMENT PERFORMED TO EVALUATE THE IMPACT OF WEIGHTS

Fig. 5. ES solving model (10) at "2/21/2007 5:05 AM," with the same selection pressure and increasing population size; the objective function value is the best individual's fitness value at each generation.

D. Virtual Testing for Prediction of Efficiency Gains

At sampling time t, the current values of the controllable variables are set to the optimal values determined from model (10). To see whether the combustion efficiency is improved, a virtual testing approach is used. The virtual testing approach [20] validates the concepts introduced in this paper without performing live testing. The basic idea behind virtual testing is to develop a valid computational model of the combustion process on which the proposed method can be tested. The virtual model is derived from historical data and simulates the real-time process. An artificial neural network trained on data set 1 is used to construct the virtual model; thus, one can assume that the underlying combustion process between "2/1/07 2:50 AM" and "2/21/07 11:45 AM" is modeled by this network, which is then used to predict the combustion efficiency from the inputs. Recall that model (10) is based on the five predictive models learnt from data set 2, and data set 3 has not been used to derive these five models. The optimization results derived from the five models are therefore tested on the virtual model, which represents the real-time combustion process in the time window "2/1/07 2:50 AM" to "2/21/07 11:45 AM." If the optimal values of the controllable variables are implemented at time t, the combustion efficiency changes accordingly.

Two potential factors could affect the efficiency gains. The first, based on Observation 4, is that the efficiency could decrease at the next sampling time if the corresponding weight is not large enough. The second is that, if the five models do not accurately capture the process, the optimal solution of model (10) may decrease the predicted efficiency; in real applications, the models could be updated with new process data to make sure they capture the underlying process accurately. For the first factor, different weights were tested. Based on Observation 4, if the weight of the first prediction step is large enough relative to the other weights, the optimal solution will lead to an increased combustion efficiency. Five experiments were performed to assess the impact of different combinations of weights (see Table IV). In each experiment, model (10) was solved for each data point in data set 3; each data point is controlled by the optimal values, and the efficiencies before and after the change are compared to see whether there is a gain. All the experiments in Table IV were run with 50 generations, and the optimal values were obtained from the best individual in the 50th generation. For experiments 1 through 5, as the weight of the first prediction step increases relative to the other weights, the efficiency gain is, based on Observation 4, positive with higher confidence.

The data in Table V indicates that the equal weights of Exp. 1 result in a large number of data points with decreased efficiency. In Exp. 2, the number of nonpositive efficiency gains is significantly reduced. For Exp. 3-5, all data points are controlled with positive efficiency gains, and the average efficiency gain increases from Exp. 1 to Exp. 5. Just a 1% improvement (i.e., a 0.01 efficiency gain) in combustion efficiency translates into significant savings in the power industry. Table V shows that the proposed approach could successfully improve combustion efficiency without live tests. Fig. 6 plots the efficiency gains of the first 300 data points of data set 3. Note that in Exp. 3-5, as the weight of the first prediction step is large enough relative to the other weights, the efficiency gains in the three experiments tend to be the same.

VI. CONCLUSION

A generic framework for optimizing a nonlinear and temporal process was presented. In this framework, three techniques were combined: evolutionary computation, data mining, and model predictive control. The dynamics of the temporal process can be captured by periodically updating the predictive models from recent process data. Unlike the traditional MPC, noncontrollable variables are directly considered in the predictive models, thus providing more accurate dynamic information about the process. The weights of the MPC

model are shown to be important in optimizing the process at each sampling step. Although the evolutionary strategy algorithm was used in this paper, other evolutionary computation algorithms could optimize the model predictive control (MPC) model. Selection pressure is a critical parameter of the evolutionary strategy algorithm, as it determines how strongly better individuals are favored in the next generation. The recombination operator is another important factor: a discrete recombination operator was initially used in this research and then replaced by a better performing one. It took about a minute to produce a suboptimal solution for a data point of data set 3 using a standard PC; a 5-min sampling interval provides enough time for the algorithm to solve the model in many applications. Future research could focus on multiobjective optimization, where several competing performance variables are considered; multiobjective evolutionary algorithms could be developed to solve such a problem.

Fig. 6. Efficiency gains of the first 300 data points of data set 3 and the controlled optimal values of the actionable variables.

TABLE V
EXPERIMENT RESULTS FOR THE DATA OF TABLE IV

APPENDIX

Proof of Observation 1: The predictors of the performance variable can be partitioned into three parts, with the predictors ordered within each part. The first part concerns the performance variable's own past values, ordered in ascending sequence of time delays. The second part contains the controllable variables, arranged in ascending order of time delays. The third part concerns the noncontrollable variables, arranged in ascending order of time delays. With this ordering, the dynamic equation can be rewritten accordingly. At sampling time t, the performance variable is optimized by changing the values of the controllable variables. Since the current and past system status is known at sampling time t, if any required time index of a noncontrollable variable lies beyond the current time, future information about the noncontrollable variables is needed. To continuously optimize the performance variable by the MPC approach, one needs to make sure that no such future information is required, which is exactly the condition of Observation 1.

Proof of Observation 2: Rewrite the dynamic equation into a simpler form by keeping only the controllable variables. At sampling time t, only the controllable variables whose time indices have not yet occurred can be changed to optimize the performance variable; the remaining values are all historical values that have already happened. It is impossible to change historical data to optimize the performance variable.

Proof of Observation 3: Due to the limited information about the noncontrollable variables, there exist upper bounds for the prediction and control horizons. The first part of the objective function is

which can be expanded by substituting equation

using the dynamic .

In

other

words,

could be larger than . Note that the value of than

is greater (better)

at any time, so could hold. That is to say if is replaced with

to As

increases from

to

, index of will become larger than 0 at some point provided that is large enough. , some It easy to see that when is equal to of will become . When increases further, some s of will become larger than , which means that future information about some of the noncontrollable variables is needed to predict . is . Therefore, the upper bound for , need To determine the upper bound of to hold, otherwise model (2) cannot be optimized. is always greater than . It is also obvious that As increases from to , index of will become when is and become larger than if fur, all ther increases. Therefore, before reaches values of are historical and cannot be changed. In other words, to are already determined by historical values. to can be changed by changing to

Hence, in order to optimize model (2), the lower bound of the control horizon is the minimum time delay of the controllable variables, and its upper bound is the prediction horizon itself.

Proof of Observation 4: For ease of discussion, the weight matrices penalizing the control moves are assumed to be zero; when they are not zero, the conclusion still holds, as shown by the same argument. Consider the baseline solution that keeps the controllable variables unchanged during the control horizon, and denote the corresponding predicted performance values as the baseline trajectory. Let the optimal solution of model (3) yield its own predicted trajectory. Since it is the optimal solution, its objective-function value must be smaller than or equal to that of the baseline. This, however, does not guarantee that every predicted performance value of the optimal solution is closer to the reference than the corresponding baseline value; an individual deviation may still increase even though the weighted sum decreases. Rewriting the two objective values and expanding the inequality term by term shows that the residual between them is nonnegative and equals zero only once the optimal solution coincides with the baseline. To make sure the optimal solution tracks the reference, this residual has to approach zero. The values of the weights can be adjusted to achieve this: one easy way is to make the weight on the reference-tracking term large enough and make the remaining weights small enough.
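The weight-tuning argument above can be illustrated numerically. The toy problem below is our own assumption, not the paper's combustion model: a scalar stand-in for model (3) with one tracking weight w1 and one control-move weight w2, solved in closed form. As w1/w2 grows, the achieved performance approaches the reference value.

```python
def optimal_setting(w1, w2, a=2.0, y_ref=10.0, x_prev=1.0):
    """Minimize w1*(a*x - y_ref)**2 + w2*(x - x_prev)**2 in closed form.

    Illustrative names: a*x plays the role of the predicted performance
    variable, y_ref is its reference value, and the second term
    penalizes the control move away from the previous setting x_prev.
    """
    # Setting the derivative to zero:
    #   w1*a*(a*x - y_ref) + w2*(x - x_prev) = 0
    return (w1 * a * y_ref + w2 * x_prev) / (w1 * a * a + w2)

# As w1/w2 grows, the achieved performance a*x approaches y_ref = 10.
for w1 in (1.0, 10.0, 1000.0):
    x = optimal_setting(w1, w2=1.0)
    print(w1, round(2.0 * x, 4))  # 8.4, then 9.8049, then 9.9995
```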

REFERENCES

[1] J. M. Alonso, F. Alvarruiz, J. M. Desantes, L. Hernández, V. Hernández, and G. Moltó, “Combining neural networks and genetic algorithms to predict and reduce diesel engine emissions,” IEEE Trans. Evol. Comput., vol. 11, no. 1, pp. 46–55, Feb. 2007.
[2] A. Benedetti, M. Farina, and M. Gobbi, “Evolutionary multiobjective industrial design: The case of a racing car tire-suspension system,” IEEE Trans. Evol. Comput., vol. 10, pp. 230–244, Jun. 2006.
[3] M. J. A. Berry and G. S. Linoff, Data Mining Techniques: For Marketing, Sales, and Customer Relationship Management. New York: Wiley, 2004.
[4] C. Bishop, Neural Networks for Pattern Recognition. Oxford, U.K.: Oxford Univ. Press, 1995.
[5] L. Breiman, “Random forests,” Mach. Learn., vol. 45, pp. 5–32, 2001.
[6] L. Breiman, J. H. Friedman, and R. A. Olshen, Classification and Regression Trees. Monterey, CA: Wadsworth, 1984.
[7] D. Büche, P. Stoll, R. Dornberger, and P. Koumoutsakos, “Multiobjective evolutionary algorithm for the optimization of noisy combustion processes,” IEEE Trans. Syst., Man, Cybern. C, vol. 32, pp. 460–473, Nov. 2002.
[8] Z. Cai and Y. Wang, “A multiobjective optimization-based evolutionary algorithm for constrained optimization,” IEEE Trans. Evol. Comput., vol. 10, pp. 658–675, Dec. 2006.
[9] K. Deb and H. Gupta, “Introducing robustness in multi-objective optimization,” Evol. Comput., vol. 14, pp. 463–494, 2006.
[10] A. E. Eiben and J. E. Smith, Introduction to Evolutionary Computation. New York: Springer-Verlag, 2003.
[11] J. Espinosa, J. Vandewalle, and V. Wertz, Fuzzy Logic, Identification and Predictive Control. London, U.K.: Springer-Verlag, 2005.
[12] J. H. Friedman, “Stochastic gradient boosting,” Comput. Stat. Data Anal., vol. 38, no. 4, pp. 367–378, Feb. 2002.
[13] J. H. Friedman, “Greedy function approximation: A gradient boosting machine,” Ann. Stat., vol. 29, pp. 1189–1232, 2001.



[14] H. Ghezelayagh and K. Y. Lee, “Intelligent predictive control of a power plant with evolutionary programming optimizer and neuro-fuzzy identifier,” in Proc. 2002 Congr. Evol. Comput., 2002, pp. 1308–1313.
[15] J. A. Harding, M. Shahbaz, S. Srinivas, and A. Kusiak, “Data mining in manufacturing: A review,” ASME Trans. J. Manuf. Sci. Eng., vol. 128, pp. 969–976, 2006.
[16] V. Havlena and J. Findejs, “Application of model predictive control to advanced combustion control,” Contr. Eng. Pract., vol. 13, pp. 671–680, 2005.
[17] S. Haykin, Neural Networks: A Comprehensive Foundation. New York: Macmillan, 1994.
[18] J. S. Heo, K. Y. Lee, and R. Garduno-Ramirez, “Multiobjective control of power plants using particle swarm optimization techniques,” IEEE Trans. Energy Conv., vol. 21, pp. 552–561, Jun. 2006.
[19] T. Jansen, K. A. D. Jong, and I. Wegener, “On the choice of the offspring population size in evolutionary algorithms,” Evol. Comput., vol. 13, pp. 413–440, 2005.
[20] A. Kusiak and Z. Song, “Combustion efficiency optimization and virtual testing: A data-mining approach,” IEEE Trans. Ind. Inform., vol. 2, pp. 176–184, Aug. 2006.
[21] X. J. Liu and C. W. Chan, “Neural-fuzzy generalized predictive control of boiler steam temperature,” IEEE Trans. Energy Conv., vol. 21, pp. 900–908, Dec. 2006.
[22] C. H. Lu and C. C. Tsai, “Generalized predictive control using recurrent fuzzy neural networks for industrial processes,” J. Process Contr., vol. 17, pp. 83–92, 2007.
[23] C. K. Tan, S. Kakietek, S. J. Wilcox, and J. Ward, “Constrained optimisation of pulverised coal fired boiler using genetic algorithms and artificial neural networks,” Int. J. COMADEM, vol. 9, pp. 39–46, 2006.
[24] P. N. Tan, M. Steinbach, and V. Kumar, Introduction to Data Mining. Boston, MA: Pearson, 2006.
[25] S. Tsutsui and A. Ghosh, “Genetic algorithms with a robust solution searching scheme,” IEEE Trans. Evol. Comput., vol. 1, no. 3, pp. 201–208, Sep. 1997.
[26] I. H. Witten and E. Frank, Data Mining: Practical Machine Learning Tools and Techniques, 2nd ed. San Francisco, CA: Morgan Kaufmann, 2005.
[27] T. Zhang and J. Lu, “A PSO-based multivariable fuzzy decision-making predictive controller for a once-through 300-MW power plant,” Int. J. Cybern. Syst., vol. 7, pp. 417–441, 2006.


[28] H. Zhou, K. Cen, and J. Fan, “Modeling and optimization of the NOx emission characteristics of a tangentially fired boiler with artificial neural networks,” Energy, vol. 29, pp. 167–183, 2004.
[29] 2007 [Online]. Available: http://en.wikipedia.org/wiki/Matrix_multiplication
[30] E. F. Camacho and C. Bordons, Model Predictive Control. London, U.K.: Springer-Verlag, 1999.
[31] Y. C. Zhu, Multivariable System Identification for Process Control. New York: Pergamon, 2001.

Zhe Song received the B.S. degree in 2001 and the M.S. degree in 2004 from the China University of Petroleum, DongYing City, China. He is currently pursuing the Ph.D. degree in industrial engineering at the University of Iowa, Iowa City. His research concentrates on data mining, computational intelligence, optimization, and their applications in the process and discrete manufacturing industries. He is a member of the Intelligent Systems Laboratory, University of Iowa.

Andrew Kusiak (M’89) is a Professor with the Department of Mechanical and Industrial Engineering, University of Iowa, Iowa City. He is interested in applications of computational intelligence in automation, renewable energy, manufacturing, product development, and healthcare. He has published numerous books and technical papers in journals sponsored by professional societies, such as AAAI, ASME, IEEE, IIE, ESOR, IFIP, IFAC, INFORMS, ISPE, and SME. He speaks frequently at international meetings, conducts professional seminars, and consults for industrial corporations. Dr. Kusiak has served on the editorial boards of over 40 journals. He is a Fellow of IIE and the Editor-in-Chief of the Journal of Intelligent Manufacturing.

