Adaptive Neural Network Prediction Model for Energy Consumption


Maryam Jamela Ismail, Rosdiazli Ibrahim, Idris Ismail
Electrical and Electronics Engineering Department, Universiti Teknologi PETRONAS, Tronoh, Perak, Malaysia
[email protected], [email protected], [email protected]

Abstract—This paper discusses an adaptive neural network model for predicting the energy consumption at a metering station. The function of the metering system is to calculate the energy consumption of the outgoing gas flow. To ensure the robustness of the developed model, it is made adaptive, periodically updating its weights; this keeps the model reliable. A dynamic prediction model that can adapt itself to changes in the energy consumption pattern is desirable, especially for short-term energy prediction, and is also important for online operation of the metering system. Two methods of weight update are proposed and tested, namely accumulative training and sliding-window training. The developed adaptive neural network model is then compared with a static neural network. The adaptive neural network has shown better results and is recommended for implementation at the metering station.

Keywords—accumulative training method; adaptive neural network; metering system; sliding window training method

I. INTRODUCTION

An intelligent approach such as the neural network can be used to provide a good prediction model, and neural networks have been very popular among researchers for prediction. In this research, a neural network approach has been used to develop a model to predict the energy consumption from the metering system. Typically, a metering system consists of a turbine meter, measuring devices (i.e. pressure transmitter, temperature transmitter), gas chromatography, and a flow computer. This system calculates the energy consumption based on the values of the inputs from all the measuring equipment: gross volume, pressure, temperature, calorific value, and specific gravity. The accuracy of the system must be ensured so that it does not affect the billing integrity between the distributor and the consumer. The model will be used to predict the energy consumption, serving as a tool to verify the existing metering system as well as to construct a more reliable metering system for billing integrity. In a neural network, the weights of the model play a big role in the learning process. The weights are multiplied with the inputs to give the weighted inputs, and from these the activation function produces each unit's activation, or output. An adaptive neural network can update the weights periodically whenever new data are available: the neural network model is re-trained and the weights are updated. The gas consumption can increase or decrease depending on the demand and the season; thus, an adaptive neural network is needed to ensure that the model is always updated for the new data.

II. BACKGROUND

A. Artificial Neural Network

An artificial neural network is a system based on the operation of biological neural networks; in other words, it is an emulation of a biological neural system, made up of interconnected artificial neurons that mimic the properties of biological neurons. Neural networks have been used in many applications, such as prediction and forecasting, pattern recognition and identification, signal filtering, data compression, data mining, artificial life, adaptive control, optimization and scheduling, and complex mapping, and have shown success in many fields of study. In engineering, neural networks serve two important functions: as pattern classifiers and as nonlinear adaptive filters. Artificial neural networks (ANNs) are fundamentally a 'black box' technology: the 'black box' learns the input-output correlation by being trained on inputs to produce the expected outputs, after which the trained ANN can give similar outputs for test inputs. The artificial neural network is well known for its capability to learn from examples and to deal with nonlinear and complicated problems, and it is good for tasks involving incomplete data sets and fuzzy or incomplete information [1]. One of the applications in which neural networks excel is forecasting and prediction, and many researchers have recommended neural network techniques for short-term, mid-term, or long-term gas consumption forecasting [2-6].
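As a minimal, hypothetical sketch (not the authors' code) of how a single artificial neuron turns weighted inputs into an output through an activation function:

```python
import math

def neuron_output(inputs, weights, bias):
    """Multiply inputs by weights, add the bias, then apply a
    logistic (sigmoid) activation to get the unit's output."""
    weighted_sum = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-weighted_sum))

# Illustrative values only; real weights come from training.
out = neuron_output([0.5, 0.2], [0.4, -0.3], 0.1)
```

Re-training the network amounts to replacing `weights` and `bias` with newly estimated values, which is exactly what the adaptive model does whenever new data arrive.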

B. Adaptive Neural Network

In determining the best neural network model, a few optimization parameters are taken into consideration, such as the weights, the number of hidden layers and nodes, and the number of input variables. The weights of the neural network can be determined from prior knowledge about the system or process; another technique is to estimate them from the data. The performance of the estimation technique determines how well the weights capture the behavior of the process [7]. In a static neural network, the weights are trained only once and remain constant at all times. When there is new

___________________________________ 978-1-61284-840-2/11/$26.00 ©2011 IEEE



data available, the neural network model may in time become no longer valid. Therefore it is better to have an adaptive model that changes the weights according to the present time and newly collected data: the model can be re-trained and the weights re-estimated periodically according to the trend of the prediction. The two basic ways of estimating the weights adaptively are batch and on-line learning. In batch training, the data usually contain more than one observation — in this research, short-term data such as daily, weekly, or monthly sets — and the weights are estimated when the batch of data for the time period is trained. In on-line training, on the other hand, the weights are re-estimated and updated after every new observation [7]. There are many ways of re-estimating or updating the weights; the two options discussed in this paper are accumulative training and sliding-window training. In accumulative training, data are collected accumulatively and the model is re-trained when new data are available. The advantage of this method is that it is able to identify the trends of energy variation; the disadvantage is that the volume of accumulated data continuously increases and may become too large to be manageable [8]. Sliding-window training, by contrast, keeps a constant-size training data set: when new data are added, the oldest data are dropped from the set, like sliding a time window across a time series of measurements. One thing to take into account is the size of the sliding window. Too small a window can make the adaptive neural network faster, but the data may contain only recent information and the prediction may not accurately reflect the behavior [8]; too large a window may cause longer processing time and overfitting.

III. RESEARCH APPROACH

A. Neural Network Structure

In this project, the multilayer feedforward architecture is chosen as a preliminary model. This structure is the most widely used in many applications, including gas consumption prediction, because of its simple structure and its capability to learn and predict successfully. Data were taken over a duration of two years (2007 and 2008) on an hourly basis every day from the flow computer and gas chromatography in the metering system. These data were then sorted by month, and data that were invalid or behaved irregularly were filtered out — data out of range or abnormal (spikes, zero readings). On average, 680 records per month remain after the filtering process. At the input layer there are five inputs altogether: the gross volume (Vg), temperature (T), pressure (P), calorific value (CV), and specific gravity (sg). The inputs are selected based on the energy consumption calculation with regard to ISO 6976 and AGA Report No. 8, plus one other possible contributor from the gas chromatography in the metering system. An extra bias element is also included in the input group. The output is simply the energy (E) at the output layer, which is the target in this neural network prediction model. The measure of the neural network performance is the root mean square error (RMSE):

RMSE = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(y_{p,i} - y_{i}\right)^{2}}    (1)

where the error is calculated between the predicted energy (yp) and the current energy (y) for each training and validation data set, and N is the number of data points.

B. Adaptive ANN Prediction Model

In order to build a robust energy prediction system that predicts accurately, it is important to make sure that the system is always up to date and efficient. Therefore, an adaptive artificial neural network model is developed and simulated. This model has an advantage over the static neural network in that its weights are adaptive. The weights determine the minimum prediction error; they are found by training the network, the goal of which is to find the set of weight values that causes the network output to match the actual target values with minimum error. Adaptive weights therefore keep the network updated to the present time. Both methods of adapting the weights — accumulative training and sliding-window training — are trained and tested, in order to compare and analyse which of the two gives the better result for the ANN energy prediction model. One year of data is tested for each method, for 2008 and for 2007. The accumulative training data size starts from 3 months and grows by one month of data consecutively (3 months, 4 months, 5 months, and onwards). The test is done using a few ways of updating the weights, discussed in detail in the next section, with comparison to the static neural network model. For the sliding-window training method, three groups of sliding window size are tested: a 3-month, a 4-month, and a 5-month window. Table I shows the groups of sliding window size.

TABLE I. SLIDING WINDOW SIZE GROUPS

Group | Window Size
I     | 3 months (2007, 2008)
II    | 4 months (2007, 2008)
III   | 5 months (2007, 2008)
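As an illustrative sketch (helper names are assumed, not from the paper), the RMSE performance measure of Eq. (1) used to compare these configurations can be computed as:

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted energy (yp) and
    actual energy (y) over N data points, per Eq. (1)."""
    n = len(actual)
    return math.sqrt(sum((yp - y) ** 2 for yp, y in zip(predicted, actual)) / n)

# Three hypothetical hourly energy readings (GJ).
error = rmse([105.0, 98.0, 101.0], [100.0, 100.0, 100.0])
```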

IV. RESULT AND DISCUSSION

A. Static ANN Energy Prediction Model

A multilayer feedforward neural network with three layers (input layer, one hidden layer, and output layer) is constructed. Previously, a static neural network model was developed by focusing on parameter



selection in developing the energy prediction neural network model: the learning algorithm, the activation function, and the number of neurons. The activation functions for the hidden layer and for the output layer are varied to determine the most preferable ones. The structure is trained using a few learning algorithms, such as the Levenberg-Marquardt algorithm [9], resilient backpropagation [10], and others. The developed neural network structure of the model is shown in Fig. 1. In this structure, the parameters selected to build the neural network energy prediction model are:
i) Learning algorithm: Levenberg-Marquardt with Bayesian regularization
ii) Data division: Set B (50% training, 50% validation)
iii) Number of neurons: 9
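The 50/50 data division (Set B) can be sketched as follows (the helper name is hypothetical; the paper does not give its implementation):

```python
def split_set_b(records):
    """Set B division: first half of the records for training,
    second half for validation."""
    mid = len(records) // 2
    return records[:mid], records[mid:]

# e.g. roughly 680 filtered hourly records in one month
train, valid = split_set_b(list(range(680)))
```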

Similar to the static ANN model, there are five inputs going into the adaptive ANN model: the gross volume (Vg), pressure (P), temperature (T), calorific value (CV), and specific gravity (sg). The output, the predicted energy (Ep), is then plotted together with the actual energy (Ea). In this system, the RMSE between the actual and predicted energy is calculated and displayed, and is also recorded and exported into the Matlab workspace. The weights connected to the neurons are randomly set during the training of the ANN model. Once the performance goal is reached after a number of epochs (iterations), the weight matrices for the inputs and hidden neurons are fixed until the next training. When training is done depends on the operator, and the size of the training data can be kept constant or accumulated.

C. Accumulative Training Method for Weights Update

First, the adaptive model is tested on the 2008 data. Only 10 months of data are available for 2008, from January until October, and the training size starts from 3 months of data. Validation is done on the next month of data. The next training is then done by adding one month of data to the previous training data, making 4 months of training data, and these steps are continued for each consecutive month. For the 2007 data, there are 12 months of data, and the initial training size is also 3 months. This method is tested for a few types of adaptive and static neural network models. The first adaptive model updates the weights only once, which makes it similar to the static neural network; it is included only for comparison, and its weights are fixed. The second adaptive neural network model is updated every month regardless of the RMSE performance. The last adaptive neural network updates the weights depending on the RMSE performance: if the RMSE is increasing, the model is updated by adding one month of training data; if the RMSE is lower or not increasing, the model keeps the training data size it has at that point in time. The RMSE performances for each type of model for the two years of data are shown in Figs. 2 and 3.
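A hedged sketch of the RMSE-conditioned accumulative update described above (function and variable names are hypothetical):

```python
def accumulative_step(train_set, new_month, prev_rmse, new_rmse):
    """Accumulative training: when validation RMSE rises, append the
    new month of data and signal that the model should be re-trained;
    otherwise keep the current training set and weights."""
    if new_rmse > prev_rmse:
        return train_set + [new_month], True   # grow data, re-train
    return train_set, False                    # model still valid

train, retrain = accumulative_step(["Jan", "Feb", "Mar"], "Apr", 120.0, 150.0)
```

The monthly-update variant in the paper simply skips the RMSE check and re-trains on the grown set every month.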

The subsystem inside the ANN Simulink block is the ANN model developed earlier, which consists of a 5-9-1 ANN structure (5 inputs, 9 hidden neurons, 1 output). All inputs are connected to each hidden neuron in the hidden layer.

Figure 1. ANN Energy Prediction Model Structure (inputs Vg, P, T, CV, sg; output E).
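A self-contained sketch of one forward pass through the 5-9-1 structure of Fig. 1 (weights are random placeholders for illustration; the trained weights are not given in the paper):

```python
import math
import random

def forward_5_9_1(x, W1, b1, W2, b2):
    """Forward pass: 5 inputs (Vg, P, T, CV, sg) -> 9 sigmoid hidden
    neurons -> 1 linear output (predicted energy Ep)."""
    hidden = [
        1.0 / (1.0 + math.exp(-(b1[j] + sum(W1[j][i] * x[i] for i in range(5)))))
        for j in range(9)
    ]
    return b2 + sum(W2[j] * hidden[j] for j in range(9))

random.seed(0)
W1 = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(9)]
b1 = [0.0] * 9
W2 = [random.uniform(-1, 1) for _ in range(9)]
b2 = 0.0
Ep = forward_5_9_1([0.8, 0.5, 0.3, 0.9, 0.6], W1, b1, W2, b2)  # scaled inputs
```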

B. Development of Adaptive Neural Network for Energy Prediction

Since the static neural network model has constant weights, the model is not updated once new data are available and may no longer be valid in the new environment; inaccurate results make the system unreliable and not robust. With the adaptive ANN model, the model can be trained periodically so that it is kept up to date with the present time and the new environment. This adaptive neural network model fundamentally has the same structure as the static neural network model; the only parameters updated during training are the weights, which are updated periodically when a new set of data is available. The methods of training the adaptive neural network are those mentioned in the previous section: accumulative training and sliding-window training.

Figure 2. Comparison of RMSE between static ANN and adaptive ANN (accumulative training for data 2008). Plotted models: static ANN, adaptive ANN (static weights), adaptive ANN (updated each month), and adaptive ANN (updated according to RMSE), April to October.



Figure 3. Comparison of RMSE between static ANN and adaptive ANN (accumulative training for data 2007). Plotted models: static ANN, adaptive ANN (static weights), adaptive ANN (updated each month), and adaptive ANN (updated according to RMSE), April to December.

Figure 5. Comparison of RMSE between groups of sliding window size (sliding window training for data 2007). Plotted groups: Group I (3 months), Group II (4 months), Group III (5 months), June to December.

From Figs. 2 and 3, it is observed that the adaptive model gives better performance, with lower RMSE. When the adaptive neural network is trained only once — that is, with weights fixed and constant like the static neural network — the RMSE remains almost as high as that of the static neural network; in fact, for the 2008 data, the adaptive neural network with constant weights mostly has the highest RMSE of all the models. Between the adaptive neural network that updates the weights every month and the one that updates the weights when the RMSE increases, there is only a slight difference in RMSE each month; both show satisfactorily lower RMSE than the static neural network. However, the adaptive neural network updated each month shows the best result of all.

D. Sliding Window Method for Weights Update

In this training, the window size is kept constant for the groups of sizes mentioned in the previous section: the first group has a window size of 3 months, the second 4 months, and the third 5 months of data. The adaptive neural network for each window size group is tested on the 2008 and 2007 data, and the results are plotted in Figs. 4 and 5.

In this simulation, it can be observed that for the 2008 data, Group III (a 5-month window) gives the best result among the groups; Group II has the poorest performance, and Group I differs only slightly in RMSE from Group III. Similarly for the 2007 data, Group III shows the best performance, with only one high RMSE in August; Group II is steadier, its RMSE neither increasing nor decreasing tremendously; and Group I follows much the same behavior as Group III, with almost the same values except in June. A larger window size is good for training, but too much data may cause overtraining problems such as overfitting and long processing time, and the result may sometimes be no better than with a small window. However, too small a training set may leave the model unable to fully capture the trend of the energy. For this project, the window size chosen is Group I, since a small training data set has proven adequate to predict the energy well.
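The sliding-window update can be sketched with a fixed-length deque (illustrative only; the month labels are placeholders):

```python
from collections import deque

# Group I uses a 3-month window: appending a new month of data
# automatically drops the oldest month from the training set.
window = deque(["Jan", "Feb", "Mar"], maxlen=3)
window.append("Apr")   # "Jan" is dropped

months = list(window)  # training set used for the next re-training
```

The `maxlen` argument captures the method's defining property: the training set size stays constant regardless of how much new data arrives.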

Figure 4. Comparison of RMSE between groups of sliding window size (sliding window training for data 2008). Plotted groups: Group I (3 months), Group II (4 months), Group III (5 months), June to October.

V. CONCLUSION

A dynamic prediction model that can adapt itself to changes in the energy consumption pattern is desirable, especially for short-term energy prediction [8]. The proposed method for the adaptive neural network model is sliding-window training with a window size of 3 months of data, because this amount of data is adequate to train the model and predict accurately. This model is expected to provide a robust and more reliable prediction system. A reliable neural network model for energy prediction is very important for accurate and precise forecasting; successful implementation of this model will not only help smooth the distributor's gas transmission but will also benefit the distributor. The system will be implemented and integrated into the metering system at the metering station, and its reliability and robustness will be tested once installed.



ACKNOWLEDGMENT

M.J. Ismail, R. Ibrahim and I. Ismail would like to thank UTP for the graduate assistant scheme funding this research, and PETRONAS for the collaboration with UTP in developing and implementing the project at site.

REFERENCES

[1] S.A. Kalogirou, “Applications of artificial neural-networks for energy systems,” Applied Energy, vol. 67, Sep. 2000, pp. 17-35.
[2] P. Musilek, E. Pelikán, T. Brabec, and M. Šimůnek, “Recurrent Neural Network Based Gating for Natural Gas Load Prediction System,” Proc. IJCNN 2006, pp. 3736-3741.
[3] R. Kizilaslan and B. Karlik, “Comparison neural networks models for short term forecasting of natural gas consumption in Istanbul,” Proc. ICADIWT 2008, pp. 448-453.
[4] R. Brown and I. Matin, “Development of artificial neural network models to predict daily gas consumption,” Proc. IEEE IECON 1995, pp. 1389-1394, vol. 2.
[5] R. Brown, P. Kharouf, X. Feng, L. Piessens, and D. Nestor, “Development of feed-forward network models to predict gas consumption,” Proc. IEEE International Conference on Neural Networks / World Congress on Computational Intelligence 1994, pp. 802-805, vol. 2.
[6] D. Peharda, M. Delimar, and S. Loncaric, “Short term hourly forecasting of gas consumption using neural networks,” Proc. ITI 2001, pp. 367-371.
[7] P. Laurinen and J. Röning, “An adaptive neural network model for predicting the post roughing mill temperature of steel slabs in the reheating furnace,” Journal of Materials Processing Technology, vol. 168, Oct. 2005, pp. 423-430.
[8] J. Yang, H. Rivard, and R. Zmeureanu, “On-line building energy prediction using adaptive artificial neural networks,” Energy and Buildings, vol. 37, Dec. 2005, pp. 1250-1259.
[9] J.J. Moré, “The Levenberg-Marquardt algorithm: implementation and theory,” Numerical Analysis, Springer Berlin/Heidelberg, 1978, pp. 105-116.
[10] M. Riedmiller and H. Braun, “A direct adaptive method for faster backpropagation learning: the RPROP algorithm,” Proc. IEEE International Conference on Neural Networks, 1993, pp. 586-591, vol. 1.


