A Review of ANN-based Short-Term Load Forecasting Models
Y. Rui and A.A. El-Keib

Department of Electrical Engineering, University of Alabama, Tuscaloosa, AL 35487

Abstract - Artificial Neural Networks (ANN) have recently been receiving considerable attention, and a large number of publications concerning ANN-based short-term load forecasting (STLF) have appeared in the literature. This paper gives an extensive survey of ANN-based load forecasting models. The six most important factors affecting the accuracy and efficiency of these load forecasters are presented and discussed. The paper also includes conclusions reached by the authors as a result of their research in this area.

Keywords: artificial neural networks, short-term load forecasting models

Accurate and robust load forecasting is of great importance for power system operation. It is the basis of economic dispatch, hydro-thermal coordination, unit commitment, transaction evaluation, and system security analysis, among other functions. Because of its importance, load forecasting has been extensively researched, and a large number of models have been proposed during the past several decades, such as Box-Jenkins models, ARIMA models, Kalman filtering models, and models based on spectral expansion techniques. These models are generally based on statistical methods and work well under normal conditions; however, they show deficiencies in the presence of abrupt changes in the environmental or sociological variables that are believed to affect load patterns. In addition, the techniques these models employ use a large number of complex relationships, require long computational times, and may result in numerical instabilities. Therefore, some new forecasting models have been introduced recently. As a result of developments in Artificial Intelligence (AI), Expert Systems (ES) and Artificial Neural Networks (ANN) have been applied to the STLF problem. An ES forecasts the load according to rules extracted from experts' knowledge and operators' experience. This method is promising; however, expert opinion may not always be consistent, and its reliability may be in question.

Over the past two decades, ANNs have been receiving considerable attention, and a large number of papers on their application to power system problems have appeared in the literature. This paper presents an extensive survey of ANN-based STLF models. Although many factors affect the accuracy and efficiency of an ANN-based load forecaster, the following six are believed to be the most important. In section 2, various kinds of Back-Propagation (BP) network structures are presented and discussed. The selection of input variables is reviewed in section 3. In section 4, different ways of selecting the training set are presented and evaluated. Because of the drawbacks of the BP algorithm, some efficient modifications are discussed in section 5. In sections 6 and 7, the determination of the number of hidden neurons and of the parameters of the BP algorithm are presented, respectively. Conclusions follow in section 8.

The BP network structures
Artificial Neural Networks have parallel, distributed processing structures. They can be thought of as computing arrays consisting of series of repetitive, uniform processors placed on a grid, where learning is achieved by changing the interconnections between the processors [1]. To date, there exist many types of ANNs, characterized by their topology and learning rules. For the STLF problem, the BP network is the most widely used. With its ability to approximate any continuous nonlinear function, the BP network has extraordinary mapping (forecasting) abilities. The BP network is a multilayer feed-forward network whose transfer function is usually nonlinear, such as the sigmoid function. The typical BP network structure for STLF is a three-layer network with the nonlinear sigmoid function as the transfer function [2-8]; an example of this network is shown in Figure 1. In addition to the typical sigmoid function, a linear transfer function from the input layer directly to the output layer, as shown in Figure 2, was proposed in [9] to account for the linear components of the load. The authors of [9] report that this approach improved their forecasting results by more than 1%.

Figure 1 A typical BP network structure
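To make the structure concrete, the forward pass of such a three-layer forecaster, with an optional direct linear input-to-output path in the spirit of [9], can be sketched as below. All dimensions, weight ranges, and inputs here are illustrative assumptions, not values taken from the surveyed papers.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2, W3=None):
    """Three-layer BP forecaster: sigmoid hidden layer, linear output.
    W3, if given, is a direct linear input-to-output path as in [9]."""
    h = sigmoid(W1 @ x + b1)          # hidden layer (sigmoid transfer)
    y = W2 @ h + b2                   # output layer: forecasted load
    if W3 is not None:
        y = y + W3 @ x                # linear component of the load
    return y

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 8, 10, 1         # hypothetical dimensions
W1 = rng.uniform(-0.5, 0.5, (n_hid, n_in))
b1 = np.zeros(n_hid)
W2 = rng.uniform(-0.5, 0.5, (n_out, n_hid))
b2 = np.zeros(n_out)
W3 = rng.uniform(-0.5, 0.5, (n_out, n_in))

x = rng.random(n_in)                  # normalized loads, temperatures, etc.
y = forward(x, W1, b1, W2, b2, W3)    # one-element load forecast
```

The same function with `W3=None` corresponds to the purely sigmoidal network of Figure 1.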

Figure 2 An ANN structure with a linear transfer function

Because fully connected BP networks need more training time and are not adaptive enough to temperature changes, a non-fully connected BP model is proposed in [10,11]. The reported results show that although a fully connected ANN is able to capture the load characteristics, a non-fully connected ANN responds more adaptively to temperature changes; the forecasting accuracy is significantly improved on days with abrupt temperature changes. Moreover, [11] presents a new approach which combines several sub-ANNs to give better forecasting results. Recently, a recurrent high-order neural network (RHONN) was proposed [12]. Due to its dynamic nature, the RHONN forecasting model is able to adapt quickly to changing conditions such as large load variations or changes in the daily load pattern. It is reported in [12] that the forecasting error over the period of a whole year improved considerably.

It has been proven that a 3-layer ANN of suitable dimension is sufficient to approximate any continuous nonlinear function. In [13], it is illustrated that the 4-layer structure is more easily trapped in local minima while possessing the other features of 3-layer ANNs. However, attracted by the compact architecture and the efficient learning process of the 4-layer ANN, a load forecaster using this structure was recommended in [1,14], and promising results were reported. Based on the above discussion, the topology of a BP network can be 3-layer or 4-layer, and the transfer function can be linear, nonlinear, or a combination of both. The network can also be either fully connected or non-fully connected. From our experience, we have found that the BP network structure is problem dependent: a structure that is suitable for a given power system is not necessarily suitable for another.

Input variables of BP network
As was pointed out earlier, the BP network is a kind of array which can realize a nonlinear mapping from inputs to outputs. Therefore, the selection of input variables for a load forecasting network is of great importance. In general, there are two selection methods: one is based on experience [1,3,9,14], and the other on statistical analysis such as ARIMA [11] and correlation analysis [6]. If we denote the load at hour k as l(k), a typical selection of inputs based on operating experience is l(k-1), l(k-24), t(k-1), etc., where t(k) is the temperature corresponding to the load l(k). Unlike the experience-based methods, [6] applies auto-correlation analysis to the historical load data to determine the input variables. The analysis shows correlation peaks at multiples of 24-hour lags, indicating that loads at the same hour of different days are strongly correlated with each other and can therefore be chosen as input variables. In [11], the authors apply ARIMA procedures and auto-correlation analysis to determine the necessary load-related inputs; the corresponding temperature-related inputs are then determined. The authors of [10] discuss using an ANN to forecast the load curve under extreme climatic conditions; in addition to conventional inputs such as historical loads and temperature, wind speed and sky cover are also chosen. In all, the input variables can be classified into 8 classes:
1. historical loads [1-3,6,7,9-12,15]
2. historical and future temperatures [1-3,6,9-11,15]
3. hour-of-day index [1,3,4,6,11]
4. day-of-week index [1,4,6,11]
5. wind speed [4,10]
6. sky cover [4,10]
7. rainfall [4]
8. wet or dry day [4].
There are no general rules for determining the input variables; the choice depends largely on engineering judgment and experience. Our investigations revealed that for an area with a normal climate, the first 4 classes of variables are sufficient to give acceptable forecasting results. For an area with extreme weather conditions, however, the latter 4 classes are also recommended, because of the highly nonlinear relationship between the loads and the weather conditions.
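The 24-hour correlation structure noted in the auto-correlation analysis above is easy to reproduce. The sketch below uses synthetic hourly load data (the surveyed papers' data are not available here), and shows the sample autocorrelation peaking at the 24-hour lag:

```python
import numpy as np

def autocorr(x, lag):
    # sample autocorrelation of series x at a given lag
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

# synthetic hourly load with a daily (24 h) cycle plus noise
rng = np.random.default_rng(1)
hours = np.arange(24 * 60)                     # 60 days of hourly data
load = 100 + 20 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 2, hours.size)

# correlation peaks at multiples of 24 hours, so l(k-24), l(k-48), ...
# are strong candidate input variables
print(autocorr(load, 24), autocorr(load, 12))
```

On real data the daily cycle is less clean, but the same peak at multiples of the 24-hour lag is what [6] exploits to choose inputs.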

Selection of training set
ANNs can only perform what they were trained to do, so for STLF the selection of the training set is a crucial issue. The criterion for selecting the training set is that the characteristics of all training pairs must be similar to those of the day to be forecasted. Choosing as many training pairs as possible is not the correct approach, for the following reasons: i) Load periodicity: the 7 days of a week have rather different patterns, so using Sundays' load data to train a network intended to forecast Mondays' loads would yield wrong results. ii) Because loads possess different trends in different periods, recent data is more useful than old data; a very large training set that includes old data is therefore less able to track the most recent trends. As discussed in i), to obtain good forecasting results, day-type information must be taken into account. There are two ways to do this. One is to construct a different ANN for each day type and feed each ANN with the corresponding day-type training sets [6,15]. The other is to use only one ANN but include the day-type information in the input variables [1,7,11]. The two methods have their advantages and disadvantages: the former uses a number of relatively small networks, while the latter uses a single network of relatively large size. In [9], the authors realized that the selection of the training cases significantly affects the forecasting result, and developed a selection method based on a "least distance" criterion; with this approach, the forecasting results showed significant improvement. It is worth noting that the day-type classification is system dependent. For instance, in some systems Mondays' loads may be similar to Tuesdays', but in others this is not true. A typical classification given in [1] categorizes the historical loads into five classes: Monday, Tuesday-Thursday, Friday, Saturday, and Sunday/public holiday. A different approach, used in [2], collects the data with characteristics similar to the day being forecasted and combines them with the data from the previous 5 days to form a training set. In addition to these conventional day-type classification methods, some unsupervised ANN models are used to identify the day-type patterns. The unsupervised learning concept, also called self-organization, can effectively discover similarities among unlabeled patterns; an unsupervised ANN is employed in [5,14] to identify the different day types. In all, because the appropriate selection of the training set is of great importance, several day-type classification methods have been proposed, which can be categorized into two types: conventional methods based on observation and comparison [1,2,9], and methods based on unsupervised ANN concepts that select the training set automatically [10,14].
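A minimal version of the "least distance" style of training-set selection might look as follows. The feature choice and the Euclidean metric are assumptions for illustration; the surveyed paper does not fully specify them here.

```python
import numpy as np

def select_training_days(day_features, target_features, n_select):
    """Pick the n_select historical days whose feature vectors
    (e.g. temperatures, day-type codes) lie closest, in Euclidean
    distance, to those of the day to be forecasted."""
    dists = np.linalg.norm(day_features - target_features, axis=1)
    return np.argsort(dists)[:n_select]

rng = np.random.default_rng(2)
day_features = rng.random((100, 3))   # 100 historical days x 3 features (hypothetical)
target = np.array([0.5, 0.5, 0.5])    # features of the forecast day
chosen = select_training_days(day_features, target, 10)
```

The returned indices then define the training pairs fed to the network, instead of simply using all available history.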

Modification of the BP algorithm
The BP algorithm is widely used in STLF and has some attractive features, such as its ability to easily accommodate weather variables and its implicit representation of the relationship between inputs and outputs. However, it also has drawbacks: a time-consuming training process and convergence to local minima. The authors of [16] investigated the problem and point out that one major cause of these drawbacks is "premature saturation," a phenomenon in which the output error remains nearly constant at a significantly high value for some period of the learning process. A method to prevent this phenomenon by appropriately selecting the initial weights is proposed in [16]. In [17], the authors discuss the effect of the momentum factor on the algorithm. The original BP algorithm has no momentum factor and converges with difficulty; the BP algorithm with momentum (BPM) converges much faster than the conventional BP algorithm. In [3,18], it is shown that the use of the BPM in STLF significantly improves the training process. The authors of [8] present extensive studies of the effects of factors such as the learning step and the momentum factor on the BPM, and propose a new learning algorithm for adaptive training of neural networks; it converges faster than the BPM and makes the selection of the initial parameters much easier. A new learning algorithm motivated by the principle of "forced dynamics" for the total error function is proposed in [19]: the rate of change of the network weights is chosen such that the error function to be minimized is forced to "decay" in a certain mode. Another modification of the conventional BP algorithm is proposed in [20]. It consists of a new total error function that updates the weights in direct proportion to the total error; with this modification, the periods of stagnation are much shorter and the possibility of becoming trapped in a local minimum is greatly reduced.
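To make the BPM update rule concrete, the sketch below applies back-propagation with momentum to a toy quadratic error surface rather than a real network; the step and momentum values are illustrative, and their gradual decay with the iteration index follows the practice discussed later under the BP algorithm parameters:

```python
import numpy as np

def bpm_step(w, v, grad, eta, alpha):
    """One BPM weight update: the new step v is the gradient step
    plus a fraction alpha (the momentum factor) of the previous step."""
    v = alpha * v - eta * grad
    return w + v, v

# toy error surface E(w) = w^2, whose gradient is 2w
w, v = np.array([5.0]), np.zeros(1)
for k in range(200):
    eta = 0.25 / (1.0 + 0.001 * k)    # learning step, gradually decreased
    alpha = 0.9 / (1.0 + 0.001 * k)   # momentum factor, gradually decreased
    w, v = bpm_step(w, v, 2.0 * w, eta, alpha)
```

With alpha = 0 this reduces to the plain BP gradient step; the momentum term smooths the trajectory and speeds convergence on flat error regions.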

Number of hidden neurons
Determining the optimal number of hidden neurons is a crucial issue. If it is too small, the network cannot capture sufficient information and thus yields inaccurate forecasting results; if it is too large, the training process will be very long [1]. The authors of [21] discuss the number of hidden neurons in binary-valued cases: in order to make the mapping between the output value and the input pattern arbitrary for I learning patterns, the necessary and sufficient number of hidden neurons is I-1. The authors of [22] similarly state that a multilayer perceptron with k-1 hidden neurons can realize arbitrary functions defined on a k-element set. To the best of our knowledge, there is no absolute criterion for determining the exact number of hidden neurons that will lead to an optimal solution; different numbers of hidden neurons are used in [1,10,11,14]. Based on our experience, the appropriate number of hidden neurons is system dependent, determined mainly by the size of the training set and the number of input variables.

Parameters of the BP algorithm
Three parameters need to be determined before a BP network can be trained and used to forecast. These are: i) Initial weights: the initial weights should be small random numbers. It is proven that if the initial weights in the same layer are equal, the BP algorithm cannot converge [18]. ii) Learning step: the effectiveness and convergence of the BP algorithm depend significantly on the value of the learning step, but its optimum value is system dependent. For systems that possess broad minima yielding small gradient values, a large learning step results in more rapid convergence; for a system with steep and narrow minima, a small learning step is more suitable [24]. There are no general rules for obtaining an optimal learning step; the values used in [1,4,14] are 0.9, 0.25, and 0.05 respectively. iii) Momentum factor: like the learning step, the momentum factor is system dependent; the values chosen in [1,4,14] are 0.6, 0.9, and 0.9 respectively. In contrast to the learning step, whose value can exceed 1.0, the upper limit of the momentum factor is 1.0 [18]. This limit follows from the physical meaning of the momentum factor as the forgetting factor of the previous weight changes: the algorithm diverges if a momentum factor greater than 1.0 is used. The authors of [8] compare the efficiency and accuracy of networks using different learning steps and momentum factors, and show that with an adaptive algorithm the parameters can be chosen from a much wider range. In our investigation, we have observed that initial weights between -0.5 and 0.5 yield good results. The learning step and the momentum factor should not be fixed but gradually decreased as the iteration index increases; using an adaptive algorithm such as the one proposed in [8] yields a more stable algorithm.

Conclusions
A summary of an extensive survey of existing ANN-based STLF models has been presented, emphasizing six factors believed to have a considerable effect on the accuracy, reliability, and robustness of the models. The surveyed publications and the authors' own experience lead to the conclusion that the ANN structure, input variables, number of hidden neurons, and BP algorithm parameters are mainly system dependent. The development of a more general ANN model to handle the STLF problem remains a challenging problem that warrants timely investigation.

References
[1] D. Srinivasan, A neural network short-term load forecaster, Electric Power Research, Vol. 28, pp. 227-234, 1994.
[2] O. Mohammed, Practical experiences with an adaptive neural network short-term load forecasting system, IEEE/PES 1994 Winter Meeting, Paper # 94 210-5 PWRS.
[3] D.C. Park, Electric load forecasting using an artificial neural network, IEEE Trans. on Power Systems, Vol. 6, No. 2, pp. 442-449, May 1991.
[4] T.S. Dillon, Short-term load forecasting using an adaptive neural network, Electrical Power & Energy Systems, pp. 186-191, 1991.
[5] M. Djukanovic, Unsupervised/supervised learning concept for 24-hour load forecasting, IEE Proc.-C, Vol. 140, No. 4, pp. 311-318, July 1993.
[6] K.Y. Lee, Short-term load forecasting using an artificial neural network, IEEE Trans. on Power Systems, Vol. 7, No. 1, pp. 124-131, Feb. 1992.
[7] C.N. Lu, Neural network based short term load forecasting, IEEE Trans. on Power Systems, Vol. 8, No. 1, pp. 336-341, Feb. 1993.
[8] K.L. Ho, Short term load forecasting using a multilayer neural network with an adaptive learning algorithm, IEEE Trans. on Power Systems, Vol. 7, No. 1, pp. 141-149, Feb. 1992.
[9] T.M. Peng, Advancement in the application of neural networks for short-term load forecasting, IEEE/PES 1991 Summer Meeting, Paper # 451-5 PWRS.
[10] B.S. Kermanshahi, Load forecasting under extreme climatic conditions, Proceedings, IEEE Second International Forum on the Applications of Neural Networks to Power Systems, Yokohama, Japan, April 1993.
[11] S.T. Chen, Weather sensitive short-term load forecasting using nonfully connected artificial neural networks, IEEE/PES 1991 Summer Meeting, Paper # 449-9 PWRS.
[12] G.N. Kariniotakis, Load forecasting using dynamic high-order neural networks, Proceedings, IEEE Second International Forum on the Applications of Neural Networks to Power Systems, Yokohama, Japan, April 1993, pp. 801-805.
[13] J. Villiers, Back-propagation neural nets with one and two hidden layers, IEEE Trans. on Neural Networks, Vol. 4, No. 1, pp. 136-146, Jan. 1992.
[14] Y.Y. Hsu, Design of artificial neural networks for short-term load forecasting, IEE Proc.-C, Vol. 138, No. 5, pp. 407-418, Sept. 1991.
[15] A.D. Papalexopoulos, Application of neural network technology to short-term system load forecasting, Proceedings, IEEE Second International Forum on the Applications of Neural Networks to Power Systems, Yokohama, Japan, April 1993, pp. 796-800.
[16] Y. Lee, An analysis of premature saturation in back propagation learning, Neural Networks, Vol. 6, pp. 719-728, 1993.
[17] V.V. Phansalkar, Analysis of the back-propagation algorithm with momentum, IEEE Trans. on Neural Networks, Vol. 5, No. 3, May 1994.
[18] Y. Rui, P. Jin, The modelling method for ANN-based forecaster, CDC '94, China, 1994.
[19] G.P. Alexander, An accelerated learning algorithm for multilayer perceptron networks, IEEE Trans. on Neural Networks, Vol. 5, No. 3, pp. 493-497, May 1994.
[20] A.V. Ooyen, Improving the convergence of the back-propagation algorithm, Neural Networks, Vol. 5, pp. 465-471, 1992.
[21] M. Arai, Bounds on the number of hidden units in binary-valued three-layer neural networks, Neural Networks, Vol. 6, pp. 855-860, 1993.
[22] S.C. Huang, Bounds on the number of hidden neurons in multilayer perceptrons, IEEE Trans. on Neural Networks, Vol. 2, No. 1, pp. 47-55, Jan. 1991.
[23] Y. Rui, P. Jin, Power load forecasting using ANN, Journal of Hehai University, 1993.
[24] J.M. Zurada, Introduction to Artificial Neural Systems, West Publishing Company, 1992.

Similar Documents

Free Essay

Prediction of Oil Prices Using Neural Networks

...Oil Price Prediction using Artificial Neural Networks Author: Siddhant Jain, 2010B3A7506P Birla Institute of Technology and Science, Pilani Abstract: Oil is an important commodity for every industrialised nation in the modern economy. The upward or downward trends in Oil prices have crucially influenced economies over the years and a priori knowledge of such a trend would be deemed useful to all concernd - be it a firm or the whole country itself. Through this paper, I intend to use the power of Artificial Neural Networks (ANNs) to develop a model which can be used to predict oil prices. ANNs are widely used for modelling a multitude of financial and economic variables and have proven themselves to be a very powerful tool to handle volumes of data effectively and analysing it to perform meaningful calculations. MATLAB has been employed as the medium for developing the neural network and for efficiently handling the volume of calculations involved. Following sections shall deal with the theoretical and practical intricacies of the aforementioned model. The appendix includes snapshots of the generated results and other code snippets. Artificial Neural Networks: Understanding To understand any of the ensuing topics and the details discussed thereof, it is imperative to understand what actually we mean by Neural Networks. So, I first dwell into this topic: In simplest terms a Neural Network can be defined as a computer system modelled on the human brain and nervous......

Words: 3399 - Pages: 14

Free Essay

Artificial Neural Network for Biomedical Purpose

...ARTIFICIAL NEURAL NETWORKS METHODOLOGICAL ADVANCES AND BIOMEDICAL APPLICATIONS Edited by Kenji Suzuki Artificial Neural Networks - Methodological Advances and Biomedical Applications Edited by Kenji Suzuki Published by InTech Janeza Trdine 9, 51000 Rijeka, Croatia Copyright © 2011 InTech All chapters are Open Access articles distributed under the Creative Commons Non Commercial Share Alike Attribution 3.0 license, which permits to copy, distribute, transmit, and adapt the work in any medium, so long as the original work is properly cited. After this work has been published by InTech, authors have the right to republish it, in whole or part, in any publication of which they are the author, and to make other personal use of the work. Any republication, referencing or personal use of the work must explicitly identify the original source. Statements and opinions expressed in the chapters are these of the individual contributors and not necessarily those of the editors or publisher. No responsibility is accepted for the accuracy of information contained in the published articles. The publisher assumes no responsibility for any damage or injury to persons or property arising out of the use of any materials, instructions, methods or ideas contained in the book. Publishing Process Manager Ivana Lorkovic Technical Editor Teodora Smiljanic Cover Designer Martina Sirotic Image Copyright Bruce Rolff, 2010. Used under license from First published March, 2011 Printed......

Words: 43079 - Pages: 173

Free Essay

Artificial Neural Network Essentials

...  NEURAL NETWORKS by Christos Stergiou and Dimitrios Siganos |   Abstract This report is an introduction to Artificial Neural Networks. The various types of neural networks are explained and demonstrated, applications of neural networks like ANNs in medicine are described, and a detailed historical background is provided. The connection between the artificial and the real thing is also investigated and explained. Finally, the mathematical models involved are presented and demonstrated. Contents: 1. Introduction to Neural Networks 1.1 What is a neural network? 1.2 Historical background 1.3 Why use neural networks? 1.4 Neural networks versus conventional computers - a comparison   2. Human and Artificial Neurones - investigating the similarities 2.1 How the Human Brain Learns? 2.2 From Human Neurones to Artificial Neurones   3. An Engineering approach 3.1 A simple neuron - description of a simple neuron 3.2 Firing rules - How neurones make decisions 3.3 Pattern recognition - an example 3.4 A more complicated neuron 4. Architecture of neural networks 4.1 Feed-forward (associative) networks 4.2 Feedback (autoassociative) networks 4.3 Network layers 4.4 Perceptrons 5. The Learning Process  5.1 Transfer Function 5.2 An Example to illustrate the above teaching procedure 5.3 The Back-Propagation Algorithm 6. Applications of neural networks 6.1 Neural networks in practice 6.2 Neural networks in medicine 6.2.1 Modelling and Diagnosing the...

Words: 7770 - Pages: 32

Free Essay

A Comparative Study of "Fuzzy Logic, Genetic Algorithm & Neural Network" in Wireless Network Security

... GENETIC ALGORITHM & NEURAL NETWORK" IN WIRELESS NETWORK SECURITY (WNS) ABSTRACT The more widespread use of networks meaning increased the risk of being attacked. In this study illustration to compares three AI techniques. Using for solving wireless network security problem (WNSP) in Intrusion Detection Systems in network security field. I will show the methods used in these systems, giving brief points of the design principles and the major trends. Artificial intelligence techniques are widely used in this area such as fuzzy logic, neural network and Genetic algorithms. In this paper, I will focus on the fuzzy logic, neural network and Genetic algorithm technique and how it could be used in Intrusion Detection Systems giving some examples of systems and experiments proposed in this field. The purpose of this paper is comparative analysis between three AI techniques in network security domain. 1 INTRODUCTION This paper shows a general overview of Intrusion Detection Systems (IDS) and the methods used in these systems, giving brief points of the design principles and the major trends. Hacking, Viruses, Worms and Trojan horses are various of the main attacks that fear any network systems. However, the increasing dependency on networks has increased in order to make safe the information that might be to arrive by them. As we know artificial intelligence has many techniques are widely used in this area such as fuzzy logic, neural network and Genetic algorithms......

Words: 2853 - Pages: 12

Free Essay

Neural Network

...– MGT 501 Neural Network Technique Outline * Overview ………………………………………………………….……… 4 * Definition …………………………………………………4 * The Basics of Neural Networks……………………………………………5 * Major Components of an Artificial Neuron………………………………..5 * Applications of Neural Networks ……………….9 * Advantages and Disadvantages of Neural Networks……………………...12 * Example……………………………………………………………………14 * Conclusion …………………………………………………………………14 Overview One of the most crucial and dominant subjects in management studies is finding more effective tools for complicated managerial problems, and due to the advancement of computer and communication technology, tools used in management decisions have undergone a massive change. Artificial Neural Networks (ANNs) is an example, knowing that it has become a critical component of business intelligence. The below article describes the basics of neural networks as well as some work done on the application of ANNs in management sciences. Definition of a Neural Network? The simplest definition of a neural network, particularly referred to as an 'artificial' neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen who defines a neural network as follows: "...a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs."Neural Network Primer:......

Words: 3829 - Pages: 16

Free Essay

The Effects of Toll Like Receptor 2 Deletion on Social Behavior Neural Network

...Sanil Modi Biol-4910 Summer 2014 M-F Dr. De Vries, Christopher T. Fields The Effects Of Toll Like Receptor 2 Deletion on Social Behavior Neural Network Introduction During this past summer I had the opportunity to conduct research at the Neuroscience Institute at Georgia State University. The research I participated in was under Christopher T. Fields who is working on getting his doctorial degree. In these last few months I have worked on many exiting projects, learned how create experiments and analyze them. From the first day of lab I learned to work with many different lab instruments, software and mastered the structures of the mice brain. The instruments I started working with were a digital microscope and its software Stereo Investigator that took pictures at HD quality of mice brain. Shortly after came the analysis of the pictures we captured and the software used was ImageJ and Excel. In ImageJ you can measure different thresholds of the mice brain and get analysis which is imputed into excel and then the numbers from excel are put into a statistical software where graphs are made and you can check if your experiments had any change from the control. What I also learned was how mice brains are put on a slide. First you would use a cryostat, which slices the mice brain at the amount of thickness needed. While you are slicing the mice brains you are putting them onto a slide. Then they are taken from the slide put into a buffer solution, which lets you add to......

Words: 1884 - Pages: 8

Free Essay

Segmentation Using Neural Networks

...SEGMENTATION WITH NEURAL NETWORK B.Prasanna Rahul Radhakrishnan Valliammai Engineering College Valliammai Engineering College Abstract: Our paper work is on Segmentation by Neural networks. Neural networks computation offers a wide range of different algorithms for both unsupervised clustering (UC) and supervised classification (SC). In this paper we approached an algorithmic method that aims to combine UC and SC, where the information obtained during UC is not discarded, but is used as an initial step toward subsequent SC. Thus, the power of both image analysis strategies can be combined in an integrative computational procedure. This is achieved by applying “Hyper-BF network”. Here we worked a different procedures for the training, preprocessing and vector quantization in the application to medical image segmentation and also present the segmentation results for multispectral 3D MRI data sets of the human brain with respect to the tissue classes “ Gray matter”, “ White matter” and “ Cerebrospinal fluid”. We correlate manual and semi automatic methods with the results. Keywords: Image analysis, Hebbian learning rule, Euclidean metric, multi spectral image segmentation, contour tracing. Introduction: Segmentation can be defined as the identification of meaningful image components. It is a fundamental task in image processing providing the basis for any kind......

Words: 2010 - Pages: 9

Free Essay

Neural Networks for Matching in Computer Vision

...Neural Networks for Matching in Computer Vision Giansalvo Cirrincione1 and Maurizio Cirrincione2 Department of Electrical Engineering, Lab. CREA University of Picardie-Jules Verne 33, rue Saint Leu, 80039 Amiens - France Universite de Technologie de Belfort-Montbeliard (UTBM) Rue Thierry MIEG, Belfort Cedex 90010, France 1 2 Abstract. A very important problem in computer vision is the matching of features extracted from pairs of images. At this proposal, a new neural network, the Double Asynchronous Competitor (DAC) is presented. It exploits the self-organization for solving the matching as a pattern recognition problem. As a consequence, a set of attributes is required for each image feature. The network is able to find the variety of the input space. DAC exploits two intercoupled neural networks and outputs the matches together with the occlusion maps of the pair of frames taken in consideration. DAC can also solve other matching problems. 1 Introduction In computer vision, structure from motion (SFM) algorithms recover the motion and scene parameters by using a sequence of images (very often only a pair of images is needed). Several SFM techniques require the extraction of features (corners, lines and so on) from each frame. Then, it is necessary to find certain types of correspondences between images, i.e. to identify the image elements in different frames that correspond to the same element in the scene. This......

Words: 3666 - Pages: 15

Free Essay

Neural Network

...ARTIFICIAL NEURAL NETWORK FOR SPEECH RECOGNITION. One of the problems found in speech recognition is that recorded samples never produce identical waveforms. This happens because of differences in length, amplitude, background noise, and sample rate. The problem can be addressed by extracting speech-related information using a spectrogram, which shows the change in the amplitude spectrum over time (x axis: time; y axis: frequency; colour intensity: magnitude). Cepstral analysis is a popular method for feature extraction in speech recognition applications and can be accomplished using Mel Frequency Cepstrum Coefficient (MFCC) analysis. The network used here has an input layer of 26 cepstral coefficients and a fully-connected hidden layer of 100 units; the weights range between -1 and +1, are initially random, and remain constant. The output layer has one unit for each target, limited to values between 0 and +1. First of all, spoken digits were recorded: seven samples of each digit, “one” through “eight”, for a total of 56 different recordings with varying lengths and environmental conditions. The background noise was removed from each sample. Then the MFCCs were calculated using Malcolm Slaney’s Auditory Toolbox, i.e. c = mfcc(s, fs, fix((3*fs)/(length(s)-256))). Finally, the intended target is chosen and a target vector created: if the network is to recognise a spoken “one”, the target has a value of +1 for each of the known “one” stimuli and 0 for everything else. This will be......
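A minimal sketch of the network shape described above: 26 cepstral inputs, a 100-unit fully-connected hidden layer with fixed random weights in [-1, +1], and sigmoid outputs bounded between 0 and +1 (one per digit class). The MFCC extraction itself is omitted; the input frame here is a placeholder vector, not the toolbox output, and the weights are untrained:

```python
import math
import random

random.seed(0)
N_IN, N_HID, N_OUT = 26, 100, 8  # 26 cepstral coefficients, 100 hidden units, 8 digits

# Weights start random in [-1, +1], as described in the excerpt.
w_ih = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w_ho = [[random.uniform(-1, 1) for _ in range(N_HID)] for _ in range(N_OUT)]

def sigmoid(a):
    """Squashes any activation into (0, 1), matching the bounded output units."""
    return 1.0 / (1.0 + math.exp(-a))

def forward(cepstra):
    """One forward pass: cepstral frame -> hidden layer -> 8 output activations."""
    hidden = [sigmoid(sum(w * x for w, x in zip(row, cepstra))) for row in w_ih]
    return [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_ho]

outputs = forward([0.1] * N_IN)  # placeholder MFCC frame, not real audio features
```

Training would then adjust the weights so that the output unit for the spoken digit approaches +1 while the others approach 0, matching the target-vector scheme in the excerpt.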

Words: 341 - Pages: 2

Free Essay

Thermal Power Plant Analysis Using Artificial Neural Network

...UNIVERSITY INTERNATIONAL CONFERENCE ON ENGINEERING, NUiCONE-2012, 06-08 DECEMBER, 2012. Thermal Power Plant Analysis Using Artificial Neural Network. Purva Deshpande1, Nilima Warke2, Prakash Khandare3, Vijay Deshpande4. 1,2 VESIT, Chembur; 3,4 Mahagenco, Mumbai. Abstract--Coal-based thermal power stations are the leaders in electricity generation in India and are highly complex nonlinear systems. The thermal performance data obtained from the MAHAGENCO KORADI UNIT 5 thermal power plant shows that the heat rate and boiler efficiency change constantly, and that the plant is probably losing some megawatts of electric power and consuming more fuel, resulting in a much higher carbon footprint. It is very difficult to analyse the raw data recorded weekly during full-power operation, because a thermal power plant is a very complex system with thousands of parameters. There is thus a need for nonlinear modeling of power plant performance in order to meet the growing economic and operational requirements. The intention of this paper is to give an overview of the use of artificial neural network (ANN) techniques in power systems. Here a Back Propagation Neural Network (BPNN) and a Radial Basis Neural Network (RBNN) are used, for comparative purposes, to model the thermodynamic process of a coal-fired power plant, based on actual......
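The key structural difference between the two models compared in that paper lies in the hidden-unit activation: a BPNN hidden unit applies a global, monotone sigmoid to a weighted sum, while an RBNN hidden unit responds locally, peaking at its own center. A hedged one-dimensional illustration (the center and width values are arbitrary, not plant data):

```python
import math

def sigmoid(a):
    """BPNN-style hidden unit: global, monotone response to a weighted sum."""
    return 1.0 / (1.0 + math.exp(-a))

def gaussian_rbf(x, center, width):
    """RBNN-style hidden unit: local Gaussian response, maximal at its center."""
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

# The sigmoid unit divides the input axis in two; the RBF unit carves out a bump.
s_mid = sigmoid(0.0)                  # exactly 0.5 at the decision boundary
r_peak = gaussian_rbf(2.0, 2.0, 0.5)  # exactly 1.0 at its own center
r_far = gaussian_rbf(4.0, 2.0, 0.5)   # nearly 0 four widths away
```

This locality is why RBF networks are often trained quickly (centers from clustering, then a linear output fit), whereas BPNNs rely on backpropagation through the global sigmoid units.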

Words: 3399 - Pages: 14

Free Essay

A 3-Layer Artificial Neural Network

...1. Describe (a) the basic structure of and (b) the learning process for a 3-layer artificial neural network. A 3-layer artificial neural network consists of an input layer, an output layer, and a hidden layer in the middle. For example, to recognize male and female faces, the input layer would be a computer program analyzing a camera shot, and the output layer would be the word "male" or "female" appearing on the screen. The hidden layer is where all the action takes place and connections are made between input and output. In an ANN these connections are mathematical. The network learns from successes (hits) and failures (misses) by making adjustments in these mathematical connections. 2. According to Churchland, why does intrapersonal (within one person) moral conflict occur? Intrapersonal moral conflict occurs when some contextual feature is alternately magnified or minimized, and one’s overall perceptual take flips back and forth between two distinct activation patterns in the neighborhood of two distinct prototypes. In such a case, an individual is morally conflicted: e.g., should I protect a friend's feelings by lying about someone’s hurtful slur, or should I tell him the truth? 3. According to Churchland, when should moral correction occur and why? According to Churchland, moral correction should occur at an early age, before the child becomes a young adult. Reasons: 1. Firstly, cognitive plasticity and eagerness to imitate......

Words: 549 - Pages: 3

Free Essay

Neural Network

...EEL5840: Machine Intelligence. Introduction to feedforward neural networks. 1. Problem statement and historical context. A. Learning framework. Figure 1 below illustrates the basic framework that we will see in artificial neural network learning. We assume that we want to learn a classification task G with n inputs and m outputs, where

y = G(x),  (1)

x = [x_1 x_2 … x_n]^T and y = [y_1 y_2 … y_m]^T.  (2)

In order to do this modeling, let us assume a model Γ with trainable parameter vector w, such that

z = Γ(x, w),  (3)

where

z = [z_1 z_2 … z_m]^T.  (4)

Now, we want to minimize the error between the desired outputs y and the model outputs z for all possible inputs x. That is, we want to find the parameter vector w* so that

E(w*) ≤ E(w), ∀w,  (5)

where E(w) denotes the error between G and Γ for model parameter vector w. Ideally, E(w) is given by

E(w) = ∫ ‖y − z‖² p(x) dx,  (6)

where the integral is over the input space and p(x) denotes the probability density function over that space. Note that E(w) in equation (6) depends on w through z [see equation (3)]. Now, in general, we cannot compute equation (6) directly; therefore, we typically compute E(w) for a training data set of input/output data,

{(x_i, y_i)}, i ∈ {1, 2, …, p},  (7)

where x_i is the n-dimensional input vector

x_i = [x_i1 x_i2 … x_in]^T.  (8)

[Figure 1: the unknown mapping G (inputs x_1 … x_n, outputs y_1 … y_m) alongside the trainable model Γ (model outputs z_1 … z_m).]...
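In practice, the integral error of equation (6) is replaced by an average over the training pairs of equation (7). A toy sketch of that empirical error, with a hypothetical linear model standing in for Γ(x, w) (the real Γ would be a feedforward network):

```python
def model(x, w):
    """Toy trainable model z = Γ(x, w): a single linear layer, for illustration only."""
    return [sum(wi * xi for wi, xi in zip(row, x)) for row in w]

def empirical_error(data, w):
    """E(w) approximated as the mean squared distance ‖y_i − z_i‖² over the training set."""
    total = 0.0
    for x, y in data:
        z = model(x, w)
        total += sum((yi - zi) ** 2 for yi, zi in zip(y, z))
    return total / len(data)

# Two training pairs (x_i, y_i); the first weight matrix fits them exactly.
data = [([1.0, 2.0], [3.0]), ([0.0, 1.0], [1.0])]
err_good = empirical_error(data, [[1.0, 1.0]])  # z_i == y_i everywhere
err_bad = empirical_error(data, [[1.0, 0.0]])   # mismatched outputs
```

Training then amounts to searching for the w that drives this empirical estimate of E(w) toward its minimum, as in condition (5).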

Words: 7306 - Pages: 30

Free Essay

Neural Plasticity

... Neural Plasticity. Team D. PSY/340. June 5, 2016. Taleshia L. Chandler, Ph.D. Neural Plasticity. The current patient, Stephanie, has experienced a stroke, a temporary interruption of normal blood flow to her brain. There are certain functions and limitations of neural plasticity in the patient’s recovery process. Neuroplasticity is defined as the ability of the nervous system to respond to intrinsic or extrinsic stimuli by reorganizing its structure, function, and connections. While almost all survivors of brain damage experience some behavioral recovery, every patient varies in his or her recovery process. According to Johansson, MD, PhD (2000), several mechanisms are involved in brain plasticity. Specifically, as in Stephanie’s case, time is of the essence. Brain damage can be triggered by a few factors. The most frequent type of stroke known to cause brain damage is ischemia, the aftermath of an obstruction of an artery by a blood clot. The less usual type is a hemorrhage, which results from a ruptured artery. Once a patient like Stephanie has experienced a stroke, physicians must immediately determine whether the stroke was ischemic or hemorrhagic. Making such a determination is inherently complicated, and physicians are racing against the clock because time is limited (Kalat, 2013, Chapter 5). Knowing that a hemorrhagic stroke is less likely than an ischemic one, physicians take a chance and......

Words: 778 - Pages: 4

Premium Essay

Using Neural Networks to Forecast Stock Markets

...Using Neural Networks to Forecast Stock Market Prices Abstract This paper is a survey on the application of neural networks in forecasting stock market prices. With their ability to discover patterns in nonlinear and chaotic systems, neural networks offer the ability to predict market directions more accurately than current techniques. Common market analysis techniques such as technical analysis, fundamental analysis, and regression are discussed and compared with neural network performance. Also, the Efficient Market Hypothesis (EMH) is presented and contrasted with chaos theory and neural networks. This paper refutes the EMH based on previous neural network work. Finally, future directions for applying neural networks to the financial markets are discussed. 1 Introduction From the beginning of time it has been man’s common goal to make his life easier. The prevailing notion in society is that wealth brings comfort and luxury, so it is not surprising that there has been so much work done on ways to predict the markets. Various technical, fundamental, and statistical indicators have been proposed and used with varying results. However, no one technique or combination of techniques has been successful enough to consistently "beat the market". With the development of neural networks, researchers and investors are hoping that the market mysteries can be unraveled. This paper is a survey of current market forecasting techniques with an emphasis on why they are......

Words: 6887 - Pages: 28

Free Essay

Rough Set Approach for Feature Reduction in Pattern Recognition Through Unsupervised Artificial Neural Network

...First International Conference on Emerging Trends in Engineering and Technology. Rough Set Approach for Feature Reduction in Pattern Recognition through Unsupervised Artificial Neural Network. A. G. Kothari (Lecturer), A. G. Keskar (Professor), A. P. Gokhale (Professor), Rucha Deshpande (B.Tech Student), Pranjali Deshmukh (B.Tech Student), agkothari72@re, Department of Electronics & Computer Science Engineering, VNIT, Nagpur. Abstract: The rough set approach can be applied in pattern recognition at three different stages: the pre-processing stage, the training stage, and in the architecture. This paper proposes the application of the rough-neuro hybrid approach in the pre-processing stage of pattern recognition. In this project, a training algorithm was first developed based on the Kohonen network. This is used as a benchmark to compare the results of the pure neural approach with the rough-neuro hybrid approach and to prove that the efficiency of the latter is higher. Structural and statistical features have been extracted from the images for the training process. The number of attributes is reduced by calculating reducts and the core from the original attribute set, which results in a reduction in convergence time. This removal of redundancy also increases the speed of the process, reduces hardware complexity, and thus enhances the overall efficiency of the pattern recognition algorithm. Keywords: core, dimensionality reduction, feature extraction, rough sets, reducts, unsupervised ANN as......
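The reduct computation mentioned in that abstract, finding a smaller attribute subset that still discerns the decision classes, can be illustrated with a hypothetical greedy sketch. Real rough-set reduct algorithms work with discernibility matrices; this backward-elimination toy, with made-up data, only shows the idea of dropping attributes while consistency is preserved:

```python
def partition(rows, attrs):
    """Indiscernibility classes induced by the attribute subset `attrs`."""
    groups = {}
    for i, row in enumerate(rows):
        key = tuple(row[a] for a in attrs)
        groups.setdefault(key, set()).add(i)
    return [frozenset(g) for g in groups.values()]

def is_consistent(rows, decisions, attrs):
    """True if every indiscernibility class is pure w.r.t. the decision attribute."""
    return all(len({decisions[i] for i in block}) <= 1
               for block in partition(rows, attrs))

def find_reduct(rows, decisions):
    """Greedy backward elimination: drop any attribute not needed for consistency."""
    attrs = list(range(len(rows[0])))
    for a in list(attrs):
        trial = [b for b in attrs if b != a]
        if trial and is_consistent(rows, decisions, trial):
            attrs = trial
    return attrs

# Toy decision table: attribute 2 alone already determines the decision.
rows = [(0, 0, 1), (0, 1, 1), (1, 0, 0), (1, 1, 0)]
decisions = [0, 0, 1, 1]
reduct = find_reduct(rows, decisions)
```

Feeding only the reduct attributes to the Kohonen network is what yields the shorter convergence time the abstract describes: fewer inputs, same discernible classes.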

Words: 2369 - Pages: 10