function [crit,WW]=BP(InpPat, TarPat, Wini, alfa, n_iter, NP)
%
%[crit,WW]=BP(InpPat, TarPat, Wini, alfa, n_iter, NP)
%
%This function computes the 2-norm of the error vector, considering a BP learning rule
%
%Input Arguments: InpPat - matrix of input data
%                 TarPat - matrix of desired output data
%                 Wini   - vector of initial weights
%                 alfa   - learning rate
%                 n_iter - number of iterations
%                 NP     - topology of the MLP: a two-hidden-layer network with a single output
%                          The weights are arranged as:
%                             ni*NP(1)
%                             NP(1)*NP(2)
%                             NP(2)*1
%                             bias for the 1st hidden layer
%                             bias for the 2nd hidden layer
%                             bias for the output neuron
%
%Output Arguments: crit - vector of the 2-norm of the errors (one entry per iteration)
%                  WW   - matrix of the evolution of the weights (one row per iteration)

W=Wini;
for iter=1:n_iter
   % Forward pass for the current weights: Y are the network outputs,
   % E is the error-gradient vector used in the update below
   [Y,G,E]=OneLay2(InpPat,TarPat,W,NP);
   % 2-norm of the output error at this iteration
   crit(iter)=norm(TarPat-Y);
   % Record the current weight vector
   WW(iter,:)=W;
   % Gradient-descent (back-propagation) weight update
   W=W-alfa*E';
end
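
%Example usage (a hypothetical sketch: the data, network sizes, and learning
%rate below are illustrative only, and OneLay2 must be on the MATLAB path):
%
%   ni=2; NP=[4 3];                  % 2 inputs, hidden layers of 4 and 3 neurons
%   % total weights: ni*NP(1) + NP(1)*NP(2) + NP(2)*1, plus the three bias groups
%   nw=ni*NP(1)+NP(1)*NP(2)+NP(2)+NP(1)+NP(2)+1;
%   InpPat=rand(ni,20); TarPat=rand(1,20);
%   Wini=0.1*randn(1,nw);            % small random initial weights
%   [crit,WW]=BP(InpPat,TarPat,Wini,0.05,100,NP);
%   plot(crit)                       % error 2-norm should decrease over iterations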