function [x,resnorm,residual,exitflag,output] = LM_mat(InpPat,TarPat,Wini,n_iter,NP)
%
%[x,resnorm,residual,exitflag,output] = LM_mat(InpPat,TarPat,Wini,n_iter,NP)
%
%This function implements a Levenberg-Marquardt type of learning rule,
%using the function lsqnonlin available in the Optimization Toolbox.
%
%Input Arguments: InpPat - matrix of input data
%                 TarPat - matrix of desired output data
%                 Wini   - vector of initial weights
%                 n_iter - maximum number of iterations
%                 NP     - topology of the MLP: a two-hidden-layer network with a single output
%                          The weights are arranged as:
%                            ni*NP(1)
%                            NP(1)*NP(2)
%                            NP(2)*1
%                            Bias for the 1st hidden layer
%                            Bias for the 2nd hidden layer
%                            Bias for the output neuron
%
%Output arguments: x - final weights
%                  resnorm, residual, exitflag, output - as returned by lsqnonlin

options = optimset('LargeScale','off','Jacobian','on','LevenbergMarquardt','on','TolFun',1e-8,'Display','iter','MaxIter',n_iter);
[x,resnorm,residual,exitflag,output] = lsqnonlin('funlm',Wini,[],[],options,InpPat,TarPat,NP);
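
% A minimal usage sketch. It assumes the residual/Jacobian function 'funlm'
% (referenced above) is on the MATLAB path; the input dimension ni, the data
% matrices InpPat/TarPat and the layer sizes below are illustrative assumptions.
% The weight-vector length follows the layout documented in the header.
%
%   ni   = 4;                        % assumed number of network inputs
%   NP   = [5 3];                    % two hidden layers: 5 and 3 neurons
%   nW   = ni*NP(1) + NP(1)*NP(2) + NP(2)*1 + NP(1) + NP(2) + 1;
%   Wini = 0.1*randn(nW,1);          % small random initial weights
%   [W,resnorm] = LM_mat(InpPat,TarPat,Wini,100,NP);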