function [x]=LM_mat(InpPat,TarPat,Wini,n_iter,NP)
%
%[x]=LM_mat(InpPat,TarPat,Wini,n_iter,NP)
%
%This function implements the Levenberg-Marquardt learning rule, using the
%function leastsq available in the Optimization Toolbox.
%
%Input arguments:  InpPat - matrix of input data
%                  TarPat - matrix of desired output data
%                  Wini   - vector of initial weights
%                  n_iter - maximum number of iterations
%                  NP     - topology of the MLP: a network with two hidden
%                           layers and just 1 output
%                  The weights are arranged as:
%                    ni*NP(1)
%                    NP(1)*NP(2)
%                    NP(2)*1
%                    bias for the 1st hidden layer
%                    bias for the 2nd hidden layer
%                    bias for the output neuron
%
%Output arguments: x - final weights
%
%A quasi-Newton alternative, using the function fminunc, is:
%options = optimset('GradObj','on','Display','iter','MaxIter',n_iter);
%x = fminunc('funls',Wini,options,InpPat,TarPat,NP);

options=foptions;
options(1)=1;         % display the error value at each iteration
options(14)=n_iter;   % maximum number of iterations
x=leastsq('funlm',Wini,options,'jac',InpPat,TarPat,NP);
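
% Example call (a sketch, kept as comments; it assumes funlm.m is on the
% path, that ni is the number of input variables, and the data matrices
% InpPat and TarPat are already loaded - all names below are illustrative):
%   NP   = [5 3];                       % 5 and 3 neurons in the hidden layers
%   nw   = ni*NP(1)+NP(1)*NP(2)+NP(2)+NP(1)+NP(2)+1;  % total weight count
%   Wini = 0.1*randn(nw,1);             % small random initial weights
%   W    = LM_mat(InpPat,TarPat,Wini,100,NP);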