[DELM Classification] Data classification with a deep extreme learning machine improved by the squirrel search algorithm, with MATLAB code
1 Introduction
The biggest drawback of artificial neural networks is that training takes too long, which limits their use in real-time applications. In recent years the extreme learning machine (ELM) has cut the training time of feedforward networks dramatically; however, when the raw data contain many noisy variables, or when the input dimensionality is very high, the overall performance of ELM degrades considerably. The core of deep learning is feature mapping, which can filter the noise out of raw data and, when mapping into a lower-dimensional space, also serves to reduce the data's dimensionality. We therefore use these strengths of deep learning to offset the weaknesses of ELM, yielding the deep extreme learning machine (DELM). To raise the prediction accuracy of DELM further, this post uses the squirrel search algorithm to optimize the DELM hyperparameters; simulation results show that the improved algorithm achieves higher prediction accuracy.
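For reference, training a basic single-hidden-layer ELM amounts to drawing the input weights and biases at random and solving for the output weights in closed form. The minimal sketch below assumes hypothetical variables Xtr, Ytr, and Xte and is not the author's DELM code; a DELM stacks ELM-autoencoder layers in front of this final step.

% Minimal ELM sketch with hypothetical variables: Xtr is the N-by-d training
% feature matrix, Ytr the N-by-c one-hot target matrix, Xte the test features.
nHidden = 100;                                 % number of hidden neurons (assumed)
W = 2*rand(size(Xtr,2), nHidden) - 1;          % random input weights in [-1, 1]
b = rand(1, nHidden);                          % random hidden biases
H = 1./(1 + exp(-(Xtr*W + b)));                % sigmoid hidden-layer outputs
beta = pinv(H)*Ytr;                            % output weights via the Moore-Penrose pseudoinverse
Yhat = (1./(1 + exp(-(Xte*W + b))))*beta;      % test-set scores; predict by the max column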
2 Code excerpt
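The script below loads a dataset through a file dialog, imputes missing values with knnimpute, runs squirrel-search-based binary feature selection, and then trains and scores SVM, KNN, and ensemble classifiers on a random 70/30 split.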
clc; clear; close all;
[f, p] = uigetfile('*');                       % pick the data file interactively
X = importdata([p f]);
data = X.data;
data2 = data(:,1:end-1); class = data(:,end);  % feature columns and label column
data1 = knnimpute(data2);                      % fill missing values by nearest-neighbour imputation
%%%%%%%% Feature selection: initialise a population of 10 random binary masks
FSL = 0; FSU = 1;                              % lower/upper bound of each mask bit
D = size(data1,2);
for i = 1:10
    FS(i,:) = FSL + randi([0 1],[1 D])*(FSU-FSL);
    try
        fit(i) = fitness(data1,class,FS(i,:)); % fitness.m: user-supplied objective (see sketch below)
    catch
        fit(i) = 1;                            % worst score if evaluation fails
        continue;
    end
end
ind = find(fit==min(fit));
FSnew = FS(ind,:);                             % current best mask(s)
pdp = 0.1;                                     % predator presence probability
row = 1.204; V = 5.25; S = 0.0154; cd = 0.6; CL = 0.7; hg = 1; sf = 18;
Gc = 1.9;                                      % gliding constant
D1 = 1/(2*row*V.^2*S*cd); L = 1/(2*row*V.^2*S*CL);   % drag and lift terms of the gliding model
tanpi = D1/L; dg = hg/(tanpi*sf);              % gliding distance
aa = randi([1 length(ind)]);                   % index of a random best individual
iter = 1; maxiter = 2;                         % very small iteration budget for the demo
while(iter < maxiter)
    for i = 1:10
        % squirrels gliding towards the hickory tree (global best)
        if(rand >= pdp)
            FS(i,:) = round(FS(i,:) + (dg*Gc*abs(FSnew(1,:)-FS(i,:))));
        else
            FS(i,:) = FSL + randi([0 1],[1 D])*(FSU-FSL);   % random relocation under predator threat
        end
        Fh = FS;
        fit1(i) = fitness(data1,class,FS(i,:));
        ind1 = find(fit1==min(fit1));
        FSnew1 = FS(ind1,:);
        % squirrels gliding towards a random acorn tree (a random current best)
        if(rand > pdp)
            FS(i,:) = round(FS(i,:) + (dg*Gc*abs(FSnew(aa,:)-FS(i,:))));
        else
            FS(i,:) = FSL + randi([0 1],[1 D])*(FSU-FSL);
        end
        Fa = FS;
        fit2(i) = fitness(data1,class,FS(i,:));
        ind2 = find(fit2==min(fit2));
        FSnew2 = FS(ind2,:);
    end
    % seasonal monitoring condition
    Sc = sqrt(sum(sum((Fh-Fa).^2)));           % seasonal constant (scalar distance between the two swarms)
    Smin = (10e-6)/(365)^(iter/(maxiter/2.5)); % minimum seasonal constant
    if(Sc < Smin)
        season = 'summer';
        for i = 1:10
            FS(i,:) = FSL + levy(1,D,1.5)*(FSU-FSL);   % Levy-flight random relocation
        end
    else
        season = 'winter';
        break;
    end
    %%% Searching method: re-evaluate and pool the candidate solutions
    for i = 1:10
        fit3(i) = fitness(data1,class,FS(i,:));
    end
    ind3 = find(fit3==min(fit3));
    final = abs(round([Fh(ind1,:); Fa(ind2,:); FS(ind3,:)]));
    for i = 1:size(final,1)
        fitt(i) = fitness(data1,class,final(i,:));
    end
    best(iter) = min(fitt);
    [ff, inn] = min(fitt);
    bestfeat(iter,:) = final(inn,:);
    pdp = best(iter);                          % adapt the predator probability to the best fitness
    iter = iter + 1;
end
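%%%%%%%% Classification on the selected features %%%%%%%%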
sel = find(bestfeat(end,:));
disp('Selected Features'); disp(sel)
dataA = data2(:,sel);                  % keep only the selected feature columns
p = .7;                                % proportion of rows to select for training
N = size(dataA,1);                     % total number of rows
tf = false(N,1);                       % create logical index vector
tf(1:round(p*N)) = true;
tf = tf(randperm(N));                  % randomise order
dataTraining = dataA(tf,:); labeltraining = class(tf);
dataTesting = dataA(~tf,:); labeltesting = class(~tf);
disp('Training set size'); disp(size(dataTraining,1))
disp('Testing set size'); disp(size(dataTesting,1))
%%%%%%%%  SVM  %%%%%%%%
svt = fitcsvm(dataTraining,labeltraining);   % fitcsvm replaces svmtrain/svmclassify, removed in recent MATLAB releases
out1 = predict(svt,dataTesting);
%%%%%%%%  KNN  %%%%%%%%
mdl = fitcknn(dataTraining,labeltraining);
out2 = predict(mdl,dataTesting);
%%%%%%%%  Ensemble  %%%%%%%%
mdl = fitcensemble(dataTraining,labeltraining);
out3 = predict(mdl,dataTesting);
tp = length(find(out3==labeltesting));
msgbox([{['Out of ',num2str(length(out3))]},{[num2str(tp),' are correctly classified']}])
delete(gcp('nocreate'))                % shut down any open parallel pool
disp('%%%%%%%%  KNN  %%%%%%%%%%%%%%')
[EVAL, CF] = Evaluate(out2,labeltesting);    % Evaluate.m: user-supplied metrics helper
disp('Accuracy (%)');disp(EVAL(1)*100);
disp('Precision (%)');disp(EVAL(4)*100);
disp('Recall (%)');disp(EVAL(5)*100);
disp('Fmeasure (%)');disp(EVAL(6)*100);
disp('True Positive');disp(CF(1))
disp('True Negative');disp(CF(2))
disp('False Positive');disp(CF(3))
disp('False Negative');disp(CF(4))
disp('%%%%%%%%  SVM  %%%%%%%%%%%%%%')
[EVAL3, CF] = Evaluate(out1,labeltesting);
disp('Accuracy (%)');disp(EVAL3(1)*100);
disp('Precision (%)');disp(EVAL3(4)*100);
disp('Recall (%)');disp(EVAL3(5)*100);
disp('Fmeasure (%)');disp(EVAL3(6)*100);
disp('True Positive');disp(CF(1))
disp('True Negative');disp(CF(2))
disp('False Positive');disp(CF(3))
disp('False Negative');disp(CF(4))
disp('%%%%%%%%  Ensemble  %%%%%%%%%%%%%%')
[EVAL2, CF] = Evaluate(out3,labeltesting);
disp('Accuracy (%)');disp(EVAL2(1)*100);
disp('Precision (%)');disp(EVAL2(4)*100);
disp('Recall (%)');disp(EVAL2(5)*100);
disp('Fmeasure (%)');disp(EVAL2(6)*100);
disp('True Positive');disp(CF(1))
disp('True Negative');disp(CF(2))
disp('False Positive');disp(CF(3))
disp('False Negative');disp(CF(4))
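The excerpt calls two helpers, fitness and levy, whose files are not shown. The sketches below are assumptions about what those helpers might look like, not the author's originals: fitness scores a binary feature mask by the resubstitution error of a nearest-neighbour classifier on the selected columns, and levy draws Levy-flight step lengths with Mantegna's algorithm.

function f = fitness(data, class, mask)
% Hypothetical stand-in for the missing fitness.m: error rate of a
% nearest-neighbour classifier restricted to the selected features.
sel = find(round(mask));
if isempty(sel), f = 1; return; end            % an empty mask gets the worst score
mdl = fitcknn(data(:,sel), class);
f = resubLoss(mdl);                            % resubstitution error in [0,1]
end

function s = levy(n, d, beta)
% Hypothetical stand-in for the missing levy.m: n-by-d matrix of
% Levy-flight step lengths with exponent beta (Mantegna's algorithm).
num = gamma(1+beta)*sin(pi*beta/2);
den = gamma((1+beta)/2)*beta*2^((beta-1)/2);
sigma = (num/den)^(1/beta);
u = randn(n,d)*sigma;
v = randn(n,d);
s = u./abs(v).^(1/beta);
end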
3 Simulation results

4 References
[1] Ma Mengmeng. Research on Extreme Learning Machine Algorithms Based on Deep Learning [D]. Ocean University of China, 2015.
