【ELM Classification】Data classification with an ELM neural network optimized by the Harris hawks algorithm, with MATLAB code
1 Introduction
To improve the classification accuracy of the kernel extreme learning machine (ELM), the Harris hawks optimization (HHO) algorithm is used to tune two parameters: the penalty coefficient and the kernel width. First, the kernel ELM is trained on the training set of a benign/malignant breast tumor database while HHO optimizes its parameters; then, both HHO-ELM and plain ELM are used to classify the test set; finally, the classification performance of HHO-ELM and ELM is compared. The test results show that the overall diagnostic accuracy of HHO-ELM is 10% higher than that of ELM, and its diagnostic accuracy on malignant tumors is markedly better than ELM's.
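The two decision variables that HHO searches over are the kernel ELM's penalty coefficient C and the kernel width. A minimal Python sketch of such a classifier and its fitness function follows; it is illustrative only, not the blogger's MATLAB implementation, and the RBF kernel form, function names, and data layout are my assumptions:

```python
import numpy as np

def rbf_kernel(A, B, gamma):
    """Pairwise RBF kernel: K(a, b) = exp(-gamma * ||a - b||^2)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def kelm_fit(Xtr, ytr, C, gamma):
    """Kernel ELM training: alpha = (Omega + I/C)^(-1) * T, with one-hot targets T."""
    T = np.eye(int(ytr.max()) + 1)[ytr]          # one-hot label matrix
    Omega = rbf_kernel(Xtr, Xtr, gamma)          # training kernel matrix
    return np.linalg.solve(Omega + np.eye(len(Xtr)) / C, T)

def kelm_predict(Xte, Xtr, alpha, gamma):
    """Predicted class = argmax of the kernel output layer."""
    return rbf_kernel(Xte, Xtr, gamma).dot(alpha).argmax(axis=1)

def fitness(params, Xtr, ytr, Xval, yval):
    """Objective handed to the optimizer: validation error rate of (C, gamma)."""
    C, gamma = params
    alpha = kelm_fit(Xtr, ytr, C, gamma)
    return float(np.mean(kelm_predict(Xval, Xtr, alpha, gamma) != yval))
```

Minimizing this error rate over (C, gamma) with HHO is what yields the HHO-ELM model compared against plain ELM in the results.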
2 Code Excerpt
function [fbst, xbst, performance] = hho( objective, d, lmt, n, T, S)
%Harris hawks optimization algorithm
% inputs:
%  inputs:
%    objective - function handle, the objective function
%    d - scalar, dimension of the optimization problem
%    lmt - d-by-2 matrix, lower and upper bounds of the decision variables
%    n - scalar, swarm size
%    T - scalar, maximum number of iterations
%    S - scalar, number of independent runs
%  date: 2021-05-09
%  author: elkman, github.com/ElkmanY/
%% Levy flight
beta = 1.5;
sigma = ( gamma(1+beta)*sin(pi*beta/2)/( gamma((1+beta)/2)*beta*2^((beta-1)/2) ) )^(1/beta); % Mantegna's sigma: the whole product is the denominator
Levy = @(x) 0.01*normrnd(0,1,d,x)*sigma./abs(normrnd(0,1,d,x)).^(1/beta);
%% algorithm procedure
tic;
for s = 1:S
    %% Initialization
    X = lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n);
    for t = 1:T
        F = objective(X);
        [f_rabbit(s,t), i_rabbit] = min(F);
        x_rabbit(:,t,s) = X(:,i_rabbit);
        xr = x_rabbit(:,t,s);
        J = 2*(1-rand(d,1));            % random jump strength of the rabbit
        E0 = 2*rand(1,n)-1;             % initial escaping energy in [-1,1]
        E(t,:) = 2*E0*(1-t/T);          % escaping energy decays over iterations
        absE = abs(E(t,:));
        p1 = absE>=1;                               % exploration, eq.(1)
        r = rand(1,n);
        p2 = (r>=0.5) & (absE>=0.5) & (absE<1);     % soft besiege, eq.(4)
        p3 = (r>=0.5) & (absE<0.5);                 % hard besiege, eq.(6)
        p4 = (r<0.5) & (absE>=0.5) & (absE<1);      % soft besiege with rapid dives, eq.(10)
        p5 = (r<0.5) & (absE<0.5);                  % hard besiege with rapid dives, eq.(11)
        %% update locations
        rh = randi([1,n],1,n);          % indices of randomly selected hawks
        flag1 = rand(1,n)>=0.5;
        Y = xr - E(t,:).*abs( J.*xr - X );
        Z = Y + rand(d,n).*Levy(n);
        flag2 = (objective(Y)<objective(Z)) & (objective(Y)<F);
        flag3 = (objective(Y)>objective(Z)) & (objective(Z)<F);
        flag4 = (~flag2) & (~flag3);
        X_ =    p1.*( (X(:,rh) - rand(1,n).*abs( X(:,rh) - 2*rand(1,n).*X )).*flag1 +...
                ((X(:,rh) - mean(X,2)) - rand(1,n).*( lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n) )).*(~flag1) )...
            +   p2.*( xr - X - E(t,:).*abs( J.*xr - X ) )...
            +   p3.*( xr - E(t,:).*abs( xr - X ) )...
            +   p4.*( Y.*flag2 + Z.*flag3 + ( lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n) ).*flag4 )...
            +   p5.*( Y.*flag2 + Z.*flag3 + ( lmt(:,1) + (lmt(:,2) - lmt(:,1)).*rand(d,n) ).*flag4 );
        X_(:,i_rabbit) = xr;            % elitism: the best hawk keeps the rabbit position
        X = X_;
? ?end
end
%% outputs
performance = [min(f_rabbit(:,T));mean(f_rabbit(:,T));std(f_rabbit(:,T))];
timecost = toc;
[fbst, ibst] = min(f_rabbit(:,T));
xbst = x_rabbit(:,T,ibst);
%% plot data
% Convergence Curve
figure('Name','Convergence Curve');
box on
semilogy(1:T,mean(f_rabbit,1),'b','LineWidth',1.5);
xlabel('Iteration','FontName','Arial');
ylabel('Fitness/Score','FontName','Arial');
title('Convergence Curve','FontName','Arial');
if d == 2
? ?% Trajectory of Global Optimal
? ?figure('Name','Trajectory of Global Optimal');
? ?x1 = linspace(lmt(1,1),lmt(1,2));
? ?x2 = linspace(lmt(2,1),lmt(2,2));
? ?[X1,X2] = meshgrid(x1,x2);
? ?V = reshape(objective([X1(:),X2(:)]'),[size(X1,1),size(X1,1)]);
? ?contour(X1,X2,log10(V),100); % notice log10(V)
? ?hold on
? ?plot(x_rabbit(1,:,1),x_rabbit(2,:,1),'r-x','LineWidth',1);
? ?hold off
    xlabel('\it{x}_1','FontName','Times New Roman');
    ylabel('\it{x}_2','FontName','Times New Roman');
    title('Trajectory of Global Optimal','FontName','Arial');
end
end
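The `Levy` helper above follows Mantegna's algorithm for Levy-stable step lengths; note that the whole product gamma((1+beta)/2)*beta*2^((beta-1)/2) belongs in the denominator of sigma. A short Python sketch (illustrative only, not part of the MATLAB listing) that computes sigma and draws steps:

```python
import numpy as np
from math import gamma, sin, pi

def levy_sigma(beta=1.5):
    """Mantegna's sigma for Levy exponent beta (approx. 0.6966 for beta = 1.5)."""
    num = gamma(1 + beta) * sin(pi * beta / 2)
    den = gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(d, n, beta=1.5, scale=0.01, rng=None):
    """d-by-n matrix of scaled Levy-flight steps: scale * u / |v|^(1/beta)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((d, n)) * levy_sigma(beta)
    v = rng.standard_normal((d, n))
    return scale * u / np.abs(v) ** (1 / beta)
```

The heavy-tailed steps are what let the rapid-dive phases occasionally jump far from the current prey estimate.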
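The masks p1 through p5 in the loop partition the swarm into the five HHO behaviors according to the escaping energy |E| and a uniform random number r. A compact Python restatement of that branching (function and phase names are mine, for illustration; the phase labels follow the HHO paper's terminology):

```python
def escaping_energy(E0, t, T):
    """E = 2*E0*(1 - t/T): magnitude decays linearly to 0 over the iterations."""
    return 2.0 * E0 * (1.0 - t / T)

def hho_phase(absE, r):
    """Map (|E|, r) to the HHO behavior applied to one hawk."""
    if absE >= 1:
        return "exploration"                        # eq. (1)
    if r >= 0.5 and absE >= 0.5:
        return "soft besiege"                       # eq. (4)
    if r >= 0.5:
        return "hard besiege"                       # eq. (6)
    if absE >= 0.5:
        return "soft besiege with rapid dives"      # eq. (10)
    return "hard besiege with rapid dives"          # eq. (11)
```

Because |E| shrinks as t approaches T, the swarm drifts from exploration toward the exploitative besiege phases late in the run.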
3 Simulation Results


4 References
[1] 彭甜, 孫偉, 張楚, et al. A wind speed forecasting method and system based on an ELM optimized by an improved Harris hawks algorithm.
