To build a BP neural network model in MATLAB for forecasting a data series, the main steps are:
1. Supply the known data as the original series;
2. Set the autoregressive order, typically 2-3 (higher is not necessarily better);
3. Set the time span to forecast;
4. Set the number of forecast steps;
5. Call a user-defined BP function to make the prediction;
6. Plot the forecast trend with the plot function.
The main implementation code is as follows:
clc
% x is the original series (row vector)
x=[208.72 205.69 231.5 242.78 235.64 218.41];
%x=[101.4 101.4 101.9 102.4 101.9 102.9];
%x=[140 137 112 125 213 437.43];
t=1:length(x);
% autoregressive order
lag=3;
% time span to forecast
t1=t(end)+1:t(end)+5;
% number of forecast steps fn
fn=length(t1);
[f_out,iinput]=BP(x,lag,fn);
P=vpa(f_out,5);
A=[t1' P'];
disp('Forecast values')
disp(A)
% plot the forecast
figure(1),plot(t,iinput,'bo-'),hold on
plot(t(end):t1(end),[iinput(end),f_out],'rp-'),grid on
title('BP neural network forecast of metro line ridership')
xlabel('Month'),ylabel('Ridership (millions)');
Running the script prints the forecast values and draws the trend plot.
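For readers without MATLAB, steps 1-6 above can be sketched in Python. This is only an illustration of the lag/multi-step-forecast setup: a plain least-squares autoregression stands in for the custom BP() function (an assumption, not the original model), with the series, lag, and step count taken from the script above.

```python
import numpy as np

# Original series (same data as the MATLAB script)
x = np.array([208.72, 205.69, 231.5, 242.78, 235.64, 218.41])
lag = 3   # autoregressive order
fn = 5    # number of forecast steps

# Build the lagged design matrix: each row is [x[t-lag], ..., x[t-1]]
X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
y = x[lag:]

# Fit linear AR coefficients (stand-in for the trained BP network)
coef, *_ = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)

# Iterated multi-step forecast: feed each prediction back into the lags
hist = list(x)
for _ in range(fn):
    feats = np.r_[1.0, hist[-lag:]]
    hist.append(float(feats @ coef))

forecast = hist[len(x):]
print(forecast)  # the fn predicted values
```

A trained BP network would replace the linear predictor here, but the lag-window bookkeeping and the feed-back of each prediction are the same.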
❷ MATLAB code for a BP-neural-network-based image restoration algorithm
function Solar_SAE
% Requires the DeepLearnToolbox (nnsetup/nntrain/nnff, saesetup/saetrain).
tic;
n = 300;  % number of training images
m = 20;   % number of test images
train_x = [];
test_x = [];
for i = 1:n
%filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3train\' num2str(i,'%03d') '.bmp']);
%filename = strcat(['E:\matlab\work\c0\TrainImage' num2str(i,'%03d') '.bmp']);
filename = strcat(['E:\image restoration\3-(' num2str(i) ')-4.jpg']);
b = imread(filename);
%c = rgb2gray(b);
c=b;
[ImageRow ImageCol] = size(c);
c = reshape(c,[1,ImageRow*ImageCol]);
train_x = [train_x;c];
end
for i = 1:m
%filename = strcat(['D:\Program Files\MATLAB\R2012a\work\DeepLearn\Solar_SAE\64_64_3test\' num2str(i,'%03d') '.bmp']);
%filename = strcat(['E:\matlab\work\c0\TestImage' num2str(i+100,'%03d') '-1.bmp']);
filename = strcat(['E:\image restoration\3-(' num2str(i+100) ').jpg']);
b = imread(filename);
%c = rgb2gray(b);
c=b;
[ImageRow ImageCol] = size(c);
c = reshape(c,[1,ImageRow*ImageCol]);
test_x = [test_x;c];
end
train_x = double(train_x)/255;
test_x = double(test_x)/255;
%train_y = double(train_y);
%test_y = double(test_y);
% Setup and train a stacked denoising autoencoder (SDAE)
rng(0);
%sae = saesetup([4096 500 200 50]);
%sae.ae{1}.activation_function = 'sigm';
%sae.ae{1}.learningRate = 0.5;
%sae.ae{1}.inputZeroMaskedFraction = 0.0;
%sae.ae{2}.activation_function = 'sigm';
%sae.ae{2}.learningRate = 0.5
%%sae.ae{2}.inputZeroMaskedFraction = 0.0;
%sae.ae{3}.activation_function = 'sigm';
%sae.ae{3}.learningRate = 0.5;
%sae.ae{3}.inputZeroMaskedFraction = 0.0;
%sae.ae{4}.activation_function = 'sigm';
%sae.ae{4}.learningRate = 0.5;
%sae.ae{4}.inputZeroMaskedFraction = 0.0;
%opts.numepochs = 10;
%opts.batchsize = 50;
%sae = saetrain(sae, train_x, opts);
%visualize(sae.ae{1}.W{1}(:,2:end)');
% Use the SDAE to initialize a FFNN
nn = nnsetup([4096 1500 500 200 50 200 500 1500 4096]);
nn.activation_function = 'sigm';
nn.learningRate = 0.03;
nn.output = 'linear'; % output unit 'sigm' (=logistic), 'softmax' and 'linear'
%add pretrained weights
%nn.W{1} = sae.ae{1}.W{1};
%nn.W{2} = sae.ae{2}.W{1};
%nn.W{3} = sae.ae{3}.W{1};
%nn.W{4} = sae.ae{3}.W{2};
%nn.W{5} = sae.ae{2}.W{2};
%nn.W{6} = sae.ae{1}.W{2};
%nn.W{7} = sae.ae{2}.W{2};
%nn.W{8} = sae.ae{1}.W{2};
% Train the FFNN
opts.numepochs = 30;
opts.batchsize = 150;
tx = test_x(14,:);
nn1 = nnff(nn,tx,tx);
ty1 = reshape(nn1.a{9},64,64);
nn = nntrain(nn, train_x, train_x, opts);
toc;
tic;
nn2 = nnff(nn,tx,tx);
toc;
tic;
ty2 = reshape(nn2.a{9},64,64);
tx = reshape(tx,64,64);
tz = tx - ty2;
tz = im2bw(tz,0.1);
%imshow(tx);
%figure,imshow(ty2);
%figure,imshow(tz);
ty = cat(2,tx,ty2,tz);
montage(ty);
filename3 = strcat(['E:\image restoration\3.jpg']);
e=imread(filename3);
f= rgb2gray(e);
f=imresize(f,[64,64]);
%imshow(ty2);
f=double (f)/255;
[PSNR, MSE] = psnr(ty2,f)
imwrite(ty2,'E:\image restoration\bptest.jpg','jpg');
toc;
%visualize(ty);
%[er, bad] = nntest(nn, tx, tx);
%assert(er < 0.1, 'Too big error');
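As an orientation to the encode-decode idea behind Solar_SAE (flatten and normalize the images, compress them through a bottleneck, and train the network to reconstruct its own input), here is a minimal NumPy sketch. The layer sizes and the synthetic data are toy assumptions, not the original 4096-...-50-...-4096 network or image files.

```python
import numpy as np

rng = np.random.default_rng(0)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hid = 64, 16                          # toy sizes; the MATLAB net is far larger
Z = rng.random((200, 8))                      # hidden low-rank structure, so reconstruction is learnable
X = sigmoid(Z @ rng.normal(0, 1, (8, n_in)))  # stand-in for flattened, /255-normalized images

W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)   # encoder
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)    # decoder
lr = 0.5

losses = []
for epoch in range(30):
    H = sigmoid(X @ W1 + b1)               # encode
    Y = sigmoid(H @ W2 + b2)               # decode (reconstruction)
    err = Y - X                            # train input against itself, as nntrain(nn, x, x) does
    losses.append(float((err ** 2).mean()))
    dY = err * Y * (1 - Y)                 # backprop through output sigmoid
    dH = (dY @ W2.T) * H * (1 - H)         # backprop through hidden sigmoid
    W2 -= lr * H.T @ dY / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(axis=0)

print(losses[0], losses[-1])               # reconstruction error should drop
```

The MATLAB code applies the same principle: `nntrain(nn, train_x, train_x, opts)` uses the input as its own target, so the narrow middle layer is forced to learn a compressed representation of the image.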
❸ Is there Python code for a genetic algorithm that optimizes a BP neural network?
Below is a MATLAB implementation of the function:
clc
clear all
close all
%% Load the training and test samples; one sample per column. Input P, output T (labels).
% The sample data are those listed in the problem statement above.
% epochs is the number of passes over which weights and thresholds are
% adjusted according to the output error.
load data
% initial number of hidden-layer neurons
hiddennum=31;
% minimum and maximum of each input variable
threshold=[0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1];
inputnum=size(P,1);  % number of input-layer neurons
outputnum=size(T,1); % number of output-layer neurons
w1num=inputnum*hiddennum;  % number of input-to-hidden weights
w2num=outputnum*hiddennum; % number of hidden-to-output weights
N=w1num+hiddennum+w2num+outputnum; % number of variables to optimize
%% Genetic algorithm parameters
NIND=40;   % population size
MAXGEN=50; % maximum number of generations
PRECI=10;  % binary bits per variable
GGAP=0.95; % generation gap
px=0.7;    % crossover probability
pm=0.01;   % mutation probability
trace=zeros(N+1,MAXGEN); % record of the best solution per generation
FieldD=[repmat(PRECI,1,N);repmat([-0.5;0.5],1,N);repmat([1;0;1;1],1,N)]; % field descriptor
Chrom=crtbp(NIND,PRECI*N); % initial population
%% Optimization
gen=0; % generation counter
X=bs2rv(Chrom,FieldD); % decode the initial population to real values
ObjV=Objfun(X,P,T,hiddennum,P_test,T_test); % objective function values
while gen % (the remainder of the optimization loop is truncated in the source)
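Since the question asks for Python, here is a minimal Python sketch of the same idea: a genetic algorithm evolving the flattened weight/threshold vector of a one-hidden-layer network. The data, fitness function, and real-coded operators are illustrative assumptions (the MATLAB code above uses binary coding via crtbp/bs2rv); only the parameters NIND, MAXGEN, px, and pm are carried over.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression data (stand-in for the P/T loaded from 'data' in the MATLAB code)
P = rng.random((2, 40))             # 2 inputs, 40 samples (column-wise, as in MATLAB)
T = np.sin(P.sum(axis=0))[None, :]  # 1 output

inputnum, hiddennum, outputnum = 2, 5, 1
N = inputnum*hiddennum + hiddennum + outputnum*hiddennum + outputnum  # genes per individual

def mse(genes):
    """Decode a gene vector into network weights and return its error (to minimize)."""
    i = 0
    W1 = genes[i:i+inputnum*hiddennum].reshape(hiddennum, inputnum); i += inputnum*hiddennum
    b1 = genes[i:i+hiddennum].reshape(hiddennum, 1);                 i += hiddennum
    W2 = genes[i:i+outputnum*hiddennum].reshape(outputnum, hiddennum); i += outputnum*hiddennum
    b2 = genes[i:].reshape(outputnum, 1)
    Y = W2 @ np.tanh(W1 @ P + b1) + b2
    return float(((Y - T) ** 2).mean())

NIND, MAXGEN, px, pm = 40, 50, 0.7, 0.01   # same GA parameters as the MATLAB code
pop = rng.uniform(-0.5, 0.5, (NIND, N))    # real-coded genes in [-0.5, 0.5]
init_best = min(mse(ind) for ind in pop)

for gen in range(MAXGEN):
    fit = np.array([mse(ind) for ind in pop])
    pop = pop[np.argsort(fit)]              # sort best-first (elitist)
    children = pop[:NIND // 2].copy()
    for c in children:
        if rng.random() < px:               # uniform crossover with a random good parent
            mate = pop[rng.integers(NIND // 2)]
            mask = rng.random(N) < 0.5
            c[mask] = mate[mask]
        mut = rng.random(N) < pm            # per-gene Gaussian mutation
        c[mut] += rng.normal(0, 0.1, mut.sum())
    pop[NIND // 2:] = children              # children replace the worst half

best = min(mse(ind) for ind in pop)
print(best)
```

In a full GA-BP pipeline the best gene vector would then seed the network's initial weights before ordinary BP training, which is what Objfun evaluates in the MATLAB version.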
❹ BP neural network prediction code
In MATLAB, samples are arranged column-wise: each column is one sample. If your samples are correct, this is a network with 8 inputs and 2 outputs. Use the plot function directly for the figure.
See the attached code for reference; it is a power-load forecasting example, also written in MATLAB.
A BP (Back Propagation) network is a multilayer feed-forward network trained by the error back-propagation algorithm, and is one of the most widely used neural network models. A BP network can learn and store a large number of input-output mappings without needing the mathematical equations that describe those mappings in advance. Its learning rule is steepest descent: the network's weights and thresholds are adjusted repeatedly through back-propagation to minimize the sum of squared errors. The topology of a BP network consists of an input layer, one or more hidden layers, and an output layer.
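The steepest-descent rule described above can be illustrated for a single linear neuron: each weight moves against the gradient of the squared error. A didactic NumPy sketch with made-up data, not toolbox code:

```python
import numpy as np

rng = np.random.default_rng(0)

# One linear neuron trained by steepest descent on the squared error
X = rng.random((100, 3))                  # 100 samples, 3 inputs
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.3                      # targets: a linear map plus a bias of 0.3

w = np.zeros(3); b = 0.0; lr = 0.1
for _ in range(1000):
    e = X @ w + b - y                     # output error, back-propagated to the weights
    w -= lr * X.T @ e / len(X)            # dE/dw is proportional to X^T e
    b -= lr * e.mean()                    # dE/db is proportional to the mean error
print(w, b)                               # should approach [2, -1, 0.5] and 0.3
```

A multilayer BP network applies the same update at every layer, with the error propagated backward through each transfer function's derivative.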
❺ BP neural network in MATLAB
This can be done. The example below fits a function with 6 inputs and 1 output. It runs in MATLAB R2013b and requires the Neural Network Toolbox.
clear all; close all;
x=[1 2 3 4 5 6 7 8 9; 1 2 3 2 1 2 1 1 2; ...
   1 3 3 4 5 5 5 4 2; 2 1 1 2 2 1 2 2 1; ...
   1 1 1 2 2 2 2 3 1; 1 2 1 2 2 1 2 1 1];
t=[1 3 3 4 5 5 5 4 2];
net=feedforwardnet(10); % number of hidden-layer nodes
net=configure(net,x,t);
net.divideParam.trainRatio=0.7;
net.divideParam.valRatio=0.15;
net.divideParam.testRatio=0.15;
net=train(net,x,t);
y2=net(x);
x_axis=1:length(t);
plot(x_axis,t,x_axis,y2)
legend('target','prediction')
❻ MATLAB BP neural network prediction code
P=[1;2;3;4;5]; % months 1-5
P=P/50;        % normalize inputs
T=[2;3;4;5;6]; % training targets, months 2-6
T=T/50;
threshold=[0 1;0 1;0 1;0 1;0 1];
net=newff(threshold,[15,5],{'tansig','logsig'},'trainlm');
net.trainParam.epochs=2000;
net.trainParam.goal=0.001;
net.trainParam.lr=0.1;
net=train(net,P,T);
P_test=[]; % placeholder: fill in the June data here to predict July (elided in the source)
P_test=P_test/50;
y=sim(net,P_test)
y=y*50 % undo the normalization
❼ BP neural network MATLAB source code explained
newff creates a feed-forward BP network. Syntax:
net = newff(PR,[S1 S2...SNl],{TF1 TF2...TFNl},BTF,BLF,PF)
where:
PR: R×2 matrix of the minimum and maximum values of the R input elements;
Si: number of neurons in layer i (Nl layers in total);
TFi: transfer function of layer i, default 'tansig';
BTF: training function of the BP network, default 'trainlm';
BLF: weight/bias learning function, default 'learngdm';
PF: performance (error) function, default 'mse'.
e.g.
P = [0 1 2 3 4 5 6 7 8 9 10];
T = [0 1 2 3 4 3 2 1 2 3 4];
net = newff([0 10],[5 1],{'tansig' 'purelin'});
net.trainParam.show = 50;    % display progress every 50 iterations
net.trainParam.epochs = 500; % at most 500 training iterations
net.trainParam.goal = 0.01;  % target error
net = train(net,P,T);        % train the network repeatedly
Y = sim(net,P);
figure                       % open another figure window
plot(P,T,P,Y,'o')
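Without the toolbox, the same example can be sketched in Python: one tanh ('tansig') hidden layer of 5 units and a linear ('purelin') output, trained by plain gradient descent instead of trainlm, which is an assumed simplification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Same data as the newff example
P = np.arange(11.0)
T = np.array([0, 1, 2, 3, 4, 3, 2, 1, 2, 3, 4.0]).reshape(1, -1)

Pn = (P / 10.0).reshape(1, -1)           # scale inputs to [0,1] (newff uses PR=[0 10])
W1 = rng.normal(0, 1.0, (5, 1)); b1 = np.zeros((5, 1))   # 5 'tansig' hidden units
W2 = rng.normal(0, 1.0, (1, 5)); b2 = np.zeros((1, 1))   # 'purelin' output
lr = 0.05

mse0 = float(((W2 @ np.tanh(W1 @ Pn + b1) + b2 - T) ** 2).mean())  # error before training
for _ in range(5000):
    H = np.tanh(W1 @ Pn + b1)            # hidden layer
    Y = W2 @ H + b2                      # linear output
    dY = (Y - T) / T.size                # gradient of the mean squared error
    dH = (W2.T @ dY) * (1 - H ** 2)      # backprop through tanh
    W2 -= lr * dY @ H.T;  b2 -= lr * dY.sum(axis=1, keepdims=True)
    W1 -= lr * dH @ Pn.T; b1 -= lr * dH.sum(axis=1, keepdims=True)

mse = float(((Y - T) ** 2).mean())
print(mse0, mse)                         # training error should drop
```

trainlm (Levenberg-Marquardt) converges far faster on small problems like this; gradient descent is used here only because it is self-contained.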
❽ Questions on building a BP neural network and its MATLAB code
This is a multiple-input, single-output problem; choose the number of hidden neurons by trial and error. You do not need that many samples: around one hundred is enough. The attachment is a forecasting example for reference.
The newff function has the format:
net=newff(PR,[S1 S2 ...SN],{TF1 TF2...TFN},BTF,BLF,PF)
newff builds a trainable feed-forward network. Input arguments:
PR: R×2 matrix defining the minimum and maximum of the R input vectors;
Si: number of neurons in layer i;
TFi: transfer function of layer i, default tansig;
BTF: training function, default trainlm;
BLF: weight/threshold learning function, default learngdm;
PF: performance function, default mse.
❾ Need code for BP-neural-network-based image restoration, urgent, please help
(The code posted here is identical to the Solar_SAE function given in answer ❷ above.)