1. Related Theory
The CNN is a deep learning model with local connectivity, weight sharing, and spatial-correlation properties, as well as strong robustness and fault tolerance, which make it well suited to extracting deep features from data. A classic CNN consists of an input layer, hidden layers, fully connected layers, and an output layer. The structure of a convolutional neural network is shown in the figure below.
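As background for the layer sizes used later (this formula is standard CNN arithmetic, not stated in the original text), the spatial output size of a convolution or pooling layer is:

```latex
O = \left\lfloor \frac{W - F + 2P}{S} \right\rfloor + 1
```

where W is the input width (or height), F the filter size, P the padding, and S the stride. With 'same' padding and stride 1 the spatial size is unchanged, while a 2×2 max pooling with stride 2 halves it; both cases appear in the network built below.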
2. Preparing the Dataset
Taking the handwritten digit dataset as an example, we build a convolutional neural network for classification. Dataset download link: . Some sample images from the dataset are shown below.
3. Reading and Splitting the Dataset
Save the downloaded dataset, and set digitDatasetPath to the folder where it is stored. For each class, 750 images are randomly selected as training data, and the remainder are used for validation (matching the code below, where the first output of splitEachLabel is the training set).
%% Read the dataset
digitDatasetPath = 'D:\MTALAB2019\手寫數(shù)據(jù)集\DigitDataset';
imds = imageDatastore(digitDatasetPath, ...
    'IncludeSubfolders',true,'LabelSource','foldernames');
%% Split the dataset
numTrainFiles = 750;
[imdsTrain,imdsValidation] = splitEachLabel(imds,numTrainFiles,'randomize');
4. Building the Convolutional Neural Network
layers = [
    imageInputLayer([28 28 1])               % input layer
    % convolution block 1
    convolution2dLayer(3,8,'Padding','same')
    batchNormalizationLayer
    reluLayer
    % pooling layer
    maxPooling2dLayer(2,'Stride',2)
    % convolution block 2
    convolution2dLayer(3,16,'Padding','same')
    batchNormalizationLayer
    reluLayer
    % pooling layer
    maxPooling2dLayer(2,'Stride',2)
    % convolution block 3
    convolution2dLayer(3,32,'Padding','same')
    batchNormalizationLayer
    reluLayer
    % fully connected layer
    fullyConnectedLayer(10)
    softmaxLayer
    classificationLayer];
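To verify how feature-map sizes flow through this stack, the layer array can be inspected with MATLAB's built-in analyzeNetwork (a quick check, assuming the 28×28×1 input defined above):

```matlab
% Open an interactive summary of the layer array (Deep Learning Toolbox).
% With 'same' padding each convolution preserves spatial size, and each
% 2x2 / stride-2 max pooling halves it:
% 28x28x1 -> 28x28x8 -> 14x14x8 -> 14x14x16 -> 7x7x16 -> 7x7x32 -> 10 classes
analyzeNetwork(layers)
```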
With the network built, the training parameters need to be set. The relevant code is as follows:
options = trainingOptions('sgdm', ...
    'InitialLearnRate',0.01, ...
    'MaxEpochs',10, ...
    'Shuffle','every-epoch', ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',30, ...
    'Verbose',false, ...
    'Plots','training-progress');
5. Training the Convolutional Neural Network
net = trainNetwork(imdsTrain,layers,options);
The training results are shown below.
6. Testing and Results
YPred = classify(net,imdsValidation);
YValidation = imdsValidation.Labels;
accuracy = sum(YPred == YValidation)/numel(YValidation)
accuracy = 0.9868
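Beyond the single accuracy number, a per-class breakdown is often useful for spotting which digits are confused with each other. A minimal sketch using confusionchart (available in recent MATLAB releases), assuming YPred and YValidation from the step above:

```matlab
% Plot a confusion matrix of true vs. predicted labels
figure
confusionchart(YValidation,YPred);
```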