
Deep Learning Basics -- Classic CNN Networks: InceptionV1 Study and Reproduction (PyTorch)

  • 🍨 This article is a learning-record post from the 🔗 365-day deep learning training camp
  • 🍖 Original author: K同学啊

Preface

  • InceptionV1 introduced the parallel convolution structure and is one of the classic CNN architectures;
  • The task here is to study the InceptionV1 architecture and reproduce it in an experiment;
  • Feel free to bookmark and follow; this series will keep being updated.

1. InceptionV1

Theory

GoogLeNet first appeared in the 2014 ILSVRC competition, where it took first place. That version is what is now commonly called Inception V1.

Key characteristics of InceptionV1:

  • It is 22 layers deep with only about 5M parameters. VGGNet, from the same period, performed comparably but with a far larger parameter count (about 138M for VGG-16).

The core unit of InceptionV1 is the Inception Module. It introduces a parallel convolution structure, so that different kinds of features can be extracted within the same layer, as shown in figure (a) below.

[Figure: the Inception module, (a) naive version and (b) version with 1×1 dimensionality reduction]

👀 Analysis of figure (a)

Parallel: the branches are computed side by side on the same input.

  • 1×1 convolution: spatial size unchanged; mainly used to reduce (or raise) the channel dimension, cutting the parameter count and computational complexity.
  • 3×3 and 5×5 convolutions: spatial size unchanged (thanks to suitable padding); used to extract more complex local features.
  • Max-pooling branch: spatial size unchanged; used for feature compression.

Since none of the branches changes the spatial dimensions, their outputs can be concatenated along the channel dimension, giving [batch_size, C1+C2+C3+C4, H, W]. A quick sanity check of this follows below.
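Here is a minimal, self-contained sketch of that idea (the channel counts are illustrative, not taken from the real network), showing four spatial-size-preserving branches being concatenated along the channel dimension:

import torch
import torch.nn as nn

x = torch.randn(8, 192, 28, 28)   # [batch_size, C, H, W]

# Four parallel branches; padding/stride chosen so H and W stay 28x28
b1 = nn.Conv2d(192, 64, kernel_size=1)(x)                 # [8, 64, 28, 28]
b2 = nn.Conv2d(192, 128, kernel_size=3, padding=1)(x)     # [8, 128, 28, 28]
b3 = nn.Conv2d(192, 32, kernel_size=5, padding=2)(x)      # [8, 32, 28, 28]
b4 = nn.MaxPool2d(kernel_size=3, stride=1, padding=1)(x)  # [8, 192, 28, 28]

# Concatenate along dim=1 (channels): C = 64 + 128 + 32 + 192 = 416
out = torch.cat([b1, b2, b3, b4], dim=1)
print(out.shape)   # torch.Size([8, 416, 28, 28])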

However, although this structure improves performance, it runs into the problem of heavy computation. Drawing on the Network-in-Network idea, 1×1 convolution kernels were therefore introduced to reduce the dimensionality first. This makes the network slightly deeper but cuts both the parameter count and the computation; the resulting structure is shown in figure (b) above.

Network-in-Network (NiN) is a deep learning architecture proposed by Lin et al. in 2013 to improve on traditional convolutional neural networks (CNNs). NiN replaces the traditional linear filter (the standard convolution kernel) with a micro multi-layer perceptron (MLP), increasing the model's expressive power over the input data.
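To make the NiN idea concrete: a 1×1 convolution is exactly a small fully connected layer applied independently at every spatial position. A minimal sketch demonstrating the equivalence (channel counts are illustrative):

import torch
import torch.nn as nn

x = torch.randn(1, 128, 100, 100)

# 1x1 conv: mixes channels at each pixel, leaving H and W untouched
conv1x1 = nn.Conv2d(128, 32, kernel_size=1)
y = conv1x1(x)
print(y.shape)   # torch.Size([1, 32, 100, 100])

# The same operation as a Linear(128 -> 32) applied per pixel
linear = nn.Linear(128, 32)
linear.weight.data = conv1x1.weight.data.view(32, 128)   # drop the 1x1 spatial dims
linear.bias.data = conv1x1.bias.data
y2 = linear(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
print(torch.allclose(y, y2, atol=1e-5))   # True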

Worked example

[Figure: comparison of a direct 5×5 convolution and a 1×1 reduction followed by a 5×5 convolution]

Assume the previous layer outputs a 100×100×128 feature map.

  • Passing it through a convolution layer with 256 kernels of size 5×5 (stride=1, padding=2) yields a 100×100×256 output. The layer has 5×5×128×256 + 256 ≈ 0.82M parameters, and computing it takes 100×100×5×5×128×256 ≈ 8.192e9 multiplications.

  • Now assume the input first passes through a 1×1 convolution down to 32 channels (channel count reduced, spatial size unchanged), then through the 5×5 convolution, again producing 100×100×256. The parameters are (1×1×128×32 + 32) + (5×5×32×256 + 256) ≈ 0.21M, and the multiplications total 100×100×1×1×128×32 + 100×100×5×5×32×256 ≈ 2.089e9, dominated by the 2.048e9 of the 5×5 stage. The computation is cut by roughly 3/4.

For any convolution layer, the total parameter count can be computed as follows:

  • number of weights = K_h × K_w × C_in × C_out
  • number of biases = C_out

So the total parameter count is K_h × K_w × C_in × C_out + C_out.

As the example shows, the biggest benefit of the 1×1 kernel is that it shrinks the number of input channels, reducing both the parameter count and the computation. The sketch below verifies the numbers.
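The arithmetic above is easy to check in a few lines, assuming the same 100×100×128 input and counting only convolution multiplications (biases ignored):

def conv_params(kh, kw, c_in, c_out):
    # weights + biases
    return kh * kw * c_in * c_out + c_out

def conv_mults(kh, kw, c_in, c_out, h_out, w_out):
    # one K_h x K_w x C_in dot product per output element
    return kh * kw * c_in * c_out * h_out * w_out

# Design A: direct 5x5 convolution, 128 -> 256 channels
print(conv_params(5, 5, 128, 256))           # 819,456 parameters
print(conv_mults(5, 5, 128, 256, 100, 100))  # 8,192,000,000 multiplications

# Design B: 1x1 reduction to 32 channels, then 5x5 to 256
print(conv_params(1, 1, 128, 32) + conv_params(5, 5, 32, 256))   # 209,184 parameters
print(conv_mults(1, 1, 128, 32, 100, 100)
      + conv_mults(5, 5, 32, 256, 100, 100))  # 2,088,960,000, dominated by the 5x5 stage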


An Inception Module is built from four basic units: a 1×1 convolution, a 3×3 convolution, a 5×5 convolution, and a 3×3 max pool. Extracting information from these four units at different scales and fusing it to get a richer feature representation is the core idea of the Inception Module.

Network architecture

[Figure: the overall InceptionV1 (GoogLeNet) architecture]

The yellow blocks are the stem, used mainly for data processing; the green blocks are the Inception Module structure introduced above.

Note that the full network adds two auxiliary classification branches, which:

  • help counter vanishing gradients by feeding extra gradient signal into the middle of the network;
  • use an intermediate layer's output for classification, giving a model-ensembling effect.

A sketch of one such auxiliary head follows below.
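This post reproduces only the main branch, but for reference, here is a hedged sketch of one auxiliary head following the paper's description (5×5 average pooling with stride 3, a 128-channel 1×1 convolution, a 1024-unit FC layer with 70% dropout, and a final FC layer); the class name AuxClassifier is ours, not from any original code:

import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxClassifier(nn.Module):
    # One auxiliary head; the paper attaches these to the
    # 4a (512-channel) and 4d (528-channel) outputs at 14x14 resolution.
    def __init__(self, in_channels, num_classes):
        super().__init__()
        self.pool = nn.AvgPool2d(kernel_size=5, stride=3)        # 14x14 -> 4x4
        self.conv = nn.Conv2d(in_channels, 128, kernel_size=1)   # channel reduction
        self.fc1 = nn.Linear(128 * 4 * 4, 1024)
        self.dropout = nn.Dropout(0.7)
        self.fc2 = nn.Linear(1024, num_classes)

    def forward(self, x):
        x = F.relu(self.conv(self.pool(x)))
        x = torch.flatten(x, start_dim=1)
        x = self.dropout(F.relu(self.fc1(x)))
        return self.fc2(x)

During training the paper adds each auxiliary loss to the main loss with a weight of 0.3; at inference time the auxiliary heads are discarded.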

The detailed network configuration is as follows.

[Figure: the detailed layer-by-layer network configuration table]

2. Model Reproduction and Experiments

We drop the two auxiliary branches and reproduce only the main branch (see the network structure above), then run an experiment classifying monkeypox images.

1. Loading the Data

1. Import libraries

import torch
import torch.nn as nn
import torchvision
import numpy as np
import os, PIL, pathlib

# Set the device
device = "cuda" if torch.cuda.is_available() else "cpu"
device

'cuda'

2. Inspect the data

data_dir = "./data/"
data_dir = pathlib.Path(data_dir)

# Class names, taken from the sub-folder names
classnames = [str(path).split("\\")[0] for path in os.listdir(data_dir)]
classnames
['Monkeypox', 'Others']
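A side note: the split("\\") above is effectively a no-op here, since os.listdir returns bare names, and it does not filter out stray files either. A pathlib-based alternative (our suggestion, not in the original code) that keeps only directories:

# Derive class names from the sub-directory names only
classnames = sorted(p.name for p in data_dir.glob("*") if p.is_dir())
# ['Monkeypox', 'Others']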

3. Visualize the data

import matplotlib.pylab as plt
from PIL import Image

# Collect the image file names
data_path_name = "./data/Monkeypox/"
data_path_list = [f for f in os.listdir(data_path_name) if f.endswith(('jpg', 'png'))]

# Create the figure
fig, axes = plt.subplots(2, 8, figsize=(16, 6))
for ax, img_file in zip(axes.flat, data_path_list):
    path_name = os.path.join(data_path_name, img_file)
    img = Image.open(path_name)  # open the image
    ax.imshow(img)               # display it
    ax.axis('off')
plt.show()


[Figure: a 2×8 grid of sample Monkeypox images]

4. Load the data

from torchvision import transforms, datasets

# Standardize the image format
img_height = 224
img_width = 224

data_transforms = transforms.Compose([
    transforms.Resize([img_height, img_width]),
    transforms.ToTensor(),
    transforms.Normalize(   # normalize with ImageNet statistics
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

# Load the whole dataset
total_data = datasets.ImageFolder(root="./data/", transform=data_transforms)
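ImageFolder assigns integer labels in alphabetical order of the folder names, so it is worth confirming the mapping once before training:

# Check the label each class folder was given
print(total_data.class_to_idx)   # {'Monkeypox': 0, 'Others': 1}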

5. Split the data

# Train : test = 8 : 2
train_size = int(len(total_data) * 0.8)
test_size = len(total_data) - train_size

train_data, test_data = torch.utils.data.random_split(total_data, [train_size, test_size])
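Note that random_split draws a fresh permutation on every run. For a repeatable split (not done in the original run, so results may differ slightly), a seeded generator can be passed:

# Reproducible alternative: fix the shuffle with a seeded generator
train_data, test_data = torch.utils.data.random_split(
    total_data,
    [train_size, test_size],
    generator=torch.Generator().manual_seed(42)
)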

6. Batch the data

batch_size = 32

train_dl = torch.utils.data.DataLoader(
    train_data,
    batch_size=batch_size,
    shuffle=True
)
test_dl = torch.utils.data.DataLoader(
    test_data,
    batch_size=batch_size,
    shuffle=False
)

# Check the tensor shapes of one batch
for data, labels in train_dl:
    print("data shape[N, C, H, W]: ", data.shape)
    print("labels: ", labels)
    break

data shape[N, C, H, W]:  torch.Size([32, 3, 224, 224])
labels:  tensor([1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 1, 0, 1,
        0, 0, 0, 0, 1, 0, 0, 0])

2. Building InceptionV1

The code is commented in detail; only the main branch is reproduced.

[Figure: the Inception module structure]

[Figure: the network configuration being reproduced]

import torch.nn.functional as F

# The InceptionV1 network boils down to a stem for data processing
# plus a stack of Inception Modules.
# This class encapsulates a single Inception Module.
class Inception_block(nn.Module):
    def __init__(self, in_channels, ch1x1, ch3x3red, ch3x3, ch5x5red, ch5x5, pool_proj):
        super().__init__()
        # 1x1 branch
        self.part1 = nn.Sequential(
            nn.Conv2d(in_channels, ch1x1, kernel_size=1),
            nn.BatchNorm2d(ch1x1),
            nn.ReLU(inplace=True)
        )
        # 1x1 -> 3x3 branch
        self.part2 = nn.Sequential(
            nn.Conv2d(in_channels, ch3x3red, kernel_size=1),
            nn.BatchNorm2d(ch3x3red),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch3x3red, ch3x3, kernel_size=3, padding=1),
            nn.BatchNorm2d(ch3x3),
            nn.ReLU(inplace=True)
        )
        # 1x1 -> 5x5 branch
        self.part3 = nn.Sequential(
            nn.Conv2d(in_channels, ch5x5red, kernel_size=1),
            nn.BatchNorm2d(ch5x5red),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch5x5red, ch5x5, kernel_size=5, padding=2),
            nn.BatchNorm2d(ch5x5),
            nn.ReLU(inplace=True)
        )
        # 3x3 max pool -> 1x1 branch
        self.part4 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_channels, pool_proj, kernel_size=1),
            nn.BatchNorm2d(pool_proj),
            nn.ReLU(inplace=True)
        )

    def forward(self, x):
        out1 = self.part1(x)
        out2 = self.part2(x)
        out3 = self.part3(x)
        out4 = self.part4(x)
        outs = [out1, out2, out3, out4]
        # Concatenate along the channel dimension: [batch_size, C1+C2+C3+C4, H, W]
        return torch.cat(outs, 1)

'''
InceptionV1: the stem for data processing, then the stacked Inception Modules.
'''
class InceptionV1(nn.Module):
    def __init__(self):
        super().__init__()
        # Stem
        self.conv1 = nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3)
        self.maxpool1 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=1, stride=1, padding=0)
        self.conv3 = nn.Conv2d(64, 192, kernel_size=3, stride=1, padding=1)
        self.maxpool2 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        # Stacked Inception Modules
        self.inception3a = Inception_block(192, 64, 96, 128, 16, 32, 32)
        self.inception3b = Inception_block(256, 128, 128, 192, 32, 96, 64)
        self.maxpool3 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.inception4a = Inception_block(480, 192, 96, 208, 16, 48, 64)
        self.inception4b = Inception_block(512, 160, 112, 224, 24, 64, 64)
        self.inception4c = Inception_block(512, 128, 128, 256, 24, 64, 64)
        self.inception4d = Inception_block(512, 112, 144, 288, 32, 64, 64)
        # Note: the paper uses 32 (not 21) for the 5x5 reduce of 4e;
        # 21 is kept here so the code matches the summary output below.
        self.inception4e = Inception_block(528, 256, 160, 320, 21, 128, 128)
        self.maxpool4 = nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
        self.inception5a = Inception_block(832, 256, 160, 320, 32, 128, 128)
        self.inception5b = nn.Sequential(
            Inception_block(832, 384, 192, 384, 48, 128, 128),
            nn.AvgPool2d(kernel_size=7, stride=1, padding=0),  # global average pooling
            nn.Dropout(0.4)                                    # guard against overfitting
        )
        # Fully connected head for classification
        self.classifier = nn.Sequential(
            nn.Linear(in_features=1024, out_features=1024),
            nn.ReLU(),
            nn.Linear(in_features=1024, out_features=len(classnames)),
            nn.Softmax(dim=1)  # classify with Softmax (see the caveat in the training section)
        )

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.maxpool1(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.maxpool2(x)
        x = self.inception3a(x)
        x = self.inception3b(x)
        x = self.maxpool3(x)
        x = self.inception4a(x)
        x = self.inception4b(x)
        x = self.inception4c(x)
        x = self.inception4d(x)
        x = self.inception4e(x)
        x = self.maxpool4(x)
        x = self.inception5a(x)
        x = self.inception5b(x)
        x = torch.flatten(x, start_dim=1)  # flatten to [batch_size, 1024]
        x = self.classifier(x)
        return x

model = InceptionV1().to(device)
model(torch.randn(32, 3, 224, 224).to(device)).shape
torch.Size([32, 2])
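As a quick cross-check before printing the full summary, the total parameter count can be computed directly; it should agree with the torchsummary figure below:

# Count all trainable parameters registered on the model
total_params = sum(p.numel() for p in model.parameters())
print(f"{total_params:,}")   # 6,998,081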
# Display the network structure
import torchsummary
torchsummary.summary(model, (3, 224, 224))
----------------------------------------------------------------
        Layer (type)               Output Shape         Param #
================================================================
            Conv2d-1         [-1, 64, 112, 112]           9,472
         MaxPool2d-2           [-1, 64, 56, 56]               0
            Conv2d-3           [-1, 64, 56, 56]           4,160
            Conv2d-4          [-1, 192, 56, 56]         110,784
         MaxPool2d-5          [-1, 192, 28, 28]               0
               ...                         ...               ...
     (layers 6-187: the stacked Inception blocks 3a through 5b)
               ...                         ...               ...
         AvgPool2d-188          [-1, 1024, 1, 1]               0
           Dropout-189          [-1, 1024, 1, 1]               0
            Linear-190                [-1, 1024]       1,049,600
              ReLU-191                [-1, 1024]               0
            Linear-192                   [-1, 2]           2,050
           Softmax-193                   [-1, 2]               0
================================================================
Total params: 6,998,081
Trainable params: 6,998,081
Non-trainable params: 0
----------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 69.56
Params size (MB): 26.70
Estimated Total Size (MB): 96.83
----------------------------------------------------------------

3. Model Training

1. Define the training loop

def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # number of training samples
    num_batches = len(dataloader)    # number of batches
    train_acc, train_loss = 0, 0

    for X, y in dataloader:
        X, y = X.to(device), y.to(device)

        # Forward pass
        pred = model(X)
        loss = loss_fn(pred, y)

        # Backpropagation and parameter update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        # Accumulate metrics
        train_loss += loss.item()
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss

2. Define the test loop

def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)
    num_batches = len(dataloader)
    test_acc, test_loss = 0, 0

    with torch.no_grad():   # no gradients needed for evaluation
        for X, y in dataloader:
            X, y = X.to(device), y.to(device)
            pred = model(X)
            loss = loss_fn(pred, y)
            test_loss += loss.item()
            test_acc += (pred.argmax(1) == y).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss

3. Set the hyperparameters

loss_fn = nn.CrossEntropyLoss()                                 # loss function
learn_lr = 1e-4                                                 # learning rate
optimizer = torch.optim.Adam(model.parameters(), lr=learn_lr)   # optimizer
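One caveat about this combination: nn.CrossEntropyLoss applies log-softmax internally, so with the Softmax layer at the end of the classifier the outputs get squashed twice. Training still converges, as the log below shows, but the loss floor is raised: with two classes the minimum achievable loss is log(1 + e^(-1)) ≈ 0.313, which is why Train_loss plateaus near 0.34 even at 97% accuracy. If retraining, a common fix (our suggestion, not the original setup) is to let the classifier emit raw logits:

# Alternative head without the extra Softmax; CrossEntropyLoss
# applies log-softmax itself, so raw logits are expected
model.classifier = nn.Sequential(
    nn.Linear(in_features=1024, out_features=1024),
    nn.ReLU(),
    nn.Linear(in_features=1024, out_features=len(classnames))
).to(device)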

4. Train the model

train_acc = []
train_loss = []
test_acc = []
test_loss = []

epoches = 40

for i in range(epoches):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    # Log progress
    template = ('Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}')
    print(template.format(i + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))

print("Done")
Epoch: 1, Train_acc:64.2%, Train_loss:0.638, Test_acc:63.4%, Test_loss:0.650
Epoch: 2, Train_acc:70.9%, Train_loss:0.582, Test_acc:71.3%, Test_loss:0.584
Epoch: 3, Train_acc:77.0%, Train_loss:0.537, Test_acc:61.1%, Test_loss:0.680
Epoch: 4, Train_acc:79.0%, Train_loss:0.517, Test_acc:78.3%, Test_loss:0.519
Epoch: 5, Train_acc:81.8%, Train_loss:0.485, Test_acc:83.9%, Test_loss:0.475
Epoch: 6, Train_acc:82.6%, Train_loss:0.480, Test_acc:76.9%, Test_loss:0.533
Epoch: 7, Train_acc:87.4%, Train_loss:0.436, Test_acc:82.1%, Test_loss:0.483
Epoch: 8, Train_acc:87.6%, Train_loss:0.433, Test_acc:86.0%, Test_loss:0.433
Epoch: 9, Train_acc:89.5%, Train_loss:0.415, Test_acc:85.3%, Test_loss:0.456
Epoch:10, Train_acc:88.9%, Train_loss:0.425, Test_acc:87.9%, Test_loss:0.427
Epoch:11, Train_acc:89.6%, Train_loss:0.412, Test_acc:86.0%, Test_loss:0.442
Epoch:12, Train_acc:92.5%, Train_loss:0.388, Test_acc:89.0%, Test_loss:0.418
Epoch:13, Train_acc:91.7%, Train_loss:0.398, Test_acc:88.3%, Test_loss:0.424
Epoch:14, Train_acc:89.8%, Train_loss:0.414, Test_acc:86.2%, Test_loss:0.443
Epoch:15, Train_acc:91.5%, Train_loss:0.397, Test_acc:89.7%, Test_loss:0.414
Epoch:16, Train_acc:94.3%, Train_loss:0.369, Test_acc:89.7%, Test_loss:0.411
Epoch:17, Train_acc:93.3%, Train_loss:0.376, Test_acc:86.7%, Test_loss:0.445
Epoch:18, Train_acc:94.3%, Train_loss:0.369, Test_acc:90.4%, Test_loss:0.404
Epoch:19, Train_acc:93.9%, Train_loss:0.370, Test_acc:90.9%, Test_loss:0.401
Epoch:20, Train_acc:95.3%, Train_loss:0.359, Test_acc:90.4%, Test_loss:0.405
Epoch:21, Train_acc:94.7%, Train_loss:0.363, Test_acc:93.2%, Test_loss:0.378
Epoch:22, Train_acc:95.2%, Train_loss:0.360, Test_acc:90.2%, Test_loss:0.415
Epoch:23, Train_acc:95.6%, Train_loss:0.356, Test_acc:92.3%, Test_loss:0.388
Epoch:24, Train_acc:95.9%, Train_loss:0.355, Test_acc:90.7%, Test_loss:0.400
Epoch:25, Train_acc:93.2%, Train_loss:0.377, Test_acc:93.0%, Test_loss:0.385
Epoch:26, Train_acc:95.2%, Train_loss:0.358, Test_acc:90.9%, Test_loss:0.404
Epoch:27, Train_acc:96.1%, Train_loss:0.353, Test_acc:93.7%, Test_loss:0.380
Epoch:28, Train_acc:95.2%, Train_loss:0.359, Test_acc:87.4%, Test_loss:0.440
Epoch:29, Train_acc:95.5%, Train_loss:0.358, Test_acc:92.8%, Test_loss:0.394
Epoch:30, Train_acc:96.4%, Train_loss:0.348, Test_acc:88.3%, Test_loss:0.432
Epoch:31, Train_acc:96.6%, Train_loss:0.347, Test_acc:93.2%, Test_loss:0.385
Epoch:32, Train_acc:96.4%, Train_loss:0.347, Test_acc:93.7%, Test_loss:0.383
Epoch:33, Train_acc:96.3%, Train_loss:0.350, Test_acc:88.3%, Test_loss:0.434
Epoch:34, Train_acc:95.6%, Train_loss:0.356, Test_acc:91.1%, Test_loss:0.401
Epoch:35, Train_acc:95.3%, Train_loss:0.360, Test_acc:92.8%, Test_loss:0.391
Epoch:36, Train_acc:95.2%, Train_loss:0.360, Test_acc:91.8%, Test_loss:0.399
Epoch:37, Train_acc:95.4%, Train_loss:0.358, Test_acc:90.4%, Test_loss:0.404
Epoch:38, Train_acc:95.4%, Train_loss:0.359, Test_acc:91.4%, Test_loss:0.397
Epoch:39, Train_acc:97.0%, Train_loss:0.344, Test_acc:93.2%, Test_loss:0.385
Epoch:40, Train_acc:97.3%, Train_loss:0.338, Test_acc:91.6%, Test_loss:0.400
Done

5. Visualize the results

import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings("ignore")   # suppress warning messages

epochs_range = range(epoches)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training Loss')
plt.show()


[Figure: training/test accuracy (left) and loss (right) curves over the 40 epochs]

The results look good: test accuracy climbs past 90% and peaks at 93.7%.
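To attach a number to that, the best epoch can be read back from the recorded history:

# Locate the epoch with the highest test accuracy
best_epoch = max(range(epoches), key=lambda i: test_acc[i])
print(f"Best test accuracy: {test_acc[best_epoch]*100:.1f}% (epoch {best_epoch + 1})")
# For the run above: 93.7% at epoch 27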

3. References

经典神经网络论文超详细解读(三)——GoogLeNet InceptionV1学习笔记(翻译+精读+代码复现), CSDN blog
