Transfer Learning with PyTorch


    In practice, designing and training a network from scratch is often impractical, so an already-trained model is adjusted and reused as the new model; this approach is called transfer learning. Transfer learning has two main application scenarios:

    Finetuning: initialize the network with the weights of an already-trained model and retrain all of its parameters on the new task.

    Feature extraction: freeze the lower layers of the network, i.e. everything except the fully connected layer; design a new fully connected layer and train only its parameters. (A sketch contrasting the two setups follows this list.)
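    To make the difference concrete, here is a minimal sketch of the two setups (assuming torchvision's resnet18 and a 2-class task, as in the rest of this post; the variable names are illustrative). The only difference is which parameters remain trainable:

    import torch.nn as nn
    import torch.optim as optim
    from torchvision import models

    # Finetuning: start from pretrained weights and train *all* parameters
    model_ft = models.resnet18(pretrained=True)
    model_ft.fc = nn.Linear(model_ft.fc.in_features, 2)   # new 2-class head
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

    # Feature extraction: freeze the backbone and train only the new head
    model_fe = models.resnet18(pretrained=True)
    for param in model_fe.parameters():
        param.requires_grad = False                       # freeze existing layers
    model_fe.fc = nn.Linear(model_fe.fc.in_features, 2)   # requires_grad=True by default
    optimizer_fe = optim.SGD(model_fe.fc.parameters(), lr=0.001, momentum=0.9)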

    Import the required packages

    # Author: Little chen
    from __future__ import print_function
    import torch
    import torch.nn as nn
    import torch.optim as optim
    from torch.optim import lr_scheduler
    import numpy as np
    import torchvision
    from torchvision import datasets, models, transforms
    import matplotlib.pyplot as plt
    import time
    import os
    import copy

    plt.ion()   # interactive mode

    Loading the data

    Here torchvision and torch.utils.data are used to load the training and validation sets. We will build a simple classifier that distinguishes ants from bees, using the hymenoptera_data dataset from the official PyTorch transfer-learning tutorial (where it can be downloaded); each class contains roughly 120 training images and 75 validation images. Because the dataset is small, data augmentation is applied to the training set, and both the training and validation sets are normalized.

    data_transforms = {
        'train': transforms.Compose([
            transforms.RandomResizedCrop(224),
            transforms.RandomHorizontalFlip(),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        ]),
        'val': transforms.Compose([
            transforms.Resize(256),
            transforms.CenterCrop(224),
            transforms.ToTensor(),
            transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
        ])
    }

    data_dir = 'E:/hymenoptera_data'
    image_datasets = {x: datasets.ImageFolder(os.path.join(data_dir, x), data_transforms[x])
                      for x in ['train', 'val']}
    dataloaders = {x: torch.utils.data.DataLoader(image_datasets[x], batch_size=4,
                                                  shuffle=True, num_workers=4)
                   for x in ['train', 'val']}
    datasets_sizes = {x: len(image_datasets[x]) for x in ['train', 'val']}
    class_names = image_datasets['train'].classes

    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
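    As a quick sanity check (my addition, not in the original code), print what the loaders see; with the standard hymenoptera_data split these sizes match the accuracies reported later in this post (244 training and 153 validation images):

    print(datasets_sizes)   # {'train': 244, 'val': 153}
    print(class_names)      # ['ants', 'bees']
    print(device)           # cuda:0 if a GPU is available, otherwise cpu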

    Visualization: look at a few of the images

    def imshow(inp, title=None):
        # Undo the normalization and display a tensor as an image
        inp = inp.numpy().transpose((1, 2, 0))
        mean = np.array([0.485, 0.456, 0.406])
        std = np.array([0.229, 0.224, 0.225])
        inp = std * inp + mean
        inp = np.clip(inp, 0, 1)
        plt.imshow(inp)
        if title is not None:
            plt.title(title)
        plt.pause(10)  # pause a bit so that plots are updated

    # Grab a batch of training data and display it as a grid
    inputs, classes = next(iter(dataloaders['train']))
    out = torchvision.utils.make_grid(inputs)
    imshow(out, title=[class_names[x] for x in classes])

    Training the model

    Below is a general-purpose function for training a model:

    The function handles two extras: scheduling the learning rate and saving the best model. The scheduler parameter is an LR scheduler object from torch.optim.lr_scheduler.

    def train_model(model, criterion, optimizer, scheduler, num_epochs=25):
        since = time.time()

        best_model_wts = copy.deepcopy(model.state_dict())
        best_acc = 0.0

        for epoch in range(num_epochs):
            print('Epoch {}/{}'.format(epoch, num_epochs - 1))
            print('-' * 10)

            # Each epoch has a training phase and a validation phase
            for phase in ['train', 'val']:
                if phase == 'train':
                    scheduler.step()   # see the note on step ordering after this listing
                    model.train()      # set model to training mode
                else:
                    model.eval()       # set model to evaluation mode

                running_loss = 0.0
                running_corrects = 0

                for inputs, labels in dataloaders[phase]:
                    inputs = inputs.to(device)
                    labels = labels.to(device)

                    optimizer.zero_grad()

                    # Track gradients only in the training phase
                    with torch.set_grad_enabled(phase == 'train'):
                        outputs = model(inputs)
                        _, preds = torch.max(outputs, 1)
                        loss = criterion(outputs, labels)

                        # Backward pass + parameter update only in training
                        if phase == 'train':
                            loss.backward()
                            optimizer.step()

                    running_loss += loss.item() * inputs.size(0)
                    running_corrects += torch.sum(preds == labels.data)

                epoch_loss = running_loss / datasets_sizes[phase]
                epoch_acc = running_corrects.double() / datasets_sizes[phase]

                print('{} Loss: {:.4f} Acc: {:.4f}'.format(phase, epoch_loss, epoch_acc))

                # Keep a copy of the weights with the best validation accuracy
                if phase == 'val' and epoch_acc > best_acc:
                    best_acc = epoch_acc
                    best_model_wts = copy.deepcopy(model.state_dict())

            print()

        time_elapsed = time.time() - since
        print('Training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
        print('Best val Acc: {:4f}'.format(best_acc))

        # Load the best weights back into the model
        model.load_state_dict(best_model_wts)
        return model
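    Note that train_model calls scheduler.step() at the start of each training phase, following the old version of the tutorial this post is based on. Since PyTorch 1.1.0 the recommended order is to call optimizer.step() before scheduler.step(), typically stepping the scheduler once per epoch. A minimal sketch of the modern pattern, reusing the names from train_model:

    for epoch in range(num_epochs):
        for inputs, labels in dataloaders['train']:
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()   # update the weights first
        scheduler.step()       # then decay the learning rate once per epoch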

    Finetuning the convolutional network

    Load the pretrained network and reset the final fully connected layer

    model_ft = models.resnet18(pretrained=True)
    num_ftrs = model_ft.fc.in_features
    model_ft.fc = nn.Linear(num_ftrs, 2)   # replace the 1000-class ImageNet head with a 2-class one

    model_ft = model_ft.to(device)

    criterion = nn.CrossEntropyLoss()

    # All parameters are being optimized
    optimizer_ft = optim.SGD(model_ft.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_ft, step_size=7, gamma=0.1)
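    For resnet18, num_ftrs is 512, so the new head maps 512 features to the 2 classes. As a quick check (my addition), you can confirm the replaced layer and that every parameter of the network will be trained in this mode:

    print(model_ft.fc)   # Linear(in_features=512, out_features=2, bias=True)
    # roughly 11.2 million trainable parameters, i.e. the whole network
    print(sum(p.numel() for p in model_ft.parameters() if p.requires_grad))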

    Training and evaluation

    model_ft = train_model(model_ft, criterion, optimizer_ft, exp_lr_scheduler, num_epochs=25)

    Output:

    Epoch 0/24
    ----------
    train Loss: 0.5797 Acc: 0.6844
    val Loss: 0.2633 Acc: 0.9020

    Epoch 1/24
    ----------
    train Loss: 0.4807 Acc: 0.7705
    val Loss: 0.6446 Acc: 0.7255

    Epoch 2/24
    ----------
    train Loss: 0.6137 Acc: 0.7623
    val Loss: 0.6395 Acc: 0.7974

    Epoch 3/24
    ----------
    train Loss: 0.5810 Acc: 0.7910
    val Loss: 0.3823 Acc: 0.8366

    Epoch 4/24
    ----------
    train Loss: 0.4411 Acc: 0.8279
    val Loss: 0.6081 Acc: 0.8562

    Epoch 5/24
    ----------
    train Loss: 0.6720 Acc: 0.7582
    val Loss: 0.2470 Acc: 0.8693

    Epoch 6/24
    ----------
    train Loss: 0.3643 Acc: 0.8238
    val Loss: 0.2233 Acc: 0.8824

    Epoch 7/24
    ----------
    train Loss: 0.3677 Acc: 0.8443
    val Loss: 0.1998 Acc: 0.9281

    Epoch 8/24
    ----------
    train Loss: 0.2423 Acc: 0.8893
    val Loss: 0.2009 Acc: 0.9020

    Epoch 9/24
    ----------
    train Loss: 0.3458 Acc: 0.8484
    val Loss: 0.1980 Acc: 0.9020

    Epoch 10/24
    ----------
    train Loss: 0.2745 Acc: 0.8770
    val Loss: 0.1974 Acc: 0.9085

    Epoch 11/24
    ----------
    train Loss: 0.3043 Acc: 0.8648
    val Loss: 0.1889 Acc: 0.9020

    Epoch 12/24
    ----------
    train Loss: 0.3017 Acc: 0.8566
    val Loss: 0.2205 Acc: 0.8889

    Epoch 13/24
    ----------
    train Loss: 0.2322 Acc: 0.8975
    val Loss: 0.1870 Acc: 0.9085

    Epoch 14/24
    ----------
    train Loss: 0.2776 Acc: 0.8852
    val Loss: 0.1767 Acc: 0.9216

    Epoch 15/24
    ----------
    train Loss: 0.1823 Acc: 0.9467
    val Loss: 0.1879 Acc: 0.9281

    Epoch 16/24
    ----------
    train Loss: 0.3140 Acc: 0.8689
    val Loss: 0.1772 Acc: 0.9412

    Epoch 17/24
    ----------
    train Loss: 0.3295 Acc: 0.8770
    val Loss: 0.1873 Acc: 0.9216

    Epoch 18/24
    ----------
    train Loss: 0.3400 Acc: 0.8361
    val Loss: 0.2008 Acc: 0.9085

    Epoch 19/24
    ----------
    train Loss: 0.3275 Acc: 0.8648
    val Loss: 0.1914 Acc: 0.9150

    Epoch 20/24
    ----------
    train Loss: 0.2170 Acc: 0.9139
    val Loss: 0.2222 Acc: 0.9020

    Epoch 21/24
    ----------
    train Loss: 0.2360 Acc: 0.8934
    val Loss: 0.2031 Acc: 0.9085

    Epoch 22/24
    ----------
    train Loss: 0.2943 Acc: 0.8484
    val Loss: 0.1837 Acc: 0.9477

    Epoch 23/24
    ----------
    train Loss: 0.2554 Acc: 0.8975
    val Loss: 0.1759 Acc: 0.9346

    Epoch 24/24
    ----------
    train Loss: 0.2747 Acc: 0.8689
    val Loss: 0.1753 Acc: 0.9346

    Training complete in 1m 7s
    Best val Acc: 0.947712

    A few predictions can then be displayed with the visualize_model helper from the original tutorial (its definition is not reproduced in this post):

    visualize_model(model_ft)
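    The original post does not save the result, but it is worth persisting the best weights so the run is not lost. The standard PyTorch idiom (the file name is my choice):

    torch.save(model_ft.state_dict(), 'model_ft.pth')
    # later: model_ft.load_state_dict(torch.load('model_ft.pth'))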

    Feature extraction

    Here all layers except the top one are frozen: setting requires_grad = False on their parameters prevents gradients from being computed for them during backward(), so only the newly added fully connected layer is trained.

    model_conv = torchvision.models.resnet18(pretrained=True)
    for param in model_conv.parameters():
        param.requires_grad = False

    # Parameters of newly constructed modules have requires_grad=True by default
    num_ftrs = model_conv.fc.in_features
    model_conv.fc = nn.Linear(num_ftrs, 2)

    model_conv = model_conv.to(device)

    criterion = nn.CrossEntropyLoss()

    # Observe that only parameters of the final layer are being optimized,
    # as opposed to before.
    optimizer_conv = optim.SGD(model_conv.fc.parameters(), lr=0.001, momentum=0.9)

    # Decay LR by a factor of 0.1 every 7 epochs
    exp_lr_scheduler = lr_scheduler.StepLR(optimizer_conv, step_size=7, gamma=0.1)
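    A quick way to verify that only the new head will be updated (my addition, not from the original post) is to list the parameters that still require gradients:

    print([name for name, p in model_conv.named_parameters() if p.requires_grad])
    # ['fc.weight', 'fc.bias']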

    Training and evaluation

    model_conv = train_model(model_conv, criterion, optimizer_conv, exp_lr_scheduler, num_epochs=25)

    Output:

    Epoch 0/24
    ----------
    train Loss: 0.6236 Acc: 0.6926
    val Loss: 0.2246 Acc: 0.9150

    Epoch 1/24
    ----------
    train Loss: 0.4739 Acc: 0.7910
    val Loss: 0.1979 Acc: 0.9412

    Epoch 2/24
    ----------
    train Loss: 0.4347 Acc: 0.7828
    val Loss: 0.1912 Acc: 0.9346

    Epoch 3/24
    ----------
    train Loss: 0.4254 Acc: 0.8197
    val Loss: 0.1704 Acc: 0.9412

    Epoch 4/24
    ----------
    train Loss: 0.5347 Acc: 0.7746
    val Loss: 0.1460 Acc: 0.9542

    Epoch 5/24
    ----------
    train Loss: 0.6257 Acc: 0.7582
    val Loss: 0.2418 Acc: 0.9150

    Epoch 6/24
    ----------
    train Loss: 0.3703 Acc: 0.8279
    val Loss: 0.1999 Acc: 0.9216

    Epoch 7/24
    ----------
    train Loss: 0.4697 Acc: 0.7705
    val Loss: 0.1882 Acc: 0.9216

    Epoch 8/24
    ----------
    train Loss: 0.3529 Acc: 0.8238
    val Loss: 0.1662 Acc: 0.9477

    Epoch 9/24
    ----------
    train Loss: 0.4021 Acc: 0.8238
    val Loss: 0.1558 Acc: 0.9542

    Epoch 10/24
    ----------
    train Loss: 0.3810 Acc: 0.8566
    val Loss: 0.1507 Acc: 0.9542

    Epoch 11/24
    ----------
    train Loss: 0.3658 Acc: 0.8320
    val Loss: 0.1585 Acc: 0.9477

    Epoch 12/24
    ----------
    train Loss: 0.3809 Acc: 0.8115
    val Loss: 0.1528 Acc: 0.9477

    Epoch 13/24
    ----------
    train Loss: 0.2164 Acc: 0.9180
    val Loss: 0.1700 Acc: 0.9542

    Epoch 14/24
    ----------
    train Loss: 0.2853 Acc: 0.8730
    val Loss: 0.1745 Acc: 0.9281

    Epoch 15/24
    ----------
    train Loss: 0.3199 Acc: 0.8566
    val Loss: 0.1572 Acc: 0.9412

    Epoch 16/24
    ----------
    train Loss: 0.3254 Acc: 0.8402
    val Loss: 0.1530 Acc: 0.9608

    Epoch 17/24
    ----------
    train Loss: 0.3239 Acc: 0.8525
    val Loss: 0.1735 Acc: 0.9346

    Epoch 18/24
    ----------
    train Loss: 0.2927 Acc: 0.8730
    val Loss: 0.1513 Acc: 0.9542

    Epoch 19/24
    ----------
    train Loss: 0.3030 Acc: 0.8648
    val Loss: 0.1551 Acc: 0.9542

    Epoch 20/24
    ----------
    train Loss: 0.3867 Acc: 0.8402
    val Loss: 0.1542 Acc: 0.9542

    Epoch 21/24
    ----------
    train Loss: 0.2517 Acc: 0.8975
    val Loss: 0.1560 Acc: 0.9542

    Epoch 22/24
    ----------
    train Loss: 0.2935 Acc: 0.8811
    val Loss: 0.1617 Acc: 0.9542

    Epoch 23/24
    ----------
    train Loss: 0.3000 Acc: 0.8730
    val Loss: 0.1631 Acc: 0.9477

    Epoch 24/24
    ----------
    train Loss: 0.3124 Acc: 0.8443
    val Loss: 0.1572 Acc: 0.9477

    Training complete in 0m 35s
    Best val Acc: 0.960784

    Finally, display a few predictions with the tutorial's visualize_model helper (not reproduced here) and leave interactive mode:

    visualize_model(model_conv)

    plt.ioff()
    plt.show()
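    To use the trained classifier on a single image, run it through the same 'val' transform; a minimal sketch (the image path is a placeholder):

    from PIL import Image

    img = Image.open('E:/hymenoptera_data/val/ants/some_image.jpg')  # placeholder path
    x = data_transforms['val'](img).unsqueeze(0).to(device)          # add a batch dimension

    model_conv.eval()
    with torch.no_grad():
        pred = model_conv(x).argmax(dim=1).item()
    print(class_names[pred])  # 'ants' or 'bees'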
