
[Paper Implementation] Implementing MobileNetV2

by 쑤스토리 2022. 4. 17.

Having implemented the MobileNetV2 model, we now implement the main (training) function.
The code was written with reference to https://github.com/kuangliu/pytorch-cifar.



1 Import


First, import the modules we need.

import os

import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F  # note: torch.nn.functional, not torch.functional
import torch.backends.cudnn as cudnn
import matplotlib.pyplot as plt

import torchvision
import torchvision.transforms as transforms
import cv2
import numpy as np

from mobilenetv2 import MobileNetV2  # bring in the model class

os.environ['CUDA_VISIBLE_DEVICES'] = "7"

print('device count:', torch.cuda.device_count())
print('current device:', torch.cuda.current_device())

For the GPU setup, os.environ['CUDA_VISIBLE_DEVICES'] is set to "7" here, but you should adjust it to your own GPU environment.
For a more detailed explanation of GPU configuration, see the post below.

https://shshin9812.tistory.com/15

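As a side note (my addition, not part of the original code): if you want the script to fall back to the CPU when no GPU is visible, a device-agnostic setup looks roughly like this sketch.

# minimal sketch of device-agnostic setup; if you adopt this pattern,
# replace the bare .cuda() calls below with .to(device)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('using device:', device)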

2 ArgParse

A common pattern is to use Python's argparse module to read in user arguments.


Next, we define the arguments to accept from the user with argparse.

import argparse

parser = argparse.ArgumentParser(description='PyTorch CIFAR10 Training')
parser.add_argument('--lr', default=0.1, type=float, help='learning rate')
parser.add_argument('--resume', '-r', action='store_true', help='resume from checkpoint')
parser.add_argument('--gpu', default='7', type=str, help='gpu ids')
args = parser.parse_args()

You can see that the default value of the learning rate is 0.1 and the default gpu is "7".
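As a quick illustration (with made-up values, not from the post), you can hand parse_args an explicit argument list to see how the defaults get overridden:

# sketch: overriding the defaults with explicit (hypothetical) arguments
args = parser.parse_args(['--lr', '0.01', '--resume', '--gpu', '0'])
print(args.lr, args.resume, args.gpu)  # 0.01 True 0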

3 Loading Data, DataLoader, and Transforms

The torchvision package consists of popular datasets, model architectures, and common image transformations for computer vision.
At the heart of PyTorch's data loading utility is the torch.utils.data.DataLoader class. It represents a Python iterable over a dataset, with support for map-style and iterable-style datasets, customizing data loading order, automatic batching, single- and multi-process data loading, and automatic memory pinning.


We load the CIFAR10 data through torchvision and let the torch.utils.data.DataLoader class batch it automatically at the given batch size. For data diversity, the training data is augmented with the transforms module imported earlier. Note that the transform objects must be defined before the datasets that reference them, so they come first below.

# data transforms (defined before the datasets that use them)
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),  # random crop with padding
    transforms.RandomHorizontalFlip(),     # random horizontal flip
    transforms.ToTensor(),                 # convert a PIL Image or numpy.ndarray to a tensor
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))
])

transform_test = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
])

# data download
trainset = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=transform_train
)

testset = torchvision.datasets.CIFAR10(
    root='./data', train=False, download=True, transform=transform_test
)

# data loaders

trainloader = torch.utils.data.DataLoader(
    trainset, batch_size=128, shuffle=True, num_workers=2
)

testloader = torch.utils.data.DataLoader(
    testset, batch_size=100, shuffle=False, num_workers=2  # no need to shuffle at test time
)

classes = ('plane', 'car', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck')  # the CIFAR10 target classes
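Before moving on, it can help to pull one batch out of the loader and check its shape; a quick sketch (my addition), assuming the loaders above:

# sketch: inspect one batch from the train loader
images, labels = next(iter(trainloader))
print(images.shape)        # torch.Size([128, 3, 32, 32])
print(labels.shape)        # torch.Size([128])
print(classes[labels[0]])  # class name of the first sample in the batch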

4 Model and Checkpoint Setup


We put the model on the GPU with .cuda(), and the checkpoint directory stores the best accuracy and the corresponding epoch. best_acc and start_epoch are given defaults first, so the script also runs without --resume.

print('==> building model')

model = MobileNetV2()
model = model.cuda()

best_acc = 0     # best test accuracy so far
start_epoch = 0  # start from epoch 0, or from the last checkpoint epoch

if args.resume:
    # load checkpoint
    print('==> Resuming from checkpoint..')
    assert os.path.isdir('checkpoint'), 'Error: no checkpoint directory found'
    checkpoint = torch.load('./checkpoint/ckpt.pth')
    model.load_state_dict(checkpoint['net'])
    best_acc = checkpoint['acc']
    start_epoch = checkpoint['epoch']

5 Defining the Loss Function, Optimizer, and Scheduler


For the loss function we use CrossEntropyLoss, which suits multi-class (rather than binary) classification, and as the optimizer we use Stochastic Gradient Descent.
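As a tiny illustration of what CrossEntropyLoss computes (my own sketch with made-up logits, not from the post): it applies log-softmax to the raw outputs and takes the negative log-probability of the target class.

# sketch: CrossEntropyLoss on dummy logits, batch of 2 samples and 3 classes
logits = torch.tensor([[2.0, 0.5, 0.1],
                       [0.2, 0.1, 3.0]])
targets = torch.tensor([0, 2])                 # correct class index per sample
print(nn.CrossEntropyLoss()(logits, targets))  # small loss: logits favor the targets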

Q. What is a state_dict?


A state_dict is simply a Python dictionary object that maps each layer to its parameter tensor.

Example

# Define model
class TheModelClass(nn.Module):
    def __init__(self):
        super(TheModelClass, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x

# Initialize model
model = TheModelClass()

# Initialize optimizer
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

# Print model's state_dict
print("Model's state_dict:")
for param_tensor in model.state_dict():
    print(param_tensor, "\t", model.state_dict()[param_tensor].size())

# Print optimizer's state_dict
print("Optimizer's state_dict:")
for var_name in optimizer.state_dict():
    print(var_name, "\t", optimizer.state_dict()[var_name])

Output

Model's state_dict:
conv1.weight     torch.Size([6, 3, 5, 5])
conv1.bias   torch.Size([6])
conv2.weight     torch.Size([16, 6, 5, 5])
conv2.bias   torch.Size([16])
fc1.weight   torch.Size([120, 400])
fc1.bias     torch.Size([120])
fc2.weight   torch.Size([84, 120])
fc2.bias     torch.Size([84])
fc3.weight   torch.Size([10, 84])
fc3.bias     torch.Size([10])

Optimizer's state_dict:
state    {}
param_groups     [{'lr': 0.001, 'momentum': 0.9, 'dampening': 0, 'weight_decay': 0, 
'nesterov': False, 'params': [4675713712, 4675713784, 4675714000, 4675714072, 4675714216, 
4675714288, 4675714432, 4675714504, 4675714648, 4675714720]}]

# define loss function, optimizer, and scheduler
criterion = nn.CrossEntropyLoss()  # loss for multi-class (non-binary) classification
optimizer = optim.SGD(model.parameters(), lr=args.lr, momentum=0.9,
                      weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=200)
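To see what CosineAnnealingLR actually does to the learning rate, here is a standalone sketch (my addition, using a throwaway optimizer so the real one is not disturbed): the LR follows half a cosine from the initial value down toward 0 over T_max epochs.

# sketch: how the LR decays under CosineAnnealingLR with T_max=200
toy_opt = optim.SGD([torch.zeros(1, requires_grad=True)], lr=0.1)
toy_sched = torch.optim.lr_scheduler.CosineAnnealingLR(toy_opt, T_max=200)
for epoch in range(200):
    if epoch % 50 == 0:
        print(epoch, toy_sched.get_last_lr())  # 0.1 -> ~0.085 -> 0.05 -> ~0.015
    toy_opt.step()
    toy_sched.step()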

6 Train Function


Everything is annotated in the comments.

def train(epoch):
    print('\nEpoch : %d' % epoch)
    model.train()   # put the model into train mode
    train_loss = 0  # reset the running loss
    correct = 0
    total = 0

    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.cuda(), targets.cuda()  # move the batch to the GPU
        optimizer.zero_grad()    # reset the gradients of all optimized parameters
        outputs = model(inputs)  # forward pass through the model
        loss = criterion(outputs, targets)  # cross-entropy loss between outputs and targets
        loss.backward()   # backpropagation
        optimizer.step()  # update the parameters

        # print(targets.size(0))  # number of ground-truth labels per batch (128)
        # print(targets.size())   # the shape of that tensor

        train_loss += loss.item()
        _, predicted = outputs.max(1)  # index of the largest value per row (the predicted class)
        total += targets.size(0)
        correct += predicted.eq(targets).sum().item()  # how many predictions match the targets

        if batch_idx % 100 == 0:
            print(batch_idx, len(trainloader), 'Loss : %.3f | Acc : %.3f %% (%d/%d)'
                  % (train_loss / (batch_idx + 1), 100. * correct / total, correct, total))
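The `_, predicted = outputs.max(1)` line deserves a quick illustration (my sketch with a dummy tensor, not from the post): max(1) returns both the maximum values and their indices along dimension 1, and the indices are the predicted classes.

# sketch: what outputs.max(1) returns for a dummy 2x3 "logits" tensor
dummy = torch.tensor([[0.1, 0.7, 0.2],
                      [0.9, 0.05, 0.05]])
values, predicted = dummy.max(1)
print(predicted)  # tensor([1, 0]) -- the argmax of each row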

7 Test Function

def test(epoch):
    global best_acc
    model.eval()  # switch to eval mode
    test_loss = 0
    correct = 0
    total = 0

    with torch.no_grad():
        for batch_idx, (inputs, targets) in enumerate(testloader):
            inputs, targets = inputs.cuda(), targets.cuda()  # move the batch to the GPU
            outputs = model(inputs)
            loss = criterion(outputs, targets)

            test_loss += loss.item()
            _, predicted = outputs.max(1)
            total += targets.size(0)
            correct += predicted.eq(targets).sum().item()  # accumulate with +=, not =

            if batch_idx % 100 == 0:
                print(batch_idx, len(testloader), 'Loss : %.3f | Acc : %.3f %% (%d/%d)'
                      % (test_loss / (batch_idx + 1), 100. * correct / total, correct, total))

8 Saving the Best Accuracy


If the current accuracy is higher than best_acc, it needs to become the new best, so we write the following code (it continues inside the test function, hence the indentation). If the checkpoint directory does not exist, we create it, save the state there, and then redefine the current accuracy as best_acc.

    acc = 100. * correct / total

    if acc > best_acc:
        print('saving best acc...')
        state = {
            'net': model.state_dict(),
            'acc': acc,
            'epoch': epoch,
        }
        if not os.path.isdir('checkpoint'):
            os.mkdir('checkpoint')
        torch.save(state, './checkpoint/ckpt.pth')
        best_acc = acc
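To double-check what got saved, the checkpoint file can be inspected afterwards; a small sketch (my addition), assuming ./checkpoint/ckpt.pth exists:

# sketch: inspect the saved checkpoint from a separate script or session
ckpt = torch.load('./checkpoint/ckpt.pth', map_location='cpu')
print(ckpt['acc'], ckpt['epoch'])
print(list(ckpt['net'].keys())[:3])  # first few parameter names in the state_dict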

9 Running


We run train and test for the given number of epochs.

for epoch in range(start_epoch, start_epoch + 150):
    train(epoch)
    test(epoch)
    scheduler.step()


reference: https://github.com/kuangliu/pytorch-cifar
