Train and evaluate your MLP Mixer network using PyTorch

Preface

Project address: https://github.com/Fafa-DL/Awesome-Backbones

Operation tutorial: https://www.bilibili.com/video/BV1SY411P7Nd

Original MLP-Mixer paper: "MLP-Mixer: An all-MLP Architecture for Vision" (https://arxiv.org/abs/2105.01601)

1. Dataset preparation

1.1 Creating the label file

  • Download the project code to your local machine

  • This demonstration uses the flower dataset as an example; its directory structure is as follows:

├─flower_photos
│  ├─daisy
│  │      100080576_f52e8ee070_n.jpg
│  │      10140303196_b88d3d6cec.jpg
│  │      ...
│  ├─dandelion
│  │      10043234166_e6dd915111_n.jpg
│  │      10200780773_c6051a7d71_n.jpg
│  │      ...
│  ├─roses
│  │      10090824183_d02c613f10_m.jpg
│  │      102501987_3cdb8e5394_n.jpg
│  │      ...
│  ├─sunflowers
│  │      1008566138_6927679c8a.jpg
│  │      1022552002_2b93faf9e7_n.jpg
│  │      ...
│  └─tulips
│  │      100930342_92e8746431_n.jpg
│  │      10094729603_eeca3f2cb6.jpg
│  │      ...
  • Create the label file annotations.txt under Awesome-Backbones/datas/ and write one "class_name index" pair per line:
daisy 0
dandelion 1
roses 2
sunflowers 3
tulips 4
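
If you prefer not to type this file by hand, a minimal sketch like the following can generate it from the class folder names (the paths are assumptions; adjust them to your layout). Sorting alphabetically reproduces the indices shown above:

import os

# hypothetical paths - adjust to your own layout
dataset_dir = 'A:/flower_photos'
annotation_file = 'datas/annotations.txt'

# one "class_name index" pair per line, classes sorted alphabetically
class_names = sorted(d for d in os.listdir(dataset_dir)
                     if os.path.isdir(os.path.join(dataset_dir, d)))
with open(annotation_file, 'w') as f:
    for idx, name in enumerate(class_names):
        f.write(f'{name} {idx}\n')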

1.2 Splitting the dataset

  • Open Awesome-Backbones/tools/split_data.py
  • Modify the original dataset path and the save path for the split data. It is strongly recommended to keep the save path as datasets, since the next step uses that folder by default:
init_dataset = 'A:/flower_photos'
new_dataset = 'A:/Awesome-Backbones/datasets'
  • Open a terminal under Awesome-Backbones/ and run:
python tools/split_data.py
  • The split dataset has the following directory structure:
├─...
├─datasets
│  ├─test
│  │  ├─daisy
│  │  ├─dandelion
│  │  ├─roses
│  │  ├─sunflowers
│  │  └─tulips
│  └─train
│      ├─daisy
│      ├─dandelion
│      ├─roses
│      ├─sunflowers
│      └─tulips
├─...
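
split_data.py handles the split for you; if you want to reproduce or customize it, a minimal sketch under the same folder layout could look like the following (the 0.8/0.2 ratio is an assumption, not necessarily the script's default):

import os, random, shutil

init_dataset = 'A:/flower_photos'
new_dataset = 'A:/Awesome-Backbones/datasets'
train_ratio = 0.8  # assumed ratio; check split_data.py for the repo's actual default
random.seed(0)

for cls in os.listdir(init_dataset):
    src = os.path.join(init_dataset, cls)
    if not os.path.isdir(src):
        continue
    images = os.listdir(src)
    random.shuffle(images)
    split = int(len(images) * train_ratio)
    for subset, names in (('train', images[:split]), ('test', images[split:])):
        dst = os.path.join(new_dataset, subset, cls)
        os.makedirs(dst, exist_ok=True)
        for name in names:
            shutil.copy(os.path.join(src, name), dst)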

1.3 Generating the dataset information files

  • Make sure the split dataset is under Awesome-Backbones/datasets. If it is elsewhere, modify the dataset path in get_annotation.py:
datasets_path   = 'Your dataset path'
  • Open a terminal under Awesome-Backbones/ and run:
python tools/get_annotation.py
  • The generated dataset information files train.txt and test.txt will appear under Awesome-Backbones/datas/
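
Each line of these files presumably pairs an image path with its class index. A minimal sketch of how such files could be produced from the split folders and annotations.txt is shown below; the exact format is defined by get_annotation.py, so treat this purely as an illustration:

import os

datasets_path = 'datasets'                 # the split dataset from step 1.2
annotation_file = 'datas/annotations.txt'  # written in step 1.1

# read the "class_name index" pairs
with open(annotation_file) as f:
    class_to_idx = dict(line.split() for line in f if line.strip())

for subset in ('train', 'test'):
    with open(f'datas/{subset}.txt', 'w') as out:
        for cls, idx in class_to_idx.items():
            folder = os.path.join(datasets_path, subset, cls)
            for name in os.listdir(folder):
                out.write(f'{os.path.join(folder, name)} {idx}\n')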

2. Modify the configuration file

  • Each model has its own configuration file, saved under Awesome-Backbones/models/
  • A complete model is built from a backbone, a neck, a head, and a loss
  • Find the MLP-Mixer configuration file (models/mlp_mixer/mlp_mixer_base_p16.py)
  • In model_cfg, set num_classes to the number of classes in your dataset
  • Adjust batch_size and num_workers according to your computer's performance
  • If you have pretrained weights, set the pretrained flag to True and assign the path of the pretrained weights to pretrained_weights
  • If you need freeze training, set freeze_flag to True; the parts that can be frozen are the backbone, neck, and head
  • In optimizer_cfg, modify the initial learning rate and tune it against your batch size; if pretrained weights are used, it is recommended to lower the learning rate
  • See core/optimizers/lr_update.py for how the learning rate is updated
  • For more detailed configuration changes, refer to the project's configuration file explanation; an illustrative sketch of these fields follows this list
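
As a rough illustration of the fields mentioned above (names such as data_cfg, pretrained_flag, and freeze_layers are assumptions based on similar configs and may differ slightly; always follow the actual file in the repo):

model_cfg = dict(
    num_classes=5,                # number of classes in your dataset
    # backbone / neck / head definitions come with the repo and stay as-is
)

data_cfg = dict(                  # assumed name for the data section
    batch_size=32,                # lower this if you run out of GPU memory
    num_workers=4,                # dataloader workers, tune to your CPU
    pretrained_flag=True,         # True when loading pretrained weights
    pretrained_weights='path/to/pretrained.pth',
    freeze_flag=False,            # True to freeze part of the network
    freeze_layers=('backbone',),  # candidates: backbone, neck, head
)

optimizer_cfg = dict(
    lr=1e-3,                      # initial learning rate; reduce when fine-tuning
)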

3. Training

  • Confirm that the label file Awesome-Backbones/datas/annotations.txt is ready
  • Confirm that Awesome-Backbones/datas/train.txt and test.txt correspond to annotations.txt
  • Select the model you want to train and find its configuration file under Awesome-Backbones/models/
  • Modify the parameters in the configuration file as explained above
  • Open a terminal under Awesome-Backbones/ and run:
python tools/train.py models/mlp_mixer/mlp_mixer_base_p16.py
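
Under the hood, tools/train.py builds the model from the config and runs an ordinary PyTorch training loop. A stripped-down sketch of that idea (not the repo's actual code; model and dataloader construction omitted):

import torch

def train_one_epoch(model, loader, optimizer, device):
    # generic classification training loop, for illustration only
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()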

4. Evaluation

  • Confirm that the label file Awesome-Backbones/datas/annotations.txt is ready
  • Confirm that Awesome-Backbones/datas/test.txt corresponds to annotations.txt
  • Find the corresponding configuration file under Awesome-Backbones/models/
  • Modify the weight path in the configuration file and leave the rest unchanged:
ckpt = 'Your training weight path'
  • Open a terminal under Awesome-Backbones/ and run:
python tools/evaluation.py models/mlp_mixer/mlp_mixer_base_p16.py
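
Conceptually, the evaluation script loads the checkpoint given by ckpt and reports metrics on test.txt. A minimal top-1 accuracy sketch (illustrative only, not the repo's code):

import torch

@torch.no_grad()
def evaluate(model, loader, device):
    # top-1 accuracy over the test set, for illustration only
    model.eval()
    correct = total = 0
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1).cpu()
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total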
