mmdetection v1.0.0 - picking GPUs to train a model and running multiple training jobs on one multi-GPU machine (still has issues) --- modifying the config: images per batch, learning-rate decay, etc. -- enabling evaluation during training -- training on grayscale images

Rena · Updated 2024-09-20

Training on Grayscale Images

If you want to train on grayscale images with this version, do the following:

mmdetection/mmdet/datasets/pipelines/loading.py

@PIPELINES.register_module
class LoadImageFromFile(object):

    def __init__(self, to_float32=False, color_type='color'):
        self.to_float32 = to_float32
        self.color_type = color_type

    def __call__(self, results):
        if results['img_prefix'] is not None:
            filename = osp.join(results['img_prefix'],
                                results['img_info']['filename'])
        else:
            filename = results['img_info']['filename']
        img = mmcv.imread(filename, self.color_type)
        if self.to_float32:
            img = img.astype(np.float32)
        results['filename'] = filename
        results['img'] = img
        results['img_shape'] = img.shape
        results['ori_shape'] = img.shape
        return results

    def __repr__(self):
        return '{} (to_float32={}, color_type={})'.format(
            self.__class__.__name__, self.to_float32, self.color_type)

Right after the line img = mmcv.imread(filename, self.color_type), add the following:

import cv2
# convert BGR to single-channel grayscale, then back to a 3-channel image so
# downstream transforms (3-channel Normalize, the backbone's 3-channel input)
# keep working; all three channels now hold the same grayscale values
img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
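As a quick standalone sanity check (plain OpenCV/NumPy, not part of mmdetection), the round trip keeps the 3-channel shape while making the channels identical:

import cv2
import numpy as np

# fake BGR image, a stand-in for whatever mmcv.imread returns
img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # (32, 32), single channel
img = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)   # (32, 32, 3) again

assert img.shape == (32, 32, 3)
assert (img[..., 0] == img[..., 1]).all() and (img[..., 1] == img[..., 2]).all()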

Enabling Evaluation During Training

Pass the --validate flag to tools/train.py; here it is added in dist_train.sh:

#!/usr/bin/env bash

PYTHON=/home/apple/anaconda3/envs/py36_mmdetection/bin/python

CONFIG=$1
GPUS=$2
GPUNAME=$3
PORT=${PORT:-29500}

$PYTHON -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \
    $(dirname "$0")/train.py $CONFIG --gpus $GPUNAME --validate --launcher pytorch ${@:4}

To change how often evaluation runs during training, edit the config:

configs/mask_rcnn_r50_fpn_1x.py

# yapf:enable
evaluation = dict(interval=50)
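A minimal way to confirm the setting was picked up (assuming mmcv is installed and you run this from the repo root; in this version the interval is counted in epochs, as far as I can tell):

# quick check that the evaluation interval is what you expect
from mmcv import Config

cfg = Config.fromfile('configs/mask_rcnn_r50_fpn_1x.py')
print(cfg.evaluation.interval)   # should print 50 after the edit above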

Modifying the Config: Images per Batch, Learning-Rate Decay, etc.

In the same config (configs/mask_rcnn_r50_fpn_1x.py), modify:

train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
test_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1333, 800),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='Pad', size_divisor=32),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
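As an aside on the Resize step above: with keep_ratio=True, img_scale=(1333, 800) means the image is rescaled so its long edge is at most 1333 and its short edge at most 800. A small sketch of the scale computation, mirroring what I understand mmcv.imrescale to do (the exact rounding is an assumption):

# keep_ratio resize used by Resize(img_scale=(1333, 800)): pick the scale that
# fits the long edge under 1333 and the short edge under 800
def rescale_size(h, w, max_long=1333, max_short=800):
    scale = min(max_long / max(h, w), max_short / min(h, w))
    return int(h * scale + 0.5), int(w * scale + 0.5)

print(rescale_size(480, 640))     # (800, 1067): short edge hits the 800 cap
print(rescale_size(1080, 1920))   # (750, 1333): long edge hits the 1333 cap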

..............................

data = dict(
    imgs_per_gpu=4,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_train2017.json',
        img_prefix=data_root + 'train2017/',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        ann_file=data_root + 'annotations/instances_val2017.json',
        img_prefix=data_root + 'val2017/',
        pipeline=test_pipeline))
..............................

# learning policy
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=500,
    warmup_ratio=1.0 / 3,
    step=[40, 80, 120])
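To see what this schedule produces: a linear warmup over the first 500 iterations from warmup_ratio times the base lr up to the base lr, then a step decay at epochs 40, 80 and 120. Below is a simplified sketch of the step policy with linear warmup as I understand mmcv implements it; base_lr=0.02 and gamma=0.1 are assumptions (the usual model-zoo defaults), since the optimizer section is not shown here:

# simplified sketch of the 'step' LR policy with linear warmup
def lr_at(epoch, it, base_lr=0.02, gamma=0.1,
          warmup_iters=500, warmup_ratio=1.0 / 3, step=(40, 80, 120)):
    k = sum(epoch >= s for s in step)          # how many decay steps passed
    regular_lr = base_lr * gamma ** k
    if it < warmup_iters:                      # linear warmup on early iterations
        frac = (1 - it / warmup_iters) * (1 - warmup_ratio)
        return regular_lr * (1 - frac)
    return regular_lr

print(lr_at(0, 0))       # ~0.00667, i.e. base_lr * warmup_ratio
print(lr_at(0, 500))     # 0.02 once warmup ends
print(lr_at(40, 10000))  # 0.002 after the first decay at epoch 40
print(lr_at(80, 20000))  # 0.0002 after the second decay at epoch 80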

With imgs_per_gpu=4, nvidia-smi reports something like:

52%   71C    P2   185W / 250W |  10227MiB / 11019MiB |     72%      Default

GPU memory is just about fully used (10227 MiB of 11019 MiB); a larger batch no longer fits.
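Relatedly, when imgs_per_gpu or the number of GPUs changes, the learning rate is usually scaled linearly with the effective batch size (the rule that train.py's --autoscale-lr implements). A back-of-envelope example; the 0.02-for-batch-16 baseline (8 GPUs x 2 imgs_per_gpu) is the usual mmdetection model-zoo convention and an assumption if your config differs:

# linear-scaling rule of thumb: lr grows in proportion to the effective batch
imgs_per_gpu = 4
num_gpus = 2                                   # illustration only
effective_batch = imgs_per_gpu * num_gpus      # 8 images per iteration
suggested_lr = 0.02 * effective_batch / 16     # 0.01, vs. 0.02 at batch 16
print(effective_batch, suggested_lr)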

mmdetection v1.0.0: https://github.com/open-mmlab/mmdetection/tree/v1.0.0

Picking GPUs to Train a Model: Running Multiple Training Jobs on One Multi-GPU Machine

Command used (the second argument is the number of GPUs/processes, the third is the GPU id(s) to use; here, one process on GPU 2):

./tools/dist_train.sh configs/mask_rcnn_r50_fpn_1x.py 1 2

The code needs some changes. (The --resume_from checkpoint/mask_rcnn_r50_fpn_1x_20181010-069fa190.pth argument is something I added myself; adapt it to your own situation. In my project, training converges faster when resuming from it.)

mmdetection/tools/dist_train.sh

#!/usr/bin/env bash

PYTHON=/root/train/results/new/anaconda3_py3.7/bin/python

CONFIG=$1
GPUS=$2
GPUNAME=$3
PORT=${PORT:-29502}

nohup $PYTHON -m torch.distributed.launch --nproc_per_node=$GPUS --master_port=$PORT \
    $(dirname "$0")/train.py $CONFIG --resume_from checkpoint/mask_rcnn_r50_fpn_1x_20181010-069fa190.pth \
    --gpus $GPUNAME --launcher pytorch ${@:4} > out_marker.log 2>&1 &

mmdetection/tools/train.py

def parse_args():
    parser = argparse.ArgumentParser(description='Train a detector')
    parser.add_argument('config', help='train config file path')
    parser.add_argument('--work_dir', help='the dir to save logs and models')
    parser.add_argument(
        '--resume_from', help='the checkpoint file to resume from')
    parser.add_argument(
        '--validate',
        action='store_true',
        help='whether to evaluate the checkpoint during training')
    parser.add_argument(
        '--gpus',
        #type=int,  # commented out so comma-separated ids like "2,3" are accepted
        default=1,
        help='number of gpus to use '
        '(only applicable to non-distributed training)')
    parser.add_argument('--seed', type=int, default=None, help='random seed')
    parser.add_argument(
        '--deterministic',
        action='store_true',
        help='whether to set deterministic options for CUDNN backend.')
    parser.add_argument(
        '--launcher',
        choices=['none', 'pytorch', 'slurm', 'mpi'],
        default='none',
        help='job launcher')
    parser.add_argument('--local_rank', type=int, default=0)
    parser.add_argument(
        '--autoscale-lr',
        action='store_true',
        help='automatically scale lr with the number of gpus')
    args = parser.parse_args()
    if 'LOCAL_RANK' not in os.environ:
        os.environ['LOCAL_RANK'] = str(args.local_rank)

    return args


def main():
    args = parse_args()

    cfg = Config.fromfile(args.config)
    # set cudnn_benchmark
    if cfg.get('cudnn_benchmark', False):
        torch.backends.cudnn.benchmark = True
    # update configs according to CLI args
    if args.work_dir is not None:
        cfg.work_dir = args.work_dir
    if args.resume_from is not None:
        cfg.resume_from = args.resume_from

    #cfg.gpus = args.gpus  # commented out; replaced with the following:
    cfg.gpus = str(args.gpus).split(',')    
    if len(cfg.gpus) == 1:
        cfg.gpus = int(cfg.gpus[0])
    os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpus)
    if args.autoscale_lr:
        # apply the linear scaling rule (https://arxiv.org/abs/1706.02677)
        # note: after the change above cfg.gpus may be a list, in which case
        # this line would fail; leave --autoscale-lr off or use a GPU count here
        cfg.optimizer['lr'] = cfg.optimizer['lr'] * cfg.gpus / 8
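A standalone illustration (a hypothetical helper, not part of train.py) of how the replacement lines treat the two kinds of --gpus values:

import os

# mimics the cfg.gpus handling added above: a single id becomes an int,
# a comma-separated value stays a list (note: of strings), and the raw value
# is exported as CUDA_VISIBLE_DEVICES
def parse_gpus(value):
    gpus = str(value).split(',')
    if len(gpus) == 1:
        gpus = int(gpus[0])
    os.environ['CUDA_VISIBLE_DEVICES'] = str(value)
    return gpus

print(parse_gpus('2'))      # 2 -> single-GPU, non-distributed path
print(parse_gpus('2,3'))    # ['2', '3'] -> multi-GPU list of id strings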

mmdetection/mmdet/apis/train.py

def _non_dist_train(model,
                    dataset,
                    cfg,
                    validate=False,
                    logger=None,
                    timestamp=None):
    if validate:
        raise NotImplementedError('Built-in validation is not implemented '
                                  'yet in not-distributed training. Use '
                                  'distributed training or test.py and '
                                  '*eval.py scripts instead.')
    # prepare data loaders
    dataset = dataset if isinstance(dataset, (list, tuple)) else [dataset]
    data_loaders = [
        build_dataloader(
            ds,
            cfg.data.imgs_per_gpu,
            cfg.data.workers_per_gpu,
            # cfg.gpus,
            len(cfg.gpus),
            dist=False) for ds in dataset
    ]
    # put model on gpus
    # model = MMDataParallel(model, device_ids=range(cfg.gpus)).cuda()
    model = MMDataParallel(model, device_ids=cfg.gpus).cuda()

Author: 知识在于分享


