Hi Team
Great work and thanks for your contribution!
I'm working on training `mask_rcnn_focalnet_small_patch4_mstrain_480-800_adamw_3x_coco_lrf.py` with a custom dataset in COCO format.
- `DistOptimizerHook` has been deprecated in `mmcv.runner.hooks.optimizer`, and `OptimizerHook` is recommended as its replacement. The original config contains:
```python
# do not use mmdet version fp16
fp16 = None
optimizer_config = dict(
    type="DistOptimizerHook",
    update_interval=1,
    grad_clip=None,
    coalesce=True,
    bucket_size_mb=-1,
    use_fp16=True,
)
```
- I've replaced it with the code below, which falls back to the default `OptimizerHook`:
```python
# do not use mmdet version fp16
fp16 = None
optimizer_config = dict(
    grad_clip=None,
)
```
- Installed mmcv from source:
```shell
git clone https://github.com/open-mmlab/mmcv.git -b 1.x /openmmlab/mmcv \
  && cd /openmmlab/mmcv \
  && MMCV_WITH_OPS=1 pip install --no-cache-dir -e . -v
```
I would like to know the recommended configuration for `fp16` and `optimizer_config` in place of `DistOptimizerHook` for distributed training.
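For reference, this is the variant I am currently experimenting with to keep mixed precision. It assumes the mmdet 2.x / mmcv 1.x behavior where setting a non-`None` `fp16` dict in the config makes the training script wrap `optimizer_config` in an `Fp16OptimizerHook`; the `loss_scale` value here is only a guess on my part, not a confirmed recommendation:

```python
# Sketch only (assumption: mmdet 2.x / mmcv 1.x, where a non-None `fp16`
# entry causes the runner to build an Fp16OptimizerHook around
# optimizer_config instead of a plain OptimizerHook).
fp16 = dict(loss_scale=512.0)  # loss_scale="dynamic" may also be an option
optimizer_config = dict(grad_clip=None)
```

Please let me know whether this is the intended replacement for the deprecated `DistOptimizerHook` setup, or whether a different hook/config is recommended for distributed fp16 training.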