Thank you for your great work!
I am trying to train a model to perform amodal instance segmentation and then feed it my own images to see the predicted hidden regions.
I would like to obtain prediction results like the following.
However, I am unable to detect hidden regions, so please let me ask a few questions.
Questions

---
About the label data
datasets/coco/annotations/instances_train_2017_transform_slight_correct.json
Which of the following options (A, B, or C) is appropriate?
I tried C but did not get good results.
- A. Use the file downloaded from Google Drive or OneDrive as-is.
- B. Run BCNet/process.h on the JSON file downloaded from Google Drive or OneDrive and use the output file.
- C. Run BCNet/process.h on the instances_train2017.json contained in the 2017 Train/Val annotations [241MB] and use the output file.
datasets/coco/annotations/instances_val2017.json
I used the instances_val2017.json included in the 2017 Train/Val annotations [241MB].
Do I also need to run BCNet/process.h on instances_val2017.json?
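For reference, here is a small sketch (not part of BCNet) that I could use to sanity-check the processed annotation file, just printing the key set of the first annotation so I can see whether the process script added any extra fields. The path is the one from my setup above; the helper itself makes no assumptions about which fields should be present.

```python
import json

def annotation_keys(path):
    """Return the sorted key set of the first annotation in a COCO-style JSON file."""
    with open(path) as f:
        data = json.load(f)
    anns = data.get("annotations", [])
    return sorted(anns[0].keys()) if anns else []

# Example usage from the repo root (commented out; the file only exists locally):
# print(annotation_keys(
#     "datasets/coco/annotations/instances_train_2017_transform_slight_correct.json"))
```

Comparing the key sets of the raw instances_train2017.json and the processed file would make it obvious whether the process script ran as intended.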

---
About the config file
I am currently running BCNet/all.sh, using the BCNet/configs/fcos/fcos_imprv_R_50_FPN.yaml listed in it as the training config.
Would BCNet/configs/fcos/fcos_imprv_R_101_FPN.yaml be more appropriate?

---
About visualize.sh
Is it correct to use the same settings as in `all.sh`?
I am concerned that the config used by visualize.sh differs from the one used by all.sh.
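To check this concern concretely, a quick sketch (my own helper, not part of BCNet) that lists every `configs/...yaml` path mentioned in a shell script, so the two scripts can be compared side by side:

```python
import re

def configs_in_script(path):
    """Return the sorted, de-duplicated config .yaml paths referenced in a script."""
    with open(path) as f:
        text = f.read()
    return sorted(set(re.findall(r"configs/\S+\.yaml", text)))

# Example usage from the repo root (commented out; the scripts only exist locally):
# print(configs_in_script("all.sh"))
# print(configs_in_script("visualize.sh"))
```

If the two lists differ, that would confirm that training and visualization are using different configs.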