[WIP] Useful Tools¶
Apart from training/testing scripts, we provide lots of useful tools under the tools/ directory.
Analysis Tools¶
Plot training logs¶
tools/analysis_tools/analyze_logs.py plots loss/mIoU curves given a training log file. Run pip install seaborn first to install the dependency.
python tools/analysis_tools/analyze_logs.py xxx.json [--keys ${KEYS}] [--legend ${LEGEND}] [--backend ${BACKEND}] [--style ${STYLE}] [--out ${OUT_FILE}]
Examples:
Plot the mIoU, mAcc, aAcc metrics.
python tools/analysis_tools/analyze_logs.py log.json --keys mIoU mAcc aAcc --legend mIoU mAcc aAcc
Plot the loss metric.
python tools/analysis_tools/analyze_logs.py log.json --keys loss --legend loss
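Under the hood, the tool reads the training log as JSON lines and plots the chosen keys against the iteration number. If you want a similar plot without the script, here is a minimal sketch, assuming each log line is a JSON dict with iter and loss fields (the exact keys depend on your MMSegmentation version):
import json

import matplotlib.pyplot as plt

# Collect (iter, loss) pairs from a JSON-lines training log.
iters, losses = [], []
with open('log.json') as f:
    for line in f:
        record = json.loads(line)
        if 'loss' in record:  # skip lines without a loss entry, e.g. env info
            iters.append(record['iter'])
            losses.append(record['loss'])

plt.plot(iters, losses)
plt.xlabel('iter')
plt.ylabel('loss')
plt.show()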
Confusion Matrix (experimental)¶
In order to generate and plot an n x n confusion matrix, where n is the number of classes, you can follow the steps below:
1. Generate a prediction result in pkl format using test.py¶
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [--out ${PATH_TO_RESULT_FILE}]
Example:
python tools/test.py \
configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
--out result/pred_result.pkl
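If you want to inspect the dumped predictions yourself, the .pkl file can be loaded with the standard pickle module. This is a minimal sketch; the exact structure of each entry varies across MMSegmentation versions, so treat the indexing below as an assumption:
import pickle

with open('result/pred_result.pkl', 'rb') as f:
    results = pickle.load(f)

print(type(results), len(results))  # typically one entry per test image
print(results[0])                   # entry structure varies across versions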
2. Use confusion_matrix.py to generate and plot a confusion matrix¶
python tools/analysis_tools/confusion_matrix.py ${CONFIG_FILE} ${PATH_TO_RESULT_FILE} ${SAVE_DIR} --show
Description of arguments:
config: Path to the test config file.
prediction_path: Path to the prediction .pkl result.
save_dir: Directory where the confusion matrix will be saved.
--show: Enable result visualization.
--color-theme: Theme of the matrix color map.
--cfg_options: Custom options to override settings in the config file.
Example:
python tools/analysis_tools/confusion_matrix.py \
configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py \
result/pred_result.pkl \
result/confusion_matrix \
--show
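Conceptually, the script accumulates an n x n matrix whose entry (i, j) counts the pixels of ground-truth class i that were predicted as class j. A minimal NumPy sketch of that computation (not the script's actual implementation):
import numpy as np

def confusion_matrix(gt, pred, num_classes):
    # Rows are ground-truth classes, columns are predicted classes.
    mask = (gt >= 0) & (gt < num_classes)  # drop ignored/out-of-range labels
    idx = num_classes * gt[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

gt = np.array([0, 0, 1, 2, 2])
pred = np.array([0, 1, 1, 2, 0])
print(confusion_matrix(gt, pred, num_classes=3))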
Get the FLOPs and params (experimental)¶
We provide a script adapted from flops-counter.pytorch to compute the FLOPs and params of a given model.
python tools/analysis_tools/get_flops.py ${CONFIG_FILE} [--shape ${INPUT_SHAPE}]
You will get a result like this:
==============================
Input shape: (3, 2048, 1024)
Flops: 1429.68 GMac
Params: 48.98 M
==============================
Note
This tool is still experimental and we do not guarantee that the number is correct. You may use the result for simple comparisons, but double-check it before you adopt it in technical reports or papers.
(1) FLOPs are related to the input shape while parameters are not. The default input shape is (3, 2048, 1024). (2) Some operators are not counted in FLOPs, such as GN and custom operators.
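The parameter count, unlike FLOPs, can be verified independently of the input shape. A minimal sketch with plain PyTorch, using a torchvision backbone as a stand-in for a segmentation model:
import torchvision

model = torchvision.models.resnet50()  # stand-in; use your built segmentor instead
n_params = sum(p.numel() for p in model.parameters())
print(f'Params: {n_params / 1e6:.2f} M')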
Miscellaneous¶
Publish a model¶
Before you upload a model to AWS, you may want to (1) convert model weights to CPU tensors, (2) delete the optimizer states and (3) compute the hash of the checkpoint file and append the hash id to the filename.
python tools/misc/publish_model.py ${INPUT_FILENAME} ${OUTPUT_FILENAME}
E.g.,
python tools/misc/publish_model.py work_dirs/pspnet/latest.pth psp_r50_512x1024_40k_cityscapes.pth
The final output filename will be psp_r50_512x1024_40k_cityscapes-{hash id}.pth.
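For reference, the three steps boil down to roughly the following; a minimal sketch, assuming a standard checkpoint dict with an optional 'optimizer' entry (the real script may differ in details such as hash length):
import hashlib

import torch

ckpt = torch.load('work_dirs/pspnet/latest.pth', map_location='cpu')  # (1) CPU tensors
ckpt.pop('optimizer', None)                                           # (2) drop optimizer states
torch.save(ckpt, 'psp_r50_512x1024_40k_cityscapes.pth')

with open('psp_r50_512x1024_40k_cityscapes.pth', 'rb') as f:          # (3) hash the file
    sha = hashlib.sha256(f.read()).hexdigest()[:8]
print(f'final name: psp_r50_512x1024_40k_cityscapes-{sha}.pth')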
Print the entire config¶
tools/misc/print_config.py prints the whole config verbatim, expanding all its imports.
python tools/misc/print_config.py \
${CONFIG} \
--graph \
--cfg-options ${OPTIONS [OPTIONS...]}
Description of arguments:
config: The path of a PyTorch model config file.
--graph: Determines whether to print the model's graph.
--cfg-options: Custom options to override settings in the config file.
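The same expansion is available programmatically; a minimal sketch using mmengine's Config API, assuming an MMSegmentation 1.x environment:
from mmengine import Config

cfg = Config.fromfile('configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py')
print(cfg.pretty_text)  # the full config with all _base_ files merged in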
Model conversion¶
tools/model_converters/ provides several scripts to convert pretrained models released by other repos to MMSegmentation style.
ViT Swin MiT Transformer Models¶
ViT
tools/model_converters/vit2mmseg.py converts keys in timm pretrained ViT models to MMSegmentation style.
python tools/model_converters/vit2mmseg.py ${SRC} ${DST}
Swin
tools/model_converters/swin2mmseg.py converts keys in official pretrained Swin models to MMSegmentation style.
python tools/model_converters/swin2mmseg.py ${SRC} ${DST}
SegFormer
tools/model_converters/mit2mmseg.py converts keys in official pretrained MiT models to MMSegmentation style.
python tools/model_converters/mit2mmseg.py ${SRC} ${DST}
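All three converters follow the same pattern: load the upstream checkpoint, rename the state-dict keys to match the MMSegmentation module layout, and save the result. A minimal sketch with a hypothetical filename and rename rule (the real scripts handle many more cases):
from collections import OrderedDict

import torch

src = torch.load('upstream_pretrain.pth', map_location='cpu')
state_dict = src.get('state_dict', src)  # some checkpoints nest the weights

new_state_dict = OrderedDict()
for k, v in state_dict.items():
    new_k = k.replace('blocks.', 'layers.')  # hypothetical rename rule
    new_state_dict[new_k] = v

torch.save({'state_dict': new_state_dict}, 'converted.pth')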
Model Serving¶
In order to serve an MMSegmentation model with TorchServe, you can follow the steps below:
1. Convert model from MMSegmentation to TorchServe¶
python tools/torchserve/mmseg2torchserve.py ${CONFIG_FILE} ${CHECKPOINT_FILE} \
--output-folder ${MODEL_STORE} \
--model-name ${MODEL_NAME}
Note
${MODEL_STORE} needs to be an absolute path to a folder.
2. Build mmseg-serve docker image¶
docker build -t mmseg-serve:latest docker/serve/
3. Run mmseg-serve¶
Check the official docs for running TorchServe with docker.
In order to run on GPUs, you need to install nvidia-docker. You can omit the --gpus argument to run on CPU instead.
Example:
docker run --rm \
--cpus 8 \
--gpus device=0 \
-p8080:8080 -p8081:8081 -p8082:8082 \
--mount type=bind,source=$MODEL_STORE,target=/home/model-server/model-store \
mmseg-serve:latest
Read the docs about the Inference (8080), Management (8081) and Metrics (8082) APIs.
4. Test deployment¶
curl -O https://raw.githubusercontent.com/open-mmlab/mmsegmentation/master/resources/3dogs.jpg
curl http://127.0.0.1:8080/predictions/${MODEL_NAME} -T 3dogs.jpg -o 3dogs_mask.png
The response will be a “.png” mask.
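The same request can be sent from Python; a minimal sketch using the requests library, mirroring the curl call above (replace MODEL_NAME with your model's name):
import requests

with open('3dogs.jpg', 'rb') as f:
    resp = requests.post('http://127.0.0.1:8080/predictions/MODEL_NAME', data=f)

with open('3dogs_mask.png', 'wb') as out:
    out.write(resp.content)  # the response body is the PNG mask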
You can visualize the output as follows:
import matplotlib.pyplot as plt
import mmcv

# Read the returned mask as a single-channel grayscale image and display it.
plt.imshow(mmcv.imread("3dogs_mask.png", "grayscale"))
plt.show()
You should see a grayscale segmentation mask of the input image.
You can also use test_torchserve.py to compare the results of TorchServe and PyTorch, and to visualize them.
python tools/torchserve/test_torchserve.py ${IMAGE_FILE} ${CONFIG_FILE} ${CHECKPOINT_FILE} ${MODEL_NAME}
[--inference-addr ${INFERENCE_ADDR}] [--result-image ${RESULT_IMAGE}] [--device ${DEVICE}]
Example:
python tools/torchserve/test_torchserve.py \
demo/demo.png \
configs/fcn/fcn_r50-d8_4xb2-40k_cityscapes-512x1024.py \
checkpoint/fcn_r50-d8_512x1024_40k_cityscapes_20200604_192608-efe53f0d.pth \
fcn