API Reference

mmseg.apis

mmseg.core

seg

mmseg.core.seg.build_pixel_sampler(cfg, **default_args)[source]

Build pixel sampler for segmentation map.

class mmseg.core.seg.BasePixelSampler(**kwargs)[source]

Base class of pixel sampler.

sample(seg_logit, seg_label)[source]

Placeholder for sample function.

class mmseg.core.seg.OHEMPixelSampler(context, thresh=None, min_kept=100000)[source]

Online Hard Example Mining Sampler for segmentation.

Parameters:
  • context (nn.Module) – The context of sampler, subclass of BaseDecodeHead.
  • thresh (float, optional) – The confidence threshold for hard example selection: predictions with confidence below it are treated as hard examples. If not specified, the hard examples will be the pixels with the top min_kept losses. Default: None.
  • min_kept (int, optional) – The minimum number of predictions to keep. Default: 100000.
sample(seg_logit, seg_label)[source]

Sample pixels that have high loss or low prediction confidence.

Parameters:
  • seg_logit (torch.Tensor) – segmentation logits, shape (N, C, H, W)
  • seg_label (torch.Tensor) – segmentation label, shape (N, 1, H, W)
Returns:

segmentation weight, shape (N, H, W)

Return type:

torch.Tensor
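
The selection rule above can be sketched in NumPy. This is a simplified illustration of the logic, not the actual implementation; the function name and the flattened per-pixel arrays are assumptions for the example.

```python
import numpy as np

def ohem_select(true_conf, loss, thresh=None, min_kept=3):
    """Toy sketch of the OHEM selection rule (illustrative only).

    true_conf: (P,) predicted probability of the ground-truth class per pixel
    loss:      (P,) per-pixel loss values
    Returns a boolean mask over the pixels kept as hard examples.
    """
    n = true_conf.shape[0]
    k = min(min_kept, n)
    if thresh is not None:
        # Lower the cutoff if necessary so at least `min_kept` pixels survive.
        kth_conf = np.sort(true_conf)[k - 1]
        cutoff = max(thresh, kth_conf)
        return true_conf < cutoff
    # No threshold given: keep the pixels with the largest loss.
    top = np.argsort(loss)[::-1][:k]
    mask = np.zeros(n, dtype=bool)
    mask[top] = True
    return mask
```

The real sampler returns a per-pixel weight tensor of shape (N, H, W) rather than a boolean mask.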

evaluation

class mmseg.core.evaluation.EvalHook(dataloader, interval=1, by_epoch=False, **eval_kwargs)[source]

Evaluation hook.

dataloader

A PyTorch dataloader.

Type:DataLoader
interval

Evaluation interval (by epochs). Default: 1.

Type:int
after_train_epoch(runner)[source]

After train epoch hook.

after_train_iter(runner)[source]

After train iter hook.

evaluate(runner, results)[source]

Call evaluate function of dataset.

class mmseg.core.evaluation.DistEvalHook(dataloader, interval=1, gpu_collect=False, by_epoch=False, **eval_kwargs)[source]

Distributed evaluation hook.

dataloader

A PyTorch dataloader.

Type:DataLoader
interval

Evaluation interval (by epochs). Default: 1.

Type:int
tmpdir

Temporary directory to save the results of all processes. Default: None.

Type:str | None
gpu_collect

Whether to use gpu or cpu to collect results. Default: False.

Type:bool
after_train_epoch(runner)[source]

After train epoch hook.

after_train_iter(runner)[source]

After train iter hook.

mmseg.core.evaluation.mean_dice(results, gt_seg_maps, num_classes, ignore_index, nan_to_num=None)[source]

Calculate Mean Dice (mDice)

Parameters:
  • results (list[ndarray]) – List of prediction segmentation maps
  • gt_seg_maps (list[ndarray]) – list of ground truth segmentation maps
  • num_classes (int) – Number of categories
  • ignore_index (int) – Index that will be ignored in evaluation.
  • nan_to_num – If specified, NaN values will be replaced by the numbers defined by the user. Default: None.
mmseg.core.evaluation.mean_iou(results, gt_seg_maps, num_classes, ignore_index, nan_to_num=None)[source]

Calculate Mean Intersection and Union (mIoU)

Parameters:
  • results (list[ndarray]) – List of prediction segmentation maps
  • gt_seg_maps (list[ndarray]) – list of ground truth segmentation maps
  • num_classes (int) – Number of categories
  • ignore_index (int) – Index that will be ignored in evaluation.
  • nan_to_num – If specified, NaN values will be replaced by the numbers defined by the user. Default: None.
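
The per-class accumulation behind mean_iou can be sketched as follows. This is a simplified NumPy illustration, not the library's implementation; the function name is made up for the example.

```python
import numpy as np

def mean_iou_sketch(results, gt_seg_maps, num_classes, ignore_index):
    """Toy sketch of mIoU: accumulate per-class intersection and union."""
    inter = np.zeros(num_classes)
    union = np.zeros(num_classes)
    for pred, gt in zip(results, gt_seg_maps):
        valid = gt != ignore_index          # drop ignored pixels
        pred, gt = pred[valid], gt[valid]
        for c in range(num_classes):
            p, g = pred == c, gt == c
            inter[c] += np.logical_and(p, g).sum()
            union[c] += np.logical_or(p, g).sum()
    with np.errstate(divide='ignore', invalid='ignore'):
        iou = inter / union                 # NaN where a class never occurs
    return iou, np.nanmean(iou)
```

This also shows the role of nan_to_num: classes absent from both prediction and ground truth yield NaN IoU values, which callers may replace with a fixed number.
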
mmseg.core.evaluation.eval_metrics(results, gt_seg_maps, num_classes, ignore_index, metrics=['mIoU'], nan_to_num=None)[source]

Calculate evaluation metrics.

Parameters:
  • results (list[ndarray]) – List of prediction segmentation maps
  • gt_seg_maps (list[ndarray]) – list of ground truth segmentation maps
  • num_classes (int) – Number of categories
  • ignore_index (int) – Index that will be ignored in evaluation.
  • metrics (list[str] | str) – Metrics to be evaluated, ‘mIoU’ and ‘mDice’.
  • nan_to_num – If specified, NaN values will be replaced by the numbers defined by the user. Default: None.
mmseg.core.evaluation.get_classes(dataset)[source]

Get class names of a dataset.

mmseg.core.evaluation.get_palette(dataset)[source]

Get class palette (RGB) of a dataset.

utils

mmseg.core.utils.add_prefix(inputs, prefix)[source]

Add prefix for dict.

Parameters:
  • inputs (dict) – The input dict with str keys.
  • prefix (str) – The prefix to add.
Returns:

The dict with keys updated with prefix.

Return type:

dict
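
The behavior can be illustrated with a one-line equivalent. A minimal sketch, assuming the prefix is joined to each key with a dot separator; the function name here is illustrative.

```python
def add_prefix_sketch(inputs, prefix):
    """Toy equivalent of add_prefix: prepend `prefix.` to every key."""
    return {f'{prefix}.{key}': value for key, value in inputs.items()}
```

This is how loss dicts from different heads can be disambiguated, e.g. turning {'loss_seg': 0.2} into {'decode.loss_seg': 0.2}.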

mmseg.datasets

datasets

pipelines

mmseg.models

segmentors

backbones

decode_heads

losses

mmseg.models.losses.accuracy(pred, target, topk=1, thresh=None)[source]

Calculate accuracy according to the prediction and target.

Parameters:
  • pred (torch.Tensor) – The model prediction, shape (N, num_class, …)
  • target (torch.Tensor) – The target of each prediction, shape (N, …)
  • topk (int | tuple[int], optional) – If the target is among the top-k predictions, the prediction is regarded as correct. Defaults to 1.
  • thresh (float, optional) – If not None, predictions with scores under this threshold are considered incorrect. Defaults to None.
Returns:

If the input topk is a single integer, the function will return a single float as accuracy. If topk is a tuple containing multiple integers, the function will return a tuple containing accuracies of each topk number.

Return type:

float | tuple[float]
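
The top-k rule above can be sketched in NumPy. This is a simplified illustration of the stated semantics, not the library code; the function name is an assumption.

```python
import numpy as np

def topk_accuracy(pred, target, topk=(1,), thresh=None):
    """Toy sketch of top-k accuracy with an optional score threshold."""
    if isinstance(topk, int):
        topk = (topk,)
    accs = []
    for k in topk:
        # Indices of the k highest-scoring classes for each sample.
        topk_idx = np.argsort(pred, axis=1)[:, ::-1][:, :k]
        correct = (topk_idx == target[:, None]).any(axis=1)
        if thresh is not None:
            # A hit only counts if the best score clears the threshold.
            correct &= pred.max(axis=1) > thresh
        accs.append(correct.mean())
    return accs[0] if len(topk) == 1 else tuple(accs)
```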

class mmseg.models.losses.Accuracy(topk=(1, ), thresh=None)[source]

Accuracy calculation module.

forward(pred, target)[source]

Forward function to calculate accuracy.

Parameters:
  • pred (torch.Tensor) – Prediction of models.
  • target (torch.Tensor) – Target for each prediction.
Returns:

The accuracies under different topk criterions.

Return type:

tuple[float]

mmseg.models.losses.cross_entropy(pred, label, weight=None, class_weight=None, reduction='mean', avg_factor=None, ignore_index=-100)[source]

The wrapper function for F.cross_entropy()

mmseg.models.losses.binary_cross_entropy(pred, label, weight=None, reduction='mean', avg_factor=None, class_weight=None, ignore_index=255)[source]

Calculate the binary CrossEntropy loss.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, 1).
  • label (torch.Tensor) – The learning label of the prediction.
  • weight (torch.Tensor, optional) – Sample-wise loss weight.
  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • class_weight (list[float], optional) – The weight for each class.
  • ignore_index (int | None) – The label index to be ignored. Default: 255
Returns:

The calculated loss

Return type:

torch.Tensor

mmseg.models.losses.mask_cross_entropy(pred, target, label, reduction='mean', avg_factor=None, class_weight=None, ignore_index=None)[source]

Calculate the CrossEntropy loss for masks.

Parameters:
  • pred (torch.Tensor) – The prediction with shape (N, C), C is the number of classes.
  • target (torch.Tensor) – The learning label of the prediction.
  • label (torch.Tensor) – Indicates the class of each mask’s corresponding object. It is used to select the mask of that class when the mask prediction is not class-agnostic.
  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”.
  • avg_factor (int, optional) – Average factor that is used to average the loss. Defaults to None.
  • class_weight (list[float], optional) – The weight for each class.
  • ignore_index (None) – Placeholder, to be consistent with other loss. Default: None.
Returns:

The calculated loss

Return type:

torch.Tensor

class mmseg.models.losses.CrossEntropyLoss(use_sigmoid=False, use_mask=False, reduction='mean', class_weight=None, loss_weight=1.0)[source]

CrossEntropyLoss.

Parameters:
  • use_sigmoid (bool, optional) – Whether the prediction uses sigmoid instead of softmax. Defaults to False.
  • use_mask (bool, optional) – Whether to use mask cross entropy loss. Defaults to False.
  • reduction (str, optional) – The method used to reduce the loss. Options are “none”, “mean” and “sum”. Defaults to ‘mean’.
  • class_weight (list[float], optional) – Weight of each class. Defaults to None.
  • loss_weight (float, optional) – Weight of the loss. Defaults to 1.0.
forward(cls_score, label, weight=None, avg_factor=None, reduction_override=None, **kwargs)[source]

Forward function.

mmseg.models.losses.reduce_loss(loss, reduction)[source]

Reduce loss as specified.

Parameters:
  • loss (Tensor) – Elementwise loss tensor.
  • reduction (str) – Options are “none”, “mean” and “sum”.
Returns:

Reduced loss tensor.

Return type:

Tensor

mmseg.models.losses.weight_reduce_loss(loss, weight=None, reduction='mean', avg_factor=None)[source]

Apply element-wise weight and reduce loss.

Parameters:
  • loss (Tensor) – Element-wise loss.
  • weight (Tensor) – Element-wise weights.
  • reduction (str) – Same as built-in losses of PyTorch.
  • avg_factor (float) – Average factor when computing the mean of losses.
Returns:

Processed loss values.

Return type:

Tensor
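
The weighting-then-reduction pipeline can be sketched in NumPy. A minimal illustration of the documented semantics, in which avg_factor replaces the element count as the denominator of the mean; the function name is made up here.

```python
import numpy as np

def weight_reduce_loss_sketch(loss, weight=None, reduction='mean', avg_factor=None):
    """Toy sketch: apply an element-wise weight, then reduce."""
    if weight is not None:
        loss = loss * weight
    if avg_factor is None:
        if reduction == 'mean':
            return loss.mean()
        if reduction == 'sum':
            return loss.sum()
        return loss  # reduction == 'none'
    if reduction == 'mean':
        # avg_factor replaces the element count as the denominator.
        return loss.sum() / avg_factor
    raise ValueError('avg_factor can only be used with mean reduction')
```

With loss [1, 1, 2] and weight [1, 0, 1], this reproduces the values shown in the weighted_loss doctest below: a weighted mean of 1.0, and 1.5 when avg_factor=2.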

mmseg.models.losses.weighted_loss(loss_func)[source]

Create a weighted version of a given loss function.

To use this decorator, the loss function must have the signature like loss_func(pred, target, **kwargs). The function only needs to compute element-wise loss without any reduction. This decorator will add weight and reduction arguments to the function. The decorated function will have the signature like loss_func(pred, target, weight=None, reduction=’mean’, avg_factor=None, **kwargs).

Example:
>>> import torch
>>> @weighted_loss
... def l1_loss(pred, target):
...     return (pred - target).abs()
>>> pred = torch.Tensor([0, 2, 3])
>>> target = torch.Tensor([1, 1, 1])
>>> weight = torch.Tensor([1, 0, 1])
>>> l1_loss(pred, target)
tensor(1.3333)
>>> l1_loss(pred, target, weight)
tensor(1.)
>>> l1_loss(pred, target, reduction='none')
tensor([1., 1., 2.])
>>> l1_loss(pred, target, weight, avg_factor=2)
tensor(1.5000)