Please find the evaluation metrics for semantic segmentation.
The winning entry will be evaluated using the performance metric M (0 < M <= 100), defined in terms of two quantities: P (0 < P <= 1), which measures model accuracy, defined as the unweighted mean intersection-over-union (mIoU) score over all classes on the test set; and S (S > 0), which measures model complexity, defined as the number of model parameters expressed as model size in MB. More precisely, model size in MB will be computed as follows: S = number of learned model parameters x 4 bytes (float32) / (1024 * 1024)
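As a sketch of the model-size term above, the conversion from parameter count to MB can be written as follows (the function name and the example parameter count are illustrative, not part of the challenge code):

```python
def model_size_mb(num_params):
    # Each learned parameter is stored as a 4-byte float32;
    # 1 MB = 1024 * 1024 bytes, matching the formula S above.
    return num_params * 4 / (1024 * 1024)

# e.g. a hypothetical model with 1,000,000 parameters:
print(model_size_mb(1_000_000))  # about 3.81 MB
```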
Please find common submission errors and corresponding solutions below.
assert pred.shape == gt.shape
This happens when the number (or shape) of predicted segmentation masks differs from the ground truth. It usually means that the number of masks you generated does not match the ground truth. Please go back and check your results.
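Before uploading, you can reproduce the server-side check locally. A minimal sketch, assuming your predictions and the ground truth are stacked into arrays of shape (num_images, height, width) (the sizes below are placeholders):

```python
import numpy as np

# Hypothetical mask stacks; replace with your actual predictions and labels.
gt = np.zeros((5, 640, 400), dtype=np.uint8)
pred = np.zeros((5, 640, 400), dtype=np.uint8)

# The evaluation server asserts pred.shape == gt.shape, so check it first:
assert pred.shape == gt.shape, (
    f"Generated {pred.shape[0]} masks of shape {pred.shape[1:]}, "
    f"but ground truth has {gt.shape[0]} masks of shape {gt.shape[1:]}"
)
```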
raise ValueError("Invalid submission zip!")
ValueError: Invalid submission zip!
This issue has been fixed on the server; please re-submit your results for evaluation.
- Challenge participation deadline: September 15, 2019
- Clarifications on metrics below:
GA: Global Pixel Accuracy
CA: Mean Class Accuracy (per-class pixel accuracy, averaged over all classes)
- Back: Background (non-eye part of peri-ocular region)
- Sclera: Sclera
- Iris: Iris
- Pupil: Pupil
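To make GA and CA concrete, here is a minimal sketch of both on flattened label maps (the function names and the tiny example arrays are assumptions, not the official evaluation code):

```python
import numpy as np

def global_accuracy(pred, gt):
    # GA: fraction of all pixels labelled correctly.
    return np.mean(pred == gt)

def mean_class_accuracy(pred, gt):
    # CA: pixel accuracy computed per class, then averaged over
    # the classes present in the ground truth.
    accs = [np.mean(pred[gt == c] == c) for c in np.unique(gt)]
    return np.mean(accs)

# Hypothetical flattened maps: 0=Back, 1=Sclera, 2=Iris, 3=Pupil.
gt = np.array([0, 0, 1, 2, 3, 3])
pred = np.array([0, 1, 1, 2, 3, 0])

ga = global_accuracy(pred, gt)
ca = mean_class_accuracy(pred, gt)
```

Note that GA weights every pixel equally, so it is dominated by large classes such as the background, while CA gives each class equal weight regardless of its pixel count.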
Precision: Computed using sklearn.metrics.precision_score(gt, pred, average='weighted')
Recall: Computed using sklearn.metrics.recall_score(gt, pred, average='weighted')
F1: Computed using sklearn.metrics.f1_score(gt, pred, average='weighted')
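A short usage sketch of the three scikit-learn calls above, on hypothetical flattened label maps (note that scikit-learn expects the ground truth as the first argument, and that average='weighted' must be passed by keyword):

```python
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical flattened maps: 0=Back, 1=Sclera, 2=Iris, 3=Pupil.
gt = np.array([0, 0, 1, 2, 3, 3])
pred = np.array([0, 1, 1, 2, 3, 0])

p = precision_score(gt, pred, average='weighted')
r = recall_score(gt, pred, average='weighted')
f = f1_score(gt, pred, average='weighted')
```

With average='weighted', each per-class score is weighted by that class's support in the ground truth, so weighted recall coincides with overall pixel accuracy.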
IoU: Computed using the function below
import numpy as np

def compute_mean_iou(flat_pred, flat_label):
    '''
    Compute mean intersection over union (IoU) over all classes.
    :param flat_pred: flattened prediction matrix
    :param flat_label: flattened label matrix
    :return: mean IoU
    '''
    unique_labels = np.unique(flat_label)
    num_unique_labels = len(unique_labels)
    Intersect = np.zeros(num_unique_labels)
    Union = np.zeros(num_unique_labels)
    for index, val in enumerate(unique_labels):
        pred_i = flat_pred == val
        label_i = flat_label == val
        Intersect[index] = float(np.sum(np.logical_and(label_i, pred_i)))
        Union[index] = float(np.sum(np.logical_or(label_i, pred_i)))
    mean_iou = np.mean(Intersect / Union)
    return mean_iou
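As a quick sanity check of the IoU computation, here is a self-contained demo on a tiny hypothetical example (the arrays are assumptions; the helper below reimplements the same per-class intersection/union logic as the reference function):

```python
import numpy as np

def compute_mean_iou(flat_pred, flat_label):
    # Same logic as the reference implementation: per-class IoU,
    # averaged over the classes present in the label map.
    unique_labels = np.unique(flat_label)
    intersect = np.array(
        [np.sum((flat_label == v) & (flat_pred == v)) for v in unique_labels],
        dtype=float)
    union = np.array(
        [np.sum((flat_label == v) | (flat_pred == v)) for v in unique_labels],
        dtype=float)
    return np.mean(intersect / union)

gt = np.array([0, 0, 1, 1])
pred = np.array([0, 1, 1, 1])
miou = compute_mean_iou(pred, gt)
# class 0: intersection 1, union 2 -> 0.5
# class 1: intersection 2, union 3 -> 2/3
# mean IoU = (0.5 + 2/3) / 2 = 7/12
```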