[core] prediction_set
- pyhealth.metrics.prediction_set.rejection_rate(y_pred)[source]
Rejection rate, defined as the proportion of samples whose prediction set size is not exactly 1.
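The metric above can be sketched in a few lines of NumPy. This is an illustrative reimplementation, not PyHealth's actual source; `y_pred` is assumed to be an (n_samples, n_classes) binary indicator array, as in the examples below.

```python
import numpy as np

def rejection_rate(y_pred: np.ndarray) -> float:
    """Fraction of samples whose prediction set size is not exactly 1.

    y_pred: (n_samples, n_classes) binary array; y_pred[i, k] == 1 means
    class k is in the prediction set of sample i.
    """
    set_sizes = y_pred.sum(axis=1)
    return float(np.mean(set_sizes != 1))
```

For example, with sets {0}, {0}, {0, 1}, {1}, only the third set has size != 1, so the rejection rate is 1/4 = 0.25.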
- pyhealth.metrics.prediction_set.miscoverage_ps(y_pred, y_true)[source]
Per-class miscoverage rates, computed over all samples of each true class (similar to a per-class recall).
Example
>>> y_pred = np.asarray([[1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
>>> y_true = np.asarray([1, 0, 1, 2])
>>> miscoverage_ps(y_pred, y_true)
array([0. , 0.5, 1. ])
Explanation: For class 0, only sample 1 has true label 0; its prediction set {0} contains the label, so the miscoverage is 0/1 = 0. For class 1, sample 0's prediction set {0} does not contain the label, while sample 2's prediction set {0, 1} does, so the miscoverage is 1/2 = 0.5. For class 2, the last sample's prediction set is {1} and the label is 2, so the miscoverage is 1/1 = 1.
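A minimal NumPy sketch of this per-class computation (illustrative only, not PyHealth's source):

```python
import numpy as np

def miscoverage_ps(y_pred: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    """For each class k, the fraction of samples with true label k whose
    prediction set does not contain k."""
    n_classes = y_pred.shape[1]
    rates = np.empty(n_classes)
    for k in range(n_classes):
        mask = y_true == k
        # A sample of class k is missed when its set excludes class k.
        rates[k] = np.mean(y_pred[mask, k] == 0)
    return rates
```

Running it on the arrays from the example above reproduces `array([0. , 0.5, 1. ])`.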
- pyhealth.metrics.prediction_set.error_ps(y_pred, y_true)[source]
Per-class miscoverage rates for unrejected samples, where a sample is rejected if its prediction set size is not exactly 1.
Example
>>> y_pred = np.asarray([[1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]])
>>> y_true = np.asarray([1, 0, 1, 2])
>>> error_ps(y_pred, y_true)
array([0., 1., 1.])
Explanation: For class 0, sample 1 is correct and not rejected, so the error is 0/1 = 0. For class 1, sample 0 is incorrect and not rejected, while sample 2 is rejected, so the error is 1/1 = 1. For class 2, the last sample is not rejected but its prediction set is {1}, so the error is 1/1 = 1.
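This is the same per-class computation restricted to unrejected samples; a hedged NumPy sketch (not PyHealth's source):

```python
import numpy as np

def error_ps(y_pred: np.ndarray, y_true: np.ndarray) -> np.ndarray:
    """Per-class error among unrejected samples (prediction set size == 1)."""
    unrejected = y_pred.sum(axis=1) == 1
    n_classes = y_pred.shape[1]
    rates = np.empty(n_classes)
    for k in range(n_classes):
        # Restrict to unrejected samples whose true label is k.
        mask = unrejected & (y_true == k)
        rates[k] = np.mean(y_pred[mask, k] == 0)
    return rates
```

On the example arrays above this yields `array([0., 1., 1.])`; note that a class with no unrejected samples would produce NaN under this sketch.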
- pyhealth.metrics.prediction_set.miscoverage_overall_ps(y_pred, y_true)[source]
Overall miscoverage rate: the fraction of samples whose prediction set does not contain the true label. Multiclass only.
Example
>>> y_pred = np.asarray([[1, 0, 0], [1, 0, 0], [1, 1, 0]])
>>> y_true = np.asarray([1, 0, 1])
>>> miscoverage_overall_ps(y_pred, y_true)
0.333333
Explanation: Sample 0's prediction set is {0} and the label is 1 (not covered). Sample 1's prediction set is {0} and the label is 0 (covered). Sample 2's prediction set is {0, 1} and the label is 1 (covered). Thus the miscoverage rate is 1/3.
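The overall rate reduces to one indexing operation; a minimal sketch under the same binary-indicator assumption (not PyHealth's source):

```python
import numpy as np

def miscoverage_overall_ps(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Fraction of samples whose prediction set excludes the true label."""
    # Look up, per sample, whether the true class is in the set.
    covered = y_pred[np.arange(len(y_true)), y_true] == 1
    return float(np.mean(~covered))
```

On the example above this returns 1/3 (0.3333...).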
- pyhealth.metrics.prediction_set.error_overall_ps(y_pred, y_true)[source]
Overall error rate among unrejected samples (those whose prediction set size is exactly 1).
Example
>>> y_pred = np.asarray([[1, 0, 0], [1, 0, 0], [1, 1, 0]])
>>> y_true = np.asarray([1, 0, 1])
>>> error_overall_ps(y_pred, y_true)
0.5
Explanation: Sample 0's prediction set is {0} and the label is 1, so it is an error (it is not rejected, as its prediction set has exactly one class). Sample 1 is not rejected and incurs no error. Sample 2 is rejected and thus excluded from the computation.
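Combining the rejection mask with the coverage lookup gives a short sketch of this metric as well (illustrative, not PyHealth's source):

```python
import numpy as np

def error_overall_ps(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """Error rate over samples whose prediction set size is exactly 1."""
    unrejected = y_pred.sum(axis=1) == 1
    covered = y_pred[np.arange(len(y_true)), y_true] == 1
    # Errors are unrejected samples whose singleton set misses the label.
    return float(np.mean(~covered[unrejected]))
```

On the example above, samples 0 and 1 are unrejected, sample 0 is wrong and sample 1 is right, so the result is 1/2 = 0.5.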