Metrics
- class legateboost.BaseMetric
The base class for metrics.
Implement this class to create custom metrics; a minimal sketch follows this entry.
- abstract metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- static minimize() → bool
Returns True if the metric should be minimized, False otherwise.
- Returns:
True if the metric should be minimized, False otherwise.
- abstract name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
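As an illustration of the interface above, a custom metric can be defined by subclassing BaseMetric. The sketch below is not part of the library: the class name is hypothetical, plain NumPy arrays are assumed, and it implements a weighted mean absolute error purely for illustration.

```python
import numpy as np
import legateboost as lb


class WeightedMAEMetric(lb.BaseMetric):
    """Hypothetical weighted mean absolute error metric (illustration only)."""

    def metric(self, y: np.ndarray, pred: np.ndarray, w: np.ndarray) -> float:
        # Align shapes in case predictions carry a trailing output dimension,
        # then take the weighted mean of the absolute residuals.
        residual = np.abs(y - pred.reshape(y.shape))
        return float(np.sum(residual * w) / np.sum(w))

    def name(self) -> str:
        return "weighted_mae"

    @staticmethod
    def minimize() -> bool:
        # Smaller is better for an error metric.
        return True
```

How such a metric is then passed to an estimator depends on the estimator API and is not shown here.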
- class legateboost.MSEMetric
Class for computing the mean squared error (MSE) metric between the true labels and predicted labels; a short usage example follows this entry.
\(MSE(y, p) = \frac{1}{n} \sum_{i=1}^{n} (y_i - p_i)^2\)
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
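As a quick check of the formula, the metric can be called directly on small arrays. The values below are illustrative, and the 1-D prediction shape is an assumption; with unit weights the result reduces to the plain mean of the squared residuals.

```python
import numpy as np
import legateboost as lb

y = np.array([1.0, 2.0, 3.0])
pred = np.array([1.5, 2.0, 2.5])
w = np.ones_like(y)

# Squared residuals are 0.25, 0.0, 0.25, so the expected value is about 0.1667.
print(lb.MSEMetric().metric(y, pred, w))
```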
- class legateboost.NormalLLMetric
The mean negative log likelihood of the labels, given predicted mean and log standard deviation parameters.
\(L(y_i, p_i) = -\log\left(\frac{1}{\sqrt{2\pi}\,\exp(p_{i, 1})} \exp\left(-\frac{(y_i - p_{i, 0})^2}{2 \exp(2 p_{i, 1})}\right)\right)\)
where \(p_{i, 0}\) is the predicted mean and \(p_{i, 1}\) is the predicted log standard deviation (an example of the expected prediction layout follows this entry).
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
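For this metric the prediction array carries two values per sample, the mean and the log standard deviation, matching the formula above. A small sketch follows; the shapes and values are illustrative, not a library guarantee.

```python
import numpy as np
import legateboost as lb

y = np.array([0.5, 1.0])
# Column 0: predicted mean, column 1: predicted log standard deviation.
pred = np.array([[0.4, 0.0],
                 [1.2, -0.5]])
w = np.ones_like(y)

# Mean negative log likelihood of y under the predicted normal distributions.
print(lb.NormalLLMetric().metric(y, pred, w))
```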
- class legateboost.NormalCRPSMetric
Continuous Ranked Probability Score for the normal distribution. Can be used with NormalObjective.
References
- [1] Tilmann Gneiting, Adrian E. Raftery (2007). Strictly Proper Scoring Rules, Prediction, and Estimation.
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
- class legateboost.GammaDevianceMetric
The mean gamma deviance (a NumPy sketch of the formula follows this entry).
\(D = 2\,\mathbb{E}\!\left[\left(\ln\frac{p_i}{y_i} + \frac{y_i}{p_i} - 1\right) w_i\right]\), where \(\mathbb{E}[\cdot]\) denotes the mean over the samples.
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
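The deviance above is straightforward to reproduce in NumPy. The sketch below normalises by the total weight; that normalisation is an assumption about the definition of the mean rather than a statement about the library's exact implementation.

```python
import numpy as np

y = np.array([1.0, 2.0, 4.0])
pred = np.array([1.5, 2.0, 3.0])
w = np.ones_like(y)

# Per-sample gamma deviance terms, weighted and averaged (normalisation assumed).
deviance = 2.0 * np.sum(w * (np.log(pred / y) + y / pred - 1.0)) / np.sum(w)
print(deviance)
```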
- class legateboost.GammaLLMetric
The mean negative log likelihood of the labels, given the gamma distribution parameters predicted by the model.
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
- class legateboost.QuantileMetric(quantiles: ndarray = array([0.25, 0.5, 0.75]))
The quantile loss, also known as check loss or pinball loss; a short usage example follows this entry.
\(L(y, p) = \frac{1}{n}\sum_{i=1}^{n} \frac{1}{k}\sum_{j=1}^{k} \left(q_j - \mathbb{1}[y_i - p_{i, j} \leq 0]\right)(y_i - p_{i, j})\)
where \(\mathbb{1}[\cdot]\) equals 1 if its condition holds and 0 otherwise.
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
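The quantiles argument controls which quantiles are evaluated, and pred is expected to contain one column per requested quantile, matching the formula above. A small illustrative call (values are arbitrary):

```python
import numpy as np
import legateboost as lb

# Evaluate predictions for the 10th and 90th percentiles.
m = lb.QuantileMetric(quantiles=np.array([0.1, 0.9]))

y = np.array([1.0, 2.0])
# One column per quantile: [q=0.1 prediction, q=0.9 prediction].
pred = np.array([[0.8, 1.5],
                 [1.5, 2.5]])
w = np.ones_like(y)
print(m.metric(y, pred, w))
```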
- class legateboost.LogLossMetric
Class for computing the logarithmic loss (logloss) metric between the true labels and predicted probabilities; a short worked example follows this entry.
For binary classification:
\(logloss(y, p) = -\frac{1}{n} \sum_{i=1}^{n} [y_i \log(p_i) + (1 - y_i) \log(1 - p_i)]\)
For multi-class classification:
\(logloss(y, p) = -\frac{1}{n} \sum_{i=1}^{n} \sum_{j=1}^{k} y_{ij} \log(p_{ij})\)
where n is the number of samples, k is the number of classes, y is the true labels, and p is the predicted probabilities.
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.
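For the binary case, the example below treats pred as the predicted probability of the positive class; that shape convention is an assumption made for illustration, and the values are arbitrary.

```python
import numpy as np
import legateboost as lb

y = np.array([0.0, 1.0, 1.0])
pred = np.array([0.1, 0.8, 0.6])  # probability of the positive class
w = np.ones_like(y)

# By the binary formula: -(log(0.9) + log(0.8) + log(0.6)) / 3 is roughly 0.280.
print(lb.LogLossMetric().metric(y, pred, w))
```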
- class legateboost.ExponentialMetric
Class for computing the exponential loss metric.
\(E(y, p) = \sum_{i=1}^{n} \exp\left(-\frac{1}{K} y_i^T f_i\right)\), where \(K\) is the number of classes, \(y_{i,k} = 1\) if \(k\) is the label and \(y_{i,k} = -\frac{1}{K-1}\) otherwise, and \(f_{i,k} = \ln(p_{i, k})\,(K - 1)\) with \(p_{i, k}\) the predicted probability of class \(k\).
- metric(y: ndarray, pred: ndarray, w: ndarray) → float
Computes the metric between the true labels y and predicted labels pred, weighted by w.
- Parameters:
y – True labels.
pred – Predicted labels.
w – Weights for each sample.
- Returns:
The metric between the true labels y and predicted labels pred, weighted by w.
- name() → str
Returns the name of the metric as a string.
- Returns:
The name of the metric.