Objectives#
- class legateboost.BaseObjective#
The base class for objective functions.
Subclass it to implement custom objectives; a minimal sketch follows this class entry.
- abstract gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the objective function.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the objective function.
- abstract initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- abstract metric() BaseMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- transform(pred: ndarray) ndarray #
Transforms the raw predictions, e.g. applying the sigmoid for log loss.
- Parameters:
pred – The raw predictions.
- Returns:
The transformed predictions.
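A minimal sketch of a custom objective built on this interface follows. It assumes NumPy-compatible arrays, a single-output regression target, and that MSEMetric is exposed under legateboost.metrics; the array shapes and validation the library expects are assumptions, so treat this as an illustration rather than a reference implementation.

    import numpy as np  # assumption: the library's arrays behave like NumPy arrays
    import legateboost as lb

    class PseudoHuberObjective(lb.BaseObjective):
        """Illustrative pseudo-Huber objective with the slope parameter fixed to 1."""

        def gradient(self, y, pred):
            # Derivatives of sqrt(1 + r^2) - 1 with respect to pred, where r = pred - y.
            r = pred - y.reshape(pred.shape)
            s = np.sqrt(1.0 + r * r)
            return r / s, 1.0 / (s * s * s)

        def transform(self, pred):
            # Identity link: the raw model output is already on the target scale.
            return pred

        def initialise_prediction(self, y, w, boost_from_average):
            # Base score for a single example: the weighted mean of the targets, or zero.
            if boost_from_average:
                return np.array([np.average(y, weights=w)])
            return np.zeros(1)

        def metric(self):
            # Assumption: MSEMetric is available as legateboost.metrics.MSEMetric.
            return lb.metrics.MSEMetric()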
- class legateboost.SquaredErrorObjective#
The Squared Error objective function for regression problems.
This objective function computes the mean squared error between the predicted and true labels.
\(L(y_i, p_i) = \frac{1}{2} (y_i - p_i)^2\)
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the squared error objective function.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the squared error objective function.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- metric() MSEMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- transform(pred: ndarray) ndarray #
Transforms the raw predictions, e.g. applying the sigmoid for log loss.
- Parameters:
pred – The raw predictions.
- Returns:
The transformed predictions.
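For this loss, the gradient and hessian returned by gradient() take the familiar closed form:
\(\frac{\partial L}{\partial p_i} = p_i - y_i, \qquad \frac{\partial^2 L}{\partial p_i^2} = 1\)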
- class legateboost.NormalObjective#
The normal distribution objective function for regression problems.
This objective fits both mean and variance parameters, where SquaredErrorObjective only fits the mean. The objective minimised is the negative log likelihood of the normal distribution.
\(L(y_i, p_i) = -\log\left(\frac{1}{\sqrt{2\pi \exp(2 p_{i, 1})}} \exp\left(-\frac{(y_i - p_{i, 0})^2}{2 \exp(2 p_{i, 1})}\right)\right)\)
where \(p_{i, 0}\) is the mean and \(p_{i, 1}\) is the log standard deviation; an expanded form is given at the end of this class entry.
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the normal negative log likelihood objective.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the normal negative log likelihood objective.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- mean(param: ndarray) ndarray #
Return the mean for the Normal distribution.
- metric() NormalLLMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- transform(pred: ndarray) ndarray #
Transforms the raw predictions, e.g. applying the sigmoid for log loss.
- Parameters:
pred – The raw predictions.
- Returns:
The transformed predictions.
- var(param: ndarray) ndarray #
Return the variance for the Normal distribution.
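Expanding the negative log likelihood above makes the role of the two predicted parameters explicit:
\(L(y_i, p_i) = \frac{1}{2}\log(2\pi) + p_{i, 1} + \frac{(y_i - p_{i, 0})^2}{2 \exp(2 p_{i, 1})}\)
so the first output column acts as the mean and the second, through its exponential, as the standard deviation; the loss penalises both large residuals and an overly small predicted variance.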
- class legateboost.GammaDevianceObjective#
Gamma regression with the log link function. For the expression of the deviance, see legateboost.metrics.GammaDevianceMetric. The response variable \(y\) should contain positive values.
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the gamma deviance objective.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the gamma deviance objective.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- metric() GammaDevianceMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- transform(pred: ndarray) ndarray #
Inverse log link, i.e. exponentiates the raw predictions.
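For intuition only, one common form of the unit deviance and its derivatives with respect to the raw score under the log link is sketched below; the exact expression (and any constant factors) used by the library is defined by legateboost.metrics.GammaDevianceMetric, so treat this as an assumption-laden sketch rather than the implemented formula:
\(d(y_i, \mu_i) = 2\left(\log\frac{\mu_i}{y_i} + \frac{y_i}{\mu_i} - 1\right), \quad \mu_i = \exp(p_i)\)
\(\frac{\partial d}{\partial p_i} = 2\left(1 - y_i e^{-p_i}\right), \qquad \frac{\partial^2 d}{\partial p_i^2} = 2\, y_i e^{-p_i}\)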
- class legateboost.GammaObjective#
Regression with the gamma (\(\Gamma\)) distribution, using the shape-scale parameterization.
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the objective function.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the objective function.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- mean(param: ndarray) ndarray #
Return the mean for the Gamma distribution.
- metric() GammaLLMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- scale(param: ndarray) ndarray #
Return the scale parameter for the Gamma distribution.
- shape(param: ndarray) ndarray #
Return the shape parameter for the Gamma distribution.
- transform(pred: ndarray) ndarray #
Transforms the raw predictions, e.g. applying the sigmoid for log loss.
- Parameters:
pred – The raw predictions.
- Returns:
The transformed predictions.
- var(param: ndarray) ndarray #
Return the variance for the Gamma distribution.
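In the shape-scale parameterization, the helpers above correspond to the standard identities: with shape \(k\) and scale \(\theta\),
\(\mathbb{E}[y] = k\theta, \qquad \mathrm{Var}[y] = k\theta^2\)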
- class legateboost.QuantileObjective(quantiles: ndarray = array([0.25, 0.5, 0.75]))#
Minimises the quantile loss, otherwise known as check loss or pinball loss.
\(L(y_i, p_i) = \frac{1}{k}\sum_{j=1}^{k} (q_j - \mathbb{1})(y_i - p_{i, j})\)
where \(\mathbb{1} = 1\) if \(y_i - p_{i, j} \leq 0\), \(\mathbb{1} = 0\) otherwise, and \(k\) is the number of quantiles.
This objective function is non-smooth and can therefore converge significantly more slowly than other objectives.
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the quantile loss objective.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the quantile loss objective.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- metric() BaseMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
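A minimal usage sketch with custom quantiles follows. It assumes the estimator is legateboost.LBRegressor, that its objective parameter accepts an objective instance, and that predictions then carry one column per requested quantile; those estimator details are assumptions not covered by this section.

    import numpy as np
    import legateboost as lb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X[:, 0] + rng.normal(scale=0.5, size=200)

    # Fit the 10th, 50th and 90th percentiles in a single model.
    obj = lb.QuantileObjective(quantiles=np.array([0.1, 0.5, 0.9]))
    # Assumption: LBRegressor accepts an objective instance via `objective`.
    model = lb.LBRegressor(objective=obj, n_estimators=50).fit(X, y)
    pred = model.predict(X)  # assumed shape: (n_samples, 3), one column per quantile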
- class legateboost.LogLossObjective#
The Log Loss objective function for binary and multi-class classification problems.
This objective function computes the log loss between the predicted probabilities and the true labels. For the binary case:
\(L(y_i, p_i) = -y_i \log(p_i) - (1 - y_i) \log(1 - p_i)\)
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the log loss objective.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the log loss objective.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- metric() LogLossMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- transform(pred: ndarray) ndarray #
Transforms the raw predictions, e.g. applying the sigmoid for log loss.
- Parameters:
pred – The raw predictions.
- Returns:
The transformed predictions.
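With the sigmoid transform \(p_i = 1 / (1 + \exp(-f_i))\) applied to the raw score \(f_i\), the binary log loss above has the standard derivatives with respect to \(f_i\):
\(\frac{\partial L}{\partial f_i} = p_i - y_i, \qquad \frac{\partial^2 L}{\partial f_i^2} = p_i (1 - p_i)\)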
- class legateboost.ExponentialObjective#
Exponential loss objective function for binary and multi-class classification, equivalent to the multi-class exponential loss used by AdaBoost [1].
Defined as:
\(L(y_i, p_i) = \exp\left(-\frac{1}{K} y_i^T p_i\right)\)
where \(K\) is the number of classes, and \(y_{i,k} = 1\) if \(k\) is the label and \(y_{i,k} = -1/(K-1)\) otherwise.
References
[1] Hastie, Trevor, et al. “Multi-class adaboost.” Statistics and its Interface 2.3 (2009): 349-360.
- gradient(y: ndarray, pred: ndarray) Tuple[ndarray, ndarray] #
Computes the functional gradient and hessian of the exponential loss objective.
- Parameters:
y – The true labels.
pred – The predicted labels.
- Returns:
The functional gradient and hessian of the exponential loss objective.
- initialise_prediction(y: ndarray, w: ndarray, boost_from_average: bool) ndarray #
Initializes the base score of the model. May also validate labels.
- Parameters:
y – The target values.
w – The sample weights.
boost_from_average (bool) – Whether to initialize the predictions from the average of the target values.
- Returns:
The initial predictions for a single example.
- metric() ExponentialMetric #
Returns the default error metric for the objective function.
- Returns:
The default error metric for the objective function.
- transform(pred: ndarray) ndarray #
Transforms the raw predictions, e.g. applying the sigmoid for log loss.
- Parameters:
pred – The raw predictions.
- Returns:
The transformed predictions.
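A usage sketch for a binary problem follows. It assumes the classifier is legateboost.LBClassifier, that it accepts an objective instance via its objective parameter, and that it exposes the usual predict / predict_proba methods; these estimator details are assumptions, not part of this section.

    import numpy as np
    import legateboost as lb

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] + 0.2 * rng.normal(size=200) > 0).astype(np.int64)

    # Assumption: LBClassifier accepts an objective instance via `objective`.
    model = lb.LBClassifier(objective=lb.ExponentialObjective(), n_estimators=50)
    model.fit(X, y)
    proba = model.predict_proba(X)  # assumed shape: (n_samples, 2)
    labels = model.predict(X)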