
Optimizing XGBoost Hyperparameters with Optuna

钱魏Way

In earlier posts, I covered decision tree models, XGBoost, and the Bayesian optimization tool Optuna separately; in practice, combining them still raises a few questions. This article walks through using Optuna to tune XGBoost.

XGBoost Objective Functions

XGBoost provides a number of built-in objective functions for different kinds of problems. Some common ones are:

  • Regression:
    • reg:squarederror: squared-error loss (least squares); the default objective for regression.
    • reg:squaredlogerror: squared log error, i.e., the squared difference between log(prediction + 1) and log(label + 1).
    • reg:logistic: logistic regression.
    • reg:pseudohubererror: Pseudo-Huber loss, a twice-differentiable approximation of absolute error.
  • Binary classification:
    • binary:logistic: logistic regression, outputting probabilities.
    • binary:logitraw: logistic regression, outputting raw (untransformed) scores.
    • binary:hinge: hinge loss, the loss function used by support vector machines.
  • Multi-class classification:
    • multi:softmax: multi-class classification with softmax; requires num_class (the number of classes).
    • multi:softprob: like softmax, but returns the predicted probability for each class.
  • Ranking and other tasks:
    • rank:pairwise: pairwise loss for learning-to-rank tasks.
    • rank:ndcg: ranking with Normalized Discounted Cumulative Gain (NDCG) as the metric.
    • rank:map: ranking with Mean Average Precision (MAP) as the metric.

These cover the common cases. XGBoost also supports custom objective functions, so you can define your own to fit your needs; a minimal sketch is shown below.
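
The hook for a custom objective is the obj argument of xgb.train: the function receives the raw predictions and the DMatrix and must return the gradient and hessian of the loss. The following is a minimal sketch; the squared-error objective and the breast cancer dataset are illustrative choices, not part of the original article.

import numpy as np
import sklearn.datasets
import xgboost as xgb

data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
dtrain = xgb.DMatrix(data, label=target)

def squared_error_obj(preds, dtrain):
    # Gradient and hessian of 1/2 * (pred - label)^2 with respect to the raw prediction.
    labels = dtrain.get_label()
    grad = preds - labels
    hess = np.ones_like(preds)
    return grad, hess

# Pass the custom objective via the obj argument instead of setting "objective" in params.
bst = xgb.train({"max_depth": 3, "eta": 0.3}, dtrain, num_boost_round=10, obj=squared_error_obj)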

Core XGBoost Hyperparameters

XGBoost exposes many tunable hyperparameters. Some of the most important ones are listed below, followed by a small example of how they map onto an xgb.train call:

  • learning_rate (or eta): the shrinkage factor applied to each tree's contribution, used to slow down learning and reduce overfitting. Values typically lie between 0 and 1.
  • max_depth: the maximum depth of each tree. Larger values make the model more complex and can improve fit, but may lead to overfitting.
  • min_child_weight: the minimum sum of instance weights (hessian) required in a child node. It is similar in spirit to a minimum-samples-per-leaf constraint in GBM, but not identical, since it limits the sum of weights rather than the number of samples. Used to control overfitting.
  • gamma (or min_split_loss): the minimum loss reduction required to make a further split on a leaf node. The larger gamma is, the more conservative the algorithm becomes.
  • subsample: the fraction of training instances sampled for each tree. It helps prevent overfitting, but setting it too low may cause underfitting.
  • colsample_bytree: the fraction of features sampled for each tree.
  • lambda (or reg_lambda): the L2 regularization weight. Increasing it makes the model more conservative.
  • alpha (or reg_alpha): the L1 regularization weight. Increasing it makes the model more conservative.
  • scale_pos_weight: balances the weights of positive and negative classes in imbalanced problems (a common choice is sum(negative) / sum(positive)), which can also help convergence.
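
Before handing these knobs to Optuna, here is a minimal baseline sketch of how they are passed to xgb.train; the fixed values and the breast cancer dataset are illustrative assumptions, not tuned recommendations.

import sklearn.datasets
import xgboost as xgb

data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
dtrain = xgb.DMatrix(data, label=target)

params = {
    "objective": "binary:logistic",
    "eta": 0.1,                 # learning_rate
    "max_depth": 6,
    "min_child_weight": 1,
    "gamma": 0.0,               # min_split_loss
    "subsample": 0.8,
    "colsample_bytree": 0.8,
    "lambda": 1.0,              # reg_lambda (L2)
    "alpha": 0.0,               # reg_alpha (L1)
    "scale_pos_weight": 1.0,
}
bst = xgb.train(params, dtrain, num_boost_round=100)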

Code example from the official Optuna examples:

"""
Optuna example that optimizes a classifier configuration for cancer dataset
using XGBoost.

In this example, we optimize the validation accuracy of cancer detection
using XGBoost. We optimize both the choice of booster model and its
hyperparameters.

"""

import numpy as np
import optuna

import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import train_test_split
import xgboost as xgb


def objective(trial):
    (data, target) = sklearn.datasets.load_breast_cancer(return_X_y=True)
    train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25)
    dtrain = xgb.DMatrix(train_x, label=train_y)
    dvalid = xgb.DMatrix(valid_x, label=valid_y)

    param = {
        "verbosity": 0,
        "objective": "binary:logistic",
        # use exact for small dataset.
        "tree_method": "exact",
        # defines booster, gblinear for linear functions.
        "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]),
        # L2 regularization weight.
        "lambda": trial.suggest_float("lambda", 1e-8, 1.0, log=True),
        # L1 regularization weight.
        "alpha": trial.suggest_float("alpha", 1e-8, 1.0, log=True),
        # sampling ratio for training data.
        "subsample": trial.suggest_float("subsample", 0.2, 1.0),
        # sampling according to each tree.
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
    }

    if param["booster"] in ["gbtree", "dart"]:
        # maximum depth of the tree, signifies complexity of the tree.
        param["max_depth"] = trial.suggest_int("max_depth", 3, 9, step=2)
        # minimum child weight, larger the term more conservative the tree.
        param["min_child_weight"] = trial.suggest_int("min_child_weight", 2, 10)
        param["eta"] = trial.suggest_float("eta", 1e-8, 1.0, log=True)
        # defines how selective algorithm is.
        param["gamma"] = trial.suggest_float("gamma", 1e-8, 1.0, log=True)
        param["grow_policy"] = trial.suggest_categorical("grow_policy", ["depthwise", "lossguide"])

    if param["booster"] == "dart":
        param["sample_type"] = trial.suggest_categorical("sample_type", ["uniform", "weighted"])
        param["normalize_type"] = trial.suggest_categorical("normalize_type", ["tree", "forest"])
        param["rate_drop"] = trial.suggest_float("rate_drop", 1e-8, 1.0, log=True)
        param["skip_drop"] = trial.suggest_float("skip_drop", 1e-8, 1.0, log=True)

    bst = xgb.train(param, dtrain)
    preds = bst.predict(dvalid)
    pred_labels = np.rint(preds)
    accuracy = sklearn.metrics.accuracy_score(valid_y, pred_labels)
    return accuracy


if __name__ == "__main__":
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=100, timeout=600)

    print("Number of finished trials: ", len(study.trials))
    print("Best trial:")
    trial = study.best_trial

    print("  Value: {}".format(trial.value))
    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))

Refining the code above with xgb.cv() for cross-validation:

import optuna
import sklearn.datasets
import xgboost as xgb

def objective(trial):
    (data, target) = sklearn.datasets.load_breast_cancer(return_X_y=True)
    dtrain = xgb.DMatrix(data, label=target)

    param = {
        "verbosity": 0,
        "objective": "binary:logistic",
        "tree_method": "exact",
        "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]),
        "lambda": trial.suggest_float("lambda", 1e-8, 1.0, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 1.0, log=True),
        "subsample": trial.suggest_float("subsample", 0.2, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
    }

    if param["booster"] in ["gbtree", "dart"]:
        param["max_depth"] = trial.suggest_int("max_depth", 3, 9, step=2)
        param["min_child_weight"] = trial.suggest_int("min_child_weight", 2, 10)
        param["eta"] = trial.suggest_float("eta", 1e-8, 1.0, log=True)
        param["gamma"] = trial.suggest_float("gamma", 1e-8, 1.0, log=True)
        param["grow_policy"] = trial.suggest_categorical("grow_policy", ["depthwise", "lossguide"])

    if param["booster"] == "dart":
        param["sample_type"] = trial.suggest_categorical("sample_type", ["uniform", "weighted"])
        param["normalize_type"] = trial.suggest_categorical("normalize_type", ["tree", "forest"])
        param["rate_drop"] = trial.suggest_float("rate_drop", 1e-8, 1.0, log=True)
        param["skip_drop"] = trial.suggest_float("skip_drop", 1e-8, 1.0, log=True)

    bst = xgb.cv(param, dtrain, nfold=5, metrics="error")

    return bst["test-error-mean"].min()

if __name__ == "__main__":
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=100, timeout=600)

    print("Number of finished trials: ", len(study.trials))
    print("Best trial:")
    trial = study.best_trial

    print("  Value: {}".format(trial.value))
    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))

Implementing cross-validation with StratifiedKFold():

import numpy as np
import optuna
import sklearn.datasets
import sklearn.metrics
from sklearn.model_selection import StratifiedKFold
import xgboost as xgb

def objective(trial):
    (data, target) = sklearn.datasets.load_breast_cancer(return_X_y=True)
    param = {
        "verbosity": 0,
        "objective": "binary:logistic",
        "tree_method": "exact",
        "booster": trial.suggest_categorical("booster", ["gbtree", "gblinear", "dart"]),
        "lambda": trial.suggest_float("lambda", 1e-8, 1.0, log=True),
        "alpha": trial.suggest_float("alpha", 1e-8, 1.0, log=True),
        "subsample": trial.suggest_float("subsample", 0.2, 1.0),
        "colsample_bytree": trial.suggest_float("colsample_bytree", 0.2, 1.0),
    }

    if param["booster"] in ["gbtree", "dart"]:
        param["max_depth"] = trial.suggest_int("max_depth", 3, 9, step=2)
        param["min_child_weight"] = trial.suggest_int("min_child_weight", 2, 10)
        param["eta"] = trial.suggest_float("eta", 1e-8, 1.0, log=True)
        param["gamma"] = trial.suggest_float("gamma", 1e-8, 1.0, log=True)
        param["grow_policy"] = trial.suggest_categorical("grow_policy", ["depthwise", "lossguide"])

    if param["booster"] == "dart":
        param["sample_type"] = trial.suggest_categorical("sample_type", ["uniform", "weighted"])
        param["normalize_type"] = trial.suggest_categorical("normalize_type", ["tree", "forest"])
        param["rate_drop"] = trial.suggest_float("rate_drop", 1e-8, 1.0, log=True)
        param["skip_drop"] = trial.suggest_float("skip_drop", 1e-8, 1.0, log=True)

    # StratifiedKFold cross validation
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

    accuracy = []
    for train_idx, valid_idx in skf.split(data, target):
        dtrain = xgb.DMatrix(data[train_idx], label=target[train_idx])
        dvalid = xgb.DMatrix(data[valid_idx], label=target[valid_idx])

        bst = xgb.train(param, dtrain)
        preds = bst.predict(dvalid)
        pred_labels = np.rint(preds)
        accuracy.append(sklearn.metrics.accuracy_score(target[valid_idx], pred_labels))

    return np.average(accuracy)

if __name__ == "__main__":
    study = optuna.create_study(direction="maximize")
    study.optimize(objective, n_trials=100, timeout=600)

    print("Number of finished trials: ", len(study.trials))
    print("Best trial:")
    trial = study.best_trial

    print("  Value: {}".format(trial.value))
    print("  Params: ")
    for key, value in trial.params.items():
        print("    {}: {}".format(key, value))

Using optuna.integration.XGBoostPruningCallback

optuna.integration.XGBoostPruningCallback is a class built into Optuna that stops an unpromising XGBoost training run early during optimization (a mechanism Optuna calls pruning).

In machine learning, early stopping is a technique for preventing overfitting. While training a model, we usually evaluate it on a validation set; once the validation performance stops improving (or starts to degrade), we can terminate training early instead of letting the model overfit the training data. XGBoost supports this natively, as the sketch below shows.
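
A minimal sketch of XGBoost's native early stopping via the early_stopping_rounds argument of xgb.train; the dataset, the split, and the value of 10 rounds are illustrative assumptions.

import sklearn.datasets
from sklearn.model_selection import train_test_split
import xgboost as xgb

data, target = sklearn.datasets.load_breast_cancer(return_X_y=True)
train_x, valid_x, train_y, valid_y = train_test_split(data, target, test_size=0.25, random_state=0)
dtrain = xgb.DMatrix(train_x, label=train_y)
dvalid = xgb.DMatrix(valid_x, label=valid_y)

params = {"objective": "binary:logistic", "eval_metric": "logloss", "eta": 0.1}

# Stop boosting if the validation logloss has not improved for 10 consecutive rounds.
bst = xgb.train(
    params,
    dtrain,
    num_boost_round=1000,
    evals=[(dvalid, "validation")],
    early_stopping_rounds=10,
    verbose_eval=False,
)
print("best iteration:", bst.best_iteration)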

XGBoostPruningCallback applies this idea at the level of Optuna trials. After each boosting round it reports the chosen evaluation metric to Optuna, and the study's pruner (for example, MedianPruner) compares that intermediate value with those of earlier trials at the same step; if the current trial looks unpromising, it is terminated early.

Using the class is straightforward: when calling xgb.train or xgb.cv, simply add an XGBoostPruningCallback instance to the callbacks argument.

Here is an example:

import numpy as np
import optuna
import sklearn.metrics
import xgboost as xgb

def objective(trial):
    # X_train, y_train, X_valid, y_valid are assumed to be defined beforehand
    # (for example via sklearn's train_test_split, as in the earlier examples).
    dtrain = xgb.DMatrix(X_train, label=y_train)
    dvalid = xgb.DMatrix(X_valid, label=y_valid)

    param = {
        'objective': 'binary:logistic',
        # Monitor AUC so that "larger is better" matches the study's maximize direction.
        'eval_metric': 'auc',
        'max_depth': trial.suggest_int('max_depth', 1, 9),
        'eta': trial.suggest_float('eta', 1e-8, 1.0, log=True),
        'gamma': trial.suggest_float('gamma', 1e-8, 1.0, log=True),
        'subsample': trial.suggest_float('subsample', 0.2, 1.0, log=True),
    }

    # Add a callback for pruning: it reports 'validation-auc' to Optuna after each round.
    pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'validation-auc')
    bst = xgb.train(param, dtrain, num_boost_round=100,
                    evals=[(dvalid, 'validation')], callbacks=[pruning_callback])

    preds = bst.predict(dvalid)
    pred_labels = np.rint(preds)
    accuracy = sklearn.metrics.accuracy_score(y_valid, pred_labels)
    return accuracy

study = optuna.create_study(pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction='maximize')
study.optimize(objective, n_trials=100)

In this example we added an XGBoostPruningCallback to the xgb.train call. After every boosting round the callback reports the validation metric to Optuna; if the pruner decides the trial is unpromising, Optuna raises a TrialPruned exception and the trial is stopped early.

To use xgb.cv() instead, the code can be modified as follows:

import optuna
import xgboost as xgb

def objective(trial):
    # X, y are assumed to be the full feature matrix and labels, defined beforehand.
    dtrain = xgb.DMatrix(X, label=y)

    param = {
        'objective': 'binary:logistic',
        'eval_metric': 'logloss',
        'max_depth': trial.suggest_int('max_depth', 1, 9),
        'eta': trial.suggest_float('eta', 1e-8, 1.0, log=True),
        'gamma': trial.suggest_float('gamma', 1e-8, 1.0, log=True),
        'subsample': trial.suggest_float('subsample', 0.2, 1.0, log=True),
    }

    # Add a callback for pruning.
    pruning_callback = optuna.integration.XGBoostPruningCallback(trial, 'test-logloss-mean')
    cv_results = xgb.cv(param, dtrain, nfold=5, callbacks=[pruning_callback], num_boost_round=100)
    return cv_results['test-logloss-mean'].min()

study = optuna.create_study(pruner=optuna.pruners.MedianPruner(n_warmup_steps=10), direction='minimize')
study.optimize(objective, n_trials=100)

In this version the metric monitored by XGBoostPruningCallback is changed to 'test-logloss-mean', which is the name of the evaluation column that xgb.cv() reports during cross-validation. Because objective() now returns the minimum test logloss, the direction argument of optuna.create_study() must be set to 'minimize'.
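
Once the study finishes, the best configuration can be used to fit a final model. The sketch below continues the xgb.cv() example above; X, y and num_boost_round=100 are illustrative assumptions, not values from the original post.

# Retrain on the full data with the best hyperparameters found by the study.
best_params = dict(study.best_trial.params)
best_params.update({'objective': 'binary:logistic', 'eval_metric': 'logloss'})
final_model = xgb.train(best_params, xgb.DMatrix(X, label=y), num_boost_round=100)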

