Releases: dmlc/xgboost

Release candidate of version 1.6.0

30 Mar 17:28
4bd5a33
Pre-release

Roadmap: #7726
Release note: #7746

1.5.2 Patch Release

17 Jan 15:58
742c19f

This is a patch release for compatibility with the latest dependencies, plus bug fixes.

  • [dask] Fix asyncio with the latest dask and distributed.
  • [R] Fix single-sample SHAP prediction.
  • [Python] Update the Python classifiers to indicate support for the latest Python versions.
  • [Python] Fix compatibility with the latest mypy and pylint.
  • Fix the indexing type for bitfields, which may affect missing values and categorical data.
  • Fix num_boosted_rounds for the linear model.
  • Fix early stopping with the linear model.

1.5.1 Patch Release

23 Nov 09:49

This is a patch release for compatibility with the latest dependencies and bug fixes. Also, all GPU-compatible binaries are built with CUDA 11.0.

  • [Python] Handle missing values in dataframes with category dtype. (#7331)
  • [R] Fix R CRAN failures related to prediction and some compiler warnings.
  • [JVM packages] Fix compatibility with the latest Spark. (#7438, #7376)
  • Support building with CTK 11.5. (#7379)
  • Check user input for the iteration range in inplace predict.
  • Handle the OMP_THREAD_LIMIT environment variable.
  • [doc] Fix broken links. (#7341)

Artifacts

You can verify the downloaded packages by running this in your Unix shell:

echo "<hash> <artifact>" | shasum -a 256 --check
3a6cc7526c0dff1186f01b53dcbac5c58f12781988400e2d340dda61ef8d14ca  xgboost_r_gpu_linux_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
6f74deb62776f1e2fd030e1fa08b93ba95b32ac69cc4096b4bcec3821dd0a480  xgboost_r_gpu_win64_afb9dfd4210e8b8db8fe03380f83b404b1721443.tar.gz
565dea0320ed4b6f807dbb92a8a57e86ec16db50eff9a3f405c651d1f53a259d  xgboost.tar.gz

Release 1.5.0 stable

17 Oct 17:08
584b45a

This release comes with many exciting new features and optimizations, along with some bug
fixes. We will describe the experimental categorical data support and the external memory
interface independently. Package-specific new features will be listed in respective
sections.

Development on categorical data support

In version 1.3, XGBoost introduced an experimental feature for handling categorical data
natively, without one-hot encoding. XGBoost can fit categorical splits in decision
trees. (Currently, the generated splits will be of form x \in {v}, where the input is
compared to a single category value. A future version of XGBoost will generate splits that
compare the input against a list of multiple category values.)

Most of the other features, including prediction, SHAP value computation, feature
importance, and model plotting, were revised to natively handle categorical splits. Also,
all Python interfaces, including the native interface with and without quantized DMatrix,
the scikit-learn interface, and the Dask interface, now accept categorical data with
support for a wide range of data structures, including numpy/cupy arrays and
cuDF/pandas/modin dataframes. In practice, the following are required for enabling
categorical data support during training:

  • Use the Python package.
  • Use gpu_hist to train the model.
  • Use the JSON model file format for saving the model.

Once the model is trained, it can be used with most of the features that are available in
the Python package. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/categorical.html
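
Below is a minimal sketch of those three requirements together; the data, column names,
and file name are illustrative, and training with gpu_hist assumes a CUDA device:

import pandas as pd
import xgboost as xgb

# Toy data: any pandas/cuDF/modin dataframe with `category` dtype columns
# works the same way.
X = pd.DataFrame({
    "color": pd.Series(["red", "green", "red", "blue"], dtype="category"),
    "size": [1.0, 2.0, 3.0, 4.0],
})
y = [0, 1, 0, 1]

# enable_categorical keeps categorical columns as categories instead of
# requiring one-hot encoding.
Xy = xgb.DMatrix(X, label=y, enable_categorical=True)

# Categorical splits currently require the gpu_hist tree method ...
booster = xgb.train({"tree_method": "gpu_hist"}, Xy, num_boost_round=10)

# ... and the JSON format for persisting the model.
booster.save_model("categorical-model.json")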

Related PRs: (#7011, #7001, #7042, #7041, #7047, #7043, #7036, #7054, #7053, #7065, #7213, #7228, #7220, #7221, #7231, #7306)

  • Next steps

    • Revise the CPU training algorithm to handle categorical data natively and generate
      categorical splits.
    • Extend the CPU and GPU algorithms to generate categorical splits of form x \in S,
      where the input is compared with multiple category values. (#7081)

External memory

This release features a brand-new interface and implementation for external memory (also
known as out-of-core training). (#6901, #7064, #7088, #7089, #7087, #7092, #7070,
#7216). The new implementation leverages the data iterator interface, which is currently
used to create DeviceQuantileDMatrix. For a quick introduction, see
https://xgboost.readthedocs.io/en/latest/tutorials/external_memory.html#data-iterator
. During the development of this new interface, lz4 compression was removed. (#7076)
Please note that external memory support is still experimental and not yet ready for
production use. All future development will focus on this new interface, and users are
advised to migrate. (You are using the old interface if you specify external memory
through a URL suffix.)
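
The sketch below follows the data iterator tutorial linked above; the class name and the
in-memory batches are illustrative stand-ins for user code that loads one batch per file
from disk:

import os

import numpy as np
import xgboost as xgb


class BatchIter(xgb.DataIter):
    def __init__(self, batches):
        self._batches = batches
        self._it = 0
        # Cached pages are written to files starting with this prefix.
        super().__init__(cache_prefix=os.path.join(".", "cache"))

    def next(self, input_data):
        """Feed the next batch to XGBoost; return 0 when exhausted."""
        if self._it == len(self._batches):
            return 0
        X, y = self._batches[self._it]
        input_data(data=X, label=y)
        self._it += 1
        return 1

    def reset(self):
        """Rewind to the first batch."""
        self._it = 0


batches = [(np.random.rand(256, 8), np.random.rand(256)) for _ in range(4)]
Xy = xgb.DMatrix(BatchIter(batches))  # pages are cached to disk as needed
booster = xgb.train({"tree_method": "hist"}, Xy, num_boost_round=10)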

New features in Python package

  • Support the numpy array interface and all numeric types from numpy in DMatrix
    construction and inplace_predict (#6998, #7003). XGBoost no longer copies data
    when the input is a numpy array view.
  • The early stopping callback in Python has a new min_delta parameter to control the
    stopping behavior (#7137); see the sketch after this list.
  • The Python package now supports calculating feature scores for the linear model,
    which is also available in the R package. (#7048)
  • The Python interface now supports configuring constraints using feature names
    instead of feature indices.
  • Type hint support for more Python code, including the scikit-learn interface and the
    rabit module. (#6799, #7240)
  • Add a tutorial for XGBoost-Ray. (#6884)
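
As mentioned above, here is a minimal sketch of the new min_delta parameter; the data,
split, and threshold are illustrative:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(512, 16), np.random.rand(512)
Xy_train = xgb.DMatrix(X[:400], label=y[:400])
Xy_valid = xgb.DMatrix(X[400:], label=y[400:])

# Stop when the validation metric fails to improve by at least min_delta
# for `rounds` consecutive iterations.
early_stop = xgb.callback.EarlyStopping(rounds=5, min_delta=1e-3)
booster = xgb.train(
    {"objective": "reg:squarederror"},
    Xy_train,
    num_boost_round=100,
    evals=[(Xy_valid, "validation")],
    callbacks=[early_stop],
)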

New features in R package

  • In 1.4, we added a new prediction function to the C API, which is used by the Python
    package. This release revises the R package to use the new prediction function as well.
    A new iteration_range parameter for the predict function is available, which can be
    used to specify the range of trees used for prediction. (#6819, #7126)
  • The R package now supports the nthread parameter in DMatrix construction. (#7127)

New features in JVM packages

  • Support GPU dataframe and DeviceQuantileDMatrix (#7195). Constructing DMatrix
    with GPU data structures and the interface for quantized DMatrix were first
    introduced in the Python package and are now available in the xgboost4j package.
  • JVM packages now support saving and getting early stopping attributes. (#7095) A quick
    example in Java is available. (#7252)

General new features

  • We now have a pre-built binary package for R on Windows with GPU support. (#7185)
  • CUDA compute capability 8.6 is now part of the default CMake build configuration, with
    newly added support for CUDA 11.4. (#7131, #7182, #7254)
  • XGBoost can be compiled using system CUB provided by CUDA 11.x installation. (#7232)

Optimizations

The performance for both hist and gpu_hist has been significantly improved in 1.5
with the following optimizations:

  • GPU multi-class model training now supports prediction cache. (#6860)
  • GPU histogram building is sped up and the overall training time is 2-3 times faster on
    large datasets (#7180, #7198). In addition, we removed the parameter deterministic_histogram and now
    the GPU algorithm is always deterministic.
  • CPU hist has an optimized procedure for data sampling (#6922)
  • More performance optimization in regression and binary classification objectives on
    CPU (#7206)
  • Tree model dump is now performed in parallel (#7040)

Breaking changes

  • n_gpus was deprecated in the 1.0 release and is now removed.
  • Feature grouping in the CPU hist tree method, which was disabled long ago, is now
    removed. (#7018)
  • C API for Quantile DMatrix is changed to be consistent with the new external memory
    implementation. (#7082)

Notable general bug fixes

  • XGBoost no longer changes the global CUDA device ordinal when gpu_id is specified (#6891,
    #6987)
  • Fix the gamma negative log-likelihood evaluation metric. (#7275)
  • Fix integer values of verbose_eval for the xgboost.cv function in Python. (#7291)
  • Remove extra sync in CPU hist for dense data, which can lead to incorrect tree node
    statistics. (#7120, #7128)
  • Fix a bug in GPU hist when data size is larger than UINT32_MAX with missing
    values. (#7026)
  • Fix a thread safety issue in prediction with the softmax objective. (#7104)
  • Fix a thread safety issue in CPU SHAP value computation. (#7050) Please note that all
    prediction functions in Python are thread-safe.
  • Fix model slicing. (#7149, #7078)
  • Work around a bug in old GCC which can lead to segfault during construction of
    DMatrix. (#7161)
  • Fix histogram truncation in GPU hist, which can lead to slightly-off results. (#7181)
  • Fix loading GPU linear model pickle files on CPU-only machine. (#7154)
  • Check whether the input value is duplicated when the CPU quantile queue is full. (#7091)
  • Fix parameter loading with training continuation. (#7121)
  • Fix CMake interface for exposing C library by specifying dependencies. (#7099)
  • Callback and early stopping are explicitly disabled for the scikit-learn interface
    random forest estimator. (#7236)
  • Fix compilation error on x86 (32-bit machine) (#6964)
  • Fix CPU memory usage with extremely sparse datasets (#7255)
  • Fix a bug in GPU multi-class AUC implementation with weighted data (#7300)

Python package

Other than the items mentioned in the previous sections, there are some Python-specific
improvements.

  • Change development release postfix to dev (#6988)
  • Fix early stopping behavior with MAPE metric (#7061)
  • Fixed incorrect feature mismatch error message (#6949)
  • Add predictor to skl constructor. (#7000, #7159)
  • Re-enable feature validation in predict proba. (#7177)
  • The scikit-learn interface regression estimator can now pass the scikit-learn estimator
    check and is fully compatible with scikit-learn utilities. __sklearn_is_fitted__ is
    implemented as part of the changes. (#7130, #7230)
  • Conform to the latest pylint. (#7071, #7241)
  • Support the latest pandas range index in DMatrix construction. (#7074)
  • Fix DMatrix construction from pandas series. (#7243)
  • Fix typo and grammatical mistake in error message (#7134)
  • [dask] Disable work stealing explicitly for training tasks. (#6794)
  • [dask] Set dataframe index in predict. (#6944)
  • [dask] Fix prediction on df with latest dask. (#6969)
  • [dask] Fix dask predict on DaskDMatrix with iteration_range. (#7005)
  • [dask] Disallow importing non-dask estimators from xgboost.dask (#7133)

R package

Improvements to the R package other than new features:

  • Optimization for updating R handles in-place (#6903)
  • Removed the magrittr dependency. (#6855, #6906, #6928)
  • The R package now hides all C++ symbols to avoid conflicts. (#7245)
  • Other maintenance, including code cleanups and documentation updates. (#6863, #6915, #6930, #6966, #6967)

JVM packages

Improvements to the JVM packages other than new features:

  • Constructors with implicit missing value are deprecated due to confusing behaviors. (#7225)
  • Reduce scala-compiler, scalatest dependency scopes (#6730)
  • Make the Java library loader emit helpful error messages on missing dependencies. (#6926)
  • JVM packages now use the Python tracker in XGBoost instead of dmlc. The tracker in
    XGBoost is shared between the JVM packages and Python Dask and enjoys better
    maintenance. (#7132)
  • Fix "key not found: train" error (#6842)
  • Fix model loading from stream (#7067)

General document improvements

  • Overhaul the installation documents. (#6877)
  • A few demos are added for AFT with dask (#6853), callback with dask (#6995), inference
    in C (#7151), and process_type (#7135).
  • Fix PDF format of document. (#7143)
  • Clarify the behavior of use_rmm. (#6808)
  • Clarify prediction function. (#6813)
  • Improve tutor...

Release candidate of version 1.5.0

26 Sep 05:35
1debabb
Pre-release

Roadmap: #6846
RC: #7260
Release note: #7271

1.4.2 Patch Release

13 May 11:35
522b897

This is a patch release for the Python package with the following fixes:

  • Handle the latest version of cupy.ndarray in inplace_predict. (#6933)
  • Ensure the output array from predict_leaf has shape (n_samples,) when there is only one
    tree; 1.4.0 outputs (n_samples, 1). (#6889)
  • Fix empty dataset handling with multi-class AUC. (#6947)
  • Handle object type from pandas in inplace_predict. (#6927)

You can verify the downloaded source code xgboost.tar.gz by running this in your Unix shell:

echo "3ffd4a90cd03efde596e51cadf7f344c8b6c91aefd06cc92db349cd47056c05a *xgboost.tar.gz" | shasum -a 256 --check

1.4.1 Patch Release

20 Apr 00:37
a347ef7

This is a bug fix release.

  • Fix GPU implementation of AUC on some large datasets. (#6866)

You can verify the downloaded source code xgboost.tar.gz by running this in your Unix shell:

echo "f3a37e5ddac10786e46423db874b29af413eed49fd9baed85035bbfee6fc6635 *xgboost.tar.gz" | shasum -a 256 --check

Release 1.4.0 stable

11 Apr 00:43

Introduction of pre-built binary package for R, with GPU support

Starting with release 1.4.0, users now have the option of installing {xgboost} without
having to build it from source. This is particularly advantageous for users who want
to take advantage of the GPU algorithm (gpu_hist), as previously they'd have to build
{xgboost} from source using CMake and NVCC. Now installing {xgboost} with GPU
support is as easy as: R CMD INSTALL ./xgboost_r_gpu_linux.tar.gz. (#6827)

See the instructions at https://xgboost.readthedocs.io/en/latest/build.html

Improvements on prediction functions

XGBoost has many prediction types, including SHAP value computation and inplace prediction.
In 1.4 we overhauled the underlying prediction functions for the C API and Python API with a
unified interface. (#6777, #6693, #6653, #6662, #6648, #6668, #6804)

  • Starting with 1.4, sklearn interface prediction uses inplace predict by default when
    the input data is supported.
  • Users can use inplace predict with the dart booster and enable GPU acceleration just
    like gbtree.
  • Also, all prediction functions with tree models are now thread-safe. Inplace predict is
    improved with base_margin support.
  • A new set of C predict functions is exposed in the public interface.
  • A user-visible change is a newly added parameter called strict_shape. See
    https://xgboost.readthedocs.io/en/latest/prediction.html for more details.
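
A minimal sketch of strict_shape together with iteration_range and inplace predict; the
data is illustrative:

import numpy as np
import xgboost as xgb

X, y = np.random.rand(128, 8), np.random.rand(128)
Xy = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "reg:squarederror"}, Xy, num_boost_round=32)

# Use only the first 16 trees and force a fixed-dimension output shape.
pred = booster.predict(Xy, iteration_range=(0, 16), strict_shape=True)
print(pred.shape)  # (128, 1) with strict_shape, instead of (128,)

# Inplace predict skips DMatrix construction entirely.
pred2 = booster.inplace_predict(X, iteration_range=(0, 16))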

Improvement on Dask interface

  • Starting with 1.4, the Dask interface is considered feature-complete, which means
    all of the models found in the single-node Python interface are now supported in Dask,
    including but not limited to ranking and random forests. Also, the prediction function
    is significantly faster and supports SHAP value computation.

    • Most of the parameters found in the single-node sklearn interface are supported by
      the Dask interface. (#6471, #6591)
    • Learning to rank is implemented. On the Dask interface, we use the newly added
      support for query ID to enable group structure. (#6576)
    • The Dask interface has Python type hints support. (#6519)
    • All models can be safely pickled. (#6651)
    • Random forest estimators are now supported. (#6602)
    • SHAP value computation is now supported. (#6575, #6645, #6614)
    • Evaluation result is printed on the scheduler process. (#6609)
    • DaskDMatrix (and device quantile dmatrix) now accepts all meta-information. (#6601)
  • Prediction optimization. We enhanced and sped up the prediction function for the
    Dask interface. See the latest Dask tutorial page in our document for an overview of
    how you can optimize it even further. (#6650, #6645, #6648, #6668)

  • Bug fixes

    • If you are using the latest Dask and distributed where distributed.MultiLock is
      present, XGBoost supports training multiple models on the same cluster in
      parallel. (#6743)
    • Fix a bug where, when using dask.client to launch an async task, XGBoost might use
      a different client object internally. (#6722)
  • Other improvements to documents, blogs, tutorials, and demos. (#6389, #6366, #6687,
    #6699, #6532, #6501)
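
A minimal sketch of training and predicting through the Dask interface, assuming a local
cluster and random data for illustration:

from dask import array as da
from distributed import Client, LocalCluster

import xgboost as xgb

with LocalCluster(n_workers=2) as cluster, Client(cluster) as client:
    X = da.random.random((10000, 10), chunks=(1000, 10))
    y = da.random.random(10000, chunks=1000)

    # DaskDMatrix holds references to the distributed data.
    dtrain = xgb.dask.DaskDMatrix(client, X, y)
    output = xgb.dask.train(
        client, {"tree_method": "hist"}, dtrain, num_boost_round=10
    )

    # Prediction returns a lazy dask collection.
    prediction = xgb.dask.predict(client, output["booster"], X)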

Python package

With changes from Dask and general improvements on prediction, we have made some
enhancements to the general Python interface and to IO for booster information. Starting
from 1.4, booster feature names and types can be saved into the JSON model. Also, some
model attributes like best_iteration and best_score are restored upon model load. On the
sklearn interface, some attributes are now implemented as Python object properties with
better documentation.

  • Breaking change: All data parameters in prediction functions are renamed to X
    for better compliance with the sklearn estimator interface guidelines.

  • Breaking change: XGBoost used to generate some pseudo feature names with DMatrix
    when inputs like np.ndarray don't have column names. The procedure is removed to
    avoid conflict with other inputs. (#6605)

  • Early stopping with training continuation is now supported. (#6506)

  • Optional import for Dask and cuDF are now lazy. (#6522)

  • As mentioned in the prediction improvement summary, the sklearn interface uses inplace
    prediction whenever possible. (#6718)

  • Booster information like feature names and feature types are now saved into the JSON
    model file. (#6605)

  • All DMatrix interfaces including DeviceQuantileDMatrix and counterparts in Dask
    interface (as mentioned in the Dask changes summary) now accept all the meta-information
    like group and qid in their constructor for better consistency. (#6601)

  • Booster attributes are restored upon model load so users don't have to call attr
    manually. (#6593)

  • On sklearn interface, all models accept base_margin for evaluation datasets. (#6591)

  • Improvements to the setup script, including smaller sdist size and faster installation
    if the C++ library is already built. (#6611, #6694, #6565)
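
A minimal sketch of feature information round-tripping through the JSON model file; the
data and file name are illustrative:

import numpy as np
import pandas as pd

import xgboost as xgb

X = pd.DataFrame(np.random.rand(64, 3), columns=["f0", "f1", "f2"])
y = np.random.rand(64)

reg = xgb.XGBRegressor(n_estimators=8)
reg.fit(X, y)
reg.save_model("model.json")  # feature names and types go into the JSON file

loaded = xgb.XGBRegressor()
loaded.load_model("model.json")
# Feature names are restored from the saved model.
print(loaded.get_booster().feature_names)  # ['f0', 'f1', 'f2']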

  • Bug fixes for Python package:

    • Don't validate feature when number of rows is 0. (#6472)
    • Move metric configuration into booster. (#6504)
    • Calling XGBModel.fit() should clear the Booster by default (#6562)
    • Support _estimator_type. (#6582)
    • [dask, sklearn] Fix predict proba. (#6566, #6817)
    • Restore unknown data support. (#6595)
    • Fix learning rate scheduler with cv. (#6720)
    • Fix a small typo in the sklearn documentation. (#6717)
    • [python-package] Fix class Booster: feature_types = None (#6705)
    • Fix divide by 0 in feature importance when no split is found. (#6676)

JVM package

  • [jvm-packages] Fix early stopping not working even without the custom_eval setting. (#6738)
  • Fix TaskFailedListener's callback potentially not being called. (#6612)
  • [jvm] Add the ability to load a booster directly from a byte array. (#6655)
  • [jvm-packages] JVM library loader extensions. (#6630)

R package

  • R documentation: Make construction of DMatrix consistent.
  • Fix R documentation for xgb.train. (#6764)

ROC-AUC

We re-implemented the ROC-AUC metric in XGBoost. The new implementation supports
multi-class classification and has better support for learning-to-rank tasks that are not
binary. Also, it has a better-defined average in distributed environments, with additional
handling for invalid datasets. (#6749, #6747, #6797)

Global configuration

Starting from 1.4, XGBoost's Python, R and C interfaces support a new global configuration
model where users can specify some global parameters. Currently, the supported parameters
are verbosity and use_rmm. The latter is experimental; see the rmm plugin demo and
related README file for details. (#6414, #6656)
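
A minimal sketch of the global configuration interface in Python:

import xgboost as xgb

# Set a global parameter for the whole process ...
xgb.set_config(verbosity=2)
print(xgb.get_config())  # e.g. {'verbosity': 2, 'use_rmm': False}

# ... or scope the change with a context manager.
with xgb.config_context(verbosity=0):
    ...  # code here runs with verbosity temporarily set to 0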

Other new features

  • Better handling for input data types that support __array_interface__. For some
    data types, including GPU inputs and scipy.sparse.csr_matrix, XGBoost employs
    __array_interface__ to process the underlying data. Starting from 1.4, XGBoost
    can accept arbitrary array strides (which means column-major layout is supported)
    without making data copies, potentially reducing memory consumption significantly.
    Also, version 3 of __cuda_array_interface__ is now supported. (#6776, #6765, #6459,
    #6675) See the sketch after this list.
  • Improved parameter validation: feeding XGBoost parameters that contain whitespace
    now triggers an error. (#6769)
  • For Python and R packages, file paths containing the home indicator ~ are supported.
  • As mentioned in the Python changes summary, the JSON model can now save feature
    information of the trained booster. The JSON schema is updated accordingly. (#6605)
  • Development of categorical data support continues, with newly added support for
    weighted data and the dart booster. (#6508, #6693)
  • As mentioned in Dask change summary, ranking now supports the qid parameter for
    query groups. (#6576)
  • DMatrix.slice can now consume a numpy array. (#6368)
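
As noted in the first item above, a minimal sketch of column-major and strided numpy
inputs; the shapes are illustrative:

import numpy as np

import xgboost as xgb

X = np.random.rand(1000, 16)
y = np.random.rand(1000)

# A column-major array; before 1.4 this layout could force a data copy.
X_fortran = np.asfortranarray(X)
Xy = xgb.DMatrix(X_fortran, label=y)

# A strided view (every other row) is likewise consumed through
# __array_interface__ without copying the view first.
Xy_view = xgb.DMatrix(X[::2], label=y[::2])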

Other breaking changes

  • Aside from the feature name generation, there are two breaking changes:
    • Drop saving the binary format for memory snapshots. (#6513, #6640)
    • Change the default evaluation metric for the binary:logitraw objective to logloss. (#6647)

CPU Optimization

  • Aside from the general changes to the predict function, some optimizations are applied
    to the CPU implementation. (#6683, #6550, #6696, #6700)
  • Also, performance of sampling initialization in hist is improved. (#6410)

Notable fixes in the core library

These fixes do not reside in any particular language binding:

  • Fixes for gamma regression. This includes checking for invalid input values, fixes for
    gamma deviance metric, and better floating point guard for gamma negative log-likelihood
    metric. (#6778, #6537, #6761)
  • Fix low accuracy of random forests trained with gpu_hist in previous versions. (#6755)
  • Fix a bug in GPU sketching when data size exceeds limit of 32-bit integer. (#6826)
  • Memory consumption fix for row-major adapters (#6779)
  • Don't estimate sketch batch size when rmm is used. (#6807) (#6830)
  • Fix in-place predict with missing value. (#6787)
  • Re-introduce double buffer in UpdatePosition, to fix perf regression in gpu_hist (#6757)
  • Pass correct split_type to GPU predictor (#6491)
  • Fix DMatrix feature names/types IO. (#6507)
  • Use view for SparsePage exclusively to avoid some data access races. (#6590)
  • Check for invalid data. (#6742)
  • Fix relocatable include in CMakeList (#6734) (#6737)
  • Fix DMatrix slice with feature types. (#6689)

Other deprecation notices:

  • This release will be the last release to support CUDA 10.0. (#6642)

  • Starting with the next release, the Python package will require pip 19.3+ due to the use
    of the manylinux2014 tag. Also, CentOS 6, RHEL 6 and other old distributions will not be
    supported.

Known issue:

The macOS build of the JVM packages doesn't support multi-threading out of the box. To enable
mul...


1.3.3 Patch Release

20 Jan 13:52
000292c
  • Fix regression on best_ntree_limit. (#6616)

1.3.2 Patch Release

13 Jan 14:21
  • Fix compatibility with newer scikit-learn. (#6555)
  • Fix wrong best_ntree_limit in multi-class. (#6569)
  • Ensure that Rabit can be compiled on Solaris (#6578)
  • Fix best_ntree_limit for linear and dart. (#6579)
  • Remove duplicated DMatrix creation in scikit-learn interface. (#6592)
  • Fix evals_result in XGBRanker. (#6594)