From 97f301ca4b262486403fa194429bf4c58ae1b87c Mon Sep 17 00:00:00 2001
From: Christian Bager Bach Houmann
Date: Thu, 13 Jun 2024 10:08:46 +0200
Subject: [PATCH] Update report_thesis/src/sections/background/ensemble_learning_models/etr.tex

Co-authored-by: Pattrigue <57709490+Pattrigue@users.noreply.github.com>
---
 .../src/sections/background/ensemble_learning_models/etr.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report_thesis/src/sections/background/ensemble_learning_models/etr.tex b/report_thesis/src/sections/background/ensemble_learning_models/etr.tex
index fb9d39fb..c59ca2a5 100644
--- a/report_thesis/src/sections/background/ensemble_learning_models/etr.tex
+++ b/report_thesis/src/sections/background/ensemble_learning_models/etr.tex
@@ -22,5 +22,5 @@ \subsubsection{Extra Trees Regressor (ETR)}
 \gls{etr} extends the \gls{rf} model by introducing additional randomness in the tree-building process, specifically through random feature selection and random split points.
 While \gls{rf} uses bootstrap sampling and selects the best split from a random subset of features to create a set of diverse samples, \gls{etr} instead selects split points randomly within the chosen features, introducing additional randomness.
 This process results in even greater variability among the trees, aiming to reduce overfitting and improve the model's robustness.
-As a trade off, \gls{etr} is less interpretable than a single decision tree, as the added randomness can introduce more bias than \gls{rf}.
+As a trade-off, \gls{etr} is less interpretable than a single decision tree, as the added randomness can introduce more bias than \gls{rf}.
 However, it often achieves better generalization performance, especially in high-dimensional or noisy datasets.
\ No newline at end of file
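
For illustration of the RF/ETR distinction described in the hunk above, a minimal sketch using scikit-learn's RandomForestRegressor and ExtraTreesRegressor follows; the synthetic dataset, hyperparameters, and metric are illustrative assumptions and not taken from the thesis.

# Minimal sketch: comparing RF and ETR as described above.
# Synthetic data, hyperparameters, and metric are illustrative assumptions.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

# Synthetic high-dimensional, noisy regression problem (illustrative).
X, y = make_regression(n_samples=500, n_features=100, noise=10.0, random_state=0)

# RF: bootstrap sampling + best split chosen within a random feature subset.
rf = RandomForestRegressor(n_estimators=200, random_state=0)

# ETR: no bootstrap by default; split thresholds are drawn at random
# within each candidate feature, adding further randomization.
etr = ExtraTreesRegressor(n_estimators=200, random_state=0)

for name, model in [("RF", rf), ("ETR", etr)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean R^2 = {scores.mean():.3f}")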