From bd51a5535f03afed9a7009937cdcce6a921cce19 Mon Sep 17 00:00:00 2001
From: Christian Bager Bach Houmann
Date: Tue, 11 Jun 2024 11:31:41 +0200
Subject: [PATCH] Update report_thesis/src/sections/experiment_design/initial_experiment.tex

---
 .../src/sections/experiment_design/initial_experiment.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report_thesis/src/sections/experiment_design/initial_experiment.tex b/report_thesis/src/sections/experiment_design/initial_experiment.tex
index 7b5a46ea..ca1ce33c 100644
--- a/report_thesis/src/sections/experiment_design/initial_experiment.tex
+++ b/report_thesis/src/sections/experiment_design/initial_experiment.tex
@@ -4,7 +4,7 @@ \subsection{Design for Initial Experiment}\label{sec:initial-experiment}
 
 All models were trained on the same preprocessed data using the Norm 3 preprocessing method described in Section~\ref{sec:norm3}.
 This ensured that the models' performance could be evaluated under consistent and comparable conditions.
-All models were trained using our data partitioning and cross-validation strategy, as described in Section~\ref{subsec:validation_testing_procedures}.
+Furthermore, all experiments used our data partitioning and were evaluated using our testing and validation strategy, as described in Section~\ref{subsec:validation_testing_procedures}.
 To ensure as fair of a comparison between models as possible, all models were trained using as many default hyperparameters as possible, and those hyperparameters that did not have default options were selected based on values found in the literature.
 However, due to the nature of the neural network models' architecture, some extra time was spent on tuning the models to ensure a fair comparison.
 This included using batch normalization for the \gls{cnn} model, as early assesments showed that this was necessary to produce reasonable results.
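
Note (not part of the patch): the changed file mentions adding batch normalization to the CNN model. As a rough illustration only, the sketch below shows what a 1D convolutional block with batch normalization can look like. It assumes a PyTorch implementation; the class name `SpectralCNN`, the layer sizes, kernel widths, and output dimension are hypothetical placeholders, not the architecture actually used in the thesis.

```python
import torch
import torch.nn as nn

class SpectralCNN(nn.Module):
    """Minimal 1D CNN with batch normalization after each convolution.

    Illustrative sketch only: shapes and hyperparameters are placeholders,
    not the thesis's actual CNN configuration.
    """

    def __init__(self, n_channels: int = 1, n_outputs: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=5, padding=2),
            nn.BatchNorm1d(16),   # normalize activations per mini-batch
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm1d(32),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the spectral axis
        )
        self.regressor = nn.Linear(32, n_outputs)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, wavelength bins)
        z = self.features(x).squeeze(-1)
        return self.regressor(z)
```

Placing a batch-normalization layer after each convolution keeps the distribution of activations stable during training, which is the kind of effect the patched text alludes to when it says batch normalization was needed to produce reasonable results.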