From 4dcca2b4cbea8b862b63a88fff0563d120e906e8 Mon Sep 17 00:00:00 2001
From: Christian Bager Bach Houmann
Date: Mon, 3 Jun 2024 15:36:10 +0200
Subject: [PATCH] Update report_thesis/src/sections/background/preprocessing/pca.tex

Co-authored-by: Pattrigue <57709490+Pattrigue@users.noreply.github.com>
---
 report_thesis/src/sections/background/preprocessing/pca.tex | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/report_thesis/src/sections/background/preprocessing/pca.tex b/report_thesis/src/sections/background/preprocessing/pca.tex
index ba65a110..1d55fd88 100644
--- a/report_thesis/src/sections/background/preprocessing/pca.tex
+++ b/report_thesis/src/sections/background/preprocessing/pca.tex
@@ -1,6 +1,6 @@
 \subsubsection{Principal Component Analysis (PCA)}\label{subsec:pca}
 \gls{pca} is a dimensionality reduction technique used to reduce the number of features in a dataset while retaining as much information as possible.
-We provide an intuitive explanation of \gls{pca} in this section based on \citet{dataminingConcepts} and \citet{Vasques2024}.
+We provide an overview of \gls{pca} in this section based on \citet{dataminingConcepts} and \citet{Vasques2024}.
 \gls{pca} works by identifying the directions in which the\\$n$-dimensional data varies the most and projects the data onto these $k$ dimensions, where $k \leq n$.
 This projection results in a lower-dimensional representation of the data.
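The sentence this patch touches describes projecting $n$-dimensional data onto its $k \leq n$ directions of greatest variance. That step can be sketched as follows; this is a hypothetical illustration (the function `pca_project` and the random data are not part of the thesis or the patch):

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the k directions of greatest variance (k <= n)."""
    # Center the data so the components capture variance, not the mean offset.
    Xc = X - X.mean(axis=0)
    # Eigen-decomposition of the covariance matrix yields the principal directions.
    eigvals, eigvecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
    # eigh returns eigenvalues in ascending order; keep the top-k directions.
    order = np.argsort(eigvals)[::-1][:k]
    components = eigvecs[:, order]
    # The projection is the lower-dimensional representation of the data.
    return Xc @ components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples in n = 5 dimensions
Z = pca_project(X, k=2)         # reduced to k = 2 dimensions
print(Z.shape)                  # (100, 2)
```

Libraries such as scikit-learn provide equivalent functionality; the explicit eigen-decomposition here only mirrors the intuition stated in the section.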