diff --git a/report_thesis/src/sections/background/preprocessing/pca.tex b/report_thesis/src/sections/background/preprocessing/pca.tex
index ba65a110..1d55fd88 100644
--- a/report_thesis/src/sections/background/preprocessing/pca.tex
+++ b/report_thesis/src/sections/background/preprocessing/pca.tex
@@ -1,6 +1,6 @@
 \subsubsection{Principal Component Analysis (PCA)}\label{subsec:pca}
 \gls{pca} is a dimensionality reduction technique used to reduce the number of features in a dataset while retaining as much information as possible.
-We provide an intuitive explanation of \gls{pca} in this section based on \citet{dataminingConcepts} and \citet{Vasques2024}.
+We provide an overview of \gls{pca} in this section based on \citet{dataminingConcepts} and \citet{Vasques2024}.
 \gls{pca} works by identifying the directions in which the\\$n$-dimensional data varies the most and projects the data onto these $k$ dimensions, where $k \leq n$.
 This projection results in a lower-dimensional representation of the data.
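The projection the patched section describes (finding the directions of maximal variance and projecting onto the top $k \leq n$ of them) can be sketched in a few lines of NumPy. This is an illustrative sketch, not the thesis authors' implementation; the function and variable names are hypothetical.

```python
import numpy as np

def pca_project(X, k):
    """Project an (m, n) data matrix onto its top-k principal components.

    Illustrative sketch of the PCA idea described in the section:
    find the directions of maximal variance, then project onto k <= n of them.
    """
    # Center the data so the components capture variance, not the mean offset.
    X_centered = X - X.mean(axis=0)
    # SVD of the centered data: the rows of Vt are the principal directions,
    # ordered by decreasing variance explained.
    U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)
    # Project onto the first k directions, giving an (m, k) representation.
    return X_centered @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))   # 100 samples, n = 5 features
Z = pca_project(X, 2)           # k = 2 dimensional representation
print(Z.shape)                  # (100, 2)
```

The result `Z` is the lower-dimensional representation of the data that the section refers to; for `k = n` the projection is lossless up to rotation, and smaller `k` trades information for dimensionality.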