Commit

move figure 3
Y0dler committed Nov 11, 2024
1 parent 6991e2c commit c9f7c21
Showing 1 changed file with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions paper/paper.md
@@ -77,9 +77,6 @@ Since the inference data is stored alongside graphs and report sheets, users may
$\texttt{PeakPerformance}$ accommodates the use of a pre-manufactured data pipeline for standard applications (Fig. 1) as well as the creation of custom data pipelines using only its core functions.
The provided data analysis pipeline was designed to be user-friendly and is demonstrated in multiple example notebooks.

![](./Fig3_PP-standalone.png)
__Figure 1:__ Overview of the pre-manufactured data analysis pipeline featured in $\texttt{PeakPerformance}$.

Before using $\texttt{PeakPerformance}$, the user has to supply raw data files, each containing a NumPy array with time in the first and intensity in the second dimension for one peak, as described in detail in the documentation.
The $\texttt{prepare\_model\_selection()}$ method then creates an Excel template file ("Template.xlsx") for entering user information and stores it in the raw data directory.
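
The following minimal sketch illustrates the expected array layout for a single peak; the file name is a placeholder, and the exact naming convention as well as the arguments of $\texttt{prepare\_model\_selection()}$ are given in the documentation rather than in this sketch.

```python
# Illustrative sketch (not part of PeakPerformance): store the raw data of one peak
# as a NumPy array with time in the first and intensity in the second dimension.
import numpy as np
from pathlib import Path

raw_data = Path("raw_data")
raw_data.mkdir(exist_ok=True)

time = np.arange(0.0, 3.0, 0.01)                            # retention time axis (min)
intensity = 1e5 * np.exp(-0.5 * ((time - 1.5) / 0.1) ** 2)  # synthetic, roughly Gaussian signal

# Placeholder file name; the naming convention expected by PeakPerformance is
# described in its documentation.
np.save(raw_data / "example_peak.npy", np.stack([time, intensity]))

# Assumed call signature (see the documentation for the actual arguments):
# from peak_performance import pipeline
# pipeline.prepare_model_selection("raw_data")
```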

@@ -88,6 +85,9 @@ If this is not done by the user, an optional automated model selection step will
The automated model selection can be started using the $\texttt{model\_selection()}$ function from the $\texttt{pipeline}$ module and will be performed successively for each mass trace.
The results for each model are ranked with the $\texttt{compare()}$ function of the ArviZ package based on Pareto-smoothed importance sampling leave-one-out cross-validation (PSIS-LOO) [@RN146; @RN145].
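
The following minimal sketch illustrates this ranking mechanism with ArviZ; the example data sets merely stand in for the $\texttt{InferenceData}$ objects of the fitted peak models, and only the $\texttt{compare()}$ call itself reflects what the pipeline does.

```python
# Sketch of the ranking step; ArviZ's built-in example data sets stand in for the
# candidate peak models fitted by PeakPerformance.
import arviz as az

fits = {
    "model_a": az.load_arviz_data("centered_eight"),
    "model_b": az.load_arviz_data("non_centered_eight"),
}
ranking = az.compare(fits, ic="loo")  # Pareto-smoothed importance sampling LOO-CV
print(ranking)                        # DataFrame sorted from best- to worst-ranked model
```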

![](./Fig3_PP-standalone.png)
__Figure 1:__ Overview of the pre-manufactured data analysis pipeline featured in $\texttt{PeakPerformance}$.

Subsequently, the peak analysis pipeline can be started with the function $\texttt{pipeline()}$ from the $\texttt{pipeline}$ module.
If the "pre-filtering" option was selected, an optional filtering step is executed before sampling to reject signals that clearly do not contain a peak, thus saving computation time.
This filtering step combines the $\texttt{find\_peaks()}$ function from the SciPy package [@scipy] with a user-defined minimum signal-to-noise threshold and may reject a large number of signals before sampling. This is particularly relevant for isotopic labeling experiments, where every theoretically achievable mass isotopomer needs to be investigated, yet, depending on the input labeling mixture, the majority of them may not actually be present.
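
The following sketch shows what such a pre-filter could look like; the noise estimate and the default threshold are illustrative assumptions and not $\texttt{PeakPerformance}$'s actual implementation.

```python
# Illustrative pre-filter (not PeakPerformance's actual code): reject a signal if
# find_peaks() detects no candidate peak above a user-defined signal-to-noise threshold.
import numpy as np
from scipy.signal import find_peaks


def passes_prefilter(intensity: np.ndarray, min_snr: float = 5.0) -> bool:
    # Robust noise estimate via the median absolute deviation (an assumption of this sketch).
    noise = 1.4826 * np.median(np.abs(intensity - np.median(intensity)))
    peaks, _ = find_peaks(intensity, height=min_snr * noise)
    return peaks.size > 0  # only signals with at least one candidate peak proceed to sampling
```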
