
[REVIEW]: PeakPerformance - A tool for Bayesian inference-based fitting of LC-MS/MS peaks #7313

Closed · editorialbot opened this issue on Oct 3, 2024 · 114 comments
Labels: accepted, published, Python, recommend-accept, review, TeX, Track: 2 (BCM) Biomedical Engineering, Biosciences, Chemistry, and Materials

@editorialbot commented Oct 3, 2024

Submitting author: @MicroPhen (Stephan Noack)
Repository: https://github.com/JuBiotech/peak-performance/
Branch with paper.md (empty if default branch): main
Version: v0.7.2
Editor: @csoneson
Reviewers: @Adafede, @lazear
Archive: 10.5281/zenodo.14261846

Status: [status badge]

Status badge code:

HTML: <a href="https://joss.theoj.org/papers/e7df0125519d8dc31d303d73f4f5e590"><img src="https://joss.theoj.org/papers/e7df0125519d8dc31d303d73f4f5e590/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/e7df0125519d8dc31d303d73f4f5e590/status.svg)](https://joss.theoj.org/papers/e7df0125519d8dc31d303d73f4f5e590)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@Adafede & @lazear, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @csoneson know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @Adafede

📝 Checklist for @lazear

@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.27 s (183.3 files/s, 477312.9 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
SVG                             10              1            128         108888
Python                          10            366           1162           2309
Markdown                         7            185              0            747
YAML                             9             25             30            224
TeX                              1             16              0            197
Jupyter Notebook                 5              0          12948            193
TOML                             1             10              1             49
DOS Batch                        1              8              1             26
reStructuredText                 4             25             32             22
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            49            640          14309         112664
-------------------------------------------------------------------------------

Commit count by author:

   343	j.niesser
    35	Michael Osthege
    32	Jochen Nießer
    32	Osthege, Michael
     5	dependabot[bot]

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 1955

✅ The paper includes a Statement of need section

@editorialbot

License info:

🟡 License found: GNU Affero General Public License v3.0 (Check here for OSI approval)

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@csoneson commented Oct 3, 2024

👋🏼 @MicroPhen, @Adafede, @lazear - this is the review thread for the submission. All of our communications will happen here from now on.

As a reviewer, the first step is to create a checklist for your review by entering

@editorialbot generate my checklist

at the top of a new comment in this thread. These checklists contain the JOSS requirements. As you go over the submission, please check off any items that you feel have been satisfied. The first comment in this thread also contains links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. Our goal is to work with the authors to help them meet our criteria instead of merely passing judgment on the submission. As such, the reviewers are encouraged to submit issues directly in the software repository. If you do so, please mention this thread so that a link is created (and I can keep an eye on what is happening). Please also feel free to comment and ask questions in this thread. It is often easier to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for reviews to be completed within about 2-4 weeks. Please let me know if any of you require some more time. We can also use EditorialBot (our bot) to set automatic reminders if you know you'll be away for a known period of time.

Please feel free to ping me (@csoneson) if you have any questions or concerns. Thanks!

@Adafede commented Oct 3, 2024

Review checklist for @Adafede

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/JuBiotech/peak-performance/?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@MicroPhen) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@Y0dler commented Oct 4, 2024

First of all: thank you very much for agreeing to review our manuscript and program.

I just wanted to let you know that I created a branch a moment ago to improve and streamline our GitHub repo's landing page, as it is a bit outdated and confusing. More importantly, though, here is the link to our documentation, which will be placed more prominently on the new landing page. It contains a lot of helpful information that should simplify your review.

@lazear commented Oct 8, 2024

Review checklist for @lazear

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

  • I confirm that I read and will adhere to the JOSS code of conduct.

General checks

  • Repository: Is the source code for this software available at https://github.com/JuBiotech/peak-performance/?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@MicroPhen) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data or research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems)?
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1) Contribute to the software 2) Report issues or problems with the software 3) Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@lazear commented Oct 8, 2024

@Y0dler a couple of comments/suggestions/etc. (will update as I go - likely in multiple chunks):

  • You guys should be commended for the quality of your code and documentation. It's very good!

  • I think the bibliography/references are messed up; only one reference appears at the end of the paper proof.

  • I am not super familiar with Bayesian peak picking, but I know there are other papers in this area - perhaps consider citing them and very briefly discussing differences/tradeoffs?

  • Is it possible to benchmark your approach vs standard normal models on benchmark data (e.g., synthetic data with known parameters + noise)?

  • Is this algorithm only suitable for analysis of a single transition in targeted LC/MS data? Could it handle multiple transitions/product ions from the same precursor?

  • What are the performance characteristics of your approach? Is this scalable to integrating 1000s or 100,000s of peaks in a feasible time scale?

@lazear commented Oct 8, 2024

@csoneson clarifications requested:

  • Data sharing: The paper states that there is experimental data used for generating some example figures/analyses, along with a typical data availability statement: "available on reasonable request" (sic). It's unclear if this data is contained in the repository as well, since there are several pieces of example data. I think in this case (example figure) I am fine with it, but want to check if this is in line with JOSS policies, or does the data need to be available without request?
  • Contribution and authorship: the submitting author has no commits to the repository, but the first author is active in this thread. Is this OK? (I have no issues with this, submitting author appears to be the PI)

@Y0dler commented Oct 8, 2024

Thanks for your comments and kind words, @lazear!

Regarding the references, they look fine as far as I can see. But if you use the "View article proof on GitHub" link from editorialbot, it initially hides the last page with most of the references (until you hit the "More Pages" button at the very bottom). Maybe that was the problem? Otherwise, you can also download the current draft from GitHub Actions.

> I am not super familiar with Bayesian peak picking, but I know there are other papers in this area - perhaps consider citing them and very briefly discussing differences/tradeoffs?

Definitely a very good point; I think we need to add that, and I'll see what I can do given that we're already way past the theoretical limit of 1000 words^^ I'll try to add it as soon as possible, but it might take a couple of days.

> Is it possible to benchmark your approach vs standard normal models on benchmark data (e.g., synthetic data with known parameters + noise)?

We have tested our approach with noisy synthetic data sets of normal-shaped and skew normal-shaped peaks and compared the results with the ground truth. Originally, this was also part of the paper, but we moved it to the documentation in order to decrease the word count. You can find it here. I hope this is what you meant. There is also a comparison with a commercially available vendor software using an experimental data set. Regarding your later point about data sharing: this data set is currently not in the repository.

> Is this algorithm only suitable for analysis of a single transition in targeted LC/MS data? Could it handle multiple transitions/product ions from the same precursor?

Currently, one would have to analyze multiple transitions sequentially: regardless of whether multiple product ions stem from the same precursor ion, the time series (or extracted ion chromatogram) has to be supplied separately for each mass trace, and the mass traces have to be entered into the template Excel file so that they are analyzed in one batch run, one after the other (see the sketch below). I think we discussed a possibility to fit multiple peaks in a hierarchical model of sorts, but for a more exact answer I would refer this one to @michaelosthege.
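
To illustrate that sequential batch workflow, here is a minimal, hypothetical sketch; the column names, file names, and DataFrame layout are invented for illustration and do not reflect PeakPerformance's actual template format:

```python
import pandas as pd

# Hypothetical batch template: in practice this would be read with
# pd.read_excel("Template.xlsx"); the column names here are invented.
template = pd.DataFrame(
    {
        "mass_trace": ["precursor1_product1", "precursor1_product2"],
        "data_file": ["run01_trace1.npy", "run01_trace2.npy"],
    }
)

for row in template.itertuples():
    # Each time series (extracted ion chromatogram) is supplied separately
    # and analyzed one after the other within the batch run.
    print(f"queueing {row.mass_trace} from {row.data_file}")
```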

> What are the performance characteristics of your approach? Is this scalable to integrating 1000s or 100,000s of peaks in a feasible time scale?

The performance depends on a number of parameters, e.g. which peak model is used for fitting, which sampler is used (ideally nutpie), and whether a signal needs to be sampled again with more tuning samples or is rejected right away.
The model selection takes a few minutes per mass trace, but you only have to do that once per batch run.
The actual peak analysis may take 20-30 s for a single peak and maybe 60-90 s for a double peak, but in the notebooks it appears that most of that time is taken up by model instantiation, while the sampling itself only lasts a few seconds. So, in order to scale this up to 1000s of peaks or more, one would need to parallelize the procedure on a computation cluster and/or implement a way to fit multiple peaks without re-instantiating the model every time. The latter is something that I think can be added to the software; the former is probably more on the user side. I suppose there are also ways to parallelize directly in Python (see the sketch below), but at our institute we chose the route with a computation cluster, so this is not something we worked on.
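
As an illustration only, here is a minimal sketch of such direct parallelization using Python's standard library; since the per-trace fits are independent, they can be spread across CPU cores. fit_one_mass_trace is a hypothetical placeholder, not a PeakPerformance function:

```python
from concurrent.futures import ProcessPoolExecutor

def fit_one_mass_trace(trace_id: str) -> str:
    # Placeholder for the real per-trace work: load the time series,
    # build (or reuse) the peak model, and sample the posterior.
    return f"fitted {trace_id}"

if __name__ == "__main__":
    trace_ids = [f"trace_{i:03d}" for i in range(8)]  # hypothetical batch
    # Run independent fits on separate CPU cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for result in pool.map(fit_one_mass_trace, trace_ids):
            print(result)
```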

Let me know if the answers were satisfactory and of course if you have more comments or questions.

@csoneson

Hi @lazear - thanks for your questions!

  • Data sharing: The paper states that there is experimental data used for generating some example figures/analyses, along with a typical data availability statement: "available on reasonable request" (sic).

According to the JOSS data sharing policy, "If any original data analysis results are provided within the submitted work, such results have to be entirely reproducible by reviewers, including accessing the data". To me, this does seem to apply here - the figures displayed in the paper (and those comparing to the commercially available software in the documentation) were generated for the purpose of this paper.

  • Contribution and authorship: the submitting author has no commits to the repository, but the first author is active in this thread. Is this OK?

We typically leave it to the authors to decide who is eligible for authorship, in line with our authorship policy. Specifically, "Purely financial (such as being named on an award) and organizational (such as general supervision of a research group) contributions are not considered sufficient for co-authorship of JOSS submissions, but active project direction and other forms of non-code contributions are". Perhaps @Y0dler can comment on this for clarification.

@Y0dler commented Oct 13, 2024

Hi @csoneson,

regarding data sharing: the repo was updated so that the notebooks to reproduce the data processing etc. have been added to peak-performance/docs/source/notebooks, and the raw data to peak-performance/docs/source/notebooks/paper raw data and its sub-directories. In case the repo changes down the road, the raw data can also be found on Zenodo under version 0.7.1. The data availability statement was changed accordingly.

Regarding the authorship, @MicroPhen did indeed contribute via active project direction. For example, it was his idea to perform the validation via synthetic data, and he helped identify a bug with the posterior predictive sampling. Also, I have since moved on to a different job, so further development of the software will be realized by a successor under the guidance of @MicroPhen.

We also merged the paper branch into the main branch and created a version 0.7.1 release. This does not contain any changes to the program code; it just adds the paper, raw data, and so on to the repo. Additionally, we changed the installation instructions (@lazear, @Adafede) since there were version conflicts between NumPy, PyMC, and numba.

@michaelosthege

> Is this algorithm only suitable for analysis of a single transition in targeted LC/MS data? Could it handle multiple transitions/product ions from the same precursor?

> I think we discussed a possibility to fit multiple peaks in a hierarchical model of sorts, but for a more exact answer I would refer this one to @michaelosthege.

Yes, using the functions from peak_performance.models it is possible to define a hierarchical PyMC model that could connect multiple peaks by, for example, sharing the retention time parameter.
This would enable a more holistic quantification of uncertainties and is something we would like to explore in the future, but it was out of scope for the current release of the library.
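
For illustration, here is a minimal sketch of what such a model could look like in plain PyMC (not using peak_performance.models); the Gaussian peak shape, priors, and synthetic data are assumptions chosen only to demonstrate how two transitions can share a retention time parameter:

```python
import numpy as np
import pymc as pm

# Synthetic stand-in data: two transitions of the same precursor on a shared time grid.
rng = np.random.default_rng(42)
t = np.linspace(0.0, 2.0, 120)
true_rt = 1.0
y = np.stack(
    [a * np.exp(-0.5 * ((t - true_rt) / 0.05) ** 2) for a in (3.0, 1.5)]
) + rng.normal(0.0, 0.1, size=(2, t.size))

with pm.Model() as hierarchical_peaks:
    # Group-level retention time shared by both peaks, with small per-transition offsets.
    mean_rt = pm.Normal("mean_rt", mu=1.0, sigma=0.2)
    rt_offset = pm.Normal("rt_offset", mu=0.0, sigma=0.02, shape=2)
    rt = pm.Deterministic("rt", mean_rt + rt_offset)

    # Per-transition peak height, width, and baseline (illustrative priors).
    height = pm.HalfNormal("height", sigma=5.0, shape=2)
    width = pm.HalfNormal("width", sigma=0.1, shape=2)
    baseline = pm.Normal("baseline", mu=0.0, sigma=1.0, shape=2)

    # Gaussian peak shape for each transition.
    mu_y = baseline[:, None] + height[:, None] * pm.math.exp(
        -0.5 * ((t[None, :] - rt[:, None]) / width[:, None]) ** 2
    )

    noise = pm.HalfNormal("noise", sigma=1.0)
    pm.Normal("obs", mu=mu_y, sigma=noise, observed=y)

    idata = pm.sample()
```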

@michaelosthege

@editorialbot set main as branch

@editorialbot

Done! branch is now main

@michaelosthege

@editorialbot generate pdf

@michaelosthege

@editorialbot set v0.7.1 as version

@editorialbot

I'm sorry @michaelosthege, I'm afraid I can't do that. That's something only editors are allowed to do.

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@michaelosthege

@editorialbot set 10.5281/zenodo.13925914 as archive

@editorialbot

I'm sorry @michaelosthege, I'm afraid I can't do that. That's something only editors are allowed to do.

@csoneson

@michaelosthege - I will update the version and set the archive at the end of the review process, for now we can leave them as they are.

@csoneson commented Dec 5, 2024

@editorialbot set 10.5281/zenodo.14261846 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.14261846

@csoneson commented Dec 5, 2024

@editorialbot set v0.7.2 as version

@editorialbot

Done! version is now v0.7.2

@csoneson commented Dec 5, 2024

@Y0dler Two minor things about the Zenodo archive:

  • could you order the authors in the same order as in the paper, for consistency?
  • the license in the GitHub repository is AGPL-3.0, while the one indicated on Zenodo is GPL>=3.0. Please make these consistent.

After this, I think we're ready to move on.

@Y0dler commented Dec 5, 2024

@csoneson I aligned the order of the authors with the paper and changed the license to "GNU Affero General Public License v3.0 only".

@csoneson commented Dec 5, 2024

Great, thanks for that - I'm going to hand over now to the track Associate EiC for the last steps. Thanks for submitting to JOSS!

@csoneson commented Dec 5, 2024

@editorialbot recommend-accept

@editorialbot

Attempting dry run of processing paper acceptance...

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1038/s41592-019-0686-2 is OK
- 10.1109/MCSE.2007.55 is OK
- 10.5281/zenodo.11201097 is OK
- 10.7717/peerj-cs.1516 is OK
- 10.1016/B978-0-12-405888-0.00001-5 is OK
- 10.1214/ss/1177011136 is OK
- 10.1021/ac60319a011 is OK
- 10.1002/biot.201700141 is OK
- 10.1002/1097-0290(20010205)72:3<346::aid-bit12>3.0.co;2-x is OK
- 10.1007/s11222-016-9696-4 is OK
- 10.21105/joss.01143 is OK
- 10.1038/s41586-020-2649-2 is OK
- 10.1021/ac202124t is OK
- 10.1021/acs.analchem.5b01521 is OK
- 10.1016/j.chroma.2018.11.076 is OK
- 10.1016/j.cherd.2021.09.003 is OK
- 10.1021/acs.analchem.5b03859 is OK
- 10.1021/ac60304a011 is OK
- 10.1021/ac60304a005 is OK

🟡 SKIP DOIs

- No DOI given, and none found for title: nutpie
- No DOI given, and none found for title: The No-U-Turn Sampler: Adaptively Setting Path Len...
- No DOI given, and none found for title: A class of distributions which includes the normal...
- No DOI given, and none found for title: Asymptotic Equivalence of Bayes Cross Validation a...

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

👋 @openjournals/bcm-eics, this paper is ready to be accepted and published.

Check final proof 👉📄 Download article

If the paper PDF and the deposit XML files look good in openjournals/joss-papers#6219, then you can now move forward with accepting the submission by compiling again with the command @editorialbot accept

editorialbot added the recommend-accept label on Dec 5, 2024
@Kevin-Mattheus-Moerman commented Dec 10, 2024

@MicroPhen as AEiC for JOSS I will now help to process this submission for acceptance in JOSS. Below are some final checks, some of which may require your attention:

Checks on repository

  • Project has OSI approved license
  • Project features contributing guidelines

Checks on review issue

  • Review completed
  • Software version tag listed here matches a tagged release

Checks on archive

  • Archive listed title and authors matches paper
  • Archive listed license matches software license
  • Archive listed version tag matches tagged release (and includes a potential v).

Checks on paper

  • Checked paper formatting
  • Checked affiliations to make sure country acronyms are not used
  • Checked reference rendering
  • Checked if pre-print citations can be updated by published versions
  • Checked for typos

Remaining points:

As you can see, most seems in order; however, below are some points that require your attention 👇:

  • The following words are European English: focussed, analysed; while the following are American English: analyze, optimization, utilizing, idealized, realized, visualizations, visualization, conceptualized. We suggest you choose one form and use it consistently.
  • On the Zenodo archive I would recommend removing the statements in brackets for the authors.
    [screenshot: Zenodo author list with bracketed role statements]
    They seem unusual, and "Related person" sounds like a family member (e.g., a brother) to me. Do you mean to say "contributor"? Removing these is not a requirement; I am merely stating that they appear unnecessary and are perhaps confusing/wrong.

@Y0dler commented Dec 10, 2024

@Kevin-Mattheus-Moerman Thanks for bringing this to our attention. I've removed the roles from Zenodo. To be fair, I only added them because at the time I was under the impression they were mandatory. The inconsistent spelling is also fixed.

Does that mean we have to create a new release for changing these two words, or can we leave it at the current one?

@Y0dler commented Dec 10, 2024

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@Kevin-Mattheus-Moerman

@editorialbot accept

@editorialbot

Doing it live! Attempting automated processing of paper acceptance...

@editorialbot

Ensure proper citation by uploading a plain text CITATION.cff file to the default branch of your repository.

If using GitHub, a Cite this repository menu will appear in the About section, containing both APA and BibTeX formats. When exported to Zotero using a browser plugin, Zotero will automatically create an entry using the information contained in the .cff file.

You can copy the contents for your CITATION.cff file here:

CITATION.cff

cff-version: "1.2.0"
authors:
- family-names: Nießer
  given-names: Jochen
  orcid: "https://orcid.org/0000-0001-5397-0682"
- family-names: Osthege
  given-names: Michael
  orcid: "https://orcid.org/0000-0002-2734-7624"
- family-names: Lieres
  given-names: Eric
  name-particle: von
  orcid: "https://orcid.org/0000-0002-0309-8408"
- family-names: Wiechert
  given-names: Wolfgang
  orcid: "https://orcid.org/0000-0001-8501-0694"
- family-names: Noack
  given-names: Stephan
  orcid: "https://orcid.org/0000-0001-9784-3626"
doi: 10.5281/zenodo.14261846
message: If you use this software, please cite our article in the
  Journal of Open Source Software.
preferred-citation:
  authors:
  - family-names: Nießer
    given-names: Jochen
    orcid: "https://orcid.org/0000-0001-5397-0682"
  - family-names: Osthege
    given-names: Michael
    orcid: "https://orcid.org/0000-0002-2734-7624"
  - family-names: Lieres
    given-names: Eric
    name-particle: von
    orcid: "https://orcid.org/0000-0002-0309-8408"
  - family-names: Wiechert
    given-names: Wolfgang
    orcid: "https://orcid.org/0000-0001-8501-0694"
  - family-names: Noack
    given-names: Stephan
    orcid: "https://orcid.org/0000-0001-9784-3626"
  date-published: 2024-12-13
  doi: 10.21105/joss.07313
  issn: 2475-9066
  issue: 104
  journal: Journal of Open Source Software
  publisher:
    name: Open Journals
  start: 7313
  title: PeakPerformance - A tool for Bayesian inference-based fitting
    of LC-MS/MS peaks
  type: article
  url: "https://joss.theoj.org/papers/10.21105/joss.07313"
  volume: 9
title: PeakPerformance - A tool for Bayesian inference-based fitting of
  LC-MS/MS peaks

If the repository is not hosted on GitHub, a .cff file can still be uploaded to set your preferred citation. Users will be able to manually copy and paste the citation.

Find more information on .cff files here and here.

@editorialbot

🐘🐘🐘 👉 Toot for this paper 👈 🐘🐘🐘

@editorialbot

🦋🦋🦋 👉 Bluesky post for this paper 👈 🦋🦋🦋

@editorialbot

🚨🚨🚨 THIS IS NOT A DRILL, YOU HAVE JUST ACCEPTED A PAPER INTO JOSS! 🚨🚨🚨

Here's what you must now do:

  1. Check final PDF and Crossref metadata that was deposited 👉 Creating pull request for 10.21105.joss.07313 joss-papers#6253
  2. Wait five minutes, then verify that the paper DOI resolves https://doi.org/10.21105/joss.07313
  3. If everything looks good, then close this review issue.
  4. Party like you just published a paper! 🎉🌈🦄💃👻🤘

Any issues? Notify your editorial technical team...

editorialbot added the accepted and published labels on Dec 13, 2024
@Kevin-Mattheus-Moerman

@MicroPhen congratulations on this JOSS publication!

@csoneson thanks for editing!

And a special thank you to the reviewers: @Adafede, @lazear !!

@editorialbot

🎉🎉🎉 Congratulations on your paper acceptance! 🎉🎉🎉

If you would like to include a link to your paper from your README, use the following code snippets:

Markdown:
[![DOI](https://joss.theoj.org/papers/10.21105/joss.07313/status.svg)](https://doi.org/10.21105/joss.07313)

HTML:
<a style="border-width:0" href="https://doi.org/10.21105/joss.07313">
  <img src="https://joss.theoj.org/papers/10.21105/joss.07313/status.svg" alt="DOI badge" >
</a>

reStructuredText:
.. image:: https://joss.theoj.org/papers/10.21105/joss.07313/status.svg
   :target: https://doi.org/10.21105/joss.07313

This is how it will look in your documentation:

[DOI badge]

We need your help!

The Journal of Open Source Software is a community-run journal and relies upon volunteer effort. If you'd like to support us please consider doing either one (or both) of the following:

  • Volunteering to review for us sometime in the future. You can add your name to the reviewer list here: https://reviewers.joss.theoj.org
  • Making a small donation to support our running costs here: https://numfocus.org/donate-to-joss

@Adafede commented Dec 17, 2024

> And a special thank you to the reviewers: @Adafede, @lazear !!

Was a great experience! Any news on openjournals/joss#813?

@michaelosthege

Thank you all for the thorough reviews and your patience with us learning how to do our first JOSS paper :)

And thanks for the reminder to update our ORCID entries!

@Y0dler commented Dec 18, 2024

Fully agreed. Thanks for the quality reviews, was a good experience 👍
