
[REVIEW]: ysoisochrone: A Python package to estimate masses and ages for YSOs #7493

Open · editorialbot opened this issue Nov 18, 2024 · 58 comments
Labels: review, Track: 1 (AASS) Astronomy, Astrophysics, and Space Sciences

@editorialbot

editorialbot commented Nov 18, 2024

Submitting author: @DingshanDeng (Dingshan Deng)
Repository: https://github.com/DingshanDeng/ysoisochrone
Branch with paper.md (empty if default branch):
Version: v1.0.0
Editor: @warrickball
Reviewers: @HeloiseS, @vsquicciarini
Archive: 10.5281/zenodo.14847202

Status


Status badge code:

HTML: <a href="https://joss.theoj.org/papers/d7ebe3bc625fbfc7372bee2bab4e53ce"><img src="https://joss.theoj.org/papers/d7ebe3bc625fbfc7372bee2bab4e53ce/status.svg"></a>
Markdown: [![status](https://joss.theoj.org/papers/d7ebe3bc625fbfc7372bee2bab4e53ce/status.svg)](https://joss.theoj.org/papers/d7ebe3bc625fbfc7372bee2bab4e53ce)

Reviewers and authors:

Please avoid lengthy details of difficulties in the review thread. Instead, please create a new issue in the target repository and link to those issues (especially acceptance-blockers) by leaving comments in the review thread below. (For completists: if the target issue tracker is also on GitHub, linking the review thread in the issue or vice versa will create corresponding breadcrumb trails in the link target.)

Reviewer instructions & questions

@HeloiseS & @vsquicciarini, your review will be checklist based. Each of you will have a separate checklist that you should update when carrying out your review.
First of all you need to run this command in a separate comment to create the checklist:

@editorialbot generate my checklist

The reviewer guidelines are available here: https://joss.readthedocs.io/en/latest/reviewer_guidelines.html. Any questions/concerns please let @warrickball know.

Please start on your review when you are able, and be sure to complete your review in the next six weeks, at the very latest.

Checklists

📝 Checklist for @HeloiseS

📝 Checklist for @vsquicciarini

@editorialbot added the review and Track: 1 (AASS) Astronomy, Astrophysics, and Space Sciences labels on Nov 18, 2024
@editorialbot

Hello humans, I'm @editorialbot, a robot that can help you with some common editorial tasks.

For a list of things I can do to help you, just type:

@editorialbot commands

For example, to regenerate the paper pdf after making changes in the paper's md or bib files, type:

@editorialbot generate pdf

@editorialbot

Software report:

github.com/AlDanial/cloc v 1.90  T=0.11 s (592.5 files/s, 172060.6 lines/s)
-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
HTML                            13            420             39           4556
SVG                              3              0              0           2689
Python                           7            774           1451           1372
CSS                              7            223             57           1016
JavaScript                      11            143            234            877
Jupyter Notebook                 4              0           3061            379
CSV                              3              0              0            196
TeX                              1             18              0            186
Markdown                         5             72              0            158
reStructuredText                 6            159            348             47
YAML                             2              6             19             31
DOS Batch                        1              8              1             26
make                             1              4              7              9
-------------------------------------------------------------------------------
SUM:                            64           1827           5217          11542
-------------------------------------------------------------------------------

Commit count by author:

    74	Dingshan Deng

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.2458/azu_uapress_9780816531240-ch010 is OK
- 10.1111/j.1365-2966.2012.21948.x is OK
- 10.48550/arXiv.astro-ph/0003477 is OK
- 10.48550/arXiv.2203.09930 is OK
- 10.1111/j.1365-2966.2011.19945.x is OK
- 10.1051/0004-6361:20042185 is OK
- 10.3847/0004-637X/831/2/125 is OK
- 10.1088/0004-637X/771/2/129 is OK
- 10.1051/0004-6361/201527613 is OK
- 10.1051/0004-6361/201425481 is OK
- 10.3847/1538-3881/acf4f0 is OK

🟡 SKIP DOIs

- None

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@editorialbot

Paper file info:

📄 Wordcount for paper.md is 682

✅ The paper includes a Statement of need section

@editorialbot

License info:

✅ License found: MIT License (Valid open source OSI approved license)

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@warrickball

Hi @HeloiseS & @vsquicciarini, and thanks again for agreeing to review (and so quickly!). This is the review thread for the paper. All our correspondence is now intended to happen here but you're welcome to email me with any questions, too.

Please read the "Reviewer instructions & questions" in the first comment above, and generate your checklists by commenting @editorialbot generate my checklist on this issue. As you go over the submission, please check off any items that you feel have been satisfied. There are also links to the JOSS reviewer guidelines.

The JOSS review is different from most other journals. We aim to work with the authors to help them meet our criteria instead of merely passing judgement on the submission. We also encourage reviewers to submit issues and pull requests on the software repository. When doing so, please mention openjournals/joss-reviews#7493 so that the issue/PR is linked to this thread. Please also feel free to comment and ask questions on this thread. JOSS editors have found it better to post comments/questions/suggestions as you come across them instead of waiting until you've reviewed the entire package.

We aim for the review process to be completed within about 4-6 weeks so start whenever you can. JOSS reviews are iterative—rather than monolithic—and the authors can start responding while you continue to review other parts of the submission.

If it suits your workflow, you're welcome to assign yourself to this issue in the GitHub UI.

@warrickball

Hi @HeloiseS & @vsquicciarini, I just wanted to check in on how your reviews were progressing and if you need anything from me. Note that we find JOSS reviews are most efficient if you start commenting on the submission as you work through the checklist, rather than waiting to accumulate a single, monolithic review, as one would for a traditional journal.

@HeloiseS

HeloiseS commented Dec 7, 2024

Hi Warrick - things have been crazy here and I'm just about seeing the end of the tunnel. I'm hoping to start this week 👍
Thanks to you and the authors for your patience

@HeloiseS

HeloiseS commented Dec 7, 2024

Review checklist for @HeloiseS

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/DingshanDeng/ysoisochrone?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@DingshanDeng) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@vsquicciarini

vsquicciarini commented Dec 12, 2024 via email

@HeloiseS

(A running list I'll update throughout my review of stuff that's not big enough to be put in an issue)

Nice-to-have

  • Jupyter (notebook) is not listed as a dependency but is needed to run the tutorials

@HeloiseS

Comments on the paper

  • YSOs should be expanded at first use (it's currently expanded only at the second occurrence)
  • lines 35 and 36: it looks like a list of bullet points didn't format properly
  • line 44: the examples aren't showing up.

A more general question for @DingshanDeng: do none of the other track-fitting codes allow you to use your own YSO tracks with their fitting methods?

For example, aren't there other open-source codes to which you could add the YSO functionality rather than creating a whole new package?

  • I'd like a bit more text on why a separate code is needed and why joining other endeavours isn't workable.

For the paper that's all I have so you can get started on those changes as well.


Note to @warrickball - I've reviewed the paper and install. I started on the tutorials but they are giving me an error, so until Issue #6 is sorted I've done pretty much all I can at this point.

If changes get made and I miss them (because GH floods my inbox with many things) feel free to poke me. This is now top of my list to get done, and if I take more than 24h to show activity it's because I've missed something :)

@DingshanDeng

Thanks for reviewing the code and for your helpful comments on the package and paper! I will start implementing them later next week, starting with fixing the bugs.

@vsquicciarini

vsquicciarini commented Dec 23, 2024

Review checklist for @vsquicciarini

Conflict of interest

  • I confirm that I have read the JOSS conflict of interest (COI) policy and that: I have no COIs with reviewing this work or that any perceived COIs have been waived by JOSS for the purpose of this review.

Code of Conduct

General checks

  • Repository: Is the source code for this software available at https://github.com/DingshanDeng/ysoisochrone?
  • License: Does the repository contain a plain-text LICENSE or COPYING file with the contents of an OSI approved software license?
  • Contribution and authorship: Has the submitting author (@DingshanDeng) made major contributions to the software? Does the full list of paper authors seem appropriate and complete?
  • Substantial scholarly effort: Does this submission meet the scope eligibility described in the JOSS guidelines?
  • Data sharing: If the paper contains original data, data are accessible to the reviewers. If the paper contains no original data, please check this item.
  • Reproducibility: If the paper contains original results, results are entirely reproducible by reviewers. If the paper contains no original results, please check this item.
  • Human and animal research: If the paper contains original data research on human subjects or animals, does it comply with JOSS's human participants research policy and/or animal research policy? If the paper contains no such data, please check this item.

Functionality

  • Installation: Does installation proceed as outlined in the documentation?
  • Functionality: Have the functional claims of the software been confirmed?
  • Performance: If there are any performance claims of the software, have they been confirmed? (If there are no claims, please check off this item.)

Documentation

  • A statement of need: Do the authors clearly state what problems the software is designed to solve and who the target audience is?
  • Installation instructions: Is there a clearly-stated list of dependencies? Ideally these should be handled with an automated package management solution.
  • Example usage: Do the authors include examples of how to use the software (ideally to solve real-world analysis problems).
  • Functionality documentation: Is the core functionality of the software documented to a satisfactory level (e.g., API method documentation)?
  • Automated tests: Are there automated tests or manual steps described so that the functionality of the software can be verified?
  • Community guidelines: Are there clear guidelines for third parties wishing to 1. Contribute to the software 2. Report issues or problems with the software 3. Seek support

Software paper

  • Summary: Has a clear description of the high-level functionality and purpose of the software for a diverse, non-specialist audience been provided?
  • A statement of need: Does the paper have a section titled 'Statement of need' that clearly states what problems the software is designed to solve, who the target audience is, and its relation to other work?
  • State of the field: Do the authors describe how this software compares to other commonly-used packages?
  • Quality of writing: Is the paper well written (i.e., it does not require editing for structure, language, or writing quality)?
  • References: Is the list of references complete, and is everything cited appropriately that should be cited (e.g., papers, datasets, software)? Do references in the text use the proper citation syntax?

@vsquicciarini

Hi @DingshanDeng, first of all I would like to apologize to you for the long wait. Being amidst a job change with a spare month between the two contracts, I thought I would have much more free time than I eventually did. I am truly sorry about that.

I have read the paper and the documentation, and tested the Jupyter Notebook you provided us with. My overall impression of your work is positive, even though I think there are a few issues to be discussed together. I am going to write down comments related to the paper and the documentation. I would like to keep testing the code on my own for a couple of additional days before discussing the package from a technical point of view.

@vsquicciarini

vsquicciarini commented Dec 28, 2024

Comments on the paper

  • line 9: stellar age -> stellar ages.
  • line 15: I would use the singular: the stellar mass, age, its effective temperature etc.
  • line 18: please add references here for "as assumed in previous literature". The point here is not trivial: to me, a log-uniform prior for Teff and L is more physically sound than a uniform prior. The assumption of linearly uniform priors is not uninformative, because it creates a bias towards hotter and brighter stars. Both the shape of the initial mass function of stars and their evolutionary timescales imply that the occurrence of stars decreases as a function of Teff and L. Hence, using a log-uniform prior would give less biased estimates of the underlying distribution. I know that this is a strong request, but I would suggest replacing the observables Teff and L with logTeff and logL (a sketch of the corresponding change of variables is given after this list).
  • line 24: it should be specified that the Feiden models here are non-magnetic. I could find this information from the quoted Pascucci+16 reference, but I think that this should be explicitly stated.
  • line 24: the choice of combining the Baraffe and Feiden models is based on the study by Herczeg & Hillenbrand (2015), quoted in Pascucci+16. While I do not question their models and results, I note that over the last 10 years there have been both significant improvements in stellar models and the observational revolution brought about by Gaia. My point is that a more recent study should be used here to justify the choice of the isochrone set. If this is not possible, I think that it would be perfectly fine to state more explicitly that your choice is based on the 2015 analysis, which might not be up to date, and that you intend to add new models in later versions of your package.
  • general comment: I would appreciate it if you could detail in the text why you chose to use L and Teff as input parameters, instead of the measured photometry. On my side, I find this choice problematic for several reasons (that I will detail below); but I am not really an expert on protostars and <5 Myr-old stars with a full disk, so there might be reasons for your choice that I am overlooking.
  • statement of need: I think this has to be rephrased: the "e.g." parenthesis is currently empty, and it is not clear what the advantage of this package over previous work is. I'm sorry to say this, but I have written a fully documented, open-source package, MADYS (https://madys.readthedocs.io/en/latest/), that can be used to derive ages and masses for pre-main-sequence stars. Another package, isochrones, is cited in the documentation but not here. I think that both packages should be, if not discussed, at least acknowledged here.
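
Regarding the prior point in the "line 18" bullet above: a minimal sketch of the corresponding change of variables (the measurement values here are hypothetical, and this is not ysoisochrone code), converting Teff and L and their uncertainties into log10 space with standard error propagation, so that a uniform prior on the log quantities corresponds to the log-uniform prior argued for above.

```python
import numpy as np

# Hypothetical measured values (for illustration only): Teff in K, L in Lsun
teff, teff_err = 3900.0, 100.0
lum, lum_err = 0.8, 0.1

# Standard error propagation into log10 space:
# sigma_log10(x) = sigma_x / (x * ln 10)
log_teff = np.log10(teff)
log_teff_err = teff_err / (teff * np.log(10))
log_lum = np.log10(lum)
log_lum_err = lum_err / (lum * np.log(10))

# A uniform prior on (logTeff, logL) is then equivalent to a log-uniform
# prior on (Teff, L), which avoids the bias towards hot, bright stars
# described in the comment above.
print(f"logTeff = {log_teff:.3f} +/- {log_teff_err:.3f}")
print(f"logL    = {log_lum:.3f} +/- {log_lum_err:.3f}")
```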

On the choice of input observables

As I mentioned before, I think it is necessary to explain why (L, Teff) were chosen over photometry as input variables. Based on my experience, I consider the latter to be much more reliable than the former because:

  1. luminosity and effective temperature are not observed quantities; they are derived from photometry assuming some atmospheric model. Photometry is a real observable, dependent only on the filter response and the zero point of the instrument;
  2. being model-dependent, two measurements of L and Teff from two different papers might assume different underlying models, ages, parallaxes, metallicities, bolometric corrections, and be based on different observational data (photometry, spectroscopy); this makes any direct comparison of input data extremely difficult;
  3. as a consequence of 1) and 2), an (L, Teff) point obtained through model A and then fitted by your package to model B will be affected by an error which is not just random, but also comprises the systematic difference between model A and model B. How is this taken into account in the error estimation?

@vsquicciarini

vsquicciarini commented Dec 28, 2024

Some comments on the Jupyter Notebook

As I mentioned above, I may write some additional technical comments on the code itself after testing it more carefully. For the time being, I'll just write down some general comments that came to mind while executing the notebook:

  • 4.3) Estimate mass and ages: I understand the practical advantage of attempting to estimate masses even for underluminous stars. However, I think that the results here should be taken with extreme caution, and the user should be aware of that. I would add a flag column in df_output that specifies whether the star is within the grid, underluminous, or overluminous. Also, I notice that no error is provided on the mass and age here. This should be fixed, because 1) uncertainties must always be estimated, and 2) for non-standard stars it is even more crucial to understand how well the mass is constrained. The uncertainty on Teff and the uncertainty on the assumed age should be used to evaluate the corresponding mass uncertainty.
  • 5.1) Closest grid point on isochrone: as above, I think it is mandatory to provide uncertainty estimates here. I would estimate the uncertainties in a Monte Carlo way, i.e. by picking the closest (mass, age) grid point but with L and Teff randomly altered N times (let's say, 1000) according to their uncertainties (a sketch of this approach follows after this list). While I understand that this is not the preferred method and it is only provided to reproduce literature results, I stress the importance of doing so to allow for meaningful comparison with the literature itself.
  • 5.2) Assuming age to derive stellar masses: it is not clear to me what the difference between the two possibilities is. Am I right in thinking that in the first case it interpolates across mass, while in the second just takes the closest track? I think this should be better rephrased.
  • general comment: if a star is underluminous due to disk absorption, we should expect an effect on temperature as well. This means that when we use an age and a Teff, the Teff should be biased to lower values too (due to differential absorption). Shouldn't this be taken into account by using some kind of de-reddening? Also, depending on the way Teff was derived, Teff itself might be more or less biased. For instance, if Teff was derived only using NIR data, then the disk luminosity would introduce a larger bias compared to cases where UV measurements are available. Again, this stresses how the choice of Teff as an input parameter can induce a lot of unseen systematics.
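
A minimal sketch of the Monte Carlo approach suggested in the 5.1) bullet above (the isochrone grid and input values are invented placeholders, not the real Baraffe/Feiden tracks or the package's API): perturb the observables N times according to their uncertainties, take the closest grid point each time, and report the spread of the draws as the uncertainty.

```python
import numpy as np

rng = np.random.default_rng(42)

# Placeholder isochrone grid (illustration only): each grid point has
# (logTeff, logL, mass [Msun], age [Myr]).
grid_logteff = rng.uniform(3.4, 3.8, size=5000)
grid_logl = rng.uniform(-2.0, 1.0, size=5000)
grid_mass = rng.uniform(0.1, 2.0, size=5000)
grid_age = rng.uniform(0.5, 100.0, size=5000)

# Hypothetical measurement and 1-sigma uncertainties.
logteff, logteff_err = 3.59, 0.01
logl, logl_err = -0.30, 0.10

n_mc = 1000
masses = np.empty(n_mc)
ages = np.empty(n_mc)
for i in range(n_mc):
    # Perturb the observables according to their uncertainties ...
    t = rng.normal(logteff, logteff_err)
    l = rng.normal(logl, logl_err)
    # ... and pick the closest grid point in the (logTeff, logL) plane
    # (in practice each axis could be weighted by its uncertainty).
    idx = np.argmin((grid_logteff - t) ** 2 + (grid_logl - l) ** 2)
    masses[i] = grid_mass[idx]
    ages[i] = grid_age[idx]

# The spread of the Monte Carlo draws gives the uncertainty estimate.
print(f"mass = {np.median(masses):.2f} +/- {np.std(masses):.2f} Msun")
print(f"age  = {np.median(ages):.1f} +/- {np.std(ages):.1f} Myr")
```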

@vsquicciarini

vsquicciarini commented Dec 28, 2024

Comments on the documentation

  • "The available stellar evolutionary models": in the sentence "This package is mainly built to study the young-stellar-objects, so by default; it only utilizes the stellar evolutionary tracks for pre-main-sequence stars ({\rm stellar\ age}< 500,{\rm Myrs}). " , I think the parenthesis is misleading because the duration of the pre-MS strongly depends on mass. A solar mass star only stays on the pre-MS for about 50 Myr. I suggest removing the parenthesis.
  • In general, I find that the paragraph above could be written better. I suggest: "For the time being, this package is meant to be used to study young stellar objects. In other words, it only utilizes the stellar evolutionary tracks for pre-main-sequence stars, avoiding the problem of dealing with main-sequence and post-main-sequence targets (whose luminosity rises again and would overlap with the pre-main-sequence phase). In future releases, the package might be expanded to also handle main-sequence stars (e.g., Fernandes et al. 2023), but user-specified customized stellar evolutionary tracks would be needed."
  • Feiden: again, it should be specified that the isochrones are non-magnetic.
  • MIST: since you are quoting the isochrones package, I suggest quoting my MADYS package as well, which can handle MIST, PARSEC (1.2 and 2.0), Feiden, Baraffe and many other models for pre-MS or MS stars.

@warrickball

Hi @DingshanDeng, I see @vsquicciarini and @HeloiseS have both left various comments for you. Are you progressing on resolving these?

@HeloiseS

HeloiseS commented Jan 23, 2025

Hi @DingshanDeng!

Thanks for making the suggested changes and adding text to explain the purpose and need for the code.
Have you updated the pip version? I can see it's dated from Jan 15th but the unit tests are from last week and the notebooks were updated a few hours ago.
(once that's done I'll update your code on my conda env, and rerun all the tests and notebooks - if it all runs that'll be it on my end!)

@vsquicciarini

As a general question, to my eyes it seems from the previous plots that the two theoretical grids, albeit very close, do not exactly overlap. Is that true?

  • If the models do not overlap, I think it has to be stated explicitly in the documentation, providing a figure like Fig. 1 of Pascucci+16. In this case, I expect the situation for a star lying close to T_eff=3900 K to be tricky. If the star is fitted with Feiden isochrones when T_eff > 3900 K and with Baraffe isochrones otherwise, the shape of the posteriors will be skewed (I mean, they should become discontinuous and bimodal). You may want to investigate the impact of this on the derived parameters and uncertainties.
  • If the models are simultaneously fit with a polynomial curve to merge them into a single model, the process needs to be explained in the documentation.

@vsquicciarini

vsquicciarini commented Jan 23, 2025

I've tried to run a test for a star with a temperature close to 3900 K to investigate what happens. These are my input parameters:

Image

and here is the HRD:

Image

I verified empirically that the star is just fitted with the Baraffe isochrones (the posteriors look the same). This means that the whole grid, both hotter and colder than 3900 K, is set equal to Baraffe's grid. But should not the program compare data to a blend of Baraffe (T_eff<3900 K) and Feiden (T_eff>3900 K) isochrones?

There is a second problem. There is a sharp cut at log(t)~7.7 (t~50 Myr) in the posteriors, meaning that solutions with t>50 Myr are not considered. Why does this happen? Is there some hidden cropping of the grid that discards everything older than 50 Myr? Notice that this happens when selecting Feiden's tracks too:

Image

Image

(P.S.: I suggest hiding the printing of RuntimeWarnings here, if you think that they are being properly handled; a generic way to do so is sketched below the screenshot.)

Image
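
For reference, two generic ways to silence such RuntimeWarnings once they have been confirmed to be harmless (a sketch using standard NumPy/Python tools, not the package's own code):

```python
import warnings
import numpy as np

# Option 1: silence NumPy floating-point RuntimeWarnings locally.
with np.errstate(divide="ignore", invalid="ignore"):
    result = np.log(np.array([0.0, 1.0, 2.0]))  # no "divide by zero" warning printed

# Option 2: filter RuntimeWarnings issued through the warnings module.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=RuntimeWarning)
    result = np.log(np.array([0.0, 1.0, 2.0]))
```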

@vsquicciarini

I have no other comment :)

@DingshanDeng

Hi @DingshanDeng!

Thanks for making the suggested changes and adding text to explain the purpose and need for the code. Have you updated the pip version? I can see it's dated from Jan 15th but the unit tests are from last week and the notebooks were updated a few hours ago. (once that's done I'll update your code on my conda env, and rerun all the tests and notebooks - if it all runs that'll be it on my end!)

Thanks for the reminder. I have not updated the pip version. I will update it once I have finished addressing the tests/comments suggested above :-).

@DingshanDeng

DingshanDeng commented Feb 3, 2025

Hi @vsquicciarini

Thanks for your comments and questions. Please see my reply below.

As a general question, to my eyes it seems from the previous plots that the two theoretical grids, albeit very close, do not exactly overlap. Is that true?

If the models do not overlap, I think it has to be stated explicitly in the documentation, providing a figure like Fig. 1 of Pascucci+16. In this case, I expect the situation for a star lying close to T_eff=3900 K to be tricky. If the star is fitted with Feiden isochrones when T_eff > 3900 K and with Baraffe isochrones otherwise, the shape of the posteriors will be skewed (I mean, they should become discontinuous and bimodal). You may want to investigate the impact of this on the derived parameters and uncertainties.
If the models are simultaneously fit with a polynomial curve to merge them into a single model, the process needs to be explained in the documentation.

I verified empirically that the star is just fitted with the Baraffe isochrones (the posteriors look the same). This means that the whole grid, both hotter and colder than 3900 K, is set equal to Baraffe's grid. But should not the program compare data to a blend of Baraffe (T_eff<3900 K) and Feiden (T_eff>3900 K) isochrones?

Re:
The Feiden and Baraffe models actually overlap well at Teff ~ 3900 K except for the youngest isochrones at ~0.5 Myr. Please see the plot below (here I plot the Feiden tracks in dark red and the Baraffe tracks in orange):

Image

So in most cases it does not really matter whether we use the tracks from Feiden or Baraffe to derive the mass and age. What we do is simply use the Feiden tracks for hotter targets and the Baraffe tracks for cooler ones; we do not merge/blend the two sets of tracks. This is clarified in the documentation as well as in the tutorial.
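
To make that selection explicit, here is a minimal sketch of the logic (the 3,900 K boundary and the function name are illustrative assumptions for this comment, not the package's actual implementation):

```python
# Illustrative only: choose which evolutionary track set to use per target,
# based on its input effective temperature.
TEFF_BOUNDARY = 3900.0  # K; approximate region where the two grids overlap

def choose_track_set(teff_k: float) -> str:
    """Return the name of the track set to use for a given Teff."""
    return "Feiden" if teff_k > TEFF_BOUNDARY else "Baraffe"

for teff in (3500.0, 3895.0, 4200.0):
    print(teff, "->", choose_track_set(teff))
```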

A couple of comments related to the function ysoisochrone.plotting.plot_hr_diagram().

Re:
Also, following your suggestion, we added a new function called
ysoisochrone.plotting.simple_plot_hr_diagram_feiden_n_baraffe(df_prop)
to make a Pascucci+2016 Figure 1 style plot. This is also added to the tutorial/documentation.

We still keep the previous plots with the Feiden and Baraffe tracks in two different panels to demonstrate that ysoisochrone.plotting.plot_hr_diagram() is a versatile function, and also to emphasize that we use different tracks for different stars according to their input effective temperature.

The legend is updated in both functions.

There is a second problem. There is a sharp cut at log(t)~7.7 (t~50 Myr) in the posteriors, meaning that solutions with t>50 Myr are not considered. Why does this happen? Is there some hidden cropping of the grid that discards everything older than 50 Myr? Notice that this happens when selecting Feiden's tracks too:

Re:
For the second question, related to the test target: we updated the code so that the grid now extends to the ZAMS (at about 100 Myr) in the tracks. In the previous version, a hard cut was set at 50 Myr, and this is now fixed. Please see the new plots below, using the Baraffe and Feiden tracks, respectively. We note that the x- and y-axis scales of the two grids are different.

Image Image

As demonstrated above, even for a test target with Teff = 3895 K, the estimated mass and age from the Feiden and Baraffe tracks are relatively close, suggesting that the two sets of tracks overlap well.

The runtime warning message is also properly handled.


Reply to all,

The updated code has also been uploaded to PyPI. Thanks again for all of your helpful comments and suggestions. I hope the reply above answers your questions. If you have any additional questions and/or comments, please let me know.

Best

@DingshanDeng

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@vsquicciarini

Hi @DingshanDeng, sorry to get back to you so late. Thanks for the time you took to implement these changes and to clarify the points raised by some of my remarks. I do not have any other comment to make. Green light on my side!

@DingshanDeng

Hi all,

Thanks again for your helpful comments and suggestions. I just released the first production-ready version, v1.0.0, to PyPI and GitHub. We hope this package/tool will be useful to the whole community.

Hi @warrickball, is there anything else that we need to go through or edit for publication? If so, please let me know, and I should be able to address it this week.

Best

@warrickball

I'd just like to check in with @HeloiseS, who wrote

Thanks for making the suggested changes and adding text to explain the purpose and need for the code. Have you updated the pip version? I can see it's dated from Jan 15th but the unit tests are from last week and the notebooks were updated a few hours ago. (once that's done I'll update your code on my conda env, and rerun all the tests and notebooks - if it all runs that'll be it on my end!)

I see that you've ticked everything off in the review checklist. Have you run everything you wanted to and, if so, can you confirm you're happy to recommend the paper be accepted? Thanks!

@HeloiseS

Yes all good!!!

@warrickball

Great! In that case, @DingshanDeng, the paper is accepted. Congratulations! I'll shortly generate a post-review checklist of items to work through. The main one will be for you to create an archive of the code to upload to an archiving service like Zenodo or Figshare.

I'll also start doing some editorial checks of the paper and either post comments here or open a PR for the finer details. I probably won't get to any of this tonight, though!

@warrickball

warrickball commented Feb 10, 2025

Post-Review Checklist for Editor and Authors

Additional Author Tasks After Review is Complete

  • Double check authors and affiliations (including ORCIDs)
  • Make a release of the software with the latest changes from the review and post the version number here. This is the version that will be used in the JOSS paper.
  • Archive the release on Zenodo/figshare/etc and post the DOI here.
  • Make sure that the title and author list (including ORCIDs) in the archive match those in the JOSS paper.
  • Make sure that the license listed for the archive is the same as the software license.

Editor Tasks Prior to Acceptance

  • Read the text of the paper and offer comments/corrections (as either a list or a pull request)
  • Check that the archive title, author list, version tag, and the license are correct
  • Set archive DOI with @editorialbot set <DOI here> as archive
  • Set version with @editorialbot set <version here> as version
  • Double check rendering of paper with @editorialbot generate pdf
  • Specifically check the references with @editorialbot check references and ask author(s) to update as needed
  • Recommend acceptance with @editorialbot recommend-accept

@DingshanDeng

We double-checked the authors and affiliations, including ORCIDs. We made a release with the latest changes, v1.0.0 (or v1.0.0-zenodo, which is the same). We also archived the release on Zenodo with DOI 10.5281/zenodo.14847202. The title, author list, and license in the archive match those in the JOSS paper. Thanks!

@warrickball

@editorialbot set 10.5281/zenodo.14847202 as archive

@editorialbot

Done! archive is now 10.5281/zenodo.14847202

@warrickball

@editorialbot set v1.0.0 as version

@editorialbot

Done! version is now v1.0.0

@warrickball

@editorialbot check references

@editorialbot

Reference check summary (note 'MISSING' DOIs are suggestions that need verification):

✅ OK DOIs

- 10.1051/0004-6361/202244193 is OK
- 10.2458/azu_uapress_9780816531240-ch010 is OK
- 10.1111/j.1365-2966.2012.21948.x is OK
- 10.48550/arXiv.astro-ph/0003477 is OK
- 10.3847/1538-4357/ab3e3b is OK
- 10.48550/arXiv.2203.09930 is OK
- 10.1111/j.1365-2966.2011.19945.x is OK
- 10.1051/0004-6361:20042185 is OK
- 10.1088/0004-637X/808/1/23 is OK
- 10.3847/0004-637X/831/2/125 is OK
- 10.1088/0004-637X/771/2/129 is OK
- 10.1051/0004-6361/201527613 is OK
- 10.1051/0004-6361/201425481 is OK
- 10.3847/1538-3881/acf4f0 is OK
- 10.1051/0004-6361/201629929 is OK

🟡 SKIP DOIs

- None

❌ MISSING DOIs

- None

❌ INVALID DOIs

- None

@warrickball

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@warrickball

Hi @DingshanDeng, thanks for creating the archive, where everything looks correct. I'll have a closer look at the paper hopefully over the next few days but there are a few things that you can address already.

  1. I think it makes more sense to put the Statement of Need at the start of the paper, as it leads better than the current Backgrounds and Methods section.
  2. You've presumably gathered many of your citations from NASA ADS, which uses shorthands like \mnras for journal names that our pipeline doesn't understand. Could you replace these in the BibTeX with the full journal names, e.g. Monthly Notices of the Royal Astronomical Society? (This is very common in astronomy submissions to JOSS.)
  3. There are two citations to arXiv versions of articles that have published versions, specifically Siess et al. (2000) and Manara et al. (2023). Please update these to the published, not arXiv, versions.

I think this will already take us very close to publication, so after these changes are merged I'll probably open a pull request to address minor editorial points (if there are any).

@DingshanDeng

Hi @DingshanDeng, thanks for creating the archive, where everything looks correct. I'll have a closer look at the paper hopefully over the next few days but there are a few things that you can address already.

  1. I think it makes more sense to put the Statement of Need at the start of the paper, as it leads better than the current Backgrounds and Methods section.
  2. You've presumably gathered many of your citations from NASA ADS, which uses shorthands like \mnras for journal names that our pipeline doesn't understand. Could you replace these in the BibTeX with the full journal names, e.g. Monthly Notices of the Royal Astronomical Society? (This is very common in astronomy submissions to JOSS.)
  3. There are two citations to arXiv versions of articles that have published versions, specifically Siess et al. (2000) and Manara et al. (2023). Please update these to the published, not arXiv, versions.

I think this will already take us very close to publication, so after these changes are merged I'll probably open a pull request to address minor editorial points (if there are any).

Hi @warrickball, thanks for your suggestions and for reviewing this.

I updated the draft and I moved the statement of need to the start of the paper.

I also updated the journal names in the bib file and now it should look good in the pdf.

For the two citations, I could not find DOIs; this might be because Siess et al. (2000) is too old to have an assigned DOI, while Manara et al. (2023) is a book. So I put URL links to the published versions instead.

@DingshanDeng

@editorialbot generate pdf

@editorialbot

👉📄 Download article proof 📄 View article proof on GitHub 📄 👈

@warrickball

Thanks @DingshanDeng! I'll have a look over the next few days.

For the two citations, I could not find DOIs; this might be because Siess et al. (2000) is too old to have an assigned DOI, while Manara et al. (2023) is a book. So I put URL links to the published versions instead.

Though we like to have a complete set of DOIs, we are aware that there can be gaps. Both the reasons you mention are quite common and perfectly reasonable.

@warrickball

I've opened a pull request in the ysoisochrone repo with some editorial tweaks. Once that's merged, I'll test the publication process, which might turn up a second round of editorial tweaks, but we'll have to see. We should be able to finish this up within a few days.
