brolgar submission #1
Hi @mpadge ! OK great, love these tests, very very cool. I am on leave from tomorrow so I won't have much opportunity to work on this today, but my focus points from this are:
Thanks for the quick response @njtierney. And yes, the first 2 of those responses would be great. It would be especially useful if you could:
You'll be unlikely to get a clean result there. As for the rest:
Hi @mpadge, I have provided examples in the documentation for these.
Great, thanks @njtierney. I've updated the tests accordingly.
Thanks, @mpadge! I've had a go at resolving these.
That's mostly happening because, yes, the functions currently presume the input to be a formula, yet do not actually check for that. The unintelligible error messages are issued directly by the underlying code. Lines 29--30 reflect attempts to convert a vector column to a list.
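The kind of input check being discussed could be sketched as follows in base R. This is only an illustration: the helper name `check_formula()` is hypothetical and not part of brolgar, and the exact message wording is an assumption.

```r
# Illustrative helper: validate that `x` is a formula before using it.
# The name `check_formula()` is hypothetical, not from brolgar.
check_formula <- function(x, arg_name = deparse(substitute(x))) {
  if (!inherits(x, "formula")) {
    stop("`", arg_name, "` must be a formula, not an object of class <",
         paste(class(x), collapse = "/"), ">.", call. = FALSE)
  }
  invisible(x)
}

# A function can then fail early with an intelligible message:
# check_formula(~ x)        # passes silently
# check_formula("not one")  # errors with a clear class-mismatch message
```

Failing early like this replaces the downstream errors with a message that names the offending argument and its actual class.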
Thanks for that, Mark! These tests have been great; I've substantially improved the documentation and error/edge-case handling as a result. I've now added tests for formula inputs, first checking the input class.

I'm interested in the conversion of list-columns back into vectors, but I'm not sure what the best practices are here.

Output of autotest is below (happy not to paste it here if you don't want the issue to get super long). A few questions about it:
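One common approach to the list-column question above can be sketched with `tidyr` — this is a general illustration of current tidyverse practice, not necessarily what brolgar adopted:

```r
library(tibble)
library(tidyr)

# A tibble with a list-column `vals`
df <- tibble(id = c(1, 2), vals = list(1:2, 3:5))

# unnest() flattens the list-column back into an ordinary vector column,
# repeating `id` once per element of each list entry
unnest(df, cols = vals)
```

For list-columns whose elements are all length one, `unlist()` on the column (or `tidyr::unchop()`) achieves the same without changing the number of rows.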
Created on 2020-10-13 by the reprex package (v0.3.0)
Thanks @njtierney for your submission and the improvements that have already been made to the package. Don't worry about the remaining diagnostics. Given that, we would now like to proceed to the formal review stage, for which members of the project's advisory board @stephaniehicks and @lcolladotor have kindly agreed to review your package. They are now requested to perform a two-stage review, the first part involving assessment of the package against the standards as they are currently drafted, with the second being a more "traditional" review. We hope, by the time we proceed to this second component, that many aspects which might otherwise have arisen within a "traditional" unstructured review will already have been covered, and that this will thereby make the review process notably easier. Our review system will ultimately perform much of the preceding automated assessment prior to actual submission, and reviewers will be provided with a high-level interactive graphical overview of the package's functions and their inter-relationships. Until the system is that far along, reviewers can clone Nick's repo from github.com/njtierney/brolgar, then run the following three lines in the R console.
That should give you an interactive version something like this: [screenshot elided]

Instructions for review

@stephaniehicks and @lcolladotor, could you please now assess the package against the current standards. Please do this in two phases:
In each case, please only note those standards which you judge the package not to conform to, along with a description of what you would expect this particular software package to do in order to conform to each standard. When you do that, please provide sufficient information on which standard you are referring to. (The standards themselves are all enumerated, but not yet at a necessarily stable state, so please provide enough information for anyone to clearly know which standard you are referring to regardless of potential changes in nomenclature.) Please also note as a separate list all those standards which you think should not apply to this package, along with brief explanations of why. Importantly, to aid us in refining the standards which will ultimately guide the peer review of statistical software, we also ask you to please consider whether you perceive any aspects of software (design, functionality, algorithmic implementations or applications, testing, and any other aspects you can think of) which you think might be able to be addressed by standards, and yet which are not addressed by our standards in their current form. To sum up, please post the following in this issue:
Once you've done that, we'll ask you to proceed to a more general review of the software, for which we'll provide more detail at that time. Thanks all for agreeing to be part of this!

Due date

We would like to have this review phase completed within 4 weeks, so by the 13th of November 2020. We accordingly suggest that you aim to have the first of the two tasks completed within two weeks, by the 30th of October. Could you both please also record approximately how much time you have spent on each review stage. Thank you!
Update for reviewers

@stephaniehicks and @lcolladotor, note that this repo now includes an R package which enables you to get a pre-formatted checklist for your reviews by running the following lines:

```r
remotes::install_github("ropenscilabs/statistical-software-review")
library(statsoftrev) # the name of the package
rssr_standards_checklist(category = c("eda", "time-series"))
```

That will produce a markdown-formatted checklist in your clipboard, ready to paste where you like. Ping @noamross so you'll be notified of these conversations.
Initial Review of brolgar
Re the second item requested from @mpadge: "An indication of any aspects of the software which you think are not addressed by the current standards yet could (potentially) be." Honestly, my biggest struggle was applying the time-series standards, since this package is designed to be broader than just time series, so I wasn't sure how to handle that.
@mpadge I'm not sure I understand the second phase of review. Do I proceed now with a "traditional review", or do I wait? Also, could you provide more guidance on what is meant by that? Thanks!
Thank you very much @stephaniehicks! The next phase is now for @njtierney to address the issues you've raised here and ping on this issue when he has done so, and for you to then re-assess the above points in response to his changes. Only after that do you need to commence a "traditional review". So over to you @njtierney ... 😄
Wonderful, thank you @stephaniehicks for the thoughtful initial review, and @mpadge for organising - I'll get to this next week, looking forward to it. Cheers!
Hi @njtierney,

Sorry for the delay! I really liked your package and I believe that it is quite useful. Going through the rOpenSci standards, I think that there's a bit of room for improvement, mostly in relation to handling of missing data, plus a bit of editing in the vignettes to explain more concepts and make the package easier for new users to use and understand. Let me know if I should clarify any of my comments on the rOpenSci standards below.

Best,

Review of git hash
Thanks @njtierney for volunteering to submit your brolgar package for trial "soft submission" to rOpenSci's new system for peer review of statistical software. This issue includes output from our automated assessment and reporting tools developed as part of this new system. These currently function as the following two distinct components (which will be integrated later): `packgraph` and `autotest`.
[autotest output table elided: several repeated diagnostics triggered when the `data.frame` input is removed]

Created on 2020-10-07 by the reprex package (v0.3.0)
Further information on autotest output

The output of `autotest` includes a column `yaml_hash`. This in turn refers to the `yaml` specification used to generate the autotests, which can be generated locally by running `examples_to_yaml (<path>/<to>/<package>)`. Those contain the `yaml_hash` values, and finding the matching value will show you the base code used to trigger the diagnostic messages. The `operation` column should then provide a sufficient description of what has been mutated with regard to the structure defined in the `yaml`.

For example, several of the first values are errors for the `.data` parameter of `add_n_obs` with the `content` of the error message being, "Input must inherit from tsibble". The `operation` in all of these cases describes the mutations which `autotest` has performed to trigger those error messages. You should be able to suppress these errors by simply specifying in the documentation of that parameter that it is expected to be of class ("tsibble") only.