benchmarks? #19

Open
floswald opened this issue Oct 5, 2017 · 7 comments

Comments

@floswald
Contributor

floswald commented Oct 5, 2017

Did you guys ever think about benchmarking this against https://cran.r-project.org/web/packages/quantreg/?

If I remember correctly, it wraps Fortran routines; we could think about using those as well (if performance here is far below acceptable).

@pkofod
Owner

pkofod commented Oct 5, 2017

Haven't done that. quantreg's default (if I'm not mistaken) is the interior point method of Portnoy and Koenker, and that is exactly the method I've coded up here and chosen as the default. For large problems it is far faster than the IRLS algorithm we also have here.
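[Editor's note: the IRLS alternative mentioned above can be illustrated compactly. The sketch below is a minimal pure-Python take on iteratively reweighted least squares for a single-regressor quantile fit; the function name and synthetic data are invented for the example, and this is not the package's actual code.]

```python
# IRLS sketch for quantile regression (tau-th quantile) with intercept plus
# one regressor: minimize sum_i rho_tau(y_i - a - b*x_i), where
# rho_tau(r) = r * (tau - 1[r < 0]), approximated by reweighted least squares.
import random

def irls_quantreg(x, y, tau=0.5, iters=100, eps=1e-6):
    n = len(x)
    w = [1.0] * n  # start from ordinary least squares (unit weights)
    a = b = 0.0
    for _ in range(iters):
        # Weighted least-squares closed form for the line a + b*x.
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (sw * swxy - swx * swy) / (sw * swxx - swx * swx)
        a = (swy - b * swx) / sw
        # Reweight: w_i = |rho_tau'(r_i)| / |r_i|, guarded away from zero.
        r = [yi - a - b * xi for xi, yi in zip(x, y)]
        w = [(tau if ri >= 0 else 1 - tau) / max(abs(ri), eps) for ri in r]
    return a, b

# Synthetic data: y = 2 + 3x + Gaussian noise; the median fit (tau = 0.5)
# should recover intercept ~2 and slope ~3.
random.seed(0)
x = [i / 10 for i in range(200)]
y = [2 + 3 * xi + random.gauss(0, 0.5) for xi in x]
a, b = irls_quantreg(x, y, tau=0.5)
print(round(a, 2), round(b, 2))
```

Interior point methods solve the same problem as a linear program and scale better on large datasets, which is why they make sense as a default.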

@floswald
Contributor Author

floswald commented Oct 5, 2017

That's good news! I just looked at the quantreg package; that's a lot of Fortran. Anyway, it would be interesting to see how well or badly we are doing here.

@pkofod
Owner

pkofod commented Oct 5, 2017

Sure, that would be interesting... I originally had the idea to go all in, but then I realized how much work Koenker had already put into quantreg and fainted :)

@pkofod
Owner

pkofod commented Oct 6, 2017

Are you an R user? What is the standard way of benchmarking in R?

@floswald
Contributor Author

floswald commented Oct 6, 2017

well, I used to be :-)

You would just time the execution of a command with system.time(command). For example, from the demo folder in the quantreg source:

> library(quantreg)
> data(engel)
> elapsed_time <- system.time(z <- rq(foodexp ~ income, tau= .50, data = engel))
> elapsed_time
   user  system elapsed 
  0.004   0.000   0.008 
> # we want elapsed
> elapsed_time[3]
elapsed 
  0.008 
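[Editor's note: the same "measure elapsed wall-clock time of one call" pattern translates directly to other languages. A minimal Python sketch, analogous to taking the elapsed entry of R's system.time (the helper name is invented for the example):]

```python
import time

def elapsed(fn, *args, **kwargs):
    """Run fn(*args, **kwargs) and return (result, elapsed wall-clock seconds),
    analogous to the 'elapsed' entry of R's system.time()."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Example: time a cheap stand-in computation.
result, secs = elapsed(sum, range(1_000_000))
print(result, secs >= 0.0)
```

For serious comparisons a single timing like this is noisy; dedicated tools (R's microbenchmark, Julia's BenchmarkTools.jl) repeat the call many times and report a distribution.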

@floswald
Contributor Author

floswald commented Oct 6, 2017

I just submitted a pull request to Query.jl that does an R vs Julia benchmark. Maybe useful?
queryverse/Query.jl#154

@yanyu2015

Yes, benchmarks are important for helping people choose Julia over R.
