Tool Submission Discussion #3
Comments
Is the submission system open yet?
Do our accounts from last year still work? If not, can I get my account activated? email: [email protected]
@ttj @ChristopherBrix email: [email protected]
Please activate account [email protected] when you have a chance. I'm affiliated with NeuralSAT - GMU. Thanks so much.
Same here. zshi at cs.ucla.edu
Same here: [email protected]
Same here: [email protected]
Will the submission deadline be pushed back? Can we get this AMI added to the submission system? @ChristopherBrix R2024a matlab_linux
As we were a bit behind getting the execution system going (some AWS issues needed to be sorted out), we will extend the deadline to July 12. Barring any further AWS issues, which hopefully are about resolved, I don't anticipate further extensions, as we need some time to execute and prepare prior to the conference presentation on July 23.
Can you please activate [email protected]?
Can you please activate [email protected]?
Tool submission is open!
@mldiego I've added ami-080fce32dc8c15ced to the list of supported AMIs.
Hi @ChristopherBrix, could you please also add the standard Ubuntu 24.04? It's
@shizhouxing I've added ami-0aff18ec83b712f05, that should be the us-west-2 equivalent. Let me know if you need a different one.
I ran into an error earlier which I think is a bug on the server side (submission 71):
You can now have a fixed MAC address! This might help with licensing issues. In your new "Profile" page, you can find an "eni" code. This is used to assign a static MAC address to your AWS instance. Please let me know if you encounter any issues!
@Jubengo I'll look into that.
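For anyone who wants to confirm that the ENI's static MAC address is actually attached before running a license check, one option is to query the EC2 instance metadata service from inside the instance. The sketch below uses the standard IMDSv2 token flow; it is only an illustration, not part of the official submission pipeline.

```python
# Minimal sketch (not part of the official pipeline): query the EC2 instance
# metadata service (IMDSv2) to see which MAC address the instance reports,
# e.g. to check that the ENI-backed address your license is bound to is active.
import urllib.request

METADATA = "http://169.254.169.254/latest"

def imds_token(ttl: int = 60) -> str:
    # IMDSv2 requires a session token obtained via a PUT request.
    req = urllib.request.Request(
        f"{METADATA}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

def instance_mac() -> str:
    token = imds_token()
    req = urllib.request.Request(
        f"{METADATA}/meta-data/mac",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode().strip()

if __name__ == "__main__":
    print("Primary MAC address:", instance_mac())
```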
Setting a fixed MAC address did not work for me. I got the following error during the initialization phase: Submission IDs: 101 and 102
@mldiego This is fixed, ENIs should now no longer cause the submission to get stuck.
@Jubengo I'm afraid this seems to be a bug in your code, or a very weird interaction between your scripts and mine. If I remove the call to
If you encounter issues with your tool that you need to debug, you can now choose to pause the pipeline after the post-installation script was run. This can be selected in the submission form.
I just switched over to a different AWS account. This caused all ENIs (and the associated MAC addresses) to change, so if you used them for your licenses, you'll have to update them one more time. From now on, I don't anticipate further changes to this.
@ChristopherBrix It looks like "Yaml Config File, relative to repository root" cannot be loaded from a private repository with a personal access token? The submission site is trying to open it using a https://raw.githubusercontent.com/ URL instead of cloning the repository first.
We were able to debug this. For anyone else encountering issues: Please make sure the URL you specify doesn't end in
Hi @ChristopherBrix, I can't get past the initialisation phase with the
@Jubengo Please try again. The problem is that this instance type spends a lot of time doing some background updates. I've tried to detect that and wait until they're done, but it's not foolproof. I've updated the scripts; hopefully this reduces the problem enough to make it usable.
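For context, a common way to detect such background activity on fresh Ubuntu instances is to wait for cloud-init to finish and for unattended-upgrades to release the dpkg lock. The sketch below is an assumption about that kind of check, not the actual server-side script.

```python
# Rough sketch of one way to wait for Ubuntu's background updates to finish
# before installing a tool. This is an assumption about the kind of check the
# pipeline performs, not the organizers' actual script.
import subprocess
import time

def wait_for_background_updates(timeout_s: int = 1800) -> None:
    # cloud-init blocks here until first-boot configuration is complete.
    subprocess.run(["cloud-init", "status", "--wait"], check=False)

    # Then wait until unattended-upgrades releases the dpkg lock.
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        locked = subprocess.run(
            ["fuser", "/var/lib/dpkg/lock-frontend"],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0
        if not locked:
            return
        time.sleep(10)
    raise TimeoutError("Background package updates did not finish in time")

if __name__ == "__main__":
    wait_for_background_updates()
```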
We have shared a draft Overleaf report and are also fixing some execution issues identified by some participants. We have emailed the listservs so far, so please let us know if you didn't receive it and we will share it with you.
All: the report is nearly finalized pending some final checks, and we plan to submit to arXiv before the end of 2024 to ensure posting with a 2024 date, so please make any final changes by 12/27 at the very latest. We can post a new version to arXiv if further changes are needed after that. Thank you again for your participation, and we wish you a happy holiday season and new year!
We will update this soon after finishing the AWS setup on the submission system, but we are posting it now to help people get started with tool preparation (copied from the last iteration). @ChristopherBrix
At this point, you should be updating your tool in order to support quickly verifying as many benchmarks as you can. Note that the benchmark instances will change based on a new random seed for the final evaluation. We will follow a similar workflow to last year, where tool authors provide shell scripts to install their tool, prepare instances (for example, convert models to a different format), and then finally verify an instance. The detailed instructions for this are available at 2021's git repo.
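As a rough illustration of that workflow (the script names and arguments below are placeholders; the 2021 repository linked above defines the authoritative interface), the evaluation conceptually runs three steps per tool:

```python
# Conceptual illustration of the per-tool workflow. Script names and arguments
# are placeholders, not the prescribed interface; see the 2021 repository.
import subprocess

def run(cmd: list[str]) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One-time setup of the tool on the AWS instance.
run(["bash", "install_tool.sh", "v1"])

# Per-instance preparation, e.g. converting the network to the tool's format.
run(["bash", "prepare_instance.sh", "v1", "acasxu", "net.onnx", "prop.vnnlib"])

# Verification of a single instance with a timeout; the result is written to a
# file that the harness collects afterwards.
run(["bash", "run_instance.sh", "v1", "acasxu", "net.onnx", "prop.vnnlib",
     "300", "out.txt"])
```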
You will be able to run and debug your toolkit on the submitted benchmarks online at this link. There, you first need to register. Your registration will be manually activated by the organizers; you'll receive a notification once that's done. Afterwards, you can log in and start with your first submission.
The process is similar to the submission of benchmarks, with a small change compared to last year: you need to specify a public git URL and commit hash, as well as the location of a .yaml config file. There, you can specify parameters for your toolkit evaluation. By making those settings part of the repository, they will be preserved for future reference.
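As an illustration of how such a settings file might be consumed, here is a minimal sketch of loading a .yaml config; the file name and keys are hypothetical examples, not a prescribed schema:

```python
# Sketch of reading a tool-side .yaml settings file. The file name and the
# keys below are hypothetical examples only, not the submission system's schema.
import yaml  # requires: pip install pyyaml

with open("vnncomp_config.yaml") as f:
    config = yaml.safe_load(f)

# Hypothetical parameters a tool might expose for the evaluation.
timeout_buffer = config.get("timeout_buffer", 5)
use_gpu = config.get("use_gpu", False)
print(f"timeout_buffer={timeout_buffer}, use_gpu={use_gpu}")
```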
You can define a post-installation script to set up any licenses.
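For example, a post-installation step might drop a license file where the tool expects it; the path and environment variable in this sketch are purely illustrative:

```python
# Hypothetical example of what a post-installation step might do: place a
# license file where the tool expects it. The path and the environment
# variable name are illustrative only.
import os
from pathlib import Path

license_text = os.environ.get("TOOL_LICENSE", "")  # e.g. an injected secret
license_path = Path.home() / ".mytool" / "license.lic"
license_path.parent.mkdir(parents=True, exist_ok=True)
license_path.write_text(license_text)
print(f"License written to {license_path}")
```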
Once submitted, you're placed in a queue until the chosen AWS instance can be created, at which point your installation and evaluation scripts will be run. You'll see the output of each step and can abort the evaluation early in case there are any issues. Once a submission has terminated, you can use it to populate the submission form for the next iteration, so you don't have to retype everything.
Important: We currently have no limitation on how often you can submit your tool for testing purposes, but we will monitor usage closely and may impose limits if necessary. Please be mindful of the costs (approx. $3 per hour) each submission incurs. To save costs, you should debug your code locally and then use the website to confirm the results match your expectations.
We strongly encourage tool participants to at least register and make some test submissions on the toolkit website well ahead of the deadline.