How to submit to the test server #21
Thanks for the interest in the dataset! I have to make an update to data.zip to add support for that; I'll hopefully do it tonight or tomorrow night and let you know! (In more detail: I need to convert the test/challenge splits to the data1.2.zip file format. They're currently only in the questions zip at visualreasoning.net/download.html. Both are JSONs, but on the GQA website it's a dictionary of all questions, while the MAC model expects a list, so I have to convert dict -> list.)
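The dict -> list conversion described above could be sketched roughly as follows. This is only an illustration: the field name "questionId" and the exact record layout are assumptions about the GQA question format, not code from the repo.

```python
import json

# Rough sketch of the dict -> list conversion described above.
# The GQA download stores questions as {questionId: {...}}, while the
# MAC code expects a flat list; field names here are assumptions.
def questions_dict_to_list(in_path, out_path):
    with open(in_path) as f:
        questions = json.load(f)  # {questionId: {...}, ...}
    records = []
    for qid, info in sorted(questions.items()):
        record = dict(info)
        record["questionId"] = qid  # keep the id inside each record
        records.append(record)
    with open(out_path, "w") as f:
        json.dump(records, f)
    return records
```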
I made an update that I believe should solve it and uploaded the new data needed for submission. I will test it later today to make sure it's all working! :) Please redownload it at https://nlp.stanford.edu/data/gqa/data1.2s.zip and unzip.
Thank you. I will give it a try this weekend and let you know if it works.
Is there an example command for submission? I've tried adding --submission and --getPreds but only get train and val predictions. It would be very helpful if an example were provided.
You're right, either --test or --finalTest is also needed.
I tried running that command but it seems I do not have
Sure, you can download it in the new version of the data in the readme: https://nlp.stanford.edu/data/gqa/data1.2.zip
I tried to run the command just now. What I get is trainPredictions-gqaExperiment.json and valPredictions-gqaExperiment.json; no other .json files can be found. So where do I get the .json file to submit to the test server? I am still confused. I did try submitting trainPredictions-gqaExperiment.json, which is not correct. Thanks for your help!
Sorry about that, I definitely should have first checked it myself to make sure things are working fine end-to-end, but I can't find enough time right now :/ If you ran the command with could you please post (or email me if you prefer) the output that you get for the run?
I just redownloaded this but I don't see
Updated the git repo to point to the new zip with the right file: https://nlp.stanford.edu/data/gqa/data1.2s.zip
Hey @zhegan27, please let me know if your problem has been resolved or if there's anything else I can help you with! If it doesn't work for you, it would be great if you could post the output of the command you've tried.
Sorry for the late response; I got busy with other stuff these two days. Yes, running the code is no problem, and it prints out "Writing predictions..." and then "Done!". My question then is: where is the stored file with all the predictions needed to submit to the test server, and what is its name? I somehow cannot find it. Thanks. :)
Oh, alright :) Actually, I believe it should be in the same directory as the val and train predictions; this line shows the path: https://github.com/stanfordnlp/mac-network/blob/master/config.py#L85
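Judging only from the filenames mentioned earlier in this thread (trainPredictions-gqaExperiment.json, valPredictions-gqaExperiment.json), the predictions files appear to follow one naming pattern per tier. A sketch of that inferred pattern; the authoritative template is at the config.py line linked above:

```python
# Inferred naming pattern for the per-tier predictions files, based on
# the filenames mentioned in this thread; check config.py#L85 for the
# actual template used by the repo.
def predictions_filename(tier, exp_name="gqaExperiment"):
    return f"{tier}Predictions-{exp_name}.json"
```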
Thank you for your quick response! I will let you know whether it works today when I find time. :)
Thanks for your help! We successfully submitted to the test server last night. One thing to note: on our initial try, after submitting to the test server, I got the following error: Traceback (most recent call last): Comparing submission_all_questions.json in version 1.2 and all_submission_data.json in version 1.2s, I found that '20692199' only exists in 1.2, not in 1.2s. So we need to use submission_all_questions.json from version 1.2 for the test server submission.
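A quick way to spot mismatches like the '20692199' case is to diff the id sets of the two files. This sketch assumes both JSONs are dictionaries keyed by question id; adjust the loading if either file is actually a list of records.

```python
import json

# Report ids present in the first file but missing from the second.
# Assumes both files are JSON dicts keyed by question id.
def missing_ids(path_a, path_b):
    with open(path_a) as fa:
        ids_a = set(json.load(fa))
    with open(path_b) as fb:
        ids_b = set(json.load(fb))
    return ids_a - ids_b
```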
Alright, glad to hear it worked for you with 1.2! You're right about the version; I'll update it accordingly to make it work!
Hi, I have some confusion about submitting to the test server. After I unzip the file, it seems to test through 930K, 130K, and 130K (three datasets), which matches the sizes of the balanced train set (930K) and val set (130K). I think the third is the submission set, but it has only 130K questions. However, when I uploaded according to @zhegan27's solution and took a look, I found it strange that there are 2M questions in it. Here is my procedure to train and test the model; did I make any mistake, or are there bugs in the code? First I unzip, then run the command below, and got the problems above...
Hi, sorry for the problems you're experiencing with that. I definitely believe it would be very useful to update the repo to make submission smoother (although I don't have enough time to do it for the next few weeks :/ ). To respond to the specific things you mentioned:
Could you please try a new run (with just 1 epoch, to see if it works) where you do:
Thanks for the reply; I solved the problem myself yesterday and have submitted to the server successfully. My experiment, in case it gives you some ideas:
After I ran the command with btw, I converted
Hi! That's great news! I'm glad it got solved! :)
When I run
File "/home/ailab/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/saver.py", line 113, in restore
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [1848] rhs shape= [1845]
Sorry about that! That's a bit of a bug with the vocab. What happens is that when you run with testdev without having done so from the beginning (where it first loads the list of all possible words), you get a mismatch when you switch from training to testing, since there are 3 new words in the test vocab that didn't appear in train. The fix is not complicated, but unfortunately I don't have time for it for about 2 weeks. A possible workaround in the meantime, though not ideal, is to retrain, e.g.
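The shape mismatch above comes from the vocabulary (and hence the embedding matrix) being built only from the splits seen so far. One way to avoid it is to collect the vocabulary over all splits before training, so the embedding size is fixed up front. This is only an illustrative sketch of the idea, not the repo's actual preprocessing code; the file format (a list of question records) and the "question" field name are assumptions.

```python
import json

# Illustrative only: collect words from every split's question file so
# the vocab (and thus the embedding shape) never changes between
# train and test. Assumes each file is a JSON list of question records.
def build_full_vocab(question_files):
    vocab = set()
    for path in question_files:
        with open(path) as f:
            for q in json.load(f):
                vocab.update(q["question"].lower().split())
    return sorted(vocab)
```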
Thank you for your reply; I will try it and let you know if it works.
Sorry, I still encounter the problem after I run the command
Preprocess data...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Caused by op 'save/Assign_30', defined at:
InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [512,1848] rhs shape= [512,1845]
Oh, sorry, the first command should be without
Thanks for this nice repo. I have tried running experiments on GQA with no problems. After training the model, I did not find instructions on how to create the .json file that can be used to submit to the EvalAI test server. Maybe you mentioned it somewhere, but I did not find it. It would be great if you could let me know how such a .json file can be created in order to submit to the test server. Thank you!