Meaning of parameters in 'configuration.toml' #21
The table shows the results of running:
With:
Beware - this is quite a long and expensive run. To maximize the score, add some "randomness" between different iterations, meaning choose somewhat different params for the stages in different iterations, to get the best possible chance of solving. I am not going to share my full "recipe" for that at this point, as it is just an ensemble detail, which is highly uncommon to share in academic implementations. Even without adding randomness, you should get results close to that.
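
A minimal sketch of what such per-iteration randomness could look like (this is an illustration, not the author's recipe; `run_iteration` and the parameter names and ranges are hypothetical):

```python
import random

# Hypothetical sketch: vary a few stage parameters between iterations so each
# attempt explores a slightly different part of the search space.
def sample_iteration_params(rng: random.Random) -> dict:
    return {
        "temperature": rng.choice([0.2, 0.4, 0.7]),  # LLM sampling temperature
        "max_fix_attempts": rng.randint(2, 4),       # retries in a fix stage
        "use_direct_solutions": rng.random() < 0.5,  # toggle a stage on/off
    }

def solve_with_randomness(problem, run_iteration, iterations: int = 5, seed: int = 0):
    rng = random.Random(seed)
    for _ in range(iterations):
        params = sample_iteration_params(rng)
        solution = run_iteration(problem, **params)  # user-supplied single-iteration runner
        if solution is not None:                     # stop at the first passing solution
            return solution
    return None
```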
Thanks for your reply! When I run it with more iterations, I sometimes see an error message like the following. Will it cause problems in evaluation?
Can you reproduce this error? Does it happen on specific problems? Does it always happen?
Yeah, this error is quite random. The output is something like this:
Another interesting thing I found is that when I run gpt-3.5-turbo-0613 on the valid split (iterations=1) twice, the total number of data instances after evaluation is quite different. One run gives 9+87=96 instances and the other 12+80=92. Shouldn't the total be 177? Is this due to the above error?
First time run:
Second time run:
Thanks again! That is very helpful!
"So how can I run 5 iterations with a partial of them applying 'use_direct_solutions=true'?" edit the file
@wenlinyao
Could you please provide detailed guidance on how to address this issue? I'm eager to learn and make the necessary modifications.🥰
@huanhuan6666 I just saw your message. I am not sure how to solve this issue thoroughly, but you can try replacing the litellm LLM call with the official OpenAI API call.
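
A minimal sketch of such a replacement, assuming the modern `openai` Python client and simple exponential backoff (the wrapper itself is illustrative, not part of the repo):

```python
import time
from openai import OpenAI, APIError, APIConnectionError, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sketch: a direct OpenAI chat call with simple retries, as a
# drop-in experiment to replace the litellm call that raises the random error.
def chat_completion(messages, model="gpt-3.5-turbo-0613", retries=3):
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(model=model, messages=messages)
            return response.choices[0].message.content
        except (APIError, APIConnectionError, RateLimitError):
            if attempt == retries - 1:
                raise
            time.sleep(2 ** attempt)  # back off before retrying
```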
Hi,
In the 'configuration.toml' file I see a range of parameters, but I'm not sure what they control. Could you please provide one example config file that can produce the results in Table 1 and Table 2 of the paper? One example for each table would be great! Thank you!