hyperparameter tuning #2

Open
qqyqqyqqy opened this issue Oct 23, 2024 · 1 comment

@qqyqqyqqy

Although Optuna is used for hyperparameter tuning in the project, the tuning results may vary across different environments. How can the reproducibility of hyperparameter tuning be ensured?

@tingtingwuwu
Owner

To ensure consistency in hyperparameter tuning results, the following measures can be implemented:

Set Random Seeds: Fix random seeds in all relevant components (e.g., torch.manual_seed(), numpy.random.seed(), and the seed of Optuna's sampler) so that both model training and the hyperparameter search itself are reproducible.
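A minimal sketch of this, assuming a PyTorch/Optuna setup (the seed_everything helper name is illustrative, not from the repository):

```python
import random

import numpy as np
import optuna
import torch


def seed_everything(seed: int = 42) -> None:
    """Fix every relevant random number generator for reproducible runs."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Make cuDNN deterministic (may slow training slightly).
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


seed_everything(42)

# Seed Optuna's sampler as well, so the search trajectory is reproducible.
study = optuna.create_study(
    direction="maximize",
    sampler=optuna.samplers.TPESampler(seed=42),
)
```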

Log All Experimental Configurations: Automatically save configuration files and parameter combinations in a log file during each hyperparameter tuning run, ensuring that the results can be fully replicated.
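One way to do this, sketched below with a hypothetical log_trial helper and log directory (not from the repository), is to dump each trial's parameters together with the run configuration and score to a JSON file from inside the objective function:

```python
import json
import time
from pathlib import Path

LOG_DIR = Path("tuning_logs")  # hypothetical location
LOG_DIR.mkdir(exist_ok=True)


def log_trial(trial, config, score):
    """Persist the trial's sampled parameters, the run config, and the result."""
    record = {
        "trial_number": trial.number,
        "params": trial.params,
        "config": config,
        "score": score,
        "timestamp": time.strftime("%Y-%m-%d %H:%M:%S"),
    }
    with open(LOG_DIR / f"trial_{trial.number:04d}.json", "w") as f:
        json.dump(record, f, indent=2)
```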

Use a Fixed Validation Set: Ensure that the validation set remains unchanged during hyperparameter tuning to avoid discrepancies in tuning outcomes caused by variations in data distribution.
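A simple sketch of keeping the split fixed (assuming a PyTorch Dataset named full_dataset, defined elsewhere) is to split once with a seeded generator outside the Optuna objective, rather than re-splitting inside each trial:

```python
import torch
from torch.utils.data import random_split

# Split once, before tuning starts, with a fixed generator so every trial
# sees exactly the same train/validation partition.
n_val = int(0.2 * len(full_dataset))
generator = torch.Generator().manual_seed(42)
train_set, val_set = random_split(
    full_dataset,
    [len(full_dataset) - n_val, n_val],
    generator=generator,
)
```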

Dockerized Environment: Recommend that users run tuning scripts within a consistent Docker environment to maintain uniformity in dependencies and environment configurations.
