Your question is rather underspecified. I suggest you look up the bias-variance tradeoff, a standard concept in machine learning; it will give you a better feeling for how to improve models, what over- and underfitting are, what distribution mismatch is, etc.
In general, a bigger dataset "might" improve performance, but then the parametrization should be adjusted accordingly: a too-small hiddenSize will not be able to generalize, while a too-big hiddenSize will overfit the training data. Regarding maxEpoch, the "early stopping criterion" was invented for a reason: stop training once the loss (mostly measured as MSE or a task-specific loss function) stops improving on the held-out sets (if applicable: train, train-dev, dev-test, test). As for batchSize, I am not really certain how changing this parameter would improve performance (again, what do you mean by performance?), but it will probably affect training time.
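To make the early stopping criterion concrete, here is a minimal sketch in Python (names and the patience value are my own, not from this thread): stop once the dev loss has not improved on its previous best for a fixed number of epochs.

```python
def should_stop(dev_losses, patience=3):
    """Early stopping: return True once the dev loss has not improved
    on its best earlier value for `patience` consecutive epochs.

    dev_losses: list of per-epoch dev-set losses, oldest first.
    """
    if len(dev_losses) <= patience:
        return False  # not enough history yet
    best_so_far = min(dev_losses[:-patience])
    # Stop if none of the last `patience` epochs beat the earlier best.
    return min(dev_losses[-patience:]) >= best_so_far
```

In a training loop you would call `should_stop` after each epoch's dev evaluation and break out when it returns True, instead of always running to maxEpoch. A common refinement (not shown) is to also keep a checkpoint of the best-dev model and restore it at the end.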
Am I right about the following:
--dataset: the bigger, the better the performance?
--hiddenSize: the bigger, the better the performance?
--maxEpoch: the bigger, the better the performance?
--batchSize: the bigger, the better the performance?