Callback based Trainer #83
Our goal is to make the Trainer an object that users don't have to override. Not all of the methods you provided above seem to require overriding the Trainer.

I think it's natural for data preprocessing and postprocessing to be part of the model. Data preprocessing is already part of the model.
@yukw777 thanks for your comments. I'll address a few of your points:
This would be a good exercise.
A few things come to mind. First, if you are trying to migrate code you already have, it's easier to do so if you can modify more of the behavior. For example, you may be using a dictionary as the output of your forward method. But loss functions operate over tensors, so you would have to modify the loss function as well. The point is that instead of modifying many different objects, you only ever modify one.
I'm not sure I agree. The process can be very different across models depending on the metric.
Yes, the current Trainer only takes a single optimizer. 1) You may need more than one optimizer (e.g., a GAN). 2) You may have a custom update rule that does not use the optimizer class; I've seen PyTorch code that does that. To be fair, you could argue that something that uses more than one optimizer could have its own Trainer.
That's not true: data processing is part of the TabularDataset at the moment, not the model.
This is an example of the kind of customization that has analogies during training. But should it be part of the same model object that was used in training, or should there be a different model object for inference? I forgot to mention one of the main goals of this process: adding more defaults to simplify the user experience. For example, if sampling, loss, and metrics are always the same for model X, why write them every time? Ultimately, I think the user experience benefits from having to modify the smallest number of objects to customize to your needs. I want to avoid someone having to customize the dataset, sampler, model, loss, metric, and optimizer, which implies understanding the interface of each of these objects.
@jeremyasapp thanks for the clarification. I understand the direction we need to take better now, and I totally agree with your point about having sensible defaults to simplify the user experience. I had some misunderstandings about the proposal, but after taking another look with your clarification, I think all of those methods are a good starting point. I'd actually add data (pre|post)processing to the mix; I forgot that data preprocessing is currently part of the dataset itself. Both preprocessing and postprocessing should be part of the model, since people generally want to use the same model object for training and inference.
This is being addressed by v0.5 (…)
Is your feature request related to a problem? Please describe.
Currently, customizing a model sometimes requires overriding the Trainer. It would be best if the Trainer were an object that users didn't have to override. Furthermore, it would be good to be able to set more default values across the board. The Trainer is very generic, which is great, but relying more on the model would simplify the interfaces and improve the user experience. The Trainer would stay in charge of executing training, fp16, model parallelization, some logging, etc.
Describe the solution you'd like
The solution is to create a Model class with a set of methods (some of them optional) to be called by the Trainer. This has the added benefit that the new model objects can implement defaults for common parameters such as the loss function or sampler.
Here is a set of potential methods for the model class:
Some questions to answer: