How to finetune with a custom loss function #38
Hi @gopal86. What kind of loss functions would you like to finetune it with?
Hi, I would like to train with a weighted MAPE loss function. Could you kindly share an example?
You would have to replace the last layer with an appropriate layer for the loss function, and then finetune the model. You could try finetuning just the last layer, or the entire model.
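For the weighted MAPE specifically, here is a minimal sketch in plain PyTorch. The per-element `weights` tensor and the epsilon guard against division by zero are assumptions, since the thread does not pin down the weighting scheme:

```python
import torch

def weighted_mape_loss(y_pred: torch.Tensor,
                       y_true: torch.Tensor,
                       weights: torch.Tensor,
                       eps: float = 1e-8) -> torch.Tensor:
    """Weighted mean absolute percentage error.

    All three tensors share the same shape; `weights` encodes the relative
    importance of each element (e.g. by series, horizon, or recency).
    """
    # Absolute percentage error, guarded against division by zero.
    ape = (y_true - y_pred).abs() / y_true.abs().clamp_min(eps)
    # Weighted mean rather than a plain mean.
    return (weights * ape).sum() / weights.sum().clamp_min(eps)
```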
I have certain constraints which I would like to impose. Could you give an example of how to fine-tune with a custom loss function?
When you finetune with any other loss function, you will have to change the last layer to whatever is appropriate. For instance, if you would like to perform classification, you could put a linear layer after the embeddings and use the cross-entropy loss (which applies the softmax internally, so the layer should output raw logits). If you would like to perform regression, you could put a linear layer at the end with a single output and use the MSE loss. When training, you could train the full model or just the last layer.
An idea: to start with, you could get the embeddings from the pretrained model, and then train a simple statistical classifier/regressor on the embeddings.
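As a concrete illustration of the head swap, here is a minimal PyTorch sketch. `backbone` and `embed_dim` are hypothetical stand-ins for the pretrained Lag-Llama trunk and its embedding size, not the repository's actual API:

```python
import torch
import torch.nn as nn

class ClassifierHead(nn.Module):
    """Pretrained trunk plus a fresh linear head for classification."""

    def __init__(self, backbone: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.backbone = backbone
        # For regression, use nn.Linear(embed_dim, 1) and nn.MSELoss() instead.
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        emb = self.backbone(x)   # (batch, embed_dim) embeddings
        return self.head(emb)    # raw logits; CrossEntropyLoss applies softmax itself

# `backbone` is a hypothetical handle to the pretrained Lag-Llama trunk.
model = ClassifierHead(backbone, embed_dim=256, num_classes=3)

# To finetune only the last layer, freeze the trunk's parameters.
for p in model.backbone.parameters():
    p.requires_grad = False

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-4)
```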
Hi, can you share an example of how to get the pretrained model embeddings from the Lag-Llama model?
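No Lag-Llama-specific answer appears in the thread, but one generic PyTorch technique is to register a forward hook on the submodule whose output you want. The module path below is a placeholder; inspect the loaded model (e.g. with `print(model)`) to find the real layer name:

```python
import torch

captured = {}

def save_embedding(module, inputs, output):
    # Stash the submodule's output; detach so it does not retain the graph.
    captured["emb"] = output.detach()

# "transformer.final_norm" is a placeholder path, not Lag-Llama's real
# module name -- replace it after inspecting the checkpoint.
handle = model.get_submodule("transformer.final_norm") \
              .register_forward_hook(save_embedding)

with torch.no_grad():
    model(batch)                 # an ordinary forward pass fills `captured`
handle.remove()                  # clean up the hook

embeddings = captured["emb"]     # train a simple classifier/regressor on these
```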
Hi, I wanted to fine-tune the model on my own dataset, but with my own custom loss. Could you give an example of how to do that?
Thanks
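Tying the sketches above together, a generic custom-loss finetuning loop might look like the following. It assumes the `model` with a swapped head and the `weighted_mape_loss` from the earlier sketches, plus a hypothetical `dataloader` yielding `(inputs, targets, weights)` batches; Lag-Llama's own training normally goes through its GluonTS estimator instead:

```python
import torch

# Only optimize parameters left unfrozen (e.g. just the new head).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)

model.train()
for epoch in range(5):
    for inputs, targets, weights in dataloader:
        optimizer.zero_grad()
        preds = model(inputs)                           # forward pass
        loss = weighted_mape_loss(preds, targets, weights)
        loss.backward()                                 # backprop through unfrozen params
        optimizer.step()
```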