Hi Jiayu,

I would like to express my appreciation for providing PhaVIP. However, I have encountered some confusion while working with it. Your documentation states the following: "Thus, we first apply an end-to-end method to train the binary classification model. Then, we fix the parameters in the Transformer encoder and fine-tune a new classifier layer for the multi-class classification model. Binary cross-entropy (BCE) loss and L2 loss are employed for the binary classification and multi-class classification, respectively."
However, when I attempted to retrain the model, I couldn't find the fine-tune mode as described. Could you please assist me in reproducing the results using your code?
Best regards,
Misaki
misaki-taro changed the title from "Fix the binary model and fintune" to "Fix the binary model and finetune" on Sep 6, 2023.
If you do not know how to extract the parameters, you can simply train a new model from scratch without pretraining; the results are almost the same once you increase the number of epochs.
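For anyone who still wants to reproduce the fine-tuning step manually, a minimal PyTorch sketch of "freeze the Transformer encoder, attach a new classifier head" is shown below. The attribute names (`encoder`, `classifier`), the checkpoint path, and `multiclass_loader` are assumptions for illustration, not PhaVIP's actual API, so adapt them to however the released code organizes the model.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins: `model` is the binary model already trained end-to-end,
# with the Transformer encoder in `model.encoder` and the binary head in
# `model.classifier`. These names are assumptions, not PhaVIP's actual attributes.
model = torch.load("binary_model.pth", map_location="cpu")

# Freeze every parameter of the Transformer encoder.
for param in model.encoder.parameters():
    param.requires_grad = False

# Replace the binary head with a fresh multi-class head; num_classes is task-specific.
num_classes = 8
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

# Only the new head's parameters are handed to the optimizer.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# The documentation pairs BCE loss with the binary task and an L2 loss with the
# multi-class task; MSE over one-hot targets is one way to realize the latter.
criterion = nn.MSELoss()

model.train()
for images, labels in multiclass_loader:  # `multiclass_loader` is assumed to exist
    one_hot = nn.functional.one_hot(labels, num_classes).float()
    optimizer.zero_grad()
    outputs = model(images)               # scores from the new multi-class head
    loss = criterion(torch.softmax(outputs, dim=1), one_hot)
    loss.backward()
    optimizer.step()
```

This is only a sketch of the idea described in the documentation; as noted above, training a new model from scratch with more epochs should give nearly the same results.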