I am using the TabPFN model for a classification task and would like to fine-tune the pre-trained model to better adapt it to my specific dataset. I have reviewed the documentation and code, but I still have some questions about the specific steps and parameter settings for fine-tuning.
Does fine-tuning TabPFN involve gradient updates?
How many training epochs are needed?
During the fine-tuning process, which parameters are the most important?
Hi everyone, there are two different things that both can be called fine-tuning:
(1) Fine-tuning TabPFN to do better on one or more datasets in order to generalize better to other, similar datasets. The analogy for LLMs would be fine-tuning GPT/Llama/Mistral to your own data.
(2) Fine-tuning TabPFN to a single dataset, in order to improve its performance (e.g., to tackle large datasets that don't fit into memory). This doesn't have an analogy in LLMs but is specific to tabular data.
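Both variants involve gradient updates to the pre-trained weights, which answers the first question above. As a rough illustration of approach (2), here is a minimal PyTorch sketch of gradient-based fine-tuning. Note the assumptions: `TinyModel` is a stand-in placeholder, not TabPFN's actual transformer, and the `finetune` helper is hypothetical; it only shows the standard recipe of a low learning rate and few epochs to limit drift from the pre-trained weights. TabPFN's real training interface may differ.

```python
# Hypothetical sketch: gradient-based fine-tuning of a pre-trained model
# on a single dataset. TinyModel is a PLACEHOLDER for the pre-trained
# network (TabPFN's real architecture differs); the loop itself is the
# standard recipe: low learning rate, few epochs, cross-entropy loss.
import torch
import torch.nn as nn


class TinyModel(nn.Module):
    """Placeholder standing in for a pre-trained classifier."""

    def __init__(self, n_features: int = 4, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 16),
            nn.ReLU(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def finetune(model: nn.Module, X: torch.Tensor, y: torch.Tensor,
             epochs: int = 10, lr: float = 1e-4) -> nn.Module:
    # A low lr and few epochs limit drift from the pre-trained weights,
    # which is usually the key knob when adapting to a single dataset.
    opt = torch.optim.AdamW(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()  # fine-tuning = gradient updates on the weights
        opt.step()
    return model


if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(64, 4)
    y = (X[:, 0] > 0).long()  # toy binary labels
    finetune(TinyModel(), X, y)
```

The epoch count and learning rate are the parameters most worth tuning here; for adapting to one dataset, small values of both tend to be the safer starting point.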