Is your feature request related to a problem? Please describe.
In the trainer's loop, evaluation always runs after training. Hence, the reported model performance isn't that of the pure global model, but rather of the global model after fine-tuning on the local dataset. Both models are meaningful, so we may want to report both.
Describe the solution you'd like
In the final round, run an evaluation before training and report it as the performance of the global model. Then record the evaluation after training and report it as the performance of the personalized model.
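A minimal sketch of what this could look like in the trainer loop. The `Trainer` class and its `evaluate()`, `train()`, and `report()` methods below are hypothetical placeholders, not the framework's actual API:

```python
class Trainer:
    def __init__(self, num_rounds):
        self.num_rounds = num_rounds

    def evaluate(self):
        # Placeholder: run inference on the local test split and return metrics.
        return {"accuracy": 0.0}

    def train(self):
        # Placeholder: fine-tune the received global model on the local dataset.
        pass

    def report(self, tag, metrics):
        # Placeholder: send metrics upstream, tagged by which model they describe.
        print(tag, metrics)

    def run_round(self, round_idx):
        is_final_round = round_idx == self.num_rounds - 1

        if is_final_round:
            # Evaluate before local training: this measures the pure global model.
            self.report("global_model", self.evaluate())

        self.train()

        # Evaluate after local training: this measures the personalized model.
        self.report("personalized_model", self.evaluate())
```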
Describe alternatives you've considered
We could also save the global model to the aggregator registry and the personalized model to the trainer registry, and then conduct testing offline.