How to export a Huggingface transformer model? #212
Comments
I tried something like the code below, but would get an unknown tensor error. It could be a numpy interaction.

```python
import torch
import numpy as np
from transformers import AutoModelForSequenceClassification
from torch.utils.mobile_optimizer import optimize_for_mobile

task = 'sentiment'
MODEL = f"cardiffnlp/twitter-roberta-base-{task}"
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
model.eval()

# The error is raised here, when scripting the Hugging Face model directly.
scripted_model = torch.jit.script(model)
optimized_model = optimize_for_mobile(scripted_model)
optimized_model._save_for_lite_interpreter("roberta-base-sentiment.ptl")
```
Hi @vukovinski, this process would be different from model to model. What I can describe from a high-level perspective is that the … An easier way is to use the …
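The reply above is cut off, so the exact approach it recommends is not recoverable. A commonly used alternative to `torch.jit.script` for Hugging Face models is `torch.jit.trace`, which records the operations for one example input instead of compiling the model source, and therefore does not require type annotations in the model code. Below is a minimal sketch of that tracing workflow, assuming the same `cardiffnlp` checkpoint as the snippet above; the example sentence is arbitrary.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from torch.utils.mobile_optimizer import optimize_for_mobile

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)

# torchscript=True makes the model return tuples instead of dict-like
# outputs, which the tracer can handle.
model = AutoModelForSequenceClassification.from_pretrained(MODEL, torchscript=True)
model.eval()

# Trace with a concrete example input; the recorded graph is specialized
# to tensors of this shape and dtype.
example = tokenizer("This is a test", return_tensors="pt")
traced = torch.jit.trace(model, (example["input_ids"], example["attention_mask"]))

optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("roberta-base-sentiment.ptl")
```

The trade-off is that a traced graph only replays the operations seen during tracing, so data-dependent control flow in the model is baked in for that one path.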
I see, I believe I understand now. Basically, I would need access to the source code of the model so that I could annotate it with Python or TorchScript (some confusion here) datatypes. Hmm, this requirement precludes using Hugging Face models for mobile device optimization, unless the original authors of the models embrace such a practice. Alternatively, such models (meaning the weights and nodes) could be reverse engineered, but that sounds like more trouble than retraining them. Thank you once again @rraihansaputra. Move to close the issue.
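To make the "annotate with datatypes" point concrete: below is a toy, hypothetical module (not from the thread or from any real model) showing the kind of type annotations `torch.jit.script` needs when a forward signature is not plain tensors. Script mode compiles the Python source, so anywhere type inference fails, the author of the model code has to spell the types out.

```python
from typing import Optional

import torch

class Classifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(768, 3)

    def forward(self, hidden: torch.Tensor, mask: Optional[torch.Tensor] = None) -> torch.Tensor:
        # The explicit Optional[Tensor] annotation matters: without it,
        # script mode assumes Tensor and rejects the None default.
        if mask is not None:
            hidden = hidden * mask.unsqueeze(-1)
        return self.linear(hidden.mean(dim=1))

# Compiles from source with no example input needed, unlike tracing.
scripted = torch.jit.script(Classifier())
```

This is why scripting a third-party model requires cooperation from its authors: the annotations live in the model's own source files, not in the exported weights.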
Feedback on the "Prepare Custom Model" tutorial:
I would like more context on how to optimize for mobile models other than the ones registered with PyTorch hub, such as Hugging Face Hub for example.