
Does the model support FP16 inference? #9

Closed

ereday opened this issue Feb 15, 2023 · 1 comment

Comments

ereday commented Feb 15, 2023

Hello,

I was looking for ways to speed up inference, and one thing I thought would help was FP16. I called model.half() after loading the model, but it raised RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'. Is there a way to use FP16 during inference (or any other trick to accelerate it)?

# This works:
from transformers import AutoTokenizer, MvpForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained('RUCAIBox/mvp')  # tokenizer loading was omitted in the original snippet
model = MvpForConditionalGeneration.from_pretrained('RUCAIBox/mvp')

inputs = tokenizer(
    ["Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man",
     "Describe the following data: Batman | instance of | Superhero",
    ],
    return_tensors="pt",
)

generated_ids = model.generate(**inputs)

tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
# ['Iron Man is a fictional superhero appearing in American comic books published by Marvel Comics.',
#  'Batman is a superhero']

# This doesn't:
model = model.half()
generated_ids = model.generate(**inputs)

@StevenTang1998
Member

Sorry, I am not familiar with this. Our model is based on the Hugging Face API, so you may find a solution in their GitHub issues or forum. Alternatively, Accelerate might work.
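
For reference, a minimal sketch of one common workaround, not confirmed by the maintainers in this thread: the error comes from PyTorch lacking a half-precision LayerNorm kernel on CPU, so FP16 inference generally needs a CUDA device. The snippet below loads the weights directly in FP16 and runs generation on GPU; it assumes a CUDA device is available and uses AutoTokenizer, which was not part of the original report.

import torch
from transformers import AutoTokenizer, MvpForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("RUCAIBox/mvp")

# Load the weights in FP16 and move the model to the GPU, where the
# half-precision LayerNorm kernel is implemented.
model = MvpForConditionalGeneration.from_pretrained(
    "RUCAIBox/mvp", torch_dtype=torch.float16
).to("cuda")
model.eval()

inputs = tokenizer(
    ["Describe the following data: Iron Man | instance of | Superhero [SEP] Stan Lee | creator | Iron Man"],
    return_tensors="pt",
).to("cuda")

with torch.inference_mode():  # disable autograd bookkeeping during generation
    generated_ids = model.generate(**inputs)

print(tokenizer.batch_decode(generated_ids, skip_special_tokens=True))

If no GPU is available, FP16 is not an option for this error, since PyTorch does not implement half-precision LayerNorm on CPU; CPU-side speed-ups usually come from techniques such as dynamic quantization or exporting the model to an optimized runtime instead.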
