Hi @dennybritz, the in-graph beam search is pretty nice. I have a couple of questions. Could you please clarify?
If we need to save the inference graph for C++ deployment, are configurations like the beam width and length-norm weight defined through a placeholder tensor, or do they have to be baked into the graph?
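For what it's worth, here is a minimal sketch of what I mean, assuming the TF 1.x API (the names here are illustrative, not this repo's actual code): a scalar like the length-penalty weight could in principle be fed through `tf.placeholder_with_default`, whereas the beam width typically fixes static tensor shapes, so I suspect it has to be a Python int at graph-construction time.

```python
import tensorflow as tf

# Illustrative sketch (TF 1.x): the length-penalty weight is only used in a
# scalar score computation, so it could be fed at inference time; the default
# value keeps the graph usable without an explicit feed.
length_penalty_weight = tf.placeholder_with_default(
    tf.constant(0.6), shape=[], name="length_penalty_weight")

def length_normalized_score(log_prob, seq_len):
    # Length normalization in the style of Wu et al. (2016):
    # score = log_prob / ((5 + len) / 6) ** alpha
    penalty = tf.pow((5.0 + tf.to_float(seq_len)) / 6.0, length_penalty_weight)
    return log_prob / penalty
```

The beam width, by contrast, usually determines the tile factor of the decoder state tensors, which is why I'd guess it cannot be a placeholder.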
What does the inference speed look like on the NMT task (as described in your paper https://arxiv.org/abs/1703.03906, beam width 10, K80 GPU)?
It may be a little inflexible if we want to use additional information, such as a language model score, to guide the search. Do you have any comments on that aspect?
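As a concrete example of the kind of flexibility I mean, here is a hedged NumPy sketch of shallow fusion, where an external language model's log-probabilities are mixed into each beam-expansion step. `lm_weight` and both score arrays are hypothetical inputs, not anything exposed by this repo.

```python
import numpy as np

# Sketch of shallow fusion at one beam-search step. Both log-prob arrays
# have shape [beam_size, vocab_size]; lm_weight is a tunable mixing scalar.
def fused_step_scores(tm_log_probs, lm_log_probs, lm_weight=0.3):
    return tm_log_probs + lm_weight * lm_log_probs

def expand_beam(cum_scores, step_scores, k):
    # cum_scores: [beam_size] cumulative hypothesis scores.
    # Returns (parent beam index, token id) for the k best continuations.
    total = cum_scores[:, None] + step_scores       # [beam_size, vocab_size]
    flat = total.ravel()
    best = np.argpartition(-flat, k - 1)[:k]        # unsorted top-k
    best = best[np.argsort(-flat[best])]            # sort the top-k
    vocab = step_scores.shape[1]
    return best // vocab, best % vocab
```

Doing this out-of-graph is straightforward; my question is whether the in-graph decoder exposes a hook for injecting such per-step scores.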
Many thanks!