Details
I'm currently working with an existing TensorFlow 2 (TF2) model and the SparseOperationKit (SOK). This setup lets me use the SOK `SparseEmbedding` layer, but I've found that I have to define the `sok_model` and the `tf_model` separately for training. After training the new TF2 model with SOK, I also found that I need to export the `sok_model` and the `tf_model` separately.
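For context, here is a minimal sketch of what the split setup looks like, assuming the SOK 1.x API (`sok.Init` / `sok.DistributedEmbedding`); the layer arguments and sizes below are illustrative placeholders, not my actual configuration, and the export calls are omitted:

```python
# A minimal sketch of the split setup, assuming the SOK 1.x API
# (sok.Init / sok.DistributedEmbedding). All sizes and arguments below
# are illustrative placeholders, and the export calls are omitted.
import tensorflow as tf
import sparse_operation_kit as sok

sok.Init(global_batch_size=1024)  # SOK must be initialized before its layers are built


class SOKModel(tf.keras.Model):
    """Sparse part: only the SOK embedding lookup."""

    def __init__(self):
        super().__init__()
        self.embedding = sok.DistributedEmbedding(
            combiner="sum",
            max_vocabulary_size_per_gpu=100_000,
            embedding_vec_size=16,
            slot_num=10,
            max_nnz=5,
        )

    def call(self, ids, training=False):
        # ids: categorical feature keys (format depends on the SOK layer used)
        return self.embedding(ids, training=training)


sok_model = SOKModel()

# Dense part: a plain TF2/Keras network that consumes the embedding vectors
# (concatenated with any dense features).
tf_model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```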
The outputs of these exports are as follows:

- from the `sok_model`: `EmbeddingVariable_*_keys.file` and `EmbeddingVariable_*_values.file`
- from the `tf_model`: `saved_model.pb` and the `variables` files

When I need to execute a local test prediction request, I have to load both models independently and then call the `inference_step` as follows:
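Roughly, the call looks like the following; the input names are placeholders, and the restore of the `EmbeddingVariable_*` key/value files into `sok_model` is omitted:

```python
# Two-stage prediction: SOK lookup first, then the dense TF2 model.
import tensorflow as tf

tf_model = tf.keras.models.load_model("./tf_model")  # saved_model.pb + variables
# sok_model is rebuilt as above; its embedding tables are restored separately
# from the EmbeddingVariable_*_keys.file / EmbeddingVariable_*_values.file pair.


@tf.function
def inference_step(categorical_ids, dense_features):
    # sparse lookup via SOK, flattened to [batch, slot_num * embedding_vec_size]
    embeddings = sok_model(categorical_ids, training=False)
    embeddings = tf.reshape(embeddings, [tf.shape(embeddings)[0], -1])
    # dense TF2 part on the concatenated features
    return tf_model(tf.concat([embeddings, dense_features], axis=1), training=False)
```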
Questions

1. Model Serving: My goal is to deploy this model on the Triton Inference Server, and I'm looking for guidance or examples that could streamline this process. What would the ideal structure for this deployment be? Would treating it as an ensemble model that includes both SOK and TensorFlow 2 backends be beneficial (a rough sketch of the layout I have in mind is shown after this list)? Which backend would be the better choice: HugeCTR, TensorFlow 2, or another option? Any resources or guides that could help here would be appreciated. For HugeCTR it seems necessary to export the model graph; how can I accomplish the same with this TensorFlow 2 model that uses SOK?

2. Model Conversion to ONNX: According to the Hierarchical Parameter Server Demo, HugeCTR can load both the sparse and dense models and convert them into a single ONNX model (roughly the call shown below the list). How can I perform a similar conversion for this merlin-tensorflow model, which uses SOK and exports the sparse and dense models separately?
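To make the ensemble idea in question 1 concrete, this is roughly the kind of `config.pbtxt` layout I have in mind; the model names, tensor names, shapes, and backends below are placeholders rather than a working configuration:

```
name: "sok_tf_ensemble"
platform: "ensemble"
max_batch_size: 64
input [
  { name: "CATEGORICAL_IDS", data_type: TYPE_INT64, dims: [ -1 ] },
  { name: "DENSE_FEATURES",  data_type: TYPE_FP32,  dims: [ -1 ] }
]
output [
  { name: "PREDICTION", data_type: TYPE_FP32, dims: [ 1 ] }
]
ensemble_scheduling {
  step [
    {
      # sparse part: embedding lookup served by e.g. the HugeCTR/HPS backend
      model_name: "embedding_lookup"
      model_version: -1
      input_map  { key: "KEYS"       value: "CATEGORICAL_IDS" }
      output_map { key: "EMBEDDINGS" value: "embedding_vectors" }
    },
    {
      # dense part: the exported TF2 SavedModel served by the tensorflow backend
      model_name: "dense_model"
      model_version: -1
      input_map  { key: "input_1" value: "embedding_vectors" }
      input_map  { key: "input_2" value: "DENSE_FEATURES" }
      output_map { key: "output_1" value: "PREDICTION" }
    }
  ]
}
```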
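And for question 2, the HugeCTR-side conversion I'm referring to from the HPS demo is roughly the following (file names are placeholders); I'm looking for the equivalent path for the SOK key/value files plus the TF2 SavedModel listed above:

```python
# HugeCTR's ONNX converter merges the dense model and the sparse embedding
# files into a single ONNX graph (file names below are placeholders).
import hugectr2onnx

hugectr2onnx.converter.convert(
    onnx_model_path="model.onnx",
    graph_config="model_graph.json",             # HugeCTR graph configuration
    dense_model="model_dense_2000.model",        # dense weights
    convert_embedding=True,
    sparse_models=["model0_sparse_2000.model"],  # sparse embedding table(s)
)
```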
Environment details
nvcr.io/nvidia/merlin/merlin-tensorflow:23.02