
Sense about input tensor different to output tensor #165

Open
Petros626 opened this issue Jul 19, 2023 · 0 comments
Hey,

I reviewed your guide on training a TF2 object detection model and wonder why you're using two different data types for the final inference.

First you stick with int8, but then you declare uint8 for the input_tensor. The second odd thing is the final use of float32 for the output.
I assume the model gets fed uint8 (shouldn't it be int8?) for faster inference, while the output_tensor should stay as accurate as possible? Doesn't that create a bottleneck?

Logically, wouldn't it make sense to use int8 for both the input_tensor and the output_tensor?

# For full integer quantization, though supported types defaults to int8 only, we explicitly declare it for clarity.
converter.target_spec.supported_types = [tf.int8]
# These set the input tensors to uint8 and output tensors to float32
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.float32
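
For context, here is a minimal end-to-end sketch of how such a full-integer quantization setup is usually wired up. The SavedModel path, input shape, and representative dataset below are placeholder assumptions for illustration, not taken from the guide:

import numpy as np
import tensorflow as tf

# Placeholder path to an exported SavedModel (assumption; adjust to your export).
saved_model_dir = "exported_model/saved_model"

def representative_dataset():
    # A handful of calibration samples in float32; real preprocessed images
    # should be yielded here instead of random data.
    for _ in range(100):
        yield [np.random.rand(1, 320, 320, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_saved_model(saved_model_dir)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Restrict ops and weight/activation types to int8 (full integer quantization).
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.target_spec.supported_types = [tf.int8]
# Interface types only: uint8 in, float32 out; the internal math stays int8.
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.float32

tflite_model = converter.convert()
with open("model_int8_quant.tflite", "wb") as f:
    f.write(tflite_model)

Note that inference_input_type and inference_output_type only affect the quantize/dequantize tensors at the model boundary; the quantized kernels inside the model run in int8 either way.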