  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quantsim.py", line 697, in train_step
    self._fill_missing_encoding_min_max_gradients(gradients)
  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quantsim.py", line 661, in _fill_missing_encoding_min_max_gradients
    dloss_by_dmin, dloss_by_dmax = param_quantizer.get_gradients_for_encoding_min_max(weight_tensor,
  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quant_sim/tensor_quantizer.py", line 884, in get_gradients_for_encoding_min_max
    gradients = quantsim_per_channel_custom_grad_learned_grid(weight_tensor,
  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quant_sim/quantsim_straight_through_grad.py", line 364, in quantsim_per_channel_custom_grad_learned_grid
    dloss_by_dmin, dloss_by_dmax, dloss_by_dx = \
  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quant_sim/quantsim_straight_through_grad.py", line 332, in _compute_dloss_by_dmin_dmax_and_dx_for_per_channel
    dloss_by_dmin, dloss_by_dmax, dloss_by_dx = \
  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quant_sim/quantsim_straight_through_grad.py", line 233, in _compute_dloss_by_dmin_dmax_and_dx
    dloss_by_dmax = tf.cast(_compute_dloss_by_dmax(x, grad, scaling, rounded_offset, bitwidth, is_symmetric),
  File "../anaconda3/envs/envaimet/lib/python3.8/site-packages/aimet_tensorflow/keras/quant_sim/quantsim_straight_through_grad.py", line 167, in _compute_dloss_by_dmax
    r_x_by_s_minus_x_by_s = tf.round(x / scaling) - (x / scaling)
Node: 'truediv_4'
Incompatible shapes: [2,2,32,2] vs. [32]
[[{{node truediv_4}}]] [Op:__inference_train_function_9818]
If I run the test case from the link below, it works as expected and prints the correct output: https://github.com/quic/aimet/blob/ce8e344685e1949bb5fbd0c7836b16defce33981/TrainingExtensions/tensorflow/test/python/test_per_channel_quantization_keras.py
But if I modify the network so that it contains a Conv2DTranspose layer, I hit the error shown in the traceback above.
It looks like there is a dimensional mismatch caused by the fact that, in TensorFlow, the weights of a transpose convolution have shape [k, k, out_channels, in_channels], while a regular convolution uses [k, k, in_channels, out_channels].
Is this a known issue or a missing feature? It would be nice to be able to use transpose convolutions with range-learning schemes as well.
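To make the mismatch concrete, here is a minimal NumPy sketch (not AIMET code) using the shapes from the traceback. It assumes the per-channel quantizer builds one scale per output channel (giving a [32] vector) and then divides the weight tensor by it elementwise, which broadcasts against the last axis. For a transpose-convolution kernel [k, k, out_channels, in_channels] = [2, 2, 32, 2], the last axis is in_channels, so the division fails exactly like `truediv_4` above; reshaping the scales to line up with axis 2 (a hypothetical workaround, not what the library does) makes it broadcast:

```python
import numpy as np

# Transpose-conv kernel: [k, k, out_channels, in_channels] = [2, 2, 32, 2]
weight_transpose = np.zeros((2, 2, 32, 2))
# One scale per output channel, as per-channel quantization produces.
scaling = np.ones(32)

# NumPy/TF broadcasting aligns trailing axes: 2 vs 32 is incompatible,
# mirroring "Incompatible shapes: [2,2,32,2] vs. [32]" from the traceback.
try:
    _ = weight_transpose / scaling
except ValueError as exc:
    print("broadcast error:", exc)

# A regular Conv2D kernel [k, k, in_channels, out_channels] = [2, 2, 2, 32]
# has out_channels last, so the same division broadcasts fine.
weight_regular = np.zeros((2, 2, 2, 32))
print((weight_regular / scaling).shape)  # (2, 2, 2, 32)

# Hypothetical fix sketch: reshape the scales so they align with the
# transpose-conv channel axis (axis 2) before dividing.
scaling_aligned = scaling.reshape(32, 1)
print((weight_transpose / scaling_aligned).shape)  # (2, 2, 32, 2)
```

This suggests the per-channel gradient code would need to know which axis holds the output channels for Conv2DTranspose, rather than assuming it is always the last one.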