This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Error when setting 'granularity': 'per_channel' in the config during QAT quantization #5772

Open
Count1ngStar opened this issue Apr 22, 2024 · 0 comments

Comments

@Count1ngStar

General Question of the NNI Student Program

When quantizing with QAT, setting 'granularity': 'per_channel' in the config raises an error.

Please briefly describe your question / opinion:
I followed the quick start example from the docs; after changing the quantization granularity from 'default' to 'per_channel', quantization no longer completes and fails with the error below. A reduced sketch of the setup follows.
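For reference, roughly how the setup looks (not the exact notebook cell — the keys elided in the traceback are only summarized here, and `model`, `evaluator`, `train_dataloader`, and `device` are built as in the quick start example):

```python
from nni.compression.quantization import QATQuantizer

# Config reduced from the quick start example, with only 'granularity'
# changed to 'per_channel'; the remaining keys are kept as in the tutorial.
config_list = [{
    'op_names': ['conv1', 'conv2', 'fc1', 'fc2'],
    'target_names': ['input', 'bias'],
    # ... other keys as in the quick start ...
    'granularity': 'per_channel',
}]

quantizer = QATQuantizer(model, config_list, evaluator, len(train_dataloader))
real_input = next(iter(train_dataloader))[0].to(device)
quantizer.track_forward(real_input)
```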

Please provide your NNI environment information / NNI Environment:

  • nni version: 3.0
  • Python version: 3.10 (conda environment)

Other Advice

Does the documentation need to be updated (yes / no):

Error message:
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[214], line 53
10 config_list = [{
11 'op_names': ['conv1', 'conv2', 'fc1', 'fc2'],
12 'target_names': ['input', 'bias'],
(...)
31 'granularity': 'per_channel',
32 }]
34 # config_list = [{
35 # 'op_names': ['conv1', 'conv2', 'fc1', 'fc2'],
36 # 'target_names': ['weight', 'bias'],
(...)
50
51 # ]
---> 53 quantizer = QATQuantizer(model, config_list, evaluator, len(train_dataloader))
54 real_input = next(iter(train_dataloader))[0].to(device)
55 quantizer.track_forward(real_input)

File ~/anaconda3/envs/pt/lib/python3.10/site-packages/nni/compression/quantization/qat_quantizer.py:75, in QATQuantizer.__init__(self, model, config_list, evaluator, quant_start_step, existed_wrappers)
73 def __init__(self, model: torch.nn.Module, config_list: List[Dict], evaluator: Evaluator,
74 quant_start_step: int = 0, existed_wrappers: Dict[str, ModuleWrapper] | None = None):
---> 75 super().__init__(model, config_list, evaluator, existed_wrappers)
76 self.evaluator: Evaluator
77 self.quant_start_step = max(quant_start_step, 0)

File ~/anaconda3/envs/pt/lib/python3.10/site-packages/nni/compression/base/compressor.py:295, in Quantizer.__init__(self, model, config_list, evaluator, existed_wrappers)
293 super().__init__(model=model, config_list=config_list, evaluator=evaluator, existed_wrappers=existed_wrappers)
294 self._target_spaces: _QUANTIZATION_TARGET_SPACES
--> 295 self._register_scalers()

File ~/anaconda3/envs/pt/lib/python3.10/site-packages/nni/compression/base/compressor.py:299, in Quantizer._register_scalers(self)
297 def _register_scalers(self):
298 # scalers are used to support different sparse/quant granularity
--> 299 register_scalers(self._target_spaces, self._set_default_sparse_granularity)

File ~/anaconda3/envs/pt/lib/python3.10/site-packages/nni/compression/base/compressor.py:433, in register_scalers(target_spaces, set_default_granularity)
430 target_space._scaler = Scaling([-1, 1], kernel_padding_mode='back', kernel_padding_val=-1)
431 elif target_space.granularity == 'per_channel':
432 # NOTE: here assume dim 0 is batch, dim 1 is channel
--> 433 assert target_space._target_type in [TargetType.INPUT, TargetType.OUTPUT]
434 target_space._scaler = Scaling([-1, 1], kernel_padding_mode='back', kernel_padding_val=-1)
435 else:

AssertionError
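Judging from the assertion at compressor.py:433, 'per_channel' seems to be accepted only for input/output targets, so the 'bias' entry in target_names is presumably what triggers it. Is the intended usage something like the sketch below (just my guess, untested), where 'per_channel' is limited to the input target and 'bias' keeps the 'default' granularity?

```python
# Untested guess: split the config so 'per_channel' only applies to the
# input target, while 'bias' stays on the default granularity.
config_list = [{
    'op_names': ['conv1', 'conv2', 'fc1', 'fc2'],
    'target_names': ['input'],
    'granularity': 'per_channel',
}, {
    'op_names': ['conv1', 'conv2', 'fc1', 'fc2'],
    'target_names': ['bias'],
    'granularity': 'default',
}]
```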
