Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/data/dss/SimAI/aicb/workload_generator/AIOB_simAI_workload_generator.py", line 886, in <module>
    comp_filepath = get_comp_out(args)
  File "/data/dss/SimAI/aicb/utils/utils.py", line 283, in get_comp_out
    filepath = measure_model(masked_input)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1510, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/dss/SimAI/aicb/workload_generator/mocked_model/AiobMegatron.py", line 86, in forward
    lay_out, layernorm = self.Layernorm(Emb_output)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1510, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1519, in _call_impl
    return forward_call(*args, **kwargs)
  File "/data/dss/SimAI/aicb/workload_generator/mocked_model/AiobMegatron.py", line 375, in forward
    lay_out, lay_time = self._apply(hidden_states)
  File "/data/dss/SimAI/aicb/utils/utils.py", line 406, in wrapper
    result = func(*args, **kwargs)
  File "/data/dss/SimAI/aicb/workload_generator/mocked_model/AiobMegatron.py", line 363, in _apply
    output_lay = FastLayerNormFN.apply(
  File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 551, in apply
    return super().apply(*args, **kwargs)  # type: ignore[misc]
TypeError: FastLayerNormFN.forward() missing 1 required positional argument: 'memory_efficient'
Docker image used: pytorch:24.01-py3
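Based on the error, the apex build inside pytorch:24.01-py3 appears to have made `memory_efficient` a required positional argument of `FastLayerNormFN.forward()`, while the call in `AiobMegatron.py` (`_apply`) still uses the older four-argument form. A minimal, version-tolerant sketch of the call site is below; the helper name `fast_layer_norm` and the `False` value are assumptions for illustration, not the project's actual code:

```python
# Sketch only: assumes the newer apex made `memory_efficient` a required
# positional argument of FastLayerNormFN.forward(), and that passing False
# keeps the previous (non-memory-efficient) behaviour.
import inspect

from apex.contrib.layer_norm.layer_norm import FastLayerNormFN


def fast_layer_norm(x, weight, bias, eps=1e-5):
    """Call FastLayerNormFN across apex versions (hypothetical helper)."""
    params = inspect.signature(FastLayerNormFN.forward).parameters
    if "memory_efficient" in params:
        # Newer apex: the extra flag is required, pass it positionally.
        return FastLayerNormFN.apply(x, weight, bias, eps, False)
    # Older apex: the original four-argument call, as AiobMegatron.py does today.
    return FastLayerNormFN.apply(x, weight, bias, eps)
```

Checking the signature at runtime would keep the workload generator working on both older and newer apex builds; alternatively, pinning an older container or apex version avoids the signature change entirely.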