[TOSA] Add TosaLayerwiseConstantFoldPass and TosaReduceTransposes passes #4165

Status: Open · wants to merge 1 commit into main
6 changes: 6 additions & 0 deletions lib/Dialect/TorchConversion/Transforms/Passes.cpp
@@ -117,6 +117,12 @@ void TorchConversion::createTorchBackendToTosaBackendPipeline(
const TorchConversion::TosaBackendPipelineOptions &options) {
pm.addNestedPass<func::FuncOp>(
createConvertTorchToTosaPass(options.requireFullTosaConversion));
// Fold full-layer operations on TOSA constants
pm.addNestedPass<func::FuncOp>(createTosaLayerwiseConstantFoldPass());

// Perform transpose reductions for avoidable data movements
pm.addNestedPass<func::FuncOp>(createTosaReduceTransposes());

// Perform rank broadcasting so TosaToLinalg pass works
pm.addNestedPass<func::FuncOp>(createTosaMakeBroadcastablePass());
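The two new passes operate on constants and transposes at the TOSA level: folding a transpose whose operand is a compile-time constant amounts to computing the permuted tensor once, ahead of time, and replacing the transpose with a new constant. A minimal NumPy sketch of that rewrite (illustrative only, not torch-mlir code; `fold_constant_transpose` is a hypothetical helper):

```python
import numpy as np

def fold_constant_transpose(const: np.ndarray, perm) -> np.ndarray:
    # At compile time, a tosa.transpose of a tosa.const can be replaced by a
    # single new constant holding the already-permuted data, so no transpose
    # (and no data movement) remains at runtime.
    return np.ascontiguousarray(np.transpose(const, perm))

# A 2x3 constant "weight" tensor, as a compiler might see it.
weights = np.arange(6).reshape(2, 3)
folded = fold_constant_transpose(weights, (1, 0))  # now a 3x2 constant
```

The same idea motivates running the fold before `TosaMakeBroadcastablePass`: once constants are pre-permuted, fewer transpose ops survive into the TosaToLinalg lowering.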

9 changes: 9 additions & 0 deletions projects/pt1/e2e_testing/xfail_sets.py
@@ -1731,6 +1731,15 @@
"HBC_basic",
# 1D inputs cause generated tosa.negate ops to crash downstream
"NllLossModule_1D_basic",
# BertModule is not crashing, but is timing out due to TosaLayerwiseConstantFoldPass:
# Exception ignored on calling ctypes callback function: <function RefBackendInvoker.__init__.<locals>.consume_return_funcs at 0x765783f12c20>
# Traceback (most recent call last):
# File "torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/linalg_on_tensors_backends/refbackend.py", line 101, in consume_return_funcs
# def consume_return_funcs(*args):
# File "torch-mlir/build/tools/torch-mlir/python_packages/torch_mlir/torch_mlir_e2e_test/framework.py", line 316, in handle_timeout
# raise TimeoutError(self.error_message)
# TimeoutError: Timeout
"BertModule_basic",
Member:
It's possible to specify a timeout value like

@register_test_case(module_factory=lambda: TimeOutModule(), timeout_seconds=10)
but I don't know if it's possible to increase the value only for the Tosa conversion path.

Contributor Author:
I don't think it's a good idea to increase the timeout for all conversion paths. Right now, the BertModule test is marked as XFAIL anyway, so maybe it can be fixed altogether later.
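The traceback in the xfail entry above shows the timeout being raised from a `handle_timeout` hook in framework.py. A sketch of how a signal-based per-test timeout of that kind typically works (Unix-only, relies on SIGALRM; `run_with_timeout` is a hypothetical name, not the framework's actual API):

```python
import signal
import time

def run_with_timeout(fn, timeout_seconds):
    # Install an alarm handler that raises TimeoutError, mirroring the
    # "raise TimeoutError(self.error_message)" seen in the traceback.
    def handle_timeout(signum, frame):
        raise TimeoutError("Timeout")

    old_handler = signal.signal(signal.SIGALRM, handle_timeout)
    signal.alarm(timeout_seconds)
    try:
        return fn()
    finally:
        # Always cancel the alarm and restore the previous handler,
        # whether the call finished or timed out.
        signal.alarm(0)
        signal.signal(signal.SIGALRM, old_handler)

# A fast call completes normally.
result = run_with_timeout(lambda: 1 + 1, timeout_seconds=5)

# A slow call is interrupted and raises TimeoutError.
try:
    run_with_timeout(lambda: time.sleep(3), timeout_seconds=1)
    timed_out = False
except TimeoutError:
    timed_out = True
```

A process-wide alarm like this is one reason raising the limit per conversion path is awkward: the timeout wraps the whole test invocation, not any single pass pipeline.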

}

# Write the TOSA set as a "passing" set as it is very early in development