Add rewrite for softplus(log(x)) -> log1p(x)
#1452
base: main
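For context, the rewrite rests on a simple identity: for x >= 0, softplus(log(x)) = log(1 + exp(log(x))) = log(1 + x) = log1p(x). In the rewritten graph the expression is guarded by a switch that returns NaN for negative inputs, which the test below exercises with `data_invalid`. Here is a quick NumPy sanity check of the identity (illustration only, not the PyTensor implementation; the sample values are my own):

```python
# NumPy-only check of the identity behind this rewrite:
#   softplus(log(x)) = log(1 + exp(log(x))) = log(1 + x) = log1p(x)  for x >= 0
import numpy as np

x = np.array([0.0, 1e-6, 0.5, 2.0], dtype="float32")

with np.errstate(divide="ignore"):               # log(0) = -inf is the expected edge case
    roundtrip = np.log(1.0 + np.exp(np.log(x)))  # what softplus(log(x)) evaluates to naively
direct = np.log1p(x)                             # the rewritten form

# Identical up to float32 rounding; the round-trip also loses a few percent of relative
# accuracy at x = 1e-6 (forming 1.0 + x discards digits), while log1p keeps it.
np.testing.assert_allclose(roundtrip, direct, rtol=1e-3, atol=1e-6)
print(roundtrip, direct, sep="\n")
```

Besides the precision point, the rewrite simply removes the exp/log round-trip from the graph.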
```diff
@@ -2010,6 +2010,27 @@ def test_exp_softplus(self, exp_op):
             decimal=6,
         )
 
+    def test_softplus_log(self):
+        # softplus(log(x)) -> log1p(x)
+        data_valid = np.random.random((4, 3)).astype("float32") * 2
+        data_valid[0, 0] = 0  # edge case
+        data_invalid = data_valid - 2
+
+        x = fmatrix()
+        f = function([x], softplus(log(x)), mode=self.mode)
```
If you want, you can check against the expected graph directly, something like:

```python
assert equal_computations(
    f.maker.fgraph.outputs,
    [pt.switch(x > 0, pt.log1p(x), np.asarray([[np.nan]], dtype="float32"))],
)
```

Or something like that. This is not a request!

It took me a while to figure out how to make the test work. It works fine if I apply it to a scalar rewritten output, like:

```python
x = pt.scalar("x")
out = pt.softplus(pt.log(x))
new_out = rewrite_graph(out, include=("canonicalize", "stabilize", "specialize"))
equal_computations([new_out], [pt.switch(x >= 0, pt.log1p(x), pt.nan)])
```

But if I want to apply it within the test (where the function is also applied to test data), I need to fix a few things:

```python
x = fmatrix()
mode = get_mode("FAST_COMPILE").including("local_exp_log", "local_exp_log_nan_switch")
f = function([x], softplus(log(x)), mode=mode)
assert equal_computations(
    f.maker.fgraph.outputs,
    [pt.switch(x >= np.array([[0]], dtype=np.int8), pt.log1p(x), np.array([[np.nan]], dtype=np.float32))],
)
```

For some reason the mode used in the test (set at class level) seems to do something extra, and the two graphs end up differing in their inplace pattern:

```
nd_x.op: Elemwise(scalar_op=Switch,inplace_pattern=<frozendict {0: 1}>)
nd_y.op: Elemwise(scalar_op=Switch,inplace_pattern=<frozendict {}>)
```

I don't know enough of the internals of PyTensor to find a solution, except forcing the mode above inside this test.

Yeah, nvm. I usually use it with just the output of […]. You're seeing […]. Not worth the trouble here.
```diff
+        graph = f.maker.fgraph.toposort()
+        ops_graph = [
+            node
+            for node in graph
+            if isinstance(node.op, Elemwise)
+            and isinstance(node.op.scalar_op, ps.Log | ps.Exp | ps.Softplus)
+        ]
+        assert len(ops_graph) == 0
+
+        expected = np.log1p(data_valid)
+        np.testing.assert_almost_equal(f(data_valid), expected)
+        assert np.all(np.isnan(f(data_invalid)))
+
     @pytest.mark.parametrize(
         ["nested_expression", "expected_switches"],
         [
```
Nitpick: I prefer to refer to it by `log1pexp`, which we have as an alias to softplus (pytensor/pytensor/tensor/math.py, Line 2474 in ff98ab8). Also, we can add a similar case for `log1mexp`?
I tested this (with the code below) and it works fine in its domain [0, 1]. I will add it too.
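For reference, the analogous identity for the `log1mexp` case is log1mexp(log(x)) = log(1 - exp(log(x))) = log(1 - x) = log1p(-x), valid on [0, 1]. Below is a minimal NumPy check of that identity; it is my own sketch, not the "code below" referenced in the comment (which is not shown on this page):

```python
# NumPy-only check of the log1mexp counterpart of the rewrite:
#   log1mexp(log(x)) = log(1 - exp(log(x))) = log(1 - x) = log1p(-x)  for x in [0, 1]
import numpy as np

x = np.array([0.0, 1e-6, 0.5, 0.999], dtype="float32")

with np.errstate(divide="ignore"):  # log(0) = -inf at the left edge is expected
    z = np.log(x)
roundtrip = np.log(-np.expm1(z))    # one standard way to evaluate log1mexp(z) = log(1 - exp(z))
direct = np.log1p(-x)               # the rewritten form

np.testing.assert_allclose(roundtrip, direct, rtol=1e-4, atol=1e-6)
print(roundtrip, direct, sep="\n")
```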