Python Frontend: Use Numba Type Inference #1253

Draft
wants to merge 30 commits into base: main
Conversation

AlexanderViand-Intel (Collaborator) commented Jan 9, 2025

Very much WIP.

This does a few things:

  • Enables Numba type inference and uses the inferred types to decide which MLIR type gets emitted.
  • Adds an @mlir decorator that is a slightly modified version of Numba's @intrinsic; the main difference is that one can provide a pure Python implementation and run it under Python. This allows us to define, e.g., linalg.matmul and assign np.matmul as its Python implementation.
  • Changes a few defaults and adds a few options to the pipeline here and there. Not really related to type inference; might pull this out into another PR.
  • Adds dummy classes for MLIR types (e.g., Secret[...]), in preparation for grabbing signature types from type annotations (rather than from a string passed into the decorator, as is Numba's approach for ahead-of-time compilation).
  • TODO: lots of cleanup, but most importantly we need to define MLIR-style integer and tensor types (see also Decide on (Cleartext) Integer Semantics for HEIR's Python Frontend #1252).
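In spirit, the @mlir decorator described above could look something like the following sketch. All names and structure here are assumptions for illustration, not the actual heir_py implementation: the point is that the decorated op definition is kept for the MLIR-emission path, while a pure-Python implementation is registered so the op also runs under plain Python.

```python
# Hypothetical sketch of an @mlir-style decorator: like Numba's @intrinsic,
# but it also carries a pure-Python implementation so the decorated op can
# be executed directly under Python (illustrative names, not heir_py's API).
import numpy as np

def mlir(python_impl):
    """Register `python_impl` as the plain-Python fallback for an MLIR op."""
    def decorator(op_def):
        def wrapper(*args, **kwargs):
            # In a real frontend, a compilation context would decide whether
            # to emit MLIR (via op_def) or to run the Python fallback.
            return python_impl(*args, **kwargs)
        wrapper.op_def = op_def            # kept for the MLIR-emission path
        wrapper.python_impl = python_impl  # kept for plain-Python execution
        return wrapper
    return decorator

@mlir(np.matmul)
def linalg_matmul(lhs, rhs):
    """Placeholder describing the linalg.matmul op for the emitter."""

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6], [7, 8]])
print(linalg_matmul(a, b))  # runs np.matmul under plain Python
```

Under this design, calling the decorated function outside a compilation context simply dispatches to NumPy, which makes the frontend's ops testable without compiling anything.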

PS: fun fact: in the Numba/NumPy world, i4 is not a 4-bit integer (as it is in MLIR), but shorthand for the 4-byte type int32.
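This byte-vs-bit mismatch is easy to verify with NumPy directly:

```python
import numpy as np

# In NumPy's dtype shorthand the digit counts *bytes*, not bits:
# 'i4' is a 4-byte (32-bit) signed integer, unlike MLIR's 4-bit i4.
assert np.dtype('i4') == np.int32
assert np.dtype('i4').itemsize == 4       # size in bytes
assert np.dtype('i4').itemsize * 8 == 32  # size in bits
```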

AlexanderViand-Intel (Collaborator, Author) commented Jan 21, 2025

Sorry for the lack of clean-up/updates on this one; I was feeling a bit under the weather.

The big TODO discussed last week, moving from Numba-style inference to something that supports statically shaped tensors, is still open, but I did do some other smaller cleanup:

  • You can now select between schemes (only bgv and ckks so far, but it would be interesting to see how much effort supporting the boolean pipeline would take, at least down to the heir-translate output).
  • The emitter now supports floats (i.e., it emits arith.addf instead of arith.addi).
  • You can now use Python type annotations to specify types (including ranked tensors, though the sizes will be converted to "?").
  • The frontend now respects Secret[...] annotations and no longer relies on --secretize. This also means it will only generate enc helpers for the secret arguments.
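One way the Secret[...] annotation types could work is sketched below. This is an assumption about the mechanism, not heir_py.types' actual code: a generic wrapper built on __class_getitem__ lets the frontend read a function's __annotations__ and pick out the secret arguments at decoration time.

```python
# Hypothetical sketch of "dummy" annotation types like Secret[I16]
# (illustrative only; not the actual heir_py.types implementation).
class MLIRType:
    """Marker base class for MLIR-style annotation types."""

class I16(MLIRType):
    """Stand-in for MLIR's 16-bit integer type."""

class Secret:
    """Generic wrapper marking an argument as secret, e.g. Secret[I16]."""
    def __class_getitem__(cls, inner):
        # Return a subclass that remembers the wrapped type, so a frontend
        # can inspect annotations like Secret[I16] at decoration time.
        return type(f"Secret[{inner.__name__}]", (cls,), {"inner": inner})

def is_secret(annotation):
    return isinstance(annotation, type) and issubclass(annotation, Secret)

def foo(x: Secret[I16], y: I16):
    return x + y

# A frontend can now read foo.__annotations__ to find secret arguments,
# which is all it needs to generate enc helpers only for those:
secret_args = [name for name, ann in foo.__annotations__.items()
               if is_secret(ann)]
print(secret_args)  # ['x']
```

Making Secret[...] produce a real class (rather than, say, a typing alias) keeps the check a plain issubclass test, at the cost of creating a new class object per subscript.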

Here's a current example:

from heir_py import compile
from heir_py.mlir import *
from heir_py.types import *

@compile() # defaults to scheme="bgv", backend="openfhe", debug=False
def foo(x : Secret[I16], y : I16):
  sum = x + y
  mul = x * y
  expression = sum * mul
  return expression

foo.setup() # runs keygen/etc
enc_x = foo.encrypt_x(7)
result_enc = foo.eval(enc_x, 8)
result = foo.decrypt_result(result_enc)
print(f"Expected result for `foo`: {foo(7,8)}, decrypted result: {result}")
