Use google style docstrings everywhere
vloncar committed Apr 26, 2023
1 parent 8f1f709 commit a00f9c6
Showing 6 changed files with 154 additions and 221 deletions.
2 changes: 1 addition & 1 deletion docs/flows.rst
@@ -49,7 +49,7 @@ New optimizers can be registered with the :py:func:`~hls4ml.model.optimizer.opti

Flows
-----
A :py:class:`~hls4ml.model.flow.flow.Flow` is an ordered set of optimizers that represent a single stage in the conversion process. The optimizers
A :py:class:`~hls4ml.model.flow.flow.Flow` is an ordered set of optimizers that represents a single stage in the conversion process. The optimizers
from a flow are applied in sequence until they no longer make changes to the model graph (controlled by the ``transform`` return value), after which
the next flow (stage) can start. Flows may require that other flows are applied before them, ensuring the model graph is in a desired state before a
flow starts. The function :py:func:`~hls4ml.model.flow.flow.register_flow` is used to register a new flow. Flows are applied on a model graph with
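As context for the flows described above, a minimal sketch of registering a flow. The flow name, optimizer names, and prerequisite flow are illustrative placeholders, and the call assumes the register_flow(name, optimizers, requires=None) signature referenced in the documentation; it is not part of this commit.

from hls4ml.model.flow.flow import register_flow

# A hypothetical flow built from two optimizer passes; 'requires' lists flows
# that must have been applied to the model graph before this one starts.
register_flow(
    'my_custom_flow',                                   # flow name (placeholder)
    ['fuse_bias_add', 'eliminate_linear_activation'],   # optimizer names (illustrative)
    requires=['convert'],                               # prerequisite flow (illustrative)
)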
12 changes: 6 additions & 6 deletions hls4ml/backends/vivado_accelerator/vivado_accelerator_backend.py
@@ -58,13 +58,13 @@ def build(
return parse_vivado_report(model.config.get_output_dir())

def make_xclbin(self, model, platform='xilinx_u250_xdma_201830_2'):
"""
"""Create the xclbin for the given model and target platform.
Parameters
----------
- model : compiled and built hls_model.
- platform : development Target Platform, must be installed first. On the host machine is required only the
deployment target platform, both can be found on the Getting Started section of the Alveo card.
Args:
model (ModelGraph): Compiled and built model.
platform (str, optional): Development/Deployment target platform, must be installed first.
The host machine only requires the deployment target platform. Refer to the Getting Started section of
the Alveo guide. Defaults to 'xilinx_u250_xdma_201830_2'.
"""
curr_dir = os.getcwd()
abs_path_dir = os.path.abspath(model.config.get_output_dir())
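A brief usage sketch for make_xclbin as documented above. It assumes hls_model has already been converted with the VivadoAccelerator backend, compiled, and built, and that the target platform is installed; the get_backend lookup is the usual way to reach backend methods, but the exact call sequence here is illustrative.

>>> import hls4ml
>>> backend = hls4ml.backends.get_backend('VivadoAccelerator')
>>> backend.make_xclbin(hls_model, platform='xilinx_u250_xdma_201830_2')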
151 changes: 60 additions & 91 deletions hls4ml/converters/__init__.py
@@ -194,7 +194,7 @@ def convert_from_keras_model(
hls_config=None,
**kwargs,
):
"""Convert to hls4ml model based on the provided configuration.
"""Convert Keras model to hls4ml model based on the provided configuration.
Args:
model: Keras model to convert
@@ -221,7 +221,7 @@ kwargs** (dict, optional): Additional parameters that will be used to create the config of the specified backend
kwargs** (dict, optional): Additional parameters that will be used to create the config of the specified backend
Raises:
Exception: If precision and reuse factor are not present in 'hls_config'
Exception: If precision and reuse factor are not present in 'hls_config'.
Returns:
ModelGraph: hls4ml model.
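For symmetry with the doctest examples kept in the other converters' old docstrings, an illustrative call for the Keras path; model is assumed to be an existing Keras model.

>>> import hls4ml
>>> config = hls4ml.utils.config_from_keras_model(model, granularity='model')
>>> hls_model = hls4ml.converters.convert_from_keras_model(model, hls_config=config)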
@@ -256,54 +256,35 @@ def convert_from_pytorch_model(
hls_config=None,
**kwargs,
):
"""
"""Convert PyTorch model to hls4ml model based on the provided configuration.
Args:
model: PyTorch model to convert.
input_shape (list): The shape of the input tensor.
output_dir (str, optional): Output directory of the generated HLS project. Defaults to 'my-hls-test'.
project_name (str, optional): Name of the HLS project. Defaults to 'myproject'.
input_data_tb (str, optional): String representing the path of input data in .npy or .dat format that will be
used during csim and cosim. Defaults to None.
output_data_tb (str, optional): String representing the path of output data in .npy or .dat format that will be
used during csim and cosim. Defaults to None.
backend (str, optional): Name of the backend to use, e.g., 'Vivado' or 'Quartus'. Defaults to 'Vivado'.
board (str, optional): One of target boards specified in `supported_board.json` file. If set to `None` a default
device of a backend will be used. See documentation of the backend used.
part (str, optional): The FPGA part. If set to `None` a default part of a backend will be used.
See documentation of the backend used. Note that if `board` is specified, the part associated to that board
will overwrite any part passed as a parameter.
clock_period (int, optional): Clock period of the design.
Defaults to 5.
io_type (str, optional): Type of implementation used. One of
'io_parallel' or 'io_stream'. Defaults to 'io_parallel'.
hls_config (dict, optional): The HLS config.
kwargs** (dict, optional): Additional parameters that will be used to create the config of the specified backend.
Raises:
Exception: If precision and reuse factor are not present in 'hls_config'.
Convert a Pytorch model to a hls model.
Parameters
----------
model : Pytorch model object.
Model to be converted to hls model object.
input_shape : @todo: to be filled
output_dir (str, optional): Output directory of the generated HLS
project. Defaults to 'my-hls-test'.
project_name (str, optional): Name of the HLS project.
Defaults to 'myproject'.
input_data_tb (str, optional): String representing the path of input data in .npy or .dat format that will be
used during csim and cosim.
output_data_tb (str, optional): String representing the path of output data in .npy or .dat format that will be
used during csim and cosim.
backend (str, optional): Name of the backend to use, e.g., 'Vivado'
or 'Quartus'.
board (str, optional): One of target boards specified in `supported_board.json` file. If set to `None` a default
device of a backend will be used. See documentation of the backend used.
part (str, optional): The FPGA part. If set to `None` a default part of a backend will be used.
See documentation of the backend used. Note that if `board` is specified, the part associated to that board
will overwrite any part passed as a parameter.
clock_period (int, optional): Clock period of the design.
Defaults to 5.
io_type (str, optional): Type of implementation used. One of
'io_parallel' or 'io_stream'. Defaults to 'io_parallel'.
hls_config (dict, optional): The HLS config.
kwargs** (dict, optional): Additional parameters that will be used to create the config of the specified backend
Returns
-------
ModelGraph : hls4ml model object.
See Also
--------
hls4ml.convert_from_keras_model, hls4ml.convert_from_onnx_model
Examples
--------
>>> import hls4ml
>>> config = hls4ml.utils.config_from_pytorch_model(model, granularity='model')
>>> hls_model = hls4ml.converters.convert_from_pytorch_model(model, hls_config=config)
Notes
-----
Only sequential Pytorch models are supported for now.
Returns:
ModelGraph: hls4ml model.
"""

config = create_config(output_dir=output_dir, project_name=project_name, backend=backend, **kwargs)
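The Examples block dropped from the old docstring above still reads as a reasonable usage sketch, with the input shape added since the converter takes it as a second argument; the [1, 16] shape is a placeholder assumption.

>>> import hls4ml
>>> config = hls4ml.utils.config_from_pytorch_model(model, granularity='model')
>>> hls_model = hls4ml.converters.convert_from_pytorch_model(model, [1, 16], hls_config=config)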
@@ -335,49 +316,37 @@ def convert_from_onnx_model(
hls_config=None,
**kwargs,
):
"""
"""Convert Keras model to hls4ml model based on the provided configuration.
Args:
model: ONNX model to convert.
output_dir (str, optional): Output directory of the generated HLS
project. Defaults to 'my-hls-test'.
project_name (str, optional): Name of the HLS project.
Defaults to 'myproject'.
input_data_tb (str, optional): String representing the path of input data in .npy or .dat format that will be
used during csim and cosim.
output_data_tb (str, optional): String representing the path of output data in .npy or .dat format that will be
used during csim and cosim.
backend (str, optional): Name of the backend to use, e.g., 'Vivado'
or 'Quartus'.
board (str, optional): One of target boards specified in `supported_board.json` file. If set to `None` a default
device of a backend will be used. See documentation of the backend used.
part (str, optional): The FPGA part. If set to `None` a default part of a backend will be used.
See documentation of the backend used. Note that if `board` is specified, the part associated to that board
will overwrite any part passed as a parameter.
clock_period (int, optional): Clock period of the design.
Defaults to 5.
io_type (str, optional): Type of implementation used. One of
'io_parallel' or 'io_stream'. Defaults to 'io_parallel'.
hls_config (dict, optional): The HLS config.
kwargs** (dict, optional): Additional parameters that will be used to create the config of the specified backend
Raises:
Exception: If precision and reuse factor are not present in 'hls_config'.
Convert an ONNX model to a hls model.
Parameters
----------
model : ONNX model object.
Model to be converted to hls model object.
output_dir (str, optional): Output directory of the generated HLS
project. Defaults to 'my-hls-test'.
project_name (str, optional): Name of the HLS project.
Defaults to 'myproject'.
input_data_tb (str, optional): String representing the path of input data in .npy or .dat format that will be
used during csim and cosim.
output_data_tb (str, optional): String representing the path of output data in .npy or .dat format that will be
used during csim and cosim.
backend (str, optional): Name of the backend to use, e.g., 'Vivado'
or 'Quartus'.
board (str, optional): One of target boards specified in `supported_board.json` file. If set to `None` a default
device of a backend will be used. See documentation of the backend used.
part (str, optional): The FPGA part. If set to `None` a default part of a backend will be used.
See documentation of the backend used. Note that if `board` is specified, the part associated to that board
will overwrite any part passed as a parameter.
clock_period (int, optional): Clock period of the design.
Defaults to 5.
io_type (str, optional): Type of implementation used. One of
'io_parallel' or 'io_stream'. Defaults to 'io_parallel'.
hls_config (dict, optional): The HLS config.
kwargs** (dict, optional): Additional parameters that will be used to create the config of the specified backend
Returns
-------
ModelGraph : hls4ml model object.
See Also
--------
hls4ml.convert_from_keras_model, hls4ml.convert_from_pytorch_model
Examples
--------
>>> import hls4ml
>>> config = hls4ml.utils.config_from_onnx_model(model, granularity='model')
>>> hls_model = hls4ml.converters.convert_from_onnx_model(model, hls_config=config)
Returns:
ModelGraph: hls4ml model.
"""

config = create_config(output_dir=output_dir, project_name=project_name, backend=backend, **kwargs)
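Likewise, the doctest example removed from the old ONNX docstring above remains a valid usage sketch; model is assumed to be a loaded ONNX model object.

>>> import hls4ml
>>> config = hls4ml.utils.config_from_onnx_model(model, granularity='model')
>>> hls_model = hls4ml.converters.convert_from_onnx_model(model, hls_config=config)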
30 changes: 11 additions & 19 deletions hls4ml/converters/onnx_to_hls.py
@@ -29,18 +29,12 @@ def __init__(self, model):
def get_weights_data(self, layer_name, var_name):
"""Extract weights data from ONNX model.
Parameters
----------
layer_name : string
layer's name in the ONNX model
var_name : string
variable to be extracted
Returns
-------
data : numpy array
extracted weights data
Args:
layer_name (str): Layer's name in the ONNX model.
var_name (str): Variable to be extracted.
Returns:
ndarray: Extracted weights data.
"""
# Get the node associated with the layer name
node = next(node for node in self.model.graph.node if node.name == layer_name)
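For readers unfamiliar with the ONNX API, a rough, self-contained sketch of the kind of lookup get_weights_data performs. It uses the public onnx.numpy_helper helpers and is an illustration, not the exact hls4ml implementation; the file path and tensor name are placeholders.

import onnx
from onnx import numpy_helper

def read_initializer(onnx_model, tensor_name):
    # Return the named initializer as a NumPy array, or None if it is absent.
    for init in onnx_model.graph.initializer:
        if init.name == tensor_name:
            return numpy_helper.to_array(init)
    return None

model = onnx.load('model.onnx')                  # placeholder path
weights = read_initializer(model, 'fc1.weight')  # placeholder tensor name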
@@ -218,17 +212,15 @@ def get_out_layer_name(graph):
def onnx_to_hls(config):
"""Convert onnx model to hls model from configuration.
Parameters
----------
config: dict
onnx configuration from yaml file or passed through API.
Args:
config (dict): ONNX configuration from yaml file or passed through API.
Returns
-------
ModelGraph : hls4ml model object
Raises:
Exception: Raised if an unsupported operation is found in the ONNX model.
Returns:
ModelGraph: hls4ml model object
"""

# This is a list of dictionaries to hold all the layer info we need to generate HLS
layer_list = []

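A hedged sketch of the configuration dict onnx_to_hls expects; the key names below are assumptions based on the usual hls4ml YAML layout, not taken from this commit, and may not match the actual schema exactly.

# Illustrative only: key names are assumed, not confirmed by this commit.
config = {
    'OnnxModel': 'model.onnx',   # path to the ONNX model (placeholder)
    'OutputDir': 'my-hls-test',
    'ProjectName': 'myproject',
    'Backend': 'Vivado',
    'HLSConfig': {'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1}},
}
hls_model = onnx_to_hls(config)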
103 changes: 39 additions & 64 deletions hls4ml/model/profiling.py
@@ -422,31 +422,22 @@ def activations_torch(model, X, fmt='longform', plot='boxplot'):


def numerical(model=None, hls_model=None, X=None, plot='boxplot'):
"""
Perform numerical profiling of a model
Parameters
----------
model : keras or pytorch model
The model to profile
hls_model : ModelGraph
The ModelGraph to profile
X : array-like, optional
Test data on which to evaluate the model to profile activations
Must be formatted suitably for the ``model.predict(X)`` method
plot : str, optional
The type of plot to produce.
Options are: 'boxplot' (default), 'violinplot', 'histogram',
'FacetGrid'
Returns
-------
tuple
The quadruple of produced figures. First weights and biases
for the pre- and post-optimization models respectively,
then activations for the pre- and post-optimization models
respectively. (Optimizations are applied to an ModelGraph by hls4ml,
a post-optimization ModelGraph is a final model)
"""Perform numerical profiling of a model.
Args:
model (optional): Keras or PyTorch model. Defaults to None.
hls_model (ModelGraph, optional): The ModelGraph to profile. Defaults to None.
X (ndarray, optional): Test data on which to evaluate the model to profile activations.
Must be formatted suitably for the ``model.predict(X)`` method. Defaults to None.
plot (str, optional): The type of plot to produce. Options are: 'boxplot' (default), 'violinplot', 'histogram',
'FacetGrid'. Defaults to 'boxplot'.
Returns:
tuple: The quadruple of produced figures. First weights and biases
for the pre- and post-optimization models respectively,
then activations for the pre- and post-optimization models
respectively. (Optimizations are applied to a ModelGraph by hls4ml;
a post-optimization ModelGraph is the final model).
"""
wp, wph, ap, aph = None, None, None, None
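An illustrative call matching the signature documented above; keras_model, hls_model, and X_test are assumed to exist, and each returned object is a matplotlib figure (or None when the corresponding input is omitted).

>>> from hls4ml.model.profiling import numerical
>>> wp, wph, ap, aph = numerical(model=keras_model, hls_model=hls_model, X=X_test)
>>> wp.savefig('weights_pre_optimization.png')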

@@ -554,21 +545,15 @@ def _get_output(layer, X, model_input):


def get_ymodel_keras(keras_model, X):
"""
Calculate each layer's ouput and put them into a dictionary
Parameters
----------
keras_model :
a keras model
X : array-like
Test data on which to evaluate the model to profile activations.
Must be formatted suitably for the ``model.predict(X)`` method.
Returns
-------
dictionary
A dictionary in the form {"layer_name": ouput array of layer}
"""Calculate each layer's ouput and put them into a dictionary.
Args:
keras_model (_type_): A keras Model
X (ndarray): Test data on which to evaluate the model to profile activations.
Must be formatted suitably for the ``model.predict(X)``.
Returns:
dict: A dictionary in the form {"layer_name": ouput array of layer}.
"""

ymodel = {}
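A short illustrative call; keras_model and X_test are assumed to exist, and 'dense_1' is a placeholder layer name.

>>> from hls4ml.model.profiling import get_ymodel_keras
>>> ymodel = get_ymodel_keras(keras_model, X_test)
>>> ymodel['dense_1'].shape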
@@ -668,30 +653,20 @@ def _dist_diff(ymodel, ysim):


def compare(keras_model, hls_model, X, plot_type="dist_diff"):
"""
Compare each layer's output in keras and hls model. Note that the hls_model should not be compiled before using this.
Parameters
----------
keras_model :
original keras model
hls_model :
converted ModelGraph, with "Trace:True" in the configuration file.
X : array-like
Input for the model.
plot_type : string
different methods to visualize the y_model and y_sim differences.
Possible options include:
- 'norm_diff' : square root of the sum of the squares of the differences
between each output vectors
- 'dist_diff' : The normalized distribution of the differences of the elements
between two output vectors
Returns
-------
matplotlib figure
plot object of the histogram depicting the difference in each layer's output
"""Compare each layer's output in keras and hls model. Note that the hls_model should not be compiled before using this.
Args:
keras_model: Original keras model.
hls_model (ModelGraph): Converted ModelGraph, with "Trace:True" in the configuration file.
X (ndarray): Input tensor for the model.
plot_type (str, optional): Different methods to visualize the y_model and y_sim differences.
Possible options include:
- 'norm_diff': square root of the sum of the squares of the differences between each pair of output vectors.
- 'dist_diff': The normalized distribution of the differences of the elements between two output vectors.
Defaults to "dist_diff".
Returns:
matplotlib figure: Plot object of the histogram depicting the difference in each layer's output.
"""

# Take in output from both models
Expand Down
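An illustrative call for compare; it assumes keras_model, hls_model (converted with 'Trace: True' in its configuration), and X_test already exist, and that the returned figure is a matplotlib object.

>>> from hls4ml.model.profiling import compare
>>> fig = compare(keras_model, hls_model, X_test, plot_type='dist_diff')
>>> fig.savefig('layer_output_differences.png')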