Releases · fastmachinelearning/hls4ml
v0.4.0 aster
What's new:
- Support for GarNet layer (see paper)
- Input layer precision added to config generator utility
- New 'SkipOptimizers' config option. All optimizers now run by default (as in v0.3.0), minus any listed in 'SkipOptimizers' (see the sketch after this list), e.g.
hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']
- Print out the latency report from Cosimulation
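A minimal sketch of the 'SkipOptimizers' option in context, assuming a Keras model object `model` is already defined; the conversion calls shown are the standard hls4ml Python API:

```python
import hls4ml

# Generate a baseline configuration from an existing Keras model
# (`model` is assumed to be defined elsewhere)
hls_config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# All optimizers run by default (as in v0.3.0); any named here are skipped
hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']

hls_model = hls4ml.converters.convert_from_keras_model(model, hls_config=hls_config)
```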
Bugfixes:
- Fixes related to TensorFlow 2.3: new Functional API, changes to handling of the Input layer
- Fix error with config generator utility and activation layers for granularity='name'
- Fix issue with reloading of emulation library after configuration change
- Fix to handling of layers with use_bias=False and merged Dense and BatchNormalization layers
v0.3.0
What's new:
- API expansion (see the usage sketch after this list):
  - Create a configuration dictionary from the model object
  - Run 'C Simulation' from Python with hls_model.predict(X)
  - Trace model layer outputs with hls_model.trace(X)
  - Write the HLS project and run the synthesis flow from Python
- QKeras support: convert models trained using layers and quantizers from QKeras
- Example models moved to separate repo, added as a submodule with an API to retrieve them
- New Softmax implementations
- Minor fixes: weights exported at higher precision, concatenate layer shape corrected
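A combined usage sketch of the expanded API, assuming a Keras model `model` and sample input data `X` already exist; enabling tracing per layer in the configuration dictionary is an assumption about how trace collection is switched on:

```python
import hls4ml

# Configuration dictionary created directly from the model object
config = hls4ml.utils.config_from_keras_model(model, granularity='name')

# Assumption: per-layer tracing is enabled through the configuration
for layer in config['LayerName']:
    config['LayerName'][layer]['Trace'] = True

hls_model = hls4ml.converters.convert_from_keras_model(
    model, hls_config=config, output_dir='my-hls-test'
)

# 'C Simulation' from Python: compile the emulation library and run it
hls_model.compile()
y_hls = hls_model.predict(X)

# Trace per-layer outputs, e.g. to compare against the Keras model
y_hls, hls_trace = hls_model.trace(X)

# Write the HLS project and run the synthesis flow from Python
hls_model.build(csim=False, synth=True)
```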
v0.2.0
What's new:
- tf_to_hls: convert TensorFlow protobuf (.pb) models to HLS projects
- Support for Keras model .h5 files (extending existing support for the .json architecture + .h5 weights format)
- Support for larger Conv1D/2D layers
- Support for binary and ternary layers from QKeras
- API enhancements for addition of custom layers and new backends
- Keras and HLS model profiling tool (see the sketch after this list)
- hls4ml report command to gather HLS build reports
- hls4ml build -l command to run logic synthesis
- Fused Batch Normalization and Dense layer optimization pass
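A hedged sketch of the profiling tool; it assumes the Python entry point is hls4ml.model.profiling.numerical and that `model` (Keras), `hls_model`, and sample data `X` are already defined:

```python
import matplotlib.pyplot as plt
from hls4ml.model.profiling import numerical

# Profile the numerical ranges of weights (and of activations when X is
# supplied) against the fixed-point types chosen in the HLS configuration;
# `model`, `hls_model`, and `X` are assumed to exist
figures = numerical(model=model, hls_model=hls_model, X=X)
plt.show()
```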
v0.1.6
v0.1.5
v0.1.2
Update license