
Releases: fastmachinelearning/hls4ml

aster

30 Oct 16:49
521deb1

What's new:

  • Support for GarNet layer (see paper)
  • Input layer precision added to config generator utility
  • New 'SkipOptimizers' config option: all optimizers run by default (as in v0.3.0), except any listed in 'SkipOptimizers', e.g. hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']
  • Print the latency report from cosimulation
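The 'SkipOptimizers' option can be sketched as follows. An hls4ml configuration is a plain Python dict, so opting out of passes is just adding a key; the 'Model' settings shown are illustrative values, while 'fuse_consecutive_batch_normalization' is the optimizer name from the notes above:

```python
# Minimal sketch of the 'SkipOptimizers' option (illustrative config values).
# All default optimizer passes run, minus any listed under 'SkipOptimizers'.
hls_config = {'Model': {'Precision': 'ap_fixed<16,6>', 'ReuseFactor': 1}}
hls_config['SkipOptimizers'] = ['fuse_consecutive_batch_normalization']

print(hls_config['SkipOptimizers'])
```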

Bugfixes:

  • Fixes related to TensorFlow 2.3: new Functional API, changes to handling of the Input layer
  • Fix error in the config generator utility for activation layers with granularity='name'
  • Fix issue with reloading of the emulation library after a configuration change
  • Fix handling of layers with use_bias=False when merging Dense and BatchNormalization layers

v0.3.0

31 Jul 09:06
d098c54

What's new:

  • API expansion:
    • Create configuration dictionary from model object
    • Run 'C Simulation' from Python with hls_model.predict(X)
    • Trace model layer output with hls_model.trace(X)
    • Write HLS project, run synthesis flow from Python
  • QKeras support: convert models trained using layers and quantizers from QKeras
  • Example models moved to separate repo, added as a submodule with an API to retrieve them
  • New Softmax implementations
  • Minor fixes: weights exported at higher precision, concatenate layer shape corrected
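The API expansion above can be sketched end-to-end. The entry points (config_from_keras_model, convert_from_keras_model, compile, predict, trace, build) are the real hls4ml function names; the project/output names are placeholders, and actually running this requires hls4ml plus a Vivado HLS installation, so it is shown for illustration only:

```python
# Illustrative sketch of the expanded Python API (v0.3.0 era).
# Entry-point names are hls4ml's; 'my-hls-project' is a placeholder.
def run_hls4ml_flow(keras_model, X):
    import hls4ml  # requires hls4ml and a Vivado HLS installation

    # Create a configuration dictionary from the model object
    config = hls4ml.utils.config_from_keras_model(keras_model, granularity='name')

    # Write the HLS project and compile the C simulation library
    hls_model = hls4ml.converters.convert_from_keras_model(
        keras_model, hls_config=config, output_dir='my-hls-project')
    hls_model.compile()

    # Run 'C Simulation' from Python, and trace per-layer outputs
    y_hls = hls_model.predict(X)
    y_trace = hls_model.trace(X)

    # Run the synthesis flow from Python
    hls_model.build(csim=False)
    return y_hls
```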

v0.2.0

31 Mar 10:48
d6b529e

What's new:

  • tf_to_hls: convert TensorFlow protobuf (.pb) models to HLS projects
  • Support for Keras model .h5 files (extending existing support for .json architecture + .h5 weights format)
  • Support for larger Conv1D/Conv2D layers
  • Support for binary and ternary layers from QKeras
  • API enhancements for adding custom layers and new backends
  • Keras and HLS model profiling tool
  • hls4ml report command to gather HLS build reports
  • hls4ml build -l command to run logic synthesis
  • Fused Batch Normalization and Dense layer optimization pass

v0.1.6

10 Feb 17:05
  • Support for larger Dense layers (enabled with Strategy: Resource in the configuration file)
  • Binary/Ternary NN refinements
  • Built-in optimization framework
  • Optional C/RTL validation
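The Strategy: Resource setting mentioned above lives in the YAML configuration file used in this release's flow. A hedged sketch of such a file, with placeholder paths, project name, and FPGA part (only the HLSConfig keys are from hls4ml's documented layout):

```yaml
# Illustrative configuration file; paths and names are placeholders.
KerasJson: my_model.json
KerasH5: my_model_weights.h5
OutputDir: my-hls-test
ProjectName: myproject
XilinxPart: xcku115-flvb2104-2-i
ClockPeriod: 5

HLSConfig:
  Model:
    Precision: ap_fixed<16,6>
    ReuseFactor: 4
    Strategy: Resource   # enables the larger-Dense-layer implementation
```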

v0.1.5

02 Aug 20:38
2f96e5a
Merge pull request #141 from vloncar/precision

Per-layer precision and reuse factor

v0.1.2

20 Mar 16:55
ab9073e

Update license

v0.1.1

16 Mar 18:07
93451a9

Second beta version: fixed README.