This experiment evaluates the inference performance of large TensorFlow and Caffe models on FPGAs, using the Xilinx ml-suite stack on AWS.
Prerequisites
Installed libraries: Xilinx ml-suite, TensorFlow, Caffe, Jupyter Notebook
Open a connection to the AWS ml-suite AMI and run the following commands in the terminal:

    sudo su
    source ~centos/.bashrc
    source activate ml-suite
    source ml-suite/overlaybins/setup.sh aws
You can now execute the Jupyter notebooks.
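If the notebook server is not already running, one common way to reach it from your workstation is to start Jupyter headless on the instance and tunnel the port over SSH. The port, key path, and instance address below are placeholders; adjust them to your setup:

    # on the instance
    jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
    # on your local machine
    ssh -i <key.pem> -L 8888:localhost:8888 centos@<instance-address>

Then open http://localhost:8888 in your local browser.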
The inference_results folder contains the experiment results for the Inception v1 and v3 models with 8-bit and 16-bit quantization.
inceptionv1-tensorflow-final-inference notebook - runs the 8-bit or 16-bit quantized Inception v1 model. It can also be adapted to run any supported TensorFlow model on the Xilinx stack (see the graph-loading sketch after this list).
inceptionv3-caffe-final-inference notebook - runs the 8-bit or 16-bit quantized Inception v3 model. It can also be adapted to run any supported Caffe model on the Xilinx stack.
tensorflow-inference-multinet.py - runs up to 4 Inception v1 models in parallel. It can also run 4 different models in parallel, e.g. v1 and v3 together (see the multiprocessing sketch after this list).
fpga_utils.py - contains helper wrapper functions around the Xilinx API for communicating with the FPGA.
Limitations_xilinx.txt - Limitations of the Xilinx stack.
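For orientation, loading a frozen TensorFlow graph and running it on the CPU is the starting point that the TensorFlow notebook builds on; the quantization and FPGA-partitioning steps are specific to ml-suite and are not shown here. This is a minimal sketch only, and the model path and tensor names are placeholders, not values from this repository:

    import numpy as np
    import tensorflow as tf  # ml-suite targets the TensorFlow 1.x API

    # Load a frozen inference graph (placeholder path).
    with tf.gfile.GFile("inception_v1_frozen.pb", "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    with tf.Graph().as_default() as graph:
        tf.import_graph_def(graph_def, name="")

    # Tensor names are model-specific placeholders.
    inp = graph.get_tensor_by_name("input:0")
    out = graph.get_tensor_by_name("InceptionV1/Logits/Predictions/Softmax:0")

    with tf.Session(graph=graph) as sess:
        scores = sess.run(out, feed_dict={inp: np.zeros((1, 224, 224, 3), np.float32)})
        print(scores.shape)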
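The parallel-model pattern in tensorflow-inference-multinet.py can be pictured as one worker process per network. The sketch below is illustrative only: run_model is a stand-in for the real per-model FPGA dispatch (which in the script goes through the fpga_utils wrappers), not part of any Xilinx API:

    import multiprocessing as mp
    import numpy as np

    def run_model(model_name, batch):
        # Stand-in for loading a quantized graph and dispatching it to the
        # FPGA via the fpga_utils wrappers; here it just fabricates scores.
        return model_name, np.random.rand(len(batch), 1000)

    if __name__ == "__main__":
        batch = np.zeros((8, 224, 224, 3), dtype=np.float32)  # dummy inputs
        models = ["inception_v1_a", "inception_v1_b",
                  "inception_v1_c", "inception_v3"]           # up to 4 nets
        with mp.Pool(processes=4) as pool:                    # one worker per model
            outputs = pool.starmap(run_model, [(m, batch) for m in models])
        for name, scores in outputs:
            print(name, scores.shape)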