Hi David,
I am trying to compress your model by removing some of the connections between the neurons so it would fit on my Zynq XC7Z020 FPGA.
I synthesized your model in Vivado HLS and chose my FPGA when selecting the part/board. The screenshot below is a summary of the utilization estimates.
![resource utilization](https://user-images.githubusercontent.com/33066019/40879855-87352548-666c-11e8-88df-8ea21aaab2db.JPG)
I would like to get your advice on this: do you think compressing the model would take care of it? I'm afraid the accuracy would suffer greatly if I reduce the size to that extent.
Any thoughts?
ZynqNet is already quite optimized, and the hardware and the network have been co-adapted. Therefore you can't just shrink the hardware or the net independently. Your utilization is above 300%, so the FPGA logic is more than 3x too large. I don't think that you can cut the CNN that much by removing connections and still achieve reasonable accuracy.
Also, processing is very regular, so any "irregular" pruning algorithms won't help.
You could probably benefit a lot from fixed-point arithmetic (e.g. int8), which would shrink not only the operands but also the computation logic.
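For illustration, a minimal sketch of what that change could look like in Vivado HLS, using the `ap_fixed` types from `ap_fixed.h`. The bit widths here (8-bit operands, 32-bit accumulator) are my own assumptions, not values taken from ZynqNet:

```cpp
#include "ap_fixed.h"

// 8-bit fixed-point operand: 2 integer bits, 6 fractional bits (assumed widths).
typedef ap_fixed<8, 2>   data_t;
// Wider accumulator so repeated additions don't overflow.
typedef ap_fixed<32, 16> acc_t;

// One 3x3 multiply-accumulate as it might look with fixed-point types:
// narrow multipliers and adders instead of floating-point cores.
acc_t mac9(const data_t window[9], const data_t kernel[9]) {
    acc_t sum = 0;
    for (int i = 0; i < 9; i++) {
#pragma HLS UNROLL
        sum += window[i] * kernel[i];
    }
    return sum;
}
```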
You could also reconfigure the hardware to use fewer processing engines (reduce the loop unrolling in HLS) and shrink the Weights Cache to save BRAM. This will probably result in some layers being too big (too many channels) to be processed at once, but then you can split them into "parallel branches" like I had to do with conv10, thereby manually channel-tiling the network...
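To make those two knobs concrete, here is a rough sketch in the same HLS style; `N_PE`, `CACHE_DEPTH`, and `conv_channels` are hypothetical names for illustration, not identifiers from the ZynqNet sources:

```cpp
#include "ap_fixed.h"

typedef ap_fixed<8, 2>   data_t;
typedef ap_fixed<32, 16> acc_t;

#define N_PE        4     // number of parallel MAC engines (hypothetical; reduce to shrink logic)
#define CACHE_DEPTH 1024  // weights-cache size in elements (hypothetical; reduce to save BRAM)

// Accumulate CACHE_DEPTH products using only N_PE parallel engines.
void conv_channels(const data_t in[CACHE_DEPTH],
                   const data_t weights[CACHE_DEPTH],
                   data_t out[N_PE]) {
#pragma HLS ARRAY_PARTITION variable=weights cyclic factor=4  // keep in sync with N_PE
    PE_LOOP: for (int pe = 0; pe < N_PE; pe++) {
#pragma HLS UNROLL  // instantiates exactly N_PE copies of the MAC below
        acc_t acc = 0;
        ACC_LOOP: for (int i = pe; i < CACHE_DEPTH; i += N_PE) {
#pragma HLS PIPELINE II=1
            acc += in[i] * weights[i];
        }
        out[pe] = acc;  // truncated back to 8 bits on write
    }
}
```

Lowering `N_PE` trades throughput for area: fewer engines are instantiated, at the cost of more iterations per layer.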
But: Do you really need to solve the entire ImageNet classification challenge for your application? Maybe you could restrict the problem to some subset, some less complex task? ImageNet is a real beast for your FPGA... ;-)