Neural network inference on FPGA accelerators offers a promising alternative to GPUs. Leveraging the reconfigurability of FPGAs allows implementations to be tailored to specific neural network architectures, improving performance while reducing energy consumption, which makes FPGAs an attractive option for high-performance computing.
FINN is an experimental, open-source framework that optimizes deep neural network inference on FPGA accelerators. Tailored to quantized neural networks, FINN generates customized dataflow-style architectures, resulting in highly efficient FPGA accelerators with high throughput and low latency for neural network inference.
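To give a flavor of the kind of tool flow covered in the hands-on part, the following is a minimal sketch (not part of the official workshop material) of how a quantized network can be defined with Brevitas, exported to ONNX, and handed to FINN's dataflow builder. The layer sizes, output directory, throughput target, and FPGA part string are illustrative assumptions, not workshop defaults.

```python
import torch
from brevitas.nn import QuantLinear, QuantReLU
from brevitas.export import export_qonnx

# Small 4-bit quantized MLP as an example input to the FINN tool flow
model = torch.nn.Sequential(
    QuantLinear(64, 64, bias=True, weight_bit_width=4),
    QuantReLU(bit_width=4),
    QuantLinear(64, 10, bias=True, weight_bit_width=4),
)
export_qonnx(model, torch.randn(1, 64), "quant_mlp.onnx")

from finn.builder.build_dataflow import build_dataflow_cfg
from finn.builder.build_dataflow_config import (
    DataflowBuildConfig,
    DataflowOutputType,
)

# FINN's builder turns the exported model into a dataflow-style accelerator
cfg = DataflowBuildConfig(
    output_dir="finn_build",             # illustrative output path
    target_fps=100000,                   # desired inference throughput
    synth_clk_period_ns=10.0,            # 100 MHz target clock
    fpga_part="xcu280-fsvh2892-2L-e",    # example Alveo U280 part (assumption)
    generate_outputs=[DataflowOutputType.STITCHED_IP],
)
build_dataflow_cfg("quant_mlp.onnx", cfg)
```

The workshop's hands-on sessions walk through this kind of flow step by step on the FPGAs of Noctua 2.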
This workshop focuses on two main objectives: understanding the concepts of FINN and using FINN in practical hands-on sessions on the FPGAs of Noctua 2. The workshop is divided into three parts:
1) Understanding the fundamentals of neural networks (about 30 min)
2) Understanding the basics of FPGAs (about 30 min)
3) Understanding FINN and its tool flow, with hands-on sessions (about 3 hours)
The workshop is a hybrid event, combining in-person and virtual participation.
This course is free of charge for members of German universities and publicly funded research institutions in Germany.
Zoom Link: https://uni-paderborn-de.zoom-x.de/j/63587869344?pwd=OHpWUmlFR0xya0JBT0MzcHFwc0xaQT09
Note: the number of on-site seats is limited, and seats are allocated by the organizers.