REQ-YOLO: A resource-aware, efficient quantization framework for object detection on FPGAs

Caiwen Ding, Shuo Wang, Ning Liu, Kaidi Xu, Yanzhi Wang, Yun Liang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

10 Scopus citations

Abstract

Deep neural networks (DNNs), as the basis of object detection, will play a key role in the development of future autonomous systems with full autonomy. Such systems impose special requirements: real-time, energy-efficient DNN implementations on power-constrained platforms. Two research thrusts are dedicated to improving the performance and energy efficiency of the DNN inference phase: model compression techniques and efficient hardware implementation. Recent work on extremely-low-bit CNNs, such as the binary neural network (BNN) and XNOR-Net, replaces traditional floating-point operations with binary bit operations, significantly reducing memory bandwidth and storage requirements. However, these approaches suffer from non-negligible accuracy loss and underutilize the digital signal processing (DSP) blocks of FPGAs. To overcome these limitations, this paper proposes REQ-YOLO, a resource-aware, systematic weight quantization framework for object detection that considers both the algorithm and hardware resource aspects. We adopt the block-circulant matrix method and propose a heterogeneous weight quantization using the Alternating Direction Method of Multipliers (ADMM), an effective optimization technique for general, non-convex optimization problems. To achieve real-time, highly efficient implementations on FPGA, we present a detailed hardware implementation of block-circulant matrices on CONV layers and develop an efficient processing element (PE) structure supporting the heterogeneous weight quantization, CONV dataflow and pipelining techniques, design optimization, and a template-based automatic synthesis framework to optimally exploit hardware resources. Experimental results show that our proposed REQ-YOLO framework can significantly compress the YOLO model while introducing only small accuracy degradation. The related code is available at: https://github.com/Anonymous788/heterogeneous_ADMM_YOLO.
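The two core ideas in the abstract can be illustrated in a minimal sketch: a block-circulant weight block is defined by a single vector, so its matrix-vector product reduces to an FFT-based circular convolution, and the per-layer ADMM quantization step involves a Euclidean projection of weights onto a set of allowed quantization levels. This is a toy NumPy illustration, not the paper's implementation; the power-of-two level set and both function names are assumptions made for the example.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by x.

    Uses the circular-convolution theorem: C @ x = IFFT(FFT(c) * FFT(x)),
    reducing an O(k^2) mat-vec to O(k log k). This is the storage and
    compute saving that block-circulant compression exploits.
    """
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def project_to_levels(w, levels):
    """Euclidean projection of each weight onto the nearest allowed level.

    In ADMM-based quantization, a step of each iteration projects the
    auxiliary variable onto the (non-convex) set of quantized values;
    per-layer level sets give the heterogeneous quantization.
    The level set here is illustrative, not the paper's exact scheme.
    """
    levels = np.asarray(levels)
    idx = np.argmin(np.abs(w[..., None] - levels), axis=-1)
    return levels[idx]

# Example: a 4x4 circulant block stored as one 4-vector.
c = np.array([1.0, 2.0, 3.0, 0.0])
x = np.array([4.0, 5.0, 6.0, 7.0])
y = circulant_matvec(c, x)

# Example: project weights onto hypothetical power-of-two levels.
w_q = project_to_levels(np.array([0.3, -0.6, 0.05]),
                        [-1.0, -0.5, -0.25, 0.0, 0.25, 0.5, 1.0])
```

In an actual ADMM flow, this projection alternates with a regularized retraining step on the loss until the weights converge to the quantized set.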

Original language: English (US)
Title of host publication: FPGA 2019 - Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays
Publisher: Association for Computing Machinery, Inc
Pages: 33-42
Number of pages: 10
ISBN (Electronic): 9781450361378
DOI: 10.1145/3289602.3293904
State: Published - Feb 20 2019
Externally published: Yes
Event: 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2019 - Seaside, United States
Duration: Feb 24 2019 - Feb 26 2019

Publication series

Name: FPGA 2019 - Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays

Conference

Conference: 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, FPGA 2019
Country: United States
City: Seaside
Period: 2/24/19 - 2/26/19

Keywords

  • ADMM
  • Compression
  • FPGA
  • Object detection
  • YOLO

ASJC Scopus subject areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering


Cite this

Ding, C., Wang, S., Liu, N., Xu, K., Wang, Y., & Liang, Y. (2019). REQ-YOLO: A resource-aware, efficient quantization framework for object detection on FPGAs. In FPGA 2019 - Proceedings of the 2019 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays (pp. 33-42). Association for Computing Machinery, Inc. https://doi.org/10.1145/3289602.3293904