
Publication

XbNN: Enabling CNNs on Edge Devices by Approximate On-Chip Dot Product Encoding

Lucas Klemmer; Saman Fröhlich; Rolf Drechsler; Daniel Große
In: IEEE International Symposium on Circuits and Systems (ISCAS 2021), May 22-28, 2021, Daegu, Republic of Korea. IEEE, 2021.

Abstract

Few trends have gained as much traction as Edge Computing and Neural Networks (NNs). Both have the potential to radically change how technology influences us. However, since edge devices feature only very limited resources, the sheer computational demands of modern NNs limit their use on the edge. In particular, converting Convolutional Neural Networks (CNNs) into feasible on-chip designs remains a hard task. Currently, hand-crafted and most often heavyweight architectures have to be used, as existing High-Level Synthesis (HLS) frameworks provide only inefficient solutions. In this paper, we introduce the Crossbar Neural Network (XbNN) architecture. Our architecture employs a novel approximate on-chip dot product encoding for the efficient synthesis of CNNs in hardware. This encoding embeds the weights used in CNNs into the hardware design itself, significantly reducing the required memory and computation time. In addition, we present a methodology for the automated conversion of traditional CNNs given in TensorFlow into accelerators on top of the XbNN architecture. To demonstrate the effectiveness of XbNN, we conduct experiments on a common CNN test dataset and analyze the accuracy and performance of the resulting XbNN accelerators. We show that XbNN (a) achieves accuracies comparable to those of TensorFlow CNNs and (b) yields much better area and performance results than a state-of-the-art HLS flow.
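To make the weight-embedding idea concrete, the following is a minimal Python sketch of what it means to fold fixed weights into a dot product so that no weight memory is needed. It assumes weights quantized to signed powers of two; this is a hypothetical simplification for illustration only, not the paper's actual approximate XbNN encoding, and both function names are placeholders.

    # Illustrative sketch only: hard-coding fixed, quantized weights into the
    # computation itself, so the dot product needs no weight storage. The real
    # XbNN encoding is an approximate hardware-level scheme not shown here.

    def dot_with_loaded_weights(inputs, weights):
        """Conventional dot product: weights are fetched from memory."""
        return sum(x * w for x, w in zip(inputs, weights))

    # Hypothetical trained weights quantized to powers of two: [4, -2, 1].
    # Each multiplication is then "baked in" as a constant shift, which in
    # hardware becomes fixed wiring and adders instead of multipliers plus
    # weight memory.
    def dot_with_embedded_weights(inputs):
        """Weight-specialized dot product: the constants 4, -2, 1 are part
        of the function (the 'design') itself; no weight lookup occurs."""
        x0, x1, x2 = inputs
        return (x0 << 2) - (x1 << 1) + x2

    if __name__ == "__main__":
        xs = [3, 5, 7]
        assert dot_with_loaded_weights(xs, [4, -2, 1]) == dot_with_embedded_weights(xs)
        print(dot_with_embedded_weights(xs))  # 9

Once the weights are constants of the design, the hardware equivalent of each multiplication collapses to fixed shifts and additions, which is the general motivation behind embedding weights into the circuit rather than storing them.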