22 February 2019 15:02:09

 
Sensors, Vol. 19, Pages 924: FPGA-Based Hybrid-Type Implementation of Quantized Neural Networks for Remote Sensing Applications (Sensors)
 


Recently, convolutional neural network (CNN)-based methods have been used extensively in remote sensing applications, such as object detection and classification, and have achieved significant improvements in performance. Furthermore, there is strong demand for hardware implementations of real-time remote sensing processing. However, the arithmetic and storage costs of floating-point models hinder the deployment of networks on hardware platforms with limited resource and power budgets, such as field-programmable gate arrays (FPGAs) and application-specific integrated circuits (ASICs). To address this problem, this paper focuses on optimizing the hardware design of CNNs with low bit-width integers through quantization. First, a hybrid-type inference method based on a symmetric quantization scheme is proposed, which replaces floating-point precision with low bit-width integers. Then, a training approach for the quantized network is introduced to reduce accuracy degradation. Finally, a low bit-width processing engine (PE) is proposed to optimize the FPGA hardware design for remote sensing image classification. In addition, a fused-layer PE is presented for state-of-the-art CNNs equipped with Batch Normalization and Leaky ReLU. Experiments performed on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset using a graphics processing unit (GPU) demonstrate that the accuracy of the 8-bit quantized model drops by about 1%, an acceptable accuracy loss. The accuracy measured on the FPGA is consistent with that of the GPU. In terms of FPGA resource consumption, the Look-Up Table (LUT), Flip-Flop (FF), Digital Signal Processor (DSP), and Block Random Access Memory (BRAM) usage are reduced by 46.21%, 43.84%, 45%, and 51%, respectively, compared with the floating-point implementation.
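As a rough illustration of the symmetric quantization idea described in the abstract (not the authors' exact scheme), the sketch below maps a floating-point weight tensor to signed 8-bit integers using a single per-tensor scale and then dequantizes it to check the reconstruction error. The bit width, per-tensor scaling, and rounding choice are assumptions made for illustration only.

```python
import numpy as np

def symmetric_quantize(x, num_bits=8):
    """Map a float tensor to signed integers with one symmetric per-tensor scale.

    Illustrative sketch only; the bit width, per-tensor scaling, and rounding
    mode are assumptions, not the paper's exact quantization scheme.
    """
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8-bit
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / qmax if max_abs > 0 else 1.0  # avoid division by zero
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover an approximate float tensor from the integers and the scale."""
    return q.astype(np.float32) * scale

# Example: quantize random weights and measure the worst-case rounding error.
w = np.random.randn(3, 3, 16, 32).astype(np.float32)
q, s = symmetric_quantize(w, num_bits=8)
err = np.abs(w - dequantize(q, s)).max()
print(f"scale={s:.6f}, max abs error={err:.6f}")
```

Because the scale is symmetric around zero, integer multiply-accumulate results can be rescaled with a single multiplier per tensor, which is what makes this style of quantization attractive for fixed-point PEs on FPGAs.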


 
Category: Chemistry, Physics
 

