cfaed Publications

Improving approximate neural networks for perception tasks through specialized optimization

Reference

Cecilia De la Parra, Andre Guntoro, Akash Kumar, "Improving approximate neural networks for perception tasks through specialized optimization", In Future Generation Computer Systems, vol. 113, pp. 597-606, July 2020. doi:10.1016/j.future.2020.07.031

Abstract

Approximate Computing has proven successful in reducing the energy consumption of Deep Neural Networks (DNNs) implemented in embedded systems. For efficient DNN approximation at the software and hardware levels, a specialized simulation environment and optimization methodology are required to reduce execution and optimization times and to maximize energy savings. Traditional frameworks for cross-layer approximate computation of DNNs generally support only the simulation of convolutional and fully-connected layers, limiting the types of DNN that can be optimized through approximation. In this work, we present a specialized simulation environment for approximate DNNs that enables the optimization of DNN architectures built with more complex layers, such as depthwise convolutions and Recurrent Neural Units (RNNs) for time series processing. Low execution-time overhead is achieved through efficient GPU acceleration. Additionally, we provide an analysis of the robustness of approximate DNNs and RNNs against quantization noise and different approximation levels. Finally, through specialized approximate retraining, we achieve promising energy savings with negligible accuracy loss on highly complex DNNs for image classification on ImageNet, such as MobileNet, and on RNNs for keyword spotting with the Speech Commands Dataset.
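
To make the simulation idea concrete, below is a minimal Python/NumPy sketch (not the authors' framework) of the lookup-table technique commonly used to simulate approximate multipliers in quantized DNN layers: every product of two signed 8-bit operands is precomputed once, so a layer's multiplications become table gathers that vectorize well on GPUs. The approximate multiplier approx_mul8 here, which truncates the low bits of the exact product, is a hypothetical stand-in for a real approximate circuit.

import numpy as np

def approx_mul8(a, b):
    # Hypothetical approximate 8-bit multiplier: truncates the 4 least
    # significant bits of the exact product (stand-in for a real circuit).
    return (a * b) & ~0xF

# Precompute all 256 x 256 products of signed 8-bit operands once.
ops = np.arange(-128, 128, dtype=np.int32)
LUT = np.array([[approx_mul8(a, b) for b in ops] for a in ops], dtype=np.int32)

def approx_matmul(x, w):
    # int8 matrix product where each scalar multiply is a LUT gather;
    # accumulation stays exact in int32, as in a typical MAC unit.
    xi = x.astype(np.int32) + 128  # shift operands into LUT index range [0, 255]
    wi = w.astype(np.int32) + 128
    products = LUT[xi[:, :, None], wi[None, :, :]]  # products[i, k, j]
    return products.sum(axis=1)

# Usage: a small fully-connected layer with random int8 weights and activations.
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=(2, 16), dtype=np.int8)
w = rng.integers(-128, 128, size=(16, 8), dtype=np.int8)
y = approx_matmul(x, w)  # shape (2, 8), int32 accumulators

A common way to realize the approximate retraining mentioned above is to fine-tune the network with such an approximate forward pass while backpropagating through the exact product (a straight-through-style gradient).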

Bibtex

@article{DELAPARRA2020597,
title = "Improving approximate neural networks for perception tasks through specialized optimization",
journal = "Future Generation Computer Systems",
volume = "113",
pages = "597--606",
year = "2020",
month = "July",
issn = "0167-739X",
doi = "10.1016/j.future.2020.07.031",
url = "http://www.sciencedirect.com/science/article/pii/S0167739X20301576",
author = "Cecilia {De la Parra} and Andre Guntoro and Akash Kumar",
keywords = "Approximate neural networks, Approximate computing, Approximate multipliers, Neural network optimization",
}

Downloads

Elsevier_Approx_DNN [PDF]

Permalink

https://cfaed.tu-dresden.de/publications?pubId=2854

