
RESEARCH PRODUCT

Deep Learning for Resource-Limited Devices

Sebti Tamraoui, Mohammed H. Kechout, Ahmed Mostefaoui, Mohammed Amine Merzoug

subject

business.industry, Computer science, Distributed computing, Deep learning, Intelligent decision support system, Redundancy (engineering), Initialization, Artificial intelligence, Pruning (decision trees), business, Adaptation (computer science), Quantization (image processing), Convolutional neural network

description

In recent years, deep neural networks have revolutionized the development of intelligent systems and applications in many areas. Despite their numerous advantages and potential, these models still suffer from several issues; in particular, they have become very complex, with millions of parameters, which makes them demanding in resources and time and unsuitable for small, resource-restricted devices. To contribute to this direction, this paper presents (1) state-of-the-art lightweight architectures that were specifically designed for small-sized devices, and (2) recent solutions proposed to optimize/compress classical deep neural networks and enable their deployment on embedded systems. We also present our four-stage approach, which aims to further enhance the performance of lightweight object-detection models. The conducted performance evaluations demonstrate that the proposed four stages (basic model initialization, pruning, quantization, and adaptation) considerably reduce filter redundancy and model size with negligible performance degradation.
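To make the pruning and quantization stages mentioned above more concrete, the following is a minimal sketch assuming the PyTorch API (it is not code from the paper): it applies L1-norm structured pruning to the convolutional filters of a toy backbone and then post-training dynamic int8 quantization to its linear head. The architecture, layer sizes, and the 30% pruning ratio are illustrative assumptions.

```python
# Minimal sketch (assumed PyTorch API, not code from the paper): structured
# filter pruning followed by post-training dynamic quantization on a toy
# backbone. The architecture and the 30% pruning ratio are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class TinyBackbone(nn.Module):
    """Toy convolutional backbone with a small classification head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TinyBackbone()

# Pruning stage: zero out the 30% of filters with the smallest L1 norm in
# every convolutional layer (this masks weights; physically removing the
# channels would require rebuilding the layers).
for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.ln_structured(module, name="weight", amount=0.3, n=1, dim=0)
        prune.remove(module, "weight")  # make the pruning permanent

# Quantization stage: dynamic int8 quantization of the linear head.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Sanity check: the pruned and quantized model still runs end to end.
with torch.no_grad():
    out = quantized(torch.randn(1, 3, 64, 64))
print(out.shape)  # torch.Size([1, 10])
```

In a full pipeline along the lines the abstract describes, pruning would typically be followed by fine-tuning (the adaptation stage), and convolutional layers would usually be quantized statically with calibration data rather than dynamically.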

https://doi.org/10.1145/3416013.3426445