Hardware-efficient multiplier design for a deep learning processing unit
Abstract
The increasing computational requirements of deep learning models have driven demand for specialized hardware architectures that deliver high performance at low energy cost. General-purpose processors frequently fall short because of their high power consumption, low throughput, and inability to meet real-time processing demands. To overcome these obstacles, this work introduces a hardware-efficient multiplier design for a deep learning processing unit (DPU). The proposed architecture combines low-power arithmetic circuits, parallel processing units, and optimized dataflow mechanisms to improve performance and energy efficiency. Core neural network operations, such as matrix computations and activation functions, are performed by dedicated hardware blocks, while an efficient on-chip memory hierarchy minimizes data movement to lower latency and power consumption. Simulation results obtained with industry-standard very large-scale integration (VLSI) tools show a 25% decrease in latency, a 40% increase in computational throughput, and a 30% reduction in power consumption compared to traditional processors. The architecture's scalability and modularity ensure compatibility with a variety of deep learning applications, including edge computing, autonomous systems, and internet of things (IoT) devices.
Keywords
Booth multiplier; Deep learning processing unit; Field programmable gate array; Pipeline; Po2 multiplier
DOI: http://doi.org/10.11591/ijece.v15i6.pp5205-5214
Copyright (c) 2025 Jean Shilpa V., Anitha R., Anusooya S., Jawahar P. K., Nithesh E., Sairamsiva S., Syed Rahaman K.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
International Journal of Electrical and Computer Engineering (IJECE)
p-ISSN 2088-8708, e-ISSN 2722-2578
This journal is published by the Institute of Advanced Engineering and Science (IAES).