Edge Computing using FPGA with the Deployment of Neural Networks for General Purpose Application
dc.contributor.author | Perera, Kevini | |
dc.contributor.author | Hettihewa, Chamod | |
dc.contributor.author | Wickramasinghe, Manupa | |
dc.contributor.author | Sandanayake, Ashan | |
dc.contributor.author | Rajapaksha, Chamali | |
dc.contributor.author | Pathirana, Pubudu | |
dc.date.accessioned | 2025-01-15T08:05:44Z | |
dc.date.available | 2025-01-15T08:05:44Z | |
dc.date.issued | 2024-09 | |
dc.identifier.uri | http://ir.kdu.ac.lk/handle/345/7967 | |
dc.description.abstract | Artificial intelligence and deep learning are gaining traction in edge computing to extract insights from Internet of Things (IoT) devices. Hardware accelerators such as Field Programmable Gate Arrays (FPGAs) accelerate deep learning efficiently due to their energy efficiency, parallelism, flexibility, and reconfigurability. However, the resource constraints of FPGAs pose deployment challenges. This research explores the dynamic deployment of hardware-accelerated applications on the Kria KV260 platform with a Xilinx Kria K26 system-on-module, equipped with a Zynq multiprocessor system-on-chip. It presents an innovative solution to dynamically reconfigure deep neural networks by running multiple neural networks and Deep Processing Units (DPUs) concurrently. This research advances Edge Computing using FPGAs to facilitate efficient deployment of Neural Networks in resource-constrained edge environments. | en_US |
dc.language.iso | en | en_US |
dc.subject | FPGA | en_US |
dc.subject | neural networks | en_US |
dc.subject | DPU | en_US |
dc.subject | hardware accelerator | en_US |
dc.title | Edge Computing using FPGA with the Deployment of Neural Networks for General Purpose Application | en_US |
dc.type | Article Abstract | en_US |
dc.identifier.faculty | FOE | en_US |
dc.identifier.journal | KDU IRC | en_US |
dc.identifier.pgnos | 22 | en_US |
This item appears in the following Collection(s): Engineering [25]
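Illustrative note: the abstract describes running multiple neural networks and DPUs concurrently on the Kria KV260. The following is a minimal sketch, not the authors' implementation, of how a single compiled network is typically driven on a Xilinx DPU through the Vitis AI Runtime (VART) Python API on that platform; the model file name, buffer dtype, and subgraph-selection details are assumptions.

import numpy as np
import xir
import vart

# Deserialize a compiled DPU model (.xmodel); the file name is a placeholder.
graph = xir.Graph.deserialize("compiled_network.xmodel")

# Select the subgraph that was compiled for the DPU.
subgraphs = graph.get_root_subgraph().toposort_child_subgraph()
dpu_subgraph = next(
    s for s in subgraphs
    if s.has_attr("device") and s.get_attr("device").upper() == "DPU"
)

# Create a runner bound to that DPU subgraph and query its tensor shapes.
runner = vart.Runner.create_runner(dpu_subgraph, "run")
in_tensor = runner.get_input_tensors()[0]
out_tensor = runner.get_output_tensors()[0]

# Host buffers matching the DPU's expected shapes; the dtype depends on how
# the model was quantized (int8 is common for DPU deployments).
input_data = np.zeros(tuple(in_tensor.dims), dtype=np.int8)
output_data = np.zeros(tuple(out_tensor.dims), dtype=np.int8)

# Submit the inference job asynchronously and block until it completes.
job_id = runner.execute_async([input_data], [output_data])
runner.wait(job_id)

Several such runners, each bound to its own network or DPU instance and driven from separate threads, would be one plausible way to realize the concurrent multi-network execution scenario the abstract describes.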