Abstract
With the rapid advance of sensor technology and the growing volume of data that sensors generate, techniques must be developed to process this information in real time on edge devices, close to where the data is produced. Sending all of that information to the cloud for processing has disadvantages in terms of latency, bandwidth, privacy, and reliability compared to processing it locally at the edge. In this paper, the implementation of deep learning algorithms on low-power, resource-constrained devices in an Internet of Things scenario is studied. To meet real-time requirements, the influence of different low-power deep learning hardware accelerators is evaluated. Finally, a practical smart farming case is presented, with comparative results in terms of power consumption and performance when the same artificial vision algorithm is run on different devices.
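As an illustration only (not taken from the paper), the sketch below shows one way per-inference latency of a TensorFlow Lite vision model could be measured on an edge board when comparing devices; the model file name and iteration count are placeholders.

```python
import time
import numpy as np
# tflite_runtime is the lightweight TensorFlow Lite interpreter commonly
# installed on edge boards; "model.tflite" is a placeholder file name.
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()

# Dummy input matching the model's expected shape and dtype, used only
# to time the interpreter, not to evaluate accuracy.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])

latencies = []
for _ in range(100):
    interpreter.set_tensor(input_details[0]["index"], dummy)
    start = time.perf_counter()
    interpreter.invoke()
    latencies.append(time.perf_counter() - start)

print(f"mean inference latency: {1000 * sum(latencies) / len(latencies):.2f} ms")
```

The same script could be run unchanged on each candidate device, so that latency figures are directly comparable across accelerators.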
| Field | Value |
|---|---|
| International | Yes |
| Congress | XXXIV Conference on Design of Circuits and Integrated Systems (DCIS) |
|  | 960 |
| Place | Bilbao, Spain |
| Reviewers | Yes |
| ISBN/ISSN | 2471-6170 |
| DOI | 10.1109/DCIS201949030.2019.8959857 |
| Start Date | 20/11/2019 |
| End Date | 22/11/2019 |
| From page | 1 |
| To page | 7 |
Artificial Vision on Edge IoT Devices: A Practical Case for 3D Data Classification