
Online Velocity Control and Data Capture of Drones for the Internet-of-Things: An Onboard Deep Reinforcement Learning Approach
Ref: CISTER-TR-201106       Publication Date: Mar 2021

Abstract:
Applications of Unmanned Aerial Vehicles (UAVs) for data collection are a promising means to extend Internet-of-Things (IoT) networks into remote and hostile areas, and areas with no access to power supplies. Adequate design of the velocity control and communication decisions of UAVs is critical to minimizing the data packet losses of ground IoT nodes that result from buffer overflows and transmission failures. However, online velocity control and communication decision-making are challenging in UAV-enabled IoT networks, because the UAV lacks up-to-date knowledge of the state of the IoT nodes, e.g., battery energy, buffer length, and channel conditions. Existing reinforcement learning methodology provides real-time solutions to small-scale decision problems in static IoT networks, but it is impractical for online velocity control and communication decisions in UAV-enabled IoT networks, due to the rapidly growing complexity of the problem (also known as the curse of dimensionality). This article discusses the design of an onboard deep Q-network that delivers the online velocity control and communication decisions of the UAV. The onboard deep Q-network can jointly determine the optimal patrol velocity of the UAV and select the IoT node to be interrogated for data collection, thereby asymptotically minimizing the data packet loss of the IoT network.
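To illustrate the joint decision structure described in the abstract, below is a minimal sketch (in Python/PyTorch) of a deep Q-network over a joint (patrol velocity, IoT node) action space. The state features, network size, and discretization levels are illustrative assumptions for this sketch, not the architecture reported in the article.

import random
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not the paper's values):
NUM_VELOCITY_LEVELS = 5        # discretized UAV patrol velocities
NUM_IOT_NODES = 10             # candidate nodes to interrogate
STATE_DIM = 3 * NUM_IOT_NODES  # e.g., battery, buffer length, channel gain per node

class OnboardDQN(nn.Module):
    """Maps the UAV's (possibly outdated) observation of the network
    state to Q-values over the joint (velocity, node) action space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 128),
            nn.ReLU(),
            nn.Linear(128, NUM_VELOCITY_LEVELS * NUM_IOT_NODES),
        )

    def forward(self, state):
        return self.net(state)

def select_action(q_net, state, epsilon=0.1):
    """Epsilon-greedy choice of the joint action; decodes the flat
    action index back into a (velocity level, node index) pair."""
    if random.random() < epsilon:
        idx = random.randrange(NUM_VELOCITY_LEVELS * NUM_IOT_NODES)
    else:
        with torch.no_grad():
            idx = int(q_net(state).argmax())
    return divmod(idx, NUM_IOT_NODES)  # (velocity_level, node_index)

# Example: one decision step from a random state observation.
q_net = OnboardDQN()
velocity_level, node = select_action(q_net, torch.randn(STATE_DIM))
print(f"fly at velocity level {velocity_level}, poll node {node}")

Flattening the two decisions into one discrete action keeps the standard deep Q-learning machinery intact; in a full implementation the action would be trained against a replay buffer and a target network, which are omitted here for brevity.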

Authors:
Kai Li, Wei Ni, Eduardo Tovar, Abbas Jamalipour


Published in IEEE Vehicular Technology Magazine, IEEE, Volume 16, Issue 1, pp. 49-56.

DOI: https://doi.org/10.1109/MVT.2020.3039199
ISSN: 1556-6072.

Record Date: 19 Nov 2020