Document Type

Thesis

Degree Name

Master of Applied Computing

Department

Physics and Computer Science

Program Name/Specialization

Applied Computing

Faculty/School

Faculty of Science

First Advisor

Dr. Lilatul Ferdouse

Advisor Role

She provided me with continuous guidance, valuable feedback, and unwavering support throughout my research.

Abstract

Reconfigurable intelligent surface (RIS)-assisted unmanned aerial vehicle (UAV) networks have emerged as a promising solution for future sixth-generation (6G) wireless systems because of their ability to enhance coverage and improve communication performance in complex, infrastructure-limited environments. The integration of UAV-mounted mobile edge computing (MEC) further enables efficient data processing and service delivery for large-scale Internet of Things (IoT) applications. In such systems, maintaining fresh information updates while ensuring energy-efficient operation is a critical challenge. Age of information (AoI) is an effective metric for measuring the freshness of received data, while energy efficiency (EE) is essential because most IoT devices are battery-powered and resource-constrained. In this thesis, we investigate efficient resource allocation strategies for RIS-assisted UAV-MEC networks with the objective of improving information freshness and energy efficiency. The optimization problems considered involve multiple decision variables, including UAV mobility control, RIS phase-shift configuration, communication channel allocation, user association, task offloading, and computing resource allocation.

In the first work, the focus is on minimizing the average AoI of IoT devices through communication resource allocation in a RIS-assisted UAV-MEC network. An AoI-aware multi-agent deep Q-network (MADQN) framework is proposed, in which IoT devices and RIS elements act as agents that interact with the environment to learn efficient scheduling, association, and phase-shift decisions. In the second work, the study is extended to jointly optimize AoI and energy consumption by incorporating additional system decisions such as UAV mobility, task offloading, and energy-efficient computation. To handle the tradeoff between these two objectives, normalized utility functions are introduced, and a multi-agent proximal policy optimization (MAPPO) algorithm is developed to learn optimal resource allocation policies.

Simulation results demonstrate that the proposed reinforcement learning frameworks learn stable policies for complex RIS-assisted UAV-MEC environments. The DQN-based approach effectively reduces AoI in the first work, while the MAPPO-based approach achieves better convergence and improved performance in balancing AoI and energy consumption. Comparative analysis further shows that the proposed MAPPO approach outperforms baseline methods in terms of learning stability and overall system performance.
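The average-AoI objective described in the abstract can be illustrated with a minimal discrete-time sketch (not from the thesis; the function name and the delivery schedule below are illustrative assumptions): the age at the receiver grows by one each slot and, when an update is delivered, resets to the elapsed time since that update was generated.

```python
# Hedged sketch of the age-of-information (AoI) metric, assuming a
# simple slotted-time model with one source and one receiver.

def average_aoi(deliveries, horizon):
    """Average AoI over `horizon` slots.

    deliveries: dict mapping delivery slot t -> generation slot g
                of the update delivered at t.
    """
    age = 0
    total = 0
    for t in range(1, horizon + 1):
        if t in deliveries:
            age = t - deliveries[t]   # reset to age of freshest delivered update
        else:
            age += 1                  # no delivery: staleness grows by one slot
        total += age
    return total / horizon

# Illustrative run: updates generated at slots 2 and 6,
# delivered at slots 4 and 7, over a 10-slot horizon.
print(average_aoi({4: 2, 7: 6}, horizon=10))  # → 2.5
```

A scheduler (the DQN or MAPPO agents in the thesis) would choose which devices transmit in each slot, shaping the `deliveries` pattern so that this average stays low.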

Convocation Year

2026

Convocation Season

Spring

Available for download on Friday, April 16, 2027
