Search Results - "Dinh, Thinh Quang"

  • Showing 1 - 16 of 16 results
  1.

    Offloading in Mobile Edge Computing: Task Allocation and Computational Frequency Scaling by Dinh, Thinh Quang, Tang, Jianhua, La, Quang Duy, Quek, Tony Q. S.

    Published in IEEE Transactions on Communications (01-08-2017)
    “…In this paper, we propose an optimization framework of offloading from a single mobile device (MD) to multiple edge devices. We aim to minimize both total…”
    Get full text
    Journal Article
  2.

    Adaptive Computation Scaling and Task Offloading in Mobile Edge Computing by Dinh, Thinh Quang, Tang, Jianhua, La, Quang Duy, Quek, Tony Q. S.

    “…The energy consumption and applications' execution latency of mobile devices (MDs) can be improved by migrating application tasks to a nearby edge device. In…”
    Get full text
    Conference Proceeding
  3.

    Learning for Computation Offloading in Mobile Edge Computing by Dinh, Thinh Quang, La, Quang Duy, Quek, Tony Q. S., Shin, Hyundong

    Published in IEEE Transactions on Communications (01-12-2018)
    “…Mobile edge computing (MEC) is expected to provide cloud-like capacities for mobile users (MUs) at the edge of wireless networks. However, deploying MEC…”
    Get full text
    Journal Article
  4.

    A Hybrid DQN and Optimization Approach for Strategy and Resource Allocation in MEC Networks by Wu, Yi-Chen, Dinh, Thinh Quang, Fu, Yaru, Lin, Che, Quek, Tony Q. S.

    “…We consider a multi-user multi-server mobile edge computing (MEC) network with time-varying fading channels and formulate an offloading decision and resource…”
    Get full text
    Journal Article
  5.

    Enabling intelligence in fog computing to achieve energy and latency reduction by La, Quang Duy, Ngo, Mao V., Dinh, Thinh Quang, Quek, Tony Q.S., Shin, Hyundong

    Published in Digital Communications and Networks (01-02-2019)
    “…Fog computing is an emerging architecture intended for alleviating the network burdens at the cloud and the core network by moving resource-intensive…”
    Get full text
    Journal Article
  6.

    Low-Latency and Secure Computation Offloading Assisted by Hybrid Relay-Reflecting Intelligent Surface by Ngo, Khac-Hoang, Nguyen, Nhan Thanh, Dinh, Thinh Quang, Hoang, Trong-Minh, Juntti, Markku

    “…Recently, the hybrid relay-reflecting intelligent surface (HRRIS) has been introduced as a spectral- and energy-efficient architecture to assist wireless…”
    Get full text
    Conference Proceeding
  7.

    Enabling Large-Scale Federated Learning over Wireless Edge Networks by Dinh, Thinh Quang, Nguyen, Diep N., Hoang, Dinh Thai, Pham, Tran Vu, Dutkiewicz, Eryk

    “…Major bottlenecks of large-scale Federated Learning (FL) networks are the high costs for communication and computation. This is due to the fact that most of…”
    Get full text
    Conference Proceeding
  8.

    Joint Optimization of Execution Latency and Energy Consumption for Mobile Edge Computing with Data Compression and Task Allocation by Ly, Minh Hoang, Dinh, Thinh Quang, Kha, Ha Hoang

    “…This paper studies the mobile edge offloading scenario consisting of one mobile device (MD) with multiple independent tasks and various remote edge devices. In…”
    Get full text
    Conference Proceeding
  9.

    Online Resource Procurement and Allocation in a Hybrid Edge-Cloud Computing System by Dinh, Thinh Quang, Liang, Ben, Quek, Tony Q.S., Shin, Hyundong

    “…By acquiring cloud-like capacities at the edge of a network, edge computing is expected to significantly improve user experience. In this paper, we formulate a…”
    Get full text
    Journal Article
  10.

    In-network Computation for Large-scale Federated Learning over Wireless Edge Networks by Dinh, Thinh Quang, Nguyen, Diep N., Hoang, Dinh Thai, Pham, Tran Vu, Dutkiewicz, Eryk

    Published 21-09-2021
    “…Most conventional Federated Learning (FL) models are using a star network topology where all users aggregate their local models at a single server (e.g., a…”
    Get full text
    Journal Article
  11.

    Enabling Large-Scale Federated Learning over Wireless Edge Networks by Dinh, Thinh Quang, Nguyen, Diep N., Hoang, Dinh Thai, Pham, Tran Vu, Dutkiewicz, Eryk

    Published 21-09-2021
    “…Major bottlenecks of large-scale Federated Learning (FL) networks are the high costs for communication and computation. This is due to the fact that most of…”
    Get full text
    Journal Article
  12.

    Low-Latency and Secure Computation Offloading Assisted by Hybrid Relay-Reflecting Intelligent Surface by Ngo, Khac-Hoang, Nguyen, Nhan Thanh, Dinh, Thinh Quang, Hoang, Trong-Minh, Juntti, Markku

    Published 03-09-2021
    “…Recently, the hybrid relay-reflecting intelligent surface (HRRIS) has been introduced as a spectral- and energy-efficient architecture to assist wireless…”
    Get full text
    Journal Article
  13.

    Joint Optimization of Execution Latency and Energy Consumption for Mobile Edge Computing with Data Compression and Task Allocation by Ly, Minh Hoang, Dinh, Thinh Quang, Kha, Ha Hoang

    Published 27-09-2019
    “…In this paper, we consider the mobile edge offloading scenario consisting of one mobile device (MD) with multiple independent tasks and various remote edge…”
    Get full text
    Journal Article
  14.

    Online Resource Procurement and Allocation in a Hybrid Edge-Cloud Computing System by Dinh, Thinh Quang, Liang, Ben, Quek, Tony Q. S., Shin, Hyundong

    Published 24-01-2020
    “…By acquiring cloud-like capacities at the edge of a network, edge computing is expected to significantly improve user experience. In this paper, we formulate a…”
    Get full text
    Journal Article
  15.

    A Learning-Based Expected Best Offloading Strategy in Wireless Edge Networks by Wu, Yi-Chen, Dinh, Thinh Quang, Fu, Yaru, Lin, Che, Quek, Tony Q. S.

    “…Recently, Mobile-Edge Computing (MEC) has been considered as a powerful supplement to a wireless network by processing computationally intensive tasks for…”
    Get full text
    Conference Proceeding
  16.

    In-Network Computation for Large-Scale Federated Learning Over Wireless Edge Networks by Dinh, Thinh Quang, Nguyen, Diep N., Hoang, Dinh Thai, Pham, Tran Vu, Dutkiewicz, Eryk

    Published in IEEE Transactions on Mobile Computing (01-10-2023)
    “…Most conventional Federated Learning (FL) models are using a star network topology where all users aggregate their local models at a single server (e.g., a…”
    Get full text
    Magazine Article