Accelerator-Aware Computation Offloading Under Timing Constraints
Published in: | 2024 International Conference on Computing, Networking and Communications (ICNC), pp. 706 - 710 |
---|---|
Main Authors: | Latzko, Vincent; Vielhaus, Christian; Mehrabi, Mahshid; Fitzek, Frank H. P. |
Format: | Conference Proceeding |
Language: | English |
Published: | IEEE, 19-02-2024 |
Subjects: | Artificial neural networks; Computation Offloading; Heterogeneous Computing; Machine learning; Multi-access edge computing; System-on-chip; Technological innovation; Timing; Vectors |
Online Access: | https://ieeexplore.ieee.org/document/10556064 |
Abstract | The rise of chiplets in personal and high-performance computing is mirrored by Systems on Chip (SoCs) in mobile devices. Both paradigms allow vendors and designers to integrate dedicated circuitry for accelerating computation. Implementations like cryptographic or vector engines are well known, and nowadays Machine Learning (ML) blocks are often included to accelerate Deep Neural Network (DNN) inference. The shift toward diverse device architectures, as exemplified by RISC-V, is poised to gain momentum. The widespread integration of accelerators in smartphones, tablets, SoCs, and dedicated server systems is opening up exciting new innovations. In this short paper we present computation offloading for specific workloads in the framework of Multi-Access Edge Computing (MEC) and energy optimisation. We honour inter-task dependencies through the use of a Directed Acyclic Graph (DAG). Our system model with multiple mobile users, Device-to-Device (D2D) links between User Equipments (UEs), and edge servers enables computational and communication cooperation. The system's energy efficiency is significantly improved by introducing accelerators to the UEs and the MEC. We study the capabilities of the devices (accelerators) and propose an effective solution. |
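To make the abstract's setting concrete, the toy sketch below schedules a three-task DAG across a UE CPU, an on-device accelerator, and a MEC accelerator, and picks the lowest-energy placement that still meets a deadline. This is only an illustration, not the paper's formulation: the task names, the (latency, energy) numbers, the deadline, and the brute-force search are hypothetical, and D2D/communication costs are simply folded into the per-location energy figures.

```python
# Illustrative sketch only (hypothetical numbers, not the paper's model):
# tasks with precedence constraints form a DAG; each task may run on the
# UE CPU, a UE accelerator, or the MEC accelerator. We pick the placement
# that minimises UE-side energy while meeting an end-to-end deadline.
from itertools import product

# task -> (predecessors, {location: (latency_s, ue_energy_J)});
# the "mec_accel" energy values stand in for the UE's transmission cost.
TASKS = {
    "pre":   ([],        {"ue_cpu": (0.04, 0.30), "ue_accel": (0.02, 0.10), "mec_accel": (0.03, 0.05)}),
    "infer": (["pre"],   {"ue_cpu": (0.50, 2.00), "ue_accel": (0.10, 0.40), "mec_accel": (0.06, 0.08)}),
    "post":  (["infer"], {"ue_cpu": (0.03, 0.20), "ue_accel": (0.03, 0.15), "mec_accel": (0.04, 0.05)}),
}
ORDER = ["pre", "infer", "post"]            # a topological order of the DAG
LOCATIONS = ["ue_cpu", "ue_accel", "mec_accel"]
DEADLINE_S = 0.25                           # hypothetical end-to-end timing constraint

def evaluate(placement):
    """Return (makespan, UE energy) for one task-to-location assignment."""
    finish, energy = {}, 0.0
    for name in ORDER:
        deps, cost = TASKS[name]
        latency, joules = cost[placement[name]]
        start = max((finish[d] for d in deps), default=0.0)   # DAG precedence
        finish[name] = start + latency
        energy += joules
    return max(finish.values()), energy

best = None
for combo in product(LOCATIONS, repeat=len(ORDER)):            # brute force is fine for a toy DAG
    placement = dict(zip(ORDER, combo))
    makespan, energy = evaluate(placement)
    if makespan <= DEADLINE_S and (best is None or energy < best[0]):
        best = (energy, makespan, placement)

print(best)   # lowest-energy placement that meets the deadline, if any
```

In the paper's setting the search space also covers D2D offloading to neighbouring UEs and is driven by the devices' actual accelerator capabilities, so an exhaustive sweep like this would not scale; the sketch only shows how DAG precedence and a deadline constrain the energy-minimising placement.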
Author | Latzko, Vincent; Vielhaus, Christian; Mehrabi, Mahshid; Fitzek, Frank H. P. |
Author affiliations | Vincent Latzko, Christian Vielhaus and Frank H. P. Fitzek (Technische Universität Dresden, Deutsche Telekom Chair of Communication Networks, Dresden, Germany, 01062); Mahshid Mehrabi (Barkhausen Institute, Dresden, Germany, 01062) |
DOI | 10.1109/ICNC59896.2024.10556064 |
EISBN | 9798350370997 |
EISSN | 2473-7585 |
EndPage | 710 |
ExternalDocumentID | 10556064 |
Genre | orig-research |
Grant information | European Union, grant 16MEE0173 (funder ID 10.13039/501100000780) |
PageCount | 5 |
PublicationDate | 2024-Feb.-19 |
PublicationTitle | 2024 International Conference on Computing, Networking and Communications (ICNC) |
PublicationTitleAbbrev | ICNC |
PublicationYear | 2024 |
Publisher | IEEE |
StartPage | 706 |
SubjectTerms | Artificial neural networks; Computation Offloading; Heterogeneous Computing; Machine learning; Multi-access edge computing; System-on-chip; Technological innovation; Timing; Vectors |
Title | Accelerator-Aware Computation Offloading Under Timing Constraints |
URI | https://ieeexplore.ieee.org/document/10556064 |