Author ORCID Identifier

0000-0003-4413-4476

Document Type

Article

Disciplines

1.2 COMPUTER AND INFORMATION SCIENCE

Publication Details

S. Wang, M. A. Qureshi, L. Miralles-Pechuán, T. Huynh-The, T. R. Gadekallu and M. Liyanage, "Explainable AI for 6G Use Cases: Technical Aspects and Research Challenges," in IEEE Open Journal of the Communications Society, vol. 5, pp. 2490-2540, 2024, doi: 10.1109/OJCOMS.2024.3386872.

Abstract

Around 2020, 5G began its commercialization journey, and discussions about next-generation networks (such as 6G) emerged. Researchers predict that 6G networks will offer higher bandwidth, wider coverage, greater reliability and energy efficiency, and lower latency, and will form an integrated "human-centric" network system powered by artificial intelligence (AI). Such a network will drive many real-time automated decisions, ranging from network resource allocation to collision avoidance for self-driving cars. However, there is a risk of losing control over decision-making, because high-speed, data-intensive AI decision-making may go beyond designers' and users' comprehension. To mitigate this risk, explainable AI (XAI) methods can be used to enhance the transparency of black-box AI decision-making. This paper surveys the application of XAI in the upcoming 6G era, covering 6G technologies (such as intelligent radio and zero-touch network management) and 6G use cases (such as Industry 5.0). Additionally, the paper summarizes the lessons learned from recent attempts and outlines important research challenges in applying XAI to 6G use cases in the near future.

DOI

https://doi.org/10.1109/OJCOMS.2024.3386872

Funder

European Commission

Creative Commons License

Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License


Included in

Engineering Commons
