Asilomar Conference on Signals, Systems and Computers,
Pacific Grove, California USA, Oct. 28-31, 2018
Deep Q-Learning for Self-Organizing Networks Fault Management and Radio Performance Improvement
Faris B. Mismar and
Brian L. Evans
Department of Electrical and Computer Engineering,
Wireless Networking and Communications Group,
The University of Texas at Austin,
Austin, TX 78712 USA
We propose an algorithm to automate fault management in an outdoor
cellular network using deep reinforcement learning (RL) against
wireless impairments.
This algorithm enables the cellular network cluster to self-heal
by allowing RL to learn how to improve the downlink (DL)
signal-to-interference-plus-noise ratio (SINR) and spectral
efficiency through exploration and exploitation of various alarm
corrective actions.
The main contributions of this paper are to
- introduce a deep RL-based fault handling algorithm which
self-organizing networks can implement in polynomial runtime, and
- show that this fault management method can improve the radio link
performance in a realistic network setup.
Simulation results show that our proposed algorithm learns an action
sequence that clears alarms and improves performance in the cellular
cluster better than existing algorithms, even against the randomness
of network fault occurrences and user movements.
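The exploration-and-exploitation loop described above can be illustrated with a minimal tabular Q-learning sketch. The states, corrective actions, and reward model below are hypothetical stand-ins for illustration only; they are not the paper's actual deep Q-network, action set, or simulation setup.

```python
import random

random.seed(0)  # for reproducibility of this toy example

# Hypothetical alarm corrective actions and fault states (illustrative only)
ACTIONS = ["reset_cell", "adjust_tilt", "increase_power"]
STATES = ["alarm_active", "alarm_cleared"]

def step(state, action):
    """Toy environment dynamics: 'reset_cell' clears the alarm with
    high probability; clearing yields a reward standing in for an
    SINR/spectral-efficiency gain, otherwise a small penalty accrues."""
    if state == "alarm_active" and action == "reset_cell" and random.random() < 0.9:
        return "alarm_cleared", 1.0
    return state, -0.1

def train(episodes=2000, alpha=0.1, gamma=0.9, eps=0.2):
    """Epsilon-greedy Q-learning: explore random corrective actions with
    probability eps, otherwise exploit the best-known action."""
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        state = "alarm_active"
        for _ in range(10):  # bounded episode length
            if random.random() < eps:
                action = random.choice(ACTIONS)          # explore
            else:
                action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
            nxt, reward = step(state, action)
            best_next = max(q[(nxt, a)] for a in ACTIONS)
            # Standard Q-learning temporal-difference update
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = nxt
            if state == "alarm_cleared":
                break
    return q

q = train()
best = max(ACTIONS, key=lambda a: q[("alarm_active", a)])
print(best)
```

In this toy setting the learned policy favors the corrective action that clears the alarm; the paper's algorithm replaces the tabular Q-function with a deep Q-network over a realistic network state.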
COPYRIGHT NOTICE: All the documents on this server
have been submitted by their authors to scholarly journals or conferences
as indicated, for the purpose of non-commercial dissemination of
scientific and technical work.
The manuscripts are put on-line to facilitate this purpose.
These manuscripts are copyrighted by the authors or the journals in which
they were published.
You may copy a manuscript for scholarly, non-commercial purposes, such
as research or instruction, provided that you agree to respect these
copyrights.
Last Updated 11/07/18.