Asilomar Conference on Signals, Systems and Computers,
Pacific Grove, California USA, Oct. 28-31, 2018
Deep Q-Learning for Self-Organizing Networks Fault Management and Radio Performance Improvement
Faris B. Mismar and
Brian L. Evans
Department of Electrical and Computer Engineering,
Wireless Networking and Communications Group,
The University of Texas at Austin,
Austin, TX 78712 USA
faris.mismar@utexas.edu, bevans@ece.utexas.edu
Abstract
We propose an algorithm to automate fault management in an outdoor
cellular network subject to wireless impairments using deep
reinforcement learning (RL).
This algorithm enables the cellular network cluster to self-heal
by allowing RL to learn how to improve the downlink (DL)
signal-to-interference-plus-noise ratio (SINR) and spectral efficiency
through exploration and exploitation of various alarm corrective actions.
The main contributions of this paper are to
- introduce a deep RL-based fault handling algorithm that
self-organizing networks can implement in polynomial runtime and
- show that this fault management method can improve radio link
performance in a realistic network setup.
Simulation results show that our proposed algorithm learns an action
sequence that clears alarms and improves performance in the cellular
cluster better than existing algorithms, even under random network
fault occurrences and user movements.
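The explore-and-exploit loop described above can be sketched with a minimal tabular Q-learning agent. Everything in this sketch is an assumption for illustration only: the corrective-action names, the toy alarm dynamics, and the reward (negative active-alarm count as a crude proxy for SINR degradation) are not the paper's actual environment, action set, or deep Q-network.

```python
import random

# Hypothetical toy setup: states are the number of active alarms in a
# cell cluster; the action names below are illustrative assumptions,
# not taken from the paper.
ACTIONS = ["reset_cell", "adjust_power", "change_tilt", "do_nothing"]
N_STATES = 5                    # 0..4 active alarms
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.2

def step(state, action):
    """Toy dynamics: a corrective action clears one alarm with
    probability 0.8; the reward penalizes remaining alarms."""
    if action != "do_nothing" and state > 0 and random.random() < 0.8:
        state -= 1
    return state, -state        # fewer alarms -> higher reward

def train(episodes=500, seed=0):
    random.seed(seed)
    q = [[0.0] * len(ACTIONS) for _ in range(N_STATES)]
    for _ in range(episodes):
        s = N_STATES - 1                      # start with all alarms raised
        for _ in range(20):                   # bounded episode length
            # epsilon-greedy exploration vs. exploitation
            if random.random() < EPS:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r = step(s, ACTIONS[a])
            # standard one-step Q-learning update
            q[s][a] += ALPHA * (r + GAMMA * max(q[s2]) - q[s][a])
            s = s2
            if s == 0:                        # all alarms cleared
                break
    return q

q = train()
```

After training, the learned Q-values in an alarmed state should rank the (toy) corrective actions above "do_nothing", which is the alarm-clearing behavior the abstract describes; the paper replaces this toy table with a deep Q-network over a realistic network simulation.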