IEEE Global Communications Conference (GLOBECOM),
Dec. 9-13, 2019, Waikoloa, HI, USA.
Robust Learning-Based ML Detection for Massive MIMO Systems with One-Bit Quantized Signals
Jinseok Choi (1),
Yunseong Cho (1),
Brian L. Evans (1), and
Alan Gatherer (2)
(1) Department of Electrical and Computer Engineering,
Wireless Networking and Communications Group,
The University of Texas at Austin,
Austin, TX 78712 USA
jinseokchoi89@gmail.com -
yscho@utexas.edu -
bevans@ece.utexas.edu
and
(2) Futurewei Technologies, Plano, Texas, USA.
Paper Draft -
Poster Draft -
Software Release
Multiantenna Communications Project
Abstract
In this paper, we investigate learning-based maximum likelihood (ML) detection
for uplink massive multiple-input multiple-output (MIMO) systems with
one-bit analog-to-digital converters (ADCs).
To overcome the significant dependency of learning-based detection on the
training length, we propose two one-bit ML detection methods:
a biased-learning method and a dithering-and-learning method.
The biased-learning method keeps likelihood functions with zero probability
from wiping out the information obtained through learning, thereby providing
more robust detection performance.
Extending the biased-learning method to a system that knows the received
signal-to-noise ratio (SNR), the dithering-and-learning method estimates the
likelihood functions more accurately by adding dithering noise to the
quantizer input.
The proposed methods are further improved by adopting a post likelihood
function update that exploits correctly decoded data symbols as additional
training pilots.
The proposed methods avoid the need for channel estimation.
Simulation results validate the detection performance of the proposed methods
in terms of symbol error rate.
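To make the learning and detection steps concrete, the sketch below shows one way the biased-learning idea could be implemented in NumPy: empirical one-bit sign probabilities are learned per candidate symbol vector from repeated pilot observations, a small bias keeps every estimate away from exactly 0 and 1 so that a single mismatched sign cannot zero out the likelihood product, and detection picks the candidate with the largest learned log-likelihood. The function names and the bias value epsilon are illustrative and not taken from the paper.

import numpy as np

def learn_biased_likelihoods(pilot_signs, epsilon=1e-3):
    # pilot_signs: one-bit pilot observations in {-1, +1}, with shape
    # (num_candidates, num_pilot_repetitions, num_receive_dims).
    # Empirical probability that each receive dimension is +1 given each candidate.
    p_plus = (pilot_signs > 0).mean(axis=1)
    # Biased-learning step: keep every probability away from exactly 0 or 1 so
    # that an unseen sign pattern cannot wipe out the whole likelihood product.
    return np.clip(p_plus, epsilon, 1.0 - epsilon)

def one_bit_ml_detect(rx_sign, p_plus):
    # rx_sign: received one-bit vector in {-1, +1}, shape (num_receive_dims,).
    observed_plus = rx_sign > 0
    # Log-likelihood of the observed signs under each candidate's learned model.
    loglik = np.where(observed_plus, np.log(p_plus), np.log(1.0 - p_plus)).sum(axis=1)
    return int(np.argmax(loglik))  # index of the ML candidate symbol vector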
Questions & Answers
Here are some of the questions and answers that arose during Mr. Cho's
interactive poster presentation of the work at the conference.
Q. Why does the method use dithering noise?
A. At high SNR with one-bit ADCs, it is hard to collect enough statistics
within a reasonable training length because the quantized signal rarely shows
sign changes in a short frame.
Dithering noise perturbs the received signal so that the desired statistics
can be obtained.
The effect of the dithering noise is then removed because the base station
(BS) knows its statistics.
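A rough NumPy/SciPy sketch of that idea, used only during the training phase, is shown below: known-variance Gaussian dithering noise is added before the one-bit quantizer so that sign flips occur even at high SNR, the sign probabilities are estimated from the dithered observations, and the dithering effect is then removed by mapping the estimates back through the Gaussian tail function using the known noise and dithering variances. The mapping shown is one plausible refinement under a Gaussian noise assumption; the paper's exact update may differ, and all names are illustrative.

import numpy as np
from scipy.stats import norm

def dither_and_learn(pre_quant_pilots, noise_var, dither_var, eps=1e-3, seed=0):
    # pre_quant_pilots: real-valued quantizer inputs for one candidate,
    # shape (num_pilot_repetitions, num_receive_dims).
    rng = np.random.default_rng(seed)
    dither = rng.normal(0.0, np.sqrt(dither_var), size=pre_quant_pilots.shape)
    signs = np.sign(pre_quant_pilots + dither)  # one-bit training observations
    p_hat = np.clip((signs > 0).mean(axis=0), eps, 1.0 - eps)
    # The BS knows the dithering variance, so it can map the empirical
    # probability back to a dither-free estimate through the Gaussian tail
    # function Q(.) = norm.sf(.) (assumed refinement, see lead-in above).
    scale = np.sqrt((noise_var + dither_var) / noise_var)
    return norm.sf(scale * norm.isf(p_hat))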
Q. Is dithering noise used during the data transmission phase?
A. No. Dithering noise helps to collect enough statistics during the training
phase, but it is no longer used once the BS has an initial estimate of the
transition probabilities.
Q. What is the meaning of the dithering variance?
A. The dithering variance sets the amount of perturbation, so we increase it
in proportion to the SNR.
Designing an optimal dithering variance is future work.
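For illustration only, the scaling rule mentioned in the answer could look like the snippet below, where the proportionality constant c is a hypothetical tuning knob rather than a value from the paper.

def dither_variance(snr_db, noise_var, c=1.0):
    # Increase the dithering variance in proportion to the received SNR
    # (converted to linear scale); c is a hypothetical tuning constant.
    return c * (10.0 ** (snr_db / 10.0)) * noise_var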
COPYRIGHT NOTICE: All the documents on this server
have been submitted by their authors to scholarly journals or conferences
as indicated, for the purpose of non-commercial dissemination of
scientific work.
The manuscripts are put on-line to facilitate this purpose.
These manuscripts are copyrighted by the authors or the journals in which
they were published.
You may copy a manuscript for scholarly, non-commercial purposes, such
as research or instruction, provided that you agree to respect these
copyrights.
Last Updated 12/17/19.