Proc. IEEE Global Communications Conference,
Dec. 5-9, 2011,
Houston, TX USA.
Non-Parametric Impulsive Noise Mitigation in
OFDM Systems Using Sparse Bayesian Learning
Marcel Nassar and
Brian L. Evans
Department of Electrical
and Computer Engineering,
Engineering Science Building,
The University of Texas at Austin,
Austin, TX 78712 USA
Draft of Paper
Slides in PowerPoint
Slides in PDF
Standalone Matlab code
Note: The above Matlab code implements the sparse Bayesian learning (SBL)
algorithm for interference mitigation that uses the interference observed
in the null tones of received complex-valued OFDM signals.
The Globecom paper figures were generated using real-valued OFDM signals.
Also, the above Matlab code does not implement the second SBL algorithm
in the Globecom paper, which uses data in all tones.
Interference Modeling and Mitigation Toolbox
Smart Grid Communications Mitigation Research at UT Austin
Additive asynchronous impulsive noise limits communication performance
in certain OFDM systems, such as powerline communication, cellular
LTE, and IEEE 802.11n systems.
Under additive impulsive noise, the fast Fourier transform (FFT) in
the OFDM receiver introduces dependence among the subcarrier noise
samples. As a result, the complexity of optimal detection becomes
exponential in the number of subcarriers.
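The spreading effect can be seen in a few lines of NumPy (sizes and
amplitudes here are illustrative, not from the paper): a single impulsive
time-domain sample spreads its energy over every subcarrier after the
receiver FFT, coupling the noise across all subcarriers.

```python
import numpy as np

# A single strong time-domain impulse (illustrative amplitude and position)
N = 64                                        # number of OFDM subcarriers
noise = np.zeros(N, dtype=complex)
noise[10] = 8.0                               # one impulsive noise sample

# After a unitary-scaled FFT, every subcarrier sees the same magnitude:
# |8| / sqrt(64) = 1 on all N bins, so no subcarrier escapes the impulse.
spectrum = np.fft.fft(noise) / np.sqrt(N)
print(np.allclose(np.abs(spectrum), 1.0))     # True
```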
Many previous approaches assume a statistical model of the impulsive
noise and use parametric methods in the receiver to mitigate it.
Parametric methods degrade with increasing model mismatch, and require
training and parameter estimation.
In this paper, we apply sparse Bayesian learning techniques to estimate
and mitigate impulsive noise in OFDM systems without the need for training.
We propose two nonparametric iterative algorithms:
- estimate the impulsive noise by its projection onto the null and pilot
  tones, so that the OFDM symbol is recovered by subtracting out
  the impulsive noise estimate; and
- jointly estimate the OFDM symbol and the impulsive noise utilizing
  information on all tones.
In our simulations, the estimators achieve 5 dB and 10 dB SNR gains in
communication performance, respectively, compared to conventional OFDM
receivers.
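The first algorithm's null-tone idea can be sketched with a generic
complex-valued SBL (expectation-maximization) update. Everything below is
an illustrative assumption, not the paper's exact configuration: the
symbol length, the randomly placed null tones, the 2-sparse impulse
values, and the noise variance are all made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                        # OFDM symbol length (illustrative)
null_tones = np.sort(rng.choice(N, size=16, replace=False))  # hypothetical
F = np.fft.fft(np.eye(N)) / np.sqrt(N)        # unitary DFT matrix
A = F[null_tones, :]                          # null-tone rows: y = A e + n
M = len(null_tones)

# Sparse impulsive noise in the time domain (illustrative support/values)
e = np.zeros(N, dtype=complex)
e[5], e[40] = 6 + 2j, -5 + 4j
sigma2 = 1e-3
y = A @ e + np.sqrt(sigma2 / 2) * (rng.standard_normal(M)
                                   + 1j * rng.standard_normal(M))

# SBL/EM: one variance hyperparameter gamma_i per time-domain sample
gamma = np.ones(N)
for _ in range(50):
    Sigma_y = (A * gamma) @ A.conj().T + sigma2 * np.eye(M)
    mu = gamma * (A.conj().T @ np.linalg.solve(Sigma_y, y))   # posterior mean
    quad = np.real(np.sum(np.conj(A) * np.linalg.solve(Sigma_y, A), axis=0))
    gamma = np.abs(mu) ** 2 + gamma - gamma ** 2 * quad       # EM update

# mu now concentrates on the true impulse locations; subtracting its DFT
# from the received symbol would remove the impulsive noise estimate.
```

Most hyperparameters collapse toward zero, leaving a sparse estimate `mu`
whose support matches the true impulse positions.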
- "The time-correlation properties of the impulsive noise are
not very clear to us. The description and the equations in
Section II only seem to give a pdf, presumably of the
instantaneous noise sample. How is the process itself described?"
Answer: In this paper, we assume that the impulsive noise samples
are i.i.d. It would be interesting to see how our methods perform
when the noise is time-correlated.
We are working toward deriving a hidden Markov chain model to
reflect this correlation.
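As a rough illustration of what such a Markov chain model could look like
(the two-state structure, transition probabilities, and per-state variances
below are all hypothetical, not fitted values from any measurement):

```python
import numpy as np

rng = np.random.default_rng(1)
# Two-state Markov chain: state 0 = background noise, state 1 = impulsive
# burst. Transition probabilities and variances are illustrative only.
p01, p10 = 0.05, 0.5            # probability of entering / leaving a burst
sigma = np.array([0.1, 3.0])    # per-state noise standard deviations

n_samples = 1000
state = 0
states = np.empty(n_samples, dtype=int)
for t in range(n_samples):
    states[t] = state
    if state == 0:
        state = 1 if rng.random() < p01 else 0
    else:
        state = 0 if rng.random() < p10 else 1

# Bursty noise: quiet Gaussian samples punctuated by correlated impulses
noise = sigma[states] * rng.standard_normal(n_samples)
```

Because the state persists across samples, the impulses arrive in
correlated bursts rather than independently.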
- "Could you explain the statement: 'Although the real and imaginary
parts of g are not exactly i.i.d, we approximate them as being
such'. I thought they were i.i.d Gaussian.
Answer: The vector g is the DFT of a real Gaussian vector, so its real
and imaginary parts are not i.i.d.
For example, the imaginary part of the first entry of g is always 0.
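A quick numerical check of this point (vector length is illustrative): the
DFT of a real vector is conjugate-symmetric, so the DC bin (and the
Nyquist bin for even length) is purely real, and bin k is tied to bin N-k.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
x = rng.standard_normal(N)          # real Gaussian time-domain vector
g = np.fft.fft(x) / np.sqrt(N)      # unitary DFT

# DC and Nyquist bins are (numerically) real; g[k] == conj(g[N-k]),
# so the real and imaginary parts of g cannot be i.i.d.
print(abs(g[0].imag) < 1e-12)                     # True
print(np.allclose(g[1:], np.conj(g[:0:-1])))      # True
```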
- "If you don't have space [page limit] constraints, it would be
good to explain equations 14-17.
Specifically, it is surprising to see that none of those equations
had a step where you made a slicing decision on the data."
Answer: We don't slice the data during the iterations because the
Expectation-Maximization algorithm only works for continuous variables.
If some of the variables are discrete, then there is no convergence
guarantee. The data are sliced after all of the iterations,
and the symbol error rate is computed from the sliced data.
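The slicing step itself is a simple nearest-point decision. As an
illustration (QPSK constellation, symbol count, and noise level are
assumptions for the demo, not the paper's simulation setup):

```python
import numpy as np

rng = np.random.default_rng(3)
# Unit-energy QPSK constellation (illustrative choice of modulation)
symbols = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = rng.choice(symbols, size=1000)                  # transmitted symbols
# Stand-in for the soft estimates left after the EM iterations finish
soft = tx + 0.1 * (rng.standard_normal(1000)
                   + 1j * rng.standard_normal(1000))

def slice_qpsk(z):
    """Hard decision: nearest QPSK point via the sign of each component."""
    return (np.sign(z.real) + 1j * np.sign(z.imag)) / np.sqrt(2)

# Symbol error rate computed on the sliced (hard-decision) data
ser = np.mean(np.abs(slice_qpsk(soft) - tx) > 1e-9)
```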
- "The results of Section VIII are interesting.
We are guessing it does not have any FEC [forward error correction],
since this is not mentioned.
It may be interesting to see what happens when even a relatively
simple FEC scheme like convolutional coding is used."
Answer: See answer to #5.
- "Would it be possible to also try clipping at some level,
say 18 dB above the signal rms / median value.
This should cover for the PAPR of OFDM itself."
Answer: You're right. Adding forward error correction and
clipping would make our experimental results stronger.
Actually, we have seen sparse Bayesian learning applied to peak-to-average
power ratio (PAPR) reduction. Our initial guess is that the clipping
errors could be corrected automatically by our algorithms,
since the clipping error is relatively sparse.
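The sparsity of the clipping error is easy to see numerically. The sketch
below uses a 2.5x-RMS threshold on a Gaussian stand-in for the OFDM time
signal so that a visible handful of samples are clipped; the reviewer's
18 dB level would clip far fewer still.

```python
import numpy as np

rng = np.random.default_rng(4)
# OFDM time-domain samples are approximately Gaussian (illustrative size)
x = rng.standard_normal(4096)
rms = np.sqrt(np.mean(x ** 2))
thresh = 2.5 * rms                            # illustrative clipping level
clipped = np.clip(x, -thresh, thresh)

# The clipping error touches only the few samples beyond the threshold,
# so it is a sparse signal a sparse-recovery stage could plausibly correct.
err = x - clipped
sparsity = np.count_nonzero(err) / x.size     # small fraction of samples
```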
COPYRIGHT NOTICE: All the documents on this server
have been submitted by their authors to scholarly journals or conferences
as indicated, for the purpose of non-commercial dissemination of
scientific and technical work.
The manuscripts are put on-line to facilitate this purpose.
These manuscripts are copyrighted by the authors or the journals in which
they were published.
You may copy a manuscript for scholarly, non-commercial purposes, such
as research or instruction, provided that you agree to respect these
copyrights.
Last Updated 02/03/11.