Construction of Rate-Compatible Codes from Punctured Polar Codes
Product form: Book
Transmitting information enables the technical applications that modern societies are built
on. This thesis focuses on channel coding, a fundamental building block of communications
systems that transmit information over noisy channels. In his seminal
work presented in 1948, Shannon laid the groundwork for the mathematical theory of information
and its transmission. Formalizing the notion of a noisy channel as
a statistical model, Shannon establishes the maximum rate at which information
may be transmitted reliably. This upper bound, referred to as the channel’s capacity, is
sharp: at rates below the capacity of a given channel, information may be transmitted
and recovered at the receiver with arbitrarily small error probability, while at rates
exceeding that capacity the error probability remains bounded away from zero.
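As a standard textbook illustration of this quantity (not a result of this thesis), the capacity of a memoryless channel with input $X$ and output $Y$ is
$$ C = \max_{P_X} I(X;Y), $$
the mutual information maximized over all input distributions $P_X$. For the binary symmetric channel with crossover probability $p$, this evaluates to $C = 1 - H_b(p)$, where $H_b(p) = -p \log_2 p - (1-p)\log_2(1-p)$ is the binary entropy function; at $p \approx 0.11$, for example, roughly half a bit of information can be conveyed reliably per channel use.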
To protect against transmission errors, channel codes add redundancy to the information,
which helps a channel decoder located at the receiver recover the transmitted
information. The field of channel coding is concerned with designing this redundancy
so as to transmit at rates close to capacity. Shannon shows that this is possible
using random codes and a code length approaching infinity. However, neither random
codes nor codes of infinite length are a viable way to build concrete systems.
As a result, his work sparked a vast body of research concerned with building practical
channel codes. Prominent early milestones include Hamming codes,
Golay codes, and Reed-Muller (RM) codes. Early coding theory
was dominated by algebraic approaches that exploit the structure of finite fields to
devise efficient encoding and decoding algorithms for linear block codes on hard-decision
channels, with Bose-Chaudhuri-Hocquenghem (BCH) codes
and Reed-Solomon (RS) codes marking major breakthroughs. In the last decade of the
20th century, advances in computer technology enabled iterative decoders, allowing turbo
codes to approach the Shannon limit with moderate decoding complexity, and led to the
rediscovery of low-density parity-check (LDPC) codes, proposed by Gallager in 1963.
Under iterative decoding algorithms operating on graphical representations of the
parity-check matrix, the latter are shown to approach the Shannon limit on binary-input
channels subject to additive white Gaussian noise.
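As a minimal sketch of how a linear block code adds redundancy and how a parity-check matrix exposes transmission errors, consider the classic (7,4) Hamming code, used here purely as a textbook example and not as a construction from this thesis:

import numpy as np

# Illustrative example (not from the thesis): (7,4) Hamming code in systematic form
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # generator matrix G = [I | P]
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity-check matrix H = [P^T | I]

m = np.array([1, 0, 1, 1])   # 4 information bits
c = m @ G % 2                # 7-bit codeword: the 4 information bits plus 3 parity bits

r = c.copy()
r[2] ^= 1                    # the channel flips one bit

s = H @ r % 2                # syndrome; all-zero iff r satisfies every parity check
# For a single flip, the syndrome equals the column of H at the error position,
# so the error can be located and corrected.
err_pos = int(np.argmax((H.T == s).all(axis=1)))
r[err_pos] ^= 1
assert (r == c).all()

Each column of H is distinct, which is why the syndrome of a single bit flip pinpoints the error; iterative decoders for LDPC and turbo codes generalize this idea by exchanging soft information on graphical representations of much larger parity-check constraints.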
These developments solve the channel coding problem from a practical point of
view. However, in 2008, Arıkan presents polar codes (PCs), the first channel coding
scheme provably achieving the Shannon capacity of output-symmetric binary-input
discrete memoryless channels (BDMCs) with low-complexity encoding and decoding
methods. Arıkan shows how combining independent pairs of random
variables creates disparate mutual information terms; applied recursively, this combination
amplifies the disparity, ultimately leading to the phenomenon of polarization.
His sequential decoding algorithm exploits this
phenomenon and allows for the construction of channel codes that provably feature
vanishing error probability as the code length approaches infinity. Consequently,
his work reignited research in coding and inspired many works that translate the polarization
phenomenon to classical information-theoretic problems, and it also led to a
continuously growing number of publications focusing on practical aspects of polar
coding.
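As a standard textbook illustration of one polarization step (not a construction specific to this thesis), let $W$ be a binary erasure channel with erasure probability $\epsilon$, so that $I(W) = 1 - \epsilon$. Combining two independent copies of $W$ yields two synthetic channels $W^-$ and $W^+$ with
$$ I(W^-) = I(W)^2, \qquad I(W^+) = 2\,I(W) - I(W)^2, $$
so the sum $I(W^-) + I(W^+) = 2\,I(W)$ is preserved while the two terms are driven apart. For $\epsilon = 0.5$, two channels of mutual information $0.5$ each become channels of mutual information $0.25$ and $0.75$; applying the transform recursively to the synthesized channels pushes almost all mutual information terms toward either $0$ or $1$, which is exactly the disparity the code construction and its sequential decoder exploit.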