Department of Electrical Engineering
Master's Thesis (Examensarbete)

Investigation of LDPC code in DVB-S2

Master's thesis carried out in Automatic Control at the Institute of Technology, Linköping University

by
Hanxiao Ge

LiTH-ISY-EX--12/4321--SE
Linköping 2012

Supervisor: Jian Wang, ISY, Linköping University
Examiner: Di Wu, ISY, Linköping University
Linköping, 23 January, 2012

Department of Electrical Engineering, Linköping University
SE-581 83 Linköping, Sweden
URL for electronic version: http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-75164
Publisher: http://www.ep.liu.se
ISRN: LiTH-ISY-EX--12/4321--SE

Abstract
As one of the most powerful classes of error-correcting codes, low-density parity-check (LDPC) codes are widely used in digital communications. Because the performance of LDPC codes comes extraordinarily close to the Shannon limit, they were adopted in the new Digital Video Broadcast-Satellite-Second Generation (DVB-S2) standard; in 2003 this became the first broadcast standard to include LDPC codes.
In this thesis, a restructured parity-check matrix, which can be divided into sub-matrices, is provided for the LDPC code in DVB-S2. Corresponding to this restructured parity-check matrix, a reconstructed decoding table is devised. The encoding table of the DVB-S2 standard can only obtain the unknown check nodes from known variable nodes, whereas the decoding table provided in this thesis obtains the unknown variable nodes from known check nodes, which is exactly what the layered message passing algorithm needs. The layered message passing algorithm, also known as "turbo-decoding message passing", is used to reduce the number of decoding iterations and the memory storage for messages. The thesis also investigates the BP algorithm, the λ-min algorithm, the Min-sum algorithm and the SISO-s algorithm; simulation results for these algorithms and schedules are also presented.
Keywords: LDPC code, DVB-S2, Restructured parity-check matrix, Layered message passing (LMP), Reconstructed decoding table
Acknowledgments

I would like to express my sincere gratitude to my examiner Di Wu, who gave lots of helpful ideas and advice on the thesis proposal.
I would like to thank my supervisor Jian Wang, who guided me through the thesis.
There is no doubt that many helpful tools, such as LaTeX, MATLAB and Emacs, were used during my project, and I am grateful to their authors.
Special thanks go to my parents and my boyfriend for their encouragement and assistance, and to all my friends who helped and supported me.
Finally, it has been my honour to be a master's student at Linköping University and to complete the programme successfully at the ISY department. I will never forget this wonderful time during my studies in Sweden.
Contents

1 Introduction 3
1.1 Thesis Guidelines . . . 4
2 Low-density parity check codes 5
2.1 The block code . . . 5
2.2 Representations of LDPC codes . . . 7
2.3 Hard-decision and Soft-decision decoding . . . 9
2.3.1 Hard-decision decoding . . . 9
2.3.2 Soft-decision decoding . . . 12
2.4 Decoding algorithms for LDPC codes . . . 14
2.4.1 BP Algorithm . . . 15
2.4.2 λ-min Algorithm . . . . 18
2.4.3 Min-sum Algorithm . . . 19
3 Analysis of LDPC codes in DVB-S2 21
3.1 Regularity of the LDPC code in DVB-S2 . . . 21
3.2 Schedule for iterative decoding . . . 24
3.2.1 Two-Phase Message passing schedule . . . 24
3.2.2 Layered-Message passing schedule . . . 25
4 DVB-S2 LDPC decoding using reconstructed decoding table 29
4.1 Restructured parity-check matrix for DVB-S2 . . . 29
4.1.1 Description of restructured parity-check matrix . . . 31
4.2 Reconstructed LDPC decoding table . . . 32
4.3 SISO algorithm for restructured parity-check matrix . . . 36
4.3.1 SISO-simplified(SISO-s) algorithm . . . 38
4.3.2 Verification of SISO algorithm . . . 38
5 Simulation results 41
5.1 Comparison among BP, λ-min and Min-sum algorithms . . . 41
5.1.1 Comparison among BP, 3-min, 2-min and Min-sum algorithms at rate 1/4 . . . 41
5.1.2 Comparison among BP, 4-min, 3-min, 2-min and Min-sum algorithms at rate 1/2 . . . 42
5.1.3 Conclusion . . . 42
5.2 Comparison between TPMP and LMP . . . 43
5.2.1 Comparison of performance with the specified iterations . . 43
5.2.2 Comparison of the iterations between TPMP and LMP . . 43
5.2.3 Conclusion . . . 44
5.3 Comparison among SISO, SISO-s and λ-min algorithms . . . . 44
5.3.1 Comparison between SISO and SISO-s algorithms . . . 44
5.3.2 Comparison between SISO-s and λ-min algorithms . . . . . 45
5.3.3 Conclusion . . . 45
6 Memory cost estimation 49
6.1 Decoder architecture . . . 49
6.1.1 Memory of variable messages . . . 49
6.1.2 Interconnected network . . . 50
6.1.3 Decoding units of extrinsic messages . . . 51
6.2 Memory cost estimation . . . 52
6.2.1 Memory cost estimation of Extrinsic-messages . . . 52
7 Conclusion and Future work 55
7.1 Conclusion . . . 55
7.2 Future work . . . 55
Bibliography 57
List of Figures

2.1 Tanner graph . . . 8
2.2 Presentation for qij and rji . . . 9
2.3 Illustration for example 2.1 . . . 12
2.4 Function φ . . . . 18
3.1 Tanner graph for an irregular LDPC code . . . 24
3.2 Illustration of a decoding for a sub-iteration . . . 27
4.1 Tanner graph for the number ’54’ . . . 30
4.2 Parity-check matrix for the number ’54’ . . . 30
4.3 Tanner graph for the number ’9318’ . . . 31
4.4 Parity-check matrix for the number ’9318’ . . . 32
4.5 Restructured parity-check matrix . . . 33
4.6 A special sub-matrix in restructured parity-check matrix . . . 34
4.7 Single row layered-message passing to block row layered-message passing . . . 34
4.8 Computation of extrinsic message with q(x, y) . . . . 36
5.1 Performance of Bp, λ-min with λ = 2, 3 and Min-sum algorithms at rate 1/4 . . . 42
5.2 Performance of Bp, λ-min with λ = 2, 3, 4 and Min-sum algorithms at rate 1/2 . . . 43
5.3 Performance of Two-phase message passing(TPMP) and Layered message passing(LMP) with four specified iterations . . . 44
5.4 Iterations comparison between Two-phase message passing(TPMP) and Layered-message passing(LMP) at rate 1/4 . . . 45
5.5 Iterations comparison between Two-phase message passing(TPMP) and Layered-message passing(LMP) at rate 1/2 . . . 46
5.6 Iterations comparison between Two-phase message passing(TPMP) and Layered-message passing(LMP) at rate 2/3 . . . 46
5.7 Performance of SISO and SISO-s algorithms at rate 1/2 . . . 47
5.8 Performance of SISO-s and λ-min algorithms at rate 1/2 . . . . 47
6.1 Overview of the data flow for Layered-message passing algorithm . 50
6.2 Memory of variable messages . . . 51
6.3 Decoding unit for extrinsic message . . . 54
List of Tables
3.1 The q for normal frames . . . 22
3.2 The q for short frames . . . 22
3.3 Parameters for LDPC code in DVB-S2 of 11 code rates . . . 23
Introduction
Low-Density Parity-Check (LDPC) codes were first discovered by Robert G. Gallager in 1962 in his doctoral dissertation [9]. They did not draw much attention for almost 30 years, until MacKay and Neal rediscovered them in the 1990s [6]. Thanks to their excellent decoding ability, the performance of LDPC codes allows the noise threshold to be set very close to the theoretical maximum (the Shannon limit).
Compared to turbo codes, LDPC codes have higher error-correcting performance and lower complexity. These advantages led to LDPC codes being adopted in the new Digital Video Broadcast-Satellite-Second Generation (DVB-S2) standard [1]; for the first time, an LDPC code won the competition against six turbo codes and became part of a broadcast standard in 2003. In fact, LDPC codes have also been used in many other systems, such as WiMAX (802.16e), CMMB, and 10GBase-T (802.3an).
However, the LDPC code in DVB-S2 can use normal and short frames, which have codeword lengths of 64800 bits and 16200 bits, respectively. The computational burden this places on an implementation platform demands a properly simplified decoding algorithm to reduce the computational complexity. To date, intensive investigation has shown that there are many possible decoding algorithms for LDPC codes. Among them, the basic BP algorithm, the so-called sum-product algorithm, achieves good performance but retains high computational complexity. To overcome its shortcomings, the λ-min algorithm and the min-sum algorithm [5][3], both based on the BP algorithm, were introduced.
The decoding algorithms for LDPC codes are iterative, and the procedure consists of variable messages, check messages, and their updating. In the process, the messages sent to check nodes are updated by variable nodes and, conversely, the ones sent to variable nodes are updated by check nodes. A schedule called "two-phase message passing (TPMP)" executes the computations in two steps: the message computation from variable node to check node, and that from check node to variable node. Compared to TPMP, another schedule, known as "layered message passing (LMP)" or "turbo-decoding message passing (TDMP)", has higher convergence speed [8][2]. In this schedule, the variable messages are updated in each sub-iteration, and the hard decision is made after the variable-message update in each sub-iteration.
In this thesis, the parity-check matrix of the LDPC code in DVB-S2 is restructured so that the matrix can be divided into many sub-matrices. Meanwhile, a computation mechanism called the "soft-in soft-out (SISO)" algorithm [7], which also evolved from the BP algorithm, is introduced. With the combination of sub-matrices and the layered message passing schedule, the efficiency of decoding is increased and the memory storage for messages is reduced.
This thesis investigates several of the most important LDPC decoding algorithms and presents their simulation results. Based on an analysis of the characteristics of the LDPC code in DVB-S2, a structured iterative decoding algorithm is proposed. Meanwhile, a corresponding decoding table for LDPC codes in DVB-S2 is provided.
1.1 Thesis Guidelines

This thesis consists of 7 chapters.
Chapter 1 Summarizes the LDPC code and the DVB-S2 system.
Chapter 2 Introduces the basic knowledge and several common algorithms of LDPC codes.
Chapter 3 Analyses the structure of the parity-check matrix of the LDPC code in DVB-S2 and presents two schedules with different convergence speeds.
Chapter 4 Restructures the parity-check matrix of the LDPC code in DVB-S2 and provides a reconstructed decoding table instead of the encoding table.
Chapter 5 Presents the simulation results and compares the performance of the algorithms.
Chapter 6 Provides the decoder architecture in DVB-S2 and a memory cost estimation.
Chapter 7 Concludes the thesis and outlines future work.
Low-density parity check codes
In 1962, Robert G. Gallager first presented Low-Density Parity-Check (LDPC) codes in his doctoral dissertation [9]. LDPC codes are linear error-correcting codes defined by a sparse parity-check matrix, and they achieve a bit-error-rate performance near the theoretical Shannon limit. Limited by the technical conditions of the time, however, LDPC codes were forgotten for almost 30 years until they were reinvented by MacKay in 1996 [6]. Compared to turbo codes, LDPC codes have higher error-correcting performance and lower complexity; hence they were adopted in the new Digital Video Broadcast-Satellite-Second Generation (DVB-S2), and in 2003 it was the first time LDPC codes were included in a broadcast standard [1].
2.1 The block code
LDPC codes are error-correcting codes that come in two families: block codes and convolutional codes. Because LDPC convolutional codes are uneconomical to implement in hardware, with large area and high power consumption, LDPC block codes are mostly used in practical hardware implementations. In the rest of this thesis, we only work with LDPC block codes.
A block code is an important error-correcting code which helps us transmit messages over communication channels. The data stream is divided into pieces, called messages, of length K; each message is then encoded into a codeword of length N. We use (N, K) to denote the block code with codeword length N and message length K. The code rate is R = K/N. Intuitively, one codeword of length N carries one message of length K.
The encoding process can be described as the multiplication of the message m by a special matrix, called the generator matrix:

\[ C = m \cdot G \tag{2.1} \]

Here the generator matrix G is a set of rows \(\{g_0, g_1, \ldots, g_{k-1}\}\), each of dimension n. We also denote the message \(m = \{m_0, m_1, \ldots, m_{k-1}\}\) and the codeword \(C = \{c_0, c_1, \ldots, c_{n-1}\}\).

\[ G = \begin{pmatrix} g_{0,0} & g_{0,1} & \cdots & g_{0,n-1} \\ g_{1,0} & g_{1,1} & \cdots & g_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ g_{k-1,0} & g_{k-1,1} & \cdots & g_{k-1,n-1} \end{pmatrix} \tag{2.2} \]
Corresponding to the generator matrix G, there exists an (n − k) × n matrix H such that G · H^T = 0; we call it the parity-check matrix. Every codeword must satisfy C · H^T = 0. If the generator matrix is put into the systematic form G = [I, P], the parity-check matrix can be organized as H = [P^T, I].

\[ H = \begin{pmatrix} h_{0,0} & h_{0,1} & \cdots & h_{0,n-1} \\ h_{1,0} & h_{1,1} & \cdots & h_{1,n-1} \\ \vdots & \vdots & \ddots & \vdots \\ h_{n-k-1,0} & h_{n-k-1,1} & \cdots & h_{n-k-1,n-1} \end{pmatrix} \tag{2.3} \]

\[ C \cdot H^T = 0 \tag{2.4} \]

Hence, it is easy to find the relation between G and H:

\[ G \cdot H^T = 0 \tag{2.5} \]
Here, we use an example to get a quick understanding of the relation between them.

Example 2.1
Consider the (7,3) generator matrix

\[ G = \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 & 1 \end{pmatrix} \]

If the message vector is m = [1, 1, 0], we can calculate the corresponding codeword:

\[ C = m \cdot G = \begin{pmatrix} 1 & 1 & 0 \end{pmatrix} \cdot \begin{pmatrix} 1 & 0 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 1 & 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 1 & 0 & 1 & 0 & 1 & 0 \end{pmatrix} \]

We need to find an (n − k) × n matrix H that meets the requirement C · H^T = 0. The parity-check matrix supplied below also meets the requirement G · H^T = 0:

\[ H = \begin{pmatrix} 1 & 0 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 \end{pmatrix} \]
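Example 2.1 can be checked numerically. The following NumPy sketch (illustrative only, not part of the DVB-S2 implementation) encodes the message over GF(2) and verifies equations 2.4 and 2.5:

```python
import numpy as np

# Generator and parity-check matrices from Example 2.1 (a (7,3) block code).
G = np.array([[1, 0, 0, 1, 1, 0, 1],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 0, 1, 1]])
H = np.array([[1, 0, 1, 1, 0, 0, 0],
              [1, 1, 0, 0, 1, 0, 0],
              [0, 1, 1, 0, 0, 1, 0],
              [1, 1, 1, 0, 0, 0, 1]])

m = np.array([1, 1, 0])          # message vector
C = m @ G % 2                    # codeword: C = m * G (mod 2)
print(C)                         # [1 1 0 1 0 1 0]

# Every codeword satisfies C * H^T = 0, and G * H^T = 0 row by row.
assert np.all(C @ H.T % 2 == 0)
assert np.all(G @ H.T % 2 == 0)
```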
2.2 Representations of LDPC codes
Normally, there exist matrix and graphical representations for LDPC codes. Before Tanner introduced his effective graphical representation, the matrix representation was used to describe LDPC codes. LDPC codes are called low-density because their parity-check matrix contains mostly zeros and only a small number of ones. Suppose we have an (N, K) LDPC code with parity-check matrix H. H has block length N and dimension M = N − K, which means the parity-check matrix has N columns and M rows. We denote that each row has Dc ones and each column has Dv ones, which correspond to the degrees in the Tanner graph.
A Tanner graph is a bipartite graph which consists of variable nodes and check nodes. An (N, K) LDPC code has N variable nodes and N − K check nodes; we can simply consider the variable nodes to be the bits of the codeword. The edges between variable nodes and check nodes correspond to the ones in the matrix H, meaning that the respective variable node and check node are associated. As the name suggests, check nodes help us check the correctness of the codeword. We can easily turn the H below into the Tanner graph of figure 2.1.

\[ H = \begin{pmatrix} 0 & 1 & 1 & 0 & 1 & 0 & 0 & 1 \\ 1 & 1 & 0 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\ 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \end{pmatrix} \]

Figure 2.1. Tanner graph
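The correspondence between H and the Tanner graph can be made concrete with a small Python sketch (illustrative only): the sets N_j and M_i defined later in section 2.3 are simply the supports of the rows and columns of H.

```python
# Build the Tanner-graph adjacency of the H above: N[j] lists the variable
# nodes connected to check node f_j, M[i] the check nodes connected to c_i.
H = [[0, 1, 1, 0, 1, 0, 0, 1],
     [1, 1, 0, 1, 0, 1, 0, 0],
     [0, 0, 0, 1, 1, 0, 1, 1],
     [1, 0, 1, 0, 0, 1, 1, 0]]

N = [[i for i, h in enumerate(row) if h] for row in H]               # per check node
M = [[j for j, row in enumerate(H) if row[i]] for i in range(len(H[0]))]  # per variable node

print(N[0])   # f0 is connected to c1, c2, c4, c7 -> [1, 2, 4, 7]
print(M[0])   # c0 is connected to f1, f3         -> [1, 3]

# This code is regular: every check node has degree Dc = 4 and
# every variable node has degree Dv = 2.
assert all(len(n) == 4 for n in N) and all(len(m) == 2 for m in M)
```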
Regular and Irregular LDPC codes
From figure 2.1 we can easily see that the degree of the variable nodes, which are associated with the bits of the codeword, is Dv, and the degree of the check nodes is Dc. If every Dc is constant, and likewise every Dv, we call the LDPC code regular; otherwise, it is an irregular LDPC code.
If a variable node has a high degree, it can receive more information from its associated check nodes: the more information a variable node gets, the more reliably it is decoded. Each check node, in turn, can propagate more valuable information to associated variable nodes of low degree. In irregular LDPC codes, the variable nodes with higher degree obtain their correct values more quickly, so their associated check nodes have a greater chance of getting correct information. The variable nodes with low degree can then receive more valuable information from the associated high-degree check nodes. Hence, the performance of irregular LDPC codes is better than that of regular LDPC codes.
Figure 2.2. Presentation for q_ij and r_ji
2.3 Hard-decision and Soft-decision decoding
The decoding algorithms for LDPC codes can be roughly divided into hard-decision and soft-decision algorithms. A hard-decision algorithm decodes the data on a fixed set of values; because we only consider binary symmetric channels, the fixed values are 0 or 1. Compared with the simple implementation of hard-decision algorithms, soft-decision algorithms have better performance.
Before introducing the algorithms, let us fix some notation.

- f_j: the j-th check node.
- c_i: the i-th variable node.
- q_ij: the message from variable node c_i to check node f_j.
- r_ji: the message from check node f_j to variable node c_i.
- N_j = {i : H_ji = 1}: the set of variable nodes participating in the j-th parity-check equation.
- N_j\i = {i' : H_ji' = 1} \ {i}: the same set, excluding the i-th variable node.
- M_i = {j : H_ji = 1}: the set of check nodes connected to the i-th variable node.
- M_i\j = {j' : H_j'i = 1} \ {j}: the same set, excluding the j-th check node.
- p_i = Pr(c_i = 1 | y_i)
2.3.1 Hard-decision decoding
Here we introduce a hard-decision algorithm called the bit-flipping algorithm. The parity-check matrix is a sparse parity-check matrix built randomly; because of its sparseness, a received word is either correct or is very likely to be detected as erroneous. The algorithm will be introduced using the data of example 2.1 and figure 2.1. The modulation maps the codeword C = {c_0, c_1, ..., c_{n−1}} into a sequence X = {x_0, x_1, ..., x_{n−1}}. Y = {y_0, y_1, ..., y_{n−1}} denotes the vector received via the transmit channel, and Z = {z_0, z_1, ..., z_{n−1}} is the hard-decision vector obtained from

\[ z_i = \begin{cases} 0, & y_i \le 0 \\ 1, & y_i > 0 \end{cases} \tag{2.6} \]
The parity-check matrix H is as in equation 2.3, and we denote the syndrome s = z · H^T, whose components are

\[ s_j = \sum_{i=0}^{n-1} z_i \, h_{j,i} \pmod{2}, \qquad j = 0, 1, \ldots, n-k-1 \tag{2.7} \]

If the s_j in the set S are all zeros, the received codeword is correct. If there are non-zeros in S, the received vector Z contains some mistakes. Let us define

\[ f = \{f_0, f_1, \ldots, f_{n-1}\} = s \cdot H, \qquad f_i = \sum_{j=0}^{n-k-1} s_j \, h_{j,i} \tag{2.8} \]
Then find the largest element f_i in the set {f_0, f_1, ..., f_{n−1}} and flip the corresponding bit z_i. The bit-flipping algorithm proceeds in the following steps.

Step 1 Use equation 2.7 to calculate the syndrome s from the received vector z. If the elements of s are all zeros, the decoding terminates with the correct vector; otherwise, go to the next step.

Step 2 Calculate the set {f_0, f_1, ..., f_{n−1}} and find the largest f_i. Then flip the corresponding z_i to its opposite value (0 or 1), obtaining a new vector z'.

Step 3 Calculate the syndrome s = z' · H^T with the new vector z'. If the elements of s are all zeros, or the number of iterations reaches the maximum, the decoding terminates with the current vector; otherwise, go back to step 2.
Example 2.2
Assume we receive the vector z = {1, 1, 0, 0, 1, 1, 0, 1}, and we use the same parity-check matrix H that builds figure 2.1. Then we get

s = z · H^T = {1, 1, 0, 0}

Going to step 2,

f = {1, 2, 1, 1, 1, 1, 0, 1}

Obviously, f_1 is the largest element in the set f. We therefore change z_1 = 1 to z_1 = 0 and go back to re-calculate the vector s. Since now s = {0, 0, 0, 0}, z = {1, 0, 0, 0, 1, 1, 0, 1} is the correct vector after decoding.
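The three steps above can be sketched in a few lines of NumPy (an illustrative implementation, not the one used in the thesis simulations), reproducing Example 2.2:

```python
import numpy as np

# Parity-check matrix of figure 2.1.
H = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
              [1, 1, 0, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 1, 0, 1, 1],
              [1, 0, 1, 0, 0, 1, 1, 0]])

def bit_flip_decode(z, H, max_iter=10):
    z = z.copy()
    for _ in range(max_iter):
        s = z @ H.T % 2            # syndrome, equation 2.7 (step 1)
        if not s.any():
            return z               # all parity checks satisfied
        f = s @ H                  # failed-check count per bit, eq. 2.8 (step 2)
        z[np.argmax(f)] ^= 1       # flip the most suspicious bit (step 3)
    return z

z = np.array([1, 1, 0, 0, 1, 1, 0, 1])
print(bit_flip_decode(z, H))       # [1 0 0 0 1 1 0 1], as in Example 2.2
```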
Message passing algorithm for hard-decision decoding
To get a better overview of soft-decision decoding, another hard-decision decoding algorithm will be introduced here. Unlike the previous method, which only uses the parity-check matrix H, this method decodes based on the Tanner graph. As figure 2.2 shows, the check node f_j receives messages from the connected variable nodes. Assume the check node f_j has four connected variable nodes and receives four messages from them. If c_i, one of the connected variable nodes, wants the response message from check node f_j, then in order to satisfy the parity-check equation, f_j returns the message calculated from the messages of the other three variable nodes to c_i. For example, f_0 is connected to c_1, c_2, c_4, c_7 in the figure. If the messages from c_1, c_2, c_4 are 0, 1, 0, the check node f_0 returns 1 to c_7 to satisfy the (even-parity) equation. Every variable node then collects the messages from its connected check nodes and uses a majority vote to make its decision.
Let’s use the data from example 2.2 and figure 2.3 to illustrate this method.
Step 1 All variable nodes c_i send their messages to the connected check nodes f_j, and each f_j receives four variable-node messages, like f_0 in figure 2.3.

Figure 2.3. Illustration for example 2.1
Step 2 Calculate the response messages from the check nodes f_j to the connected variable nodes c_i. We assume the other messages from the connected variable nodes are correct; to satisfy the parity-check equation we can then derive the response message. Take f_0 as an example: the messages from c_1, c_2, c_4, c_7 are {1, 0, 1, 1}. We calculate the response message to c_1 from the other three messages, {0, 1, 1}; to satisfy the equation, the response message to c_1 must be 0.

In this step, every variable node receives two response messages from its connected check nodes. If those two messages both equal its original value, this variable node has the correct value, as presented in figure 2.3.
Step 3 This step decides the value of each variable node by majority vote. The votes consist of the two messages from the connected check nodes and the node's original value. Looking at figure 2.3, the messages for c_0 are {0, 1, 1}, so the decision for c_0 is 1. After every variable node decides its new value, we find that only c_1 has changed, and we get the new vector {1, 0, 0, 0, 1, 1, 0, 1}.
Step 4 Go back to step 2; when every variable node receives messages equal to its own value from the connected check nodes, the vector is decoded correctly. In this way we correct the vector {1, 1, 0, 0, 1, 1, 0, 1} to {1, 0, 0, 0, 1, 1, 0, 1}, which is the same result as in example 2.2.
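The majority-vote procedure above can also be sketched directly on the Tanner graph (an illustrative sketch, not the thesis implementation; termination here simply checks the syndrome):

```python
import numpy as np

# Parity-check matrix of figure 2.1.
H = np.array([[0, 1, 1, 0, 1, 0, 0, 1],
              [1, 1, 0, 1, 0, 1, 0, 0],
              [0, 0, 0, 1, 1, 0, 1, 1],
              [1, 0, 1, 0, 0, 1, 1, 0]])

def majority_vote_decode(z, H, max_iter=10):
    z = z.copy()
    for _ in range(max_iter):
        if not (z @ H.T % 2).any():        # all parity checks satisfied
            return z
        # Reply from check node f_j to variable node c_i: the value that
        # makes the parity over the OTHER connected bits even (step 2).
        reply = {}
        for j in range(H.shape[0]):
            conn = np.flatnonzero(H[j])
            for i in conn:
                reply[(j, i)] = z[conn[conn != i]].sum() % 2
        # Majority vote over the check replies and the node's own value (step 3).
        new_z = z.copy()
        for i in range(H.shape[1]):
            votes = [reply[(j, i)] for j in np.flatnonzero(H[:, i])] + [z[i]]
            new_z[i] = int(2 * sum(votes) > len(votes))
        z = new_z
    return z

z = np.array([1, 1, 0, 0, 1, 1, 0, 1])
print(majority_vote_decode(z, H))   # [1 0 0 0 1 1 0 1]
```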
2.3.2 Soft-decision decoding
The method explained above gives a basic understanding of the message passing algorithm, which is also the basic principle of soft-decision decoding. The greatest difference between hard-decision decoding and soft-decision decoding is that the former propagates messages of 0 or 1, while the latter propagates the probabilities of 0 and 1. The soft-decision algorithm is introduced in three steps below.
Step 1 All variable nodes c_i send their messages q_ij to the connected check nodes f_j. The message from variable node c_i consists of q_ij(0) and q_ij(1), the probabilities of 0 and 1. The equations below reflect that no information is yet available between variable nodes and check nodes; this is the initialization:

q_ij(0) = 1 − p_i
q_ij(1) = p_i
Step 2 Calculate the response message r_ji from the check nodes f_j to the connected variable nodes c_i. According to Gallager's research [9], the probability that a binary sequence of length n contains an even number of 1s is

\[ \frac{1}{2}\left(1 + \prod_{i=1}^{n} (1 - 2p_i)\right) \tag{2.9} \]

The check nodes' response messages are therefore

\[ r_{ji}(0) = \frac{1}{2}\left(1 + \prod_{i' \in N_{j\backslash i}} \big(1 - 2q_{i'j}(1)\big)\right) \tag{2.10} \]
\[ r_{ji}(1) = 1 - r_{ji}(0) \]
Step 3 When the variable nodes receive the messages from the check nodes, the new q_ij is the prior times the product of the received r_ji. K_ij is a normalization parameter chosen so that q_ij(0) + q_ij(1) = 1. In order to reduce the effect of each check node on itself across iterations, M_i\j means that all check nodes except f_j participate in the multiplication.

\[ q_{ij}(0) = K_{ij} (1 - p_i) \prod_{j' \in M_{i\backslash j}} r_{j'i}(0) \]
\[ q_{ij}(1) = K_{ij} \, p_i \prod_{j' \in M_{i\backslash j}} r_{j'i}(1) \tag{2.11} \]
\[ q_{ij}(0) + q_{ij}(1) = 1 \]
After the variable node c_i updates its probabilities, the equations below show how to calculate the estimate of c_i. K_i is selected to ensure Q_i(0) + Q_i(1) = 1. If Q_i(0) > Q_i(1), the estimate for this variable node is 0; otherwise the estimate is 1.

\[ Q_i(0) = K_i (1 - p_i) \prod_{j \in M_i} r_{ji}(0) \]
\[ Q_i(1) = K_i \, p_i \prod_{j \in M_i} r_{ji}(1) \tag{2.12} \]
\[ Q_i(0) + Q_i(1) = 1 \]
If the current estimate of the codeword satisfies ẑ · H^T = 0, or the number of iterations reaches the limit, the decoding terminates. Otherwise, go back to step 2.
This soft-decision decoding algorithm is the sum-product algorithm, also called the belief-propagation algorithm. Compared to soft-decision decoding, hard-decision decoding has low complexity but worse performance, while the hardware implementation of soft-decision decoding is limited by its complexity. In practice, both soft-decision and hard-decision elements are often combined in one algorithm.
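The two update rules, equations 2.10 and 2.11, can be sketched directly in Python. This is a minimal illustration on a single node (the degrees and probabilities are made up for the example, not taken from DVB-S2):

```python
import numpy as np

def check_update(q1, exclude):
    """r_ji(0) from equation 2.10: the check node combines all incoming
    q_{i'j}(1) except the one from the replied-to variable node."""
    prod = np.prod([1.0 - 2.0 * q for k, q in enumerate(q1) if k != exclude])
    r0 = 0.5 * (1.0 + prod)
    return r0, 1.0 - r0

def variable_update(p_i, r0_list, r1_list):
    """q_ij from equation 2.11, normalised so that q_ij(0) + q_ij(1) = 1."""
    q0 = (1.0 - p_i) * np.prod(r0_list)
    q1 = p_i * np.prod(r1_list)
    k = 1.0 / (q0 + q1)           # the normalisation constant K_ij
    return k * q0, k * q1

# A degree-4 check node; three neighbours think "1" is unlikely (q(1) = 0.1):
r0, r1 = check_update([0.1, 0.1, 0.1, 0.3], exclude=3)
print(r0)   # 0.756: the check node now believes the excluded bit is 0
```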
2.4 Decoding algorithms for LDPC codes
The (N, K) LDPC codes are defined by a parity-check matrix H = [H_ji] with N columns and N − K rows. Each column, which can be regarded as one variable node, has Dv connected check nodes, and each row, which can be regarded as one check node, has Dc connected variable nodes. From this we denote the set of variable nodes N(j) = {i : H_ji = 1} and the set of check nodes M(i) = {j : H_ji = 1}. Similarly to the notation in the previous section, N_j\i means the set of variable nodes without the variable node i, and M_i\j means the set of check nodes without the check node j.

The message passing algorithm was briefly introduced above, and the algorithms introduced next are all based on it. The message passing algorithm for LDPC codes can be illustrated by the Tanner graph, with messages passed along the edges between variable nodes and check nodes. In this section, the belief-propagation decoding algorithm will be introduced first, and then several simplified BP decoding algorithms will be presented and compared.
2.4.1 BP Algorithm
The belief-propagation (BP) decoding algorithm, also called the sum-product decoding algorithm, was rediscovered by MacKay and Neal [6]. In the previous section, the soft-decision decoding algorithm propagates messages from variable nodes to check nodes and then propagates messages back. Each message consists of the probabilities of both 0 and 1, and the computation at the check nodes is a product, which causes high complexity. Here we introduce the log-likelihood ratio (LLR), which transfers the multiplications into additions in the log domain. In the rest of this thesis, all quantities are expressed in the log domain.
\[ L(c_i) = \log\frac{1 - p_i}{p_i}, \qquad u_{ij} = \log\frac{q_{ij}(0)}{q_{ij}(1)}, \qquad v_{ji} = \log\frac{r_{ji}(0)}{r_{ji}(1)} \tag{2.13} \]
The codeword is transmitted, after BPSK modulation, over a channel corrupted by additive white Gaussian noise (AWGN). The probabilities q_ij(0) and q_ij(1) are then

\[ q_{ij}(0) = 1 - p_i = \Pr(c_i = 0 \mid y_i) = \frac{1}{1 + e^{-2y_i/\sigma^2}} \tag{2.14} \]
\[ q_{ij}(1) = p_i = \Pr(c_i = 1 \mid y_i) = \frac{1}{1 + e^{2y_i/\sigma^2}} \tag{2.15} \]
Initialization

\[ u_{ij} = \log\frac{q_{ij}(0)}{q_{ij}(1)} = \log\frac{1 + e^{2y_i/\sigma^2}}{1 + e^{-2y_i/\sigma^2}} = \log\left(e^{2y_i/\sigma^2}\right) = 2y_i/\sigma^2 \tag{2.16} \]
\[ v_{ji} = 0 \]
The BP decoding algorithm is introduced in the following steps, where LLR_int denotes the received (intrinsic) LLR message.
Step 1 Message computation from check node to variable node

The detailed computation of the message v_ji sent from check node f_j to variable node c_i is

\[ v_{ji} = \log\frac{r_{ji}(0)}{r_{ji}(1)} = \log\frac{1 + \prod_{i' \in N_{j\backslash i}} \big(1 - 2q_{i'j}(1)\big)}{1 - \prod_{i' \in N_{j\backslash i}} \big(1 - 2q_{i'j}(1)\big)} = \log\frac{1 + \prod_{i' \in N_{j\backslash i}} \tanh(u_{i'j}/2)}{1 - \prod_{i' \in N_{j\backslash i}} \tanh(u_{i'j}/2)} \tag{2.17} \]

where we used \( 1 - \frac{2}{1 + e^{u}} = \frac{e^{u} - 1}{e^{u} + 1} = \tanh(u/2) \).

Step 2 Message computation from variable node to check node

For each variable node, use the equation below to update the message u_ij:

\[ u_{ij} = LLR_{int}(i) + \sum_{j' \in M_{i\backslash j}} v_{j'i} \tag{2.20} \]
Step 3 Hard decision

To get a new LLR for the hard decision, use the equations below to update the message u_i:

\[ u_i = LLR_{int}(i) + \sum_{j \in M_i} v_{ji} \tag{2.21} \]
\[ z_i = \begin{cases} 1, & u_i \le 0 \\ 0, & u_i > 0 \end{cases} \tag{2.22} \]
If z · H^T = 0, z can be considered a correct codeword and the iterations stop. Otherwise, go back to step 1 until the iteration count exceeds the maximum number, in which case the codeword ẑ cannot be considered correct.
In each iteration, every check node receives Dc messages from its associated variable nodes, calculates the corresponding messages v_ji, and sends them back to the associated variable nodes. After receiving the messages from the check nodes, the variable nodes calculate a new LLR vector to replace the old one for the next iteration.
Figure 2.4. The function φ(x)
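A small numerical sketch (illustrative only, with made-up incoming LLRs, not the DVB-S2 decoder) of the φ-based check-node update in equation 2.19, which also confirms that φ is its own inverse:

```python
import numpy as np

def phi(x):
    # phi(x) = -log(tanh(x/2)) = log((e^x + 1)/(e^x - 1)), for x > 0
    return -np.log(np.tanh(x / 2.0))

def check_node_update(u, exclude):
    """v_ji (equation 2.19): product of the signs of all other incoming
    messages, times phi of the sum of phi(|u|) over those messages."""
    others = [m for k, m in enumerate(u) if k != exclude]
    sign = np.prod(np.sign(others))
    return sign * phi(sum(phi(abs(m)) for m in others))

# phi is its own inverse for x > 0: phi(phi(x)) = x
assert abs(phi(phi(1.3)) - 1.3) < 1e-9

u = [2.0, -1.5, 0.5, 3.0]          # made-up incoming LLRs u_{i'j}
v = check_node_update(u, exclude=3)
print(v)   # negative (one negative incoming sign), dominated by the weakest |u|
```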
2.4.2 λ-min Algorithm
The λ-min decoding algorithm can be considered a simplified BP decoding algorithm [3]; the key simplification is an approximation of equation 2.19. As the curve of φ(x) in figure 2.4 shows, smaller values on the x-axis give higher values on the y-axis, which means that the result of v_ji is dominated by the smallest absolute values of the u_ij. Suppose [I_1, I_2, ..., I_λ] are the λ smallest absolute values among the incoming messages; then equation 2.19 can be approximated as

\[ v_{ji} = \prod_{i' \in N_{j\backslash i}} \operatorname{sign}(u_{i'j}) \cdot \varphi\left(\sum_{i' \in N_{j\backslash i}} \varphi(|u_{i'j}|)\right) \approx \prod_{i' \in N_{j\backslash i}} \operatorname{sign}(u_{i'j}) \cdot \varphi\big(\varphi(|I_1|) + \varphi(|I_2|) + \cdots + \varphi(|I_\lambda|)\big) \]
For each iteration, 3 steps are processed as below.
Step 1 Message computation from check node to variable node

For each check node, using the λ smallest absolute values from the associated variable nodes, update v_ji by

\[ v_{ji} = \prod_{i' \in N_{j\backslash i}} \operatorname{sign}(u_{i'j}) \cdot \varphi\big(\varphi(|I_1|) + \varphi(|I_2|) + \cdots + \varphi(|I_\lambda|)\big) \tag{2.23} \]

Step 2 Message computation from variable node to check node

For each variable node, use the equation below to update the message u_ij:

\[ u_{ij} = LLR_{int}(i) + \sum_{j' \in M_{i\backslash j}} v_{j'i} \tag{2.24} \]
Step 3 Hard decision

To get a new LLR for the hard decision, use the equation below to update the message u_i:

\[ u_i = LLR_{int}(i) + \sum_{j \in M_i} v_{ji} \tag{2.25} \]
Create the hard-decision vector z: z_i = 1 when u_i ≤ 0 and z_i = 0 when u_i > 0.
If zH^T = 0, z can be considered a correct codeword and the iterations stop. Otherwise, go back to step 1 until the iteration count exceeds the maximum number, in which case the codeword ẑ cannot be considered a correct codeword.
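As a sketch of the check-node half of this algorithm, the update (2.23) can be modelled in software. The function names and test values below are illustrative, and φ is evaluated in floating point here rather than by the lookup tables a hardware decoder would use:

```python
import math

def phi(x):
    # phi(x) = ln((e^x + 1)/(e^x - 1)); decreasing and self-inverse for x > 0
    return math.log((math.exp(x) + 1.0) / (math.exp(x) - 1.0))

def lambda_min_check_update(u, lam=3):
    """Check-to-variable messages v_ji of one check node, equation (2.23).

    u holds the incoming messages u_i'j; for each output, only the lam
    smallest magnitudes among the *other* inputs enter the phi sum."""
    v = []
    for i in range(len(u)):
        others = [u[k] for k in range(len(u)) if k != i]
        sign = 1.0
        for x in others:
            sign = -sign if x < 0 else sign
        mags = sorted(abs(x) for x in others)[:lam]   # the lam smallest |u|
        v.append(sign * phi(sum(phi(m) for m in mags)))
    return v

v = lambda_min_check_update([2.0, -1.0, 0.5, 3.0], lam=2)
```

Because φ is decreasing, the output magnitude is dominated by the smallest |u| in the row, which is why discarding all but the λ smallest terms loses little accuracy.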
2.4.3 Min-sum Algorithm
As the name suggests, the min-sum algorithm can be considered as two steps, min and sum. The simplification is still the approximation of (2.18), but in this algorithm only the smallest and second-smallest absolute values are stored. For each iteration, 3 steps are processed as below.
Step 1 Message computation from check node to variable node
For each check node, using the smallest and second-smallest absolute values from the associated variable nodes, update v_ji by the equation below.

v_ji = Π_{i'∈N_j\i} sign(u_{i'j}) · φ( Σ_{i'∈N_j\i} φ(|u_{i'j}|) ) ≈ Π_{i'∈N_j\i} sign(u_{i'j}) · min_{i'∈N_j\i} |u_{i'j}|    (2.26)

Step 2 Message computation from variable node to check node
For each variable node, use the equation below to update the message u_ij.

u_ij = LLR_int(i) + Σ_{j'∈M_i\j} v_{j'i}    (2.27)
Step 3 Hard decision
In order to get a new LLR for the hard decision, use the equation below to update the message u_i.

u_i = LLR_int(i) + Σ_{j∈M_i} v_{ji}    (2.28)
Create the hard-decision vector z: z_i = 1 when u_i ≤ 0 and z_i = 0 when u_i > 0.
If zH^T = 0, z can be considered a correct codeword and the iterations stop. Otherwise, go back to step 1 until the iteration count exceeds the maximum number, in which case the codeword ẑ cannot be considered a correct codeword.
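The check-node update (2.26) reduces to sign bookkeeping plus the two smallest magnitudes; a minimal sketch (names and test values are illustrative):

```python
def min_sum_check_update(u):
    """Min-sum check-to-variable messages, equation (2.26): for each edge,
    the product of the other signs times the minimum of the other magnitudes.
    Only the smallest and second-smallest magnitudes are ever needed."""
    mags = [abs(x) for x in u]
    m1 = min(mags)
    pos = mags.index(m1)                       # where the smallest magnitude sits
    m2 = min(mags[:pos] + mags[pos + 1:])      # second-smallest magnitude
    total_sign = 1
    for x in u:
        total_sign = -total_sign if x < 0 else total_sign
    out = []
    for i, x in enumerate(u):
        s = total_sign * (-1 if x < 0 else 1)  # divide out the edge's own sign
        out.append(s * (m2 if i == pos else m1))
    return out

print(min_sum_check_update([2.0, -1.0, 0.5, 3.0]))   # [-0.5, 0.5, -1.0, -0.5]
```

The edge carrying the smallest magnitude receives the second-smallest one, every other edge receives the smallest; this is exactly why the algorithm only has to store two magnitudes per row.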
Analysis of LDPC codes in DVB-S2
Because of its outstanding communication performance, which can even outperform the Turbo code, the LDPC code was selected by the second-generation Digital Video Broadcasting standard for satellite applications (DVB-S2) as its channel coding scheme. It is necessary to analyze the structure and schedule of the LDPC codes in DVB-S2 [4]; then, in section 3.4, a reconstructed decoding table and an efficient algorithm will be introduced.
3.1 Regularity of the LDPC code in DVB-S2
The LDPC code in DVB-S2 has normal frames with a codeword length of 64800 bits and short frames with a codeword length of 16200 bits. The normal frames have 11 different code rates ranging from 1/4 to 9/10 and the short frames have 10 different code rates ranging from 1/4 to 8/9; the normal frames will be illustrated in the following part.
According to the definition in the DVB-S2 standard [1], the LDPC codes are encoded as irregular repeat-accumulate (IRA) codes, which have the advantage of low encoding complexity. To encode the information (i_0, i_1, · · · , i_{K−1}) of size K onto a codeword of size N, the LDPC encoder needs to create the (N − K) parity bits (p_0, p_1, · · · , p_{N−K−1}) from the K information bits.
Before the procedure, q is defined as a constant that varies with the rate. Table 3.1 for normal frames and table 3.2 for short frames give the values for N = 64800 (or 16200) in DVB-S2.

q = (N − K)/360 = N(1 − R)/360    (3.1)
Code Rate    q
1/4          135
1/3          120
2/5          108
1/2          90
3/5          72
2/3          60
3/4          45
4/5          36
5/6          30
8/9          20
9/10         18

Table 3.1. The q for normal frames
Code Rate    q
1/4          36
1/3          30
2/5          27
1/2          25
3/5          18
2/3          15
3/4          12
4/5          10
5/6          8
8/9          5

Table 3.2. The q for short frames
Step 1 p_0 = p_1 = p_2 = · · · = p_{N−K−1} = 0
Step 2 Accumulate i_0 at the parity-bit addresses that are specified in annex B of the DVB-S2 standard; here rate 1/2 is used as the example rate.

p_54 = p_54 ⊕ i_0          p_9318 = p_9318 ⊕ i_0
p_14392 = p_14392 ⊕ i_0    p_27561 = p_27561 ⊕ i_0
p_26909 = p_26909 ⊕ i_0    p_10219 = p_10219 ⊕ i_0
p_2534 = p_2534 ⊕ i_0      p_8597 = p_8597 ⊕ i_0

Step 3 With m = (0, 1, 2, · · · , 359), use the addresses
{x + (m mod 360) × q} mod (N − K)    (3.2)

where x is the table-row address, to accumulate the remaining 359 information bits of the first group. After the first 360 information bits are accumulated, the next 360 information bits are accumulated using the second row of the table for rate 1/2.
For example, for i_1 the addresses of the first row are each offset by q = 90:

p_144 = p_144 ⊕ i_1        p_9408 = p_9408 ⊕ i_1
p_14482 = p_14482 ⊕ i_1    p_27651 = p_27651 ⊕ i_1
p_26999 = p_26999 ⊕ i_1    p_10309 = p_10309 ⊕ i_1
p_2624 = p_2624 ⊕ i_1      p_8687 = p_8687 ⊕ i_1
Step 4 After all the information bits have been used, the final parity bits are obtained by the accumulation p_i = p_i ⊕ p_{i−1}, i = 1, 2, · · · , N − K − 1.
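The four steps can be sketched for rate 1/2, here restricted to the first table row only (a real encoder repeats this for every row of the annex-B table):

```python
# A sketch of encoding steps 1-4 for rate 1/2 normal frames, using only the
# first row of the annex-B table; a full encoder loops over all table rows
# and all K information bits in the same way.
N, K = 64800, 32400
q = (N - K) // 360                                  # = 90, equation (3.1)
row0 = [54, 9318, 14392, 27561, 26909, 10219, 2534, 8597]

def accumulate_first_group(info_bits):
    p = [0] * (N - K)                               # step 1: clear parity bits
    for m in range(360):                            # steps 2-3: i_0 ... i_359
        for x in row0:
            p[(x + (m % 360) * q) % (N - K)] ^= info_bits[m]   # equation (3.2)
    for i in range(1, N - K):                       # step 4: p_i ^= p_{i-1}
        p[i] ^= p[i - 1]
    return p

# With only i_0 = 1 set, step 4 turns the 8 toggled addresses into runs of ones
p = accumulate_first_group([1] + [0] * 359)
```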
The encoding procedure above comes from the DVB-S2 standard. Figure 3.1 presents the Tanner graph for the irregular LDPC code in DVB-S2, and it shows some regularities which can be used for hardware implementation.
The encoding table in the DVB-S2 standard can be regarded as another description of the parity-check matrix. Each row of the table gives the addresses of the check nodes corresponding to a group of 360 variable nodes, so the table has K/360 rows, and the degrees of the information bits can be read from it. In DVB-S2, the information bits have two degrees: D_j in the front part and D_3 in the remaining part of the information bits. Each rate has a different D_j but the same D_3 = 3. The degree D_c of the check nodes is uniform and can be calculated from the known D_j and D_3:

D_c = [ (D_j × Row + D_3 × (K/360 − Row)) × 360 + 2 × (N − K) ] / (N − K)    (3.3)

Here Row is the number of table rows whose bits have degree D_j.
For example, the table for code rate 1/2 provided by the DVB-S2 standard can be divided into a 36 × 8 matrix and a 54 × 3 matrix. The 36 × 8 matrix means that each bit of the first 36 × 360 information bits has degree D_j = 8, and the next 54 × 360 information bits have degree D_3 = 3. Because every check node except the first one has the same degree, the check node degree is easy to figure out from the known number of edges. Table 3.3 lists D_j, D_c and the number of information bits of each degree for each rate with N = 64800.
Rate   D_j   E_Dj    E_3     D_c   K
1/4    12    5400    10800   4     16200
1/3    12    7200    14400   5     21600
2/5    12    8640    17280   6     25920
1/2    8     12960   19440   7     32400
3/5    12    12960   25920   11    38880
2/3    13    4320    38880   10    43200
3/4    12    5400    43200   14    48600
4/5    11    6480    45360   18    51840
5/6    13    5400    48600   22    54000
8/9    4     7200    50400   27    57600
9/10   4     6480    51840   30    58320

Table 3.3. Degree distribution for each rate with N = 64800
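Treating the E columns as the number of information bits of each degree, equation (3.3) can be checked against a few rows of table 3.3:

```python
# D_c from the variable-side edge count: the information bits contribute
# E_Dj*Dj + E3*3 edges and the parity accumulator contributes 2(N - K)
# (the first check node has one edge less, which does not affect the average).
N = 64800
rows = {                        # rate: (Dj, E_Dj, E_3, Dc, K) from table 3.3
    "1/2": (8, 12960, 19440, 7, 32400),
    "2/3": (13, 4320, 38880, 10, 43200),
    "9/10": (4, 6480, 51840, 30, 58320),
}
for rate, (Dj, EDj, E3, Dc, K) in rows.items():
    assert EDj + E3 == K                       # every information bit counted
    edges = EDj * Dj + E3 * 3 + 2 * (N - K)
    assert edges == Dc * (N - K), rate         # equation (3.3)
print("table 3.3 consistent with equation (3.3)")
```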
(Information bits, permutation, check nodes and parity bits.)
Figure 3.1. Tanner graph for an irregular LDPC code
3.2 Schedule for iterative decoding
Normally, LDPC codes are decoded by the iterative standard message passing algorithm, also known as the Two-phase message passing algorithm. The drawback of this schedule is its slow convergence compared to the turbo code. In order to increase the speed of convergence, a 'layered decoding' schedule will be introduced.
3.2.1 Two-Phase Message passing schedule
As the name of the TPMP schedule suggests, it consists of two phases in one iteration. The messages from the variable nodes propagate to the check nodes for computation in phase 1; then the check nodes send messages back to the variable nodes, which update their values in phase 2. Here the function of (2.18), φ(x) = log((e^x + 1)/(e^x − 1)), is used for simplification of the procedure.

Initialization
All variable messages u_i are initialized with the input LLRs, and the extrinsic messages v_j for the check nodes are initialized to zero.
Phase 1
In phase 1 of every iteration, the extrinsic message of each check node is calculated from the messages of the associated variable nodes.

v_ji = Π_{i'∈N_j\i} sign(u_{i'j}) · φ( Σ_{i'∈N_j\i} φ(|u_{i'j}|) )
Phase 2
In phase 2, each variable node updates its new LLR with the extrinsic messages.

u_ij = LLR_int(i) + Σ_{j'∈M_i\j} v_{j'i}
3.2.2 Layered-Message passing schedule
The major problem of the Two-phase message passing algorithm is its slow convergence. It executes the computation in two steps: the message computation from variable nodes to check nodes and the message computation from check nodes to variable nodes. It thus has two types of computation, and the intermediate messages need to be saved in every round of each iteration.
The Layered-message passing algorithm, which updates the messages layer by layer, is widely utilized in LDPC decoders. Because of the regularity of the LDPC code in DVB-S2, the row Layered-message passing algorithm, also known as "Horizontal shuffled decoding", "Turbo Decoding Message Passing (TDMP)" or "Turbo-like decoding", will be introduced [2][8]. In Two-phase message passing, the computed messages do not participate in any further computation within the same decoding iteration. If the computed messages can be used to compute other messages in the same iteration, the convergence speeds up. Moreover, the intermediate messages are updated directly, and the check messages become the new variable messages within a sub-iteration. This reuse of the intermediate messages in every sub-iteration significantly reduces the memory.
By the regularity of the LDPC code in DVB-S2, at every rate the degrees of the check nodes are identical while the degrees of the variable nodes vary. So the row Layered-message passing approach is utilized for the LDPC code in DVB-S2; in the rest of this subsection this approach will be illustrated by examples.
Description for Row Layered-message passing
Compared to the Two-phase message passing algorithm with its two simple steps, the row Layered-message passing algorithm updates the variable messages with the check messages row by row. A 4 by 8 parity-check matrix H, shown in figure 3.2, will help us understand this algorithm in detail. Before introducing the algorithm, let us make some notations first.

λ^i = [λ^i_1, λ^i_2, · · · , λ^i_Dc] - extrinsic messages corresponding to the non-zeros in the i'th row.
I^i = [I_1, I_2, · · · , I_Dc] - the set of indexes of the non-zeros in the i'th row.
γ(I^i) = [γ_1, γ_2, · · · , γ_Dc] - the vector of variable messages at the positions I^i of the i'th row.
ρ = [ρ_1, ρ_2, · · · , ρ_Dc] = γ(I^i) − λ^i - the input messages: the variable messages minus the old extrinsic messages, to keep the reliability.
Λ^i = [Λ_1, Λ_2, · · · , Λ_Dc] - the output messages after the computation in the i'th row.
H = [ 0 1 1 0 1 0 0 1
      1 1 0 1 0 1 0 0
      0 0 0 1 1 0 1 1
      1 0 1 0 0 1 1 0 ]
In the parity-check matrix H, the extrinsic messages λ^i = [λ^i_1, λ^i_2, · · · , λ^i_Dc] of each row correspond to the non-zeros in that row. An iteration consists of multiple sub-iterations, one per row. Because the LLR messages corresponding to the zeros are not used in the computation, a set I^i = [I_1, I_2, · · · , I_Dc] is used to store the indexes of the non-zeros of the i'th row, and γ(I^i) = [γ_1, γ_2, · · · , γ_Dc] is the set of variable messages corresponding to the non-zeros of the i'th row of H.
Let us use the matrix above as an example: it is a parity-check matrix for a code of length 8, and each row has the same weight of 4. With the help of the index set I^2 = [1, 2, 4, 6] for row 2, we obtain the extrinsic vector λ^2, where λ^2_1 corresponds to the first variable message, λ^2_2 to the second variable message, and the remaining two components to the fourth and sixth variable messages. The vector γ holds the variable messages, initialized by the input LLR values. Every time after the computation for a row, the new extrinsic messages update the corresponding γ, and the new vector γ is used for the next row.
Here four steps are used to illustrate the decoding process for the i'th row of H.

Step 1 The extrinsic messages of the i'th row and the variable messages at I^i = [I_1, I_2, · · · , I_Dc] are ready.

Step 2 ρ = [ρ_1, ρ_2, · · · , ρ_Dc] = γ(I^i) − λ^i. The subtraction makes sure that the messages generated by this row in the earlier iteration are not used as its own inputs in the next iteration, which would corrupt the reliability.

Step 3 The new extrinsic messages are computed with ρ as inputs, using the chosen check-node decoding algorithm.
Figure 3.2. The four decoding steps on row 2 of H: step 1 reads the variable messages, step 2 subtracts the old extrinsic messages to form ρ, step 3 decodes ρ into the new extrinsic messages Λ^2, and step 4 stores the new extrinsic messages and updates the posterior messages.
Step 4 The new extrinsic message vector Λ^i replaces the old extrinsic messages, and the new extrinsic messages are added to the corresponding variable messages: γ_i = ρ_i + Λ_i.
The hard decision for the Layered-message passing algorithm differs from Two-phase message passing: in each sub-iteration γ is updated and then the hard decisions are made. If the hard decision yields a correct codeword, the decoding stops immediately. Obviously, making hard decisions in each sub-iteration helps this algorithm reach the correct vector faster.
Figure 3.2 illustrates these four steps, using the decoding process of row 2 as an example.
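The four steps can be sketched on the 4 × 8 example matrix; min-sum stands in here for the unspecified step-3 decoder, and the LLR values are illustrative:

```python
# One layered sub-iteration (steps 1-4) on the 4x8 example H; min-sum is
# used as the step-3 decoder for illustration. gamma holds the posterior
# messages, lam[i] the extrinsic messages of row i (zero before iteration 1).
H = [[0, 1, 1, 0, 1, 0, 0, 1],
     [1, 1, 0, 1, 0, 1, 0, 0],
     [0, 0, 0, 1, 1, 0, 1, 1],
     [1, 0, 1, 0, 0, 1, 1, 0]]

def min_sum(rho):
    out = []
    for k in range(len(rho)):
        others = rho[:k] + rho[k + 1:]
        s = 1
        for x in others:
            s = -s if x < 0 else s
        out.append(s * min(abs(x) for x in others))
    return out

def sub_iteration(i, gamma, lam):
    idx = [c for c in range(8) if H[i][c]]                    # step 1: I^i
    rho = [gamma[c] - lam[i][k] for k, c in enumerate(idx)]   # step 2
    Lam = min_sum(rho)                                        # step 3: decode
    for k, c in enumerate(idx):                               # step 4: update
        lam[i][k] = Lam[k]
        gamma[c] = rho[k] + Lam[k]

gamma = [1.2, -0.3, 0.8, -2.1, 0.4, 1.7, -0.9, 0.6]   # input LLRs
lam = [[0.0] * 4 for _ in range(4)]
sub_iteration(1, gamma, lam)                          # process row 2 (index 1)
print([round(g, 2) for g in gamma])   # [1.5, -1.5, 0.8, -2.4, 0.4, 2.0, -0.9, 0.6]
```

Only the four columns of row 2 change; the updated γ then feeds the sub-iteration of the next row, which is exactly where the faster convergence comes from.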
Compared to the Two-phase message passing algorithm, the Layered-message passing algorithm not only speeds up the convergence behaviour but also leads to significant memory savings. The Two-phase message passing algorithm needs to save (D_c · N · (1 − R) − 1) messages for the check nodes in phase 1, (E_Dj · D_j + E_3 · 3 + N · (1 − R) · 2 − 1) messages for the variable nodes in phase 2 and N posterior messages. The messages the check nodes receive are sent by the variable nodes, so both directions store the same number of messages, and the storage for the Two-phase message passing algorithm is
ST_TPMP = 2 · (D_c · N · (1 − R) − 1) + N    (3.4)
The Layered-message passing algorithm needs to save (D_c · N · (1 − R) − 1) extrinsic messages for the check nodes and N posterior messages.

ST_RLMP = (D_c · N · (1 − R) − 1) + N    (3.5)
Compared to the Two-phase message passing algorithm, the row Layered-message passing algorithm leads to a memory saving of

(D_c · N · (1 − R) − 1) / (2 · (D_c · N · (1 − R) − 1) + N) × 100%    (3.6)
For example, the LDPC code with rate 1/2 has degree 7 for each check node, and the memory saving is up to 43.75%. For rate 2/3 the memory saving is 43.48%.
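Equation (3.6) evaluated for the two quoted rates (D_c from table 3.3):

```python
# Memory saving of row layered decoding over TPMP, equation (3.6)
def saving(Dc, N, R):
    ext = Dc * N * (1 - R) - 1          # number of extrinsic messages
    return ext / (2 * ext + N)          # fraction of ST_TPMP saved by ST_RLMP

print(round(100 * saving(7, 64800, 1 / 2), 2))    # rate 1/2 -> 43.75
print(round(100 * saving(10, 64800, 2 / 3), 2))   # rate 2/3 -> 43.48
```

The saving approaches 50% as D_c · N · (1 − R) grows large relative to N, since the N posterior messages are the only storage the two schedules share.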
DVB-S2 LDPC decoding using reconstructed decoding table
With the help of formula (3.2) and the encoding table for each rate, the connectivity between variable nodes and check nodes is described clearly. The encoding table shows how to calculate the unknown index of a check node from the known index of a variable node, but the regularity of the LDPC codes in DVB-S2 shows that each check node has a uniform degree while the variable nodes have three different degrees [4]. So it is better to use a calculation method from the known check node to the unknown variable node.
4.1 Restructured parity-check matrix for DVB-S2
Each number in the encoding table represents the indexes of the check nodes associated with a group of 360 variable nodes. Let us use a codeword of length 64800 as an example. In the encoding table for rate 1/2, the first row consists of 8 numbers, which means the first 360 variable nodes have degree 8. Moreover, the first row gives all
54 9318 14392 27561 26909 10219 2534 8597
55 7263 4635 2530 28130 3033 23830 3651
56 24731 23583 26036 17299 5750 792 9169
. . .
52 29194 595
53 19267 20113

(Variable nodes u_0, u_1, · · · , u_359 connect to check nodes 54, 54+90, 54+2·90, · · · , 54+359·90; in general, variable node m connects to check node m·q+54.)
Figure 4.1. Tanner graph for the number ’54’
(Variable nodes 0 · · · 359 on one axis, check nodes 54, q+54, 2q+54, · · · , 359q+54 on the other; the ones form a diagonal.)
Figure 4.2. Parity-check matrix for the number '54'
the corresponding check nodes for variable nodes u_0 to u_359, and the second row shows the connectivity for variable nodes u_360 to u_719, as illustrated in figure 4.1.
For rate 1/2, the first number '54' defines an edge between variable node u_0 and check node v_54. The check node index for the next variable node u_1 can be calculated as

q = N · (1 − R)/360 = 90

(54 + (1 mod 360) × 90) mod 32400 = 144
(Variable node m connects to check node (m·q + 9318) mod 32400.)
Figure 4.3. Tanner graph for the number ’9318’
The 360 variable nodes of a group connect to their corresponding check nodes in parallel, which means at least 360 messages can be transmitted to 360 different check nodes at the same time without conflict. In addition, figure 4.1 shows u_0 corresponding to v_54, u_1 corresponding to v_144 and u_2 corresponding to v_234. When the index of the variable node increases by 1, the index of the corresponding check node increases by q = 90. As we know, the variable nodes and the check nodes can be regarded as the X-axis and Y-axis, respectively, of the parity-check matrix H, and figure 4.2 presents that kind of matrix for the first 360 variable nodes with the number '54'. Obviously, it is a 360 × 360 identity matrix when the unit of the Y-axis is q.
In order to find the regularity, the second number '9318' in the first row is illustrated by figure 4.3. Because the last 103 check node indexes fall out of the range (N − K), they are taken modulo (N − K) to keep the connectivity correct. Figure 4.4 presents the corresponding matrix for the number '9318': it is an identity matrix cyclically shifted by 103 positions.
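The wrap-around can be counted directly (constants for rate 1/2 normal frames):

```python
# Check-node indexes generated by table entry 9318 at rate 1/2 (q = 90):
# variable node m of the first group maps to (9318 + m*q) mod (N - K).
N_K, q, x = 32400, 90, 9318
checks = [(x + m * q) % N_K for m in range(360)]
wrapped = [m for m in range(360) if x + m * q >= N_K]
print(len(wrapped), wrapped[0], checks[257])   # 103 indexes wrap, from m = 257 on
```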
4.1.1 Description of restructured parity-check matrix
From these two examples, we find that each number in the encoding table can be regarded as an identity matrix with a certain number of left cyclic shifts. In the parity-check matrix H there are few ones and many zeros. Inspired by the matrices in the examples, the whole parity-check matrix is divided into sub-matrices of size 360 × 360. Here I_x denotes the identity matrix with x left cyclic shifts. Figure 4.5 presents the restructured parity-check matrix.
Figure 4.5 shows the restructured parity-check matrix and details three sub-matrices for easy understanding. Figures 4.1 and 4.4 show the regularity of the encoding table in DVB-S2: one number in the encoding table stands for a 360 × 360 identity matrix with a certain number of left cyclic shifts. With this method,
(Variable nodes 0 · · · 359 against check nodes 48, q+48, 2q+48, · · · , 359q+48; the one of the first row appears at variable node 257, where the shifted diagonal wraps.)
Figure 4.4. Parity-check matrix for the number ’9318’
the information part of the variable nodes is divided into q × (R · 180) sub-matrices, while the unit of the Y-axis is defined as q.
In the original parity-check matrix there is a fixed zigzag pattern in the parity part. Because the unit of the Y-axis is q, the parity part cannot use the same method as the information part; in order to obtain identity matrices, the unit of the X-axis is also defined as q, uniform with the unit of the Y-axis. The sub-matrices of the parity node groups are then all identity matrices, and figure 4.5 also illustrates three sub-matrices of the parity node groups.
In the restructured parity-check matrix, only one sub-matrix is neither an identity matrix nor an identity matrix with left cyclic shifts. It is illustrated in figure 4.6 and can be regarded as an identity matrix with the element of its first row removed.
4.2 Reconstructed LDPC decoding table
The restructured parity-check matrix can be regarded as a q × 180 matrix. As figure 4.5 shows, it can be divided into q block rows with 360 node elements per block, and this restructured parity-check matrix can use the row Layered-message passing algorithm.
Figure 4.5. The restructured parity-check matrix: the information node groups consist of 360 × 360 sub-matrices I_x (identity matrices with x left cyclic shifts) in 180 − q block columns, and the parity node groups consist of identity sub-matrices I_0 in the remaining q block columns.
(Variable nodes 0, q, 2q, · · · , 359q against check nodes q−1, 2q−1, · · · , 360q−1; an identity pattern whose first-row element is missing.)
Figure 4.6. A special sub-matrix in restructured parity-check matrix
Figure 4.7 illustrates a simple example that transfers the single-row Layered-message passing to block-row Layered-message passing.
The restructured parity-check matrix shows that each block row contains D_c sub-matrices, which are identity matrices with left cyclic shifts. The encoding table presents the connectivity from known variable nodes to the corresponding unknown check nodes, while the row Layered-message passing algorithm wants to transmit messages from known check nodes to unknown variable nodes.
Here an LDPC decoding table for rate 1/2 is presented in Appendix A. The decoding table consists of two parts: the index of the sub-matrix within the block row, denoted g_b, and the beginning position g_p.

Figure 4.7. Transfer of single-row Layered-message passing to a block row of shifted identity sub-matrices.

Let us use figure 4.4 as an example: it is the 48th row of the decoding table, with g_b = 1 and g_p = 257, which means the first check node v_48 is associated with the variable node u_257. Because the unit of the column axis varies with q, we denote by g_s the index of the row in the decoding table; the check node with index

(m · q + g_s)    (4.1)

is associated with the variable node with index

((g_b − 1) · 360 + (g_p + m) mod 360)    (4.2)
The decoding table covers only the information bits; the parity bits can be calculated by

v_(m·q+g_s) → u_(m·q+g_s−1)    (4.3)
v_(m·q+g_s) → u_(m·q+g_s)    (4.4)
In order to check the correctness of this formula, an example is illustrated.

Example 4.1
In the decoding table for rate 1/2, the 48th row has g_b = 1, g_p = 257 and g_s = 48. When m = 0, the check node v_48 corresponds to

u_((g_b−1)·360 + (g_p mod 360)) = u_257

This is exactly the result that figure 4.4 shows.
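The mapping (4.2) can also be cross-checked against the encoding rule (3.2) in a few lines; the values g_b, g_p, g_s are the ones of example 4.1:

```python
# Decoding-table row for figure 4.4: g_b = 1, g_p = 257, g_s = 48 (rate 1/2)
N_K, q = 32400, 90
g_b, g_p, g_s = 1, 257, 48

def variable_of_check(m):
    """Equation (4.2): variable node attached to check node m*q + g_s."""
    return (g_b - 1) * 360 + (g_p + m) % 360

assert variable_of_check(0) == 257            # example 4.1: v_48 <-> u_257

# Consistency with the encoding side: applying rule (3.2) to table entry
# 9318 for variable node u_257 lands on check node 48 = 0*q + g_s.
assert (9318 + (257 % 360) * q) % N_K == 0 * q + g_s
print("decoding table consistent with the encoding rule")
```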
Each check node is associated with (D_c − 2) information nodes and 2 parity nodes. Here two steps illustrate the procedure, using v_1 as an example.
Step 1 The corresponding information nodes are calculated from the decoding table. For the check node v_1, the corresponding information bits are

u_((11−1)·360 + 310 mod 360) = u_3910
u_((11−1)·360 + 299 mod 360) = u_3899
u_((21−1)·360 + 74 mod 360) = u_7274
u_((38−1)·360 + 0 mod 360) = u_13320
u_((42−1)·360 + 206 mod 360) = u_14966
(Forward recursion α, backward recursion β and outputs Λ, all computed with q(x, y).)
Figure 4.8. Computation of extrinsic message with q(x, y)
Step 2 Besides the information bits, there are also 2 parity bits corresponding to the check node v_1:

v_(0·q+1) → u_(0·q+1−1) = u_0
v_(0·q+1) → u_(0·q+1) = u_1
4.3 SISO algorithm for restructured parity-check matrix
In the previous section, the original parity-check matrix was restructured into a matrix consisting of q × 180 sub-matrices, each of which is an identity matrix or an identity matrix with certain left cyclic shifts. In section 3.2, the procedure for a decoding sub-iteration was introduced without detailing the decoding algorithms for the extrinsic messages.
In the BP algorithm, because of the wide dynamic range of the function φ(x) and its inverse φ^(−1)(x), large data-paths and function units are needed to ensure the accuracy of the extrinsic messages. The SISO algorithm introduced by M. Mansour [7] uses an alternative function q(x, y) to compute the messages; it can be approximated by the SISO-s algorithm and implemented with simple logic gates.
Here a new function q(x, y) replaces the old function φ(x) to reconstruct formula (2.19). The bivariate function is defined as

q(x, y) = log_e[ (e^(x+y) + 1) / (e^x + e^y) ]    (4.5)

The function q(x, y) has the properties below:

1. q(x, y) = q(y, x);
2. q(−x, −y) = q(x, y);
3. q(x, 0) = q(0, y) = q(0, 0) = 0;
4. q(x, −y) = q(−x, y) = −q(x, y);
5. q(|x|, |y|) ≥ 0;
6. q(x, y) = q(sign(x) · |x|, sign(y) · |y|) = sign(x · y) · q(|x|, |y|).
These properties give the relation between q(x, y) and φ(x) shown below:

7. sign(x · y) · φ^(−1)(φ(|x|) + φ(|y|)) = q(x, y);
8. sign(x · y · z) · φ^(−1)(φ(|x|) + φ(|y|) + φ(|z|)) = q(q(x, y), z);
9. sign(Π_{i=1}^{Dc} x_i) · φ^(−1)( Σ_{i=1}^{Dc} φ(|x_i|) ) = q(· · · (q(q(x_1, x_2), x_3), · · · ), x_Dc).
The following notation is defined for convenience.

q(x_1, x_2, · · · , x_Dc) = q(· · · (q(q(x_1, x_2), x_3), · · · ), x_Dc)    (4.6)
According to the formula above, the decoding formula for the extrinsic messages, which gives the same result as formula (2.19), is

Λ_j = q_[j](λ_1, · · · , λ_j, · · · , λ_Dc),  j = 1, · · · , D_c    (4.7)

where q_[j] denotes the function q applied to all inputs except λ_j.
With the help of formula (4.6), figure 4.8 shows an effective computing architecture for the extrinsic messages. If the computation starts from λ_1,

q(λ_1, λ_2) = α_2
q(q(λ_1, λ_2), λ_3) = q(α_2, λ_3) = α_3
q(q(q(λ_1, λ_2), λ_3), λ_4) = q(α_3, λ_4) = α_4
· · ·
q(α_{j−2}, λ_{j−1}) = α_{j−1}    (4.8)
If the computation starts from λ_Dc,

q(λ_{Dc−1}, λ_Dc) = β_{Dc−1}
q(λ_{Dc−2}, q(λ_{Dc−1}, λ_Dc)) = q(λ_{Dc−2}, β_{Dc−1}) = β_{Dc−2}
q(λ_{Dc−3}, q(λ_{Dc−2}, q(λ_{Dc−1}, λ_Dc))) = q(λ_{Dc−3}, β_{Dc−2}) = β_{Dc−3}

At position j + 1,

q(λ_{j+1}, β_{j+2}) = β_{j+1}    (4.9)
The next step is the computation of the extrinsic message Λ_j; with the help of formulas (4.8) and (4.9), Λ_j can be computed as

Λ_j = q(α_{j−1}, β_{j+1})    (4.10)
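The α/β recursions (4.8)-(4.10) can be sketched as follows; q(x, y) is written in a numerically stable log1p form equivalent to (4.5), and the function names are illustrative:

```python
import math

def q(x, y):
    # q(x, y) = log((e^{x+y} + 1)/(e^x + e^y)), equation (4.5), stable form
    return (max(x + y, 0.0) + math.log1p(math.exp(-abs(x + y)))
            - max(x, y) - math.log1p(math.exp(-abs(x - y))))

def extrinsic(lams):
    """Lambda_j via the alpha/beta recursions (4.8)-(4.10); 0-based indexing,
    so lams[j] plays the role of lambda_{j+1} in the text. Needs Dc >= 3."""
    n = len(lams)
    alpha = [lams[0]]                            # alpha_1 = lambda_1
    for j in range(1, n - 1):
        alpha.append(q(alpha[-1], lams[j]))      # forward recursion (4.8)
    beta = [0.0] * n
    beta[n - 1] = lams[n - 1]                    # beta_Dc = lambda_Dc
    for j in range(n - 2, 0, -1):
        beta[j] = q(lams[j], beta[j + 1])        # backward recursion (4.9)
    out = [beta[1]]                              # Lambda_1: all but lambda_1
    for j in range(1, n - 1):
        out.append(q(alpha[j - 1], beta[j + 1])) # Lambda_j, equation (4.10)
    out.append(alpha[n - 2])                     # Lambda_Dc: all but lambda_Dc
    return out

print([round(v, 3) for v in extrinsic([1.0, -2.0, 0.5, 1.5])])
```

Each output combines every input except its own, using one forward and one backward pass, so the whole row costs O(D_c) evaluations of q instead of O(D_c^2).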
4.3.1 SISO-simplified (SISO-s) algorithm
The function q(x, y) can be approximated, using ln(1 + e^z) ≈ max(z, 0) + max(5/8 − |z|/4, 0) in both logarithms of (4.5), as

q(x, y) = log(e^(x+y) + 1) − log(e^x + e^y)
        ≈ max(x + y, 0) + max(5/8 − |x + y|/4, 0) − max(x, y) − max(5/8 − |x − y|/4, 0)    (4.11)
With the approximated equation (4.11), the SISO algorithm can be simplified to the SISO-s algorithm without using a lookup table. The performance comparison between the SISO algorithm and the SISO-s algorithm is presented in the simulation results.
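The gap between (4.11) and the exact q(x, y) can be measured numerically; the grid below is illustrative, and on it the worst-case error is about 0.14:

```python
import math

def q_exact(x, y):
    # equation (4.5) in numerically stable form
    return (max(x + y, 0.0) + math.log1p(math.exp(-abs(x + y)))
            - max(x, y) - math.log1p(math.exp(-abs(x - y))))

def q_s(x, y):
    # equation (4.11): ln(1 + e^z) ~ max(z, 0) + max(5/8 - |z|/4, 0)
    return (max(x + y, 0.0) + max(0.625 - abs(x + y) / 4.0, 0.0)
            - max(x, y) - max(0.625 - abs(x - y) / 4.0, 0.0))

grid = [k / 4.0 for k in range(-40, 41)]          # -10 ... 10 in steps of 0.25
worst = max(abs(q_exact(x, y) - q_s(x, y)) for x in grid for y in grid)
print(round(worst, 3))
```

The 5/8 and 1/4 constants are powers of two, so the correction term needs only a shift and a subtraction in hardware, which is the point of the SISO-s simplification.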
4.3.2 Verification of SISO algorithm
In the previous section, 9 properties were introduced and formula (4.7) replaced formula (2.19) for computing the extrinsic messages. Properties 7 to 9 will be illustrated in this section.