LDPC GSoC Project Status

Hi all,

I’d just like to share a status update on how my LDPC implementation project is
going. When the summer started, Manu and I made a list of the
LDPC-related algorithms for encoding, decoding, and parity check matrix
construction that we had found in our literature review. We then divided
them up such that there would be no overlap in our efforts. The
algorithms that I put on my list were:

  1. Regular and irregular parity check matrix construction functions
  2. Generating a code that is optimized for PSK
  3. Constructing quasi-cyclic codes, which are especially efficient for
    encoding
  4. Performing encoding as described by Richardson & Urbanke in Appendix
    A of ‘Modern Coding Theory’ (manipulating the parity-check matrix into
    an approximate upper triangular form, which reduces complexity during
    the real-time encoding steps; see the sketch after this list)
  5. Bit-flip decoding, a hard decision algorithm
  6. Min-sum algorithm for decoding
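
To give a feel for why the triangular form in #4 matters: once part of
H is triangular, the heavy lifting of encoding reduces to
back-substitution, which costs roughly one XOR per nonzero entry. Here
is a minimal numpy sketch of that one step (illustrative only, with a
made-up function name; shown for a lower triangular T with unit
diagonal, and the upper triangular case is the mirror image):

    import numpy as np

    def solve_triangular_gf2(T, b):
        # Solve T x = b over GF(2) by substitution, where T is lower
        # triangular with ones on the diagonal.
        n = T.shape[0]
        x = np.zeros(n, dtype=np.uint8)
        for i in range(n):
            # x[i] = b[i] XOR (already-solved bits selected by row i)
            x[i] = (b[i] + T[i, :i].dot(x[:i])) % 2
        return x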

So far, I’ve written prototype functions in Python (using numpy) for
three of these: #1 (the regular construction), #4, and #5.
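
For a concrete picture of #1, the regular construction looks roughly
like this (a simplified sketch of my prototype; the function name is
made up for this email):

    import numpy as np

    def gallager_regular_H(n, col_wt, row_wt, seed=0):
        # Regular (col_wt, row_wt) parity check matrix: the first band
        # has row_wt consecutive ones per row, and the remaining
        # col_wt - 1 bands are random column permutations of it.
        assert n % row_wt == 0
        rng = np.random.default_rng(seed)
        rows_per_band = n // row_wt
        base = np.zeros((rows_per_band, n), dtype=np.uint8)
        for i in range(rows_per_band):
            base[i, i * row_wt:(i + 1) * row_wt] = 1
        bands = [base]
        for _ in range(col_wt - 1):
            bands.append(base[:, rng.permutation(n)])
        return np.vstack(bands)

Each band contributes exactly one 1 to every column, so every column
ends up with weight col_wt and every row with weight row_wt.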

Right now, I’m at the stage where I’m trying to link them all together
and confirm that the processes perform as expected before moving on to
converting them to C++. The challenge I’m facing is that the parity
check matrices my construction function creates are not full rank, and
so they don’t work with the encoding algorithm. I have tried one method
of manipulating them into full rank before encoding, but it was not
successful. My top priority right now is to get the parity check
matrices I am creating to work with the encoding algorithm I’ve
written. Then I’d like to finish testing the chain of processes before
moving on to creating classes that inherit from those in the FEC API.
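
For anyone following along, the rank check itself is just Gaussian
elimination over GF(2); something like this sketch (a simplified
stand-in for my actual code):

    import numpy as np

    def gf2_rank(H):
        # Gaussian elimination mod 2: returns the rank of H over GF(2).
        A = H.astype(np.uint8).copy()
        rows, cols = A.shape
        rank = 0
        for c in range(cols):
            pivots = np.nonzero(A[rank:, c])[0]  # rows with a 1 in col c
            if pivots.size == 0:
                continue
            p = rank + pivots[0]
            A[[rank, p]] = A[[p, rank]]          # move pivot row up
            others = np.nonzero(A[:, c])[0]
            others = others[others != rank]
            A[others] ^= A[rank]                 # clear col c elsewhere
            rank += 1
            if rank == rows:
                break
        return rank

A full-rank H has gf2_rank(H) equal to its number of rows; my regular
matrices come out short of that.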

My GitHub repo is here:

Any questions or comments, just let me know.

Thanks,
Tracie Perez

Hello all once again,

I’d like to thank everyone who has given me helpful feedback since my
last status update. I really appreciate the thoughtful comments.

I solved the issue of getting my matrices to work with the encoder
algorithm. I’m constructing regular LDPC parity check matrices per a
simple algorithm described in Gallager’s original 1963 thesis on this
topic. However, by design, these matrices are never full rank, so I
then process them to make them so. The matrices are then run through a
pre-processing algorithm to set them up for efficient, low-complexity
encoding per a process described by Richardson and Urbanke. I call the
resulting matrix an “encoding-ready” parity check matrix, and it takes
days to generate each one (the bigger the matrix, the more days). I
would like to build up to matrices for n=5000 because I believe this
is the ballpark for codeword lengths used in industry. As my scripts
create these matrices, I am pushing them to a library of sorts on my
GitHub repo. The idea, of course, is that the time is invested in
creating them once, and then they can be reused as needed after that.
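
The full-rank step above boils down to discarding linearly dependent
rows. A simplified sketch of that idea (my actual preprocessing does
more than this, and the function name is just for illustration):

    import numpy as np

    def full_rank_rows_gf2(H):
        # Greedily keep rows of H that are linearly independent over
        # GF(2), so the returned matrix has full row rank.
        basis = {}                    # pivot column -> reduced row
        keep = []
        for i, row in enumerate(np.asarray(H, dtype=np.uint8)):
            v = row.copy()
            for p in sorted(basis):   # eliminate against known pivots
                if v[p]:
                    v ^= basis[p]
            if v.any():               # row adds new information
                basis[int(np.argmax(v))] = v
                keep.append(i)
        return np.asarray(H)[keep]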

Regular parity check matrices are already available on the web from
Dr. MacKay, but they are not full rank, so they won’t work with my
encoder. By the time I reduce them to full rank, the information rate
they offer is unacceptably low. That is why I have gone the route of
generating my own matrices that are compatible with Richardson and
Urbanke’s encoding algorithm.

Next, I ran into an issue with my decoder failing for large
codewords. That tied me up for a few weeks. After much troubleshooting,
I discovered that the issue stemmed from rounding errors in numpy’s
mod-2 operations. I won’t go into too much detail since I will be
porting this algorithm from Python to C++ anyway. After manually
correcting this rounding issue, my hard-decision bit-flip decoding
algorithm performed really well in the prototype Python tests that I
did. To simulate errors, I manually introduced bit flips and checked
that the decoder could detect and correct them. I have tested codewords
up to 2400 bits in length, and the algorithm can correct a few errors
in 1 or 2 iterations, which is much faster than I anticipated. I will
be testing larger codewords as I construct larger and larger parity
check matrices.
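
For concreteness, the core of the hard-decision loop looks roughly
like this (a simplified sketch, not my exact code; note that everything
stays in integer dtypes, which is the fix for the mod-2 rounding
problem I mentioned):

    import numpy as np

    def bit_flip_decode(H, r, max_iter=50):
        # Hard-decision bit-flipping on received hard bits r.
        H = np.asarray(H, dtype=int)  # integer dtype: exact mod-2 math
        c = np.asarray(r, dtype=int).copy()
        for _ in range(max_iter):
            syndrome = H @ c % 2      # which parity checks fail
            if not syndrome.any():
                return c, True        # all checks satisfied
            votes = syndrome @ H      # unsatisfied checks touching each bit
            c = (c + (votes == votes.max())) % 2  # flip worst offenders
        return c, False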

Now I am at the point where I am converting my Python code into C++.
After thinking about how best to do this, we (Jens, Nick, and I) agreed
that it made sense to fork Nick McCarthy’s fecapi repo. I have done so,
and I will create classes that inherit from and use the FEC API classes
he has already written. This will hopefully allow me to eventually use
the BER curve generator to do final tests of my implementation.

Because there are so many matrix operations, I am using the GNU
Scientific Library (GSL) in my C++ code. This seemed like a reasonable
approach since GSL is already a required dependency for gr-wavelet,
and, if I am not mistaken, a user would get GSL as part of the
standard install via pybombs. I do think that there will be many
opportunities to increase efficiency by using VOLK, but I don’t think
that will fall within the scope of this summer’s project. I will be
flagging potential VOLK code replacements with something like “FIXME -
consider replacing this with VOLK library functions.”

As always, I welcome any feedback that you have!

Thanks,
Tracie Perez