# A4L-A: Channel Coding

Session Type: Lecture

Session Code: A4L-A

Location: Room 1

Date & Time: Wednesday, March 22, 2023 (14:00 - 15:00)

Chair: Brint Cooper, Steven Jones

Track: 1

Paper ID | Paper Name | Authors | Abstract
---|---|---|---
3026 | Block Turbo Decoding with ORBGRAND | Kevin Galligan, Muriel Médard, Ken Duffy | Guessing Random Additive Noise Decoding (GRAND) is a family of universal decoding algorithms suitable for decoding any moderate-redundancy code of any length. We establish that, through the use of list decoding, soft-input variants of GRAND can replace the Chase algorithm as the component decoder in the turbo decoding of product codes. In addition to being able to decode arbitrary product codes, rather than just those with dedicated hard-input component code decoders, results show that ORBGRAND achieves a coding gain of up to 0.7 dB over the Chase algorithm with the same list size.
3111 | On the Determination of GRAND Noise Sequences by Employing Integer Compositions | Steven Jones, A. Brinton Cooper III | Guessing Random Additive Noise Decoding (GRAND) has been proposed as an efficient and universal decoding technique for linear block codes [1]. A necessary element of the GRAND decoder is a set of noise sequences of length n, equal to the code length. The decoded codeword is taken as the first valid codeword that results from the binary addition of a noise sequence, taken in order of likelihood, with the received word. For a binary code of length n there are 2^n such noise sequences; when n is large, it is typically intractable to test every noise sequence during decoding. Moreover, each sequence exhibits a number of error bursts (m) and a total number of bit flips (l). In [1] the authors provide expressions for the probability of each error-burst sequence under a two-state Markov channel model; the resulting decoder is called GRAND-Markov Order (GRAND-MO). The authors of [1] describe a GRAND-MO error-pattern generator for the two-state Markov channel based on sequential transitions between (m, l) pairs, incrementing m first and then treating each value of l. In this paper we address a method of constructing the n-length noise sequences based on integer compositions of the required l bit flips as well as the n-l non-errors in the sequence. This method allows one to construct noise sequences in order of probability without constructing less likely noise sequences. A GRAND decoder may employ a limited number of noise sequences and abandon decoding upon finding a valid codeword, or upon reaching an abandonment threshold without decoding success. In this way the most likely noise sequences can be prioritized in decoding.
3102 | Error Detection Strategies for CRC-Concatenated Polar Codes Under Successive Cancellation List Decoding | Alexander Sauter, Balazs Matuz, Gianluigi Liva | In this work we introduce a framework to study the trade-off between the undetected error rate (UER) and the overall frame error rate (FER) of CRC-concatenated polar codes in the short-blocklength regime. Three approaches to improving the trade-off under successive cancellation list (SCL) decoding are outlined. Two techniques are based on the optimum threshold test introduced by Forney in 1968, whereas a third technique partitions the CRC code parity bits into two sets, where one set is used to prune the SCL decoder list and the other is used for error detection. The performance of the three schemes is analyzed via Monte Carlo simulations and compared with a finite-length achievability bound based on Forney's random coding bound.
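The core GRAND loop described in the abstracts above — adding candidate noise sequences to the received word in order of likelihood and stopping at the first valid codeword — can be sketched as follows. This is a minimal hard-input illustration, not any of the specific variants (ORBGRAND, GRAND-MO) presented in this session: it assumes a binary symmetric channel with crossover probability below 0.5, so maximum-likelihood order is simply increasing Hamming weight, and it uses the (7,4) Hamming code's parity-check matrix purely as an example.

```python
from itertools import combinations
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative choice;
# GRAND is universal and works with any linear block code's H).
H = np.array([
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])

def grand_decode(received, H, max_weight=3):
    """Hard-input GRAND sketch: test noise sequences in order of
    Hamming weight (maximum-likelihood order for a BSC with p < 0.5)
    and return the first candidate satisfying all parity checks.
    Abandons (returns None) once max_weight is exhausted."""
    n = H.shape[1]
    for w in range(max_weight + 1):            # weight 0 first: the received word itself
        for flips in combinations(range(n), w):
            candidate = received.copy()
            candidate[list(flips)] ^= 1        # binary addition of the noise sequence
            if not ((H @ candidate) % 2).any():  # zero syndrome => valid codeword
                return candidate
    return None                                # abandonment threshold reached

# A single bit flip in the all-zero codeword is corrected back.
r = np.array([0, 0, 0, 0, 1, 0, 0])
print(grand_decode(r, H))
```

The abandonment parameter (`max_weight` here) plays the role of the threshold discussed in paper 3111: the decoder only ever materializes the most likely noise sequences, and gives up rather than enumerating all 2^n of them.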