SYNDROME-BASED DECODING METHOD AND APPARATUS FOR BLOCK TURBO CODE

Abstract
A syndrome-based decoding method and apparatus for a block turbo code are disclosed. An embodiment of the present invention provides a syndrome-based decoding method for a block turbo code that includes an extended Hamming code as a component code, where the decoding method includes: (a) generating an input information value for a next half iteration by using channel passage information and the extrinsic information and reliability factor of a previous half iteration; (b) generating a hard decision word by way of a hard decision of the input information value; (c) calculating an n number of 1-bit syndromes, which corresponds to the number of columns or rows of the block turbo code, by using the hard decision word; and (d) determining whether or not to proceed with the next half iteration by using the calculated n number of 1-bit syndromes.
8 Claims
 1. A syndrome-based decoding method for a block turbo code, the block turbo code having an extended Hamming code as a component code thereof, the decoding method comprising:
(a) generating an input information value for a next half iteration by using channel passage information and extrinsic information and a reliability factor of a previous half iteration; (b) generating a hard decision word by way of a hard decision of the input information value; (c) calculating an n number of 1-bit syndromes by using the hard decision word, the n number of 1-bit syndromes corresponding to a number of columns or rows of the block turbo code; and (d) determining whether or not to proceed with the next half iteration by using the calculated n number of 1-bit syndromes. (Dependent claims: 2, 3, 4, 5, 6, 7.)
 8. A syndrome-based decoding apparatus for a block turbo code, the block turbo code having an extended Hamming code as a component code thereof, the decoding apparatus comprising:
a processor; and a memory connected to the processor, wherein the memory stores program instructions executable by the processor to: generate an input information value for a next half iteration by using channel passage information and extrinsic information and a reliability factor of a previous half iteration; generate a hard decision word by way of a hard decision of the input information value; calculate an n number of 1-bit syndromes by using the hard decision word, the n number of 1-bit syndromes corresponding to a number of columns or rows of the block turbo code; and determine whether or not to proceed with the next half iteration by using the calculated n number of 1-bit syndromes.
Specification
This application claims the benefit of Korean Patent Application No. 10-2018-0128883, filed with the Korean Intellectual Property Office on Oct. 26, 2018, the disclosure of which is incorporated herein by reference in its entirety.
The present invention relates to a syndrome-based decoding method and apparatus for a block turbo code, more particularly to a syndrome-based block turbo code decoding method and apparatus that employ a low-complexity early stopping technique and a hybrid decoding technique.
The turbo code, proposed by Berrou et al., performs iterative decoding over parallel concatenated convolutional codes composed of two connected recursive systematic convolutional codes to exhibit performance approaching Shannon's limit.
Iterative decoding is one of the most important features of the turbo code, and iterative decoding is performed as the soft output generated by each decoder is fed to the other decoder.
As the iterative decoding continues, the bit error rate (BER) performance may be improved, but the degree of improvement in BER performance may gradually decrease and reach saturation, whereas decoding computations and decoding delay may increase linearly.
Since the required number of iterations for proper decoding may vary for each frame according to channel properties, fixing the number of decoding iterations may be inefficient. Thus, in order to reduce unnecessary computations and decoding delay, an efficient way of applying early stopping to the iterative decoding of block turbo codes is needed.
The conventional early stopping technique may entail performing decoding over the columns and rows of the input information at each iteration, performing syndrome computation on the results for the columns and rows, and determining that there are no errors if the syndromes for all of the columns and rows are 0 (zero vectors).
This, however, entails the problem of an increased amount of computation, as the syndrome computation is repeated at every decoding iteration.
Also, various methods have been proposed that aim at lowering the complexity of decoding block turbo codes.
R. Pyndiah, “Near-optimum decoding of product codes: block turbo codes,” IEEE Trans. Commun., vol. 46, no. 8, pp. 1003-1010, August 1998 (Prior Art 1) presents the Chase-Pyndiah decoding algorithm, which is a method of extending a hard-input hard-output decoding algorithm to soft-input soft-output decoding.
This previous technique differentiates between hard-input and soft-input based decoding in the case of a single error so as to increase the applied rate of hard-input decoding, but it does not greatly lower complexity.
Pen-Yao Lu, Erl-Huei Lu, Tso-Cho Chen, “An Efficient Hybrid Decoder for Block Turbo Codes,” IEEE Commun. Letters, vol. 18, no. 12, pp. 2077-2080, October 2014 (Prior Art 2) presents a method of lowering decoding complexity by selectively using soft-input soft-output decoding and hard-input hard-output decoding according to the number of errors included in the input value of a decoder.
E. H. Lu and P. Y. Lu, “A syndrome-based hybrid decoder for turbo product codes,” in Proc. IEEE 3CA, Tainan, Taiwan, May 2010, pp. 280-282 (Prior Art 3) presents an algorithm that can greatly lower complexity by calculating the syndrome for a decoder input vector and applying hard-input soft-output decoding for a syndrome value of 0, meaning error free, and soft-input soft-output decoding for any other syndrome value.
Guo Tai Chen, Lei Cao, Lun Yu, Chang Wen Chen, “An Efficient Stopping Criterion for Turbo Product Codes,” IEEE Commun. Letters, vol. 11, no. 6, pp. 525-527, July 2007 (Prior Art 4) presents a method for preventing unnecessary decoding iterations in a procedure for iteratively decoding block turbo codes. The presented method involves performing a syndrome computation on all decoded output vectors after the completion of the decoding procedure for each iteration and finishing the decoding process if the syndromes are 0 for all columns and rows, as this ensures that there are no more errors.
Byungkyu Ahn, Sungsik Yoon, Jun Heo, “Low Complexity Syndrome-Based Decoding Algorithm Applied to Block Turbo Codes,” IEEE Access, vol. 6, pp. 21693536, April 2018 (Prior Art 5) proposes a syndrome-based hybrid decoding technique that can reduce the complexity of decoding block turbo codes. The proposed method applies hard-input soft-output decoding, which has low complexity, as much as possible within the error correction capability of the linear block codes used as the component codes of block turbo codes, and applies soft-input soft-output decoding, which has high complexity, only at portions where hard-input soft-output decoding cannot be applied.
To resolve the problems of the related art described above, an aspect of the invention aims to provide a syndrome-based decoding method and apparatus for block turbo codes that employ a low-complexity early stopping technique and a hybrid decoding technique to reduce computation and complexity without performance loss.
To achieve the objective above, an embodiment of the present invention provides a syndrome-based decoding method for a block turbo code that includes an extended Hamming code as a component code, where the decoding method includes: (a) generating an input information value for a next half iteration by using channel passage information and the extrinsic information and reliability factor of a previous half iteration; (b) generating a hard decision word by way of a hard decision of the input information value; (c) calculating an n number of 1-bit syndromes, which corresponds to the number of columns or rows of the block turbo code, by using the hard decision word; and (d) determining whether or not to proceed with the next half iteration by using the calculated n number of 1-bit syndromes.
The hard decision word can be a 1 by n matrix, and the 1-bit syndromes can be calculated by multiplying the hard decision word with an n by 1 matrix having all values of 1.
Step (d) can include calculating syndromes of n−k−1 bits for rows or columns of the block turbo code by multiplying the input information value with a parity check matrix if the n number of 1-bit syndromes are all 0; and determining whether or not the calculated (n−k−1)-bit syndromes are all 0, where the next half iteration can be performed if any one of the (n−k−1)-bit syndromes is not 0.
Steps (c) and (d) can be performed sequentially for all of the rows or columns.
The decoding method can further include: (e) determining whether or not to apply hard-input soft-output based decoding, where, if the n number of 1-bit syndromes calculated in step (c) are all 1, then step (e) can determine whether or not to apply hard-input soft-output based decoding in the next half iteration according to whether or not there are two errors in the previous half iteration.
Step (e) can include: identifying the positions of the three bits of lowest reliability and a bit corrected by the hard-input soft-output based decoding, if there are two errors included in the previous half iteration, and comparing the sum of the reliability values of the three bits of lowest reliability with the reliability value of the bit corrected by the hard-input soft-output based decoding to decide whether or not to perform the hard-input soft-output based decoding.
Step (e) can include: deciding whether or not to perform hard-input soft-output based decoding by checking whether or not the reliability values of the two bits having the lowest reliability and of the bits corrected by the hard-input soft-output based decoding match each other, if the result of calculating the syndromes for the input information value includes two errors.
Another aspect of the present invention provides a syndrome-based decoding apparatus for a block turbo code that includes an extended Hamming code as a component code, where the decoding apparatus includes a processor and a memory connected to the processor, and where the memory stores program instructions executable by the processor to: generate an input information value for a next half iteration by using channel passage information and the extrinsic information and reliability factor of a previous half iteration; generate a hard decision word by way of a hard decision of the input information value; calculate an n number of 1-bit syndromes corresponding to the number of columns or rows of the block turbo code by using the hard decision word; and determine whether or not to proceed with the next half iteration by using the calculated n number of 1-bit syndromes.
An embodiment of the invention provides the advantages of enabling a quick determination, with low complexity, of whether or not to perform a next half iteration, and of increasing decoding efficiency by increasing the applied rate of hard-input soft-output based decoding.
Additional aspects and advantages of the present invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
As the invention allows for various changes and numerous embodiments, particular embodiments will be illustrated in the drawings and described in detail in the written description.
However, this is not intended to limit the present invention to particular modes of practice, and it is to be appreciated that all changes, equivalents, and substitutes that do not depart from the spirit and technical scope of the present invention are encompassed in the present invention.
The present invention applies the following techniques to lower the complexity of decoding a block turbo code without performance loss.
1) Low-Complexity Early Stopping Technique
The conventional early stopping technique used in decoding block turbo codes may involve completing the decoding of a half iteration over all columns (or rows) and then checking whether or not the syndromes obtained by multiplying all column (or row) vectors with a parity check matrix are all 0.
If the syndromes for all columns and rows are 0, the decoding iterations are stopped; however, performing such syndrome checks at every decoding iteration increases the amount of computation.
To resolve this, an embodiment of the present invention utilizes the property of the extended Hamming code used in a block turbo code that, if the value of one particular syndrome bit is known, words containing a single error can be differentiated from words containing other error patterns.
An embodiment of the invention can greatly reduce the amount of computation expended in error estimation by preliminarily using only one bit to decide whether or not errors are present and performing the next half iteration if it is decided that at least one from among all of the columns and rows has a single error.
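The 1-bit preliminary check described above can be sketched as follows, under the assumption (common for extended Hamming codes) that the 1-bit syndrome is the overall parity of the hard-decision word; all names are illustrative and not from the patent.

```python
def one_bit_syndrome(hard_word):
    """Mod-2 sum (overall parity) of an n-bit hard-decision word.

    For an extended Hamming code this single syndrome bit separates
    error patterns: 1 -> an odd number of errors (decided as a single
    error), 0 -> no errors or an even number (e.g. two) of errors.
    """
    s = 0
    for bit in hard_word:
        s ^= bit
    return s

def next_half_iteration_needed(hard_words):
    """Proceed with the next half iteration if any of the n column (or
    row) 1-bit syndromes is nonzero, i.e. a single error remains."""
    return any(one_bit_syndrome(w) for w in hard_words)

# Example: the second row has odd parity, so decoding continues.
rows = [[0, 1, 1, 0, 1, 0, 0, 1],
        [0, 1, 1, 0, 1, 0, 1, 1]]
```

Each column or row thus costs only n−1 single-bit XORs for the preliminary decision, instead of a full parity-check-matrix multiplication.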
2) Increased Application Rate of Hard-Input Soft-Output Based Decoding in Cases Decided as Having Single Errors
In a block turbo code having an extended Hamming code as a component code, it is possible to divide the input information values, according to the forms of error, into those having no errors, those having single errors, and those having other types of errors, by way of computing the syndromes of the input information values during a syndrome-based decoding procedure.
A previous study showed that low-complexity decoding is possible without performance loss even if hard-input decoding, which has low complexity, is applied instead of the more complex soft-input based decoding, in certain cases involving single error occurrences and in cases having no errors.
In the Chase-Pyndiah decoding technique, which is used as a soft-input based decoding technique in the related art, a procedure for finding a codeword with an ML decoding method is performed. It is proven that, for cases decided as having single errors as presented by an embodiment of the present invention, when the reliability value of the bit position corrected by the hard-input soft-output based decoding is smaller than the sum of the reliability values of the three bits having the lowest reliability, the ML codeword and the codeword corrected by the hard-input soft-output based decoding are the same. This can be utilized to increase the rate of applying hard-input soft-output based decoding, which is of lower complexity, for cases decided as having single errors, and this in turn can greatly lower decoding complexity.
3) Increased Application Rate of Hard-Input Soft-Output Based Decoding in Cases Decided as Having Multiple Errors
For a block turbo code having an extended Hamming code as a component code, among the cases where it is decided from the syndrome computation of the input information values that there are two errors, if there is an error in the parity bit added during the extension of the extended Hamming code and an error in one of the remaining bits, the two bit errors may be corrected.
When it is decided that the input information value has two errors, an error correction of 2 bits may occur. To correct such error patterns with hard-input soft-output based decoding, it is proven in relation to an embodiment of the present invention that, when the reliability values of the 2 corrected bits match those of the bit having the lowest reliability and the bit having the second lowest reliability, a codeword identical to the ML codeword of a conventional soft-input based decoding technique can be obtained through a decoding method based on hard-input information. Thus, the decoding complexity can be lowered by as much as the applied rate of the above.
A detailed description of a method of decoding a block turbo code is provided below, with reference to the accompanying drawings.
Referring to
Through a hard decision of the input information value R(m), a hard decision word R^H may be generated.
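As a minimal sketch, the hard decision step can be written as below, assuming the common mapping of nonnegative soft values to bit 1 and negative values to bit 0 (the sign convention is an assumption, not stated in the text):

```python
def hard_decision(R):
    """Map each soft input value to a hard bit: r >= 0 -> 1, r < 0 -> 0."""
    return [1 if r >= 0 else 0 for r in R]

R = [0.8, -1.2, 0.3, -0.1]      # soft input information values
R_H = hard_decision(R)          # hard decision word: [1, 0, 1, 0]
```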
As illustrated in
As illustrated in
Here, if the decision result is 1, this means that a single error is included, and if the result is 0, this means that either there are no errors or there are two errors included.
Generally, during an iterative decoding process, if an error is included in one of the n columns or rows, it is much more likely to be a single error than two errors.
According to this embodiment, an n number of 1-bit syndromes may be calculated, corresponding to the number of columns or rows of the block turbo code, and based on the calculated n number of 1-bit syndromes, it may be determined whether or not to proceed with the next decoding iteration.
An embodiment of the invention performs the syndrome decision with single bits and uses the probabilistic estimation that, if the 1-bit syndromes of all n columns or rows are decided to be 0, then since there are no single errors included, in most cases there will not be two errors either.
Therefore, if the results of the syndrome decision for the n number of 1-bit syndromes show that not all of them are 0, then the next half iteration may proceed. Otherwise, to account for the possibility of there being two errors, albeit with a very low probability, only when the 1-bit syndromes for all n columns (or rows) are 0 are the syndromes of the remaining (n−k−1) bits obtained from a parity check matrix, to check that the bit values of the remaining syndromes are also all 0. Here, k is the number of information bits.
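The confirming check described above can be sketched for an assumed (8,4) extended Hamming component code; the parity check matrix H below is one standard choice whose first row reproduces the 1-bit (overall parity) syndrome and whose remaining n−k−1 rows give the remaining syndrome bits. This is an illustrative sketch, not the patent's matrix.

```python
# Parity check matrix of an (8,4) extended Hamming code (n=8, k=4).
H = [
    [1, 1, 1, 1, 1, 1, 1, 1],   # overall-parity row: the 1-bit syndrome
    [0, 0, 0, 0, 1, 1, 1, 1],   # the n-k-1 = 3 rows below are the
    [0, 0, 1, 1, 0, 0, 1, 1],   # remaining syndrome bits checked only
    [0, 1, 0, 1, 0, 1, 0, 1],   # when all 1-bit syndromes are 0
]

def syndrome(hard_word, H=H):
    """Full syndrome: mod-2 inner product of each row of H with the word."""
    return [sum(h * b for h, b in zip(row, hard_word)) % 2 for row in H]

def early_stop(hard_words):
    """Stop iterating only when every column/row syndrome is all zero."""
    return all(not any(syndrome(w)) for w in hard_words)
```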
If the syndrome of a column (or row) has a value other than 0, then the early stopping technique is not applied, and the decoding continues.
However, if the n number of 1-bit syndromes are 0, then, this time, the 1-bit syndromes of the rows (or columns) containing information bits excluding parity bits may be obtained. If these values are all 0, then the syndromes of the remaining (n−k−1) bits may be obtained, and if they are all 0, the early stopping technique may be applied to stop further decoding iterations. Otherwise, the next decoding iteration may be continued.
In cases where there is even one error, the syndrome of each column or row vector may be obtained individually, and afterwards, as shown in
W_j = γ_1 · (2r_j^H − 1)   [Equation 1]
Here, W_j represents the extrinsic information of the j-th bit, r_j^H represents the j-th bit of the hard decision vector R^H of the input information, and γ_1 represents the compensation factor value. γ_1 uses a value twice that of the β used in [1].
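Equation 1 can be sketched in code as follows, assuming γ_1 = 2β as stated in the text; the value of β and all names are illustrative assumptions.

```python
def extrinsic_from_hard(r_hard, beta=0.5):
    """Equation 1: W_j = gamma1 * (2*r_j - 1), mapping bit 0 -> -gamma1
    and bit 1 -> +gamma1, with gamma1 = 2 * beta."""
    gamma1 = 2.0 * beta
    return [gamma1 * (2 * b - 1) for b in r_hard]

W = extrinsic_from_hard([1, 0, 1, 1], beta=0.5)   # [1.0, -1.0, 1.0, 1.0]
```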
In cases where a single error exists, if it was decided in the previous half iteration that there were no occurrences of two errors, then hard-input soft-output based decoding may be applied starting from the next half iteration. If there was even one occurrence of two errors, then the positions of the three bits having the lowest reliability and the bit corrected by the hard-input soft-output based decoding may be identified, and the sum of the reliability values of the three bits having the lowest reliability may be compared with the reliability value of the bit corrected by the hard-input soft-output based decoding, so as to determine whether the hard-input soft-output based decoding or the conventional soft-input based Chase-Pyndiah decoding is to be performed.
According to this embodiment, the differentiation of which cases are to employ the hard-input soft-output based decoding and which cases are to employ the soft-input based decoding is made on the following basis.
First, the Chase-Pyndiah decoding technique, which is an existing soft-input based decoding technique, may proceed as shown in
Also, if the syndrome calculation results for the input information value are decided as indicating a single error, decoding the hard decision vector into an extended Hamming codeword flips the value of 1 bit, which changes the Euclidean distance relative to the hard decision vector. However, in order to find the minimum Euclidean distance, it must be guaranteed that the codeword obtained in this manner through a hard-input decoding procedure has a shorter Euclidean distance than any of the remaining competing codewords.
Therefore, in an embodiment of the invention, the codeword having the shortest Euclidean distance from among the competing codewords can be determined as follows.
As the competing codewords form a codeword set together with the codeword obtained with hard-input soft-output based decoding, the values of at least 4 bits must be different. Therefore, the case having the shortest Euclidean distance from the input information values among the codewords that have at least 4 bits different is when the bit value previously corrected by hard-input soft-output based decoding is the same as the hard decision bit and the values of three bits from among the remaining bits are flipped.
In this case, in order for the Euclidean distance to be minimized, the 3 bits referred to above must have the lowest reliability values. Therefore, supposing that the values with the lowest reliability are present in the positions of the three bits, the sum of the reliability values of the three bits may be compared with the reliability value of the one bit corrected by conventional hard-input soft-output based decoding, and if the reliability of the bit corrected by the hard-input soft-output based decoding is smaller, then hard-input soft-output based decoding may be applied.
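The single-error decision rule above can be sketched as follows; the function name and the list-based interface are the editor's assumptions.

```python
def apply_hard_decoding_single_error(reliabilities, corrected_pos):
    """Return True when hard-input soft-output decoding may be applied:
    the reliability of the bit corrected by hard-input decoding is
    smaller than the sum of the three lowest reliabilities among the
    remaining bits, so no competing codeword can be closer."""
    others = [r for i, r in enumerate(reliabilities) if i != corrected_pos]
    three_lowest_sum = sum(sorted(others)[:3])
    return reliabilities[corrected_pos] < three_lowest_sum

# Correcting a low-reliability bit: hard-input decoding is safe here.
ok = apply_hard_decoding_single_error([0.2, 0.3, 0.4, 1.5, 0.9], 0)
```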
Here, the extrinsic information of the j-th bit may be obtained by the equation below.
w_j = γ_2 · (2d_j^HDD − 1), (j = 1, 2, . . . , n)   [Equation 2]
Here, w_j represents the extrinsic information of the j-th bit, d_j^HDD represents the j-th bit value of the vector generated by a hard decision decoding of the input information, and γ_2 represents the correction factor value.
Here, the best performance was obtained when γ_2 is three times the value of β, and as such, this value was applied.
Otherwise, the soft-input based Chase-Pyndiah decoding technique may be applied to proceed with the decoding process.
If the results of calculating the syndromes for the input information value mean that there are two errors, then error correction would not be possible with the conventional hard-input decoding, and thus the soft-input based Chase-Pyndiah decoding technique may be applied.
However, in the case of an extended Hamming code, the Hamming code is decoded first, and the decision result for the extension is given as 0 or 1 depending on whether the mod-2 sum of the remaining extended bit with each bit value of the Hamming code is 0 or 1. Thus, if an error occurs in 1 bit from among the bits corresponding to the Hamming code and an error also occurs in the 1 parity bit added during the extending of the Hamming code, then error correction is possible.
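The correctable two-error pattern described above can be illustrated with a hedged hard-decision decoding sketch for an assumed (8,4) extended Hamming layout: bits 1-7 form a Hamming codeword whose set-bit positions XOR to 0, and bit 8 is the overall even-parity extension bit. The layout and names are assumptions, not the patent's. Note that a genuine two-error pattern inside the Hamming part produces the same syndrome signature and would be miscorrected, which is why the embodiment gates this case on the reliability criterion described below.

```python
def decode_ext_hamming(word):
    """Hard-decision decoding of an 8-bit extended Hamming word.

    Returns a corrected copy of the word, assuming at most the error
    patterns discussed in the text: a single error, or one Hamming-part
    error together with a flipped extension parity bit."""
    out = list(word)
    # Hamming syndrome: XOR of the 1-based positions of set bits 1..7.
    s = 0
    for i, b in enumerate(word[:7], start=1):
        if b:
            s ^= i
    parity = sum(word) % 2              # overall parity over all 8 bits
    if s == 0 and parity == 0:
        return out                      # no error detected
    if parity == 1:                     # odd number of errors: single error
        out[7 if s == 0 else s - 1] ^= 1
        return out
    # parity == 0 with s != 0: assume one Hamming-part error plus a
    # flipped extension bit, and correct both positions.
    out[s - 1] ^= 1
    out[7] ^= 1
    return out
```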
To take into account such cases, the present embodiment may employ hard-input soft-output based decoding of low complexity for certain error patterns, even when there is an occurrence of two errors.
To differentiate when to perform the hard-input soft-output based decoding and when to perform the soft-input based decoding, the following differentiation criteria may be used.
If it was decided that there are two errors, performing decoding using hard-input information would cause additional flips of 2 bits during the error correction procedure. The Euclidean distance between the hard decision vector and the channel information would be increased by four times the reliability values of the 2 bits where flipping occurred.
However, if a codeword obtained after hard-input information based decoding is to have the minimum Euclidean distance from among the group of candidate codewords, there should not be a codeword having a shorter Euclidean distance from among the codewords having a minimum of 4 bits different from that codeword.
If the reliability values of the bits corrected by hard-input soft-output based decoding match those of the two bits having the lowest reliability, then there can be no other competing codeword that has a shorter distance. As such, when the above is satisfied, the value obtained with the hard-input decoding may be decided as the codeword, and the extrinsic information of the j-th bit may be obtained by the equation shown below.
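The two-error criterion above can be sketched as follows: hard-input decoding is retained only when the two corrected positions coincide with the two lowest-reliability positions. Names and the interface are illustrative assumptions.

```python
def apply_hard_decoding_two_errors(reliabilities, corrected_positions):
    """Return True when the two bits corrected by hard-input decoding
    are exactly the two lowest-reliability bits, so no competing
    codeword can have a shorter Euclidean distance."""
    order = sorted(range(len(reliabilities)), key=lambda i: reliabilities[i])
    return set(corrected_positions) == set(order[:2])
```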
w_j = γ_2 · (2d_j^HDD − 1), (j = 1, 2, . . . , n)   [Equation 3]
Here, the best performance was obtained when γ_2 is three times the value of β, and as such, this value was applied.
Otherwise, softinput based decoding may be applied to proceed with the error correction process.
As illustrated in
The processor 600 can include a CPU (central processing unit) or a virtual machine, etc., that is capable of running a computer program.
The memory 602 can include a nonvolatile storage device such as a fixed type hard drive or a detachable storage device. A detachable storage device can include a compact flash unit, a USB memory stick, etc. The memory 602 can also include volatile memory such as various types of random access memory.
The memory 602 may store program instructions executable by the processor 600.
For decoding a block turbo code with low complexity, the memory 602 can store program instructions that can be executed by the processor 600 to generate an input information value for a next half iteration by using channel passage information and the extrinsic information and reliability factor of a previous half iteration, generate a hard decision word by way of a hard decision of the input information value, calculate an n number of 1-bit syndromes corresponding to the number of columns or rows of the block turbo code by using the hard decision word, and determine whether or not to proceed with the next half iteration by using the calculated n number of 1-bit syndromes.
Here, the hard decision word may be a 1 by n matrix, and the 1-bit syndromes may be calculated by multiplying the hard decision word with an n by 1 matrix having all values of 1.
Also, to determine whether or not to proceed with the next half iteration, the memory 602 may store program instructions for calculating syndromes of n−k−1 bits for rows or columns of the block turbo code by multiplying the input information value with a parity check matrix if the n number of 1-bit syndromes are all 0, and determining whether or not the calculated (n−k−1)-bit syndromes are all 0, where the next half iteration can be performed if any one of the (n−k−1)-bit syndromes is not 0.
Also, the memory 602 may store program instructions for determining whether or not to apply hard-input soft-output based decoding and, if the calculated n number of 1-bit syndromes are all 1, determining whether or not to apply hard-input soft-output based decoding in the next half iteration according to whether or not there are two errors in the previous half iteration.
Furthermore, the program instructions may identify the positions of the three bits of lowest reliability and a bit corrected by the hard-input soft-output based decoding, if there are two errors included in the previous half iteration, and may compare the sum of the reliability values of the three bits of lowest reliability with the reliability value of the bit corrected by the hard-input soft-output based decoding to decide whether or not to perform the hard-input soft-output based decoding.
Also, the program instructions may, if the result of calculating the syndromes for the input information value includes two errors, decide whether or not to perform hard-input soft-output based decoding by checking whether or not the reliability values of the two bits having the lowest reliability and of the bits corrected by the hard-input soft-output based decoding match each other.
Here, ‘Conventional’ represents the technique disclosed in Prior Art 4 mentioned above.
Referring to
As illustrated in
Referring to
As can be seen from the results, the rate of increase differs according to the number of iterations, and it can be observed that, on average, the low-complexity decoding is used more, from a minimum of 1.4 times up to 5.6 times more.
Thus, by using hard-input decoding having a low complexity instead of the existing Chase-Pyndiah decoding technique having a high complexity, the overall decoding complexity can be greatly reduced, in proportion to the increase in the applied rate.
The embodiments of the invention set forth above are disclosed for illustrative purposes only. A person of ordinary skill in the art would be able to make various modifications, alterations, and additions without departing from the spirit and scope of the invention, and such modifications, alterations, and additions are to be interpreted as being encompassed within the scope of claims set forth below.