Linköping Studies in Science and Technology. Dissertations No. 501. Cyclically Permutable Codes. Anders Lundqvist. Department of Electrical Engineering, Linköping University, S-581 83 Linköping, Sweden. Linköping 1997.

ISBN 91-7219-029-9, ISSN 0345-7524. Printed in Sweden by ??, Linköping 1997.

To Mom and Dad.


Abstract

Brevity is the soul of wit.
William Shakespeare

A cyclically permutable code is a set of codewords having the property that no codeword is a cyclic shift of another codeword. We study the problem of constructing cyclically permutable codes of large size and low correlation. Cyclically permutable codes are used in code-division multiple-access systems realized by e.g. direct-sequence modulation or frequency-hopping. Advantages of code-division multiple-access over conventional access methods, such as time-division and frequency-division, include greater flexibility, better robustness and the fact that no synchronization among the transmitters is needed. Among our main results are an efficient method of selecting cyclically distinct codewords from linear cyclic codes, a new family of sequences for direct-sequence modulation, and several constructions of hopping-sequences for multiple-access coupled with a decoding algorithm for asynchronous communication. We have also constructed new binary constant-weight codes of high minimum distance.


Acknowledgements

I get by with a little help from my friends.
John Lennon

First of all, I would like to express my sincere gratitude to my supervisor Professor Thomas Ericson for putting so much trust in me and for always giving me constructive criticism. His constant encouragement has meant a lot to me. I am especially indebted to Dr. Per-Olof Anderson who acted as my mentor during my first years in the Data Transmission group. He was always willing to listen to me and discuss the various problems that I ran into. Also, a particular word of gratitude to my humorous (ex.) room-mate Dr. Eva Englund who patiently listened to all my stupid ideas and suggestions, and to Dr. Ralf Kötter for helping out when I got stuck in the (in-)finite fields of coding theory. I would also like to thank the entire faculty, staff and students at the Data Transmission group, the Information Theory group and the Image Coding group for the friendly and warm atmosphere that made things so much easier. Last, but not least, I must thank Bodil for putting up with me and my research.

I love deadlines. I like the whooshing sound they make as they fly by.
Douglas N. Adams

Linköping, September 1997
Anders Lundqvist


Contents

1 Introduction 1
  1.1 Applications to communication systems 2
  1.2 Outline of the thesis 4
  1.3 Publications 6
2 Preliminaries 7
  2.1 Introduction 7
  2.2 Cyclically permutable codes defined 8
  2.3 Correlation and distance measures 9
3 The cyclic equivalence classes of linear cyclic codes 13
  3.1 Introduction 13
  3.2 Preliminaries 14
  3.3 The cyclic equivalence classes 15
  3.4 Selecting the representatives 17
  3.5 Summary 36
4 Sequences for direct-sequence modulation 39
  4.1 Introduction 39
  4.2 Applications 40
  4.3 Sequences and linear cyclic codes 41
  4.4 Results of a computer search 47
5 Sequences for frequency-hopping 53
  5.1 Introduction 53
  5.2 System model 54
  5.3 Code constructions 55
  5.4 Decoding 61
  5.5 Simulations 72
  5.6 Summary 74
6 Sequences for the OR channel 81
  6.1 Introduction 81
  6.2 A new construction 82
  6.3 Applications 88
  6.4 Evaluation 93
7 Conclusions 99
A Families of sequences for direct-sequence modulation 101
  A.1 Introduction 101
  A.2 The families 102
B Best known sets of signature sequences 107
  B.1 Key to the tables 107
  B.2 List of constructions 110
  B.3 Other constructions 111
  B.4 The tables 113
Bibliography 127


Chapter 1. Introduction

'Where shall I begin, please your Majesty?' he asked. 'Begin at the beginning,' the King said, gravely, 'and go on till you come to the end; then stop.'
Lewis Carroll, Alice's Adventures in Wonderland

Today the use of error-correcting codes in communication systems is widespread, e.g. in cellular telephony, compact discs and satellite communication systems. These codes provide protection against random disturbances induced on the transmitted information by the channel and they allow the receiver to either correct or detect errors. In this thesis we investigate some non-conventional applications of error-correcting codes in communication systems. We do this by constructing several families of so-called cyclically permutable codes (CP codes) using a well-known class of error-correcting codes: the linear cyclic codes. We show how CP codes can be used, not only to provide us with robustness against additive errors, but also to aid in synchronization or to allow several users to share a single channel (multiple-access). Specifically, we are interested in giving a unified description of sets of sequences suitable for error-correction, synchronization and multiple-access, by using conventional algebraic coding theory. By using linear cyclic codes as a foundation for these sets of sequences, we inherit some of the algebraic structure possessed by the original code. This

structure can then be used to reduce the complexity of the transmitter and receiver. In Chapter 2 we give the formal definition of cyclically permutable codes. But for now, it is sufficient to note that the problem of constructing CP codes is somewhat similar to the problem of designing necklaces with a fixed number, n, of beads (the length of the code), where the color of each bead is selected from a given set of colors (the alphabet) of size q. Given the length of the code and the size of the alphabet, the goal is to obtain a large set of necklaces with the property that each necklace differs as much as possible from the others. (When we say "differs" we only consider rotations of the necklaces and not mirror images, i.e. we are not allowed to flip the necklaces over, only to rotate them.) See Figure 1.1.

[Figure 1.1: Two necklaces of length n = 4 with alphabet A = {0, 1, 2}. Note that they are mirror images of each other, but they are still considered different.]

1.1 Applications to communication systems

The basic purpose of any digital communication system is to transmit discrete quanta of information over a channel. Depending on the application, the number of transmitters and receivers using the channel may vary:

- In a point-to-point communication system the information is sent from one single transmitter to one single receiver; a typical example of this is a microwave radio link.
- If several transmitters are using the same channel to send information to one or more receivers we call this a multiple-access communication

system. Well-known examples include cellular telephony and local area networks (LANs).
- Finally, in a broadcast communication system we have one transmitter sending information to many receivers.

Regardless of the number of transmitters and receivers using the channel, the components of the communication system are basically the same, see Figure 1.2.

[Figure 1.2: The components of a general digital communication system: each source is followed by a source coder, a channel coder and a modulator; the modulated signals share the channel, which is connected to the receiver(s).]

The information to be transmitted (e.g. text, speech or pictures) first needs to be encoded into a string of, usually binary, symbols. Next, in order to protect the information from being corrupted while transmitted over the channel, we insert some redundancy into the stream of symbols. This allows the receiver to correct distorted information under the assumption that the distortion is not too severe. The information is protected using error-correcting codes: either block codes or convolutional codes. Before the symbols can be sent on the channel they need to be modulated onto some carrier. In a point-to-point communication system the modulation signal may also be used for synchronization purposes or, in a system that needs to achieve a low probability of interception, for spreading the transmission spectrum over a large bandwidth. In both these cases we need a modulation signal that is easy to distinguish from a time-shifted version of itself.

In multiple-access communication systems, each transmitter is assigned a unique signal (or spreading sequence) onto which the information is modulated. These sequences can be in the form of e.g. frequency-hopping patterns, direct sequences or protocol sequences. They are used by the receiver(s) to separate the different transmitters from each other. In order to minimize cross-talk interference, each signature should be easy to distinguish from all other signatures used in the system. Further, if the transmitters are sending their information asynchronously, each spreading sequence must also be easy to distinguish from time-shifted versions of all other spreading sequences used.

To summarize, the sets $C_{\mathrm{CP}}$ of sequences in which we are interested should have one or both of the following properties:

1. Each sequence should be easy to distinguish from a time-shifted version of itself. (Low autocorrelation.)
2. Each sequence should be easy to distinguish from a (possibly) time-shifted version of all other sequences. (Low crosscorrelation.)

1.2 Outline of the thesis

Figure 1.3 contains a graphical representation of how the central chapters in this thesis are linked. In the next chapter, Chapter 2, we give the formal definition of a cyclically permutable code and we introduce the different correlation measures used throughout the thesis. In Chapter 3 we present a numerically efficient algorithm for finding the set of representatives from the cyclic equivalence classes of a linear cyclic code of length n. In particular, we are interested in those representatives which have minimum period n. This set of representatives, which we denote $\mathrm{rep}_n(C)$, will in the subsequent chapters be used to construct various families of sequences with good correlation properties.

[Figure 1.3: How the main chapters are linked: linear cyclic codes yield cyclic equivalence classes (Chapter 3); from these we obtain sequences for direct-sequence modulation via the mapping {0,1} -> {+1,-1} (Chapter 4), sequences for frequency-hopping (Chapter 5), and, via concatenation, sequences for the OR channel (Chapter 6).]

Sequences suitable for coherent communication have been thoroughly investigated over the years. These sequences are often referred to as pseudorandom or pseudonoise sequences, and have favorable correlation properties. In Chapter 4 we investigate the relationship between the representatives of binary linear cyclic codes and some families of well-known pseudorandom sequences. Through computer search we have found a new family of sequences. (Appendix A contains a comprehensive overview of families which can be derived directly from some linear cyclic code of length n, e.g. m-sequences, Gold sequences and Kasami sequences.)

In Chapter 5 we construct sequences for slow frequency-hopping from linear cyclic Reed-Solomon codes. Three different synchronization situations are considered: synchronous, quasi-synchronous and asynchronous communication. We present decoding algorithms for hopping-sequences in these three cases together with computer simulations estimating the probability of erroneous decoding.

Constant-weight cyclically permutable codes for the OR channel are covered in Chapter 6. These codes are constructed by concatenating codes. The outer code is a set of representatives, $\mathrm{rep}_n(C)$, for some linear cyclic

code with a prime field alphabet (larger than two) and the inner code is an orthogonal weight-one binary code. An asymptotic analysis of this code construction is carried out, and we review other constructions of codes and compare our construction to these. Three types of spreading sequences are mentioned: signature sequences, protocol sequences and optical orthogonal codes. Tables of best known sets of signature sequences are listed in Appendix B. Finally, in Chapter 7 we draw some conclusions as to what we have done.

1.3 Publications

Parts of this thesis have been presented at the following conferences:

- Sixth Joint Swedish-Russian International Workshop on Information Theory in Mölle, Sweden; August 1993 [22],
- IEEE Vehicular Technology Conference in Stockholm, Sweden; June 1994 [26],
- IEEE International Symposium on Information Theory in Trondheim, Norway; June 1994 [24],
- NRS-seminarium in Linköping, Sweden; October 1994 [23],

or have been published in or submitted to the following journals:

- Journal on Communications, 1994 [25],
- Anders Lundqvist, "On the Cyclic Equivalence Classes of Linear Cyclic Codes", submitted to IEEE Transactions on Information Theory,
- Anders Lundqvist, "Asynchronous Multiple-Access Frequency-Hopping", submitted to IEEE Transactions on Information Theory.

Chapter 2. Preliminaries

The distance doesn't matter; it is only the first step that is difficult.
Marquise du Deffand

2.1 Introduction

The concept of cyclically permutable (CP) codes was introduced as early as 1963 by Gilbert [13]. The application was asynchronous multiple-access on radio channels. CP codes are sometimes referred to as optical orthogonal codes in the literature on optical communications. The optical orthogonal codes, which in all practical aspects are identical to CP codes as defined by Gilbert, were introduced by Chung, Salehi and Wei [6] in 1989 as a possible coding scheme for code-division multiple-access on fiber-optic channels. Throughout the thesis we use the name "cyclically permutable" rather than "optical orthogonal". The reason is that the codes find applications also outside the field of optical communication and, as will soon be apparent, the sequences are not necessarily orthogonal.

2.2 Cyclically permutable codes defined

We first need the concept of cyclic shifts:

Definition 2.1 Let $a = (a_0, a_1, \ldots, a_{n-1})$ be a vector of length $n$ over an arbitrary alphabet. The (cyclic) shift operator $S$ is defined by
$$S(a) = (a_{n-1}, a_0, \ldots, a_{n-2}), \qquad S^i(a) = S^{i-1}\bigl(S(a)\bigr),$$
where $i$ is an integer greater than one.

Definition 2.2 A cyclically permutable code (CP code) $C_{\mathrm{CP}}$ of length $n$ is a set of codewords (or sequences) of length $n$ over any finite alphabet, having the following two properties:

1. All codewords $c$ in $C_{\mathrm{CP}}$ have full cyclic order, i.e. $c \neq S^i(c)$ for all $i$ such that $1 \leq i < n$.
2. No codeword in $C_{\mathrm{CP}}$ is a cyclic shift of another codeword in $C_{\mathrm{CP}}$, i.e. if $a$ and $b$ are two distinct codewords in $C_{\mathrm{CP}}$ then $a \neq S^i(b)$ for all $i$, $0 \leq i < n$. We say that the codewords are cyclically distinct.

The definition that we give of CP codes is slightly different from the one originally given by Gilbert, since he allowed codewords to have a period that is less than the length of the code, i.e. he used only condition 2 in Definition 2.2 above. Also, he only considered binary codes, whereas we allow codes over arbitrary alphabets.

Example 2.1 The set
$$C_{\mathrm{CP}} = \{2310, 4120, 1430, 3240, 3421\}$$
of length $n = 4$ with alphabet $A = \{0, 1, 2, 3, 4\}$ is a CP code.
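The two conditions of Definition 2.2 are easy to check mechanically. The following is a minimal sketch (not from the thesis; the function names are our own) of the shift operator and the two tests, applied to the code of Example 2.1:

```python
# Sketch of Definitions 2.1-2.2: cyclic shift, full cyclic order and
# cyclic distinctness, checked against Example 2.1.

def shift(a, i=1):
    """Return S^i(a): the vector a cyclically shifted i positions to the right."""
    n = len(a)
    i %= n
    return a[-i:] + a[:-i] if i else a

def has_full_cyclic_order(c):
    """Condition 1: c != S^i(c) for all 1 <= i < n."""
    return all(shift(c, i) != c for i in range(1, len(c)))

def cyclically_distinct(a, b):
    """Condition 2: a != S^i(b) for all 0 <= i < n."""
    return all(shift(b, i) != a for i in range(len(a)))

def is_cp_code(code):
    ok_order = all(has_full_cyclic_order(w) for w in code)
    ok_distinct = all(cyclically_distinct(code[i], code[j])
                      for i in range(len(code)) for j in range(i + 1, len(code)))
    return ok_order and ok_distinct

example_2_1 = ["2310", "4120", "1430", "3240", "3421"]
print(is_cp_code(example_2_1))   # True
```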

2.3 Correlation and distance measures

When evaluating a cyclically permutable code $C_{\mathrm{CP}}$ we consider the maximum correlation between the codewords. The reason why correlation is an interesting measure of performance is that, as we mentioned in the introduction, the autocorrelation function determines the synchronization capabilities of the code and the crosscorrelation function is related to the amount of crosstalk interference in multiple-access systems. We use three different correlation measures in this thesis, depending on the application. In order to unify and simplify the presentation, all sequences are considered to have an alphabet equal to some finite field with $q$ elements, denoted by GF($q$).

In the following, let $a = a_0, a_1, \ldots, a_{n-1}$ and $b = b_0, b_1, \ldots, b_{n-1}$ be two sequences of length $n$. The first two correlation functions we use are only defined for binary sequences:
$$\theta_{ab}(\tau) \triangleq \sum_{i=0}^{n-1} (-1)^{a_i + b_{i+\tau}}$$
and
$$\lambda_{ab}(\tau) \triangleq \sum_{i=0}^{n-1} a_i b_{i+\tau}.$$
Note that all indices are calculated modulo the length of the sequences, $n$. The third correlation function is the Hamming-correlation function defined by
$$H_{ab}(\tau) \triangleq \sum_{i=0}^{n-1} I(a_i, b_{i+\tau}),$$
where the indices are again calculated modulo $n$ and where $I$ is an indicator function defined by
$$I(a, b) \triangleq \begin{cases} 1, & a = b, \\ 0, & a \neq b. \end{cases}$$
The Hamming-correlation function simply gives the number of coinciding symbols in the sequences and it is defined on sequences with arbitrary alphabets.
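A minimal sketch (not from the thesis) of the three measures; running it on the two sequences of Example 2.2 below reproduces the values listed there:

```python
# theta: +/-1 correlation, lam: 0/1 correlation, hamming_corr: coinciding symbols.

def theta(a, b, tau):
    n = len(a)
    return sum((-1) ** (a[i] + b[(i + tau) % n]) for i in range(n))

def lam(a, b, tau):
    n = len(a)
    return sum(a[i] * b[(i + tau) % n] for i in range(n))

def hamming_corr(a, b, tau):
    n = len(a)
    return sum(a[i] == b[(i + tau) % n] for i in range(n))

a = [1, 0, 0, 0, 0, 0, 0]
b = [1, 0, 0, 1, 0, 1, 1]
print([theta(a, a, t) for t in range(7)])         # [7, 3, 3, 3, 3, 3, 3]
print([theta(b, b, t) for t in range(7)])         # [7, -1, -1, -1, -1, -1, -1]
print([hamming_corr(b, b, t) for t in range(7)])  # [7, 3, 3, 3, 3, 3, 3]
```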

Example 2.2 Consider the two binary sequences of length $n = 7$:
$$a = 1000000, \qquad b = 1001011.$$
Evaluating the autocorrelation functions yields:
$$\theta_{aa}(\tau) = \begin{cases} 7, & \tau \equiv 0 \pmod 7, \\ 3, & \text{otherwise,} \end{cases} \qquad
\theta_{bb}(\tau) = \begin{cases} 7, & \tau \equiv 0 \pmod 7, \\ -1, & \text{otherwise,} \end{cases}$$
$$\lambda_{aa}(\tau) = \begin{cases} 1, & \tau \equiv 0 \pmod 7, \\ 0, & \text{otherwise,} \end{cases} \qquad
\lambda_{bb}(\tau) = \begin{cases} 4, & \tau \equiv 0 \pmod 7, \\ 2, & \text{otherwise,} \end{cases}$$
$$H_{aa}(\tau) = \begin{cases} 7, & \tau \equiv 0 \pmod 7, \\ 5, & \text{otherwise,} \end{cases} \qquad
H_{bb}(\tau) = \begin{cases} 7, & \tau \equiv 0 \pmod 7, \\ 3, & \text{otherwise.} \end{cases}$$

We denote the maximum correlation between any two codewords in a code $C$ of length $n$ by
$$|\theta|_{\max} \triangleq \max\bigl\{ |\theta_{ab}(\tau)| : a, b \in C,\ 0 \leq \tau < n,\ a \neq b \text{ or } \tau \neq 0 \bigr\}, \tag{2.1}$$
or
$$\lambda_{\max} \triangleq \max\bigl\{ \lambda_{ab}(\tau) : a, b \in C,\ 0 \leq \tau < n,\ a \neq b \text{ or } \tau \neq 0 \bigr\}, \tag{2.2}$$
or by
$$H_{\max} \triangleq \max\bigl\{ H_{ab}(\tau) : a, b \in C,\ 0 \leq \tau < n,\ a \neq b \text{ or } \tau \neq 0 \bigr\}. \tag{2.3}$$
The minimum distance, $d_{\min}$, of a code $C$ is defined by
$$d_{\min} \triangleq \min\bigl\{ d_H(a, b) : a, b \in C,\ a \neq b \bigr\},$$
where $d_H$ is the Hamming distance between the codewords. The minimum cyclic distance, $d_c$, of a code $C$ of length $n$ is
$$d_c \triangleq \min\bigl\{ d_H\bigl(S^i(a), S^j(b)\bigr) : a, b \in C,\ 0 \leq i, j < n,\ a \neq b \text{ or } i \neq j \bigr\}. \tag{2.4}$$
Note that if not all codewords in $C$ have full cyclic order, or if the codewords are not cyclically distinct, then the minimum cyclic distance is zero.
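A short sketch (not from the thesis) evaluating $H_{\max}$ (Equation 2.3) and the minimum cyclic distance $d_c$ (Equation 2.4) for the CP code of Example 2.1; since both quantities range over the same set of codeword pairs and shifts, they satisfy $H_{\max} = n - d_c$:

```python
def rotations(w):
    return [w[i:] + w[:i] for i in range(len(w))]

def d_H(a, b):
    return sum(x != y for x, y in zip(a, b))

code = ["2310", "4120", "1430", "3240", "3421"]
n = len(code[0])

H_max = max(n - d_H(a, r)
            for a in code for b in code for t, r in enumerate(rotations(b))
            if a != b or t != 0)
d_c = min(d_H(ra, rb)
          for a in code for b in code
          for i, ra in enumerate(rotations(a)) for j, rb in enumerate(rotations(b))
          if a != b or i != j)
print(H_max, d_c)
print(H_max == n - d_c)   # True
```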


Chapter 3. The cyclic equivalence classes of linear cyclic codes

A child of five would understand this. Send someone to fetch a child of five.
Groucho Marx

3.1 Introduction

Any cyclic code $C$ of length $n$ can be partitioned into its cyclic equivalence classes, generated by the operation of cyclic shifts. In this chapter we derive a systematic and numerically efficient algorithm to find a set of representatives containing one codeword from each of these equivalence classes (denoted by $\mathrm{rep}(C)$). We pay special attention to the problem of finding a set of representatives (denoted by $\mathrm{rep}_n(C)$) containing one codeword from each of those equivalence classes having size $n$, i.e. having maximum size.

A large number of papers deal with the problem of characterizing the cyclic equivalence classes in linear cyclic codes and with related problems. In connection with the construction of sequences for frequency-hopping, Reed [42] presented a method of finding a subset of $\mathrm{rep}_n(C)$ in the case when $C$ is

a primitive narrow-sense Reed-Solomon code. His method was later modified by Reed and Wolverton in [43] so that the entire set $\mathrm{rep}(C)$ could be determined for the same class of codes. This method was "reasonably systematic" according to Song, Reed and Golomb [52]. In [52] some additional results can be found, such as a closed expression for the size of $\mathrm{rep}_n(C)$.

In the papers [54, 2], Allard, Shiva and Tavares study the cyclic equivalence classes of binary linear cyclic codes. In [54] they show how to find the entire set $\mathrm{rep}(C)$ under the condition that all the zeroes of the parity-check polynomial of the code have multiplicative order $n$, or that the multiplicative orders of these zeroes are all relatively prime. These restrictions were later removed in the paper [2] by the same authors.

The basic idea of subdividing a linear cyclic code into its cyclic equivalence classes has been used in connection with the problem of determining the weight distribution of the code. A number of authors, beginning with MacWilliams [27], have studied such methods that are based on the algebraic structure of linear cyclic codes. In this chapter we present a new and efficient method of determining both $\mathrm{rep}(C)$ and $\mathrm{rep}_n(C)$ for any linear cyclic code $C$ of length $n$.

3.2 Preliminaries

Let $C$ denote a linear cyclic

$[n, k]$ code over GF($q$), where GF($q$) is the finite field with $q$ elements. We demand that $n$ and $q$ are relatively prime. Let $m$ be the multiplicative order of $q$ modulo $n$, i.e. let $m$ be the smallest positive integer $i$ such that $n$ divides $q^i - 1$. We may factor $g(x)$ and $h(x)$, the generator polynomial and the parity-check polynomial of $C$, into irreducible polynomials over GF($q$) as
$$g(x) = \prod_{i=1}^{r} m_{\alpha_i}(x), \qquad h(x) = \prod_{j=1}^{s} m_{\beta_j}(x),$$
where $m_{\gamma}(x)$ is the minimal polynomial of $\gamma \in \mathrm{GF}(q^m)$ over GF($q$).

Since $n$ and $q$ are relatively prime we know that $x^n - 1$ has $n$ distinct zeroes (see e.g. MacWilliams and Sloane [28, Ch. 7, Sec. 5]) and therefore the factors of $g(x)$ and $h(x)$, i.e. the polynomials $m_{\alpha_i}(x)$, $1 \leq i \leq r$, and $m_{\beta_j}(x)$, $1 \leq j \leq s$, are also relatively prime. Hence, we can uniquely characterize the code $C$ using one of the sets
$$A(C) \triangleq \{\alpha_1, \alpha_2, \ldots, \alpha_r\} \qquad \text{or} \qquad B(C) \triangleq \{\beta_1, \beta_2, \ldots, \beta_s\},$$
consisting of one element from each conjugacy class in the set of all zeroes of $g(x)$ and $h(x)$, respectively. We will say that $\beta$ is in $B(C)$ if there exists an element $\beta'$ in $B(C)$ such that $\beta$ and $\beta'$ belong to the same conjugacy class. The elements in the set $A(C)$ are called the zeroes of $C$ and $B(C)$ is called the (defining set of) non-zeroes. Note that in the general case, the set $B(C)$ is not the complement of $A(C)$.

The (multiplicative) order of an element $\beta$ in GF($q^m$), denoted by $|\beta|$, is defined to be the smallest positive integer $i$ such that $\beta^i = 1$. In Lemma 3.1 below we relate the multiplicative order of an element to the degree of its minimal polynomial $m_{\beta}(x)$.

Lemma 3.1 Let $\beta$ be an element in GF($q^m$) and let $m_{\beta}(x)$ be its minimal polynomial over GF($q$). If the multiplicative order of $\beta$ is $n$ and if $d$ is the smallest integer $i$ such that $n$ divides $q^i - 1$, then $\deg m_{\beta}(x) = d$.

Proof: Since $d$ is the smallest integer $i$ such that $n$ divides $q^i - 1$, $\beta$ cannot be contained in any proper subfield of GF($q^d$). We know that the degree of $m_{\beta}(x)$ must be a divisor of $d$. Assuming $\deg m_{\beta}(x) = m' < d$ implies that $\beta$ is contained in the subfield GF($q^{m'}$), which is a contradiction. Hence $\deg m_{\beta}(x) = d$. □
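The conjugacy class of $\beta^j$ over GF($q$) corresponds to the cyclotomic coset $\{j, jq, jq^2, \ldots\}$ modulo $n$, so both $\deg m_{\beta^j}(x)$ (the coset size) and $|\beta^j|$ are easy to compute. The following sketch (not from the thesis) does this for $q = 2$, $n = 63$ and reproduces the degree and order columns of Table 3.1 used in Example 3.6 later in this chapter:

```python
from math import gcd

def cyclotomic_coset(j, q, n):
    """Exponents of the conjugates of beta^j: {j, j*q, j*q^2, ...} mod n."""
    coset, x = [], j % n
    while x not in coset:
        coset.append(x)
        x = (x * q) % n
    return coset

q, n = 2, 63
for j in (0, 1, 3, 5, 7, 9, 11, 13, 15, 21, 23, 27, 31):
    size = len(cyclotomic_coset(j, q, n))   # = deg m_{beta^j}(x)
    order = n // gcd(j, n)                  # = multiplicative order of beta^j
    print(j, size, order)
```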

3.3 The cyclic equivalence classes

The cyclic order of a codeword $c$, denoted by $|c|$, is defined to be the smallest positive integer $i$ such that $S^i(c) = c$, where $S$ is the previously defined cyclic shift operator. A codeword is said to have full cyclic order if the cyclic order of the codeword equals the length of the codeword. Also, two codewords $c$ and $c'$ are said to belong to the same cyclic equivalence class if $c$ is a cyclic shift of $c'$. (Codewords belonging to different cyclic equivalence classes are said to be cyclically distinct.) We use $\bar{c}$ to denote the cyclic equivalence class containing $c$:
$$\bar{c} \triangleq \bigl\{ S^i(c) : 0 \leq i < |c| \bigr\}.$$
For a set $C$, we refer to $\bar{C}$ as the cyclic extension of $C$ defined by
$$\bar{C} = \bigcup_{c \in C} \bar{c}. \tag{3.1}$$
By selecting one representative (one codeword) from each cyclic equivalence class in the code we get a set of representatives, denoted by $\mathrm{rep}(C)$. (Note that in the general case, $\mathrm{rep}(C)$ is not uniquely determined by $C$.) Any cyclic code can then be written as
$$C = \bigcup_{c \in \mathrm{rep}(C)} \bar{c}.$$
Let the set of all representatives of cyclic order $d$ be denoted by
$$\mathrm{rep}_d(C) \triangleq \bigl\{ c \in \mathrm{rep}(C) : |c| = d \bigr\}.$$
Notice that $\mathrm{rep}_d(C)$ is empty when $d$ is not a divisor of $n$. We pay special attention to those cyclic equivalence classes in $C$ containing codewords of full cyclic order and use $\mathrm{rep}_n(C)$ to denote these representatives.

For the sake of completeness, we state the following two theorems giving us the number of cyclic equivalence classes in linear cyclic codes:

Theorem 3.1 If $C$ is a linear cyclic code of length $n$ over GF($q$) with parity-check polynomial $h(x)$, then
$$|\mathrm{rep}_n(C)| = \frac{1}{n} \sum_{d \mid n} \mu(n/d)\, q^{m(d)},$$
where $m(d)$ is the degree of the greatest common divisor of $h(x)$ and $x^d - 1$, and $\mu$ is the Möbius function.

Proof: See Theorem 4 in Tavares, Allard and Shiva [54]. □

Theorem 3.2 If $C$ is a linear cyclic code of length $n$ over GF($q$) with parity-check polynomial $h(x)$, then
$$|\mathrm{rep}(C)| = \frac{1}{n} \sum_{d \mid n} \varphi(n/d)\, q^{m(d)},$$
where $m(d)$ is the degree of the greatest common divisor of $h(x)$ and $x^d - 1$, and $\varphi$ is the Euler totient function.

Proof: By using the following relationship (see e.g. Niven, Zuckerman and Montgomery [40, p. 195])
$$\varphi(d) = \sum_{k \mid d} \mu(d/k)\, k,$$
we see that
$$\frac{1}{n} \sum_{d \mid n} \varphi(n/d)\, q^{m(d)} = \frac{1}{n} \sum_{d \mid n} \varphi(d)\, q^{m(n/d)} = \frac{1}{n} \sum_{d \mid n} \sum_{k \mid d} \mu(d/k)\, k\, q^{m(n/d)}.$$
Changing the order of summation gives us
$$\frac{1}{n} \sum_{k \mid n} \sum_{d \mid \frac{n}{k}} \mu(d)\, k\, q^{m\left(\frac{n}{kd}\right)}
= \sum_{k \mid n} \frac{k}{n} \sum_{d \mid \frac{n}{k}} \mu\!\left(\frac{n}{kd}\right) q^{m(d)}
= \sum_{k \mid n} \frac{1}{k} \sum_{d \mid k} \mu(k/d)\, q^{m(d)}
= \sum_{k \mid n} |\mathrm{rep}_k(C)| = |\mathrm{rep}(C)|,$$
where the last equality follows from Theorem 3.1. □

3.4 Selecting the representatives

A trivial way of selecting the representatives from the cyclic equivalence classes is by doing an exhaustive search through all the codewords in $C$.

We first need to check if the selected codeword is a cyclic shift of a previously accepted codeword. If it is, the codeword is rejected; otherwise the codeword belongs to $\mathrm{rep}(C)$ and we repeat the process with the next codeword in $C$. To find $\mathrm{rep}_n(C)$ we also need to check the cyclic order of each codeword.

Example 3.1 Consider the two linear cyclic codes $C_1$ and $C_2$ in Figure 3.1. Each row constitutes one cyclic equivalence class.

C1:
0000
2310 3102 1023 0231
4120 1204 2041 0412
1430 4301 3014 0143
3240 2403 4032 0324
3421 4213 2134 1342
1111
2222
3333
4444

C2:
0000
0104 1040 0401 4010
0203 2030 0302 3020
1144 1441 4411 4114
2233 2332 3322 3223
2431 4312 3124 1243
3421 4213 2134 1342

Figure 3.1: Two cyclic codes $C_1$ and $C_2$ of length 4 over GF(5). The non-zeroes of $C_1$ are $B(C_1) = \{\beta^0, \beta\}$ and the minimum distance is $d_{\min}(C_1) = 3$. The code $C_2$ has non-zeroes $B(C_2) = \{\beta^{-1}, \beta\}$ and minimum distance $d_{\min}(C_2) = 2$.

By writing out all codewords in this manner we can easily determine the cyclic equivalence classes and see which of the codewords have full cyclic order. For code $C_1$, one possible choice of representatives is
$$\mathrm{rep}_n(C_1) = \{2310, 4120, 1430, 3240, 3421\}$$
and the size of this set is $|\mathrm{rep}_n(C_1)| = 5$. For the second code, we can use
$$\mathrm{rep}_n(C_2) = \{0104, 0203, 1144, 2233, 2431, 3421\}$$

and the size of this set is $|\mathrm{rep}_n(C_2)| = 6$.

It is obvious that any algorithm based on exhaustive search is highly inefficient for codes of large size, and in order to formulate a more efficient algorithm we are forced to use another approach. The basic idea is to partition the code as
$$C = C_1 \cup C_2 \cup \cdots$$
in such a way that each subset $C_i$ is a union of cyclic equivalence classes
$$C_i = \bigcup_{c \in A_i} \bar{c}$$
for some partitioning $A_1, A_2, \ldots$ of $\mathrm{rep}(C)$. We can then determine $\mathrm{rep}(C_i)$ for each subset individually, and write
$$\mathrm{rep}(C) = \mathrm{rep}(C_1) \cup \mathrm{rep}(C_2) \cup \cdots.$$
It turns out that it is easy to find large sets $C_i$ in which it is very easy to determine $\mathrm{rep}(C_i)$. Therefore, by using this method of "divide-and-conquer", we can significantly reduce the complexity of finding $\mathrm{rep}(C)$.

Lemma 3.3 below shows one way to partition a code such that this can be accomplished, but we first need to introduce some notation. Given a linear cyclic code $C$ over GF($q$) we define $C_\beta$, where $\beta$ is an element of $B(C)$, to be the linear cyclic subcode of $C$ with parity-check polynomial $m_\beta(x)$. It follows directly that the size of $C_\beta$ is
$$|C_\beta| = q^{\deg m_\beta(x)}. \tag{3.2}$$
Let $C$ be any code, and let $C_1, C_2, \ldots, C_m$ be subcodes of $C$. If any codeword $c$ in $C$ can be uniquely expressed as
$$c = c_1 + c_2 + \cdots + c_m, \qquad c_i \in C_i,\ i = 1, 2, \ldots, m,$$
then we say that $C$ can be decomposed into a direct sum of $C_1, \ldots, C_m$, denoted by
$$C = \bigoplus_{i=1}^{m} C_i \triangleq C_1 \oplus C_2 \oplus \cdots \oplus C_m \triangleq \bigl\{ c_1 + c_2 + \cdots + c_m : c_i \in C_i,\ i = 1, 2, \ldots, m \bigr\}.$$
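Before turning to this decomposition, a brute-force sketch (not the thesis' algorithm) of the exhaustive method used in Example 3.1: the code $C_1$ is generated over GF(5) by the two rows 1111 and 1342 read off from Figure 3.1 (the choice of basis is our assumption; any two independent codewords of $C_1$ would do), and the script partitions it into cyclic equivalence classes and counts the representatives:

```python
def cyclic_class(c):
    n = len(c)
    return {tuple(c[-i:] + c[:-i]) for i in range(n)}

g1, g2 = (1, 1, 1, 1), (1, 3, 4, 2)
code = {tuple((a * x + b * y) % 5 for x, y in zip(g1, g2))
        for a in range(5) for b in range(5)}

seen, rep, rep_n = set(), [], []
for c in sorted(code):
    if c not in seen:
        cls = cyclic_class(c)
        seen |= cls
        rep.append(c)
        if len(cls) == len(c):          # full cyclic order
            rep_n.append(c)

print(len(code), len(rep), len(rep_n))   # 25 10 5, in agreement with Example 3.1
```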

Lemma 3.2 Any linear cyclic code $C$ with $B(C) = \{\beta_1, \beta_2, \ldots, \beta_s\}$ can be decomposed into the direct sum
$$C = C_{\beta_1} \oplus C_{\beta_2} \oplus \cdots \oplus C_{\beta_s}.$$
Proof: See Theorem 2 in Zierler [59]. □

If $C$ is a linear cyclic code, then for any subset $S = \{\beta_1, \beta_2, \ldots, \beta_m\}$ of $B(C)$ we let $C_S$ denote the linear cyclic subcode of $C$ having non-zeroes $S$, i.e. $B(C_S) = S$.

Let $C_S^{(*)}$ denote the set that contains all codewords that can be written as the sum of one non-zero codeword from each of the codes $C_{\beta_1}, C_{\beta_2}, \ldots, C_{\beta_m}$:
$$C_S^{(*)} \triangleq \bigoplus_{\beta \in S} C_\beta^* = C_{\beta_1}^* \oplus C_{\beta_2}^* \oplus \cdots \oplus C_{\beta_m}^* \subseteq C_S.$$
In general we use the star-notation (*) to indicate that the all-zero codeword is excluded from a code. Note that if the size of $S$ is strictly greater than one, then $C_S^{(*)}$ is a proper subset of $C_S^*$ because
$$C_S^{(*)} \subset C_S^* \triangleq \bigl( C_{\beta_1} \oplus C_{\beta_2} \oplus \cdots \oplus C_{\beta_m} \bigr) \setminus \{0^n\}.$$
We define $C_{\varnothing}^{(*)}$ to be the set containing the all-zero codeword only. This might, at a first glance, seem like a very counter-intuitive extension of the definition. Later it will become evident that this indeed is the natural extension.

Lemma 3.3 If $C$ is a linear cyclic code of length $n$ with $B(C) = \{\beta_1, \ldots, \beta_s\}$ then $C$ may be partitioned into $2^s$ cyclic subcodes in the following manner:
$$C = \bigcup_{S \subseteq B(C)} C_S^{(*)}.$$
Proof: The fact that the subcodes are closed under the operation of cyclic shifts follows from the additivity of the shift operator. From Lemma 3.2 we have
$$C = \bigoplus_{i=1}^{s} C_{\beta_i} = C_{\beta_1} \oplus C_{\beta_2} \oplus \cdots \oplus C_{\beta_s}$$
and we can uniquely write any codeword $c$ as $c = c_1 + c_2 + \cdots + c_s$ where $c_i \in C_{\beta_i}$, $1 \leq i \leq s$. Let $c_{i_1}, c_{i_2}, \ldots, c_{i_m}$ be the non-zero codewords in this sum. Then the set of indices $\{i_1, i_2, \ldots, i_m\}$ uniquely identifies the partition that $c$ belongs to. □

The following result is an immediate consequence of Lemma 3.3:

Corollary 3.1 If $C$ is a linear cyclic code and $S$ is a subset of $B(C)$ then
$$C = C_S \oplus C_{B(C) \setminus S} = \left( \bigcup_{S' \subseteq S} C_{S'}^{(*)} \right) \oplus C_{B(C) \setminus S}.$$
In order to determine the cyclic order of the codewords in $C_\beta^*$ or $C_S^{(*)}$ we apply the following lemma:

Lemma 3.4 Let $C$ be a linear cyclic code and let $S = \{\beta_1, \beta_2, \ldots, \beta_m\}$ be a subset of $B(C)$. Then the codewords in $C_\beta^*$ have cyclic order $|\beta|$ and the codewords in $C_S^{(*)}$ have cyclic order
$$\mathrm{lcm}(S) \triangleq \mathrm{lcm}\bigl( |\beta_1|, |\beta_2|, \ldots, |\beta_m| \bigr).$$
Proof: See the corollary of Lemma 6 in Zierler [59]. □

Example 3.2 If we partition the code $C_1$ from Example 3.1 according to Lemma 3.3, we get
$$C_1 = \{0^n\} \cup C_{\beta^0}^* \cup C_\beta^* \cup C_{\{\beta^0, \beta\}}^{(*)}
= \{0000\} \cup \{1111, 2222, 3333, 4444\} \cup \{3421, 4213, 2134, 1342\} \cup \{2310, 3102, 1023, 0231, 4120, 1204, 2041, 0412, 1430, 4301, 3014, 0143, 3240, 2403, 4032, 0324\}$$
and similarly for $C_2$,
$$C_2 = \{0^n\} \cup C_{\beta^{-1}}^* \cup C_\beta^* \cup C_{\{\beta^{-1}, \beta\}}^{(*)}
= \{0000\} \cup \{2431, 4312, 3124, 1243\} \cup \{3421, 4213, 2134, 1342\} \cup \{0104, 1040, 0401, 4010, 0203, 2030, 0302, 3020, 1144, 1441, 4411, 4114, 2233, 2332, 3322, 3223\}.$$
Note that the codewords in each partition have the same cyclic order.

After partitioning the code in the manner specified by Lemma 3.3, we know that all codewords in a subset $C_{\beta_1}^* \oplus C_{\beta_2}^* \oplus \cdots \oplus C_{\beta_m}^*$ have the same cyclic order and that all codewords in a certain cyclic equivalence class belong to the same subset. This implies that we can consider one subset at a time and select representatives from the cyclic equivalence classes contained therein.

The size of a subset $C_S^{(*)}$, where $S = \{\beta_1, \beta_2, \ldots, \beta_m\}$, is
$$\bigl| C_S^{(*)} \bigr| = \Bigl| \bigoplus_{\beta \in S} C_\beta^* \Bigr| = \bigl| C_{\beta_1}^* \bigr| \cdot \bigl| C_{\beta_2}^* \bigr| \cdots \bigl| C_{\beta_m}^* \bigr| = \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr)$$
and since all cyclic equivalence classes in $C_S^{(*)}$ have the same size, $\mathrm{lcm}(S)$, the number of equivalence classes in $C_S^{(*)}$ is
$$\bigl| \mathrm{rep}\bigl( C_S^{(*)} \bigr) \bigr| = \frac{1}{\mathrm{lcm}(S)} \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr). \tag{3.3}$$
Lemma 3.6 and Lemma 3.7 below provide us with methods of finding the set of representatives in $C_\beta^*$ for some $\beta$ in $B(C)$. These methods can then be used together with Lemma 3.8 to determine $\mathrm{rep}_n\bigl( C_S^{(*)} \bigr)$. In order to simplify the presentation, we use the trace representation of cyclic codes.

The $q$-ary trace of an element $\xi \in \mathrm{GF}(q^m)$ over GF($q$) is defined to be the sum
$$\mathrm{Tr}(\xi) \triangleq \sum_{i=0}^{m-1} \xi^{q^i},$$
where the summation is performed over GF($q^m$). The following lemma is a generalization of Theorem 6.5.1 in van Lint [21]:

Lemma 3.5 The linear cyclic $[n, k]$ code $C_\beta$ over GF($q$) can be written as
$$C_\beta = \Bigl\{ \bigl( \mathrm{Tr}(\lambda), \mathrm{Tr}(\lambda \beta^{-1}), \ldots, \mathrm{Tr}(\lambda \beta^{-(n-1)}) \bigr) : \lambda \in \mathrm{GF}(q^k) \Bigr\},$$
where $k = \deg m_\beta(x)$.
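Before the proof, a small numerical sketch (not from the thesis) of this trace representation for $q = 2$, $m = k = 3$, $n = 7$. GF(8) is built from the primitive polynomial $x^3 + x + 1$ (this particular choice is our assumption; any primitive cubic would do) and we take $\beta = \alpha$, the class of $x$, which is primitive:

```python
MOD = 0b1011                      # x^3 + x + 1

def gf_mul(a, b):
    """Multiply two GF(8) elements given as 3-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def gf_pow(a, e):
    """a^e for non-zero a; exponents are taken modulo the order 7 of alpha."""
    r = 1
    for _ in range(e % 7):
        r = gf_mul(r, a)
    return r

def trace(x):
    """Tr(x) = x + x^2 + x^4, which always lands in GF(2)."""
    x2 = gf_mul(x, x)
    return x ^ x2 ^ gf_mul(x2, x2)

beta, n = 0b010, 7
for lam in range(1, 8):
    word = [trace(gf_mul(lam, gf_pow(beta, -i))) for i in range(n)]
    print(lam, word)
# Every non-zero lam yields a cyclic shift of the same weight-4 word of period 7,
# i.e. the non-zero part of C_beta is a single cyclic equivalence class,
# in agreement with Lemma 3.7 below.
```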

Proof: Essentially the same proof as for Theorem 6.5.1 in van Lint [21], but we allow the cyclic order of $\beta$ to be a divisor of $n$ and not necessarily equal to $n$. □

Lemma 3.6 Consider the code $C_\beta^*$ of length $n$ over GF($q$) and let $m$ be the multiplicative order of $q$ modulo $n$. Then a possible choice for $\mathrm{rep}(C_\beta^*)$ is
$$\mathrm{rep}\bigl( C_\beta^* \bigr) = \Bigl\{ \bigl( \mathrm{Tr}(\alpha^i), \mathrm{Tr}(\alpha^i \beta^{-1}), \ldots, \mathrm{Tr}(\alpha^i \beta^{-(n-1)}) \bigr) : 0 \leq i < |\alpha| / |\beta| \Bigr\},$$
where $\alpha$ is a primitive element of GF($q^m$).

Proof: Consider the two distinct codewords
$$c_1 = \bigl( \mathrm{Tr}(\lambda_1), \mathrm{Tr}(\lambda_1 \beta^{-1}), \ldots, \mathrm{Tr}(\lambda_1 \beta^{-(n-1)}) \bigr), \qquad
c_2 = \bigl( \mathrm{Tr}(\lambda_2), \mathrm{Tr}(\lambda_2 \beta^{-1}), \ldots, \mathrm{Tr}(\lambda_2 \beta^{-(n-1)}) \bigr).$$
If $c_1$ is a cyclic shift of $c_2$, then for some integer $s$ we must have $\lambda_1 = \lambda_2 \beta^s$. Let $\lambda_1 = \alpha^{e_1}$, $\lambda_2 = \alpha^{e_2}$ and $\beta = \alpha^f$, where $e_1 \neq e_2$ and $0 \leq e_1, e_2 < |\alpha| / |\beta|$. The integer $f$ must be a multiple of $|\alpha| / |\beta|$. Now, $\lambda_1 = \lambda_2 \beta^s$ can be written as
$$e_1 \equiv e_2 + s f \pmod{|\alpha|}.$$
If this congruence holds modulo $|\alpha|$ it must also hold modulo $|\alpha| / |\beta|$ and, since $f$ is a multiple of $|\alpha| / |\beta|$, we have
$$e_1 \equiv e_2 \pmod{|\alpha| / |\beta|}.$$
The only solution is $e_1 = e_2$, which is a contradiction, and we have proved the lemma. □

When we have a code of primitive length and $\beta$ has order $n$, Lemma 3.6 may be stated in the following much simpler way:

Lemma 3.7 Consider the code $C_\beta^*$ of primitive length, i.e. $n = q^m - 1$, over GF($q$), where $m$ is the multiplicative order of $q$ modulo $n$. If $\beta$ has multiplicative order $n$ then $\mathrm{rep}(C_\beta^*)$ can be chosen to be any codeword in $C_\beta^*$.

Proof: Since $|\beta| = n$, all codewords in $C_\beta^*$ have full cyclic order, $q^m - 1$. From Lemma 3.1 it follows that the degree of $m_\beta(x)$ is $m$; hence the size of $C_\beta^*$ is $q^{\deg m_\beta(x)} - 1 = q^m - 1$, and we see that all non-zero codewords must belong to the same cyclic equivalence class. Hence, $C_\beta$ is a union of only two cyclic equivalence classes, $C_\beta = \{0^n\} \cup \bar{c}$, where $c \in C_\beta^*$, and any codeword from $C_\beta^*$ is a representative of cyclic order $n$. □

Lemma 3.8 Let $C$ be a linear cyclic code over GF($q$) and let $S = \{\beta_1, \ldots, \beta_m\}$ be a subset of $B(C)$. For the subcode $C_S^{(*)}$ we have
$$\mathrm{rep}\bigl( C_S^{(*)} \bigr) = \bigoplus_{\beta \in S} C_\beta' = C_{\beta_1}' \oplus C_{\beta_2}' \oplus \cdots \oplus C_{\beta_m}',$$
where
$$C_{\beta_i}' \triangleq \Bigl\{ S^j(c) : c \in \mathrm{rep}\bigl( C_{\beta_i}^* \bigr),\ 0 \leq j < \gcd\bigl( |\beta_i|, \mathrm{lcm}(|\beta_1|, \ldots, |\beta_{i-1}|) \bigr) \Bigr\}.$$
Proof: Using the fact that
$$\bigl| C_{\beta_i}' \bigr| = \bigl| \mathrm{rep}\bigl( C_{\beta_i}^* \bigr) \bigr| \cdot \gcd\bigl( |\beta_i|, \mathrm{lcm}(|\beta_1|, \ldots, |\beta_{i-1}|) \bigr)
= \bigl| C_{\beta_i}^* \bigr| \cdot \frac{\gcd\bigl( |\beta_i|, \mathrm{lcm}(|\beta_1|, \ldots, |\beta_{i-1}|) \bigr)}{|\beta_i|}
= \bigl| C_{\beta_i}^* \bigr| \cdot \frac{\mathrm{lcm}(|\beta_1|, \ldots, |\beta_{i-1}|)}{\mathrm{lcm}(|\beta_1|, \ldots, |\beta_i|)},$$
we note that
$$\Bigl| \bigoplus_{\beta \in S} C_\beta' \Bigr| = \prod_{\beta \in S} \bigl| C_\beta' \bigr| = \frac{\bigl| C_{\beta_1}^* \bigr| \cdot \bigl| C_{\beta_2}^* \bigr| \cdots \bigl| C_{\beta_m}^* \bigr|}{\mathrm{lcm}(|\beta_1|, \ldots, |\beta_m|)} = \frac{1}{\mathrm{lcm}(S)} \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr),$$
which is the size of $\mathrm{rep}\bigl( C_S^{(*)} \bigr)$ according to Equation 3.3. This shows that we have selected the correct number of codewords. We now show that all selected codewords are cyclically distinct by using a proof by induction over the number of non-zeroes, $m$. In the case $m = 1$, this lemma coincides with Lemma 3.6. Now, assume that we can determine $\mathrm{rep}\bigl( C_S^{(*)} \bigr)$ for any $S \subseteq B(C)$ of some fixed size and that we want to evaluate $\mathrm{rep}\bigl( C_{S \cup \{\beta\}}^{(*)} \bigr)$, where $\beta \in B(C) \setminus S$. We claim that
$$\mathrm{rep}\bigl( C_{S \cup \{\beta\}}^{(*)} \bigr) = \mathrm{rep}\bigl( C_S^{(*)} \bigr) \oplus \Bigl\{ S^j(c) : c \in \mathrm{rep}\bigl( C_\beta^* \bigr),\ 0 \leq j < \gcd(|\beta|, \mathrm{lcm}(S)) \Bigr\}.$$
Let $c_1$ and $c_2$ be two distinct codewords from $\mathrm{rep}\bigl( C_{S \cup \{\beta\}}^{(*)} \bigr)$. These codewords can be uniquely decomposed into $c_1 = c_1' + c_1''$ and $c_2 = c_2' + c_2''$, where
$$c_1', c_2' \in \mathrm{rep}\bigl( C_S^{(*)} \bigr) \qquad \text{and} \qquad c_1'', c_2'' \in \Bigl\{ S^j(c) : c \in \mathrm{rep}\bigl( C_\beta^* \bigr),\ 0 \leq j < \gcd(|\beta|, \mathrm{lcm}(S)) \Bigr\}.$$
If $c_1$ is a cyclic shift of $c_2$ then for some integer $i$ we must have
$$S^i(c_1') = c_2' \ \Rightarrow\ i \equiv 0 \pmod{\mathrm{lcm}(S)} \ \Rightarrow\ i \equiv 0 \pmod{\gcd(|\beta|, \mathrm{lcm}(S))}$$
and
$$S^i(c_1'') = c_2'' \ \Rightarrow\ i \equiv j \pmod{|\beta|} \ \Rightarrow\ i \equiv j \pmod{\gcd(|\beta|, \mathrm{lcm}(S))},$$
where $0 \leq j < \gcd(|\beta|, \mathrm{lcm}(S))$. This system of equations only has a solution when $j = 0$, which is
$$i \equiv 0 \pmod{\mathrm{lcm}(|\beta|, \mathrm{lcm}(S))}$$
according to the Chinese Remainder Theorem (see e.g. Niven, Zuckerman and Montgomery [40, Th. 2.18]). Hence, all codewords are cyclically distinct and the lemma is proved. □

We are soon ready to state the first version of the algorithm, but first we need the following algorithm, which returns the power set $2^B$ of a set $B$, i.e. the set of all subsets of $B$.

Algorithm 3.1 For arbitrary finite sets $B$ and $S$, let the function $f$ be recursively defined as follows:
$$f(B, S) = \begin{cases} f\bigl(B \setminus \{\beta\},\, S\bigr) \cup f\bigl(B \setminus \{\beta\},\, S \cup \{\beta\}\bigr), & \text{if } B \neq \varnothing, \\ \{S\}, & \text{otherwise,} \end{cases}$$
where $\beta$ is an arbitrary element in $B$.
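A direct transcription of Algorithm 3.1 (a sketch, not the author's code; the function name is ours). It reproduces the expansion shown in Example 3.3 below:

```python
def powerset(B, S=frozenset()):
    """f(B, S) from Algorithm 3.1: all subsets of B, each augmented by the fixed set S."""
    if not B:
        return {S}
    B = set(B)
    beta = B.pop()            # an arbitrary element of B; B is now B \ {beta}
    return powerset(B, S) | powerset(B, S | {beta})

print(powerset({1, 2}))
# {frozenset(), frozenset({2}), frozenset({1}), frozenset({1, 2})}
```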

Theorem 3.3 For any set $B$, we have
$$f(B, \varnothing) = 2^B.$$
Proof: We prove this by induction over the size of $B$. It is easy to see that $f(B, \varnothing) = \{\varnothing\} = 2^B$ if $B = \varnothing$. Now assume that $f(B, \varnothing)$ returns the power set of $B$ for all sets $B$ up to some fixed size. Consider the set $B \cup \{\beta\}$ where $\beta \notin B$. It is obvious that the power set of $B \cup \{\beta\}$ is
$$2^{B \cup \{\beta\}} = 2^B \cup \bigl\{ a \cup \{\beta\} : a \in 2^B \bigr\}.$$
We see that $f(B \cup \{\beta\}, \varnothing) = f(B, \varnothing) \cup f(B, \{\beta\})$ and by the induction hypothesis we have $f(B, \varnothing) = 2^B$. What remains to be proved is that $f(B, \{\beta\})$ equals $\{ a \cup \{\beta\} : a \in 2^B \}$. This follows directly from the definition of $f$, since $f(B, \{\beta\})$ returns the same set as $f(B, \varnothing)$ except that each subset in $f(B, \{\beta\})$ also contains $\beta$. □

Example 3.3
$$f(\{1, 2\}, \varnothing) = f(\{2\}, \varnothing) \cup f(\{2\}, \{1\}) = f(\varnothing, \varnothing) \cup f(\varnothing, \{2\}) \cup f(\varnothing, \{1\}) \cup f(\varnothing, \{1, 2\}) = \bigl\{\varnothing, \{2\}, \{1\}, \{1, 2\}\bigr\} = 2^{\{1,2\}}.$$
A direct consequence of Lemma 3.3 is the relation
$$\mathrm{rep}(C) = \bigcup_{S \subseteq B(C)} \mathrm{rep}\bigl( C_S^{(*)} \bigr),$$
where $S$ ranges over the power set of $B(C)$. By applying Equation 3.3 we get
$$|\mathrm{rep}(C)| = \sum_{S \subseteq B(C)} \frac{1}{\mathrm{lcm}(S)} \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr). \tag{3.4}$$
It follows that if we let $B$ in Algorithm 3.1 be the set of non-zeroes $B(C)$ for some code $C$, then the representatives can be found using the following algorithm:

Algorithm 3.2 Let the function $f$ be recursively defined by
$$f(B_C, S) = \begin{cases} f\bigl(B_C \setminus \{\beta\},\, S\bigr) \cup f\bigl(B_C \setminus \{\beta\},\, S \cup \{\beta\}\bigr), & \text{if } B_C \neq \varnothing, \\ \mathrm{rep}\bigl( C_S^{(*)} \bigr), & \text{otherwise,} \end{cases}$$
where $\beta$ is any non-zero in $B_C$.

Theorem 3.4 For any linear cyclic code $C$, we have
$$f\bigl(B(C), \varnothing\bigr) = \mathrm{rep}(C).$$
Note that to find the representatives for the cyclic equivalence classes of size $n$, we replace $\mathrm{rep}\bigl( C_S^{(*)} \bigr)$ by $\mathrm{rep}_n\bigl( C_S^{(*)} \bigr)$ in the function $f$ of Algorithm 3.2. We have
$$\mathrm{rep}_n(C) = \bigcup_{\substack{S \subseteq B(C) \\ \mathrm{lcm}(S) = n}} \mathrm{rep}\bigl( C_S^{(*)} \bigr) \tag{3.5}$$
and this simplifies Equation 3.4 to
$$|\mathrm{rep}_n(C)| = \frac{1}{n} \sum_{\substack{S \subseteq B(C) \\ \mathrm{lcm}(S) = n}} \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr).$$
Example 3.4 We now show how to use the partitioning in Lemma 3.3 together with Lemma 3.8 in determining $\mathrm{rep}_n(C)$ for the codes in Example 3.1. Remember that each subset $S$ of the non-zeroes $B(C)$ with $\mathrm{lcm}(S) = n$ corresponds to a set of codewords of full cyclic order.

The code $C_1$ has $B(C_1) = \{\beta^0, \beta\}$, where the multiplicative order of $\beta^0$ is 1 and the order of $\beta$ is 4; hence the subsets of $B(C_1)$ corresponding to subsets containing codewords of full cyclic order are $\{\beta\}$ and $\{\beta^0, \beta\}$. According to Equation 3.5 we have
$$\mathrm{rep}_n(C_1) = \mathrm{rep}\bigl( C_\beta^* \bigr) \cup \mathrm{rep}\bigl( C_{\{\beta^0, \beta\}}^{(*)} \bigr).$$
From Lemma 3.6 we have
$$\mathrm{rep}\bigl( C_{\beta^0}^* \bigr) = \bigl\{ (\alpha^i, \alpha^i, \alpha^i, \alpha^i) : 0 \leq i < 4 \bigr\} = \{1111, 2222, 3333, 4444\}$$

and
$$\mathrm{rep}\bigl( C_\beta^* \bigr) = \bigl\{ (\alpha^i, \alpha^i \beta^{-1}, \alpha^i \beta^{-2}, \alpha^i \beta^{-3}) : 0 \leq i < 1 \bigr\} = \{1342\}.$$
Applying Lemma 3.8 gives us
$$\mathrm{rep}\bigl( C_{\{\beta^0, \beta\}}^{(*)} \bigr) = \bigl\{ S^i(c) : c \in \{1111, 2222, 3333, 4444\},\ 0 \leq i < 1 \bigr\} \oplus \bigl\{ S^i(c) : c \in \{1342\},\ 0 \leq i < 1 \bigr\}
= \{1111, 2222, 3333, 4444\} \oplus \{1342\} = \{2403, 3014, 0231, 4120\}.$$
The resulting set of representatives from $C_1$ is
$$\mathrm{rep}_n(C_1) = \mathrm{rep}\bigl( C_\beta^* \bigr) \cup \mathrm{rep}\bigl( C_{\{\beta^0, \beta\}}^{(*)} \bigr) = \{1342, 2403, 3014, 0231, 4120\}.$$
In the case of the code $C_2$ both non-zeroes $\beta$ and $\beta^{-1}$ have order $n$ and we have
$$\mathrm{rep}_n(C_2) = \mathrm{rep}\bigl( C_\beta^* \bigr) \cup \mathrm{rep}\bigl( C_{\beta^{-1}}^* \bigr) \cup \mathrm{rep}\bigl( C_{\{\beta, \beta^{-1}\}}^{(*)} \bigr).$$
From Lemma 3.6 we have
$$\mathrm{rep}\bigl( C_{\beta^{-1}}^* \bigr) = \bigl\{ (\alpha^i, \alpha^i \beta, \alpha^i \beta^2, \alpha^i \beta^3) : 0 \leq i < 1 \bigr\} = \{1243\}.$$
Applying Lemma 3.8 gives us
$$\mathrm{rep}\bigl( C_{\{\beta, \beta^{-1}\}}^{(*)} \bigr) = \bigl\{ S^i(c) : c \in \{1243\},\ 0 \leq i < 1 \bigr\} \oplus \bigl\{ S^i(c) : c \in \{1342\},\ 0 \leq i < 4 \bigr\}
= \{1243\} \oplus \{1342, 2134, 4213, 3421\} = \{2030, 3322, 0401, 4114\}.$$
The resulting set of representatives is
$$\mathrm{rep}_n(C_2) = \{1342, 1243, 2030, 3322, 0401, 4114\}.$$

The obvious drawback with Algorithm 3.2 is its complexity. Since the number of subsets that need to be considered grows exponentially with the number of non-zeroes, i.e. as $2^{|B(C)|}$, the algorithm quickly becomes intractable for codes of large dimension. In order to reduce the complexity in some special cases we apply the following two lemmas:

Lemma 3.9 Let $C$ be a linear cyclic code of length $n$ over GF($q$) and let $S$ and $B$ be two disjoint subsets of $B(C)$. If $\mathrm{lcm}(S) = n$ then
$$\mathrm{rep}\bigl( C_S^{(*)} \oplus C_B \bigr) = \mathrm{rep}\bigl( C_S^{(*)} \bigr) \oplus C_B.$$
Proof: We first note that
$$\bigl| \mathrm{rep}\bigl( C_S^{(*)} \oplus C_B \bigr) \bigr|
= \frac{1}{n} \Bigl( \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr) \Bigr) \cdot \prod_{\beta \in B} q^{\deg m_\beta(x)}
= \Bigl( \frac{1}{n} \prod_{\beta \in S} \bigl( q^{\deg m_\beta(x)} - 1 \bigr) \Bigr) \cdot \prod_{\beta \in B} q^{\deg m_\beta(x)}
= \bigl| \mathrm{rep}\bigl( C_S^{(*)} \bigr) \bigr| \cdot \bigl| C_B \bigr| = \bigl| \mathrm{rep}\bigl( C_S^{(*)} \bigr) \oplus C_B \bigr|.$$
Let $c_1$ and $c_2$ be any two distinct codewords in $\mathrm{rep}\bigl( C_S^{(*)} \bigr) \oplus C_B$. The codewords can be uniquely decomposed into $c_1 = c_1' + c_1''$ and $c_2 = c_2' + c_2''$, where $c_1', c_2' \in \mathrm{rep}\bigl( C_S^{(*)} \bigr)$ and $c_1'', c_2'' \in C_B$. If $c_1$ and $c_2$ are not cyclically distinct then there exists an integer $j$ such that $S^j(c_1) = c_2$, or equivalently $S^j(c_1') + S^j(c_1'') = c_2' + c_2''$. But since the decomposition is unique we have $S^j(c_1') = c_2'$. Since the cyclic order of $c_1'$ and $c_2'$ is $n$ and since all codewords in $\mathrm{rep}\bigl( C_S^{(*)} \bigr)$ are cyclically distinct, we see that $j$ must be a multiple of $n$. This implies that $c_1$ and $c_2$ are equal, which is a contradiction. Hence, the assertion holds. □

Lemma 3.10 Consider the linear cyclic code $C$ of length $n$. If at least one of the non-zeroes $\beta \in B(C)$ has multiplicative order $n$, then $\mathrm{rep}_n(C)$ can be partitioned into two disjoint sets as
$$\mathrm{rep}_n(C) = \bigl( \mathrm{rep}_n( C_\beta^* ) \oplus C_{B(C) \setminus \{\beta\}} \bigr) \cup \mathrm{rep}_n\bigl( C_{B(C) \setminus \{\beta\}} \bigr).$$
Proof: If we let $S = \{\beta\}$ and apply Corollary 3.1, we get
$$C = \bigl( \{0^n\} \oplus C_{B(C) \setminus \{\beta\}} \bigr) \cup \bigl( C_\beta^* \oplus C_{B(C) \setminus \{\beta\}} \bigr) = \bigl( C_\beta^* \oplus C_{B(C) \setminus \{\beta\}} \bigr) \cup C_{B(C) \setminus \{\beta\}}$$
and, since the two sets $C_\beta^* \oplus C_{B(C) \setminus \{\beta\}}$ and $C_{B(C) \setminus \{\beta\}}$ are closed under cyclic shifts, we also have
$$\mathrm{rep}_n(C) = \mathrm{rep}_n\bigl( C_\beta^* \oplus C_{B(C) \setminus \{\beta\}} \bigr) \cup \mathrm{rep}_n\bigl( C_{B(C) \setminus \{\beta\}} \bigr),$$
and by applying Lemma 3.9 to the right-hand side of the last equation we get
$$\mathrm{rep}_n(C) = \bigl( \mathrm{rep}_n( C_\beta^* ) \oplus C_{B(C) \setminus \{\beta\}} \bigr) \cup \mathrm{rep}_n\bigl( C_{B(C) \setminus \{\beta\}} \bigr). \qquad \Box$$

If the order of at least one non-zero $\beta$ of the code $C$ is $n$ then, according to Lemma 3.1, the degree of the corresponding minimal polynomial is $m$. This implies that the size of $C_\beta$ is $q^m$. Hence, if we apply Lemma 3.10,
$$\mathrm{rep}(C) = \bigl( \mathrm{rep}( C_\beta^* ) \oplus C_{B(C) \setminus \{\beta\}} \bigr) \cup \mathrm{rep}\bigl( C_{B(C) \setminus \{\beta\}} \bigr),$$
and use Lemma 3.6 to determine $\mathrm{rep}(C_\beta^*)$, we see that we can extract
$$\bigl| \mathrm{rep}(C_\beta^*) \bigr| \cdot \bigl| C_{B(C) \setminus \{\beta\}} \bigr| = \frac{q^m - 1}{n} \cdot q^{k-m} = \frac{q^k - q^{k-m}}{n}$$
representatives of order $n$ from $\mathrm{rep}(C)$.

If yet another element (different from $\beta$) in $B(C)$ has order $n$ then, by using exactly the same arguments as above, we can in the next step extract another $(q^{k-m} - q^{k-2m})/n$ codewords of order $n$ from $\mathrm{rep}(C)$, resulting in a total of $(q^k - q^{k-2m})/n$ representatives. From this it follows that if $v$ non-zeroes have multiplicative order $n$ then
$$|\mathrm{rep}_n(C)| \geq \frac{q^k - q^{k-vm}}{n} \tag{3.6}$$
and if all non-zeroes have multiplicative order $n$ then
$$|\mathrm{rep}_n(C)| = \frac{q^k - 1}{n}. \tag{3.7}$$

The last equation of course also constitutes a trivial upper bound on the size of $\mathrm{rep}_n(C)$. We have now arrived at the following algorithm:

Let $C$ be a linear cyclic code of length $n$ over GF($q$) and let $m$ be the multiplicative order of $q$ modulo $n$. If $B(C)$ contains $v$ elements of multiplicative order $n$ then the following algorithm gives us a subset $C_{\mathrm{rep}_n}$ of $\mathrm{rep}_n(C)$ containing $(q^k - q^{k-vm})/n$ representatives.

Algorithm 3.3 Let $C$ be any linear cyclic code of length $n$.
1. Let $C_{\mathrm{rep}_n} \leftarrow \varnothing$ and let $B_C \leftarrow B(C)$.
2. If there exists no element of order $n$ in $B_C$ then end the algorithm.
3. Select an element $\beta$ of order $n$ from $B_C$.
4. Let $C_{\mathrm{rep}_n} \leftarrow C_{\mathrm{rep}_n} \cup \bigl( \mathrm{rep}_n(C_\beta^*) \oplus C_{B_C \setminus \{\beta\}} \bigr)$ and let $B_C \leftarrow B_C \setminus \{\beta\}$.
5. Go to line 2.

Theorem 3.5 We have
$$C_{\mathrm{rep}_n} \subseteq \mathrm{rep}_n(C).$$
This method, using the non-zeroes of order $n$ to find a large set of representatives of order $n$, has previously been used in the literature for codes of primitive length, i.e. when $n = q^m - 1$. One example is Reed [42], where it is used to construct frequency-hopping sequences for asynchronous transmission.

To illustrate how Algorithm 3.3 can be used to find $\mathrm{rep}_n(C)$, consider the following example:

Example 3.5 We return to the codes in Example 3.1. First we consider $C_1$. Note that one of the non-zeroes, $\beta$, has multiplicative order $n = 4$ and the other, $\beta^0$, has order 1:
$$\mathrm{rep}_n(C_1) = \bigl( \mathrm{rep}_n(C_\beta^*) \oplus C_{\beta^0} \bigr) \cup \mathrm{rep}_n(C_{\beta^0})
= \bigl( \{1342\} \oplus \{0000, 1111, 2222, 3333, 4444\} \bigr) \cup \varnothing
= \{1342, 2403, 3014, 4120, 0231\}.$$
Similarly for $C_2$, but in this case both non-zeroes, $\beta^{-1}$ and $\beta$, have multiplicative order $n$:
$$\mathrm{rep}_n(C_2) = \bigl( \mathrm{rep}_n(C_{\beta^{-1}}^*) \oplus C_\beta \bigr) \cup \mathrm{rep}_n(C_\beta)
= \bigl( \{1243\} \oplus \{0000, 3421, 4213, 2134, 1342\} \bigr) \cup \{1342\}
= \{1243, 4114, 0401, 3322, 2030, 1342\}.$$

[Figure 3.2: An illustration of how Algorithm 3.2 works by recursively considering all subsets of $B(C) = \{\beta_1, \beta_2, \beta_3\}$: a binary tree whose depth is determined by the size of $B_C$, whose internal nodes are labelled by the current set $S$, and whose leaves are the subcodes $\{0^n\}$, $C_{\beta_3}^*$, $C_{\beta_2}^*$, $C_{\{\beta_2,\beta_3\}}^{(*)}$, $C_{\beta_1}^*$, $C_{\{\beta_1,\beta_3\}}^{(*)}$, $C_{\{\beta_1,\beta_2\}}^{(*)}$ and $C_{\{\beta_1,\beta_2,\beta_3\}}^{(*)}$. The algorithm starts at the root and in each step of the recursion we go one step down in the tree.]

Algorithm 3.3 is very good if we are only interested in finding a large subset of $\mathrm{rep}_n(C)$. But in order to find a more efficient algorithm in the general case we look at Figure 3.2, which is an illustration of how Algorithm 3.2

recursively searches through all subsets of the set of non-zeroes. In this case, we have a code with three non-zeroes $B(C) = \{\beta_1, \beta_2, \beta_3\}$. For each recursion we go one step further down the tree. We make the important observation that if at some node in the tree, for instance at $S = \{\beta_1, \beta_2\}$, we have $\mathrm{lcm}(S) = n$, then all leaves of the subtree having $S$ as a root contain only codewords of full cyclic order. This follows from the fact that whenever $\mathrm{lcm}(S)$ is equal to $n$ then all codewords in $C_S^{(*)} = C_{\{\beta_1, \beta_2\}}^{(*)}$ have full cyclic order and all leaves under the node $S$ contain codes of the form
$$C_S^{(*)} \oplus C_{S'}^{(*)}, \tag{3.8}$$
where $S'$ is some subset of $B_C$. From Lemma 6 in Zierler [59] it follows that the codewords defined by Equation 3.8 all have cyclic order $n$. Hence, we need not search any further down this subtree. We note that the union of the codewords at the leaves in this subtree can be written as
$$C_{\{\beta_1, \beta_2\}}^{(*)} \cup \bigl( C_{\{\beta_1, \beta_2\}}^{(*)} \oplus C_{\beta_3}^* \bigr) = C_{\{\beta_1, \beta_2\}}^{(*)} \oplus C_{\beta_3},$$
or, in a more general form, $C_S^{(*)} \oplus C_{B_C}$. From Lemma 3.9 it follows that
$$\mathrm{rep}_n\bigl( C_S^{(*)} \oplus C_{B_C} \bigr) = \mathrm{rep}_n\bigl( C_S^{(*)} \bigr) \oplus C_{B_C}.$$
We can now formulate one final version of the algorithm as follows:

Algorithm 3.4 Let the function $f$ be recursively defined by
$$f(B_C, S) = \begin{cases} \mathrm{rep}\bigl( C_S^{(*)} \bigr), & \text{if } B_C = \varnothing, \\ \mathrm{rep}\bigl( C_S^{(*)} \bigr) \oplus C_{B_C}, & \text{if } \mathrm{lcm}(S) = n, \\ f\bigl( B_C \setminus \{\beta\},\, S \cup \{\beta\} \bigr) \cup f\bigl( B_C \setminus \{\beta\},\, S \bigr), & \text{otherwise,} \end{cases}$$
where $\beta$ is an element in $B_C$ that maximizes $\mathrm{lcm}(S \cup \{\beta\})$.

Theorem 3.6 For any linear cyclic code $C$ we have
$$f\bigl( B(C), \varnothing \bigr) = \mathrm{rep}(C).$$

Note that we have a different criterion for selecting the non-zero $\beta$ in this algorithm compared to Algorithm 3.2. Previously we chose any non-zero $\beta$, but now we use the heuristic rule to always choose a non-zero $\beta$ that maximizes $\mathrm{lcm}(S \cup \{\beta\})$. This has the effect that all non-zeroes of order $n$ are chosen first and that the branches, in most cases, will be cut as high up in the tree as possible, hence reducing unnecessary branching.

If we are only interested in $\mathrm{rep}_n(C)$ we can further modify Algorithm 3.4, since we need not search through subtrees that do not contain any codewords of full cyclic order:

Algorithm 3.5 Let the function $f$ be recursively defined by
$$f(B_C, S) = \begin{cases} \varnothing, & \text{if } \mathrm{lcm}(B_C \cup S) < n, \\ \mathrm{rep}\bigl( C_S^{(*)} \bigr) \oplus C_{B_C}, & \text{if } \mathrm{lcm}(S) = n, \\ f\bigl( B_C \setminus \{\beta\},\, S \cup \{\beta\} \bigr) \cup f\bigl( B_C \setminus \{\beta\},\, S \bigr), & \text{otherwise,} \end{cases}$$
where $\beta$ is an element in $B_C$ that maximizes $\mathrm{lcm}(S \cup \{\beta\})$.

Theorem 3.7 For any linear cyclic code $C$ we have
$$f\bigl( B(C), \varnothing \bigr) = \mathrm{rep}_n(C).$$
The complexity of Algorithms 3.4 and 3.5 can be estimated by reviewing the arguments leading to Algorithm 3.3, which show that for each non-zero of order $n$ the complexity is approximately reduced by a factor of two. Each additional subset containing non-zeroes with a least common multiple of their orders equal to $n$ reduces the complexity further.

Example 3.6 Consider the vector space GF(2)$^{63}$, i.e. the set of all binary vectors of length $n = 63$. We can view this space as a $[63, 63]$ code $C$ over GF(2) with parity-check polynomial $h(x) = x^{63} - 1$. From Table 3.1 we see that this corresponds to a code with
$$B(C) = \{ \beta^0, \beta, \beta^3, \beta^5, \beta^7, \beta^9, \beta^{11}, \beta^{13}, \beta^{15}, \beta^{21}, \beta^{23}, \beta^{27}, \beta^{31} \}.$$

beta^j   m_{beta^j}(x)              deg m_{beta^j}(x)   |beta^j|
beta^0   x + 1                              1              1
beta     x^6 + x^4 + x^3 + x + 1            6             63
beta^3   x^6 + x^5 + x^4 + x^2 + 1          6             21
beta^5   x^6 + x + 1                        6             63
beta^7   x^6 + x^3 + 1                      6              9
beta^9   x^3 + x + 1                        3              7
beta^11  x^6 + x^5 + x^2 + x + 1            6             63
beta^13  x^6 + x^5 + x^4 + x + 1            6             63
beta^15  x^6 + x^4 + x^2 + x + 1            6             21
beta^21  x^2 + x + 1                        2              3
beta^23  x^6 + x^5 + 1                      6             63
beta^27  x^3 + x^2 + 1                      3              7
beta^31  x^6 + x^5 + x^3 + x^2 + 1          6             63

Table 3.1: The irreducible factors of $x^{63} - 1$ over GF(2).

Now, assume that we are only interested in the representatives of full cyclic order. To determine the size of $\mathrm{rep}_n(C)$ we use Theorem 3.1:
$$|\mathrm{rep}_n(C)| = \frac{1}{n} \sum_{d \mid n} \mu(n/d)\, q^{m(d)} = \frac{2^{63} - 2^{21} - 2^{9} + 2^{3}}{63} = 146\,402\,730\,743\,693\,304 \approx 1.5 \cdot 10^{17}.$$
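A quick numerical check of this count (a sketch, not from the thesis). Since $h(x) = x^{63} - 1$, we have $m(d) = \deg \gcd(h(x), x^d - 1) = d$ for every divisor $d$ of 63, so the Theorem 3.1 sum can be evaluated directly:

```python
def mobius(n):
    """Moebius function via trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:          # squared prime factor
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

n, q = 63, 2
count = sum(mobius(n // d) * q**d for d in divisors(n)) // n
print(count)                                           # 146402730743693304
print((q**63 - q**21 - q**9 + q**3) // 63 == count)    # True
```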

If we use Algorithm 3.2 we see that we need to check $2^{|B(C)|} = 2^{13} = 8192$ subsets of $B(C)$. Instead of using this algorithm, we can use Algorithm 3.3 to find a large subset of $\mathrm{rep}_n(C)$. The non-zeroes of order 63 are
$$\{ \beta, \beta^5, \beta^{11}, \beta^{13}, \beta^{23}, \beta^{31} \}$$
and we get the following set of representatives:
$$C_{\mathrm{rep}_n} = \bigl( \mathrm{rep}(C_\beta^*) \oplus C_{B(C) \setminus \{\beta\}} \bigr)
\cup \bigl( \mathrm{rep}(C_{\beta^5}^*) \oplus C_{B(C) \setminus \{\beta, \beta^5\}} \bigr)
\cup \bigl( \mathrm{rep}(C_{\beta^{11}}^*) \oplus C_{B(C) \setminus \{\beta, \beta^5, \beta^{11}\}} \bigr)
\cup \bigl( \mathrm{rep}(C_{\beta^{13}}^*) \oplus C_{B(C) \setminus \{\beta, \beta^5, \beta^{11}, \beta^{13}\}} \bigr)
\cup \bigl( \mathrm{rep}(C_{\beta^{23}}^*) \oplus C_{B(C) \setminus \{\beta, \beta^5, \beta^{11}, \beta^{13}, \beta^{23}\}} \bigr)
\cup \bigl( \mathrm{rep}(C_{\beta^{31}}^*) \oplus C_{B(C) \setminus \{\beta, \beta^5, \beta^{11}, \beta^{13}, \beta^{23}, \beta^{31}\}} \bigr).$$
Since $C$ has 6 non-zeroes of order $n$, we have according to Equation 3.6
$$|C_{\mathrm{rep}_n}| = 2^{57} + 2^{51} + 2^{45} + 2^{39} + 2^{33} + 2^{27} = \frac{2^{63} - 2^{63 - 6 \cdot 6}}{63} = 146\,402\,730\,741\,596\,160$$
and we are only missing $|\mathrm{rep}_n(C)| - |C_{\mathrm{rep}_n}| = 2\,097\,144$ representatives.

The remaining representatives are contained in the code $C'$ defined by the non-zeroes $B(C') = \{ \beta^0, \beta^3, \beta^7, \beta^9, \beta^{15}, \beta^{21}, \beta^{27} \}$. Using Algorithm 3.2 on this code requires searching through $2^{|B(C')|} = 2^7 = 128$ subsets, which is substantially fewer than the original 8192 subsets. But we can do even better if we instead use Algorithm 3.4:
$$C_{\mathrm{rep}_n} \leftarrow C_{\mathrm{rep}_n}
\cup \bigl( \mathrm{rep}\bigl( C_{\{\beta^3, \beta^7\}}^{(*)} \bigr) \oplus C_{\{\beta^0, \beta^9, \beta^{15}, \beta^{21}, \beta^{27}\}} \bigr)
\cup \bigl( \mathrm{rep}\bigl( C_{\{\beta^{15}, \beta^7\}}^{(*)} \bigr) \oplus C_{\{\beta^0, \beta^9, \beta^{21}, \beta^{27}\}} \bigr)
\cup \bigl( \mathrm{rep}\bigl( C_{\{\beta^7, \beta^9\}}^{(*)} \bigr) \oplus C_{\{\beta^0, \beta^{21}, \beta^{27}\}} \bigr)
\cup \bigl( \mathrm{rep}\bigl( C_{\{\beta^7, \beta^{27}\}}^{(*)} \bigr) \oplus C_{\{\beta^0, \beta^{21}\}} \bigr).$$
Of course, applying Algorithm 3.4 from the beginning would have been just as easy and would have resulted in the same set of representatives.

Now, as a small check, we can calculate the size of $C_{\mathrm{rep}_n}$ and compare it with the size of $\mathrm{rep}_n(C)$:
$$|C_{\mathrm{rep}_n}| = \frac{2^{63} - 2^{63 - 6 \cdot 6}}{63} + 63 \cdot 2^{15} + 63 \cdot 2^{9} + 7 \cdot 2^{6} + 7 \cdot 2^{3}
= 146\,402\,730\,741\,596\,160 + 2\,064\,384 + 32\,256 + 448 + 56 = 146\,402\,730\,743\,693\,304 = |\mathrm{rep}_n(C)|.$$
As we have seen in the last example, it is quite easy to get a relatively compact, explicit expression for $\mathrm{rep}_n(C)$ even when the code is large.

3.5 Summary

We have presented a number of different methods of partitioning a linear cyclic code in ways that aid us in the search of representatives for all cyclic

equivalence classes. The basic method that we use to partition the code has the drawback that the number of subcodes that needs to be considered grows exponentially with the number of irreducible factors in the parity-check polynomial. Careful observations show that in many cases it is possible to reduce the complexity substantially.

Algorithm 3.3 and a method somewhat similar to Algorithm 3.2 are used in Allard, Shiva and Tavares [2] to determine $\mathrm{rep}(C)$ for binary linear cyclic codes $C$. By further refining and generalizing these algorithms, we have arrived at a more systematic and efficient method of determining both $\mathrm{rep}(C)$ and $\mathrm{rep}_n(C)$ for any linear cyclic code $C$.


Chapter 4. Sequences for direct-sequence modulation

The problem of code division is to write a duet for two tenors, who will be occupying the same region in time, space and frequency, in such a way that a listener can choose to follow one or the other without getting confused.
Solomon W. Golomb

4.1 Introduction

In this chapter we point out the relationship that exists between the representatives of the cyclic equivalence classes of certain binary linear cyclic codes and some families of well-known pseudonoise sequences. Families of sequences derived in this way are often referred to as "linear families". Using extensive computer search, we have also found a new family of sequences which compares favorably to previously known families of sequences.

4.2 Applications

Pseudonoise (PN) or, as they are sometimes referred to, pseudorandom sequences can be used in direct-sequence spread-spectrum systems. The rationale for spreading the spectrum of the transmitted information may in some military systems be that we want to transmit information in such a way that we achieve a low probability of intercept (LPI). If we use a spread-spectrum system, the surveillance receiver needs to monitor a large frequency band and the power density of the signal to be detected is lowered. Another military application is anti-jamming (AJ). By spreading the transmission spectrum, we force the jammer to spread his available transmission power over a large frequency band. Another advantage is that the vulnerability against tone-jamming is also decreased.

A more recent, and maybe more peaceful, application is code-division multiple-access (CDMA). One spreading sequence is assigned to each transmitter and by keeping the mutual correlation between the PN sequences as low as possible (quasi-orthogonal) we can distinguish the different users at the receiver.

In the general direct-sequence spread-spectrum system the data sequence is multiplied by a high-rate pseudonoise sequence. The ratio between the rate of the pseudonoise sequence and the rate of the data sequence is called the spreading factor, usually denoted by g. Compared to the usual modulation of the data, the multiplication with the PN sequence causes the modulated signal spectrum to spread by a factor g. See Figure 4.1 for an example of a binary spread-spectrum system.

[Figure 4.1: A binary direct-sequence spread-spectrum communication system with spreading factor g = 7: the data source is multiplied by a PN source and modulated onto a carrier; at the receiver the signal is demodulated, correlated with the same PN sequence and filtered.]
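A toy illustration (not from the thesis) of spreading and despreading with spreading factor g = 7, using a length-7 m-sequence (1110100) mapped to +/-1 chips; the sequence choice and function names are our own:

```python
import random

g = 7
pn = [1 - 2 * b for b in (1, 1, 1, 0, 1, 0, 0)]      # 0 -> +1, 1 -> -1

def spread(data_bits):
    """Each +/-1 data bit is multiplied by one full period of the PN sequence."""
    chips = []
    for bit in data_bits:
        chips.extend(bit * c for c in pn)
    return chips

def despread(chips):
    """Correlate each group of g received chips with the PN sequence."""
    bits = []
    for k in range(0, len(chips), g):
        corr = sum(r * c for r, c in zip(chips[k:k + g], pn))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [random.choice((-1, 1)) for _ in range(5)]
received = spread(data)                  # channel noise could be added here
print(data == despread(received))        # True on a noiseless channel
```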

The performance measure that we will use when evaluating the sets of sequences is the maximum absolute correlation, denoted by $|\theta|_{\max}$, from Definition 2.1. As Simon et al. [50, p. 325] point out, this performance measure may not correspond directly to any network performance measure, but it can be argued that any set of sequences which optimizes the performance of the network probably has a small value of $|\theta|_{\max}$; hence one should be able to find good sets of sequences among those with a small maximum absolute correlation. Therefore it is interesting to investigate and search for sequences with low correlation. Furthermore, from an analytic viewpoint, no other design criteria have proven tractable in choosing long-period sequences.

Another measure of performance is the linear span, which can be defined as the length of the shortest linear feedback shift register generating the sequences. This measure tells us, in some sense, how difficult it is to predict the sequence, which might be interesting in hostile communication environments. Since the focus in this chapter, as in the thesis as a whole, will be on cooperative communication, we favor sequences with a low linear span, since these are easy to generate.

4.3 Sequences and linear cyclic codes

The basic idea that we will exploit in this chapter is the following. Beginning with a binary linear cyclic code $\mathcal{C}$ of length $n$, we select the representatives of full cyclic order, i.e. $\mathrm{rep}_n(\mathcal{C})$. For each symbol $c_j \in \mathrm{GF}(2)$, $0 \le j < n$, in a codeword $c = c_0 c_1 \cdots c_{n-1}$ in $\mathrm{rep}_n(\mathcal{C})$, we use the mapping

$$0 \mapsto +1, \qquad 1 \mapsto -1.$$

We are interested in finding a way of relating the correlation properties of $\mathrm{rep}_n(\mathcal{C})$ to the distance properties of the original code $\mathcal{C}$.
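As an illustration of this construction, the sketch below (Python, illustrative only) maps binary codewords to ±1 sequences via 0 ↦ +1, 1 ↦ −1 and computes the maximum absolute periodic correlation over all out-of-phase autocorrelations and all crosscorrelations. The two example codewords are arbitrary and serve only to exercise the functions.

    # Sketch: from binary codewords to +/-1 sequences and their periodic correlations.

    def to_sequence(codeword):
        """Map a binary codeword to a +/-1 sequence: 0 -> +1, 1 -> -1."""
        return [1 - 2 * c for c in codeword]

    def periodic_correlation(a, b, shift):
        """Periodic crosscorrelation of two +/-1 sequences at a given cyclic shift."""
        n = len(a)
        return sum(a[i] * b[(i + shift) % n] for i in range(n))

    def max_abs_correlation(sequences):
        """Maximum absolute out-of-phase auto- and crosscorrelation of a set of sequences."""
        n = len(sequences[0])
        values = []
        for i, a in enumerate(sequences):
            for j, b in enumerate(sequences):
                shifts = range(1, n) if i == j else range(n)  # exclude in-phase autocorrelation
                values.extend(abs(periodic_correlation(a, b, s)) for s in shifts)
        return max(values)

    # Two arbitrary binary codewords of length 7, for illustration only.
    codewords = [[0, 0, 1, 0, 1, 1, 1], [0, 1, 1, 1, 0, 1, 0]]
    print(max_abs_correlation([to_sequence(c) for c in codewords]))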

In Helleseth [16] we find some results regarding the crosscorrelation function between two maximal linear sequences. This corresponds to sequences derived from linear cyclic codes of primitive length $n = p^m - 1$ over $\mathrm{GF}(p)$ with $B(\mathcal{C}) = \{\alpha, \alpha^d\}$, i.e. codes with exactly two non-zeroes. Here, $d$ is usually referred to as a decimation.

When studying codes with consecutive non-zeroes, e.g. dual BCH codes, the following theorem is useful:

Theorem 4.1 Let $\mathcal{C}$ be a binary linear cyclic code of length $n$ and let $m$ be the multiplicative order of 2 modulo $n$. If $B(\mathcal{C}) = \{\alpha, \alpha^2, \ldots, \alpha^s\}$ for some element $\alpha$ of order $n$ in $\mathrm{GF}(2^m)$, then

$$|\theta|_{\max} \le \left( s - \frac{n}{2^m - 1} \right) 2^{m/2} + \frac{n}{2^m - 1}$$

for the codewords in $\mathrm{rep}_n(\mathcal{C})$.

Proof: Theorem 3 in Sidel'nikov [49] restricted to the binary case. $\Box$

For codes of primitive length, the bound in Theorem 4.1 reduces to

$$|\theta|_{\max} \le (s - 1) 2^{m/2} + 1.$$

It is also possible to determine $|\theta|_{\max}$ by calculating the minimum distance of a cyclic code closely related to $\mathcal{C}$, as we will show below. We first need to introduce the following concept (MacWilliams and Sloane [28, Ch. 2, §1]):

Definition 4.1 The distance distribution of a code $\mathcal{C}$ is a set $\{B_d\}_{d=0}^{n}$ where $B_d$ is the number of ordered pairs of codewords in $\mathcal{C}$ at Hamming distance $d$, divided by the size of $\mathcal{C}$.

If $\mathcal{C}$ is a linear code, then the distance distribution coincides with the weight distribution. We note that $B_0 = 1$ and $B_0 + B_1 + \cdots + B_n = |\mathcal{C}|$.
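As a quick numeric illustration, the sketch below evaluates the bound of Theorem 4.1 and checks that it reduces to $(s-1)2^{m/2}+1$ when $n = 2^m - 1$. The chosen parameter values are arbitrary and the snippet is only an aid for experimenting with the bound.

    # Sketch: evaluating the correlation bound of Theorem 4.1 (illustrative only).

    def theorem_4_1_bound(n, m, s):
        """Upper bound on |theta|_max for rep_n(C) when B(C) = {alpha, ..., alpha^s}."""
        r = n / (2**m - 1)                  # equals 1 for codes of primitive length
        return (s - r) * 2**(m / 2) + r

    # For primitive length n = 2^m - 1 the bound reduces to (s - 1) * 2^(m/2) + 1.
    for m, s in [(8, 2), (8, 3), (10, 3)]:  # arbitrary illustrative parameters
        n = 2**m - 1
        assert abs(theorem_4_1_bound(n, m, s) - ((s - 1) * 2**(m / 2) + 1)) < 1e-9
        print(n, s, theorem_4_1_bound(n, m, s))

    # A non-primitive length: n = 85, for which the multiplicative order of 2 is m = 8.
    print(85, 2, theorem_4_1_bound(85, 8, 2))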


Let $\Theta(\mathcal{C})$ denote the set of possible correlation values between sequences in $\mathcal{C}$. Here we write them as a union of the autocorrelation values and the crosscorrelation values:

$$\Theta(\mathcal{C}) \triangleq \{\theta_{aa}(\tau) : a \in \mathcal{C},\ 1 \le \tau < n\} \cup \{\theta_{ab}(\tau) : a, b \in \mathcal{C},\ a \ne b,\ 0 \le \tau < n\}.$$

Below we present a theorem that relates the distance distribution of a linear cyclic code $\mathcal{C}$ to the set of correlation values $\Theta(\mathcal{C})$.

Lemma 4.1 If $\mathcal{C}$ is a binary linear cyclic code of length $n$ with distance distribution $\{B_d\}_{d=0}^{n}$, then

$$\Theta(\mathrm{rep}_n(\mathcal{C})) \subseteq \{\, n - 2d : B_d \ne 0,\ 1 \le d \le n \,\}.$$

Proof: The lemma follows directly from the fact that the correlation $\theta$ between two binary codewords at distance $d$ is $\theta = n - 2d$ and the fact that the cyclic extension (see Equation 3.1) of $\mathrm{rep}_n(\mathcal{C})$ lies in $\mathcal{C}$. $\Box$

The next lemma shows the connection between the distance distribution of a linear code $\mathcal{C}$ and the distance distribution of $\mathcal{C}^*$, which is the code obtained from $\mathcal{C}$ by removing the all-zero codeword.

Lemma 4.2 Let $\mathcal{C}$ be any linear $[n, k]$ code with $k > 0$ and let $\{B_d\}_{d=0}^{n}$ be its distance distribution. Then the distance distribution $\{B_d^*\}_{d=0}^{n}$ of $\mathcal{C}^*$ is

$$B_d^* = \begin{cases} 1, & d = 0, \\[2pt] B_d \left( 1 - \dfrac{1}{|\mathcal{C}| - 1} \right), & 1 \le d \le n. \end{cases}$$

Proof: Define the following indicator function:

$$I_d(a, b) = \begin{cases} 1, & d_H(a, b) = d, \\ 0, & \text{otherwise.} \end{cases}$$

It is obvious that $B_0^* = 1$, so we assume $d > 0$. We have

$$\begin{aligned}
|\mathcal{C}| B_d &= \sum_{a \in \mathcal{C}} \sum_{b \in \mathcal{C}} I_d(a, b) \\
&= \sum_{a \in \mathcal{C}} \Bigl( I_d(a, 0^n) + \sum_{b \in \mathcal{C}^*} I_d(a, b) \Bigr) \\
&= I_d(0^n, 0^n) + \sum_{b \in \mathcal{C}^*} I_d(0^n, b) + \sum_{a \in \mathcal{C}^*} I_d(a, 0^n) + \sum_{a \in \mathcal{C}^*} \sum_{b \in \mathcal{C}^*} I_d(a, b) \\
&= 0 + B_d + B_d + \bigl( |\mathcal{C}| - 1 \bigr) B_d^* = 2 B_d + \bigl( |\mathcal{C}| - 1 \bigr) B_d^*
\end{aligned}$$

and, hence,

$$B_d^* = B_d \, \frac{|\mathcal{C}| - 2}{|\mathcal{C}| - 1} = B_d \left( 1 - \frac{1}{|\mathcal{C}| - 1} \right). \qquad \Box$$
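The relation in Lemma 4.2 is easy to check by brute force on a small code. The sketch below does so for the $[3, 2]$ binary even-weight code; the choice of code is arbitrary and serves only as an illustration of Definition 4.1 and the lemma.

    # Sketch: checking Lemma 4.2 on a small linear code by brute force.
    from itertools import product

    def distance_distribution(code):
        """B_d = (# ordered pairs at Hamming distance d) / |code|, as in Definition 4.1."""
        n = len(code[0])
        B = [0] * (n + 1)
        for a in code:
            for b in code:
                B[sum(x != y for x, y in zip(a, b))] += 1
        return [x / len(code) for x in B]

    # The [3, 2] binary even-weight code, used only as a small example.
    code = [c for c in product([0, 1], repeat=3) if sum(c) % 2 == 0]
    code_star = [c for c in code if any(c)]      # C* = C with the all-zero codeword removed

    B = distance_distribution(code)
    B_star = distance_distribution(code_star)

    # Lemma 4.2: B*_0 = 1 and B*_d = B_d * (1 - 1/(|C| - 1)) for 1 <= d <= n.
    assert B_star[0] == 1
    for d in range(1, len(B)):
        assert abs(B_star[d] - B[d] * (1 - 1 / (len(code) - 1))) < 1e-12
    print(B, B_star)                             # [1.0, 0.0, 3.0, 0.0] and [1.0, 0.0, 2.0, 0.0]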

Corollary 4.1 Let $\mathcal{C}$ be any linear $[n, k]$ code with $k > 1$. Let $\{B_d\}_{d=0}^{n}$ be the distance distribution of $\mathcal{C}$ and let $\{B_d^*\}_{d=0}^{n}$ be the distance distribution of $\mathcal{C}^*$. For all $d$ such that $0 \le d \le n$, we have

$$B_d \ne 0 \iff B_d^* \ne 0.$$

We can now strengthen Lemma 4.1 for cyclic codes where all codewords, except the all-zero codeword, have full cyclic order. This property of a binary linear cyclic $[n, k]$ code $\mathcal{C}$ is equivalent to

$$|\mathrm{rep}_n(\mathcal{C})| = \frac{2^k - 1}{n}. \qquad (4.1)$$

Theorem 4.2 Let $\mathcal{C}$ be a binary linear cyclic $[n, k]$ code with $k > 1$ and distance distribution $\{B_d\}_{d=0}^{n}$. If Equation 4.1 holds then

$$\Theta(\mathrm{rep}_n(\mathcal{C})) = \{\, n - 2d : B_d \ne 0,\ 1 \le d \le n \,\}.$$

Proof: If all codewords in $\mathcal{C}$ have full cyclic order except the all-zero codeword, then the cyclic extension of $\mathrm{rep}_n(\mathcal{C})$ is equal to $\mathcal{C}^*$. Hence,

$$\Theta(\mathrm{rep}_n(\mathcal{C})) = \{\, n - 2d : B_d^* \ne 0,\ 1 \le d \le n \,\},$$

where $\{B_d^*\}_{d=0}^{n}$ is the distance distribution of $\mathcal{C}^*$. But Corollary 4.1 shows that $B_d^*$ is non-zero whenever $B_d$ is non-zero, and the theorem follows. $\Box$

Note that Theorem 4.2 in principle does not simplify the calculation of $\Theta(\mathcal{C})$, since determining the distance distribution of $\mathcal{C}$ by brute force requires the same amount of calculations as determining $\Theta(\mathcal{C})$. But, when the distance distribution is known, we can use Theorem 4.2.

If we are only interested in finding $|\theta|_{\max}$, then the following approach may be used. We make the following definition:

Definition 4.2 For any code $\mathcal{C}$ we define

$$\mathcal{C}^+ \triangleq \mathcal{C} \cup \bigl( \mathcal{C} + 1^n \bigr).$$
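As a small illustration of Definition 4.2, the sketch below forms $\mathcal{C}^+ = \mathcal{C} \cup (\mathcal{C} + 1^n)$ for a toy $[5, 2]$ linear code chosen arbitrarily; since the all-ones word is not in this code, the construction doubles the number of codewords.

    # Sketch: forming C+ = C union (C + 1^n), as in Definition 4.2 (toy example).

    def plus_code(code):
        """Return C+ as a set of codewords (tuples over GF(2))."""
        complemented = {tuple(1 - x for x in word) for word in code}
        return set(code) | complemented

    # A toy [5, 2] binary linear code, for illustration only.
    code = {(0, 0, 0, 0, 0), (1, 1, 0, 1, 0), (0, 1, 1, 0, 1), (1, 0, 1, 1, 1)}
    cplus = plus_code(code)
    assert len(cplus) == 2 * len(code)           # 1^n is not a codeword of the toy code

    # Minimum weight of C+ (C+ is linear here); compare Equation 4.2 below:
    # min(d_min(C), n - d_max(C)) = min(3, 5 - 4) = 1.
    dmin_plus = min(sum(word) for word in cplus if any(word))
    assert dmin_plus == 1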

If $\mathcal{C}$ is a linear cyclic code then $B(\mathcal{C}^+) = B(\mathcal{C}) \cup \{1\}$, which corresponds to adjoining an all-ones row to the generator matrix. Further, if $\mathcal{C}$ is a linear $[n, k]$ code such that 1 is not a non-zero of $\mathcal{C}$, then $\mathcal{C}^+$ is an $[n, k+1]$ code with minimum distance

$$d_{\min}(\mathcal{C}^+) = \min\bigl\{ d_{\min}(\mathcal{C}),\ n - d_{\max}(\mathcal{C}) \bigr\}, \qquad (4.2)$$

where $d_{\max}(\mathcal{C})$ is the maximum distance between any two codewords in $\mathcal{C}$, or equivalently, since the code is linear, the maximum weight of any codeword in $\mathcal{C}$.

Theorem 4.3 Let $\mathcal{C}$ be a binary linear cyclic $[n, k]$ code with $k > 0$ such that 1 is not a non-zero of $\mathcal{C}$. Then the maximum absolute correlation $|\theta|_{\max}$ between any two (not necessarily distinct) vectors in $\mathrm{rep}_n(\mathcal{C})$ is

$$|\theta|_{\max} \le n - 2 d_{\min}(\mathcal{C}^+).$$

We have equality if $k > 1$ and $|\mathrm{rep}_n(\mathcal{C})| = (2^k - 1)/n$. If 1 is a non-zero of $\mathcal{C}$ and $\mathrm{rep}_n(\mathcal{C})$ is not the empty set, then $|\theta|_{\max} = n$.

Proof: Since the correlation $\theta$ between two codewords at Hamming distance $d$ is $\theta = n - 2d$, we see that

$$|\theta|_{\max} \le \max\bigl\{ n - 2 d_{\min}(\mathcal{C}),\ 2 d_{\max}(\mathcal{C}) - n \bigr\}.$$

From Equation 4.2 we have

$$d_{\min}(\mathcal{C}^+) = \min\bigl\{ d_{\min}(\mathcal{C}),\ n - d_{\max}(\mathcal{C}) \bigr\}.$$

If we assume that $d_{\min}(\mathcal{C}) \le n - d_{\max}(\mathcal{C})$, then $d_{\min}(\mathcal{C}^+) = d_{\min}(\mathcal{C})$. We see that

$$2 d_{\max}(\mathcal{C}) - n \le 2n - 2 d_{\min}(\mathcal{C}) - n = n - 2 d_{\min}(\mathcal{C})$$

and

$$|\theta|_{\max} \le \max\bigl\{ n - 2 d_{\min}(\mathcal{C}),\ 2 d_{\max}(\mathcal{C}) - n \bigr\} = n - 2 d_{\min}(\mathcal{C}) = n - 2 d_{\min}(\mathcal{C}^+). \qquad (4.3)$$

If we instead make the assumption that $d_{\min}(\mathcal{C}) > n - d_{\max}(\mathcal{C})$, we arrive at the same result.

Finally, we prove the last part of the theorem. Let $c$ be any codeword in $\mathrm{rep}_n(\mathcal{C})$. Since $1 \in B(\mathcal{C})$ we know that $1^n \in \mathcal{C}$. By the linearity of $\mathcal{C}$, $c + 1^n$ is also a codeword and it must also have full cyclic order. Since $\mathcal{C}$ is a binary cyclic code and $n$ is odd, $c$ and $c + 1^n$ must be cyclically distinct, since the weight of $c$ differs from the weight of $c + 1^n$. Hence, without loss of generality, we may assume that $c$ and $c + 1^n$ are in $\mathrm{rep}_n(\mathcal{C})$. Since these two codewords differ by $1^n$, their maximum absolute correlation is $n$. $\Box$

What codes result in good sets of sequences?

In using our approach of constructing sets of sequences with low correlation from linear cyclic codes, it is natural to ask the question "What codes correspond to good sets of sequences?". The problem of constructing large sets of sequences with low maximum absolute correlation is not entirely equivalent to constructing a large code with high minimum distance. From Theorem 4.3 we see that the correlation is determined by the minimum distance of $\mathcal{C}^+$ and not by the minimum distance of the code $\mathcal{C}$ itself. As a direct consequence of this, the code should not only have a high minimum weight but also a low maximum weight, which corresponds to a narrow weight spectrum centered around $w = n/2$.
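This criterion is easy to apply once the weight distribution of a candidate code is known. As an illustration, the sketch below (Python, not from the thesis) evaluates $\max\{n - 2 d_{\min},\ 2 d_{\max} - n\}$ directly from a list of non-zero codeword weights; applied to the weight distribution of the code $\mathcal{C}_1$ in Example 4.1 below, it reproduces the value computed there.

    # Sketch: bounding |theta|_max from the non-zero codeword weights (cf. Theorem 4.3).

    def max_abs_correlation_bound(n, nonzero_weights):
        """max{n - 2*d_min, 2*d_max - n} for a linear code with the given non-zero weights."""
        d_min, d_max = min(nonzero_weights), max(nonzero_weights)
        return max(n - 2 * d_min, 2 * d_max - n)

    # Non-zero codeword weights of the [63, 12, 24] code C_1 in Example 4.1 below.
    n = 63
    weights_c1 = [24, 28, 32, 36, 40]
    print(max_abs_correlation_bound(n, weights_c1))   # prints 17, as in Example 4.1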

Example 4.1 Consider the following two linear cyclic codes: $\mathcal{C}_1$, a binary linear cyclic $[63, 12, 24]$ code with $B(\mathcal{C}_1) = \{\alpha, \alpha^3\}$, and $\mathcal{C}_2$, a binary linear cyclic $[63, 12, 24]$ code with $B(\mathcal{C}_2) = \{\alpha, \alpha^{15}\}$. If we view these codes as block codes, their parameters are identical. The corresponding weight enumerators are

$$A_1(z) = 1 + 210z^{24} + 1512z^{28} + 1071z^{32} + 1176z^{36} + 126z^{40},$$
$$A_2(z) = 1 + 588z^{24} + 504z^{28} + 1827z^{32} + 1176z^{36}.$$

We see that the maximum weight of $\mathcal{C}_1$ is 40 and that the maximum weight of $\mathcal{C}_2$ is 36. Hence, the sequences derived from $\mathcal{C}_1$ have maximum correlation (using Equation 4.3)

$$|\theta|_{\max} \le \max\{n - 2d_{\min},\ 2d_{\max} - n\} = \max\{15, 17\} = 17$$

and the sequences derived from $\mathcal{C}_2$ have maximum correlation

$$|\theta|_{\max} \le \max\{n - 2d_{\min},\ 2d_{\max} - n\} = \max\{15, 9\} = 15. \qquad \Box$$

Of course, an alternative approach is to restrict the search to linear cyclic codes having 1 as a non-zero, $1 \in B(\mathcal{C})$, and then select codewords from the code defined by $B(\mathcal{C}) \setminus \{1\}$.

Another problem is that the size of the resulting set of sequences is not directly linked to the dimension of the linear cyclic code, but rather to the degrees of the irreducible factors of $h(x)$ and to the multiplicative orders of the non-zeroes of the code. Two cyclic codes of the same length, dimension and minimum distance do not necessarily contain the same number of cyclic equivalence classes of size $n$, as illustrated by the following example:

Example 4.2 Consider the following two linear cyclic codes: $\mathcal{C}_1$, a binary linear cyclic $[63, 18, 9]$ code with $B(\mathcal{C}_1) = \{\alpha, \alpha^3, \alpha^9, \alpha^{27}\}$, and $\mathcal{C}_2$, a binary linear cyclic $[63, 18, 9]$ code with $B(\mathcal{C}_2) = \{\alpha, \alpha^5, \alpha^9, \alpha^{27}\}$. The difference between these two codes is that $\mathcal{C}_1$ has one non-zero, $\alpha^3$, having multiplicative order 21 instead of the non-zero $\alpha^5$ of multiplicative order 63 in $B(\mathcal{C}_2)$.

The code $\mathcal{C}_1$ contains 4096 representatives of full cyclic order while $\mathcal{C}_2$ contains 4160 representatives. Both codes yield sequences with maximum correlation $|\theta|_{\max} = 45$. $\Box$

4.4 Results of a computer search

We have performed an extensive search by computer through a subset of all cyclic codes of length $n = 2^m - 1$, where $m = 4, \ldots, 10$. We have restricted the search to codes with two or three non-zeroes where at least one of the non-zeroes has multiplicative order $n$, i.e. $B(\mathcal{C}) = \{\alpha, \alpha^e\}$ or $B(\mathcal{C}) = \{\alpha, \alpha^{e_1}, \alpha^{e_2}\}$, where $\alpha$ is a primitive element in $\mathrm{GF}(2^m)$. For each code, the number of representatives of full cyclic order was calculated and, using MAGMA V1.20-1 (see Handbook of MAGMA functions [4]) together with Theorem 4.3, the maximum correlation was determined.

Table 4.2 lists the best sets of sequences found, together with some other interesting (or well-known) families of sequences. (Appendix A contains a comprehensive overview of families of sequences derived directly from linear cyclic codes.)
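The number of representatives of full cyclic order mentioned above can be obtained by brute force for short codes. The sketch below counts the cyclic equivalence classes of size $n$ in a given list of codewords; it is shown here only to illustrate the quantity involved (the $[7, 3]$ simplex code in the example is not one of the codes from the search).

    # Sketch: brute-force count of the cyclic equivalence classes of full order, |rep_n(C)|.

    def count_full_order_classes(codewords):
        """Count cyclic equivalence classes of size n among the given codewords."""
        n = len(codewords[0])
        seen = set()
        count = 0
        for c in map(tuple, codewords):
            if c in seen:
                continue
            shifts = {c[i:] + c[:i] for i in range(n)}
            seen |= shifts
            if len(shifts) == n:                 # the class has full cyclic order
                count += 1
        return count

    # Tiny illustration: the [7, 3] simplex code; its non-zero codewords are the seven
    # cyclic shifts of one m-sequence period, so |rep_7(C)| = (2^3 - 1)/7 = 1 (cf. Eq. 4.1).
    m_seq = (0, 0, 1, 0, 1, 1, 1)
    simplex = [(0,) * 7] + [m_seq[i:] + m_seq[:i] for i in range(7)]
    print(count_full_order_classes(simplex))     # prints 1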

    Length, n              15     31     63    127    255     511     1023
    New family    |θ|max   11     19     25     41     55      81      119
                  Size    272   1057   4160  16513  65792  262657  1049600
    Dual 3 BCH    |θ|max  N/A     17     33     33     65      65      129
                  Size    N/A   1057   4160  16513  65791  262657  1049600
    Recip. m-seq. |θ|max    7     11     15     21     31      45       63
                  Size     17     33     65    129    257     512     1025
    Dual 2 BCH    |θ|max    9      9     17     17     33      33       65
                  Size     16     33     64    129    256     513     1024

Table 4.1: The maximum correlation of the new family of sequences compared to the dual three-error correcting BCH sequences, and the maximum correlation of reciprocal m-sequences compared to the dual two-error correcting BCH sequences.

A new family of binary sequences

As a result of the computer search we have found a new family of sequences closely related to the sequences derived from dual three-error correcting BCH codes. This set of sequences consists of the representatives from the binary linear cyclic code $\mathcal{C}$ of length $n = 2^m - 1$, where $m \ge 4$, with non-zeroes $B(\mathcal{C}) = \{\alpha, \alpha^3, \alpha^{2^{m-1}-1}\}$. The size of the family is

$$|\mathrm{rep}_n(\mathcal{C})| = \begin{cases} 2^{2m} + 2^m, & m \text{ even}, \\ 2^{2m} + 2^m + 1, & m \text{ odd}. \end{cases}$$

The maximum correlation for this new family of sequences, for lengths up to $n = 1023$, is given in Table 4.1. For sequences of length $n = 2^m - 1$, $m$ even, this family of sequences has a lower maximum correlation than any other set of linear sequences of the same, or larger, size.

Unfortunately we have not been able to derive a sufficiently good upper bound on the maximum correlation for this new set of sequences, but it appears that the code corresponding to the sequences of length $n = 255$, i.e. the $[255, 25, 100]$ linear cyclic code, improves on the previously best known lower bound on minimum distance. From the database of the best known bounds on the minimum distance of binary linear codes (maintained by the Discrete Mathematics Group of the Eindhoven University of Technology) we have

$$96 \le D_{255}(25) \le 113,$$

where $D_n(k)$ denotes the largest minimum distance of any binary linear code of length $n$ and dimension $k$. The codes associated with the sequences of length $n = 15$ and $n = 63$, i.e. a $[15, 13, 2]$ code and a $[63, 19, 19]$ code, both have minimum distance coinciding with the currently best known lower bound. It seems likely that also the
