
Degree project

Classification of Perfect codes in Hamming Metric

Author: Tanveer Sabir Date: 2011-06-17 Subject: Mathematics Level: Advanced


Abstract

Coding theory studies how to detect and correct errors that occur during the transmission of data. It enhances the quality of data transmission and provides better control over noisy channels.

In this thesis, the perfect codes are collected and analyzed in the setting of the Hamming metric.

This classification shows that only a few perfect codes exist. Perfect codes are not perfect in every respect; they are simply codes that attain a certain bound (the Hamming bound) and satisfy the corresponding properties. The detection and correction of errors remains essential for reliable data transmission.

Keywords: Hamming metric; Hamming codes; Hamming bounds; Cyclic codes; Perfect codes; Minimal distance


Acknowledgments

My first thanks go to Per-Anders Svensson for giving us an exciting topic related to coding theory. I am also thankful to him for the time and guidance he provided. Secondly, I admire and appreciate Marcus Nilsson, the Programme Manager, for his kind and flexible attitude.

Finally, I present my gratitude to all the respectable teaching faculty at the department of mathematics and, last but not least, to my parents and family for their support.


Contents

1 Outline of the Thesis
2 What is Coding Theory?
2.1 Background of Coding Theory
2.2 Preliminaries
2.3 Groups
2.4 Rings and Fields
2.5 Vector Spaces
2.6 Subspaces
2.7 Important Code Parameters
2.8 Principle for correcting message
2.9 Error detection and correction process
3 Linear Codes
3.1 Linear Codes
3.2 Structure of Generating matrices
3.3 Parity-check matrix
3.4 Process to detect error by using the parity check matrix
4 Finite fields
4.1 Introduction
4.2 Polynomials
4.3 Primitive elements of finite fields
4.4 The structure of finite fields
5 Cyclic Codes
5.1 Introduction to Cyclic Codes
5.2 Generating polynomial, Parity check polynomial
5.3 Generating Matrices and Parity Check matrices
5.4 BCH Codes
5.5 Reed-Solomon codes
6 Perfect codes in the Hamming metric
6.1 Introduction of perfect codes
6.2 Perfect Codes in Hamming Metric
6.3 Binary Hamming Codes
6.4 Golay Codes
6.5 The extended Golay codes
6.6 The (non-extended) Golay code G23
6.7 Ternary Hamming code
6.8 Ternary Golay code
6.9 Application
6.10 Reed-Solomon codes (a non-perfect code)
7 Conclusion
References


1 Outline of the Thesis

The aim of this dissertation is to classify perfect codes in the Hamming metric. The dissertation is divided into seven chapters. The second chapter gives an introduction to coding theory, in which we discuss correcting and detecting error patterns, maximum likelihood decoding, and encoding and decoding. We also discuss some important algebraic structures such as groups, fields and vector spaces. In the third chapter linear codes are discussed. We will also try to show the link between linear codes and their algebraic structures.

Finite fields have great use in coding theory, and the fourth chapter is devoted to them.

Theorems about the structure of finite fields are given along with the basic definitions. The fifth chapter explains how cyclic codes are developed on the basis of Galois fields and polynomials.

The structure of generating and parity check matrices is discussed. The generating matrix is used to generate the Hamming codes and the Golay codes. The encoding and decoding algorithm for cyclic codes is explained in detail.

The last chapter is the main part of the whole work. In this chapter, we classify the perfect codes in the Hamming metric. They include the Hamming codes, the Golay codes and the extended Golay codes. As examples of non-perfect codes, we discuss BCH and Reed-Solomon codes.

2 What is Coding Theory?

The perception of a message or vision sent from one source to another cannot always be reliable. For example, someone gets up in the morning and sees a shadow in the darkness in one corner of the room. He may think that the shadow is a person, but after some concentration he finds that it is not: the shadow is just his shirt hanging on a hanger.

Suddenly he notices that his friend is gone and finds a note lying on the table. The note reads 'I like Jou'. Now the reader is confused about the meaning of the word 'Jou'. The two confusions are of different kinds. The first is related to the person's vision; the second is a confusion within the English language. Their reliability also differs. The confusion created by vision can be overcome with the person's experience: the brain can easily recognise the confusion and understand the situation.

The second confusion may or may not be an error. If 'Jou' is the name of somebody, then the sentence 'I like Jou' is correct. The other possibility is that the writer has written J instead of Y in the word 'Jou', in which case the sentence is incorrect.

An error in the English letters cannot easily be overcome by the human brain, since the English language has fixed grammar rules. If these rules are violated, the message is not conveyed properly. There is no continuous sequence of letters and words in English: only a few of the strings formed by taking the Cartesian product of English letters are actual words. The above example illustrates the idea behind error-correcting codes. The message can be a file of any type, such as music, text or pictures, and it may be a stream of 0s and 1s. [6, page 3]

2.1 Background of Coding Theory

Data transmission became a subject of interest with the publication of Claude Shannon in 1948. Shannon showed that the probability of error could be minimized by using a proper ratio between the capacity C of the channel (measured in bits per second) and the information transmission rate R (in bits per second). Shannon showed that it is possible to design a communication system with arbitrarily small output error as long as the transmission rate R is less than the capacity C. Shannon's theory led researchers to work on more effective and powerful data-transmission codes. Although Shannon did not present any explicit code, he showed that better codes exist.

Later, in the 1950s and 1960s, a lot of effort was devoted to finding communication codes that could decrease the probability of error. Most of the research followed two avenues. The first avenue was that of algebraic block codes.

Unfortunately, the performance of these codes was far from what Shannon had predicted. Series of practical codes for the new digital technology were presented by Peterson (1960), Berlekamp (1968) and Massey (1969).

The second avenue had a probabilistic flavor. These codes concentrated on understanding encoding and decoding from a probabilistic point of view. The best codes among them were the convolutional codes, which in the 1950s were successfully decoded by sequential decoding.

During the 1970s, new types of codes were introduced. Forney (1966) introduced concatenated codes, and later Justesen designed long block codes using them. In the 1980s the compact disc was designed, which could correct double-bit errors.

Non-algebraic codes, based on Euclidean distance, also appeared in the 1980s.

In the 1990s, iterative ("two-way") decoding algorithms were introduced, realizing ideas Shannon had put forward in the 1940s. Other codes, such as the Ungerboeck codes and the Berrou codes, are in rapid development. More research is being done and will become available for commercial use in the future. [3, page 4]

Definition 2.1 (Channel). The medium through which the data is transmitted from one place to the other is called the channel.

Almost all channels suffer from some kind of noise. Examples of noisy channels are poor telephone lines, bad weather, speech and atmosphere, etc. The noise in a channel is caused by various sources and creates problems for the data transmission.

Data transmission depends on the channel: the noisier the channel, the more difficult it is to transmit the data. For example, if we talk on the phone and the telephone lines are poor, it is very difficult to have a conversation. Then either you have to talk loudly or you have to repeat your message many times.

Example 2.1. In this example, we consider the digits 1, 2, 3, 4. To understand data transmission, consider a noisy channel that has probability of error 0.2. For instance, if we send the digit 2 through this channel, then there is an 80% chance that the received digit is 2; the 20% chance of error is still too large. To reduce it, we repeat the message, say, three times, and thus send 222. Now suppose an error occurs during the transmission and the received message is 212. The probability that this message is decoded correctly is the probability that all three digits are received correctly plus the probability that exactly one of the three digits is wrong,

(0.8)^3 + 3(0.8)^2(0.2) = 0.896,

which shows that the chance of a decoding error is small. [7, page 393]
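The arithmetic of Example 2.1 can be checked with a short computation; a minimal sketch, assuming the per-digit probabilities of the example:

```python
# Probability that a three-fold repetition of a digit is decoded correctly
# by majority vote, with per-digit success probability p = 0.8 (Example 2.1).
p = 0.8
# either all three digits arrive correctly, or exactly one of them is wrong
prob_correct = p**3 + 3 * p**2 * (1 - p)
print(round(prob_correct, 3))  # 0.896
```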

Definition 2.2 (Coding theory). Coding theory is the branch of mathematics which studies methods to transmit data as efficiently and accurately as possible across noisy channels.

It also includes methods that can detect and possibly correct errors.


Figure 2.1: Data transmission process

Figure 2.1 illustrates how coding theory works. The figure shows that when data is sent from an information source, it is first encoded by adding parity check bits. In the next step, some noise is added to the data when it passes through a noisy channel; when noise is added to the message, the information gets disturbed and the data is not received in its original form. The receiver decodes the message after receiving it and then sends it on to the information sink.

2.2 Preliminaries

For studying linear codes the most important tools come from linear algebra. We will discuss in this section some basic facts from linear algebra and we will try to show their relevance to coding theory. Most of the material in this section is taken from [1] and [4].

2.3 Groups

In this section we summarize the main ideas and terminology of groups, which we shall use throughout our thesis. We will discuss some definitions and some important theorems on groups such as Lagrange’s theorem.

Definition 2.3 (Group). A non-empty set G is said to be a group under an operation + if it satisfies the following properties.

(a) For all g1, g2 ∈ G,

g1 + g2 ∈ G.

(b) For all g1, g2, g3 ∈ G,

g1 + (g2 + g3) = (g1 + g2) + g3.

(c) G contains an additive identity; this means that there is an element 0 ∈ G such that g + 0 = 0 + g = g for all g ∈ G.

(d) Every element of G has an additive inverse in G, i.e. for each element g ∈ G there exists an element −g such that

g + (−g) = (−g) + g = 0.

A group G is said to be abelian if its binary operation + satisfies the following condition: for any g1 and g2 in G, g1 + g2 = g2 + g1.


Example 2.2. Consider the set

G = {1, −1}

and let the binary operation defined on G be the ordinary multiplication of real numbers; then (G, ·) is a group.

Examples of abelian groups are (Z, +), (Q, +), (R, +), and (R − {0}, ×).

Definition 2.4 (Order of a group). If a group G is finite, then the number of elements in G is said to be the order of the group.

Definition 2.5 (Subgroup). Suppose G is a group under the binary operation + and K is any subset of G. Then K is said to be a subgroup of G if K is itself a group under the induced binary operation.

Example 2.3. Consider Q, the set of rational numbers, and Z, the set of integers. Both Q and Z are groups under addition, and Z is a subset of Q, so Z is a subgroup of Q.

Definition 2.6 (Cyclic Group). A group G is said to be cyclic if every element of G is a power of one and the same element b, say, of G. Such an element b of G is called a generator of G.

Example 2.4. Consider the multiplicative group Z7* = {1, 2, 3, 4, 5, 6} of nonzero residues modulo 7. We can see that 3^1 = 3, 3^2 = 2, 3^3 = 6, 3^4 = 4, 3^5 = 5 and 3^6 = 1.

This shows that the group Z7* is cyclic and that 3 is a generator; this can be written as ⟨3⟩ = Z7*.

Definition 2.7 (Cosets). Let G be a group and K a subgroup of G. Then for each b ∈ G the set bK = {bk | k ∈ K} is called the left coset of K that contains b. Similarly, the set Kb = {kb | k ∈ K} is called the right coset of K containing b.

Lemma 2.1. Let G be a group of finite order and K a subgroup of G. Then for each b ∈ G,

|bK| = |K|.

2.4 Rings and Fields

Now we will discuss definitions of ring and field along with some examples.

Definition 2.8 (Ring). A non-empty set R is said to be a ring under two binary operations + and · if it satisfies the following properties.

1. R together with the operation + is an abelian group.

2. For all r1, r2 ∈ R, r1 · r2 ∈ R.

3. For all r1, r2, r3 ∈ R,

r1 · (r2 · r3) = (r1 · r2) · r3.

4. For all r1, r2, r3 ∈ R,

r1 · (r2 + r3) = r1 · r2 + r1 · r3

and

(r2 + r3) · r1 = r2 · r1 + r3 · r1.


Example 2.5. The set of rational numbers Q and the set of real numbers R are rings under the ordinary addition and multiplication of numbers.

Definition 2.9 (Commutative Ring). A ring in which the multiplication is commutative is called a commutative ring.

Definition 2.10 (Ring with unity). A ring (R, +, ·) with a multiplicative identity 1 such that 1 · b = b · 1 = b for all b ∈ R is called a ring with unity.

Example 2.6. Under addition and multiplication modulo n, the set Zn of integers modulo n is a commutative ring with unity.

Definition 2.11 (Subring). Suppose R is a ring and F is a non-empty subset of R. Then F is said to be a subring of R if it is itself a ring under the induced binary operations.

Definition 2.12 (Ideal). A subring I of a ring R is said to be an ideal of R if rb, br ∈ I whenever b ∈ I and r ∈ R.

Example 2.7. For any ring R, {0} and R are ideals of R.

Definition 2.13 (Principal Ideal). Let I be an ideal in a commutative ring R with unity. Then I is called a principal ideal if there is an element b ∈ R such that

I = ⟨b⟩ = {rb | r ∈ R}.

Definition 2.14 (Maximal Ideal). Let R be a ring. An ideal M of R is said to be maximal if M ≠ R and there are no ideals I such that M ⊂ I ⊂ R.

Definition 2.15 (Field). A commutative ring with unity in which all nonzero elements are units is called a field.

Example 2.8. The set of real numbers R, the set of rational numbers Q and the set of complex numbers C under ordinary addition and multiplication are examples of fields.

Definition 2.16 (Subfield). Suppose F is a field and B is any subset of F. Then B is said to be a subfield of F if it is itself a field under the induced binary operations.

Examples: The set of rational numbers Q is a subfield of R, and R is a subfield of C.

2.5 Vector Spaces

In this section, one of the fundamental notions of linear algebra will be discussed: vector spaces.

Vector spaces are of great importance in coding theory and have many applications. Their importance can be seen from the fact that linear algebra is often regarded as the theory of vector spaces.

Definition 2.17 (Vector Spaces). Let M be a field and V a non-empty set. We assume that V is closed with respect to addition, i.e. for any u1, u2 ∈ V, u1 + u2 ∈ V. We also assume that for any b ∈ M and u ∈ V, bu ∈ V. The set V is said to be a vector space over M if it satisfies the following conditions:

1. V together with the operation + is an abelian group.

2. For any a ∈ M and v ∈ V, a · v ∈ V.


3. For any a1, a2 ∈ M and v ∈ V,

(a1 + a2) · v = a1 · v + a2 · v.

4. For any v1, v2 ∈ V and a ∈ M,

a · (v1 + v2) = a · v1 + a · v2.

5. For the identity element 1 in M, 1 · u1 = u1 for any u1 ∈ V.

6. For any a1, a2 ∈ M and v ∈ V,

(a1a2) · v = a1 · (a2 · v).

Example 2.9. Consider the set of all ordered n-tuples

F(n) = {(a1, a2, ..., an) | ai ∈ F, i = 1, 2, ..., n},

where F is a field and n is any positive integer. We define the vector addition and the scalar multiplication of a vector by an element of F as follows:

(a1, a2, ..., an) + (b1, b2, ..., bn) = (a1 + b1, a2 + b2, ..., an + bn)

and

c(a1, a2, ..., an) = (ca1, ca2, ..., can), c ∈ F.

Then F(n) is a vector space over F with respect to the operations defined above.

2.6 Subspaces

Definition 2.18 (Subspace). Suppose V is a vector space and U a non-empty subset of V. Then U is called a subspace of V if U is itself a vector space under the induced operations.

Theorem 2.2. For a vector space V over a field F, a non-empty subset W of V is a subspace if and only if the following conditions hold:

1. For any w1, w2 ∈ W, w1 + w2 ∈ W.

2. For any b ∈ F and w ∈ W, bw ∈ W.

Definition 2.19 (Linearly Dependent). Let V be a vector space over a field F. The n vectors y1, y2, ..., yn ∈ V are said to be linearly dependent if there exist n elements c1, c2, ..., cn ∈ F, not all equal to zero, such that

c1y1 + c2y2 + ··· + cnyn = 0.

Definition 2.20 (Linearly independent). Let V be a vector space over a field F. The vectors y1, y2, ..., yn ∈ V are said to be linearly independent if they are not linearly dependent.

Example 2.10. In the vector space R3, the vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1) are linearly independent, since none of these vectors is a linear combination of the others. More precisely, if c0(1, 0, 0) + c1(0, 1, 0) + c2(0, 0, 1) = 0, then the only possible solution of this equation is c0 = c1 = c2 = 0.

Definition 2.21 (Fields). A set F having at least two elements and two binary operations, addition and multiplication, defined on it is called a field if the following conditions are satisfied.


1. F is a commutative ring with respect to addition and multiplication and contains a multiplicative identity, and

2. every non-zero element of F is invertible with respect to multiplication.

If p is a prime integer, consider the set Fp = {0, 1, 2, ..., p − 1} of p elements, in which addition and multiplication are defined modulo p. For p = 2, we denote the field F2 by B. Thus B = {0, 1} with binary operations addition and multiplication defined by 0 × 0 = 0, 1 × 0 = 0, 1 × 1 = 1, 0 + 0 = 0, 0 + 1 = 1, 1 + 0 = 1, 1 + 1 = 0.

Let Bn, where n is a positive integer, denote the set of all ordered n-tuples of length n, i.e. the words b = b1···bn with each bi belonging to the field B. The number of words in Bn is 2^n.

Example 2.11. We have B1 = {0, 1} and B2 = {00, 10, 01, 11}.

Encoding mapping

Let Bk and Bn denote the sets of binary words of length k and n respectively, where n > k.

Suppose we want to send a word of length k from a source to a destination. For this purpose we define a one-to-one mapping

E : Bk −→ Bn.

An encoding mapping is actually a rule by which we increase the length of the original message. At the destination the receiver then decodes the received message by using a decoding mapping

D : Bn −→ Bk

to restore the original message.

Definition 2.22 (Codewords). A non-empty subset C of Bn is called a code, and the ele- ments of C are called codewords.

Example 2.12. Suppose we want to send any of the words 00, 01, 10, or 11. We use the encoding mapping

E: B2−→ B4,

defined by E(00) = 0000, E(01) = 0101, E(10) = 1010, and E(11) = 1111. This yields the code C={0000,0101,1010,1111} ⊆ B4.

Definition 2.23 (Hamming distance). The number of places in which the codewords x and y differ is called the Hamming distance between x and y, and is denoted by d(x, y).

Example 2.13. Let C = {0000, 0101, 1010, 1111} ⊆ B4. Then d(1010, 1111) = 2, since the two codewords differ in two positions.
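A sketch of how the distance in Example 2.13 can be computed (the function name is my own, not the thesis's):

```python
def hamming_distance(x: str, y: str) -> int:
    """Number of positions in which the words x and y differ."""
    assert len(x) == len(y), "Hamming distance needs words of equal length"
    return sum(a != b for a, b in zip(x, y))

print(hamming_distance("1010", "1111"))  # 2
```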

Definition 2.24 (Minimum Hamming distance). The minimum (Hamming) distance of a code C is the minimum distance between any two different codewords in the code.

Mathematically it can be written as

d(C) = min{d(x, y) | x ≠ y, x, y ∈ C}.

Example 2.14. Consider a code C= {00,01,10,11}. The minimum Hamming distance is 1.

Definition 2.25 (Hamming weight). The number of ones in a codeword is called the Hamming weight of the codeword. The Hamming weight of a codeword is denoted by wt(c).


Definition 2.26 (Minimum Hamming weight). The minimum Hamming weight of the non-zero words of a code C is denoted by wt(C) and is defined as

wt(C) = min{wt(a) | a ∈ C, a ≠ 0}.

Example 2.15. Suppose we have the code C = {000000, 111111, 101010, 101000}. Then the minimum Hamming weight of C is 2.

Definition 2.27 (Hamming sphere). A Hamming sphere of radius k centered at a word u is denoted by B(u, k) and is defined as

B(u, k) = {v ∈ Bn | d(u, v) ≤ k}.

Example 2.16. Let u = 101 ∈ B3 and let the radius be k = 1. Then B(u, 1) = {001, 100, 101, 111}.
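Example 2.16 can be reproduced by brute force over B^n; a minimal sketch (the function name is an assumption):

```python
from itertools import product

def hamming_sphere(u: str, k: int) -> set:
    """All words v in B^n with d(u, v) <= k, found by checking every word."""
    n = len(u)
    return {"".join(v) for v in product("01", repeat=n)
            if sum(a != b for a, b in zip(u, v)) <= k}

print(sorted(hamming_sphere("101", 1)))  # ['001', '100', '101', '111']
```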

Lemma 2.3. If x, y, z ∈ Bn, then d(x, z) ≤ d(x, y) + d(y, z).

2.7 Important Code Parameters

When working with codes, we need terms that allow us to decide which codes are good. Three main parameters are used to describe and evaluate codes in general. In this thesis we represent these parameters by n, M, and d, where n denotes the length of the codewords, M denotes the number of codewords in the code, and d is the minimum distance. The notation (n, M, d) is used to represent a code.

Example 2.17. Suppose we have a code C={0000,0101,1010,1111} ⊆ B4. The first parameter n of this code is 4, the second parameter M is 4 and minimum distance d of this code is 2.
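The three parameters of Example 2.17 can be read off mechanically; a small sketch (the helper name is an assumption):

```python
from itertools import combinations

def parameters(code):
    """Return (n, M, d): word length, number of codewords, minimum distance."""
    words = sorted(code)
    n = len(words[0])
    M = len(words)
    d = min(sum(a != b for a, b in zip(x, y))
            for x, y in combinations(words, 2))
    return n, M, d

print(parameters({"0000", "0101", "1010", "1111"}))  # (4, 4, 2)
```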

Definition 2.28 (Sum of Words). Let u and v be two binary words. Then the sum u+ v is defined as the bitwise addition modulo 2 of u and v.

2.8 Principle for correcting message

Let C ⊆ Bn be a code and suppose a codeword x ∈ C is sent. Let y ∈ Bn be the received word. If y is not a codeword, then some error has occurred during the transmission.

To correct y, we compare y to each codeword of C and calculate the Hamming distance.

If c ∈ C is a codeword with minimum Hamming distance to y, then we replace y with c, provided that such a c is unique. On the other hand, if the closest codeword is not unique, then no correction can be made. This principle is known as the nearest-neighbour decoding principle.

Example 2.18. Let C = {000, 011, 110, 111} be a code and suppose that we receive the word 101. The received word 101 is not a codeword, which means that some error has occurred. Among the codewords, 111 has the minimum Hamming distance to 101, and it is the unique such codeword. Therefore we assume that the transmitted codeword was 111 and correct 101 to 111 by changing one digit.
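The nearest-neighbour principle of Example 2.18, sketched in Python (returning None when the closest codeword is not unique, as the principle requires):

```python
def nearest_neighbour_decode(y, code):
    """Replace y by the unique closest codeword; return None on a tie."""
    dist = lambda a, b: sum(u != v for u, v in zip(a, b))
    best = min(dist(c, y) for c in code)
    closest = [c for c in code if dist(c, y) == best]
    return closest[0] if len(closest) == 1 else None

C = {"000", "011", "110", "111"}
print(nearest_neighbour_decode("101", C))  # 111
```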

Lemma 2.4. If x, y ∈ Bn, then d(x, y) = wt(x + y).


2.9 Error detection and correction process

Let C ⊆ Bn be a code, and let y ∈ Bn be a received word. We first check whether y is a codeword or not. If y ∈ C, then y is a codeword and need not be corrected. But if y ∉ C, then y is not a codeword; in this case some error has occurred, and we say that we have detected an error.

Error detection

In this section we will find condition for error detection. The following theorem gives this condition.

Theorem 2.5. A code C detects all error patterns of k or fewer errors if and only if the minimum distance between any two codewords is at least k + 1.

Proof. First, we suppose that the code C detects all patterns of k or fewer errors. This means that c + e is not a codeword whenever c is a codeword and e is an error word with wt(e) ≤ k. Suppose that c and c′ are two distinct codewords such that d(c, c′) ≤ k, and let e = c + c′. Then wt(e) = d(c, c′) ≤ k. Also, c + e = c + c + c′ = c′ is a codeword, so the error word e is undetected.

This is a contradiction; hence

d(c, c′) ≥ k + 1 for all c, c′ ∈ C with c ≠ c′.

Conversely, let C be the set of all codewords and suppose that d(c, c′) ≥ k + 1 for all distinct c, c′ ∈ C. Let c ∈ C be transmitted and let e = e1e2···en be the error word with

wt(e) = e1 + e2 + ··· + en ≤ k.

Then the received word is c + e, and

d(c + e, c) = wt(c + e + c) = wt(e + 2c) = wt(e) ≤ k,

since 2c = 0 over the binary field. By our supposition, the distance between any two distinct codewords in C is at least k + 1. Therefore c + e is not a codeword, so the error e is detected. [8, page 5]

Error Correction

In this section we will define condition for error correction. The following theorem gives this condition.

Theorem 2.6. If d ≥ 2k + 1, then all errors of weight at most k can be corrected.

Proof. Suppose d ≥ 2k + 1; we want to prove that all errors of weight at most k can be corrected. Suppose this is not the case. Then there is a word x that belongs to two different Hamming spheres B(c1, k) and B(c2, k), where c1 and c2 are distinct codewords. By the triangle inequality

d(c1, c2) ≤ d(c1, x) + d(x, c2),

we get

d(c1, c2) ≤ k + k = 2k,

which contradicts the supposition that the minimum distance is at least 2k + 1. Hence the Hamming spheres of radius k around distinct codewords are disjoint, so a received word with at most k errors lies only in the sphere of the transmitted codeword and is decoded correctly.


3 Linear Codes

In this chapter, we study special codes with a richer structure, called linear codes. There are many reasons why linear codes are widely used: they are easy to construct, encoding a linear code is quick and easy, and decoding is often facilitated by the linearity of the code. We begin with some simpler binary linear codes, but the theory of general linear codes requires the use of abstract algebra. In this chapter we will also discuss some algebraic structures of great importance. Most of the material in this chapter is taken from [1] and [4].

3.1 Linear Codes

Definition 3.1 (Linear Code). A code C is said to be linear if the sum of any two codewords is again a codeword, i.e. for any u, v ∈ C,

u + v ∈ C.

In other words, a linear code is a code which is closed under addition of codewords. For example, the code C1 = {00, 11} is linear because all four sums

00 + 00, 00 + 11, 11 + 00, 11 + 11

are in C1.
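The closure test in Definition 3.1 is easy to automate; a small sketch over binary words (the function name is an assumption):

```python
def is_linear(code):
    """Check closure of a binary code under bitwise addition modulo 2."""
    add = lambda u, v: "".join(str((int(a) + int(b)) % 2) for a, b in zip(u, v))
    return all(add(u, v) in code for u in code for v in code)

print(is_linear({"00", "11"}))        # True
print(is_linear({"00", "01", "11"}))  # False, since 01 + 11 = 10 is missing
```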

3.2 Structure of Generating matrices

In this section, we will try to show the connection between linear algebra and coding theory. Here we will also find a generating matrix and parity check matrix of a linear code.

Generating matrices

Let Bk and Bn denote the sets of binary words of length k and n respectively, where n > k.

Suppose we want to send a word of length k from a source to a destination. For this purpose we define a one-to-one mapping

E : Bk −→ Bn.

We then speak of the code

C = {E(x) | x ∈ Bk} ⊆ Bn

as an (n, k)-code.

If C is a linear (n, k)-code, then a k × n matrix G can be used to define a one-to-one encoding mapping

E : Bk −→ Bn.

Suppose x = (a1, a2, a3, ..., ak) ∈ Bk is to be encoded. Form the row vector x = (a1 a2 ··· ak) with k elements and compute y = xG. We then obtain a vector of length n, which we denote by y = (b1 b2 ··· bn); the whole operation is performed modulo 2.
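The encoding y = xG modulo 2 can be sketched as follows, using the systematic generator matrix that appears later in Example 3.3:

```python
# Encoding y = xG over the binary field, with the (7, 3) systematic
# generator matrix of Example 3.3.
G = [
    [1, 0, 0, 0, 0, 0, 1],
    [0, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 0, 0, 1, 1],
]

def encode(x, G):
    """Row vector times matrix, all arithmetic modulo 2."""
    n = len(G[0])
    return [sum(xi * G[i][j] for i, xi in enumerate(x)) % 2 for j in range(n)]

print(encode([1, 0, 1], G))  # [1, 0, 1, 0, 0, 1, 0]
```

Note that the first three bits of the result repeat the message, as expected of a systematic code.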

Definition 3.2 (Generating Matrix). Let C be a linear code of length n and dimension k. Any k × n matrix whose rows form a basis for C is called a generating matrix of C.


Example 3.1. A generator matrix of a linear (7, 3)-code is

G1 =
[ 1 0 0 0 0 0 1 ]
[ 0 1 0 1 1 0 0 ]
[ 0 1 1 0 0 1 1 ].

A generator matrix of a linear (6, 4)-code is

G2 =
[ 1 0 0 0 0 0 ]
[ 0 1 0 1 1 0 ]
[ 0 1 1 0 0 1 ]
[ 0 1 1 0 0 1 ].

A generator matrix of a linear (7, 4)-code is

G3 =
[ 1 0 0 0 0 0 1 ]
[ 0 1 0 1 1 0 0 ]
[ 0 1 1 0 0 1 0 ]
[ 0 1 1 0 0 1 1 ].

Definition 3.3 (Systematic code). A systematic code is a code generated by a generator matrix G of the form G = (Ik, A), where Ik denotes the k × k identity matrix.

Example 3.2. A generator matrix of a systematic linear (7, 3)-code is

[ 1 0 0 0 0 0 1 ]
[ 0 1 0 1 1 0 0 ]
[ 0 0 1 0 0 1 1 ].

A generator matrix of a systematic linear (6, 4)-code is

[ 1 0 0 0 0 0 ]
[ 0 1 0 0 1 0 ]
[ 0 0 1 0 0 1 ]
[ 0 0 0 1 0 1 ].

A generator matrix of a systematic linear (7, 4)-code is

[ 1 0 0 0 0 0 1 ]
[ 0 1 0 0 1 0 0 ]
[ 0 0 1 0 0 1 0 ]
[ 0 0 0 1 0 1 1 ].

3.3 Parity-check matrix

We have successfully found the generating matrix. Now we can find the parity check matrix from the generating matrix G.

Definition 3.4 (Parity Check Matrix). Let G denote a k × n generator matrix of an (n, k) linear code C. Then a binary n × (n − k) matrix H is said to be a parity check matrix if

GH ≡ O (mod 2),

where O represents the zero matrix.


Example 3.3. A generating matrix of a systematic linear (7, 3)-code is

G =
[ 1 0 0 0 0 0 1 ]
[ 0 1 0 1 1 0 0 ]
[ 0 0 1 0 0 1 1 ],

where

I3 =
[ 1 0 0 ]
[ 0 1 0 ]
[ 0 0 1 ]

and

A =
[ 0 0 0 1 ]
[ 1 1 0 0 ]
[ 0 0 1 1 ].

A parity-check matrix is

H =
[ 0 0 0 1 ]
[ 1 1 0 0 ]
[ 0 0 1 1 ]
[ 1 0 0 0 ]
[ 0 1 0 0 ]
[ 0 0 1 0 ]
[ 0 0 0 1 ].

3.4 Process to detect error by using the parity check matrix

Let G be a k × n generating matrix and let H be a parity check matrix of a linear block code C. Then H can be used to correct single-bit errors. The decoding algorithm that corrects up to a single-bit error is as follows. Compute the syndrome s = yH for any received vector y.

1. If s = 0, then the received word y is accepted as a codeword.

2. If s ≠ 0 equals the i-th row of the parity check matrix, then the received word y is not a codeword. We correct y to the nearest codeword by changing the i-th bit of y.

3. If s ≠ 0 is not equal to any row of the parity check matrix, then more than one error has occurred.
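The three decoding steps above can be sketched as follows. Note that they require the rows of H to be distinct and nonzero, which does not hold for the H of Example 3.3 (its first and last rows coincide); the sketch therefore uses the parity-check matrix of the binary (7, 4) Hamming code, whose rows are the numbers 1 through 7 in binary, a code treated in Chapter 6:

```python
def syndrome_decode(y, H):
    """Single-error decoding: s = yH mod 2, then the three-step rule."""
    n, r = len(H), len(H[0])
    s = [sum(y[i] * H[i][j] for i in range(n)) % 2 for j in range(r)]
    if all(b == 0 for b in s):
        return y                      # step 1: y is accepted as a codeword
    for i, row in enumerate(H):
        if row == s:                  # step 2: s is the i-th row of H
            corrected = list(y)
            corrected[i] ^= 1         # flip the i-th bit
            return corrected
    return None                      # step 3: more than one error occurred

# Parity-check matrix of the (7, 4) Hamming code: row i is i + 1 in binary.
H = [[0, 0, 1], [0, 1, 0], [0, 1, 1], [1, 0, 0],
     [1, 0, 1], [1, 1, 0], [1, 1, 1]]

# A single error in the all-zero codeword is corrected:
print(syndrome_decode([0, 0, 1, 0, 0, 0, 0], H))  # [0, 0, 0, 0, 0, 0, 0]
```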

The following theorem gives us a summary of the whole discussion about generating matrices and parity check matrices.

Theorem 3.1. A matrix G is a generating matrix and H is a parity-check matrix for a linear code C if and only if

1. the rows of the generating matrix G are linearly independent,

2. the columns of the parity check matrix H are linearly independent,

3. if G is of type k × n, then H is of type n × (n − k), and

4. GH = 0.


4 Finite fields

4.1 Introduction

Roughly speaking, a field is a set in which one can add, subtract, multiply and divide (by non-zero elements). In this chapter, we return to the development of the structure of Galois fields begun in Chapter 2. There we introduced the definition of a field, but did not develop procedures for actually constructing Galois fields in terms of their addition and multiplication tables. Finite fields are used heavily in the study of coding theory. The references [5] and [8] are used in this chapter.

4.2 Polynomials

Polynomials have great importance in abstract algebra. In this section important defini- tions of polynomials along with their examples will be discussed.

Definition 4.1 (Polynomial). Suppose F is a field. Then an expression of the form

c0 + c1x + c2x^2 + ··· + cnx^n,

where n ≥ 0 is an integer and the coefficients c0, c1, ..., cn are elements of F with cn ≠ 0, is said to be a polynomial over F. We denote the set of all polynomials over the field F by F[x].

Definition 4.2 (Degree of Polynomial). Suppose F is a field and f(x) ∈ F[x] with

f(x) = c0 + c1x + c2x^2 + ··· + cnx^n.

Then the highest power of the variable x with non-zero coefficient is said to be the degree of the polynomial, denoted deg(f(x)).

Example 4.1.

1. The expression x + x^7 + 26x^9 is a polynomial of degree 9.

2. The expressions 9 + 2x^(7/3), 2 + 3x + x^(−3), and 4 + 2x + 3^x are not polynomials.

Different polynomials are added and multiplied in usual fashion.

Example 4.2. Let F be the binary field and let k(x) = x + x^2 + x^3, g(x) = x^3 + x^4 and h(x) = 1 + x^2 be polynomials in F[x]. Then

1. k(x) + g(x) = x + x^2 + x^4,

2. k(x) + h(x) = 1 + x + x^3,

3. k(x)h(x) = x + x^2 + x^4 + x^5,

since coefficients are added modulo 2 (for instance, the two x^3 terms in k(x) + g(x) cancel).
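A sketch of this arithmetic, representing a binary polynomial by the set of exponents whose coefficient is 1 (so x + x^2 + x^3 becomes {1, 2, 3}); the representation is my own choice, not taken from the thesis:

```python
# Polynomials over the binary field as sets of exponents (Example 4.2).
def poly_add(f, g):
    return f ^ g                  # symmetric difference: coefficients mod 2

def poly_mul(f, g):
    result = set()
    for i in f:
        for j in g:
            result ^= {i + j}     # x^i * x^j = x^(i+j), accumulated mod 2
    return result

k = {1, 2, 3}   # x + x^2 + x^3
g = {3, 4}      # x^3 + x^4
h = {0, 2}      # 1 + x^2
print(sorted(poly_add(k, g)))  # [1, 2, 4]    i.e. x + x^2 + x^4
print(sorted(poly_mul(k, h)))  # [1, 2, 4, 5] i.e. x + x^2 + x^4 + x^5
```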

Theorem 4.1 (Division Algorithm for polynomials). Suppose F is a field and let g(x) and k(x) be two polynomials in F[x] with k(x) ≠ 0. Then there exist two unique polynomials q(x) (the quotient) and r(x) (the remainder) in F[x] such that

g(x) = q(x)k(x) + r(x),

where r(x) is either equal to zero or the degree of r(x) is less than the degree of k(x). If r(x) = 0, we say that k(x) divides g(x) and write k(x) | g(x).

(19)

Example 4.3. Let F be the binary field, and let k(x) = 1 + x^7 and g(x) = 1 + x^3 + x^7 + x^12 be polynomials in F[x]. Then dividing g(x) by k(x) gives the remainder r(x) = x^3 + x^5. We can also express this as

r(x) ≡ g(x) (mod k(x)).

Definition 4.3 (Monic Polynomial). A non-zero polynomial

f(x) = c_0 + c_1 x + c_2 x^2 + ··· + c_n x^n

whose leading coefficient c_n is equal to 1 is said to be monic.

For example, f(x) = x^12 − 7 is a monic polynomial.
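The division in Example 4.3 can be reproduced with a small long-division routine over GF(2), again encoding polynomials as integers (bit i = coefficient of x^i); the helper name is our own.

```python
def gf2_divmod(dividend: int, divisor: int):
    """Division algorithm in GF(2)[x]; polynomials are ints (bit i = coeff of x^i).
    Returns (quotient, remainder) with deg(remainder) < deg(divisor)."""
    assert divisor != 0
    quotient, remainder = 0, dividend
    deg_div = divisor.bit_length() - 1
    while remainder and remainder.bit_length() - 1 >= deg_div:
        shift = (remainder.bit_length() - 1) - deg_div
        quotient ^= 1 << shift            # record the quotient term x^shift
        remainder ^= divisor << shift     # cancel the leading term
    return quotient, remainder

# Example 4.3: divide g(x) = 1 + x^3 + x^7 + x^12 by k(x) = 1 + x^7.
g = (1 << 12) | (1 << 7) | (1 << 3) | 1
k = (1 << 7) | 1
q, r = gf2_divmod(g, k)
assert r == (1 << 5) | (1 << 3)       # remainder r(x) = x^3 + x^5
assert q == (1 << 5) | 1              # quotient  q(x) = 1 + x^5
assert gf2_divmod(g ^ r, k)[1] == 0   # g(x) - r(x) is divisible by k(x)
```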

Definition 4.4 (Monic Polynomial of Least Degree). Even among monic polynomials, there exist many polynomials sharing the same zero. For instance, √3 is a zero of each of the polynomials

x^2 − 3, x^4 − 9, (x^2 − 3)^2, (x^2 − 3)(x^5 + 7), (x^2 − 3)(x^30 + 78x^3 + 1).

Among all of these, x^2 − 3 has the least degree. Thus x^2 − 3 is said to be the monic polynomial of least degree for √3.

Definition 4.5 (Zero of a polynomial). Let F be a field, let f(x) ∈ F[x] and let β ∈ F. We say that β is a zero of f(x) if f(β) = 0.

Example 4.4. Consider the polynomial f(x) = x + 7 over the field of real numbers R. Then x = −7 is a zero of the polynomial f(x), because f(−7) = 0.

Definition 4.6 (Extension of a field). Let K and F be two fields such that F ⊆ K. Then K is called an extension of the field F.

Definition 4.7 (Algebraic Number). A number δ ∈ K is said to be algebraic over a field F ⊆ K if there exists a non-zero polynomial p(x) ∈ F[x] such that δ is a zero of p(x); that is, there is a polynomial

p(x) = c_0 + c_1 x + ··· + c_m x^m,

where c_0, c_1, ..., c_m are in F and at least one c_i ≠ 0, such that p(δ) = 0.

Example 4.5. The number √7 is algebraic over Q, as it is a zero of the non-zero polynomial x^2 − 7, whose coefficients belong to Q.

Definition 4.8 (Reducible Polynomial). Suppose F is a field. A non-constant polynomial f(x) ∈ F[x] is said to be reducible over F if there exist two polynomials h(x) and k(x) in F[x] such that

1. the degree of each of these two polynomials is less than that of f(x), and

2. f(x) = h(x)k(x).

Example 4.6. The polynomial x^2 − 7 is reducible over R.

Proof. We can express x^2 − 7 = (x − √7)(x + √7), where both factors on the right-hand side belong to R[x] and each has degree less than that of x^2 − 7.

Example 4.7. The polynomial x^2 − 7 is not reducible over Q.

Definition 4.9 (Irreducible Polynomial). Let F be a field. A non-constant polynomial f(x) ∈ F[x] is said to be irreducible over F if it is not reducible over F.

Lemma 4.2. Let F be a field and F[x] the set of all polynomials over F. If c(x), d(x) and e(x) belong to F[x] with e(x) ≠ 0 and gcd(c(x), e(x)) = 1, then

e(x) | c(x)·d(x) ⟹ e(x) | d(x).

Proof. Since gcd(c(x), e(x)) = 1, by the Euclidean algorithm there exist polynomials l(x) and m(x) over F such that

l(x)c(x) + m(x)e(x) = gcd(c(x), e(x)) = 1.

Taking this modulo e(x) gives

l(x)·c(x) ≡ 1 (mod e(x)).

Multiplying both sides by d(x) yields

l(x)·c(x)·d(x) ≡ d(x) (mod e(x)).

On the other hand, since e(x) | c(x)·d(x) by assumption,

l(x)·c(x)·d(x) ≡ 0 (mod e(x)).

Hence

d(x) ≡ 0 (mod e(x)),

which shows that e(x) | d(x).

Theorem 4.3 (Factor Theorem). Let F be a field, let β ∈ F, and let f(x) ∈ F[x] be a polynomial with deg(f(x)) ≥ 1. Then

f(β) = 0 ⟺ (x − β) | f(x).

Proof. By the division algorithm we have

f(x) = (x − β)q(x) + r(x),

where q(x) and r(x) are polynomials in F[x] and r(x) = 0 or deg r(x) < deg(x − β) = 1. This shows that r(x) = 0 or deg r(x) = 0, i.e. r(x) is a constant c. Then

f(x) = (x − β)q(x) + c.

This gives

f(β) = 0 ⟺ (β − β)q(β) + c = 0 ⟺ c = 0 ⟺ f(x) = (x − β)q(x) ⟺ (x − β) | f(x).

Hence the proof is complete.


4.3 Primitive elements of finite fields

The multiplicative group of a finite field is cyclic. We will see that the non-zero elements of a finite field can be generated by a single element; it is possible that a finite field has more than one such generator.

Definition 4.10 (Finite Field). A field F is called a finite field if F is finite. We write F = GF(q) if F contains q elements.

Example 4.8. Let p be a prime. Then Z_p = {0, 1, ..., p − 1} is a finite field under addition and multiplication modulo p.

Remark: We shall denote the multiplicative group of non-zero elements of a finite field F by F*.

Definition 4.11 (Primitive element). Let F be a finite field and let b ∈ F*. The element b is called a primitive element of F if every element of F* can be written as a power of b.

Equivalently: if F* = ⟨b⟩ = {b^n | n ∈ Z} for some b ∈ F*, then b is called a primitive element of F.

Example 4.9. Consider the multiplicative group F* = {1, 2, 3, 4, 5, 6} of GF(7). To find a primitive element, take an element of F*, say 2, and form its powers; this yields the set {1, 2, 4} ≠ F*, so 2 is not a primitive element. Repeating the process for the other elements in turn, we find that every element of F* is some power of 3. Therefore 3 is a primitive element of GF(7).
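The search in Example 4.9 is easy to automate. The sketch below (the helper name is our own) tests whether the powers of b exhaust all non-zero residues modulo a prime p, using Python's built-in modular pow.

```python
def is_primitive(b: int, p: int) -> bool:
    """True iff the powers of b exhaust every non-zero residue modulo the prime p."""
    return {pow(b, k, p) for k in range(1, p)} == set(range(1, p))

assert not is_primitive(2, 7)   # powers of 2 give only {1, 2, 4}
assert is_primitive(3, 7)       # 3 generates all of {1, ..., 6}

# GF(7) has exactly two primitive elements, 3 and 5 (compare Example 4.11):
assert [b for b in range(1, 7) if is_primitive(b, 7)] == [3, 5]
```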

Example 4.10. Let p(x) = x^4 + x + 1. We construct the field GF(2^4) over F = GF(2) modulo the polynomial p(x). It contains 16 elements. Since we compute modulo x^4 + x + 1, we have x^4 = x + 1. The sixteen elements of GF(2^4) are listed below:

GF(2^4) = {0, 1, x, x^2, x^3, 1 + x, x + x^2, x^2 + x^3, 1 + x + x^3, 1 + x^2, x + x^3, 1 + x + x^2, x + x^2 + x^3, 1 + x + x^2 + x^3, 1 + x^2 + x^3, 1 + x^3}.

In this finite field, each non-zero element can be expressed as a power of x modulo the polynomial x^4 + x + 1. Hence x is a primitive element.

Theorem 4.4. Let α_1, α_2, ..., α_{q−1} denote the non-zero field elements of GF(q). Then

x^{q−1} − 1 = (x − α_1)(x − α_2)···(x − α_{q−1}).

Theorem 4.5. The group of nonzero elements of GF(q) under multiplication is a cyclic group.

Theorem 4.6. Every finite field has a primitive element.

Proof. Since the non-zero elements of GF(q) form a cyclic group of order q − 1, this group contains an element of order q − 1. This is a primitive element. [3, page 85]

The following examples illustrate computation with primitive elements.

1. The finite field GF(8) can be constructed by the polynomial p(x) = x^3 + x + 1, based on the primitive element β = x. In GF(8), every non-zero element has an order that divides seven. We have

β = x
β^2 = x^2
β^3 = x + 1
β^4 = x^2 + x
β^5 = x^2 + x + 1
β^6 = x^2 + 1
β^7 = 1 = β^0.

With this representation, multiplication is easy. For example, β^4 · β^5 = β^9 = β^7 · β^2 = β^2.

2. In GF(16), every non-zero element has an order that divides fifteen: an element may have order 1, 3, 5 or 15, and an element of order 15 is primitive. We can construct GF(16) with the polynomial p(x) = x^4 + x + 1, and the element β = x is primitive. We have

β = x
β^2 = x^2
β^3 = x^3
β^4 = x + 1
β^5 = x^2 + x
β^6 = x^3 + x^2
β^7 = x^3 + x + 1
β^8 = x^2 + 1
β^9 = x^3 + x
β^10 = x^2 + x + 1
β^11 = x^3 + x^2 + x
β^12 = x^3 + x^2 + x + 1
β^13 = x^3 + x^2 + 1
β^14 = x^3 + 1
β^15 = 1.

Again, with this representation, multiplication in GF(16) is easy. For example, β^11 · β^13 = β^24 = β^15 · β^9 = β^9.
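The power tables above can be generated programmatically. The sketch below (the encoding and helper names are our own) builds the powers of β = x in GF(16) modulo p(x) = x^4 + x + 1, checks that β has order 15 (i.e. is primitive), and verifies the product β^11 · β^13 = β^9.

```python
def gf2_mulmod(a: int, b: int, mod: int) -> int:
    """Multiply two GF(2) polynomials (ints, bit i = coeff of x^i), reduce mod `mod`."""
    res = 0
    while b:                      # carry-less multiplication
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    deg_mod = mod.bit_length() - 1
    while res and res.bit_length() - 1 >= deg_mod:   # polynomial reduction
        res ^= mod << ((res.bit_length() - 1) - deg_mod)
    return res

p = 0b10011        # p(x) = x^4 + x + 1
beta = 0b10        # beta = x
table, elem = {}, 1
for i in range(1, 16):
    elem = gf2_mulmod(elem, beta, p)
    table[i] = elem

assert len(set(table.values())) == 15 and table[15] == 1   # beta has order 15
assert table[4] == 0b11                                    # beta^4 = x + 1
assert gf2_mulmod(table[11], table[13], p) == table[9]     # beta^11 * beta^13 = beta^9
```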

Example 4.11. In GF(7), the numbers 5 and 3 are primitive:

5^1 = 5, 5^2 = 4, 5^3 = 6, 5^4 = 2, 5^5 = 3, 5^6 = 1,
3^1 = 3, 3^2 = 2, 3^3 = 6, 3^4 = 4, 3^5 = 5, 3^6 = 1.

4.4 The structure of finite fields

Now we look at the structure of finite fields. Suppose we have a finite field GF(q) with q elements.

Definition 4.12 (Minimal polynomial). Let K be an extension of a field F, let β ∈ K be algebraic over F, and consider

A = {p(x) ∈ F[x] : p(β) = 0}.

Then A is an ideal of F[x], and F[x] is a principal ideal domain. Let f(x) ∈ F[x] be a generator of the ideal A. If b is the coefficient of the highest power of x in f(x), then g(x) = b^(−1) f(x) is a monic polynomial with deg g(x) = deg f(x) which is also a generator of A. If

g(x) = r(x)s(x) for some r(x), s(x) ∈ F[x],

then either r(β) = 0 or s(β) = 0, which means that g(x) | r(x) or g(x) | s(x). But

deg g(x) = deg r(x) + deg s(x),

and therefore either deg r(x) = 0 or deg s(x) = 0. Hence g(x) is irreducible. By its choice, g(x) is also the monic irreducible polynomial of least degree which has β as a root. A polynomial with these properties is called the minimal polynomial of β over F.

Definition 4.13 (Characteristic of a field). Let F be a field. The characteristic of F is the order of the element 1 in the additive group of F, if this order is finite; it is defined to be zero if the order of 1 is infinite. The characteristic of F will be denoted by c(F).

Theorem 4.7. Every finite field contains a unique smallest subfield having a prime number of elements. Hence the characteristic of every Galois field is a prime number. [3, page 87]

Theorem 4.8. Let GF(Q) be a finite extension field of GF(q), and let β be a primitive element of GF(Q). Let f(x) be the minimal polynomial of β, of degree m over GF(q). Then the number of elements in the field is Q = q^m, and each element α ∈ GF(Q) can be written

α = b_{m−1} β^{m−1} + b_{m−2} β^{m−2} + ··· + b_1 β + b_0,

where b_{m−1}, ..., b_0 are elements of GF(q). [3, page 89]

Proof. Clearly, any expression of the form

α = b_{m−1} β^{m−1} + b_{m−2} β^{m−2} + ··· + b_1 β + b_0

is an element of GF(Q). This representation of α is unique, because if we take another representation

α = c_{m−1} β^{m−1} + c_{m−2} β^{m−2} + ··· + c_1 β + c_0,

then

0 = (b_{m−1} − c_{m−1}) β^{m−1} + ··· + (b_1 − c_1) β + (b_0 − c_0),

and thus a polynomial of degree at most m − 1 has β as a zero. This contradicts the definition of the minimal polynomial, which has degree m. The number of such α is q^m, and therefore Q is at least as large as q^m.


On the other hand, every non-zero field element can be expressed as a power of β. But if f(x) is the minimal polynomial of β, then f(β) = 0. Hence

β^m + f_{m−1} β^{m−1} + ··· + f_1 β + f_0 = 0.

Thus β^m can be written in terms of lower powers of β:

β^m = −f_{m−1} β^{m−1} − ··· − f_1 β − f_0.

This relation can be used to reduce any power of β to a linear combination of β^{m−1}, ..., β^1 and β^0. That is,

β^{m+1} = −f_{m−1}(−f_{m−1} β^{m−1} − ··· − f_1 β − f_0) − f_{m−2} β^{m−1} − ··· − f_1 β^2 − f_0 β,

and so forth. Hence every element of GF(Q) can be expressed as a linear combination of β^{m−1}, β^{m−2}, ..., β^0. Consequently, Q is not larger than q^m, and the theorem is proved. [3, page 89]


5 Cyclic Codes

In this chapter we will explain the structure of cyclic codes, and we will also discuss the construction of generating and parity check matrices from the generating and parity check polynomials of linear cyclic codes. The material in this chapter is taken from [6], [4] and [1].

5.1 Introduction to Cyclic Codes

Definition 5.1 (Cyclic Code). A code C is said to be cyclic if each cyclic right shift of any codeword of C is again a codeword in C, i.e. if

a_1 a_2 a_3 ··· a_{n−1} a_n ∈ C ⟹ a_n a_1 a_2 a_3 ··· a_{n−2} a_{n−1} ∈ C.

Example 5.1. Consider the binary codes C_1 = {000, 110, 011, 101} and

C_2 = {101010, 010101, 000000, 111111}.

Since the right shift of each codeword of C_1 and C_2 is again in C_1 and C_2, respectively, both codes are cyclic. On the other hand, the code

C_3 = {11001, 00101, 10010, 01110, 00000, 11100, 01011, 10111}

is not cyclic, since 01011 ∈ C_3 but 10101 ∉ C_3.

Example 5.2. Let C_2 = {101010, 010101, 000000, 111111} be the cyclic code above. The codewords of C_2 can also be expressed as polynomials over Z_2[x]. For example, the codeword 101010 can be written as

p(x) = 1 + x^2 + x^4 ∈ Z_2[x].

Since the code is cyclic, the right shift of 101010 gives the codeword 010101, whose corresponding polynomial is

p_1(x) = x + x^3 + x^5.

Note: We will use the terms "codeword" and "code polynomial" interchangeably.

Theorem 5.1. Let

b = b_0 b_1 b_2 ··· b_{n−1}

belong to B_n, and let

p(x) = b_0 + b_1 x + b_2 x^2 + ··· + b_{n−1} x^{n−1}

be its corresponding polynomial in Z_2[x]. Suppose b′ is the word obtained from b by one cyclic right shift, and let p_1(x) be the polynomial corresponding to b′. Then p_1(x) is the remainder when dividing x · p(x) by x^n − 1, according to the division algorithm in Z_2[x], i.e.

x · p(x) = b_{n−1} · (x^n − 1) + p_1(x). (5.1)

Proof. Since

b′ = b_{n−1} b_0 b_1 b_2 ··· b_{n−2},

the polynomial corresponding to b′ is

p_1(x) = b_{n−1} + b_0 x + b_1 x^2 + ··· + b_{n−2} x^{n−1}.


Now we consider the right-hand side of (5.1):

b_{n−1}(x^n − 1) + p_1(x) = b_{n−1} x^n − b_{n−1} + b_{n−1} + b_0 x + b_1 x^2 + ··· + b_{n−2} x^{n−1}
= b_0 x + b_1 x^2 + ··· + b_{n−1} x^n
= x · (b_0 + b_1 x + b_2 x^2 + ··· + b_{n−1} x^{n−1})
= x · p(x),

which proves the theorem.

Example 5.3. We choose the word 01010011 in B_8; the corresponding polynomial is f(x) = x + x^3 + x^6 + x^7. Now we make two right shifts, i.e. we compute

x^2 · f(x) = x^3 + x^5 + x^8 + x^9.

Since the word is taken from B_8, we divide x^2 · f(x) by x^8 − 1 and get the remainder

f_2(x) = 1 + x + x^3 + x^5.

Converting this polynomial back to its binary representation yields 11010100.
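The shift computation in Example 5.3 can be sketched in code (the helper name and encoding are our own): multiplying by x is a left shift of the coefficient bits, and reducing modulo x^n − 1 (which equals x^n + 1 over GF(2)) folds the overflowing term x^n back onto the constant term, exactly as in Theorem 5.1.

```python
def cyclic_right_shift(p: int, n: int, shifts: int = 1) -> int:
    """Right-shift a length-n binary word by computing x^shifts * p(x) mod (x^n - 1).
    Over GF(2), x^n - 1 = x^n + 1, so an overflow into bit n folds back onto bit 0."""
    for _ in range(shifts):
        p <<= 1                      # multiply by x
        if (p >> n) & 1:
            p ^= (1 << n) | 1        # subtract b_{n-1} * (x^n - 1)
    return p

f = 0b11001010                       # word 01010011, i.e. f(x) = x + x^3 + x^6 + x^7
r = cyclic_right_shift(f, 8, shifts=2)
assert r == 0b101011                 # f2(x) = 1 + x + x^3 + x^5
word = ''.join(str((r >> i) & 1) for i in range(8))
assert word == '11010100'            # as in Example 5.3
```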

5.2 Generating polynomial, Parity check polynomial

To define the generating and parity check polynomials, we state the following important theorem.

Theorem 5.2. Let C be a cyclic code in B_n, and let P_C be the set of all polynomials in Z_2[x] that represent the codewords in C. Among the polynomials in P_C, choose a polynomial g(x) ≠ 0 of minimal degree, and put s = deg g(x). Then

1. g(x) is uniquely determined.

2. g(x) divides x^n − 1.

3. Let f(x) ∈ Z_2[x] be any polynomial with deg f(x) ≤ n − deg g(x) − 1. Then g(x)f(x) represents a codeword in C.

4. There exists a polynomial h(x) ∈ Z_2[x] such that

x^n − 1 = g(x)h(x). (5.2)

5. A polynomial m(x) ∈ Z_2[x] of degree at most n − 1 represents a codeword in C if and only if m(x)h(x) ≡ 0 (mod x^n − 1), where h(x) is defined by (5.2).

Definition 5.2 (Generating polynomial, Parity check polynomial). The polynomials g(x) and h(x) in Theorem 5.2 are called the generating polynomial and the parity check polynomial, respectively, of the code C.

Note: The parity check polynomial h(x) can be used to detect errors. Suppose m(x) is the polynomial associated to a received word. Then, according to the previous theorem, this word is a codeword if and only if m(x)h(x) is a multiple of x^n − 1.


Example 5.4. Let C = {0000, 1010, 0101, 1111} be a cyclic code. The corresponding set of polynomials is

P = {0, 1 + x^2, x + x^3, 1 + x + x^2 + x^3}.

We see that 1 + x^2 is a polynomial of least degree in the set P, and it is unique. This polynomial is the generating polynomial of C. Every codeword in C can be represented as a multiple of the generating polynomial.

Construction of Cyclic Codes

The following procedure is used to construct a linear cyclic (n, k)-code:

1. First choose a generating polynomial g(x) ∈ Z_2[x] such that x^n − 1 ∈ Z_2[x] is a multiple of g(x).

2. To produce all possible codewords, choose polynomials f(x) ∈ Z_2[x] of degree at most k − 1 and multiply them with g(x).
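The two steps above can be sketched in code (the helper names and integer encoding are our own). Applied to g(x) = 1 + x^2 with n = 4, it reproduces the code C of Example 5.4.

```python
def gf2_mul(a: int, b: int) -> int:
    """Carry-less multiplication in GF(2)[x] (ints, bit i = coeff of x^i)."""
    res = 0
    while b:
        if b & 1:
            res ^= a
        a <<= 1
        b >>= 1
    return res

def cyclic_code(g: int, n: int) -> set:
    """All codewords of the length-n cyclic code generated by g(x):
    the products g(x) * f(x) with deg f(x) <= k - 1, where k = n - deg g(x)."""
    k = n - (g.bit_length() - 1)
    return {gf2_mul(g, f) for f in range(1 << k)}

C = cyclic_code(0b101, 4)            # g(x) = 1 + x^2, n = 4
words = sorted(''.join(str((c >> i) & 1) for i in range(4)) for c in C)
assert words == ['0000', '0101', '1010', '1111']
```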

5.3 Generating Matrices and Parity Check Matrices

Now we will learn how to construct generating and parity check matrices from the generating and parity check polynomials.

Theorem 5.3. Let g(x) = a_0 + a_1 x + ··· + a_r x^r ∈ Z_2[x] be the generating polynomial of a linear cyclic code C of length n. Then n − r is the dimension of C, and the (n − r) × n generating matrix of C can be written as

[ a_0 a_1 ··· a_r 0   0   ··· 0   ]
[ 0   a_0 a_1 ··· a_r 0   ··· 0   ]
[ 0   0   a_0 a_1 ··· a_r ··· 0   ]
[ ...                             ]
[ 0   0   ··· 0   a_0 a_1 ··· a_r ].

Example 5.5. In order to construct a binary cyclic code C of length n = 7, we need to find a generating polynomial that is a factor of x^7 − 1 in GF(2)[x]. Choose g(x) = 1 + x + x^2 + x^4. Since g(x) has degree r = 4, it follows that C has dimension k = n − r = 3. So our generating matrix G will be a 3 × 7 matrix:

G = [ 1 1 1 0 1 0 0 ]
    [ 0 1 1 1 0 1 0 ]
    [ 0 0 1 1 1 0 1 ].

Theorem 5.4. Suppose h(x) = b_0 + b_1 x + ··· + b_k x^k is the parity check polynomial of an (n, k) cyclic code C. Then we form the (n − k) × n matrix

H′ = [ b_k b_{k−1} ··· b_0 0   0   ··· 0   ]
     [ 0   b_k b_{k−1} ··· b_0 0   ··· 0   ]
     [ ...                                 ]
     [ 0   0   ··· 0   b_k b_{k−1} ··· b_0 ],

and use its transpose H = (H′)^T as the parity check matrix.


Example 5.6. The (7, 3) code C constructed in Example 5.5 has parity check polynomial h(x) = 1 + x + x^3. From this polynomial we form the (n − k) × n = 4 × 7 matrix

H′ = [ 1 0 1 1 0 0 0 ]
     [ 0 1 0 1 1 0 0 ]
     [ 0 0 1 0 1 1 0 ]
     [ 0 0 0 1 0 1 1 ],

and by taking the transpose of H′ we get the parity check matrix

H = (H′)^T = [ 1 0 0 0 ]
             [ 0 1 0 0 ]
             [ 1 0 1 0 ]
             [ 1 1 0 1 ]
             [ 0 1 1 0 ]
             [ 0 0 1 1 ]
             [ 0 0 0 1 ].
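As a sanity check, the matrices G of Example 5.5 and H of Example 5.6 should satisfy GH = 0, the defining relation between a generating matrix and a parity check matrix recalled from Section 3. A small pure-Python sketch (the helper name is our own):

```python
# Matrices over GF(2) as lists of rows.
G = [[1, 1, 1, 0, 1, 0, 0],
     [0, 1, 1, 1, 0, 1, 0],
     [0, 0, 1, 1, 1, 0, 1]]

H = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [1, 0, 1, 0],
     [1, 1, 0, 1],
     [0, 1, 1, 0],
     [0, 0, 1, 1],
     [0, 0, 0, 1]]

def matmul_gf2(A, B):
    """Matrix product with entries reduced modulo 2."""
    return [[sum(A[i][t] * B[t][j] for t in range(len(B))) % 2
             for j in range(len(B[0]))] for i in range(len(A))]

# GH = 0: every row of G (hence every codeword) is annihilated by H.
assert all(v == 0 for row in matmul_gf2(G, H) for v in row)
```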

5.4 BCH Codes

In this section we discuss the BCH codes. These codes were introduced by Hocquenghem (1959) and, independently, by Bose and Ray-Chaudhuri (1960). These authors proved that BCH codes have minimum distance at least 2t + 1, but they did not construct an error processor. The original BCH codes were generalized by several authors, notably Gorenstein and Zierler in 1961.

Definition 5.3. Let GF(q^m) be the smallest extension field of GF(q) that contains an element α of order n. If the generating polynomial g(x) ∈ GF(q)[x] is the least common multiple of the minimal polynomials of

α^l, α^{l+1}, ..., α^{l+δ−2}, where l ≥ 0,

then the cyclic code of length n over the field GF(q) is called a BCH code with designed distance δ.

Before going into details of BCH codes, we first need to define Vandermonde matrices.

Definition 5.4. A Vandermonde matrix of dimension m × m is defined as

V = [ 1  β_1  β_1^2  β_1^3  ···  β_1^{m−1} ]
    [ 1  β_2  β_2^2  β_2^3  ···  β_2^{m−1} ]
    [ ...                                  ]
    [ 1  β_m  β_m^2  β_m^3  ···  β_m^{m−1} ].

The elements β_i of the Vandermonde matrix V are elements of a field, which may be finite or infinite.

If the elements β_i of V are pairwise distinct, then the determinant of V is non-zero and equals

det V = ∏_{j=1}^{m−1} ∏_{i=j+1}^{m} (β_i − β_j).

In order to show that the Vandermonde matrix V has a non-zero determinant, we consider polynomials p(y) of degree m − 1. For any m values a_1, a_2, ..., a_m and m distinct points y_1, ..., y_m, there exists a polynomial p(y) of degree m − 1 such that

p(y_i) = a_i for 1 ≤ i ≤ m.


This condition can also be written as a matrix equation:

[ 1  y_1  y_1^2  ···  y_1^{m−1} ] [ x_1 ]   [ a_1 ]
[ 1  y_2  y_2^2  ···  y_2^{m−1} ] [ x_2 ] = [ a_2 ]
[ ...                           ] [ ... ]   [ ... ]
[ 1  y_m  y_m^2  ···  y_m^{m−1} ] [ x_m ]   [ a_m ]     (5.3)

We can write the matrix equation (5.3) as

AY = a.

The coefficient matrix A in (5.3) is clearly a Vandermonde matrix. Since the zero polynomial is the only polynomial of degree at most m − 1 with m roots, the equation AY = 0 has only the solution Y = 0. From this it follows that A is non-singular. So it follows that Vandermonde matrices with pairwise distinct entries β_i have a non-zero determinant. [1, page 55]

To understand BCH codes, we here consider two binary BCH codes. Let q = 2 and m = 5; since n = q^m − 1, we get n = 31. Let β be a primitive element of order 31 in GF(2^5), having x^5 + x^2 + 1 as its minimal polynomial over GF(2). The chart of conjugacy classes of all elements in GF(2^5) and their associated minimal polynomials is presented below.

References
