
Generating functions and regular languages of walks with modular restrictions in graphs


Department of Mathematics, Linköping University

Ludwig Rahm

LiTH-MAT-EX–2017/11–SE

Credits: 16 hp
Level: G2
Supervisor: Jan Snellman, MAI, Linköping University
Examiner: Jesper Thorén, Department of Mathematics, Linköping University
Linköping: June 2017


This thesis examines the problem of counting and describing walks in graphs, and the problem that arises when such walks have modular restrictions on how many times they visit each vertex. For the special cases of the path graph, the cycle graph, the grid graph and the cylinder graph, generating functions and regular languages for their walks and walks with modular restrictions are constructed. At the end of the thesis, a theorem is proved that connects the generating function for walks in a graph to the generating function for walks in a covering graph.

Keywords:

Graph Theory, Combinatorics, Generating Function, Voltage Graph, Regular Languages, Dyck's Paths



This thesis examines the problem of counting and describing walks in graphs, and the problem when these walks are also subject to modular restrictions on how many times they may visit each vertex. For the special cases of the path graph, the cycle graph, the grid graph and the cylinder graph, generating functions and regular languages are constructed for walks and for walks with modular conditions. The report concludes by proving a theorem that connects generating functions for walks in a graph with generating functions for walks in a covering graph.


I want to thank Jan Snellman for his help as a supervisor and for suggesting the aim of this thesis. I also acknowledge that I got the proofs in section 2.6 from him, although he has no published works on the subject that I can cite. I also want to thank my opponents Mikael Böörs and Tobias Wängberg for insightful advice and comments on the thesis.

I acknowledge that not all results in the result section are previously unknown. For example, it has come to my attention that the spectral properties of what I call the Cartesian product of graphs have already been studied in more detail than the conclusions I reach in this paper [1], and that the matrix Im ⊗ G1 + G2 ⊗ In is known as the Kronecker sum [6, p.142].


1 Introduction

2 Preliminaries
2.1 Graphs
2.2 Regular Languages and Finite Automata
2.3 Generating Functions
2.4 Noncommutative Generating Functions
2.5 Voltage Graphs
2.6 Walks With Prescribed Degree Sequences
2.7 Block Matrices
2.8 Matrix Polynomials
2.9 Dyck's Paths

3 Results
3.1 Generating Function of Pn
3.2 Spectrum of Pn
3.3 Generating Function of Cn
3.4 Spectrum of Cn
3.5 Spectrum of Pn × Pm
3.6 Generating Function of Pn × Pm
3.7 Spectrum of Pn × Cm
3.8 Periodical Properties of Pn mod 2
3.9 Adjacency Matrix and Characteristic Polynomial of Pn mod 2
3.10 Adjacency Matrix of G mod k
3.11 Generating Function of Pn mod k
3.12 Noncommutative Generating Function of Pn
3.13 Main Theorems
3.14 Graphs With Loops


Introduction

The problem this thesis tries to answer is what happens when you impose modular restrictions on how many times a walk in a graph may visit each vertex. Using the spectrum, generating functions and regular languages to describe walks, is there a pattern connecting the walks with such restrictions to walks without them? To study this problem, we look at concrete cases.

Primarily limiting ourselves to closed walks, we want to find generating functions and regular languages for walks in the following graphs: the path graph, the cycle graph, the grid graph and the cylinder graph; and then also impose modular restrictions on how many times each vertex has to be visited. An example of this could be the generating function for a closed walk in the path graph with fixed starting vertex that visits each vertex in the graph an even number of times. The next step is then to compare the cases and see if we can find a pattern.

The thesis will have the following structure:

• Chapter two introduces all the underlying theory needed to approach and understand the problem.

• Chapter three approaches the problem.

• Chapter four concludes the thesis.


Preliminaries

2.1 Graphs

The notation used in graph theory varies within the literature. In this section, we define the notation used in this thesis and state the theorems needed to solve the problem.

Definition 2.1.1. [14, p.241] [2, p.336] [5] An unweighted graph is a triple (V, E, σ) where V = {v1, ..., vp} is a set of vertices, E = {e1, ..., ek} is a set of edges and σ = (σinit, σfin) is a pair of functions, σinit : E → V, σfin : E → V. If σ(e) = (v1, v2), then e is an edge from v1 to v2, v1 is the initial vertex of e and v2 is the final vertex of e. A graph is undirected if its edges are regarded as unordered pairs of vertices, and directed if they are regarded as ordered pairs; an undirected graph can be transformed into a directed one by replacing each undirected edge by two directed edges. A directed graph is also called a digraph. If there is an edge connecting v1 and v2, then v1 and v2 are called adjacent. An edge e with σ(e) = (vi, vi) for some vertex vi is called a loop. If there exist distinct edges e1, e2 such that σ(e1) = σ(e2) in a graph G, then e1, e2 are called multiple edges and G is called a multigraph. An undirected graph without multiple edges and without loops is called simple. When treating graphs without multiple edges, σ can be suppressed from the notation and edges can be treated as (ordered) pairs of vertices.


Definition 2.1.2. [14, p.241] A walk of length n in a graph is an interlaced sequence of vertices and edges v1, e1, v2, ..., vn, en, vn+1, such that the initial vertex of ei is the final vertex of ei−1, for i = 2, ..., n. v1 is called the initial vertex of the walk and vn+1 is called the final vertex of the walk. If v1 = vn+1, the walk is called a closed walk. If for every pair of vertices (v1, v2) in a graph G = (V, E, σ) there exists a walk with initial vertex v1 and final vertex v2, then G is said to be strongly connected.

Definition 2.1.3. [5] For every vertex v ∈ V of a graph G = (V, E, σ), we define the forward and backward neighborhoods of v as N+(v) = {e ∈ E : ∃v′ ∈ V : σ(e) = (v, v′)} and N−(v) = {e ∈ E : ∃v′ ∈ V : σ(e) = (v′, v)} respectively. If G is undirected, then N+(v) = N−(v).

Definition 2.1.4. An autoconnected component of a graph G = (V, E, σ) is a subset A ⊆ V such that, if every directed edge in E is replaced by an undirected edge between the same vertices, then there is a walk between any two vertices in A.

Definition 2.1.5. [5] Let G1 = (V1, E1, σ1), G2 = (V2, E2, σ2) be two graphs. A graph homomorphism φ : G1 → G2 is a pair of functions φ = (φV, φE), φV : V1 → V2, φE : E1 → E2, such that (σ2)init ∘ φE = φV ∘ (σ1)init and (σ2)fin ∘ φE = φV ∘ (σ1)fin, i.e. the corresponding diagrams commute.

Definition 2.1.6. [5] A graph homomorphism φ is an epimorphism if both φV and φE are surjective.


The path, cycle, grid and cylinder graphs mentioned in the introduction refer to the following definitions.

Definition 2.1.7. The (undirected) path graph of length n, denoted Pn, is the graph with n vertices and n − 1 edges such that E = {(v1, v2), (v2, v3), ..., (vn−1, vn)}.

Figure 2.1 The graph P4

Definition 2.1.8. The undirected cyclic graph of length n, denoted Cn, is the graph obtained by adding the edge (v1, vn) to Pn. The directed cyclic graph, denoted Cn^dir, is the graph on n vertices with directed edges (v1, v2), (v2, v3), ..., (vn−1, vn), (vn, v1).

Figure 2.2 The graph C4


Definition 2.1.9. Let G1 = (V1, E1, σ1) and G2 = (V2, E2, σ2) be graphs; then we define the Cartesian product of G1 and G2 as G1 × G2 = G = (V, E, σ), where V = {(v1, v2) : v1 ∈ V1, v2 ∈ V2}, and there is an edge e ∈ E with σ(e) = ((v1, w1), (v2, w2)) iff ∃e1 ∈ E1 : σ1(e1) = (v1, v2) and w1 = w2, or ∃e2 ∈ E2 : σ2(e2) = (w1, w2) and v1 = v2.

The graph Pn × Pm is called a grid graph, and the graph Pn × Cm is called a cylinder graph.

Figure 2.3 The graph P4 × P2
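As the acknowledgements note, the adjacency matrix of a Cartesian product is the Kronecker sum Im ⊗ G1 + G2 ⊗ In. A minimal sketch of this construction with NumPy (the helper names are ours, not the thesis's):

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the path graph P_n."""
    A = np.zeros((n, n), dtype=int)
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1
    return A

def cartesian_adjacency(A1, A2):
    """Adjacency matrix of G1 x G2 as the Kronecker sum A1 (x) I + I (x) A2."""
    I1 = np.eye(A1.shape[0], dtype=int)
    I2 = np.eye(A2.shape[0], dtype=int)
    return np.kron(A1, I2) + np.kron(I1, A2)

# P2 x P2 is the 4-cycle: every vertex gets degree 2.
A = cartesian_adjacency(path_adjacency(2), path_adjacency(2))
print(A.sum(axis=1))  # [2 2 2 2]
```

The vertex ordering implicit in np.kron is the lexicographic ordering (v1, w1), (v1, w2), ..., matching Definition 2.1.9.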


Figure 2.4 The graph P2 × C3

Definition 2.1.10. [14, p.241] The adjacency matrix A of a graph G is the matrix of size |V| × |V| whose entry aij is #{e ∈ E : σ(e) = (vi, vj)}.

0 1 1 1 0 0
1 0 1 0 1 0
1 1 0 0 0 1
1 0 0 0 1 1
0 1 0 1 0 1
0 0 1 1 1 0

Figure 2.5 The adjacency matrix of P2 × C3

Theorem 2.1.11. [14, p.242] Let G be a graph and A be the adjacency matrix of G, then the (i, j)-entry of An is the number of walks in G of length n with initial vertex i and final vertex j.
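Theorem 2.1.11 can be checked directly with matrix powers; a small sketch (the choice of P4 is ours):

```python
import numpy as np

# Adjacency matrix of P4 (edges v1-v2, v2-v3, v3-v4).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

# (A^n)[i, j] counts walks of length n from v_{i+1} to v_{j+1}.
A3 = np.linalg.matrix_power(A, 3)
print(A3[0, 1])  # 2: the walks v1 v2 v1 v2 and v1 v2 v3 v2
```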


Theorem 2.1.12. [2, p.345] Let G be a strongly connected graph and let A be the adjacency matrix of G; then A has a unique positive eigenvalue λpf such that |λ| ≤ λpf for every other eigenvalue λ of A. This eigenvalue is called the Perron-Frobenius eigenvalue.

Definition 2.1.13. [2, p.290] Let G be a graph and let A be the adjacency matrix of G, then the eigenvalues of A are called the spectrum of G and the largest eigenvalue of A is called the spectral radius.

The spectrum of a graph, and especially the spectral radius, are of interest when studying walks in graphs as they give partial information on the number of walks of a fixed length.

Definition 2.1.14. [9, p.137] Let G be a graph with eigenvalues λ1 ≥ |λ2| ≥ ... ≥ |λn|; then the spectral gap of G is λ1 − |λ2|.

Definition 2.1.15. [2, p.341] A strongly connected graph G is said to be periodic with parameter d iff V can be partitioned into d classes, V = V0 ∪ ... ∪ Vd−1, such that any edge e ∈ E with σ(e) = (v1, v2) and v1 ∈ Vj has v2 ∈ Vj+1 mod d. The largest possible d is called the period. A graph with period 2 is called bipartite. If there exists no such decomposition with d ≥ 2, the graph is called aperiodic.

The period of a graph gives information about its walks, since every step taken in a walk also moves you one step forward in the d-cycle of vertex classes.

Theorem 2.1.16. [2, p.345] Let G be a graph with period d, adjacency matrix A and spectral radius λ1. Then the number of eigenvalues λ of A such that |λ| = λ1 is d. In particular, if G is aperiodic, no other eigenvalue has a modulus equal to λ1.


Theorem 2.1.17. [2, p.345] Let G be a connected graph with period d, adjacency matrix A, and spectral radius λ1. Then the d largest eigenvalues of A are

λ1 e^{2πij/d}, j = 0, 1, ..., d − 1.

Theorem 2.1.18. Let G be a graph with period d and let v1, v2 ∈ V be such that v1 ∈ Vi and v2 ∈ Vj. Then any walk from v1 to v2 must have length ≡ j − i mod d.

2.2 Regular Languages and Finite Automata

Regular languages offer one way to describe walks in graphs, and the theory of regular languages is often derived in the context of finite automata.

Definition 2.2.1. [7, p.1] A set of symbols A is called an alphabet. A finite sequence of symbols from A is called a string over A, or just a string. The elements of A are called symbols or letters. The set of all strings over A is denoted A∗.

For example, A = {0, 1, 2} is an alphabet and 0010 is a string over A.

Given two strings x, y over the same alphabet A, the concatenation of x and y, denoted x · y, is defined by adjoining the letters in y to those in x. For example, with A = {0, 1, 2}, x = 2, y = 1001, we have x · y = 21001 and y · x = 10012. This shows that concatenation of strings is a non-commutative operator. Concatenation of strings is associative. The empty string is denoted ε and commutes with all elements in A∗. [7, p.3]


Definition 2.2.2. [7, p.6] For any alphabet A, any subset of A∗ is called a language over A, or just a language. Elements of a language are called words.

If A = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, then A∗ can be identified with the set of all natural numbers, with possible leading zeros. Examples of languages over A are then L1 = {n ∈ N : n is odd} and L2 = {n ∈ N : n is prime}.

Let L1, L2 be languages over A; then L1 + L2 is defined as the union of L1 and L2, and L1 · L2, also written L1L2, is defined as {x · y : x ∈ L1 and y ∈ L2}. For a language L, L^0 = {ε} and L^{n+1} = L^n · L for n ≥ 0.

Definition 2.2.3. [7, p.8] The Kleene star of a language L, denoted L∗, is defined as

L∗ = L^0 + L^1 + L^2 + ...

The operators Kleene star, union and product of languages can be used to describe languages. When describing languages this way, brackets around sets are often omitted. For example, if A = {a, b}, then L = (a + b)∗(ab) is the language of all strings over A ending with ab. An expression of this type is called a regular expression.
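The same language can be tested with Python's re module, where the union + is written | (a quick illustration, not part of the thesis):

```python
import re

# (a + b)*(ab): all strings over {a, b} ending in "ab".
pattern = re.compile(r"(a|b)*ab")

for s in ["ab", "aab", "bab", "ba", "abb"]:
    print(s, bool(pattern.fullmatch(s)))
```

The first three strings end in ab and match; the last two do not.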

Definition 2.2.4. [7, p.98] A regular expression over an alphabet A = {a1, ..., an} is a string over A formed by repeated application of the following rules:

(R1) ∅ is a regular expression.
(R2) ε is a regular expression.
(R3) a1, ..., an are regular expressions.
(R4) If s and t are regular expressions then so is (s + t).
(R5) If s and t are regular expressions then so is (s · t).
(R6) If s is a regular expression then so is (s∗).


(R7) Every regular expression arises by a finite number of applications of the rules (R1) to (R6).

Given a regular expression s, you can calculate a language L(s) using the following rules: [7, p.99]

(D1) L(∅) = ∅.
(D2) L(ε) = {ε}.
(D3) L(ai) = {ai}.
(D4) L(s + t) = L(s) + L(t).
(D5) L(s · t) = L(s) · L(t).
(D6) L(s∗) = L(s)∗.

If there exists a regular expression s such that L = L(s), then L is said to be a regular language. [7, p.100]

Definition 2.2.5. [7, p.14, p.30] A finite automaton is a machine specified by 5 pieces of information:

A = (S, A, i, δ, T),

where S is a finite set called the set of states, A is the finite input alphabet, i is a fixed element of S called the initial state, δ is a function δ : S × A → S called the transition function, and T is a subset of S called the set of terminal states. If δ is not defined for every element of the set S × A, then the automaton is called incomplete.

A finite automaton takes a string from its input language as input and returns "yes" or "no". An example of such an automaton is the following:

Consider the graph P3. Let S = {v1, v2, v3}, A = {e1, e2}, i = v1, let δ be the function that maps a vertex and an edge to the other vertex incident to the edge, and let T = {v1}. This automaton determines whether a walk in P3 starting at v1 is a closed walk, and the automaton is incomplete because δ(v1, e2) and δ(v3, e1) are not defined.
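A sketch of this (incomplete) automaton A_P3 in Python, treating undefined transitions as rejection (the encoding is ours):

```python
# States are the vertices of P3; inputs are the edges e1 = {v1, v2}, e2 = {v2, v3}.
delta = {
    ("v1", "e1"): "v2", ("v2", "e1"): "v1",
    ("v2", "e2"): "v3", ("v3", "e2"): "v2",
}

def accepts(word, initial="v1", terminal=("v1",)):
    """Run the automaton; an undefined transition rejects (incomplete automaton)."""
    state = initial
    for symbol in word:
        if (state, symbol) not in delta:
            return False
        state = delta[(state, symbol)]
    return state in terminal

print(accepts(["e1", "e1"]))              # True: closed walk v1 v2 v1
print(accepts(["e1", "e2", "e2", "e1"]))  # True: closed walk v1 v2 v3 v2 v1
print(accepts(["e1", "e2"]))              # False: the walk ends at v3
print(accepts(["e2"]))                    # False: delta(v1, e2) is undefined
```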


Definition 2.2.6. [7, p.14] A transition diagram for a finite automaton A is a digraph whose vertices are the states S, with an edge f(e) = (v1, v2) if δ(v1, e) = v2, meaning that input e changes the state of the automaton from v1 to v2. The initial state is marked by an inward-pointing arrow, and the terminal states by double circles.

Figure 2.6 The transition diagram of the automaton AP3

The definition of a finite automaton doesn't tell us how to evaluate string inputs. Consider the automaton A = (S, A, i, δ, T) and the string a = a1a2...an; then we define δ(s, a) = δ(δ(s, a1), a2...an). For example, in AP3 we have δ(v1, e1e2) = δ(δ(v1, e1), e2) = δ(v2, e2) = v3. [7, p.16]

Definition 2.2.7. The language accepted or recognized by an automaton A = (S, A, i, δ, T ), denoted L(A), is L(A) = {w ∈ A∗: i · w ∈ T }. A language is said to be recognizable if there exists an automaton that recognizes it.

Theorem 2.2.8. [7, p.103] A language is recognizable if and only if it is regular.

Corollary 2.2.9. The language of all walks with initial vertex v1 and final vertex v2 in a graph G = (V, E) can be expressed with a regular expression over E.

Definition 2.2.10. Let G be a graph and let u, v be vertices in G; then LG(u, v) denotes the language of walks in G with initial vertex u and final vertex v.


Definition 2.2.11. The entropy of a language L ⊆ A∗ is defined as

lim sup_{d→∞} |L ∩ A∗d|^{1/d},

where A∗d is the set of words in A∗ of length d.

2.3 Generating Functions

Generating functions are one of the more common tools used to solve the combinatorial problem of counting the number of elements in a finite set, and are the tool we're going to use to count walks in graphs. Given a (possibly infinite) class of sets S1, S2, S3, ..., we define f(i) to be the number of elements in Si.

Definition 2.3.1. [14, p.3] A generating function for a combinatorial problem is the formal power series

∑_{n≥0} f(n) x^n.

Definition 2.3.2. [14, p.3] A multivariate generating function is a generating function in several variables:

G(x1, ..., xn) = ∑_{(k1,...,kn)∈N^n} f(k1, ..., kn) x1^{k1} · ... · xn^{kn}.

Given two generating functions, we define addition and multiplication of power series by the following rules:

∑_{n≥0} a_n x^n + ∑_{n≥0} b_n x^n = ∑_{n≥0} (a_n + b_n) x^n,

∑_{n≥0} a_n x^n · ∑_{n≥0} b_n x^n = ∑_{n≥0} c_n x^n, where c_n = ∑_{i=0}^{n} a_i b_{n−i}.

If F(x), G(x) are generating functions satisfying F(x)G(x) = 1 + 0x + 0x² + ... = 1, then G(x) = F(x)^{−1}. [14, p.4]

Definition 2.3.3. [14, p.202] A generating function F(x) is called rational iff there exist polynomials P(x), Q(x) such that F(x) = P(x)/Q(x) and Q(0) ≠ 0.

Theorem 2.3.4. [14, p.242] Let G be a graph and let A be the adjacency matrix of G; furthermore, let fij(n) be the number of walks of length n with initial vertex i and final vertex j, and let Fij be the generating function of fij. Then

Fij(λ) = (−1)^{i+j} det(I − λA : j, i) / det(I − λA),

where (B : j, i) denotes the matrix obtained by removing the j-th row and i-th column of B. Thus in particular Fij is a rational function of λ whose degree is strictly less than the multiplicity n0 of 0 as an eigenvalue of A.
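Theorem 2.3.4 can be verified symbolically for a small case, say closed walks at v1 in P3; a sketch using SymPy (the library and graph choice are ours):

```python
import sympy as sp

x = sp.symbols("x")
A = sp.Matrix([[0, 1, 0], [1, 0, 1], [0, 1, 0]])  # adjacency matrix of P3

M = sp.eye(3) - x * A
minor = M.copy()
minor.row_del(0)  # remove row j = 1 ...
minor.col_del(0)  # ... and column i = 1
F11 = sp.simplify(minor.det() / M.det())  # (-1)^(1+1) det(I - xA : 1, 1) / det(I - xA)

# Coefficients 1, 0, 1, 0, 2 match the closed walks at v1 of length 0..4.
series = sp.series(F11, x, 0, 6).removeO()
coeffs = sp.Poly(series, x).all_coeffs()[::-1]
print(F11, coeffs)
```

The result simplifies to (1 − x²)/(1 − 2x²), and its Taylor coefficients agree with the entries (A^n)[1, 1].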

Definition 2.3.5. Let p(x) be a polynomial of degree n; then we define the reciprocal of p, denoted p∗(x), as x^n · p(1/x).

If x0 is a root of p(x), then 1/x0 is a root of p∗(x). det(I − λA) and det(A − λI) are reciprocals. If p(x) = a0 + a1x + ... + an x^n, then p∗(x) = an + an−1 x + ... + a0 x^n. [2, p.337]

Definition 2.3.6. [2, p.243] Let Sn be a sequence such that lim sup_{n→∞} |Sn|^{1/n} = K; then Sn is said to be of exponential order K^n.

In particular, if Sn is the number of words of length n in a language, then the exponential order of Sn is the same as the entropy of the language.

Theorem 2.3.7. [2, p.244] Let f(t) = f0 + f1 t + f2 t² + ... be a generating function with positive coefficients and suppose that f is analytic at 0. Let R be the modulus of a singularity nearest to the origin; then the coefficient sequence fn is of exponential order (1/R)^n.

Corollary 2.3.8. Let G be a connected aperiodic graph with adjacency matrix A and let Sn be the number of walks from node i to node j of length n in G; then the exponential order of Sn is λ^n, where λ is the inverse of the smallest root of det(I − λA), and thus the largest eigenvalue of A.
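Numerically, for the triangle C3 (connected and aperiodic, spectral radius 2), the closed-walk counts indeed grow like 2^n; a quick NumPy check (the example graph is ours):

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])  # triangle graph C3

# Ratios of consecutive closed-walk counts approach the spectral radius.
w14 = np.linalg.matrix_power(A, 14)[0, 0]
w15 = np.linalg.matrix_power(A, 15)[0, 0]
print(w15 / w14)                       # close to 2
print(max(abs(np.linalg.eigvals(A))))  # spectral radius: 2.0
```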

Theorem 2.3.9. [2, p.398, p.401] Let G be a graph with period d ≥ 2, so that its generating function f(n) has multiple poles λ1, λ2, ..., λd at distance λ1 from the origin; then f(n) is asymptotically proportional to

∑_{k=1}^{d} (1/λk)^n.

Corollary 2.3.10. Let G be a graph with period d, spectral radius λ1 and generating function f(n); then f(n) is of exponential order λ1^n for n ≡ 0 mod d, and f(n) = 0 otherwise.

2.4 Noncommutative Generating Functions


Noncommutative generating functions carry more information about walks than the commutative generating function. Instead of dealing with a single variable x for the number of steps in a walk, we will give each edge its own variable. Since we're not going to let these variables commute, each term in the generating function will be a word corresponding to a specific walk in the graph.

Definition 2.4.1. [15, p.195] Let K denote a fixed field and let X be an alphabet. A formal (power) series in X over K is a function S : X∗ → K. We write ⟨S, w⟩ for S(w) and then write

S = ∑_{w∈X∗} ⟨S, w⟩ w.

The set of all formal series in X is denoted K⟨⟨X⟩⟩.

Addition is componentwise and multiplication is defined similarly to the commutative case [15, p.196]:

S + T = ∑_w (⟨S, w⟩ + ⟨T, w⟩) w,

(∑_u ⟨S, u⟩ u)(∑_v ⟨T, v⟩ v) = ∑_{u,v} ⟨S, u⟩⟨T, v⟩ uv = ∑_w (∑_{uv=w} ⟨S, u⟩⟨T, v⟩) w.

Definition 2.4.2. [15, p.196] A noncommutative polynomial is a series S ∈ K⟨⟨X⟩⟩ that is a finite sum ∑⟨S, w⟩w. The set of polynomials forms a subalgebra of K⟨⟨X⟩⟩ denoted K⟨X⟩, or Kpol⟨⟨X⟩⟩.

A noncommutative rational series is an element of the smallest subalgebra Krat⟨⟨X⟩⟩ of K⟨⟨X⟩⟩ containing K⟨X⟩ such that if S ∈ Krat⟨⟨X⟩⟩ and S^{−1} exists, then S^{−1} ∈ Krat⟨⟨X⟩⟩.

Theorem 2.4.3. [15, p.201] Suppose that B1, ..., Bn are formal series satisfying n linear equations of the form

B1(1 + c11) + B2 c12 + ... + Bn c1n = d1
B1 c21 + B2(1 + c22) + ... + Bn c2n = d2
...
B1 cn1 + B2 cn2 + ... + Bn(1 + cnn) = dn,

where each cij is a rational series with zero constant term, and where each dj is a rational series. Then B1, ..., Bn are rational series, and are the unique series satisfying the above system of linear equations.

Definition 2.4.4. [15, p.197] A series S ∈ K⟨⟨X⟩⟩ is recognizable if there exist a positive integer n and a homomorphism of monoids

µ : X∗ → K^{n×n},

as well as a row vector ν ∈ K^{1×n} and a column vector γ ∈ K^{n×1}, such that for all w ∈ X+ we have

⟨S, w⟩ = ν · µ(w) · γ.

Theorem 2.4.5. [15, p.202] A formal series S ∈ KhhXii is rational if and only if it is recognizable.

Theorem 2.4.6. Let L ⊆ X∗ be a language and let

S = ∑_{w∈L} ⟨S, w⟩ w

be the noncommutative generating function for L. Then, if L is regular, S is rational.


Proof. S is recognizable, and therefore rational. To show that S is recognizable, we construct n, µ, ν, γ. Since L is regular, we know that there is a finite automaton A that accepts L. Let T = (V, E, σ) be the transition diagram of A, considered as a digraph. Set n to be the number of vertices in T (states in A). Since µ is a homomorphism, it's enough to define it on the edges of T. For e ∈ E with σ(e) = (vi, vj), we choose µ(e) as the n × n matrix with a 1 in entry (i, j) and zeroes elsewhere. Choose ν as the vector with 1 in the entry corresponding to the initial state of A and zeroes elsewhere, and γ as the vector with 1 in all the entries corresponding to the terminal states of A and zeroes elsewhere. For arbitrary w = e1e2...ef ∈ X+, we have µ(w) = µ(e1) · ... · µ(ef), which is the matrix with a 1 in position (σinit(e1), σfin(ef)) and zeroes elsewhere. If w ∈ L, the 1 is in the row corresponding to the initial state of A and in the column corresponding to a terminal state, so ν · µ(w) · γ = 1. For u ∉ L, we know that it either doesn't start in the initial state or doesn't end in a terminal state, so ν · µ(u) · γ = 0. We also know that ⟨S, w⟩ is 1 if w ∈ L and 0 otherwise.

In particular, the above theorem tells us that every noncommutative generating function for walks in graphs is rational.
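The construction in the proof can be carried out concretely for the automaton A_P3 from Section 2.2 (the encoding is ours): ν picks out the initial state, γ the terminal states, and µ sends each letter to the 0/1 matrix of the transitions it induces.

```python
import numpy as np

mu = {
    "e1": np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]]),  # v1 <-> v2
    "e2": np.array([[0, 0, 0], [0, 0, 1], [0, 1, 0]]),  # v2 <-> v3
}
nu = np.array([[1, 0, 0]])         # row vector: initial state v1
gamma = np.array([[1], [0], [0]])  # column vector: terminal states {v1}

def coefficient(word):
    """<S, w> = nu . mu(w) . gamma: 1 if w is accepted by A_P3, else 0."""
    M = np.eye(3, dtype=int)
    for letter in word:
        M = M @ mu[letter]
    return int((nu @ M @ gamma)[0, 0])

print(coefficient(["e1", "e1"]))              # 1
print(coefficient(["e1", "e2", "e2", "e1"]))  # 1
print(coefficient(["e1", "e2"]))              # 0
```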

Theorem 2.4.7. Let G be a graph and let G_{v1,v2}(n) be the multivariate generating function for the number of walks in G starting in v1, ending in v2 and using edge ek nk times. Then

G_{v1,v2}(e1, ..., en) = δ_{v1,v2} + ∑_{e=(v1,u)} e · G_{u,v2}(e1, ..., en) = δ_{v1,v2} + ∑_{e=(u,v2)} G_{v1,u}(e1, ..., en) · e,

where δ_{v1,v2} = 1 if v1 = v2, and δ_{v1,v2} = 0 if v1 ≠ v2.

The specialization G_{v1,v2}(x, ..., x) is the generating function that counts the walks from v1 to v2 by length.

Proof. A walk to v2 will either be the empty walk, or it will have a second-to-last vertex adjacent to v2. So any non-empty walk to v2 will be a walk to an adjacent vertex u followed by taking the edge (u, v2).

A walk from v1 will either be the empty walk, or it will have a second vertex adjacent to v1. So any non-empty walk from v1 will be taking the edge (v1, u) to an adjacent vertex u, followed by a walk from u.

The above theorem can be used to generate a system of linear equations for generating functions by fixing the initial node and going through each node in the graph as final node. For the case of initial and final node v1, we get the following:

(G_{v1,v1}(x), G_{v1,v2}(x), ..., G_{v1,vn}(x))^T = (1, 0, ..., 0)^T + x A^T (G_{v1,v1}(x), ..., G_{v1,vn}(x))^T

⟺ (I − xA^T)(G_{v1,v1}(x), ..., G_{v1,vn}(x))^T = (1, 0, ..., 0)^T

⟺ (G_{v1,v1}(x), ..., G_{v1,vn}(x))^T = (I − xA^T)^{−1} (1, 0, ..., 0)^T,

where A is the adjacency matrix of the graph. This way of generating a linear system will be referred to as the transfer matrix method.
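A sketch of the transfer matrix method with SymPy, here for the cycle graph C3 (the graph and library choice are ours):

```python
import sympy as sp

x = sp.symbols("x")
A = sp.Matrix([[0, 1, 1], [1, 0, 1], [1, 1, 0]])  # adjacency matrix of C3

# Solve (I - x*A^T) G = e1; G[j] is then G_{v1, v_{j+1}}(x).
e1 = sp.Matrix([1, 0, 0])
G = (sp.eye(3) - x * A.T).inv() * e1

G11 = sp.simplify(G[0])
coeffs = sp.Poly(sp.series(G11, x, 0, 5).removeO(), x).all_coeffs()[::-1]
print(G11)     # generating function for closed walks at v1
print(coeffs)  # 1, 0, 2, 2, 6 closed walks of length 0..4
```

The result simplifies to (1 − x²)/(1 − 3x² − 2x³), whose coefficients match the closed-walk counts (A^n)[1, 1].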

Theorem 2.4.8. Let G be a digraph and let S be the (rational) formal series over E such that ⟨S, w⟩ = 1 if w ∈ E∗ is a walk from vi to vj and ⟨S, w⟩ = 0 otherwise. Then the regular language over E for the walks from vi to vj is the result of repeated application of the following rules on S [15, p.201]:

(R1) If s1, s2 are regular expressions, then s1 + s2 (addition) is replaced by s1 + s2 (union).

(R2) If s1, s2 are regular expressions, then s1 · s2 (multiplication) is replaced by s1 · s2 (concatenation).


2.5 Voltage Graphs

Voltage graphs are of interest because they turn out to be a generalization of the problem in this thesis. This section will define a voltage graph and supply the theory on them that we need; the next section will specialize this information and apply it to our problem.

Definition 2.5.1. [12, p.17] [4] A voltage graph is a triple Λ = (G, 𝒱, ϕ) where G = (V, E, σ) is a digraph, 𝒱 is a finite abelian group, and ϕ : E → 𝒱 is an assignment of group elements to the edges of G. The voltage of a walk w = v1, e1, ..., en, vn+1 in G is defined by ϕ(w) = ∑_{i=1}^{n} ϕ(ei). We call the graph G the base graph of the voltage graph and 𝒱 the voltage group.

Definition 2.5.2. [12, p.18] A balanced cycle of a voltage graph G is defined as a closed walk in G whose voltage is the identity element of the voltage group. A balanced assignment is a function ϕ such that every closed walk in the voltage graph is balanced.

Definition 2.5.3. [12, p.19] [5] Let Λ = (G, 𝒱, ϕ) be a voltage graph. We define the derived graph G̃ = (Ṽ, Ẽ, σ̃) such that Ṽ = V × 𝒱 and Ẽ = E × 𝒱. For q ∈ 𝒱, e ∈ E we have σ̃(e, q) = ((σinit(e), q), (σfin(e), ϕ(e)q)) and ϕ((e, q)) = ϕ(e). For v ∈ V and k ∈ 𝒱, we denote (v, k) ∈ Ṽ by vk; for e ∈ E and k ∈ 𝒱, we denote (e, k) ∈ Ẽ by ek. Let e = (u, v) ∈ E, and let ϕ(e) = k ∈ 𝒱; then σ̃(ei) = (ui, vi+k).

Definition 2.5.4. [5] A graph homomorphism φ : G1 → G2 is a covering, and G1 covers G2, iff φ is an epimorphism and if, for every v ∈ V1, the map

N+(v) ∋ e ↦ φE(e) ∈ N+(φV(v))

is a bijection.


Theorem 2.5.5. [12, p.21] Let G = (V, E, σ) be a graph with derived graph G̃ = (Ṽ, Ẽ, σ̃) induced by voltage graph Λ = (G, 𝒱, ϕ). Then G̃ covers G with covering function p : Ṽ → V defined by p(vk) = v.

Definition 2.5.6. [5] Let G = (V, E, σ) be a graph. A partition of the vertex set

V = V1 ∪ V2 ∪ ... ∪ Vk

is equitable if there is a k × k matrix M = (mij) such that every vertex in Vi is the initial vertex of exactly mij arcs in G with final vertex in Vj.

Theorem 2.5.7. [5] If the graph G has an equitable partition with matrix M, then the characteristic polynomial of M divides the characteristic polynomial of the adjacency matrix of G.

Theorem 2.5.8. [5] Let G1, G2 be two graphs with adjacency matrices A1, A2, and let G2 be a covering of G1; then G2 is equitable with M = A1, and the characteristic polynomial of A1 divides the characteristic polynomial of A2.

Definition 2.5.9. [12, p.22] Let G = (V, E) be a graph with derived graph G̃ = (Ṽ, Ẽ). Then the fiber of v ∈ V is p^{−1}(v) = {vk | k ∈ 𝒱}, where p is the covering map.


Theorem 2.5.10. [12, p.23] Let w = v1, e1, ..., en, vn+1 be a walk in a voltage graph with initial vertex v1; then for each vertex (v1)a in the fiber over v1 there is a unique walk wa = (v1)a, (e1)ϕ(e1), ..., (en)ϕ(en), (vn+1)_{a+∑_{i=1}^{n} ϕ(ei)} determined by w.

Theorem 2.5.11. [12, p.23] Let w be a walk from u to v in a voltage graph Λ = (G, 𝒱, ϕ) and let b be the voltage of w. Then the walk wa in the derived graph of Λ with initial vertex ua will have final vertex va+b.

2.6 Walks With Prescribed Degree Sequences

The problem of counting and describing walks in graphs where you keep track of how many times each vertex has been visited, modulo some number, turns out to be a special case of voltage graphs, as you can construct a group such that the derived graph will describe the problem.

Definition 2.6.1. Let G = (V, E, σ) be a graph and k = (k1, ..., k|V|) a vector; then we define the derived graph G_{mod k} = (Ṽ, Ẽ, σ̃), with base graph G and modulo vector k, by Ṽ = V × Z_{k1} × Z_{k2} × ... × Z_{k|V|}, and ((vi, (s1, ..., s|V|)), (vj, (t1, ..., t|V|))) ∈ Ẽ iff (vi, vj) ∈ E and (s1, ..., s|V|) + (0, ..., 0, 1, 0, ..., 0) = (t1, ..., t|V|) in Z_{k1} × Z_{k2} × ... × Z_{k|V|}, where the 1 in (0, ..., 0, 1, 0, ..., 0) is in the j-th position. The vertex (vi, (s1, ..., s|V|)) = (vi, s) ∈ Ṽ is said to have state vector s. The function ϕ : E → Ẽ is defined by ϕ(ei) = (ei, s), such that (ei, s) is the unique edge in Ẽ with first component ei.

This construction is completely analogous to the one used for voltage graphs, so it follows that we can apply all theory about voltage graphs here too.

Theorem 2.6.2. Let G be a graph and k a modulo vector; then G_{mod k} is a covering graph of G.

Definition 2.6.3. Let G be a graph, w = v1, e1, ..., en, vn+1 a walk in G, and k a modulo vector; then the voltage of the walk w mod k is defined as ∑_{i=1}^{n} ϕ(ei).

Definition 2.6.4. Let G be a graph with derived graph G_{mod k}. Then the fiber of v ∈ V is p^{−1}(v) = {(v, s) | s ∈ Z_{k1} × Z_{k2} × ... × Z_{k|V|}}, where p is the covering map.

Definition 2.6.5. The pair (vi, si) comes before the pair (vj, sj) in lexicographic order if i < j, or if i = j and si comes before sj lexicographically.

Theorem 2.6.6. Let w = v1, e1, ..., en, vn+1 be a walk in a graph G with initial vertex v1; then for each vertex (v1, s) in the fiber over v1 there is a unique walk ws = (v1, s), (e1, ϕ(e1)), ..., (en, ϕ(en)), (vn+1, s + ∑_{i=1}^{n} ϕ(ei)) determined by w. Thus there is a bijection between walks in G and walks in G_{mod k}.
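The correspondence can be checked computationally on a small case. Below is a sketch of G_mod k for G = P2 and k = (2, 2), following Definition 2.6.1 (the encoding is ours): in P2 a closed walk at v1 visits v1 and v2 an even number of times (not counting the starting vertex) exactly when its length is divisible by 4, and the derived graph reproduces this.

```python
import numpy as np
from itertools import product

states = list(product(range(2), range(2)))           # Z2 x Z2
vertices = [(v, s) for v in (0, 1) for s in states]  # v = 0 for v1, 1 for v2
index = {vs: i for i, vs in enumerate(vertices)}

A = np.zeros((8, 8), dtype=int)
for v, s in vertices:
    u = 1 - v                  # the unique neighbour in P2
    t = list(s)
    t[u] = (t[u] + 1) % 2      # stepping to u increments its visit count mod 2
    A[index[(v, s)], index[(u, tuple(t))]] = 1

i0 = index[(0, (0, 0))]        # the vertex (v1, s0)
counts = [np.linalg.matrix_power(A, n)[i0, i0] for n in range(1, 9)]
print(counts)  # closed walks with all degrees even exist only at lengths 4, 8, ...
```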

Theorem 2.6.7. Let w be a walk from u to v in a graph G and let b be the voltage of w mod k. Then the walk ws in the derived graph G_{mod k} with initial vertex (u, s) will have final vertex (v, s + b).

Definition 2.6.8. Let G be a graph and let w = v1, e1, v2, ..., en, vn+1 be a walk in G. Then we define the degree sequence of w, degree(w), as the function degree(w) : V → N that counts the number of times the walk w visits each vertex: degree(w)(v) = #{j | vj = v, 1 < j ≤ n + 1}.

We're not interested in the number of times each vertex is visited, but in the number of times that the vertex vi is visited modulo ki. Modifying the above definition, we can construct a modular degree sequence degree_{mod k}(w)(vi) = #{j | vj = vi, 1 < j ≤ n + 1} mod ki.

Because a walk will always start at the state s0 = (0, ..., 0), we can equate walks in G that have degree sequence s with walks in G_{mod k} that have voltage s, and the problem of the thesis becomes equivalent to counting and describing walks in G_{mod k} that have a prescribed voltage. Since the walks are characterized by their voltage, the initial and final states become irrelevant and it's only interesting to look at their difference. In particular, this means that it's no limitation to only look at initial state s0.

Definition 2.6.9. Let L_{G,k}(u, v, s) be the language of walks in a derived graph G_{mod k} that start in vertex (u, s0) and end in vertex (v, s).

Theorem 2.6.10. Let G be an undirected connected graph and let u, v be vertices in G; then the entropy of L_{G,k}(u, v, s) is bounded by the entropy of LG(u, v), with equality for some s.

Proof.

LG(u, v) = ∪_{si ∈ Z_{k1}×...×Z_{kn}} L_{G,k}(u, v, si).

Theorem 2.6.11. L_{G,k}(u, v, s) is a regular language.

Proof. This follows from Theorem 2.6.6 and Corollary 2.2.9.

Theorem 2.6.12. Let G be an undirected, connected graph. Then, for any moduli vector k and prescribed degree sequence s, and for any starting vertex v ∈ V, there is some walk w, starting and ending in v, which has the prescribed degree sequence, except possibly at v.


Proof. First, note that since G is connected, and hence possesses a spanning tree, it is enough to prove the result for a tree.

Next, we prove the result for P_n. Here it suffices to see that we can walk from v_1 to v_n and then back again, and repeat this until the degree at v_n reaches the prescribed value, and then return to v_1. By induction, there is a walk in P_{n−1} with the prescribed degree at all vertices except possibly at v_1.

Finally, suppose that G is a tree, rooted at v. Choose any branch in the tree. This branch, viewed as an induced subgraph, is isomorphic to P_n for some n; hence there is a walk from v to v with the prescribed degrees at the vertices of the branch, except possibly at v. Removing the part of this branch that has no common edge with another branch, we still have a tree rooted at v. By induction, there is a walk in this smaller tree with the prescribed degree sequence, except possibly at v.

Theorem 2.6.13. Let G be an undirected connected graph. Assume that d is a moduli vector with every component equal to d, and let G_{mod d} be the derived graph. Then,

1. The length of a closed walk in G_{mod d} is divisible by d. In particular, G_{mod d} is periodic.

2. G_{mod d} has at most d autoconnected components.

3. If G is periodic with period divisible by d, then G_{mod d} has exactly d autoconnected components.

Proof.

1. Since the walk is closed, the initial and final states are identical and therefore every vertex has to be visited 0 mod d times; thus every closed walk has length divisible by d.

2. Let v_1, ..., v_{d+1} be d + 1 different vertices in G_{mod d}. By theorem 2.6.12, there exists a walk in G_{mod d} from each v_i to some vertex (w_j, s_0), and by the pigeonhole principle, at least one such vertex (w_j, s_0) will be connected by walks to two different v_i. Therefore there can be at most d vertices lying in pairwise different autoconnected components.

3. Let 0 < i < d and consider the two vertices u = (v_1, s_0) and w = (v_1, (i, 0, ..., 0)). We claim that there is no walk from u to w. Any walk from u to w projects to a closed walk at v_1 in G, and since G is periodic with period divisible by d, it will have length divisible by d. However, each step taken in the walk will increase the component sum of the state by 1 mod d, which implies that the final vertex of a walk of length divisible by d will be a vertex whose state has a component sum of 0 mod d. This contradicts that the final vertex is (v_1, (i, 0, ..., 0)). Thus no such walk can exist, and therefore there are at least d different autoconnected components.

2.7 Block Matrices

A large part of the results is formulated or proved with the help of block matrices. This section lists the theorems needed to perform the necessary calculations.

Theorem 2.7.1. [13] If

M = [ A  B ]
    [ C  D ]

is a 2 × 2 block matrix with all blocks of the same size, then

1. AC = CA ⇒ det(M) = det(AD − CB)
2. CD = DC ⇒ det(M) = det(AD − BC)
3. BD = DB ⇒ det(M) = det(DA − BC)
4. AB = BA ⇒ det(M) = det(DA − CB)

Definition 2.7.2. [13] Let P and Q be matrices. Then we define the tensor product of P and Q as the block matrix

P ⊗ Q = [ p_{11}Q  p_{12}Q  ···  p_{1n}Q ]
        [ p_{21}Q  p_{22}Q  ···  p_{2n}Q ]
        [    ⋮        ⋮      ⋱      ⋮    ]
        [ p_{m1}Q  p_{m2}Q  ···  p_{mn}Q ]


Theorem 2.7.3. [13] Let P be an n × n matrix and let Q be an m × m matrix. Then det(P ⊗ Q) = (det P)^m (det Q)^n.

Theorem 2.7.4. [6, p. 140, 141] Let A, B, C, D be complex valued matrices. Then the tensor product satisfies the following properties:

1. (A ⊗ B)(C ⊗ D) = AC ⊗ BD, if the matrix multiplications AC, BD are defined.

2. (A ⊗ B)^T = A^T ⊗ B^T.

3. (A ⊗ B)^{−1} = A^{−1} ⊗ B^{−1}, if A, B are non-singular.

4. If A has eigenvalues λ_1, ..., λ_n and B has eigenvalues ψ_1, ..., ψ_m, then A ⊗ B has eigenvalues λ_1ψ_1, ..., λ_1ψ_m, λ_2ψ_1, ..., λ_2ψ_m, ..., λ_nψ_m.
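Item 4 can likewise be checked on a small example; the sketch below (my own illustration) compares the spectrum of A ⊗ B with the pairwise products of the eigenvalues of A and B.

```python
import numpy as np

# Eigenvalues of a Kronecker product are the pairwise products
# of the factors' eigenvalues (theorem 2.7.4, item 4).
A = np.array([[0., 1.], [1., 0.]])      # eigenvalues -1, +1
B = np.diag([2., 3.])                   # eigenvalues 2, 3

ev_kron = np.sort(np.linalg.eigvals(np.kron(A, B)).real)
ev_pairs = np.sort([a * b for a in (-1., 1.) for b in (2., 3.)])
match = np.allclose(ev_kron, ev_pairs)
```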

Theorem 2.7.5. [11] Let S be an (nN) × (nN) complex matrix, partitioned into N² blocks of size n × n:

S = [ S_{11}  S_{12}  ···  S_{1N} ]
    [ S_{21}  S_{22}  ···  S_{2N} ]
    [   ⋮       ⋮      ⋱     ⋮    ]
    [ S_{N1}  S_{N2}  ···  S_{NN} ]

Then the determinant of S is

det(S) = Π_{k=1}^{N} det(α_{kk}^{(N−k)}),

where α^{(k)} is given by

α_{ij}^{(0)} = S_{ij},
α_{ij}^{(k)} = S_{ij} − σ_{i,N−k+1}^T S̃_k^{−1} s_{N−k+1,j},  k ≥ 1,

the vectors σ_{ij}^T and s_{ij} are

s_{ij} = [ S_{ij}  S_{i+1,j}  ···  S_{Nj} ]^T,   σ_{ij}^T = [ S_{ij}  S_{i,j+1}  ···  S_{iN} ],

and the matrix S̃_k is

S̃_k = [ S_{N−k+1,N−k+1}  ···  S_{N−k+1,N} ]
       [        ⋮          ⋱        ⋮      ]
       [   S_{N,N−k+1}    ···     S_{NN}   ]

Theorem 2.7.6. [10] Let M be the block-tridiagonal matrix

M = [ A_1  B_1   0    0   ···    0    ]
    [ C_1  A_2  B_2   0   ···    0    ]
    [  0   C_2  A_3  B_3  ···    0    ]
    [  ⋮    ⋱    ⋱    ⋱    ⋱     ⋮    ]
    [  0   ···   0  C_{n−2} A_{n−1} B_{n−1} ]
    [  0   ···   0    0   C_{n−1}  A_n ]

with blocks of size m × m. Then

det(M) = (−1)^{m·n} · det(T_{11}^{(0)}) · det(B_1 · B_2 · ... · B_{n−1}),

where T_{11}^{(0)} is the upper left m × m block of the matrix

T^{(0)} = [ −A_n  −C_{n−1} ] · [ −B_{n−1}^{−1}A_{n−1}  −B_{n−1}^{−1}C_{n−2} ] · ... · [ −B_1^{−1}A_1  −B_1^{−1} ]
          [  I_m     0     ]   [         I_m                    0            ]         [     I_m         0      ]

2.8 Matrix Polynomials

Definition 2.8.1. [3, p. 1] A matrix polynomial L(λ) is a matrix-valued function of a complex variable of the form

L(λ) = Σ_{i=0}^{l} A_i λ^i,

where A_0, A_1, ..., A_l are complex n × n matrices.

Definition 2.8.2. [3, p. 2, 24] A set of vectors x_0, x_1, ..., x_k, with x_0 ≠ 0, is called a Jordan chain of length k + 1 associated with eigenvalue λ_0 and eigenvector x_0 if it satisfies the relations

Σ_{p=0}^{j} (1/p!) L^{(p)}(λ_0) x_{j−p} = 0,   j = 0, 1, ..., k,

where L^{(p)}(λ) is the pth derivative of L(λ) with respect to λ. The eigenvalues with associated eigenvectors and Jordan chains of a matrix polynomial are referred to as its spectral data, and the set of all eigenvalues is called the spectrum of L(λ).

Definition 2.8.3. [3, p. 181] A matrix polynomial L(λ) is called regular if its determinant is not identically zero.

Definition 2.8.4. [3, p. 186] Let L(λ) be a regular n × n matrix polynomial of degree l. An nl × nl matrix polynomial S_0 + S_1λ is called a linearization of L(λ) if

[ L(λ)      0       ]  =  E(λ)(S_0 + S_1λ)F(λ),
[  0    I_{n(l−1)}  ]

for nl × nl matrix polynomials E(λ) and F(λ) with constant nonzero determinants.


Definition 2.8.5. [3, p. 186] Let L(λ) = Σ_{i=0}^{l} A_i λ^i be a matrix polynomial. Then its companion polynomial C_L(λ) is defined as

C_L(λ) = [ I  0  ···  0   0  ]       [  0   −I    0   ···    0     ]
         [ 0  I  ···  0   0  ]       [  0    0   −I   ···    0     ]
         [ ⋮   ⋱   ⋱   ⋱  ⋮  ] λ  +  [  ⋮    ⋮    ⋱    ⋱    ⋮     ]
         [ 0  ···  0  I   0  ]       [  0    0   ···   0    −I     ]
         [ 0  ···  0  0  A_l ]       [ A_0  A_1  ···  A_{l−2} A_{l−1} ]

Theorem 2.8.6. [3, p. 186] C_L(λ) is a linearization of L(λ).

2.9 Dyck's Paths

Definition 2.9.1. [2, p. 319] [8] A Dyck's Path is a finite sequence of natural numbers y_i, i = 1, ..., n, such that |y_{i+1} − y_i| = 1; n is called the length of the Dyck's Path. A step y_{i+1} = y_i + 1 is called an ascent and a step y_{i+1} = y_i − 1 is called a descent. y_1 is called the initial altitude and y_n is called the final altitude. The maximal value obtained by y_i for i = 1, ..., n is called the height of the path, and the minimal value is called the depth. A point y_i such that y_i > y_{i−1} = y_{i+1} is called a peak, and a point y_j such that y_j < y_{j−1} = y_{j+1} is called a valley. A Dyck's Path that also allows the case y_{i+1} = y_i is called a lattice path.

Dyck's paths are well-studied mathematical objects with many interesting combinatorial results, such as generating functions for Dyck's paths of fixed length and with restrictions on their peaks and valleys. Often, these generating functions are expressed with the help of so-called Chebyshev polynomials [8]. It turns out that Dyck's paths can be modeled with the help of some of the graphs we are examining in this thesis, and that placing modular restrictions on how many times a Dyck's path can visit each height has not previously been studied.


Figure 2.7 A Dyck’s Path from (0,0) to (0,0) of length 7, height 3, depth 0.

A Dyck’s Path can be described with a word on 2 letters, one for ascent and one for descent. With a meaning ascent and d meaning descent, the above figure is described by the word aadadd.

Theorem 2.9.2. Dyck's Paths of length k and of height at most n − 1 correspond bijectively to walks in P_n with k − 1 steps.

Proof. For every natural number 0 ≤ m ≤ n − 1, associate m to v_{m+1}; then each step in a Dyck's Path corresponds to a step to an adjacent vertex in P_n.

Corollary 2.9.3. For each fixed maximum height k, there exists a regular language describing every Dyck's Path of height at most k.
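The correspondence above gives a direct way to count bounded-height Dyck's paths: entry (1, 1) of the k-th power of the adjacency matrix of P_n counts the sequences of k steps of ±1 from altitude 0 back to altitude 0 that never exceed height n − 1. A sketch (my own illustration) for P_3:

```python
import numpy as np

# Adjacency matrix of P3 (altitudes 0, 1, 2).
n = 3
P = np.zeros((n, n), dtype=int)
for i in range(n - 1):
    P[i, i + 1] = P[i + 1, i] = 1

def bounded_dyck_count(steps):
    """Number of +-1 step sequences of length `steps` from altitude 0
    back to 0 that stay within the altitudes 0..n-1."""
    return int(np.linalg.matrix_power(P, steps)[0, 0])

# Closed paths must have an even number of steps.
counts = [bounded_dyck_count(2 * k) for k in range(1, 4)]
```

For 6 steps, the only unrestricted Dyck path excluded by the height bound 2 is the one that climbs straight to altitude 3.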

One could also consider a generalized Dyck's path in higher dimensions. If we were to consider a type of 3-dimensional Dyck's path y_i = (a_i, b_i), where each step is of the form y_{i+1} = y_i ± (1, 0) or y_{i+1} = y_i ± (0, 1), this could similarly be modeled by P_n × P_m.

If we were to instead consider lattice paths, these could be modeled by P_n × P_2: let an edge from (v_i, v_j) to (v_{i±1}, v_j) correspond to an ascent/descent, and let an edge in the P_2 direction correspond to a level step y_{i+1} = y_i.

Results

3.1 Generating Function of P_n

P_n is a bipartite graph with decomposition into odd and even vertices, so any closed walk in it will have even length.

Let P_n be the adjacency matrix of the graph P_n and define

D_{P_n} = det(P_n − λ·I) = det [ −λ   1   0  ···  0  ]
                              [  1  −λ   1  ···  0  ]
                              [  0   1  −λ   ⋱   ⋮  ]
                              [  ⋮    ⋱   ⋱   ⋱   1  ]
                              [  0  ···  0   1  −λ  ]

Then theorem 2.3.4 gives that the generating function for walks from v_1 to v_1 in P_n is D*_{P_{n−1}} / D*_{P_n}.

Rahm, 2017. 33


We also have the equality D_{P_n} = −λ·D_{P_{n−1}} − D_{P_{n−2}}: expanding the n × n determinant along its first row, the −λ entry contributes −λ·D_{P_{n−1}}, while the 1 entry contributes minus a determinant whose first column has a single 1, which in turn expands to D_{P_{n−2}}. We also have that D_{P_1} = −λ and D_{P_2} = λ^2 − 1, which is enough to recursively determine any D_{P_n} and therefore the generating function for any P_n.

If we define D_{P_0} = 1 and expand the recursive relation to express D_{P_n} in terms of D_{P_0} and D_{P_1}, we get:

D_{P_n} = Σ_{k=0}^{n/2} (−1)^{n/2 + k} · binom(n/2 + k, n/2 − k) · λ^{2k},   n even,

D_{P_n} = Σ_{k=0}^{(n−1)/2} (−1)^{(n−1)/2 + k + 1} · binom((n−1)/2 + k + 1, (n−1)/2 − k) · λ^{2k+1},   n odd,

where binom(a, b) denotes the binomial coefficient a over b. The binomial coefficients come from counting the number of ways of going from n to 0 or 1 in steps of size 1 or 2, where the number of steps of size 1 is determined by the exponent of λ. The formula gives that the absolute values of the coefficients of D_{P_n} can be read off the diagonals in Pascal's triangle, and the coefficients of the reciprocal polynomial can be read off by reading the diagonals in the other direction.
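The recursion and the closed forms can be cross-checked by computer. The sketch below (my own, using plain integer coefficient lists indexed by the power of λ) confirms that they produce the same polynomials for small n.

```python
from math import comb

def poly_mul_neg_lambda(p):      # multiply a polynomial by -lambda
    return [0] + [-c for c in p]

def poly_sub(p, q):
    m = max(len(p), len(q))
    p = p + [0] * (m - len(p)); q = q + [0] * (m - len(q))
    return [a - b for a, b in zip(p, q)]

def D(n):
    """D_{P_n} via the recursion D_{P_n} = -lambda*D_{P_{n-1}} - D_{P_{n-2}}."""
    a, b = [1], [0, -1]          # D_{P_0} = 1, D_{P_1} = -lambda
    for _ in range(n - 1):
        a, b = b, poly_sub(poly_mul_neg_lambda(b), a)
    return b

def D_closed(n):
    """The closed forms with binomial coefficients from Pascal's triangle."""
    c = [0] * (n + 1)
    if n % 2 == 0:
        h = n // 2
        for k in range(h + 1):
            c[2 * k] = (-1) ** (h + k) * comb(h + k, h - k)
    else:
        h = (n - 1) // 2
        for k in range(h + 1):
            c[2 * k + 1] = (-1) ** (h + k + 1) * comb(h + k + 1, h - k)
    return c

all_match = all(D(n) == D_closed(n) for n in range(1, 10))
```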

      1
     1 1
    1 2 1
   1 3 3 1
  1 4 6 4 1
 1 5 10 10 5 1
1 6 15 20 15 6 1

Figure 3.1 Pascal's triangle with the coefficients of D_{P_5} = −λ^5 + 4λ^3 − 3λ circled in blue and the coefficients of D_{P_6} = λ^6 − 5λ^4 + 6λ^2 − 1.


n    characteristic polynomial
1    −λ
2    (λ − 1)(λ + 1)
3    −λ(λ^2 − 2)
4    (λ^2 + λ − 1)(λ^2 − λ − 1)
5    −λ(λ − 1)(λ + 1)(λ^2 − 3)
6    (λ^3 − λ^2 − 2λ + 1)(λ^3 + λ^2 − 2λ − 1)
7    −λ(λ^2 − 2)(λ^4 − 4λ^2 + 2)
8    (λ − 1)(λ + 1)(λ^3 − 3λ + 1)(λ^3 − 3λ − 1)
9    −λ(λ^2 + λ − 1)(λ^2 − λ − 1)(λ^4 − 5λ^2 + 5)
10   (λ^5 + λ^4 − 4λ^3 − 3λ^2 + 3λ + 1)(λ^5 − λ^4 − 4λ^3 + 3λ^2 + 3λ − 1)
11   −λ(λ − 1)(λ + 1)(λ^2 − 2)(λ^2 − 3)(λ^4 − 4λ^2 + 1)
12   (λ^6 + λ^5 − 5λ^4 − 4λ^3 + 6λ^2 + 3λ − 1)(λ^6 − λ^5 − 5λ^4 + 4λ^3 + 6λ^2 − 3λ − 1)
13   −λ(λ^6 − 7λ^4 + 14λ^2 − 7)(λ^3 − λ^2 − 2λ + 1)(λ^3 + λ^2 − 2λ − 1)
14   (λ − 1)(λ + 1)(λ^2 + λ − 1)(λ^2 − λ − 1)(λ^4 − λ^3 − 4λ^2 + 4λ + 1)(λ^4 + λ^3 − 4λ^2 − 4λ + 1)
15   −λ(λ^2 − 2)(λ^4 − 4λ^2 + 2)(λ^8 − 8λ^6 + 20λ^4 − 16λ^2 + 2)

Figure 3.2 Characteristic polynomial of P_n for n = 1, ..., 15.

3.2 Spectrum of P_n

n    spectral radius
1    0
2    1
3    1.414
4    1.618
5    1.732
6    1.802
7    1.845
8    1.880
9    1.902
10   1.919
11   1.932
12   1.942
13   1.950
14   1.956
15   1.962
16   1.966
17   1.970
18   1.973
19   1.977
20   1.981

Figure 3.3 Approximations of the spectral radius of P_n for n = 1, ..., 20.

P_3 has eigenvalues {±√2, 0}, P_7 has eigenvalues {±√(2 ± √2), ±√2, 0}, P_15 has eigenvalues {±√(2 ± √(2 ± √2)), ±√(2 ± √2), ±√2, 0}, and P_31 has eigenvalues {±√(2 ± √(2 ± √(2 ± √2))), ±√(2 ± √(2 ± √2)), ±√(2 ± √2), ±√2, 0}. If we denote the spectral radius of P_n by S(P_n), this seems to indicate the relation

S(P_{2^n − 1}) = √(2 + S(P_{2^{n−1} − 1})),

and that every eigenvalue of P_{2^{n−1} − 1} is also an eigenvalue of P_{2^n − 1}. If it turns out that this equation is true, then that would give an estimate of how quickly the spectral radius of P_n approaches 2 when n grows.

Partial Result 3.2.1. Every eigenvalue of P_n is also an eigenvalue of P_{2n+1}.

Proof. Suppose λ is an eigenvalue of P_n with eigenvector a, and let ā denote a with its entries in reverse order; since reversal is an automorphism of P_n, ā is also an eigenvector of P_n for λ. Construct the vector x = [a 0 −ā]^T. The first n rows of (P_{2n+1} − λI_{2n+1})x reduce to (P_n − λI_n)a = 0, since the middle entry of x is 0, and similarly the last n rows reduce to −(P_n − λI_n)ā = 0. The middle row gives

1·a_n − λ·0 − 1·ā_1 = a_n − a_n = 0.

So λ is also an eigenvalue of P_{2n+1}.
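A quick numerical illustration (not from the thesis) of the partial result, comparing the spectra of P_3 and P_7:

```python
import numpy as np

def path_adjacency(n):
    """Adjacency matrix of the path graph P_n."""
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = 1.0
    return A

small = np.sort(np.linalg.eigvalsh(path_adjacency(3)))
big = np.sort(np.linalg.eigvalsh(path_adjacency(7)))

# each eigenvalue of P3 should be close to some eigenvalue of P7
contained = all(abs(big - x).min() < 1e-9 for x in small)
```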

3.3 Generating Function of C_n

C_n is bipartite for n even and aperiodic for n odd, so every closed walk in C_{2n} will have even length.


To find the generating function for walks from v_1 to v_1 in C_n, we look at similar matrix calculations as for P_n. Define D_{C_n} similarly to D_{P_n}; then theorem 2.3.4 gives that the generating function for C_n is

D*_{P_{n−1}} / D*_{C_n} = D*_{P_{n−1}} / (−λ·D_{P_{n−1}} − 2·D_{P_{n−2}} + 2·(−1)^{n−1})*.

The identity D_{C_n} = −λ·D_{P_{n−1}} − 2·D_{P_{n−2}} + 2·(−1)^{n−1} follows by cofactor expansion of det(C_n − λI) along the first row: the −λ entry contributes −λ·D_{P_{n−1}}, while the 1-entries adjacent to the corners, expanded further, contribute −2·D_{P_{n−2}} and 2·(−1)^{n−1}.


n    characteristic polynomial
1    −λ
2    (λ − 1)(λ + 1)
3    −(λ − 2)(λ + 1)^2
4    λ^2(λ − 2)(λ + 2)
5    −(λ − 2)(λ^2 + λ − 1)^2
6    (λ − 2)(λ + 2)(λ − 1)^2(λ + 1)^2
7    −(λ − 2)(λ^3 + λ^2 − 2λ − 1)^2
8    λ^2(λ − 2)(λ + 2)(λ^2 − 2)^2
9    −(λ − 2)(λ + 1)^2(λ^3 − 3λ + 1)^2
10   (λ − 2)(λ + 2)(λ^2 + λ − 1)^2(λ^2 − λ − 1)^2
11   −(λ − 2)(λ^5 + λ^4 − 4λ^3 − 3λ^2 + 3λ + 1)^2
12   λ^2(λ − 2)(λ + 2)(λ − 1)^2(λ + 1)^2(λ^2 − 3)^2
13   −(λ − 2)(λ^6 + λ^5 − 5λ^4 − 4λ^3 + 6λ^2 + 3λ − 1)^2
14   (λ − 2)(λ + 2)(λ^3 − λ^2 − 2λ + 1)^2(λ^3 + λ^2 − 2λ − 1)^2
15   −(λ − 2)(λ + 1)^2(λ^2 + λ − 1)^2(λ^4 − λ^3 − 4λ^2 + 4λ + 1)^2

Figure 3.4 The characteristic polynomial of C_n for n = 1, ..., 15.

3.4 Spectrum of C_n

Some spectral properties of C_n are examined.

n    spectral radius
1    0
2    1
3    2
4    2
5    2
6    2
7    2
8    2
9    2
10   2


The table hints at that the spectral radius of C_n, n ≥ 3, is 2. It is indeed true that 2 is an eigenvalue of C_n, n ≥ 3:

(C_n − 2·I)·(1, 1, ..., 1)^T = 0,

since the sum of the elements in every row of C_n − 2·I is 0. So the spectral radius is at least 2 for every C_n, n ≥ 3. Additionally, no C_n can have an eigenvalue larger than 2. Assume such an eigenvalue λ > 2 exists; then there also exists an eigenvector (a_1, a_2, ..., a_n)^T in the null space of C_n − λ·I, which implies that

a_2 + a_n = λa_1 ⇒ |a_2| + |a_n| ≥ |a_2 + a_n| = |λa_1|
a_1 + a_3 = λa_2 ⇒ |a_1| + |a_3| ≥ |a_1 + a_3| = |λa_2|
a_2 + a_4 = λa_3 ⇒ |a_2| + |a_4| ≥ |a_2 + a_4| = |λa_3|
⋮
a_{n−3} + a_{n−1} = λa_{n−2} ⇒ |a_{n−3}| + |a_{n−1}| ≥ |a_{n−3} + a_{n−1}| = |λa_{n−2}|
a_{n−2} + a_n = λa_{n−1} ⇒ |a_{n−2}| + |a_n| ≥ |a_{n−2} + a_n| = |λa_{n−1}|
a_1 + a_{n−1} = λa_n ⇒ |a_1| + |a_{n−1}| ≥ |a_1 + a_{n−1}| = |λa_n|

If we add the inequalities, we get

2(|a_1| + |a_2| + ... + |a_{n−1}| + |a_n|) ≥ λ(|a_1| + |a_2| + ... + |a_{n−1}| + |a_n|).

This contradicts that λ > 2, and thus we have that the spectral radius of C_n, n ≥ 3, is 2. And because every walk in P_n is also a walk in C_n, this also gives that no P_n has a spectral radius greater than 2.


These arguments can be generalized to graphs with similar properties. If G is a graph whose adjacency matrix has exactly k ones in every row, meaning that every vertex in G has exactly k outgoing edges, then k is an eigenvalue with the all-ones eigenvector. If G is a graph whose adjacency matrix has at most j ones in every column, meaning that every vertex in G has at most j ingoing edges, then G cannot have an eigenvalue greater than j, which confirms that P_n cannot have an eigenvalue greater than 2. In particular, an undirected graph G in which every vertex is adjacent to exactly i edges will have i as its spectral radius.

3.5 Spectrum of P_n × P_m

P_n × P_m is bipartite, so every closed walk in it will have even length.

For the graph P_n × P_2, it is easy to see that the adjacency matrix is the block matrix

P_{n×2} = [ P_n  I_n ]
          [ I_n  P_n ]

where I_n is the identity matrix in n dimensions, and where the P_n blocks connect the vertices in the P_n paths and the I_n blocks connect the vertices in the P_2 paths. Since all the blocks in this 2 × 2 block matrix commute, we can apply theorem 2.7.1 to get the determinant

det(P_{n×2} − λ·I_{2n}) = det((P_n − λ·I_n)^2 − I_n^2) = det(P_n − λ·I_n + I_n) · det(P_n − λ·I_n − I_n) =

= D_{P_n}(λ + 1) · D_{P_n}(λ − 1)

Thus we have an easy way to calculate the denominator of its generating function. For the general graph P_n × P_m, one can similarly see that the adjacency matrix is the block matrix

P_{n×m} = [ P_n  I_n  0_n  ···  0_n ]
          [ I_n  P_n  I_n  ···  0_n ]
          [ 0_n  I_n  P_n   ⋱    ⋮  ]
          [  ⋮    ⋱    ⋱    ⋱   I_n ]
          [ 0_n  ···  0_n  I_n  P_n ]

where 0_n is the n × n matrix with only zero entries.

Furthermore, if we let G be an arbitrary graph, then similar identities hold for G × P_n: the adjacency matrix for G × P_n has the same block-tridiagonal form with the adjacency matrix of G in place of P_n, and the characteristic polynomial for the adjacency matrix of G × P_2 is D_G(λ + 1) · D_G(λ − 1).

Proposition 3.5.1. Let G = (V, E), G_1 = (V_1, E_1), G_2 = (V_2, E_2) be graphs and let G = G_1 × G_2. Then the adjacency matrix of G is obtained from the adjacency matrix of G_2 by putting the adjacency matrix of G_1 as blocks on the diagonal, replacing every 1 in G_2's adjacency matrix by the identity block and every 0 by the zero block, every block having the same dimension as the adjacency matrix of G_1. This can also be expressed as

G = I_{|V_2|} ⊗ G_1 + G_2 ⊗ I_{|V_1|}.

Proof. Assume G_1 has n vertices and G_2 has m vertices; then G has mn vertices and its adjacency matrix has dimension mn × mn. Divide it into an m × m block matrix with blocks of size n × n. Number the vertices such that v_1 = (h_1, w_1), v_2 = (h_2, w_1), ..., v_n = (h_n, w_1), v_{n+1} = (h_1, w_2), ..., v_{mn} = (h_n, w_m), where v_i is in G, h_j is in G_1 and w_k is in G_2. Call the adjacency matrix of G G, the adjacency matrix of G_1 G_1, and the adjacency matrix of G_2 G_2.

Assume (v_a, v_b) = ((h, w), (h', w')) ∈ E; then either w = w' and (h, h') ∈ E_1, or h = h' and (w, w') ∈ E_2. If w = w', then the matrix entry (a, b) is in the diagonal block of G that corresponds to the numbering of w in G_2, and since (h, h') ∈ E_1, the entry (a, b) will correspond to a 1 in the adjacency matrix of G_1. If h = h', then |a − b| = r·n for some integer 1 ≤ r ≤ m − 1, which means that the matrix entry (a, b) is on the diagonal of its block.


Conversely, assume that the entry (a, b) is either on the diagonal of a block that corresponds to a 1 in G_2, or a position in a diagonal block that corresponds to a 1 in G_1. In the former case, h = h' and there is an edge between w and w' in G_2, by construction. Similarly, in the latter case, w = w' and there is an edge between h and h' in G_1. Thus, every 1 in the adjacency matrix of G is in such a position and every such position has a 1.

P_2 × P_m has its spectral radius bounded by 3, as that is the maximum number of ingoing edges for any vertex. P_n × P_m, n, m ≥ 3, will have its spectral radius bounded by 4 for the same reason. The spectral radii of P_2 × P_m and P_n × P_m will approach 3 and 4, respectively, as m increases, as the ratio of nodes with 3 respectively 4 ingoing edges approaches 1.

graph      generating function                                                      spectral radius
P_2 × P_2  (−2λ^2 + 1) / (−(2λ − 1)(2λ + 1))                                        2
P_2 × P_3  (2λ^4 − 5λ^2 + 1) / (−(λ − 1)(λ + 1)(λ^2 + 2λ − 1)(λ^2 − 2λ − 1))       2.414
P_2 × P_4  (−3λ^6 + 12λ^4 − 8λ^2 + 1) / ((λ^2 + 3λ + 1)(λ^2 − 3λ + 1)(λ^2 + λ − 1)(λ^2 − λ − 1))   2.618
P_2 × P_5  (λ^2 + 3λ + 1)(λ^2 − 3λ + 1)(λ^2 + λ − 1)(λ^2 − λ − 1) / ((λ − 1)(2λ + 1)(2λ − 1)(λ + 1)(2λ^2 + 2λ − 1)(2λ^2 − 2λ − 1))   2.732
P_3 × P_3  (−12λ^4 + 16λ^2 − 2) / (−2(8λ^2 − 1)(2λ^2 − 1))                          2.828
P_3 × P_4  (−3λ^10 + 44λ^8 − 102λ^6 + 66λ^4 − 15λ^2 + 1) / ((λ^2 + λ − 1)(λ^2 − λ − 1)(λ^4 − 6λ^3 + 5λ^2 + 2λ − 1)(λ^4 + 6λ^3 + 5λ^2 − 2λ − 1))   3.032
P_3 × P_5  (−λ^14 + 27λ^12 − 204λ^10 + 461λ^8 − 383λ^6 + 135λ^4 − 20λ^2 + 1) / (−(λ − 1)(λ + 1)(3λ^2 − 1)(2λ^2 − 1)(λ^4 − 10λ^2 + 1)(λ^2 + 2λ − 1)(λ^2 − 2λ − 1))   3.146
P_4 × P_4  (120λ^8 − 400λ^6 + 335λ^4 − 80λ^2 + 5) / (400λ^8 − 780λ^6 + 465λ^4 − 90λ^2 + 5)   3.236

Figure 3.6 The generating function and approximations of the spectral radius of some P_n × P_m.


Now consider the graph G × P_3, for an arbitrary graph G with adjacency matrix G. Applying theorem 2.7.5,

det [ G − λI     I        0    ]
    [    I     G − λI     I    ]  =  det(α_{11}^{(2)} · α_{22}^{(1)} · α_{33}^{(0)}) = det((G − λI)^3 − 2(G − λI)) =
    [    0        I     G − λI ]

= det((G − λI) · ((G − λI) − √2·I) · ((G − λI) + √2·I)) = D_G(λ) · D_G(λ + √2) · D_G(λ − √2)

3.6 Generating Function of P_n × P_m

Applying the transfer matrix method to P_n × P_m, we get the system

G_{(v_1,v_1),(v_1,v_1)}(x) = 1 + x·G_{(v_1,v_1),(v_2,v_1)}(x) + x·G_{(v_1,v_1),(v_1,v_2)}(x)
G_{(v_1,v_1),(v_2,v_1)}(x) = x·G_{(v_1,v_1),(v_1,v_1)}(x) + x·G_{(v_1,v_1),(v_3,v_1)}(x) + x·G_{(v_1,v_1),(v_2,v_2)}(x)
⋮
G_{(v_1,v_1),(v_n,v_1)}(x) = x·G_{(v_1,v_1),(v_{n−1},v_1)}(x) + x·G_{(v_1,v_1),(v_n,v_2)}(x)
G_{(v_1,v_1),(v_1,v_2)}(x) = x·G_{(v_1,v_1),(v_2,v_2)}(x) + x·G_{(v_1,v_1),(v_1,v_1)}(x) + x·G_{(v_1,v_1),(v_1,v_3)}(x)
⋮
G_{(v_1,v_1),(v_n,v_m)}(x) = x·G_{(v_1,v_1),(v_{n−1},v_m)}(x) + x·G_{(v_1,v_1),(v_n,v_{m−1})}(x)

Using the notation

G_{v_k}(x) = (G_{(v_1,v_1),(v_1,v_k)}(x), G_{(v_1,v_1),(v_2,v_k)}(x), ..., G_{(v_1,v_1),(v_n,v_k)}(x))^T,

the system simplifies to

(I_{nm} − x·(I_m ⊗ P_n + P_m ⊗ I_n)^T) · (G_{v_1}(x), ..., G_{v_m}(x))^T = (1, 0, ..., 0)^T.

Looking at one block-row at a time, starting from the bottom, and applying substitution, we get

−x·G_{v_{m−1}}(x) + (I − xP_n)·G_{v_m}(x) = 0 ⇔ G_{v_m}(x) = (I − xP_n)^{−1}·x·G_{v_{m−1}}(x),

−x·G_{v_{m−2}}(x) + (I − xP_n)·G_{v_{m−1}}(x) − x·G_{v_m}(x) = 0 ⇔
⇔ −x·G_{v_{m−2}}(x) + (I − xP_n − x(I − xP_n)^{−1}x)·G_{v_{m−1}}(x) = 0 ⇔
⇔ G_{v_{m−1}}(x) = (I − xP_n − x(I − xP_n)^{−1}x)^{−1}·x·G_{v_{m−2}}(x),

⋮

G_{v_2}(x) = (I − xP_n − x(I − xP_n − x(...(I − xP_n)^{−1}x...)^{−1}x)^{−1}x)^{−1}·x·G_{v_1}(x), with m − 1 inverses,

(I − xP_n)·G_{v_1}(x) − x·G_{v_2}(x) = (1, 0, ..., 0)^T ⇔

⇔ G_{v_1}(x) = (I − xP_n − x(I − xP_n − x(...(I − xP_n)^{−1}x...)^{−1}x)^{−1}x)^{−1}·(1, 0, ..., 0)^T, with m inverses.

Meaning that the generating function for walks from (v_1, v_1) to (v_k, v_1) in P_n × P_m is the (k, 1)-entry of the matrix

(I − xP_n − x(I − xP_n − x(...(I − xP_n)^{−1}x...)^{−1}x)^{−1}x)^{−1}, with m inverses.

As no properties of P_n were used in these calculations, it becomes obvious that, if we let G be an arbitrary graph with adjacency matrix A, the generating function for walks from (v_1, v_1) to (v_k, v_1) in G × P_m is the (k, 1)-entry of the matrix

(I − xA^T − x(I − xA^T − x(...(I − xA^T)^{−1}x...)^{−1}x)^{−1}x)^{−1}, with m inverses.
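Since the nested-inverse expression and the full resolvent (I − xA)^{−1} describe the same generating functions, their numerical values must agree for small x. The sketch below (my own illustration, with P_3 × P_2 as the example) checks this.

```python
import numpy as np

x = 0.1
n, m = 3, 2
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = P[i + 1, i] = 1.0

# Nested inverse with m levels:
# M_1 = (I - xP)^{-1},  M_{j+1} = (I - xP - x*M_j*x)^{-1}.
M = np.linalg.inv(np.eye(n) - x * P)
for _ in range(m - 1):
    M = np.linalg.inv(np.eye(n) - x * P - x * M * x)

# Full resolvent of the product graph P3 x P2.
Pm = np.zeros((m, m))
for i in range(m - 1):
    Pm[i, i + 1] = Pm[i + 1, i] = 1.0
A = np.kron(np.eye(m), P) + np.kron(Pm, np.eye(n))
R = np.linalg.inv(np.eye(n * m) - x * A)

# Column 1 of the nested inverse should list the generating functions
# from (v1, v1) to (vk, v1), i.e. the first n entries of column 1 of R.
agrees = np.allclose(M[:, 0], R[:n, 0])
```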


graph      generating function
C_3 × P_2  (2x^3 − 2x^2 − 2x − 1) / (6x^3 − 5x^2 − 2x + 1)
C_3 × P_3  (−x^5 − 3x^4 + 12x^3 − 4x^2 − 3x + 1) / (4x^6 − 14x^5 + 4x^4 + 19x^3 − 7x^2 − 3x + 1)
C_3 × P_4  (2x^7 − 16x^6 + 26x^5 + 9x^4 − 28x^3 + 5x^2 + 4x − 1) / (5x^8 − 10x^7 − 34x^6 + 74x^5 − 3x^4 − 38x^3 + 8x^2 + 4x − 1)
C_4 × P_2  (−7x^2 + 1) / (9x^4 − 10x^2 + 1)
C_4 × P_3  (8x^8 − 48x^6 + 52x^4 − 15x^2 + 1) / ((2x^2 − 1)(16x^6 − 52x^4 + 16x^2 − 1))
C_4 × P_4  (−40x^10 + 279x^8 − 344x^6 + 145x^4 − 22x^2 + 1) / ((1 + x^4 − 3x^2)(25x^8 − 190x^6 + 131x^4 − 22x^2 + 1))

Figure 3.7 The generating function for walks from (v_1, w_1) to (v_1, w_1) in some C_n × P_m.

3.7 Spectrum of P_n × C_m

P_n × C_m is bipartite for m even and aperiodic for m odd, so every closed walk in P_n × C_{2m} will have even length.

Since P_2 and C_2 are the same graph, the result for G × P_2 also applies to G × C_2. However, in the case of G × C_4, we can say some more. We have the adjacency matrix

[  G   I_n  0_n  I_n ]
[ I_n   G   I_n  0_n ]
[ 0_n  I_n   G   I_n ]
[ I_n  0_n  I_n   G  ]

which we can see as a 2 × 2 block matrix with the blocks

[  G   I_n ]        and        [ 0_n  I_n ]
[ I_n   G  ]                   [ I_n  0_n ]

Since these two blocks commute, we can again apply Silvester's result and get

det [ G − λI_n    I_n      0_n      I_n    ]
    [   I_n     G − λI_n   I_n      0_n    ]   =  det( [ G − λI_n, I_n ; I_n, G − λI_n ]^2 − I_{2n} ) =
    [   0_n       I_n    G − λI_n   I_n    ]
    [   I_n       0_n      I_n    G − λI_n ]

= det( [ G − λI_n, I_n ; I_n, G − λI_n ] − I_{2n} ) · det( [ G − λI_n, I_n ; I_n, G − λI_n ] + I_{2n} ) =

= D_{G×P_2}(λ + 1) · D_{G×P_2}(λ − 1) = D_G(λ + 2) · D_G(λ)^2 · D_G(λ − 2).

For the case of G × C_3, we can apply theorem 2.7.5 and get

det [ G − λI     I        I    ]
    [    I     G − λI     I    ]  =  det(α_{11}^{(2)} · α_{22}^{(1)} · α_{33}^{(0)}) = det((G − λI)^3 − 3(G − λI) + 2I) =
    [    I        I     G − λI ]

= det(((G − λI) + 2I) · ((G − λI) − I) · ((G − λI) − I)) = D_G(λ − 2) · D_G(λ + 1)^2.
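Because the pairing of the shifts is easy to get wrong, the factorization for G × C_3 is worth a numerical check; the sketch below (my own illustration, with G = P_2 as the example) confirms det(A − λI) = D_G(λ − 2) · D_G(λ + 1)^2, where D_G(μ) = det(G − μI).

```python
import numpy as np

G = np.array([[0., 1.], [1., 0.]])                       # adjacency matrix of P2
C3 = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
A = np.kron(np.eye(3), G) + np.kron(C3, np.eye(2))       # adjacency of G x C3

def D(M, mu):
    """Evaluate det(M - mu*I)."""
    return np.linalg.det(M - mu * np.eye(len(M)))

factorization_ok = all(
    abs(D(A, lam) - D(G, lam - 2) * D(G, lam + 1) ** 2) < 1e-8
    for lam in (0.4, -1.3, 2.2)
)
```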

Proposition 3.7.1. Let G, G_1, G_2 be graphs such that G = G_1 × G_2, let G_1 have eigenvalues λ_1, λ_2, ..., λ_n and let G_2 have eigenvalues ψ_1, ψ_2, ..., ψ_m. Then the eigenvalues of G are λ_1 + ψ_1, λ_1 + ψ_2, ..., λ_1 + ψ_m, λ_2 + ψ_1, ..., λ_n + ψ_m.

Proof. Let G, G_1, G_2 denote the respective adjacency matrices of G, G_1, G_2, let λ be an arbitrary eigenvalue of G_1 with eigenvector a = (a_1, a_2, ..., a_n)^T and let ψ be an arbitrary eigenvalue of G_2 with eigenvector b = (b_1, b_2, ..., b_m)^T. For the candidate eigenvalue φ = λ + ψ, construct the eigenvector c = b ⊗ a = (b_1a, b_2a, ..., b_ma)^T. Then we have

(G − φ·I_{mn})·c = (I_m ⊗ G_1 + G_2 ⊗ I_n − λ·I_{mn} − ψ·I_{mn})·c =

= (I_m ⊗ G_1 − λ·I_{mn})·c + (G_2 ⊗ I_n − ψ·I_{mn})·c =

= b ⊗ ((G_1 − λ·I_n)·a) + ((G_2 − ψ·I_m)·b) ⊗ a = 0.

So every eigenvalue of the form λ + ψ is an eigenvalue of G, and since there are mn such eigenvalues, counted with multiplicity, it follows that every eigenvalue of G is of this form.
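Proposition 3.7.1 can be illustrated numerically; the sketch below (my own, not thesis code) compares the spectrum of P_3 × P_2 with the pairwise sums of the eigenvalues of P_3 and P_2.

```python
import numpy as np

G1 = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])   # P3
G2 = np.array([[0., 1.], [1., 0.]])                         # P2
A = np.kron(np.eye(2), G1) + np.kron(G2, np.eye(3))         # P3 x P2

sums = np.sort([a + b for a in np.linalg.eigvalsh(G1)
                      for b in np.linalg.eigvalsh(G2)])
spectrum = np.sort(np.linalg.eigvalsh(A))
sums_match = np.allclose(spectrum, sums)
```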

3.8 Periodical Properties of P_n mod 2

By theorem 2.6.13, we know that (P_n)_{mod 2} has exactly 2 autoconnected components. These autoconnected components are characterized by the sum of the elements in the degree sequence. If you start in v̂_1 = (v_1, (0, 0, ..., 0)), then the degree sequence is going to have an even sum in odd vertices and an odd sum in even vertices, since the degree sequence changes in one position per step in a walk and every edge in P_n connects two vertices of opposite parity. The two autoconnected components of (P_n)_{mod 2} are certainly isomorphic, so we can limit ourselves to look only at the component that contains (v_1, s_0) without loss of generality.

[Figure: the autoconnected component of (P_3)_{mod 2} containing (v_1, (0, 0, 0)), with vertices (v_1,(0,0,0)), (v_2,(0,1,0)), (v_3,(0,0,0)), (v_3,(1,1,0)), (v_1,(1,1,0)), (v_1,(0,1,1)), (v_3,(0,1,1)), (v_2,(1,0,0)), (v_2,(0,0,1)), (v_1,(1,0,1)), (v_3,(1,0,1)), (v_2,(1,1,1))]

Now consider the graph P_n*, where instead of keeping track of how many times each vertex has been visited mod 2, we keep track of how many steps we have taken mod 4. Similarly to (P_n)_{mod 2}, P_n* will have two autoconnected components, determined by whether a walk starts in an odd vertex or an even vertex. We will here only consider one of the components.

[Figure: vertices (v_i, j) for i = 1, 2, 3 and j = 0, 1, 2, 3]

Figure 3.9 The graph P_3*.

I claim that (P_n)_{mod 2} is a covering of P_n*. Define the function ξ : V → Z_2 on (P_n)_{mod 2} such that

ξ(v_m, (s_1, s_2, ..., s_n)) = Σ_{k=1}^{⌊n/2⌋} s_{2k}, for m even,

References
