
DEGREE PROJECT IN TECHNOLOGY, FIRST CYCLE, 15 CREDITS, STOCKHOLM, SWEDEN 2018

Emergence of Structure in a Recurrent Network with Anisotropic Spatial Connectivity

CARL NORDLING

MALCOLM TIVELIUS

KTH


Emergence of Structure in a Recurrent Network with Anisotropic Spatial Connectivity

CARL NORDLING AND MALCOLM TIVELIUS

Bachelor in Computer Science
Date: June 6, 2018

Supervisor: Arvind Kumar
Examiner: Pavel Herman

Swedish title: Uppkomst av strukturer i ett recurrent network med anisotropiska kopplingsegenskaper


Abstract

Today, homogeneous Locally Connected Random Networks (LCRNs) are often used to simulate activity in the brain. The two possible activation patterns in this case are either a stationary activation or a travelling wave. The activation pattern in the brain could, however, be seen as something else. By introducing inhomogeneity and anisotropy in the connectivity of neurons, one can create small feed forward networks inside an otherwise random network. Using Perlin noise, one can create connectivity rules where connections have a preferred direction and neighbouring neurons have similar rules. In this study it was investigated whether these modified LCRNs have the structural properties of communities and feed forward chains.

Three types of networks, distinguished by different connectivity rules, were analysed: networks with isotropic connections, networks whose connections have a preferred direction chosen independently for each node, and networks whose connections have a preferred direction chosen such that neighbouring nodes have similar directions.


Sammanfattning

Today, homogeneous Locally Connected Random Networks are used when simulating brain activity. The two activation patterns that can arise when using these are a stationary activation or a travelling wave. The actual activation pattern can, however, be seen as something else. By introducing inhomogeneity and anisotropy in the connection rules for neurons, one can create short flows in a network that is otherwise completely random. By using Perlin noise, one can create connection rules where a neuron's neighbours have similar rules. In this study it is investigated whether these modified LCRNs have structural properties such as communities and flow. Three types of networks with different connection rules were analysed: networks with isotropic connections, networks with connections with a preferred direction where each direction is chosen individually, and networks with connections with a preferred direction where neighbouring nodes have similar directions.


Contents

1 Introduction
  1.1 Aim and research question
  1.2 Scope and constraints
  1.3 Approach
  1.4 Terminology

2 Background
  2.1 Graph definition
  2.2 Graph properties
    2.2.1 Communities
    2.2.2 Feed forward chains
  2.3 Locally Connected Random Networks (LCRN)
  2.4 Graphs with isotropic connections
  2.5 Graphs with connections based on offset
    2.5.1 Independent
    2.5.2 Identical
    2.5.3 Neighbouring-rule
  2.6 Perlin noise
  2.7 Algorithms for finding feed forward chains
  2.8 Algorithms for finding communities
    2.8.1 Eigenvalue-spectra of network
    2.8.2 Definition of spectral band and outlying eigenvalues
  2.9 Network activity dynamics

3 Method
  3.1 Creating the data sets
    3.1.1 Toy Data
    3.1.2 Spatial Connectivity Data
  3.2 Calculating Communities
  3.3 Nordling-Tivelius algorithm
  3.4 Calculating Feed forward chains using the Nordling-Tivelius algorithm
  3.5 Calculating correlation between Communities and Feed forward chains

4 Results
  4.1 Toy data
    4.1.1 Feed Forward 0% network
    4.1.2 Feed forward 25% network
  4.2 Spatial connectivity data
    4.2.1 Eigenvalues
    4.2.2 Eigenvalue spectra
    4.2.3 Flow and community
    4.2.4 Correlation

5 Discussion
  5.1 Discussion of the results
  5.2 Discussion of the method
    5.2.1 Calculating feed forward chains
    5.2.2 Calculating the number of communities
    5.2.3 Calculating the correlation
  5.3 Future work

6 Conclusion

Bibliography


Chapter 1

Introduction

The brain is a network consisting of different types of neurons. Described at a basic level, there are mainly two categories of neurons: excitatory and inhibitory. In the neocortex, the part of the brain involved in high-order brain functions such as cognition and the generation of motor commands [8], about 80% of the neurons are excitatory and 20% are inhibitory. Experimental data suggests that a neuron receives up to 10 000 inputs from other neurons within close range. For each neuron the connection probability is about 10%, meaning that the network is sparse [2].

The connection capability of neurons in the brain is controlled by a physical constraint, meaning that a neuron can only connect to other neurons within a certain distance. The probability of a connection occurring generally decreases with the distance between neurons. This is mainly due to the structure of the axon and dendrites of the neuron [5].

Today theoreticians use the locally connected random network (LCRN) model to study the dynamics of such neural activity. In LCRNs, neurons are connected according to a spatial connectivity rule that models the connection probability as a function of the distance between neurons, for example a Gaussian distribution function [9][12][4][7] or a Gamma function [13][4].

However, heterogeneous cortical network models have largely been avoided, for example by giving all neurons identical properties.


These models are often preferred because they simplify mathematical analysis and numerical simulations. This, however, leads to a problem. When activating an area of neurons in a homogeneous LCRN, two activation patterns are possible. (1) The activation does not move; it simply stays put in the area [7]. (2) The activation moves, creating a travelling wave (TW) that affects the whole network [3][12][4].

This is a problem because one wants to stimulate activity that spreads through a network in a controlled manner, which the homogeneous, isotropic LCRN does not offer. To bypass this problem, Spreizer et al. (2018), in the lab of Prof. Arvind Kumar, came up with a way to create short feed forward networks in an otherwise random network by introducing both inhomogeneity and anisotropy. In a modified LCRN connectivity rule they let neurons connect more in a certain direction, and let neighbouring neurons have a similar preferred direction whilst faraway neurons may have any direction. This was done using Perlin noise.

It was shown that neurons generate spikes in a temporal order in these networks; however, the sequences were not travelling waves. These temporal sequences are instead different in each region of the network. If one were to stimulate a specific region in these networks, the result would be activity along a spatial path and temporal sequence. This solves the problem with homogeneous LCRNs, but it is not clear why such sequences arise from the spatial asymmetry in neural connectivity.

1.1 Aim and research question

This study aims at investigating the emergence of structure in a recurrent network with anisotropic spatial connectivity. With isotropic spatial connectivity one would see a random network. Anisotropic connectivity, however, creates the possibility of flow and neuron assemblies in the network, which can have important functional consequences. Finding these kinds of structures could help general research regarding the structure of neural networks.


Can structural anisotropy in the spatial connectivity lead to the emergence of communities and feed forward chains in a recurrent network?

1.2 Scope and constraints

Due to constraints in computational power, the graphs computed in the study were of maximum size 80x80, i.e. contained 6400 nodes.

Due to constraints in computational power, at most 30 graphs were computed for each type of graph. All graphs used in this study are directed.

This study does not analyse different sizes of a node's connectivity area. All nodes in all networks have a fixed connectivity radius.

1.3 Approach


1.4 Terminology

Isotropic - Physics: having equal physical properties along all axes.

Anisotropic - Physics: having unequal physical properties along different axes.

Realisation - Refers to a realisation of a network: a network that has been created with given properties.

Eigenvalue - A scalar associated with a given linear transformation of a vector space; the factor by which a corresponding eigenvector is scaled when the transformation is applied to it [1].

Network activity dynamics - The dynamics of a network when activity is propagated through it.


Chapter 2

Background

2.1 Graph definition

A graph is an ordered pair G = (V, E), where:

• V is a set of vertices, also known as nodes.

• E is a set of edges. Each edge is a tuple consisting of two nodes. The edge represents a connection between the nodes.

In this study one comes across the term network. A network is simply a graph which represents the relations between discrete objects.
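As a concrete illustration (not code from the thesis), such a directed graph can be stored as a node set and a set of edge tuples; an adjacency view is then easy to derive:

```python
# Minimal illustration of a directed graph G = (V, E): a node set and a set of
# (source, target) edge tuples. Names and values here are purely illustrative.
V = {0, 1, 2, 3}
E = {(0, 1), (1, 2), (2, 3), (3, 1)}

# Adjacency view: for each node, the set of nodes it connects to.
adjacency = {v: {t for (s, t) in E if s == v} for v in V}
print(adjacency)  # {0: {1}, 1: {2}, 2: {3}, 3: {1}}
```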

2.2 Graph properties

Graph properties are properties that depend on the abstract structure of a graph. There are several common properties of a graph. In this study, two are analysed: communities and feed forward chains.

2.2.1 Communities

A community is a set of vertices with a high density of edges within the set and a lower density of edges between it and other sets [10].


2.2.2 Feed forward chains

A feed forward chain in this study is based on the definition of a path in a network. A path is defined as a sequence of edges that connect a sequence of vertices, where no vertices or edges are repeated. However, in complex networks with a large number of vertices and edges, such paths can be very difficult to locate since there is a high possibility of loops. Therefore, the definition of feed forward chains ignores some repeating structure and simply looks at a general direction of movement within the network. In this study, this general direction of movement is sometimes referred to as the flow in a network.

2.3 Locally Connected Random Networks (LCRN)

An LCRN is a random network where spatial properties are taken into consideration, meaning that each node in the network has a position in space [9]. This can be compared to a simple random network, where nodes do not have a spatial property. By giving each node a spatial property, one can give the node connectivity rules based on the distance between it and other nodes.

2.4 Graphs with isotropic connections

A graph with isotropic connections means that its connection distance, the distance within which each node can connect to other nodes, has the same property as the radius of a circle, i.e. the same value in all directions. In this paper these graphs are referred to as Symmetric networks.

2.5 Graphs with connections based on offset


Figure 2.1: Isotropic connection

In a graph with connections based on offset, each node's connectivity area is shifted by an offset. Compared with the connection rule of a graph with isotropic connections, the offset can be seen as a shift of the connectivity circle in a given direction. The offset can be distributed in many ways; the three ways relevant to this report are presented below.

Figure 2.2: Off-set connection

2.5.1 Independent

Independent means that the offset for each node is chosen independently from a distribution. In this paper these graphs are referred to as Random graphs/networks.

2.5.2 Identical

Identical means that the offset is chosen to be the same for every node in the network. In this paper these graphs are referred to as Homogeneous graphs/networks.


2.5.3 Neighbouring-rule

Neighbouring rule means that the offset for each node is chosen from a distribution in such a way that neighbouring nodes have similar offsets. In this paper these graphs are referred to as Perlin graphs/networks.

2.6 Perlin noise

Perlin noise is a type of gradient noise, which was developed by Ken Perlin to create natural-looking textures in CGI. When applying Perlin noise to the offset values of the graphs explained above, one gets a so-called smooth transition between the values, which can be viewed as a neighbouring dependency [11]. Thus the chosen direction of the connectivity circle for one node will influence neighbouring nodes to point in a similar direction. Perlin noise also has a specified size, ranging from 0 to √(n·n), where n·n is the number of nodes in the network. The size is defined as the area of influence for each node. Perlin size 0, or Perlin 0, means the influence area has a radius of 0 and will therefore not influence any neighbours. A Perlin with size 6 will influence nodes within a circle with radius 6. Perlin 0 is therefore a Random network and Perlin √(n·n) is equal to a Homogeneous network (where all values point in the same direction).
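As a rough illustration of this neighbouring dependency (not the Perlin implementation used in the thesis, which relies on Sebastian Spreizer's code listed in Appendix A), spatially smoothing independently drawn angles produces a direction field with the same qualitative property; the grid size and smoothing width below are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

side = 80                                    # 80x80 grid, 6400 nodes
rng = np.random.default_rng(0)

# "Random" network: every node draws its offset direction independently.
random_angles = rng.uniform(0.0, 2.0 * np.pi, size=(side, side))

# Perlin-like field: smooth the x/y components of the directions with a
# Gaussian kernel (sigma plays a role similar to the Perlin size; "wrap"
# matches the toroidal boundary used later in section 3.1.2).
sigma = 8.0                                  # illustrative, cf. Perlin 8
vx = gaussian_filter(np.cos(random_angles), sigma=sigma, mode="wrap")
vy = gaussian_filter(np.sin(random_angles), sigma=sigma, mode="wrap")
perlin_like_angles = np.arctan2(vy, vx)      # neighbours now point similarly

# sigma -> 0 recovers the Random case; a very large sigma approaches the
# Homogeneous case where all nodes share one direction.
```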


2.8 Algorithms for finding communities

There exist a number of different methods for finding the number of communities in a network. One of them is to look at the eigenvalue spectrum of a network, define a spectral band, and then count the number of eigenvalues outside this band.

2.8.1 Eigenvalue-spectra of network

According to Zhang et al., the number of eigenvalues outside the spectral band of a network equals the number of communities in that network [14]. Since all examined eigenvalues are complex numbers, a simple plot of the eigenvalues yields a plot where the x-axis represents the real part of an eigenvalue and the y-axis the imaginary part. The eigenvalue spectrum of a graph, on the other hand, is a projection of the eigenvalues onto the x-axis (real value axis). The y-axis in this case represents the total number of eigenvalues with a particular real value. Another possibility is to let the y-axis represent the density of the real values, which can be seen in figure 2.3. This yields a similar result.

Figure 2.3: Example of an eigenvalue spectrum from Zhang et al. [14]

2.8.2 Definition of spectral band and outlying eigenvalues


In the example in figure 2.3, most of the eigenvalues in the network are between the real values -20 and 20. The plot in between these values defines the spectral band. The outlying eigenvalues are the eigenvalues outside this band. In figure x.x they are represented by two dark blue vertical lines. It is also noted that the outlying eigenvalues all have real values greater than those that reside within the band. Thus they can simply be quantified.

2.9 Network activity dynamics

Figure 2.4 shows clustered activity to the left and the angle of the offset for each neuron in a 120x120 network to the right. The top pictures represent a Random network, the middle a Perlin network and the bottom a Homogeneous network. These are created by rules similar to those of the networks used in this study. In the pictures to the left, the x-axis is time in ms and the y-axis node ID. The different colours to the left represent different clusters; to understand what the clusters actually are, one must understand the work of Max Larsson and Anders Kjerrgren. In short, they created networks with different connectivity rules and then sent noise through each network, looking at when different nodes get activated by the noise over time [6]. A cluster is a set of nodes close in space and close in the time slot when they activate. One can compare this to flow in the network. One sees that Random has more or less no moving flow, Homogeneous has a clear direction of flow, and Perlin has flow but in an unclear fashion.


Figure 2.4: A picture adapted from the work of Max Larsson and Anders Kjerrgren [6]. Note that the label description is missing.


Chapter 3

Method

3.1 Creating the data sets

There are two data sets used in this report. Toy Data is the data set that represents the toy example of the problem and was created by the authors of this report. Spatial Connectivity Data is the data created with code by Sebastian Spreizer at the University of Freiburg. The networks in this data set are of size 80x80.

3.1.1 Toy Data

The data set Toy Data contains two networks. One represents a network with 0% feed forwardness and the other a network with 25% feed forwardness. Each matrix contains 2500 nodes, where each node belongs to a group of 250 nodes. The nodes are divided into the groups by ID.

Group   Nodes
1       0-250
2       251-500
3       501-750
...     ...
10      2251-2500

Table 3.1: Toy Data groups


Each matrix has two properties: p_inside is the probability that a node connects to another node in the same group, and p_outside is the probability that a node connects to a node in another group.

For connections between groups, a node x can only connect to a node y whose group number is greater than that of x. A group has connections to the three following groups; group 8 therefore only has two connections to other groups, group 9 has one and group 10 has none. This is the result of only being able to connect to groups with higher group numbers.

Matrix   p_inside   p_outside
0%       0.4        0.0
25%      0.3        0.1

Table 3.2: Toy Data matrices
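As an illustration of the rules above, a Toy Data matrix could be generated roughly as follows. This is a sketch, not the thesis code: the function name is ours, and the id // 250 grouping differs slightly from the exact boundaries in table 3.1.

```python
import numpy as np

def toy_network(p_inside, p_outside, n_groups=10, group_size=250, seed=0):
    """Adjacency matrix with 10 groups; p_inside within a group, p_outside
    to each of the next three groups (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    n = n_groups * group_size
    group = np.arange(n) // group_size               # group index 0..9 per node
    adj = np.zeros((n, n), dtype=np.uint8)
    for src_g in range(n_groups):
        src = np.where(group == src_g)[0]
        for dst_g in range(src_g, min(src_g + 4, n_groups)):
            dst = np.where(group == dst_g)[0]
            p = p_inside if dst_g == src_g else p_outside
            adj[np.ix_(src, dst)] = rng.random((src.size, dst.size)) < p
    np.fill_diagonal(adj, 0)                         # no self-connections
    return adj

ff_0 = toy_network(p_inside=0.4, p_outside=0.0)      # "0%" matrix of table 3.2
ff_25 = toy_network(p_inside=0.3, p_outside=0.1)     # "25%" matrix of table 3.2
```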

3.1.2 Spatial Connectivity Data

All networks in the Spatial Connectivity Data set are LCRNs. Every node has an ID and a fixed position determined by that ID. The distance between two adjacent nodes is 1 unit length.


Figure 3.1: Nodes represented by ID in an 80x80 network

Defining the connectivity radius

When creating the matrices, the nodes at the edges of the matrix were able to connect to nodes at the opposite side of the matrix. One can think of it as drawing the matrix on a piece of paper and then folding the paper into a torus. The nodes on one edge of the paper are then neighbours of the nodes on the opposite edge.

Figure 3.2: Visualisation of folding the matrix into a torus
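A minimal sketch of the distance measure implied by this folding (our own helper, not the thesis code):

```python
import numpy as np

def toroidal_distance(pos_a, pos_b, side=80):
    """Euclidean distance between two grid positions on an 80x80 torus:
    coordinates wrap around, so opposite edges are neighbours."""
    d = np.abs(np.asarray(pos_a) - np.asarray(pos_b))
    d = np.minimum(d, side - d)           # take the shorter way around the wrap
    return float(np.hypot(d[0], d[1]))

print(toroidal_distance((0, 0), (79, 0)))   # 1.0, edges wrap around
print(toroidal_distance((0, 0), (40, 40)))  # ~56.6, farthest-apart positions
```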


3.2 Calculating Communities

To calculate the number of communities in a network, the method of looking at the eigenvalue spectrum proposed by Zhang et al. was used [14]. In the implementation of the idea, the range of the spectral band was first computed. This was done as follows:

1. Calculate the median (m) of all eigenvalues on the real axis. m serves as the centre of the spectral band, since the outlying eigenvalues are too few to affect the median.

2. Calculate the distance (d) from m to l, where l is the eigenvalue with the lowest value on the real axis.

3. [m - d, m + d] is the range of the spectral band.

This means that the maximum value of the spectral band is equal to m + d. After that, a simple iteration over all eigenvalues is performed, and a count variable is incremented for each real eigenvalue greater than the spectral maximum. The count variable holds the final result.
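As an illustration, the three steps and the final count could be implemented as follows. This is a sketch under the assumption that the network is given as a real-valued adjacency matrix; variable names are ours.

```python
import numpy as np

def count_communities(adj):
    """Count eigenvalues to the right of the spectral band (steps 1-3 above)."""
    eig_real = np.linalg.eigvals(adj).real      # real parts of all eigenvalues
    m = np.median(eig_real)                     # step 1: median of the spectrum
    d = abs(m - eig_real.min())                 # step 2: distance to lowest value
    band_max = m + d                            # step 3: upper edge of the band
    return int(np.sum(eig_real > band_max))     # outlying eigenvalues = communities

# Example on the toy matrices sketched in section 3.1.1 (ff_0, ff_25); both
# would be expected to yield 10:
# print(count_communities(ff_0), count_communities(ff_25))
```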

3.3 Nordling-Tivelius algorithm

The Nordling-Tivelius algorithm is an ad-hoc algorithm that finds what most likely are feed forward chains in a network.

1. Pick a node s0 in the network.

2. Find all nodes within a radius of r0 from s0. Add them to an array, N0.

3. Find all nodes that the elements in N0 connect to. Add them to an array, postarray.

4. Calculate the c0 most common nodes in postarray. Add them to two arrays, N1 and A.

5. Set N0 = N1 and N1 = []. Repeat steps 3-5 x times.
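Below is a minimal sketch of how the steps above could be implemented. The adjacency representation, the toroidal distance and the tie-breaking in step 4 are our own assumptions; the thesis does not publish its implementation.

```python
import numpy as np
from collections import Counter

def nordling_tivelius(adj, positions, s0, r0=5, c0=20, x=20, side=80):
    """adj[i]: set of nodes node i connects to; positions[i] = (row, col)."""
    def torus_dist(a, b):
        d = np.abs(np.asarray(positions[a]) - np.asarray(positions[b]))
        d = np.minimum(d, side - d)
        return np.hypot(d[0], d[1])

    # Steps 1-2: start from s0 and collect all nodes within radius r0.
    n0 = [v for v in range(len(positions)) if torus_dist(s0, v) <= r0]
    visited = []                                   # the array A of visited nodes
    for _ in range(x):                             # step 5: repeat x times
        # Step 3: all targets of the current front, with multiplicity.
        postarray = [t for v in n0 for t in adj[v]]
        # Step 4: keep the c0 most common target nodes as the next front.
        n1 = [node for node, _ in Counter(postarray).most_common(c0)]
        visited.extend(n1)
        n0 = n1

    # Section 3.4: report unique visited nodes farther than r0 from the start.
    return len({v for v in set(visited) if torus_dist(s0, v) > r0})
```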


3.4 Calculating Feed forward chains using the Nordling-Tivelius algorithm

Table 3.3: Initial values for the Nordling-Tivelius algorithm

Variable          Value
c0                20
r0                5
x                 20
Starting points   81

The results of the Nordling-Tivelius algorithm are presented as an array containing, for each starting point, the number of unique visited nodes at a distance greater than r0 from that starting point.

3.5 Calculating correlation between Communities and Feed forward chains

To calculate the correlation between the feed forwardness of a network and the community structure, the results from the Nordling-Tivelius algorithm and the count of outlying eigenvalues were used. In total, three plots were created, each showing one of two things. (1) The correlation between the average number of unique visited nodes divided by the total number of possible visited nodes and the number of outlying eigenvalues divided by the total number of eigenvalues. (2) The correlation between the probability of finding a feed forward chain from a starting point and the number of outlying eigenvalues divided by the total number of eigenvalues.

For type 1, two plots were created: one using Symmetric and Random networks of size 80x80, and one using Perlin 4, Perlin 8 and Perlin 16 networks of size 80x80. With the starting values presented in section 3.3, the total number of possible visited nodes is 400, calculated by multiplying c0 by x.

For type 2, a feed forward chain is defined as having visited more than 130 unique nodes from a starting point.
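A small sketch (our own code, not from the thesis) of how the two plotted quantities described in this section could be computed, using the values from table 3.3 and the 130-node threshold above; function names are ours.

```python
c0, x, n_nodes = 20, 20, 80 * 80
max_visited = c0 * x                            # 400 possible visited nodes

def type1_point(visited_counts, n_outlying):
    """Average visited fraction vs. fraction of outlying eigenvalues."""
    avg_flow = sum(visited_counts) / len(visited_counts) / max_visited
    return n_outlying / n_nodes, avg_flow

def type2_point(visited_counts, n_outlying):
    """Probability of finding a chain (>130 unique nodes) vs. outlier fraction."""
    p_chain = sum(v > 130 for v in visited_counts) / len(visited_counts)
    return n_outlying / n_nodes, p_chain

# visited_counts would hold the per-starting-point results (81 values) returned
# by the Nordling-Tivelius algorithm for one network realisation.
```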


Chapter 4

Results

4.1 Toy data

The following results were generated with the sole purpose of testing the theory stated in Zhang et al.'s community detection work [14]. The theory states that the number of communities in a network is equal to the number of eigenvalues outside the spectral band. Two different networks are presented here to test the theory: a network with 0% feed forward connections and a network with 25% feed forward connections. The two networks each consist of 2500 nodes and are represented first by a plot of the adjacency matrix and then by a plot of all the eigenvalues. The number of eigenvalues in each plot is equal to the number of nodes (2500). The blue vertical line in the plots represents the maximum real value of the spectral band.

4.1.1 Feed Forward 0% network

By looking at the adjacency matrix of the network, figure 4.1, it is clear that it contains 10 distinct components, since the nodes in each block only connect to nodes in the same block.


Figure 4.1: Adjacency matrix for the feed forward 0% network

The eigenvalue plot of the network, figure 4.2, shows that the vast majority of the eigenvalues are located to the left of the maximum value of the spectral band. By looking at the zoomed-in plot one can clearly see that there are 10 eigenvalues outside the spectral band. The same number was also calculated by the method described in section 3.2.


Figure 4.3: Zoom of eigenvalue plot for the feed forward 0% network

4.1.2 Feed forward 25% network

In the network in figure 4.4 there are no 10 obvious components, since 25% of the connections occur outside the block of the source node. However, there still exist 10 communities, indicated by the fact that the diagonal blocks are more coloured than the rest.


The eigenvalue plot of the network, figure 4.5, looks similar to the eigenvalue plot of the feed forward 0% network, figure 4.2. The only difference is that the outlying eigenvalues in this network span a wider range on the real axis. The number of outlying eigenvalues (10) stayed the same.

Figure 4.5: Eigenvalue plot of the 25% feed forward network


Even though the network is not divided into 10 clear components (figure 4.4), there still exist 10 eigenvalues outside the spectral band, indicating that there are 10 communities.

4.2 Spatial connectivity data

4.2.1 Eigenvalues

The following images show examples of the eigenvalue plots for a Random, a Symmetric and a Perlin 4 network. These are shown so the reader can get a grasp of how different spectra might look. The blue vertical line represents the maximum real value of the spectral band. In total there are 6400 eigenvalues plotted (one for each node), and the vast majority of them are to the left of the vertical line.


Figure 4.8: Eigenvalue plot of the Symmetric network


4.2.2 Eigenvalue spectra

As seen in figure 4.11, the eigenvalue structure of the three graphs is very similar. The maximum real value of the spectral band is, in figures 4.7-4.9, drawn between the values 19.57 and 51.00. It is also noted that for real values larger than 19.57 the small differences in structure become harder to distinguish.

Figure 4.10: A histogram of the real parts of the eigenvalues in figures 4.7-4.9


4.2.3 Flow and community

The figures in this subsection are not used in the actual study. They are merely a way for the reader to see visualised flow. Each plot uses the same values presented in section 3.3 except the number of starting points which is 5.

Figure 4.12 is a general plot of the flow in a Perlin 16 network. The starting area is represented by the darkest grey. Each shade of grey represents an iteration. In the figure one can see strong flow at the coordinates [18,30] and [64,15], and weak flow at coordinates [40,40]. Note that all starting points ran the same number of iterations.


Figure 4.13 is a general plot of the flow in a Random network. The starting area is represented by the darkest grey. Each shade of grey represents an iteration. The figure shows some flow at the coordinates [20,40]; otherwise there is no flow.


4.2.4 Correlation

Each plotted point represents a realisation of a network, and the different colours represent different types of networks. There are a total of 30 network realisations per network type.

Figure 4.14: The correlation plot type 1 of Perlin 4-, Perlin 8- and Perlin 16 networks


Chapter 5

Discussion

In this report, it was investigated whether structural anisotropy in the spatial connectivity leads to the emergence of communities and feed forward chains in a recurrent network. The results show that structure emerges and that there exists a correlation in these graphs between the number of communities and the average flow.

5.1 Discussion of the results

One possible reason why we see an increase in flow with a higher number of communities is that more communities allow more feed forward chains to occur. If a network consisted of only one large community containing all the nodes in the network, there would be no room for flow, because all nodes would be heavily connected to each other and it would be nearly impossible for a particular direction to occur. If one instead divides all nodes into two communities, there is a possibility of flow between these communities; however, the chains would only be one edge long. The conclusion can therefore be drawn that nodes not part of any community, or only part of a very small community, play a big role in the emergence of feed forward chains. These nodes serve as bridges, or chains, between communities. This explains why certain chains are longer than others, since communities are at different distances from each other.


Another theory is that the nodes not part of a community form chains that avoid communities in their path. This theory is derived from figure 4.12, where it is seen that the flows are curved, almost as if avoiding obstacles in their paths. This theory also relies on the conclusion that chains emerge from nodes not part of big communities. It is, however, noted that many communities do not instantly result in more flow in a network, just an increased possibility of it. In figure 4.14 the Perlin 4 realisation with the fewest communities is also the one with the longest average flow. A correlation can thus not be established by looking at only one type of network; it becomes apparent on a bigger scale. Comparing three different networks eliminates the error factor where realisations have more flow just by chance.

5.2 Discussion of the method

The problem this study looks at can be seen as a novel problem. This means that methods for analysing and/or solving the problem are not common, at least when it comes to flow in graphs. Hence the Nordling-Tivelius algorithm should be examined in more detail, since without a proper discussion of it the results could be highly questionable. The method used for finding communities should also be discussed to some extent. Finally, the calculations regarding the correlation between the values should be discussed.

5.2.1 Calculating feed forward chains


One wants to pick initial values in such a way that starting points do not merge with each other, and the number 4200 shows that the chosen values create merges. A solution would be to pick fewer starting points. However, this affects the results in other ways. Firstly, the algorithm does not make sure that starting points do not visit the same nodes, i.e. a total of 6400 unique visited nodes across all starting points combined does not mean that starting points do not share visited nodes. If one were to check this, one would affect the natural flow, creating uncertain results. Secondly, fewer starting points increase the risk of not analysing the whole graph, since one cannot decide where each chain will go. Thus, letting the starting points share, in this case, at least 4200 nodes is necessary for obtaining viable results. Exactly 81 starting points were used because they create a 9x9 grid, which increases the chance of analysing the whole graph.

One could also discuss the choice of setting x to 20 and c0 to 20. During testing of the Nordling-Tivelius algorithm it seemed that setting x to a value below 20 would not give enough visited nodes to distinguish chains, while increasing it above 20 would increase the chance of visiting the same nodes. Setting c0 to 20 was done for the same reasons. Instead of choosing the 20 most visited nodes, one could use a histogram to find the nodes that received more than a certain number of connections. This would create a flow that occurs more naturally, since only highly targeted nodes would be part of the chain. However, this does not work in practice, since one cannot choose the threshold so that a reasonable number of nodes is found at each iteration: either the threshold is too small and one gets too many nodes per iteration, or it is too large and one cannot find any nodes after just a few iterations. This is why a fixed number of most visited nodes was used.

5.2.2 Calculating the number of communities


It can easily be seen that the location to draw the line is at the x-value 20, the maximum value of the spectrum. However, in figure 4.10 the location to draw the line is not as apparent. This is due to the vast number of eigenvalues positioned closer to the density circle. By instead looking at figures 4.7-4.9, the radius of the density circle was calculated and the line was drawn at the maximum x-value of the circle. This is possible partly because all plots of eigenvalues formed similar circles. By looking at the spectra plot, figure 4.10, one can see that the plotted lines are very similar in structure around the values where the lines were drawn. This means that even if the line is not drawn at the perfect position, the method can still be used, since all networks are evaluated in the same way. This gives correct results regarding the correlation, but it is also noted that the actual number of communities in the network may be calculated incorrectly.

5.2.3 Calculating the correlation

To calculate the correlation between the community structure and the feed forward chains, two different methods were used, referred to as type 1 and type 2. As one can see in figures 4.14-4.16, all axes represent percentage values. This is because, if the average length of all paths in a realisation is 85 unit lengths, the number 85 on its own does not give the reader any interesting information. By instead dividing that number by the total possible length of a path, it gives a valuable perspective and is no longer dependent on the size of the network. The same holds for the eigenvalues on the x-axis.
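As a worked instance of this normalisation, taking the total possible value of 400 from section 3.5 (an assumption here, since the text above speaks of path length rather than visited nodes):

85 / 400 = 0.2125, i.e. about 21%.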


5.3 Future work

For future work it would be reasonable to analyse larger networks. The graphs that have been analysed are not even close to the size of the networks in the brain. By changing the size of the network one could get different results.

The algorithms used in this study have been questioned; however, they have not been compared to other algorithms for finding feed forward chains and communities. Regarding chains there are no known algorithms, so developing new, improved methods is recommended. For finding communities there are many algorithms in use today; comparing these could give more reliable results.


Chapter 6

Conclusion

The conclusion can be drawn that community structure and feed forward chains exist in networks with structural anisotropy in their spatial connectivity. There is a clear correlation between the number of communities in a network and the amount of feed forwardness in that network: with an increase in the number of communities, one sees an increase in the flow of the network. The correlation is not strictly linear, but it becomes apparent when analysing many realisations and different types of networks.


Bibliography

[1] G Arfken. Mathematical Methods for Physicists, 3rd ed. Orlando, FL: Academic Press, 1985, pp. 229–237.

[2] Markus Butz, Wolfram Schenck, and Arjen van Ooyen. "Anatomy and Plasticity in Large-Scale Brain Models". In: Frontiers in Neuroanatomy 10 (2016), p. 108.

[3] G. Bard Ermentrout and J. Bryce McLeod. "Existence and uniqueness of travelling waves for a neural network". In: Proceedings of the Royal Society of Edinburgh Section A: Mathematics 123.3 (1993), pp. 461–478.

[4] Axel Hutt, Connie Sutherland, and André Longtin. "Driving neural oscillations with correlated spatial input and topographic feedback". In: Physical Review E 78.2 (2008), p. 021911.

[5] Xiaolong Jiang et al. "Principles of connectivity among morphologically defined cell types in adult neocortex". In: Science 350.6264 (2015), aac9462.

[6] Max Larsson and Anders Kjerrgren. Introducing heterogeneities in biological neuronal network models with distance-dependent connectivity. Royal Institute of Technology, May 2018.

[7] Arvind Kumar, Stefan Rotter, and Ad Aertsen. “Conditions for propagating synchronous spiking and asynchronous firing rates in a cortical network model”. In: Journal of neuroscience 28.20 (2008), pp. 5268–5280.

[8] Simona Lodato and Paola Arlotta. "Generating neuronal diversity in the mammalian cerebral cortex". In: Annual Review of Cell and Developmental Biology 31 (2015), pp. 699–720.

[9] Carsten Mehring et al. “Activity dynamics and propagation of synchronous spiking in locally connected random networks”. In: Biological cybernetics 88.5 (2003), pp. 395–408.


[10] Mark E. J. Newman. "The structure and function of complex networks". In: SIAM Review 45.2 (2003), pp. 167–256.

[11] Ken Perlin. “An image synthesizer”. In: ACM Siggraph Computer Graphics 19.3 (1985), pp. 287–296.

[12] Alex Roxin, Nicolas Brunel, and David Hansel. "Role of delays in shaping spatiotemporal dynamics of neuronal activity in large networks". In: Physical Review Letters 94.23 (2005), p. 238103.

[13] Sebastian Spreizer et al. "Activity Dynamics and Signal Representation in a Striatal Network Model with Distance-Dependent Connectivity". In: eNeuro 4.4 (2017), ENEURO–0348.


Appendix A

Spatial Connectivity Code

The code created by Sebastian Spreizer at the University of Freiburg can be found in the following public GitHub repository:

https://github.com/babsey/spiking-activity-dynamics
