
Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Electrical Engineering

2020 | LiTH-ISY-EX–20/5286–SE

Master thesis, 30 ECTS

Scheduling in a Blockchain

Schemaläggning i en blockkedja

Fabian Petersen

Supervisor: Mikael Asplund
Examiner: Jan-Åke Larsson


Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/her own use and to use it unchanged for non-commercial research and educational purposes. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility. According to intellectual property law, the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.


Abstract

Fewer customers in Sweden are using cash in their everyday transactions than ever before. If this trend continues, then the Swedish payment system will, in a few years, be entirely controlled by private companies. Therefore the central bank needs a new digital asset trading platform that can replace the reliance on private companies with a system supplied by a government entity (the central bank).

This thesis revolves around the creation of a digital asset trading platform focused on the capital market, which can serve that role. The primary focus of the thesis is to investigate how time can be introduced to a blockchain so that events such as a coupon payment or a dividend can be scheduled to occur at a specific time.

The digital trading platform created as part of this thesis was then tested to ascertain the best method of introducing time. The results presented in this thesis show that one of the methods has a higher accuracy, with an average of 1.3 seconds between the desired execution time and the actual execution time.

The platform was also used to evaluate the feasibility of a digital “currency” based on blockchains, as a replacement for credit cards supplied by Mastercard or Visa. The results indicate that a blockchain solution is a somewhat feasible replacement while suffering from some disadvantages, primarily in throughput.


Acknowledgments

I would like to thank my supervisor Mikael Asplund for support during the thesis work and Jan-Åke Larsson for being the examiner. I would also like to thank everyone at Visigon Nordic AB, especially Gustav Ekeblad, for providing valuable feedback and advice at all stages of the thesis. Furthermore, I would like to thank my family and friends for giving me motivation throughout the thesis work.

Fabian Petersen Linköping, 2020


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables

1 Introduction
1.1 Motivation
1.2 Aim
1.3 Research questions
1.4 Method Overview
1.5 Delimitations
1.6 Previous Work

2 Theory
2.1 Blockchain
2.2 Tendermint
2.3 Transfer throughput

3 Method
3.1 Functional system requirements
3.2 Choosing a Blockchain
3.3 Introducing time to a blockchain
3.4 System overview
3.5 Performance evaluation

4 Results
4.1 Average block time
4.2 Processed transactions
4.3 Time until processed transactions
4.4 Accuracy of block time solution
4.5 Accuracy of observer-driven time

5 Discussion
5.1 Results
5.2 Method
5.3 The work in a wider context

6 Conclusion
6.1 The more accurate method
6.2 Low accuracy when predicting block time
6.3 Transfers affect the block time
6.4 System throughput

List of Figures

1.1 Amount that used cash in their last transaction
2.1 Time between blocks in Bitcoin measured in minutes
2.2 Time between blocks in Ethereum measured in seconds
2.3 A simplified state machine model of Tendermint core
2.4 Digital transactions per year
3.1 Event scheduling using block-driven time
3.2 Event scheduling using observer-driven time
3.3 Architecture Overview
3.4 The geographical position of the servers
4.1 The accumulated difference in milliseconds between the average commit time and the actual commit time during low load
4.2 The accumulated difference in milliseconds between the average commit time and the actual commit time during average load
4.3 The accumulated difference in milliseconds between the average commit time and the actual commit time during high load

List of Tables

4.1 Block time during low load (ms)
4.2 Block time during medium load (ms)
4.3 Block time during high load (ms)
4.4 Processed transactions per minute for different load scenarios
4.5 Time between sending and processing transactions during low load (ms)
4.6 Time between sending and processing transactions during medium load (ms)
4.7 Time between sending and processing transactions during high load (ms)
4.8 Accuracy of block time solution during low load (ms)
4.9 Accuracy of block time solution during medium load (ms)
4.10 Accuracy of block time solution during high load (ms)
4.11 Trigger delay of observer-driven time during low load (ms)
4.12 Accuracy of observer-driven time during low load (ms)
4.13 Trigger delay of observer-driven time during medium load (ms)
4.14 Accuracy of observer-driven time during medium load (ms)
4.15 Trigger delay of observer-driven time during high load (ms)

1 Introduction

This chapter gives an overview of the problem studied in this thesis.

1.1 Motivation

The usage of cash for everyday transactions has been steadily decreasing over the last few years; more and more customers prefer digital payment methods such as credit cards over regular cash1. A biennial survey by the Swedish central bank (Riksbanken), shown in Figure 1.1, found that the share of customers who used cash in their latest transaction fell from 39 % in 2010 to 13 % in 20181.

If this trend continues, then the usage of cash is likely to be phased out in a few years, especially since the results also show that cash was predominantly used by elderly customers between 64 and 84 years old, while younger generations preferred digital payment methods like Swish and debit cards1.

1.1.1 Reliance on private companies

A problem with many digital payment methods like credit cards, debit cards, Google Pay, and Apple Pay is that they are entirely supplied and controlled by private companies[1]. These companies might over time reduce the number of options available to the general public, especially to those who are not deemed profitable or who are slow or unable to adopt the new technology[1]. Furthermore, it is also possible that the reliance on private companies will erode the public's confidence in the Swedish payment system and its ability to control the currency.

Figure 1.1: Amount that used cash in their last transaction

There are currently three large companies (Visa, Mastercard, and American Express) that control the majority of the debit and credit card market in the western world. These companies have an immense ability to control the flow of money without the interference of any government. Therefore, it would be in the public's best interest that the central bank supplies a solution for digital transfers that can rival the ones currently dominating the market: a digital solution controlled by a neutral central party who acts in the interest of the public without profit as an incentive.

1 Sveriges Riksbank. Payment patterns in Sweden 2018. URL:https://www.riksbank.se/globalassets/

1.1.2 Using blockchains for trading

How could such a digital payment solution work? A new technology, blockchain, has emerged in the last few years and has been used in several digital trading systems. Blockchain networks consist of several connected computers called nodes. All nodes work together to group the messages they receive into blocks, which are in turn added after each other like links in a chain. The blocks (links) are connected through advanced mathematics (cryptography), which makes it near impossible to alter the content of the blocks or their order on the chain without it being detectable.

The process of adding blocks to the chain is called “committing” and the commit time T(n) of a block n is the equivalent real time of when the block was added. The accuracy of the commit time T(n) is measured as the difference (in milliseconds) between the estimated time when a block was supposed to be committed and when it was committed, where a smaller difference (i.e., greater accuracy) is better.
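As a small illustration of this accuracy measure, the sketch below computes the difference between an estimated and an actual commit time. The function name and the numbers are invented for the example, not taken from the thesis.

```python
def commit_accuracy_ms(estimated_ms: int, actual_ms: int) -> int:
    # Accuracy of the commit time T(n): the absolute difference between
    # the time a block was estimated to be committed and the time it
    # actually was committed. Smaller is better.
    return abs(actual_ms - estimated_ms)

# A block estimated at t = 10 000 ms but committed at t = 11 300 ms
# is off by 1 300 ms (matching the ~1.3 s figure quoted in the abstract).
print(commit_accuracy_ms(10_000, 11_300))  # -> 1300
```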

Blockchains are primarily used for digital “currencies” such as Bitcoin and Ethereum, but can blockchains be used by a central bank as the backbone to supply a digital version of the commonly used Swedish currency? Is it possible to create a blockchain solution that works similarly to credit cards in the eyes of the end-users but with the significant difference that the system is controlled by the central bank and other trusted financial institutions?

1.1.3 The importance of accurate scheduling

Ethereum and Bitcoin are, in essence, quite simple systems for the end users, where only a few actions are permitted. One such action is a simple currency transfer between accounts on the system. However, when creating more complex systems similar to the ones needed for fiat (classic) currency, it becomes readily apparent that a massive problem remains to be solved: there is no universally adopted way to schedule actions in a blockchain that should happen at a specific time in the future.


When trading digital assets, it is common that real-world events occur that affect the value of assets. For example, payouts based on asset ownership, such as coupon and dividend payouts, change the value of the affected assets. It is, therefore, imperative that the actions corresponding to those events are executed on time and in a predictable and consistent order, and that all parties share the same view of the current state (for example, paid/not paid).

If it is unclear exactly when real-world events change the value of assets, then it is near impossible to trade fairly on the platform. A typical scenario would be that a user places a bid for an asset at price X, but before the bid is finalized, there is a payout Y based on the ownership of that asset. This would give the asset the new market value of X - Y, and the user would have unknowingly overpaid for the asset.

The naïve solution to scheduling events is to use the local clock on each blockchain node and execute the event when its scheduled time occurs. However, this will undoubtedly lead to race conditions, since regular transfers are processed at different times on separate nodes due to network delays or slight clock drift. This inconsistency will, over time, lead to a situation where multiple nodes have different versions of the application state, where all states are equally correct. Hence a better solution is needed. This thesis investigates two solutions (described in Section 3.3) for introducing a time concept to a general blockchain solution, which can be used to schedule events that are months away with near-second accuracy. The solution itself is general enough to be used for any digital trading system that exchanges digital assets between users.
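The divergence caused by the naïve local-clock approach can be sketched in a few lines: two nodes apply the same ordered transfer stream, but each fires a scheduled event according to its own (skewed) local clock, so the event lands between different transfers on each node and the resulting states diverge. All names and numbers here are invented for illustration.

```python
def run_node(transfers, event_time, clock_skew_ms):
    # Apply an ordered list of (timestamp_ms, amount) transfers.
    # The scheduled event (here: a doubling, e.g. a dividend) fires when
    # this node's own skewed clock reaches event_time.
    balance = 100
    history = []
    fired = False
    for t_ms, amount in transfers:
        local_time = t_ms + clock_skew_ms   # this node's view of "now"
        if not fired and local_time >= event_time:
            balance *= 2
            history.append(("EVENT", balance))
            fired = True
        balance += amount
        history.append(("TX", balance))
    return history

transfers = [(900, +10), (1000, -5), (1100, +20)]
node_a = run_node(transfers, event_time=1000, clock_skew_ms=0)
node_b = run_node(transfers, event_time=1000, clock_skew_ms=150)
print(node_a == node_b)  # -> False: the replicas disagree on the state
```

Because the event interleaves differently with the transfers on each node, the replicas end up with different balances even though both executed "correctly" by their own clocks.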

1.2 Aim

The thesis aimed to investigate ways to introduce a time concept to a generalized blockchain solution that can be used to schedule events in the context of a digital asset trading platform. The thesis also evaluates the feasibility of using blockchains as the backbone for any platform where digital assets are exchanged between users.

1.3 Research questions

The thesis will cover the following questions.

1. How does the accuracy of a block time solution compare to an observer-driven solution?

This question aims to identify the scenarios where one of the two solutions for introducing time performs with greater accuracy, in order to find the more accurate solution for a specific use-case.

2. With which accuracy can the commit time T(n) be predicted given the commit time T(0) of the first block?

This question is core to the usability of block-driven time. If the commit time of a future block cannot be predicted with accuracy, then the solution cannot be used to schedule events.

3. Can end-users affect the execution time of events by making transfers?

The goal of this question is to determine the security of the system from outside threats and to determine whether the blockchain nature of the system may give some advantage to malicious users who are attempting to alter the execution time of events.


4. What is the highest throughput that can be achieved by the system?

While the accuracy of the solutions is essential, it is also crucial that the system as a whole can handle the load that could be placed on it in a real-world scenario. Therefore this question considers the throughput as measured by the average number of transactions per minute.

1.4 Method Overview

This thesis compares two different methods of introducing time to a generalized blockchain in the context of a digital asset trading platform; the time concept was then used to schedule events. A rudimentary digital asset trading platform implementing both methods of introducing time was created as part of this thesis in order to evaluate both methods under otherwise identical conditions.

The methods are evaluated using metrics of accuracy, such as clock drift from real time, as well as consistency, while also evaluating the feasibility of using blockchain technology as the backbone for a digital currency or any other platform where digital assets are exchanged between users. The thesis evaluates the methods during poor conditions, such that any system using the solution would seldom perform worse than expected.

1.5 Delimitations

This study does not cover the following factors.

1. How to create a general blockchain solution with a consensus engine and communication protocol. This study used Tendermint, which implements all low-level blockchain details.

2. There will not be any research into the potential legal implications of trading digital assets on a blockchain solution. Any legal ramifications are outside the scope of this thesis as it only focuses on the technical aspects of the blockchain solution.

3. How an insider with access to one of the nodes can affect the system. Any modifications to the system, such as modifying the running application or changing system settings such as the system clock, are outside the scope of the thesis.

1.6 Previous Work

There has been much research into how to maintain the block time in a network with a changing amount of computational power. In essence, all proof-of-work blockchains need to adjust the difficulty of their problem according to the current computational power[7, 5]. Otherwise, a block time of 60 seconds would decrease to 30 seconds if the network's computational power doubled, making any prediction of the time between two subsequent blocks practically useless[7, 5].

There has also been research into the amount of electricity used to support the Bitcoin network (a proof-of-work blockchain); it is currently estimated to match the electricity usage of smaller countries such as Ireland[4]. The research is mainly focused on the significant cost associated with its usage and on its sustainability as the Bitcoin network expands[4, 6].

Apart from the practical aspects of running a blockchain network, there has also been a lot of research into how society would transition into being cashless, and how being cashless could influence everything from criminality (such as tax evasion or theft) to the economy as a whole[1, 10].


2 Theory

This chapter covers the theory that will be used to create the system and analyze the results.

2.1 Blockchain

A blockchain is a list of messages grouped into blocks, which are chained together through the use of cryptography[8]. The cryptography prevents changes to the blocks after they have been added to the chain[8]. The only way to change a block is to recalculate the hashes of all subsequent blocks, which is noticeable to all other actors that use the chain[8].

Blockchain technology was initially created under the pseudonym Satoshi Nakamoto to serve as the backbone of the Bitcoin cryptocurrency[8]. Blockchains solve the double-spending problem without the use of a trusted centralized server.

2.1.1 Block height

The height of a block is the number of blocks between it and the genesis (first) block on the chain. Hence, the genesis block has a block height of zero, with the overall height of the chain being one less than the number of blocks1.
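A toy example of this convention (the chain list and function are illustrative, not from the thesis):

```python
# A chain represented as a simple list of block labels.
chain = ["genesis", "block1", "block2", "block3"]

def block_height(index: int) -> int:
    # Height = number of blocks between this block and the genesis block,
    # so the genesis block (index 0) has height 0.
    return index

print(block_height(0))   # genesis height -> 0
print(len(chain) - 1)    # height of the chain tip -> 3
```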

2.1.2 Public vs Private blockchain

The difference between public and private blockchains is who is allowed to participate in the blockchain[9]. A public blockchain solution is open for everyone to participate in, while a private blockchain uses internal keys to prevent unauthorized access[9].

The system investigated in this thesis was a private blockchain, in that only a select number of nodes were allowed to join the blockchain, and only financial institutions were in control of the nodes. Private parties and end-users communicate with a layer situated above the blockchain that does a preliminary check of whether a transaction is valid, so that the blockchain is not flooded with invalid transactions.

1 Bitcoin Project. Block Chain Height, Block Height. Jan 2019. URL: https://bitcoin.org/en/glossary/


2.1.3 Issues with scale

One significant limitation is that blockchains do not scale well in some use-cases[3]. They often have a fixed number of transactions per block, which enforces an implicit maximum on the number of transactions per second[3], making them significantly slower than well-designed centralized systems.

Another consideration with blockchains is that the chain is always growing, each transaction is small, but the sum of all transactions will, over time, grow to an immense size [8]. For example, the blockchain of Bitcoin is over 200 GB in size2, while the Ethereum blockchain is over 180 GB in size3.

2.1.4 Block time

The block time is the average time it takes for the connected network of nodes to generate a new block in the chain. The block time is often theoretically calculated in proportion to the computational power of the network[8]. However, in practice, the performance differs over time, causing the block time to vary with each block as shown in Figures 2.2 and 2.1.

Multiple studies have been done on the advantages and disadvantages of different block times. Most found that a shorter block time leads to more computer resources being wasted after a correct solution has been found and to a higher rate of forks in the system[5], while also increasing the number of transactions per second. A shorter block time also increases the number of stale blocks, i.e., blocks that are empty except for the header[5]. These blocks make the blockchain grow faster than it otherwise would compared to a longer block time with a more substantial number of transactions per block[5].

There will also be a higher waste of computational power, as it takes some time to propagate a new block from the finder to all nodes in the network[5]; there is some lag between the time a hash is found on one node and the time the hash is known on the rest of the network[5]. This extra time causes some of the nodes to keep trying to find the hash even though it has already been found[5], thereby wasting computer resources. This was found to create a trade-off scenario where the faster transactions of a short block time are traded against the lower overhead (in performance and storage) of a longer block time[5]. Hence, there is no perfect block time, which is why blockchains differ widely in block time, from 15 seconds in Ethereum to 10 minutes in Bitcoin, as shown in Figures 2.2 and 2.1.

2.1.5 Block time in Ethereum and Bitcoin

The two most extensive networks of blockchain “currencies” are Bitcoin and Ethereum, and their consistency concerning block time was used to evaluate the measurements acquired for the Tendermint-based solution.

Figure 2.1 (with data from Bitcoinity.org4) shows an average of around 500 000 data points where the block time of Bitcoin has been measured. These values show a deviation of several

2 Bitcoin.com. Blockchain size. Mar. 2019. URL: https://charts.bitcoin.com/btc/chart/blockchain-size#5moc
3 Etherscan.io. Ethereum Chain Data Size Growth. Mar. 2019. URL: https://etherscan.io/chart2/chaindatasizefast
4 Bitcoinity.org. Average time to mine a block in minutes. Mar. 2019. URL: https://data.bitcoinity.org/


minutes at multiple points and are consistently about 18 seconds off the desired mark of 10 minutes, which would be a deviation of 3 % in the best case. Some of the worse conditions, like during March 2013, caused the block time to deviate by up to 150 seconds, which is 25 % below the desired block time of 10 minutes.
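A quick sanity check of these percentages (pure arithmetic on the numbers quoted above, nothing else assumed):

```python
target_s = 10 * 60              # Bitcoin's desired block time: 600 s

print(f"{18 / target_s:.0%}")   # typical 18 s offset   -> 3%
print(f"{150 / target_s:.0%}")  # worst case, 150 s off -> 25%
```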

That being said, the consistency of the block time is harder to maintain in Bitcoin compared to Tendermint, as the combined computational power of all miners is continuously changing. The use of proof-of-work also creates a slightly randomized spread, as the problem takes a different amount of time to solve from block to block.

Figure 2.1: Time between blocks in Bitcoin measured in minutes

The corresponding graph for Ethereum (shown in Figure 2.2 with data from Etherscan.io5) shows that Ethereum is a lot closer to its desired value of 15 seconds than Bitcoin, with a deviation of 4 % from the desired value. However, there are significant spikes in Ethereum where the block time increases to at most 30 seconds. These spikes correspond to updates to the Ethereum core; it seems there was a decrease in nodes leading up to each update, resulting in an ever-increasing block time as more and more nodes left the chain. Ethereum returned closer to the desired target of 15 seconds per block after the updates were rolled out.

2.2 Tendermint

Tendermint is a general blockchain solution that allows for state machine replication between nodes6. Tendermint uses a socket protocol, which allows the application to be language-independent through a general interface that exists in several languages. This general interface is called the Application BlockChain Interface (ABCI) and is provided by the Tendermint team to ease the socket communication6.

The ABCI allows the application to handle requests made on the chain by either allowing or denying them, while letting the core system handle all replication and ordering of requests.

5 Etherscan.io. Ethereum Block Time History. Mar. 2019. URL: https://etherscan.io/chart/blocktime
6 All In Bits Inc. (dba Tendermint Inc.) Tendermint Documentation. Feb 2019. URL: https://tendermint.com/


Figure 2.2: Time between blocks in Ethereum measured in seconds

The ABCI makes it easier to create a new blockchain solution, as many of the low-level networking and cryptography issues can be abstracted away6.

The Tendermint core goes through four states when committing (adding) a new block to the blockchain, all of which are outlined below.

Figure 2.3: A simplified state machine model of Tendermint core

2.2.1 Propose

A weighted round-robin is used to choose a node that is responsible for proposing a block to be added to the chain. This stage ends either when a block has been proposed or when the propose timeout of 3 seconds has expired [2]. In either case, the state moves forward to the prevote state.

2.2.2 Prevote

During this state, all the validator-nodes (a select subset of nodes) must vote on the proposal from the previous propose state[2]. If the validator-node has received a faulty proposal, or if no proposal was received (in case of a timeout at the previous stage), then the node signs the proposal with null (empty key); otherwise, the response is signed with the node's private key[2]. This stage also has a timeout of 1 second that needs to be met [2].

2.2.3 Precommit

Precommit is the final state during the round, where all the validators send their prevotes from the previous state to the other validators[2]. If more than 2/3 of the prevotes are valid and received before the timeout of 1 second, then the system continues to the commit state; if a supermajority (2/3+) is not reached, then the system returns to the propose state and restarts the round with an increased timeout[2].

2.2.4 Commit

The commit state does a final check that more than 2/3 of precommits are received and valid (before the timeout of 1 second) and that the current node has received the block[2]. If both of these requirements are met, then the block is added to the blockchain[2]. Tendermint core will then go back to the proposal stage for another round[2].
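The round logic described in Sections 2.2.1–2.2.4 can be condensed into a small sketch. The prevote/precommit vote counting is abstracted into a single number and the timeouts are omitted; all names are invented for illustration and are not part of Tendermint's API.

```python
def run_round(validators, proposal, votes_for):
    # proposal: the proposed block, or None if the propose stage timed out.
    # votes_for: number of validators that sign the proposal; the rest
    # effectively vote nil (faulty proposal or timeout).
    n = len(validators)
    supermajority = votes_for * 3 > n * 2   # strictly more than 2/3

    # Prevote and precommit both require a 2/3+ supermajority to proceed;
    # otherwise the round restarts from propose with a longer timeout.
    if proposal is None or not supermajority:
        return "restart-round"
    return "commit"   # the block is added to the blockchain

validators = ["v1", "v2", "v3", "v4"]
print(run_round(validators, "block-42", votes_for=3))  # -> commit
print(run_round(validators, "block-42", votes_for=2))  # -> restart-round
```

Note that with 4 validators, 3 votes (9 > 8) clears the strict 2/3 threshold while 2 votes (6 > 8 is false) does not, which is why the second round restarts.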

2.3 Transfer throughput

The Tendermint development team states that Tendermint can handle up to 10 000 transfers per second when each transfer is 250 bytes in size6, although the actual number of transfers in a typical application is usually much lower, because the transfers are often larger than 250 bytes.

Other blockchain solutions can process far fewer transactions per second, with 15 transactions per second for Ethereum and 3.3 - 7 transactions per second for Bitcoin[3]. Both of these numbers are far below the performance of large credit card companies like Visa, which supports over 24 000 transactions per second according to its website7.

Figure 2.4 shows that the number of digital transactions is increasing every year, which lines up with the statistics shown in Figure 1.1. The total number of transactions was 5 000 million in 2017, and it is likely to increase when cash is phased out even more8. That means that approximately 9 500 transactions were made per minute through a digital medium in 20178.

7 Visa. Small business retail. URL: https://usa.visa.com/run-your-business/small-business-tools/retail.html
8 Sveriges Riksbank. Payments. URL: https://www.riksbank.se/en-gb/statistics/
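The per-minute figure above follows directly from the yearly total (pure arithmetic on the numbers quoted in the text, nothing else assumed):

```python
yearly_tx = 5_000 * 1_000_000      # 5 000 million digital transactions in 2017
minutes_per_year = 365 * 24 * 60   # 525 600 minutes in a (non-leap) year

per_minute = yearly_tx / minutes_per_year
print(round(per_minute))            # -> 9513, i.e. roughly 9 500 per minute
```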

Figure 2.4: Digital transactions per year

3 Method

This chapter aims to give a greater understanding of the choices made when creating the system and how those choices produce the results that are analyzed to answer the research questions stated in Section 1.3.

3.1 Functional system requirements

The functional requirements for the platform created in this thesis do not cover the entirety of functions needed to implement a fully working copy of the original system (capital market) but instead aim to give a reasonable estimate of potential performance with the most commonly used functions.

The following functions were identified as requirements for a basic implementation:

• The system needs to be able to create and modify users.
• The ability to create and modify digital assets that can be owned and traded by users.
• Each user needs to be able to create multiple accounts that hold an arbitrary number of digital assets.
• The users need the ability to transfer digital assets between their accounts and to the accounts of other users.
• The system needs to be able to schedule and execute events that should occur in the future, for example a coupon payment or a dividend, where the total amount of a digital asset is either increased or reduced by a particular factor at a specific time in the future.
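One possible shape for these requirements is sketched below as a minimal data model: users with multiple accounts, accounts holding assets, transfers between accounts, and a schedulable event. Every class and function name here is hypothetical, chosen for illustration; the thesis does not specify this structure.

```python
from dataclasses import dataclass, field

@dataclass
class Account:
    holdings: dict = field(default_factory=dict)   # asset id -> amount held

@dataclass
class User:
    name: str
    accounts: list = field(default_factory=list)   # a user may hold many accounts

@dataclass
class ScheduledEvent:
    execute_at_ms: int   # desired execution time for the event
    asset_id: str
    factor: float        # e.g. 1.05 for a coupon payment that adds 5 %

def transfer(src: Account, dst: Account, asset: str, amount: int) -> None:
    # Move assets between accounts (own accounts or those of other users).
    if src.holdings.get(asset, 0) < amount:
        raise ValueError("insufficient holdings")
    src.holdings[asset] -= amount
    dst.holdings[asset] = dst.holdings.get(asset, 0) + amount

alice = User("alice", [Account({"bond-1": 5})])
bob = User("bob", [Account()])
transfer(alice.accounts[0], bob.accounts[0], "bond-1", 3)
print(alice.accounts[0].holdings, bob.accounts[0].holdings)
# -> {'bond-1': 2} {'bond-1': 3}
```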

3.1.1 The downtime problem

The current capital market shuts down for large portions of each day in order to process asset changes (events) and to prevent interference from regular transactions. This shutdown restricts usage to standard working hours and impedes the flow of assets.


The shutdown is mainly done to safely execute the events that occur each day. The system sets a considerable period during which no transaction can be made, and during that period, it completes the events that are scheduled for the day. This avoids race conditions but carries a hefty penalty: asset changes are scheduled at the granularity of whole days, and both events and transactions are restricted to specific time slots.

3.1.2 Accessibility and request verification

All of the requirements above need to be accessible through a web server that implements a REST API. This web server layer sits on top of the blockchain solution and processes each incoming request by verifying that it is correctly formed and valid. If the request is valid, then it is forwarded to the blockchain for processing, where additional checks are made to verify that the operation is still valid.

The purpose of this dual-check process is that the initial check lowers the load placed on the blockchain network by discarding invalid transactions before they ever reach the network, while the second check is needed if one or more of the nodes are malicious or if a malicious actor can circumvent the web server layer.
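The dual check can be sketched as two functions: a cheap structural check at the web layer and an authoritative re-check against the replicated state on the chain. The function names and the request format are invented for illustration, not the thesis's actual API.

```python
def web_layer_check(request: dict) -> bool:
    # Cheap structural validation at the web server; keeps malformed
    # requests from ever reaching the blockchain network.
    return ({"from", "to", "asset", "amount"}.issubset(request)
            and request["amount"] > 0)

def chain_check(request: dict, holdings: dict) -> bool:
    # Authoritative re-validation against the replicated state, needed in
    # case a node is malicious or the web layer was circumvented.
    return holdings.get((request["from"], request["asset"]), 0) >= request["amount"]

def submit(request: dict, holdings: dict) -> str:
    if not web_layer_check(request):
        return "rejected-at-web-layer"
    if not chain_check(request, holdings):
        return "rejected-on-chain"
    return "accepted"

holdings = {("alice", "bond-1"): 5}
print(submit({"from": "alice", "to": "bob", "asset": "bond-1", "amount": 3}, holdings))  # -> accepted
print(submit({"from": "alice", "to": "bob", "asset": "bond-1", "amount": 9}, holdings))  # -> rejected-on-chain
```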

3.1.3 Non-functional requirements

A single party centrally controls the current capital market. This central control is a problem since it forces all actors to trust a single point of failure. It would, therefore, be better to introduce a distributed solution where the large actors do not have to fully trust each other, prompting the use of blockchain technology.

The non-functional requirements in this thesis are mostly requirements on specific technologies that must be used to implement the overall system. The system must have the following characteristics:

• The system must be decentralized, with multiple actors each controlling a separate node.

• The system must use blockchain technology to link the nodes together.

• The blockchain should not be directly accessible to end-users; therefore, a layer must be implemented on top of the blockchain to filter, validate, and resend requests.

3.2 Choosing a Blockchain

There are definite advantages to having a proof-of-stake blockchain in a private network, as it is less computationally intensive than proof-of-work blockchains such as Ethereum or Bitcoin, which should make a proof-of-stake blockchain cheaper to deploy. Tendermint is chosen for this thesis since it provides an easy-to-use interface (the Application Blockchain Interface) that allows developers to build their application on top of a general blockchain engine, while being a proof-of-stake blockchain with the built-in cryptography needed to make the blockchain private.

With the Application Blockchain Interface, there is no need to create an entirely new underlying blockchain implementation with a working protocol and consensus engine. However, the downside of using an existing consensus engine is that it cannot be tailor-made to increase the performance of the solutions proposed in this thesis. The existing consensus engine limits the effectiveness compared to a custom-made solution but simultaneously allows for an approach that could be implemented on most general-purpose blockchains.


3.3 Introducing time to a blockchain

In order to schedule events that have a significant effect on the price of an asset, such as coupon payments or dividends, there has to be a unified concept of time on all nodes, such that events are executed in the same order relative to each other and to regular transactions, and as close as possible to the desired execution time on real-time clocks.

This thesis will investigate the following two methods for introducing time to a blockchain. The time will only be used to schedule events and is not intended to be used as a universal clock in the system to determine the ordering of all transactions.

3.3.1 Block-driven time

The first solution is based on the number of blocks on the blockchain. Events are scheduled to run when the number of committed blocks (the block height) reaches a specific number. This number is pre-calculated to match real-world time. For example, if the asset should be split at 18:00, then the receiving node calculates the approximate block height that corresponds to that time. It then sends the event on the chain, and when that block is committed, the event is performed.

The event will, therefore, not be completed at the same real-world time on all nodes, but it will always be completed in the same order relative to other actions. The system thus always remains consistent while not executing the events at the same time on all nodes. The basic idea in block-driven time is to use the block height, essentially a counter, as a rough concept of time inspired by the UNIX timestamp. A notable difference between the UNIX timestamp and block-driven time is that the latter is not automatically synchronized with real-world time.
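The height pre-calculation described above can be sketched as a small function; this is a minimal reconstruction under stated assumptions (the function name and the extrapolation from a measured average block time are illustrative, not taken from the thesis implementation):

```python
def estimate_target_height(current_height: int, now_ms: int,
                           target_ms: int, avg_block_time_ms: float) -> int:
    """Estimate the block height that will be committed closest to target_ms.

    The estimate simply extrapolates from the current height using the
    average block time observed so far; it drifts whenever blocks are
    committed faster or slower than that average.
    """
    remaining_ms = target_ms - now_ms
    if remaining_ms <= 0:
        return current_height  # target already passed: schedule immediately
    blocks_ahead = round(remaining_ms / avg_block_time_ms)
    return current_height + blocks_ahead

# Example: an event two hours (7,200,000 ms) away with ~1649.3 ms blocks.
target = estimate_target_height(100_000, 0, 7_200_000, 1649.3)
```

Because the target height is fixed when the event is committed, any later variation in block build time translates directly into execution-time error, which is exactly what the accuracy measurements in Chapter 4 quantify.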

Figure 3.1: Event scheduling using block-driven time

Potential Advantages: Since the target block is decided when the event is committed to the chain, this method will not be delayed by a large number of incoming messages near the event time.

Potential Disadvantages: It is difficult to estimate the correct block for the event, as every block has a slightly different build time. If many slightly faster or slower commits are made in a row, this affects the time at which the blockchain commits the event block.

It might therefore be possible to alter the execution time by committing many transactions that take a long time to process, thereby expediting or postponing the event, which could be used to create profitable trades by exploiting the system.


3.3.2 Observer-triggered time

The second solution uses observers that are connected to each node on the blockchain. When a new event is scheduled, it is committed to the blockchain with the desired execution time. The observer on each node is then responsible for checking the desired time against the local real-time clock. Only when the time has passed is a new message sent on the blockchain, calling for the execution of the event.

Race conditions are avoided through the use of a second message. Even though all observers check their system clock, none execute the event based on that clock; instead, they all wait for the second message to be accepted by all nodes. When the second message (the trigger) is accepted, the event is executed and added to the block.
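The observer logic described above can be sketched as follows; this is an illustrative reconstruction, not the thesis code. The `broadcast_trigger` callback is a placeholder for submitting the trigger message through normal consensus, and the event record layout is an assumption:

```python
def observer_step(event: dict, now_s: float, broadcast_trigger) -> bool:
    """Called periodically by the observer attached to a node.

    Returns True when a trigger message was submitted for the event.
    The event itself is NOT executed here; execution happens only when
    the trigger is committed on-chain, so all nodes act in the same order.
    """
    if event["triggered"]:
        return False
    if now_s >= event["trigger_time_s"]:
        broadcast_trigger(event["id"])  # goes through normal consensus
        event["triggered"] = True
        return True
    return False

sent = []
event = {"id": "split-1", "trigger_time_s": 100.0, "triggered": False}
observer_step(event, 99.0, sent.append)   # too early: nothing happens
observer_step(event, 100.5, sent.append)  # time has passed: trigger broadcast
```

Separating "noticing the time" from "executing the event" is the design choice that keeps the nodes consistent: local clocks only decide when to propose the trigger, never when to apply the state change.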

The concept of observer-driven time is very similar to how the Ethereum Alarm Clock (EAC) uses TimeNodes to trigger transactions to be executed in the future on the Ethereum blockchain. The largest difference compared to the EAC is that here all nodes keep track of time and will reject transactions that are executed ahead of the scheduled time.

Figure 3.2: Event scheduling using observer-driven time

Potential Advantages: There is no need to estimate the commit time of a future block. Furthermore, this method will not execute unless more than 2/3 of the nodes agree that the desired time has passed, which prevents events from executing ahead of the desired time unless more than 2/3 of the network has an incorrect time.

Potential Disadvantages: If there are many messages in the pipeline to be executed, then the second message that triggers the actual execution of the event could be delayed. The trigger makes it hard to predict when the message will actually be received on all nodes. Although the event will be executed in the same order on all nodes, the real time at which this occurs can be significantly influenced by the number of messages already awaiting processing.

3.3.3 Affecting the block time

While the system measures how the time is affected by varying degrees of load from usual transfers, this assumes that transfers are not maliciously crafted to take additional time to process. Even a fraction of a second of difference will eventually lead to a drift that accumulates over time, which could, in turn, lead to events being executed days after the intended time. Delaying certain events could be beneficial to some actors and must therefore be prevented, as it gives an unfair advantage when trading.


3.4 System overview

This section contains an overview of the system that was created as part of this thesis and outlines how the two methods are implemented. The system consists of three major parts, all shown in Figure 3.3.

There are the end-users, who can use their regular devices (phones, computers, etc.) to execute actions through the application logic that is connected to each node, such as transferring assets between accounts or acquiring more assets. Then there are the financial institutions that create the accounts that the end-users use; they also create and manage the assets that are exchanged on the system. Both the financial institutions and the end-users communicate with the system through an application logic layer; this is the core of the system, where all requests are processed, validated, and finally passed on to the final database layer, i.e., the blockchain. The database layer (blockchain) stores all actions that are made throughout the system and the order in which they were executed.

While the system consists of three major parts, only two layers, the application logic layer and the interaction layer, were implemented as part of this thesis. The blockchain layer is pure Tendermint with no modifications, which the application logic layer interacts with.


3.4.1 Blockchain layer

The blockchain layer is the underlying database that is used by the application layer to answer queries that are made through the web service. It is responsible for replicating the data (accounts, transactions, etc.) across the nodes by using blockchain technology through Tendermint. Tendermint is a general-purpose blockchain engine that allows applications to abstract away all the networking and cryptography commonly associated with blockchain technology.

3.4.2 Application logic layer

The application logic layer has been tailor-made to fit the capital market as part of this thesis. It is an externally accessible web service built with Java Spark, which receives requests from end-users and financial institutions and performs a preliminary validation. The preliminary validation determines whether the request is correctly formed and permitted for the user. For example, this layer would reject an end-user that tries to create a new asset, since this action is only allowed for financial institutions.

The web service's purpose is to lower the load on the blockchain nodes so that they do not have to process as many transactions, which would otherwise lower the throughput of valid actions throughout the whole network. Another purpose of the web service is to provide an easy-to-use API that allows different devices to access the system in a universal way without forcing end-users or financial institutions to adopt specialized technologies. This should, in theory, make the system easier to implement for service providers.

3.4.3 Interaction layer

The interaction layer is where different actors can interact with the system as a whole by sending HTTPS requests to the web service in the application layer. Actors can be financial institutions that create assets and accounts on the system or end-users that can exchange assets between each other.

3.4.4 Actions

The actions that are permitted in the system are the ones previously stated in the system requirements subchapter. Actions include, but are not limited to, creating and modifying users, assets, and events.

Each asset is split into balances, which are connected to accounts. Each account can only hold a single balance for a specific asset; however, that balance stores its amount as a single-precision 32-bit IEEE 754 floating-point value, whose range is far larger than anyone should need in a single account for an individual asset. Balances can themselves be split into new balances when transferring an asset to an account that does not currently hold a balance of that asset.
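The balance bookkeeping described above can be sketched as follows. The account and balance layout is an assumption based on the description, not the actual data model of the thesis implementation:

```python
def transfer(accounts: dict, src: str, dst: str, asset: str, amount: float) -> None:
    """Move `amount` of `asset` from src to dst.

    If dst holds no balance of the asset yet, the transfer splits off a
    new balance for it, as described above. Raises on insufficient funds.
    """
    if accounts[src].get(asset, 0.0) < amount:
        raise ValueError("insufficient balance")
    accounts[src][asset] -= amount
    # A new balance is created implicitly when dst has none for this asset.
    accounts[dst][asset] = accounts[dst].get(asset, 0.0) + amount

accounts = {"alice": {"EQ1": 100.0}, "bob": {}}
transfer(accounts, "alice", "bob", "EQ1", 25.0)
```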

A typical transaction between users is displayed below. It consists of two accounts and two lists of balances; this allows for single operation transactions when one asset is traded for another on the system, such as when trading equity for currency.

{
    "source": "0000 0000 1111 2222",
    "dest": "2222 1111 0000 0000",
    "sourceBalances": [ { "sExxEGSS": 100.0 } ],
    "destBalances": [ { "DHTssWR": 75.0 } ],
    "beforeEvent": "EEdDESSHTRE"
}

The system allows transactions to be posted with a before-event flag to enable the market to be open at all times. This flag allows users to decide whether they want the transaction to take place only before an event has occurred. For example, if a split is to be executed at 18:00, then all transactions that should be completed before the split mark the event in the flag.

The system will then execute transactions with the before-event flag set until the event has occurred, after which all transactions carrying the flag will be rejected and not executed. The flag allows trading to continue even if there are imminent events that might significantly alter the price of an asset.
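The before-event check described above can be sketched as a small validation helper. The `beforeEvent` field matches the transaction format shown earlier; the helper itself is an illustrative reconstruction, not the thesis code:

```python
def accept_transaction(tx: dict, executed_events: set) -> bool:
    """Reject a transaction whose before-event flag refers to an event
    that has already occurred; accept everything else.
    """
    flagged = tx.get("beforeEvent")
    if flagged is None:
        return True                      # no flag: always valid
    return flagged not in executed_events

tx = {"source": "a", "dest": "b", "beforeEvent": "EEdDESSHTRE"}
ok_before = accept_transaction(tx, set())             # event not yet run: accepted
ok_after = accept_transaction(tx, {"EEdDESSHTRE"})    # event already run: rejected
```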

An asset split (event) is detailed below. The conversion is represented by the scaleLeft and scaleRight parameters, corresponding to the ratio by which the asset should be changed. In this case, all balances of the asset with ID “PiYgbOyMBBYX” should be converted 1:2, meaning that they should be doubled, such that a balance that holds 100 will instead hold 200 of the asset after the event is triggered.

{
    "scaleRight": 2,
    "scaleLeft": 1,
    "assetID": "PiYgbOyMBBYX",
    "triggerTime": "2019-02-14 13:37:25"
}

These are the main actions that are executed on the chain. The formatting of seldom-used actions is omitted since they follow the same general format.
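Applying such a split event amounts to scaling every balance of the affected asset by scaleRight/scaleLeft. A minimal sketch, assuming balances are kept as an asset-to-amount mapping (the function itself is illustrative, not quoted from the implementation):

```python
def apply_split(balances: dict, event: dict) -> dict:
    """Scale every balance of the event's asset by scaleRight/scaleLeft.

    For the 1:2 split above, a balance of 100.0 becomes 200.0.
    Balances of other assets are left untouched.
    """
    factor = event["scaleRight"] / event["scaleLeft"]
    return {asset: (amount * factor if asset == event["assetID"] else amount)
            for asset, amount in balances.items()}

event = {"scaleRight": 2, "scaleLeft": 1, "assetID": "PiYgbOyMBBYX",
         "triggerTime": "2019-02-14 13:37:25"}
after = apply_split({"PiYgbOyMBBYX": 100.0, "OTHER": 50.0}, event)
```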

3.5 Performance evaluation

A testing application was created to measure the system. The test application acts as both a financial institution and as an end-user. It starts each run by creating X user accounts and assets, and it then starts transferring those assets at a rate defined by the load scenario. The testing application runs each test for 24 hours per load scenario.

The application creates multiple threads, with the exact number depending on the load scenario. The average time needed between two transactions on a single thread in order to keep the overall average transaction rate is then calculated. The time between transactions was varied uniformly between 0 ms and two times the average desired delay to keep the flow of transactions realistic (i.e., not at a steady rate). The variation creates fluctuations while keeping the same average, since a random function with a uniform distribution spreads evenly across the range of delays.
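The delay scheme above can be sketched directly; the function name is illustrative, but the distribution matches the description (uniform on [0, 2X], so the mean gap equals the desired average delay):

```python
import random

def next_delay_ms(avg_delay_ms: float, rng: random.Random) -> float:
    """Draw the pause before the next transaction on one thread.

    Uniform on [0, 2 * avg_delay_ms], so the long-run mean equals
    avg_delay_ms while individual gaps fluctuate, avoiding a perfectly
    flat transaction rate.
    """
    return rng.uniform(0.0, 2.0 * avg_delay_ms)

rng = random.Random(42)  # seeded only to make this sketch reproducible
delays = [next_delay_ms(120.0, rng) for _ in range(10_000)]
mean = sum(delays) / len(delays)  # close to 120.0 over many draws
```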


The application generates a log file with the local time at which each transaction was processed on the local node for each run. These log files were then collected from each node and processed to create the results used in this thesis.

The application creates a log entry whenever one of the following events occurs:

• When a new block is committed to the chain.

• When a scheduled event is issued.

• When a scheduled event is received from a remote source.

• When a node executes the scheduled event using the block-driven time.

• When a node triggers the scheduled event using the observer-driven time.

• When a node executes the scheduled event from a notification through the observer-driven time.

• When a transaction is issued.

• When a transaction is committed to the chain.

All messages are written to a file on the local machine, which is then manually collected at the end of each test. The file is automatically named after the current server location and test number (for example, Germany-Frankfurt_1.txt) in order to make it easier to collect and process the files while reducing the risk of mistaking runs or servers for each other.

3.5.1 Generating transactions

The system must mimic a realistic flow of transactions as closely as possible; otherwise, the results might give an unrealistic expectation of the performance or accuracy of the system when it is implemented in a real application. Hence it is imperative that the flow of transactions is not at a completely flat rate.

All nodes generate some transactions each minute in order to produce a realistic flow of transactions from all nodes in the network that does not favor any particular node, where the specific average amount depends on the load being simulated (from low load to heavy load).

The aim is to simulate that an end-user sends a transfer to their local node, which then forwards it to the blockchain, thus creating a flow from all the nodes. The system is tested under different degrees of load to simulate realistic performance; this subsection details how many transfers each node should make, on average, per minute in each scenario.

The load scenarios are characterized by the number of transactions that are performed each minute. The following load scenarios are tested in this thesis.

• Low load is characterized by 20 transactions per minute per node. Low load aims to capture the system under the most optimal conditions, while still having a higher throughput than other blockchain technologies such as Bitcoin or Ethereum.

• Medium load aims to capture how the system would perform under the most common circumstances, where 500 transactions are coming from each node every minute, yielding over 2 million transactions per day. This measurement is intended to match the most common load on the system during normal operating hours.

• Heavy load measures how the system performs under 100 % load. Here, transactions are generated as fast as possible, aiming to capture how the system performs during peak hours.

3.5.2 Hardware

The nodes of the system are placed on different continents to simulate a distributed system. If the system works with the nodes that far apart, it should perform at least as well when the servers are in closer proximity to each other.

All the servers were hosted through Amazon Web Services, as this makes it easier to set up high-grade servers on different continents. The servers are of type T2 Medium with the unlimited option enabled, meaning dual-core machines with a fixed amount of processing power. Three nodes were chosen since this is the maximum number of nodes where all nodes need to respond to form a supermajority of more than 2/3 of the votes.

The nodes were located in Frankfurt, Germany; Ohio, U.S.A.; and São Paulo, Brazil. These locations were chosen for multiple reasons: firstly, they are all located in large countries on different continents; secondly, they all offer the same type of machine setup, which enables the nodes to run on identical hardware and software.

Figure 3.4: The geographical position of the servers

3.5.3 Processing results

After the files had been collected for a specific test, they were processed to filter out the information relevant to that test. For example, when measuring the average block time, only the messages that relate to the commit of a new block are relevant. All nodes log when a block is committed to the blockchain on the local machine. The data is collected and plotted as individual series in a graph, since one or more servers might have some offset that is not observed in the system as a whole. The relevant lines were then processed in Microsoft Excel, which makes it easier to create graphs and calculate averages compared to a custom-made solution.
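The first processing step, turning a node's block-commit log into a series of block times, can be sketched as follows. The exact log format is not quoted in the thesis, so the input here is assumed to already be a parsed, ordered list of commit timestamps:

```python
def block_times_ms(commit_timestamps_ms: list) -> list:
    """Turn a node's ordered block-commit timestamps into block times,
    i.e. the gap between each block's commit and the next one's,
    matching the definition used in the theory section.
    """
    return [later - earlier
            for earlier, later in zip(commit_timestamps_ms,
                                      commit_timestamps_ms[1:])]

times = block_times_ms([0, 1650, 3290, 4950])
avg = sum(times) / len(times)
```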

(28)

3.5. Performance evaluation

Block Time: The graphs over block time are created by taking the average block time over the entire test and calculating the accumulated difference between that average and the observed block time. This is why all graphs end on the baseline: the baseline (y-axis zero) corresponds to the final average block time.
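The plotted quantity can be sketched as a running sum; this is a reconstruction of the described computation, not the actual Excel processing:

```python
def accumulated_difference(block_times_ms: list) -> list:
    """Running sum of (average block time - observed block time), the
    quantity plotted in the block-time graphs. Because the average is
    computed over the same series, the final value is always 0, which
    is why every graph ends on the baseline.
    """
    avg = sum(block_times_ms) / len(block_times_ms)
    diffs, total = [], 0.0
    for bt in block_times_ms:
        total += avg - bt
        diffs.append(total)
    return diffs

series = accumulated_difference([1600.0, 1700.0, 1650.0, 1650.0])
```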

Since the nodes coordinate throughout the scenarios, it is expected that the average block time is the same on all nodes. If the average block time is not the same on all nodes in a specific load scenario, then something is wrong with the network, and the nodes are not getting the required more than 2/3 of the votes, i.e., one vote from each node. However, the average block time might differ between the load scenarios, since they are separate runs that do not coordinate with each other. The block time is measured to estimate the difficulty of predicting the real time at which a block will be committed to the chain.

Processed transactions: All transactions are logged to evaluate the throughput of the system as a whole during the different load scenarios. The log files created as part of the tests are parsed to extract all occurrences of transactions; these occurrences are then counted and divided by the duration of a test to ascertain the average number of transactions that the system handles during a given load scenario.

There are two primary reasons to measure the throughput. The first is to verify that the software generates the correct number of transactions during each load scenario. The second is that it does not matter how accurate each solution is if the throughput is too small for a given use case.

The transaction logs are also used to calculate how long it takes for a transaction to be added to the blockchain after it reaches a node. This value is essential because some use cases necessitate that a transaction be completed within a certain amount of time; for example, no one wants to wait 30 minutes to purchase an item in a store. Other use cases can accept longer transaction times, but it is important to determine this number as it gives a reasonable estimate of which use cases this solution is suitable for.

Accuracy of scheduling: Events are scheduled to occur at regular intervals in order to measure the average performance of each scheduling method over time. A new event is scheduled every 10 minutes at each server and aims to be executed 2 hours after the initial scheduling. Although a new event every 10 minutes is possibly more frequent than when the system is deployed in the real world, it aims to give a greater understanding of how the system performs during the peaks and lows of the system. The nature of the events is not relevant, as all events take roughly the same time to execute, removing the need for additional tests for each type of event.

A multitude of parameters are parsed from the logs to determine the most accurate solution for scheduling actions in a blockchain. The most important is the time between when the event is supposed to be executed and when it was actually executed on all the nodes. These values are then analyzed to determine the size of the difference between the desired execution time and the actual execution time.
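The per-event error extraction can be sketched as follows; the input record shape (pairs of desired and actual execution times parsed from the logs) and the function name are assumptions, while the summary values mirror what the accuracy tables report:

```python
def scheduling_errors_ms(events: list) -> dict:
    """Difference between desired and actual execution time for each
    scheduled event, summarised the way the accuracy tables report it.
    Positive means the event ran late; negative means it ran early.
    """
    errors = [actual - desired for desired, actual in events]
    return {
        "average": sum(errors) / len(errors),
        "min": min(errors),
        "max": max(errors),
    }

stats = scheduling_errors_ms([(0, 2000), (0, -700), (0, 5800)])
```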

4 Results

This chapter contains the results used to answer the research questions stated in Section 1.3.

4.1 Average block time

This section shows the average block time during different loads and how the number of transactions affects the consistency of the block time. As stated in the theory section, block time is the difference between the commit time (when a block is added to the blockchain) of a block and the commit time of the following block on a specific node.

The aim of these measurements is not to evaluate the advantages or disadvantages of a certain block time, but rather to assess how consistent the block time is under different loads and with real-world network interference. Therefore, a good result is neither a high nor a low average block time, but rather a consistent block time that remains the same during the entire test. A consistent block time has a small standard deviation while having the median, minimum, and maximum block times as close as possible to the average block time.

4.1.1 During low load

Table 4.1 shows how the block time varied during the low load test. The maximum block time was over 10 seconds for two nodes, which is a large deviation from the average block time of 1649.3 ms. These seemingly random spikes are likely to make it more difficult to predict when a block is committed to the blockchain.

Location Average Median Min Max Standard Deviation

Frankfurt, Germany 1649.3 1648.0 1064.0 10652.0 103.1

Ohio, U.S.A. 1649.3 1648.0 1227.0 8383.0 92.8

São Paulo, Brazil 1649.3 1583.0 1316.0 10580.0 131.3

Table 4.1: Block time during low load (ms)

Figure 4.1 shows the difference between the estimated commit time of block N and the actual commit time of block N, on all servers. However, the results are so similar on all servers that only the São Paulo line is visible, while the other lines are hidden behind it. The results indicate that the block times do not drift far before returning to the average block time. An interesting observation is that the estimated commit time of a block N rarely falls before the actual commit time; the estimation is seldom early but rather consistently lands after the actual commit time. There was also a definite spike in the block times, corresponding to the maximum time in Table 4.1.

Figure 4.1: The accumulated difference in ms between the average commit time and the actual commit time during low load

4.1.2 During medium load

The block time during medium load, as shown in Table 4.2, is slightly different compared to the block time under low load. The difference is below one millisecond, suggesting that the block time is largely unaffected by the increase in transactions, since this test had 80 times the number of transactions without a significant difference in the observed block time. While the average is very similar, the standard deviation increased slightly, which means that the block time varies more when the load increases.

This test also shows a sudden spike in block time of almost 8 seconds, similar to the results during the low load. Overall, the results are similar; a difference of 0.2 ms in average block time might seem small, but it accumulates over time, and over 24 hours the difference between the estimations grows to over 10 seconds.

Location Average Median Min Max Standard Deviation

Frankfurt, Germany 1649.5 1632.0 1308.0 8084.0 108.2

Ohio, U.S.A. 1649.5 1632.0 1240.0 8084.0 99.4

São Paulo, Brazil 1649.5 1620.0 1183.0 8000.0 134.9

Table 4.2: Block time during medium load (ms)

The same estimation trend that was visible in Figure 4.1 is also observed in Figure 4.2, meaning that the estimated commit time is rarely before the actual commit time. The graph is a lot smoother during the medium load test, first showing a period of faster block times followed by a period of slower block times.

There was also a significant spike observed in this test (shown in Figure 4.2), where the block time took much longer than expected. However, it was not at the same block as in Figure 4.1, indicating that the problem is not specific to a particular block.

Figure 4.2: The accumulated difference in milliseconds between the average commit time and the actual commit time during medium load

4.1.3 During high load

The problem with occasional spikes is still observed during high load; however, the difference compared to the baseline was lower relative to the other load scenarios. It is also clear that the average block time is significantly larger here than in the other scenarios, although the block time is below the average a good portion of the time, making this the only scenario where a significant negative difference was observed.

Location Average Median Min Max Standard Deviation

Frankfurt, Germany 1772.3 1760.0 1301.0 3767.0 108.1

Ohio, U.S.A. 1772.3 1761.0 1428.0 3800.0 72.9

São Paulo, Brazil 1772.3 1748.0 1264.0 3876.0 113.6

Table 4.3: Block time during high load (ms)

Figure 4.3 is similar to the graphs produced during the other load scenarios in that it deviated several seconds from the baseline. A difference in Figure 4.3 compared to the other load scenarios is that there was no significant spike in block time.

Figure 4.3: The accumulated difference in milliseconds between the average commit time and the actual commit time during high load

4.2 Processed transactions

This section contains the average number of transactions processed on each node every minute, grouped by the expected amount depending on the load scenario. The system was initially set up to be tested at three different load levels, where the low load aimed to reflect the system when few were using it, for example during nights. The medium load level covered the expected load during most of the day, while the high load level reflects the system during peak hours.

All load scenarios aim to keep a specific average number of transactions per minute, as outlined in Section 3.5.1. A comparison between the expected and the observed number of transactions during the testing of the different load scenarios is found in Table 4.4. The observed values should match the expected values in the low and medium load scenarios, as the system is expected to cope with a higher number of transactions, which is largely what the results show, with a small deviation. The maximum number of transactions was observed during the high load. However, this observed maximum is unlikely to be a fixed number; it likely depends on the network capacity between the nodes and the hardware of the nodes, and could therefore not be estimated before the tests were performed.

Load Observed Expected

Low 53.7 60

Medium 1412.1 1500

High 6718.3 –

Table 4.4: Processed transactions per minute for different load scenarios

4.3 Time until processed transactions

Tables 4.5, 4.6, and 4.7 show the median time between the issuance of a new transaction and the time when the transaction is processed on a particular node. The server locations on the left-hand side are the nodes issuing the transaction, with the values on the same row being the median time it takes for that transaction to be processed on each node.

This measurement is essential since it indicates how the system would perform while being used to process transactions where users would need to wait for the completion of the request, such as a purchase in a store. The measurement also indicates whether there is some advantage to issuing a transaction at a particular node to get transactions processed faster than the competition (who are issuing to other nodes in the network), essentially estimating how fair the network is in terms of performance.

4.3.1 During low load

Table 4.5 shows that the issuing node is not faster overall at processing a request issued on the same node compared to a request issued on other nodes. However, transactions issued at the Ohio node are processed at least 30 ms faster on all nodes compared to transactions issued on the other nodes.

São Paulo, Brazil Frankfurt, Germany Ohio, U.S.A.

São Paulo, Brazil 1738.0 1763.0 1774.5

Frankfurt, Germany 1763.0 1781.0 1793.0

Ohio, U.S.A. 1707.0 1722.0 1734.5

Table 4.5: Time between sending and processing transactions during low load (ms)

4.3.2 During medium load

The average transaction time has increased slightly overall, which would indicate that the system is under more stress, something that is expected with 80 times as many transactions. The difference is still quite small and should not notably affect the average transaction time for end-users. A thing to note is that the Ohio node seems to become faster when more transactions pass through the system.

Frankfurt, Germany São Paulo, Brazil Ohio, U.S.A.

Frankfurt, Germany 1792.0 1781.0 1793.0

São Paulo, Brazil 1760.0 1782.0 1750.0

Ohio, U.S.A. 1678.0 1628.0 1649.0

Table 4.6: Time between sending and processing transactions during medium load (ms)

4.3.3 During high load

The results are very similar during high load compared to the low and medium load. A difference under high load is that the execution of a transaction is faster than the average block time, which means that the transaction is sent and added to the chain faster (on average) than the block time.

São Paulo, Brazil Frankfurt, Germany Ohio, U.S.A.

São Paulo, Brazil 1656.0 1726.0 1696.0

Frankfurt, Germany 1670.0 1667.0 1651.0

Ohio, U.S.A. 1688.0 1690.0 1670.0

Table 4.7: Time between sending and processing transactions during high load (ms)

4.4 Accuracy of block time solution

This section contains the results from the tests that measure the accuracy of the block time solution. The estimation of the commit time of a future block N is made using the average block time of previous blocks during the same load scenario; hence the expected prediction changes depending on the current load on the nodes.

The desired block N is chosen to lie 2 hours after the scheduling trigger. The values in the following tables show the difference between the expected commit time T̂(N) and the actual commit time T(N) of block N.
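The estimation described above can be sketched as follows. This is an illustrative sketch only, not the thesis's implementation; the function and variable names, and the fixed 2-hour horizon default, are assumptions.

```python
# Sketch of the block time estimation: pick the future block N whose
# expected commit time lies closest to a fixed horizon from now, given
# the average block time observed under the current load scenario.

def estimate_target_block(current_height, avg_block_time_s, horizon_s=2 * 60 * 60):
    """Return (target block number N, expected commit time T_hat relative to now)."""
    blocks_ahead = round(horizon_s / avg_block_time_s)
    target_block = current_height + blocks_ahead
    # Expected commit time of block N, in seconds from now.
    expected_commit_s = blocks_ahead * avg_block_time_s
    return target_block, expected_commit_s

# Example: with an average block time of 2 s and a 2-hour horizon,
# the scheduler targets a block 3600 blocks ahead of the current height.
n, t_hat = estimate_target_block(current_height=1000, avg_block_time_s=2.0)
# n = 4600, t_hat = 7200.0
```

The accuracy figures below then correspond to the difference between `t_hat` and the time at which block `n` actually commits, which drifts whenever the real block time deviates from the historical average.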

4.4.1 During low load

During low load, the block driven solution deviates on average by about 5 seconds over the 2 hour period, a difference of roughly 0.07 %. The spread is much larger, however: the worst case is almost 51 seconds after the desired execution time, a difference of roughly 0.7 %.

Location             Average   Median    Min        Max       Standard deviation
Frankfurt, Germany   -5625.1   -9191.0   -24731.0   51080.0   14802.0
Ohio, U.S.A.         -5625.0   -9228.0   -24669.0   51029.0   14799.5
São Paulo, Brazil    -5621.9   -9184.0   -24672.0   51188.0   14799.8

Table 4.8: Accuracy of block time solution during low load (ms)

4.4.2 During medium load

Table 4.9 shows that the accuracy during the medium load test is higher than during the other loads, with a standard deviation of just 1.3 seconds. The average is around 2 seconds, but these measurements still show a maximum difference of almost 6 seconds. This load seems to represent the best-case scenario, with the minimum and maximum values being a factor of 10 better than under low load.

Location             Average   Median   Min      Max      Standard deviation
Frankfurt, Germany   2042.6    1780.0   -692.0   5868.0   1360.4
Ohio, U.S.A.         2041.3    1776.5   -723.0   5872.0   1361.1
São Paulo, Brazil    2042.2    1784.0   -720.0   5868.0   1360.4

Table 4.9: Accuracy of block time solution during medium load (ms)

4.4.3 During high load

The accuracy during high load shown in Table 4.10 is slightly better than during low load, but is still at worst 26 seconds off the mark. The standard deviation is also quite significant compared to the medium load shown in Table 4.9.

4.5 Accuracy of observer-driven time

This section covers the different aspects that affect the accuracy of the observer-driven time under the different load scenarios. It serves as a direct comparison to the accuracy of the block-driven time.


Location             Average   Median   Min        Max       Standard deviation
Frankfurt, Germany   1184.8    544.0    -17352.0   26376.0   9395.6
Ohio, U.S.A.         1188.7    557.0    -17209.0   26324.0   9407.7
São Paulo, Brazil    1185.6    544.0    -17360.0   26504.0   9396.1

Table 4.10: Accuracy of block time solution during high load (ms)

The observer-driven time works through a notifier that runs every second to trigger events. When an event is triggered, a second message is sent, and when that message is added to the blockchain, the event is executed. This second message creates an intrinsic delay between the desired execution time and the actual execution time, as outlined in the theory chapter.

Each node checks if an event should have been triggered once every second. The trigger delay shown in Table 4.11 is the time between the desired triggering time and when the first trigger notification is sent from a node, where a lower time is better. The check is only performed once per second since the events cannot be scheduled with greater accuracy than a second.
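The once-per-second check and the second message can be sketched as below. This is a minimal illustration of the mechanism as described, not the thesis's code; `pending_events`, `send_trigger_transaction`, and the event dictionary shape are hypothetical placeholders.

```python
import time

def run_notifier(pending_events, send_trigger_transaction, now=time.time):
    """Once per second, check whether any scheduled event is due; for each
    due event, send the second message that triggers on-chain execution."""
    while pending_events:
        current = now()
        # An event is due once its scheduled time has passed; the gap
        # current - scheduled_at is the trigger delay measured in Table 4.11.
        due = [e for e in pending_events if e["scheduled_at"] <= current]
        for event in due:
            send_trigger_transaction(event)
            pending_events.remove(event)
        # Events cannot be scheduled with sub-second accuracy, so checking
        # more often than once per second would gain nothing.
        time.sleep(1)
```

The total inaccuracy of an event is then the trigger delay of this loop plus the time for the second message to be committed to the chain, which is what the accuracy tables in this section measure.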

4.5.1 During low load

Table 4.11 shows that the average delay is mostly below a second, with some outliers reaching a maximum delay near 2 seconds. This is generally within the expected delay range, given that the check is performed once per second under ideal conditions.

Location             Average   Median   Min     Max      Standard deviation
Frankfurt, Germany   769.2     679.5    104.0   1887.0   440.5
Ohio, U.S.A.         734.2     644.5    69.0    1852.0   440.5
São Paulo, Brazil    476.3     469.0    142.0   818.0    276.8

Table 4.11: Trigger delay of observer driven time during low load (ms)

The accuracy of observer driven time is the difference between the desired execution time of an event and its actual execution time. These values line up very well with the average transaction duration of 1.7 s shown in Table 4.5 combined with the slightly delayed trigger of 0.7 s shown in Table 4.11 (0.7 s + 1.7 s ≈ 2.4 s, close to the averages in Table 4.12). This indicates that the observer driven time performs well when the load on the system is low.

Location             Average   Median   Min      Max      Standard deviation
Frankfurt, Germany   2337.4    2351.0   1088.0   3506.0   543.8
Ohio, U.S.A.         2326.6    2285.0   1125.0   3500.0   544.2
São Paulo, Brazil    2329.0    2281.0   1099.0   3508.0   534.9

Table 4.12: Accuracy of observer driven time during low load (ms)

4.5.2 During medium load

The average delay shown in Table 4.13 is larger than in the low load scenario. That does not necessarily indicate that the system was under higher load: the trigger runs once per second, so if the check happens to start late within a second, it will keep firing at that same late sub-second offset in every subsequent second. These numbers influence the accuracy of the observer driven time.
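The offset effect described above can be illustrated with a short sketch; the 860 ms starting offset is an assumed example value, not a measured one.

```python
# Illustrative: a once-per-second check loop that happens to start 860 ms
# into a second keeps firing at that same sub-second offset, so the measured
# trigger delay stays in the high milliseconds even though nothing is slow.
start_offset = 0.86                                  # loop started 860 ms into a second
check_times = [start_offset + k for k in range(5)]   # one check per second
offsets = [t % 1.0 for t in check_times]
# every check lands roughly 860 ms after a whole second
```

This is why a large average trigger delay on its own does not imply a loaded system; only the spread of delays across events reflects actual scheduling jitter.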


Location             Average   Median   Min     Max      Standard deviation
Frankfurt, Germany   937.7     860.0    2.0     2028.0   535.2
Ohio, U.S.A.         913.8     848.0    38.0    1991.0   532.9
São Paulo, Brazil    884.6     881.0    791.0   983.0    77.9

Table 4.13: Trigger delay of observer driven time during medium load (ms)

Table 4.14 shows that the time difference between the desired execution time and the actual execution time is larger during medium load than during low load. As in the low load scenario, the difference is a combination of the average time to complete a transaction and the delay before an event is triggered.

Location             Average   Median   Min      Max      Standard deviation
Frankfurt, Germany   2560.7    2475.0   1582.0   3760.0   565.2
Ohio, U.S.A.         2541.3    2482.0   1511.0   3726.0   562.9
São Paulo, Brazil    2554.9    2479.0   1551.0   3831.0   556.0

Table 4.14: Accuracy of observer driven time during medium load (ms)

4.5.3 During high load

The time until the second message was sent was overall lower during the high load scenario compared to the other load scenarios. This is, as previously stated, not an indication of accuracy in itself; however, it affects the total accuracy.

Location             Average   Median   Min     Max     Standard deviation
Frankfurt, Germany   325.1     302.9    287.0   345.0   41.9
Ohio, U.S.A.         321.0     314.5    270.2   438.0   46.4
São Paulo, Brazil    358.8     345.2    261.0   553.0   33.2

Table 4.15: Trigger delay of observer driven time during high load (ms)

The trend continues at higher loads: the accuracy shown in Table 4.16 is similar to the lower loads in that the time correlates with a combination of the transaction time and the initial triggering time. The observer-driven time is considerably more accurate during high load than during the lower loads.

Location             Average   Median   Min      Max      Standard deviation
Frankfurt, Germany   1837.7    1838.0   1763.0   1923.0   86.6
Ohio, U.S.A.         1883.7    1884.0   1772.0   1972.0   84.3
São Paulo, Brazil    1949.7    1950.0   1839.0   2110.0   65.2

Table 4.16: Accuracy of observer driven time during high load (ms)
