
Faculty of Computing

Blekinge Institute of Technology SE-371 79 Karlskrona Sweden

A closer look at WebAssembly

Denis Eleskovic

26th August, 2020


This diploma thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfillment of the requirements for the diploma degree in Software Engineering. The thesis is equivalent to 10 weeks of full-time studies.

Contact Information:

Author:

Denis Eleskovic

E-mail: deel18@student.bth.se

University advisor:

Andreas Arnesson

Department of Software Engineering

Faculty of Computing

Blekinge Institute of Technology SE-371 79 Karlskrona, Sweden

Internet: www.bth.se

Phone: +46 455 38 50 00

Fax: +46 455 38 50 57


ABSTRACT

WebAssembly is a new emerging technology for the web which offers a low-level bytecode format for other languages to compile to. The aim of the technology is to effectively speed up the performance of the web, as well as to offer a way for developers to port existing libraries and code that were written in static languages such as C/C++ or Rust. The technology is currently supported in all the major browsers of today.

This study takes a closer look at how the technology works and how it is compiled and executed across two of the major browsers, as compared to JavaScript. Furthermore, a smaller experiment was conducted in which AssemblyScript was used to compile to WebAssembly, AssemblyScript being a TypeScript-to-WebAssembly compiler. The two technologies were then tested in terms of runtime performance and consistency across two browsers and two operating systems, as well as with different optimizations.

The study showed that WebAssembly goes through ahead-of-time compilation and optimization through a compiler toolchain that depends on the language used, as opposed to JavaScript, which makes use of just-in-time compilation. Furthermore, WebAssembly's fundamental structure proved to be deliberately limited so that it can map more directly to machine code and thereby offer performance benefits. The experiment showed that WebAssembly performed better when it came to pure calculations but fell behind JavaScript when performing operations on an array, most likely due to the overhead of passing data structures between the two technologies. The experiment further showed that WebAssembly was able to offer a more consistent and predictable performance than JavaScript.

Keywords: WebAssembly, JavaScript, JITC, AOT


TABLE OF CONTENTS

1 INTRODUCTION
1.1 Background
1.2 Purpose
1.3 Scope
2 QUESTIONS TO BE ANSWERED
3 METHOD
3.1 Literature review
3.2 Experimental study
4 LITERATURE REVIEW
4.1 Introduction
4.2 WebAssembly
4.2.1 Semantic Phases
4.2.2 AssemblyScript
4.3 JavaScript
4.3.1 Compilers
4.3.2 Just-in-time
4.3.3 Profile-based optimizations
4.3.4 V8 Engine
4.3.5 SpiderMonkey Engine
4.4 Conclusion
5 EMPIRICAL EXPERIMENT
5.1 Results
5.2 Conclusion
6 ANALYSIS AND DISCUSSION
6.1 WebAssembly and JavaScript in the browser
6.2 Strengths and weaknesses
6.3 Performance
6.4 Consistency
7 CONCLUSION
7.1 Key conclusions
8 FUTURE WORK
REFERENCES


1 INTRODUCTION

1.1 Background

As the web has developed over the years, developers have always had to keep up with the latest technological advancements meant to make web applications faster and more efficient.

JavaScript has proven powerful enough to solve most issues, being the only natively supported language on the web. But as the need for speed and efficiency has rapidly escalated, JavaScript has shown that it does not possess the performance necessary for apps which require near-native speed. Consequently, four of the major browser companies came together to create WebAssembly, with the goal of bringing safe, fast and portable code to the Web [1]. WebAssembly is a low-level bytecode: a compilation target for several different languages, meant to enable developers to create apps with near-native performance for more intensive use cases, making CAD applications, video/image editing, and scientific visualization and simulation available directly in the browser [1].

1.2 Purpose

The release of WebAssembly has garnered interest in the developer community, with many choosing to benchmark the technology using various methods and comparing it against plain JavaScript [2], [3], [4], [5]. Consequently, many benchmarks have been performed for the different languages supported by WebAssembly, but few have been performed using AssemblyScript [6]. AssemblyScript is a strict subset of TypeScript which compiles to WebAssembly, effectively making AssemblyScript a TypeScript-to-WebAssembly compiler [7]. AssemblyScript enables existing JavaScript developers to take advantage of WebAssembly in applications without needing to learn a new language; this can have a major effect on how the web develops for the foreseeable future, as JavaScript is the most used language [8].
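To make the relationship concrete, the snippet below sketches what an AssemblyScript function looks like next to its plain TypeScript counterpart. Note that `i32` is a built-in type in AssemblyScript; the alias used here is an assumption of this sketch so that it also runs as ordinary TypeScript (AssemblyScript's `i32` is a true 32-bit integer, not a double).

```typescript
// AssemblyScript looks like TypeScript but uses WebAssembly's numeric
// types (i32, i64, f32, f64). In real AssemblyScript `i32` is built in;
// the alias below only exists so this sketch runs as plain TypeScript.
type i32 = number;

// Under AssemblyScript this compiles directly to WebAssembly integer
// instructions, with no runtime type speculation.
export function add(a: i32, b: i32): i32 {
  return a + b;
}

// Plain TypeScript/JavaScript equivalent: here the engine must infer
// and speculate on types at runtime instead (see section 4.3).
export function addJs(a: number, b: number): number {
  return a + b;
}
```

Compiled with the AssemblyScript toolchain, the first function would end up in a `.wasm` module whose export is directly callable from JavaScript.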

Therefore, the aim of this paper is to provide greater insight into how WebAssembly operates, in order to shed more light on the technology and to better understand where it fits in on the web. Furthermore, while various benchmarks have been performed using AssemblyScript, they have not looked at performance across browsers and operating systems, or at how the technology is affected by the JavaScript engine [6].


1.3 Scope

In order to limit the scope of this research, the thesis will not look at whether WebAssembly can improve the load time of a web application, or at the effects of building entire websites through WebAssembly. The paper looks to provide greater insight into WebAssembly and where it fits into the picture on the web. Furthermore, a small set of benchmarks will be used to test the technologies in different aspects in two of the major browsers of today, as well as across two operating systems, while looking at how each JavaScript engine affects the technologies. In order to limit the scope of browsers used for testing, two of the most popular browsers on desktop computers were chosen for the benchmark [9].


2 QUESTIONS TO BE ANSWERED

This study aims to take a closer look at WebAssembly in order to better understand how it works, what problem the technology was created to solve, and how it fits into the existing ecosystem on the web. Furthermore, a set of benchmarks will be selected to further experiment with the technology across browsers, operating systems and different states of optimization.

The questions to be answered in this study are the following:

RQ1. How are WebAssembly modules compiled and executed in the Firefox and Chrome browsers compared to JavaScript?

WebAssembly was built for the web, meaning it is supposed to run alongside JavaScript in the browser. Since WebAssembly is a low-level bytecode that was created as a compilation target for other languages and is supposed to run in the browser, this research question aims to investigate how programs can be compiled to WebAssembly and executed in a browser environment as compared to JavaScript. The Firefox and Chrome browsers were chosen for the benchmark as they are the most popular browsers used on desktops [9].

RQ1.1. What are the strengths and weaknesses of WebAssembly?

The aim of this research question is to better understand in what areas the technology excels and where it might fall behind, further building on the first research question. Since the technology is not a complete programming language, it would not be fair to compare it directly to JavaScript. Hence, the focus will be on what the technology excels at and what it might not be suited for.

RQ2. Does WebAssembly offer an improved runtime over JavaScript when executed in the Firefox and Chrome browsers?

The goal of this research question is to compare the execution time of different benchmarks and pit the technologies against each other in the chosen browsers, across operating systems, and under different states of optimization. This serves to evaluate whether the underlying operating system affects performance, but also how WebAssembly is affected by the JavaScript engine.

RQ2.1. Does WebAssembly offer a more consistent and predictable performance over JavaScript when executed in the Firefox and Chrome browsers?

The above research question expands on the previous research question, as there have been mentions of WebAssembly offering faster and even more consistent performance due to how it is compiled, but no direct testing has been performed to verify the claim [3]. Hence, the aim of this research question is to compare the technologies based on consistency and predictability in performance.


3 METHOD

3.1 Literature review

A literature study will be conducted with a focus on how WebAssembly operates as opposed to JavaScript, and with this also provide more insight into the strengths and weaknesses of the emerging technology. It is necessary to obtain more information about how each technology works in order to compare them fairly. Furthermore, there is also a need to provide a clearer view of what kind of problem WebAssembly aims to solve and where the technology fits into the picture. WebAssembly is also a rather new technology which is still evolving; as such it is necessary to search the internet for further information on how it works. However, this can also limit the reliability of the study, as there might not be as much validated material available on the technology. Thus, less reliable sources may be included as a means of procuring additional information. Through this, the paper will aim to answer RQ1 and RQ1.1 effectively.

The literature study will be performed by searching various databases on the internet, through the search engine Google, for papers regarding the technologies. The search terms to be used will be: “WebAssembly”, “WebAssembly compiler”, “JavaScript engine”, “JavaScript benchmark”, “JavaScript JIT” and “JIT-compilation”. If the search for papers proves insufficient, then a search for alternative sources will be conducted in order to fill any gaps in knowledge. The criteria used to identify relevant literature will be whether the information comes from a scientific paper, whether it is relevant to the technologies, and how old it is. Once the information has been collected, it will be compiled and analyzed in order to present an answer to RQ1 and RQ1.1 respectively.


3.2 Experimental study

In order to acquire an accurate measurement of the performance of a given technology, one would have to tackle the challenging task of creating an extensive suite of benchmarks, which would then need to be tested across different browsers, operating systems and devices. This is an enormously time-consuming task which would not be feasible given the time limit of this thesis. Hence, a smaller set of benchmarks was selected based on previous benchmarks performed on the technology, and then tested in two of the major browsers of today as well as across two operating systems [10], [11]. The benchmark suite was based on previous benchmarks used to test the SpeedyJs compiler; these were chosen because the SpeedyJs compiler operates on a similar basis as AssemblyScript. SpeedyJs also uses TypeScript as a way to type-check code against the WebAssembly specifications [11].

The experiment was conducted in order to gauge whether WebAssembly offered a performance advantage and whether it also offered more consistent performance than JavaScript. The benchmarks included a variety of tests which were pre-configured for SpeedyJs and which were narrowed down for this experiment. The reason for narrowing down the benchmarks rather than using all of them is time constraints, as all the benchmarks needed to be performed across browsers and operating systems. The benchmarks did not adhere to the code syntax of AssemblyScript; hence they required rework, as well as a new environment in which the AssemblyScript benchmarks could be compiled. The modules were compiled using the full runtime and the files were not bundled.

The algorithms included IsPrime, Fib, Merge sort, Array Reverse and Nsieve. IsPrime determines whether the input provided is a prime number, while Fib calculates the 40th Fibonacci number. Array Reverse reverses an array containing 10 000 elements up to 999 times. The Merge sort test sorts an array containing 10 000 elements using the merge sort algorithm. The Nsieve test calculates the prime numbers in the range 2–399999 using the sieve of Eratosthenes [11].
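The thesis does not reproduce the benchmark sources, so the following TypeScript sketch of two of the named algorithms (IsPrime and Nsieve) is illustrative only; the actual implementations may differ in detail.

```typescript
// Illustrative sketches of two of the named benchmarks; details are
// assumptions, since the thesis does not include the benchmark code.

// IsPrime: trial division up to the square root of n.
function isPrime(n: number): boolean {
  if (n < 2) return false;
  for (let i = 2; i * i <= n; i++) {
    if (n % i === 0) return false;
  }
  return true;
}

// Nsieve: count the primes in [2, limit] with the sieve of Eratosthenes.
function nsieve(limit: number): number {
  const composite = new Uint8Array(limit + 1);
  let count = 0;
  for (let i = 2; i <= limit; i++) {
    if (composite[i]) continue;
    count++; // i is prime; mark its multiples as composite
    for (let j = i * i; j <= limit; j += i) composite[j] = 1;
  }
  return count;
}
```

Both are dominated by integer arithmetic and tight loops, which is exactly the kind of workload where WebAssembly's direct mapping to machine instructions is expected to pay off.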

In order to obtain reliable results, as certain tests would otherwise finish too quickly, and to minimize the margin of error, the number of iterations was increased for each algorithm. The benchmarks were iterated individually: the Merge sort, Nsieve and prime number algorithms were iterated 1000 times, the Array Reverse algorithm 100 times, and the Fibonacci benchmark once. This was then done 30 times each in both a warm and a cold browser session, similarly to how a previous benchmark was conducted [10]. For the cold runs, the browser was closed between each run to ensure that the compiler did not get a chance to optimize the code, while the warm runs allowed the compiler to optimize the code by running the benchmark once before measurement took place. Furthermore, the browser was kept open between each run in the warm session.

The reason for this aspect of the experiment is to analyze whether WebAssembly gains more from optimization than JavaScript, in essence seeing how WebAssembly is affected by the JavaScript engine. Furthermore, the tests were executed in two of the most used desktop browsers today, Firefox and Chrome [9]. The settings in Firefox were also changed to prevent the browser from asking to close the window if execution took too long, and from running in safe mode in the event of a crash.
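The warm/cold procedure described above can be sketched as follows. This harness is an assumption about the structure of the measurement loop, not the thesis's actual code; the function name and parameters are illustrative.

```typescript
// Hedged sketch of the measurement loop described above: each benchmark
// is iterated a fixed number of times per run, and a "warm" run executes
// the benchmark once, unmeasured, so the engine has a chance to optimize.
function timeRun(benchmark: () => void, iterations: number, warm: boolean): number {
  if (warm) benchmark();            // warm-up pass, not measured
  const start = performance.now();
  for (let i = 0; i < iterations; i++) benchmark();
  return performance.now() - start; // elapsed milliseconds
}
```

For a cold measurement the browser itself is restarted between runs, which no in-page harness can express; the sketch only covers what happens within a single session.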

The JavaScript implementations of the selected benchmarks were tested first, in order to establish a baseline against which WebAssembly could be tested. The WebAssembly implementations of the algorithms used AssemblyScript. Both technologies were implemented as similarly as possible. However, one limitation to keep in mind is that AssemblyScript is not able to efficiently pass complex data structures between the two technologies at this point in time. Hence, the array used by the algorithms performing array operations was instantiated inside the AssemblyScript module. This was done instead of using additional libraries to enable passing data structures, as that would add additional overhead to the WebAssembly implementation [3].

The AssemblyScript measurements included the overhead of jumping between the two technologies, but also the time it takes for the browser to download and instantiate the WebAssembly file [3]. A technique which streams a WebAssembly module, meaning the engine can begin working on the module as it is being downloaded, was utilized in the experiment [3]. Furthermore, the experiment was performed in a local environment in order to minimize the effects of the internet connection, as this could have affected the total time needed by the browser to begin parsing the file.


The experiment was conducted on an AMD Ryzen 5 2600X 3.60 GHz machine with 16 GB of memory, dual-booting Windows 10 version 1909 and Ubuntu 20.04, in Firefox version 78.0.1 (64-bit) and Chrome version 83.0.4103.116 (64-bit). The results were compared based on their geometric mean and standard deviation, computed using the standard geometric mean and standard deviation formulas.


4 LITERATURE REVIEW

4.1 Introduction

In order to better understand how the two technologies differ, it is important to first understand what WebAssembly is and how it works in comparison to JavaScript, both in how WebAssembly is supposed to offer better performance and in how it is compiled and able to target the web platform as a whole.

4.2 WebAssembly

WebAssembly is a new type of intermediate language in binary form for the web that is meant to run alongside JavaScript in the browser; it provides some new features and also helps improve performance [1], [11], [12]. In order for the different JavaScript engines to be able to execute WebAssembly modules, the browser companies have extended the capabilities of their respective virtual machines, effectively enabling WebAssembly to be executed in the browser [12]. The binary format was created to help speed up compute-intensive tasks and to ensure consistent performance as opposed to regular JavaScript [1].

However, WebAssembly is not meant to be written by hand; instead it is aimed at being a compilation target for other languages. The technology also does not support garbage collection at this point in time; that is rather a long-term goal, and garbage collection is currently handled by the language compiling to WebAssembly. This means that code written in languages which could not run on the web before has now become available there, effectively enabling developers to port existing code from languages such as C/C++ and Rust to the web [12].

In general, compilers work by taking a source language and converting it into machine language, which the host CPU can then use to execute the instructions provided. WebAssembly is in this regard rather similar to an assembly language, the exception being that each assembly language (x86, ARM) is tied to a specific machine architecture while WebAssembly is not. The reason is that one never knows what kind of architecture will be used by the end user; hence there is a need for an architecture-agnostic assembly language [13]. WebAssembly is a step closer to actual machine code than JavaScript source code, as it has a more direct mapping to machine code. When a script is executed, the browser will download the WebAssembly and compile it directly to machine code for the underlying hardware, effectively making the execution of WebAssembly faster than normal just-in-time compilation of JavaScript source, since it skips the need to parse and optimize the source code [13].

WebAssembly is built around certain concepts in order to be able to encode and provide its low-level binary format. The technology is somewhat limited in terms of value types, as it works with only four basic numeric types: integers and IEEE 754 floating-point numbers, each with a 32-bit and a 64-bit width (i32, i64, f32 and f64). There is also no distinction between signed and unsigned integers at the type level [14]. The technology handles instructions by basing its computational model on a stack machine, which works on a last-in, first-out principle. Instructions perform operations on data by popping their arguments from the operand stack and pushing results back onto it [14].
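The stack-machine model can be illustrated with a tiny interpreter for a few wasm-like instructions. This is purely a didactic sketch; real engines compile these instructions to machine code rather than interpreting them like this.

```typescript
// Minimal sketch of WebAssembly's stack machine: each instruction pops
// its operands from an operand stack and pushes its result back on.
type Instr =
  | { op: "i32.const"; value: number }
  | { op: "i32.add" }
  | { op: "i32.mul" };

function run(instrs: Instr[]): number {
  const stack: number[] = [];
  for (const ins of instrs) {
    switch (ins.op) {
      case "i32.const":
        stack.push(ins.value | 0); // push a 32-bit constant
        break;
      case "i32.add": {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push((a + b) | 0);   // wrap to 32 bits, like wasm
        break;
      }
      case "i32.mul": {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push(Math.imul(a, b)); // 32-bit multiply
        break;
      }
    }
  }
  return stack.pop()!; // the single result left on the stack
}
```

For example, `(2 + 3) * 4` becomes the sequence `i32.const 2; i32.const 3; i32.add; i32.const 4; i32.mul`, leaving 20 on the stack.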

Some instructions may, under certain conditions, produce traps. When this happens, WebAssembly aborts execution, since traps cannot be handled by WebAssembly itself. Instead, information is passed on to the host environment, where the problem can be raised [14].

Functions take a sequence of values as parameters and perform operations before returning results made up of a sequence of values. In the current version of WebAssembly, functions are only able to return one value [14].

WebAssembly makes use of linear memory, which is an array of raw bytes. The memory is created with an initial size but can be grown as needed. When a program is executed, it can store values in the memory and access it at any byte address [14].
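Linear memory is directly visible from the JavaScript side through the `WebAssembly.Memory` API; a minimal sketch:

```typescript
// Linear memory: a resizable array of raw bytes that both JavaScript
// and a WebAssembly module can read and write at any byte address.
// Sizes are expressed in 64 KiB pages.
const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 65536 bytes

// JavaScript views the raw bytes through typed arrays.
const bytes = new Uint8Array(memory.buffer);
bytes[0] = 42; // store a value at byte address 0

memory.grow(1); // the initial size can be changed as needed
// Growing replaces the underlying buffer, so old views must be recreated;
// the existing contents are preserved.
const grown = new Uint8Array(memory.buffer);
```

This shared byte array is also the channel through which JavaScript and WebAssembly exchange data structures, which is the source of the boundary-crossing overhead discussed in the experiment.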

4.2.1 Semantic Phases

Modules are distributed in a binary format, which is decoded into an internal representation of a module; in a real implementation this would instead be compiled directly to machine code. In order for a WebAssembly module to work, it needs to be valid. The validation phase checks various conditions to ensure the module is both safe and executable; one especially important condition is the type checking of functions and the instruction sequences in their bodies [14].

When the module has been decoded and validated, it is finally ready to be executed. The execution of a module can be seen as creating a module instance, meaning everything which is needed for execution is first instantiated, such as memory, tables and globals. Once this is done, the module can be executed by invoking an exported function on the module instance [14].
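These phases map directly onto the WebAssembly JavaScript API. The sketch below hand-assembles a minimal module exporting a single `add` function, then validates, compiles, instantiates and invokes it. The byte layout follows the binary format, but the exact bytes were assembled by hand for illustration and are worth double-checking against the specification.

```typescript
// A hand-assembled minimal module exporting add(a: i32, b: i32): i32.
const moduleBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d,             // magic number: "\0asm"
  0x01, 0x00, 0x00, 0x00,             // binary format version 1
  0x01, 0x07, 0x01,                   // type section: 1 entry
  0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, //   (func (param i32 i32) (result i32))
  0x03, 0x02, 0x01, 0x00,             // function section: func 0 has type 0
  0x07, 0x07, 0x01,                   // export section: 1 entry
  0x03, 0x61, 0x64, 0x64, 0x00, 0x00, //   "add" -> function 0
  0x0a, 0x09, 0x01, 0x07, 0x00,       // code section: 1 body, no locals
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b, //   local.get 0; local.get 1; i32.add; end
]);

// Validation phase: type-check the module before running anything.
const valid = WebAssembly.validate(moduleBytes);

// Decode/compile, then instantiate (memory, tables and globals would be
// created here), then execute by invoking an exported function.
const compiled = new WebAssembly.Module(moduleBytes);
const instance = new WebAssembly.Instance(compiled);
const add = instance.exports.add as (a: number, b: number) => number;
```

In a browser one would typically use `WebAssembly.instantiateStreaming(fetch("module.wasm"))` instead, which lets the engine begin compiling while the module is still downloading; that is the streaming technique used in the experiment in section 3.2.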

4.2.2 AssemblyScript

AssemblyScript is a strict subset of TypeScript which is currently being developed with the aim of providing the advantages of WebAssembly to existing JavaScript developers. The AssemblyScript compiler works by compiling to WebAssembly through Binaryen [2]. Binaryen is a compiler and toolchain library for WebAssembly, written in C++ and aimed at making compilation fast and effective. It performs both minimization of the code base and optimization. Binaryen is able to utilize multi-threading, meaning it can build intermediate representations and optimize them at the same time [18].

4.3 JavaScript

JavaScript is an object-oriented programming language that is used in web pages to make them more dynamic. The language supports multiple paradigms, such as object-oriented, procedural and declarative [16]. Due to its dynamic nature it utilizes just-in-time compilation [17]. The process of compiling JavaScript code to machine code for execution varies with the browser the code is being run in, with the SpiderMonkey engine used in the Firefox browser and the V8 engine in Chrome [18]. However, there is a general approach that the major browsers implement in order to speed up performance: the JavaScript engines adopt a complex multi-tier architecture composed of several execution tiers, including the interpreter and the compiler, and these tiers operate on the code on a function-by-function basis to speed up execution [19].

4.3.1 Compilers

In order to execute code, it first needs to be translated from the language it was implemented in into a machine language that can be run by the computer. For this there are mainly two methods [20].

The first method involves using an interpreter. An interpreter steps through the code and executes it on a line-by-line basis, directly evaluating expressions and executing statements. An interpreter may need to process the same expressions and statements several times, compared to a compiler, which is why it is considered slower. However, an interpreter is easier to move to another machine, and it is faster to get a program up and running. Interpreters are quick to start but slow to execute [20].

The second method is to use a compiler. A compiler translates code written by human programmers in a high-level programming language into low-level machine code that can be understood by the machine. During compilation, the compiler will also spot mistakes in the syntax and report them. A high-level language is used because it is closer to the way humans think about problems; it also lets the compiler spot issues with programs effectively, and it enables programmers to port the same code to many different machine languages, making the program available on multiple machines. Compilers need to compile the code before actually executing the program; they are slow to start but fast to execute [20].
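The start-up versus throughput trade-off can be made concrete with a toy expression language: an interpreter re-walks the tree on every evaluation, while a one-time translation pays its cost up front and then evaluates cheaply. This is only a sketch; real compilers emit machine code, not closures.

```typescript
// A toy illustration of the interpreter/compiler trade-off.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr }
  | { kind: "mul"; left: Expr; right: Expr };

// Interpreter: re-walks the whole tree on every evaluation.
function interpret(e: Expr): number {
  switch (e.kind) {
    case "num": return e.value;
    case "add": return interpret(e.left) + interpret(e.right);
    case "mul": return interpret(e.left) * interpret(e.right);
  }
}

// "Compiler": a one-time translation pass; subsequent evaluations are
// plain function calls with no tree traversal.
function compile(e: Expr): () => number {
  switch (e.kind) {
    case "num": { const v = e.value; return () => v; }
    case "add": { const l = compile(e.left), r = compile(e.right); return () => l() + r(); }
    case "mul": { const l = compile(e.left), r = compile(e.right); return () => l() * r(); }
  }
}
```

Evaluated once, the interpreter wins because it skips the translation pass; evaluated many times, the compiled form wins, which is exactly the tension a just-in-time compiler tries to balance.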

4.3.2 Just-in-time

Just-in-time compilers (JITC) were designed to compile code during execution. They are an attempt at combining the best of both worlds, the interpreter and the compiler. At a general level, they tend to work as follows.

The JITC tends to be split into three tiers. In the first tier, the parser translates the source code of an invoked JavaScript function into bytecode. The bytecode is then executed directly, ensuring a quick start to the JavaScript execution. From here the execution is repeated; depending on the engine, the code will be repeated a set number of times before being considered “warm”. When the bytecode is considered “warm”, the baseline JITC will trigger and start to compile the bytecode to machine code with minimal optimizations. At this point the baseline JITC also collects profile information by inserting instrumentation code. Should the baseline code be executed enough times to collect stable profiles (the number varies between each browser's engine), the function being executed will be considered “hot”. This in turn initiates the third phase of the compiler: the optimizing JITC will begin to recompile the bytecode using various optimizations, the most powerful one being profile-based optimization [19].


4.3.3 Profile-based optimizations

JavaScript is a dynamically typed language, which means that the types of variables in the code are interchangeable during execution; even properties can be dynamically inserted and deleted. Due to this, the compiler constantly needs to evaluate what type of object or variable is being passed. After some time the compiler will try to predict subsequent code execution based on patterns collected in previous executions, so-called profiles. The profile information relates to dynamic behaviors during execution. The JITC then proceeds to generate code to fit these profiles, attempting to predict which behavior will be repeated in future executions [19].

JavaScript engines tend to use hidden classes: a list of properties and their offsets within an object. A hidden class represents the shape of an object, and every object has a pointer to its hidden class, which is consulted when a specific property of the object is accessed [19]. Depending on the code and on what the JITC observes when optimizing, it can speed up repeated property accesses by caching the hidden class at the access site, called inline caching: having observed that a function is always called with objects of the same shape, it expects the same shape in future calls. The JITC validates each assumption it makes by inserting guard code; the address of the cached hidden class is compared to that of the current object, and if they differ it signals to the compiler that a differently shaped object has been passed as an argument. The JITC will then recover, called deoptimization: it halts execution at that point in the optimized code, returns to the corresponding position in the baseline code, and proceeds with execution without any speculation [19].

4.3.4 V8 Engine

The V8 engine works by taking the JavaScript source code and passing it through its parser to create an intermediate representation in the form of an Abstract Syntax Tree (AST) [19]. An Abstract Syntax Tree is a tree-like representation of the structure of source code, where each node in the tree holds a construct from the source code [21]. The AST is then consumed by the interpreter, called Ignition, which generates unoptimized bytecode, bytecode being an abstraction of machine code. The bytecode is then executed several times while being analyzed. After a while it is passed to the optimizing compiler, TurboFan, which optimizes the code and will deoptimize when its assumptions are not met, returning to the slower baseline code and restarting the process [21].

Figure 1 - Generic pipeline of code compilation and execution in the V8 engine [4].

The interpreter Ignition consists of bytecode handlers, each handling a specific bytecode and then dispatching to the handler for the next bytecode. The bytecode handlers themselves are written in an architecture-agnostic way through assembly code. This means the interpreter can be written once, relying on the TurboFan compiler to generate machine instructions for every architecture supported by the V8 engine [22].

4.3.5 Spidermonkey Engine

The SpiderMonkey engine uses a different approach for its JITC. The source code is passed to the parser, which generates an AST. The AST is then used by the interpreter to generate bytecode, which is passed on to the optimizing compiler together with profiling data. What differs from the V8 engine is that the SpiderMonkey engine has two optimizing compilers instead of one. The code goes through the interpreter first and is then passed to the Baseline compiler, which produces “somewhat” optimized code and also analyzes it. This is then passed to the heavily optimizing compiler called IonMonkey. Should any of the assumptions made fail, IonMonkey will fall back to the Baseline code [18].


Figure 2 - Generic pipeline of code compilation and execution in the SpiderMonkey engine [4].

4.4 Conclusion

JavaScript is designed to be run directly in the browser, and to make it as fast as possible the engines employ a complex multi-tier architecture to parse and optimize the code being executed. The architecture depends on the browser in which the JavaScript code is executed: Chrome uses the V8 engine for compiling code, while Firefox uses the SpiderMonkey engine. In general, they combine an interpreter and a compiler to get the best of both worlds. The code starts executing quickly through the interpreter and is then analyzed and optimized based on profiling data by the compiler before being translated into machine code suitable for the underlying machine. These steps are all performed while the code is being executed, which is why the approach is called just-in-time compilation. WebAssembly, by contrast, is a binary format that serves as a compilation target for other languages. Hence, it is compiled and optimized ahead of time, before being downloaded by the browser. Because the binary format maps more closely to actual machine code than regular JavaScript does, the browser can make a short jump directly to machine code without parsing the file like a regular JavaScript file; the modules also tend to be smaller in size, which makes them faster to download. WebAssembly is built around a small set of underlying concepts in order to map directly to low-level machine code.
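As a concrete illustration of why just-in-time compilation can make JavaScript performance vary, consider a function whose argument types change mid-run: the engine optimizes for the types it has profiled and must de-optimize when a new type appears. The sketch below is illustrative only; the function names are hypothetical and not part of the thesis benchmark.

```javascript
// Sketch of the kind of type instability that forces a JIT engine such as
// V8 or SpiderMonkey to discard optimized code (names are illustrative).
function add(a, b) {
  return a + b;
}

// The profiler observes only numbers here, so the optimizing compiler may
// emit fast numeric addition for this call site.
let total = 0;
for (let i = 0; i < 10000; i++) {
  total = add(total, 1);
}

// A call with strings violates that assumption: the engine must
// de-optimize and fall back to generic (slower) addition.
const label = add("iteration ", "count");

console.log(total, label);
```

Writing type-stable code avoids such de-optimizations, which is one reason predictable code tends to run faster under a JIT.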


WebAssembly mainly uses four number types: two integer types (i32 and i64) and two floating-point types (f32 and f64). Hence, the technology is most suitable for pure calculations; while it provides a low-level format, it is not as well suited for dealing with complex data structures or for building out parts of a website. Furthermore, the technology uses a stack machine to handle instructions. The stack machine operates on a last-in, first-out basis: it pops arguments off the stack, performs an operation, and pushes the result back onto the stack. WebAssembly also makes use of linear memory, where it can store values at any byte address. Linear memory is also used when JavaScript and WebAssembly need to communicate, such as when passing data structures between the technologies, which in turn can add overhead to execution. A summary of the key similarities and differences mentioned above can be found in figure 3.

Figure 3 - Summary of comparison between WebAssembly and JavaScript.
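To make the stack-machine model concrete, the following sketch hand-assembles a minimal WebAssembly module whose function body is exactly the instruction sequence local.get 0, local.get 1, i32.add, and instantiates it from JavaScript. This is an illustrative example, not part of the thesis benchmark.

```javascript
// A minimal hand-assembled WebAssembly module exporting add(i32, i32) -> i32.
// The function body is pure stack-machine code: push both arguments onto the
// stack, then i32.add pops them and pushes the result.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, // magic number "\0asm"
  0x01, 0x00, 0x00, 0x00, // binary format version 1
  // Type section: one function type (i32, i32) -> i32
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f,
  // Function section: one function using type 0
  0x03, 0x02, 0x01, 0x00,
  // Export section: export function 0 under the name "add"
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00,
  // Code section: local.get 0, local.get 1, i32.add, end
  0x0a, 0x09, 0x01, 0x07, 0x00, 0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,
]);

// Compilation is a short, synchronous step because the bytecode already
// maps closely to machine code.
const instance = new WebAssembly.Instance(new WebAssembly.Module(bytes));
console.log(instance.exports.add(2, 3)); // → 5
```

Note how the bytecode carries explicit types (0x7f is i32), which is what lets the browser validate and compile it without the speculative profiling JavaScript requires.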


5 EMPIRICAL EXPERIMENT

5.1 Results

Introduction

The data was compiled, and the geometric means and standard deviations were calculated for each data set. The results are presented as bar graphs for each operating system, browser and type of execution. Each benchmark was run 30 times in both a cold and a warm browser environment. The geometric mean was used to calculate the average runtime for a benchmark, while the standard deviation was used to measure inconsistency: a high standard deviation indicates that the values are spread over a wider range, meaning the test is less consistent, and vice versa.
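The two statistics can be sketched as plain JavaScript helpers. This is a minimal illustration of the calculations described above, not the actual analysis script used in the thesis.

```javascript
// Geometric mean: the n-th root of the product of n samples, computed via
// logarithms so long benchmark runs do not overflow the product.
function geometricMean(samples) {
  const logSum = samples.reduce((acc, x) => acc + Math.log(x), 0);
  return Math.exp(logSum / samples.length);
}

// Population standard deviation: how far, on average, each runtime strays
// from the arithmetic mean; a larger value means a less consistent benchmark.
function standardDeviation(samples) {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((acc, x) => acc + (x - mean) ** 2, 0) / samples.length;
  return Math.sqrt(variance);
}

const runtimes = [12.1, 11.8, 12.4, 30.2]; // hypothetical runtimes in ms
console.log(geometricMean(runtimes).toFixed(2));
console.log(standardDeviation(runtimes).toFixed(2));
```

The geometric mean is preferred over the arithmetic mean here because it dampens the influence of occasional outlier runs such as the 30.2 ms sample above.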


WebAssembly and JavaScript in Ubuntu/Firefox (Cold)

The geometric means calculated from the compiled data show that the results of the cold run experiment are mixed. For the algorithms involving data structures, such as Array Reverse and Merge sort, WebAssembly did not offer a performance advantage. For the algorithms dominated by calculations, however, WebAssembly matched or outperformed JavaScript. In the prime number test, WebAssembly was faster by a smaller margin than the improvement seen in the Fibonacci test, as seen in figure 4, while its performance was slightly worse in the Nsieve test.

Figure 4 - Runtime for algorithms in the Firefox browser on Ubuntu (Cold)


WebAssembly and JavaScript in Ubuntu/Firefox (Warm)

The warm run experiment performed on Firefox yielded similar results, as seen in figure 5. In the benchmarks operating on arrays, WebAssembly did not offer any speedup over JavaScript, although it did see some marginal improvement in runtime compared to its own cold run. The Merge sort test implemented in JavaScript was the only one that performed worse than in its initial cold run. The Fibonacci experiment showed that WebAssembly maintained a small advantage over JavaScript, as seen in figure 5, while IsPrime was slightly slower. In the prime number tests, JavaScript improved on its initial cold run performance to become marginally faster than WebAssembly.

Figure 5 - Time taken to complete given algorithms in the Firefox browser on Ubuntu (Warm)


WebAssembly vs JavaScript in Ubuntu/Chrome (Cold)

The tests conducted in the Chrome browser showed that WebAssembly was slower in the benchmarks that worked with arrays. The JavaScript implementations of the array algorithms proved even faster than their Firefox equivalents, while WebAssembly showed performance similar to its Firefox results in the Merge sort and Array Reverse tests. WebAssembly performed better in the Prime number and Fibonacci tests, whereas JavaScript was faster in the Merge sort, Array Reverse and Nsieve tests, as seen in figure 6.

Figure 6 - Time taken to complete given algorithms in the Chrome browser on Ubuntu (Cold)


WebAssembly vs JavaScript in Ubuntu/Chrome (Warm)

The warm run of the benchmark yielded results similar to its cold run equivalent. The JavaScript Merge sort implementation, seen to the far left in figure 7, was faster than WebAssembly in this instance, and in the Array Reverse test JavaScript performed significantly better than the WebAssembly version. For the more calculation-based algorithms, WebAssembly provided better performance in the Fibonacci and Prime number tests, while it fell behind in the Nsieve test. Compared to the cold run, the warm run brought a small increase in performance for both technologies across the tests, the exception being the WebAssembly Array Reverse implementation, whose performance decreased by a small margin.

Figure 7 - Time taken to complete given algorithms in the Chrome browser on Ubuntu (Warm)


WebAssembly vs JavaScript in Windows/Firefox (Cold)

The results obtained on the Windows 10 operating system indicate that, similarly to the Ubuntu tests, WebAssembly performed worse than its JavaScript equivalent in the Merge sort and Array Reverse tests. For the Nsieve, Fibonacci and Prime number tests, however, the results are more even, as demonstrated in figure 8.

Figure 8 - Time taken to complete given algorithms in the Firefox browser on Windows (Cold)


WebAssembly vs JavaScript in Windows/Firefox (Warm)

In the cold run, WebAssembly was slightly slower than JavaScript in the Nsieve test, but in the subsequent warm run it closed the small gap and even proved slightly faster, while JavaScript regressed in the same test. The differences between the cold and warm runs are minuscule, as both technologies improved their performance slightly, with the exceptions of Array Reverse and IsPrime for WebAssembly. The JavaScript Merge sort also performed worse, as seen in figure 9.

Figure 9 - Time taken to complete given algorithms in the Firefox browser on Windows (Warm)


WebAssembly vs JavaScript in Windows/Chrome (Cold)

The experiment conducted on the latest Chrome version in Windows 10 showed WebAssembly still falling behind in the array sorting algorithms, as seen in figure 10; the JavaScript implementations of these algorithms performed significantly better in the Chrome browser than WebAssembly. For the remaining tests, WebAssembly was slower in the Nsieve test but faster in the Fibonacci and Prime number tests.

Figure 10 - Time taken to complete given algorithms in the Chrome browser on Windows (Cold)


WebAssembly vs JavaScript in Windows/Chrome (Warm)

For the warm run, WebAssembly once again proved slower at sorting arrays, as seen in figure 11. It was likewise a bit slower in the Nsieve test but faster in the Fibonacci and Prime number tests. While the JavaScript implementations showed slight performance improvements over their cold runs, WebAssembly's performance degraded slightly across all tests except for the Prime number algorithm.

Figure 11 - Time taken to complete given algorithms in the Chrome browser on Windows (Warm)


Ubuntu standard deviation

                          Cold-JS  Cold-Wasm  Difference  Warm-JS  Warm-Wasm  Difference
Merge sort    Firefox 78    21.92      55.86      -33.94    15.62      19.40       -3.78
Merge sort    Chrome 83      7.82      27.70      -19.88     8.36       3.35        5.01
Array Reverse Firefox 78    27.53     254.29     -226.76    19.28      34.36      -15.08
Array Reverse Chrome 83     27.60      59.92      -32.33    20.36      38.61      -18.25
Nsieve        Firefox 78    28.39       8.23       20.16    15.43       3.81       11.62
Nsieve        Chrome 83     10.19      13.52       -3.33    14.92       4.28       10.64
Fib           Firefox 78    29.86      13.76       16.10    12.51       2.12       10.39
Fib           Chrome 83     28.40      20.57        7.82     6.61       2.41        4.19
IsPrime       Firefox 78     5.39       4.22        1.17     2.57       2.19        0.38
IsPrime       Chrome 83      5.16       3.45        1.71     3.98       0.46        3.52

Consistency

The standard deviations calculated from the cold run data set on the Ubuntu operating system show that WebAssembly is more consistent in performance for the algorithms dealing with calculations, the only exception being the Nsieve test in the Chrome browser. The deviation was higher for the tests that worked with arrays.

The warm run saw similar behavior from WebAssembly, which again provided more consistency across the calculation tests. The technology also improved its consistency in the Nsieve test, whereas JavaScript worsened compared to its cold run in the Chrome browser but improved in the Firefox browser.

The cold run tests conducted on Windows show that WebAssembly was less consistent in the array sorting algorithms, the exception being the Array Reverse test, which in the Firefox browser stayed closer to the mean. The tests involving calculations proved more consistent than their JavaScript equivalents, with the exception of the Nsieve test, where JavaScript was more consistent.

In the warm run, WebAssembly showed better consistency across most of the tests, with the exception of the Array Reverse test conducted in the Chrome browser.

Table 1 – Standard deviation as calculated across the browsers and algorithms for the Ubuntu OS


Windows standard deviation

                          Cold-JS  Cold-Wasm  Difference  Warm-JS  Warm-Wasm  Difference
Merge sort    Firefox 78    14.85      15.49       -0.64    30.17      15.81       14.36
Merge sort    Chrome 83     27.23      72.48      -45.25    30.46      16.82       13.64
Array Reverse Firefox 78   366.49      63.32      303.17    70.20      19.41       50.79
Array Reverse Chrome 83     35.06      40.05       -4.99    13.40      22.11       -8.71
Nsieve        Firefox 78    15.14      48.24      -33.09    12.94       2.76       10.18
Nsieve        Chrome 83     13.46      23.77      -10.31    14.14       6.95        7.19
Fib           Firefox 78    11.17       4.00        7.17     8.87       1.70        7.18
Fib           Chrome 83     44.28      18.64       25.63    25.10       2.11       22.99
IsPrime       Firefox 78     4.10       2.78        1.32    11.48       8.93        2.54
IsPrime       Chrome 83      4.55       2.26        2.29     8.82       1.34        7.47

Table 2 – Standard deviation as calculated across the browsers and algorithms for the Windows OS


5.2 Conclusion

Overall, the trend in the tests indicates that JavaScript offers better performance for array operations while WebAssembly offers better performance for pure calculations. The browser comparison showed that the browsers are optimized for different types of code and that, given enough time, both are able to optimize code further, although in some cases performance regressed instead of improving. Furthermore, WebAssembly provided better consistency in the tests working with pure calculations, with some exceptions depending on the browser, operating system and type of execution.


6 ANALYSIS AND DISCUSSION

6.1 WebAssembly and JavaScript in the browser

RQ1. How are WebAssembly modules compiled and executed across Firefox and Chrome compared to JavaScript?

The information obtained from the literature review showed that JavaScript code is compiled and translated into machine code through a multi-tiered engine architecture, which depends on the browser in which the code is executed. A programmer writes JavaScript code, which is passed to the JavaScript engine to be interpreted and executed. The engine begins by splitting the code into tokens and building an abstract syntax tree, which verifies that the language's elements and keywords are used correctly, after which the interpreter allows fast initial execution.

From there, the profiler watches the running code and looks for ways to optimize it. Based on this profiling, frequently executed but unoptimized code is passed to the compiler to be optimized and turned into machine code, which then replaces the previous unoptimized version produced by the interpreter. As these changes accumulate, execution performance gradually improves over time.

The WebAssembly file format is closer to machine code than regular JavaScript code. This makes it easier to compile into machine code, and it does not require as much optimization in the browser, since the file has already been optimized and compiled ahead of time.

The WebAssembly format is also built to be architecture agnostic, effectively allowing it to target different machines and allowing existing machines to execute the files easily. Once a file has been compiled to the WebAssembly format, the browser simply needs to download it and make the small jump to machine code, instead of going through the process of analyzing and optimizing the code before passing it to the compiler backend to generate machine code for the targeted architecture.


WebAssembly does not offer complex data structures and other features of a modern programming language; instead it is rather limited in what it can express at this point in time, as seen in figure 3. It mainly operates on plain integer and floating-point numbers and is thus more suited for programs involving taxing calculations. However, WebAssembly is not meant to be written by hand; it is intended as a compilation target for other languages. Hence, to use WebAssembly a developer first needs to write code in a language that provides a compiler toolchain for WebAssembly and then compile the file before using it.

The web browsers of today are very complex in how they operate on code and in how they go about providing the best performance possible. Due to the nature of just-in-time compilation, JavaScript can, given enough time, be optimized greatly during execution; WebAssembly bypasses this by allowing such optimizations to take place before execution instead, as presented in sections 4.2 and 4.3. The degree of optimization also varies depending on the browser used. In this study the Firefox and Chrome browsers were used, with their respective pipelines for compiling and optimizing code presented in figure 1 and figure 2.


6.2 Strengths and weaknesses

RQ1.1. What are the strengths and weaknesses of WebAssembly?

The literature study further showed that WebAssembly was created to solve a specific set of problems, such as providing faster computing for the web. It allows languages that previously could not target the web to port entire libraries and applications to it. By enabling mature, well-developed languages to target the platform, WebAssembly is able to offer faster computing on the web.

Downloading a WebAssembly module, compiling it straight to machine code and executing it is a short path that improves performance, as it reduces the need for further optimization and compilation in the browser. This does not mean that WebAssembly should be used for everything: the cost of downloading a large WebAssembly module over a slow network in an attempt to micro-optimize a web application can quickly prove not to be worth the effort. WebAssembly was engineered for CPU-intensive tasks, so it would be inefficient to use it for working with elements in a webpage. The most suitable situation for WebAssembly is when an application experiences a significant bottleneck; the code that is not performing well enough can then be moved to WebAssembly.

The literature review showed that the WebAssembly format is a great step in the right direction for enabling other languages to be part of the web, effectively growing the web's ecosystem rather than focusing only on its JavaScript portion. Through the WebAssembly format, languages like C, C++ and Rust become part of that ecosystem, which will help push the web further in the future. It should not be misunderstood as a replacement for an entire JavaScript application; WebAssembly is meant to run alongside JavaScript. Hence, the recommendation is to use WebAssembly explicitly for the computationally expensive parts of an application.

A weakness of WebAssembly is the overhead of passing data structures between the technology and JavaScript when assets need to be shared. Furthermore, based on the information extracted from the literature review, one can deduce that the technology is in its early stages of development, with new features to be expected. A weakness to consider at this point in time is the lack of a garbage collector, which results in bigger files that take more time to download over the network. The technology was developed mainly for computationally intensive tasks, so in other areas it will most likely struggle. It is therefore important that a developer uses the technology not only in a valid way but also for a valid reason, as it might not be worth using WebAssembly in an attempt to micro-optimize a piece of code. The benchmark conducted further confirmed that WebAssembly does not always offer better performance, but in the majority of cases it did offer more consistent performance. The results showed that WebAssembly fell behind when working with data structures, whereas for calculations it was generally faster.


6.3 Performance

RQ2. Does WebAssembly offer an improved runtime over JavaScript when executed in the Firefox and Chrome browsers?

The experiment shows that in the majority of cases JavaScript was the faster technology, especially in the Chrome browser. It is, however, important to note that in the tests involving pure calculation, such as the Fibonacci and Prime number algorithms (the exception being the Nsieve test), WebAssembly provided better performance than JavaScript even with compilation and download included, as seen in figures 6, 7, 8, 10 and 11. In the tests focused on sorting a given array, such as the Merge sort and Array Reverse algorithms, JavaScript beat WebAssembly, as seen in the results section. Something to consider is that WebAssembly currently does not offer great support for complex data structures, so the need to transfer the arrays between WebAssembly and JavaScript most likely added to the total runtime of the algorithm, since the entire array used in each implementation was stored in each file. Furthermore, because the file was compiled with the full runtime, the larger file size might have slowed down execution for WebAssembly even further.
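The overhead of sharing an array can be illustrated with WebAssembly's linear memory: JavaScript cannot hand a JS array to a module directly, but has to copy the elements into the module's memory through a typed-array view and copy the results back out afterwards. The sketch below shows only the JavaScript side of that round trip; it is an illustration under assumed names, not the benchmark code used in the thesis.

```javascript
// One 64 KiB page of linear memory, as a Wasm module instance would own.
const memory = new WebAssembly.Memory({ initial: 1 });
const i32View = new Int32Array(memory.buffer);

const input = [9, 1, 5, 2];

// Copy in: every element is written into linear memory.
i32View.set(input, 0);

// ...here a hypothetical Wasm export such as sort(ptr, len) would
// operate directly on the bytes in linear memory...

// Copy out: the (possibly modified) region is read back into a JS array.
const output = Array.from(i32View.subarray(0, input.length));
console.log(output);
```

Both copies are proportional to the size of the data, which is why array-heavy benchmarks can lose the time that the WebAssembly computation itself saves.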

A different aspect tested was cold versus warm execution. The warm run was expected to execute faster than its cold equivalent, but in some cases the opposite proved true, as certain tests did not improve their time despite the compiler having had time to optimize the code, as seen in figures 8 and 9. For the compute-intensive algorithms, the warmup of the JIT compiler did bring smaller improvements to certain tests, as seen in figures 4 and 5. It could be argued that the cold run was not a true cold run, as each algorithm had already been iterated 100/1000 times by the time the measurements were extracted.

However, it is important to keep in mind that the tests had to be iterated in order to minimize the margin of error and obtain a usable result. An interesting note is that the Chrome browser offered better JavaScript performance overall, while Firefox offered better WebAssembly performance overall, as presented in the results section.
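The cold/warm distinction can be sketched as a small timing harness: a cold measurement times the first iterations directly, while a warm measurement first lets the function run unmeasured so the JIT compiler can profile and optimize it. This is a simplified illustration of the procedure, with a hypothetical workload, not the harness used in the experiment.

```javascript
// Time `fn` over `iterations` runs, optionally after `warmup` unmeasured
// runs that give the JIT compiler a chance to optimize the code first.
function measure(fn, iterations, warmup = 0) {
  for (let i = 0; i < warmup; i++) fn(); // warm phase: not timed
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn();
  return Date.now() - start; // elapsed milliseconds
}

// Hypothetical workload standing in for a benchmark algorithm.
const fib = (n) => (n < 2 ? n : fib(n - 1) + fib(n - 2));

const cold = measure(() => fib(20), 30);       // cold: no prior iterations
const warm = measure(() => fib(20), 30, 1000); // warm: 1000 warmup runs
console.log({ cold, warm });
```

Even the "cold" measurement above loops 30 times, which mirrors the caveat in the text: some optimization can already occur during the measured iterations themselves.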


6.4 Consistency

RQ2.1. Does WebAssembly offer a more consistent and predictable performance over JavaScript when executed in the Firefox and Chrome browsers?

The standard deviations calculated from the datasets and presented in tables 1 and 2 indicate mixed results for the Ubuntu OS. The warm iteration results show that for the majority of the algorithms WebAssembly offered a significant improvement in the consistency of executions, the exceptions being Merge sort on Firefox 78 and Array Reverse on both browsers. The cold iteration results show a significant drop in consistency for WebAssembly in algorithms such as Merge sort and Array Reverse, while the other benchmarks offered smaller gains in consistency.

The standard deviations extracted from the Windows 10 datasets indicate that the warm iterations also offered a boost in consistency across both browsers, where WebAssembly in general proved more consistent in the Merge sort and Array Reverse benchmarks. The cold iterations did not offer great improvements in consistency across the different browsers; the improvement was limited to a few of the benchmarks, but it is still fair to say that WebAssembly helped improve consistency across the majority of benchmarks.

The reason WebAssembly is able to offer more consistent performance is most likely that it is more type restricted and goes through ahead-of-time compilation and optimization, as opposed to JavaScript, which needs to be optimized in the browser and pass through the various stages of the just-in-time compiler before being handed to the compiler backend that targets the specific architecture of the end user's computer. Even though WebAssembly code does not need as much optimization, it could be argued that it benefited more from the browser's warm iterations than regular JavaScript did. The reason behind this is speculative, but it could be related to the very nature of ahead-of-time compilation, which in most cases offers a slower start but better optimization of the source code. Just-in-time compilation can introduce more inconsistency in performance because of how it optimizes code: it may suddenly need to de-optimize, stepping back in the code and restarting the optimization process, which affects performance. This is necessary because of how the JavaScript language is designed and used; its dynamic nature forces the compiler to stay on its toes and quickly return to a baseline should its assumptions prove incorrect. Hence, developers stand to gain in performance by writing predictable code for their applications, irrespective of the nature of the code.


7 CONCLUSION

WebAssembly differs from JavaScript by providing low-level functionality to the web through its binary format and its more direct mapping to machine code. Hence, it does not need to be parsed like regular JavaScript; it can be downloaded by the browser and compiled directly to the underlying machine language. WebAssembly is best used for code that relies heavily on calculations rather than on complex data structures, where there is currently significant overhead. A weakness of WebAssembly is that it does not provide a garbage collector the way JavaScript does; garbage collection is instead settled by the language it is compiled from, meaning it has to be shipped as part of the program rather than provided by the environment, which increases the size of the program. It is, however, important to point out that garbage collection support is one of the long-term goals of the technology, and given time the issue should be resolved.

Overall, the results are in line with previous research concluding that WebAssembly does not always offer better performance but rather more consistent performance. The technology proved faster in the benchmarks focused on pure calculations, while in the tests involving arrays it was significantly slower than JavaScript. The reason behind this is most likely the overhead of passing data structures between the two technologies; as the technology matures, performance in this area is expected to improve as well. The technology also managed to improve its runtime through the further optimizations performed by the browser in addition to its ahead-of-time compilation and optimization.

The study showed that WebAssembly is better than JavaScript in certain scenarios. However, even though the technology offers benefits, there are overheads to consider when using WebAssembly in applications. Downloading and compiling a WebAssembly module is fast, but if the technology is used simply to micro-optimize an existing application, it might quickly turn out not to be worth the effort. Hence, the recommendation is to use the technology where an application is severely bottlenecked by JavaScript in its computations. By moving those parts of an application to WebAssembly, a developer can effectively speed up and improve the application.

7.1 Key conclusions:

• WebAssembly offers a more direct mapping to machine code, making it faster for the JavaScript engine to compile.

• WebAssembly offers better performance in pure calculations but falls behind when dealing with data structures, due to the overhead of passing data between the two technologies.

• WebAssembly offers more consistent performance than JavaScript in the majority of cases, since it does not use dynamic typing.

• WebAssembly does not have its own garbage collector but relies on the underlying language, which can further increase file size.

• WebAssembly should be used for applications that rely on heavy calculations, such as photo/video editing, games or computer vision.

• WebAssembly effectively opens up the web to other languages, expanding the current ecosystem and making the web a universal platform.

• WebAssembly can be considered to be in its infancy at the moment, but as the technology develops and its tools mature, it can be expected to improve tremendously.


8 FUTURE WORK

As this paper performs only a minimal set of tests, it would be recommended to employ a broader set of benchmarks that test more aspects of the technology. However, it is important to keep in mind that several factors come into play when benchmarking a technology, and these factors should be identified and mitigated as much as possible. It would be recommended to benchmark the technology as it progresses, using real-world examples, as these would be most in line with how it is actually used. There are also several other aspects that can be experimented with in addition to runtime and consistency of performance, such as the effects of different compiler options. Furthermore, a look into how performance differs between WebAssembly modules compiled from different languages would also be beneficial.


REFERENCES

[1] A. Haas, A. Rossberg, D. L. Schuff, B. L. Titzer, M. Holman, D. Gohman, L. Wagner, A.

Zakai, and JF. Bastien, 2017, ‘Bringing the web up to speed with WebAssembly’, In

Proceedings of the 38th ACM SIGPLAN Conference on Programming Language Design and Implementation (PLDI 2017), Association for Computing Machinery, New York, NY, USA, 185–200, [Online]. Available: https://doi.org/10.1145/3062341.3062363. [Accessed Feb. 3, 2020]

[2] OPTASY. ‘WebAssembly vs JavaScript: Is WASM Faster than JS? When Does JavaScript Perform Better?’, Medium, [Online]. 19 December 2018,

https://medium.com/@OPTASY.com/webassembly-vs-javascript-is-wasm-faster-than-js- when-does-javascript-perform-better-db86d2ecf2cc . [Accessed Feb. 3, 2020].

[3] A. Turner, ‘WebAssembly Is Fast: A Real-World Benchmark of WebAssembly vs. ES6’, Medium, December 18, 2019, [Online]. Available:

https://medium.com/@torch2424/webassembly-is-fast-a-real-world-benchmark-of- webassembly-vs-es6-d85a23f8e193. [Accessed Feb. 3, 2020]

[4] Vladimir, 'WebAssembly vs. the world. Should you use WebAssembly?', Sqreen, August 21, 2018, [Online]. Available: https://blog.sqreen.com/webassembly-performance/. [Accessed: March 1, 2020]

[5] K. Peterson, ‘WebAssembly Overview: So Fast! So Fun! Sorta Difficult!’, Lucidchart, May 16, 2017, [Online]. Available:

https://www.lucidchart.com/techblog/2017/05/16/webassembly-overview-so-fast-so-fun- sorta-difficult/. [Accessed Feb. 3, 2020]

[6] AssemblyScript, ‘Built with Assemblyscript’. AssemblyScript, December 2019, [Online].

Available: https://docs.assemblyscript.org/community/built-with-assemblyscript . [Accessed:

March. 1, 2020]

[7] AssemblyScript, ‘The AssemblyScript Book’. AssemblyScript, December 2019, [Online].

Available: https://docs.AssemblyScript.org/. [Accessed: Feb. 3, 2020]

[8] Stack Overflow. (2019). Developer Survey Results 2019

https://insights.stackoverflow.com/survey/2019#most-popular-technologies [Accessed March. 1, 2020]

[9] Statcounter GlobalStats, ‘Desktop Browser Market Share Worldwide’. Statcounter

GlobalStats, August 2020, [Online]. Available: https://gs.statcounter.com/browser-market-

share/desktop/worldwide. [Accessed: Aug. 24, 2020]


[10] D. Herrera, H. Chen, E. Lavoie, and L. Hendren, October 2018, Numerical computing on the web: benchmarking for the future, In Proceedings of the 14th ACM SIGPLAN

International Symposium on Dynamic Languages, pp. 88-100, [Online]. Available:

https://doi.org/10.1145/3276945.3276968. [Accessed Feb. 3, 2020]

[11] M. Reiser and L. Bläser, October 2017, Accelerate JavaScript applications by cross- compiling to WebAssembly, In Proceedings of the 9th ACM SIGPLAN International

Workshop on Virtual Machines and Intermediate Languages, pp. 10-17, [Online]. Available:

https://doi.org/10.1145/3141871.3141873. [Accessed Feb. 3, 2020]

[12] WebAssembly Concepts - WebAssembly, MDN, January 14, 2020, [Online]. Available:

https://developer.mozilla.org/en-US/docs/WebAssembly/Concepts . [Accessed: Feb. 3, 2020]

[13] L. Clark, ‘Creating and working with WebAssembly modules’, Mozilla Hacks, February 28, 2018, [Online]. Available: https://hacks.mozilla.org/2017/02/creating-and-working-with- webassembly-modules/. [Accessed: March. 7, 2020]

[14] Overview - WebAssembly 1.1, 2017, WebAssembly Community Group., [Online].

Available: https://webassembly.github.io/spec/core/intro/overview.html. [Accessed Feb. 3, 2020]

[15] WebAssembly / binaryen: Compiler infrastructure and toolchain library for WebAssembly, 2020. GitHub, [Online]. Available:

https://github.com/WebAssembly/binaryen. [Accessed: March. 7, 2020]

[16] JavaScript, MDN, July 19, 2020, [Online]. Available: https://developer.mozilla.org/en- US/docs/Web/JavaScript. [Accessed: Aug. 9, 2020]

[17] ECMAScript Language Specification: Standard ECMA-262, 11th ed., ECMA, 2020, [Online]. Available: https://www.ecma-international.org/ecma-262/11.0/index.html. [Accessed: Aug. 9, 2020]

[18] B. Meurer and M. Bynens, ‘JavaScript engine fundamentals: Shapes and Inline Caches’, Mathias Bynens, June 14, 2018, [Online]. Available: https://mathiasbynens.be/notes/shapes-ics. [Accessed: Mar. 7, 2020]

[19] H. Park, S. Kim, J. G. Park, and S. M. Moon, 2018, Reusing the Optimized Code for JavaScript Ahead-of-Time Compilation, ACM Transactions on Architecture and Code Optimization (TACO), 15(4), 1-20, [Online]. Available: https://doi.org/10.1145/3291056. [Accessed: Mar. 1, 2020]

[20] T. Æ. Mogensen, 2009, Basics of Compiler Design, self-published.


[21, 7] F. Hinkelmann, ‘Understanding V8’s Bytecode’, Medium, August 16, 2017, [Online]. Available: https://medium.com/dailyjs/understanding-v8s-bytecode-317d46c94775. [Accessed: Feb. 3, 2020]

[22, 6] Documentation, V8, [Online]. Available: https://v8.dev/docs. [Accessed: Mar. 7, 2020]
