
Linköpings universitet SE–581 83 Linköping

Linköping University | Department of Computer and Information Science

Bachelor’s thesis, 16 ECTS | Datateknik

2020 | LIU-IDA/LITH-EX-G--20/070--SE

A Performance Comparison of Auto-Generated GraphQL Server Implementations

En jämförelse av automatiskt genererade GraphQL-serverimplementationer

David Ångström

Markus Larsson

Supervisor: Sijin Cheng
Examiner: Olaf Hartig



Copyright

The publishers will keep this document online on the Internet - or its possible replacement - for a period of 25 years starting from the date of publication barring exceptional circumstances.

The online availability of the document implies permanent permission for anyone to read, to download, or to print out single copies for his/hers own use and to use it unchanged for non-commercial research and educational purpose. Subsequent transfers of copyright cannot revoke this permission. All other uses of the document are conditional upon the consent of the copyright owner. The publisher has taken technical and administrative measures to assure authenticity, security and accessibility.

According to intellectual property law the author has the right to be mentioned when his/her work is accessed as described above and to be protected against infringement.

For additional information about the Linköping University Electronic Press and its procedures for publication and for assurance of document integrity, please refer to its www home page: http://www.ep.liu.se/.

© David Ångström Markus Larsson


Abstract

As databases and traffic over the internet grow larger by the day, the performance of sending information has become a target of great importance. In past years, other software architectural styles such as REST have been used, as REST is a reliable architecture that works well when one has a dependable internet connection. In 2015, Facebook released the query language GraphQL to the public as an alternative to REST. GraphQL improves data fetching by, for example, removing the possibility of under- and overfetching. This means that a client only gets the data which it has requested, nothing more, nothing less.

Creating a GraphQL schema and server implementation requires time, effort and knowledge. This is, however, a requirement for running GraphQL over your current legacy database. For this reason, multiple server implementation tools have been created by vendors to reduce development time; these tools instead auto-generate a GraphQL schema and server implementation from an already existing database.

This bachelor's thesis picks, runs and compares the benchmarks of two different server implementation tools, Hasura and PostGraphile. This is done using a benchmark methodology based on technical difficulties (choke points). The results of our benchmark suggest that the throughput is larger for Hasura than for PostGraphile, whilst the query execution time as well as the query response time is similar. PostGraphile is better at paging without offset as well as at ordering, but in all other cases Hasura outperforms PostGraphile or shows similar results.


Acknowledgments

First and foremost, we want to thank our examiner Olaf Hartig for giving us the opportunity to do this degree project. We have especially appreciated all of the constructive feedback given to us to improve this work.

We would also like to thank our supervisor Sijin Cheng, who has spent several late nights coding and debugging the test drivers. This work was fairly ambitious from our point of view, and several concepts were hard to comprehend. Without Sijin's help we would have had a much harder time. Due to the COVID-19 pandemic response, this degree project had to be done from home. We have therefore truly appreciated the quick responses through email and the overall communication from both Olaf and Sijin.

Finally, we want to thank our opponents Christoffer Akouri and Jesper Eriksson for taking the time to improve our thesis by giving us two additional perspectives.


Contents

Abstract
Acknowledgments
Contents
List of Figures
List of Tables
1 Introduction
   1.1 Motivation
   1.2 Aim
   1.3 Research questions
   1.4 Delimitations
2 Theory
   2.1 GraphQL
   2.2 Server translation tools
   2.3 Docker
   2.4 Related Work
3 Method
   3.1 The Linköping GraphQL Benchmark (LinGBM)
   3.2 Template translation
   3.3 Experiment environment
4 Results
   4.1 Throughput
   4.2 Query Execution Time & Query Response Time
5 Discussion
   5.1 Result
   5.2 Method
   5.3 The work in a broader context
6 Conclusion


List of Figures

2.1 Example GraphQL query (from the Hasura CLI) where the client (left) requests the field name from one object of type university. The result (right) is similar to the client's request but with the additional response written out.
2.2 Visual overview of the traversal of a request from the GraphQL client to the legacy database.
2.3 Left: Overview of Docker, Right: Overview of Virtual Machines.
3.1 LinGBM Entity-Relationship diagram.
3.2 Query templates related to aggregation.
4.1 The throughput with different number of clients of each query template with the dataset at scale 10. The left graph shows the results of Hasura and the right graph shows the results of PostGraphile.
4.2 Comparisons of the tools with 1, 2 and 3 clients with the dataset at scale 10, bigger is better.
4.3 The throughput with different number of clients of each query template with the dataset at scale 15. The left graph shows the results of Hasura and the right graph shows the results of PostGraphile.
4.4 Comparisons of the tools with 1, 2 and 3 clients with the dataset at scale 15, bigger is better.
4.5 The throughput with different number of clients of each query template with the dataset at scale 20. The left graph shows the results of Hasura and the right graph shows the results of PostGraphile.
4.6 Comparisons of the tools with 1, 2 and 3 clients with the dataset at scale 20, bigger is better.
4.7 Results for QET for all QTs and sizes of the dataset, lower is better.


List of Tables

4.1 Contains the average throughput at scale 10 of the dataset for Hasura and PostGraphile with 1, 2 and 3 clients.
4.2 Contains the average throughput at scale 15 of the dataset for Hasura and PostGraphile with 1, 2 and 3 clients.
4.3 Contains the average throughput at scale 20 of the dataset for Hasura and PostGraphile with 1, 2 and 3 clients.
4.4 Contains the average QET at scale 10, 15 and 20 of the dataset for Hasura and PostGraphile.
4.5 Contains the average QRT at scale 10, 15 and 20 of the dataset for Hasura and PostGraphile.
4.6 Shows how much of the QET which is the QRT for both Hasura and PostGraphile with the three tested scales of the dataset.


Chapter 1

Introduction

1.1 Motivation

GraphQL¹ was developed and used internally by Facebook in 2012 and was brought into the public in 2015 [6]. Multiple vendors of GraphQL server implementations have had their own performance-related test suites. However, the researchers at ADIT - the Database and Web Information Systems Group at Linköping University - observed that these were insufficient as a general performance benchmark. This is because of multiple factors, some of which are:

• a single constant dataset which cannot be scaled,
• a small fixed set of queries designed for the specific implementation of the server in use,
• too few metrics measured (usually only some sort of throughput),
• as well as single-machine setups, which further reduce the accuracy of the test since the driver concurrently handles both the GraphQL requests and the responses to these requests.

In addition to these limitations, the dataset as well as the queries in question are molded to fit the system they are running on, making the comparison of these systems unreliable. This thesis is a contribution to a larger project which attempts to resolve the issues previously mentioned². To achieve these goals, two scenarios were created. Scenario one represents cases in which data from a legacy database has to be exposed as a read-only GraphQL API with a user-specified GraphQL schema. This scenario has already been worked on by the researchers at ADIT. Scenario two is the one that will be focused on in this thesis: the case in which data from a legacy database is exposed as a read-only GraphQL API provided by an auto-generated GraphQL server implementation. Multiple server translation tools have been developed by vendors for this purpose. These tools work by receiving a legacy database and automatically providing a GraphQL schema and server implementation. Following these scenarios, the researchers at ADIT created a benchmark as well as appropriate query templates to test the technical difficulties of the systems used. With this benchmark, the performance may be looked at with additional metrics such as:

• Query Execution Time (QET)
• Query Response Time (QRT)
• Throughput

These metrics are further expanded upon in Chapter 3.

¹ https://graphql.org/



1.2 Aim

The aim of this work is to help vendors compare the performance of different GraphQL server implementations generated by certain server translation tools (henceforth referred to as tools). This work will also showcase possible limitations regarding the features of the chosen tools.

1.3 Research questions

In this section we outline the questions that we are seeking to answer in this thesis.

1. What differences in throughput, query execution time and query response time can be found when using the GraphQL server implementations provided by Hasura and PostGraphile?

2. Which of the tools is best suited in regards to the performance metrics for different query workloads?

1.4 Delimitations

For the work to fit within the scope of a bachelor's thesis we decided to reduce the number of tools to compare. There could very well be a large number of tools that would perform better or worse in the experiments that will be run but, due to the time constraints, two tools were chosen. It was our original intention to use a third tool called Prisma, but due to issues with setting up the environment, this tool was pushed aside.

The base version of a GraphQL server translation tool may be insufficient to run certain queries. As such, we decided that only plugins which are used in the setup tutorials (on the respective websites) of the tools are to be used in this benchmark. With this in mind, some queries will be left out of the tests for certain tools.


Chapter 2

Theory

This chapter provides the background needed to understand the thesis in its entirety. The following will be introduced to the reader: the query language GraphQL; the server translation tools and why they are necessary; how Docker works and how it will be utilized in the experiment; as well as related work in the form of a previous benchmark conducted by a maintainer of the server translation tool PostGraphile.

2.1 GraphQL

GraphQL is a query language that was developed by Facebook in the early 2010s and released in 2015, before being moved to the GraphQL Foundation, hosted by the Linux Foundation [2]. Since its inception it has been adopted by several major platforms such as Twitter, GitHub, Instagram and, obviously, Facebook¹. GraphQL makes it possible for the client to request specific data with a single request, unlike REST, which typically needs a request for each object.

A GraphQL query looks very similar to JavaScript Object Notation (JSON) and specifies which fields from which objects are to be retrieved [8]. One advantage of GraphQL is that it only retrieves the data that the client is asking for. Because a client can request specific data from a server (which has implemented GraphQL) in a single request, both overfetching and underfetching of data are minimized. While this is not critical for larger machines with good connections, it allows even mobile devices with slow connections to be fast [9]. An example of a request and response is shown in Figure 2.1.

¹ https://graphql.org/users/

Figure 2.1: Example GraphQL query (from the Hasura CLI) where the client (left) requests the field name from one object of type university. The result (right) is similar to the client's request but with the additional response written out.



Figure 2.2: Visual overview of the traversal of a request from the GraphQL client to the legacy database.

To make a GraphQL server able to query a database, an API has to exist that shows the server which data can be queried, thus creating a GraphQL service. In this case, a schema which consists of the types existing in the database has to be provided. After a query is received, it is validated against the schema and executed. A GraphQL schema mostly consists of object types, which include fields of different scalar types such as String, Int, Float, Boolean and ID. Object types can have other object types as fields; this is how GraphQL handles what would be a foreign-key relationship in SQL. In addition to object types, there exist enum types (a value can only be one of a select set of variants), interface types (a base type which other types can implement), etc.
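As a sketch in GraphQL's schema definition language (the type and field names here are illustrative, not taken from the benchmark schema), such object types could be declared like this:

```graphql
# Object type with scalar fields and a relationship field
type University {
  id: ID!
  name: String
  departments: [Department]  # what would be a foreign-key relationship in SQL
}

type Department {
  id: ID!
  name: String
  university: University     # back-reference to the owning university
}
```

A query against this schema is validated against these type definitions before it is executed.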

2.2 Server translation tools

GraphQL by itself is unable to communicate with the database and therefore requires a form of assistance when being implemented into your API. To achieve an auto-generated GraphQL server implementation, a server translation tool can be used. The tool is provided with a legacy database and can thus act as an intermediary between the GraphQL client and the database. The server generated by the server translation tool provides access to the database in terms of an auto-generated GraphQL schema (rather than a user-specified one). When a GraphQL query is sent to the server, the tool translates the given query into something the legacy database can understand before forwarding the request. The database sends back the result of the given query, which the tool translates into a GraphQL response and sends back to the client who sent the request. This can be seen in Figure 2.2.
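To illustrate the translation step, here is a minimal, hypothetical sketch (this is not how Hasura or PostGraphile work internally; the table, column and function names are made up) of turning a flat GraphQL-style field selection into a single SQL statement:

```python
# Hypothetical sketch of the translation step performed by a server
# translation tool: a root type, a flat field selection and optional
# arguments are combined into one SQL statement for the legacy database.

def translate_to_sql(root_type, selection, where=None):
    """Build a SQL query string from a GraphQL-style field selection."""
    columns = ", ".join(selection)
    sql = "SELECT {} FROM {}".format(columns, root_type)
    if where:
        conditions = " AND ".join(
            "{} = {}".format(column, repr(value))
            for column, value in where.items()
        )
        sql += " WHERE " + conditions
    return sql

# e.g. the query { university(nr: 5) { name } } could be translated to:
print(translate_to_sql("university", ["name"], {"nr": 5}))
# -> SELECT name FROM university WHERE nr = 5
```

A real tool additionally handles nested selections, joins for relationship fields, and parameterized values rather than string interpolation.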

2.3 Docker

The server implementations that are generated will be hosted using Docker. Docker is a lightweight open-source virtualization platform that can be used for developing, shipping and running applications. Docker uses containers as its means of virtualization and runs on top of the operating system (OS) already running as its host environment. Every container executes in an environment isolated from the other containers as well as from some parts of the host OS [7]. This is different from Virtual Machines (VMs), which sit on top of a hypervisor that distributes hardware to the different VMs, each of which in turn usually hosts a guest OS, see Figure 2.3 [1]. This means that VMs, compared to Docker containers, scale poorly when it comes to resources such as RAM, CPU and bandwidth.

2.4 Related Work

There have been some benchmarks done in the past between PostGraphile and Prisma when it comes to their GraphQL offerings. Two blog posts were made by Benjie Gillam, who works as a maintainer of PostGraphile. In these blog posts he compared Prisma against PostGraphile in its



Figure 2.3: Left: Overview of Docker, Right: Overview of Virtual Machines

newest version on the 22nd of May 2018, and again a year later when Prisma released their newest version [3, 4]. The first blog post showed Prisma edging out PostGraphile in performance for simple requests. For more complex queries, PostGraphile showed massive improvements over Prisma: PostGraphile's requests per second were up to four times Prisma's, and a larger share of these requests were also successful. One of the queries did not have any successful requests at all for Prisma [3]. In the updated comparison, PostGraphile was superior in most tests, with Prisma having similar performance for simple queries [4]. Both of these blog posts suffer from several of the flaws mentioned in the introduction of this thesis (a single constant dataset, the benchmark being run on a single machine, a small fixed set of queries, as well as not using enough metrics). Another point to note is that the tests were only run once, which means that a sudden system procedure may cause the performance in the benchmark to suffer greatly. Flaws aside, these tests may be interesting as a comparison to the benchmarks found in this thesis.

Other than the benchmarks presented by Benjie Gillam, there does not appear to be any research published at the current time related to the benchmarking of auto-generated GraphQL server implementations.


Chapter 3

Method

This chapter provides details on how the experiments were performed, the necessary data and software used, as well as other details needed to replicate the experiment. The following will be covered: the data schema and the dataset used in the experiment; the query workloads as well as the choke points being tested with these queries; the given query templates and how they were translated to fit each tool; the performance metrics we use to define the result of this thesis; and lastly the experiment environment and the hardware that will be used to run the tests.

3.1 The Linköping GraphQL Benchmark (LinGBM)

The method used in this work is based on the design methodology applied in the LinGBM project¹, which was created by the researchers at ADIT.

Data Schema

The dataset used is a scalable and synthetic dataset based on the Lehigh University Benchmark (LUBM) [5]. Since the dataset is scalable, it can be generated in an unlimited variety of sizes instead of having to re-implement a different dataset from scratch. The generated schema can be used to create an SQL database.

The generated dataset contains fictional universities with departments that have different faculty members and students, who teach or take courses and have publications. In addition, every university has graduates (with undergraduate, master's or doctoral degrees). A faculty member may work for a department in the same or a different university. A graduate may also have acquired their degree from the same or another university. The scenario consists of 12 types of entities and 18 types of relationships between these. See Figure 3.1 for the ER diagram.

Workloads

To simulate different workloads for the respective GraphQL server translation tools, 16 GraphQL query templates are provided by LinGBM. These templates were based on a GraphQL schema which was developed by the researchers at ADIT and used to access the datasets in the LinGBM GraphQL schema. Each query template consists of one or more placeholders. The values for these placeholders are generated through the use of the dataset generator and later get instantiated through the use of the query generator. The resulting queries are what provide the query workloads.



Figure 3.1: LinGBM Entity-Relationship diagram

Choke Points

The researchers at ADIT have identified multiple technical difficulties (so-called choke points) for GraphQL server implementations. These technical difficulties concern different functionalities of GraphQL queries, such as string comparison, multi-attribute retrieval, etc. The server translation tools may perform better or worse than one another with respect to each of these choke points. Each query template is specifically created to test at least one of the choke points (CP) itemized in the following subsections.

Choke Points Related to Attribute Retrieval

Some queries may request the retrieval of multiple attributes of a data object. A very simplistic way to do this is to execute several different operations to fetch these attributes from the database. There are ways to optimize this; one of them is batching. Instead of executing multiple queries to gain all the attributes, batching combines the wanted attributes into one query and sends that to the database.

• CP 1.1: Multi-attribute retrieval
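As an illustration of this choke point (hypothetical helper functions, not part of LinGBM), the difference between the naive approach and batching can be sketched as:

```python
# Naive approach: one SQL statement per requested attribute,
# i.e. one database round trip each.
def fetch_naive(table, key, attributes):
    return ["SELECT {} FROM {} WHERE id = {}".format(a, table, key)
            for a in attributes]

# Batching: the requested attributes are combined into a single statement.
def fetch_batched(table, key, attributes):
    return "SELECT {} FROM {} WHERE id = {}".format(
        ", ".join(attributes), table, key)

queries = fetch_naive("faculty", 7, ["id", "email", "telephone"])
print(len(queries))  # 3 round trips to the database
print(fetch_batched("faculty", 7, ["id", "email", "telephone"]))
# -> SELECT id, email, telephone FROM faculty WHERE id = 7
```

A server that batches well should show noticeably higher throughput on multi-attribute query templates than one that issues a query per attribute.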

Choke Points Related to Relationship Traversal

Traversal over related data objects is done in a single request; however, supporting such a traversal in a GraphQL server might cause issues. The following choke points focus on these.

• CP 2.1: Traversal of different 1:N relationship types
• CP 2.2: Efficient traversal of 1:1 relationship types
• CP 2.3: Relationship traversal with and without retrieval of intermediate object data
• CP 2.4: Traversal of relationships that form cycles



Choke Points Related to Ordering and Paging

Limiting the number of results from a query (paging) is sometimes enforced by servers to avoid unnecessarily large operations. GraphQL also has this functionality. In addition, there is an option to specify the order in which the resulting objects are to be returned. These choke points aim to test these functionalities.

• CP 3.1: Paging without offset
• CP 3.2: Paging with offset
• CP 3.3: Ordering
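For example, a paged and ordered request in Hasura's argument convention might look roughly as follows (the field and argument names are assumptions based on Hasura's conventions; PostGraphile instead exposes connection-style arguments such as first, offset and orderBy):

```graphql
query {
  university(limit: 10, offset: 20, order_by: { name: asc }) {
    name
  }
}
```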

Choke Points Related to Searching and Filtering

The field arguments in a GraphQL query are very useful since you can specify a subset of the result which is to be returned, for example using a match or startswith keyword. These choke points test this functionality with more complex filtering.

• CP 4.1: String matching
• CP 4.2: Date matching
• CP 4.3: Subquery-based filtering
• CP 4.4: Subquery-based search
• CP 4.5: Multiple filter conditions

Choke Points Related to Aggregation

An advanced feature of a GraphQL API is the possibility to apply aggregate functions over queried data. Some of these aggregate functions are calculations such as SUM, AVERAGE and MAX. Aggregate functions may also be things such as counting the number of elements in the query result.

• CP 5.1: Calculation-based aggregation
• CP 5.2: Counting

Performance metrics

To present the results certain performance metrics were chosen by the researchers at ADIT as a sufficient representation of what is important in a web server environment. The chosen performance metrics are the following.

Query execution time

Query execution time (QET) is the time from when a query is sent from the GraphQL client to the GraphQL server until the complete query result has been received by the client. This time will be represented in milliseconds (ms).

Query response time

Query response time (QRT) is the time from when a query is sent from the GraphQL client to the GraphQL server until the client begins receiving the query result. This time will be represented in ms.


Figure 3.2 shows query template 15 in its three variants:

Provided LinGBM template:

query count_query($universityID: ID) {
  university(nr: $universityID) {
    graduateStudentConnection {
      aggregate { count }
    }
  }
}

PostGraphile:

query count_query($universityID: ID) {
  allUniversities(condition: { nr: $universityID }) {
    nodes {
      graduatestudentsByUndergraduatedegreefrom {
        totalCount
      }
    }
  }
}

Hasura:

query count_query($universityID: ID) {
  university(where: { nr: { _eq: $universityID } }) {
    graduatestudents_aggregate {
      aggregate { count }
    }
  }
}

Figure 3.2: Query templates related to aggregation.

Throughput

Throughput is the number of queries completely processed by a GraphQL server and client(s) within a specific time interval. A query is completely processed when the client receives the complete result for it. The time interval will be measured in seconds, and the chosen interval is 30 s.
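As a small sketch of the three metric definitions above (hypothetical helper functions; the LinGBM test drivers record these metrics themselves), they can be computed from per-query timestamps as:

```python
# Hypothetical helpers computing the three metrics from timestamps given in
# seconds. The LinGBM test drivers record these metrics themselves; this
# merely restates the definitions as code.

def query_execution_time_ms(sent, completed):
    """QET: from sending the query until the complete result has arrived."""
    return (completed - sent) * 1000.0

def query_response_time_ms(sent, first_byte):
    """QRT: from sending the query until the result starts arriving."""
    return (first_byte - sent) * 1000.0

def throughput(completion_times, interval_s=30.0):
    """Number of queries whose complete result arrived within the interval."""
    return sum(1 for t in completion_times if t <= interval_s)

print(query_execution_time_ms(0.0, 0.25))   # -> 250.0
print(query_response_time_ms(0.0, 0.125))   # -> 125.0
print(throughput([5.0, 12.0, 29.9, 31.2]))  # -> 3
```

Note that QRT is always at most the QET for the same query, since the result must start arriving before it can finish arriving.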

3.2 Template translation

The server translation tools (Hasura and PostGraphile) generate a GraphQL schema and server from the legacy database. The schemas generated by Hasura and PostGraphile may differ from one another in the sense that different queries have to be written to achieve the same result. Given this, the queries generated from the query templates provided by LinGBM may not match the generated schemas of Hasura and PostGraphile and thus have to be adapted to do so.

In order to measure the performance of the generated servers, this has to be done for both Hasura and PostGraphile. The final result should be a query that covers all of the necessary choke points as well as produces the same query result. The process of translating was done by studying the provided LinGBM query templates and the choke points that the given query would cover, as well as by studying the newly auto-generated GraphQL schemas. A comparison of the 16 query results using the different GraphQL server translation tools was made to make sure the resulting fields were equivalent given the same parameters. See Figure 3.2 for an example of the syntactical differences in query template 15 between the provided template and the translated templates used for Hasura and PostGraphile. Generally, Hasura and PostGraphile generate similar GraphQL schemas in the sense that for every table which has a foreign key in the legacy database, there exists a nested field of the related type which can be accessed.

To clarify: in the legacy database there exists a table called Faculty which has the following fields consisting of native datatypes: id, email and telephone. These all get generated



normally as scalar datatypes in the auto-generated GraphQL schemas for both Hasura and PostGraphile. In addition to these, Faculty possesses foreign keys. A Faculty member has received a master's degree from a specific university, and thus Faculty has a foreign key called masterDegreeFrom which contains the ID of the University, see Figure 3.1. The schema which gets auto-generated by both tools includes a type called Faculty which, besides the fields of scalar datatypes, has two fields to represent the foreign key, called masterdegreefrom and universityByMasterdegreefrom. The first field stores an integer (which refers to the specific University ID where the Faculty member got their master's degree) and the second field contains the whole University object and its contents.
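Sketched in GraphQL schema definition language, the generated Faculty type described above could look roughly like this (the exact generated names and nullability differ between Hasura and PostGraphile):

```graphql
type Faculty {
  id: ID!
  email: String
  telephone: String

  # Two representations of the masterDegreeFrom foreign key:
  masterdegreefrom: Int                       # the raw University ID
  universityByMasterdegreefrom: University    # the related University object
}
```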

The main difference between the schemas generated by PostGraphile and Hasura is that PostGraphile uses an interface called Node, which all types implement and which stores a unique global ID for each object created. This difference can be seen in Figure 3.2. As can be noted in the figure, allUniversities has access to an array of these nodes, which store the ID of the object that is sought after in this example.

Regarding PostGraphile, it was initially not possible to translate query templates 10, 13, 14 and 16 to the auto-generated schema. As mentioned in Section 1.4, a decision was made to only use the plugins which were used in the setup tutorial of the respective server translation tool. In the tutorial for PostGraphile, the plugin postgraphile-plugin-connection-filter² is used. This plugin makes it possible for PostGraphile to execute the queries based on query template 10 because of the added filter functionality. However, query templates 13, 14 and 16 could still not be translated.

3.3 Experiment environment

There will be three different Docker containers. One container will run a PostgreSQL database server with the generated database loaded, and the other two containers will run the auto-generated server implementations: one the server implementation of Hasura v1.2.1 and the other that of PostGraphile v4.6.0.

Using the dataset generator, the dataset scales generated and tested will be 10, 15 and 20. To manage the generated databases, the open-source relational database management system (RDBMS) PostgreSQL³ will be used. At the time of writing this thesis, it is the only RDBMS that works with both of the tools used. Using the query generator, 5000 queries per query template will be generated to reduce the number of duplicate queries used for most query templates. These queries are based on the query templates that have been translated for both of the server translation tools.

To run these tests, the LinGBM test driver tool will be used. There are two versions: one which tests throughput and another which tests QET and QRT. These test drivers run the queries generated from the translated query templates and record the metrics into a .csv file. Both tests will be run on scales 10, 15 and 20 of the dataset, as well as with 1, 2 and 3 clients for the throughput test. The test driver testing throughput will be repeated 5 times for every number of clients and every scale of the dataset. Regarding the test driver testing QET and QRT, 100 generated queries will be run based on each query template. This will be done for each scale of the dataset tested.

The hardware that will be used is a Hewlett-Packard laptop. The system runs Debian GNU/Linux 10 (buster) as its OS. The laptop uses an Intel Core i5-7200U processor with 8 GB of LPDDR3-1866 SDRAM and an M.2 SSD.
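As a sketch of such a setup (the service names, ports and credentials here are invented for illustration; the thesis does not specify its exact container configuration, and the image tags merely mirror the versions stated above), a Docker Compose file along these lines could wire the three containers together:

```yaml
# Hypothetical three-container setup: one PostgreSQL database and the two
# auto-generated GraphQL servers. All names and credentials are illustrative.
version: "3"
services:
  db:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: benchmark
  hasura:
    image: hasura/graphql-engine:v1.2.1
    environment:
      HASURA_GRAPHQL_DATABASE_URL: postgres://postgres:benchmark@db:5432/postgres
    ports:
      - "8080:8080"
    depends_on:
      - db
  postgraphile:
    image: graphile/postgraphile:4.6.0
    command: ["--connection", "postgres://postgres:benchmark@db:5432/postgres"]
    ports:
      - "5000:5000"
    depends_on:
      - db
```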

² https://github.com/graphile-contrib/postgraphile-plugin-connection-filter
³ https://www.postgresql.org/


Chapter 4

Results

This chapter covers the results of the experiments performed with regard to the metrics brought up in earlier sections. When a specific query template is mentioned, it will be noted as QT1, QT2, etc.

4.1 Throughput

The throughput results are represented in two ways. First, there are the results showing the throughput for differing numbers of clients for a specific tool. These can be seen in Figures 4.1, 4.3 and 4.5. Second, there is a comparison between the throughput of Hasura and PostGraphile for all of the different query templates. This comparison is seen in Figures 4.2, 4.4 and 4.6. The raw data for these graphs can also be found in Tables 4.1, 4.2 and 4.3.

Observing Figures 4.2, 4.4 and 4.6 shows that the throughput for Hasura is larger than that of PostGraphile on all query templates but QT2 and QT9 (which are about equal) as well as QT8 (on which PostGraphile outperforms Hasura). This holds true across all of the dataset sizes and numbers of clients. Furthermore, the throughput of Hasura can be observed to increase slightly more than the throughput of PostGraphile with a higher number of clients. However, comparing the tools across the different scales of the dataset, PostGraphile's throughput scales slightly better: looking at the bars in Figures 4.2, 4.4 and 4.6, PostGraphile progresses closer to Hasura's throughput.

Looking at Figures 4.1, 4.3 and 4.5, one can see that the increase in throughput with the number of clients is diminishing. As the number of clients increases, the throughput eventually stabilizes. This can especially be seen in the right graph of Figure 4.5, but it holds for all of the graphs: there is a large increase in throughput between one and two clients, whereas the increase between two and three clients is substantially lower.

The throughput for QT9 is very low and does not show well in the graphs (especially at dataset scale 20, Figure 4.6). This is because the number of queries finished was very small for both tools in comparison to the other query templates. The results favored Hasura, with on average about five more completed queries for one and two clients; with three clients, Hasura only had one more completed query on average at dataset scales 10 and 15. However, the throughput was the same at dataset scale 20 with three clients.


[Figure 4.1 charts: two bar graphs (Hasura left, PostGraphile right); x-axis: Query Template (1-16), y-axis: Number of Queries in 30 s; series: 1, 2 and 3 clients]

Figure 4.1: The throughput with different number of clients of each query template with the dataset at scale 10. The left graph shows the results of Hasura and the right graph shows the results of PostGraphile.

[Figure 4.2 charts: three bar graphs comparing Hasura and PostGraphile with 1, 2 and 3 clients; x-axis: Query Template (1-16), y-axis: Number of Queries in 30 s]

Figure 4.2: Comparisons of the tools with 1, 2 and 3 clients with the dataset at scale 10, bigger is better.


[Figure 4.3 charts: two bar graphs (Hasura left, PostGraphile right); x-axis: Query Template (1-16), y-axis: Number of Queries in 30 s; series: 1, 2 and 3 clients]

Figure 4.3: The throughput with different number of clients of each query template with the dataset at scale 15. The left graph shows the results of Hasura and the right graph shows the results of PostGraphile.

[Figure 4.4 charts: three bar graphs comparing Hasura and PostGraphile with 1, 2 and 3 clients; x-axis: Query Template (1-16), y-axis: Number of Queries in 30 s]

Figure 4.4: Comparisons of the tools with 1, 2 and 3 clients with the dataset at scale 15, bigger is better.


[Figure 4.5 charts: two bar graphs (Hasura left, PostGraphile right); x-axis: Query Template (1-16), y-axis: Number of Queries in 30 s; series: 1, 2 and 3 clients]

Figure 4.5: The throughput with different number of clients of each query template with the dataset at scale 20. The left graph shows the results of Hasura and the right graph shows the results of PostGraphile.

[Figure 4.6 charts: three bar graphs comparing Hasura and PostGraphile with 1, 2 and 3 clients; x-axis: Query Template (1-16), y-axis: Number of Queries in 30 s]

Figure 4.6: Comparisons of the tools with 1, 2 and 3 clients with the dataset at scale 20, bigger is better.


4.2. Query Execution Time & Query Response Time Hasura-1 PostGraphile-1 Hasura-2 PostGraphile-2 Hasura-3 PostGraphile-3 5244 4448,2 7813,8 6451,6 9034 7216,6 393,6 393,4 600,8 603,2 697 692,2 10627,8 7480,6 15564 9916,2 17370,2 10450,4 4889,4 3796,8 7144,2 5415,4 8299 5976 280,4 257 428,6 371,2 470,4 415,6 5208,2 4178,6 7584,4 5974 8809,6 6642,6 5291 4652,8 8109,4 6748 9400,6 7476,4 632 1380,4 889 1813 1013 2020,4 122 117,8 192,4 186 205,2 204,4 1327,4 1321,6 1950,2 1933,4 2292,2 2251,6 5158,6 4484,2 7590,6 6460 8906,4 7226 7453,2 6308,4 11298,4 8460,8 13198 9310,8 4272,4 0 6336,8 0 7501 0 4587,2 0 6875,8 0 8191,8 0 5476,6 4832,8 8050,4 7044,6 9493 7894,4 5394,8 0 7967,4 0 9433 0 QT 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16

Table 4.1: Contains the average throughput at scale 10 of the dataset for Hasura and Post-Graphile with 1, 2 and 3 clients.

QT   Hasura-1   PostGraphile-1   Hasura-2   PostGraphile-2   Hasura-3   PostGraphile-3
1    3710       3462.2           5713.8     5110.8           6819.2     5814.8
2    185        184.6            287.2      289              316.4      314.6
3    7690.4     6879.4           12621      9204.6           14985.8    9950.4
4    3241.6     2780             5151.6     3918.6           6226.8     4467.2
5    184.8      146.6            289.2      217.8            311.8      238.2
6    3527.6     3015.6           5465.6     4333.8           6580.4     4960.2
7    4273.6     3833.4           6550       5523.2           7694.4     6269.8
8    441.2      1169.8           643.8      1533             718        1744.6
9    49         47               77.6       75.8             81.2       83.4
10   913.2      923              1339.2     1344.6           1570.8     1573.8
11   3799.8     3496.6           5672.6     5001.6           6696       5716.8
12   5549.4     5504.8           9107.2     7358             10845      8256.4
13   3044.2     0                4653.6     0                5588.4     0
14   3408.4     0                5160.8     0                6233.8     0
15   4086.2     3810.6           6076.2     5540.4           7243.4     6399.6
16   3924      0                 5962.6     0                7203       0

Table 4.2: Contains the average throughput (queries completed in 30 s) at scale 15 of the dataset for Hasura and PostGraphile with 1, 2 and 3 clients.

QT   Hasura-1   PostGraphile-1   Hasura-2   PostGraphile-2   Hasura-3   PostGraphile-3
1    3048.6     2831.6           4594.4     4162.8           5492.8     4836.4
2    88.4       87               132.2      132              148.2      145.4
3    6752.8     6346.2           11457.6    8540.8           13338.6    9363.8
4    2653.6     2193             4127.2     3121.6           4939.2     3526.8
5    128.8      102              204.8      155.8            218.8      165.2
6    2895.4     2386.2           4336.8     3403.2           5210.4     3899.2
7    3942.2     3340             5923.6     4788.8           6973.2     5513.2
8    338.6      983              499        1318.4           544.6      1498.4
9    28         27               43         42.8             47         46.8
10   480.8      473.2            546.2      535.2            591.2      542.2
11   3024.2     2808.2           4467.8     4056             5304.2     4664
12   4995.6     4681             7700.8     6651             9172.6     7387.8
13   2514.6     0                3684       0                4408.4     0
14   2802.8     0                4131.6     0                4914.6     0
15   3221.4     3126.2           4794.6     4513.8           5753.4     5283
16   3231.6     0                4711.2     0                5748.6     0

Table 4.3: Contains the average throughput (queries completed in 30 s) at scale 20 of the dataset for Hasura and PostGraphile with 1, 2 and 3 clients.

4.2 Query Execution Time & Query Response Time

The QET and QRT of each query template for Hasura and PostGraphile are represented in Figures 4.7 and 4.8, comparing the two tools at the different scales of the dataset. As the average QET and QRT for QT9 were substantially larger than for the other query templates, a decision was made to place the results for QT9 in a separate graph. The averaged raw data can also be found in Tables 4.4 and 4.5. Table 4.6 shows the relative time differences between QET and QRT.

As can be noted in Figures 4.7 and 4.8, the QET and QRT of the two tools are very similar, with the exception of QT5 and QT8. The QET and QRT of QT8 are higher for Hasura, which further confirms the result of the throughput test for this query template. For QT5, the QET and QRT are instead higher for PostGraphile.

For both of the tools, an interesting change was observed in the QET and QRT across the sizes of the dataset, appearing whenever the QET and QRT for a given query template reached 45-50 ms. The usual change in QET and QRT between the different scales of the dataset was rather low; in fact, the test driver only recorded an increase of between 0.04 ms and 2.81 ms for all query templates below the mentioned threshold. The query templates which reached this threshold were observed to increase substantially more than those below it. For example, the QET and QRT of QT2 for Hasura and PostGraphile at dataset scale 10 were recorded to be 96.55 and 96.09 ms respectively. After increasing the dataset size to scale 20, the time was recorded at 255.61 ms for Hasura and 258.92 ms for PostGraphile. In contrast, the QET and QRT for QT1 with the respective tools at dataset scale 10 were recorded at 32.38 and 30.7 ms; at scale 20, they were recorded at 36.72 ms and 33.98 ms.
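The threshold effect can be quantified with the numbers quoted above:

```python
# Relative QET/QRT growth from dataset scale 10 to scale 20 for Hasura,
# using the averages quoted above (milliseconds).
def relative_growth(at_scale_10, at_scale_20):
    return (at_scale_20 - at_scale_10) / at_scale_10

qt2_growth = relative_growth(96.55, 255.61)   # above the ~45-50 ms threshold
qt1_growth = relative_growth(32.38, 36.72)    # below the threshold

print(f"QT2: +{qt2_growth:.0%}, QT1: +{qt1_growth:.0%}")  # QT2: +165%, QT1: +13%
```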

As can be seen in Table 4.6, the QET is larger, as one would expect. All of the query templates for PostGraphile have an average difference between QET and QRT of around 2 ms. Hasura shows a similar average difference to PostGraphile, with the exception of QT8, which instead shows a difference of just above 4 ms.

[Figure 4.7 charts: bar graphs of average QET for dataset scales 10, 15 and 20, plus a separate graph for QT9; x-axis: Query Template, y-axis: Milliseconds; series: Hasura(QET), PostGraphile(QET)]

Figure 4.7: Results for QET for all QTs and sizes of the dataset, lower is better.

QT   Hasura-10  PostGraphile-10  Hasura-15  PostGraphile-15  Hasura-20  PostGraphile-20
1    32.38      30.7             34.84      32.82            36.72      33.98
2    96.55      96.09            184.97     185.26           352.16     355.01
3    30.99      30.96            31.57      31.68            31.75      31.68
4    33.19      34.72            33.51      37.19            33.88      39.53
5    111.92     133.93           181.47     225.05           236.67     303.58
6    35.35      33.59            36.63      36.4             37.88      38.24
7    34.2       32.41            35.23      33.4             35.52      33.98
8    73.04      47.43            91.96      51.2             111.69     54.95
9    260.43     266.78           621.73     628.81           1071.29    1068.37
10   49.45      46.79            58.97      56.13            88.06      85.72
11   35.21      32.98            36.94      35.35            38.12      36.49
12   31.86      28.59            32.12      28.97            32.92      29.52
13   33.55      0                35.8       0                38.66      0
14   33.81      0                35.4       0                36.59      0
15   34.52      33.12            35.99      34.75            36.73      35.71
16   34.58      0                36         0                36.91      0

Table 4.4: Contains the average QET (ms) at scales 10, 15 and 20 of the dataset for Hasura and PostGraphile.


Table 4.6: Shows the relative time difference between QET and QRT for both Hasura and PostGraphile with the three tested scales of the dataset.

[Figure 4.8 charts: bar graphs of average QRT for dataset scales 10, 15 and 20, plus a separate graph for QT9; x-axis: Query Template, y-axis: Milliseconds; series: Hasura(QRT), PostGraphile(QRT)]

Figure 4.8: Results for QRT for all QTs and sizes of the dataset, lower is better.

QT   Hasura-10  PostGraphile-10  Hasura-15  PostGraphile-15  Hasura-20  PostGraphile-20
1    30.35      28.68            32.77      30.83            34.66      31.97
2    94.52      94.16            182.97     183.35           350.13     353.03
3    28.85      28.94            29.4       29.66            29.59      29.66
4    31.33      32.63            31.62      35.12            32.1       37.4
5    109.72     131.84           179.27     222.87           234.48     301.36
6    33.25      31.45            34.54      34.25            35.92      36.21
7    32.13      30.24            33.12      31.33            33.34      31.95
8    68.93      44.81            87.88      48.7             107.6      52.55
9    258.42     264.81           619.67     626.85           1069.27    1066.4
10   46.88      44.73            56.36      54.07            85.22      83.48
11   32.51      30.64            34.17      32.99            35.21      34.2
12   29.7       26.48            29.92      26.87            30.75      27.45
13   31.52      0                33.8       0                36.68      0
14   31.85      0                33.44      0                34.63      0
15   32.47      30.68            33.84      32.68            34.7       33.54
16   32.48      0                34.02      0                34.93      0

Table 4.5: Contains the average QRT (ms) at scales 10, 15 and 20 of the dataset for Hasura and PostGraphile.


Chapter 5

Discussion

5.1 Result

In this section we discuss the results of the performed benchmark on the auto-generated GraphQL server implementations.

Throughput

The results showed that Hasura was the tool with overall better throughput, as well as being slightly better with an increasing number of clients. The throughput of PostGraphile could be seen to improve slightly more with a larger dataset; furthermore, we noted that QT13, 14 and 16 (which were not run on PostGraphile) had a somewhat lower drop-off with a larger dataset. This leads us to believe that PostGraphile would scale even better had those QTs been run. Testing with a larger dataset and more clients is needed to confirm whether or not PostGraphile can outperform Hasura on most QTs.

The only query template on which PostGraphile came substantially ahead was QT8. The CPs present in this query template are CP 3.1 Paging without offset and CP 3.3 Ordering. The only other query template that specifically relates to those CPs is QT9. For this query template the result was similar between the tools, with Hasura having a slightly higher throughput. This probably has to do with the fact that two additional CPs are tested: CP 2.1 Traversal of different 1:N relationship types and CP 2.2 Traversal of different 1:1 relationship types. CP 2.1 is the only CP tested in QT2, which gave comparable results in throughput between the tools. CP 2.2 is never the sole CP used in a QT, but it is present in QT1, 3, 4, 5, 6 and 7, all of which showed Hasura to have the larger throughput. Whether or not this is solely because of CP 2.2 is, however, uncertain.

Query Execution Time & Query Response Time

The QET of the 16 QTs was shown to be rather similar for both of the tools, with the exception of QT5 and QT8. QT5 is the only query template which tests CP 2.4 Traversal of relationships that form cycles, which tells us that Hasura is the better option if these kinds of queries are to be used often. It should be noted that CP 2.1 Traversal of different 1:N relationship types, CP 2.2 Traversal of different 1:1 relationship types and CP 2.3 Relationship traversal with and without retrieval of intermediate object data are also tested in QT5; however, these show no significant increase nor decrease in QET in the other QTs for either tool.

After the throughput has been recorded, a very rough estimate of the QET can be calculated by applying the formula

QET ≈ (time interval) / (queries completed with one client)


However, this does not hold true with the values we received. For example, the QET for Hasura QT1 was recorded at 36.72 ms at dataset scale 20, while the throughput for the same query template was recorded at 3025. Applying the above formula with the time interval as the unknown variable gives

0.03672 s × 3025 = 111.078 s

The result suggests that sending 3025 queries of QT1 would take 111 seconds, when in fact it took only 30 seconds. This anomaly can be explained by noting that the benchmark drivers used to test the throughput and the QET/QRT differed in the way queries were sent to the GraphQL server, which may have caused the results to vary. When testing the throughput, we sent as many instances of a single instantiated query template as possible within a specified time, whereas the QET and QRT were recorded by sending one query template after another (QT1, QT2, QT3, etc.). In the case where the client sends similar queries continuously, it could be that either of the tools uses some sort of caching to speed up execution, which would explain the previously mentioned anomaly. If caching is affecting the result, it should be noted that the QET and QRT would be lower than our result in a situation where the queries sent to the GraphQL server are similar (as with our throughput tests).
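The back-of-the-envelope check above can be written out as follows:

```python
# Reproduces the anomaly discussed above: the measured per-query QET for
# Hasura QT1 at scale 20, multiplied by the number of queries the throughput
# driver completed, far exceeds the actual 30 s measurement window.
qet_seconds = 36.72 / 1000      # average QET for Hasura QT1 at scale 20
queries_completed = 3025        # throughput recorded for the same template

estimated_total = qet_seconds * queries_completed
print(round(estimated_total, 3))  # 111.078 (seconds), versus the actual 30 s
```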

Compared to PostGraphile, Hasura has a higher throughput on all QTs except QT8, while at the same time showing a slightly worse QET and QRT. This would suggest that the case where similar queries are executed repeatedly favors Hasura significantly more than PostGraphile, whereas if a completely different query is sent to the server every time, Hasura may perform worse in comparison to PostGraphile.

5.2 Method

This section leads back to the flaws of previous benchmarks enumerated in the motivation of the thesis, compares them to the environment we are using, and critiques our approach in its entirety.

As mentioned in the introduction, previous benchmarks had only been run on a single dataset, which does not say much about how the tools handle different sizes of a database. We addressed this flaw by using a scalable dataset to observe the changes in performance related to different sizes of the database.

There were 16 query templates made to test different choke points of the server implementations. An issue we had while interpreting the results was the fact that most queries tested multiple choke points, which made the process of concluding how a tool performed on certain choke points difficult. Something which could be improved would be to make QTs which only test a single choke point, such as QT2, 15 and 16 (if possible). To reduce the number of duplicate queries run during testing we generated 5000 queries per query template. Since the throughput test executed more than that number of queries for a couple of QTs, more queries could be generated to further reduce the chance of duplicates.

Another point which was brought up was the issue of previous benchmarks measuring too few metrics. In addition to throughput, we also measured the QET and QRT for each of the QTs. These additional metrics gave us more information regarding the ways in which performance is affected by specific queries and the order in which they are sent.

One of the flaws presented is still present in this work: the tools and the database are run on a laptop. To reduce the possible variables that could interact negatively with our testing, we decided to use virtual environments, using Docker. While this may not be as preferable as using completely separate systems for the server and client(s), we thought it was a sufficient alternative for our circumstances.

Both of the server implementation tools that we are using have two different versions. In this work we have been using the free open-source versions. Something to note is that these free versions come with fewer features by default, and the pro versions may produce a different outcome. PostGraphile has one version which is open-source, crowd-funded and free to use (PostGraphile Core), and another which is subscription based (PostGraphile Pro). PostGraphile Pro has more features enabled by default, but at a price1. Just like PostGraphile, Hasura has two versions: Hasura Core, which is free and open-source, and another version called Hasura Pro. To gain access to Hasura Pro one can get in contact with their team via their website2.

5.3 The work in a broader context

The findings of this thesis may be used by vendors to simplify the selection between the different tools used to auto-generate GraphQL server implementations. With a tool suited to the needs of the vendor, time and energy can be saved. Furthermore, as internet traffic increases and more databases of increasing sizes are required, optimizing the energy consumption of these databases becomes a crucial aspect of the area.

1 https://www.graphile.org/postgraphile/pricing/
2 https://hasura.io/pricing/


Chapter 6

Conclusion

The aim of this thesis was to help vendors compare different GraphQL server translation tools as well as to highlight the limitations of the tools Hasura and PostGraphile. Using the LinGBM benchmark methodology we have measured the throughput, query execution time and query response time of the server translation tools Hasura and PostGraphile. Our results suggest that Hasura outperforms PostGraphile in most cases with regard to throughput whilst showing comparable results in query execution time and query response time. Alongside this benchmark we have identified some choke points where the two tools excel compared to one another. PostGraphile outperformed Hasura at paging without offset as well as ordering in all the performance metrics. In all other cases, however, Hasura surpasses PostGraphile or shows similar performance.

For future research, there are multiple areas of our work which would need more experimentation. We would suggest increasing the scale factor of the dataset to further investigate which tool scales better with a larger dataset. Moreover, running the server translation tools on a separate system from the clients would better simulate a real-world scenario. As mentioned in our discussion, caching may be having an impact on the QET and QRT because of the way the current test driver works. Therefore, we would propose adding the possibility to send queries of the same query template continuously, as is done for the throughput test. This would allow us to see how the different tools handle caching.


