
Bachelor Degree Project

Optimizing your data structure for real-time 3D rendering in the web

A comparison between object-oriented programming and data-oriented design

Bachelor Degree Project in Information Technology, basic level, 30 ECTS

Spring 2021

Constantin Christoforidis


Supervisor: Henrik Gustavsson

Examiner: Yacine Atif


Abstract

Performance is something that is always of concern when developing real-time 3D graphics applications. The way programs are made today with object-oriented programming has certain flaws that are rooted in the methodology itself. By exploring different programming paradigms we can eliminate some of these issues and find what is best for programming in different areas. Because real-time 3D applications need high performance, the data-oriented design paradigm, which makes the data the center of the application, is experimented with. By using data-oriented design we can eliminate certain issues with object-oriented programming and deliver improved applications when it comes to performance, flexibility, and architecture. In this study, an experiment creating the same type of program with the help of different programming paradigms is made to compare the performance of the two. Some additional up- and downsides of the paradigms are also mentioned.

keywords: Data-oriented design, Entity component system, Object-oriented programming, Real-time 3D visual simulation


Contents

1 Introduction
2 Background
  2.1 Why performance is important
  2.2 Memory layout
    2.2.1 Class properties in memory layout
  2.3 3D graphics in the web
    2.3.1 WebGL
    2.3.2 JavaScript
  2.4 Programming paradigms
    2.4.1 Object-oriented programming
    2.4.2 Data-oriented design
  2.5 Entity component system pattern
  2.6 Scene graph
3 Problem formulation
  3.1 Hypothesis
4 Method
  4.1 Type of empirical study
  4.2 Experiment
    4.2.1 Variables and factors
  4.3 Research ethics
5 Implementation
  5.1 Literature study
  5.2 Development
    5.2.1 Initial stages
    5.2.2 Data loading
    5.2.3 Shading
    5.2.4 Object-oriented simulation
    5.2.5 Data-oriented simulation
    5.2.6 Physics
    5.2.7 Gathering data
  5.3 Pilot study
    5.3.1 Discussion
6 Evaluation
  6.1 Code changes
  6.2 Data collection
  6.3 Analysis
    6.3.1 Simple entities
    6.3.2 Complex entities
7 Discussion
  7.1 Summary
  7.2 Ethics and society
  7.3 Ethics
    7.3.1 Society
  7.4 Future work
A Code


1 Introduction

Object-oriented systems are some of the most widely used according to (Kölling 1999), not only when programming real-time 3D visual simulations. Using object-oriented design patterns can lead to a lack of performance regarding refresh rates. This lack of performance can happen due to the inefficient memory layout of the array of structures that object-oriented programming enforces, as described in (Homann & Laenen 2018); therefore, alternative programming paradigms like data-oriented design are evaluated in search of a better alternative. Data-oriented design is a programming paradigm that can eliminate some of the issues with object-oriented programming, such as the aforementioned inefficient memory layout, but also structural issues such as low cohesion and high coupling. Object-oriented design also has difficulties utilizing performance gained from parallel computing, as mentioned in (Fedoseev et al. 2020). In this study, the performance of memory layout and its access times is investigated.

A large problem when developing real-time 3D visual simulations is a lack of refresh rate (Rohlf & Helman 1994). Large performance gains come from handling memory access effectively because of the large gap between processor and memory speeds (Or-Bach 2017). By arranging data more effectively in memory, we can see significant performance gains.

The way object-oriented programming often arranges several objects in memory leads to lacklustre performance when values that are needed at the same time reside in different places. One of the properties of data-oriented design is the way it improves how memory is laid out. Data-oriented design does this by storing data that is often used together close together in memory. This is in contrast to how object-oriented programming manages memory, and should improve performance.

To test these differences, two programs with the same functionality are created. One of these programs is created with object-oriented programming in mind while the other is created with data-oriented design in mind. These programs are then evaluated on their refresh rates while performing functions like rendering a set of entities.

The different decisions that are made when developing the programs based on the aforementioned programming paradigms should present a difference in performance, in favour of data-oriented design over object-oriented programming. For that reason, the hypothesis is that there are differences in the performance of object-oriented programming and data-oriented design.

If this hypothesis is true, a shift to programming paradigms like data-oriented design may be a step towards better performance in areas where performance is important. Better performing programs can help society by reducing the resources it costs to run these programs. Changing the way programming is approached does not come without a cost, but it might be worth it depending on the benefits received.


2 Background

There have already been studies showcasing performance improvements for data-oriented design over object-oriented programming, like (Fedoseev et al. 2020, Faryabi 2018), but none of them have focused on the environment of the web. Not only 3D graphics but graphics in general have become more widely used on the web, as documented in (Evans et al. 2014, Lau et al. 2003). (Lau et al. 2003, Salisbury et al. 1999) describe how older, now deprecated APIs like Java3D were used to render 3D visual simulations in the web. In more modern times, technologies like WebGL started to be used, as documented in (Lei Feng et al. 2011, Leung & Salga 2010, Evans et al. 2014), which describe WebGL as a new powerful API that would soon be in releases of popular browsers. This was a big game-changer at the time, considering that WebGL was a new royalty-free web standard for 3D graphics that could use hardware acceleration without requiring any additional plugins from the user.

As more of the hardware is exposed to web browsers, developers obtain greater freedom in the possibilities when creating real-time 3D visual simulations. Improving performance can be done in many ways, but one of the fundamentals is creating more performant code that can take advantage of the newly exposed hardware, especially on the web.

2.1 Why performance is important

Performance has always been something developers had to take into consideration while making programs that perform real-time graphics simulations. The downside of real-time graphics is that it has to happen in real time and cannot be computed ahead of time or postponed.

To have a consistent output, most programs define a minimum update rate they want to achieve. Update rate, in the context of real-time graphics simulations, is defined by how often the graphical objects displayed can showcase manipulation of their state. A set refresh rate is prevalent in studies like (Montrym et al. 1997) and (Stoll et al. 2001), where the programs analysed and developed try to maintain a consistent refresh rate of 15, 30 and 60 Hz. (Rohlf & Helman 1994) is another study that shines a light on the importance of performance in real-time graphics, especially when it comes to modern devices. (Rohlf & Helman 1994) says that the decreased cost of computational resources and the increased performance of newer devices have opened up opportunities for non-traditional usages of visual simulations in newer fields such as virtual reality and location-based entertainment. The usage of such visual simulations has only grown since that study was released, as described in (Farshid et al. 2018). (Farshid et al. 2018) talks about the many use cases of augmented, virtual and mixed reality visual simulations and argues that they can greatly enhance user experience.

Increased performance can therefore open up new possibilities in real-time graphics simulations and also improve current ones. Better performance can also give us access to new use cases for such simulations; an example of this is virtual reality.

According to (Zheng et al. 1998), "time-critical computation" is essential in these types of systems to provide real-time performance. (Zheng et al. 1998) also goes on to say that improved algorithms and computing performance are required for the future development of virtual worlds, especially in virtual reality, where performance is crucial.


Another example of new possibilities in graphic simulations can be found when doing visual simulations on mobile devices such as smartphones. This does not only open up new visual possibilities but also different kinds of interactions with existing technologies like gyroscopes that are only accessible on those types of devices. (Hürst & Helder 2011) is an example of an application of this technology, where virtual reality is used in conjunction with smartphone technologies that are not usually seen on more traditional devices such as computers.

To improve the refresh rates of real-time graphic simulations, the device must be able to execute the code within a certain time frame. If a graphic simulation wants to maintain a consistent refresh rate, all the computation in between the times it takes to draw graphics to the display has to occur within that time frame. If the program wants to maintain a 60 Hz refresh rate, like in the previously discussed graphic simulation cases, all the computation has to be done within a 1000/60 = 16 2/3 millisecond time frame. The better the performance of the overall program and graphic simulation, the more opportunities there are for innovation.

One way to reduce the time it takes for a graphic simulation to update would be to increase the speed at which data can be transferred through different memory modules. This can include the transfer of data between the central processing unit and the graphical processing unit. This can be done by changing the way memory is accessed and managed in the program.

2.2 Memory layout

One issue with modern computers is that the speed of memory modules does not increase as fast as the performance of processing modules. This gap in speed emphasizes the problem where accessing the computer memory becomes a very expensive process. The expense of accessing memory decreases performance by a large proportion. This is described in great detail in (Or-Bach 2017).

The way programs are constructed affects the way they handle their memory management. One issue that object-oriented design creates is when a multitude of objects have to be stored and iterated over. In object-oriented design, the state of an object is accessed from within that object. Object-oriented programming therefore forces the data structure into an array of structures, as mentioned in (Fedoseev et al. 2020). The way object-oriented design stores objects in an array of structures is in contrast to the structure of arrays that data-oriented design opts for. One disadvantage of arranging data in an array of structures is the inability to use SIMD operations. (Hassan et al. 2016) mentions how SIMD instructions can speed up matrix multiplication, which can be extremely useful while performing calculations for real-time 3D visual simulations. (Hassan et al. 2016) also notes that a lot of C++ compilers optimize using SIMD instructions depending on the kind of compiler and target processor type. SIMD instructions are an example of how a structure of arrays can be beneficial in certain environments. (Sato et al. 2015, Homann & Laenen 2018) also mention how the usage of SIMD instructions and parallel computing improves performance.

(Inoue & Taura 2015) also mentions how arrays of structures have difficulty using SIMD instructions because of the possibility of the structures being scattered in memory.


Figure 1: Structure of arrays and array of structures

SIMD instructions are instructions that can process multiple batches of data at the same time for increased parallelism, as described in (Win et al. 2016). These instructions can execute on both the main and graphical processors of a computer and need to be supported by the instruction set of those modules.

2.2.1 Class properties in memory layout

(Strzodka 2012) demonstrates how structures of arrays and arrays of structures can be implemented in C++. If the same principles were applied to A 4 and A 5, the memory layout in C++ would be similar to Figure 1. If a function in the program wanted to read or mutate one of these data types, it would only have to access that data. The arrays would then also take up less space in the cache, which would improve cache locality by having more relevant data in the cache. Having higher cache locality has been shown to improve performance in many cases, according to several studies like (McKinley et al. 1996, Carr et al. 1994, Wolf & Lam 1991, Chilimbi et al. 1999).
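As an illustration of the two layouts in Figure 1, the following hypothetical JavaScript sketch stores the same ball data both ways; the field names are invented for the example and are not the code from appendix A.

    // Array of structures (AoS): each ball carries all of its fields, so the
    // objects may end up scattered across the heap.
    const ballsAoS = [
      { x: 0, y: 1, z: 0, radius: 0.5 },
      { x: 2, y: 0, z: 1, radius: 0.3 },
    ];

    // Structure of arrays (SoA): each field lives in its own contiguous typed
    // array, so iterating over one field only touches adjacent memory.
    const ballsSoA = {
      x: new Float32Array([0, 2]),
      y: new Float32Array([1, 0]),
      z: new Float32Array([0, 1]),
      radius: new Float32Array([0.5, 0.3]),
    };

    // Updating just the y coordinates reads a single contiguous array.
    for (let i = 0; i < ballsSoA.y.length; i++) {
      ballsSoA.y[i] -= 9.81 * (1 / 60); // apply gravity for one 60 Hz frame
    }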

(Homann & Laenen 2018) is a study comparing the upsides and downsides of structure of arrays and array of structures. It attributes one of the main benefits of using structure of arrays over array of structures to being able to use SIMD instructions. It is also mentioned that modern hardware can utilize the structure of arrays even better than older hardware due to improved SIMD instruction hardware. Modern hardware has larger vector registers and caches in general, which can benefit the structure of arrays depending on the type of data stored in the array. (Homann & Laenen 2018) also sees a large drop in performance when all the data is unable to fit in the L1, L2 and L3 memory caches.


2.3 3D graphics in the web

Considering the large amount of abstraction web browsers have in comparison to more native solutions, large-scale real-time 3D visual simulations have not been as popular in the past. Lack of performance and access to hardware acceleration has limited the field of 3D visual simulations on the web. With the prominence of newer technologies, 3D visual simulations on the web are starting to become more relevant. (Ortiz 2010) speaks about the relevancy of 3D graphics on the web and how more business and engineering firms are starting to realise the improved user experience that 3D graphics can add. This, combined with users being more accustomed to 3D content because of improving technology, makes it an important step in improving web technology.

(Evans et al. 2014, Xu 2012) mention modern ways of approaching graphics in the web. For 2D, it is mentioned that SVG and the HTML5 canvas element are used. For 3D, it is mentioned that, depending on whether there is a need for a declarative approach or an imperative approach, X3D or WebGL is used. X3D is meant to be a standard for displaying 3D objects on the web with minimal effort. (Evans et al. 2014) describes how X3D is integrated into the DOM of the web page for quick interactivity with JavaScript. WebGL, on the other hand, offers more flexibility in how graphics are displayed on the web page by letting the programmer change things without using the DOM. WebGL therefore offers more access to the hardware and to how 3D is displayed than X3D, where the programmer simply defines what should be rendered. In WebGL, the programmer has to define what is to be rendered, as well as how to do it. WebGL is therefore more important for developers who want to maximize their performance by specializing their rendering methods for their use case.

2.3.1 WebGL

Since WebGL exposes the hardware API that is necessary to render advanced 3D visual simulations, it also requires specific shader code to execute on that hardware. The process of creating these shaders is described in (Parisi 2012). Certain shaders, like the geometry shader, are provided by default, while the vertex and fragment shaders are required to be defined by the developer or a secondary framework. A 7 and A 8 are examples of basic vertex and fragment shaders.

The way data is transferred to the graphics processing unit is through arrays of primitive data types like floats. These arrays can then be accessed in the previously mentioned shader code. To receive array values from the JavaScript environment in the vertex shader, the shader must define them as an attribute, like in A 7. The values must then be sent to the graphics processing unit by binding an array and then setting the buffer data, like in A 9. Singular data can also be sent to shaders using uniforms.
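A minimal sketch of this upload path, assuming an already compiled and linked program object `program` and a WebGL 2 context `gl`; the attribute and uniform names are illustrative, not the ones used in A 7 or A 9.

    // Vertex positions as a flat array of floats (three per vertex).
    const positions = new Float32Array([
       0.0,  0.5, 0.0,
      -0.5, -0.5, 0.0,
       0.5, -0.5, 0.0,
    ]);

    // Bind a buffer and set its data, which uploads it to the graphics module.
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

    // Wire the buffer to an attribute declared in the vertex shader.
    const location = gl.getAttribLocation(program, "aPosition");
    gl.enableVertexAttribArray(location);
    gl.vertexAttribPointer(location, 3, gl.FLOAT, false, 0, 0);

    // Singular data goes through uniforms instead.
    gl.uniform1f(gl.getUniformLocation(program, "uTime"), performance.now());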

2.3.2 JavaScript

JavaScript differs from more traditional programming languages in some aspects. It does not have typical class inheritance but instead opts for prototypal inheritance, as mentioned in (Chandra et al. 2016). Prototypal inheritance means that every object has a prototype object that it inherits properties and functions from. The prototype can also inherit from other prototype objects, creating a class structure. In most JavaScript environments, the prototype of an object can be accessed with the __proto__ property, as shown in A 6. In A 6, the ball object will contain a __proto__ property which itself has a function called bounce.
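A hypothetical reconstruction of the idea behind A 6 (the appendix code itself is not reproduced here):

    // An object that will serve as the prototype.
    const bouncy = {
      bounce() {
        this.y += this.elasticity;
      },
    };

    // `ball` gets `bouncy` as its prototype, so `bounce` is found through the
    // prototype chain rather than on the object itself.
    const ball = Object.create(bouncy);
    ball.y = 0;
    ball.elasticity = 2;

    ball.bounce();
    console.log(ball.y); // 2
    console.log(Object.getPrototypeOf(ball) === bouncy); // true
    // In most engines the same link is also exposed as ball.__proto__.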

One downside of using JavaScript is the lack of SIMD instructions. (Jibaja et al. 2015) is a study investigating the usefulness and availability of SIMD in different JavaScript engines but also mentions that it is currently not widely available. (Jibaja et al. 2015) also states that SIMD instructions can give substantial improvements to both execution speed and processor energy usage. (Jibaja et al. 2015) concludes by stating that SIMD instructions are in the final stages of adoption by the JavaScript standards committee and might be something to look out for in the future of web programming. Since that study was published, SIMD instructions have been taken out of JavaScript and are instead being pursued in the field of WebAssembly, as described in (Polubelova et al. 2020).

(Di Benedetto et al. 2010) is a study about implementing a graphics library for the web. It mentions the different usages of structure of arrays and array of structures. (Di Benedetto et al. 2010) claims that the JavaScript runtime performs more efficiently when working with homogeneous arrays of numbers rather than arrays of generic object references.

There would not be much point in talking about the environment without mentioning the interpreter the JavaScript code will be using. According to (Tiwari & Solihin 2012), the V8 engine is one of the most widely used JavaScript engines by researchers and performance practitioners. It is also widely used in browsers like Google Chrome. (Tiwari & Solihin 2012) also mentions that the engine has better or similar performance to other competitive JavaScript engines, which makes it suitable for testing. The V8 engine is not only used in browsers. (Tilkov & Vinoski 2010) brings up programs like Node.js that aim to support long-running server processes. Similar performance optimizations may therefore be expected on server systems that run on programs like Node.

2.4 Programming paradigms

(Bartoníček 2014) mentions that a programming paradigm is almost synonymous with the word "approach". According to (Bartoníček 2014), the choice of programming paradigm has a great impact on how a program is structured.

Most programming languages support using a multitude of programming paradigms, as mentioned in (Bartoníček 2014). Classes still have many uses for creating custom types even outside object orientation, even though they are mostly used for creating objects when designing with object orientation in mind. In appendix A there are examples of using classes with object-oriented and data-oriented principles in mind. A 4 shows how data is structured in a single ball object that may or may not be stored in an array. A 5 shows how classes can be used with data orientation in mind by storing the different types of data in separate arrays.

2.4.1 Object-oriented programming

Object-oriented programming has historically been one of the most influential and widely used programming paradigms, according to (Kölling 1999). The basic concept of object-oriented programming was and still is a classical way of handling large problems: dividing them into smaller ones. (Wegner 1990) refers to this strategy as divide and conquer and mentions that it is a time-honoured method of managing complexity. The object-oriented programming paradigm can therefore be divided into three groups of sub-paradigms, as mentioned in (White & Sivitanides 2005):

• A paradigm of program structure

• A paradigm of state structure

• A paradigm of computation

In object-oriented programming, procedures are grouped with data to become classes. A procedure is what many programming languages call a method or a function. A class is not instantiated with any data; it is only a template that defines what kind of data an object should have. Having templates like classes is a way of defining the state structure. Having state stored in groups like classes is different from procedural programming, which shares a global unprotected state.

These classes are then instantiated into objects. The procedures in the classes often modify the object’s state and can also return a function that depends on the state of the object. By having state and procedures in classes, the program groups the state and functionality together. Handling state and functionality with classes is a way to organize the program structure.

The third sub-paradigm mentioned in (White & Sivitanides 2005) is in terms of state transition, communication, and classification within a programming language. Classification in this context is a way to constrain a result in a certain way. In object-oriented programming, the state is modified by a class procedure. Communication is handled when objects send messages to each other. Most programming languages that fully support working in an object-oriented way implement classification by having classes that objects are then made from. On top of that, they also have inheritance of classes, which further supports these constraints on how computation is done.

According to (White & Sivitanides 2005), object-oriented programming is often used to map classes according to the real world, a certain type of skeuomorphism already present in user cognition. Objects are often described with their own data type, which is called a class. A class defines which procedures and state variables an object should have. In object-oriented programming, classes are often divided into a hierarchy where they get their functionality and state variables from their parents and then also share those with their children.

2.4.2 Data-oriented design

Data-oriented design is a newer design paradigm that puts emphasis on where the data is stored instead of the relations it has to other data. Data-oriented design also puts more emphasis on storing the data in a way that makes it easy to manipulate. It is mentioned in (Fedoseev et al. 2020) that data-oriented design facilitates code with a more efficient memory layout, by storing data that is often used together by the computer close together in memory. It is also mentioned that manipulation of data is easier in data-oriented design, since it is easier to predict which parts of the program manipulate the different data that is stored.

As mentioned in (Hatledal et al. 2021) and (Fedoseev et al. 2020), data-oriented design favours composition over inheritance. Favouring composition is key when following data-oriented design, as composition separates data and logic; this is also mentioned in (Fedoseev et al. 2020). The separation of data and logic also decreases the difficulty of writing code that manages memory in an optimal way. One of the rules of object-oriented design is to store state within classes that also contain logic. Storing state within classes forces data to be stored together with logic, which in turn forces the data structures to be shaped in a certain way. Data being stored this way might be more intuitive, especially to developers who are used to thinking in an object-oriented mindset, but it separates the data. The separation in object-oriented systems that is caused by the lack of composition is one of the key factors in memory access performance.

2.5 Entity component system pattern

The entity-component system pattern is a pattern that follows the approach of data-oriented design. It does this by storing each type of data chunk used by different objects in its own array. Storing data that is used together in its own array allows efficient access, according to data-oriented design. The arrays should ideally only hold values that are all used together, to optimize access times and caching.

Figure 2: Entity component system class structure

Figure 2 shows an example of an entity-component system class structure. This structure is similar to the structure described in (Hatledal et al. 2021) and (Fedoseev et al. 2020). The systems mutate data by requesting all the components of a certain type. The systems then mutate these components, which are tied to entities. Components do not have to be singular: the returned components can consist of a tuple that has a set of components that are all associated with the same entity. If a system in figure 2 wants to modify all entities with visual components and translation components, it can request entities that have both of those component types. Entities that have neither or only some of the components will not have their components exposed to the system with that system call.
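A minimal entity-component system sketch along the lines of Figure 2; all names are illustrative, not the study's implementation.

    class World {
      constructor() {
        this.nextEntity = 0;
        this.components = new Map(); // component type -> Map(entity -> data)
      }
      createEntity() {
        return this.nextEntity++;
      }
      addComponent(entity, type, data) {
        if (!this.components.has(type)) this.components.set(type, new Map());
        this.components.get(type).set(entity, data);
      }
      // Yield [entity, ...components] tuples for entities that have every
      // requested component type; other entities are never exposed.
      *query(...types) {
        const [first, ...rest] = types.map((t) => this.components.get(t) ?? new Map());
        for (const [entity, data] of first) {
          if (rest.every((m) => m.has(entity))) {
            yield [entity, data, ...rest.map((m) => m.get(entity))];
          }
        }
      }
    }

    // A "system" is just a function run over a query every update.
    function translationSystem(world, dt) {
      for (const [, translation, velocity] of world.query("translation", "velocity")) {
        translation.x += velocity.x * dt;
      }
    }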

2.6 Scene graph

A scene graph is a data structure that describes an environment with a graph or tree structure. Scene graphs are often used within real-time rendering of 3D graphics, but there are other use cases. Scene graphs have been used to represent 3D graphics for a long time. In (Bishop et al. 1998) it is mentioned how a game engine that performs real-time 3D visual simulations uses a scene graph. It is also mentioned generally how this data structure is used within video game engine development. Other cases of scene graph usage outside of real-time 3D visual simulations are described in (Wu et al. 2014). In (Wu et al. 2014), a scene graph is used to describe an environment to robots.

Figure 3: Object-oriented scene graph class structure

Figure 4: Node heavy scene graph class structure

Scene graphs typically have some traditional node types and attributes, as described in (Naef et al. 2003). Since there are many different ways to implement a scene graph, some of these types could be attributes and vice versa, but usually they are implemented somewhere. The most basic information that is needed in a scene graph for it to render things in 3D graphics is some type of transformational information and a way to traverse the tree. Having transformations in a tree structure satisfies the basic constraint of representing different locations in space relative to each other.
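A bare-bones sketch of such a node, using glMatrix for the transforms; the names are illustrative, not the study's code.

    import { mat4 } from "gl-matrix";

    // A minimal scene graph node: a local transform plus children, which is
    // the basic information described above.
    class SceneNode {
      constructor(localMatrix = mat4.create()) {
        this.localMatrix = localMatrix;
        this.children = [];
      }
      addChild(node) {
        this.children.push(node);
      }
      // Depth-first traversal: a node's world transform is its parent's world
      // transform multiplied by its own local transform.
      traverse(parentWorld = mat4.create(), visit = () => {}) {
        const world = mat4.multiply(mat4.create(), parentWorld, this.localMatrix);
        visit(this, world);
        for (const child of this.children) {
          child.traverse(world, visit);
        }
      }
    }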

In (Strauss & Carey 1992), a framework for working with 3D graphics is created. One of the data structures used is a scene graph. The framework in (Strauss & Carey 1992) was made with object-oriented programming in mind. (Strauss & Carey 1992) is a very generic scene graph, separating a lot of data into individual nodes. This is similar to what was mentioned earlier about the difference between having node data instead of attribute data. Instead of representing a texture in an individual node like in (Strauss & Carey 1992), a program could have this as an attribute of the node itself. This node would then be specified as its own type in the object-oriented hierarchy. In (Strauss & Carey 1992), only base types are made into their own types, then used together in the graph to represent the intended purpose. Instead of collecting attributes into one class, (Strauss & Carey 1992) collects a set of nodes into what is called a node kit.


Figure 5: Data-oriented scene graph class structure

In figures 3 and 4 it is displayed how a scene graph's class structure might look depending on how you implement it. Figure 4 is more similar to how (Strauss & Carey 1992) has implemented their scene graph, while the other one is more object-oriented in nature, representing a hierarchy of classes that depend on their parent's attributes. This hierarchy is not directly implemented in (Strauss & Carey 1992), since nodes are instead grouped into node kits.

A different way to structure a scene graph would be in a data-oriented way. Creating the scene graph data structure with an entity-component system pattern in mind will not only improve cohesion and reduce coupling, as described in (Wiebusch & Latoschik 2015), but also improve performance. Figure 5 shows a way of implementing a class structure for a scene graph similar to the figure in (Wiebusch & Latoschik 2015). Instead of grouping nodes with node kits like in (Strauss & Carey 1992), data fields are grouped in an entity with components.


3 Problem formulation

(Rohlf & Helman 1994) mentions that one of the problems within the domain of real-time 3D graphics is the lack of performance. Since a lot of systems use a scene graph as the main data structure for rendering 3D graphics, it is the appropriate data structure to analyse.

Something that has to be taken into consideration when performing this study is how the nature of the environment will affect the results. Since JavaScript does not have a traditional approach towards inheritance, it may affect the way the programs are implemented in comparison to other studies like (Fedoseev et al. 2020). Another issue could be the lack of references in JavaScript, since object-oriented programming often uses them to read values in systems without having to pass them by value. This could mitigate some of the supposed performance gains from object-oriented programming.

To analyse how memory is managed in a program, you have to analyse the overarching design decisions that went into making it. As mentioned earlier, (Bartoníček 2014) compares programming paradigms to the word approach. The approach, or programming paradigm, then influences the overarching design decisions and design patterns that go into consideration when developing the program. The two design paradigms in question are of course object-oriented programming and data-oriented design.

Since object-oriented programming manages objects by putting their data in the same block of memory, it might be inefficient to access certain data types that a lot of objects have in common. An example of this might be when wanting to send all of the positional data to the graphics processing unit. Data that is usually sent to the graphical processing unit might be positional, rotational and scaling data, as seen in (Congote et al. 2011). Accessing all of this data when it is stored inside of several different objects is slower than if it were stored contiguously in memory. Not only could these objects of different classes be in different locations in memory, but they would also take up more space. Getting multiple attributes from several different containers would require more of the memory to be accessed, increasing the time it takes to access the data. Storing data contiguously would open fewer rows in the memory module of the computer, effectively decreasing access times.

Data-oriented design would solve this issue by storing attributes that several classes have in common in object-oriented programming contiguously in memory. This would reduce the number of active rows required by the memory, reducing the time it takes to access all the data.
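A hypothetical sketch of the difference, assuming a WebGL context `gl` with an already bound buffer; the names are illustrative:

    // With objects (array of structures), positions must first be gathered
    // into one contiguous array before they can be uploaded to the GPU.
    const objects = [
      { x: 0, y: 1, z: 0 },
      { x: 2, y: 0, z: 1 },
    ];
    const gathered = new Float32Array(objects.length * 3);
    objects.forEach((obj, i) => gathered.set([obj.x, obj.y, obj.z], i * 3));
    gl.bufferData(gl.ARRAY_BUFFER, gathered, gl.DYNAMIC_DRAW);

    // With a structure of arrays, the positions already sit contiguously in
    // memory and can be handed to WebGL as-is, skipping the gather pass.
    const positions = new Float32Array([0, 1, 0, 2, 0, 1]);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW);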

3.1 Hypothesis

If all the premises laid out in this paper are correct, it would mean that data-oriented design would improve the response times of memory access. Improving the memory access would then improve the refresh rates of 3D visual simulations when used together with a scene graph.

The hypotheses are then:


H0: There are no differences in refresh rates between object-oriented programming (OOP) and data-oriented design (DOD).

Ha: There are differences in refresh rates between object-oriented programming (OOP) and data-oriented design (DOD).

H0: Perf(OOP) = Perf(DOD)
Ha: Perf(OOP) ≠ Perf(DOD)

Measurements that have to be made: refresh rates for the two programming paradigms (OOP and DOD)

Refresh rates are measured in how many milliseconds (ms) it takes for the computer to perform a cycle in the program. A cycle in 3D visual simulations in this context is when every entity in the scene has updated its state according to the program’s logic and then been rendered again.

In (Fedoseev et al. 2020), the application that is developed with data-oriented design in mind has better performance when it comes to processor utilization. This observed increase in performance strengthens the hypothesis that data-oriented design is better when it comes to refresh rates.

(Data-oriented design has higher refresh rates than object-oriented programming.)


4 Method

(Wohlin et al. 2012) talks about two main ways of performing empirical studies: exploratory research and explanatory research. The first has a more flexible research design and usually gives qualitative data, while the latter has a more fixed research design and gives quantitative data.

Comparing the performance of the two programming paradigms will be done using quantitative data generated by a computer running programs made with these two programming paradigms in mind. By measuring the performance of these programs, quantitative data will be generated. The quantitative data can then be used to determine which programming paradigm is superior when it comes to this metric.

The qualitative side of this study will seek to understand the cause of the results found while evaluating the two programming paradigms. Things that will have a more flexible research design are the evaluations of the environment and the execution of the built programs. The environment can have large effects on how well the different programs perform, depending on how it handles memory management and optimization. The way the programs are built will impact the performance of both of them. Both of the programs must follow their intended programming paradigm to exclude outside factors.

4.1 Type of empirical study

(Wohlin et al. 2012) talks about different empirical strategies for evaluating and gathering quantitative data. The two possible strategies that could be used to evaluate these programming paradigms are an experiment and a case study. (Wohlin et al. 2012) also talks about surveys and quasi-experiments, but they are not as applicable or useful in this context. A survey would be more appropriate when evaluating something that concerns humans working with the programming paradigms; if we were to evaluate the productivity of developers using both programming paradigms, then a survey could be more appropriate. A quasi-experiment is not needed when there is a possibility of conducting a normal experiment, as there is less randomization when evaluating.

Doing a case study to compare the two programming paradigms is possible and has been done; see (Fedoseev et al. 2020) for an example. The pros and cons of conducting a case study depend on how it is performed. If the case study analyses two different programs using the different programming paradigms, there might be a disparity in how these programs run, making the performance comparison questionable; this is what is done in (Fedoseev et al. 2020). Another type of case study that could be conducted to compare the programming paradigms would be to refactor the code base of a program with the other programming paradigm's design patterns in mind. Refactoring an existing codebase could be more representative of a real-world example than having programs created in a lab environment. A potential issue with this approach could be the way the existing code base was created. The existing code base could be highly tailored towards a certain programming paradigm, making the refactoring difficult or the performance comparison moot.


4.2 Experiment

Because of the previously mentioned reasons, the method for evaluating the two programming paradigms in question is going to be an experiment. One of the positives with an experiment is the possibility of tailoring both programs in a way that tests the strengths and weaknesses of both programming paradigms, as (Wohlin et al. 2012) mentions briefly.

(Wohlin et al. 2012) also states that even though an experiment does not use real-world examples of the things that are compared, they can and should often have the same type of characteristics. This experiment will therefore have several data points based on different types and amounts of entities. Having several data points is one way to reflect real-world usage, since different programs will have different amounts of entities with varying complexity.

Something else of importance that (Wohlin et al. 2012) brings up is the fact that variables that are not measured have to be the same for the experiment to stay empirical. Since it is not a quasi-experiment, things that are not being compared should stay the same. In this study the programming paradigms are being compared and nothing else; this means that other things used in the experiment, such as the execution environment and application domain, should stay the same. As both programs will use the same programming language, they will also be executed in the same environment. Both programs are also expected to have similar if not the same functionality, which means that the application domain will also stay the same.

(Wohlin et al. 2012) also mentions different characteristics that make the experiment strategy appropriate. Some of the characteristics described, such as "to confirm theories" and "to explore relationships", do fit in line with this study, as that is what is being tested. (Wohlin et al. 2012) also agrees that experiments are a good way to evaluate which standards, methods and tools are recommended. This is in line with this study's goal to evaluate which programming paradigm is best within the determined use case.

4.2.1 Variables and factors

(Wohlin et al. 2012) also says that before an experiment can be conducted, a set of words to describe the different parts of an experiment must be defined, to decrease confusion as to what is being addressed. According to (Wohlin et al. 2012) there are two types of variables in an experiment: independent and dependent variables. Dependent variables are the results we get from an experiment and what is used to evaluate our hypothesis. The independent variables are the different types of systems that can affect the result of the experiment.

Factors are the first part of the experiment. Factors could be anything from the execution environment, development environment, development tools, or similar. Anything that can affect the outcome of the experiment is a factor. In this experiment, the factor is the type of programming paradigm used, since that is what the experiment is about.

Programming paradigms could be divided into smaller fractions, like the type of design patterns and principles used, but in this instance they are considered to be a part of the larger programming paradigms. Other factors that could be worth investigating in the realm of real-time 3D visual simulations on the web could be the use of technologies like WebAssembly or the type of JavaScript interpreter. Modifying factors like the JavaScript interpreter might be more useful in a more advanced setting where the improvements could be applied to the interpreter in use; otherwise, these findings may go unused. Changing web rendering technologies to something like WebAssembly could be extremely useful considering the background research of this study. WebAssembly has a lot of potential upsides in comparison to JavaScript, especially when it comes to improving the performance of 3D visual simulations, since most of these are usually preferred to run natively on computers.

Since the factor in question is which programming paradigm should be used, the treatments have to be the types of programming paradigms that will be used. Object-oriented programming and data-oriented design are just a fraction of the design paradigms programmers can follow. Object-oriented programming was chosen by virtue of being the most used programming paradigm, as mentioned in (Kölling 1999). Data-oriented design was chosen because of its focus on fast memory access and improved performance, which is needed in real-time 3D visual simulations. Alternatives to this could be something with more emphasis on functional or even declarative programming for certain programs.

The dependent variables are, as previously mentioned, the output of the experiment. In this experiment, it was decided that the maximum refresh rate of the program is going to be the dependent variable to measure. The refresh rate of 3D visual simulations, or at least the rendering time of such, has been used in previous studies like (Montrym et al. 1997) and (Stoll et al. 2001). It is a good variable to measure, considering it is the final variable to measure before having to measure how subjects perceive the rendering itself. It is therefore important to measure whether the improved memory access times positively affect the program when it comes to user interaction. If the program has better memory access times but more overhead, resulting in a poorer refresh rate overall, it might not be a good solution to use after all. Other variables could include the bandwidth of the amount of memory the program can access in a certain amount of time, single memory block access times, or something similar. The main upside of using refresh rates is that it is easy to evaluate the effect they have on user interaction. It is also something that can vary with the complexity of programs. Very light programs might not see or need the improvement from data-oriented design, which would be apparent if the refresh rate is already so high that it is indistinguishable to the average user.

(Wohlin et al. 2012) also mentions the use of subjects in empirical experiments. When it comes to empirical experiments done in a computer setting, it is unclear if the subjects are supposed to be the developers or nothing at all. The unclear terminology could be due to the fact that the terminology used by (Wohlin et al. 2012) is often used within biomedical research. Would the programmer that has programmed the different programs using the different programming paradigms be a subject? It is unclear in (Wohlin et al. 2012) if the programmer should be considered a subject, but the argument can be made that they rightfully are. This would be especially true considering that programming paradigms are not a strict approach and can produce different results depending on the subject that performs the programming. If this argument holds up, then the author of this study is the only subject.


4.3 Research ethics

Wohlin et al. (2012) says that laws are often tailor-made for studies within biomedical research. Wohlin et al. (2012) also mentions how there should be proper laws about the ethics of how peoples' information and data used within experiments should be handled. It is also mentioned that peoples' code should be taken into consideration when conducting experiments. Code is the only type of information made by other people that will be used within this experiment. Wohlin et al. (2012) says that the only code that should be taken into consideration from an ethical standpoint is code that reveals information about the writer. Wohlin et al. (2012) does not specify what "code that reveals information about people" is, which makes the statement confusing and hard to understand; it would be preferable if it were specified in higher detail. Since this experiment does not use code that reveals information about the writer, this should not be an issue for this experiment.

All of the source code that is produced and used will be uploaded to a public code repository. The technology for storing this code will be Git. Git is a version control system that is widely used in software development. (Loeliger & McCullough 2012) talks about a popular public platform for publishing code, named GitHub, which will be the platform of choice to publish to in this experiment. Since this experiment will not use any personal information or code that is sensitive to the author of said code, there will be no issues with the confidentiality of data. Wohlin et al. (2012) speaks about sensitive results that could hurt subjects, sponsors, or the researchers themselves. It is mentioned that, to ensure that the moral standards of experimentation are upheld, statistical analysis should be done by peers to ensure that the treatment that is favoured in the hypothesis does not get statistical benefit from something like hand-picked data.

To ensure the repeatability of this experiment, the code will, as previously mentioned, be uploaded to a publicly accessible code repository. The data used in the experiment will also be published in this code repository for maximum transparency and repeatability. The public repository will also contain instructions on how to conduct the experiment itself. Having readily available instructions and data will ensure that the experiment can be repeated and ideally give the same results, given the same type of environment as the original experiment.

5 Implementation

The implementation of the real-time 3D visual simulation experiment started with performing a literature search and study. After the literature study had been performed, the development of the simulation started. When the development of the experimental simulation was done, a pilot study was performed to ensure the validity of the written code. The pilot study results are also discussed to try to better understand why the results are the way they are.


5.1 Literature study

According to (Wohlin et al. 2012), a literature search and study is performed to form an understanding of what is considered the state of the art in a certain field or area. (Wohlin et al. 2012) specifies the main way of conducting a literature search as specifying search strings and applying them to databases. The search strings were applied to these main databases to find relevant studies:

• Google Scholar

• IEEExplore

• ResearchGate

• ScienceDirect

• ACM Digital Library

To solve the practical issues of this experiment, other sources were also used. Sources other than studies for this literature study and implementation were books, web articles, and online forum posts.

When it came to deciding the search strings, the relevant technologies and methodologies that were going to be used in the experiment had to be identified. This study focuses mostly on WebGL and not on similar technologies like X3D, because of its relevant usage and well-supported standard. Technologies and libraries that would work adjacent to WebGL would be ones that improve the simulation or simplify the implementation without worsening or restraining either of the programming paradigms in a significant way.

To find similar studies, relevant terms such as "object-oriented programming", "data-oriented design", "entity-component system", "arrays of structures" and "structures of arrays" were used. The studies most similar to the experiment tested in this study were (Homann & Laenen 2018, Fedoseev et al. 2020, Faryabi 2018, Hatledal et al. 2021). These studies all evaluated something related to object-oriented programming. Some focused on all aspects of object-oriented programming and some focused more on individual aspects, like arrays of structures compared with structures of arrays. One thing all of these studies have in common is that none of them was done in a web context, and none of them was a direct comparison between programming paradigms. To understand the possible applications of data-oriented design, different environments have to be tested. The memory assignment and threading of JavaScript differ a lot from more traditional languages. Making an experiment in a web context would be an interesting addition to the literature.

Other than using the standard JavaScript library, which includes WebGL, other types of libraries were needed. There was a need for a matrix library to provide easier implementations of matrix manipulation. Other libraries, like physics simulation libraries, were also looked for, for the possibility of providing a closer similarity to real real-time 3D visual simulations in WebGL. The Python libraries Pandas and Matplotlib were used to visualise the data gathered from the simulation.


Name       Documentation  License  Integration  Lightweight  Maintained
AvoMatrix  yes            yes      yes          yes          no
glMatrix   yes            yes      yes          yes          yes
math.js    yes            yes      yes          no           yes
math.gl    yes            yes      yes          no           yes

Table 1: Matrix library comparison

WebGL documentation was found by searching "WebGL" in the mentioned databases, as well as the search engine Google. The most helpful source for getting WebGL up and running was (Parisi 2012). Other helpful resources that were used were the Mozilla Foundation's MDN JavaScript documentation, webglfundamentals.com, and stackoverflow.com.

To find math libraries with support for matrices, the search strings "webgl math" and "javascript matrix math" were used. Four candidate libraries were found:

• AvoMatrix

• glMatrix

• math.js

• math.gl

Weighing the pros and cons of every library, glMatrix was the best one for the intended purpose based on several factors. These factors were extensive documentation, high license compatibility, easy integration with WebGL, being lightweight, and being actively maintained. glMatrix fits these criteria the best of the libraries discovered. glMatrix also provides instructions on how to integrate the library with WebGL and how to create and modify matrices and vectors on its website, glmatrix.net.
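A few of the documented glMatrix calls that such a simulation relies on, shown as a short sketch; the chosen values are illustrative.

    import { mat4, vec3, quat } from "gl-matrix";

    // Projection matrix: field of view, aspect ratio, near and far planes.
    const projection = mat4.create();
    mat4.perspective(projection, Math.PI / 4, 16 / 9, 0.1, 100.0);

    // View matrix from an eye position, a look-at target and an up vector.
    const view = mat4.create();
    mat4.lookAt(view, vec3.fromValues(0, 0, 5), vec3.fromValues(0, 0, 0), vec3.fromValues(0, 1, 0));

    // Model matrix from a rotation quaternion and a translation vector,
    // the same call used in Listing 1 further below.
    const model = mat4.fromRotationTranslation(
      mat4.create(),
      quat.create(),
      vec3.fromValues(1, 0, 0)
    );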

In table 1 there is a comparison of each matrix and math library with the previously mentioned factors. These factors are only relevant to this simulation and may vary depending on which system these libraries are integrated into.

The candidates for physics libraries were Oimo.js and Cannon.js. These libraries were found by searching "javascript 3d physics". Most physics libraries only focus on 2D, but at this point in the development of the simulation, everything was already tailored towards 3D. Both Oimo.js and Cannon.js seemed like decent candidates, but Oimo.js had more recent updates; its last code push was performed in 2019, which is later than Cannon.js, which had its last code push in 2016.

The Python libraries Pandas and Matplotlib were used to visualise the data gathered from the simulation. Any program or library could be used to visualise the generated JSON data; the selection of these was arbitrary, chosen only for ease of use.

5.2 Development

After the initial libraries and frameworks had been decided, the implementation of the simulation started. Since the simulation will not simulate a full web application, it is just served as static files from a web server, similar to a multi-page application. This also prevents unnecessary overhead from frameworks that are not needed when creating 3D visual simulations in WebGL.

5.2.1 Initial stages

To get the simulation started, instructions from the starting chapters of (Parisi 2012) were followed to create a web page with a WebGL canvas context initialized. (Parisi 2012) later suggests using a framework like Three.js on top of WebGL, which was avoided to keep full control of how data is managed in the simulation.

5.2.2 Data loading

To load vertices from a file, a file parser was created. The Wavefront OBJ format is a relatively simple format compared to more modern formats like COLLADA and FBX, but it is sufficient for this type of simulation. A simple OBJ parser was created to convert a given file into vertices, normals, and indices. This parser was created with the help of the OBJ description in (Mchenry & Bajcsy 2008). The file's parsed data was then used to render the object defined in the OBJ file.[1]

To optimize the data loading further, caching of resources was added. This prevents the same resource from having to be stored in the graphics module's memory multiple times and from having to be parsed more than once.[2]
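A stripped-down sketch of such a parser (illustrative, not the study's parser; it only handles v, vn and triangulated f lines with plain vertex indices):

    function parseObj(text) {
      const positions = [];
      const normals = [];
      const indices = [];
      for (const line of text.split("\n")) {
        const [kind, ...parts] = line.trim().split(/\s+/);
        if (kind === "v") positions.push(...parts.map(Number));
        else if (kind === "vn") normals.push(...parts.map(Number));
        else if (kind === "f") {
          // "f 1//1 2//2 3//3" -> vertex indices, converted from 1- to 0-based;
          // parseInt stops at the first "/" so only the vertex index is read.
          indices.push(...parts.map((p) => parseInt(p, 10) - 1));
        }
      }
      return {
        positions: new Float32Array(positions),
        normals: new Float32Array(normals),
        indices: new Uint16Array(indices), // matches gl.UNSIGNED_SHORT in Listing 1
      };
    }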

5.2.3 Shading

Many 3D simulations include lighting to improve the realism of the simulation. A popular lighting model is the Phong lighting model. The model implemented in the simulation is a Phong lighting model and was inspired by code from (de Vries n.d.). The Phong lighting model has 3 layers: a diffuse, an ambient, and a specular layer. The diffuse shading darkens and lightens triangles depending on their direction to the light source, based on their normal value. The ambient shading applies the light source evenly throughout the object. The specular shading simulates the light's rays reflecting towards the viewer for each pixel. The lighting was implemented alongside the implementation of the object-oriented simulation.[3][4][5][6]
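A conventional Phong fragment shader of the kind described, written as a GLSL ES string for use with WebGL 2; the uniform and varying names are illustrative and not taken from the study's code.

    const fragmentShaderSource = `#version 300 es
    precision highp float;

    in vec3 vNormal;    // interpolated normal from the vertex shader
    in vec3 vFragPos;   // interpolated world-space position

    uniform vec3 uLightPos;
    uniform vec3 uViewPos;
    uniform vec3 uObjectColor;

    out vec4 outColor;

    void main() {
      // Ambient: a constant share of the light applied evenly.
      vec3 ambient = 0.1 * vec3(1.0);

      // Diffuse: brighter the more the surface faces the light.
      vec3 norm = normalize(vNormal);
      vec3 lightDir = normalize(uLightPos - vFragPos);
      vec3 diffuse = max(dot(norm, lightDir), 0.0) * vec3(1.0);

      // Specular: a highlight where the reflected ray meets the view direction.
      vec3 viewDir = normalize(uViewPos - vFragPos);
      vec3 reflectDir = reflect(-lightDir, norm);
      vec3 specular = 0.5 * pow(max(dot(viewDir, reflectDir), 0.0), 32.0) * vec3(1.0);

      outColor = vec4((ambient + diffuse + specular) * uObjectColor, 1.0);
    }`;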

5.2.4 Object-oriented simulation

Following up the initial code, the object-oriented programming simulation started to be developed. After basic features like support for model, view, and projection matrices were added, the simulation was refactored into a more object-oriented simulation.[7]

According to (Stroustrup 1988), a core part of object-oriented programming is the inheritance of objects. For that reason, the object structure was created. The base object was a WorldObject. The second class, inheriting WorldObject, was VisualObject. The VisualObject obtains the transform attributes like position and rotation while defining its own attributes like the VAO and indicesLength, which are used for drawing the object.[8]

[1] https://github.com/a18conch/exam-project/commit/7221d69
[2] https://github.com/a18conch/exam-project/commit/cbf2d98
[3] https://github.com/a18conch/exam-project/commit/f02f4ea
[4] https://github.com/a18conch/exam-project/commit/319ff6a
[5] https://github.com/a18conch/exam-project/commit/2fb9d2e
[6] https://github.com/a18conch/exam-project/commit/319ff6a
[7] https://github.com/a18conch/exam-project/commit/8680364


After the core structure for the object-oriented programming simulation was done, the uniforms in the shaders that stay the same for the entirety of the simulation are set. The uniforms that are the same for the entirety of the simulation are values like the view location and view and projection matrices. After doing the initialization of the objects described in the previous paragraph and setting the uniforms, the main iteration loop starts. Inside the loop, the color and depth buffers are first cleared. After the buffers are cleared the array of VisualObjects is iterated upon and their respective draw function is called. The core of the draw loop is demonstrated in the listing 1.


draw(gl, program) {
    const model = mat4.fromRotationTranslation(mat4.create(), this.rotation, this.position);
    gl.uniformMatrix4fv(gl.getUniformLocation(program, "model"), false, model);
    gl.bindVertexArray(this.VAO);
    gl.drawElements(gl.TRIANGLES, this.indices.length, gl.UNSIGNED_SHORT, 0);
}

Listing 1: The core of the object-oriented draw method

The first thing that happens in the draw function is setting the uniforms that are unique to each VisualObject, such as the position and color of the object. These uniforms were gradually added to the draw function and shader as they were needed; some started out constant across the whole simulation but were later made unique to each object.


After that, the object is drawn using the pre-initialised values that were created before the simulation loop started, like the vertex array object (VAO).

5.2.5 Data-oriented simulation

After the object-oriented simulation was able to render objects in a generic way, the development of the data-oriented simulation started. (Faryabi 2018) describes how an entity-component-system pattern was used to implement and follow a data-oriented design in the game engine Unity. Other informal sources like (Ford n.d.) and (Fabian n.d.) were also used to study how data-oriented design is structured in real-life systems.

An entity-component system was then implemented similar to (Faryabi 2018). Accessing attributes inside of components seemed convoluted, which prompted the change to storing plain values instead of component objects; in this sense, each value became a component. This change did increase the time needed to fetch entities with a certain type of component, which became a trade-off for simplicity. Storing values as single components might be changed for the final experiment but stayed this way throughout the pilot study.

When the data-oriented simulation started being developed, the object-oriented functions and classes were converted into systems and components. All the state of WorldObject and VisualObject was stored in their components. Attributes of the classes like xPos, yPos, zPos, color, VAO or indicesLength were now their own components. Functions like draw and the process of copying values from each respective object's physics object's state were now the systems PhysicsSystem and RenderSystem.

8. https://github.com/a18conch/exam-project/commit/8680364
9. https://github.com/a18conch/exam-project/commit/8680364
10. https://github.com/a18conch/exam-project/commit/4265e62
11. https://github.com/a18conch/exam-project/commit/8680364


Some helper functions that convert objects into entities with components were also added.
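A minimal sketch of this value-as-component structure and such a helper could look as follows; the storage layout and function bodies are assumptions, although the component names mirror the ones mentioned above.

// Hypothetical component storage: one map per component type.
const components = {
  xPos: new Map(), // entity id -> number
  yPos: new Map(),
  zPos: new Map(),
  VAO: new Map(),  // entity id -> WebGLVertexArrayObject
};
let nextEntity = 0;

// Helper converting a plain object of values into an entity with components.
function createEntity(values) {
  const id = nextEntity++;
  for (const [type, value] of Object.entries(values)) {
    components[type].set(id, value);
  }
  return id;
}

// Calls fn for every entity that has all requested component types; this kind
// of lookup is the overhead attributed to iterateEntitiesOfTypes later on.
function iterateEntitiesOfTypes(types, fn) {
  for (const [id] of components[types[0]]) {
    if (types.every((t) => components[t].has(id))) {
      fn(id);
    }
  }
}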


5.2.6 Physics

To make the simulation more like real-life use cases, physics simulations were added. The library Oimo.js was added and a new physics world was initialized following the documentation. Entities were also added as physics objects in the world; this reference was then saved to the visual object in the object-oriented simulation and to its component in the data-oriented simulation. The updating of the physics world was then included in the update loop of both programming paradigms' simulations: in the object-oriented simulation it was added to the main loop, while in the data-oriented simulation it was added to its own system.
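As a sketch of this coupling, assuming the world from listing 2 and a hypothetical object, a body can be added and its simulated transform copied back each update; the option values here are illustrative, not the thesis parameters.

// Hypothetical: create a dynamic box body and keep a reference to it.
const body = world.add({
  type: "box",
  size: [1, 1, 1],
  pos: [0, 10, 0],
  move: true, // dynamic body affected by gravity
});
object.body = body; // stored on the VisualObject, or in a component

function updatePhysics() {
  world.step(); // advance the Oimo.js world one timestep
  const p = object.body.getPosition();
  object.position = [p.x, p.y, p.z]; // copy the simulated position back
}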


5.2.7 Gathering data

When both the simulations were able to render objects, a way to perform the experiment and gather the data was developed. First, a counter was added to measure how many updates were performed: every update, the counter increases, and after a second the counter stores its value and resets. To ensure that the parameters used were the same across both simulations, they were shared in a module both simulations had in common. To get a larger sample size and better represent real systems of different sizes, different amounts of entities were created. The number of updates per second was recorded for a certain amount of seconds for each different amount of entities. All of this data was then stored as a JSON string and exported into a file.
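The counter can be sketched as follows; the variable names are assumptions, and it assumes the loop is driven by requestAnimationFrame.

let updates = 0;
let lastSecond = performance.now();
const recorded = []; // one update count per elapsed second

function loop() {
  // ...clear buffers and draw or update all objects here...
  updates++;
  const now = performance.now();
  if (now - lastSecond >= 1000) {
    recorded.push(updates); // store this second's update count
    updates = 0;
    lastSecond = now;
  }
  requestAnimationFrame(loop);
}
requestAnimationFrame(loop);
// Afterwards the data can be exported with JSON.stringify(recorded).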


A physics floor was also added for the physics objects to land on, making the objects interact with each other. This interaction makes it easier to determine whether the physics simulation is working.


5.3 Pilot study

To test the simulations, the code, and the data gathering methods, a pilot study was performed. It was also performed to see if there were any inaccuracies or unforeseen logic errors in the code. The sample size of the pilot study is not as large as that of the final study, since the results will not be used to confirm or deny the hypothesis.

The number of updates per second was recorded each second, for 10 seconds, for each different amount of entities. These different amounts were calculated with the function f(x) = (x · 10)², initialized for the values {1, 2, 3, 4, 5}. In total there were therefore 50 values being gathered; each value is the number of updates that were performed during that second. Since the test.js file contains the simulation data, that is where the parameters are defined. The different periods to test were defined as sections. These sections all had the same amount of time to run and see how many updates they could perform.

12. https://github.com/a18conch/exam-project/commit/4a135b3
13. https://github.com/a18conch/exam-project/commit/8550389
14. https://github.com/a18conch/exam-project/commit/cd4b7d4
15. https://github.com/a18conch/exam-project/commit/6efea90
16. https://github.com/a18conch/exam-project/commit/1a23ccd
17. https://github.com/a18conch/exam-project/commit/1c246db
18. https://github.com/a18conch/exam-project/commit/bfff027


Figure 6: Pilot study line data

Figure 7: Pilot study bars data


As seen in listing 3, the set is defined as the constant SECTIONS, and the 10 seconds is defined in TIME_TO_TEST.

let world = new OIMO.World({
    timestep: 1 / 60,
    iterations: 8,
    broadphase: 2,  // 1 brute force, 2 sweep and prune, 3 volume tree
    worldscale: 1,  // scale full world
    random: true,   // randomize sample
    info: false,    // calculate statistic or not
    gravity: [0, -9.8, 0]
});

Listing 2: Oimo.js world initialization

The experiment was then performed on Chromium, build revision 874339. The test was performed by first running the node package http-server in the directory of the static files. After the HTTP server was running, the oop/index.html and dod/index.html files were navigated to. When these files are navigated to, they automatically perform the experiment and then download the data in the browser. Figure 6 presents the average updates per second for each simulation with different amounts of entities; the error bars show the lowest and highest average updates per second recorded for that amount of entities. Figure 7 shows the average number of updates for each respective programming paradigm, with the standard deviation shown on the error bars.

const SECTIONS = [1, 2, 3, 4, 5];
const TIME_TO_TEST = 10;

Listing 3: Testing parameters

A one-way analysis of variance (ANOVA) test was performed with Python, yielding an F statistic of 0.45719 and a P value of 0.50071. These values were evaluated at a confidence level of 95%, which makes the confidence coefficient 0.95 and α = 0.05. Since P > α, the null hypothesis cannot be rejected and the means are not significantly different. Therefore this pilot study cannot be used to disprove the initial hypothesis.

5.3.1 Discussion

The general trend seems to be that object-oriented programming is ahead on average. This was investigated to see if there were any bottlenecks in any of the simulations. According to Chrome DevTools, in figure 8, the function iterateEntitiesOfTypes occupies a large share of the total simulation execution time. If this function is compared to the functions in the object-oriented simulation in figure 9, none of those functions take up as much execution time. The function iterateEntitiesOfTypes is responsible for fetching certain components from entities, and it is this overhead that is responsible for the increased execution time. If this overhead were to be reduced, data-oriented design could therefore perform better than it did in this pilot study.

Using an already created framework for data-oriented design might yield more performance. This should be researched and evaluated to ensure that it is not this specific implementation of data-oriented design that is the cause of the lack of performance.


Figure 8: Chrome DevTools performance evaluation for data-oriented design

Figure 9: Chrome DevTools performance evaluation for object-oriented programming


6 Evaluation

This chapter evaluates the final results of the simulations, which were improved after the pilot study. It begins by noting the changes made after the pilot study and then analyses the data collected once those changes were in place.

6.1 Code changes

To improve the simulations and add depth to the scene graph, object/entity children functionality was added. In the object-oriented simulation, this was implemented by adding children to each object directly, while in the data-oriented simulation it was implemented by creating separate entities that stored the index of their parents. This made the object-oriented simulation traverse from parent to children, while the data-oriented simulation had to traverse from child to parent. It is less efficient to traverse from child to parent, as the parent's transform has to be calculated each time a child needs it; if the graph is traversed from parent to children, the transform only has to be calculated once. This seemed to be the easiest solution at the time, but other solutions could be implemented to improve the performance of data-oriented design.
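The difference between the two traversal directions can be sketched as follows; the data layout and the multiply and draw helpers are hypothetical, not taken from the thesis code.

// Child-to-parent: every child walks up and recomputes its parent chain.
function childToParentTransform(entity, parentIndex, localTransform) {
  let transform = localTransform[entity];
  let parent = parentIndex[entity];
  while (parent !== undefined) {
    transform = multiply(localTransform[parent], transform); // repeated per child
    parent = parentIndex[parent];
  }
  return transform;
}

// Parent-to-child: each parent's transform is computed once and passed down.
function parentToChildDraw(object, parentTransform) {
  const world = multiply(parentTransform, object.localTransform);
  draw(object, world);
  for (const child of object.children) {
    parentToChildDraw(child, world);
  }
}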


Another data-oriented design simulation was added. The addition was done to provide another implementation of an entity-component-system pattern structure to compare to. The ECSY (ecsy.io) framework was added and implemented in another simulation, henceforth referred to as "DOD2". This framework follows an entity-component-system pattern where components are added to entities and then modified and read by systems.
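Following the examples on ecsy.io, the structure of such a simulation can be sketched as below; the Position component and GravitySystem are illustrative, not the actual DOD2 code.

import { World, Component, System, Types } from "ecsy";

// A component only declares its data.
class Position extends Component {}
Position.schema = {
  x: { type: Types.Number },
  y: { type: Types.Number },
  z: { type: Types.Number },
};

// A system queries entities by component and modifies them each update.
class GravitySystem extends System {
  execute(delta) {
    for (const entity of this.queries.positioned.results) {
      entity.getMutableComponent(Position).y -= 9.8 * delta;
    }
  }
}
GravitySystem.queries = { positioned: { components: [Position] } };

const world = new World();
world.registerComponent(Position).registerSystem(GravitySystem);
world.createEntity().addComponent(Position, { x: 0, y: 10, z: 0 });
world.execute(1 / 60, performance.now()); // run all systems for one timestep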


Some minor changes were also made to how data was collected. One of these changes was making indices get stored automatically instead of having to be written in manually when presenting the data. The other change was made to ensure that each section runs for the appropriate amount of time: in some sections during the pilot study, the run lasted less than the intended time due to small differences in the time recordings. This was fixed by making each second of a section get recorded individually, all having the same weight on the total time of the section.


6.2 Data collection

The way the data collection was performed in the final study changed from the pilot study. There was an off-by-one error in the pilot study, recording only 9 data points instead of 10 for each section. This did not impact the study, as both programming paradigms had the issue, but it has been corrected for the final study.

The number of total data points recorded had also increased from 5 to the initially planned 150. To get a wider range of entity counts, a different mathematical function was used to get the number of entities for each iteration. The mathematical function used in the final study was f(x) = 10 · e^(0.04x) instead of the one used for the pilot study, f(x) = (x · 10)². This function was then iterated over with the values {1, 2, 3, ..., 150} to get the number of entities for each section.
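The resulting section sizes can be generated as below, reusing the SECTIONS constant from listing 3; the rounding is an assumption.

// 150 entity counts from 10 · e^(0.04x), x = 1..150 (roughly 10 up to ~4034).
const SECTIONS = Array.from({ length: 150 }, (_, i) =>
  Math.round(10 * Math.exp(0.04 * (i + 1)))
);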

19. https://github.com/a18conch/exam-project/commit/0901bf8
20. https://github.com/a18conch/exam-project/commit/0a333d4
21. https://github.com/a18conch/exam-project/commit/39e316b
