Mixed-initiative Puzzle Design Tool for Everyone Must Die

Nils Brundin, Viktor Rörlien

Computer Science Bachelor's Thesis, 15 ECTS credits, Spring 2021

Supervisor: José Font


Abstract—The application of PCG to generate puzzles offers great value, since an individual puzzle's replayability is severely limited, requiring any game that employs puzzles to produce many different ones. In this paper we propose a modified version of the progressive content generation approach that functions as a mixed-initiative system for creating puzzles for the novel, partially physics-based game Everyone Must Die, thereby exploring the adaptability and usefulness of the progressive content generation approach for a unique type of puzzle game. Further, the mixed-initiative system is examined with respect to how effectively it can generate puzzles of a specified difficulty, an issue many papers on puzzle generation neglect. This is done by implementing and incorporating a PCG system that extends an existing puzzle editor featured in the game. The analysis is conducted with the help of a user study on the developers of the game, testing qualitative experiences with the system. The promising results are then discussed, and the paper concludes with suggestions for future work and improvements to the described system and the approach it uses.

Index Terms—PCG, mixed-initiative, progressive content generation approach, Everyone Must Die

I. INTRODUCTION

Procedural Content Generation, abbreviated PCG, is the algorithmic creation of game content with limited or indirect user input [16]. PCG, in other words, focuses on the automatic or semi-automatic generation of various kinds of content, ranging from stories and characters to levels and puzzles.

This study focuses on PCG methods relating to generating and evaluating puzzles. There has been previous research into this particular content space, in large part because PCG is especially beneficial for puzzles, which generally have low replayability value yet are popular in many different forms [1]. One particular type of puzzle presented in [1] is the physics-based puzzle. The difficulty with generating these puzzles lies in the unpredictability of the outcome; constructing and validating them therefore requires a simulation run. The challenge this paper faces is how to generate partially physics-based puzzles in a mixed-initiative approach for the game Everyone Must Die (EMD), a wild-west themed puzzle game that partially makes use of physics for puzzle solving.

There has been similar research [2][3][4] conducted towards the goal of generating physics-based puzzles for the mobile game Cut The Rope [5]. In particular, this study draws heavy inspiration from the progressive content generation approach described in [2], which is a combination of search-based and constructive approaches.

It functions by having a search-based algorithm evolve and find a timeline of actions, which the constructive algorithm then interprets and simulates to create a playable puzzle.

Notably, the applications presented and suggested for the approach in their study [2] differ slightly from the puzzles featured in Everyone Must Die, which presents another intriguing challenge for this study to overcome. The recent survey by B. De Kegel and M. Haahr [1], which focuses specifically on PCG for puzzles, forms the basis of this paper's insights into this particular domain of PCG.

This research adapts their approach to allow for human control in the generation process, realizing a mixed-initiative system. Mixed-initiative systems in games refer to the combination of PCG methods and a human designer in a collaborative effort to create content [8][9]. Notable examples are Tanagra [11] and Sentient Sketchbook [10]. In Tanagra, the human designer specifies the position of certain platforms and the computer generates the remaining topology. In Sentient Sketchbook, on the other hand, the human designer edits a strategy map while the computer continuously generates variants of the human design.

The research question posed is how the progressive content generation approach can be adapted and applied, in conjunction with a mixed-initiative approach, to the puzzle game Everyone Must Die. Answering it offers several beneficial contributions.

Firstly, it demonstrates the adaptability and applicability of the progressive approach. Secondly, it presents a suggestion for how to combine the approach with a mixed-initiative system, which could open up interesting new avenues for the progressive approach in the future. Further, by allowing the designer more control, the approach presented in this study helps alleviate an issue raised in both [2] and [1]: how to generate puzzles with a specified difficulty.

Conducting this study requires the development of an artefact to facilitate a PCG system by extending the existing puzzle editor in the game. This involves further study of work related to the progressive content generation approach and of various mixed-initiative techniques. Following the implementation of the approach, tests are conducted to measure the effectiveness and usability of the method through a qualitative questionnaire in conjunction with metrics gathered from the design process. To conclude, the results are analyzed and future research is suggested.

The following sections of the paper will first discuss the area of concern and related research in greater depth, followed by an explanation of what Everyone Must Die is and how it works. The methodology is then discussed with further insights into the inspiration and what adaptations have been made, alongside the data collected by the system after tests. The study then ends with a discussion of the findings and the drawn conclusions.

II. RELATED RESEARCH

This section consists of three main subsections.

The first subsection, Everyone Must Die Overview, provides an overview of the game that the design tool is implemented on, Everyone Must Die. The overview explains how the video game works, as well as its core mechanics.

The subsection Puzzles in PCG covers puzzle games similar to EMD that have been researched in the context of procedural content generation, and gives a brief explanation of said puzzle games. These explanations are given because the subsection Procedural Content Generation goes over the PCG methods that have been used on those puzzle games in previous research. That subsection also explains what PCG is and outlines its history.

A. Everyone Must Die Overview

Everyone Must Die was developed in a project course called Game Engine Driven Product Development, part of the Game Development programme at Malmö University. The authors of this paper, Viktor Rörlien and Nils Brundin, were a part of the development team for this game. A majority of the original team, the authors of this thesis included, decided to continue the development of the game after the end of the course. However, at the time of writing, the game has not been released to the public.

Fig. 1. Example of a user-created puzzle level, containing five different characters and the items available to the player (to the left side of the image).

Fig. 2. Example of how the special items (mirror, frying pan and alarm clock) can be used together to kill both enemy characters. The character on the left will shoot at the mirror in front of it, while the character on the right will first turn towards the alarm clock next to it, then shoot towards the mirror in front of it. Both bullets will then hit the frying pan and bounce towards the two characters.

Everyone Must Die is a puzzle game set in the wild west.

Each puzzle is a sort of Mexican standoff between a set of NPC characters; to solve a puzzle the player has to make sure all characters die (see Figure 1 for an example level). The NPCs follow very basic rules: for example, each targets the closest enemy NPC within its vision cone and tries to kill it. The puzzle element comes from the fact that some characters will be turned away from each other or will not be within each other's vision range. However, the player cannot control any of the characters directly, so they have to manipulate the environment around the characters to make them kill each other. This is done with the help of a set of special items available to the player, each with its own unique properties (see Figure 2 for an example of how they can be used). Characters can also be manipulated by moving certain objects of the existing environment to block or redirect their vision. There are also specific environmental hazards, like trains or explosive barrels, shown in Figure 3, that cannot be moved but have special interactions when hit. Additionally, every level contains a varying number of environment objects placed by the designer that are static and cannot be manipulated by the player. The game loop is split into a planning phase, in which the player manipulates the environment, and a cinematic action phase, in which the player sees whether they succeeded.

Fig. 3. The figure shows an explosive barrel hazard that would kill two characters if hit.

To limit the scope of the research, this study makes use of a small selection of the special items: a mirror, a frying pan and an alarm clock. The mirror is treated like any other enemy by the NPCs, the frying pan redirects any projectile that hits it, and the alarm clock forces any NPC within its range to turn towards it.

B. Puzzles in PCG

In 2020, B. De Kegel and M. Haahr released an extensive survey of 32 different PCG methods within 11 categories of puzzles [1]. They give a detailed overview of the current status of the literature for each category of puzzles, as well as the specific methods that have been used for each category.

1) Physics Puzzles

One of the categories covered is physics puzzles, which are defined as “puzzles which have an element of unpredictability, as the game environment changes in real-time according to the laws of physics.” These puzzles are often considered unique since they typically require very precise timing or execution of actions by the players. The survey brings up two different games, Cut The Rope [5] and Angry Birds [7], which have puzzles that follow these criteria and have also been studied in the context of PCG.


A brief description of how the games work: in Cut The Rope, a candy hangs by a rope that moves according to the laws of physics, meaning that if the rope is cut, the candy will travel according to the velocity of the rope. The goal is then to precisely time the cutting of the rope so that the candy falls into the mouth of a monster, which is placed in a fixed position. (Note that this explanation is somewhat simplified, as there are several other objects in the game, such as a bounce pad which the candy might need to hit to reach the monster's mouth.) In comparison, the physics puzzles in Angry Birds differ quite a bit in that precisely timing actions is not a relevant factor. Instead, in Angry Birds the player uses a slingshot to shoot birds at structures, which then collapse and destroy the target. This means that the player must use their physics knowledge to decide at what angle and how hard to shoot the bird to optimally collapse the structures.

This differs quite a bit from EMD, where the physics part of the gameplay mainly consists of accurately understanding and predicting the travel time and direction of projectiles. As such, there is no timing or execution challenge similar to that of the previously mentioned games. Hence, this paper will not only look at related research from the physics puzzle category, but also at the approaches in another category mentioned in the survey: path-building puzzles.

2) Path-Building Puzzles

De Kegel and Haahr define path-building puzzles as puzzles where the objective is “to create a new path by altering the environment at the hand of tools or items” [1]. Path-building puzzles are similar in many ways to maze puzzles; the key distinction, and what separates them from maze puzzles, is the fact that the player is allowed to modify the environment. Modifying the environment and placing items can be seen as one of the most fundamental aspects of the puzzles in EMD. What makes EMD different compared to most path-building puzzles is the fact that in EMD the player tries to manipulate not only non-player characters (NPCs) but also projectiles.

One of the path-building puzzle games brought up in the survey, and one that has had multiple PCG-related studies conducted on it, is Refraction [14]. The objective of the game is to move and arrange items to manipulate laser beams into hitting underpowered spaceships, which restores their power. The game plays out on a 10-by-10 grid, and there is a mismatch between the power needed by the spaceships and the outgoing power of the laser beams. This means it is the player's task to split and recombine the laser beams to provide the correct proportion of power (indicated by a fraction). The player does this by arranging the aforementioned items, which are the following: benders, which apply a 90-degree deflection to a beam; splitters, which consume one input beam and produce two beams at half power; and expanders, which facilitate the combining of unlike fractions by multiplying numerators and denominators by a common factor.
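To make the fraction arithmetic concrete, consider a small constructed example (ours, not taken from [14]): a full-power beam split twice yields beams of power 1/2 and 1/4, and an expander can rewrite the 1/2 beam as 2/4 so that the two like fractions can be recombined to power a ship requiring 3/4:

```latex
1 \xrightarrow{\text{split}} \tfrac{1}{2} + \tfrac{1}{2}, \qquad
\tfrac{1}{2} \xrightarrow{\text{split}} \tfrac{1}{4} + \tfrac{1}{4}, \qquad
\tfrac{1}{2} \xrightarrow{\text{expand}} \tfrac{2}{4}, \qquad
\tfrac{2}{4} + \tfrac{1}{4} = \tfrac{3}{4}
```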

C. Procedural Content Generation

PCG was initially developed during the early 1980s as a set of methods to deal with the limited capabilities of home computers at the time. One of the major issues games faced was the limited amount of memory available, which in turn constrained the amount of game content designers could store inside their games. By procedurally generating content, this issue could be circumvented. One of the earliest examples of this can be seen in the 1980 game Rogue, where levels are randomly generated every time a new game starts [6].

Since then, the field of PCG has continued to advance, and PCG methods can now be used to achieve a variety of goals. One such goal, which has become increasingly popular in recent times, is the procedural generation of puzzles. Although puzzles are popular with players and exist in a wide variety of games and genres, they all share a common issue: an individual puzzle is not enjoyable for players to replay. Procedurally generating puzzles therefore seems like a promising solution, as long as one can generate puzzles that are sufficiently unique and interesting to the players. However, procedurally generating puzzles is challenging in several ways. For instance, puzzles often require a strict solvability constraint, which becomes problematic as the outcome of applying PCG is usually unpredictable [1].

1) PCG Methods

Multiple PCG methods were considered before one was finally decided on. To decide which method was most suitable, several factors were considered. One of the primary factors was how practical the implementation would be in EMD. Another factor, and probably the one that weighed most on the decision, was how well the method would work as a mixed-initiative tool.

The following methods were used by the studies examining the two categories of puzzle games mentioned earlier, and they were considered when deciding which PCG method to use.

a) Mixed-Initiative

As previously stated, mixed-initiative systems in games are a collaborative effort between a human designer and PCG methods to create content [8][9]. Out of the 32 different methods reviewed in the PCG puzzle survey [1], only four make use of mixed-initiative in the design process.

Tanagra uses a mixed-initiative system to generate platform levels [11]. The human designer can place down and edit geometry in the level, as well as mark certain aspects of the geometry that should always be kept. The computer then generates the rest of the geometry using small building blocks in conjunction with a beat timeline. The beat timeline defines the rhythm of the level, specifying the frequency of obstacles and other challenges affecting the level's difficulty. Sentient Sketchbook also uses a mixed-initiative system, but to generate strategy levels typically found in RTS games [10]. The human designer edits a low-resolution version of the level, which focuses on the key geometric properties of the map, like base locations, impassable terrain, etc. The computer tests the level for playability while evaluating the map on its gameplay properties. It also continuously generates variants of the user level with the help of genetic algorithms and displays these, alongside the previously mentioned information, to aid the human designer in the creative process. The levels are finally interpreted into higher resolution with the help of cellular automata.

The benefits of mixed-initiative systems come from the fact that they enhance the human designer during the creative process [9]. By suggesting new designs or evaluating levels, they allow the designer to quickly explore a larger search space. There is, however, a high expectation that the computer can handle everything the designer can do, so as not to hinder the creative process in any way [15]. This can be an issue if the computer does not follow the designer's mental model of how the system should function.

b) Answer Set Programming

Answer Set Programming (ASP) is a form of declarative programming that is primarily used for difficult search problems, which are usually NP-hard [12]. It works by declaring constraints and logical relations in a Prolog-like language. An ASP solver is then used to find world configurations that satisfy the constraints expressed in this language [13]. PCG methods using ASP commonly use AnsProlog as the logic programming language of choice [17].

Compared to modern multi-paradigm languages such as Python, AnsProlog is much more basic. It contains only two constructs: rules and facts. Rules come in three types and control the production of new facts. These three types are: choice rules, which define how to guess a description of a potential solution; deductive rules, which define how to deduce properties of a guessed solution; and integrity constraints, which forbid solutions that display or lack certain deduced properties. The other construct, facts, consists of statements used to describe properties of an input problem instance or other kinds of configurations [14].

In a study by E. Butler et al [14], ASP is used to implement an automatic puzzle generator for the earlier mentioned game Refraction. To achieve this, they split their puzzle generator into three distinct systems: grid embedding with ASP, mission generation with ASP, and an ASP-based puzzle solver.

The grid embedder, as the name implies, has rules and constraints regarding the positions and validity of grid objects. Examples of a rule and a constraint for the grid embedder are: “deduce relative (north/south/east/west) positions from absolute positions” and “forbid two pieces overlapping”.

The ASP-based mission generator has other rules and constraints, mainly relating to the high-level design concerns when creating a Refraction puzzle. These revolve around questions such as which game elements are active and how large the intended solution is. “Guess which pieces will be active.” and “Forbid edges to nodes not on a path to a target.” are examples of a rule and a constraint for this system.

Finally, the ASP-based puzzle solver's goal is, as the name implies, to solve the puzzle, and it has rules and constraints to help it do just that. Examples of a rule and a constraint it uses are “Guess piece position.” and “Forbid leaving targets unpowered.”. Outside of its own rules and constraints, the system also makes use of some from the grid embedder (the previously mentioned constraint that forbids two pieces from overlapping) and the mission generator.

E. Butler et al [14] conclude that working with hard constraints is highly beneficial and recommend that other developers of procedural content generation try it. Additionally, the authors mention that their solution not only achieved attractive performance measures, but also gave them a better understanding of design automation problems through iterative exploration of constraints and generated output.

Overall, the constraint-based ASP solution presented in their paper suggests that a similar method could potentially be suitable for the mixed-initiative PCG design tool for EMD. The ASP method not only offers attractive performance but also a high degree of designer controllability through the constraints. In the end, due to inexperience with Prolog-like languages and ASP, this method was deemed unfeasible and out of the scope of this paper, and was ultimately not chosen.

c) Progressive Content Generation Approach

The progressive content generation approach presented by Shaker et al [2] is framed around its application to their clone of the mobile game Cut The Rope [5]. Cut The Rope is a physics-based puzzle game where the player is supposed to help guide a piece of candy to a monster called Om Nom.

This can, as previously described, be achieved by cutting ropes holding the candy, which causes the candy to go into free fall according to its current velocity in conjunction with the physics engine. In short, the gameplay consists of performing time-specific actions on components to redirect the candy towards Om Nom, which makes timing an especially important property of the game. Their study [2] uses some of the more basic components featured in the game: ropes, air-cushions, bubbles, rockets and bumpers. The possible actions derived from this set are as follows: a rope cut, an air-cushion press, a bubble pop, a rocket press and, finally, as they describe it, a void action when waiting for the candy to reach a specific position. All these actions, except the void action, redirect the velocity of the candy in slightly different ways.

Their paper explains that search-based methods are often too slow to be used in a real-time context and that more demanding playability considerations are hard to balance with these methods. It also describes that constructive methods tend to be very fast and often have to guarantee playability or usability, but that the cost of that playability is uninspired content. The motivation behind the progressive content generation approach was therefore to merge these two approaches: it tries to combine the diversity, flexibility and playability-preserving ability of search-based approaches with the speed of constructive approaches.

To accomplish this, the generation approach is split into two phases. The first phase uses a search-based method to generate a timeline of actions which the second phase, with the help of a constructive method, interprets and subsequently constructs into a playable level.

Timeline Generator

The timeline is a sequence of actions, from the previously mentioned set of actions, which should occur in that specific temporal order during a playthrough of a level. It essentially represents the rhythm of the level, which in turn can determine how difficult a level is. The timelines are generated generically to promote application in multiple dissimilar games. This is further eased by the grammatical evolution [21] used to evolve the timelines: adapting the timeline generator to another game simply requires defining a new design grammar with the game-specific events. The timestamp of each action in the timeline specifies the elapsed time since the previous action. This representation was chosen for its beneficial attributes. First and foremost, it limits the search space by eliminating many invalid candidates that have the same timestamp or overlap. It also allows the generator to define the frequency of actions and enables different waiting times for different types of actions, which further benefits the adaptability of the approach since that information is often game-dependent.
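As an illustration, a timeline with relative timestamps could be represented as in the following C# sketch. The action names and class layout are hypothetical stand-ins, not the data structures used in [2]; the point is that each action stores only the time elapsed since the previous one, which rules out identically-timed or overlapping candidates by construction.

```csharp
using System.Collections.Generic;

// Hypothetical action types for a Cut The Rope-style game (illustrative only).
public enum ActionType { RopeCut, AirCushionPress, BubblePop, RocketPress, Wait }

// One entry in the timeline: an action plus the time elapsed since the
// previous action, rather than an absolute timestamp.
public struct TimelineAction
{
    public ActionType Type;
    public float SecondsSincePrevious;

    public TimelineAction(ActionType type, float secondsSincePrevious)
    {
        Type = type;
        SecondsSincePrevious = secondsSincePrevious;
    }
}

public class Timeline
{
    public List<TimelineAction> Actions = new List<TimelineAction>();

    // Recover the absolute time of the i-th action by summing the deltas.
    public float AbsoluteTimeOf(int index)
    {
        float t = 0f;
        for (int i = 0; i <= index; i++)
            t += Actions[i].SecondsSincePrevious;
        return t;
    }
}
```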

Timeline Simulator

The timelines only contain imperfect information about what a level should contain; they hold no notion of the components' properties or exact positions. To determine this, and to make sure the timeline is playable, the approach employs a timeline simulator in the form of a constructive approach. The level is gradually constructed while an agent plays through the timeline, placing down components and using them as it encounters them in the timeline. Since many configurations of the timeline can be invalid, the simulator is optimised to place the components intelligently: they are placed so that they intersect with the trajectory of the agent. Through this method, the simulator can guarantee a playable design if it reaches the end. Should a simulation run fail, it will repeat a couple of times to ensure that there is no playable configuration.

III. METHOD

A. Research Approach

The research approach chosen for this study is to follow the design science paradigm. Design science is the design and investigation of artefacts in a context [18]. In other words, this process involves the identification of a specific problem and the subsequent development of an artefact to solve that problem in its given context. This paper follows the design science methodology defined by Peffers et al [19], which breaks the process down into six steps in a nominal sequence.

The steps as described by Peffers et al are summarized as follows:

1) Problem identification and motivation

Define the specific research problem and justify the value of a solution.

2) Define the objectives for a solution

Infer the objectives of a solution from the problem definition and knowledge of what is possible and feasible. The objectives can be quantitative or qualitative but should be inferred rationally from the problem specification.

3) Design and development

This involves determining the artefact's desired functionality and architecture, as well as the creation of the artefact itself.

4) Demonstration/Test

Demonstrate the use of the artefact to solve one or more instances of the problem. This could involve its use in experimentation, simulation, case study, proof, or any other appropriate activity.

5) Evaluation

Observe and measure how well the artefact supports a solution to the problem. This involves comparing the objectives of a solution to the actual observed results from the use of the artefact in the demonstration.

6) Communication

Communicate the problem and its importance, the artefact, its utility and novelty, the rigor of its design, and its effectiveness to relevant audiences.

B. Application of the Methodology

1) Problem identification and motivation

Everyone Must Die's puzzle mechanics are sufficiently unique, compared to other puzzle types, that the game provides an interesting playground for determining how applicable and adaptable some existing PCG methods are. To tackle this issue, the progressive content generation approach [2] was chosen, as it has had some success in generating puzzles for the slightly similar physics-based game Cut The Rope [5].

However, the goal is not only to change the testbed used for the progressive approach but also to adapt it to address the issue of generating puzzles with a specified difficulty.

Most research on the application of PCG to puzzles has focused on creating systems that can successfully create puzzles. However, many of these studies have either neglected, or failed to find, a solution to generating puzzles with a specified difficulty [1][2].

This is interesting for the authors as it will provide a method that can aid and enhance the creation of puzzles for the game.

It also demonstrates the adaptability and applicability of the progressive approach and helps alleviate a long-standing issue within puzzle generation.

2) Define the objectives for a solution

The objective is to adapt the progressive content generation approach in a way which makes it possible to generate puzzles for Everyone Must Die with a specified difficulty. We therefore propose to solve this by adapting the progressive content generation approach into a mixed-initiative system.

Everyone Must Die is a large and complex game, so to tackle the problem this research limits the scope of puzzle generation to the most basic components of the game: the mirror, alarm clock and frying pan described in the Everyone Must Die Overview section. Further, although there are several NPC types in the game, this study includes only one, the rifle character, which is the most basic NPC type because it never has to path-find when using the previously mentioned components.

3) Design and development

Everyone Must Die already has a puzzle editor to aid in the creation of puzzles. This editor, as well as the game, is developed in Unity [20]. To incorporate a PCG system, the puzzle editor was modified to include the UI required for the system. The PCG system was coded in C# within the Unity engine and developed iteratively in a bottom-up approach following Unity's architecture. Components were implemented one by one, starting with the mirror and followed by the frying pan and alarm clock. For a detailed description of the modifications to the progressive content generation approach, see subsection D of this section.

4) Demonstration/Test

Since the implemented PCG system uses a mixed-initiative approach to design the puzzles, the designer is required to have experience with both designing and playing the game to properly demonstrate and test the system. Further, the game is currently not available to the public, as it is still under development. Fortunately, Everyone Must Die was developed and designed by a team of seven developers, which means that outside of the authors there are five others who have the experience and ability to test the design tool.

To more effectively demonstrate the tool, these five individuals were interviewed about their experience with the design tool. They were asked the following questions: Did the design tool make designing the puzzles easier? To which degree did you make use of the generated puzzle suggestion? Were there inputs into the tool editor that resulted in unplayable or unexpected results?

While they are testing the tool, the following properties will also be observed and noted:

The time it took to design a puzzle of a specific difficulty level using the tool.

How many times a partial puzzle had to be generated for the designer to be satisfied.

How many times the generation failed compared to how many times it succeeded.

5) Evaluation

First and foremost, the evaluation will determine the applicability of the artefact through the questionnaire employed in step 4. This helps establish the usability and perceived efficiency of the artefact, which in turn gauges how much the tool influenced the designer's creation of puzzles.

The artefact will also be examined on the time it took to create the puzzles to determine if it speeds up the process in any way compared to when not using the tool. This will help in understanding whether the tool only serves as an inspiration to new innovative puzzle designs or if it helps speed up the designer in their existing thought process.

Finally, the last metric examined is how many times a partial puzzle had to be generated before the designer found something they wanted. This helps evaluate how controllable and innovative the artefact is: if the designer constantly regenerates, they might not be finding what they are looking for, or they may not be intrigued by the generated candidates.

6) Communication

The development and testing of our artefact, and the problem it aims to solve, are documented through this paper. It is submitted to the research community alongside a discussion of possible future research endeavours in this particular field of study.

This study's novelty comes from the fact that it applies a PCG method to the game Everyone Must Die, which differs greatly from all the other puzzle games addressed in the puzzle survey by B. De Kegel and M. Haahr [1]. For a more detailed discussion of the findings, see Section V. Possible future endeavours are detailed in Section VI.

C. Method Discussion

The design science paradigm fits very well with the goals of this study: the development of an artefact to solve a problem in a specific context [18]. The design science methodology defined by Peffers et al [19] was chosen because it outlines a clear and rigorous process for approaching the research. The six steps of the process split the important aspects of conducting research into their own headings, which ensures they are answered and sufficiently motivated. This gives clarity to both the thought process and the practical process of this paper.

The evaluation could be criticized for being very focused on this specific application to Everyone Must Die. This is due to the inherent reason that the research problem revolves around applying a PCG method to this game. This has been partially minimized by asking questions and collecting metrics that are generic enough that they could be applied to other similar puzzle games.

D. Adaptations to the Progressive Content Generation Approach

This study explores how to apply a modified version of the previously described progressive content generation approach [2] to the game Everyone Must Die. The suggestion is to combine it with a mixed-initiative approach [8][9]. To accomplish this, one of the phases of the progressive approach has to be modified heavily. The timeline simulator is an essential part of the progressive approach, which is reflected in the fact that it inspired the name of the approach [2]. The timeline simulator progressively creates the level from the specified timeline. There is not much room in that phase to implement the core of a mixed-initiative system without breaking what the progressive approach is; this leads us to the timeline generator.

The suggestion is to replace the timeline generator's search-based approach with the manual design of different timelines by a human designer. This allows a much higher degree of control, letting the designer specify certain items, objects and rough concepts they want included in the puzzle. Further, because the timeline generator phase is now a mixed-initiative system with a much more limited supply of timelines, due to the human requirement, the timeline simulator phase is also slightly modified. Instead of always creating fully completed designs and discarding those that do not work, the timeline simulator will only create partially completed designs. This means that the system always allows for human input towards the complete design. It will still discard and restart the simulation if even a partial design could not be generated.

These modifications leave us with a mixed-initiative progressive approach to content generation. The human designer creates a timeline by defining a series of sequences that describe the solution to the puzzle. First, the designer specifies how many sequences are part of the timeline. After this, the designer specifies each sequence by providing it with the components it holds and in what order. The ordering in the sequence matters, since the timeline simulator uses it to determine where a component should be placed. For instance, if a sequence consists of an alarm clock and a mirror, the simulator will place down an alarm clock to change the current direction of an NPC and later place a mirror in the direction the NPC will eventually face. If the order in this sequence is reversed, with the mirror placed first and the alarm clock after, it becomes invalid: because of the alarm clock's higher priority in the game, the mirror is rendered obsolete, as the NPC will instantly turn towards the new direction. The risk of obsolete components is circumvented by preventing the designer from supplying such invalid sequences; these are easily identifiable without incurring any unnecessary calculations. The timeline simulator will pick out each sequence in the timeline and bind them together to create a suggestion of how a puzzle could look. The human designer can then modify it to their liking.

IV. DATA

The following section presents both the finished mixed-initiative PCG system and the results from the user study conducted. The description of the implemented PCG system first covers the UI and workflow before proceeding with a detailed look into the generation process of the system. The subsection is concluded with a description of how difficulty and complexity can be measured in the game. The presentation of the user study results first describes the procedure before summarizing the answers to each question in the questionnaire alongside the collected metrics.

It concludes with a short presentation of some of the created puzzles from the tests.

A. Everyone Must Die PCG Tool

1) UI and Workflow

The implemented PCG system generates puzzle suggestions according to the timeline provided by a human designer.

The designer accomplishes this through the UI provided by the level editor. In the bottom right there are three different components available for use in the creation of timelines.

Fig. 4. The Puzzle Editor UI featuring the PCG system.

Fig. 5. A generated puzzle suggestion with a timeline consisting of three sequences.

These are the mirror, frying pan and alarm clock, which the user can drag and drop into the timeline (see Figure 4). The timeline consists of a series of sequences where each sequence is coupled to an NPC character. The timeline can be found in the bottom left and initially consists of only one empty slot (see Figure 4). The user progressively builds out the timeline by dragging components into the empty slots, which spawns more empty slots to build the sequences. Each sequence is a vertical series of at minimum one and at maximum seven slots, while the vertical sequences are in turn ordered horizontally from left to right, with at most nine sequences in total (see Figure 5). It is not only the ordering within each sequence that matters for the generation process but also the ordering of the sequences themselves. This ordering typically does not have a massive effect on the generated puzzles, but it essentially determines whom each NPC character should kill. For example, it plays a role for timelines that include components like the alarm clock that carry an inherent time constraint: such components need the time and room to fully turn a character in the new direction before that character is killed.

The designer builds their sequences using the components and then presses the generate button located in the middle top of the screen (see Figure 4) to start the generation process.

When pressing generate, the PCG system will not modify or interfere with any pre-existing environment or characters that the designer has created; it will instead endeavour to find a solution that works with the human-designed environment. It will, however, remove the previously generated puzzle suggestion if there is one. Just to the right of that button there is an input box for the maximum number of generations in each attempt, which essentially determines how many times the generation process is allowed to retry should it fail.

As mentioned earlier, there are certain inputs that would immediately result in failed or pointless generations; these are circumvented by applying constraints on how the sequences are constructed. For example, multiple mirrors in a single sequence serve no purpose, as only one will ever affect the NPC. Similarly, as brought up previously, the alarm clock has to be placed first in every sequence it is included in, because if it were placed later it would invalidate all the components before it: the alarm clock turns the character in a new direction and has the highest priority in the game, meaning it overrides any other action. These constraints make it impossible for the designer to create such invalid sequences.
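A minimal sketch of how such input constraints could be enforced is shown below, in C# like the tool itself. The checks mirror the rules just described (one to seven slots, alarm clock only in the first slot, at most one mirror), but the exact validation code in the actual editor is not documented here, so this is an illustrative approximation.

```csharp
using System.Collections.Generic;
using System.Linq;

public enum Component { Mirror, FryingPan, AlarmClock }

public static class SequenceRules
{
    // Returns true if a sequence (the ordered component list for one NPC)
    // avoids the invalid configurations described in the text.
    public static bool IsValid(IReadOnlyList<Component> sequence)
    {
        // A sequence holds between one and seven slots.
        if (sequence.Count < 1 || sequence.Count > 7)
            return false;

        // An alarm clock overrides everything before it, so it is only
        // allowed in the first slot of a sequence.
        for (int i = 1; i < sequence.Count; i++)
            if (sequence[i] == Component.AlarmClock)
                return false;

        // Multiple mirrors serve no purpose: only one ever affects the NPC.
        if (sequence.Count(c => c == Component.Mirror) > 1)
            return false;

        return true;
    }
}
```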

With these inputs, the designer can guide the generation process fairly well. Because each sequence is coupled with a single NPC character the designer has control over the number of characters that will be generated. Similarly, all the components defined in a sequence must appear in the end result, giving the designer control over which and how many of each component appears in the final puzzle.

The designer can then take the generated puzzle suggestion and modify it to make it more interesting or complete.

Alternatively, the designer can use it as inspiration and reuse aspects of the suggestion in another puzzle or suggestion.

2) The Underlying PCG System

Fig. 6. A puzzle suggestion being generated. It showcases one of the simulations being tried.

The PCG system takes the timeline provided by the designer as input, alongside the maximum number of tries the PCG system has to find a solution for that specific timeline. It then progressively creates the puzzle by simulating the provided timeline in a constructive manner. Due to the inherent randomness in the simulation of the timeline, it has a one-to-many relation, meaning one timeline can result in several different puzzles, similar to the way it works in the progressive content generation approach presented by Shaker et al [2]. Should the timeline simulation fail, it will retry as many times as defined by the designer. To keep the designer entertained and engaged, some of the simulations are shown visually but greatly sped up (see Figure 6); this allows the designer to feel like the system is doing something while potentially waiting for it to finish the simulations. If the simulation fails all of the retry attempts defined by the designer, the system considers that generation attempt a failure.
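The outer control flow just described could look roughly like the following sketch. PuzzleSuggestion and the injected simulate function are hypothetical stand-ins (the constructive simulation itself is sketched further below), and Component is the enum from the earlier sketch.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical result type: the (possibly partial) design the simulator builds.
public class PuzzleSuggestion { /* generated characters, items and paths */ }

public class PcgGenerator
{
    // Stand-in for the constructive timeline simulation; returns null when
    // no partial design could be constructed in a given attempt.
    private readonly Func<IReadOnlyList<IReadOnlyList<Component>>, PuzzleSuggestion> simulate;

    public PcgGenerator(Func<IReadOnlyList<IReadOnlyList<Component>>, PuzzleSuggestion> simulate)
    {
        this.simulate = simulate;
    }

    // Retry the timeline simulation up to the designer-specified maximum
    // number of generations; a null result marks the attempt as a failure.
    public PuzzleSuggestion Generate(IReadOnlyList<IReadOnlyList<Component>> timeline, int maxAttempts)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            var suggestion = simulate(timeline);
            if (suggestion != null)
                return suggestion;
        }
        return null;
    }
}
```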

The first step is to place the very first character on the grid, which is done by placing it at a random position. The rotation or direction of the character is then determined by the environment and items around it: the system makes sure the character does not face the edge if there are fewer than three tiles between it and the edge, and similarly makes sure the character does not face an adjacent tile that is already occupied by an object.

The second step is to bind the first sequence to this character, which is done by simulating how a bullet will travel from the character. The components defined in the sequence are placed in the path of the projectile, at a variable distance from the previously placed component. If the placed component is, for example, a frying pan, it will change the direction of the simulated projectile, and any subsequent component is then placed in this new direction at an arbitrary distance relative to the frying pan's position. There are, however, cases, depending on the component, where it is not placed directly in the simulated projectile path. One such example is the mirror: if it is preceded by a frying pan, it would make no sense to place it in the new simulated projectile path, because it would not be within range of the character's vision, and a mirror outside the character's vision serves no purpose. In such a scenario the mirror is instead placed in the character's facing direction, though still behind the frying pan, because that is what the designer defined in the sequence.

The next step is to create the next sequence, but first the character coupled to that sequence must be placed. This is done by placing the new character in the simulated projectile path from the previous sequence, at an arbitrary distance from the last placed component of that sequence. In doing this, the system ensures that the sequences are bound together in a way that resembles a puzzle. This process of placing and creating sequences is then repeated until the entire timeline has been generated. Should the simulation fail at any point, it will retry the whole generation attempt.

When placing the components, checks are made to determine whether the placement is feasible, for example by making sure the outgoing reflection path of the frying pan has enough space for the next components to be placed. The alarm clock has similar checks, with the added constraint that it must have enough time to fully turn the character it is affecting. The placement and rotation of these components loop through the available possibilities, and should all of them fail, the system is notified that the simulation has failed.


If the system has successfully placed a character or component, it marks all the tiles in the simulated projectile path as being part of a projectile path. By doing this, the system prevents sequences from invalidating each other through components that block another sequence's projectile path.
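Putting these steps together, one constructive simulation attempt could be structured roughly as follows. Npc, ProjectilePath and the IPuzzleGrid helpers are hypothetical stand-ins for internals the paper does not specify; the sketch only mirrors the described order of operations: place a character, trace its shot, drop each component of the sequence onto the path, chain the next character onto the final path, and reserve the traversed tiles. Component is the enum from the earlier sketch.

```csharp
using System.Collections.Generic;

// Hypothetical stand-ins for the tool's internals.
public class Npc { }
public class ProjectilePath { }

public interface IPuzzleGrid
{
    Npc PlaceCharacterAtRandomFreeCell();                      // step 1
    ProjectilePath TraceProjectile(Npc shooter);               // simulate the bullet
    bool TryPlaceOnPath(Component c, ref ProjectilePath path); // step 2 (may redirect path)
    Npc PlaceCharacterOnPath(ProjectilePath path);             // step 3
    void MarkPathTiles(ProjectilePath path);                   // step 4 (reserve tiles)
}

public class TimelineSimulator
{
    // One constructive simulation attempt over the designer's sequences.
    // Returns false as soon as any placement is infeasible, so the caller
    // can retry the whole attempt.
    public bool TrySimulate(IReadOnlyList<IReadOnlyList<Component>> sequences, IPuzzleGrid grid)
    {
        var shooter = grid.PlaceCharacterAtRandomFreeCell();
        if (shooter == null) return false;

        for (int i = 0; i < sequences.Count; i++)
        {
            // Trace the shooter's projectile and drop each component of the
            // sequence onto the path; components like the frying pan redirect
            // the traced path as they are placed.
            var path = grid.TraceProjectile(shooter);
            foreach (var component in sequences[i])
                if (!grid.TryPlaceOnPath(component, ref path))
                    return false;

            // Reserve traversed tiles so later sequences cannot block this
            // sequence's projectile path.
            grid.MarkPathTiles(path);

            // The character coupled to the next sequence is spawned on this
            // sequence's final projectile path, chaining the sequences.
            if (i + 1 < sequences.Count)
            {
                shooter = grid.PlaceCharacterOnPath(path);
                if (shooter == null) return false;
            }
        }
        return true;
    }
}
```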

3) Puzzle Difficulty and Complexity

Because of how the PCG system works, as explained above, the designer has significant control over a puzzle's complexity: they control all four of the main factors affecting it. These factors are grid size, the number of characters, the number of special items, and the number and position of environment objects.

Complexity is defined by how many different ways there are for a player to reasonably attempt to solve a puzzle. A reasonable attempt means, for example, that a special item is placed in a position where it has a realistic chance of being part of the solution.

The primary contributing factor towards the overall complexity of a puzzle is its grid size. A smaller grid limits the number of characters, special items and objects that can fit into a given puzzle, thus narrowing the potential solution space. Although a larger grid size generally means that the puzzle will be more difficult, this is not always true, as it is still possible to create very simple puzzles, for example if the grid is mostly empty and the puzzle uses a small number of characters and items. Nonetheless, the potential to create difficult puzzles increases with the size of the grid.

The second and third factors, the number of characters and the number of special items, have perhaps the most impact on a puzzle's complexity. For these two factors, a higher number increases the complexity of the puzzle, as it means more things for the player to keep track of and manage. For example, more characters means a larger number (six and more) of different characters can kill each other. In the same way, a higher number of special items (more than roughly 2.5 items per character) results in the characters having more potential ways of killing each other, increasing the possible solution space. It is important to note that these are just general rules observed from the puzzles created in EMD. It is technically possible to create a puzzle with a large number of extra items that are not needed for the solution, which will make it easier. The same is true for the number of characters: it would be possible to create a puzzle with a large number of characters that are very easily killed.

Lastly, the fourth factor is the number of environment objects inside the puzzle in conjunction with their positions. The complexity, in this case, cannot be determined as straightforwardly as with the previous factors. Instead, through extensive play-testing and experience with the game, some rough conclusions have been drawn regarding the relationship between a puzzle's complexity and the number and positions of its environment objects.

In general, the complexity increases with the number of objects inside a puzzle up to a certain point, roughly when half of the grid cells are covered with objects. Beyond that, a larger number of objects on the grid decreases the complexity of the puzzle, as the number of grid cells available for the player to place items on becomes fewer. The reason that the complexity first increases is that the objects block the path of projectiles and the placement of special items, and thus block potential easy solutions to the puzzle. Hence, for the complexity to increase, the objects must be scattered around the map and block solutions. If most of the environment objects in a puzzle were instead clumped up in a corner, they would decrease the complexity of the puzzle.
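As a rough illustration, the qualitative relationships above can be folded into a single heuristic, as in the sketch below. All weights and cut-offs are invented for the example (the text reports only qualitative trends), so this should be read as the shape of the relationship rather than a calibrated model.

```csharp
using System;

public static class ComplexityHeuristic
{
    // Illustrative-only estimate combining the four factors discussed above:
    // grid size, character count, special-item count, and environment-object
    // coverage. All weights are invented for the example.
    public static float Estimate(int gridWidth, int gridHeight,
                                 int characters, int specialItems,
                                 int environmentObjects)
    {
        int cells = gridWidth * gridHeight;

        // More characters and items mean more for the player to keep track of.
        float score = 1.0f * characters + 0.5f * specialItems;

        // A larger grid scales the potential solution space.
        score *= (float)Math.Sqrt(cells) / 10f;

        // Environment objects raise complexity until roughly half the grid
        // is covered, after which free placement space shrinks and the
        // puzzle simplifies again (an inverted-U relationship).
        float coverage = (float)environmentObjects / cells;
        score *= 1f - Math.Abs(coverage - 0.5f);

        return score;
    }
}
```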

Fig. 7. Example of a manually created puzzle considered to be of easy difficulty.

Fig. 8. Example of a manually created puzzle considered to be of hard difficulty.

Although complexity by itself is not particularly useful for analyzing the PCG system, it is related to something which is: difficulty. This relation can be seen by looking at the existing puzzles inside the game and their respective difficulty levels. For instance, the easiest levels in the game, the tutorial levels, have an average grid size of 12x12, while the puzzles under the hard category, the most difficult puzzles in the game, have an average grid size of 33x33. Furthermore, the tutorial levels have on average 2.9 characters and 1.1 special items per character (see Figure 7 for an example of an easy level), while the hard levels have an average of 7.3 characters and 3.1 items per character (see Figure 8 for an example of what a hard puzzle in the game can look like). These facts, along with the way complexity was defined earlier, show that complexity and difficulty are related. This is not a perfect relation, as there are scenarios where a puzzle could have high complexity yet low difficulty; an example is a large puzzle with a significant number of characters and items where every character can use a mirror and frying pan to kill itself. Conversely, there could also be a puzzle with low complexity and high difficulty; an example is a small puzzle with few characters but a very unintuitive solution. However, the relation holds true for the majority of puzzles and, for the purposes of this paper, is considered sufficient.

B. Results

1) User Study Procedure

All test participants have been a part of the development of Everyone Must Die, during which everyone created puzzles.

The conducted tests lasted one hour under the supervision of one of the authors of this paper. The testers were all informed about how to use the PCG system and what it does, aided by pointing out the various UI elements. They were then given about five minutes to experiment with the timeline and the system. Finally, they were tasked with creating as many puzzles as possible in the remaining time. They were also informed that they could experiment with the creation of the puzzles by using the system in an existing environment or creating a custom one. After the elapsed time, the users were presented with a questionnaire about their experience and thoughts. The answers sought from this questionnaire are used to evaluate the value of the PCG system, partly by comparing what it was like to create puzzles with the PCG system as opposed to without it.

The tests conducted by the authors of this paper differed from the external tests in that the informative segment was skipped, since the authors were already familiar with the system. In every other regard, the process was identical to the external tests.

2) User Study Results

In total, six tests were conducted, two of these with the authors themselves. This means everyone from the old Everyone Must Die development team was tested except one person. The answers to every question are summarized below alongside the collected metrics. The summary also clarifies any differences or similarities between the external testers' and the authors' experiences when necessary.

a) Did the design tool make designing the puzzles easier?

All respondents said that the tool made designing puzzles easier by providing the designer with a base to work from in the form of puzzle suggestions. These answers also clarified that getting this starting point sped up the process of creating puzzles, because figuring one out can be one of the more difficult parts of puzzle design. Four respondents also felt they had some control over the difficulty of the generated puzzle, because of the large degree of control over the number of characters and featured components, but noted that it also depended on the environment.

In general, one can infer from this that it is easier to improve an existing but perhaps faulty puzzle than to create one from scratch.

Some respondents commented that, despite its limitations, the tool managed to generate interesting scenarios; however, the types of components available would need to be expanded for more complex and interesting puzzles.

The answers of the authors of this paper aligned with those given by the external testers. However, both authors clarified that the amount of modification required depends on the difficulty of the generated puzzles. This means there might still be a lot of work when creating truly difficult puzzles, but both authors pointed out that the tool still served as a great source of inspiration.

b) Were there inputs into the tool editor that resulted in unplayable or unexpected results?

There were a few bugs that resulted in a sequence not being completely generated without the simulation being flagged as a failure. There were also issues with some rare environment objects that did not seem to be detected by the generator. These were, however, outlier cases.

The major unexpected result brought up by testers was that the generator did not seem to take much advantage of the fact that characters can shoot over certain low environment objects. The generator did occasionally do this, but the testers felt it underused that particular aspect of creating puzzles. This was particularly evident when they explicitly designed the environment to have sections they wanted characters to shoot over; if the generator then did not make use of that aspect of the environment, they were disappointed.

Some testers were surprised when the system generated an incomplete puzzle suggestion for which the solution to make it complete was clearly straightforward. This forces the designers to almost always interact with the generated puzzle suggestion to make it complete.

c) To which degree did you make use of the generated puzzle suggestion?

All respondents stated that they mostly made smaller modifications to the generated puzzle suggestions, either to make them more complete and interesting or more aesthetically pleasing. These small modifications came in the form of moving or rotating some of the existing components or adding and removing a few components. This demonstrates that the puzzle suggestions are capable of being interesting almost entirely on their own.

The answers indicated that the degree to which the puzzles had to be modified depended on the difficulty or complexity of the generated puzzle suggestions, needing more input from the designer the higher the difficulty. In general, this meant the puzzle suggestions acted more as inspiration when the puzzles were more complex. The required input also depended on whether the puzzle suggestion was generated upon a pre-existing environment or whether the environment was created around the generated puzzle. In the first case, care had to be taken that the result looked aesthetically pleasing in the created environment, while in the latter the designer only had to be concerned with blocking unintended solutions.


d) Collected Metrics

One metric gathered was how many times the user pressed the generate button. For easier and less complex puzzles, with smaller sequences, there were typically two to six generation presses before the user was satisfied with the suggestion.

This number increased heavily for puzzles with much more complexity, where the sequences were quite large, requiring fifteen to twenty-five presses before the user was satisfied. This indicates that larger timelines tend to generate uninteresting suggestions.

Around 5% of all generation attempts resulted in failures. Importantly, these failures arose when there was little available space for the generator to place sequences, caused either by small grids or by grids already crowded with objects. Such limited space could, depending on the type and number of components in conjunction with the number of sequences, result in failure.

The time for creating puzzles was around five minutes for easier puzzles and up to around seventeen minutes for more complex ones, while puzzles of middling difficulty took around ten minutes. Comparing these results to the time it took to create puzzles without the PCG system during the development of EMD, there is a considerable increase in the speed at which puzzles are typically created. Easier puzzles created manually took around nine minutes, meaning the PCG system is in general almost twice as fast for this type of puzzle creation. Puzzles of middling difficulty were created around three times as fast with the system, manual creation taking thirty minutes. Finally, difficult puzzles featuring many components took around fifty-five minutes to create manually, making the PCG system a little over three times faster for them as well.
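These speedups follow directly from the measured times; the short snippet below, included purely for illustration, recomputes the ratios from the figures above:

```python
# Recomputing the reported speedups from the measured times (in minutes).
timings = {"easy": (5, 9), "medium": (10, 30), "hard": (17, 55)}
for difficulty, (with_tool, manual) in timings.items():
    print(f"{difficulty}: {manual / with_tool:.1f}x faster with the tool")
# easy: 1.8x, medium: 3.0x, hard: 3.2x
```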

3) User Created Puzzles

There was a healthy variety in the created puzzles despite the limited components. As a side note, some of these puzzles were stimulating enough that the designers who made them are refining and incorporating them into the game, which in itself speaks to the value of the system.

Fig. 9. Showcases six puzzles created on a pre-existing environment.

The puzzles presented in Figure 9 were generated around a mostly pre-existing environment, while the puzzles presented in Figure 10 were generated with the environment either placed after the fact or built in an iterative process between attempts.

Fig. 10. Showcases six puzzles created by placing the environment around the generated suggestions after the fact or between attempts.


V. DISCUSSION

A. Bias in the Data

Before the results are discussed, it is important to mention biases and other factors that can impact the data. The most noteworthy bias is that all of the testers were part of developing the game and therefore also know the authors well, which might have softened the harshness with which the testers delivered their critique of the system. It is also important to note that, because of the limited number of testers, the authors make up two of the six testers, a significant proportion, which may skew the test data in favour of this study's intended goals.

B. Findings

The following section uses the data and results from the previous section to discuss whether, and how effectively, the mixed-initiative PCG tool helped the designer create puzzles.

Note that only the process of creating the puzzles will be discussed; any measurement of how good or fun a puzzle is will not be included. Puzzles are considered completed solely at the discretion of the designer, which works well, as all of the individuals who tested the system have previous experience in manually designing puzzles for EMD.

The perceived effectiveness and usability of the tool will instead be measured by analyzing the answers to the questionnaire. A sense of how effective the tool is can also be gathered from the time it took to complete puzzles at specific difficulties. Further, the time required to create the puzzles, with or without the PCG tool, provides insight into whether the tool acts only as a source of inspiration or whether it speeds up the design process as well. Other data points, such as how many times a user attempted to generate a puzzle until they were satisfied, give information about the controllability of the system as well as how interesting the results are. These results are also used to validate the system's function as a mixed-initiative tool.


Further, the section will also go over any unexpected results found in the questionnaire and data points, as well as the limitations of the tool in terms of what it can and cannot do.

1) Effectiveness of the Tool

The research question posed in the introduction concerned how the progressive content generation approach can be adapted, in conjunction with a mixed-initiative approach, and applied to the puzzle game Everyone Must Die.

To determine whether the research question has been sufficiently answered, we examine the effectiveness of the tool to see if it has been successfully applied to Everyone Must Die. This is done by looking at the previously mentioned results to see if the tool speeds up or otherwise helps the puzzle design process. The resulting puzzles should preferably be of similar or better quality, but as mentioned previously, the tests do not measure quality beyond the designer's subjective judgement. The effectiveness is also analyzed by looking at how well the tool helps create puzzles with a specific difficulty.

a) Usability and Controllability

The usability can be considered positive, since answers in User Study Results a) stated that the tool made the creation of puzzles easier because testers were provided with a base to start from.

This implies that it is easier to improve an existing puzzle than to create one from scratch, a sentiment shared by five developers of EMD and noticed early in development. The method these designers used for the manual creation of puzzles was to place down many pieces and components and move them around until something interesting was found, which could then be improved. The most difficult part of this process was finding an interesting configuration to improve. It therefore comes as no surprise that being provided with such a base eased the creation of puzzles, especially since this base is far more connected and functional thanks to the timeline simulator. It does not do all the work for the designer, however, forcing them to interact with the suggestion and thereby potentially be more creatively engaged with the result.

The usability of the system is limited by the available components, restricting the designer from creating any type of puzzle they wish. There is much room for improvement on that particular subject, detailed in the coming Limitations section. Further, the usability and controllability were lacking when the designer wished for special interactions with the environment. For example, many designers wished the generator would make active use of the fact that characters can shoot over low environment objects when generating puzzle suggestions.

This could potentially be alleviated by a bias that encourages this particular interaction, as stated in the coming Unexpected Results section, or by a way for the designer to specify a preferred placement and rotation for a character.

This would allow for a much higher degree of controllability as well as improve the usability of the system for various scenarios.
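As a hedged illustration of the latter idea, the sketch below shows one way a designer-specified placement preference could be folded into the scoring of candidate placements during generation. All names and data structures here are assumptions made for illustration, not the actual editor API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class PlacementPreference:
    tile: Tuple[int, int]   # grid coordinate the designer wants used
    rotation: int           # facing in degrees (0, 90, 180, 270)
    required: bool = False  # hard constraint vs. soft bias

def score_placement(tile, rotation, preference, base_score, bias=5.0):
    """Score a candidate character placement against a designer preference.

    A hard-required preference discards mismatching candidates outright;
    a soft preference merely boosts matching candidates.
    """
    matches = tile == preference.tile and rotation == preference.rotation
    if preference.required and not matches:
        return float("-inf")  # hard constraint: candidate is rejected
    return base_score + (bias if matches else 0.0)
```

A hard-required preference acts as a constraint on the search, while a soft one only nudges it toward the designer's wish.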

The designers seem to have good control over the generated puzzles' difficulty, as stated in User Study Results a). This can also be inferred from User Study Results c), showcasing limited modifications to the generated puzzles, and User Study Results d), demonstrating few generation attempts, which suggests the designer quickly found working suggestions that did not need much post-work to achieve their goals.

However, the required modifications, although in most cases aesthetic, still reveal some lack of controllability. Harder puzzles required both more modifications and more generation attempts to become interesting, which was observed to be caused mostly by clustered components that did not use the available grid to its full potential. This could be alleviated by adding a bias that encourages the generator to spread its components across a much larger part of the grid.
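A minimal sketch of such a spread bias, assuming component positions are grid coordinates, could reward candidates whose components are spaced further apart on average; the weighting is illustrative only.

```python
import math
from itertools import combinations

def spread_bonus(positions, weight=1.0):
    """Fitness bonus growing with the mean pairwise distance between
    placed components, nudging the search away from tight clusters."""
    if len(positions) < 2:
        return 0.0
    pairs = list(combinations(positions, 2))
    mean_distance = sum(math.dist(a, b) for a, b in pairs) / len(pairs)
    return weight * mean_distance
```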

Further, the generator failed outright to generate puzzle suggestions when there was limited space available on the grid. In many such cases, trying to simulate working timelines was a wasteful use of time, since a simple arithmetic check can tell up front that there are too many components for the available grid tiles.
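A sketch of that pre-check is shown below, under the assumption that each component occupies a known number of tiles; failing it means a generation request can be rejected immediately instead of running any simulation.

```python
def generation_is_feasible(grid, component_footprints):
    """Return False when the requested components cannot possibly fit.

    `grid` is a 2D list where None marks a free tile; each entry in
    `component_footprints` is the tile count one component occupies.
    """
    free_tiles = sum(tile is None for row in grid for tile in row)
    return sum(component_footprints) <= free_tiles

# Example: a 2x3 grid with two occupied tiles leaves four free tiles,
# so three one-tile components fit but five do not.
grid = [[None, "rock", None], [None, None, "fence"]]
assert generation_is_feasible(grid, [1, 1, 1])
assert not generation_is_feasible(grid, [1, 1, 1, 1, 1])
```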

b) How it managed to generate puzzles with a specific difficulty

As mentioned earlier, one of the key issues within the topic of procedurally generating puzzles is that it is challenging to generate puzzles with a certain difficulty [1]. For that reason, we examine how the tool helps in creating puzzles with the specific difficulty the designer intended. Looking at the definition of puzzle difficulty in the previous section and at how the tool works (see the UI and Workflow section), it is clear that the user has control over all four main factors that affect a puzzle's difficulty.

To understand how effective the tool was in helping create puzzles with a specific difficulty, we can look at the testers' answers to the question in User Study Results a). These answers seem to indicate that the tool struggles to help create puzzles with a high level of difficulty. Creating puzzles with a lower difficulty seems to work very well; however, according to the answers, these puzzles also require less input from the user.

The difficulties with generating complex puzzles stem partly from limitations such as the limited types of components, and partly from the fact that sequences could be constructed more efficiently. Another contributing factor is that, when dealing with many large sequences, it can be hard to follow the generated solution, especially in simulations where the suggestion places all of these components closely together in a cluster.

The limited types of components cause the more difficult puzzles to be less interesting, as the same components have to be reused a large number of times. In contrast, if more types of components were available, it would allow for more varied and interesting interactions between the components and therefore result in more interesting puzzles. A more efficient approach would be to allow sequences to use each other's already placed components to fulfill their own goals. For instance, if a mirror from a previous sequence is already placed in front of the character, and the next sequence aims to place one, there is no reason for the character to place its own mirror when it could reuse the one already there.
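A sketch of that reuse rule follows, assuming placements are tracked in a tile-to-component map; the real system's data structures may well differ.

```python
def resolve_placement(placed, component_type, target_tile):
    """Reuse an identical component already on `target_tile` instead of
    placing a duplicate; otherwise place a new one if the tile is free.

    `placed` maps grid tiles (row, col) to component type names.
    """
    if placed.get(target_tile) == component_type:
        return "reused"   # e.g. a mirror left behind by an earlier sequence
    if target_tile in placed:
        return "blocked"  # occupied by a different component
    placed[target_tile] = component_type
    return "placed"

# Example: the second sequence reuses the first sequence's mirror.
placed = {}
resolve_placement(placed, "mirror", (2, 3))  # -> "placed"
resolve_placement(placed, "mirror", (2, 3))  # -> "reused", no duplicate
```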


c) Time comparison: tool vs. manual

One of the core motivations for using almost any PCG system is the idea that one can produce more content using less time or fewer resources. We will therefore compare the time it took to create a puzzle with a specific difficulty using the tool versus designing it manually from scratch. It is important to remark that the tool can generate puzzles around pre-existing environments. This has to be taken into consideration when comparing the times, which is done by also including the time needed to create the pre-existing environment.

Looking at the data in User Study Results d), we can see that, initially, the more difficult a puzzle is, the more worthwhile it is to use the tool. However, once a high enough difficulty is reached, as mentioned in the previous section, the tool starts to struggle and requires more time. Overall, for most puzzle difficulties outside of the extremes of very easy and very hard, the results show that the tool significantly boosts the speed at which puzzles can be created. Thus we can establish that the tool is effective and worthwhile to use.

2) Unexpected Results

Other than creating puzzles with a specific difficulty, another problem with procedurally generating puzzles is the fact that most PCG methods have a degree of uncertainty in their outcome. This naturally does not work well when designing puzzles, as they always need to have a possible solution.

Fortunately, because of the way our PCG system works, as explained in The Underlying PCG System section, the partial puzzles it generates are almost guaranteed to be functional. The times they did not function were due to underlying bugs in the system, mentioned in the User Study Results section.
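This guarantee comes down to an accept-only-validated loop of roughly the following shape. This is an abstract sketch: `generate_timeline` and `simulate` stand in for the actual system's search and simulation steps, and the result attributes are assumed names.

```python
def next_valid_suggestion(generate_timeline, simulate, max_attempts=100):
    """Sample candidate timelines until one simulates successfully, so
    only simulation-verified (solvable) suggestions reach the editor."""
    for _ in range(max_attempts):
        timeline = generate_timeline()
        result = simulate(timeline)
        if result.success:  # the simulated solution actually worked
            return result.partial_puzzle
    return None  # report failure rather than emit an unverified puzzle
```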

Another unexpected result brought up by the testers was the fact that the generated characters often did not make use of low objects, as they tended to not shoot over them or incorporate them in the puzzle solution in other ways.

This can be considered a failure of the system, as having characters shoot over objects such as fences can make for more interesting puzzle solutions. Puzzles that incorporate environment objects into the design tend to be more visually appealing and interesting to both the designer and the player. This could potentially be alleviated by implementing a bias that encourages the generator to make use of these low environment objects.
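Such a bias could be as simple as a capped bonus term in the generator's fitness, as sketched below; the counter and the weights are assumptions, not the system's actual fitness function.

```python
def low_object_bias(base_fitness, shots_over_low_objects, weight=2.0, cap=3):
    """Add a capped bonus for solutions that shoot over low objects
    (e.g. fences), so the bias nudges the search without dominating it."""
    return base_fitness + weight * min(shots_over_low_objects, cap)
```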

One more unexpected thing the testers brought up was that the system did not generate complete suggestions even in situations where it was seemingly easy to do so. This was purposefully designed so that the designer is always forced to interact with the puzzle suggestion. As a result, the designer almost always engages with the mixed-initiative system, which in general makes the puzzles more unique by giving them the designer's touch through partial human editing. This alleviates one of the common issues with PCG systems: the generated content being too repetitive or similar in certain ways [1].

3) In what way was the tool used

A core feature of our PCG tool is that it is a mixed-initiative tool. This is important because, as mentioned earlier in the paper, only a limited number of papers have been written on mixed-initiative tools for procedurally generating puzzles.

To understand how the mixed-initiative aspect of the tool impacted the process and resulting puzzles, we look at how different users interact with the tool and what their results were.

By looking at the answers to the question in User Study Results c), we can get some information regarding how the different users interacted with the tool. The answers show that most users only needed to make a relatively small number of modifications, as long as the generated puzzles were not too complex. This result can be considered successful, since if a large number of modifications were necessary, it would likely mean that the time needed to create the puzzle would increase.

Consequently, the increased time would go against one of the primary reasons to use the tool in the first place: to shorten the development time of puzzles. It could also mean that the results were uninteresting to the designer, which in turn would call into question the system's capability to generate puzzles.

Therefore, the fact that these modifications were kept to a minimum reflects positively on the capabilities of the system.

Nonetheless, some answers also mentioned that more complex puzzles required a larger number of modifications and that the generated partial puzzles served more as inspiration. This could be considered a negative, although obtaining inspiration can still be seen as an improvement over manually designing a puzzle from scratch, as shown by the time comparison for difficult puzzles in User Study Results d). Ultimately, the PCG system could be improved to require fewer modifications when dealing with complex puzzles, but that is something for future research.

4) Limitations

The limited amount of time available to create the underlying PCG system meant that a large number of special items and other mechanics were unavailable to the design tool. The exact constraints made are outlined in the Application of Methodology 2) section. These limitations undoubtedly had a significant effect on the types of puzzles the system was able to generate, as well as on how much variation they could have.

The limitations on the types of special items, environmental hazards and moveable objects available to the designer cause several drawbacks for the PCG system. Primarily, the usefulness of the tool is limited, as it cannot generate most types of puzzles featured in the game. This also affects the complexity or difficulty of the puzzles it can generate, as previously mentioned in the Effectiveness of the Tool b) section.

Similarly, the fact that only one character type is available to the generator severely affects the types of puzzles it can generate. This character is very special in the game, since it is the only one of the five types that does not make use of pathfinding, because its vision range is identical to its firing range. All

References
