
6. Evaluation

6.1 Results from the questionnaire

6.3.1 Future work

To take this work further, several measures could be taken to improve the results. The first two are to develop the behavior tree and the LSTM agent beyond their current capabilities and thereby make them come across as more human. The LSTM agent could be trained with behavioral cloning instead of reinforcement learning alone, giving it a foundation on which to base its behavior. Better hardware and stepwise training should also be used to shorten the LSTM's training time and thereby improve its final behavior. It would also be interesting to try another method, such as an ordinary ANN, to see whether the LSTM provided any advantages. To change the study and carry out a deeper investigation, multiplayer functionality should be implemented so that both variants can be compared against a real human player, instead of constructing a lie by claiming that an agent is a human player. Alternatively, the neural network could be excluded and only the behavior tree tested, by polishing and fine-tuning the behavior tree and evaluating it together with a real human player.
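As a sketch of how the proposed behavioral cloning could be combined with the existing reinforcement learning setup, the following Unity ML-Agents trainer configuration adds an imitation term driven by a recorded demonstration file alongside PPO and the recurrent (LSTM) memory. This is an assumption-laden illustration, not a configuration from this study: the behavior name `CompanionAgent`, the demo path, and all hyperparameter values are hypothetical, and the exact key names depend on the ML-Agents release (around release 0.15 the earlier `pretraining` section was renamed `behavioral_cloning`).

```yaml
# Illustrative ML-Agents trainer configuration (~release 0.15 format).
# All names and values are assumptions, not tuned settings from this work.
CompanionAgent:
  trainer: ppo
  batch_size: 128
  buffer_size: 2048
  learning_rate: 3.0e-4
  max_steps: 5.0e5
  # Recurrent memory, corresponding to the LSTM agent
  use_recurrent: true
  memory_size: 256
  sequence_length: 64
  # Behavioral cloning from recorded human demonstrations,
  # giving the agent a behavioral foundation that RL then refines
  behavioral_cloning:
    demo_path: ./Demos/HumanPlay.demo   # hypothetical recording
    strength: 0.5                       # weight of the imitation loss
    steps: 150000                       # anneal the BC influence over these steps
```

The idea is that the imitation loss dominates early training, so the agent starts from human-like behavior rather than random exploration, and is then annealed away so reinforcement learning can improve on the demonstrations.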

