

DEGREE PROJECT, IN COMPUTER SCIENCE, FIRST LEVEL

STOCKHOLM, SWEDEN 2020

Predicting movie

ratings using KNN


Bachelor in Computer Science Date: 2020-06-08

Supervisor: Kevin Smith Examiner: Pawel Herman

Swedish title: Förutse betygsättningar på filmer med användning av KNN School of Electrical Engineering and Computer Science


Explanation of terms

K nearest neighbors - K-nearest neighbors, or KNN, is a machine learning algorithm that estimates a value from the values of the most similar known examples.

Baseline - In this study, the term baseline refers to the method of using the mean rating of a movie as the prediction of a user's rating.

Mean absolute error - The mean absolute error (MAE) is a metric measuring the average magnitude of the errors of a set of predictions.

Root mean square error - The root mean square error (RMSE) is a metric measuring the square root of the average of the squared errors of a set of predictions.


Abstract

Many services provide recommendations for their users in order for them to easily find relevant information. Thus, it is important for these services to constantly improve their recommender systems. With the integration of new technology, it is common to implement recommender systems with the use of machine learning algorithms. This report investigates a method for recommender systems based on the machine learning algorithm K-nearest neighbors, or KNN. Specifically, the algorithm was used to predict what ratings users would give movies before they had rated them. In addition, the method was compared with a baseline method that takes the mean value of all user ratings as its prediction. The objective of this study was to analyze the usefulness of KNN. The conclusion was that the implementation of a movie recommender system based on KNN produced better results than the baseline method.


Sammanfattning

Many services provide recommendations for their users to make it easy to find relevant information. It is therefore important for these services to develop better recommender systems. With the use of new technology, it is common to implement recommender systems based on machine learning. This report investigates a method for predicting viewers' movie ratings based on the machine learning algorithm K-nearest neighbors, or KNN. Furthermore, the system was compared with a reference method that uses the mean value of all users' ratings as its prediction. The purpose of the study was to analyze the usefulness of KNN. The conclusion was that the implementation of a movie recommender system based on the KNN algorithm produced better results than the reference method.


Contents

1 Introduction
  1.1 Problem statement
  1.2 Research question
  1.3 Scope
2 Background
  2.1 Similar studies
  2.2 Limitations of recommender systems
  2.3 Critique of the usage of RMSE and MAE
3 Method
  3.1 Implementation
  3.2 Measurements
4 Results
  4.1 Resources
  4.2 Implementation
  4.3 Other studies
5 Discussion
  5.1 Implementation
  5.2 Other studies and comparisons
  5.3 Sources of error
6 Conclusions
  6.1 Future research

1 Introduction

In recent years, digital technologies have become ubiquitous, affecting our daily lives and enhancing the accessibility of information on the internet. This constant flow of information is something that most people are accustomed to, and that most services are expected to provide. The problem lies in presenting relevant information. Few people have the patience to spend exhaustive amounts of time searching the internet for a specific piece of information, which pressures services to provide some sort of system to assist users in this task. One common approach is the development of recommender systems [8]. These can be found in today's popular applications such as Netflix [7] and Facebook [12].

1.1 Problem statement

Recommender systems are often implemented as a service applicable to every user. This study will analyze the advantages of these recommender systems and discuss whether the resources spent give a satisfactory outcome for companies. Several machine learning algorithms can be used for this purpose. To delimit this report, it will focus on the K-nearest neighbors algorithm. KNN is one of the more intuitive algorithms for movie rating prediction, which made it interesting to analyze its usefulness. A baseline method of using a movie's mean rating as the prediction for every user will be used for comparison.

1.2 Research question

The research questions for this thesis are the following:

• How will KNN perform when predicting what a user will rate a movie?
• How much does it differ from the alternative of taking the mean rating of the movie as the prediction?

1.3 Scope

This report will focus on implementing a method to efficiently predict users' movie ratings. The methods investigated are the KNN algorithm and a baseline method. All data for this report is taken from the public source MovieLens [11] [5].

2 Background

2.1 Similar studies

Marović et al. focused on methods for an automatic recommender system that would predict movie ratings. Data was retrieved from the movie database IMDb and included movie titles, genres, year of release, directors, screenwriters, and actors, as well as the ratings that users had given. The dataset consisted of 1059 users, 9428 movies, and 65581 ratings on a rating scale of 1 to 10.

The similarity between users was calculated with the Pearson correlation coefficient, the correlation between the ratings provided by a pair of users for the same movies [2]. Users with the smallest differences in ratings for the same movies were considered the most similar.

Similarly to this report, the study of Marović et al. used the KNN algorithm and a baseline method that estimates a user's rating for a particular movie as the movie's mean rating. The KNN algorithm based the similarity of users on their Pearson correlation. The baseline method was then compared to the KNN results using the metrics MAE and RMSE. The results showed that the KNN algorithm produced better estimates than the baseline method. As the study retrieved data from the database IMDb, its results will differ from those generated by this implementation [10].

The study of Lorentz and Ek investigated the possibility of implementing a movie recommender system based on two different approaches to the KNN algorithm: user-based and item-based KNN. An item-based KNN algorithm is interested in what movies are similar to the evaluated movie, whereas a user-based KNN finds the users that are similar to the evaluated user. To find the similarities, the Pearson correlation coefficient described above was used. The study investigates how the implementations of the different KNN approaches perform in a so-called cold start situation.

A cold start issue occurs in situations where a user has given too few or no ratings [4]. It arises in different situations, one being that a new user is added. In this case, the user will not have provided any ratings that can be used to find similar users, which the machine learning algorithm needs. The study of Lorentz and Ek also compares the results of KNN to a baseline method measured as the average of the movie ratings, which is also how this implementation investigates performance.

Similarly, the recommender system of Lorentz and Ek retrieved two datasets from MovieLens, one with one million ratings and the other with a hundred thousand ratings with the rating scores ranging from 1 to 5. Since the objective of the study was to investigate the subject of cold-start, which is not relevant to this report, only the results from the implementation of the user-based KNN without the cold start issue were discussed.

The study concluded that the KNN algorithm performed better than the baseline method for both the user- and item-based approaches. The accuracy of the results improved as the number of ratings increased [9].

2.2 Limitations of recommender systems

In the report of Adomavicius and Tuzhilin, an overview of the field of recommender systems and its different recommendation methods is presented. The study focuses on describing various limitations encountered when implementing a recommender system. Specifically, one limitation when analyzing a recommender system is that the measurements are often performed on test data consisting of items the users chose to rate. Items that users choose are more likely to be items they like, influencing the accuracy of the result. The report also describes the lack of experiments on unbiased random data, due to the cost in resources and time, which affects the understanding of the effectiveness of the different techniques used when implementing recommender systems [1].

2.3 Critique of the usage of RMSE and MAE

Cremonesi, Koren and Turrin explored the outcome of using RMSE and MAE to evaluate the performance of "top-N" recommenders, which recommend the N items that the user is most likely to like. Two datasets were used, one from Netflix and one from MovieLens. Multiple algorithms were investigated, only some of which focused on minimizing the root mean square error (RMSE). Two of the algorithms were completely non-personalized: one recommended the movies with the highest ratings and one recommended the movies with the largest number of ratings. As this report only focuses on the KNN algorithm compared to the baseline, the other algorithms will not be discussed. There were two KNN-based algorithms. The first used the Pearson correlation to calculate similarities between users and combined classical kNN with a per-movie, per-user bias, a measurement of how predictable the rating will be. The second was not RMSE-oriented and used cosine similarity, which treats the ratings for all movies, whether given or not, as a vector used to calculate the similarity [13]. The ranking was also based on the number of similar users who had seen the movie, meaning that an exact rating was not calculated, only an association between user and movie. The results showed that both KNN-based algorithms outperformed the baseline method. However, it also became apparent that the second KNN algorithm, which was not RMSE-oriented, outperformed the one focused on minimizing RMSE [6].

3 Method

The problem was approached by implementing a recommender system using the KNN algorithm. Applied to the data extracted from MovieLens, the implementation was used to predict users’ movie ratings. Both the predictions from the KNN algorithm and the baseline method were then compared to the actual ratings. In order to measure the accuracy of the predictions, two metrics were used, the RMSE and MAE.

3.1 Implementation

The implementation was based on the machine learning algorithm K-nearest neighbors, or KNN, used to build a recommender system that predicts movie ratings for users. KNN is an algorithm used for estimation: by finding a number of "neighbors", i.e. similar users, the value is estimated as the mean value over the K nearest neighbors [3].

In the case of predicting a user's rating for some movie, all other users who had seen that movie would be considered. A distance would be calculated between the user whose rating is being predicted and each of the other users. Based on that distance, the K closest users would be considered the "K nearest neighbors", and the user's rating would be estimated as their mean rating of the movie.
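The prediction step described above can be sketched as follows. The data layout (plain dictionaries of distances and ratings) is an assumption for illustration, not the thesis's actual code:

```python
def predict_rating(distances, movie_ratings, k):
    """distances: {user_id: distance to the target user} (assumed layout);
    movie_ratings: {user_id: rating of the movie}, for users who have
    rated it. Returns the mean rating over the K nearest of those users."""
    # Keep only users who both have a distance and rated the movie,
    # sorted so the closest users come first.
    raters = sorted(
        (distances[u], r) for u, r in movie_ratings.items() if u in distances
    )
    neighbours = raters[:k]
    return sum(r for _, r in neighbours) / len(neighbours)
```

For example, with K = 2 the two closest raters' scores are averaged, regardless of how far away the remaining raters are.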

The implemented algorithm used user-based KNN, meaning that a user rating was evaluated by finding a number of neighbors corresponding to similar users. The similarity between users was calculated as the Euclidean distance between their mean ratings for movies of the same genres. For instance, one user's mean rating for horror might be the lowest rating score and for romance the highest rating score. That user would be estimated as similar to another user who also likes romance and dislikes horror.

The K value of the algorithm decided the number of neighbors considered to be the nearest. The value for K was found by running the implemented code on the test dataset and calculating the RMSE and MAE of the resulting predictions. The K value that generated the smallest error was then selected.
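The selection of K amounts to a simple search over candidate values. A minimal sketch, assuming a hypothetical helper `evaluate_rmse` that runs the predictor on the test set for a given K:

```python
def choose_k(candidate_ks, evaluate_rmse):
    """Return the K with the smallest test-set error.
    evaluate_rmse is a hypothetical callable mapping K -> RMSE."""
    best_k, best_err = None, float("inf")
    for k in candidate_ks:
        err = evaluate_rmse(k)
        if err < best_err:
            best_k, best_err = k, err
    return best_k
```

The same search can of course be run with MAE as the error function instead.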

The datasets used for the implementation were collected from the website MovieLens, where users can find and rate movies. The movie ratings were on a scale from 1 to 5.

3.1.1 Retrieved datasets

Two disjoint datasets were retrieved from MovieLens, one dataset of 25 million ratings [5] and one with 100 000 ratings [11]. The MovieLens 25M Dataset contains 25 million movie ratings applied to 62 000 movies by 162 000 users. The other retrieved dataset, the MovieLens 100K Dataset, contains 100 000 movie ratings from 1000 users on 1700 movies. All users in both datasets had rated at least 20 movies, and the users were selected at random. Each user was represented by a unique user ID and nothing else.

The data structure of the 25 million movie ratings is divided into six tables containing ratings, movies, tags, links, genome scores, and genome tags.

The ratings table is a collection of all the ratings that users have given to different movies. Each rating has a userId, a movieId, a rating, and a timestamp. Only movies with at least one rating were included in the dataset.

In the movie data file, each entry represented a movie with a movieId, title, and genres. Genres included:

• action
• adventure
• animation
• children's
• comedy
• crime
• documentary
• drama
• fantasy
• film-noir
• horror
• musical
• mystery
• romance
• sci-fi
• thriller
• war
• western
• "no genres listed"

In the tags data file, each entry represents one tag applied to one movieId by one userId. Tags are user-generated metadata about movies; each tag is typically a short word or phrase determined by the user.

The links file links each movie to external sources. Each movieId has an imdbId and a tmdbId, representing two database sources.

The tag genome contains tag relevance scores for movies. Each movieId in the genome has a value for every tag in the genome, encoding how strongly the tag represents the movie. The tag genome is split into two files: genome-scores and genome-tags.

The data structure of the 100 000 ratings contains only the tables of ratings, links, movies and tags as described above.

For the implementation, only two tables from each dataset were used: movies and ratings. The other tables were discarded. The reason for this limitation was that the information given was enough to implement the algorithm, and there is no reason to believe that the usage of other tables would improve the results.


3.1.2 Training dataset and test dataset

The training dataset in this implementation was from the dataset of 100 000 movie ratings, whereas the test dataset was retrieved from ratings in the dataset of 25 million ratings. The test set included 1000 randomly selected ratings of movies present in the training dataset and the users of those ratings. The training dataset and test dataset were disjoint.
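The construction of the test set might look like the sketch below. The exact sampling procedure is not specified in the text, so the record layout, the `movieId` field name, and the fixed seed are assumptions:

```python
import random

def build_test_set(ratings_25m, training_movie_ids, n=1000, seed=42):
    """Sample n ratings from the 25M dataset, restricted to movies that
    also occur in the 100K training data, so that the sampled users'
    ratings can be predicted from the training ratings."""
    eligible = [r for r in ratings_25m if r["movieId"] in training_movie_ids]
    # A fixed seed (an assumption) makes the sampled test set reproducible.
    return random.Random(seed).sample(eligible, n)
```

Restricting the sample to movies present in the training data is what keeps the two sets comparable while remaining disjoint.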

3.1.3 Pre-processing data

Since the datasets included a wide range of different users and ratings, more reliable data with more similar users was obtained by removing users who had rated fewer than 20 movies. For every user in each of the datasets, a number between 1 and 5 was calculated for each genre, representing their appreciation of that genre. The numbers were calculated as the user's mean rating for the genre. The similarity between two users was then calculated as the Euclidean distance between their numbers, where a lower distance corresponds to a higher similarity. With this method, the nearest neighbors could be determined.
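The pre-processing can be sketched as below. How genres rated by only one of the two users were handled is not stated in the text; this sketch simply restricts the distance to the genres both users have scores for, which is an assumption:

```python
import math

def genre_profile(user_ratings):
    """user_ratings: list of (genres, rating) pairs for one user.
    Returns {genre: mean rating}, the 1-to-5 appreciation scores."""
    totals, counts = {}, {}
    for genres, rating in user_ratings:
        for g in genres:
            totals[g] = totals.get(g, 0.0) + rating
            counts[g] = counts.get(g, 0) + 1
    return {g: totals[g] / counts[g] for g in totals}

def profile_distance(profile_a, profile_b):
    """Euclidean distance over the genres both users have rated
    (restriction to shared genres is an assumption)."""
    shared = profile_a.keys() & profile_b.keys()
    return math.sqrt(sum((profile_a[g] - profile_b[g]) ** 2 for g in shared))
```

A lower distance between two profiles then corresponds to a higher similarity, exactly as described above.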

3.2 Measurements

Two datasets were retrieved from MovieLens: a training set and a test set. The rating predictions were based on the data from the training set, while the test set was used to measure the accuracy of the predictions. Two metrics were used, the RMSE and the MAE [14]. In addition, the result was compared to the baseline to see which method was more effective, and to the results of previous studies investigating similar subjects.

3.2.1 Performance

To measure the performance of the implementation, the mean absolute error (MAE) and the root mean square error (RMSE) were calculated between the predictions and the actual ratings. In the formulas below, ŷ_i denotes the predicted rating, y_i the actual rating, and n the total number of ratings.

MAE = (1/n) Σ_{i=1..n} |ŷ_i − y_i|

A high RMSE value implies a less accurate prediction of the rating, whereas a low value implies a prediction closer to the real value. The term ŷ_i − y_i denotes the difference between the predicted and the actual rating.

RMSE = √( (1/n) Σ_{i=1..n} (ŷ_i − y_i)² )
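Both metrics are direct translations of their formulas and can be sketched as:

```python
import math

def mae(predicted, actual):
    """Mean absolute error between two equal-length rating lists."""
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(actual)

def rmse(predicted, actual):
    """Root mean square error between two equal-length rating lists."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )
```

Because RMSE squares each error before averaging, it penalizes a few large misses more heavily than MAE does, which matters for the error-interval discussion later in the report.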

4 Results

The results acquired from the implementation are presented below, together with results from previous studies investigating similar subjects. The results of the KNN algorithm are compared with the baseline method.

4.1 Resources

The implementation used data from the website MovieLens, where ratings were on a scale from one to five. All users in both datasets had rated at least 20 movies, and the users were selected at random. Only two of the tables in the datasets were used: movies and ratings.

The algorithm that was used for the implementation was the KNN algorithm, along with the baseline method to be used for comparison.

4.2 Implementation

In order to find the most suitable value for K, different values were tested. The value was then chosen based on which result generated the smallest error, calculated with RMSE and MAE. The results of the baseline were also plotted for reference, see figures 4.1 and 4.2.


Figure 4.1: Mean absolute error of the predictions (y-axis) for different values of K (x-axis), for k-NN and the baseline.

Figure 4.2: Root mean square error of the predictions (y-axis) for different values of K (x-axis), for k-NN and the baseline.


Figures 4.1 and 4.2 show that a K value around 25 generated the best results in terms of both RMSE and MAE. This K value was used for the rest of the results. The results could then show the differences between predicted ratings and actual ratings in the test set. A division into error intervals was calculated, see figure 4.3.

Figure 4.3: Percentage of predictions in each error interval (0–0.5, 0.5–1.0, 1.0–1.5, 1.5–2.0, >2.0) for the KNN algorithm and the baseline of average movie ratings.

The results gave a mean absolute error of 0.649 and a root mean square error of 0.861. The baseline method's MAE was 0.710 and its RMSE 0.919. In comparison with the KNN algorithm, the mean absolute error for the baseline method was 9.4% bigger than the same metric for KNN, and the root mean square error was 6.7% bigger.
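The reported relative differences follow directly from the error values:

```python
def relative_increase(baseline_error, knn_error):
    """Percentage by which the baseline error exceeds the KNN error."""
    return 100 * (baseline_error - knn_error) / knn_error

print(round(relative_increase(0.710, 0.649), 1))  # MAE difference: prints 9.4
print(round(relative_increase(0.919, 0.861), 1))  # RMSE difference: prints 6.7
```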

4.3 Other studies

To compare with other studies, the numbers from their respective implementations of the KNN and baseline methods were used.

Marović et al. conducted a similar study and received the results presented below. The main differences between our implementation and theirs are that they used a different dataset, which had a rating scale of 1 to 10 instead of 1 to 5, and that they approached KNN in a different way. The results show that the errors of the baseline method were 27% bigger than those of the KNN algorithm measured in MAE, and 20% bigger measured in RMSE [10].

• Results from the KNN algorithm:
  MAE: 1.319
  RMSE: 1.8797
• Results from the baseline method:
  MAE: 1.676
  RMSE: 2.289

In the study of Lorentz and Ek, the objective was to investigate the possibility of creating a recommendation system estimating user ratings while addressing the cold start problem. Since their objective was to investigate cold start, which is not relevant to this report, only some of the results are discussed. The implementation of the user-based KNN was applied to different sets of ratings, both taken from the same source as this study. However, they also used a different approach to KNN than this study did. The results show that for MAE the baseline method was 17.2% bigger than KNN for the 100K dataset and 21% bigger for the 1M dataset. For RMSE, the baseline method was 10.7% bigger for the 100K dataset and 15% bigger for the 1M dataset [9].

• Results from the KNN algorithm:
  MAE: 0.8059 (on 100 000 ratings)
  RMSE: 1.0157 (on 100 000 ratings)
  MAE: 0.7707 (on 1 million ratings)
  RMSE: 0.9677 (on 1 million ratings)
• Results from the baseline method:
  MAE: 0.94439 (on 100 000 ratings)
  RMSE: 1.1248 (on 100 000 ratings)
  MAE: 0.9337 (on 1 million ratings)
  RMSE: 1.1169 (on 1 million ratings)

5 Discussion

5.1 Implementation

The result of the implementation showed that both error metrics were smaller for KNN. However, the difference was smaller than might be expected: the difference in mean absolute error was only 0.061, meaning that, on average, the estimates given by the two methods could be very close. Since one of the two is a common machine learning algorithm and the other is a trivial calculation, this is an unexpected result.

The range of errors shown in figure 4.3 also showed that KNN more frequently gave estimates that were in the best category, an error below 0.5, than the baseline did. At the same time, the KNN algorithm gave more estimates in the worst category, an error above 2.0. The baseline method, on the other hand, had more estimates in the middle error categories. This suggests that KNN gives very close estimates in more cases than the baseline method, but is also at a greater risk of being further off. However, it is worth noting that both methods have their largest share of estimates in the best category, and the shares decrease for the worse categories.

5.2 Other studies and comparisons

In the study of Marović et al., the dataset used had a rating range of 1 to 10 [10], compared to the MovieLens datasets' range of 1 to 5. The use of this different dataset may be one factor in why the RMSE and MAE of their results are bigger than the same measures from the implementation in this report. The difference in the size of the datasets might be another factor behind the difference in the measurements.

Cremonesi, Koren and Turrin also used a different dataset, provided by Netflix. However, they also used data from MovieLens, and the general results presented were similar [6].

In the study of Lorentz and Ek, the implementation was run on different cases. For instance, the cases that did not address the cold start issue were run on both the dataset of 100 000 ratings and the dataset of 1 million ratings [9]. Therefore, both cases are comparable in the context of this study, and both indicate better RMSE and MAE results than the baseline.

Furthermore, according to the report of Adomavicius and Tuzhilin, one limitation of researching the performance of recommender systems is that the quality of the test data may affect the result [1]. Items that users rated were more likely to be items that the users liked. Since the result of this report depends on the quality of the selected MovieLens datasets, this might have affected the result. If users rated all movies they had seen, and not only movies that they liked, the result might be different.

The results from this implementation, together with the other studies, showed that the differences between KNN and the baseline method vary depending on the method for finding the nearest neighbors, as well as on the datasets used, the quality of the data, and the rating ranges. These factors suggest that the exact numbers presented in the other studies might not be directly comparable to the results in this study. The quality of the selected datasets may also affect the result of this report, since it is uncertain whether the users rated random, unbiased movies. However, more general conclusions concerning the advantage of the KNN algorithm compared to the baseline are still relevant to discuss. In all cases presented, the KNN algorithm gave better estimates than the method of taking the average movie rating. This suggests that KNN is in general better at estimating movie ratings than the baseline, even if an exact difference in performance cannot be established.

5.3 Sources of error

The training dataset used was relatively small, with only 100 000 ratings from 1000 users on 1700 movies. Since the implementation was user-based, this may have affected the result in that inconsistent users were given a greater influence and the pool of similar users was smaller. The report from Lorentz and Ek also showed that using a bigger dataset could generate better results for both KNN and the baseline, which strengthens the theory that the small dataset may make the results from this implementation unreliable [9].


The study from Cremonesi, Koren and Turrin suggests that RMSE and MAE may not be applicable measures for every usage of the estimated ratings. This implies that using these results for any purpose other than producing exact estimated ratings has not been investigated and may give significantly different outcomes.

The RMSE and MAE for the baseline method are smaller than the compared studies, which might suggest that the data that was used for the implementation consisted of a small range of users with similar opinions. This could result in better predictions for the KNN than what the same algorithm would give for a more representative dataset.

Furthermore, another source of error might be that the selection of other reports was limited and could give an incomplete picture of the subject.

6 Conclusions

From the results of this study, a recommender system based on KNN produced smaller errors than the baseline method. In order to strengthen the result, it would be beneficial to run the implementation on more, and more varied, datasets with controlled quality, including a random, unbiased dataset. Since the study was limited in data and to a single approach to KNN, it is uncertain whether other approaches would be beneficial to investigate. One can conclude that developing recommender systems based on machine learning might be beneficial for services if implemented effectively, in terms of giving close predictions.

6.1 Future research

Since the differences between the results of KNN and the baseline method were small, further extensions of this subject could be investigated, such as basing the prediction method on other machine learning algorithms, using other approaches to KNN, or using different datasets. In addition, it would be interesting to investigate the usage of KNN in recommender systems from other aspects, such as economic and social ones. For instance, one could weigh the value of a machine learning algorithm, in terms of results, against the time and resources needed for the implementation and the profit gained by the difference in the result. In addition, due to the lack of experiments on random, unbiased data, it could be interesting to research when the gain from the results is enough to cover the cost and time of implementation.

References

[1] Adomavicius, G. and Tuzhilin, A. “Toward the next generation of recommender systems: a survey of the state-of-the-art and possible extensions”. In: IEEE Transactions on Knowledge and Data Engineering 17.6 (2005), pp. 734–749.

[2] Benesty, J., Chen, J., Huang, Y., and Cohen, I. “Pearson Correlation Coefficient”. In: Noise Reduction in Speech Processing. Springer Topics in Signal Processing, vol. 2. Springer, Berlin, Heidelberg (2009).

[3] Bhavsar, H. and Ganatra, A. “A Comparative Study of Training Algorithms for Supervised Machine Learning”. In: International Journal of Soft Computing and Engineering (IJSCE) 2 (January 2012).

[4] Brusilovsky, P., Kobsa, A., and Nejdl, W. The Adaptive Web: Methods and Strategies of Web Personalization. 1st ed. Berlin, Heidelberg: Springer-Verlag, 2007.

[5] Chai, T. MovieLens (25M). 2019. URL: http://grouplens.org/datasets/movielens/25m/ (visited on 05/15/2020).

[6] Cremonesi, P., Koren, Y., and Turrin, R. “Performance of recommender algorithms on top-N recommendation tasks”. In: RecSys ’10: Proceedings of the 4th ACM Conference on Recommender Systems (2010), pp. 39–46.

[7] Gomez-Uribe, C. A. and Hunt, N. “The Netflix Recommender System”. In: ACM Transactions on Management Information Systems 6.4 (December 2015), pp. 1–19.

[8] Isinkaye, F. O., Folajimi, Y. O., and Ojokoh, B. A. “Recommendation systems: Principles, methods and evaluation”. In: Egyptian Informatics Journal 16.3 (November 2015), pp. 261–273.

[9] Lorentz, R. and Ek, O. “Cold-start recommendations for the user- and item-based recommender system algorithm k-Nearest Neighbors”. Independent thesis, basic level (degree of Bachelor) (May 2017).

[10] Marović, M., Mihoković, M., Mikša, M., Pribil, S., and Tuš, A. “Automatic movie ratings prediction using machine learning”. In: Proceedings of the 34th International Convention MIPRO (May 2011), pp. 1640–1645.

[11] MovieLens. MovieLens (100K). 1998. URL: http://grouplens.org/datasets/movielens/100k/ (visited on 05/15/2020).

[12] Naumov, M. and Mudigere, D. DLRM: An advanced, open source deep learning recommendation model. 2019. URL: https://ai.facebook.com/blog/dlrm-an-advanced-open-source-deep-learning-recommendation-model/ (visited on 05/13/2020).

[13] Nguyen, H. V. and Bai, L. “Cosine Similarity Metric Learning for Face Verification”. In: Kimmel, R., Klette, R., Sugimoto, A. (eds.) Computer Vision – ACCV 2010. Lecture Notes in Computer Science, vol. 6493. Springer, Berlin, Heidelberg (2011).

[14] Wang, W. and Lu, Y. “Analysis of the Mean Absolute Error (MAE) and the Root Mean Square Error (RMSE) in Assessing Rounding Model”. In: IOP Conference Series: Materials Science and Engineering 324 (March 2018), p. 012049.
