
Loss Aversion

A study of changes in loss aversion towards a 50/50 gamble

Financial Economics Bachelor Thesis

Authors: David Nilsson, Mauritz Smedensjö Myhre

Supervisor: Magnus Willesson
Examiner: Håkan Locking
Term: VT20
Subject: Financial economics
Level: Bachelor
Course code: 2FE32E


Abstract

Loss aversion is a theory which states that losses loom larger than gains.

Negative outcomes are weighted more heavily than positive outcomes in decision making, but could this weight change when different prospects are evaluated?

This thesis focuses on how loss aversion changes with the magnitude of a potential loss for young individuals who face a 50/50 chance of winning or losing a gamble. Loss aversion is tested for six magnitudes of a potential loss, ranging from 100 kr to 4 000 kr, and the results for these six magnitudes are then compared to examine how loss aversion changes. The data were collected using a survey experiment that was digitally distributed to economics students at Linnaeus University in Växjö. The subsequent analysis showed that loss aversion was not constant across all six losses: it differed in ten out of fifteen pairwise comparisons.

Respondents became more loss averse as the loss increased, but loss aversion seemed to be less sensitive to increases in losses above the 1 000 kr mark.

Keywords

Loss aversion, reference point, behavioural finance, losses and gains, required win, risky prospects


Acknowledgements

We would first of all like to thank our examiner Håkan Locking and supervisor Magnus Willesson, who have been very helpful and given us advice that helped us move forward with this research. Further, we would like to thank My Gustafsson, who gave a lot of good guidance and feedback during this work even though she had no obligation to offer her help. A special thanks goes to the ten test subjects who helped us with feedback on the interpretation of our survey. They helped us make sure that the survey was interpreted the way it was supposed to be, so that we could gather reliable data.

We would of course also like to thank all 111 respondents of our survey, who made it possible for us to gather the data for the analysis in this work, and our opponents for their constructive criticism and advice.


Table of contents

1.0 Introduction
1.1 Background
1.2 Problem discussion
1.3 Purpose
1.4 Research questions
1.5 Limitations
2. Theoretical framework
2.1 Expected utility theory
2.2 Prospect theory
2.2.1 Evaluation is relative to a neutral reference point
2.2.2 Diminishing sensitivity
2.2.3 Loss aversion
2.2.4 Drawbacks
2.3 Perspectives on Loss aversion
2.3.1 Constant loss aversion
2.3.2 Adaptive loss aversion
2.3.3 Research on non-constant loss aversion
2.5 Endowment effect
2.6 Mental accounting
3. Method
3.1 Quantitative vs. qualitative method
3.2 Survey vs. structured interviews
3.3 Sample
3.4 Constructing a survey
3.4.1 Visual representation
3.4.2 Question order
3.4.3 How to ask questions
3.4.5 What is being measured
3.5 Test survey
3.6 The Survey
3.6.1 Distribution of the survey
3.7 Null hypothesis
3.7.1 Friedman Test
3.7.2 Post Hoc Test (Wilcoxon test)
3.8 Choice of analysis
4. Result
5. Discussion
6. Conclusion
7. Further research
8. Sources
8.1 Literature
8.2 Handbook/Survey instructions
8.3 Digital sources
8.4 Images
8.5 Articles
9. Appendix
9.1 First Draft of the Survey
9.2 Final Survey
9.3 Distributions
9.3.1 Gender
9.3.2 Age
9.3.3 Income
9.3.4 Total savings
9.3.5 Question A
9.3.6 Question B
9.3.7 Question C
9.3.8 Question D
9.3.9 Question E
9.3.10 Question F
9.4 Kolmogorov-Smirnov and Shapiro-Wilk Test for Normal Distribution
9.5 Friedman Test
9.6 Wilcoxon Signed Ranks Test
9.6.1 Wilcoxon signed rank test, comparing Q. A
9.6.2 Wilcoxon signed rank test, comparing Q. B
9.6.3 Wilcoxon signed rank test, comparing Q. C
9.6.4 Wilcoxon signed rank test, comparing Q. D
9.6.5 Wilcoxon signed rank test, comparing Q. E
9.7 Wilcoxon Signed Rank Test for the non-significant differences between distributions
9.7.1 Wilcoxon Signed Rank Test between the distributions for Q. A & Q. B
9.7.2 Wilcoxon Signed Rank Test between the distributions for Q. C & Q. D
9.7.3 Wilcoxon Signed Rank Test between the distributions for Q. D & Q. E
9.7.4 Wilcoxon Signed Rank Test between the distributions for Q. D & Q. F
9.7.5 Wilcoxon Signed Rank Test between the distributions for Q. E & Q. F


1.0 Introduction

1.1 Background

Prospect theory has been developed to explain the way in which people make choices when risk is involved (Häckel, Pfosser, Tränkler, 2017). Test results show that people do not make choices in the way that traditional utility theory predicts they will. Within traditional utility theory, an individual will always experience the same utility from having 100 000 kr no matter what their previous state was. The same utility will be experienced by a person who previously had 110 000 kr but now has 100 000 kr as by an individual who previously had 90 000 kr and now also has 100 000 kr (Tversky, Kahneman, 1979). However, according to Tversky and Kahneman (1979), this is not what is observed in reality. Prospect theory says that individuals evaluate outcomes from a gain or loss perspective rather than from the final state. This means that, in the previous example, the individual who gained 10 000 kr to now have 100 000 kr will experience a higher utility than the individual who lost 10 000 kr. Expressed in other words, the value one experiences is dependent on the gain or loss rather than the final state (Kahneman, Tversky, 1979).

As part of prospect theory, Tversky and Kahneman developed the theory of loss aversion (Kahneman, 2013). Loss aversion is a theory which says that a change in wealth is evaluated differently depending on whether it is a gain or a loss. If a gain and a loss are of equal size, the effect of the loss will be greater than that of the gain. The individual will feel a greater reduction in value from the loss than the increase in value from the gain (Tversky, Kahneman, 1991). According to Tversky and Kahneman (1979), the same behaviour is observed within decision making. Given the same difference between two decisions, the difference will be of greater significance when the decision is between two losses than when it is between two gains (Tversky, Kahneman, 1979). These differences are explained by the asymmetric relationship between gains and losses within individuals. This asymmetric relationship is the reason that a greater weight is attached to losses than to gains in decision making (Tversky, Kahneman, 1991).


Does an individual always possess the same degree of loss aversion? The adaptive loss aversion theory suggests that the degree of loss aversion an individual possesses fluctuates when the outcomes of decisions differ from what was anticipated. If a gain was bigger than anticipated, individuals become less loss averse, and vice versa (Lindsay, 2019).

Furthermore, a study made by Wang et.al (2016) also showed that loss aversion could differ depending on the magnitude of a potential loss.

Other research indicates that loss aversion is constant and independent of the reference point. Shalev (2002) proposed a utility model which takes loss aversion into account. The model assumes that loss aversion is a constant which lowers the utility compared to traditional utility theory (Shalev, 2002). Later, Hans Peters (2011) strengthened this argument by showing that altering the reference point, defined by Kahneman (2013) as "the earlier state relative to which gains and losses are evaluated", does not affect the loss aversion (Peters, 2011).

1.2 Problem discussion

The expected utility theory (EUT) is normative in nature, which means that it describes how rational decisions should be made under risk in order to maximize one's expected utility (Häckel, Pfosser, Tränkler, 2017).

“According to EUT, each action is ranked based on its expected utility, which depends on both the consequences and the probabilities of each possible scenario”

- Cappello, Zonta, Glišić, 2016

To exemplify this, we imagine a coin toss. If the coin toss lands on heads you win 10$ and if it lands on tails you lose 5$. This scenario would yield an expected utility of:

0,5 × 10 + 0,5 × (−5) = 2,5

Paul A. Samuelson was an American economist and the first American to win the Nobel prize in economic science (Nobelprize, 2020); he has also been called "the father of modern economics" (MIT, 2020). Samuelson once offered a colleague a wager in which they would toss a coin: if the coin landed on heads the colleague would win 200$, but if it landed on tails the colleague would lose 100$. The colleague turned the wager down, but said he would agree to it if they did it 100 times. Samuelson deemed this behaviour irrational, since if you agree to play a bet 100 times, the bet should not be turned down when it is played once. The colleague replied, "I won't bet because I would feel the 100$ loss more than the 200$ gain" (Thaler, Tversky, Kahneman, Schwartz, 1997). This behaviour observed by Samuelson violates EUT, which is what Tversky and Kahneman (1979) later investigated when proposing prospect theory as an alternative to EUT (Tversky, Kahneman, 1979).

In 1979, as part of prospect theory, Tversky and Kahneman proposed a value function for losses and gains. The function is steeper for losses than for gains (Tversky, Kahneman, 1979).

Graph 1. An illustration of a Value function from Tversky and Kahneman (1979).


The shape of the function is derived from the fact that when the magnitude of a loss or gain increases, the marginal value of a given change decreases. The change from 100 to 200 is perceived as greater than a change from 1100 to 1200, hence the shape of the curve (Tversky, Kahneman, 1979). The value function proposed by Tversky and Kahneman (1979) shows the concept of loss aversion. The slope reflects the observed 2:1 relationship between gains and losses (Kahneman, Knetsch, Thaler, 1991). This translates to a loss aversion parameter of two with the definition given by Tversky and Kahneman (1992). They stated that in a 50/50 gamble the loss aversion parameter could be calculated as the required win to accept the bet divided by the given potential loss (Tversky, Kahneman, 1992). While Tversky and Kahneman's research shows that loss aversion is present in preferences and decision making, one may wonder whether the loss aversion is constant for each individual or not.
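
To make the definition above concrete, the following minimal sketch (written in Python purely for illustration; no code is part of the thesis itself) implements the value function of Tversky and Kahneman (1992) together with the required-win definition of the loss aversion parameter. The functional form and the parameter values α = β = 0,88 and λ = 2,25 are the estimates reported by Tversky and Kahneman (1992), not results of this thesis.

```python
# Illustrative sketch of the Tversky-Kahneman (1992) value function and the
# loss aversion parameter defined as required win / potential loss.
# alpha, beta and lam are the published 1992 estimates, used only as an example.

def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x relative to the reference point."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

def loss_aversion_parameter(required_win, potential_loss):
    """Loss aversion in a 50/50 gamble: the required win divided by the potential loss."""
    return required_win / potential_loss

print(value(100), value(-100))            # the loss weighs heavier than the equal-sized gain
print(loss_aversion_parameter(200, 100))  # 2.0, the observed 2:1 relationship
```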

Lindsay (2019) researched different scenarios with lottery tickets to test loss aversion. The result from the different scenarios was that the adaptive loss aversion model best explained the subjects' behaviour and the gap in their bid-ask spread. This research shows that loss aversion tends to change based on whether outcomes exceeded or fell short of the anticipated results from the lottery (Lindsay, 2019).

Lindsay (2019) showed evidence of a change in loss aversion based on experience, while other research by Shalev (2002) and Peters (2011) has used a constant loss aversion factor. The constant loss aversion factor used by Shalev (2002) and Peters (2011) remained constant as long as the relationship between wins and losses stayed the same when altering the reference point. Even though the constant loss aversion factor is based on certain assumptions, one could ask whether loss aversion really is constant.

Wang et.al (2016) tested the impact that culture has on loss aversion. In the study they found different loss aversion parameters for potential losses of 25$ and 100$ in a 50/50 gamble (Wang et.al, 2016). Barsky et.al (1997) state that people's preference parameters could differ due to sensitivity toward the size of the potential loss (Barsky et.al, 1997), which could explain why the parameters differed. The results from Wang et.al (2016) showed that loss aversion was significantly different (p < 0,001) between the two losses, which is in line with the notion by Barsky et.al (1997) that preference parameters could be sensitive to the size of a potential loss (Wang et.al, 2016).

Tversky and Kahneman (1991) also presented the concept of diminishing sensitivity.

Diminishing sensitivity refers to the fact that a given change is of greater importance when it is closer to the reference point (Tversky, Kahneman, 1991). To exemplify this, we imagine a discount of 200 kr. If the discount is on an item worth 400 kr, it will be perceived as better than a 200 kr discount on an item worth 1 000 kr, even though the discount itself is the same (Sharma, Park, Nicolau, 2020). This raises the question of whether the same is true for losses: how do individuals behave when different magnitudes of potential losses are evaluated?

1.3 Purpose

The purpose of this research is to examine how young individuals' loss aversion changes towards different magnitudes of a potential loss when there is equal probability of winning or losing a gamble.

1.4 Research questions

I. Do we see a change in the loss aversion within individuals for different magnitudes of potential losses or does it remain constant?

II. Between which magnitudes of the potential loss is the loss aversion significantly different?

III. Is an increase in the magnitude of a potential loss of less significance when losses are further away from the reference point?

1.5 Limitations

In this research a few restrictions had to be implemented in order to achieve a result within the given time period. The research is limited to Växjö in Sweden, and the population we will gather data from is economics students at Linnaeus University. Potential losses will only be examined up to 4 000 kr, since losses above 4 000 kr could be perceived as too large for students. Based on feedback from other students, larger losses could make our data unreliable: test subjects said that if the loss were above 4 000 kr they would not be able to handle their finances that month. This research is also a partial study that analyses loss aversion strictly in the decision of a coin toss. It will not consider further effects that the results could have on the utility function or other types of decision making.


2. Theoretical framework

2.1 Expected utility theory

In Bernoulli's utility theory, the state of wealth is all you need to know to determine its utility. At the same amount of wealth, the same utility should be generated for two people no matter their preferences. However, this is not always correct (Kahneman, 2013).

Kahneman (2013) gives an example that states:

“Today Jack and Jill each have a wealth of 5 million. Yesterday Jack had 1 million and Jill had 9 million. Are they equally happy? (Do they have the same utility?)”

- Kahneman, 2013

According to Bernoulli's theory Jack and Jill should have the same utility and be equally happy, but as Kahneman expresses it: “you don't need to have a degree in psychology to know that today Jack is elated and Jill despondent” (Kahneman, 2013).

Moreover, Bernoulli stated that an item's value should be determined by the utility it provides rather than by the item's price. This is because the utility depends on a person's specific circumstances while the price of an item is the same for everyone. For example, obtaining 1000$ is more significant for a poor person than for a rich person even though both would obtain the same amount. The poor person would have more use for the 1000$ and hence gain more utility than the rich person. However, no matter how small or insignificant, any increase in wealth will increase one's utility (Bernoulli, 1954). Bernoulli (1954) summarizes this with a quote:

“I believe that it results from the fact that, in their theory, mathematicians evaluate money in proportion to its quantity while, in practice, people with common sense evaluate money in proportion to the utility they can obtain from it”

- Bernoulli, 1954


With this quote it becomes clear that a measurement of risk that does not take utility into account becomes unreliable. However, an accurate generalization does not seem reasonable to make, because the utility of an item can change depending on its circumstances; an example of this is "a rich prisoner who possesses two thousand ducats but needs two thousand ducats more to repurchase his freedom, will place a higher value on a gain of two thousand ducats than does another man who has less money than he" (Bernoulli, 1954).

In EUT an individual faced with a decision will compare the alternatives' anticipated utilities by multiplying the probability of each outcome with its utility value and then summing the products. This provides an expected utility for each alternative the individual is facing, and the one with the highest expected utility will be chosen (Mongin, 1997).
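
A minimal sketch of this calculation is given below, again in Python and only for illustration. The square-root utility function and the two example prospects are assumptions made for the example; they are not part of EUT itself.

```python
import math

def expected_utility(prospect, u=math.sqrt):
    """prospect: list of (probability, outcome) pairs; u: an assumed utility function."""
    return sum(p * u(x) for p, x in prospect)

# Under a concave (risk-averse) utility function, the safe prospect is preferred
# even though the risky prospect has a higher expected monetary value (130 vs. 100).
safe = [(1.0, 100)]
risky = [(0.5, 250), (0.5, 10)]
print(expected_utility(safe))                    # 10.0
print(expected_utility(risky))                   # about 9.49
print(max([safe, risky], key=expected_utility))  # the chosen prospect: safe
```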

Furthermore, Häckel et.al (2017) state that the foundation of standard neoclassical theory is built on the presumption that people act rationally and make their decisions to maximize their expected utility. EUT is a normative theory that explores how decisions should be made rationally when they are made under risk. They explain further that EUT can be split into three main principles (Häckel et.al, 2017). To quote Häckel et.al (2017), the three main principles are:

“(1) the overall expected utility of a choice is the expected utility of the distribution of possible outcomes. (2) It exists a utility function u() that represents the risk profile of an investor and can be used to value uncertain future outcomes xi. (3) A choice is acceptable if it adds utility to the existing assets.”

- Häckel et.al, 2017

2.2 Prospect theory

According to Tversky and Kahneman (1979), prospect theory tries to describe how decisions by individuals are made under risk (Tversky, Kahneman, 1979). Kahneman and Tversky showed that people's decision making does in fact violate EUT. When individuals are faced with decisions about gains they are risk averse, but when the decision is about a loss they are risk seeking, even when both are given the same value (Mishra, 2014). This behaviour is called the reflection effect. Once outcomes turn negative, individuals' preferences shift from risk aversion to risk seeking (Tversky, Kahneman, 1979). In contrast to Bernoulli's utility theory, prospect theory contains the aspect of the reference point, which according to Kahneman is "the earlier state relative to which gains and losses are evaluated" (Kahneman, 2013).

Kahneman (2013) explains that at the core of prospect theory there are three important factors which are decisive when financial outcomes are evaluated (Kahneman 2013).

● “Evaluation is relative to a neutral reference point”

● “Diminishing sensitivity”

● “Loss aversion”

2.2.1 Evaluation is relative to a neutral reference point

The reference point is most often stated as the status quo. However, things such as expectations or social comparison may also affect the reference point (Tversky, Kahneman, 1991). Kahneman and Tversky (1979) demonstrate this through an example: when touching an object with a given temperature, it can be perceived as either hot or cold. The reason it can be experienced differently is that individuals might have adapted to different temperatures. Transferring this to a monetary example, a given wealth might be experienced as poverty by one individual and as great wealth by another (Tversky, Kahneman, 1979).

2.2.2 Diminishing sensitivity

Diminishing sensitivity says that individuals are less sensitive to a given change when the change is between two larger sums than between two smaller sums (Tversky, Kahneman, 1991). Stated in a different way, individuals put less weight on a change when it is distant from the reference point (Klein, Deissenroth, 2017). To exemplify the diminishing sensitivity concept: this explains why people think of a 5$ discount as better when it is on something that costs 15$ than on something that costs 30$. The amount you save is the same in both cases, but it is perceived as better in the first since it is closer to the reference point (Sharma, Park, Nicolau, 2020).

2.2.3 Loss aversion

Loss aversion states that a loss of a given amount has a bigger impact than a gain of the same amount (Tversky, Kahneman, 1991). Expressed in another way, individuals try harder not to experience a loss than they try to reach gains (Riedl, Heuer, Strauss, 2015). Loss aversion is an important part of explaining a gap that has been observed in trades between the willingness to accept (WTA) and the willingness to pay (WTP) (Tversky, Kahneman, 1991).

Graph 2 is a visual representation of the loss aversion concept. The kink at the origin of the function reflects the observed 2:1 relationship between a smaller loss and a smaller gain. In most cases, a given difference between two options has more impact when it is between two negative outcomes than when it is between two positive outcomes, which is why the slope is steeper in the loss domain (Kahneman, Knetsch, Thaler, 1991).

Graph 2. A typical value function by Kahneman, Knetsch and Thaler (1991).

Furthermore, the status quo bias is a concept closely related to loss aversion. Individuals are unwilling to deviate from their status quo, which is their current state. The reason behind this is that the potential negative outcomes of leaving it are given a higher decision weight than the corresponding potential positive outcomes. However, the same effect can be observed when remaining at the current state is not possible (Kahneman, Knetsch, Thaler, 1991).

Tversky and Kahneman (1991) give the following example:

“Imagine that as part of your professional training you were assigned to a part-time job. The training is now ending and you must look for employment. You consider two possibilities. They are like your training job in most respects except for the amount of social contact and the convenience of commuting to and from work. To compare the two jobs to each other and to the present one you have made up the following table:”

Job           Contact with others            Commute time
Present job   isolated for long stretches    10 min
Job A         limited contact with others    20 min
Job D         moderately sociable            60 min

- Tversky, Kahneman, 1991

Both options are compared to the present job, which is the reference point in this decision. In both options, one aspect is better than the present job and the other is worse (more social contact but a longer commute). The same choice was also presented in the experiment with a present job that instead had “much pleasant social interaction and 80 minutes daily commuting time”.

In the first example 70 percent of the experiment's participants preferred job A, but in the second example only 33 percent preferred job A. This showed that individuals have a higher sensitivity to the losing aspect of an option relative to their reference point (Kahneman, Knetsch, Thaler, 1991).


2.2.4 Drawbacks

Prospect theory has some drawbacks brought up by Kahneman (2013). Firstly, prospect theory is unable to handle the fact that outcomes can be disappointing: when there is a very low chance of a bad outcome, the theory cannot adjust its value to account for the disappointment of that outcome. Secondly, the theory fails in the presence of regret. When faced with two choices, people might regret their decision, and how the outcome is perceived is largely dependent on the other option that could have been chosen (Kahneman, 2013).

2.3 Perspectives on Loss aversion

2.3.1 Constant loss aversion

When making a decision that contains risk, the preferences of the decision maker will be based on a reference outcome. If the outcome of the decision does not reach the reference outcome, the result is regarded as a loss. Shalev (2002) proposed a model that is both simple and elegant for this type of situation (Peters, 2011):

The utility of an outcome below the reference outcome is obtained from the basic utility by subtracting a multiple of the loss in basic utility: this multiple, the loss aversion coefficient, is constant across different reference outcomes. We provide a preference foundation for this loss aversion model.

- Peters, 2011


Graph 3. An illustration of utility with constant loss aversion by Shalev (2002). In this graph the black line represents a traditional utility function. The grey line represents a utility function which accounts for the constant loss aversion (Peters, 2011).

Having a constant loss aversion factor is the reason behind the simplicity of Shalev's model. In the model, utilities that are below the reference outcome are reduced by subtracting the loss multiplied by the loss aversion coefficient, a constant factor denoted λ. There are two aspects one could take to the word "constant" when measuring loss aversion this way. In the first aspect, a specified reference outcome has an identical multiple λ of the loss, which is subtracted from the regular utility across different outcomes. In the second aspect, different reference outcomes do not affect the multiple, since it is constant (Peters, 2011).
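
As a sketch of the mechanics described above (and of the grey line in Graph 3), the following Python snippet reduces the basic utility by λ times the loss in basic utility whenever the outcome falls below the reference outcome. The square-root basic utility and the chosen numbers are illustrative assumptions only.

```python
import math

def loss_averse_utility(x, reference, lam=2.0, u=math.sqrt):
    """Utility of outcome x given a reference outcome, with constant loss aversion lam."""
    basic, ref = u(x), u(reference)
    if basic >= ref:
        return basic
    # Below the reference: subtract a multiple lam of the loss in basic utility.
    return basic - lam * (ref - basic)

print(loss_averse_utility(144, reference=100))  # above the reference: 12.0
print(loss_averse_utility(64, reference=100))   # below the reference: 8 - 2*(10 - 8) = 4.0
```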

Peters (2011) explains a scenario with two different lotteries that have the same reference outcome. Both lotteries are constructed in such a way that the total weights of the negative outcomes are identical. If the reference point were altered without altering the lotteries' proportions between positive and negative outcomes, the preference between the lotteries would remain the same. This would indicate that the loss aversion coefficient is not only constant but also independent of the reference outcome (Peters, 2011).

2.3.2 Adaptive loss aversion

Adaptive loss aversion is a model that contains two aspects of trading behaviour in an environment where payoffs can depend on your own actions, others' actions and the state of nature. The first aspect concerns how people act when they do not know the joint distribution of actions and states. The second aspect concerns how people adapt their behaviour in response to earlier experienced outcomes (Lindsay, 2019).

Adaptive loss aversion is a behavioural model that can be divided into three components. The first component consists of people's expectations, which are based on others' behaviour. The second component is that decisions are based on their predetermined utility. The third and last component takes place after the trade is done and the result of the trade is known. If the trade does not fulfil the anticipated utility, the rate of loss aversion will change: if the trade's utility is below the anticipated utility, the individual's loss aversion will increase, and vice versa (Lindsay, 2019).

When a given strategy is played successfully, it increases the probability of this strategy being played again. An individual has a degree of loss aversion that affects his or her expected gains from trades that involve risk, which in turn affects the individual's willingness to trade. If the trade's payoff is larger than anticipated, the individual will in all likelihood be less loss averse and more open to future trades (Lindsay, 2019).

The third component has also been researched by Novemsky and Kahneman (2005), who explore the assumption that loss aversion does not exist when goods are traded at the expected price. The reason is that the traded good fulfils the expected utility; with this argument, Novemsky and Kahneman (2005) state that loss aversion does not exist in routine transactions, since the utility will always be equal to the anticipated utility (Novemsky, Kahneman, 2005).

2.3.3 Research on non-constant loss aversion

Wang et.al (2016) tested the impact that culture has on loss aversion. The study found different loss aversion parameters for potential losses of 25$ and 100$ in a 50/50 gamble. The results show that loss aversion is significantly different (p < 0,001) (Wang et.al, 2016). Barsky et.al (1997) also state that people's preference parameters could differ due to sensitivity toward the size of the potential loss (Barsky et.al, 1997). Wang et.al (2016) further found that culture could affect the degree of loss aversion an individual possesses (Wang et.al, 2016).

Moreover, research by Rau (2014) shows that there is a difference between men and women when it comes to loss aversion, and that females behave in a less loss averse manner than males (Rau, 2014). Schmidt and Traub (2002) also concluded that there were differences in loss aversion between genders; in their study, female subjects seemed to be more loss averse than male subjects (Schmidt, Traub, 2002).

2.5 Endowment effect

The endowment effect theory states that a given good in your possession is deemed more valuable than the same good if it has not yet been acquired. In other words, you would require a higher price to sell the good than you would be prepared to pay for it (Thaler, 1980).

Thaler (1980) gives an example of the endowment effect:

“Mr. R bought a case of good wine in the late '50's for about $5 a bottle. A few years later his wine merchant offered to buy the wine back for $100 a bottle. He refused, although he has never paid more than $35 for a bottle of wine.”

- Thaler, 1980

Chatterjee et.al (2013) argue that there are two major driving factors behind the endowment effect: loss aversion and ownership. The loss aversion part of the endowment effect focuses on the reference point, which is whether you own the good or not. Owning the good means that selling it is regarded as a loss, and not owning it means that buying it is seen as a gain. Since losses weigh heavier than gains, the selling price will be higher than the price individuals are willing to pay for it. The other major part of the endowment effect is ownership (Chatterjee et.al, 2013). The authors give the following reasons why ownership is a part of the endowment effect:


“Two principles on which the ownership account of the endowment effect is built are: (1) people get attached to what they own, that is, people’s possessions become a part of themselves (Beggan 1992; Belk 1988; Dittmar 1992) and (2) most people have a positive attitude toward themselves (Brown 1998; Steele 1988), and, thus, they are likely to see their possessions, which are associated with the self, as attractive (see “the mere ownership effect”; Beggan 1992).”

- Chatterjee et.al, 2013

2.6 Mental accounting

Mental accounting (MA) is a part of behavioural economics in which, according to theory, behaviour patterns in saving are analysed. Here, saving indicates that an individual will prioritize future spending over present spending (Shefrin, Thaler, 1988). MA is based on the assumption that the value of money varies from individual to individual. One reason for this is that the perception of money varies depending on how it is acquired and on what the money is meant for (Thaler, 1990). For example, if you have worked hard to obtain a paycheck, parting with that money will feel like a larger loss than parting with money that you have been given or acquired by chance (Thaler, 1985).

According to Yuntong et.al (2013), MA can be divided into three divisions. The first one is how something is funded, for example by normal earnings or by windfall. Windfall is money that you have acquired more "easily" than normal earnings and can be anything from money that you have been given to money won by chance. The second division is the consumer item, which for example could be food expenditures, luxury expenditures or entertainment expenditures. The third and last division of MA is saving patterns. Yuntong et.al (2013) give the following example to explain saving patterns:

“people divide their wealth into fixed accounts and interim accounts on the basis of their saving goals, and generally do not transfer money from the fixed account in order to meet temporary consumption demands”

- Yuntong et.al, 2013


To summarize MA with an example, we can take Tversky and Kahneman's experiment about two similar scenarios involving a jacket and a calculator.

“Example A. Imagine that you are about to purchase a jacket for $125 and a calculator for $15. The calculator salesman informs you that the calculator you wish to buy is on sale for $10 at the other branch of the store, located 20 minutes drive away. Would you make the trip to the other store?

Example B. Imagine that you are about to purchase a jacket for $15 and a calculator for $125. The calculator salesman informs you that the calculator you wish to buy is on sale for $120 at the other branch of the store, located 20 minutes drive away. Would you make the trip to the other store?

Most people would make the travel in example A, but not in example B, indicating that saving five dollars on a $15 purchase is perceived as more valuable than saving five dollars on a $120 purchase.”

- Tversky, Kahneman, 1981

The evidence from research on the mental accounting topic has shown that a mental categorization of expenses is present in many different decision situations. Entertainment expenses, for example, are tracked against previous expenses for the same purpose, i.e. a previous purchase of a sports game ticket makes another purchase of a ticket for sports or another form of entertainment less desirable. This effect can be observed since these ticket expenses are drawn from the same mental account (Hossain, 2018).


3. Method

3.1 Quantitative vs. qualitative method

When conducting research, either a quantitative or a qualitative method can be applied. One of the biggest differences between quantitative and qualitative research is that qualitative research puts an emphasis on words while quantitative research focuses more on numbers. They also have different relationships to theory: the quantitative method is deductive towards theory and the qualitative method is inductive (Bryman, Bell, 2005).

The method used in this research is quantitative. A hypothesis has been built from theory to test a certain phenomenon. The research being conducted is a test of existing theory, which is a deductive way of handling theory. The approach taken in this research to compare loss aversion for different magnitudes of losses centres on the loss aversion parameter.

The focus is on what the required win is for taking the risk of losing a certain amount of money. Since the focus is on the amount itself and not the reasoning behind why that amount is required, the quantitative method is more suitable in this case. A qualitative approach could perhaps be more suitable if the purpose were to research why a certain individual would require a certain amount to bear the risk of losing.

3.2 Survey vs. structured interviews

The process of gathering data is mainly split into structured interviews and surveys (Bryman, Bell, 2017). In this thesis the data will be gathered from a digital survey. The choice of a survey instead of structured interviews was made because of the time restriction on this thesis, since a survey would be the most effective way to obtain a larger sample. The initial idea was to make both a digital and a physical survey, but due to the Covid-19 pandemic, physical surveys were no longer an option.

There are both advantages and disadvantages with a survey method compared to structured interviews. One disadvantage is that while answering a survey, subjects will not be able to ask questions if they find something unclear. Therefore, the survey needs to be comprehensible, with questions that are easy to answer. This minimizes the room for misinterpretation and omitted answers. Short predetermined answers, i.e. multiple choice, are preferable to open answers since they reduce the risk that subjects tire and do not answer all questions, or in the worst case do not answer the survey at all (Bryman, Bell, 2017).

Another advantage of a survey is that it is cheaper and less time-consuming to conduct than interviews, although the response time to gather the data can be longer. A further advantage of surveys is that the so-called "interviewing effect" is not present. The interviewing effect means that the interviewer's ethnic background, gender, social background and emphasis on key words can affect the answers of the subjects (Bryman, Bell, 2017).

There is a larger risk of respondents giving dishonest and incomplete answers in a survey than in interviews (Bryman, Bell, 2017). One additional advantage of a survey, which is especially beneficial during the ongoing pandemic, is that it is easier to adapt a survey to the subjects' needs, since they can answer the survey wherever and whenever they want. Using a survey to gather the data also has some disadvantages. Since surveys try to avoid open answers, follow-up questions become a lot harder (Bryman, Bell, 2017).

Another disadvantage is that in a survey all the questions are available at once, which makes it impossible to control the order in which the subject answers them. This means the questions are no longer independent of each other. Other things that are important are anonymity as well as clear instructions and an attractive layout, to increase the chance of more truthful answers and increase the response rate. Surveys also generally have worse response rates than interviews. The response rate can be increased by distinct question phrasing, and by short and easy answers in a language that respondents are fluent in (Bryman, Bell, 2017).


3.3 Sample

The population targeted in this research is economics students at Linnaeus University in Växjö, Sweden. This population is chosen mainly because of accessibility to the subjects. Since we are studying at Linnaeus University, we have easier access to data from economics students at our own university than at other locations, due to our connections with teachers who can help us distribute the survey and connections to other fellow students. Another reason that economics students at Linnaeus University are chosen as the population from which we draw our sample is that they have similar previous education, age and monthly income. In addition, some earlier studies in behavioural finance, for example Tversky and Kahneman (1979) and Kahneman, Knetsch and Thaler (1990), have also used university students in their research.

Bryman & Bell (2005) describe two different types of sampling techniques: probability sampling and non-probability sampling (Bryman, Bell, 2005). In this research, non-probability sampling will be used due to time and budget constraints.

There are three main non-probability sampling techniques: convenience sampling, snowball sampling and quota sampling. There are a few things that are important to keep in mind when using a non-probability sample. The fact that the whole population does not have the same probability of being chosen for the sample has implications for how we interpret the answers from the research. Especially for convenience sampling and snowball sampling, the results cannot be generalized, since the population they reflect cannot be identified. This does not mean that this type of sampling is inherently bad or useless: we cannot generalize results across a whole population, but it works well as preliminary research which can be further developed (Bryman, Bell, 2005).

The sampling techniques we are applying are mainly convenience sampling and snowball sampling. A probability sample would give us the ability to generalize the results across all economics students at Linnaeus University in Växjö. Unfortunately, we do not have access to all the students, which makes it very hard and time-consuming for us to take a sample from all economics students. This is why convenience sampling and snowball sampling are applied in our study. Therefore, the sample will mainly consist of MBA students, because they are the most accessible and most populated group at the university. As mentioned earlier, this will make it impossible for us to generalize the results from our study across the population, but our result can still be useful for future research on the topic.

3.4 Constructing a survey

3.4.1 Visual representation

There are a few factors to keep in mind concerning the layout of the survey in order to get a better response rate. According to Bryman & Bell (2005), a survey should not be too long; it should be kept as short as possible so that respondents are not deterred by its size. At the same time, a well-worked and professional layout is important too. A survey that is made too dense in order to be short may deter the respondent, because the survey looks hard to answer when the text is small and the lines are squeezed too tightly together (Bryman, Bell, 2005).

With this in mind, a large focus has been put on constructing a short and concise survey. The total number of questions is ten: four control questions for age, gender, income and savings, and then six questions for the measurement of loss aversion. The relatively low number of questions enables us to have a survey that does not have to be dense in order not to look too long.

Further, the authors explain that having a certain font for each kind of text in the survey is important, i.e. one font for headlines, one for the questions, one for the headings and so on. Using the same font for different kinds of text may even confuse the respondent (Bryman, Bell, 2005). There are relatively few instructions and headlines in the survey, but in order for respondents not to miss the instructions for the loss aversion questions, these have been set in a different font to highlight their importance.

3.4.2 Question order

Dean Lacy (2001) states that the order of questions in a survey can affect the responses. Each question has a certain effect on the subject. Different questions activate a specific thought process and certain stored information, which then affect the answers to the following questions (Lacy, 2001). This is certainly an important aspect that has been considered in the construction of the survey. The question order in the survey runs from the lowest potential loss to the highest. Having the potential losses in random order was considered, but feedback received from test subjects led to the conclusion that a random question order carried too big a risk of confusion.

Other important factors concerning question order are that the survey should start off with the questions that are easier to answer, and that questions that may be sensitive for respondents to answer (i.e. income) should be asked close to the end of the survey (Harrison, 2007). Harrison says that questions about income should preferably be asked later in the survey. However, deviations from this recommendation have been made in the survey construction in order to avoid confusion and misinterpretation: feedback from the test surveys indicated that subjects in this case would prefer this type of question at the beginning of the survey.

3.4.3 How to ask questions

When writing survey questions, it is important that every respondent can understand the question and that all respondents understand it in the same way (Dolnicar, 2013; Harrison, 2007). To accomplish this, Dolnicar (2013) and Harrison (2007) say that things such as technical terms and acronyms, long or complex sentences and nonspecific questions should be avoided (Dolnicar, 2013; Harrison, 2007). Furthermore, Harrison (2007) mentions that strong words should be avoided. By strong words he refers to words that are leading, emotionally loaded or evocative (Harrison, 2007).

Moving on to response options, there are two main options that can be chosen for survey experiments: open-ended or closed-ended questions. The main advantage of an open-ended question is that it allows for more variety in the answers, and no influence from given response options is placed on the respondent. When using closed-ended questions, the response options must include all possible answers, so that respondents do not have to choose an answer which is not their preferred one (Dolnicar, 2013). Harrison (2007) further explains that the use of good closed-ended questions lessens the risk of different interpretations of the survey questions (Harrison, 2007).


Bryman & Bell (2005) further explain that for closed-ended questions the answers can be presented vertically or horizontally. Vertical answers are in most cases the preferred choice: they distinguish the different answers from each other more clearly, and at the same time make coding of the answers easier (Bryman, Bell, 2005).

3.4.5 What is being measured

Most researchers agree that the definition of what is being measured has to be clear in order to develop good survey questions. However, guidelines on how this should be done have been hard to come by. Dolnicar (2013) highlights three elements specified by Rossiter (2011) that are key in the definition of what is being measured (Dolnicar, 2013):

“1. the rater (the person being asked),

2. the object (the object under study), and

3. the attribute (what exactly about the object will be studied).”

- Dolnicar, 2013

If either the object, the attribute or both contain more than one component, or if there is some kind of ambiguity, two more elements need to be specified according to Rossiter (2011) (Dolnicar, 2013):

“1. the rater
2. the object
3. the components of the object
4. the attribute
5. the components of the attribute”

- Dolnicar, 2013


In the survey used in this research, the respondents are the rater, loss aversion is the object under study, and changes in loss aversion for different magnitudes of a potential loss are the attribute.

3.5 Test survey

To make sure that the data obtained from the survey would be possible to analyse and that the subjects interpreted the questions as intended, the survey was tested on two different occasions.

The first test survey was sent out to ten test subjects to see how they interpreted our questions and to give us feedback. From the first test survey a lot of feedback was acquired. The subjects found the survey's title confusing and felt that it decreased their interest. The test subjects also thought that it was unnecessary to have the survey in English, since all of the subjects are fluent in Swedish; they thought that a Swedish survey would leave less room for misinterpretation. Another issue was that the questions were experienced as too similar, which led to a “pattern” in how they answered: test subjects felt that after the first two questions they had already decided what to do after a positive or a negative change in their wealth. The phrasing of the questions and the survey in general left too much room for interpretation. The subjects also felt that a fixed reference point, as in question 1 (appendix 9.1), would help them understand the questions better.

The most consistent feedback was that we should have multiple choice questions instead of open answers, since they are more compelling and easier to answer. To quote one of the subjects: “It felt more like I was taking a test than answering a survey, and therefore my initial feeling was that I didn't want to finish it”.

After reconstructing the survey with regard to the feedback received on the first draft, a reworked draft of the survey was sent out to the same ten test subjects for feedback on their interpretation of the questions. This led to some minor adjustments in the phrasing of the questions. After these minor adjustments, the questions in the survey were interpreted as intended.

3.6 The Survey

The questions in this survey have been constructed on the basis of two sources, "The impact of culture on loss aversion" by Wang et.al (2016) and "Advances in Prospect Theory: Cumulative Representation of Uncertainty" by Tversky and Kahneman (1992). Tversky and Kahneman specified that a parameter of loss aversion could be estimated by dividing the required win by the potential loss in a 50/50 gamble, i.e. a coin toss. If the potential loss is 100$ and the required win for a certain individual in order to accept the bet is 250$, the loss aversion parameter is:

250 / 100 = 2,5

This approach was applied in the study by Wang et.al. In their research, individuals were asked how much the win would have to be in order to accept a bet with a 50 percent chance to lose 25$ and 100$ respectively.

“Y should be at least $____ to make the lottery acceptable.”

Figure 4. Illustration of a survey question from the study by Wang et.al (2016).

The required win would then be divided by the loss to obtain the loss aversion parameter for each individual. In their questions, open answers were used where respondents would state their required win in order for them to see the bet as acceptable (Wang et.al, 2016).

The questions used in this survey research are somewhat different. Instead of open answers, predetermined multiple choice answers that respondents can choose between are used. This change has been made largely due to feedback received from test subjects, but it is also a recommendation by Bryman & Bell (2005) to use multiple choice answers over open answers. The respondents are faced with six different situations stated as the question below:

D. För en sannolikhet på 50% att förlora 1.000 kr kräver du en vinst på lägst: (For a probability of 50% of losing 1 000 kr, you require a win of at least:)

❒ 0 - 1.000 kr

❒ 1.000 - 1.500 kr

❒ 1.500 - 2.000 kr

❒ 2.000 - 2.500 kr

❒ 2.500 - 3.000 kr

❒ 3.000 +

Respondents have a 50 percent chance to lose 1 000 kr in a coin toss. Instead of open answers, the required wins are stated in intervals, and the respondents select the interval which contains the win they require in order to accept the bet. One immediate drawback with this compared to the study by Wang et.al (2016) is that we cannot get a precise parameter of loss aversion, since the answers lie within certain intervals. If the answer for a given individual in the question above is 2 000 - 2 500, the loss aversion parameter is between 2 and 2,5, but knowing whether it is 2,1 or 2,4 is impossible. If the analysis shows that there are differences in the distribution of answers over different magnitudes of the potential loss, it will be interpreted as a difference in the loss aversion parameter, but the exact difference will not be possible to determine.
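
As a small illustration of this point, the sketch below (Python, used only for illustration) turns a hypothetical interval answer into bounds on the loss aversion parameter rather than a point estimate. The interval values follow the answer alternatives above; the chosen answer is an invented example, not actual survey data.

```python
def lambda_bounds(interval_kr, potential_loss_kr):
    """Lower and upper bound on the loss aversion parameter for a required-win interval.
    An open-ended top interval is represented with None as its upper limit."""
    low, high = interval_kr
    upper = None if high is None else high / potential_loss_kr
    return low / potential_loss_kr, upper

# Question D: a potential loss of 1 000 kr and the answer "2 000 - 2 500 kr".
print(lambda_bounds((2000, 2500), 1000))  # (2.0, 2.5)
# The open-ended alternative "3 000 kr +" only gives a lower bound.
print(lambda_bounds((3000, None), 1000))  # (3.0, None)
```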

Another potential problem that could be discussed is the intervals chosen for the answers. What are the appropriate intervals? The reason for this survey's intervals is fairly simple. Kahneman, Knetsch and Thaler (1991) proposed a 2:1 relationship between gains and losses, which would indicate a loss aversion parameter of 2. The answers in this survey have three alternatives lower than the 2:1 relationship and three alternatives higher than the 2:1 relationship. To conclude, the answers are constructed so that the proposed loss aversion parameter of 2 lies in the middle of the possible answer alternatives.


Concerning the question itself, the word probability has been chosen over the words chance and risk, which could otherwise have been used. These two words were omitted because they are suggestive and can have different meanings to respondents: the word risk can carry negative associations, and chance positive ones. Probability is a more neutral word, which lowers the risk of influencing respondents' answers. The probability of 50 percent to lose and to win has been applied partly because it was used in the study by Wang et.al (2016) and because the 50/50 coin toss is a relatively easy hypothetical situation to relate to.

3.6.1 Distribution of the survey

This survey will be distributed through Linnaeus University's webpage Mymoodle to years one and two of the MBA program on their course forums. Years one and two of the MBA program were selected due to their larger number of students compared to other classes in the department of economics. The survey has also been distributed through two separate MBA program groups on the social media platform Facebook, where only second- and third-year MBA students are members. The reason behind using social media and Mymoodle was to gather as much data as possible under the current circumstances of the pandemic.

3.7 Null hypothesis

3.7.1 Friedman Test

H0: There is no significant difference between the distributions (D): D1 = D2 = D3 = D4 = D5 = D6

H1: At least one of the distributions (Di) is different from another (Dj): Di ≠ Dj

3.7.2 Post Hoc Test (Wilcoxon test)

H0: There is no significant difference between the compared distributions: Di = Dj

H1: The compared distributions Di and Dj are different: Di ≠ Dj


3.8 Choice of analysis

This research produces dependent samples, since the six distributions of interest are drawn from the same subjects. Since the distributions are dependent, the choice of statistical test is narrowed down. The distributions will first be tested for normality using the Kolmogorov-Smirnov test and the Shapiro-Wilk test. The test used to test the null hypothesis is a Friedman test. The Friedman test assumes neither a normal distribution nor homogeneity in the data. It is used to analyse multiple related samples, i.e. data drawn from the same subjects on multiple occasions, and it is typically used to analyse ordinal data (Sheldon, Fillyaw, Thompson, 1996).

Other research on the subject, such as Wang et.al (2016) and Rau (2014), has used ANOVA and Mann-Whitney tests respectively. This research has more than two distributions to compare, which excludes the Mann-Whitney U test but leaves the possibility of performing a repeated measures ANOVA test (Anderson et.al, 2017). The problem is that ANOVA assumes a normal distribution, and the distributions in this data are not normally distributed (Rutherford, 2000). This leaves the option of a Kruskal-Wallis test or a Friedman test. The Kruskal-Wallis test is not suitable because it assumes independent samples (Anderson et.al, 2017). We therefore conclude that the Friedman test suits the data, since it can handle both non-normal distributions and dependent samples. A disclaimer should be made about the chi-square test too: the reason for not using a chi-square test is that this research does not seek to compare the results to any hypothetical value (Rana, Singhal, 2015). It is strictly aimed at looking for differences in loss aversion between different potential losses.
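
The test sequence described above could be run as in the sketch below. It uses Python and SciPy purely for illustration (the analysis in this thesis was performed in SPSS), and the answers array is random placeholder data standing in for the 111 × 6 matrix of ordinal answer codes. Note also that SPSS applies a Lilliefors correction to the Kolmogorov-Smirnov test, which the plain call below does not.

```python
import numpy as np
from scipy import stats

# Placeholder data: 111 respondents, 6 questions, ordinal codes 1-6 (not the real survey data).
rng = np.random.default_rng(0)
answers = rng.integers(1, 7, size=(111, 6))

# Normality checks for each of the six distributions.
for i in range(answers.shape[1]):
    col = answers[:, i]
    print("Shapiro-Wilk:", stats.shapiro(col))
    print("Kolmogorov-Smirnov:", stats.kstest(col, "norm", args=(col.mean(), col.std())))

# Friedman test across the six related (dependent) distributions.
stat, p = stats.friedmanchisquare(*[answers[:, i] for i in range(6)])
print("Friedman:", stat, p)
```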

The data collected in the survey are multiple choice answers within certain intervals. We cannot know the exact answer that has been given, only which interval it lies within. Therefore, the answers are ordinal, and in the analysis each answer corresponds to a number on an ordinal scale from one to six, with one being the lowest interval and six the highest.


1. ❒ 0 - 1.000 kr
2. ❒ 1.000 - 1.500 kr
3. ❒ 1.500 - 2.000 kr
4. ❒ 2.000 - 2.500 kr
5. ❒ 2.500 - 3.000 kr
6. ❒ 3.000 +

There are also six distributions to be compared, which makes us unable to use a regular Wilcoxon test without significantly lowering the statistical power of the analysis (Wild, 1997). All these tests will be performed in SPSS. The Friedman test allows us to test for differences between all six distributions in order to see if any of the distributions is significantly different from any of the others.

When the Friedman test has been performed, a post hoc test is run to see where the differences between the distributions are. This post hoc test is a Wilcoxon test where all distributions are compared pairwise in order to determine which of them significantly differ from each other.

The p-values of the pairwise tests are then compared to α = 0,05 with a Bonferroni correction.

The Bonferroni correction can be used when conducting many dependent or independent statistical tests simultaneously. The correction takes the number of tests into account and lowers the α in order to reduce the risk of type I errors (Goldman, 2008). For example, in this research α was 0,05 and 15 tests were made. With the Bonferroni correction, α/n = 0,05/15, the p-value would need to be less than 0,003 to reject the null hypothesis. Two disadvantages to keep in mind when using the Bonferroni correction are that the method can be considered unnecessarily conservative and that the adjusted α will in most cases be smaller than necessary. This reduces type I errors, but the risk of making a type II error increases (Lee, Lee, 2018).
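A minimal sketch of this test sequence in Python with SciPy is given below, as an alternative illustration to the SPSS workflow described above. Here `responses` is the same hypothetical 111 × 6 array of ordinal codes, one column per potential loss, so the numbers printed by the sketch are of course not the thesis' results.

    from itertools import combinations

    import numpy as np
    from scipy import stats

    LOSSES = [100, 250, 500, 1000, 2000, 4000]  # potential losses in kr, one column each

    def friedman_with_posthoc(responses: np.ndarray, alpha: float = 0.05) -> None:
        cols = [responses[:, i] for i in range(responses.shape[1])]

        # Omnibus Friedman test across all six related distributions
        chi2, p_friedman = stats.friedmanchisquare(*cols)
        print(f"Friedman: chi2 = {chi2:.2f}, p = {p_friedman:.4f}")

        # Pairwise Wilcoxon signed-rank tests as post hoc (15 comparisons),
        # each judged against the Bonferroni-corrected alpha of 0.05 / 15
        pairs = list(combinations(range(len(cols)), 2))
        alpha_corrected = alpha / len(pairs)
        for i, j in pairs:
            _, p = stats.wilcoxon(cols[i], cols[j])
            verdict = "reject H0" if p < alpha_corrected else "do not reject H0"
            print(f"{LOSSES[i]} kr vs {LOSSES[j]} kr: p = {p:.4f} ({verdict})")

    rng = np.random.default_rng(seed=1)
    placeholder = rng.integers(1, 7, size=(111, 6))  # stand-in for the coded survey data
    friedman_with_posthoc(placeholder)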


4. Result

Loss aversion is a part of prospect theory as suggested by Kahneman and Tversky (1979).

They observed that losses had more impact on decisions than gains. People have a reference point, most often the status quo, against which different prospects are evaluated (Tversky, Kahneman, 1991). A 2:1 weighting relationship between losses and gains has been observed in experiments with smaller gains and losses (Kahneman, Knetsch, Thaler, 1991).
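A standard way to formalise this 2:1 relation (a textbook-style parameterisation in the spirit of Kahneman and Tversky, not a specification taken from this thesis) is a piecewise linear value function with a loss aversion coefficient λ:

    v(x) = x      for x ≥ 0
    v(x) = λ·x    for x < 0,   with λ > 1

Indifference towards a 50/50 gamble with gain G and loss L then requires ½·v(G) + ½·v(−L) = 0, i.e. G = λ·L, so with λ ≈ 2 the potential gain must be roughly twice the potential loss before the gamble is accepted.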

However, Wang et al. (2016) did a study of the impact of culture on loss aversion. This study showed that there was a statistically significant difference between the loss aversion parameters when potential losses were set to $25 and $100 respectively (Wang et al., 2016). Barsky et al. (1997) also state that people's preference parameters could differ due to sensitivity toward the size of the potential loss (Barsky et al., 1997). This was also noted in the research of Wang et al. (2016), since a difference in loss aversion was detected. This indicates that loss aversion is not constant but changes depending on the magnitude of the potential loss.

This research examines how young individuals' loss aversion changes over different magnitudes of potential losses, ranging from 100 kr to 4 000 kr, when there is an equal probability of winning or losing a gamble.

After removing the incomplete survey answers we had a total of 111 survey responses (N = 111), and the test compared six related distributions, which gives 5 degrees of freedom (k − 1). First, a test for normality was carried out on all six distributions.


Since all p < 0,001 < α = 0,05, we reject the null hypothesis that the distributions are normally distributed at a five percent significance level. In other words, at a five percent significance level none of the distributions is normally distributed.

The null hypothesis was then tested at a five percent significance level with a Friedman test to see if the distributions of the required wins differed between the magnitudes of the potential loss.

The observed p-value is smaller than alpha, p < 0,001 < α = 0,05, therefore we reject the null hypothesis at a five percent significance level. According to the Friedman test we can say, at a five percent significance level, that at least one of the distributions is different from another (Di ≠ Dj), i.e. a different degree of loss aversion.


Since the Friedman test showed a statistically significant difference between at least two distributions, we performed a post hoc test in order to identify which distributions are significantly different from each other. The post hoc test performed is a Wilcoxon test with a Bonferroni correction.

All the post hoc tests can be seen in appendix 4; ten of the fifteen tests showed a significant difference between the distributions. In order not to make this part too long, only the five tests that did not show statistical significance are presented here.

Test for differences between distributions of the potential loss of 2000 and 4000

The observed p-value is larger than the Bonferroni-corrected alpha, p = 0,207 > α/n = 0,003, therefore we do not reject the null hypothesis at a five percent significance level. According to the Wilcoxon test we cannot say that there is a significant difference in loss aversion between the potential losses of 2 000 and 4 000 at a five percent significance level.

Test for differences between distributions of the potential loss of 1000 and 4000

The observed p-value is larger than the Bonferroni-corrected alpha, p = 0,016 > α/n = 0,003, therefore we do not reject the null hypothesis at a five percent significance level. According to the Wilcoxon test we cannot say that there is a significant difference in loss aversion between the potential losses of 1 000 and 4 000 at a five percent significance level.

Test for differences between distributions of the potential loss of 1000 and 2000

The observed p-value is larger than the Bonferroni-corrected alpha, p = 0,056 > α/n = 0,003, therefore we do not reject the null hypothesis at a five percent significance level. According to the Wilcoxon test we cannot say that there is a significant difference in loss aversion between the potential losses of 1 000 and 2 000 at a five percent significance level.

Test for differences between distributions of the potential loss of 500 and 1000

The observed p-value is larger than the Bonferroni-corrected alpha, p = 0,008 > α/n = 0,003, therefore we do not reject the null hypothesis at a five percent significance level. According to the Wilcoxon test we cannot say that there is a significant difference in loss aversion between the potential losses of 500 and 1 000 at a five percent significance level.


Test for differences between distributions of the potential loss of 100 and 250

The observed p-value is larger than the Bonferroni-corrected alpha, p = 0,135 > α/n = 0,003, therefore we do not reject the null hypothesis at a five percent significance level. According to the Wilcoxon test we cannot say that there is a significant difference in loss aversion between the potential losses of 100 and 250 at a five percent significance level.

In total, fifteen pairwise post hoc tests were carried out, and ten of these pairs showed differences in loss aversion that were statistically significant at a five percent significance level.


5. Discussion

One of the questions of interest in this research is whether loss aversion is constant for different magnitudes of potential losses. Shalev (2002) proposed a utility model with a constant loss aversion factor that was the same across different reference outcomes. Our result indicates that loss aversion is not constant across different magnitudes of potential losses.

This is more in line with the research by Wang et al. (2016), which found different loss aversion parameters for potential losses of $25 and $100 respectively. The difference between these two potential losses is also detected in this research: the post hoc tests show significant differences in the loss aversion parameter between 250 kr and 1 000 kr, which are roughly the same amounts as $25 and $100.

Just like Kahneman and Tversky (1979), we observe that the answers to our questions are not in line with what the EUT predicts. A choice is acceptable if it adds utility to the existing assets (Häckel, Pfosser, Tränkler, 2017). According to the EUT, a win of 500-750 kr with a corresponding loss of 500 kr yields a positive expected utility for every possible win in the interval except 500 kr. However, this is not what is observed in the data gathered from the survey. If subjects were to answer in accordance with the EUT, the larger part of the answers would lie in the second interval (see Appendix 9.3). That the answers are not in line with the EUT was to be expected, since losses and gains are weighted differently in prospect theory. The increase in loss aversion when potential losses increase could indicate that the weight put on losses increases with the magnitude of the loss. This would be rationalized by the notion from Barsky et al. (1997) that the preference parameters of individuals could be sensitive to the magnitude of the money at risk. However, it is important to clarify that this is not an invalidation of EUT. EUT is a very useful and important part of economic theory, but it does not seem to work as a predictor of decision making under risk.
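As a quick check of this expected-value argument (treating utility as approximately linear over these small stakes, which is an assumption made here only for the illustration): with a required win W and a potential loss of 500 kr at equal probabilities,

    E = ½·W − ½·500 > 0  ⟺  W > 500 kr,

so every answer strictly above 500 kr, and thus almost the whole 500-750 kr interval, implies a positive expected value, which is why EUT would predict most answers to fall in that second interval.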

Furthermore, the main question of interest in this research is how young individuals' loss aversion changes towards different magnitudes of a potential loss when there is equal probability of winning or losing a gamble.

References
