JOURNAL OF THE EXPERIMENTAL ANALYSIS OF BEHAVIOR

2015, 103, 393–404

NUMBER 2 (MARCH)

IS THERE TIME DISCOUNTING FOR RISK PREMIUM?

TAL SHAVIT
THE SCHOOL OF BUSINESS ADMINISTRATION, THE COLLEGE OF MANAGEMENT ACADEMIC STUDIES, ISRAEL

AND

MOSI ROSENBOIM
DEPARTMENT OF MANAGEMENT, BEN-GURION UNIVERSITY OF THE NEGEV, BEER-SHEVA, ISRAEL

Individuals with a higher subjective discount rate concentrate more on the present, and delay is more significant for them. However, when a risky asset is delayed, not only is the outcome delayed but also the risk. In this paper, we suggest a new, two-stage experimental method with real monetary incentives that allows us to distinguish between the effect of the risk and the effect of the time when pricing a risky asset. We show that when individuals have a greater preference for the present, their risk aversion for a risky asset realized in the future decreases. We argue that the effect of the risk is lower for a future asset for individuals with higher time preference because they discount not only the outcome but also the risk.

Key words: willingness to pay, decision-making process, risk, future, delay discounting, experiment design, human

Individual participants' data are available as an online supplement. Corresponding author: Prof. Tal Shavit (Ph.D.), Associate Dean, The School of Business Administration, The College of Management Academic Studies, 7 Rabin Ave., Rishon Le'Zion, Israel. Phone: 972-52-2920868. Fax: 972-3-9634117. Email: [email protected]. http://www.colman.ac.il/english/Pages/default.aspx
doi: 10.1002/jeab.139

In the years since Strotz (1955) presented his model of future discounting and intertemporal choice behavior, there have been theoretical, experimental, and empirical studies examining the structure of subjective time discounting functions for certain cash flows realized in the future (see the survey by Frederick, Loewenstein, & O'Donoghue, 2002). Experimental procedures are the most common way to evaluate subjective time discount rates (SDR) (e.g., Ahlbrecht & Weber, 1997; Andersen, Harrison, Lau, & Rutström, 2006; Anderson & Stafford, 2009; Benzion, Krahnen, & Shavit, 2011; Benzion, Rappoport, & Yagil, 1989; Burks, Carpenter, Götte, & Rustichini, 2012; Charness, Gneezy, & Imas, 2013; Coller & Williams, 1999; Keren & Roelofsma, 1995; Onculer & Onay, 2008; Read, 2001; Rubinstein, 2003). Most research on time discounting has focused on certain future cash flows (i.e., larger–later rewards); only a few studies have examined risky future cash flows. A majority of studies that measured SDR for risky future cash flow found that delayed risky outcomes are discounted less than delayed certain ones, meaning that SDR is lower for risky cash flow (e.g., Ahlbrecht & Weber, 1997; Benzion, Krahnen, & Shavit, 2011; Keren & Roelofsma, 1995). A small number of experimental results were contradictory, finding that risky options are discounted more heavily than certain ones (e.g., Anderson & Stafford, 2009; Onculer & Onay, 2008).

One of the reasons for the different attitudes towards delayed risky outcomes and delayed certain outcomes is the different weight given to a delayed payoff amount relative to the weight given to the probability of receiving that delayed amount. Sagristano, Trope, and Liberman (2002) showed that low payoffs with high probability were preferred for the near future, whereas high payoffs with low probability were preferred for the distant future. According to these authors, temporal distance increases the decision weight given to payoff amount relative to the weight given to the probability of receiving that amount. Noussair and Wu (2006) confirmed the results of Sagristano et al. (2002) with an experimental procedure using real cash incentives. They measured risk aversion by asking the participants to choose between two lotteries based on the procedure of Holt and Laury (2002). In this procedure the participant receives 10 paired lotteries. In each lottery, one outcome is high and the other is low (e.g., lottery A: 10% chance of receiving $2 and 90% chance of receiving $1.60).


The participants are asked to choose one lottery in each pair (e.g., lottery A or lottery B: 10% chance of receiving $3.85 and 90% chance of receiving $0.10). As just illustrated, in the first pair the probability of receiving the higher outcome in each lottery is 0.1, and lottery A is preferred because it has a higher expected value and a lower risk of obtaining a small reward than lottery B. In the next paired lottery, the probability of receiving the higher outcome in each lottery increases to 0.2 and the probability of getting the smaller amount decreases to 0.8 (amounts are unchanged from the previous pair). As a result, the difference in the expected values of lotteries A and B decreases (from a difference of $1.17 in the first pair to $0.83 in the second pair). In each successive pair, the probability of the larger payoff increases by 0.1 until, in the final pair, its probability is 1.0. Only when the expected value of lottery B is high enough to compensate for its greater risk should the participant start choosing lottery B. The more times the participant chooses lottery A, the higher his or her risk aversion.

The participants in the Noussair and Wu experiment received a cash payment based on one randomly selected pair and the outcome of the chosen lottery in that pair. Each participant was assigned to both treatments. In the first treatment, participants were told that they would receive their cash payment immediately after the experiment. In the second treatment, participants were told they would receive the payment 3 months after they made their decisions. Participants tended to be more risk averse for payoffs realized immediately than for payoffs realized in the future. Noussair and Wu suggest that risk aversion is reduced for future payoffs. Note that they did not measure discounted values but only compared the risk aversion in the two treatments.
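To make the structure of this choice list concrete, the short Python sketch below (our illustration, not material from the original studies) computes the expected value of each lottery in every pair using the payoffs cited above; the row where the expected-value difference turns negative is where a risk-neutral chooser would switch to lottery B.

```python
# Sketch of the Holt and Laury (2002) choice list used by Noussair and Wu (2006).
# The payoffs are the standard ones cited in the text; the loop prints the
# expected-value difference for each of the 10 paired lotteries.

def expected_value(p_high, high, low):
    """Expected value of a two-outcome lottery."""
    return p_high * high + (1 - p_high) * low

A_HIGH, A_LOW = 2.00, 1.60   # lottery A payoffs (USD)
B_HIGH, B_LOW = 3.85, 0.10   # lottery B payoffs (USD)

for row in range(1, 11):
    p = row / 10                      # probability of the high payoff
    ev_a = expected_value(p, A_HIGH, A_LOW)
    ev_b = expected_value(p, B_HIGH, B_LOW)
    print(f"p = {p:.1f}  EV(A) = {ev_a:.2f}  EV(B) = {ev_b:.2f}  "
          f"diff = {ev_a - ev_b:+.2f}")

# The EV difference shrinks from +1.17 in the first pair (and +0.83 in the
# second) and turns negative at p = 0.5; continuing to choose A beyond that
# point indicates risk aversion.
```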

Several papers have argued that a delayed outcome is the same as a risky outcome. Halevy (2008) suggested, "the crucial distinction between the present and the future is that only the present can be certain, while any future plan is uncertain" (p. 1145). Baucells and Heukamp (2012) proposed a theoretical model that integrated time and probabilities. They based their model on the idea that people show similar behavior when facing probability and when facing delay, noting, "delay makes people insensitive to probabilities and lowers their risk aversion. Or, for events of small probability, subjects become more time patient" (p. 839). Rachlin, Raineri, and Cross (1991) compared the subjective discounting of delayed cash flows to the discounting of risky cash flows. They found a relationship between how people perceive future outcomes and how they perceive risky outcomes, and argued that human behavior is affected similarly by delay and probability. Rachlin, Siegel, and Cross (1994) suggested that the time horizon is part of the delay discounting function, and proposed restructuring gambles into strings to incorporate both time horizon and delay. They argued that if the delay is longer than the individual's time horizon, the time horizon becomes the effective delay.

In addition to the literature on the similar influence of risk and time, there are many studies on the relationship between delay and probability discounting. This literature concentrates on the way people behave when outcomes are delayed (delay discounting) and how they behave when the odds of receiving the outcome vary (probability discounting). Some of this literature suggests that delay and probability discounting can be described by the same discounting function (Green & Myerson, 2004; Rachlin, 2006), which is consistent with findings that there is a significant correlation between the two (Jarmolowicz, Bickel, Carter, Franck, & Mueller, 2012; Richards, Zhang, Mitchell, & de Wit, 1999). However, other studies have found no correlation between delay and probability discounting (Green, Myerson, Oliveira, & Chang, 2014; Holt, Green, & Myerson, 2003; Petry, 2012; Shead & Hodgins, 2009), suggesting that probability and delay discounting are different processes (Baumann & Odum, 2012; Green & Myerson, 2010, 2013; Hinvest & Anderson, 2010; McKerchar, Green, & Myerson, 2010; Mitchell & Wilson, 2010; Terrell, Derenne, & Weatherly, 2014).

In this paper, we combined delay and probability discounting. We did not examine the correlation between the two but tested, as in Noussair and Wu (2006), how people discount delayed probabilistic outcomes (see also Vanderveldt, Green, & Myerson, 2015). We hypothesized that when a risky asset is delayed, not only is the asset delayed but also the risk. Thus, steeply discounting a future risk should yield reduced risk aversion. We argue, based on the findings of Onculer and Onay (2008), that there are two stages in the process of discounting a delayed risky asset. First, the individual discounts the delayed probabilities (shallow discounting yields risk aversion), and only then is the delayed asset discounted. We tested this hypothesis using a two-stage experiment with a real monetary incentive.


There are several alternative explanations for a decision maker's behavior when evaluating risky assets to be realized in the future. According to Loewenstein's (1988) "anticipation effect," positive utility is derived while anticipating a positive outcome (e.g., savoring the future) and negative utility is derived while expecting a negative outcome (e.g., dread). Anticipation was also studied by Caplin and Leahy (2001, 2004), who were more specific regarding the anticipation effect prior to the resolution of uncertainty. They showed that the anticipation effect has an impact on asset prices for investments. They explain that when the uncertainty is resolved in the present there is no anticipation. However, when the resolution is in the future, there is time to "enjoy" the anticipation, which makes people willing to pay a higher price.

The "certainty effect," which was presented as part of prospect theory (Kahneman & Tversky, 1979), could also explain a decision maker's behavior when evaluating risky assets to be realized in the future. According to this effect, when a certain outcome is changed so that the same amount is no longer certain (i.e., it becomes probable), the effect is stronger than when reducing the probability of an outcome that was already uncertain. Delaying a certain outcome is the same as changing a certain outcome into a probabilistic outcome because delay increases the possibility that something will prevent payment (see Anderhub, Gneezy, Guth, & Sonsino, 2001; Green & Myerson, 1997; Myerson, Green, Hanson, Holt, & Estle, 2003; Stevenson, 1986). The certainty effect is the foundation for explaining why delaying a certain outcome adds risk to a nonrisky outcome, whereas delaying a lottery adds risk to an already risky outcome (reducing the probability of an outcome that is already uncertain). This is why the impact of delay is larger for a certain outcome than for a lottery.

The rest of this paper is organized as follows: The next section presents a theoretical analysis of the time discount factor for risky and riskless assets. Then we present the method and the analysis of the results. Finally, in the last section, we summarize the paper and present our conclusions.

Theoretical Model

In this section we develop our hypothesis. The theoretical model is accompanied by a simple numerical example.

Consider a risky asset (L) that has two potential outcomes, X1 and X2, with probabilities p and (1 − p), respectively, at time t. The risky asset's expected value (EV) does not depend on time and is defined as follows:

EV = p·X1 + (1 − p)·X2    (1)

For example, consider lottery L that yields rewards of amount 100 or 20 with equal probabilities (50% for each outcome). The expected value of the lottery is: EV = 0.5 × 100 + 0.5 × 20 = 60.

The lottery's certainty equivalent (CE) is defined as the certain amount for which the decision maker is indifferent between it and the lottery. If a participant valued this lottery at 50, the decision would reflect risk aversion because 50 < EV. CEt is the certainty equivalent for a lottery held at time t, and CE0 is the certainty equivalent for a lottery held immediately.

In this study, we used a two-stage procedure. In the first stage, we asked each participant for his/her CE0. In the second stage, conducted 2 weeks later, we assessed the present (discounted) value (V) for a fixed reward amount realized at different times in the future. The fixed amount was different for each participant and was set as the CE0 value obtained in the first stage. This allowed us to determine the personal present value for each participant's CE0. To the best of our knowledge, this methodology is unique and is used for the first time in this experiment. The present value VCE of the certain amount CE0, if it is realized at time t, was calculated as follows:

VCE = CE0 / (1 + r)^t    (2)

where r is the SDR. In our example, CE0 is 50. In the second stage, we asked the participant for the present value of 50 to be paid at different future times (2 and 6 weeks). For the example, assume that this participant said that the present value of CE0 = 50 if paid in 2 weeks is 45.35; then we can find r, as we did for each participant in the experiment:


VCE = 45.35 = 50 / (1 + r)^2

Solving for r reveals a weekly SDR equal to 5% (r = 0.05).

Previous experimental evidence has shown that a lottery's certainty equivalent depends on the time at which the lottery will be realized (i.e., when the drawing is held and the participant learns whether the reward will be obtained or not). These papers argued that people are less risk averse about future gambles (e.g., Ahlbrecht & Weber, 1997; Keren & Roelofsma, 1995; Noussair & Wu, 2006; Sagristano et al., 2002), meaning that CEt is higher than CE0. Based on the findings of Onculer and Onay (2008), we assumed that participants use an indirect process when calculating the immediate value of a lottery to be realized in the future: They first determine CEt (which is different from CE0) and then discount it as a certain amount realized in the future, so the present value of a lottery L realized at time t is:

VL = CEt / (1 + r)^t    (3)

Note that Equation 3 describes the way people discount a lottery realized in the future. They transform the lottery realized at time t into CEt, and then discount this future certainty equivalent as a certain amount realized in the future to get VL. Unlike Vanderveldt et al. (2015), who suggest a process in which delay discounting and probability discounting are combined multiplicatively, we suggest a two-stage process consistent with the Onculer and Onay (2008) study. We maintain that experimental procedures similar to ours can obtain CE0 directly. However, they cannot obtain CEt directly. To do so, it would be necessary to ask the participants for their future value of a lottery realized in the future, a procedure that is potentially biased. CE0 is the value of the lottery held now; CEt is the value of the same lottery held in the future. The future self should evaluate the lottery in the future in the same manner that the present self evaluates it today. However, we claim that participants do not value CE0 and CEt equivalently, because they discount the risk of not winning the lottery when that risk is delayed, thereby making CEt > CE0, as in Onculer and Onay (2008). Instead, we argue, CEt can be derived experimentally only by using the obtained values of VL and r.

Therefore, in the first stage of the experiment, in addition to obtaining CE0, we also asked participants for their present discounted value of the lottery if it were realized at different times in the future (i.e., VL). For our example, CE0 was 50, and we will assume that the present value for lottery L expiring in 2 weeks is 49.89 (i.e., VL = 49.89). Note that we argue that the same lottery, when delayed by 2 weeks, is worth more than it was when it was resolved immediately, since participants discount the delayed risk. From our prior example applied to Equation 2, we know that r = 0.05.¹ Solving Equation 3 for CEt reveals that the future certainty equivalent of the lottery realized at time t equals 55. Our two-stage experiment thus reveals the certainty equivalent of a future lottery, which we can use to calculate the risk discount rate (rt), defined by the ratio between CEt and CE0:

1 + rt = CEt / CE0    (4)

In our example, the ratio will be: 1 + rt = 55/50, or rt = 0.1.
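The following minimal Python sketch (our illustration, not part of the authors' procedure) reproduces the numerical example above, recovering r from Equation 2, CEt from Equation 3, and rt from Equation 4.

```python
# Sketch of the example calculations in the text (Equations 2-4).
# The values (CE0 = 50, VCE = 45.35, VL = 49.89, t = 2 weeks) are the
# illustrative numbers used above, not experimental data.

CE0 = 50.0     # immediate certainty equivalent (stage 1)
VCE = 45.35    # present value of CE0 paid in t weeks (stage 2)
VL = 49.89     # present value of the lottery realized in t weeks (stage 1)
t = 2          # delay in weeks

# Equation 2: VCE = CE0 / (1 + r)^t  ->  weekly subjective discount rate r
r = (CE0 / VCE) ** (1 / t) - 1

# Equation 3: VL = CEt / (1 + r)^t   ->  future certainty equivalent CEt
CEt = VL * (1 + r) ** t

# Equation 4: 1 + rt = CEt / CE0     ->  risk discount rate rt
rt = CEt / CE0 - 1

print(f"r = {r:.3f}, CEt = {CEt:.2f}, rt = {rt:.2f}")
# -> r ≈ 0.050, CEt ≈ 55.00, rt ≈ 0.10, matching the worked example.
```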

Since CEt depends positively on the subjective discount rate r (see Equations 2 and 3), we hypothesize that an individual with a higher subjective discount rate r concentrates more on the present, and when a risky asset is delayed, not only is the outcome delayed but also the risk. This means that CEt increases relative to CE0 as the subjective discount rate increases. Alternatively, if a participant has the same risk aversion for risk today and risk in the future, then CEt should be equal to CE0, so rt equals 0. The experiment that follows was conducted to test the hypothesis that as the subjective discount rate (r) increases, the risk discount rate (rt) increases.

¹ We assumed that the subjective discount rate (r) used for CE0 is the same for the discounting of CEt. Of course, it is possible that if CEt is very different from CE0, we would obtain a different subjective discount rate due to the magnitude effect (Benzion et al., 1989; Green, Myerson, & McFadden, 1997; Kirby, 1997; Raineri & Rachlin, 1993; Yi, de la Piedad, & Bickel, 2006); that is, large delayed amounts are discounted less steeply than small amounts. For our purposes, we assume that differences between CEt and CE0 were not of sufficient magnitude to influence the discounting rate.

Method

Participants

The participants in the experiment were 74 undergraduate students of economics at Ben-Gurion University in Israel. Their mean age was 24.17 years and 60.8% of them were male. Both stages of the experiment took place in a classroom prior to a lecture. The experimenter offered the students the option of participating in a decision-making experiment with a chance to make some money. He told them that the experiment was not related to the course, would not affect their grade in any way, and that they could choose not to participate.

Procedure

The experiment included two stages in which the participants were asked how much they would be willing to pay for different assets using a second-price auction (SPA; Vickrey, 1961). In an SPA, the participant who bids the highest price wins the auction but pays the second-highest price bid in the group.² In each stage, the participants first read the instructions (see the Appendix), including an explanation of the SPA and examples, and then the experimenter answered their clarification questions. The auctions in each stage were presented in a random order to avoid any order effect. To minimize credibility concerns, the researchers, who were known to the students, were present for each stage of the experiment (as in Anderson & Stafford, 2009). Each participant received a serial number and was asked to remember it for future experiments and future payment. The stages in the experiment were:

² The SPA procedure discourages overbidding because if more than one participant overbids, the winner must pay more for the good than it is subjectively worth. Likewise, it discourages underbidding by decreasing the chance that the participant will be able to obtain the prize at a subjectively good price (Shogren et al., 2001).

The first stage. In the first stage, the participants were asked for (a) their certainty equivalent (CE0) for a lottery realized immediately, and (b) their current values (VL) for the same lottery realized in 2 weeks and in 6 weeks, as in Equation 3. The lottery yields were NIS³ 100 (p = .5) or NIS 20 (p = .5); the expected value (EV) of this lottery is NIS 60.

³ New Israel Shekels. The exchange rate at the time of the experiment was about NIS 4 = USD 1.

The second stage. Two weeks after the first stage, the experimenter returned to the same class. Each participant was asked for his or her immediate value (VCE) of a certain amount, which was his or her own CE0 (obtained in the first stage), if realized in 2 weeks and in 6 weeks, as in Equation 2.

In both stages the participants were told that immediately after the experiment the computer program would randomly divide them into groups of five participants (one of the groups contained four participants because there were only 74 participants). Using a second-price auction, the five participants in each group competed to buy the assets. To provide concrete incentives, we told the participants that one of the problems would be randomly selected at the end of the session, and that we would pay them an amount based on their bid for that problem when the asset matured, either immediately or in the future. Because only one randomly selected problem determined the final monetary reward, participants were told that the full initial amount given to them (NIS 100) was available for each auction. For participants who won the auction, the price (the second-highest bid in the group) was immediately deducted from their initial endowment. For those participants, the second part of the payment was due on the actual date when the fixed amount or the lottery matured, 2 or 6 weeks after the first experimental session. If the asset was a lottery, the lottery was conducted on the due date and the outcome was paid; the participants learned the outcome of the future lottery only on the maturity date. If the asset was a fixed amount, this amount was paid on the maturity date. For an asset realized in the present, the payment was made immediately.

The actual payment for the experiment was 10% of the initial endowment today for those who did not win the auction (NIS 10). For those who won an auction, we paid 10% of the final endowment on the same day (the initial endowment minus the price paid for the asset) and 10% of the asset's outcome on the asset's due date.


Assume, for example, that the winner needed to pay NIS 60 for the lottery realized in 2 weeks. In this case, the final endowment on the day of the experiment is NIS 40 (the initial NIS 100 minus the payment of NIS 60). Therefore, the participant received NIS 4 that day (10% of NIS 40) and 10% of the lottery's outcome 2 weeks later. If the lottery's outcome was NIS 100, the participant received an extra NIS 10 in 2 weeks, and if the lottery's outcome was NIS 20, the participant received an extra NIS 2 in 2 weeks.

Participants were told that they would not have to make any effort in order to collect their money on the due date, because the experimenters would come to their class and pay them in cash. This procedure prevents a transaction cost in the experiment that might affect the results, as described by Coller, Harrison, and Rutström (2005). The cash payment prevented the need to exert any effort to deposit checks, as in Anderhub et al. (2001).
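As a rough illustration of the auction and payment rule just described, the Python sketch below settles a single auction for a group of five; the bids are hypothetical and the function name is ours, not part of the experimental software.

```python
# Sketch (hypothetical bids) of the payoff rule: a second-price auction within
# a group of five, followed by the 10% payment scheme described above.

ENDOWMENT = 100  # NIS, initial endowment per auction

def settle(bids, asset_outcome):
    """Return (payment today, payment on the asset's due date) per bidder."""
    winner = max(range(len(bids)), key=lambda i: bids[i])
    second_price = sorted(bids, reverse=True)[1]
    payments = []
    for i, _ in enumerate(bids):
        if i == winner:
            today = 0.10 * (ENDOWMENT - second_price)  # 10% of final endowment
            later = 0.10 * asset_outcome               # 10% of asset outcome
        else:
            today = 0.10 * ENDOWMENT                   # NIS 10, nothing later
            later = 0.0
        payments.append((today, later))
    return payments

# Example: the winner's price is NIS 60 and the delayed lottery pays NIS 100.
print(settle([70, 60, 40, 35, 20], asset_outcome=100))
# The winner receives NIS 4 today and NIS 10 on the due date, as in the text.
```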

Results

Table 1 presents the average immediate values for the lottery and the certain amounts (CE0) realized at different times. Individual participants' data are available as an online supplement. Based on Equation 2, we found that the average weekly r was 12% (STDV = 23.6%) for 2 weeks and 6.7% (STDV = 10.5%) for 6 weeks. Next we used Equation 3 to find CEt for each individual. We calculated CEt by multiplying VL by (1 + r)^t. Then we found the risk discount rate for a lottery held at time t (rt) based on Equation 4. We also calculated the average risk discount rate between 2 and 6 weeks (CE6 divided by CE2 for each participant). The results are presented in Table 2. The average CE2 and CE6 were significantly higher than CE0 (t(73) = 4.05, p < .01 for CE2, and t(73) = 3.08, p < .01 for CE6).

Table 1
Average WTP (STDV)

Asset     Realization time    Current value    Average value (STDV)
Lottery   Today               CE0              50.68 (23.41)
Lottery   2 weeks             VL               48.07 (22.28)
Lottery   6 weeks             VL               42.01 (22.91)
CE0       2 weeks             VCE              44.30 (23.36)
CE0       6 weeks             VCE              39.62 (24.37)

Table 2
Average CEt and rt (STDV)

Asset   Realization time              Average value (STDV)
CEt     2 weeks                       60.84 (29.8)
CEt     6 weeks                       63.99 (43.5)
rt      2 weeks                       27.7% (69.1%)
rt      6 weeks                       31.5% (83.4%)
rt      Between 2 weeks and 6 weeks   14.2% (67.2%)

Both average r2 and r6 were significantly above zero (t(73) = 3.45, p < .01 for r2, and t(73) = 3.25, p < .01 for r6). This means that the average CEt was higher than CE0. Note that 60.8% and 64.9% of the participants showed rates above zero (meaning a higher CEt than CE0) for 2 and 6 weeks, respectively. On average, the risk discount rate for one week for a lottery held in 2 weeks (the risk discount rate r2 divided by 2) was 13.8% (STDV = 34.6%), and the average risk discount rate for one week between 2 and 6 weeks (the risk discount rate between 2 and 6 weeks divided by 4) was 3.6% (STDV = 16.8%). Note that the average weekly risk discount rate was higher for the first 2 weeks than for the weeks between the 2nd and 6th weeks (t(73) = 2.06, p = .04). For 58.1% of the participants, the weekly risk discount rate was higher for the first 2 weeks than for the 2nd through 6th weeks. This means that, like the time discount rate, the risk discount rate decreases with time, with a jump in the first period and a decreasing marginal change in the risk discount rate at longer delays.

To confirm or refute our hypothesis, we tested the correlations between the weekly subjective discount rate (r) and the other values. The correlations are presented in Table 3. The results show that as the subjective discount rate increased, so did the risk discount rate. Not only did the risk discount rate increase with the subjective discount rate, but the nominal value of the future certainty equivalent for a lottery held at time t also increased. We found no relationship between individual risk attitude (the certainty equivalent for an immediate lottery) and time preference.

Table 3
Correlations with weekly subjective discount rate (r)

Asset   Realization time   Correlation with r
CE0     Today              −0.076*
CEt     2 weeks             0.493†
CEt     6 weeks             0.505†
rt      2 weeks             0.866†
rt      6 weeks             0.657†

† p-value < .01.
* This is the correlation with r for 2 weeks. The correlation with r for 6 weeks is −0.040 and is also nonsignificant.

Discussion

In this study, we used a new method, a two-stage experiment with real monetary incentives and a second-price auction, to determine the subjective future certainty equivalent of a lottery realized in the future. We found that delaying a lottery leads not only to discounting of the possible reward, but also to discounting of the risk. Simply put, not only are outcomes discounted, but risks are as well. We argue that a steeper discounter discounts the outcome of a delayed lottery more heavily and, as a result, the present value of the outcome decreases. Conversely, a steeper discounter also discounts the risk of the delayed lottery more heavily and, as a result, the present value of the risk decreases. The value of the delayed lottery today is a combination of the outcome and the risk discounting. The final value depends on the individual's utility function and the weights given to the outcome and to the risk in that utility function. Additional experimental studies and empirical results are needed in order to understand this combination and the utility function, and we leave these to future research.

Our argument is consistent with the results of Sagristano et al. (2002). However, our explanation of the results is different, because we do not argue that probabilities are less important for future outcomes. Rather, we argue that people discount delayed probabilities, just as they discount delayed outcomes. Our argument can also explain the findings of Noussair and Wu (2006) that people tend to be more risk averse for payoffs realized immediately than for payoffs realized in the future. They also suggest that discounting the outcomes is not the cause of the lower risk aversion for payoffs realized in the future. Our theory and results suggest the same.


Our alternative explanation is that the individual is less risk averse towards payoffs realized in the future because he also discounts the delayed probabilities. Since delayed probabilities are distant, the future risk becomes less risky in the present.

Our theory and results have many implications for decision making, economics, and finance. They might be an alternative to the anticipation effect (Loewenstein, 1988) or the time horizon hypothesis (Rachlin et al., 1994) as an explanation for why people buy lottery tickets or are willing to pay today for a future gamble even if the payment is higher than the gamble's expected value. It seems that when the gamble is in the future, the risk is also discounted, and the lottery ticket or gamble looks more attractive than if it were immediate.

Time preference also has important implications for health behaviors such as smoking and obesity (e.g., Chapman & Elstein, 1995; Fuchs, 1982; Khwaja, Silverman, & Sloan, 2007; Redelmeier & Heller, 1993; Rose & Weeks, 1988). Research in this field suggests that people discount the future health problems caused by unhealthy behavior in the present. People smoke in the present because the potential future health problems seem less terrible today because of the discount function. This paper's findings suggest that people may also discount the risk of suffering from health problems in the future because of their unhealthy behavior in the present. For example, they might discount the risk of contracting lung cancer due to smoking, and not only the damage it does. We suggest that future research on health behavior test this argument empirically.

Decision making for a future risky outcome is not limited to individuals' everyday behavior. Some studies suggest that time preference also affects managers' decisions to invest in risky projects with a future outcome (Ivanovic, Karanovic, & Bogdan, 2010; Liu & Siu, 2011; Shavit & Adam, 2011). Again, the time preference effect on decisions concentrates on the discounting of outcomes. We suggest that since projects also have risk, it is possible that some managers discount the possible risk. This is another issue worth testing in future research.


Our results also have important implications for the field of behavioral finance and financial decision making. For example, when pricing risky assets with a future expiration date, such as options and futures, the difference between the subjective time discount processes for riskless and risky assets should be considered, and the perception of the future asset's risk changes with the amount of time until expiration. In pricing models such as the arbitrage pricing model (e.g., Ross, 1976), the source of risk is the expectation of changes in future macroeconomic factors. In this kind of model, one should take into consideration the discounting of these future risks. These results are also relevant to the design of hedging contracts used to manage future risk. An investor who is more present oriented will tend to invest less in hedging contracts because he discounts the future risk more than a future-oriented investor does.

The results of this study can also explain the equity premium puzzle introduced by Mehra and Prescott (1985) and myopic loss aversion introduced by Benartzi and Thaler (1995). The typical explanation hinges on loss aversion being applied to each period as opposed to the entire horizon (e.g., Benartzi & Thaler, 1995; Thaler, Tversky, Kahneman, & Schwartz, 1997). We suggest that when looking at each period rather than the entire horizon, investors see an immediate risk rather than a future risk, and therefore discount that risk less than they would a future risk.

Our study has some limitations. For example, the current procedure is limited in its ability to identify how individuals might handle a similar scenario if not forced to use a two-step process. When making real-life decisions that are both delayed and risky, an individual might not first evaluate the risk and then the delay. The two-step process could also occur in the reverse order, with the decision maker first evaluating the delay and then the risk. We suggest that future research test the reverse process.

The magnitude effect might also influence our results. According to this effect, large delayed amounts of money are discounted less steeply than small amounts (Benzion et al., 1989; Yi et al., 2006). With risky outcomes, this effect works in the opposite direction, meaning that small amounts are discounted less steeply (Green et al., 1997). Therefore, for small risky amounts an individual is less risk averse than for larger risky amounts. Using the decision-making process suggested in this paper, an individual who discounts delayed outcomes steeply would arrive at a discounted value that is relatively small. Using this smaller amount when making decisions about risk, the individual would appear less risk averse because of the reverse magnitude effect.

The theory in this paper, by contrast, suggests that an individual would appear less risk averse because he discounts the risk heavily. The present findings could possibly be due, in part, to the reverse magnitude effect commonly seen with probability discounting, and not only to the discounting of the risk. However, although there are differences between the outcomes of the lottery and the certainty equivalent of the lottery, the magnitude remains the same. If an individual is extremely risk averse or an extreme risk seeker, the magnitude might be different. It is also possible that we would find an influence of the magnitude effect in lotteries with larger differences in the magnitude of the outcomes.

Our findings are consistent with the findings of other studies and can explain previous results. The uniqueness of this study is that we both propose a new method to measure and calculate the risk discount rate (rt) using a two-stage experiment, and demonstrate the difference between the time discounting of a delayed but certain amount and the time discounting of delayed risky assets. We hope that the results of this study will motivate further research on the interaction between individuals' time preferences and their attitudes toward future risk.

References

Ahlbrecht, M., & Weber, M. (1997). An empirical study on intertemporal decision making under risk. Management Science, 43, 813–826.
Anderhub, V., Gneezy, U., Guth, W., & Sonsino, D. (2001). On the interaction of risk and time preferences: An experimental study. German Economic Review, 2, 239–253.
Andersen, S., Harrison, G. W., Lau, M. I., & Rutström, E. E. (2006). Elicitation using multiple price list formats. Experimental Economics, 9(4), 383–405.
Anderson, L. R., & Stafford, S. L. (2009). Individual decision-making experiments with risk and intertemporal choice. Journal of Risk and Uncertainty, 38, 51–72.
Baucells, M., & Heukamp, F. H. (2012). Probability and time trade-off. Management Science, 58(4), 831–842.
Baumann, A. A., & Odum, A. L. (2012). Impulsivity, risk taking, and timing. Behavioural Processes, 90(3), 408–414.
Benartzi, S., & Thaler, R. H. (1995). Myopic loss aversion and the equity premium puzzle. Quarterly Journal of Economics, 110(1), 75–92.
Benzion, U., Krahnen, J. P., & Shavit, T. (2011). Subjective evaluation of delayed risky outcomes for buying and selling positions: The behavioral approach. Annals of Finance, 7(2), 247–265.
Benzion, U., Rappoport, A., & Yagil, J. (1989). Discount rates inferred from decisions: An experimental study. Management Science, 35, 270–284.
Burks, S., Carpenter, J., Götte, L., & Rustichini, A. (2012). Which measures of time preference best predict outcomes: Evidence from a large-scale field experiment. Journal of Economic Behavior & Organization, 84(1), 308–320.
Caplin, A., & Leahy, J. (2001). Psychological expected utility theory and anticipatory feeling. Quarterly Journal of Economics, 116, 55–79.
Caplin, A., & Leahy, J. (2004). The supply of information by a concerned expert. The Economic Journal, 114, 487–505.
Chapman, G. B., & Elstein, A. S. (1995). Valuing the future: Temporal discounting of health and money. Medical Decision Making, 15(4), 373–386.
Charness, G., Gneezy, U., & Imas, A. (2013). Experiential methods: Eliciting risk preferences. Journal of Economic Behavior & Organization, 87, 43–51.
Coller, M., Harrison, G. W., & Rutström, E. E. (2005). Are discount rates constant? Reconciling theory and observation. Working Paper 3-31, Department of Economics, College of Business Administration, University of Central Florida.
Coller, M., & Williams, M. B. (1999). Eliciting individual discount rates. Experimental Economics, 2(2), 107–127.
Frederick, S., Loewenstein, G., & O'Donoghue, T. (2002). Time discounting and time preference: A critical review. Journal of Economic Literature, 40, 351–401.
Fuchs, V. (1982). Time preference and health: An exploratory study. In R. F. Victor (Ed.), Economic aspects of health. Chicago: University of Chicago Press.
Green, L., & Myerson, J. (1997). Exponential versus hyperbolic discounting of delayed outcomes: Risk and waiting time. American Zoologist, 36, 496–505.
Green, L., Myerson, J., & McFadden, E. (1997). Rate of temporal discounting decreases with amount of reward. Memory & Cognition, 25(5), 715–723.
Green, L., & Myerson, J. (2004). A discounting framework for choice with delayed and probabilistic rewards. Psychological Bulletin, 130(5), 769.
Green, L., & Myerson, J. (2010). Experimental and correlational analyses of delay and probability discounting. In G. J. Madden & W. K. Bickel (Eds.), Impulsivity: The behavioral and neurological science of discounting (pp. 67–92). Washington, DC: American Psychological Association.
Green, L., & Myerson, J. (2013). How many impulsivities? A discounting perspective. Journal of the Experimental Analysis of Behavior, 99(1), 3–13.
Green, L., Myerson, J., Oliveira, L., & Chang, S. E. (2014). Discounting of delayed and probabilistic losses over a wide range of amounts. Journal of the Experimental Analysis of Behavior, 101(2), 186–200.
Halevy, Y. (2008). Strotz meets Allais: Diminishing impatience and the certainty effect. The American Economic Review, 98(3), 1145–1162.
Hinvest, N. S., & Anderson, I. M. (2010). The effects of real versus hypothetical reward on delay and probability discounting. The Quarterly Journal of Experimental Psychology, 63(6), 1072–1084.
Holt, D. D., Green, L., & Myerson, J. (2003). Is discounting impulsive? Evidence from temporal and probability discounting in gambling and non-gambling college students. Behavioural Processes, 64(3), 355–367.
Holt, C., & Laury, S. (2002). Risk aversion and incentive effects. The American Economic Review, 92(5), 1644–1655.
Ivanovic, Z., Karanovic, G., & Bogdan, S. (2010). Impact of discount rate on the decision-making process of investments projects. Tourism Hospitality Management 2010, Conference Proceedings, 931–939.
Jarmolowicz, D. P., Bickel, W. K., Carter, A. E., Franck, C. T., & Mueller, E. T. (2012). Using crowdsourcing to examine relations between delay and probability discounting. Behavioural Processes, 91(3), 308–312.
Keren, G., & Roelofsma, P. (1995). Immediacy and certainty in intertemporal choice. Organizational Behavior and Human Decision Processes, 63, 287–297.
Khwaja, A., Silverman, D., & Sloan, F. (2007). Time preference, time discounting, and smoking decisions. Journal of Health Economics, 26(5), 927–949.
Kirby, K. N. (1997). Bidding on the future: Evidence against normative discounting of delayed rewards. Journal of Experimental Psychology: General, 126(1), 54.
Liu, Q., & Siu, A. (2011). Institutions and corporate investment: Evidence from investment-implied return on capital in China. Journal of Financial & Quantitative Analysis, 46(6), 1831–1863.
Loewenstein, G. F. (1988). Frames of mind in intertemporal choice. Management Science, 34, 200–214.
McKerchar, T. L., Green, L., & Myerson, J. (2010). On the scaling interpretation of exponents in hyperboloid models of delay and probability discounting. Behavioural Processes, 84(1), 440–444.
Mehra, R., & Prescott, E. C. (1985). The equity premium puzzle. Journal of Monetary Economics, 15, 145–161.
Mitchell, S. H., & Wilson, V. B. (2010). The subjective value of delayed and probabilistic outcomes: Outcome size matters for gains but not for losses. Behavioural Processes, 83(1), 36–40.
Myerson, J., Green, L., Hanson, S., Holt, D. D., & Estle, S. J. (2003). Discounting delayed and probabilistic rewards: Processes and traits. Journal of Economic Psychology, 24, 619–635.
Noussair, C., & Wu, P. (2006). Risk tolerance in the present and the future: An experimental study. Managerial and Decision Economics, 27, 401–412.
Onculer, A., & Onay, S. (2008). How do we evaluate future gambles? Experimental evidence on path dependency in risky intertemporal choice. Journal of Behavioral Decision Making, 22, 280–300.
Petry, N. M. (2012). Discounting of probabilistic rewards is associated with gambling abstinence in treatment-seeking pathological gamblers. Journal of Abnormal Psychology, 121(1), 151.
Rachlin, H. (2006). Notes on discounting. Journal of the Experimental Analysis of Behavior, 85(3), 425–435.
Rachlin, H., Raineri, A., & Cross, D. (1991). Subjective probability and delay. Journal of the Experimental Analysis of Behavior, 55(2), 233–244.
Rachlin, H., Siegel, E., & Cross, D. (1994). Lotteries and the time horizon. Psychological Science, 5(6), 390–393.
Raineri, A., & Rachlin, H. (1993). The effect of temporal constraints on the value of money and other commodities. Journal of Behavioral Decision Making, 6(2), 77–94.
Read, D. (2001). Is time-discounting hyperbolic or subadditive? Journal of Risk & Uncertainty, 23, 5–32.
Redelmeier, D. A., & Heller, D. N. (1993). Time preference in medical decision making and cost-effectiveness analysis. Medical Decision Making, 13(3), 212–217.
Richards, J. B., Zhang, L., Mitchell, S. H., & de Wit, H. (1999). Delay or probability discounting in a model of impulsive behavior: Effect of alcohol. Journal of the Experimental Analysis of Behavior, 71(2), 121–143.
Rose, D. N., & Weeks, M. G. (1988). Individuals' discounting of future monetary gains and health states. Medical Decision Making, 8, 334.
Ross, S. (1976). The arbitrage theory of capital asset pricing. Journal of Economic Theory, 13(3), 341–360.
Rubinstein, A. (2003). Is it economy and psychology? The case of hyperbolic discounting. International Economic Review, 44, 1207–1216.
Sagristano, M., Trope, Y., & Liberman, N. (2002). Time-dependent gambling: Money now, odds later. Journal of Experimental Psychology, 131, 364–376.
Shavit, T., & Adam, A. (2011). A preliminary exploration of the effects of rational factors and behavioral biases on the managerial choice to invest in corporate responsibility. Managerial and Decision Economics, 32(3), 205–213.
Shead, N. W., & Hodgins, D. C. (2009). Probability discounting of gains and losses: Implications for risk attitudes and impulsivity. Journal of the Experimental Analysis of Behavior, 92(1), 1–16.
Shogren, J. F., Cho, S., Koo, C., List, J., Park, C., Polo, P., & Wilhelmi, R. (2001). Auction mechanisms and the measurement of WTP and WTA. Resource and Energy Economics, 23, 97–109.
Stevenson, M. K. (1986). A discounting model for decisions with delayed positive or negative outcomes. Journal of Experimental Psychology, 115, 131–154.
Strotz, R. H. (1955). Myopia and inconsistency in dynamic utility maximization. Review of Economic Studies, 23, 165–180.
Terrell, H. K., Derenne, A., & Weatherly, J. N. (2014). Exploratory and confirmatory factor analyses of probability discounting of different outcomes across different methods of measurement. The American Journal of Psychology, 127(2), 215–231.
Thaler, R. H., Tversky, A., Kahneman, D., & Schwartz, A. (1997). The effect of myopia and loss aversion on risk taking: An experimental test. Quarterly Journal of Economics, 112, 647–661.
Vanderveldt, A., Green, L., & Myerson, J. (2015). Discounting of monetary rewards that are both delayed and probabilistic: Delay and probability combine multiplicatively, not additively. Journal of Experimental Psychology: Learning, Memory, and Cognition, 41(1), 148–162.
Vickrey, W. (1961). Counterspeculation, auctions, and competitive sealed tenders. Journal of Finance, 16, 8–37.
Yi, R., de la Piedad, X., & Bickel, W. K. (2006). The combined effects of delay and probability in discounting. Behavioural Processes, 73(2), 149–155.

Received: July 22, 2014
Final Acceptance: January 27, 2015


APPENDIX

Instructions

Welcome to an experiment involving decision making in lotteries and fixed amounts.

General Explanation

Thank you for participating in this experiment. The objective of this experiment is to examine subjects' decision-making processes.

- In the experiment, you will be asked to evaluate lotteries and fixed amounts that are realized in the present and in the future.
- Please answer all the questions, and please do not make contact with other subjects during the experiment.

Evaluation of a Fixed Amount

- In some of the questions you will be asked to bid a price today for a certain amount that will be realized in the future.
- In these questions, you will be asked to bid the maximum price you are willing to pay today for this certain amount.
- In each question you will get an initial amount of NIS 100 that you can use to pay for your offer.

An example: You have an initial amount of NIS 100. You are offered the option of receiving NIS 50 a week from now. What is the maximum price you are willing to pay immediately for this offer?

Evaluation of the Lottery

- In some of the questions you will be asked to bid a price today for a lottery that will be realized in the future.
- In these questions you will be asked to bid the maximum price you are willing to pay immediately for this lottery.
- In each question you will get an initial amount of NIS 100 that you can use to pay for your offer.

An example: You have an initial amount of NIS 100. The following lottery will be held two weeks from now.

Probability    Outcome
50%            NIS 50
50%            NIS 20

What is the maximum price you are willing to pay for this lottery today?

The Auctions

All subjects will be randomly divided by computer into groups of five subjects.

- Each group will participate in auctions for buying the fixed amounts or the lotteries.
- The subject bidding the highest price in his or her group will win the auction.
- If you win the auction, you will be asked to pay the second highest bid in your group of five subjects.
- If you do not win the auction, you will keep the initial amount of money you received.
- In case two or more subjects in the same group bid the same highest bid, the winner will be chosen randomly.
- You should consider each auction separately, without any relation to the other auctions. In each auction, the initial amount is the same and the amounts do not accumulate.

The Outcome from the Auctions

At the end of the experiment, one of the auctions will be randomly chosen by computer.

- If you won the auction, we will ask you to pay the second bid in your group.
- If you didn't win the auction, you will stay with the initial endowment of NIS 100.
- If you won the auction, we will pay you the outcome in that auction.
- If the auction is for a certain amount, you will receive this amount. If the auction is for a lottery, the computer will choose the outcome randomly (based on the lottery's probabilities), and you will receive that outcome.
- The time of payment depends on the outcome's expiration date. It is possible that you will get part of the payment today and the other part in the future.
- If you won the auction and the payment is in the future, we will come to class on the expiration date and pay you.

An example: Assume that the final amount is NIS 50 today and the outcome of the lottery is two weeks from today. First, we will pay you NIS 50 today. In two weeks we will pay you the outcome of the lottery. The outcome of the lottery will be determined by a computer program. In order to pay you, we will come to your class two weeks after the lottery and give you the money at the end of the lecture.

* Please note that each auction is separate from the others. In each auction you can use the initial endowment of NIS 100 that you received from the experimenter.

The Actual Payment for Your Participation

As we explained, at the end of the experiment one of the auctions will be chosen randomly. We will make the final tally according to the outcomes of this auction. The final payment will be 10% of the initial endowment today for those who didn't win the auction (NIS 10). For those who won the auction, we will pay 10% of the final endowment today (the initial endowment minus the price paid for the asset) and 10% of the asset's outcome at the due date of the asset.
