I am a PhD candidate at Wharton researching consumer behavior. My research focuses on how cognitive biases influence consumer decision making.
Publications and Manuscripts under Review
(updated June 2019)
Lewis, Joshua, Celia Gaertig, and Joseph P. Simmons (2019), “Extremeness Aversion Is a Cause of Anchoring,” Psychological Science, 30(2), 159–173.
Lewis, Joshua and Joseph P. Simmons, “Prospective Outcome Bias: Investing to Succeed When Success Is Already Likely,” under second-round review (with minor revisions) at Journal of Experimental Psychology: General.
Lewis, Joshua, Alex Rees-Jones, Uri Simonsohn, and Joseph P. Simmons, “Diminishing Sensitivity to Outcomes: What Prospect Theory Gets Wrong about Diminishing Sensitivity to Price,” under review at Journal of Marketing Research.
Lewis, Joshua and Deborah A. Small, “Ineffective Altruism: Giving Less When Donations Do More Good.”
Lewis, Joshua and Joseph P. Simmons, “The Directional Anchoring Bias.”
Green, Etan A. and Joshua Lewis, “The Forgone-Option Fallacy.”
Green, Etan A., Joshua Lewis, and David Rothschild, “Barely Plausible Anchor Values Maximize Bias.”
Moore, Alexander, Joshua Lewis, Emma E. Levine, and Maurice E. Schweitzer, “Trusting Kind Friends and Fair Leaders: How Relationships Affect the Antecedents of Trust.”
Lewis, Joshua and Joseph P. Simmons, “The Directional Anchoring Bias.”
Abstract: When people estimate an unknown quantity after previously considering a high candidate value (or “anchor”), they estimate higher values than they would have after considering a low anchor. In explaining this effect, previous anchoring research has emphasized the distance between the anchor and the estimate. However, across five studies (N = 5,662), we find a directional anchoring bias: people disproportionately estimate values that are higher than high anchors and lower than low anchors, and this bias accounts for between 10% and 20% of the total anchoring effect (Study 1). The bias seems to result from people expressing their intuitions about the quantities being estimated. For example, when estimating an intuitively high quantity (such as the weight of an elephant), people tend to express their intuition that the quantity is “high” by adjusting their estimates upwards from the anchor. When anchors are higher, a decision to adjust upwards necessitates a higher estimate, so higher anchors lead to higher estimates. Consistent with this mechanism, we find that participants’ intuitions about the stimuli moderate the directional anchoring bias (Studies 2–5). In addition, we demonstrate the adverse consequences of this bias for estimation accuracy (Study 3) and consumer choice (Studies 4 and 5).
Lewis, Joshua and Joseph P. Simmons, “Prospective Outcome Bias: Investing to Succeed When Success Is Already Likely.”
Abstract: How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we developed a theory called prospective outcome bias. According to this theory, people make decisions that they expect to feel good about after the outcome has been realized. Importantly, people expect to feel best about decisions that are followed by successes – even when the decisions did not cause the successes. Consequently, they are most inclined to incur costs to increase their likelihood of success when success is already likely (e.g., people are more inclined to increase their probability of winning a prize from 80% to 90% than from 10% to 20%). We find evidence for this effect, and for prospective outcome bias, in nine experiments. In Study 1, we establish that people expect to evaluate decisions that precede successes more favorably than decisions that precede failures, even when the decisions did not cause the success or failure. Then, we document that people are more motivated to increase higher chances of success. Study 2 establishes this effect in an incentive-compatible laboratory setting, and Studies 3–5 generalize the effect to different kinds of decisions. Studies 6–8 establish that prospective outcome bias drives the effect (rather than regret aversion, waste aversion, or probability weighting). Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: holding expected value constant, people prefer small increases in the probability of a large reward to large increases in the probability of a small reward.
Lewis, Joshua and Deborah A. Small, “Ineffective Altruism: Giving Less When Donations Do More Good.”
Abstract: Despite well-meaning intentions, people rarely allocate their charitable donations in the most cost-effective way possible. The manner in which cost-effectiveness information is presented can be a contributing factor. In four studies (N = 2,725), when we inform participants of the cost of a unit of impact (e.g., the cost of a mosquito net), they perversely donate less when that cost is lower. This result arises because people want their donation to have a tangible impact, and when the cost of such an impact is lower, people can achieve it with a smaller donation. A remedy for this inefficiency is to express cost-effectiveness in terms of “units per dollar amount” (e.g., 5 nets provided per $10 donated) and leave the cost of providing one tangible item unstated, rendering it less salient as a target donation amount. Across Studies 2 and 3, we demonstrate both the inefficiency and the effectiveness of the remedy in incentive-compatible donation decisions about providing meals, oral rehydration therapy, deworming medication, and measles vaccines.
Lewis, Joshua, Alex Rees-Jones, Uri Simonsohn, and Joseph P. Simmons, “Diminishing Sensitivity to Outcomes: What Prospect Theory Gets Wrong about Diminishing Sensitivity to Price.”
Abstract: Prospect Theory assumes that decision makers are diminishingly sensitive to the magnitude of gains and losses. A well-known demonstration of this phenomenon is that people are more willing to travel across town to save $5 on a $15 purchase than to save $5 on a $125 purchase (the “Jacket/Calculator” scenario). In this paper, we present evidence that diminishing sensitivity to price is separate from, different from, and arguably inconsistent with Prospect Theory. Across four studies, we find that people exhibit diminishing sensitivity with respect to outcomes that do not align with their evaluations of gains and losses. Specifically, a reference point determines whether a price is coded as a gain or a loss, but whatever the reference point, people are diminishingly sensitive to the absolute magnitudes of the amounts considered.
Lewis, Joshua, Celia Gaertig, and Joseph P. Simmons, “Extremeness Aversion Is a Cause of Anchoring.”
Abstract: When estimating unknown quantities, people insufficiently adjust from values they have previously considered, a phenomenon known as anchoring. We suggest that anchoring is at least partially caused by a desire to avoid making extreme adjustments. In seven studies (N = 5,279), we found that transparently irrelevant cues of extremeness influenced people’s adjustments from anchors. In Studies 1–6, participants were less likely to adjust beyond a particular amount when that amount was closer to the maximum allowable adjustment. For example, in Study 5, participants were less likely to adjust by at least 6 units when the maximum allowable adjustment was 6 units than when it was 15 units. In Study 7, participants adjusted less after considering whether the outcome would fall within a smaller distance of the anchor. These results suggest that anchoring effects may partly reflect a desire to avoid adjustments that feel too extreme.