Joseph Simmons
  • Professor of Operations, Information, and Decisions

Contact Information

  • Office Address:

    3730 Walnut St.
    JMHH 551
    Philadelphia, PA 19104

Research Interests: judgment and decision making, experimental methods, consumer behavior

Links: CV, Data Colada Blog, Easy Pre-registration

Overview

Joe Simmons is an Associate Professor at the Wharton School of the University of Pennsylvania, where he teaches a course on Managerial Decision Making. He has two primary areas of research. The first explores the psychology of judgment and decision-making, with an emphasis on understanding and fixing the errors and biases that plague people’s judgments, predictions, and choices. The second area focuses on identifying and promoting easy-to-adopt research practices that improve the integrity of published findings. Joe is also an author of Data Colada, an online resource that attempts to improve our understanding of scientific methods, evidence, and human behavior, and a co-founder of AsPredicted.org, a website that makes it easy for researchers to pre-register their studies.

Research

  • Celia Gaertig and Joseph Simmons (2018), Do People Inherently Dislike Uncertain Advice?, Psychological Science, 29 (4), pp. 504-520.

    Abstract: Research suggests that people prefer confident to uncertain advisors. But do people dislike uncertain advice itself? In eleven studies (N = 4,806), participants forecasted an uncertain event after receiving advice, and then rated the quality of the advice (Studies 1-7, S1-S2) or chose between two advisors (Studies 8-9). Replicating previous research, confident advisors were judged more favorably than advisors who were “not sure.” Importantly, however, participants were not more likely to prefer certain advice: They did not dislike advisors who expressed uncertainty by providing ranges of outcomes, numerical probabilities, or by saying that one event is “more likely” than another. Additionally, when faced with an explicit choice, participants were more likely to choose an advisor who provided uncertain advice over an advisor who provided certain advice. Our findings suggest that people do not inherently dislike uncertain advice. Advisors benefit from expressing themselves with confidence, but not from communicating false certainty.

  • Joshua Lewis, Celia Gaertig, and Joseph Simmons (Forthcoming), Extremeness Aversion Is a Cause of Anchoring.

    Abstract: When estimating unknown quantities, people insufficiently adjust from values they have previously considered, a phenomenon known as anchoring. We suggest that anchoring is at least partially caused by a desire to avoid making extreme adjustments. In seven studies (N = 5,279), we found that transparently irrelevant cues of extremeness influenced people’s adjustments from anchors. In Studies 1-6, participants were less likely to adjust beyond a particular amount when that amount was closer to the maximum allowable adjustment. For example, in Study 5, participants were less likely to adjust by at least 6 units when they were allowed to adjust by a maximum of 6 units than by a maximum of 15 units. In Study 7, participants adjusted less after considering whether an outcome would be within a smaller distance of the anchor. These results suggest that anchoring effects may reflect a desire to avoid adjustments that feel too extreme.

  • Celia Gaertig and Joseph Simmons (Under Review), The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd.

    Abstract: Prior research suggests that averaging two guesses from the same person can improve quantitative judgments, an effect dubbed the “wisdom of the inner crowd.” In this article, we suggest that this effect hinges on whether people (1) resample their second guess from a distribution similar to the one from which their first guess was sampled (what we call a Resampling Process), or (2) explicitly decide in which direction their first guess had erred before making their second guess (what we call a Choice Process). We report the results of seven studies (N = 5,768) in which we manipulated whether participants were asked to explicitly indicate, right before making their second guess, whether their first guess was too high or too low, thereby inducing a Choice Process. We found that asking participants to make this judgment increased their likelihood of making a more extreme second guess. When the correct answer was not very extreme (as was often the case), this reduced their likelihood of making a second guess in the right direction and diminished the benefits of averaging, rendering the inner crowd less wise. When the correct answer was very extreme, the same manipulation improved the wisdom of the inner crowd. Our findings suggest that the wisdom-of-the-inner-crowd effect is not inevitable; rather, it hinges on the process by which people generate their second guesses.

    (A minimal simulation of the benefit of averaging two guesses appears after this publication list.)

  • Celia Gaertig and Joseph Simmons (Under Review), Why (and When) Are Uncertain Price Promotions More Effective Than Equivalent Sure Discounts?

    Abstract: Past research suggests that offering customers an uncertain promotion, such as an X% chance to get a product for free, is always more effective than providing a sure discount of equal expected value. In seven studies (N = 11,238), we find that uncertain price promotions are more effective than equivalent sure discounts only when those sure discounts are or seem small. Specifically, we find that uncertain promotions are relatively more effective when the sure discounts are actually smaller, when the sure discounts are made to feel smaller by presenting them alongside a larger discount, and when the sure discounts are made to feel smaller by framing them as a percentage discount rather than as a dollar amount. These findings are inconsistent with two leading explanations of consumers’ preferences for uncertain over certain promotions (diminishing sensitivity and the overweighting of small probabilities) and suggest that people’s preferences for uncertainty are more strongly tethered to their perceptions of the size of the sure outcome than to their perceptions of the probability of getting the uncertain reward.

  • Leif D. Nelson, Joseph Simmons, and Uri Simonsohn (2018), Psychology's Renaissance, Annual Review of Psychology, 69, pp. 511-534.

    Abstract: In 2010-2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call “psychology’s renaissance.” We begin by describing how psychology’s concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and pre-registration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that “meta-analytical thinking” increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.

  • Hannah Perfecto, Jeff Galak, Joseph Simmons, and Leif D. Nelson (2017), Rejecting a Bad Option Feels Like Choosing a Good One, Journal of Personality and Social Psychology, 113, pp. 659-670.

    Abstract: Across 4,151 participants, the authors demonstrate a novel framing effect, attribute matching, whereby matching a salient attribute of a decision frame with that of a decision’s options facilitates decision making. This attribute matching is shown to increase decision confidence and, ultimately, consensus estimates by increasing feelings of metacognitive ease. In Study 1, participants choosing the more attractive of two faces or rejecting the less attractive face reported greater confidence in and perceived consensus around their decision. Using positive and negative words, Study 2 showed that the attribute’s extremity moderates the size of the effect. Study 3 found decision ease mediates these changes in confidence and consensus estimates. Consistent with a misattribution account, when participants were warned about this external source of ease in Study 4, the effect disappeared. Study 5 extended attribute matching beyond valence to objective judgments. The authors conclude by discussing related psychological constructs as well as downstream consequences.

  • Joseph Simmons and Uri Simonsohn (2017), Power Posing: P-curving the Evidence, Psychological Science, 28 (May), pp. 687-693.

    Abstract: In a well-known article, Carney, Cuddy, and Yap (2010) documented the benefits of “power posing.” In their study, participants (N = 42) who were randomly assigned to briefly adopt expansive, powerful postures sought more risk, had higher testosterone levels, and had lower cortisol levels than those assigned to adopt contractive, powerless postures. In their response to a failed replication by Ranehill et al. (2015), Carney, Cuddy, and Yap (2015) reviewed 33 successful studies investigating the effects of expansive vs. contractive posing, focusing on differences between these studies and the failed replication, to identify possible moderators that future studies could explore. But before spending valuable resources on that, it is useful to establish whether the literature that Carney et al. (2015) cited actually suggests that power posing is effective. In this paper we rely on p-curve analysis to answer the following question: Does the literature reviewed by Carney et al. (2015) suggest the existence of an effect once we account for selective reporting? We conclude that it does not. The distribution of p-values from those 33 studies is indistinguishable from what is expected if (1) the average effect size were zero and (2) selective reporting (of studies and/or analyses) were solely responsible for the significant effects that are published. Although more highly powered future research may find replicable evidence of the purported benefits of power posing (or unexpected detriments), the existing evidence is too weak to justify a search for moderators or to advocate for people to engage in power posing to better their lives.

    (A minimal sketch of the logic behind p-curve analysis appears after this publication list.)

  • Joseph Simmons, Leif D. Nelson, and Uri Simonsohn (2017), False-Positive Citations, Perspectives on Psychological Science, forthcoming.

    Abstract: This invited paper describes how we came to write an article called "False-Positive Psychology."

  • Berkeley J. Dietvorst, Joseph Simmons, and Cade Massey (2016), Overcoming Algorithm Aversion: People Will Use Algorithms If They Can (Even Slightly) Modify Them, Management Science, forthcoming.

    Abstract: Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms reflected a desire for some control over the forecasting outcome rather than a desire for greater control, as that preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control, even a slight amount, over an imperfect algorithm’s forecast.

  • Theresa F. Kelly and Joseph Simmons (2016), When Does Making Detailed Predictions Make Predictions Worse?, Journal of Experimental Psychology: General, 145 (October), pp. 1298-1311.

    Abstract: In this paper, we investigate whether making detailed predictions about an event worsens other predictions of the event. Across 19 experiments, 10,896 participants, and 407,045 predictions about 724 professional sports games, we find that people who made detailed predictions about sporting events (e.g., how many hits each baseball team would get) made worse predictions about more general outcomes (e.g., which team would win). We rule out that this effect is caused by inattention or fatigue, thinking too hard, or a differential reliance on holistic information about the teams. Instead, we find that thinking about game-relevant details before predicting winning teams causes people to give less weight to predictive information, presumably because predicting details makes useless or redundant information more accessible and thus more likely to be incorporated into forecasts. Furthermore, we show that this differential use of information can be used to predict what kinds of events will and will not be susceptible to the negative effect of making detailed predictions.
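
The “wisdom of the inner crowd” effect discussed above rests on a simple statistical point: when the errors of two guesses are at least partly independent, their average tends to be closer to the truth than either guess alone. Below is a minimal, hypothetical simulation of that averaging logic; it is not code from the paper, and the sample size and noise levels are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100_000   # number of simulated judges (hypothetical)
truth = 50.0  # the quantity being estimated

# Each judge makes two guesses centered on the truth with independent noise.
# Full independence is an idealization: real first and second guesses are
# correlated, which shrinks (but does not eliminate) the benefit of averaging.
first = truth + rng.normal(0, 10, n)
second = truth + rng.normal(0, 10, n)
average = (first + second) / 2

print("Mean absolute error, first guess:  ", np.abs(first - truth).mean())
print("Mean absolute error, inner average:", np.abs(average - truth).mean())
# Averaging helps because independent errors partially cancel:
# Var((X1 + X2) / 2) = Var(X) / 2 when the errors are uncorrelated.
```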

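The p-curve analysis used in “Power Posing: P-curving the Evidence” exploits a basic property of p-values: when the true effect is zero, p-values are uniformly distributed, so the statistically significant ones are spread evenly between 0 and .05, whereas a genuine effect piles significant p-values up near zero. Below is a minimal sketch of that logic, not the authors' implementation; the cell sizes and number of simulated studies are invented for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def significant_pvalues(effect, n_per_cell=20, studies=10_000):
    """Simulate two-cell experiments and keep only the p < .05 results,
    mimicking a literature shaped by selective reporting."""
    pvals = []
    for _ in range(studies):
        control = rng.normal(0.0, 1.0, n_per_cell)
        treatment = rng.normal(effect, 1.0, n_per_cell)
        p = stats.ttest_ind(control, treatment).pvalue
        if p < .05:
            pvals.append(p)
    return np.array(pvals)

# With a zero effect, significant p-values are uniform on (0, .05), so about
# half fall below .025. A true effect skews the curve toward smaller p-values.
for effect in (0.0, 0.5):
    p = significant_pvalues(effect)
    print(f"effect = {effect}: share of significant p-values below .025 = "
          f"{(p < .025).mean():.2f}")
```
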
Teaching

Past Courses

  • MGMT690 - Managerial Decision Making

    There has been increasing interest in recent years in how managers make decisions when there is uncertainty about the value or likelihood of final outcomes. What type of information do they collect? How do they process the data? What factors influence their decisions? This course addresses these issues. By understanding managerial decision processes, we may be better able to prescribe ways of improving managerial behavior. Building on work in cognitive psychology, students will gain an understanding of the simplified rules of thumb that individuals use, and the systematic biases they exhibit, in making judgments and choices under uncertainty. At the end of the course, students should understand the decision-making process more thoroughly and be in a position to become better managers.

  • OIDD290 - Decision Processes

    This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include the cognitive psychology of human problem solving, judgment, and choice; theories of rational judgment and decision making; and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.

  • OIDD299 - Judgment and Decision Making Research Immersion

    This class provides a high-level introduction to the field of judgment and decision making (JDM) and in-depth exposure to the process of doing research in this area. Throughout the semester you will gain hands-on experience with several different JDM research projects. You will be paired with a PhD student or faculty mentor who is working on a variety of research studies. Each week you will be given assignments that are central to one or more of these studies, along with detailed descriptions of the research projects you are contributing to and how your assignments relate to their successful completion. To complement your hands-on research experience, you will be assigned readings from the book Nudge by Thaler and Sunstein, which summarizes key recent ideas in the JDM literature. You will also meet as a group for an hour once every three weeks with the class's faculty supervisor and all of his or her PhD students to discuss the projects you are working on, the class readings, and your own research ideas stimulated by getting involved in the various projects. Date and time to be mutually agreed upon by the supervising faculty and students. The 1 CU version of this course involves approximately 10 hours of research immersion per week and a 10-page final paper. The 0.5 CU version involves approximately 5 hours of research immersion per week and a 5-page final paper. Please contact Maurice Schweitzer if you are interested in enrolling in the course: schweitzer@wharton.upenn.edu

  • OIDD690 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

Awards and Honors

  • MBA Excellence in Teaching Award, 2014
  • MBA Excellence in Teaching Award, 2013
  • Winner of the Helen Kardon Moss Anvil Award, awarded to the one Wharton faculty member “who has exemplified outstanding teaching quality during the last year”, 2013
  • One of ten faculty nominated by the MBA student body for the Helen Kardon Moss Anvil Award, 2012
  • MBA Excellence in Teaching Award, 2012
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011

In the News

Why Humans Distrust Algorithms – and How That Can Change

Many people are averse to using algorithms when making decisions, preferring to rely on their instincts. New Wharton research says a simple adjustment can help them feel differently.

Knowledge @ Wharton - 2017/02/13