Joseph Simmons

  • Professor of Operations, Information, and Decisions

Contact Information

  • Office Address:

    3730 Walnut St.
    JMHH 551
    Philadelphia, PA 19104

Research Interests: research methods, experimental methods, consumer behavior

Links: CV, Data Colada Blog, Easy Pre-registration

Overview

Joe Simmons is a Professor at the Wharton School of the University of Pennsylvania, where he teaches a course on Managerial Decision Making. He has two primary areas of research. The first explores the psychology of judgment and decision-making, with an emphasis on understanding and fixing the errors and biases that plague people’s judgments, predictions, and choices. The second area focuses on identifying and promoting easy-to-adopt research practices that improve the integrity of published findings. Joe is also an author of Data Colada, an online resource that attempts to improve our understanding of scientific methods, evidence, and human behavior, and a co-founder of AsPredicted.org, a website that makes it easy for researchers to pre-register their studies.

Research

  • Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2020), Specification Curve Analysis, Nature Human Behaviour.

    Abstract: Empirical results hinge on analytic decisions that are defensible, arbitrary, and motivated. These decisions probably introduce bias (towards the narrative put forward by the authors), and certainly involve variability not reflected by standard errors. To address this source of noise and bias, we introduce Specification Curve Analysis, which consists of three steps: (i) identifying the set of theoretically justified, statistically valid, and non-redundant specifications, (ii) displaying the results graphically, allowing readers to identify consequential specification decisions, and (iii) conducting joint inference across all specifications. We illustrate the use of this technique by applying it to three findings from two different papers, one investigating discrimination based on distinctively black names, and the other investigating the effect of assigning female vs. male names to hurricanes. Specification Curve Analysis reveals that one finding is robust, one is weak, and one is not robust at all.
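
The three steps described in this abstract can be sketched with a toy example. The data, the specification choices (outlier cutoffs, covariate inclusion), and the OLS helper below are all hypothetical illustrations, not the paper's implementation; step (iii), joint inference, is omitted for brevity:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends on x and on a correlated covariate.
n = 500
x = rng.standard_normal(n)
covariate = 0.5 * x + rng.standard_normal(n)
y = 0.3 * x + 0.2 * covariate + rng.standard_normal(n)

def ols_slope(x, y, covariate=None):
    """Slope on x from an OLS regression of y on x (plus optional covariate)."""
    cols = [np.ones_like(x), x] + ([covariate] if covariate is not None else [])
    beta, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    return beta[1]

# Step (i): enumerate the justified, non-redundant specifications.
outlier_rules = {"keep all": np.inf, "trim |x| > 2.5": 2.5, "trim |x| > 2.0": 2.0}
covariate_choices = {"no covariate": False, "with covariate": True}

estimates = {}
for (o_name, cut), (c_name, use_cov) in itertools.product(
        outlier_rules.items(), covariate_choices.items()):
    keep = np.abs(x) <= cut
    estimates[(o_name, c_name)] = ols_slope(
        x[keep], y[keep], covariate[keep] if use_cov else None)

# Step (ii): display all estimates sorted by size (the specification curve).
for spec, est in sorted(estimates.items(), key=lambda kv: kv[1]):
    print(f"{est:+.3f}  {spec[0]:>15} | {spec[1]}")
```

Step (iii) would then re-estimate the full curve on data shuffled under the null and ask whether the observed curve is jointly inconsistent with no effect.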

  • Joshua Lewis and Joseph Simmons (2020), Prospective Outcome Bias: Incurring (Unnecessary) Costs to Achieve Outcomes That Are Already Likely, Journal of Experimental Psychology: General, 149, pp. 870-888.

    Abstract: How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we offer a theory called prospective outcome bias. According to this theory, people tend to make decisions that they expect to feel good about after the outcome has been realized. Because people expect to feel best about decisions that are followed by successes—even when the decisions did not cause those successes—they will pay more to increase their chances of success when success is already likely (e.g., people will pay more to increase their probability of success from 80% to 90% than from 10% to 20%). We find evidence for prospective outcome bias in nine experiments. In Study 1, we establish that people evaluate costly decisions that precede successes more favorably than costly decisions that precede failures, even when the decisions did not cause the outcome. Study 2 establishes, in an incentive-compatible laboratory setting, that people are more motivated to increase higher chances of success. Studies 3–5 generalize the effect to other contexts and decisions and Studies 6–8 indicate that prospective outcome bias causes it (rather than regret aversion, waste aversion, goals-as-reference-points, probability weighting, or loss aversion). Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: people prefer small increases in the probability of large rewards (e.g., a 1% improvement in their chances of winning $100) to large increases in the probability of small rewards (e.g., a 10% improvement in their chances of winning $10).

  • Joachim Vosgerau, Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), 99% Impossible: A Valid, or Falsifiable, Internal Meta-Analysis, Journal of Experimental Psychology: General, 148, pp. 1628-1639.

    Abstract: Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (1) all conducted studies were included (i.e., an empty file-drawer), and (2) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of one study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (1) an internal meta-analysis would need to exclusively contain studies that were properly pre-registered, (2) those pre-registrations would have to be followed in all essential aspects, and (3) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run.
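
The inflation described in this abstract is easy to reproduce in a toy simulation. The parameters below (two analyses per study with z-statistics correlated at 0.5, one-sided tests, a Stouffer-style combination of z-statistics) are illustrative assumptions, not the paper's exact setup, so the rates will not match the 8% and 83% figures exactly; the qualitative point, that a small per-study inflation compounds dramatically at the meta-analytic level, still comes through:

```python
import numpy as np

rng = np.random.default_rng(1)

N_SIMS, K, RHO, CRIT = 5000, 10, 0.5, 1.645  # 10-study papers, one-sided alpha = .05

# Every effect is truly null, so each honest z-statistic is standard normal.
# Minimal p-hacking: each study runs two analyses whose z-statistics are
# correlated at RHO and reports whichever one is larger.
z1 = rng.standard_normal((N_SIMS, K))
z2 = RHO * z1 + np.sqrt(1 - RHO**2) * rng.standard_normal((N_SIMS, K))
reported = np.maximum(z1, z2)

# Per-study false-positive rate: inflated above the nominal 5%.
single_fp = (reported > CRIT).mean()

# Internal meta-analysis (Stouffer combination of the 10 reported z's):
# the small per-study inflation compounds into a much larger one.
meta_z = reported.sum(axis=1) / np.sqrt(K)
meta_fp = (meta_z > CRIT).mean()

print(f"per-study false-positive rate: {single_fp:.3f}")
print(f"10-study meta-analysis false-positive rate: {meta_fp:.3f}")
```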

  • Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016), PLoS ONE, 14(3), e0213454.

    Abstract: P-curve, the distribution of significant p-values, can be analyzed to assess whether the findings have evidential value, that is, whether p-hacking and file-drawering can be ruled out as the sole explanations for them. Bruns and Ioannidis (2016) have proposed that p-curve cannot examine evidential value with observational data. Their discussion confuses false-positive findings with confounded ones, failing to distinguish correlation from causation. We demonstrate this important distinction by showing that a confounded but real, hence replicable, association, between gun ownership and number of sexual partners, leads to a right-skewed p-curve, while a false-positive one, between respondent ID number and trust in the Supreme Court, leads to a flat p-curve. P-curve can distinguish between replicable and non-replicable findings. The observational nature of the data is not consequential.
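
The right-skewed versus flat p-curve shapes discussed in this abstract can be illustrated with a small simulation; the z-test setup, sample size, and effect size below are arbitrary choices for illustration, not the paper's data:

```python
import math
import numpy as np

rng = np.random.default_rng(2)

def significant_pvalues(effect, n=50, sims=20000):
    """Two-sided p-values from simulated one-sample z-tests; keep only p < .05."""
    z = effect * math.sqrt(n) + rng.standard_normal(sims)  # simulated z-statistics
    p = np.array([math.erfc(abs(v) / math.sqrt(2)) for v in z])
    return p[p < 0.05]

def pcurve(p):
    """Share of significant p-values in each .01-wide bin (the p-curve)."""
    return np.histogram(p, bins=[0, .01, .02, .03, .04, .05])[0] / len(p)

true_curve = pcurve(significant_pvalues(effect=0.3))  # real, replicable effect
null_curve = pcurve(significant_pvalues(effect=0.0))  # false positive

print("true effect:", np.round(true_curve, 2))
print("null effect:", np.round(null_curve, 2))
```

With a real effect, most significant p-values pile up near zero, so the first bins dominate (right skew); under the null, significant p-values are uniform on (0, .05), so the curve is flat.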

  • Joshua Lewis, Celia Gaertig, Joseph Simmons (2019), Extremeness Aversion Is a Cause of Anchoring, Psychological Science, 30, pp. 159-173.

    Abstract: When estimating unknown quantities, people insufficiently adjust from values they have previously considered, a phenomenon known as anchoring. We suggest that anchoring is at least partially caused by a desire to avoid making extreme adjustments. In seven studies (N = 5,279), we found that transparently irrelevant cues of extremeness influenced people’s adjustments from anchors. In Studies 1-6, participants were less likely to adjust beyond a particular amount when that amount was closer to the maximum allowable adjustment. For example, in Study 5, participants were less likely to adjust by at least 6 units when they were allowed to adjust by a maximum of 6 units than by a maximum of 15 units. In Study 7, participants adjusted less after considering whether an outcome would be within a smaller distance of the anchor. These results suggest that anchoring effects may reflect a desire to avoid adjustments that feel too extreme.

  • Celia Gaertig and Joseph Simmons (2018), Do People Inherently Dislike Uncertain Advice?, Psychological Science, 29, pp. 504-520.

    Abstract: Research suggests that people prefer confident to uncertain advisors. But do people dislike uncertain advice itself? In eleven studies (N = 4,806), participants forecasted an uncertain event after receiving advice, and then rated the quality of the advice (Studies 1-7, S1-S2) or chose between two advisors (Studies 8-9). Replicating previous research, confident advisors were judged more favorably than advisors who were “not sure.” Importantly, however, participants were not more likely to prefer certain advice: They did not dislike advisors who expressed uncertainty by providing ranges of outcomes, numerical probabilities, or by saying that one event is “more likely” than another. Additionally, when faced with an explicit choice, participants were more likely to choose an advisor who provided uncertain advice over an advisor who provided certain advice. Our findings suggest that people do not inherently dislike uncertain advice. Advisors benefit from expressing themselves with confidence, but not from communicating false certainty.

  • Berkeley J. Dietvorst, Joseph Simmons, Cade Massey (2018), Overcoming Algorithm Aversion: People Will Use Algorithms If They Can (Even Slightly) Modify Them, Management Science, 64, pp. 1155-1170.

    Abstract: Although evidence-based algorithms consistently outperform human forecasters, people often fail to use them after learning that they are imperfect, a phenomenon known as algorithm aversion. In this paper, we present three studies investigating how to reduce algorithm aversion. In incentivized forecasting tasks, participants chose between using their own forecasts or those of an algorithm that was built by experts. Participants were considerably more likely to choose to use an imperfect algorithm when they could modify its forecasts, and they performed better as a result. Notably, the preference for modifiable algorithms held even when participants were severely restricted in the modifications they could make (Studies 1-3). In fact, our results suggest that participants’ preference for modifiable algorithms was indicative of a desire for some control over the forecasting outcome, rather than a desire for greater control, as their preference was relatively insensitive to the magnitude of the modifications they were able to make (Study 2). Additionally, we found that giving participants the freedom to modify an imperfect algorithm made them feel more satisfied with the forecasting process, more likely to believe that the algorithm was superior, and more likely to choose to use an algorithm to make subsequent forecasts (Study 3). This research suggests that one can reduce algorithm aversion by giving people some control, even a slight amount, over an imperfect algorithm’s forecast.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2018), False-Positive Citations, Perspectives on Psychological Science, 13, pp. 255-259.

    Abstract: We describe why we wrote “False-Positive Psychology,” analyze how it has been cited, and explain why the integrity of experimental psychology hinges on the full disclosure of methods, the sharing of materials and data, and, especially, the preregistration of analyses.  

  • Leif D. Nelson, Joseph Simmons, Uri Simonsohn (2018), Psychology’s Renaissance, Annual Review of Psychology, 69, pp. 511-534.

    Abstract: In 2010-2012, a few largely coincidental events led experimental psychologists to realize that their approach to collecting, analyzing, and reporting data made it too easy to publish false-positive findings. This sparked a period of methodological reflection that we review here and call “psychology’s renaissance.” We begin by describing how psychology’s concerns with publication bias shifted from worrying about file-drawered studies to worrying about p-hacked analyses. We then review the methodological changes that psychologists have proposed and, in some cases, embraced. In describing how the renaissance has unfolded, we attempt to describe different points of view fairly but not neutrally, so as to identify the most promising paths forward. In so doing, we champion disclosure and preregistration, express skepticism about most statistical solutions to publication bias, take positions on the analysis and interpretation of replication failures, and contend that “meta-analytical thinking” increases the prevalence of false positives. Our general thesis is that the scientific practices of experimental psychologists have improved dramatically.

  • Hannah Perfecto, Jeff Galak, Joseph Simmons, Leif D. Nelson (2017), Rejecting A Bad Option Feels Like Choosing A Good One, Journal of Personality and Social Psychology, 113, pp. 659-670.

    Abstract: Across 4,151 participants, the authors demonstrate a novel framing effect, attribute matching, whereby matching a salient attribute of a decision frame with that of a decision’s options facilitates decision-making. This attribute matching is shown to increase decision confidence and, ultimately, consensus estimates by increasing feelings of metacognitive ease. In Study 1, participants choosing the more attractive of two faces or rejecting the less attractive face reported greater confidence in and perceived consensus around their decision. Using positive and negative words, Study 2 showed that the attribute’s extremity moderates the size of the effect. Study 3 found decision ease mediates these changes in confidence and consensus estimates. Consistent with a misattribution account, when participants were warned about this external source of ease in Study 4, the effect disappeared. Study 5 extended attribute matching beyond valence to objective judgments. The authors conclude by discussing related psychological constructs as well as downstream consequences.

Teaching

Past Courses

  • COGS398 - SENIOR THESIS

    This course is a directed study intended for cognitive science majors who have been admitted to the cognitive science honors program. Upon admission into the program, students may register for this course under the direction of their thesis supervisor.

  • MGMT690 - MANAG DECSN MAKING

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • OIDD290 - DECISION PROCESSES

    This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include cognitive psychology of human problem-solving, judgment and choice, theories of rational judgment and decision, and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.

  • OIDD299 - JUDG & DEC MAKING RES IM

    This class provides a high-level introduction to the field of judgment and decision making (JDM) and in-depth exposure to the process of doing research in this area. Throughout the semester you will gain hands-on experience with several different JDM research projects. You will be paired with a PhD student or faculty mentor who is working on a variety of different research studies. Each week you will be given assignments that are central to one or more of these studies, and you will be given detailed descriptions of the research projects you are contributing to and how your assignments relate to the successful completion of these projects. To complement your hands-on research experience, throughout the semester you will be assigned readings from the book Nudge by Thaler and Sunstein, which summarizes key recent ideas in the JDM literature. You will also meet as a group for an hour once every three weeks with the class's faculty supervisor and all of his or her PhD students to discuss the projects you are working on, the class readings, and your own research ideas stimulated by getting involved in various projects. Date and time to be mutually agreed upon by supervising faculty and students. The 1 CU version of this course will involve approx. 10 hours of research immersion per week and a 10-page final paper. The 0.5 CU version of this course will involve approx. 5 hours of research immersion per week and a 5-page final paper. Please contact Maurice Schweitzer if you are interested in enrolling in the course: schweitzer@wharton.upenn.edu

  • OIDD399 - SUPERVISED STUDY

    This course number is currently used for several course types, including independent studies, experimental courses, and the Management & Technology Freshman Seminar. Instructor permission is required to enroll in any independent study. Wharton Undergraduate students must also receive approval from the Undergraduate Division to register for independent studies. Section 002 is the Management and Technology Freshman Seminar; instructor permission is not required for this section, which is open only to M&T students.

  • OIDD690 - MANAG DECSN MAKING

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • PPE 498 - DIRECTED HONORS RESEARCH

    Student arranges with a faculty member to do research and write a thesis on a suitable topic. For more information on honors visit: https://ppe.sas.upenn.edu/study/curriculum/honors-theses

Awards and Honors

  • MBA Excellence in Teaching Award, 2019
  • MBA Excellence in Teaching Award, 2014
  • MBA Excellence in Teaching Award, 2013
  • Winner of the Helen Kardon Moss Anvil Award, awarded to the one Wharton faculty member “who has exemplified outstanding teaching quality during the last year”, 2013
  • One of ten faculty nominated by the MBA student body for the Helen Kardon Moss Anvil Award, 2012
  • MBA Excellence in Teaching Award, 2012
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011

In the News

Why Humans Distrust Algorithms – and How That Can Change

Many people are averse to using algorithms when making decisions, preferring to rely on their instincts. New Wharton research says a simple adjustment can help them feel differently.

Knowledge @ Wharton - 2017/02/13