Joseph Simmons
  • Dorothy Silberberg Professor of Applied Statistics
  • Professor of Operations, Information, and Decisions

Contact Information

  • Office Address:

    3730 Walnut St.
    JMHH 551
    Philadelphia, PA 19104

Research Interests: research methods, experimental methods, consumer behavior

Links: CV, Data Colada, AsPredicted (Easy Pre-registration), ResearchBox (Easy Data & Materials Posting)


I am (somehow) the Dorothy Silberberg Professor of Applied Statistics. I am also a Professor of Operations, Information, and Decisions. I teach an MBA course called Managerial Decision Making. I have two primary areas of research.

In my normal job, I investigate the psychology of judgment and decision-making, with an emphasis on understanding and fixing the errors and biases that plague people’s judgments, predictions, and choices. In this research, my (excellent) co-authors and I have contributed to our understanding of advice taking, algorithm aversion, anchoring effects, consumer rating systems, forecasting biases, the illusion of control, intuitive biases, optimistic biases, outcome bias, and the wisdom of crowds.

I am probably better known (or infamous) for my work in the area of research methods. With Leif Nelson and Uri Simonsohn, I have spent much of the past 12 years trying to identify and fix research practices that contribute to the publication of false-positive research findings. In these efforts, we have exposed p-hacking as a major cause of the problem, developed a statistical tool (p-curve) designed to assess whether a selection of studies contains evidential value, conducted systematic replications of published findings, started a blog (Data Colada) devoted largely to advancing the credibility of published research, designed a pre-registration website (AsPredicted) that has been used by thousands of researchers, designed a website (ResearchBox) that makes it very easy to post and access materials and data, and, in our spare time, worked to expose cases of outright fraud. I am also a co-founder of the Wharton Credibility Lab, which is dedicated to providing online platforms that make it easy for researchers to make their research credible and to demonstrate that credibility.


Selected Publications


  • Beidi Hu and Joseph Simmons (2022), Does Constructing A Belief Distribution Truly Reduce Overconfidence?, Journal of Experimental Psychology: General.

    Abstract: Can overconfidence be reduced by asking people to provide a belief distribution over all possible outcomes – that is, by asking them to indicate how likely all possible outcomes are? Although prior research suggests that the answer is “yes,” that research suffers from methodological confounds that muddle its interpretation. In our research, we remove these confounds to investigate whether providing a belief distribution truly reduces overconfidence. In 10 studies, participants made predictions about upcoming sports games or other participants’ preferences, and then indicated their confidence in these predictions using rating scales, likelihood judgments, and/or incentivized wagers. Contrary to prior research, and to our own expectations, we find that providing a belief distribution usually increases overconfidence, because doing so seems to reinforce people’s prior beliefs. 

  • Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2022), Above averaging in literature reviews, Nature Reviews Psychology, 1 (9), pp. 1-2.

    Abstract: Meta-analysts’ practice of transcribing and numerically combining all results in a research literature can generate uninterpretable and/or misleading conclusions. Meta-analysts should instead critically evaluate studies, draw conclusions only from those that are valid and provide readers with enough information to evaluate those conclusions.

  • Celia Gaertig and Joseph Simmons (2021), The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd, Management Science, 67 (September), pp. 5921-5942.

    Abstract: Prior research suggests that averaging two guesses from the same person can improve quantitative judgments, a phenomenon known as the “wisdom of the inner crowd.” In this article, we find that this effect hinges on whether people explicitly decide in which direction their first guess had erred before making their second guess. In nine studies (N = 8,465), we found that asking people to explicitly indicate whether their first guess was too high or too low prior to making their second guess made people more likely to provide a second guess that was more extreme (in the same direction) than their first. As a consequence, the introduction of that “Too High/Too Low” question reduced (and sometimes eliminated or reversed) the wisdom-of-the-inner-crowd effect for (the majority of) questions with non-extreme correct answers and increased the wisdom-of-the-inner-crowd effect for questions with extreme correct answers. Our findings suggest that the wisdom-of-the-inner-crowd effect is not inevitable, but rather that it depends on the processes people use to generate their second guesses.

  • Joowon Klusowski, Deborah Small, Joseph Simmons (2021), Does Choice Cause an Illusion of Control?, Psychological Science, 32 (February), pp. 159-172.

    Abstract: Previous research suggests that choice causes an illusion of control—that it makes people feel more likely to achieve preferable outcomes, even when they are selecting among options that are functionally identical (e.g., lottery tickets with an identical chance of winning). This research has been widely accepted as evidence that choice can have significant welfare effects, even when it confers no actual control. In this article, we report the results of 17 experiments that examined whether choice truly causes an illusion of control (N = 10,825 online and laboratory participants). We found that choice rarely makes people feel more likely to achieve preferable outcomes—unless it makes the preferable outcomes actually more likely—and when it does, it is not because choice causes an illusion but because choice reflects some participants’ preexisting (illusory) beliefs that the functionally identical options are not identical. Overall, choice does not seem to cause an illusion of control.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2021), Pre-registration Is A Game Changer. But, Like Random Assignment, It Is Neither Necessary Nor Sufficient For Credible Science, Journal of Consumer Psychology, 31 (January), pp. 177-180.

    Abstract: We identify 15 claims Pham and Oh (2020) make to argue against pre-registration. We agree with 7 of the claims, but think that none of them justify delaying the encouragement and adoption of pre-registration. Moreover, while the claim they make in their title is correct—pre-registration is neither necessary nor sufficient for a credible science—this is also true of many of our science’s most valuable tools, such as random assignment. Indeed, both random assignment and pre-registration lead to more credible research. Pre-registration is a game changer.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2021), Pre-registration: Why and How, Journal of Consumer Psychology, 31 (January), pp. 151-162.

    Abstract: In this article, we (1) discuss the reasons why pre-registration is a good idea, both for the field and individual researchers, (2) respond to arguments against pre-registration, (3) describe how to best write and review a pre-registration, and (4) comment on pre-registration’s rapidly accelerating popularity. Along the way, we describe the (big) problem that pre-registration can solve (i.e., false positives caused by p-hacking), while also offering viable solutions to the problems that pre-registration cannot solve (e.g., hidden confounds or fraud). Pre-registration does not guarantee that every published finding will be true, but without it you can safely bet that many more will be false. It is time for our field to embrace pre-registration, while taking steps to ensure that it is done right.

  • Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2020), Specification Curve Analysis, Nature Human Behaviour, 4 (November), pp. 1208-1214.

    Abstract: Empirical results hinge on analytical decisions that are defensible, arbitrary and motivated. These decisions probably introduce bias (towards the narrative put forward by the authors), and they certainly involve variability not reflected by standard errors. To address this source of noise and bias, we introduce specification curve analysis, which consists of three steps: (1) identifying the set of theoretically justified, statistically valid and non-redundant specifications; (2) displaying the results graphically, allowing readers to identify consequential specification decisions; and (3) conducting joint inference across all specifications. We illustrate the use of this technique by applying it to three findings from two different papers, one investigating discrimination based on distinctively Black names, the other investigating the effect of assigning female versus male names to hurricanes. Specification curve analysis reveals that one finding is robust, one is weak and one is not robust at all.

    (A minimal simulation sketch of these steps appears after this publication list.)

  • Joshua Lewis and Joseph Simmons (2020), Prospective Outcome Bias: Incurring (Unnecessary) Costs to Achieve Outcomes That Are Already Likely, Journal of Experimental Psychology: General, 149, pp. 870-888.

    Abstract: How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we offer a theory called prospective outcome bias. According to this theory, people tend to make decisions that they expect to feel good about after the outcome has been realized. Because people expect to feel best about decisions that are followed by successes—even when the decisions did not cause those successes—they will pay more to increase their chances of success when success is already likely (e.g., people will pay more to increase their probability of success from 80% to 90% than from 10% to 20%). We find evidence for prospective outcome bias in nine experiments. In Study 1, we establish that people evaluate costly decisions that precede successes more favorably than costly decisions that precede failures, even when the decisions did not cause the outcome. Study 2 establishes, in an incentive-compatible laboratory setting, that people are more motivated to increase higher chances of success. Studies 3–5 generalize the effect to other contexts and decisions, and Studies 6–8 indicate that prospective outcome bias causes it (rather than regret aversion, waste aversion, goals-as-reference-points, probability weighting, or loss aversion). Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: people prefer small increases in the probability of large rewards (e.g., a 1% improvement in their chances of winning $100) to large increases in the probability of small rewards (e.g., a 10% improvement in their chances of winning $10).

  • Joachim Vosgerau, Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), 99% Impossible: A Valid, or Falsifiable, Internal Meta-Analysis, Journal of Experimental Psychology: General, 148, pp. 1628-1639.

    Abstract: Several researchers have relied on, or advocated for, internal meta-analysis, which involves statistically aggregating multiple studies in a paper to assess their overall evidential value. Advocates of internal meta-analysis argue that it provides an efficient approach to increasing statistical power and solving the file-drawer problem. Here we show that the validity of internal meta-analysis rests on the assumption that no studies or analyses were selectively reported. That is, the technique is only valid if (1) all conducted studies were included (i.e., an empty file-drawer), and (2) for each included study, exactly one analysis was attempted (i.e., there was no p-hacking). We show that even very small doses of selective reporting invalidate internal meta-analysis. For example, the kind of minimal p-hacking that increases the false-positive rate of one study to just 8% increases the false-positive rate of a 10-study internal meta-analysis to 83%. If selective reporting is approximately zero, but not exactly zero, then internal meta-analysis is invalid. To be valid, (1) an internal meta-analysis would need to exclusively contain studies that were properly pre-registered, (2) those pre-registrations would have to be followed in all essential aspects, and (3) the decision of whether to include a given study in an internal meta-analysis would have to be made before any of those studies are run.

    (A small simulation of this compounding appears after this publication list.)

  • Uri Simonsohn, Leif D. Nelson, Joseph Simmons (2019), P-curve won’t do your laundry, but it will distinguish replicable from non-replicable findings in observational research: Comment on Bruns & Ioannidis (2016), PLoS ONE, 14(3), e0213454.

    Abstract: P-curve, the distribution of significant p-values, can be analyzed to assess if the findings have evidential value, whether p-hacking and file-drawering can be ruled out as the sole explanations for them. Bruns and Ioannidis (2016) have proposed p-curve cannot examine evidential value with observational data. Their discussion confuses false-positive findings with confounded ones, failing to distinguish correlation from causation. We demonstrate this important distinction by showing that a confounded but real, hence replicable association, gun ownership and number of sexual partners, leads to a right-skewed p-curve, while a false-positive one, respondent ID number and trust in the Supreme Court, leads to a flat p-curve. P-curve can distinguish between replicable and non-replicable findings. The observational nature of the data is not consequential.

    (A short simulation of this right-skew diagnostic appears after this publication list.)
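
Three of the abstracts above describe statistical techniques concretely enough to illustrate with short code sketches. First, a minimal sketch of steps (1) and (2) of specification curve analysis (Simonsohn, Simmons, & Nelson, 2020). The data, the variable names (x, y, z), and the two analytic decisions below are invented for illustration; this is not the authors' implementation, and step (3), joint inference, is only noted in a comment.

    # Step 1: enumerate defensible, non-redundant specifications; here,
    # whether to adjust for a covariate z and whether to drop outliers.
    import itertools
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500
    df = pd.DataFrame({"x": rng.normal(size=n), "z": rng.normal(size=n)})
    df["y"] = 0.2 * df["x"] + 0.5 * df["z"] + rng.normal(size=n)

    specs = itertools.product(["y ~ x", "y ~ x + z"], [False, True])

    results = []
    for formula, drop_outliers in specs:
        data = df[df["y"].abs() < 2.5] if drop_outliers else df
        fit = smf.ols(formula, data=data).fit()
        results.append({"formula": formula,
                        "drop_outliers": drop_outliers,
                        "estimate": fit.params["x"],
                        "p": fit.pvalues["x"]})

    # Step 2: the "curve" is the estimates sorted by size; plotting them,
    # marked by the decisions that produced them, exposes consequential
    # specification decisions. Step 3 would compare this observed curve
    # to curves computed from data simulated under the null hypothesis.
    print(pd.DataFrame(results).sort_values("estimate"))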
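Second, a Monte Carlo sketch of the compounding problem described in the 99% Impossible abstract (Vosgerau et al., 2019). The p-hacking model assumed here is stylized (each study measures two independent outcomes and reports the more positive one, tested one-sided), not the paper's exact model, so the numbers differ from the 8% and 83% figures quoted above; the qualitative point is the same, though: a small per-study inflation becomes a large meta-analytic one.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_sims, n_studies = 100_000, 10

    # True effect is zero everywhere; each study's reported z-statistic is
    # the larger of its two candidate outcomes' z-statistics.
    z = rng.standard_normal((n_sims, n_studies, 2)).max(axis=2)
    crit = stats.norm.isf(0.05)  # one-sided critical value, ~1.645

    # A single mildly p-hacked study: false-positive rate near 10%.
    print("one study :", (z[:, 0] > crit).mean())

    # A Stouffer-style fixed-effect combination of all 10 studies: the
    # small per-study bias accumulates, so the pooled test rejects the
    # (true) null most of the time.
    z_meta = z.mean(axis=1) * np.sqrt(n_studies)
    print("meta, k=10:", (z_meta > crit).mean())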
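Finally, the p-curve diagnostic from the comment on Bruns & Ioannidis (2016) can be shown in a few lines: significant p-values from a real (even confounded) effect are right-skewed, piling up near zero, while those from a true null are flat. The effect size and sample size below are arbitrary illustrative choices.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def significant_pvalues(true_d, n_per_cell=50, n_studies=20_000):
        """Run two-sample t-tests; return the p-values that fall below .05."""
        a = rng.normal(true_d, 1.0, (n_studies, n_per_cell))
        b = rng.normal(0.0, 1.0, (n_studies, n_per_cell))
        p = stats.ttest_ind(a, b, axis=1).pvalue
        return p[p < 0.05]

    for label, d in [("real effect, d = 0.4", 0.4), ("null effect, d = 0.0", 0.0)]:
        p = significant_pvalues(d)
        # Share of significant p-values in each .01-wide bin from 0 to .05.
        bins = np.histogram(p, bins=np.arange(0, 0.06, 0.01))[0] / len(p)
        print(f"{label}: {np.round(bins, 2)}")

    # The real effect's p-curve is right-skewed (most mass near zero); the
    # null's is roughly flat. That contrast is the signature p-curve uses
    # to tell replicable from non-replicable findings.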


Current Courses (Fall 2022)

  • MGMT6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

    MGMT6900402 (Syllabus)

    MGMT6900401 (Syllabus)

  • OIDD6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

    OIDD6900402 (Syllabus)

    OIDD6900401 (Syllabus)

All Courses

  • MGMT6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • OIDD2900 - Decision Processes

    This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include cognitive psychology of human problem-solving, judgment and choice, theories of rational judgment and decision, and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 101 or the equivalent, although not required, is strongly recommended.

  • OIDD2990 - Judgment & Decision Making Research Immersion

    This class provides a high-level introduction to the field of judgment and decision making (JDM) and in-depth exposure to the process of doing research in this area. Throughout the semester, you will gain hands-on experience with several different JDM research projects. You will be paired with a PhD student or faculty mentor who is working on a variety of different research studies. Each week you will be given assignments that are central to one or more of these studies, along with detailed descriptions of the research projects you are contributing to and how your assignments relate to the successful completion of these projects. To complement your hands-on research experience, throughout the semester you will be assigned readings from the book Nudge by Thaler and Sunstein, which summarizes key recent ideas in the JDM literature. You will also meet as a group for an hour once every three weeks with the class's faculty supervisor and all of his or her PhD students to discuss the projects you are working on, the class readings, and your own research ideas stimulated by getting involved in various projects. Date and time to be mutually agreed upon by supervising faculty and students. The 1 CU version of this course will involve approximately 10 hours of research immersion per week and a 10-page paper. The 0.5 CU version will involve approximately 5 hours of research immersion per week and a 5-page final paper. Please contact Professor Joseph Simmons if you are interested in enrolling in the course.

  • OIDD3990 - Supervised Study

    This course number is currently used for several course types, including independent studies, experimental courses, and the Management & Technology Freshman Seminar. Instructor permission is required to enroll in any independent study. Wharton Undergraduate students must also receive approval from the Undergraduate Division to register for independent studies. Section 002 is the Management and Technology Freshman Seminar; instructor permission is not required for this section, which is only open to M&T students. For Fall 2020, Section 004 is a new course titled AI, Business, and Society. The course provides an overview of AI and its role in business transformation. The purpose of this course is to improve understanding of AI, discuss the many ways in which AI is being used in industry, and provide a strategic framework for how to bring AI to the center of digital transformation efforts. In terms of the AI overview, we will go over a brief technical introduction for students who are not actively immersed in AI (topics covered include Big Data, data warehousing, data mining, different forms of machine learning, etc.). In terms of business applications, we will consider applications of AI in media, finance, retail, and other industries. Finally, we will consider how AI can be used as a source of competitive advantage. We will conclude with a discussion of ethical challenges and a governance framework for AI. No prior technical background is assumed, but some interest in (and exposure to) technology is helpful. Every effort is made to build most of the lectures from the basics.

  • OIDD6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • PPE4998 - Directed Honors Research

    Student arranges with a Penn faculty member to do research and write a thesis on a suitable topic. For more information on honors, visit:

Awards and Honors

  • MBA Excellence in Teaching Award, 2019
  • MBA Excellence in Teaching Award, 2014
  • MBA Excellence in Teaching Award, 2013
  • Winner of the Helen Kardon Moss Anvil Award, awarded to the one Wharton faculty member “who has exemplified outstanding teaching quality during the last year”, 2013
  • One of ten faculty nominated by the MBA student body for the Helen Kardon Moss Anvil Award, 2012
  • MBA Excellence in Teaching Award, 2012
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011

In the News


Why Humans Distrust Algorithms – and How That Can Change

Many people are averse to using algorithms when making decisions, preferring to rely on their instincts. New Wharton research says a simple adjustment can help them feel differently.

Knowledge at Wharton - 2/13/2017