Joseph Simmons
  • Dorothy Silberberg Professor of Applied Statistics
  • Professor of Operations, Information, and Decisions

Contact Information

  • Office Address:

    3730 Walnut St.
    JMHH 500
    Philadelphia, PA 19104

Research Interests: research methods, judgment and decision making, consumer behavior

Links: CV, Data Colada, AsPredicted.org (Easy Pre-registration), ResearchBox (Easy Data & Materials Posting)

Overview

I am (somehow) the Dorothy Silberberg Professor of Applied Statistics. I am also a Professor of Operations, Information, and Decisions. I teach an MBA course called Managerial Decision Making. And I was recently sued for defamation (case dismissed). I have two primary areas of research.

In my normal job, I investigate the psychology of judgment and decision-making, with an emphasis on understanding and fixing the errors and biases that plague people’s judgments, predictions, and choices. In this research, my (excellent) co-authors and I have contributed to our understanding of advice taking, algorithm aversion, anchoring effects, consumer rating systems, forecasting biases, the illusion of control, intuitive biases, optimistic biases, outcome bias, and the wisdom of crowds.

I am better known for my work in the area of research methods. With Leif Nelson and Uri Simonsohn, I have spent much of the past 13 years trying to identify and fix research practices that contribute to (1) the publication of false-positive research findings, and (2) a perverse incentive structure that rewards bad research more than good research. In these efforts, we have exposed p-hacking as a major cause of the problem, started a blog devoted largely to advancing the credibility of published research (http://datacolada.org/), worked to expose cases of outright fraud (e.g., https://datacolada.org/98 and https://datacolada.org/109), developed a statistical tool designed to assess whether a selection of studies contains evidential value (www.p-curve.com), designed a pre-registration website that has been used by thousands of researchers (https://aspredicted.org/), designed a website that makes it very easy to post and access materials and data (https://researchbox.org/), and conducted systematic replications of published findings (https://datacolada.org/81). I am also a co-founder of the Wharton Credibility Lab, which is dedicated to providing online platforms that make it easy for researchers to conduct credible research and to demonstrate its credibility.
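The core logic behind p-curve can be illustrated with a small simulation. This is a minimal sketch of the underlying idea only, not the actual p-curve tool at www.p-curve.com; the one-sample z-test, sample size, and effect size below are illustrative assumptions. Under the null hypothesis, p-values are uniformly distributed, so significant results spread evenly across the .01-wide bins below .05; when a true effect exists, significant p-values pile up at the low end (a right skew), which is the signature of evidential value.

```python
import math
import random

def two_sided_p(z):
    """Two-sided p-value for a z statistic (normal approximation)."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate_p_values(effect, n=20, trials=20_000, seed=1):
    """p-values from one-sample z-tests of a mean against 0 (known sd = 1)."""
    rng = random.Random(seed)
    ps = []
    for _ in range(trials):
        sample_mean = sum(rng.gauss(effect, 1.0) for _ in range(n)) / n
        z = sample_mean * math.sqrt(n)  # standard error = 1 / sqrt(n)
        ps.append(two_sided_p(z))
    return ps

def p_curve(ps):
    """Share of *significant* results (p < .05) falling in each .01-wide bin."""
    sig = [p for p in ps if p < 0.05]
    edges = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05]
    return [sum(lo <= p < hi for p in sig) / len(sig)
            for lo, hi in zip(edges, edges[1:])]

null_curve = p_curve(simulate_p_values(effect=0.0))  # roughly flat
real_curve = p_curve(simulate_p_values(effect=0.5))  # right-skewed: mass at low p
```

Comparing the two curves shows the diagnostic: a literature studying a true effect produces many more p-values below .01 than between .04 and .05, whereas a literature of null effects (or one selected purely by significance) does not.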

Research

  • Beidi Hu and Joseph Simmons (2025), Different Methods Elicit Different Belief Distributions, Journal of Experimental Psychology: General, 154 (2), pp. 476-496.

    Abstract: When eliciting people’s forecasts or beliefs, you can ask for a point estimate—for example, what is the most likely state of the world?—or you can ask for an entire distribution of beliefs—for example, how likely is every possible state of the world? Eliciting belief distributions potentially yields more information, and researchers have increasingly tried to do so. In this article, we show that different elicitation methods elicit different belief distributions. We compare two popular methods used to elicit belief distributions: Distribution Builder and Sliders. In 10 preregistered studies (N = 14,553), we find that Distribution Builder elicits more accurate belief distributions than Sliders, except when true distributions are right-skewed, for which the results are mixed. This result holds when we assess accuracy (a) relative to a normative benchmark and (b) relative to participants’ own beliefs. Our evidence suggests that participants approach these two methods differently: Sliders users are more likely to start with the lowest bins in the interface, which in turn leads them to put excessive mass in those bins. Our research sheds light on the process by which people construct belief distributions while offering a practical recommendation for future research: All else equal, Distribution Builder yields more accurate belief distributions.

  • Katie Mehr and Joseph Simmons (2024), How Does Rating Specific Features of an Experience Alter Consumers’ Overall Evaluations of That Experience?, Journal of Consumer Research, 51 (4), pp. 739-757.

    Abstract: How does the way companies elicit ratings from consumers affect the ratings that they receive? In 10 pre-registered experiments, we find that consumers rate sub-par experiences more positively overall when they are also asked to rate specific aspects of those experiences (e.g., a restaurant’s food, service, and ambiance). Studies 1–4 established the basic effect across different scenarios and experiences. Study 5 found that the effect is limited to being asked to rate specific features of an experience, rather than providing open-ended comments about those features. Studies 6–9 provided evidence that the effect does not emerge because rating positive aspects of a subpar experience reminds consumers that their experiences had some good features. Rather, it emerges because consumers want to avoid incorporating negative information into both the overall and the attribute ratings. Lastly, study 10 found that asking consumers to rate attributes of a subpar experience reduces the predictive validity of their overall rating. We discuss implications of this work and reconcile it with conflicting findings in the literature.

  • Don A. Moore, Juliana Schroeder, Erica R. Bailey, Rachel Gershon, Joshua E. Moore, Joseph Simmons (2024), Does Thinking About God Increase Acceptance of Artificial Intelligence in Decision Making?, Proceedings of the National Academy of Sciences, 121 (31).

    Abstract: Karataş and Cutright (2023) present eight experiments (N = 2,462) suggesting that “God salience” (i.e., thinking about God) increases people’s willingness to rely on guidance from AI. We conducted pre-registered replications of five of their experiments. Although the majority of our replications produced results directionally consistent with those of Karataş and Cutright, our results imply that if these effects exist, they are so small that the original studies lacked the ability to detect them.

  • Celia Gaertig and Joseph Simmons (2023), Are People More or Less Likely To Follow Advice That Is Accompanied By A Confidence Interval?, Journal of Experimental Psychology: General, 152 (July), pp. 2008-2025.

    Abstract: Are people more or less likely to follow numerical advice that communicates uncertainty in the form of a confidence interval? Prior research offers competing predictions. Although some research suggests that people are more likely to follow the advice of more confident advisors, other research suggests that people may be more likely to trust advisors who communicate uncertainty. Participants (N = 17,615) in 12 incentivized studies predicted the outcomes of upcoming sporting events, the preferences of other survey responders, or the number of deaths due to COVID-19 by a future date. We then provided participants with an advisor’s best guess and manipulated whether or not that best guess was accompanied by a confidence interval. In all but one study, we found that participants were either directionally or significantly more likely to choose the advisor’s forecast (over their own) when the advice was accompanied by a confidence interval. These results were consistent across different measures of advice following and did not depend on the width of the confidence interval (75% or 95%), advice quality, or on whether people had information about the advisor’s past performance. These results suggest that advisors may be more persuasive if they provide reasonably-sized confidence intervals around their numerical estimates.

  • Beidi Hu and Joseph Simmons (2023), Does Constructing A Belief Distribution Truly Reduce Overconfidence?, Journal of Experimental Psychology: General, 152 (2), pp. 571-589.

    Abstract: Can overconfidence be reduced by asking people to provide a belief distribution over all possible outcomes – that is, by asking them to indicate how likely all possible outcomes are? Although prior research suggests that the answer is “yes,” that research suffers from methodological confounds that muddle its interpretation. In our research, we remove these confounds to investigate whether providing a belief distribution truly reduces overconfidence. In 10 studies, participants made predictions about upcoming sports games or other participants’ preferences, and then indicated their confidence in these predictions using rating scales, likelihood judgments, and/or incentivized wagers. Contrary to prior research, and to our own expectations, we find that providing a belief distribution usually increases overconfidence, because doing so seems to reinforce people’s prior beliefs. 

  • Uri Simonsohn, Joseph Simmons, Leif D. Nelson (2022), Above averaging in literature reviews, Nature Reviews Psychology, 1 (9), pp. 1-2.

    Abstract: Meta-analysts’ practice of transcribing and numerically combining all results in a research literature can generate uninterpretable and/or misleading conclusions. Meta-analysts should instead critically evaluate studies, draw conclusions only from those that are valid and provide readers with enough information to evaluate those conclusions.

  • Celia Gaertig and Joseph Simmons (2021), The Psychology of Second Guesses: Implications for the Wisdom of the Inner Crowd, Management Science, 67 (September), pp. 5921-5942.

    Abstract: Prior research suggests that averaging two guesses from the same person can improve quantitative judgments, a phenomenon known as the “wisdom of the inner crowd.” In this article, we find that this effect hinges on whether people explicitly decide in which direction their first guess had erred before making their second guess. In nine studies (N = 8,465), we found that asking people to explicitly indicate whether their first guess was too high or too low prior to making their second guess made people more likely to provide a second guess that was more extreme (in the same direction) than their first. As a consequence, the introduction of that “Too High/Too Low” question reduced (and sometimes eliminated or reversed) the wisdom-of-the-inner-crowd effect for (the majority of) questions with non-extreme correct answers and increased the wisdom-of-the-inner-crowd effect for questions with extreme correct answers. Our findings suggest that the wisdom-of-the-inner-crowd effect is not inevitable, but rather that it depends on the processes people use to generate their second guesses.
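The statistical rationale for averaging two guesses (independent errors partially cancel) can be sketched in a few lines. This is an illustrative simulation under the assumption of fully independent errors; guesses from the same person are typically correlated, so real-world inner-crowd gains are smaller than this idealized case.

```python
import random
import statistics

def simulate(trials=50_000, truth=100.0, noise_sd=10.0, seed=7):
    """Mean absolute error of a single guess vs. the average of two
    independent guesses of the same true quantity."""
    rng = random.Random(seed)
    single_errs, avg_errs = [], []
    for _ in range(trials):
        g1 = truth + rng.gauss(0, noise_sd)
        g2 = truth + rng.gauss(0, noise_sd)
        single_errs.append(abs(g1 - truth))
        avg_errs.append(abs((g1 + g2) / 2 - truth))
    return statistics.mean(single_errs), statistics.mean(avg_errs)

single_mae, averaged_mae = simulate()
# Averaging two independent guesses shrinks the expected error
# (by a factor approaching sqrt(2) for purely independent noise).
```

The article's finding is that this benefit can shrink, vanish, or reverse when the second guess is generated by deliberately overshooting the first in one direction rather than by drawing a fresh, quasi-independent estimate.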

  • Joowon Klusowski, Deborah Small, Joseph Simmons (2021), Does Choice Cause an Illusion of Control?, Psychological Science, 32 (February), pp. 159-172.

    Abstract: Previous research suggests that choice causes an illusion of control—that it makes people feel more likely to achieve preferable outcomes, even when they are selecting among options that are functionally identical (e.g., lottery tickets with an identical chance of winning). This research has been widely accepted as evidence that choice can have significant welfare effects, even when it confers no actual control. In this article, we report the results of 17 experiments that examined whether choice truly causes an illusion of control (N = 10,825 online and laboratory participants). We found that choice rarely makes people feel more likely to achieve preferable outcomes—unless it makes the preferable outcomes actually more likely—and when it does, it is not because choice causes an illusion but because choice reflects some participants’ preexisting (illusory) beliefs that the functionally identical options are not identical. Overall, choice does not seem to cause an illusion of control.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2021), Pre-registration Is A Game Changer. But, Like Random Assignment, It Is Neither Necessary Nor Sufficient For Credible Science, Journal of Consumer Psychology, 31 (January), pp. 177-180.

    Abstract: We identify 15 claims Pham and Oh (2020) make to argue against pre-registration. We agree with 7 of the claims, but think that none of them justify delaying the encouragement and adoption of pre-registration. Moreover, while the claim they make in their title is correct—pre-registration is neither necessary nor sufficient for a credible science—this is also true of many of our science’s most valuable tools, such as random assignment. Indeed, both random assignment and pre-registration lead to more credible research. Pre-registration is a game changer.

  • Joseph Simmons, Leif D. Nelson, Uri Simonsohn (2021), Pre-registration: Why and How, Journal of Consumer Psychology, 31 (January), pp. 151-162.

    Abstract: In this article, we (1) discuss the reasons why pre-registration is a good idea, both for the field and individual researchers, (2) respond to arguments against pre-registration, (3) describe how to best write and review a pre-registration, and (4) comment on pre-registration’s rapidly accelerating popularity. Along the way, we describe the (big) problem that pre-registration can solve (i.e., false positives caused by p-hacking), while also offering viable solutions to the problems that pre-registration cannot solve (e.g., hidden confounds or fraud). Pre-registration does not guarantee that every published finding will be true, but without it you can safely bet that many more will be false. It is time for our field to embrace pre-registration, while taking steps to ensure that it is done right.

Teaching

Current Courses (Fall 2025)

  • MGMT6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

    MGMT6900401 (Syllabus)

    MGMT6900402 (Syllabus)

  • OIDD6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

    OIDD6900401 (Syllabus)

    OIDD6900402 (Syllabus)

  • OIDD9999 - Independent Study

    Independent Study

    OIDD9999004 (Syllabus)

All Courses

  • MGMT6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • OIDD2900 - Decision Processes

    This course is an intensive introduction to various scientific perspectives on the processes through which people make decisions. Perspectives covered include cognitive psychology of human problem-solving, judgment and choice, theories of rational judgment and decision, and the mathematical theory of games. Much of the material is technically rigorous. Prior or current enrollment in STAT 1010 or the equivalent, although not required, is strongly recommended.

  • OIDD2990 - Judgment & Decision Making Research Immersion

    This class provides a high-level introduction to the field of judgment and decision making (JDM) and in-depth exposure to the process of doing research in this area. Throughout the semester you will gain hands-on experience with several different JDM research projects. You will be paired with a PhD student or faculty mentor who is working on a variety of different research studies. Each week you will be given assignments that are central to one or more of these studies, and you will be given detailed descriptions of the research projects you are contributing to and how your assignments relate to the successful completion of these projects. To complement your hands-on research experience, throughout the semester you will be assigned readings from the book Nudge by Thaler and Sunstein, which summarizes key recent ideas in the JDM literature. You will also meet as a group for an hour once every three weeks with the class's faculty supervisor and all of his or her PhD students to discuss the projects you are working on, to discuss the class readings, and to discuss your own research ideas stimulated by getting involved in various projects. Date and time to be mutually agreed upon by supervising faculty and students. The 1 CU version of this course will involve approx. 10 hours of research immersion per week and a 10-page paper. The 0.5 CU version of this course will involve approx. 5 hours of research immersion per week and a 5-page final paper. Please contact Professor Joseph Simmons if you are interested in enrolling in the course: jsimmo@wharton.upenn.edu

  • OIDD3990 - Supervised Study

    This course number is currently used for several course types including independent studies, experimental courses, and the Management & Technology Freshman Seminar. Instructor permission is required to enroll in any independent study. Wharton Undergraduate students must also receive approval from the Undergraduate Division to register for independent studies. Section 002 is the Management and Technology Freshman Seminar; instructor permission is not required for this section, which is only open to M&T students. For Fall 2020, Section 004 is a new course titled AI, Business, and Society. The course provides an overview of AI and its role in business transformation. The purpose of this course is to improve understanding of AI, discuss the many ways in which AI is being used in industry, and provide a strategic framework for how to bring AI to the center of digital transformation efforts. In terms of AI overview, we will go over a brief technical overview for students who are not actively immersed in AI (topics covered include Big Data, data warehousing, data mining, different forms of machine learning, etc.). In terms of business applications, we will consider applications of AI in media, finance, retail, and other industries. Finally, we will consider how AI can be used as a source of competitive advantage. We will conclude with a discussion of ethical challenges and a governance framework for AI. No prior technical background is assumed, but some interest in (and exposure to) technology is helpful. Every effort is made to build most of the lectures from the basics.

  • OIDD6900 - Managerial Decision Making

    The course is built around lectures reviewing multiple empirical studies, class discussion, and a few cases. Depending on the instructor, grading is determined by some combination of short written assignments, tests, class participation, and a final project (see each instructor's syllabus for details).

  • OIDD9999 - Independent Study

    Independent Study

  • PPE4998 - Directed Honors Research

    Student arranges with a Penn faculty member to do research and write a thesis on a suitable topic. For more information on honors visit: https://ppe.sas.upenn.edu/study/curriculum/honors-theses

Awards and Honors

  • MBA Excellence in Teaching Award, 2019
  • MBA Excellence in Teaching Award, 2014
  • MBA Excellence in Teaching Award, 2013
  • Winner of the Helen Kardon Moss Anvil Award, awarded to the one Wharton faculty member “who has exemplified outstanding teaching quality during the last year”, 2013
  • One of ten faculty nominated by the MBA student body for the Helen Kardon Moss Anvil Award, 2012
  • MBA Excellence in Teaching Award, 2012
  • Wharton Excellence in Teaching Award, Undergraduate Division, 2011

In the News

Why Humans Distrust Algorithms – and How That Can Change

Many people are averse to using algorithms when making decisions, preferring to rely on their instincts. New Wharton research says a simple adjustment can help them feel differently.

Knowledge at Wharton - 2/13/2017