Seminars / Conferences


September 12-13, 2019


Since 2006, the Workshop for Empirical Research in Operations Management has brought together a community of scholars with a passion for empirical research in Operations. The purpose of the Workshop is to exchange research ideas, share experiences in the publication process, discuss methodological issues, and grow together as a group of colleagues with a common research interest.

October 20th, 2019

2019 INFORMS Reception


Seattle, Washington

Location: Ivar’s Acres of Clams – 1001 Alaskan Way, Seattle, WA 98104

Sunday, October 20th

7:00 – 9:00 pm

Seminar Time & Location Information 

Time and Location: 12:00PM – 1:20PM

**Due to the COVID-19 pandemic, all seminars will be held virtually until further notice.**

Jon M Huntsman Hall (JMHH)

3730 Walnut St.

Philadelphia, PA 19104

Suite 540/541 (unless otherwise noted)

To schedule a meeting with a speaker, log in to the OID SharePoint site or contact Joy McManus.

Seminars 2021-2022

Tuesday, September 21, 2021

Virtual seminar via Zoom

Presenter: Dean Eckles

Title: Long ties: Formation, social contagion, and economic outcomes


Network structure can affect when, where, and how widely new ideas, products, and behaviors are adopted. Classic work in the social sciences has emphasized that “long ties” provide access to novel and advantageous information. In our empirical work, we show how particular life events (migration, education) are associated with forming long ties and how having long ties is associated with beneficial economic outcomes. Counties in the United States with more long ties have higher incomes, lower unemployment, and more economic mobility, even after adjusting for other measures of social connections.

These stylized facts are consistent with some models of contagion. In widely-used models of biological contagion, interventions that randomly rewire edges (generally making them “longer”) accelerate spread. However, there are other models relevant to social contagion, such as those motivated by myopic best-response in games with strategic complements, in which individuals adopt if and only if the number of adopting neighbors exceeds a threshold. Recent work has argued that highly clustered, rather than random, networks facilitate spread of these “complex contagions”. Here we show that minor modifications to this model, which make it more realistic, reverse this result: we allow very rare below-threshold adoption, i.e., rarely adoption occurs when there is only one adopting neighbor. In a version of “small world” networks, allowing adoptions below threshold to occur with order 1/√n probability — even only along some “short” cycle edges — is enough to ensure that random rewiring accelerates spread. Hypothetical interventions that randomly rewire existing edges or add random edges (versus adding “short”, triad-closing edges) in hundreds of empirical social networks reduce time to spread.

In summary, we emphasize the outsized role of long ties in the spread of valuable information and behaviors, even when those behaviors spread via threshold-based contagions.

This is joint work based on two papers: one on threshold-based contagions with Elchanan Mossel, M. Amin Rahimian, and Subhabrata Sen, and one on the formation of long ties and economic outcomes with Eaman Jahani and Michael Bailey.
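The contagion model sketched in the abstract can be illustrated with a small simulation. The sketch below is an illustrative assumption, not the authors' actual code: it runs a threshold ("complex") contagion on a ring lattice with Watts-Strogatz-style random rewiring, and allows the rare below-threshold adoptions the abstract describes (a node with a single adopting neighbor adopts with small probability `sub_p` per round). All parameter values are hypothetical.

```python
import random

def simulate_spread(n, k, rewire_p, threshold, sub_p, seed=0, max_rounds=None):
    """Threshold contagion on a ring lattice with random rewiring.

    A node adopts once at least `threshold` neighbors have adopted, or,
    with small per-round probability `sub_p`, once a single neighbor has
    (the rare below-threshold adoption from the abstract).  Returns the
    round at which all n nodes have adopted, or None if spread stalls.
    """
    rng = random.Random(seed)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):                      # ring: k neighbors on each side
        for d in range(1, k + 1):
            j = (i + d) % n
            nbrs[i].add(j); nbrs[j].add(i)
    for i in range(n):                      # rewire each edge w.p. rewire_p
        for j in [x for x in nbrs[i] if x > i]:
            if rng.random() < rewire_p:
                new = rng.randrange(n)
                if new != i and new not in nbrs[i]:
                    nbrs[i].discard(j); nbrs[j].discard(i)
                    nbrs[i].add(new); nbrs[new].add(i)
    adopted = {0, 1}                        # seed two adjacent adopters
    max_rounds = max_rounds or 10 * n
    for t in range(1, max_rounds + 1):
        fresh = set()
        for v in range(n):
            if v in adopted:
                continue
            m = sum(1 for u in nbrs[v] if u in adopted)
            if m >= threshold or (m >= 1 and rng.random() < sub_p):
                fresh.add(v)
        adopted |= fresh
        if len(adopted) == n:
            return t
    return None
```

On an unrewired ring, a threshold of 2 spreads deterministically from the seeded pair; comparing completion times across rewiring probabilities gives a feel for the acceleration effect the talk analyzes.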

Thursday, September 23, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Kuang Xu

Title: Diffusion asymptotics for sequential experiments


We propose a new diffusion-asymptotic analysis for sequentially randomized experiments, including those that arise in solving multi-armed bandit problems. In an experiment with n time steps, we let the mean reward gaps between actions scale to the order 1/√n so as to preserve the difficulty of the learning task as n grows. In this regime, we show that the behavior of a class of sequentially randomized Markov experiments converges to a diffusion limit, given as the solution of a stochastic differential equation. The diffusion limit thus enables us to derive a refined, instance-specific characterization of the stochastic dynamics of adaptive experiments. As an application of this framework, we use the diffusion limit to obtain several new insights on the regret and belief evolution of Thompson sampling. We show that a version of Thompson sampling with an asymptotically uninformative prior variance achieves nearly-optimal instance-specific regret scaling when the reward gaps are relatively large. We also demonstrate that, in this regime, the posterior beliefs underlying Thompson sampling are highly unstable over time.
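The diffusion scaling in the abstract can be made concrete with a toy experiment. The sketch below is a hypothetical illustration (not the paper's analysis): Gaussian Thompson sampling on two arms whose mean gap shrinks like 1/√n with the horizon, using a fixed prior variance rather than the asymptotically uninformative prior the talk studies.

```python
import math
import random

def thompson_two_arm(n, c=2.0, prior_var=1.0, seed=0):
    """Gaussian Thompson sampling on two arms under diffusion scaling.

    The mean reward gap is c / sqrt(n), so the learning problem stays
    comparably hard as the horizon n grows.  Rewards have unit noise and
    each arm has a N(0, prior_var) prior.  Returns cumulative regret.
    """
    rng = random.Random(seed)
    gap = c / math.sqrt(n)
    means = [gap, 0.0]                 # arm 0 is the better arm
    sums = [0.0, 0.0]
    counts = [0, 0]
    regret = 0.0
    for _ in range(n):
        draws = []
        for a in (0, 1):
            prec = 1.0 / prior_var + counts[a]   # posterior precision
            draws.append(rng.gauss(sums[a] / prec, math.sqrt(1.0 / prec)))
        a = 0 if draws[0] >= draws[1] else 1     # sample-based arm choice
        reward = means[a] + rng.gauss(0.0, 1.0)
        sums[a] += reward
        counts[a] += 1
        regret += gap if a == 1 else 0.0
    return regret
```

Cumulative regret in this regime is at most n · gap = c√n; tracking how far below that ceiling the algorithm lands, across seeds and gap scales, mirrors the instance-specific regret questions the diffusion limit addresses.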

Bio:  Kuang Xu is an Associate Professor of Operations, Information and Technology at Stanford Graduate School of Business, and Associate Professor by courtesy with the Electrical Engineering Department, Stanford University. Born in Suzhou, China, he received the B.S. degree in Electrical Engineering (2009) from the University of Illinois at Urbana-Champaign, and the Ph.D. degree in Electrical Engineering and Computer Science (2014) from the Massachusetts Institute of Technology. His research focuses on understanding fundamental properties and design principles of large-scale stochastic systems using tools from probability theory and optimization, with applications in queueing networks, privacy and machine learning. He is a recipient of the First Place in the INFORMS George E. Nicholson Student Paper Competition (2011), the Best Paper Award, as well as the Kenneth C. Sevcik Outstanding Student Paper Award at ACM SIGMETRICS (2013), and the ACM SIGMETRICS Rising Star Research Award (2020). He currently serves as an Associate Editor for Operations Research and Management Science.

Tuesday, October 5, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Angelica Leigh

Title: Am I Next? The Influence of Mega-Threats on Individuals at Work


Despite acknowledging the importance of events, organizational scholars rarely explore the influence of broader societal events on employee experiences and behaviors at work. Recognizing the importance of major societal occurrences, Leigh and Melwani (2019) introduced a theory of mega-threats – large-scale, identity-relevant negative occurrences that receive significant media attention – that begins to explain the impact of major societal events on individuals and organizations. In this talk, I will introduce new theory and present results from multiple studies that explain the psychological consequences of mega-threats – namely embodied threat – for individuals who share identity group membership with those targeted and/or harmed by mega-threats. I then demonstrate that this embodied threat spills over into the workplace, leading racial minority employees to engage in a process of emotional and cognitive suppression that I characterize as identity labor. Finally, I demonstrate that this process of identity labor has detrimental effects on employees and ultimately organizations, by leading employees to engage in higher levels of avoidance behaviors in the days following a mega-threat. Taken together, this work yields important theoretical and practical implications about the significant influence that societal events have on employees at work.

Tuesday, October 12, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Jónas Oddur Jónasson

Title: Redesigning Sample Transportation in Malawi Through Improved Data Sharing and Daily Route Optimization


Healthcare systems in resource-limited settings rely on diagnostic networks in which medical samples (e.g., blood, sputum) and results need to be transported between geographically dispersed healthcare facilities and centralized laboratories. Due to a lack of updated information, existing sample transportation (ST) systems typically operate on fixed schedules which do not account for demand variability. We present an innovative approach for timely collection of information on transportation demand (samples and results) using low-cost technology based on feature phones and integrate it with a novel Multi-Stage version of the Dynamic Multi-Period Vehicle Routing Problem to generate daily routes in response to this updated information. The Optimized Sample Transportation (OST) system comprises two components: a novel data sharing platform to monitor incoming sample volumes at healthcare facilities, and an optimization-based solution approach to the problem of routing and scheduling courier trips in a multi-stage transportation system. Our solution approach performs well in a range of numerical experiments. We implement OST in collaboration with Riders For Health, who operate the national ST system in Malawi. Based on analysis of implementation data describing over 20,000 samples and results transported during July-October 2019, we show that the implementation of OST routes reduced average ST delays in three districts of Malawi by approximately 25%. In addition, the proportion of unnecessary trips by ST couriers decreased by 55%. Results from our implementation demonstrate the practical feasibility of our approach for improving centralized ST operations in Malawi and its broader applicability to other resource-limited settings, particularly in sub-Saharan Africa.

Tuesday, October 19, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Jann Spiess

Title: Evidence-Based Policy Learning


The past years have seen the development and deployment of machine-learning algorithms to estimate personalized treatment-assignment policies from randomized controlled trials. Yet such algorithms for the assignment of treatment typically optimize expected outcomes without taking into account that treatment assignments are frequently subject to hypothesis testing. In this article, we explicitly take significance testing of the effect of treatment-assignment policies into account, and consider assignments that optimize the probability of finding a subset of individuals with a statistically significant positive treatment effect. We provide an efficient implementation using decision trees, and demonstrate its gain over selecting subsets based on positive (estimated) treatment effects. Compared to standard tree-based regression and classification tools, this approach tends to yield substantially higher power in detecting subgroups with positive treatment effects.

Thursday, October 21, 2021

Virtual seminar via Zoom

Presenter: Gabriel Weintraub

Title: Experimentation in Two-Sided Marketplaces: The Impact of Interference


Marketplace platforms use experiments (also known as “A/B tests”) as a method for making data-driven decisions about which changes to make on the platform. When platforms consider introducing a new feature, they often first run an experiment to test the feature on a subset of users and then use this data to decide whether to launch the feature platform-wide. However, it is well documented that estimates of the treatment effect arising from these experiments may be biased by interference, which arises from substitution effects on the demand and supply sides of the market.

In this work, we develop an analytical framework to study experimental design in two-sided marketplaces. We develop a stochastic market model and associated mean field limit to capture dynamics in such experiments. Notably, we use our model to show how the bias of commonly used experimental designs and associated estimators depend on market balance. We also propose novel experimental designs that reduce bias for a wide range of market balance regimes. Finally, we discuss a simpler model to study the bias-variance trade-off among different experimental choices. Overall, our results yield insights on experimental design for practitioners.

Based on joint work with Ramesh Johari, Hannah Li, Inessa Liskovich, and Geng Zhao.

Thursday, October 28, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Ioannis (Yannis) Stamatopoulos

Title: TBA



Tuesday, November 2, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Shane M. Greenstein

Title: Hidden Software and Veiled Value Creation: Illustrations from Server Software Usage


How do you measure the economic value of a commodity that transacts at a price of zero? This study examines the potential for and extent of omission and misattribution in standard approaches to economic accounting with regards to open source software, an unpriced commodity in the digital economy. The study is the first to follow usage and upgrading of unpriced software over a long period of time. It finds evidence that software updates mislead analyses of sources of firm productivity and identifies several mechanisms that create issues for mismeasurement. To illustrate these mechanisms, this study closely examines one asset that plays a critical role in digital economic activity, web server software. We analyze the largest dataset ever compiled on web server use in the United States and link it to disaggregated information on over 200,000 medium to large organizations in the United States between 2001 and 2018. In our sample, we find that the omission of economic value created by web server software is substantial and that this omission indicates there is over $4.5 billion of mismeasurement of server software across organizations in the United States. This mismeasurement varies by organization age, geography, industry and size. We also find that dynamic behavior, such as improvements of server technology and entry of new products, further exacerbates economic mismeasurement.

Thursday, November 4, 2021

Seminar held in JMHH G50 and via Zoom

Presenter: Mor Armony

Title: TBA



Thursday, November 11, 2021

Virtual seminar via Zoom

Presenter: Nitish Jain

Title: TBA



Seminars 2020-2021

Tuesday, October 13, 2020

Virtual seminar via Zoom

Presenter: Dan Adelman

Title: An Efficient Frontier Approach to Scoring and Ranking Hospital Performance


The Centers for Medicare and Medicaid Services (CMS) star rating methodology for publicly evaluating hospitals uses a latent variable model that is based on the presumption of a single, but unobservable, hospital-specific quality factor shared across a group of performance measures. Performance measures are given higher weight if they statistically appear to be more strongly correlated with this hidden factor. We show how this approach, when applied to measures that are weakly or not correlated with each other, can effectively ignore measures and can exhibit “knife-edge” instability, so that even if hospitals improve relative to all other hospitals, they may nonetheless score lower overall because of weight shifting onto different measures than before. In contrast, we provide an approach to scoring and ranking hospitals that, under reasonable conditions, ensures that hospitals that improve relative to all other hospitals obtain higher scores, while also having the capability to autonomously adjust weights as measures are added or subtracted over time. Rather than exploit statistical correlation, we propose a conic optimization framework that offers a new integrated approach in data envelopment analysis for simultaneous efficiency analysis and performance evaluation. We develop theory that explains the behavior of our approach, including various properties satisfied by hospital scores at optimality. Using data, we apply our approach to score and rank nearly every hospital in the United States and demonstrate the extent to which it agrees or disagrees with the existing approach to the CMS star ratings.

Thursday, October 15, 2020


Presenter: L. Beril Toktay

Title: Truthful Mechanisms for Medical Surplus Allocation


We analyze a resource allocation problem faced by Medical Surplus Recovery Organizations (MSROs) that recover medical surplus products to fulfill the needs of under-served healthcare facilities in developing countries. Due to the uncertain, uncontrollable supply and limited information about recipient needs, delivering the right product to the right recipient in MSRO supply chains is particularly challenging. The objective of this study is to identify strategies to improve MSROs’ value provision capability. In particular, we propose a mechanism design approach, and determine which recipient to serve at each shipping opportunity based on recipients’ reported preference rankings of different products. We find that when MSRO inventory information is shared with recipients, the only truthful mechanism is random selection among recipients, which defeats the purpose of eliciting information. Consequently, we propose two operational strategies to improve MSROs’ value provision: i) not sharing MSRO inventory information with recipients; and ii) withholding information regarding other recipients. We characterize the set of truthful mechanisms under each setting, and show that eliminating inventory and competitor information provision both improve MSROs’ value provision. Further, we investigate the value of cardinal mechanisms where recipients report their valuations. We show that in our setting, eliciting valuations has no value added beyond eliciting rankings under a wide class of implementable mechanisms. Finally, we present a calibrated numerical study based on historical data from a partner MSRO, and show that a strategy consisting of a ranking-based mechanism in conjunction with eliminating inventory and competitor information can significantly improve MSROs’ value provision.

Tuesday, October 20, 2020

Virtual seminar via Zoom

Presenter: David Chan

Title: Selection with Variation in Diagnostic Skill: Evidence from Radiologists


Physicians, judges, teachers, and agents in many other settings differ systematically in the decisions they make when faced with similar cases. Standard approaches to interpreting and exploiting such differences assume they arise solely from variation in preferences. We develop an alternative framework that allows variation in both preferences and diagnostic skill, and show that both dimensions are identified in standard settings under quasi-random assignment. We apply this framework to study pneumonia diagnoses by radiologists. Diagnosis rates vary widely among radiologists, and descriptive evidence suggests that a large component of this variation is due to differences in diagnostic skill. Our estimated model suggests that radiologists view failing to diagnose a patient with pneumonia as more costly than incorrectly diagnosing one without, and that this leads less-skilled radiologists to optimally choose lower diagnosis thresholds. Variation in skill can explain 44 percent of the variation in diagnostic decisions, and policies that improve skill perform better than uniform decision guidelines. Failing to account for skill variation can lead to highly misleading results in research designs that use agent assignments as instruments.

Thursday, October 22, 2020

Virtual seminar via Zoom

Presenter: Yash Kanoria

Title: To be announced


Abstract forthcoming

Tuesday, October 27, 2020

Virtual seminar via Zoom

Presenter: Catherine Tucker

Title: Does accurate consumer profiling depend on who you are? An empirical investigation of what is driving audience profiling errors


We present evidence that differences in profiling accuracy are rarely influenced by which data broker is doing the profiling, and instead depend on who is being profiled. Consumers who are better off – those with high income, home ownership, employment, and a college education – are profiled accurately more often. Occupation (white-collar vs. blue-collar jobs), as well as age and household arrangements, also affects profiling accuracy.
Our analyses suggest profiling errors are not driven by how lucrative an individual is, but by the nature of people’s digital footprint and the regularity of their online activities, which in turn reflect socio-economic and demographic status. The finding that better-off people are more likely to be accurately profiled has consequences for both policy and marketing practice.

Thursday, October 29, 2020

Virtual seminar via Zoom

Presenter: Daniela Saban

Title: Online Assortment Optimization for Two-sided Matching Platforms


Motivated by online labor markets, we consider the online assortment optimization problem faced by a two-sided matching platform that hosts a set of suppliers waiting to match with a customer. Arriving customers are shown an assortment of suppliers, and may choose to issue a match request to one of them. Before leaving the platform, each supplier reviews all the match requests he has received, and based on his preferences, he chooses whether to match with a customer or to leave unmatched. We study how platforms should design online assortment algorithms to maximize the expected number of matches in such two-sided settings.
We show that, when suppliers do not immediately accept/reject match requests, our problem is fundamentally different from standard (one-sided) assortment problems, where customers choose over a set of commodities. We establish that a greedy algorithm, which offers each arriving customer the assortment that maximizes the expected increase in matches, is 1/2-competitive when compared against the clairvoyant algorithm that knows in advance the full sequence of customers’ arrivals. In contrast with related online assortment problems, we show that there is no randomized algorithm that can achieve a better competitive ratio, even in asymptotic regimes. Next, we introduce a class of algorithms, termed preference-aware balancing algorithms, that achieve significantly better competitive ratios when suppliers’ preferences follow the Multinomial Logit and the Nested Logit choice models. Using prior knowledge about the “shape” of suppliers’ preferences, these algorithms are calibrated to “balance” optimally the match requests received by suppliers. Overall, our results suggest that the timing of suppliers’ decisions and the structure of suppliers’ preferences play a fundamental role in designing online two-sided assortment algorithms. (joint work with Ali Aouad)
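A deliberately simplified sketch can convey the flavor of the greedy policy. The code below is a hypothetical illustration, not the paper's algorithm: customers choose via a Multinomial Logit model, and, unlike in the talk's setting, each supplier immediately accepts a request with a known probability and then leaves. Supplier availability is tracked as an independent probability per supplier, which is an approximation.

```python
from itertools import combinations

def greedy_expected_matches(weights, accept, arrivals, max_size=2):
    """Greedy online assortment for a toy two-sided matching platform.

    weights[j]: supplier j's MNL attraction weight, so a customer shown
    assortment S picks j with probability w_j / (1 + sum of w in S).
    accept[j]: probability supplier j accepts a match request.
    Each arriving customer is shown the assortment maximizing the
    expected increase in matches.  Returns the expected match count.
    """
    avail = [1.0] * len(weights)   # P(supplier j still unmatched)
    total = 0.0
    for _ in range(arrivals):
        best, best_gain = (), 0.0
        for r in range(1, max_size + 1):      # brute-force small assortments
            for S in combinations(range(len(weights)), r):
                denom = 1.0 + sum(weights[j] for j in S)
                gain = sum(weights[j] / denom * avail[j] * accept[j]
                           for j in S)
                if gain > best_gain:
                    best, best_gain = S, gain
        denom = 1.0 + sum(weights[j] for j in best)
        for j in best:                         # book the expected matches
            total += weights[j] / denom * avail[j] * accept[j]
            avail[j] *= 1.0 - weights[j] / denom * accept[j]
    return total
```

Even in this toy form, the marginal gain of showing a supplier shrinks as that supplier's availability probability falls, which is the tension the greedy policy and the balancing algorithms both manage.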


Thursday, November 5, 2020


Presenter: Mor Armony

Title: To be announced


Abstract forthcoming

Tuesday, November 17, 2020

Virtual seminar via Zoom

Presenter: Retsef Levi

Title: Food and Agriculture Supply Chain Analytics & Sensing: Managing Risks to Human Health


Food and agriculture supply chains are essential to any society and economy, but at the same time pose significant risks to human health. In this talk, we will discuss how supply chain analytics and sensing, machine learning and modeling can inform regulatory policies and resource allocation to address and manage food adulteration and safety risks as well as zoonotic disease risks. Much of the talk will focus on a major multidisciplinary research effort to study risks related to economically motivated adulteration (EMA) of food in China, particularly ones originated from the upstream parts of the corresponding supply chains.


The talk is based on work done under a new collaborative project funded by the Walmart Foundation as well as work under a contract with the US FDA. It is joint work with multiple faculty and students at MIT and in Chinese universities.

Thursday, November 19, 2020

Virtual seminar via Zoom

Presenter: Vivek F. Farias

Title: Causal Inference for Panel Data with General Treatment Patterns


We present a near-optimal solution to the problem of causal inference on panel data. Specifically, we present a convex estimator which, under a certain “tangent space” condition on the design matrix, recovers the treatment effect at an optimal rate. A negligible violation of this tangent-space condition renders recovery of the treatment effect impossible. Our work alleviates the need for strong structural assumptions on the design matrix (synthetic control) or strong distributional assumptions (common to propensity-based methods for factor models). Our results are made possible through a simple insight on a deficiency of existing estimation approaches (they fail to leverage all the information in “treated” observations), an exploitation of the connection between certain convex and non-convex estimators, and an adaptation of the leave-one-out approach to analyzing entry-wise guarantees for matrix completion problems.

Tuesday, December 1, 2020

Virtual seminar via Zoom

Presenter: Irene Lo

Title: Market Design for Social Good: Using Algorithms and Economics to Address Social Problems


How can we improve information markets for actors in smallholder supply chains, increase incentives to not deforest, or assign students more efficiently and equitably to public schools? Many socially important problems involve markets where scarce societal resources are allocated, or information or incentives are provided to individuals with differing needs and preferences. In this talk, I will present ongoing work with Joann de Zegher on crowdsourcing market information from competitors in an Indonesia-based smallholder supply chain. I will then discuss how we can build on theoretical tools from mechanism design to design and implement effective markets that give the right resources to those who need them most.

Thursday, December 3, 2020

Virtual seminar via Zoom

Presenter: Saed Alizamir

Title: Electricity Pricing with Limited Consumer Response


Matching demand with supply has been a long-standing challenge in operating residential electricity markets. Utility firms often face stochastic demand functions that are affected by unpredictable exogenous random shocks (e.g., outdoor weather conditions). Although various Demand Response programs are in place to regulate electricity consumption, the effectiveness of these programs has been undermined, largely because consumers have demonstrated limited capability in adjusting their household appliances’ settings. In this paper, we construct a demand model to describe how consumers make consumption decisions in response to random external factors representing their ambient environment at a given price. To that end, we adopt the notion of “rational inattention” to capture the consumers’ inertia in readjusting their decisions over time. Subsequently, we investigate an electricity firm’s pricing decision as well as the important role of smart appliances in driving the overall consumption patterns. Our findings highlight the nuanced implications of rationally inattentive consumers, and lead to guidelines for better regulating retail electricity markets.

Tuesday, December 8, 2020

Virtual seminar via Zoom

Presenter: Kimon Drakopoulos

Title: Testing for COVID-19: From Modeling to Practice


In the first, modeling, part of this work I will discuss the tradeoff between accuracy and availability of tests and show how the accuracy of a test in detecting the underlying state affects the demand for the information product differentially across heterogeneous agents. Correspondingly, the test accuracy can serve as a rationing device to ensure that the limited supply of information products is appropriately allocated to the heterogeneous agents. When test availability is low and the social planner is unable to allocate tests in a targeted manner to the agents, we find that moderately good tests can outperform perfect tests in terms of social outcome.


In the second part of the talk, I will discuss the work that we recently completed on screening travelers at the Greek border. From July 1st to November 1st, we designed, implemented, and deployed an online learning system to allocate the country’s limited testing resources to the incoming tourist population. Specifically, for each of the 40 points of entry, and given the daily number of tests available, we use travelers’ characteristics to decide whom to test. Using this approach, we essentially double the effectiveness of testing resources and provide early warnings for outbreaks around the world.

Thursday, December 10, 2020

Virtual seminar via Zoom

Presenter: Juan Camilo Serpa

Title: Inventory in Times of War


Using data from 38,916 businesses in war-torn Colombia and from 5,138 attacks by the two rebel groups, FARC and ELN, we study how firms manage inventory during civil war. We obtain exogenous variation in the conflict intensity via a difference-in-differences model, which hinges on the peace process between Colombia’s government and FARC. Relying on this identification strategy, we hypothesize and show that war causes two effects on firm-level inventories. First, it leads firms to replace physical assets (inventory) with fungible assets (cash), causing them to operate with an over-secured financial buffer but a fragile operational buffer. Second, this inventory reduction occurs mostly in unprocessed inventories (finished-goods inventories are insensitive to violence), meaning that whereas war-torn businesses are equipped to fulfill planned orders, they become inflexible at handling uncertain future demand. We then show that the magnitude of these effects is highly contingent on the firm’s position in the supply chain, its proximity to distribution markets, and the type of attacks it is subject to. We propose policies to address war-related risk in supply chains.

Tuesday, December 15, 2020

Virtual seminar via Zoom

Presenter: Gerben van Kleef

Title: A Threat-Opportunity Framework of Responses to Norm Violations: Implications for Power and Leadership


Norms uphold the social order by guiding behavior without the force of laws. Behaviors that violate norms therefore pose a potential threat to organizations and societies. Accordingly, norm violations often trigger negative reactions in observers, such as unfavorable impressions, moral outrage, and gossip. Despite these reputational detriments, norm violations are omnipresent. I propose that one reason why norm violations persist is that the readiness to violate norms – despite potential costs in the form of sanctions – serves as a social signal that can afford benefits for actors’ social rank. Against this theoretical background, I will first present evidence that individuals who violate social norms are perceived as more powerful than those who abide by norms. Second, I will elucidate when observers are willing to actively grant power to norm violators by supporting them as leaders. A number of interrelated research projects indicate that the effects of norm violations on leadership granting depend on (1) the prosociality of the norm violation (i.e., whether it benefits others), (2) the hierarchical position of the observer (i.e., high or low power or social-economic status), (3) the type of norm that is violated (i.e., societal level or group level), and (4) the cultural context of the norm violation (i.e., variations in individualism-collectivism and tightness-looseness). I will discuss implications of these findings for understanding the perpetuation of norm-violating behavior in organizations and society at large.

Tuesday, February 2, 2021

Virtual seminar via Zoom

Presenter: Mark E. Lewis

Title: Power and Scheduling in a Parallel Processing Network


We consider a parallel processing network with removable servers. Beginning with the single server model with power and service rate control, we study the importance of a delayed restart when the server is off. In particular, we show that an optimal policy exists (under the average cost criterion) that delays restarting until a “safety stock” of work is in the system. It then behaves similarly to that of the classic service rate control models. With that as the backdrop, we consider scheduling with the ability to remove servers. We introduce “delay-JSQ” (join the shortest queue) policies, show their stability and asymptotic optimality in the two-server case, and conclude with a detailed numerical study that shows they outperform JSQ by up to 80%. This is joint work with Professor Douglas Down from McMaster University and Dr. Pamela Badian-Pessot (now at Procter and Gamble).
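The single-server safety-stock idea from the abstract is easy to simulate. The sketch below is an illustrative assumption, not the authors' model: a toy M/M/1 queue with a removable server that switches off when the queue empties and restarts only after a threshold of jobs has accumulated. The rates, cost parameters, and threshold are hypothetical.

```python
import random

def simulate_safety_stock(threshold, lam=0.5, mu=1.0, holding=1.0,
                          restart_cost=5.0, horizon=100000, seed=0):
    """M/M/1 queue with a removable server and a delayed-restart policy.

    The server turns off when the queue empties and restarts once
    `threshold` jobs (the "safety stock") have accumulated.  Costs are
    linear holding plus a fixed charge per restart.  Returns the
    average cost per unit time over the simulated horizon.
    """
    rng = random.Random(seed)
    t, q, on, cost = 0.0, 0, False, 0.0
    while t < horizon:
        # competing exponentials: arrivals always, service only if busy
        rate = lam + (mu if on and q > 0 else 0.0)
        dt = rng.expovariate(rate)
        t += dt
        cost += holding * q * dt
        if rng.random() < lam / rate:          # arrival event
            q += 1
            if not on and q >= threshold:
                on = True
                cost += restart_cost
        else:                                  # service completion
            q -= 1
            if q == 0:
                on = False                     # remove the server
    return cost / t
```

Sweeping `threshold` trades restart charges against holding cost, which is the tension the optimal delayed-restart policy in the talk resolves.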

Tuesday, February 23, 2021

Virtual seminar via Zoom

Presenter: Ray Reagans

Title: Centralization, language similarity, and performance: Renovating a classic experiment to identify network effects on team problem solving


Existing research illustrates a contingent association between a team’s work assignment and the ideal network for superior performance. For a basic task, the ideal network is organized around a central individual. If the task is complex, the ideal network is decentralized and democratic. Language similarity is one reason why complex work requires a decentralized network. To perform well, team members must apply the same problem-solving framework, and decentralized teams have an advantage in reaching consensus. Recent research suggests that language similarity is more beneficial for performance when a network is centralized. The implications of this potential outcome are underappreciated. Even if centralized teams struggle to agree on what framework to use, if the performance implication of language similarity is larger in a centralized team, centralized teams could be preferable, even if the focal task is complex. We analyze the performance of seventy-seven teams working to identify abstract symbols, which is a complex task and which also requires language similarity. People are randomly assigned to different network conditions and work together for a number of trials. We find that language similarity improves with experience at a slower rate in centralized teams, but we also find that the language similarity effect on team performance is larger for centralized teams, large enough to shift the overall advantage to centralized teams. We also estimate the performance of teams working in networks that combine elements of centralized and decentralized networks. Performance is higher in teams that combine both network features.

Thursday, February 25, 2021

Virtual seminar via Zoom

Presenter: Frances X. Frei

Title: Trust and Inclusion


Leadership isn’t about you. It’s about how effective you are at empowering other people—and making sure this impact endures even in your absence. The origins of great leadership are found, paradoxically, not in worrying about your own status and advancement, but in the unrelenting focus on other people’s potential.

The session will show how the boldest, most effective leaders use a special combination of trust, love, and belonging to create an environment in which other people can excel.

Tuesday, March 2, 2021

Virtual seminar via Zoom

Presenter: Ashley Martin

Title: The Importance of Gender in Ascribing Humanness


What does it mean to be human? Seven studies explore this age-old question and show that the attribution of gender is a critical component of seeing human (i.e., anthropomorphizing). Given gender’s primacy in social cognition, we propose gender is linked to seeing human in a way that cannot be said of other social categories (race, age, sexual-orientation, religion, disability). We test this hypothesis in seven studies: six that induce humanization (i.e., anthropomorphism) and measure social-category ascription; and one that includes (versus removes) gender and measures humanization. From recalling personal experience (Study 1), to perceiving “human-like” movement (Study 2a–2b), to anthropomorphizing stimuli (Study 3a–3b), and even creating a “human form” (Study 4), we demonstrate the heightened tendency to see gender (versus other social categories) when anthropomorphizing non-human entities, and further show that gender ascription does not happen when merely describing them (Studies 2–4). In addition, we show the reciprocal process, where assigning an anthropomorphized entity with a gender increases its humanness (Study 5). These results highlight the fundamental role of gender in humanization and have theoretical implications for research on gender, anthropomorphism, and mind perception. Further, these findings have practical relevance for current discussions around “genderlessness” and the rapidly growing movement towards a genderless society.

Thursday, March 11, 2021

Virtual seminar via Zoom

Presenter: L. Beril Toktay

Title: Waste Management Strategies under Information Asymmetry


Billions of tons of solid waste are generated every year globally, estimated at 60M tons of electronic waste, 730M tons of medical waste, 1B tons of hazardous waste, and 2B tons of municipal solid waste (World Bank Group 2018). Not all this waste is properly treated in the country of origin; rather, it is either dumped locally or exported. While some value-added recovery happens in export locations when high-value waste is exported, low-quality, unwanted waste leads to a myriad of health and environmental problems in the destination country. In this talk, I will draw on two papers addressing these issues. “Truthful Mechanisms for Medical Product Surplus Allocation” (Zhang, Atasu, Ayer, Toktay) is motivated by the estimated 6M tons of medical surplus waste generated in the US annually, some of which is exported by Medical Surplus Recovery Organizations (MSROs) to under-resourced hospitals abroad. Yet the World Health Organization estimates that over seventy percent of donated medical equipment is inappropriate. In this paper, we analyze a resource allocation problem faced by an MSRO under information asymmetry regarding the needs of a heterogeneous set of recipients. We identify implementable strategies to support recipient selection decisions that maximize value to recipients. “Treat, Dump, or Export? How Domestic and International Waste Management Policies Shape Waste Chain Outcomes” (Wijnsma, Lauga, Toktay) turns to the regulatory environment and studies the role of anti-dumping and anti-export policies in shaping waste outcomes under double-sided information asymmetry between waste producers and waste treatment operators.

Tuesday, March 16, 2021

Virtual seminar via Zoom

Presenter: Drew Jacoby-Senghor

Title: Majority group members misperceive the effects of diversity policies that benefit them


Six studies show that majority group members misperceive diversity policies as unbeneficial to their ingroup, even when policies benefit them. Majority members perceived non-zero-sum university admission policies—policies that increase the acceptance of both URM (i.e., underrepresented minority) and non-URM applicants—as harmful to their ingroup when merely framed as “diversity” policies. Even for policies lacking diversity framing (i.e., “leadership” policies), majority members misperceived that their ingroup would not benefit when initiatives provided relatively greater benefit to URMs, but not when they provided relatively greater benefit to non-URMs. No evidence emerged that these effects were driven by ideological factors: majority members’ misperceptions occurred even when accounting for beliefs around diversity, groups, hierarchy, race, and politics. Instead, we find that majority group membership itself predicts misperceptions, such that both Black and White participants accurately perceive non-zero-sum policies as also benefiting the majority when they are positioned as members of the minority group.

Thursday, March 18, 2021

Virtual seminar via Zoom

Presenter: Feryal Erhun

Title: Rapid COVID-19 Modeling Support for Regional Health Systems in England


Problem definition: This paper describes the real-time participatory modeling work that our team of academics, public health officials, and clinical decision-makers has been undertaking to support the regional efforts to tackle COVID-19 in the East of England. Methodology: Since March 2020, we have been studying four research questions that have allowed us to address the pandemic’s current and near-future rapidly evolving epidemiological state, as well as the bed capacity demand in the short (a few weeks) and medium (several months) term. Frequent data input from and consultations with our public health and clinical partners allow our academic team to apply dynamic data-driven approaches using time series modeling, Bayesian estimation, and system dynamics modeling. We thus obtain a broad view of the evolving situation. Results: The academic team presents the model outcomes and insights during weekly joint meetings among public health services, national health services, and academics to support COVID-19 planning activities in the East of England, contributing to the discussion of the COVID-19 response and issues beyond immediate COVID-19 planning. Academic/practical relevance: As COVID-19 planning efforts necessitate rapid response, our portfolio of scratch models aims to achieve the right balance between rigor and speed in the face of an uncertain and changing situation. Managerial implications: Our regional and local focus enables us to better understand the pandemic’s progression and to help decision-makers make more informed short- and medium-term capacity plans in different localities in the East of England. In addition, the learnings from our collaborative experiences may present guidance on how academics and practitioners can successfully collaborate in rapid response to disasters such as COVID-19.

(In collaboration with Cambridge Judge Business School Covid-19 Planning team and Public Health England)

Thursday, April 1, 2021

Virtual seminar via Zoom

Presenter: William Schmidt

Title: Mitigating Supply Chain Disruptions Using Part Inventory Portfolios


High impact / low probability supply chain disruptions can pose major challenges for a firm. When the firm can backlog some portion of its unmet customer orders, such disruptions represent a form of bottleneck shifting because the firm’s constrained resources change over time. The firm’s disruption exposure is driven by (i) lost orders in the disruption stage due to part constraints and (ii) lost orders in the recovery stage due to production capacity constraints. The firm would like to reduce its disruption exposure in both stages without incurring material incremental costs. To be practically useful, a solution must account for the operational reality that the firm’s part inventories are constantly changing. We show that this complexity can be exploited to solve the firm’s problem. First, we analytically prove that the firm’s disruption exposure is (i) decreasing at a decreasing rate with the inventory quantity of the disrupted part and (ii) decreasing at a decreasing rate with the inventory quantity of the non-disrupted parts. We then develop an optimization model to examine the practical implications of these effects using detailed data from our research partner, a large diversified manufacturing firm (DMF). With targeted changes to its inventory policies across a set of parts, DMF can reduce its aggregate disruption exposure by 52.6% to 55.4% while reducing its inventory holding and ordering costs by 1.7% to 8.1%. Achieving these results, however, requires inventory policy changes to many parts and trades off decreases in disruption exposure for some parts against increases for others. We introduce an alternative solution, a “strategic portfolio” of parts, that is simpler to implement, also inexpensive, and allows the firm to materially reduce its disruption exposure across all parts.
Holding inventory is known to mitigate a firm’s disruption exposure, but it is perceived to be costly relative to other mitigation strategies, such as supply chain insurance or cultivating alternative sources of supply. Our part portfolio approach overcomes these limitations, thereby adding a new strategy to management’s arsenal of disruption risk mitigation options.

Thursday, April 8, 2021

Virtual seminar via Zoom

Presenter: Negin Golrezaei

Title: Learning Product Rankings Robust to Fake Users


In many online platforms, customers’ decisions are substantially influenced by product rankings as most customers only examine a few top-ranked products. Concurrently, such platforms also use the same data corresponding to customers’ actions to learn how these products must be ranked or ordered. These interactions in the underlying learning process, however, may incentivize sellers to artificially inflate their position by employing fake users, as exemplified by the emergence of click farms. Motivated by such fraudulent behavior, we study the ranking problem of a platform that faces a mixture of real and fake users who are indistinguishable from one another. We first show that existing learning algorithms—that are optimal in the absence of fake users—may converge to highly sub-optimal rankings under manipulation by fake users. To overcome this deficiency, we develop efficient learning algorithms under two informational environments: in the first setting, the platform is aware of the number of fake users, and in the second setting, it is agnostic to the number of fake users. For both these environments, we prove that our algorithms converge to the optimal ranking, while being robust to the aforementioned fraudulent behavior; we also present worst-case performance guarantees for our methods, and show that they significantly outperform existing algorithms. At a high level, our work employs several novel approaches to guarantee robustness such as: (i) constructing product-ordering graphs that encode the pairwise relationships between products inferred from the customers’ actions; and (ii) implementing multiple levels of learning with a judicious amount of bi-directional cross-learning between levels.
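The paper's algorithms are more involved than can be reproduced here; as a hypothetical sketch of the first ingredient named above — a product-ordering graph built from pairwise comparisons of empirical click rates, with intervals widened so that a bounded number of fake clicks cannot flip an edge — one might write (the function, its parameters, and the margin rule are all assumptions for illustration):

```python
import math

def ordering_graph(clicks, views, n_fake=0, delta=0.05):
    """Hypothetical sketch: draw a directed edge i -> j when product i's
    empirical click rate exceeds product j's by more than the combined
    confidence radius plus the worst-case shift that `n_fake` fake users
    could induce in either estimate."""
    def bounds(c, v):
        radius = math.sqrt(math.log(2 / delta) / (2 * v))  # Hoeffding-style radius
        shift = n_fake / v                                  # worst-case fake-user shift
        p = c / v
        return p - radius - shift, p + radius + shift
    n = len(clicks)
    lo_hi = [bounds(clicks[i], views[i]) for i in range(n)]
    return {(i, j)
            for i in range(n) for j in range(n)
            if i != j and lo_hi[i][0] > lo_hi[j][1]}

# With clearly separated products, the graph recovers the true order:
edges = ordering_graph(clicks=[900, 500, 100], views=[1000, 1000, 1000])
```

The design choice the sketch illustrates: edges are asserted only when no adversary with the assumed budget could have manufactured the gap, so the inferred ranking degrades gracefully rather than converging to a manipulated one.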


Thursday, April 15, 2021


Presenter: Nitish Jain

Title: To be announced



Tuesday, April 20, 2021

Virtual seminar via Zoom

Presenter: Eduardo B. Andrade

Title: In Search of Moderation: How Counter-Stereotypical Endorsers Attenuate Polarization


Although it is well-established that people’s opinions and tastes are deeply divided along political lines, less is known on what can bring liberals and conservatives together. In this research, we examine whether and how counter-stereotypical political endorsement (i.e., individuals who endorse [reject] a policy that most from their own group are perceived to reject [endorse]) can help reduce political polarization. Across three studies conducted within the Brazilian context, we show that relative to controls and stereotypical endorsers, counter-stereotypical endorsers attenuate the association between an individual’s self-reported political orientation and his/her policy preferences. This is true for cannabis legalization (study 1), gun rights (study 2), and abortion (study 3). As important, we reveal that the attenuation in polarization is asymmetric. When a counter-stereotypical endorser supports a given policy (e.g., a conservative politician supporting cannabis legalization), it persuades in-groups (e.g., increasing support for cannabis legalization among conservatives) more than it dissuades out-groups (e.g., reducing support for cannabis legalization among liberals). The role of changes in beliefs about (a) policy effectiveness and (b) in-group social acceptance is discussed.

Thursday, April 29, 2021

Virtual seminar via Zoom

Presenter: Ming Hu

Title: Blockbuster or Long Tail? Competitive Strategy Under Network Effects


We provide a theory that unifies the blockbuster and long tail phenomena. Specifically, we analyze a model where a large number of firms compete in making market entry and product quality decisions and then sequentially arriving customers with (random) private preferences make purchase decisions based on product quality and historical sales. We show that a growing network effect always contributes to more sales concentration on a small number of products, supporting the blockbuster phenomenon. However, product variety and investments in quality, as an outcome of firms’ ex ante competitive decisions, may increase or decrease as the network effect grows. If the strength of the network effect is below a threshold, an increasing network effect will shift more sales towards the products with higher quality, preventing more products from entering the market ex ante and inducing firms to adopt the blockbuster strategy by making high-budget products. Otherwise, the network effect will easily cause the market to be concentrated on a few products; even some low-quality products may have a chance to become a “hit” due to luck. In this case, when the network effect is growing, the ex-ante equilibrium product variety will be wider, and firms make lower-budget products, a finding that is consistent with the long tail theory. We test our theory with the movie box office data and find strong supporting evidence.

Tuesday, May 4, 2021

Virtual seminar via Zoom

Presenter: Raffaella Sadun

Title: The Demand for Executive Skills


We use a large and unique corpus of job specifications for C-suite positions to document and explain heterogeneity in the skills demanded of high-level executives across firms. A novel algorithm maps the text for each executive search into six separate skill clusters that reflect cognitive, interpersonal, and operational dimensions. Patterns in the social skills cluster are particularly striking: it features the highest growth in the sample, it is relatively the most common cluster in CEO searches, and it is highly heterogeneous across firms. We propose a mechanism whereby executive social skills facilitate the exchange of problems between workers and managers; construct proxies for the need for such coordination; and show that these proxies correlate with the presence of social skills language. The results suggest that the varied structure of firms induces demand for different executive skills.

Tuesday, May 11, 2021

Virtual seminar via Zoom

Presenter: Samantha Keppler

Title: Crowdfunding the Front Lines: An Empirical Study of Teacher-Driven School Improvement


A widespread belief is that the traditional brick-and-mortar K–12 education system in the US is broken. The private sector, specifically education technology (EdTech) companies, have stepped in to try to help. In this paper, we study DonorsChoose, a nonprofit that works to improve the traditional brick-and-mortar system with a teacher crowdfunding platform. Given the constraints of working with the current struggling system, we ask whether DonorsChoose moves the needle on effectiveness and inequality. Combining DonorsChoose data with data on student test scores in Pennsylvania from 2012–2013 to 2017–2018, we find an increase in the number of DonorsChoose projects funded at a school leads to higher student performance, after controlling for selection biases. In high schools, a 10% increase in the number of funded projects leads to a 0.1 to 0.2 percentage point (pp) increase in students scoring basic and above in all tested subjects. A 10% increase in the number of funded projects at an elementary or middle school leads to a 0.06 pp increase in the percentage of students scoring basic and above in language arts and a 0.15 pp increase in science. We find these effects are driven primarily by teacher projects from the lowest income schools, suggesting the platform helps reduce inequality in educational outcomes. Based on a textual analysis of thousands of statements from all funded teachers describing how resources are used, we find two channels of improvement uniquely effective in the lowest income schools. Our study suggests that those in the education sector can harness the wisdom of front-line workers — teachers — to improve effectiveness, efficiency, and equity.


Seminars 2019-2020

Tuesday, September 3, 2019

Presenter: Joshua Lewis

Title: Prospective Outcome Bias: Incurring (Unnecessary) Costs to Achieve Outcomes That Are Already Likely



How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we offer a theory called prospective outcome bias. According to this theory, people tend to make decisions that they expect to feel good about after the outcome has been realized. Because people expect to feel best about decisions that are followed by successes – even when the decisions did not cause those successes – they will pay more to increase their chances of success when success is already likely (e.g., people will pay more to increase their probability of success from 80% to 90% than from 10% to 20%). We find evidence for prospective outcome bias in nine experiments. In Study 1, we establish that people evaluate costly decisions that precede successes more favorably than costly decisions that precede failures, even when the decisions did not cause the outcome. Study 2 establishes, in an incentive-compatible laboratory setting, that people are more motivated to increase their chances of success when those chances are already high. Studies 3-5 generalize the effect to other contexts and decisions, and Studies 6-8 indicate that prospective outcome bias, rather than regret aversion, waste aversion, goals-as-reference-points, probability weighting, or loss aversion, drives the effect. Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: people prefer small increases in the probability of large rewards (e.g., a 1% improvement in their chances of winning $100) to large increases in the probability of small rewards (e.g., a 10% improvement in their chances of winning $10).
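What makes the willingness-to-pay gap a bias is that, for a risk-neutral decision-maker, the two boosts above are worth exactly the same; a line of arithmetic makes this explicit (the $100 prize is an assumed stake for illustration):

```python
def value_of_boost(p_before, p_after, prize=100.0):
    """Expected-value gain from raising the success probability,
    for a risk-neutral agent with an assumed $100 prize."""
    return (p_after - p_before) * prize

low = value_of_boost(0.10, 0.20)   # 10% -> 20%, the unlikely case
high = value_of_boost(0.80, 0.90)  # 80% -> 90%, the already-likely case
# Both boosts are worth $10 in expectation, so paying more for the
# second one cannot be explained by expected value alone.
```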



Tuesday, September 10, 2019

JMHH – Room 350

Presenter: Edward Chang

Title: Understanding What Drives Diversity-Related Hiring Decisions in Organizations


Using archival field data and experiments, I provide evidence of novel factors that influence diversity-related hiring decisions in organizations. First, I explore the implications of impression management as a driver of diversity. If organizations have impression management concerns around diversity, they may strive to match the levels of diversity found among peer organizations, thereby conforming to the descriptive social norm for diversity. I examine this prediction in the context of gender diversity on U.S. corporate boards and find that significantly more S&P 1500 boards include exactly two women (the descriptive social norm) than would be expected by chance. Experimental data corroborate these findings and provide additional evidence that social norms, visibility, and impression management concerns all affect organizational preferences for diversity. Second, I explore how a common feature of personnel selection decisions – the fact that they are made in isolation – can affect the diversity of hired candidates. In a series of experiments, I show that individuals select less diversity when making decisions in isolation, as opposed to making collections of choices, because diversity is less salient in isolated decisions. Together, these projects illuminate novel factors that determine when and why organizations demand diversity. Understanding these factors can provide guidance about potential interventions to increase diversity in organizations.

Tuesday, September 24, 2019

JMHH – Room 350

Presenter: Ken Moon

Title: Matching in Online Marketplaces when Talent is Difficult to Discern


We study the problem of assigning workers to short-term jobs in online marketplaces. In settings where workers’ most relevant skills and attributes are readily observed (e.g., Uber), the marketplace platform should clearly prioritize matches of the workers with the best attributes. However, in many important and growing settings, workers are distinguished in quality by skills and attributes that are difficult to measure at scale. Information about these attributes can be acquired by marketplace participants only with effort and cost (e.g., interviewing) – in particular, mounting evidence suggests that reputational systems do not bridge the gap. We expect marketplaces to increasingly encounter this challenge as the online gig economy expands from its current niche, at 0.5% of the overall US labor force.

We use data covering millions of job postings and transactions on a major online platform for sourcing freelance labor. We structurally estimate employers’ demand preferences, including the extent to which they hire based on uncertain information about workers’ quality-relevant competencies, in a setting featuring an asymptotically large number of choices (freelancers) sorted into essentially unique consideration sets (rather than each choice being one of a large number of interchangeable instances). We recommend how and when the platform should prioritize matching for compatible skills, matching for repeat relationships, and matching that encourages exploration.

Seminars 2018-2019

Tuesday, April 30, 2019

Presenter: Pei-yu Chen

Title: From Data to Actionable Analytics: The Magical Power of Individual Shopping Time Habit


I will share some of my recent work on deriving actionable analytics from large scale data. This talk will focus on a project that aims to understand and measure individual shopping time habits and their effects. Little research has focused on online shopping habits, particularly concerning time, missing the opportunity to improve important outcomes through the simple, innovative use of time. Based on a unique dataset that includes reviews as well as pertinent purchases at the individual level from a large online retailer, this study investigates whether consumers exhibit time habits for online shopping and whether following such time habits affects their satisfaction and re-visit behavior. We employ activity-based metrics to assess shopping time habits at the individual level, and results show that consumers form shopping time habits and that they obtain higher satisfaction and exhibit greater re-visit behavior when the timing of their shopping follows those habits. While prior work has documented that consumers exhibit time habits for physical shopping, driven mostly by time and location constraints, this study is the first, to our knowledge, to examine online shopping time habits and, most importantly, their effects on consumer satisfaction and re-visit behavior. With the availability of detailed individual transaction data in online shopping and advances in technology for providing personalized services, which enable companies to act on knowledge of individual behaviors, this research provides important practical implications for system and website design, marketing strategy, and customer relationship management.




Tuesday, April 23, 2019

Presenter: Ashton Anderson

Title:  Assessing Human Error Against a Benchmark of Perfection


An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging for even the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.



Tuesday, April 9, 2019

Presenter: Costis Maglaras

Title: The role of operational controls and driver-side pricing in ride-hailing networks


I will discuss two separate projects. The first explores the role of operational controls in a stylized ride-hailing network with strategic drivers, and characterizes the value of platform admission and matching controls on equilibrium behavior; one particular finding is that the platform may proactively wish to deny a match to arriving riders at a particular location, even when local driver capacity is available, so as to aggravate driver queueing and incentivize them to reposition elsewhere in the network (a form of damaged goods). The second project, still preliminary, explores how the platform can use dynamic driver-side pricing to optimize transient network behavior around a random demand shock. We explore this issue with emphasis on the interplay between the timescale of the demand shock duration, rider delay patience, and driver transportation delay. We find that personalized driver pricing (location- and time-dependent as a function of the demand shock location and start time) improves performance significantly. Additionally, the platform can improve performance by either sharing some of the risk with drivers, or tactically managing the driver perceived risk of not arriving in time to the hotspot to benefit from the surge.


Both are joint work with Zhe Liu and Philipp Afeche


Tuesday, April 2, 2019

Presenter: Georgia Perakis

Title: Retail Analytics: High-Low Promotion Optimization for Peak-End Demand Models


Promotions are a key instrument for driving sales for retailers. As a result, retailers know it is crucial to optimize the timing and depth of promotions in order to maximize profits.  But how should they process the data necessary to determine optimal pricing and timing? Left to the intuition of product managers, retailers risk missing out. This talk discusses a body of work that addresses the problem of promotion pricing. We address both how to predict demand effectively as well as how to optimize promotions using tools of optimization and machine learning. The demand model we discuss uses features such as current and past period price but also the minimum price set within a set of past periods (that we refer to as memory) among other features. Using this demand model (that we refer to as the bounded memory peak-end demand model) we propose a compact dynamic programming model for determining optimal promotions for both the single item as well as the multiple item problem. We analyze the promotion effects and illustrate that the bounded memory peak-end demand model yields high-low optimal strategies. In addition, we illustrate that the methods we introduce are computationally efficient and “easy” to use in practice. We also illustrate the performance of our models using data through our collaboration with the Oracle Retail Business Unit and discuss the overall practical impact of our approach to Oracle RGBU clients. We show how our approach has the potential to help retailers increase profits by an average of 3-10 percent. In a world of slim profit margins, this could be a game changer for retailers.
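The paper's demand model and calibration are not reproduced here; as a small sketch of the kind of compact dynamic program described above, with an assumed two-point price set, an assumed memory length M, and an assumed demand form in which shoppers respond to the lowest price they remember:

```python
from functools import lru_cache

# Illustrative high-low promotion DP.  The price points, memory length M,
# horizon, and demand function are assumptions for this sketch, not the
# paper's calibration: demand spikes when today's price undercuts the
# minimum price remembered over the last M periods (the reference price).
PRICES = (1.0, 0.6)   # (regular, promotion)
M, HORIZON = 2, 8

def demand(price, history):
    ref = min(history + (1.0,))                       # remembered minimum price
    return max(0.0, 2.0 - price + 4.0 * max(0.0, ref - price))

@lru_cache(maxsize=None)
def best_profit(t, history):
    """Maximum profit from period t onward; the state is the last M prices."""
    if t == HORIZON:
        return 0.0
    return max(price * demand(price, history)
               + best_profit(t + 1, (history + (price,))[-M:])
               for price in PRICES)

optimal = best_profit(0, ())
```

Under these assumed parameters the optimal plan is high-low: promote, then hold the regular price until the promotion fades from memory, then promote again; the bounded memory keeps the state space, and hence the DP, compact.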

Tuesday, March 26, 2019

Presenter:  Sam Ransbotham

Title: Open Source Code and the Risk of Attacks


Vulnerabilities in software may be found before or after release.  For open source software, security attention frequently focuses on the discovery of vulnerabilities prior to release.  For example, the large number of diverse people who can view the source code may find vulnerabilities before the software product is released.  As a result, open source software has the potential to be more secure than closed source software.   Unfortunately, for vulnerabilities found after release, the benefits of access to source code may now work against open source software security.  Attackers may be more likely to exploit discovered vulnerabilities since they too can view the source code and can use it to learn the details of a weakness and how best to exploit it.  This research focuses on post-release exploitation attempts, contrasting vulnerabilities discovered in open source software with those based on closed source software.  Empirical analysis of two years of security alert data from intrusion detection systems indicates that once discovered, open source software vulnerabilities are at greater risk of exploitation.

Tuesday, March 19, 2019

Presenter:  Mohamed Mostagir

Title: Dynamic Contest Design: Theory, Experiments, and Applications


Contests are a common mechanism for extracting effort from participants. Their use is widespread in a variety of settings like workplace promotions, crowdsourcing innovation, and healthcare quality.  One of the pivotal aspects of contest design is the contest’s information structure: what information should the contest designer provide to participants and when should this information be revealed?  The answers to these questions directly impact the behavior of players and the outcome of the contest, and also have broader implications for institutional and policy design.  We derive the contest’s optimal information disclosure policy within a large class of policies and design a novel experiment to evaluate how these policies perform in the lab.

Tuesday, March 12, 2019

Presenter:  Kelly Shue

Title: Can the Market Multiply and Divide? Non-Proportional Thinking in Financial Markets


When pricing financial assets, rational agents should think in terms of proportional price changes, i.e., returns. However, stock price movements are often reported and discussed in dollar rather than percentage units, which may cause investors to think that news should correspond to a dollar change in price rather than a percentage change in price. Non-proportional thinking in financial markets can lead to return underreaction for high-priced stocks and overreaction for low-priced stocks. Consistent with a simple model of non-proportional thinking, we find that total volatility, idiosyncratic volatility, and absolute market beta are significantly higher for stocks with low share prices, controlling for size. To identify a causal effect of price, we show that volatility increases sharply following stock splits and drops following reverse stock splits. The economic magnitudes are large: non-proportional thinking can explain the “leverage effect” puzzle, in which volatility is negatively related to past returns, as well as the volatility-size and beta-size relations in the data. We also show that low-priced stocks drive the long-run reversal phenomenon in asset pricing, and the magnitude of long run reversals can be sorted by price, holding past returns and size constant. Finally, we show that non-proportional thinking biases reactions to news that is itself reported in nominal rather than scaled units. Investors react to nominal earnings per share surprises, after controlling for the earnings surprise scaled by share price. The reaction to the nominal earnings surprise reverses in the long run, consistent with correction of mispricing.
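
The mechanism the abstract describes can be illustrated with invented numbers: if investors respond to news with a fixed *dollar* price move rather than a fixed *percentage* move, identical news produces much larger returns, and hence higher return volatility, for low-priced stocks.

```python
import statistics

# The same sequence of news "shocks", expressed in dollars (invented data).
dollar_shocks = [1.0, -0.5, 0.8, -1.2, 0.4]

def return_vol(share_price):
    # Under dollar thinking, the return from each shock is shock / price,
    # so return volatility scales inversely with the share price.
    return statistics.pstdev(s / share_price for s in dollar_shocks)

low_vol, high_vol = return_vol(10.0), return_vol(100.0)
# A $10 stock ends up with ten times the return volatility of a $100 stock.
```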

Tuesday, September 25, 2018

Presenter:  Jacob Abernethy

Title: Emerging tools for sequential decision making, with applications in learning and game-playing


In this talk we will explore algorithmic tools for solving sequential decision and prediction problems. These methods have grown quite popular in recent years given their scalability, their broad use in practice, and their reliance on much weaker statistical assumptions. We will begin by exploring a couple of key applications: (a) adaptive pricing for revenue maximization of a monopolist seller, and (b) the search for lead pipes in Flint, MI. But we will then turn our attention to a more foundational result, which is the solution of zero-sum games using so-called “no-regret algorithms”. We will describe recent work that allows us to view several classical iterative optimization methods through the lens of game theory.
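
A minimal sketch of the foundational result mentioned above (not the speaker's code; game matrix and step size are invented): two multiplicative-weights no-regret learners play a zero-sum matrix game against each other, and the *average* of their strategies converges to an approximate minimax equilibrium.

```python
import numpy as np

# Payoff to the row player (invented 2x2 game; the exact equilibrium has
# both players mixing 40/60, with game value 0.2).
A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])

def solve_by_self_play(A, rounds=40000, eta=0.005):
    n, m = A.shape
    gx, gy = np.zeros(n), np.zeros(m)       # cumulative payoffs per action
    avg_x, avg_y = np.zeros(n), np.zeros(m)
    for _ in range(rounds):
        # Multiplicative weights = softmax of cumulative payoffs (stabilized).
        x = np.exp(eta * (gx - gx.max())); x /= x.sum()
        y = np.exp(eta * (gy - gy.max())); y /= y.sum()
        avg_x += x; avg_y += y
        gx += A @ y                          # row player earns A @ y
        gy -= A.T @ x                        # column player pays A.T @ x
    return avg_x / rounds, avg_y / rounds

x_bar, y_bar = solve_by_self_play(A)
game_value = float(x_bar @ A @ y_bar)        # close to the true value 0.2
```

The no-regret guarantee is what makes this work: since each player's average regret vanishes, the pair of average strategies has vanishing exploitability, which is exactly the minimax property.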

Tuesday, October 2, 2018

Presenter:  John Beshears

Title: Borrowing to Save?  The Impact of Automatic Enrollment on Debt


Automatic enrollment in defined contribution retirement savings plans is one of the most widely recognized applications of behavioral science in a managerial setting. Previous research documents that automatic enrollment increases average savings plan contributions. But how much of the retirement savings induced by automatic enrollment is offset by increased borrowing outside the savings plan? We study a natural experiment created when the U.S. Army began automatically enrolling its newly hired civilian employees into the Thrift Savings Plan (TSP) at a default contribution rate of 3% of income. Four years after hire, automatic enrollment causes no significant change in debt excluding auto loans and first mortgages (point estimate = 0.9% of income, 95% confidence interval = [-0.9%, 2.7%]). Automatic enrollment does significantly increase auto loan balances by 2.0% of income and first mortgage balances by 7.4% of income. Because we do not observe car or home values, we do not know whether this new debt is offset by greater accumulation of the assets on which these collateralized debts were issued.

Tuesday, October 9, 2018
(JMHH 270)

Presenter:  Abigail Sussman

Title: Mental Accounting Failures: The Case of Exceptional Consumption


Expenses fall along a continuum from ordinary (common or frequent) to exceptional (unusual or infrequent), with many of the largest expenses (e.g., electronics, celebrations) being the most exceptional. In the current project, I show that consumers are fairly adept at budgeting and predicting how much they will spend on ordinary items, but they both underestimate their spending on exceptional purchases overall and overspend on each individual purchase.  Based on the principles of mental accounting and choice bracketing, I show that this discrepancy arises in part because consumers have difficulty categorizing and tracking exceptional expenses. Specifically, consumers are less likely to draw connections between exceptional (versus ordinary) items and other items they purchase, and less likely to believe that consumption of these items meaningfully impacts their budgets.  The current research extends findings beyond the domain of money and financial budgeting by drawing parallels to food and caloric budgeting, providing evidence that exceptional items create a common set of challenges for consumers across contexts.  I conclude by examining how we can harness our understanding of exceptional items to help improve outcomes for consumers. First, I provide evidence from both lab and field settings that exceptional framing of an identical charitable opportunity increases willingness to donate. Second, I provide evidence that an intervention that helps consumers consider their spending on exceptional items as part of a larger set of purchases reduces spending on these products.

Tuesday, October 16, 2018
(JMHH 370)

Presenter:  Panos Ipeirotis

Title: Targeted Crowdsourcing with a Billion (Potential) Users


We describe Quizz, a gamified crowdsourcing system that simultaneously assesses the knowledge of users and acquires new knowledge from them. Quizz operates by asking users to complete short quizzes on specific topics; as a user answers the quiz questions, Quizz estimates the user’s competence. To acquire new knowledge, Quizz also incorporates questions for which we do not have a known answer; the answers given by competent users provide useful signals for selecting the correct answers for these questions. Quizz actively tries to identify knowledgeable users on the Internet by running advertising campaigns, effectively leveraging “for free” the targeting capabilities of existing, publicly available, ad placement services. Quizz quantifies the contributions of the users using information theory and sends feedback to the advertising system about each user. The feedback allows the ad targeting mechanism to further optimize ad placement. Our experiments, which involve over ten thousand users, confirm that we can crowdsource knowledge curation for niche and specialized topics, as the advertising network can automatically identify users with the desired expertise and interest in the given topic. We present controlled experiments that examine the effect of various incentive mechanisms, highlighting the need for having short-term rewards as goals, which incentivize the users to contribute. Finally, our cost-quality analysis indicates that the cost of our approach is below that of hiring workers through paid-crowdsourcing platforms, while offering the additional advantage of giving access to billions of potential users all over the planet, and being able to reach users with specialized expertise that is not typically available through existing labor marketplaces.
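
One hedged reading of the information-theoretic accounting (our simplification, not necessarily Quizz's exact formula): with a uniform prior over a question's options, a single vote from a user of estimated accuracy `a` shifts the posterior over the correct answer, and the resulting entropy reduction measures that user's contribution in bits.

```python
import math

def entropy_bits(p):
    return -sum(q * math.log2(q) for q in p if q > 0)

def info_gain(accuracy, n_options=4):
    # Posterior over the correct answer after one vote from a user whose
    # estimated accuracy is `accuracy`, starting from a uniform prior:
    # probability `accuracy` on the chosen option, the rest split evenly.
    posterior = [accuracy] + [(1 - accuracy) / (n_options - 1)] * (n_options - 1)
    return math.log2(n_options) - entropy_bits(posterior)

# A random guesser (accuracy 1/4 on 4 options) contributes nothing;
# a 90%-accurate user contributes more than a bit per answer.
```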

Tuesday, October 23, 2018
(JMHH 370)

Presenter:  Nir Halevy

Title: The Calculus of Peacemaking


Third parties have acted as peacemakers since the dawn of history. However, little is known about the causes and consequences of voluntary, informal third-party intervention in conflict. A series of experiments investigated when, why, and how third parties intervene in others’ conflicts, transform them, and promote cooperation. Overall, this program of research finds that: (a) The mere possibility of third-party intervention is sufficient to increase cooperation among disputants; (b) Third parties’ willingness to intervene critically depends on their ability to secure gains and avoid costs to themselves; (c) The positive effects of introducing third-party intervention are evident even following a history of conflict; and (d) these effects persist even after the third party can no longer intervene. These findings are discussed in the context of a broader conceptual framework that considers when, why, and how third parties influence others’ interactions and relationships, for better or worse.

Tuesday, October 30, 2018
(JMHH 370)

Presenter:  Tianshu Sun

Title: Displaying Things in Common to Encourage Friendship Formation: A Large Randomized Field Experiment


Friendship formation is of central importance to online social network sites and to society. In this study, we investigate whether and how displaying things in common (TIC) between users (mutual hometown, interest, education, work, city) may encourage friendship formation. Displaying TIC computed from big data may update an individual’s belief about the similarity shared with another person and reduce information friction that may be hard to overcome in offline communication. In collaboration with a large online social network, we design and implement a randomized field experiment involving over 50 million viewer-profile pairs, which randomly varies the prominence of things-in-common information when a user (viewer) is browsing a non-friend’s profile. The dyad-level exogenous variation, orthogonal to any unobserved factors in the viewer-profile pair’s network, allows us to cleanly isolate the role of individuals’ preference (over TIC) in driving network formation and homophily. We find that displaying TIC to viewers may significantly increase their probability of sending a friend request and forming a friendship, and is especially effective for viewer-profile pairs who have little in common (with no mutual friends and only one or two things in common). Such findings suggest that information intervention is effective in encouraging the formation of weak ties, and they also provide the first experimental evidence on the role of individuals’ preference (versus structural factors) in network formation. We further explore heterogeneity in the effect and demonstrate that displaying TIC can improve friendship formation for a wide range of viewers with different characteristics. Finally, we propose an information-theoretic model to characterize the belief-update process underlying a viewer’s decision, and provide evidence that displaying TIC is more effective when the TIC information is more surprising to the viewer. The insights and information-theoretic framework can guide the optimal design of information display in the friendship-formation process.

Seminars 2017-2018

Spring 2018

Tuesday, January 30, 2018

Presenter:  Karan Girotra

Title: Bike Share Systems


The cities of Paris, London, Chicago, and New York (among many others) have set up bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the impact of two facets of system performance on bike-share ridership: accessibility (how far the user must walk to reach stations) and bike-availability (the likelihood of finding a bicycle). We obtain these estimates from a structural demand model for ridership estimated using data from the Vélib’ system in Paris. We find that every additional meter of walking to a station decreases a user’s likelihood of using a bike from that station by 0.194% (±0.0693%); the reduction is even more significant at greater distances (>300m). These estimates imply that almost 80% of bike-share usage comes from areas within 300m of stations, highlighting the need for dense station networks. We find that a 10% increase in bike-availability would increase ridership by 12.211% (±1.097%), three-fourths of which comes from fewer abandonments, and the rest from increased user interest. We illustrate the use of our estimates in comparing the effect of adding stations or increasing bike-availability in different parts of the city and at different times, and in evaluating other proposed improvements.
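
A back-of-the-envelope reading of the headline estimate (our interpretation of the per-meter figure as a multiplicative decline; the paper's structural model is richer):

```python
PER_METER_DECLINE = 0.00194   # 0.194% lower use likelihood per extra meter walked

def relative_likelihood(distance_m):
    # Likelihood of using a station `distance_m` away, relative to one at the door.
    return (1 - PER_METER_DECLINE) ** distance_m

# Under this reading, at 300 m a rider is only a bit over half as likely to use
# the station, consistent with usage being concentrated near stations.
```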

Tuesday, February 13, 2018

Presenter:  Natalia Levina

Title: Organizational Impacts of Crowdsourcing: What Happens with “Not Invented Here” Ideas?


Recent work on the organizational impacts of crowdsourcing suggests a number of difficulties, including not only the familiar difficulties associated with learning from outside, but also difficulties specific to paying proper attention to a large number of crowdsourced submissions. How does relying on consulting differ from relying on crowdsourcing, as two modes of open innovation, when it comes to the potential impact of each on an organization’s ability to learn novel business and scientific insights? What can we learn from this comparison about how ideas are evaluated when they come from different external sources? We investigate these differences in the context of an in-depth longitudinal field study of an R&D organization that engaged both innovation consulting and crowdsourcing at the same time to address one of its critical R&D problems. We draw on and contribute to the literature on open innovation by elaborating how different practices of engagement shaped the potential for impact of various external ideas that were voiced, or stayed silent, in the process.

Tuesday, February 20, 2018

Presenter:  Diana Tamir

Title: Making Predictions in the Social World


The social mind is tailored to the problem of predicting other people. Imagine trying to navigate the social world without understanding that tired people tend to become frustrated, or that mean people tend to lash out. Our social interactions depend on the ability to anticipate others’ actions, and we rely on knowledge about their states (e.g., tired) and traits (e.g., mean) to do so. I will present a multi-layered framework of social cognition that helps to explain how people represent the richness and complexity of others’ minds, and how they use this representation to predict others’ actions. Using both neuroimaging and Markov modeling, I demonstrate how the social mind might leverage both the structure and dynamics of mental state representations to make predictions about the social world.

Tuesday, February 27, 2018

Presenter:  Ajay Agrawal

Title: Decision Making with Artificial Intelligence: Prediction, Judgment, and Complexity


We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction in decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depend a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries.

Tuesday, March 13, 2018

Presenter:  Stephen Spiller

Title: Judgments Based on Stocks and Flows: Different Presentations of the Same Data Can Lead to Opposing Inferences


Measurements of a quantity over time can be presented as stocks (the total quantity at each point of time) or flows (the change in quantity between each point of time). We show that the choice of presenting data as stocks or flows can have a consequential impact on judgments. The same data can lead to positive or negative assessments when presented as stocks versus flows and can engender optimistic or pessimistic forecasts for the future. For example, when employment data from 2007 to 2013 are shown as flows (jobs created or lost), President Obama’s impact on the economy is viewed positively, whereas when presenting the same data as stocks (total jobs), his impact is viewed negatively. We document the data patterns likely to engender these inconsistencies, show they are robust to non-graphical data representations, and occur even when people can accurately transform the data between stocks and flows.
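
The framing effect is easy to reproduce with toy numbers (invented, echoing the employment example): the same path looks encouraging as a flow and discouraging as a stock.

```python
# Stock view: total jobs at each period (invented series).
totals = [100, 96, 93, 94, 96, 98]

# Flow view: net change between consecutive periods.
flows = [b - a for a, b in zip(totals, totals[1:])]
# flows == [-4, -3, 1, 2, 2]

# The flow view highlights a recent run of gains (+1, +2, +2), while the
# stock view highlights that totals remain below where they started.
recent_gains = all(f > 0 for f in flows[-3:])
still_below_start = totals[-1] < totals[0]
```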

Tuesday, March 20, 2018

Presenter:  Elena Belavina

Title: Grocery Store Density and Food Waste


We study the impact of grocery-store density on the food waste generated at stores and households. Food waste is a major contributor to carbon emissions (as big as road transport). Identifying and influencing market conditions that can decrease food waste is thus important to combat global warming. We build and calibrate a stylized two-echelon perishable-inventory model to capture grocery purchases and expiration at competing stores and households in a market. We examine how the equilibrium waste in this model changes with store density.

An increase in store density decreases consumer waste due to improved access to groceries, while increasing retail waste due to decentralization of inventory, increased variability propagation in the supply chain (cycle truncation) and diminished demand by customers. Higher density also induces more competition which further increases (decreases) waste when stores compete on prices (service-levels). Overall, consumer waste reductions compete with store waste increases and the effects of increased competition. Our analysis shows that higher density reduces food waste up to a threshold density; it leads to higher food waste beyond this threshold. Put differently, in so far as food waste is concerned, there exists an optimal store density.

Calibration using grocery-industry, economic, and demographic data reveals that actual store density in most American cities is well below this threshold/optimal level, and modest increases in store density substantially reduce waste; e.g., in Chicago, just 3-4 more stores (per 10 sq-km) can lead to a 6-9% waste reduction and a 1-4% decrease in grocery expenses. These results arise from the principal role of consumer waste, suggesting that activists’ and policy makers’ focus on retail waste may be misguided. Store operators, urban planners, and decision makers should aim to increase store densities to make grocery shopping more affordable and sustainable.

Tuesday, March 27, 2018

Presenter:  Terry Taylor

Title: On-Demand Service Platforms: Worker Independence and Welfare


An on-demand service platform connects waiting-time-sensitive customers with service-providing workers. This talk addresses two topics. First, a defining feature of an on-demand service platform is that the workers are independent contractors rather than employees. We examine the implications of this worker independence for the platform’s optimal decisions (e.g., prices). Second, platforms’ efforts to aggressively recruit workers have been controversial. Some labor advocates have argued that an expansion in a platform’s labor supply hurts workers, who see, as a consequence of the expansion, less work and lower income. We examine the extent to which the interest of platforms in increasing labor supply is indeed at odds with that of workers.

Tuesday, April 3, 2018

Presenter:  Srikanth Jagabathula

Title: The Limit of Rationality in Choice Modeling: Formulation, Computation, and Implications


Customer preferences may not be rational, and therefore we focus on quantifying the limit of rationality (LoR) in choice modeling applications. We define LoR as the “cost” of approximating the observed choice fractions from a collection of offer sets with those from the best-fitting probability distribution over rankings. Computing LoR is intractable in the worst case. To tackle this challenge, we introduce two new concepts, rational separation and the choice graph, with which we reduce the problem to solving a dynamic program on the choice graph and express the computational complexity in terms of structural properties of the graph. By exploiting the graph structure, we provide practical methods to compute LoR efficiently for a large class of applications. We apply our methods to real-world grocery sales data from the IRI Academic Dataset and identify product categories for which going beyond rational choice models is necessary to obtain acceptable performance.
Joint with: Paat Rusmevichientong, USC Marshall

Tuesday, April 10, 2018

Presenter:  Marcelo Olivares

Title: Managing Worker Utilization in Service Platforms: An Empirical Study of an Outbound Call-Center


In many service industries, providing prompt response to customers can be an important competitive advantage, especially when customers are time-sensitive. When demand for the service is variable and staffing requirements cannot be adjusted quickly, capacity decisions require a trade-off between responsiveness to customers and controlling operating costs through worker utilization. To break this trade-off, the service system can operate as a platform with access to a large pool of employees with flexible working hours who are compensated through piece-rates. Examples of these service platforms can be found in transportation, food delivery, and customer contact centers, among many others. While this business model can operate at low levels of utilization without increasing operating costs, a different trade-off emerges: in settings where employee training and experience are important, the service platform must control employee turnover, which may increase when employees are working at low levels of utilization. Hence, to make staffing decisions and manage workload, it is necessary to understand both customer behavior (measuring customers’ sensitivity to service times) and employee retention. We analyze this trade-off in the context of an outbound call center that operates with a pool of flexible agents working remotely, selling auto insurance. We develop an econometric approach to model customer behavior that captures special features of outbound calls, time-sensitivity, and the effect of employee experience. A survival model is used to measure how agent retention is affected by the assigned workload. These empirical models of customers and agents are combined to illustrate how to balance time-sensitivity and employee experience, showing that both effects are relevant in practice for planning workload and staffing in a service platform.

(joint work with Andres Musalem and Daniel Yung)

Tuesday, April 17, 2018

Presenter:  Paat Rusmevichientong

Title: A New Approach in Approximate Dynamic Programming for Revenue Management of Reusable Products


We present a new approach in approximate dynamic programming for revenue management of reusable products. The problem is motivated by emerging industries that rent out computing capacity and fashion items, where customers request products on-demand, use the products for a random duration of time, and afterward return the products back to the firm. The goal is to find a policy that determines what products to offer to each customer to maximize the total expected revenue over a finite selling horizon. For this problem, the firm must simultaneously consider the inventories of available products, along with the products that are currently in use by other customers. So, the resulting dynamic programming formulation is intractable because of the high-dimensional state variable.

Using a novel approach for constructing an affine approximation to the value functions, we present a policy that is guaranteed to obtain at least 50% of the optimal expected revenue. Our construction is based on a simple and efficient backward recursion. We provide computational experiments based on the parking transaction data in Seattle. Our numerical experiments demonstrate that the practical performance of our policy is substantially better than its worst-case performance guarantee.

Joint work with Huseyin Topaloglu and Mika Sumida (Cornell Tech)

Fall 2017

Tuesday, September 5th, 2017

Presenter:  Sharad Goel

Title: Algorithmic Decision Making and the Cost of Fairness


Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. We reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
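
The contrast between the two optimal rules can be made concrete with a hypothetical sketch (scores, group labels, and thresholds all invented): the unconstrained optimum applies one uniform risk threshold to everyone, while certain fairness constraints induce group-specific thresholds, and the two rules disagree on some defendants.

```python
def detain_uniform(risk, threshold=0.5):
    # Unconstrained optimum: a single threshold applied to all defendants.
    return risk >= threshold

def detain_constrained(risk, group, thresholds={"g1": 0.6, "g2": 0.5}):
    # Rule induced by a parity constraint: group-specific thresholds.
    return risk >= thresholds[group]

# A defendant in group g1 with risk 0.55 is detained under the uniform rule
# but released under the constrained rule -- the source of the trade-off.
disagreement = detain_uniform(0.55) != detain_constrained(0.55, "g1")
```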



Tuesday, September 19th, 2017

Presenter:  Marshall Van Alstyne

Title: The Role of APIs in Firm Performance


Do firms benefit from external as well as internal performance enhancements? Using proprietary information from a significant fraction of the API tool provision industry, we explore the impact of API adoption and complementary investments on firm performance. Data include external as well as internal developers. We use a difference-in-differences approach centered on the date of first API use to show that API adoption — measured both as a binary treatment and as a function of the number of calls and amount of data processed — is related to increased sales, operating income, and decreased costs. It is especially tightly related to increased market value. In a specification with year and firm fixed effects, binary API adoption predicts a 4.5% increase in a firm’s market value. Creation of API developer portals is associated with a decrease in R&D expenditure inside the firm, supporting the hypothesis that outside development substitutes for internal development. Categorizing APIs by their orientation, we find that B2B, B2C, and internal API calls are heterogeneous in their association with financial outcomes. Finally, the fact that increases in API calls are associated with contemporaneous increases in firm value suggests that data flow at the boundary of the firm can be used for stock market trading.

Tuesday, September 26th, 2017

Presenter:  Heather Sarsons

Title: Interpreting Signals: Evidence from Doctor Referrals


This project asks whether someone’s gender influences the way we interpret signals about his or her ability, and the implications this has for her career trajectory. Using data on referrals from primary care physicians (PCPs) to surgeons, I show that PCPs view good and bad patient outcomes differently depending on the performing surgeon’s gender. If a PCP refers a patient to a surgeon and the patient dies during surgery, the PCP is less likely to refer to that surgeon in the future but is significantly less likely to do so if the surgeon is female. Conversely, PCPs are more likely to refer to surgeons after a good patient outcome, but are significantly more likely to do so if the surgeon is male. I provide evidence that this is not driven by surgeon behaviour or by differences in underlying patient risk. I discuss the results in the context of a standard Bayesian updating model as well as a model of confirmation bias.

Tuesday, October 3rd, 2017

Presenter:  Jake Hofman

Title: How Predictable is the Spread of Information?


How does information spread in online social networks, and how predictable are online information diffusion events?

Despite a great deal of existing research on modeling information diffusion and predicting “success” of content in social systems, these questions have remained largely unanswered for a variety of reasons, ranging from the inability to observe most word-of-mouth communication to difficulties in precisely and consistently formalizing different notions of success.

This talk will attempt to shed light on these questions through an empirical analysis of billions of diffusion events under one simple but unified framework.

We will show that even though information diffusion patterns exhibit stable regularities in the aggregate, it remains surprisingly difficult to predict the success of any particular individual or single piece of content in an online social network, with our best-performing models explaining only half of the empirical variance in outcomes.

We conclude by exploring this limit theoretically through a series of simulations that suggest that it is the diffusion process itself, rather than our ability to estimate or model it, that is responsible for this unpredictability.
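The flavor of this unpredictability can be illustrated with a toy simulation. The sketch below is not the talk's framework: it models each cascade as a simple Galton–Watson branching process with invented parameters. Every cascade is run with identical settings, yet final sizes are heavily dispersed, so knowing the process does not pin down any single outcome.

```python
import random

def simulate_cascade(p=0.4, fanout=2, max_size=10_000, rng=random):
    """One diffusion cascade as a Galton-Watson branching process:
    each exposed node independently reshares to each of `fanout`
    neighbors with probability p. Returns the final cascade size."""
    size, frontier = 1, 1
    while frontier and size < max_size:
        new_frontier = 0
        for _ in range(frontier):
            for _ in range(fanout):
                if rng.random() < p:
                    new_frontier += 1
        size += new_frontier
        frontier = new_frontier
    return size

rng = random.Random(0)
sizes = [simulate_cascade(rng=rng) for _ in range(1000)]
mean = sum(sizes) / len(sizes)
var = sum((s - mean) ** 2 for s in sizes) / len(sizes)
# Identical parameters for every run, yet outcomes are highly dispersed:
# the variance of cascade sizes far exceeds the mean.
print(mean, var)
```

Even with the generating process fully known, the size of any single cascade remains essentially a draw from a heavy-tailed distribution.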

Tuesday, October 10th, 2017

Presenter:  Kostas Bimpikis

Title: Spatial Pricing in Ride-Sharing Networks


We explore spatial price discrimination in the context of a ride-sharing platform that serves a network of locations. Riders are heterogeneous in terms of their destination preferences and their willingness to pay for receiving service. Drivers decide whether, when, and where to provide service so as to maximize their expected earnings, given the platform’s prices. Our findings highlight the impact of the demand pattern on the platform’s prices, profits, and the induced consumer surplus. In particular, we establish that profits and consumer surplus are maximized when the demand pattern is “balanced” across the network’s locations. In addition, we show that they both increase monotonically with the balancedness of the demand pattern (as formalized by its structural properties). Furthermore, if the demand pattern is not balanced, the platform can benefit substantially from pricing rides differently depending on the location they originate from. Finally, we consider a number of alternative pricing and compensation schemes that are commonly used in practice and explore their performance for the platform.

Tuesday, October 17th, 2017 in JMHH F55 (note location change)

Presenter:  Eytan Bakshy

Title: Experimental Learning and Optimization 


Online experiments (“A/B tests”) are the workhorse of modern Internet development, yet these experiments are generally limited to evaluating the effects of only one or two variants.  In many cases, however, we are interested in evaluating the effects of thousands or a potentially infinite number of possible interventions, such as treatments parametrized by continuous variables, or dynamic personalized treatment regimes that map particular states to different actions.  I will discuss a new approach to large-scale field experimentation using Gaussian process regression models and Bayesian optimization to solve such multi-armed bandit problems.  Using empirical examples, I will show how we are able to effectively apply Bayesian modeling to both finite and infinite action spaces to make predictions about yet-to-be-observed treatments.   These models are combined with optimization procedures that produce demonstrable improvements to mobile software, infrastructure, and machine learning systems.
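For intuition, here is a minimal numpy sketch of the kind of loop the abstract describes: Gaussian-process regression over a discretized arm space, with an upper-confidence-bound rule choosing the next arm to field. The kernel, the stand-in outcome function, and all parameters are illustrative, not those of the production systems discussed in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf(a, b, ls=0.3):
    # Squared-exponential kernel between 1-D arrays of arm values.
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * ls ** 2))

def gp_posterior(x_obs, y_obs, x_new, noise=0.01):
    """GP posterior mean and std at x_new given noisy observations."""
    K = rbf(x_obs, x_obs) + noise * np.eye(len(x_obs))
    Ks = rbf(x_obs, x_new)
    K_inv = np.linalg.inv(K)
    mu = Ks.T @ K_inv @ y_obs
    var = np.diag(rbf(x_new, x_new) - Ks.T @ K_inv @ Ks)
    return mu, np.sqrt(np.clip(var, 0.0, None))

def metric(x):
    # Stand-in for the noisy outcome of fielding arm x in an experiment.
    return np.sin(3 * x) + 0.1 * rng.standard_normal()

candidates = np.linspace(0.0, 1.0, 50)   # continuous arm space, discretized
x_obs = list(rng.uniform(0.0, 1.0, 3))   # a few initial random arms
y_obs = [metric(x) for x in x_obs]
for _ in range(10):                      # UCB: favor high mean or high uncertainty
    mu, sd = gp_posterior(np.array(x_obs), np.array(y_obs), candidates)
    x_next = candidates[np.argmax(mu + 2.0 * sd)]
    x_obs.append(x_next)
    y_obs.append(metric(x_next))
mu, _ = gp_posterior(np.array(x_obs), np.array(y_obs), candidates)
best = candidates[np.argmax(mu)]
print(best)
```

The GP lets the experimenter make predictions about yet-to-be-fielded arms, so only a handful of the fifty candidate treatments ever need to be tried.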


Tuesday, October 31st, 2017

Presenter:  Tatiana Homonoff

Title: Do FICO Scores Influence Financial Behavior? Evidence from a Field Experiment with Student Loan Borrowers

Tuesday, November 7th, 2017

Presenter:  Yiangos Papanastasiou

Title: Fake News Propagation and Detection: A Sequential Model


In the wake of the 2016 US presidential election, social media platforms are facing increasing pressure to safeguard their users against the propagation of “fake news” (i.e., articles whose content is fabricated). In this paper, we develop a simple model of news propagation, in which a sequence of heterogeneous rational agents choose first whether to inspect an article to determine its validity (i.e., perform a “fact-check”), and then whether to share the article with the next agent. Although the agents are intent on sharing only truthful news, our model highlights how the sequential nature of content-sharing on social media can lead to pathological outcomes, whereby fake news articles attain “truthful news status” and are propagated in perpetuity. We then consider a social media platform’s problem of deciding whether and when to intervene in the sharing of a news article by conducting its own inspection. We show that the optimal policy reduces to the solution of a simple finite-horizon optimal stopping problem, and identify the characteristics of the news environment that render immediate inspection, delayed inspection, and non-inspection optimal. Through a combination of analytical results and numerical experiments, we quantify the impact of fake news articles on the agents’ beliefs and highlight cases where this impact is most pronounced.
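The reduction to a finite-horizon optimal stopping problem can be sketched with backward induction. This toy version uses an invented inspection cost `c` and an invented per-period expected damage schedule, and it omits the belief updating that makes delayed inspection optimal in the paper; it only exhibits the immediate-inspection and non-inspection regimes.

```python
def optimal_inspection_time(horizon, c, damage):
    """Backward induction for a finite-horizon stopping problem:
    at each period t the platform either inspects now (paying fixed
    cost c, ending all future damage) or waits one period (incurring
    expected damage[t] and facing the same choice at t+1).
    Returns (first period where inspecting is optimal, or None; value at t=0)."""
    V = 0.0                       # value after the horizon: nothing left to do
    stop = [False] * horizon
    for t in reversed(range(horizon)):
        wait = damage[t] + V
        stop[t] = c <= wait       # inspect when no costlier than waiting
        V = min(c, wait)
    first = next((t for t, s in enumerate(stop) if s), None)
    return first, V

peaky = [0.1, 0.5, 1.0, 0.5, 0.1]   # damage peaks as the article goes viral
print(optimal_inspection_time(5, c=1.2, damage=peaky))  # (0, 1.2): inspect now
print(optimal_inspection_time(5, c=2.5, damage=peaky))  # (None, ~2.2): never worth it
```

When the remaining expected damage exceeds the inspection cost, the platform inspects immediately; when it never does, the article is left uninspected.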

Tuesday, November 14th, 2017

Presenter:  Bo Cowgill

Title: Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening


Where should algorithms improve decision-making? I formally model the advantages of human judgment and decision-making algorithms. I show that algorithms can remove human biases exhibited in the training data, but only if the human judgment is sufficiently noisy. The model suggests that decision-making algorithms have the biggest effects on productivity where human judgment is both biased and inconsistent. Where human decisions are biased and consistent, algorithms trained on these judgments (and their outcomes) will codify the bias rather than reduce it. By contrast, noise in human judgment facilitates de-biasing by contributing quasi-experimental variation to the algorithms’ training data. I test these predictions in a field experiment applying machine learning to hiring workers for white-collar team-production jobs. The marginal candidate selected by the machine (but rejected by human screeners) is a) +14% more likely to pass a face-to-face interview with incumbent workers and receive a job offer, b) +18% more likely to accept a job offer when extended by the employer, and c) 0.2σ–0.4σ more productive once hired. These candidates are also 12% less likely to show evidence of competing job offers during salary negotiations. Estimates of heterogeneous effects suggest that the results are driven by non-traditional job applicants: candidates from non-elite backgrounds, those who lack job referrals, those without prior experience, those with atypical credentials, and those with strong non-cognitive soft skills. Empirical evidence suggests that human evaluation of these candidates was both noisy and biased.
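The mechanism behind the noise result can be seen in a toy simulation (not the paper's formal model; the groups, parameters, and cutoff are all invented). A deterministic biased screener hires only far-above-bar candidates from the penalized group, so outcome data for that group carry no information about marginal candidates; screener noise lets some below-bar candidates through, creating the quasi-experimental variation an algorithm needs to learn that the penalty is unwarranted.

```python
import numpy as np

rng = np.random.default_rng(2)

def screen(noise_sd, n=10_000, bias=1.0, cutoff=1.0):
    """Biased human screener: both groups have identical true quality,
    but the screener docks group-1 candidates by `bias` and adds
    idiosyncratic evaluation noise before comparing to `cutoff`.
    Returns the true quality of hired group-1 candidates."""
    group = rng.integers(0, 2, n)
    quality = rng.normal(0.0, 1.0, n)          # true productivity
    score = quality - bias * group + noise_sd * rng.normal(0.0, 1.0, n)
    hired = score > cutoff
    return quality[hired & (group == 1)]

q_deterministic = screen(noise_sd=0.0)
q_noisy = screen(noise_sd=1.0)
# With no noise, every hired group-1 candidate has quality above
# cutoff + bias; with noise, some below-bar candidates slip through.
print(q_deterministic.min(), q_noisy.min())
```

In the noiseless case the training data cannot reveal how marginal group-1 candidates would have performed; in the noisy case it can.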

Monday, November 20th, 2017

Presenter:  Chloe Kim Glaeser

Title: Optimal Retail Location: Empirical Methodology and Application to Practice


We empirically study the spatio-temporal location problem motivated by an online retailer that uses the Buy-Online-Pick-Up-In-Store fulfillment method. Customers pick up their orders from trucks parked at specific locations on specific days, and the retailer’s problem is to determine where and when these pick-ups occur. Customer demand is influenced by the convenience of pick-up locations and days. We combine demographic and economic data, business location data, and the retailer’s historical sales and operations data to predict demand at potential locations. We introduce a novel procedure that combines machine learning and econometric techniques. First, we use a fixed effects regression to estimate spatial and temporal cannibalization effects. Then, we use a random forests algorithm to predict demand when a particular location operates in isolation. Based on the predicted demand, we solve the spatio-temporal integer program using quadratic program relaxation to find the optimal pick-up location configuration and schedule. We estimate a revenue increase of at least 42% from the improved location configuration and schedule.
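The two-stage estimation can be illustrated in miniature. The sketch below covers only the first stage, on simulated data: recovering a spatial-cannibalization effect with location and day fixed effects via dummy-variable OLS. The random-forest demand model and the integer program are omitted, and all variable names and parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_loc, n_day = 500, 8, 60
loc = rng.integers(0, n_loc, n_obs)
day = rng.integers(0, n_day, n_obs)
nearby_open = rng.integers(0, 2, n_obs)   # another pick-up site nearby today?
loc_fe = rng.normal(0.0, 1.0, n_loc)      # location fixed effects
day_fe = rng.normal(0.0, 0.5, n_day)      # day fixed effects
true_cannibalization = -0.6               # a nearby site steals demand
demand = (loc_fe[loc] + day_fe[day]
          + true_cannibalization * nearby_open
          + rng.normal(0.0, 0.2, n_obs))

# Dummy-variable OLS: demand ~ location FE + day FE + cannibalization dummy.
X = np.zeros((n_obs, n_loc + n_day + 1))
X[np.arange(n_obs), loc] = 1.0
X[np.arange(n_obs), n_loc + day] = 1.0
X[:, -1] = nearby_open
beta, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(beta[-1])   # close to -0.6: recovered cannibalization effect
```

Once cannibalization is netted out this way, a flexible model (a random forest in the paper) can predict each location's stand-alone demand, which then feeds the location-and-schedule optimization.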