Seminars / Conferences


September 12-13, 2019


Since 2006, the Workshop for Empirical Research in Operations Management has brought together a community of scholars with a passion for empirical research in Operations. The purpose of the Workshop is to exchange research ideas, share experiences with the publication process, discuss methodological issues, and grow together as a group of colleagues with a common research interest.

October 20th, 2019

2019 INFORMS Reception


Seattle, Washington

Location: Ivar’s Acres of Clams – 1001 Alaskan Way, Seattle, WA 98104

Sunday, October 20th

7:00 – 9:00 pm

Seminar Time & Location Information 

Time: 12:00 PM – 1:20 PM

**Due to the COVID-19 pandemic, all seminars will be held virtually until further notice.**

Jon M Huntsman Hall (JMHH)

3730 Walnut St.

Philadelphia, PA 19104

Suite 540/541 (unless otherwise noted)

To schedule a meeting with a speaker, log in to the OID SharePoint site or contact Joy McManus.

Seminars 2020-2021

Tuesday, October 13, 2020

Virtual seminar via Zoom

Presenter: Dan Adelman

Title: To be announced


Abstract forthcoming

Tuesday, October 20, 2020

Virtual seminar via Zoom

Presenter: David Chan

Title: To be announced


Abstract forthcoming

Tuesday, October 27, 2020

Virtual seminar via Zoom

Presenter: Catherine Tucker

Title: To be announced


Abstract forthcoming

Tuesday, November 17, 2020

Virtual seminar via Zoom

Presenter: Retsef Levi

Title: To be announced


Abstract forthcoming

Tuesday, December 1, 2020

Virtual seminar via Zoom

Presenter: Irene Lo

Title: To be announced


Abstract forthcoming

Tuesday, December 8, 2020

Virtual seminar via Zoom

Presenter: Kimon Drakopoulos

Title: To be announced


Abstract forthcoming

Tuesday, December 15, 2020

Virtual seminar via Zoom

Presenter: Gerben van Kleef

Title: To be announced


Abstract forthcoming

Seminars 2019-2020

Tuesday, September 3, 2019

Presenter: Joshua Lewis

Title: Prospective Outcome Bias: Incurring (Unnecessary) Costs to Achieve Outcomes That Are Already Likely



How do people decide whether to incur costs to increase their likelihood of success? In investigating this question, we offer a theory called prospective outcome bias. According to this theory, people tend to make decisions that they expect to feel good about after the outcome has been realized. Because people expect to feel best about decisions that are followed by successes – even when the decisions did not cause those successes – they will pay more to increase their chances of success when success is already likely (e.g., people will pay more to increase their probability of success from 80% to 90% than from 10% to 20%). We find evidence for prospective outcome bias in nine experiments. In Study 1, we establish that people evaluate costly decisions that precede successes more favorably than costly decisions that precede failures, even when the decisions did not cause the outcome. Study 2 establishes, in an incentive-compatible laboratory setting, that people are more motivated to increase higher chances of success. Studies 3-5 generalize the effect to other contexts and decisions, and Studies 6-8 indicate that the effect is driven by prospective outcome bias rather than by regret aversion, waste aversion, goals-as-reference-points, probability weighting, or loss aversion. Finally, in Study 9, we find evidence for another prediction of prospective outcome bias: people prefer small increases in the probability of large rewards (e.g., a 1% improvement in their chances of winning $100) to large increases in the probability of small rewards (e.g., a 10% improvement in their chances of winning $10).
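The Study 9 comparison is easy to verify numerically: under standard expected-value reasoning the two options are identical, so any systematic preference between them reveals the bias. A minimal arithmetic sketch using the figures from the abstract (illustrative only, not the authors' materials):

```python
# Expected-value check of the Study 9 comparison (figures from the abstract).
small_boost_big_prize = 0.01 * 100   # +1 percentage point on a $100 prize
big_boost_small_prize = 0.10 * 10    # +10 percentage points on a $10 prize

# Both improvements raise expected winnings by exactly $1, so an
# expected-value maximizer should be indifferent between them; a systematic
# preference for the first is what the theory attributes to prospective
# outcome bias.
print(small_boost_big_prize, big_boost_small_prize)  # 1.0 1.0
```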



Tuesday, September 10, 2019

JMHH – Room 350

Presenter: Edward Chang

Title: Understanding What Drives Diversity-Related Hiring Decisions in Organizations


Using archival field data and experiments, I provide evidence of novel factors that influence diversity-related hiring decisions in organizations. First, I explore the implications of impression management as a driver of diversity. If organizations have impression management concerns around diversity, they may strive to match the levels of diversity found among peer organizations, thereby conforming to the descriptive social norm for diversity. I examine this prediction in the context of gender diversity on U.S. corporate boards and find that significantly more S&P 1500 boards include exactly two women (the descriptive social norm) than would be expected by chance. Experimental data corroborate these findings and provide additional evidence that social norms, visibility, and impression management concerns all affect organizational preferences for diversity. Second, I explore how a common feature of personnel selection decisions (the fact that they are made in isolation) can affect the diversity of hired candidates. In a series of experiments, I show that individuals select less diverse candidates when making decisions in isolation, as opposed to making collections of choices, because diversity is less salient when selection decisions are made in isolation. Together, these projects illuminate novel factors that determine when and why organizations demand diversity. Understanding these factors can provide guidance about potential interventions to increase diversity in organizations.

Tuesday, September 24, 2019

JMHH – Room 350

Presenter: Ken Moon

Title: Matching in Online Marketplaces when Talent is Difficult to Discern


We study the problem of assigning workers to short-term jobs in online marketplaces. In settings where workers’ most relevant skills and attributes are readily observed (e.g., Uber), the marketplace platform should clearly prioritize matches of the workers with the best attributes. However, in many important and growing settings, workers are distinguished in quality by skills and attributes that are difficult to measure at scale. Information about these attributes is perceived by marketplace participants through effort and cost (e.g., interviewing) – in particular, mounting evidence suggests that reputational systems do not bridge the gap. We expect marketplaces to increasingly encounter this challenge as the online gig economy expands from its current niche, at 0.5% of the overall US labor force.

We use data covering millions of job postings and transactions on a major online platform for sourcing freelance labor. We structurally estimate employers’ demand preferences, including the extent to which they hire based on uncertain information about workers’ quality-relevant competencies, in a setting featuring an asymptotically large number of choices (freelancers) sorted into essentially unique consideration sets (rather than each being one of large N instances).  We recommend how and when the platform should prioritize matching for compatible skills, matching for repeat relationships, and matching that encourages exploration.

Seminars 2018-2019

Tuesday, April 30, 2019

Presenter: Pei-yu Chen

Title: From Data to Actionable Analytics: The Magical Power of Individual Shopping Time Habit


I will share some of my recent work on deriving actionable analytics from large scale data. This talk will focus on a project that aims to understand and measure individual shopping time habits and their effects. Little research has focused on online shopping habits, particularly concerning time, missing the opportunity to potentially improve important outcomes by the simple innovative use of time. Based on a unique dataset that includes reviews as well as pertinent purchases at the individual level from a large online retailer, this study investigates whether consumers exhibit time habits for online shopping and whether following such time habits affects their satisfaction and re-visit behavior. We employ activity-based metrics to assess shopping time habits at the individual level, and results show that consumers form shopping time habits, and they obtain higher consumer satisfaction and exhibit greater re-visit behavior when the timing of shopping follows their shopping time habits. While prior works have documented that consumers exhibit time habits for physical shopping, driven mostly by time and location constraints, the current study is the first, to our knowledge, to examine online shopping time habits and, most importantly, their effects on consumer satisfaction and re-visit behavior. With the availability of detailed individual transaction data in online shopping and the advance of technology in providing personalized services which enable companies to act upon knowledge of individual behaviors, this research provides important practical implications for system and website design, marketing strategy, as well as customer relationship management.




Tuesday, April 23, 2019

Presenter: Ashton Anderson

Title:  Assessing Human Error Against a Benchmark of Perfection


An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging for even the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.



Tuesday, April 9, 2019

Presenter: Costis Maglaras

Title: The role of operational controls and driver-side pricing in ride-hailing networks


I will discuss two separate projects. The first explores the role of operational controls in a stylized ride-hailing network with strategic drivers, and characterizes the value of platform admission and matching controls on equilibrium behavior; one particular finding is that the platform may proactively wish to deny a match to arriving riders at a particular location, even when local driver capacity is available, so as to aggravate driver queueing and incentivize them to reposition elsewhere in the network (a form of damaged goods). The second project, still preliminary, explores how the platform can use dynamic driver-side pricing to optimize transient network behavior around a random demand shock. We explore this issue with emphasis on the interplay between the timescale of the demand shock duration, rider delay patience, and driver transportation delay. We find that personalized driver pricing (location- and time-dependent as a function of the demand shock location and start time) improves performance significantly. Additionally, the platform can improve performance by either sharing some of the risk with drivers, or tactically managing the driver perceived risk of not arriving in time to the hotspot to benefit from the surge.


Both projects are joint work with Zhe Liu and Philipp Afeche.


Tuesday, April 2, 2019

Presenter: Georgia Perakis

Title: Retail Analytics: High-Low Promotion Optimization for Peak-End Demand Models


Promotions are a key instrument for driving sales for retailers. As a result, retailers know it is crucial to optimize the timing and depth of promotions in order to maximize profits.  But how should they process the data necessary to determine optimal pricing and timing? Left to the intuition of product managers, retailers risk missing out. This talk discusses a body of work that addresses the problem of promotion pricing. We address both how to predict demand effectively as well as how to optimize promotions using tools of optimization and machine learning. The demand model we discuss uses features such as current and past period price but also the minimum price set within a set of past periods (that we refer to as memory) among other features. Using this demand model (that we refer to as the bounded memory peak-end demand model) we propose a compact dynamic programming model for determining optimal promotions for both the single item as well as the multiple item problem. We analyze the promotion effects and illustrate that the bounded memory peak-end demand model yields high-low optimal strategies. In addition, we illustrate that the methods we introduce are computationally efficient and “easy” to use in practice. We also illustrate the performance of our models using data through our collaboration with the Oracle Retail Business Unit and discuss the overall practical impact of our approach to Oracle RGBU clients. We show how our approach has the potential to help retailers increase profits by an average of 3-10 percent. In a world of slim profit margins, this could be a game changer for retailers.

Tuesday, March 26, 2019

Presenter:  Sam Ransbotham

Title: Open Source Code and the Risk of Attacks


Vulnerabilities in software may be found before or after release.  For open source software, security attention frequently focuses on the discovery of vulnerabilities prior to release.  For example, the large number of diverse people who can view the source code may find vulnerabilities before the software product is released.  As a result, open source software has the potential to be more secure than closed source software.   Unfortunately, for vulnerabilities found after release, the benefits of access to source code may now work against open source software security.  Attackers may be more likely to exploit discovered vulnerabilities since they too can view the source code and can use it to learn the details of a weakness and how best to exploit it.  This research focuses on post-release exploitation attempts, contrasting vulnerabilities discovered in open source software with those based on closed source software.  Empirical analysis of two years of security alert data from intrusion detection systems indicates that once discovered, open source software vulnerabilities are at greater risk of exploitation.

Tuesday, March 19, 2019

Presenter:  Mohamed Mostagir

Title: Dynamic Contest Design: Theory, Experiments, and Applications


Contests are a common mechanism for extracting effort from participants. Their use is widespread in a variety of settings like workplace promotions, crowdsourcing innovation, and healthcare quality.  One of the pivotal aspects of contest design is the contest’s information structure: what information should the contest designer provide to participants and when should this information be revealed?  The answers to these questions directly impact the behavior of players and the outcome of the contest, and also have broader implications for institutional and policy design.  We derive the contest’s optimal information disclosure policy within a large class of policies and design a novel experiment to evaluate how these policies perform in the lab.

Tuesday, March 12, 2019

Presenter:  Kelly Shue

Title: Can the Market Multiply and Divide? Non-Proportional Thinking in Financial Markets


When pricing financial assets, rational agents should think in terms of proportional price changes, i.e., returns. However, stock price movements are often reported and discussed in dollar rather than percentage units, which may cause investors to think that news should correspond to a dollar change in price rather than a percentage change in price. Non-proportional thinking in financial markets can lead to return underreaction for high-priced stocks and overreaction for low-priced stocks. Consistent with a simple model of non-proportional thinking, we find that total volatility, idiosyncratic volatility, and absolute market beta are significantly higher for stocks with low share prices, controlling for size. To identify a causal effect of price, we show that volatility increases sharply following stock splits and drops following reverse stock splits. The economic magnitudes are large: non-proportional thinking can explain the “leverage effect” puzzle, in which volatility is negatively related to past returns, as well as the volatility-size and beta-size relations in the data. We also show that low-priced stocks drive the long-run reversal phenomenon in asset pricing, and the magnitude of long run reversals can be sorted by price, holding past returns and size constant. Finally, we show that non-proportional thinking biases reactions to news that is itself reported in nominal rather than scaled units. Investors react to nominal earnings per share surprises, after controlling for the earnings surprise scaled by share price. The reaction to the nominal earnings surprise reverses in the long run, consistent with correction of mispricing.

Tuesday, September 25, 2018

Presenter:  Jacob Abernethy

Title: Emerging tools for sequential decision making, with applications in learning and game-playing


In this talk we will explore algorithmic tools for solving sequential decision and prediction problems. These methods have grown quite popular in recent years given their scalability, their broad use in practice, and their reliance on much weaker statistical assumptions. We will begin by exploring a couple of key applications: (a) adaptive pricing for revenue maximization of a monopolist seller, and (b) the search for lead pipes in Flint, MI. But we will then turn our attention to a more foundational result, which is the solution of zero-sum games using so-called “no-regret algorithms”. We will describe recent work that allows us to view several classical iterative optimization methods through the lens of game theory.

Tuesday, October 2, 2018

Presenter:  John Beshears

Title: Borrowing to Save?  The Impact of Automatic Enrollment on Debt


Automatic enrollment in defined contribution retirement savings plans is one of the most widely recognized applications of behavioral science in a managerial setting. Previous research documents that automatic enrollment increases average savings plan contributions. But how much of the retirement savings induced by automatic enrollment is offset by increased borrowing outside the savings plan? We study a natural experiment created when the U.S. Army began automatically enrolling its newly hired civilian employees into the Thrift Savings Plan (TSP) at a default contribution rate of 3% of income. Four years after hire, automatic enrollment causes no significant change in debt excluding auto loans and first mortgages (point estimate = 0.9% of income, 95% confidence interval = [-0.9%, 2.7%]). Automatic enrollment does significantly increase auto loan balances by 2.0% of income and first mortgage balances by 7.4% of income. Because we do not observe car or home values, we do not know whether this new debt is offset by greater accumulation of the assets on which these collateralized debts were issued.

Tuesday, October 9, 2018
(JMHH 270)

Presenter:  Abigail Sussman

Title: Mental Accounting Failures: The Case of Exceptional Consumption


Expenses fall along a continuum from ordinary (common or frequent) to exceptional (unusual or infrequent), with many of the largest expenses (e.g., electronics, celebrations) being the most exceptional. In the current project, I show that consumers are fairly adept at budgeting and predicting how much they will spend on ordinary items, but they both underestimate their spending on exceptional purchases overall and overspend on each individual purchase.  Based on the principles of mental accounting and choice bracketing, I show that this discrepancy arises in part because consumers have difficulty categorizing and tracking exceptional expenses. Specifically, consumers are less likely to draw connections between exceptional (versus ordinary) items and other items they purchase, and less likely to believe that consumption of these items meaningfully impacts their budgets.  The current research extends findings beyond the domain of money and financial budgeting by drawing parallels to food and caloric budgeting, providing evidence that exceptional items create a common set of challenges for consumers across contexts.  I conclude by examining how we can harness our understanding of exceptional items to help improve outcomes for consumers. First, I provide evidence from both lab and field settings that exceptional framing of an identical charitable opportunity increases willingness to donate. Second, I provide evidence that an intervention that helps consumers consider their spending on exceptional items as part of a larger set of purchases reduces spending on these products.

Tuesday, October 16, 2018
(JMHH 370)

Presenter:  Panos Ipeirotis

Title: Targeted Crowdsourcing with a Billion (Potential) Users


We describe Quizz, a gamified crowdsourcing system that simultaneously assesses the knowledge of users and acquires new knowledge from them. Quizz operates by asking users to complete short quizzes on specific topics; as a user answers the quiz questions, Quizz estimates the user’s competence. To acquire new knowledge, Quizz also incorporates questions for which we do not have a known answer; the answers given by competent users provide useful signals for selecting the correct answers for these questions. Quizz actively tries to identify knowledgeable users on the Internet by running advertising campaigns, effectively leveraging “for free” the targeting capabilities of existing, publicly available, ad placement services. Quizz quantifies the contributions of the users using information theory and sends feedback to the advertising system about each user. The feedback allows the ad targeting mechanism to further optimize ad placement. Our experiments, which involve over ten thousand users, confirm that we can crowdsource knowledge curation for niche and specialized topics, as the advertising network can automatically identify users with the desired expertise and interest in the given topic. We present controlled experiments that examine the effect of various incentive mechanisms, highlighting the need for having short-term rewards as goals, which incentivize the users to contribute. Finally, our cost-quality analysis indicates that the cost of our approach is below that of hiring workers through paid-crowdsourcing platforms, while offering the additional advantage of giving access to billions of potential users all over the planet, and being able to reach users with specialized expertise that is not typically available through existing labor marketplaces.

Tuesday, October 23, 2018
(JMHH 370)

Presenter:  Nir Halevy

Title: The Calculus of Peacemaking


Third parties have acted as peacemakers since the dawn of history. However, little is known about the causes and consequences of voluntary, informal third-party intervention in conflict. A series of experiments investigated when, why, and how third parties intervene in others’ conflicts, transform them, and promote cooperation. Overall, this program of research finds that: (a) the mere possibility of third-party intervention is sufficient to increase cooperation among disputants; (b) third parties’ willingness to intervene critically depends on their ability to secure gains and avoid costs to themselves; (c) the positive effects of introducing third-party intervention are evident even following a history of conflict; and (d) these effects persist even after the third party can no longer intervene. These findings are discussed in the context of a broader conceptual framework that considers when, why, and how third parties influence others’ interactions and relationships, for better or worse.

Tuesday, October 30, 2018
(JMHH 370)

Presenter:  Tianshu Sun

Title: Displaying Things in Common to Encourage Friendship Formation: A Large Randomized Field Experiment


Friendship formation is of central importance to online social network sites and to society. In this study, we investigate whether and how displaying things in common (TIC) between users (mutual hometown, interest, education, work, city) may encourage friendship formation. Displaying TIC computed from big data may update an individual’s belief about the shared similarity with another and reduce information friction that may be hard to overcome in offline communication. In collaboration with a large online social network, we design and implement a randomized field experiment involving over 50 million viewer-profile pairs, which randomly varies the prominence of things in common information when a user (viewer) is browsing a non-friend’s profile. The dyad-level exogenous variation, orthogonal to any unobserved factors in the viewer-profile’s network, allows us to cleanly isolate the role of individuals’ preference (over TIC) in driving network formation and homophily. We find that displaying TIC to viewers may significantly increase their probability of sending a friend request and forming a friendship, and is especially effective for viewer-profile pairs who have little in common (with no mutual friends and only one or two things in common). Such findings suggest that information intervention is effective in encouraging the formation of weak ties, and also provide the first experimental evidence on the role of individuals’ preference (versus structural factors) in network formation. We further explore the heterogeneity in the effect and demonstrate that displaying TIC could improve friendship formation for a wide range of viewers with different characteristics. Finally, we propose an information-theoretic model to characterize the belief update process underlying a viewer’s decision, and provide evidence that displaying TIC is more effective when the TIC information is more surprising to the viewer. The insights and information-theoretic framework can guide the optimal design of information display in the friendship formation process.

Seminars 2017-2018

Spring 2018

Tuesday, January 30, 2018

Presenter:  Karan Girotra

Title: Bike Share Systems


The cities of Paris, London, Chicago, and New York (among many others) have set up bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the impact of two facets of system performance on bike-share ridership: accessibility (how far the user must walk to reach stations) and bike-availability (the likelihood of finding a bicycle). We obtain these estimates from a structural demand model for ridership estimated using data from the Vélib’ system in Paris. We find that every additional meter of walking to a station decreases a user’s likelihood of using a bike from that station by 0.194% (±0.0693%); the reduction is even steeper at distances greater than 300m. These estimates imply that almost 80% of bike-share usage comes from areas within 300m of stations, highlighting the need for dense station networks. We find that a 10% increase in bike-availability would increase ridership by 12.211% (±1.097%), three-fourths of which comes from fewer abandonments, and the rest from increased user interest. We illustrate the use of our estimates in comparing the effect of adding stations or increasing bike-availabilities in different parts of the city, at different times, and in evaluating other proposed improvements.
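As a rough illustration of the headline elasticity, one can compound the per-meter estimate to see how quickly station access decays with walking distance. This sketch assumes the 0.194% reduction applies multiplicatively per meter, a simplification of the structural demand model actually estimated in the paper:

```python
# Back-of-envelope decay of bike-share usage with walking distance, using the
# abstract's headline estimate of a 0.194% drop per additional meter.
# Assumption: the drop compounds multiplicatively (a simplification of the
# structural demand model estimated in the paper).
PER_METER_DROP = 0.00194

def relative_likelihood(distance_m: float) -> float:
    """Usage likelihood at distance_m, relative to a station at distance 0."""
    return (1.0 - PER_METER_DROP) ** distance_m

for d in (100, 200, 300, 500):
    print(f"{d:>4} m: {relative_likelihood(d):.2f}")
```

Under this simplification, a station 300 m away retains only a bit over half of the usage of one at the user's doorstep, which is consistent in spirit with the paper's finding that most usage comes from within 300 m.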

Tuesday, February 13, 2018

Presenter:  Natalia Levina

Title: Organizational Impacts of Crowdsourcing: What Happens with “Not Invented Here” Ideas?


Recent work on the organizational impacts of crowdsourcing suggests a number of difficulties, including not only the familiar difficulties associated with learning from outside sources, but also difficulties specific to paying proper attention to a large number of crowdsourced submissions. How does relying on consulting differ from relying on crowdsourcing, as two modes of open innovation, when it comes to the potential impact of each on an organization’s ability to learn novel business and scientific insights? What can we learn from this comparison about how ideas are evaluated when they come from different external sources? We investigate these differences in the context of an in-depth longitudinal field study of an R&D organization that engaged both innovation consulting and crowdsourcing at the same time to address one of its critical R&D problems. We draw on and contribute to the literature on open innovation by elaborating how different practices of engagement shaped the potential impact of various external ideas that were voiced, or stayed silent, in the process.

Tuesday, February 20, 2018

Presenter:  Diana Tamir

Title: Making Predictions in the Social World


The social mind is tailored to the problem of predicting other people. Imagine trying to navigate the social world without understanding that tired people tend to become frustrated, or that mean people tend to lash out. Our social interactions depend on the ability to anticipate others’ actions, and we rely on knowledge about their state (i.e., tired) and traits (i.e., mean) to do so. I will present a multi-layered framework of social cognition that helps to explain how people represent the richness and complexity of others’ minds, and how they use this representation to predict others’ actions. Using both neuroimaging and Markov modeling, I demonstrate how the social mind might leverage both the structure and dynamics of mental state representations to make predictions about the social world.

Tuesday, February 27, 2018

Presenter:  Ajay Agrawal

Title: Decision Making with Artificial Intelligence: Prediction, Judgment, and Complexity


We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction in decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs. We label this process of determining the payoffs ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depends a great deal on whether the improvements in prediction enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries.

Tuesday, March 13, 2018

Presenter:  Stephen Spiller

Title: Judgments Based on Stocks and Flows: Different Presentations of the Same Data Can Lead to Opposing Inferences


Measurements of a quantity over time can be presented as stocks (the total quantity at each point of time) or flows (the change in quantity between each point of time). We show that the choice of presenting data as stocks or flows can have a consequential impact on judgments. The same data can lead to positive or negative assessments when presented as stocks versus flows and can engender optimistic or pessimistic forecasts for the future. For example, when employment data from 2007 to 2013 are shown as flows (jobs created or lost), President Obama’s impact on the economy is viewed positively, whereas when presenting the same data as stocks (total jobs), his impact is viewed negatively. We document the data patterns likely to engender these inconsistencies, show they are robust to non-graphical data representations, and occur even when people can accurately transform the data between stocks and flows.
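The stock-to-flow transformation the abstract describes is simply a first difference. The sketch below, with a hypothetical employment series, shows how the same data can look pessimistic as a stock (still below its starting level) yet optimistic as a flow (steadily improving month over month):

```python
import numpy as np

# Hypothetical monthly total-employment series, in thousands (a "stock").
stocks = np.array([100, 98, 95, 94, 96, 99])

# The corresponding "flow": jobs gained (+) or lost (-) each month.
flows = np.diff(stocks)

print("flows:", flows)                        # trend turns positive and grows
print("net change:", stocks[-1] - stocks[0])  # stock still below its start
```

Presented as flows, the series shows accelerating job creation; presented as a stock, total employment has not yet recovered, which is exactly the kind of divergence in inference the talk documents.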

Tuesday, March 20, 2018

Presenter:  Elena Belavina

Title: Grocery Store Density and Food Waste


We study the impact of grocery-store density on the food waste generated at stores and households. Food waste is a major contributor to carbon emissions (as big as road transport). Identifying and influencing market conditions that can decrease food waste is thus important to combat global warming. We build and calibrate a stylized two-echelon perishable-inventory model to capture grocery purchases and expiration at competing stores and households in a market. We examine how the equilibrium waste in this model changes with store density.

An increase in store density decreases consumer waste due to improved access to groceries, while increasing retail waste due to decentralization of inventory, increased variability propagation in the supply chain (cycle truncation) and diminished demand by customers. Higher density also induces more competition which further increases (decreases) waste when stores compete on prices (service-levels). Overall, consumer waste reductions compete with store waste increases and the effects of increased competition. Our analysis shows that higher density reduces food waste up to a threshold density; it leads to higher food waste beyond this threshold. Put differently, in so far as food waste is concerned, there exists an optimal store density.

Calibration using grocery industry, economic and demographic data reveals that actual store density in most American cities is well below this threshold/optimal level, and modest increases in store density substantially reduce waste; e.g., in Chicago, just 3-4 more stores (per 10 sq-km) can lead to a 6-9% waste reduction, and a 1-4% decrease in grocery expenses. These results arise from the principal role of consumer waste, suggesting that activists’ and policy makers’ focus on retail waste may be misguided. Store operators, urban planners and decision makers should aim to increase store densities to make grocery shopping more affordable and sustainable.

Tuesday, March 27, 2018

Presenter:  Terry Taylor

Title: On-Demand Service Platforms: Worker Independence and Welfare


An on-demand service platform connects waiting-time-sensitive customers with service-providing workers. This talk addresses two topics. First, a defining feature of an on-demand service platform is that the workers are independent contractors rather than employees. We examine the implications of this worker independence for the platform’s optimal decisions (e.g., prices). Second, platforms’ efforts to aggressively recruit workers have been controversial. Some labor advocates have argued that an expansion in a platform’s labor supply hurts workers, who see, as a consequence of the expansion, less work and lower income. We examine the extent to which the interests of platforms in increasing labor supply are indeed at odds with those of workers.

Tuesday, April 3, 2018

Presenter:  Srikanth Jagabathula

Title: The Limit of Rationality in Choice Modeling: Formulation, Computation, and Implications


Customer preferences may not be rational, and therefore we focus on quantifying the limit of rationality (LoR) in choice modeling applications. We define LoR as the “cost” of approximating the observed choice fractions from a collection of offer sets with those from the best-fitting probability distribution over rankings. Computing LoR is intractable in the worst case. To tackle this challenge, we introduce two new concepts, rational separation and the choice graph, with which we reduce the problem to solving a dynamic program on the choice graph and express the computational complexity in terms of structural properties of the graph. By exploiting the graph structure, we provide practical methods to compute LoR efficiently for a large class of applications. We apply our methods to real-world grocery sales data from the IRI Academic Dataset and identify product categories for which going beyond rational choice models is necessary to obtain acceptable performance.
Joint with: Paat Rusmevichientong, USC Marshall

Tuesday, April 10, 2018

Presenter:  Marcelo Olivares

Title: Managing Worker Utilization in Service Platforms: An Empirical Study of an Outbound Call-Center


In many service industries, providing prompt response to customers can be an important competitive advantage, especially when customers are time-sensitive. When demand for the service is variable and staffing requirements cannot be adjusted quickly, capacity decisions require a trade-off between responsiveness to customers and controlling operating costs through worker utilization. To break this trade-off, the service system can operate as a platform with access to a large pool of employees with flexible working hours who are compensated through piece-rates. Examples of these service platforms can be found in transportation, food delivery and customer contact centers, among many others. While this business model can operate at low levels of utilization without increasing operating costs, a different trade-off emerges: in settings where employee training and experience are important, the service platform must control employee turnover, which may increase when employees are working at low levels of utilization. Hence, to make staffing decisions and manage workload, it is necessary to understand both customer behavior (measuring their sensitivity to service times) and employee retention. We analyze this trade-off in the context of an outbound call center that operates with a pool of flexible agents working remotely, selling auto insurance. We develop an econometric approach to model customer behavior in an outbound call center that captures special features of outbound calls, including time-sensitivity and the effect of employee experience. A survival model is used to measure how agent retention is affected by the assigned workload. These empirical models of customers and agents are combined to illustrate how to balance time-sensitivity and employee experience, showing that both effects are relevant in practice for planning workload and staffing in a service platform.

(joint work with Andres Musalem and Daniel Yung)

Tuesday, April 17, 2018

Presenter:  Paat Rusmevichientong

Title: A New Approach in Approximate Dynamic Programming for Revenue Management of Reusable Products


We present a new approach in approximate dynamic programming for revenue management of reusable products. The problem is motivated by emerging industries that rent out computing capacity and fashion items, where customers request products on-demand, use the products for a random duration of time, and afterward return the products back to the firm. The goal is to find a policy that determines what products to offer to each customer to maximize the total expected revenue over a finite selling horizon. For this problem, the firm must simultaneously consider the inventories of available products, along with the products that are currently in use by other customers. So, the resulting dynamic programming formulation is intractable because of the high-dimensional state variable.

Using a novel approach for constructing an affine approximation to the value functions, we present a policy that is guaranteed to obtain at least 50% of the optimal expected revenue. Our construction is based on a simple and efficient backward recursion. We provide computational experiments based on the parking transaction data in Seattle. Our numerical experiments demonstrate that the practical performance of our policy is substantially better than its worst-case performance guarantee.

Joint work with Huseyin Topaloglu and Mika Sumida (Cornell Tech)

Fall 2017

Tuesday, September 5th, 2017

Presenter:  Sharad Goel

Title: Algorithmic Decision Making and the Cost of Fairness


Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. We reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.


Tuesday, September 19th, 2017

Presenter:  Marshall Van Alstyne

Title: The Role of APIs in Firm Performance


Do firms benefit from external as well as internal performance enhancements? Using proprietary information from a significant fraction of the API tool provision industry, we explore the impact of API adoption and complementary investments on firm performance. Data include external as well as internal developers. We use a difference-in-differences approach centered on the date of first API use to show that API adoption, measured both as a binary treatment and as a function of the number of calls and amount of data processed, is related to increased sales and operating income, and to decreased costs. It is especially tightly related to increased market value. In a specification with year and firm fixed effects, binary API adoption predicts a 4.5% increase in a firm’s market value. Creation of API developer portals is associated with a decrease in R&D expenditure inside the firm, supporting the hypothesis that outside development substitutes for internal development. Categorizing APIs by their orientation, we find that B2B, B2C, and internal API calls are heterogeneous in their association with financial outcomes. Finally, the fact that increases in API calls are associated with contemporaneous increases in firm value suggests that data flow at the boundary of the firm can be used for stock market trading.

Tuesday, September 26th, 2017

Presenter:  Heather Sarsons

Title: Interpreting Signals: Evidence from Doctor Referrals


This project asks whether a person’s gender influences the way we interpret signals about their ability, and the implications this has for their career trajectory. Using data on referrals from primary care physicians (PCPs) to surgeons, I show that PCPs view good and bad patient outcomes differently depending on the performing surgeon’s gender. If a PCP refers a patient to a surgeon and the patient dies during surgery, the PCP is less likely to refer to that surgeon in the future, and significantly less likely to do so if the surgeon is female. Conversely, PCPs are more likely to refer to surgeons after a good patient outcome, but are significantly more likely to do so if the surgeon is male. I provide evidence that this is not driven by surgeon behaviour or by differences in underlying patient risk. I discuss the results in the context of a standard Bayesian updating model as well as a model of confirmation bias.

Tuesday, October 3rd, 2017

Presenter:  Jake Hofman

Title: How Predictable is the Spread of Information?


How does information spread in online social networks, and how predictable are online information diffusion events?

Despite a great deal of existing research on modeling information diffusion and predicting “success” of content in social systems, these questions have remained largely unanswered for a variety of reasons, ranging from the inability to observe most word-of-mouth communication to difficulties in precisely and consistently formalizing different notions of success.

This talk will attempt to shed light on these questions through an empirical analysis of billions of diffusion events under one simple but unified framework.

We will show that even though information diffusion patterns exhibit stable regularities in the aggregate, it remains surprisingly difficult to predict the success of any particular individual or single piece of content in an online social network, with our best-performing models explaining only half of the empirical variance in outcomes.

We conclude by exploring this limit theoretically through a series of simulations that suggest that it is the diffusion process itself, rather than our ability to estimate or model it, that is responsible for this unpredictability.

Tuesday, October 10th, 2017

Presenter:  Kostas Bimpikis

Title: Spatial Pricing in Ride-Sharing Networks


We explore spatial price discrimination in the context of a ride-sharing platform that serves a network of locations. Riders are heterogeneous in terms of their destination preferences and their willingness to pay for receiving service. Drivers decide whether, when, and where to provide service so as to maximize their expected earnings, given the platform’s prices. Our findings highlight the impact of the demand pattern on the platform’s prices, profits, and the induced consumer surplus. In particular, we establish that profits and consumer surplus are maximized when the demand pattern is “balanced” across the network’s locations. In addition, we show that they both increase monotonically with the balancedness of the demand pattern (as formalized by its structural properties). Furthermore, if the demand pattern is not balanced, the platform can benefit substantially from pricing rides differently depending on the location they originate from. Finally, we consider a number of alternative pricing and compensation schemes that are commonly used in practice and explore their performance for the platform.

Tuesday, October 17th, 2017 in JMHH F55 (note location change)

Presenter:  Eytan Bakshy

Title: Experimental Learning and Optimization 


Online experiments (“A/B tests”) are the workhorse of modern Internet development, yet these experiments are generally limited to evaluating the effects of only one or two variants.  In many cases, however, we are interested in evaluating the effects of thousands or a potentially infinite number of possible interventions, such as treatments parametrized by continuous variables, or dynamic personalized treatment regimes that map particular states to different actions.  I will discuss a new approach to large-scale field experimentation using Gaussian process regression models and Bayesian optimization to solve such multi-armed bandit problems.  Using empirical examples, I will show how we are able to effectively apply Bayesian modeling to both finite and infinite action spaces to make predictions about yet-to-be-observed treatments.   These models are combined with optimization procedures that produce demonstrable improvements to mobile software, infrastructure, and machine learning systems.


Tuesday, October 31st, 2017

Presenter:  Tatiana Homonoff

Title: Do FICO Scores Influence Financial Behavior? Evidence from a Field Experiment with Student Loan Borrowers

Tuesday, November 7th, 2017

Presenter:  Yiangos Papanastasiou

Title: Fake News Propagation and Detection: A Sequential Model


In the wake of the 2016 US presidential election, social media platforms are facing increasing pressure to safeguard their users against the propagation of “fake news” (i.e., articles whose content is fabricated). In this paper, we develop a simple model of news propagation, in which a sequence of heterogeneous rational agents choose first whether to inspect an article to determine its validity (i.e., perform a “fact-check”), and then whether to share the article with the next agent. Although the agents are intent on sharing only truthful news, our model highlights how the sequential nature of content-sharing on social media can lead to pathological outcomes, whereby fake news articles attain “truthful news status” and are propagated in perpetuity. We then consider a social media platform’s problem of deciding whether and when to intervene in the sharing of a news article by conducting its own inspection. We show that the optimal policy reduces to the solution of a simple finite-horizon optimal stopping problem, and identify the characteristics of the news environment that render immediate inspection, delayed inspection, and non-inspection optimal. Through a combination of analytical results and numerical experiments, we quantify the impact of fake news articles on the agents’ beliefs and highlight cases where this impact is most pronounced.

Tuesday, November 14th, 2017

Presenter:  Bo Cowgill

Title: Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening


Where should algorithms improve decision-making? I formally model the advantages of human judgment and decision-making algorithms. I show that algorithms can remove human biases exhibited in the training data, but only if the human judgment is sufficiently noisy. The model suggests that decision-making algorithms have the biggest effects on productivity where human judgment is both biased and inconsistent. Where human decisions are biased and consistent, algorithms trained on these judgments (and their outcomes) will codify the bias rather than reduce it. By contrast, noise in human judgment facilitates de-biasing by contributing quasi-experimental variation to algorithms’ training data. I test these predictions in a field experiment applying machine learning to hiring workers for white-collar team-production jobs. The marginal candidate selected by the machine (rejected by human screeners) is a) 14% more likely to pass a face-to-face interview with incumbent workers and receive a job offer, b) 18% more likely to accept a job offer when extended by the employer, and c) 0.2 to 0.4 standard deviations more productive once hired. These candidates are also 12% less likely to show evidence of competing job offers during salary negotiations. Estimates of heterogeneous effects suggest that the results are driven by non-traditional job applicants: candidates from non-elite backgrounds, those who lack job referrals, those without prior experience, those with atypical credentials, and those with strong non-cognitive soft skills. Empirical evidence suggests that human evaluation of these candidates was both noisy and biased.

Monday, November 20th, 2017

Presenter:  Chloe Kim Glaeser

Title: Optimal Retail Location: Empirical Methodology and Application to Practice


We empirically study the spatio-temporal location problem motivated by an online retailer that uses the Buy-Online-Pick-Up-In-Store fulfillment method. Customers pick up their orders from trucks parked at specific locations on specific days, and the retailer’s problem is to determine where and when these pick-ups occur. Customer demand is influenced by the convenience of pick-up locations and days. We combine demographic and economic data, business location data, and the retailer’s historical sales and operations data to predict demand at potential locations. We introduce a novel procedure that combines machine learning and econometric techniques. First, we use a fixed effects regression to estimate spatial and temporal cannibalization effects. Then, we use a random forests algorithm to predict demand when a particular location operates in isolation. Based on the predicted demand, we solve the spatio-temporal integer program using quadratic program relaxation to find the optimal pick-up location configuration and schedule. We estimate a revenue increase of at least 42% from the improved location configuration and schedule.