Seminars / Conferences

Time and Location: 12:00PM – 1:20PM in JMHH 540/541
To schedule a meeting with a speaker, log in to the OID SharePoint site or contact Tamara Amazan.

Seminars 2017-2018


Spring 2018

Tuesday, January 30, 2018

Presenter:  Karan Girotra

Title: Bike Share Systems

Abstract

The cities of Paris, London, Chicago, and New York (among many others) have set up bike-share systems to facilitate the use of bicycles for urban commuting. This paper estimates the impact of two facets of system performance on bike-share ridership: accessibility (how far the user must walk to reach stations) and bike-availability (the likelihood of finding a bicycle). We obtain these estimates from a structural demand model for ridership estimated using data from the Vélib’ system in Paris. We find that every additional meter of walking to a station decreases a user’s likelihood of using a bike from that station by 0.194% (±0.0693%); the reduction is even steeper at distances beyond 300m. These estimates imply that almost 80% of bike-share usage comes from areas within 300m of stations, highlighting the need for dense station networks. We find that a 10% increase in bike-availability would increase ridership by 12.211% (±1.097%), three-fourths of which comes from fewer abandonments and the rest from increased user interest. We illustrate the use of our estimates in comparing the effect of adding stations or increasing bike-availability in different parts of the city and at different times, and in evaluating other proposed improvements.
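As a back-of-envelope illustration of the headline walking-distance estimate, the sketch below assumes a constant per-meter effect for simplicity (the abstract notes the drop is actually steeper beyond 300m):

```python
# Back-of-envelope use of the headline estimate: each extra meter of walking
# reduces a user's likelihood of using a station by roughly 0.194%. Treating
# this as a constant per-meter effect (a simplifying assumption; the paper
# reports steeper drops beyond 300m), the relative likelihood at distance d
# meters is (1 - 0.00194) ** d.

def relative_likelihood(distance_m: float, per_meter_drop: float = 0.00194) -> float:
    """Likelihood of use at `distance_m`, relative to a station at distance 0."""
    return (1 - per_meter_drop) ** distance_m

for d in (50, 150, 300, 500):
    print(f"{d:>4} m: {relative_likelihood(d):.2f}")
```

Under this stylized assumption, a station 300m away is already used at only a bit over half the rate of one next door, consistent with the abstract's emphasis on dense station networks.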

Tuesday, February 13, 2018

Presenter:  Natalia Levina

Title: Organizational Impacts of Crowdsourcing: What Happens with “Not Invented Here” Ideas?

Abstract

Recent work on the organizational impacts of crowdsourcing points to a number of difficulties, including not only the familiar challenges of learning from outside sources, but also challenges specific to paying proper attention to a large number of crowdsourced submissions. How does relying on consulting differ from relying on crowdsourcing as two modes of open innovation when it comes to the potential impact of each on an organization’s ability to learn novel business and scientific insights? What can we learn from this comparison about how ideas are evaluated when they come from different external sources? We investigate these differences in the context of an in-depth longitudinal field study of an R&D organization that engaged both innovation consulting and crowdsourcing at the same time to address one of its critical R&D problems. We draw on and contribute to the literature on open innovation by elaborating how different practices of engagement shaped the potential impact of various external ideas that were voiced, or stayed silent, in the process.

Tuesday, February 20, 2018

Presenter:  Diana Tamir

Title: Making Predictions in the Social World

Abstract

The social mind is tailored to the problem of predicting other people. Imagine trying to navigate the social world without understanding that tired people tend to become frustrated, or that mean people tend to lash out. Our social interactions depend on the ability to anticipate others’ actions, and we rely on knowledge about their states (e.g., tired) and traits (e.g., mean) to do so. I will present a multi-layered framework of social cognition that helps to explain how people represent the richness and complexity of others’ minds, and how they use this representation to predict others’ actions. Using both neuroimaging and Markov modeling, I demonstrate how the social mind might leverage both the structure and dynamics of mental state representations to make predictions about the social world.
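As a toy illustration of the Markov-modeling idea (the states and transition probabilities below are made up for illustration, not the talk's data), knowing someone's current mental state shifts the predicted distribution over their next state:

```python
# Toy Markov model of mental state dynamics. States and probabilities are
# illustrative only: knowing someone is "tired" shifts the predicted next
# state toward "frustrated", exactly the kind of prediction the talk discusses.

transitions = {   # P(next state | current state)
    "calm":       {"calm": 0.7, "tired": 0.2, "frustrated": 0.1},
    "tired":      {"calm": 0.2, "tired": 0.4, "frustrated": 0.4},
    "frustrated": {"calm": 0.3, "tired": 0.2, "frustrated": 0.5},
}

def predict(state: str, steps: int = 1) -> dict:
    """Distribution over states after `steps` transitions from `state`."""
    dist = {state: 1.0}
    for _ in range(steps):
        nxt = {}
        for s, p in dist.items():
            for t, q in transitions[s].items():
                nxt[t] = nxt.get(t, 0.0) + p * q
        dist = nxt
    return dist

# Frustration is predicted far more often from "tired" than from "calm".
print(predict("tired"))
print(predict("calm"))
```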

Tuesday, February 27, 2018

Presenter:  Ajay Agrawal

Title: Decision Making with Artificial Intelligence: Prediction, Judgment, and Complexity

Abstract

We interpret recent developments in the field of artificial intelligence (AI) as improvements in prediction technology. In this paper, we explore the consequences of improved prediction for decision-making. To do so, we adapt existing models of decision-making under uncertainty to account for the process of determining payoffs, which we label ‘judgment.’ There is a risky action, whose payoff depends on the state, and a safe action with the same payoff in every state. Judgment is costly; for each potential state, it requires thought on what the payoff might be. Prediction and judgment are complements as long as judgment is not too difficult. We show that in complex environments with a large number of potential states, the effect of improvements in prediction on the importance of judgment depends a great deal on whether those improvements enable automated decision-making. We discuss the implications of improved prediction in the face of complexity for automation, contracts, and firm boundaries.

Tuesday, March 13, 2018

Presenter:  Stephen Spiller

Title: Judgments Based on Stocks and Flows: Different Presentations of the Same Data Can Lead to Opposing Inferences

Abstract

Measurements of a quantity over time can be presented as stocks (the total quantity at each point of time) or flows (the change in quantity between each point of time). We show that the choice of presenting data as stocks or flows can have a consequential impact on judgments. The same data can lead to positive or negative assessments when presented as stocks versus flows and can engender optimistic or pessimistic forecasts for the future. For example, when employment data from 2007 to 2013 are shown as flows (jobs created or lost), President Obama’s impact on the economy is viewed positively, whereas when presenting the same data as stocks (total jobs), his impact is viewed negatively. We document the data patterns likely to engender these inconsistencies, show they are robust to non-graphical data representations, and occur even when people can accurately transform the data between stocks and flows.
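A small illustration of the stock/flow duality with made-up numbers (not the paper's employment data): the same series can look pessimistic as a stock yet optimistic as a flow.

```python
# The same series presented as stocks (total quantity) vs. flows (change per
# period). Illustrative numbers: the level is still below its starting point
# (pessimistic as a stock), while recent changes are positive (optimistic as
# a flow).

jobs_total = [100, 95, 91, 92, 94, 97]   # stocks: total jobs each period

# Flows are first differences of the stocks.
flows = [b - a for a, b in zip(jobs_total, jobs_total[1:])]
print("flows:", flows)        # three losses, then three gains

# Stocks are recovered from flows by cumulative summation from the start.
level = jobs_total[0]
recovered = [level := level + f for f in flows]
print("stocks:", recovered)   # matches jobs_total[1:]
```

Because each presentation is an exact transform of the other, the paper's point is that judgments nonetheless diverge depending on which form viewers see.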

Tuesday, March 20, 2018

Presenter:  Elena Belavina

Title: Grocery Store Density and Food Waste

Abstract

We study the impact of grocery-store density on the food waste generated at stores and households. Food waste is a major contributor to carbon emissions (as big as road transport). Identifying and influencing market conditions that can decrease food waste is thus important to combat global warming. We build and calibrate a stylized two-echelon perishable-inventory model to capture grocery purchases and expiration at competing stores and households in a market. We examine how the equilibrium waste in this model changes with store density.

An increase in store density decreases consumer waste due to improved access to groceries, while increasing retail waste due to decentralization of inventory, increased variability propagation in the supply chain (cycle truncation) and diminished demand by customers. Higher density also induces more competition which further increases (decreases) waste when stores compete on prices (service-levels). Overall, consumer waste reductions compete with store waste increases and the effects of increased competition. Our analysis shows that higher density reduces food waste up to a threshold density; it leads to higher food waste beyond this threshold. Put differently, in so far as food waste is concerned, there exists an optimal store density.

Calibration using grocery industry, economic, and demographic data reveals that actual store density in most American cities is well below this threshold/optimal level, and modest increases in store density can substantially reduce waste; e.g., in Chicago, just 3–4 more stores (per 10 sq. km) can lead to a 6–9% waste reduction and a 1–4% decrease in grocery expenses. These results arise from the principal role of consumer waste, suggesting that activists’ and policy makers’ focus on retail waste may be misguided. Store operators, urban planners, and decision makers should aim to increase store densities to make grocery shopping more affordable and sustainable.

Tuesday, March 27, 2018

Presenter:  Terry Taylor

Title: On-Demand Service Platforms: Worker Independence and Welfare

Abstract

An on-demand service platform connects waiting-time-sensitive customers with service-providing workers. This talk addresses two topics. First, a defining feature of an on-demand service platform is that the workers are independent contractors rather than employees. We examine the implications of this worker independence for the platform’s optimal decisions (e.g., prices). Second, platforms’ efforts to aggressively recruit workers have been controversial. Some labor advocates have argued that an expansion in a platform’s labor supply hurts workers, who see, as a consequence of the expansion, less work and lower income. We examine the extent to which the interests of platforms in increasing labor supply are indeed at odds with those of workers.

Tuesday, April 3, 2018

Presenter:  Srikanth Jagabathula

Title: The Limit of Rationality in Choice Modeling: Formulation, Computation, and Implications

Abstract

Customer preferences may not be rational, so we focus on quantifying the limit of rationality (LoR) in choice modeling applications. We define LoR as the “cost” of approximating the observed choice fractions from a collection of offer sets with those from the best-fitting probability distribution over rankings. Computing LoR is intractable in the worst case. To tackle this challenge, we introduce two new concepts, rational separation and the choice graph, using which we reduce the problem to solving a dynamic program on the choice graph and express the computational complexity in terms of structural properties of the graph. By exploiting the graph structure, we provide practical methods to compute LoR efficiently for a large class of applications. We apply our methods to real-world grocery sales data from the IRI Academic Dataset and identify product categories for which going beyond rational choice models is necessary to obtain acceptable performance.
Joint with: Paat Rusmevichientong, USC Marshall

Tuesday, April 10, 2018

Presenter:  Marcelo Olivares

Title: Managing Worker Utilization in Service Platforms: An Empirical Study of an Outbound Call-Center

Abstract

In many service industries, providing prompt response to customers can be an important competitive advantage, especially when customers are time-sensitive. When demand for the service is variable and staffing requirements cannot be adjusted quickly, capacity decisions require a trade-off between responsiveness to customers and controlling operating costs through worker utilization. To break this trade-off, the service system can operate as a platform with access to a large pool of employees with flexible working hours who are compensated through piece-rates. Examples of such service platforms can be found in transportation, food delivery, and customer contact centers, among many others. While this business model can operate at low levels of utilization without increasing operating costs, a different trade-off emerges: in settings where employee training and experience are important, the service platform must control employee turnover, which may increase when employees are working at low levels of utilization. Hence, to make staffing decisions and manage workload, it is necessary to understand both customer behavior (measuring their sensitivity to service times) and employee retention. We analyze this trade-off in the context of an outbound call center selling auto insurance that operates with a pool of flexible agents working remotely. We develop an econometric approach to model customer behavior that captures special features of outbound calls: time-sensitivity and the effect of employee experience. A survival model is used to measure how agent retention is affected by the assigned workload. These empirical models of customers and agents are combined to illustrate how to balance time-sensitivity and employee experience, showing that both effects are relevant in practice for planning workload and staffing in a service platform.

(joint work with Andres Musalem and Daniel Yung)

Tuesday, April 17, 2018

Presenter:  Paat Rusmevichientong

Title: A New Approach in Approximate Dynamic Programming for Revenue Management of Reusable Products

Abstract

We present a new approach in approximate dynamic programming for revenue management of reusable products. The problem is motivated by emerging industries that rent out computing capacity and fashion items, where customers request products on demand, use the products for a random duration of time, and afterward return them to the firm. The goal is to find a policy that determines what products to offer to each customer to maximize the total expected revenue over a finite selling horizon. For this problem, the firm must simultaneously consider the inventories of available products along with the products currently in use by other customers, so the resulting dynamic programming formulation is intractable because of the high-dimensional state variable.

Using a novel approach for constructing an affine approximation to the value functions, we present a policy that is guaranteed to obtain at least 50% of the optimal expected revenue. Our construction is based on a simple and efficient backward recursion. We provide computational experiments based on parking transaction data from Seattle. Our numerical experiments demonstrate that the practical performance of our policy is substantially better than its worst-case performance guarantee.

Joint work with Huseyin Topaloglu and Mika Sumida (Cornell Tech)

Fall 2017

Tuesday, September 5th, 2017

Presenter:  Sharad Goel

Title: Algorithmic Decision Making and the Cost of Fairness

Abstract

Algorithms are now regularly used to decide whether defendants awaiting trial are too dangerous to be released back into the community. In some cases, black defendants are substantially more likely than white defendants to be incorrectly classified as high risk. To mitigate such disparities, several techniques have recently been proposed to achieve algorithmic fairness. We reformulate algorithmic fairness as constrained optimization: the objective is to maximize public safety while satisfying formal fairness constraints designed to reduce racial disparities. We show that for several past definitions of fairness, the optimal algorithms that result require detaining defendants above race-specific risk thresholds. We further show that the optimal unconstrained algorithm requires applying a single, uniform threshold to all defendants. The unconstrained algorithm thus maximizes public safety while also satisfying one important understanding of equality: that all individuals are held to the same standard, irrespective of race. Because the optimal constrained and unconstrained algorithms generally differ, there is tension between improving public safety and satisfying prevailing notions of algorithmic fairness. By examining data from Broward County, Florida, we show that this trade-off can be large in practice. We focus on algorithms for pretrial release decisions, but the principles we discuss apply to other domains, and also to human decision makers carrying out structured decision rules.
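A minimal sketch of the contrast at the heart of the paper, using illustrative risk scores rather than the Broward County data:

```python
# Minimal sketch (illustrative scores, not the paper's data) of the paper's
# contrast: the unconstrained safety-maximizing rule detains everyone whose
# estimated risk exceeds one uniform threshold, while a parity constraint on
# detention rates forces group-specific thresholds.

defendants = [  # (group, estimated risk of reoffending)
    ("A", 0.2), ("A", 0.5), ("A", 0.7), ("A", 0.9),
    ("B", 0.1), ("B", 0.3), ("B", 0.4), ("B", 0.8),
]

def detained(threshold_for):
    """Defendants detained under a rule mapping group -> risk threshold."""
    return [(g, r) for g, r in defendants if r >= threshold_for(g)]

# Unconstrained: one uniform threshold, the same standard for everyone.
uniform = detained(lambda g: 0.5)

# Constrained to equal detention rates (top two per group here), the rule
# must apply different thresholds to the two groups.
group_thresholds = {"A": 0.7, "B": 0.4}
constrained = detained(lambda g: group_thresholds[g])

print("uniform:    ", uniform)
print("constrained:", constrained)
```

Note how the constrained rule detains a group-B defendant at risk 0.4 while releasing a group-A defendant at risk 0.5, illustrating why the paper frames the parity constraint as holding individuals to different standards.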

Paper: https://5harad.com/papers/fairness.pdf

Tuesday, September 19th, 2017

Presenter:  Marshall Van Alstyne

Title: The Role of APIs in Firm Performance

Abstract

Do firms benefit from external as well as internal performance enhancements? Using proprietary information from a significant fraction of the API tool provision industry, we explore the impact of API adoption and complementary investments on firm performance. Data include external as well as internal developers. We use a difference-in-differences approach centered on the date of first API use to show that API adoption, measured both as a binary treatment and as a function of the number of calls and amount of data processed, is related to increased sales and operating income and to decreased costs. It is especially tightly related to increased market value. In a specification with year and firm fixed effects, binary API adoption predicts a 4.5% increase in a firm’s market value. Creation of API developer portals is associated with a decrease in R&D expenditure inside the firm, supporting the hypothesis that outside development substitutes for internal development. Categorizing APIs by their orientation, we find that B2B, B2C, and internal API calls are heterogeneous in their association with financial outcomes. Finally, the fact that increases in API calls are associated with contemporaneous increases in firm value suggests that data flows at the boundary of the firm can be used for stock market trading.
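For readers unfamiliar with the identification strategy, the core difference-in-differences comparison can be sketched with hypothetical numbers:

```python
# Core difference-in-differences comparison, with hypothetical sales figures
# (not the paper's data): the change for API adopters minus the change for
# non-adopters nets out time trends common to both groups.

pre, post = 0, 1
adopters    = {pre: 100.0, post: 130.0}   # mean outcome, API-adopting firms
nonadopters = {pre:  90.0, post: 105.0}   # mean outcome, comparison firms

did = (adopters[post] - adopters[pre]) - (nonadopters[post] - nonadopters[pre])
print(did)   # 15.0: the adoption effect net of the common trend
```

The paper's actual specifications add year and firm fixed effects, which generalize this two-group, two-period comparison to a full panel.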

Tuesday, September 26th, 2017

Presenter:  Heather Sarsons

Title: Interpreting Signals: Evidence from Doctor Referrals

Abstract

This project asks whether someone’s gender influences the way we interpret signals about their ability, and the implications this has for their career trajectory. Using data on referrals from primary care physicians (PCPs) to surgeons, I show that PCPs view good and bad patient outcomes differently depending on the performing surgeon’s gender. If a PCP refers a patient to a surgeon and the patient dies during surgery, the PCP is less likely to refer to that surgeon in the future, and significantly less likely still if the surgeon is female. Conversely, PCPs are more likely to refer to a surgeon after a good patient outcome, but significantly more likely to do so if the surgeon is male. I provide evidence that this is not driven by surgeon behaviour or by differences in underlying patient risk. I discuss the results in the context of a standard Bayesian updating model as well as a model of confirmation bias.
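As a reference point, here is a minimal beta-binomial sketch (with hypothetical numbers) of the standard Bayesian updating benchmark, under which beliefs respond only to observed outcomes, never to the surgeon's gender:

```python
# Beta-binomial sketch of the Bayesian benchmark the talk compares against.
# Numbers are hypothetical: the prior and the update depend only on observed
# outcomes, so two surgeons with identical records should end up with
# identical posterior beliefs about their ability.

def update(alpha: float, beta: float, successes: int, failures: int):
    """Posterior Beta parameters after observing `successes` and `failures`."""
    return alpha + successes, beta + failures

def mean(alpha: float, beta: float) -> float:
    """Posterior mean belief about the surgeon's success rate."""
    return alpha / (alpha + beta)

prior = (8.0, 2.0)                                  # prior belief: ~80% success
after_death = update(*prior, successes=0, failures=1)
print(round(mean(*after_death), 3))                 # belief falls to ~0.727
```

The paper's finding is that observed referral behavior departs from this benchmark: the same bad outcome moves beliefs by different amounts depending on the surgeon's gender.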

Tuesday, October 3rd, 2017

Presenter:  Jake Hofman

Title: How Predictable is the Spread of Information?

Abstract

How does information spread in online social networks, and how predictable are online information diffusion events?

Despite a great deal of existing research on modeling information diffusion and predicting “success” of content in social systems, these questions have remained largely unanswered for a variety of reasons, ranging from the inability to observe most word-of-mouth communication to difficulties in precisely and consistently formalizing different notions of success.

This talk will attempt to shed light on these questions through an empirical analysis of billions of diffusion events under one simple but unified framework.

We will show that even though information diffusion patterns exhibit stable regularities in the aggregate, it remains surprisingly difficult to predict the success of any particular individual or single piece of content in an online social network, with our best-performing models explaining only half of the empirical variance in outcomes.

We conclude by exploring this limit theoretically through a series of simulations that suggest that it is the diffusion process itself, rather than our ability to estimate or model it, that is responsible for this unpredictability.

Tuesday, October 10th, 2017

Presenter:  Kostas Bimpikis

Title: Spatial Pricing in Ride-Sharing Networks

Abstract

We explore spatial price discrimination in the context of a ride-sharing platform that serves a network of locations. Riders are heterogeneous in terms of their destination preferences and their willingness to pay for receiving service. Drivers decide whether, when, and where to provide service so as to maximize their expected earnings, given the platform’s prices. Our findings highlight the impact of the demand pattern on the platform’s prices, profits, and the induced consumer surplus. In particular, we establish that profits and consumer surplus are maximized when the demand pattern is “balanced” across the network’s locations. In addition, we show that they both increase monotonically with the balancedness of the demand pattern (as formalized by its structural properties). Furthermore, if the demand pattern is not balanced, the platform can benefit substantially from pricing rides differently depending on the location they originate from. Finally, we consider a number of alternative pricing and compensation schemes that are commonly used in practice and explore their performance for the platform.

Tuesday, October 17th, 2017 in JMHH F55 (note location change)

Presenter:  Eytan Bakshy

Title: Experimental Learning and Optimization 

Abstract

Online experiments (“A/B tests”) are the workhorse of modern Internet development, yet these experiments are generally limited to evaluating the effects of only one or two variants.  In many cases, however, we are interested in evaluating the effects of thousands or a potentially infinite number of possible interventions, such as treatments parametrized by continuous variables, or dynamic personalized treatment regimes that map particular states to different actions.  I will discuss a new approach to large-scale field experimentation using Gaussian process regression models and Bayesian optimization to solve such multi-armed bandit problems.  Using empirical examples, I will show how we are able to effectively apply Bayesian modeling to both finite and infinite action spaces to make predictions about yet-to-be-observed treatments.   These models are combined with optimization procedures that produce demonstrable improvements to mobile software, infrastructure, and machine learning systems.
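The talk's approach uses Gaussian process regression over rich treatment spaces; as a much simpler standard-library illustration of the same adaptive-experimentation idea, here is Thompson sampling over a finite set of binary-outcome treatments:

```python
import random

# Thompson sampling over a finite set of treatments with unknown success
# rates. This is a simpler stand-in for the talk's Gaussian-process approach,
# which extends the same idea to continuous and infinite action spaces.
# The success rates below are made up for illustration.

def thompson(true_rates, pulls, rng):
    """Run Thompson sampling; returns per-arm [successes+1, failures+1] counts."""
    counts = [[1, 1] for _ in true_rates]   # Beta(1, 1) prior on each arm
    for _ in range(pulls):
        # Sample a plausible rate for each arm, then play the best-looking one.
        samples = [rng.betavariate(a, b) for a, b in counts]
        arm = samples.index(max(samples))
        reward = rng.random() < true_rates[arm]
        counts[arm][0 if reward else 1] += 1
    return counts

rng = random.Random(0)
counts = thompson([0.05, 0.12, 0.30], pulls=2000, rng=rng)
total_pulls = [a + b - 2 for a, b in counts]
print(total_pulls)   # pulls concentrate on the best-performing treatment
```

The key property shared with the talk's Bayesian optimization procedures: the experiment adaptively shifts traffic toward promising treatments instead of splitting it evenly across all arms.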


Tuesday, October 31st, 2017

Presenter:  Tatiana Homonoff

Title: Do FICO Scores Influence Financial Behavior? Evidence from a Field Experiment with Student Loan Borrowers

Tuesday, November 7th, 2017

Presenter:  Yiangos Papanastasiou

Title: Fake News Propagation and Detection: A Sequential Model

Abstract

In the wake of the 2016 US presidential election, social media platforms are facing increasing pressure to safeguard their users against the propagation of “fake news” (i.e., articles whose content is fabricated). In this paper, we develop a simple model of news propagation, in which a sequence of heterogeneous rational agents choose first whether to inspect an article to determine its validity (i.e., perform a “fact-check”), and then whether to share the article with the next agent. Although the agents are intent on sharing only truthful news, our model highlights how the sequential nature of content-sharing on social media can lead to pathological outcomes, whereby fake news articles attain “truthful news status” and are propagated in perpetuity. We then consider a social media platform’s problem of deciding whether and when to intervene in the sharing of a news article by conducting its own inspection. We show that the optimal policy reduces to the solution of a simple finite-horizon optimal stopping problem, and identify the characteristics of the news environment that render immediate inspection, delayed inspection, and non-inspection optimal. Through a combination of analytical results and numerical experiments, we quantify the impact of fake news articles on the agents’ beliefs and highlight cases where this impact is most pronounced.
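A generic backward-induction sketch of a finite-horizon optimal stopping problem, with hypothetical payoffs rather than the paper's model:

```python
# The platform's intervention problem reduces to finite-horizon optimal
# stopping, solvable by backward induction. Payoffs here are hypothetical:
# `stop_value(t)` is the payoff of inspecting the article at period t, and
# waiting discounts the continuation value.

def solve_stopping(horizon, stop_value, continue_value):
    """Backward induction; inspection is forced at the horizon.

    Returns (values, policy), where policy[t] is True when inspecting
    at period t is optimal.
    """
    values = [0.0] * (horizon + 1)
    policy = [False] * horizon
    values[horizon] = stop_value(horizon)
    for t in range(horizon - 1, -1, -1):
        stop = stop_value(t)
        cont = continue_value(t, values[t + 1])
        policy[t] = stop >= cont
        values[t] = max(stop, cont)
    return values, policy

# Example: inspection payoff grows while the article spreads, then saturates;
# the optimal policy is delayed inspection (wait, then inspect at t = 4).
values, policy = solve_stopping(
    horizon=5,
    stop_value=lambda t: min(t, 4),
    continue_value=lambda t, future: 0.9 * future,
)
print(policy)   # [False, False, False, False, True]
```

Different payoff shapes make immediate inspection, delayed inspection, or non-inspection optimal, which is the taxonomy of regimes the abstract describes.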

Tuesday, November 14th, 2017

Presenter:  Bo Cowgill

Title: Bias and Productivity in Humans and Algorithms: Theory and Evidence from Résumé Screening

Abstract

Where should algorithms improve decision-making? I formally model the advantages of human judgment and decision-making algorithms. I show that algorithms can remove human biases exhibited in the training data, but only if the human judgment is sufficiently noisy. The model suggests that decision-making algorithms have the biggest effects on productivity where human judgment is both biased and inconsistent. Where human decisions are biased and consistent, algorithms trained on these judgments (and their outcomes) will codify the bias rather than reduce it. By contrast, noise in human judgment facilitates de-biasing by contributing quasi-experimental variation to algorithms’ training data. I test these predictions in a field experiment applying machine learning to hiring workers for white-collar team-production jobs. The marginal candidate selected by the machine (but rejected by human screeners) is a) 14% more likely to pass a face-to-face interview with incumbent workers and receive a job offer, b) 18% more likely to accept a job offer when extended by the employer, and c) 0.2σ to 0.4σ more productive once hired. These candidates are also 12% less likely to show evidence of competing job offers during salary negotiations. Estimates of heterogeneous effects suggest that the results are driven by non-traditional job applicants: candidates from non-elite backgrounds, those who lack job referrals, those without prior experience, those with atypical credentials, and those with strong non-cognitive soft skills. Empirical evidence suggests that human evaluation of these candidates was both noisy and biased.

Monday, November 20th, 2017

Presenter:  Chloe Kim Glaeser

Title: Optimal Retail Location: Empirical Methodology and Application to Practice

Abstract

We empirically study the spatio-temporal location problem motivated by an online retailer that uses the Buy-Online-Pick-Up-In-Store fulfillment method. Customers pick up their orders from trucks parked at specific locations on specific days, and the retailer’s problem is to determine where and when these pick-ups occur. Customer demand is influenced by the convenience of pick-up locations and days. We combine demographic and economic data, business location data, and the retailer’s historical sales and operations data to predict demand at potential locations. We introduce a novel procedure that combines machine learning and econometric techniques. First, we use a fixed-effects regression to estimate spatial and temporal cannibalization effects. Then, we use a random forest algorithm to predict demand when a particular location operates in isolation. Based on the predicted demand, we solve the spatio-temporal integer program using a quadratic programming relaxation to find the optimal pick-up location configuration and schedule. We estimate a revenue increase of at least 42% from the improved location configuration and schedule.