Seminars 2018-2019

Spring 2019

Tuesday, April 30, 2019

Presenter: Pei-yu Chen

Title: From Data to Actionable Analytics: The Magical Power of Individual Shopping Time Habit

Abstract

I will share some of my recent work on deriving actionable analytics from large-scale data. This talk focuses on a project that aims to understand and measure individual shopping time habits and their effects. Little research has focused on online shopping habits, particularly concerning time, missing the opportunity to improve important outcomes through the simple, innovative use of time. Based on a unique dataset that includes reviews as well as pertinent purchases at the individual level from a large online retailer, this study investigates whether consumers exhibit time habits for online shopping and whether following such time habits affects their satisfaction and re-visit behavior. We employ activity-based metrics to assess shopping time habits at the individual level, and results show that consumers form shopping time habits and that they report higher satisfaction and exhibit greater re-visit behavior when the timing of their shopping follows those habits. While prior work has documented that consumers exhibit time habits for physical shopping, driven mostly by time and location constraints, this study is, to our knowledge, the first to examine online shopping time habits and, most importantly, their effects on consumer satisfaction and re-visit behavior. With the availability of detailed individual transaction data in online shopping and advances in technology for personalized services, which enable companies to act on knowledge of individual behaviors, this research has important practical implications for system and website design, marketing strategy, and customer relationship management.
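
As a concrete illustration of the kind of activity-based metric the abstract refers to, the sketch below scores whether a new purchase falls within a user’s habitual shopping hours. The metric, field names, and data are our own illustrative assumptions, not the paper’s specification.

```python
from collections import Counter

# A toy "shopping time habit" profile: the hours of the week (0-167) in
# which a user most often purchases. Habit metric and data are hypothetical.
def habit_profile(purchase_hours, top_k=3):
    """Return the user's top-k most frequent purchase hours."""
    counts = Counter(purchase_hours)
    return {hour for hour, _ in counts.most_common(top_k)}

def follows_habit(new_purchase_hour, profile):
    """True if a new purchase lands inside the habitual window."""
    return new_purchase_hour in profile

# Hypothetical history: a user who mostly shops on Tuesday evenings (hour 43).
history = [43, 43, 42, 43, 44, 43, 20, 43]
profile = habit_profile(history)
print(profile)                      # e.g. {42, 43, 44}, depending on tie-breaking
print(follows_habit(43, profile))   # True: purchase follows the time habit
print(follows_habit(100, profile))  # False: off-habit purchase
```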

Tuesday, April 23, 2019

Presenter: Ashton Anderson

Title: Assessing Human Error Against a Benchmark of Perfection

Abstract

An increasing number of domains are providing us with detailed trace data on human decisions in settings where we can evaluate the quality of these decisions via an algorithm. Motivated by this development, an emerging line of work has begun to consider whether we can characterize and predict the kinds of decisions where people are likely to make errors. To investigate what a general framework for human error prediction might look like, we focus on a model system with a rich history in the behavioral sciences: the decisions made by chess players as they select moves in a game. We carry out our analysis at a large scale, employing datasets with several million recorded games, and using chess tablebases to acquire a form of ground truth for a subset of chess positions that have been completely solved by computers but remain challenging for even the best players in the world. We organize our analysis around three categories of features that we argue are present in most settings where the analysis of human error is applicable: the skill of the decision-maker, the time available to make the decision, and the inherent difficulty of the decision. We identify rich structure in all three of these categories of features, and find strong evidence that in our domain, features describing the inherent difficulty of an instance are significantly more powerful than features based on skill or time.
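
To make the three feature categories concrete, here is a minimal sketch, with hypothetical data and column names, of how one might train an error predictor of this kind; the paper’s actual features and models are richer, and scikit-learn is our choice here, not necessarily theirs.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical per-move records: skill of the decision-maker, time
# available, and inherent difficulty of the position (e.g., derived
# from tablebase analysis). Values are made up for illustration.
df = pd.DataFrame({
    "player_rating":       [1500, 2200, 1800, 2500, 1600, 2100],
    "seconds_remaining":   [30, 300, 60, 600, 15, 120],
    "position_difficulty": [0.9, 0.2, 0.7, 0.1, 0.8, 0.4],
    "blundered":           [1, 0, 1, 0, 1, 0],
})

X, y = df.drop(columns="blundered"), df["blundered"]
model = LogisticRegression().fit(X, y)

# Estimated blunder probabilities; in the paper's spirit, one can compare
# how much predictive power each feature category contributes.
print(model.predict_proba(X)[:, 1])
```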

Tuesday, April 9, 2019

Presenter: Costis Maglaras

Title: The Role of Operational Controls and Driver-Side Pricing in Ride-Hailing Networks

Abstract

I will discuss two separate projects. The first explores the role of operational controls in a stylized ride-hailing network with strategic drivers and characterizes the value of platform admission and matching controls on equilibrium behavior. One particular finding is that the platform may proactively wish to deny a match to arriving riders at a particular location, even when local driver capacity is available, so as to aggravate driver queueing and incentivize drivers to reposition elsewhere in the network (a form of damaged goods). The second project, still preliminary, explores how the platform can use dynamic driver-side pricing to optimize transient network behavior around a random demand shock. We explore this issue with emphasis on the interplay between the timescale of the demand shock duration, rider delay patience, and driver transportation delay. We find that personalized driver pricing (location- and time-dependent as a function of the demand shock’s location and start time) improves performance significantly. Additionally, the platform can improve performance by either sharing some of the risk with drivers or tactically managing drivers’ perceived risk of not arriving at the hotspot in time to benefit from the surge.

Both projects are joint work with Zhe Liu and Philipp Afeche.

Tuesday, April 2, 2019

Presenter: Georgia Perakis

Title: Retail Analytics: High-Low Promotion Optimization for Peak-End Demand Models

Abstract

Promotions are a key instrument for driving sales for retailers. As a result, retailers know it is crucial to optimize the timing and depth of promotions in order to maximize profits. But how should they process the data necessary to determine optimal pricing and timing? Left to the intuition of product managers, retailers risk missing out. This talk discusses a body of work that addresses the problem of promotion pricing. We address both how to predict demand effectively and how to optimize promotions using tools from optimization and machine learning. The demand model we discuss uses features such as the current and past period prices, but also the minimum price over a window of past periods (which we refer to as memory), among other features. Using this demand model (which we refer to as the bounded memory peak-end demand model), we propose a compact dynamic programming formulation for determining optimal promotions for both the single-item and the multiple-item problem. We analyze the promotion effects and illustrate that the bounded memory peak-end demand model yields high-low optimal strategies. In addition, we illustrate that the methods we introduce are computationally efficient and “easy” to use in practice. We also illustrate the performance of our models using data from our collaboration with the Oracle Retail Global Business Unit (RGBU) and discuss the overall practical impact of our approach for RGBU clients. We show how our approach has the potential to help retailers increase profits by 3-10 percent on average. In a world of slim profit margins, this could be a game changer for retailers.
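
A minimal sketch of a dynamic program in this spirit appears below. The demand function, price ladder, and memory length are illustrative assumptions, not the paper’s calibration; the point is that the state, the last M prices, stays compact.

```python
from functools import lru_cache

PRICES = (10.0, 9.0, 8.0)   # hypothetical price ladder: full price plus promo depths
T = 12                      # planning horizon in periods
M = 2                       # memory length for the peak-end (minimum-price) feature

def demand(p_now, p_prev, p_min):
    # Toy bounded-memory peak-end demand: sales fall in today's price and
    # rise with the last price and the minimum price over the memory window.
    return 100 - 8.0 * p_now + 2.0 * p_prev + 3.0 * p_min

@lru_cache(maxsize=None)
def best_profit(t, history):
    # history = tuple of the last M posted prices (most recent last)
    if t == T:
        return 0.0
    options = []
    for p in PRICES:
        revenue = p * demand(p, history[-1], min(history))
        options.append(revenue + best_profit(t + 1, history[1:] + (p,)))
    return max(options)

# Optimal profit starting from a full-price history; the state space holds
# only len(PRICES) ** M = 9 histories per period, hence "compact".
print(best_profit(0, (PRICES[0],) * M))
```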

Tuesday, March 26, 2019

Presenter: Sam Ransbotham

Title: Open Source Code and the Risk of Attacks

Abstract

Vulnerabilities in software may be found before or after release. For open source software, security attention frequently focuses on the discovery of vulnerabilities prior to release. For example, the many diverse people who can view the source code may find vulnerabilities before the software product is released. As a result, open source software has the potential to be more secure than closed source software. Unfortunately, for vulnerabilities found after release, the benefits of access to source code may now work against open source software security. Attackers may be more likely to exploit discovered vulnerabilities since they too can view the source code and can use it to learn the details of a weakness and how best to exploit it. This research focuses on post-release exploitation attempts, contrasting vulnerabilities discovered in open source software with those found in closed source software. Empirical analysis of two years of security alert data from intrusion detection systems indicates that, once discovered, open source software vulnerabilities are at greater risk of exploitation.
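
The abstract does not spell out the estimation strategy, but a standard way to frame “greater risk of exploitation once discovered” is as a survival analysis. The sketch below, with hypothetical data, uses the widely available lifelines package; it is our framing, not necessarily the authors’ specification.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical vulnerability-level data: days from disclosure to the first
# observed exploitation attempt, whether exploitation was observed at all,
# and whether the affected product is open source.
df = pd.DataFrame({
    "days_to_exploit": [12, 45, 90, 7, 30, 60],
    "exploited":       [1, 1, 0, 1, 1, 0],   # 0 = censored (never exploited)
    "open_source":     [1, 0, 0, 1, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_exploit", event_col="exploited")
# A hazard ratio above 1 on open_source would indicate greater
# post-discovery exploitation risk for open source vulnerabilities.
cph.print_summary()
```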

Tuesday, March 19, 2019

Presenter: Mohamed Mostagir

Title: Dynamic Contest Design: Theory, Experiments, and Applications

Abstract

Contests are a common mechanism for extracting effort from participants. Their use is widespread in settings such as workplace promotions, innovation crowdsourcing, and healthcare quality improvement. One of the pivotal aspects of contest design is the contest’s information structure: what information should the contest designer provide to participants, and when should this information be revealed? The answers to these questions directly impact the behavior of players and the outcome of the contest, and they also have broader implications for institutional and policy design. We derive the contest’s optimal information disclosure policy within a large class of policies and design a novel experiment to evaluate how these policies perform in the lab.

Tuesday, March 12, 2019

Presenter: Kelly Shue

Title: Can the Market Multiply and Divide? Non-Proportional Thinking in Financial Markets

Abstract

When pricing financial assets, rational agents should think in terms of proportional price changes, i.e., returns. However, stock price movements are often reported and discussed in dollar rather than percentage units, which may cause investors to think that news should correspond to a dollar change in price rather than a percentage change in price. Non-proportional thinking in financial markets can lead to return underreaction for high-priced stocks and overreaction for low-priced stocks. Consistent with a simple model of non-proportional thinking, we find that total volatility, idiosyncratic volatility, and absolute market beta are significantly higher for stocks with low share prices, controlling for size. To identify a causal effect of price, we show that volatility increases sharply following stock splits and drops following reverse stock splits. The economic magnitudes are large: non-proportional thinking can explain the “leverage effect” puzzle, in which volatility is negatively related to past returns, as well as the volatility-size and beta-size relations in the data. We also show that low-priced stocks drive the long-run reversal phenomenon in asset pricing, and that the magnitude of long-run reversals can be sorted by price, holding past returns and size constant. Finally, we show that non-proportional thinking biases reactions to news that is itself reported in nominal rather than scaled units: investors react to nominal earnings-per-share surprises even after controlling for the earnings surprise scaled by share price. The reaction to the nominal earnings surprise reverses in the long run, consistent with a correction of mispricing.
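
A back-of-the-envelope illustration of the core mechanism: if investors react to news with a roughly fixed dollar price move, the implied return volatility is mechanically larger for low-priced stocks. The numbers below are our toy assumptions, not the paper’s calibration.

```python
import numpy as np

# Assume news moves every stock by roughly +/- $1 regardless of share price
# (the non-proportional-thinking benchmark). Shock size is a toy assumption.
rng = np.random.default_rng(0)
dollar_moves = rng.normal(0.0, 1.0, size=10_000)

for price in (20, 200):
    returns = dollar_moves / price  # same dollar move, different return
    print(f"${price} stock: return volatility = {returns.std():.2%}")

# Output: the $20 stock shows ~5.00% return volatility, the $200 stock
# ~0.50%, i.e., ten times the volatility at one-tenth the share price.
```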

Fall 2018

Tuesday, October 30, 2018
(JMHH 370)

Presenter: Tianshu Sun

Title: Displaying Things in Common to Encourage Friendship Formation: A Large Randomized Field Experiment

Abstract

Friendship formation is of central importance to online social network sites and to society. In this study, we investigate whether and how displaying things in common (TIC) between users (mutual hometown, interests, education, work, city) may encourage friendship formation. Displaying TIC computed from big data may update an individual’s belief about their shared similarity with another person and reduce information frictions that may be hard to overcome in offline communication. In collaboration with a large online social network, we design and implement a randomized field experiment involving over 50 million viewer-profile pairs, which randomly varies the prominence of TIC information when a user (viewer) is browsing a non-friend’s profile. The dyad-level exogenous variation, orthogonal to any unobserved factors in the viewer-profile network, allows us to cleanly isolate the role of individuals’ preferences (over TIC) in driving network formation and homophily. We find that displaying TIC to viewers can significantly increase their probability of sending a friend request and forming a friendship, and that it is especially effective for viewer-profile pairs who have little in common (no mutual friends and only one or two things in common). These findings suggest that information interventions are effective in encouraging the formation of weak ties, and they also provide the first experimental evidence on the role of individuals’ preferences (versus structural factors) in network formation. We further explore heterogeneity in the effect and demonstrate that displaying TIC can improve friendship formation for a wide range of viewers with different characteristics. Finally, we propose an information-theoretic model to characterize the belief update process underlying a viewer’s decision, and we provide evidence that displaying TIC is more effective when the TIC information is more surprising to the viewer. These insights and the information-theoretic framework can guide the optimal design of information display in the friendship formation process.
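
The closing “surprise” result can be made concrete with a simple information-theoretic quantity. The sketch below, our illustration rather than the paper’s model, measures how informative a displayed TIC is by the viewer’s prior probability of sharing that attribute.

```python
import math

def surprisal_bits(prior_prob_shared):
    """Shannon information content, in bits, of learning a thing in common."""
    return -math.log2(prior_prob_shared)

# Hypothetical priors: most pairs share a country; few share a small hometown.
print(surprisal_bits(0.80))  # ~0.32 bits -- unsurprising TIC, weaker effect expected
print(surprisal_bits(0.01))  # ~6.64 bits -- surprising TIC, stronger effect expected
```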

Tuesday, October 23, 2018
(JMHH 370)

Presenter: Nir Halevy

Title: The Calculus of Peacemaking

Abstract

Third parties have acted as peacemakers since the dawn of history. However, little is known about the causes and consequences of voluntary, informal third-party intervention in conflict. A series of experiments investigated when, why, and how third parties intervene in others’ conflicts, transform them, and promote cooperation. Overall, this program of research finds that: (a) the mere possibility of third-party intervention is sufficient to increase cooperation among disputants; (b) third parties’ willingness to intervene critically depends on their ability to secure gains and avoid costs to themselves; (c) the positive effects of introducing third-party intervention are evident even following a history of conflict; and (d) these effects persist even after the third party can no longer intervene. These findings are discussed in the context of a broader conceptual framework that considers when, why, and how third parties influence others’ interactions and relationships, for better or worse.

Tuesday, October 16, 2018
(JMHH 370)

Presenter: Panos Ipeirotis

Title: Targeted Crowdsourcing with a Billion (Potential) Users

Abstract

We describe Quizz, a gamified crowdsourcing system that simultaneously assesses the knowledge of users and acquires new knowledge from them. Quizz operates by asking users to complete short quizzes on specific topics; as a user answers the quiz questions, Quizz estimates the user’s competence. To acquire new knowledge, Quizz also incorporates questions for which we do not have a known answer; the answers given by competent users provide useful signals for selecting the correct answers to these questions. Quizz actively tries to identify knowledgeable users on the Internet by running advertising campaigns, effectively leveraging “for free” the targeting capabilities of existing, publicly available ad placement services. Quizz quantifies the contributions of users using information theory and sends feedback about each user to the advertising system. This feedback allows the ad targeting mechanism to further optimize ad placement. Our experiments, which involve over ten thousand users, confirm that we can crowdsource knowledge curation for niche and specialized topics, as the advertising network can automatically identify users with the desired expertise and interest in the given topic. We present controlled experiments that examine the effect of various incentive mechanisms, highlighting the need for short-term rewards as goals that incentivize users to contribute. Finally, our cost-quality analysis indicates that the cost of our approach is below that of hiring workers through paid crowdsourcing platforms, while offering the additional advantages of giving access to billions of potential users all over the planet and of reaching users with specialized expertise that is not typically available through existing labor marketplaces.
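
As a rough illustration of the information-theoretic scoring idea, consider binary-choice calibration questions: a user whose estimated probability of answering correctly is q contributes about 1 - H(q) bits per answer over a random guesser. This simplified measure and the numbers below are our own, not Quizz’s exact formula.

```python
import math

def entropy(q):
    """Binary entropy H(q) in bits."""
    if q in (0.0, 1.0):
        return 0.0
    return -(q * math.log2(q) + (1 - q) * math.log2(1 - q))

def bits_per_answer(q):
    """Information gain per binary-choice answer for estimated competence q."""
    return 1.0 - entropy(q)

for q in (0.5, 0.7, 0.9, 0.99):
    print(f"competence {q:.2f}: {bits_per_answer(q):.3f} bits/answer")
# A coin-flipping user (q = 0.50) contributes ~0 bits; near-perfect
# users approach 1 bit per answer, so their answers count for more.
```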

Tuesday, October 9, 2018
(JMHH 270)

Presenter: Abigail Sussman

Title: Mental Accounting Failures: The Case of Exceptional Consumption

Abstract

Expenses fall along a continuum from ordinary (common or frequent) to exceptional (unusual or infrequent), with many of the largest expenses (e.g., electronics, celebrations) being the most exceptional. In the current project, I show that consumers are fairly adept at budgeting and predicting how much they will spend on ordinary items, but they both underestimate their spending on exceptional purchases overall and overspend on each individual purchase. Based on the principles of mental accounting and choice bracketing, I show that this discrepancy arises in part because consumers have difficulty categorizing and tracking exceptional expenses. Specifically, consumers are less likely to draw connections between exceptional (versus ordinary) items and other items they purchase, and less likely to believe that consumption of these items meaningfully impacts their budgets. The current research extends these findings beyond the domain of money and financial budgeting by drawing parallels to food and caloric budgeting, providing evidence that exceptional items create a common set of challenges for consumers across contexts. I conclude by examining how we can harness our understanding of exceptional items to improve outcomes for consumers. First, I provide evidence from both lab and field settings that exceptional framing of an identical charitable opportunity increases willingness to donate. Second, I provide evidence that an intervention that helps consumers consider their spending on exceptional items as part of a larger set of purchases reduces spending on these products.

Tuesday, October 2, 2018

Presenter: John Beshears

Title: Borrowing to Save? The Impact of Automatic Enrollment on Debt

Abstract

Automatic enrollment in defined contribution retirement savings plans is one of the most widely recognized applications of behavioral science in a managerial setting. Previous research documents that automatic enrollment increases average savings plan contributions. But how much of the retirement savings induced by automatic enrollment is offset by increased borrowing outside the savings plan? We study a natural experiment created when the U.S. Army began automatically enrolling its newly hired civilian employees into the Thrift Savings Plan (TSP) at a default contribution rate of 3% of income. Four years after hire, automatic enrollment causes no significant change in debt excluding auto loans and first mortgages (point estimate = 0.9% of income, 95% confidence interval = [-0.9%, 2.7%]). Automatic enrollment does significantly increase auto loan balances by 2.0% of income and first mortgage balances by 7.4% of income. Because we do not observe car or home values, we do not know whether this new debt is offset by greater accumulation of the assets on which these collateralized debts were issued.

Tuesday, September 25, 2018

Presenter: Jacob Abernethy

Title: Emerging Tools for Sequential Decision Making, with Applications in Learning and Game-Playing

Abstract

In this talk we will explore algorithmic tools for solving sequential decision and prediction problems. These methods have grown quite popular in recent years given their scalability, their broad use in practice, and their reliance on much weaker statistical assumptions than classical approaches require. We will begin by exploring two key applications: (a) adaptive pricing for revenue maximization by a monopolist seller, and (b) the search for lead pipes in Flint, MI. We will then turn our attention to a more foundational result: the solution of zero-sum games using so-called “no-regret algorithms”. We will describe recent work that allows us to view several classical iterative optimization methods through the lens of game theory.
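
To make the “no-regret algorithms solve zero-sum games” connection concrete, the sketch below runs multiplicative weights, a canonical no-regret algorithm, in self-play; the game (rock-paper-scissors) and step size are our illustrative choices. The time-averaged strategies converge to the minimax equilibrium.

```python
import numpy as np

# Row player's payoff matrix for rock-paper-scissors (zero-sum); its
# unique equilibrium is the uniform strategy, making convergence easy to see.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]], dtype=float)
eta, T = 0.1, 5000
w_row, w_col = np.ones(3), np.ones(3)
avg_row, avg_col = np.zeros(3), np.zeros(3)

for _ in range(T):
    p, q = w_row / w_row.sum(), w_col / w_col.sum()
    avg_row, avg_col = avg_row + p, avg_col + q
    # Multiplicative weights: exponentiate each action's expected payoff.
    w_row *= np.exp(eta * (A @ q))      # row player maximizes p^T A q
    w_col *= np.exp(-eta * (A.T @ p))   # column player minimizes it
    w_row, w_col = w_row / w_row.sum(), w_col / w_col.sum()  # avoid overflow

print("average row strategy:", np.round(avg_row / T, 3))  # -> ~[1/3, 1/3, 1/3]
print("average col strategy:", np.round(avg_col / T, 3))
```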