Robert L. Bray
Kellogg School of Management
Research Interests: empirical operations management, supply chain management, dynamic programming
Management Science, 58(5), 860-875
Abstract: The bullwhip effect is the amplification of demand variability along a supply chain: a company bullwhips if it purchases from suppliers more variably than it sells to customers. Such amplification can lead to mismatches between demand and production and hence to lower supply chain efficiency. We investigate the bullwhip effect in a sample of 4,689 public U.S. companies over 1974–2008. Overall, about two-thirds of firms bullwhip. The sample's mean and median bullwhips, both significantly positive, respectively measure 15.8% and 6.7% of total demand variability. Put another way, the mean quarterly standard deviation of upstream orders exceeds that of demand by $20 million. We decompose the bullwhip by information transmission lead time. Estimating the bullwhip's information-lead-time components with a two-stage estimator, we find that demand signals firms observe with more than three-quarters' notice drive 30% of the bullwhip, and those observed with less than one-quarter's notice drive 51%. From 1974–1994 to 1995–2008, our sample's mean bullwhip dropped by a third.
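A minimal sketch of the statistic at the heart of this analysis: how much more variable a firm's upstream orders are than its downstream sales. Proxying orders by sales plus the change in inventory is an assumption made here for illustration (a common construction when only public filings are available), not the paper's exact estimator.

```python
import numpy as np

def bullwhip(sales, inventory):
    # Bullwhip statistic: the excess of upstream order variability over
    # downstream sales variability, as a share of sales variability.
    # Orders are proxied by sales plus the change in inventory (assumption).
    sales = np.asarray(sales, dtype=float)
    inventory = np.asarray(inventory, dtype=float)
    orders = sales[1:] + np.diff(inventory)  # orders_t = sales_t + inv_t - inv_{t-1}
    return (np.std(orders) - np.std(sales[1:])) / np.std(sales[1:])

# Toy example: noisy inventory swings make purchases more variable than sales.
rng = np.random.default_rng(0)
sales = 100 + rng.normal(0, 10, 40)           # quarterly sales
inventory = np.cumsum(rng.normal(0, 15, 40))  # quarterly inventory levels
print(f"bullwhip: {bullwhip(sales, inventory):+.1%}")  # positive => amplification
```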
Manufacturing & Service Operations Management, 17(2), 208-220
Abstract: The bullwhip effect and production smoothing appear antithetical because their empirical tests oppose one another: production variability exceeding sales variability for bullwhip, and vice versa for smoothing. But this is a false dichotomy. We distinguish between the phenomena with a new production smoothing measure, which estimates how much more variable production would be absent production volatility costs. We apply our metric to an automotive manufacturing sample comprising 162 car models and find that 75% smooth production by at least 5%, even though 99% exhibit the bullwhip effect. Indeed, we estimate both a strong bullwhip (on average, production is 220% as variable as sales) and robust smoothing (on average, production would be 22% more variable without deliberate stabilization). We find firms smooth both production variability and production uncertainty. We measure production smoothing with a structural econometric production scheduling model, based on the generalized order-up-to policy.
Manufacturing & Service Operations Management, 18(4), 545-558
Abstract: We model how a judge schedules cases as a multiarmed bandit problem. The model indicates that a first-in-first-out (FIFO) scheduling policy is optimal when the case completion hazard rate function is monotonic. But there are two ways to implement FIFO in this context: at the hearing level or at the case level. Our model indicates that the former policy, prioritizing the oldest hearing, is optimal when the case completion hazard rate function decreases, and the latter policy, prioritizing the oldest case, is optimal when the case completion hazard rate function increases. This result convinced six judges of the Roman Labor Court of Appeals—a court that exhibits increasing hazard rates—to switch from hearing-level FIFO to case-level FIFO. Tracking these judges for eight years, we estimate that our intervention decreased the average case duration by 12% and the probability of a decision being appealed to the Italian supreme court by 3.8%, relative to a 44-judge control sample.
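A small simulation conveys the scheduling intuition. Under an increasing completion hazard, exhausting the oldest case (case-level FIFO) finishes cases sooner on average than cycling through hearings (hearing-level FIFO). The hazard function and case counts below are invented for illustration and are not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(1)

def hazard(k):
    # Probability a case completes at its k-th hearing: increasing in k,
    # as in the appellate court studied (the numbers are illustrative).
    return min(0.10 + 0.15 * k, 0.95)

def mean_duration(policy, n_cases=30, n_reps=2000):
    durations = []
    for _ in range(n_reps):
        hearings = [0] * n_cases
        queue = list(range(n_cases))   # all cases arrive at time 0
        t = 0
        while queue:
            t += 1
            case = queue.pop(0)        # both policies serve the front of the queue
            hearings[case] += 1
            if rng.random() < hazard(hearings[case]):
                durations.append(t)    # case completes at this hearing
            elif policy == "case":
                queue.insert(0, case)  # case-level FIFO: keep working the oldest case
            else:
                queue.append(case)     # hearing-level FIFO: oldest pending hearing next
    return np.mean(durations)

print("case-level FIFO   :", round(mean_duration("case"), 1))
print("hearing-level FIFO:", round(mean_duration("hearing"), 1))
```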
Operations Research, 67(2), 453-467
Abstract: We model a single-supplier, 73-store supply chain as a dynamic discrete choice problem. We estimate the model with transaction-level data, spanning 3,251 products and 1,370 days. We find two interrelated phenomena: the bullwhip effect and ration gaming. To establish the bullwhip effect, we show that shipments from suppliers are more variable than sales to customers. To establish ration gaming, we show that upstream scarcity triggers inventory runs, with stores simultaneously scrambling to amass private stocks in anticipation of impending shortages. These inventory runs increase our bullwhip measures by between 6% and 19%, which corroborates the long-standing hypothesis that ration gaming causes the bullwhip effect.
Management Science, 65(10), 4598-4606
Abstract: I present two algorithms for solving dynamic programs with exogenous variables: endogenous value iteration and endogenous policy iteration. These algorithms are always at least as fast as relative value iteration and relative policy iteration, and they are faster when the endogenous variables converge to their stationary distributions sooner than the exogenous variables.
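For context, here is a minimal numpy sketch of relative value iteration, the average-reward baseline against which the paper's algorithms are benchmarked; the endogenous variants additionally exploit the split between endogenous and exogenous state variables, a refinement this sketch does not implement.

```python
import numpy as np

def relative_value_iteration(P, r, tol=1e-9, max_iter=100_000):
    # Relative value iteration for an average-reward MDP.
    # P: (A, S, S) transition matrices; r: (A, S) one-period rewards.
    A, S, _ = P.shape
    h = np.zeros(S)
    for _ in range(max_iter):
        Q = r + P @ h                 # Q[a, s] = r[a, s] + E[h(s') | s, a]
        Th = Q.max(axis=0)            # Bellman update
        h_new = Th - Th[0]            # renormalize at a reference state
        if np.abs(h_new - h).max() < tol:
            break
        h = h_new
    return Th[0], h_new, Q.argmax(axis=0)   # gain, bias, greedy policy

# Example on a random ergodic MDP with 3 actions and 40 states.
rng = np.random.default_rng(2)
P = rng.dirichlet(np.ones(40), size=(3, 40))
r = rng.normal(size=(3, 40))
gain, bias, policy = relative_value_iteration(P, r)
print("long-run average reward:", round(gain, 4))
```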
Management Science, 65(9), 4079-4099
Abstract: We estimate the effect of supply chain proximity on product quality. Merging four automotive data sets, we create a supply chain sample that reports the failure rate of 27,807 auto components, the location of 529 upstream component factories, and the location of 275 downstream assembly plants. We find that defect rates are higher when upstream and downstream factories are farther apart. Specifically, we estimate that increasing the distance between an upstream component factory and a downstream assembly plant by an order of magnitude increases the component’s expected defect rate by 3.9%. We find that quality improves more slowly across geographically dispersed supply chains. We also find that supply chain distance is more detrimental to quality when automakers produce early-generation models or high-end products, when they buy components with more complex configurations, or when they source from suppliers who invest relatively little in research and development.
Quantitative Economics, 10(1), 43-65
Abstract: Morton and Wecker (1977) stated that the value iteration algorithm solves a dynamic program's policy function faster than its value function when the limiting Markov chain is ergodic. I show that their proof is incomplete, and provide a new proof of this classic result. I use this result to accelerate the estimation of Markov decision processes and the solution of Markov perfect equilibria.
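A toy experiment illustrates the phenomenon: on a random discounted MDP, the greedy policy produced by value iteration typically stops changing long before the value estimates settle. (The formal result concerns the ergodic limiting chain; this example is purely illustrative.)

```python
import numpy as np

rng = np.random.default_rng(3)
A, S, beta = 3, 50, 0.95
P = rng.dirichlet(np.ones(S), size=(A, S))   # random ergodic transition matrices
r = rng.normal(size=(A, S))

v = np.zeros(S)
policy = np.zeros(S, dtype=int)
last_policy_change = last_value_change = 0
for k in range(1, 3000):
    Q = r + beta * (P @ v)                   # discounted Bellman update
    v_new, greedy = Q.max(axis=0), Q.argmax(axis=0)
    if not np.array_equal(greedy, policy):
        last_policy_change = k
    if np.abs(v_new - v).max() > 1e-10:
        last_value_change = k
    v, policy = v_new, greedy

print("greedy policy last changed at iteration:", last_policy_change)
print("values last moved (> 1e-10) at iteration:", last_value_change)
```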
Manufacturing & Service Operations Management, 25(3), 812-826
Abstract: Problem definition: Do the benefits of operational transparency depend on when the work is done? Academic/practical relevance: This work connects the operations management literature on operational transparency with the psychology literature on the peak-end effect. Methodology: This study examines how customers respond to operational transparency with parcel delivery data from the Cainiao Network, the logistics arm of Alibaba. The sample comprises 4.68 million deliveries. Each delivery has between 4 and 10 track-package activities, which customers can check in real time, and a delivery service score, which customers leave after receiving the package. Instrumental-variable regressions quantify the causal effect of track-package-activity times on delivery scores. Results: The regressions suggest that customers punish early idleness less than late idleness, leaving higher delivery service scores when track-package activities cluster toward the end of the shipping horizon. For example, if a shipment takes 100 hours, then delaying the time of the average action from hour 20 to hour 80 increases the expected delivery score by approximately the same amount as expediting the arrival time from hour 100 to hour 73. Managerial implications: Memory limitations make customers especially sensitive to how service operations end.
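A self-contained sketch of the two-stage least squares logic behind such instrumental-variable regressions, on simulated data; the instrument, confounder, and magnitudes below are invented and do not correspond to the paper's actual specification.

```python
import numpy as np

def two_stage_least_squares(y, X, Z):
    # Stage 1: project the endogenous regressors X on the instruments Z.
    # Stage 2: regress the outcome y on the fitted values.
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]
    return np.linalg.lstsq(X_hat, y, rcond=None)[0]

rng = np.random.default_rng(4)
n = 10_000
z = rng.normal(size=n)                       # instrument shifting activity timing
u = rng.normal(size=n)                       # confounder (e.g., shipment difficulty)
x = 0.8 * z + u + rng.normal(size=n)         # mean track-activity time (standardized)
y = 0.5 * x - 2.0 * u + rng.normal(size=n)   # delivery service score

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])
print("OLS :", np.linalg.lstsq(X, y, rcond=None)[0][1])  # biased by the confounder
print("2SLS:", two_stage_least_squares(y, X, Z)[1])      # recovers roughly 0.5
```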
Operations Research, 70(2), 748-765
Abstract: We study the supply chain implications of dynamic pricing. Specifically, we estimate how reducing menu costs—the operational burden of adjusting prices—would affect supply chain volatility. Fitting a structural econometric model to data from a large Chinese supermarket chain, we estimate that removing menu costs would (i) reduce the mean shipment coefficient of variation by 7.2 percentage points (pp), (ii) reduce the mean sales coefficient of variation by 4.3 pp, and (iii) reduce the mean bullwhip effect by 2.9 pp. These stabilizing changes are almost entirely attributable to an increase in the mean sales rate.
Econometrica, 90(4), 1915-1929
Abstract: Rust (1997b) discovered a class of dynamic programs that can be solved in polynomial time with a randomized algorithm. For these dynamic programs, the optimal values of a polynomially large sample of states are sufficient statistics for the (near) optimal values everywhere, and the values of this random sample can be bootstrapped from the sample itself. However, I show that this class is limited, as it requires all but a vanishingly small fraction of state variables to behave arbitrarily similarly to i.i.d. uniform random variables.
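A hedged sketch of the underlying idea, the self-approximating random Bellman operator: value-iterate on a uniform random sample of states, weighting continuation values with sample-normalized transition densities. The dynamics and payoffs below are invented for illustration and are not Rust's original specification.

```python
import numpy as np

rng = np.random.default_rng(5)
N, beta = 500, 0.9
S = rng.uniform(size=N)                  # random grid: states sampled uniformly
actions = np.array([0.0, 0.5, 1.0])

def reward(s, a):
    return -(s - a) ** 2                 # illustrative payoff

def density(s_next, s, a):
    # Illustrative transition kernel on [0, 1]: mass concentrates around
    # a blend of the current state and the action (unnormalized).
    m = 0.5 * s + 0.5 * a
    return np.exp(-0.5 * ((s_next - m) / 0.2) ** 2)

W = []
for a in actions:
    Wa = density(S[None, :], S[:, None], a)       # Wa[i, j] ∝ p(S_j | S_i, a)
    W.append(Wa / Wa.sum(axis=1, keepdims=True))  # self-normalized sample weights

V = np.zeros(N)
for _ in range(1000):
    # Random Bellman operator: expectations replaced by weighted sample averages.
    Q = np.stack([reward(S, a) + beta * (Wa @ V) for a, Wa in zip(actions, W)])
    V_new = Q.max(axis=0)
    if np.abs(V_new - V).max() < 1e-10:
        break
    V = V_new

print(f"fixed point on {N} sampled states; V in [{V.min():.3f}, {V.max():.3f}]")
```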
Forthcoming at Operations Research
Abstract: I use empirical processes to study how the shadow prices of a linear program that allocates an endowment of \( n\beta \in \mathbb{R}^m \) resources to \( n \) customers behave as \( n \to \infty \). I show the shadow prices (i) adhere to a concentration of measure, (ii) converge to a multivariate normal under central-limit-theorem scaling, and (iii) have a variance that decreases like \( \Theta(1/n) \). I use these results to prove that the expected regret in an online linear program is \( \Theta(\log n) \), both when the customer variable distribution is known upfront and when it must be learned on the fly. This result tightens the sharpest known upper bound from \( O(\log n \log \log n) \) to \( O(\log n) \), and it extends the \( \Omega(\log n) \) lower bound known for single-dimensional problems to the multidimensional setting. I illustrate my new techniques with a simple analysis of a multisecretary problem.
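A quick simulation, under invented primitives, illustrates the concentration result: the standard deviation of a resource's shadow price shrinks roughly like \( 1/\sqrt{n} \), i.e., variance like \( \Theta(1/n) \).

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(6)
m = 3
beta = np.full(m, 0.25)  # per-customer resource endowment (illustrative)

def shadow_prices(n):
    # Dual prices of the resource constraints in
    #   max sum_i c_i x_i  s.t.  A x <= n*beta,  0 <= x_i <= 1.
    c = rng.uniform(size=n)          # customer rewards
    A = rng.uniform(size=(m, n))     # customer resource demands
    res = linprog(-c, A_ub=A, b_ub=n * beta, bounds=(0, 1), method="highs")
    return -res.ineqlin.marginals    # shadow prices of the m resources

for n in (100, 400, 1600):
    draws = np.array([shadow_prices(n) for _ in range(200)])
    print(f"n={n:5d}  sd of first shadow price: {draws[:, 0].std():.4f}")
# The sd roughly halves each time n quadruples, consistent with Theta(1/n) variance.
```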
Forthcoming at the INFORMS Journal of Applied Analytics (formerly known as Interfaces)
Abstract: This tutorial addresses the challenge of incorporating large language models, such as ChatGPT, in a data analytics class. It details several new in-class and out-of-class teaching techniques enabled by artificial intelligence (AI). Here are three examples. Instructors can parallelize instruction by having students interact with different custom-made GPTs to learn different parts of an analysis and then teach each other what they learned from their GPTs. Instructors can turn problem sets into AI tutoring sessions: a custom-made GPT guides a student through the problems, and the student uploads the chat log as their homework submission. Instructors can assign different labs to each section of a class and have each section create AI assistants to help the other sections work through their labs. This tutorial advocates the natural language programming (NLP) paradigm, in which students articulate desired data transformations in a spoken language, such as English, and then use AI to generate the corresponding computer code. Students can wrangle data more effectively with NLP than with Excel.
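A minimal illustration of that workflow: the student's English request is the "program," and an AI assistant returns pandas code like the following (the request and data here are invented).

```python
import pandas as pd

# A student might ask, in plain English: "For each category, total the revenue
# by month, then keep only the two categories with the highest overall revenue."
# The generated code could look like this:
sales = pd.DataFrame({
    "category": ["toys", "toys", "books", "books", "games", "games"],
    "month":    ["Jan", "Feb", "Jan", "Feb", "Jan", "Feb"],
    "revenue":  [120.0, 95.0, 60.0, 80.0, 150.0, 170.0],
})

monthly = sales.groupby(["category", "month"], as_index=False)["revenue"].sum()
top2 = monthly.groupby("category")["revenue"].sum().nlargest(2).index
print(monthly[monthly["category"].isin(top2)])
```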
Revise and resubmit at Econometrica