Operations Management at Leeds

The Operations Management division in the Leeds School of Business comprises faculty and students dedicated to understanding and shaping the future of global operations.

The division’s faculty consistently excel in high-impact research and bring that expertise into the classroom through cutting-edge pedagogy and hands-on learning experiences. Their research spans critical domains, from sustainability and analytics to risk management and supply chain innovation, positioning the division as a leader in both disciplinary and interdisciplinary advancement. Students in these programs benefit from innovative teaching methods and experiential opportunities that prepare them for an increasingly complex global marketplace. Committed to its teaching mission, the division continually pursues new ways to enhance student learning.

Research Themes

Revenue Management and Pricing
Nonprofit Operations
Sustainable Operations and Management
Supply Chain Analytics
Data-Driven Decision Making

No. 29

Operations Management Research Journal Rankings

The UTD Research Journal Rankings by Discipline, 2025

11

Tenure/Tenure-Track Faculty

3

PhD Placements in the Past 5 Years

Operations Management PhD placements

Operations Management Faculty Feature

Dan Zhang

MediaOne Professor
Strategy, Entrepreneurship and Operations

Joint Optimization of Pricing and Personalized Recommendations in Online Retailing

Retailers face the ongoing challenge of setting prices while tailoring product recommendations at scale. Dan Zhang, associate dean of research and academics and MediaOne Professor of Operations Management at Leeds, explores this issue in the study “Joint Optimization of Pricing and Personalized Recommendations in Online Retailing,” demonstrating how advanced algorithms and data analytics can revolutionize retail strategies to enhance efficiency, customer experience and profitability.

Operations Management Faculty Research

Near-Optimal Mixed (s,S) Policy for a Multiwarehouse, Multistore Inventory System with Lost Sales and Fixed Cost

Operations Research
Sentao Miao
Additional Authors: Stefanus Jasin, Xiuli Chao
Sep./Oct. 2025, Vol. 73 Issue 5, p2306-2318.

Abstract: Smarter Inventory Control Through Policy Mixing. Coordinating inventory across multiple warehouses and retail stores becomes significantly more complex when fixed ordering costs and lost sales are involved. The paper "Near-Optimal Mixed (s,S) Policy for a Multiwarehouse, Multistore Inventory System with Lost Sales and Fixed Cost" introduces a novel solution to this challenge. By leveraging Lagrangian relaxation, the authors develop a mixed (s,S) policy that assigns two carefully selected ordering strategies to each store: one for the early phase of the selling horizon and another for the later phase. This structured mix allows the policy to approximate the optimal solution, satisfying inventory constraints and managing costs effectively. The approach achieves a provable near-optimality guarantee and highlights the power of combining simple policies to tackle complex supply chain problems. We consider a firm managing a multiperiod, multiwarehouse, multistore (MWMS) inventory problem with fixed ordering cost at each store over a finite time horizon. The warehouses are endowed with initial inventories at the start of the horizon, and the stores are periodically replenished from the warehouses. The decisions are the order quantities from each store at each period. The optimal policy for this problem is complex and computationally intractable. We construct a mixed (s,S) policy based on the optimal solutions of a Lagrangian relaxation. Under this policy, each store makes use of at most two (s,S) policies; one is applied during the first phase of the selling horizon, and the second is applied in the remaining periods. We prove that this policy is near optimal as the length of the time horizon grows.
In contrast to existing work on the MWMS problem without fixed cost, for which near-optimal policies can be developed using an optimal Lagrangian solution, with fixed cost it is crucial to adopt a mixture of Lagrangian solutions; simply applying a pure optimal Lagrangian solution can be highly suboptimal.
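The two-phase structure the abstract describes is easy to sketch. Below is a minimal single-store simulation of a mixed (s,S) policy with lost sales and a fixed ordering cost; all parameter values are illustrative, and the sketch deliberately omits the warehouse-level inventory constraints and the Lagrangian machinery that make the paper's problem hard.

```python
def simulate_mixed_sS(demands, s1, S1, s2, S2, switch, K=10.0, h=1.0, p=4.0):
    """Simulate one store under a two-phase mixed (s,S) policy.

    Phase 1 (periods before `switch`) uses (s1, S1); phase 2 uses (s2, S2).
    K = fixed ordering cost, h = unit holding cost, p = lost-sales penalty.
    Returns total cost over the horizon. Illustrative sketch only.
    """
    inv, cost = 0, 0.0
    for t, d in enumerate(demands):
        s, S = (s1, S1) if t < switch else (s2, S2)
        if inv <= s:            # reorder point reached: pay K, order up to S
            cost += K
            inv = S
        sold = min(inv, d)
        cost += p * (d - sold)  # penalty on demand lost to a stockout
        inv -= sold
        cost += h * inv         # end-of-period holding cost
    return cost
```

In the paper, the two (s,S) pairs and the switch period are derived from an optimal Lagrangian relaxation; here they are simply inputs.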

Journal Article

Adaptive Lagrangian Policies for a Multiwarehouse, Multistore Inventory System with Lost Sales

Operations Research
Sentao Miao
Additional Authors: Xiuli Chao, Stefanus Jasin
May/Jun. 2025, Vol. 73 Issue 3, p1615-1636.

Abstract: We consider the inventory control problem of a multi-warehouse, multi-store system over a time horizon when the warehouses receive no external replenishment. This problem is prevalent in retail settings, and it is referred to in the work of [Jackson PL (1988) Stock allocation in a two-echelon distribution system or "what to do until your ship comes in." Management Sci. 34(7):880-895] as the problem of "what to do until your (external) shipment comes in." The warehouses are stocked with initial inventories, and the stores are dynamically replenished from the warehouses in each period of the planning horizon. Excess demand in each period at a store is lost. The optimal policy for this problem is complex and state dependent, and because of the curse of dimensionality, computing the optimal policy using standard dynamic programming is numerically intractable. Static Lagrangian base-stock (LaBS) policies have been developed for this problem [Miao S, Jasin S, Chao X (2022) Asymptotically optimal Lagrangian policies for one-warehouse multi-store system with lost sales. Oper. Res. 70(1):141-159] and shown to be asymptotically optimal. In this paper, we develop adaptive policies that dynamically adjust the control parameters of a vanilla static LaBS policy using realized historical demands. We show, both theoretically and numerically, that adaptive policies significantly improve the performance of the LaBS policy, with the magnitude of improvement characterized by the number of policy adjustments. In particular, when the number of adjustments is a logarithm of the length of time horizon, the policy is rate optimal in the sense that the rate of the loss (in terms of the dependency on the length of the time horizon) matches that of the theoretical lower bound. Among other insights, our results also highlight the benefit of incorporating the "pooling effect" in designing a dynamic adjustment scheme.

Journal Article

Efficient Project Scheduling with Autonomous Learning Opportunities

INFORMS Journal on Computing
Thomas Vossen
Additional Authors: Alessandro Hill
May/Jun. 2025, Vol. 37 Issue 3, p761-783.

Abstract: We consider novel project scheduling problems in which the experience gained from completing selected activities can be used to accelerate subsequent activities. Given a set of potential learning opportunities, our model aims to identify the opportunities that result in a maximum reduction of the project makespan when scheduled in sequence. Accounting for the impact of such learning opportunities causes significant complications, due to the cyclic nature of the learning relations and their interference with the precedence network. We propose additive and subtractive algorithms that iteratively reschedule the project using an enhanced topological sorting algorithm. Learning opportunities are integrated, activated, and potentially deactivated in each step by maintaining the acyclicity of the combined precedence and learning network. To illustrate the challenges that arise in this setting, we first consider the special case where activities can learn from at most one other activity. Subsequently, we extend our approach to the general case that admits multiple learning opportunities. We show that our approaches guarantee the construction of an optimal solution in polynomial time. In a computational study using 340 small and large resource-unconstrained PSPlib instances, we analyze the model behavior under various scenarios of learning intensity and learning opportunity. We demonstrate that significant project speedups can be obtained when proactively accounting for learning opportunities. History: Accepted by Pascal van Hentenryck, Area Editor for Computational Modeling: Methods & Analysis. Supplemental Material: The software that supports the findings of this study is available within the paper and its Supplemental Information.
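The core rescheduling step, computing a project makespan from a precedence network in topological order, can be sketched as follows. This is a toy stand-in: the paper's algorithms additionally maintain acyclicity while activating and deactivating learning arcs that shorten activity durations.

```python
from collections import deque

def makespan(durations, precedence):
    """Earliest-finish makespan of a project via Kahn's topological sort.

    durations: {activity: duration}; precedence: list of (before, after)
    pairs forming a DAG. Illustrative sketch only.
    """
    succ = {a: [] for a in durations}
    indeg = {a: 0 for a in durations}
    for u, v in precedence:
        succ[u].append(v)
        indeg[v] += 1
    start = {a: 0 for a in durations}
    ready = deque(a for a in durations if indeg[a] == 0)
    while ready:
        u = ready.popleft()
        fin = start[u] + durations[u]
        for v in succ[u]:
            start[v] = max(start[v], fin)  # v cannot start before u finishes
            indeg[v] -= 1
            if indeg[v] == 0:
                ready.append(v)
    return max(start[a] + durations[a] for a in durations)
```

Activating a learning opportunity can then be emulated by reducing the learning activity's duration and recomputing the makespan, which is essentially what the additive and subtractive algorithms do iteratively.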

Journal Article

Intraconsumer Price Discrimination with Credit Refund Policies

Management Science
Dan Zhang
Additional Authors: Yan Liu
Nov. 2023, Vol. 70 Issue 10, p6483-7343.

Abstract: Consumers often receive a full or partial refund for product returns or service cancellations. Much of the existing literature studies cash refunds, where consumers get the money back minus a fee upon a product return or service cancellation. However, not all refunds are issued in cash. Sometimes consumers receive credit that can be used for future purchases, oftentimes with an expiration term after which the credit is forfeited. We study the optimal design of credit refund policies. Different from models that consider cash refunds, we explicitly model repeated interactions between the seller and consumers over time. We assume that consumers’ valuation for the product/service varies over time and that there is an exogenous probability for product returns. Several interesting results emerge. First, a credit refund policy facilitates intraconsumer price discrimination for a single type of consumers with stochastic valuation. Second, an optimal policy often involves an intermediate credit expiration term, under which a consumer with a high product valuation always makes a purchase, whereas a consumer with a low product valuation may be induced to make a purchase as the credit approaches expiration, leading to a demand induction effect. Finally, a credit refund policy can be more profitable than a cash refund policy and can lead to a win-win outcome for both the firm and consumers under certain conditions. We also consider several extensions to check the robustness of our findings.

Journal Article

Approximate Linear Programming for a Queueing Control Problem

Computers & Operations Research
Dan Zhang, Rui Zhang
Additional Authors: Saied Samiedaluie
Sep. 2024, Vol. 169, 106711.

Abstract: Admission decisions for loss systems accessed by multiple customer classes are a classical queueing control problem with a wide variety of applications. When a server is available, the decision is whether to admit an arriving customer and collect a lump-sum revenue. The system can be modeled as a continuous-time infinite-horizon dynamic program, but suffers from the curse of dimensionality when different customer classes have different service rates. We use approximate linear programming to solve the problem under three approximation architectures: affine, separable piecewise linear and finite affine. The finite affine approximation is a recently proposed generalization of the affine approximation, which allows for non-stationary parameters. For both affine and finite affine approximations, we derive equivalent, but more compact, formulations that can be efficiently solved. We propose a column generation algorithm for the separable piecewise linear approximation. Our numerical results show that the finite affine approximation can obtain the tightest bounds for 75% of the instances among the three approximations. In particular, when the number of servers is large and/or the load on the system is high, the finite affine approximation always achieves the tightest bounds. Regarding policy performance, the finite affine approximation has the best performance on average compared to the other two approximations and the achievable performance region method (Bertsimas et al., 1994; Kumar and Kumar, 1994). Furthermore, the finite affine approximation is 4 to 5 orders of magnitude faster than the achievable performance region method and the separable piecewise linear approximation for large-scale instances. Therefore, considering bounds, policy performance, and computational efficiency, the finite affine approximation emerges as a competitive approximation architecture for the class of problems studied here.
• We use approximate LP to solve a queueing control problem under three approximation architectures.
• We derive equivalent but more compact formulations for affine and finite affine approximations.
• We propose a column generation algorithm for the separable piecewise linear approximation.
• We show that the finite affine approximation can obtain the tightest bounds for 75% of the instances.
• We show that the finite affine approximation has the best policy performance and efficiency.
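For intuition about the underlying admission-control MDP: when all classes share one service rate the state is just the number of busy servers, and the problem can be solved exactly by value iteration. A hedged sketch using uniformization with illustrative parameters follows; the paper's approximate-LP architectures target the class-dependent-rate case, where the state becomes a vector and this exact approach breaks down.

```python
def admission_values(C, horizon, lam, mu, revenue):
    """Finite-horizon value iteration for a toy loss-system admission MDP.

    C identical servers with service rate mu, Poisson arrival classes with
    rates lam[k] and lump-sum revenues revenue[k]. Uniformization rate is
    Lambda = sum(lam) + C*mu. Returns the value function over states
    0..C (number of busy servers). Illustrative sketch only.
    """
    Lam = sum(lam) + C * mu
    V = [0.0] * (C + 1)
    for _ in range(horizon):
        W = [0.0] * (C + 1)
        for n in range(C + 1):
            v = 0.0
            for k in range(len(lam)):
                p = lam[k] / Lam
                if n < C:
                    # admit iff revenue plus future value beats rejecting
                    v += p * max(revenue[k] + V[n + 1], V[n])
                else:
                    v += p * V[n]                 # all servers busy: reject
            v += (n * mu / Lam) * V[max(n - 1, 0)]  # a service completes
            v += ((C - n) * mu / Lam) * V[n]        # fictitious self-loop
            W[n] = v
        V = W
    return V
```

The optimal policy admits class k in state n whenever revenue[k] + V[n+1] ≥ V[n], i.e., when the lump-sum revenue covers the opportunity cost of occupying a server.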

Journal Article

UCB-Type Learning Algorithms with Kaplan–Meier Estimator for Lost-Sales Inventory Models with Lead Times

Operations Research
Huanan Zhang
Additional Authors: Chengyi Lyu, Linwei Xin
Jul./Aug. 2024, Vol. 72 Issue 4, p1317-1332.

Abstract: Efficient Learning Algorithms for the Best Capped Base-Stock Policy in Lost Sales Inventory Systems. Periodic review, lost sales inventory systems with lead times are notoriously challenging to optimize. Recently, the capped base-stock policy, which places orders to bring the inventory position up to the order-up-to level subject to the order cap, has demonstrated exceptional performance. In the paper "UCB-Type Learning Algorithms with Kaplan–Meier Estimator for Lost Sales Inventory Models with Lead Times," Lyu, Zhang, and Xin propose an upper confidence bound–type learning framework. This framework, which incorporates simulations with the Kaplan–Meier estimator, works with censored demand observations. It can be applied to determine the optimal capped base-stock policy with a tight regret with respect to the planning horizon and the optimal base-stock policy with a regret that matches the best existing result. Both theoretical analysis and extensive numerical experiments demonstrate the effectiveness of the proposed learning framework. In this paper, we consider a classic periodic-review lost-sales inventory system with lead times, which is notoriously challenging to optimize with a wide range of real-world applications. We consider a joint learning and optimization problem in which the decision maker does not know the demand distribution a priori and can only use past sales information (i.e., censored demand). Departing from existing learning algorithms on this learning problem that require the convexity property of the underlying system, we develop an upper confidence bound (UCB)-type learning framework that incorporates simulations with the Kaplan–Meier estimator and demonstrate its applicability to learning not only the optimal capped base-stock policy in which convexity no longer holds, but also the optimal base-stock policy with a regret that matches the best existing result.
Compared with a classic multi-armed bandit problem, our problem has unique challenges because of the nature of the inventory system, because (1) each action has long-term impacts on future costs, and (2) the system state space is exponentially large in the lead time. As such, our learning algorithms are not naive adoptions of the classic UCB algorithm; in fact, the design of the simulation steps with the Kaplan–Meier estimator and averaging steps is novel in our algorithms, and the confidence width in the UCB index is also different from the classic one. We prove the regrets of our learning algorithms are tight up to a logarithmic term in the planning horizon T. Our extensive numerical experiments suggest the proposed algorithms (almost) dominate existing learning algorithms. We also demonstrate how to select which learning algorithm to use with limited demand data.
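The Kaplan–Meier piece is concrete enough to sketch: it recovers the demand distribution from sales observations that are censored whenever demand hits the stock level. A toy pure-Python version follows (illustrative only; in the paper this estimator is embedded inside simulation steps that feed the UCB indices).

```python
def kaplan_meier(observations):
    """Kaplan-Meier survival estimate from censored demand data.

    observations: list of (value, censored) pairs, where censored=True
    means true demand was at least `value` (a stockout truncated the
    observation). Returns {t: P(demand > t)} at each uncensored value.
    Illustrative sketch only.
    """
    obs = sorted(observations)
    n = len(obs)
    surv, S = {}, 1.0
    at_risk = n
    i = 0
    while i < n:
        v = obs[i][0]
        deaths = sum(1 for x, c in obs if x == v and not c)
        removed = sum(1 for x, c in obs if x == v)
        if deaths:
            S *= 1 - deaths / at_risk  # survival drops at uncensored values
            surv[v] = S
        at_risk -= removed             # censored points leave the risk set
        while i < n and obs[i][0] == v:
            i += 1
    return surv
```

Censored observations contribute no drop of their own but shrink the risk set, which is exactly what lets the estimator use stockout-truncated sales rather than discarding them.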

Journal Article

An Approximate Dynamic Programming Approach to Dynamic Stochastic Matching

INFORMS Journal on Computing
Thomas Vossen
Additional Authors: Fan You
Jul./Aug. 2024, Vol. 36 Issue 4, p1006-1022.

Abstract: Dynamic stochastic matching problems arise in a variety of recent applications, ranging from ridesharing and online video games to kidney exchange. Such problems are naturally formulated as Markov decision processes (MDPs) that are, however, intractable in general. To improve tractability, we investigate the linear programming-based approach to approximate dynamic programming. This approach can provide both feasible control policies and bounds on the MDPs' optimal policy value, which can be used to establish optimality gaps. However, the approximate linear programs (ALPs) resulting from this approach can often be difficult to solve. To address this computational challenge, we derive novel ALP reformulations that can be used for a broad class of dynamic stochastic matching problems that incorporate, among others, possible match failures and certain restrictions on feasible matchings. We show that these ALP reformulations can be solved efficiently and applied to a broad class of dynamic matching problems. In addition, our numerical results indicate that our ALP reformulations can produce tight bounds that allow us to establish near-optimal policy performance for a broad set of problem instances. Thus, ALP reformulations can present an attractive alternative for applications that involve dynamic stochastic matching.

Journal Article

Causal Inference Under Selection on Observables in Operations Management Research: Matching Methods and Synthetic Controls

Journal of Operations Management
Övünç Yilmaz
Additional Authors: Yoonseock Son, Guangzhi Shang, Hayri Arslan
Jul. 2024, Vol. 70 Issue 5, p831-859.

Abstract: The majority of recent empirical papers in operations management (OM) employ observational data to investigate the causal effects of a treatment, such as program or policy adoption. However, as observational data lacks the benefit of random treatment assignment, estimating causal effects poses challenges. In the specific scenario where one can reasonably assume that all confounding factors are observed—referred to as selection on observables—matching methods and synthetic controls can assist researchers to replicate a randomized experiment, the most desirable setting for drawing causal inferences. In this paper, we first present an overview of matching methods and their utilization in the OM literature. Subsequently, we establish the framework and provide pragmatic guidance for propensity score matching and coarsened exact matching, which have garnered considerable attention in recent OM studies. Following this, we conduct a comprehensive simulation study that compares diverse matching algorithms across various scenarios, providing practical insights derived from our findings. Finally, we discuss synthetic controls, a method that offers unique advantages over matching techniques in specific scenarios and is expected to become even more popular in the OM field in the near future. We hope that this paper will serve as a catalyst for promoting a more rigorous application of matching and synthetic control methodologies.
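To give a flavor of the matching idea the paper surveys, here is a minimal nearest-neighbor matching estimator of the average treatment effect on the treated (ATT). The data shapes are hypothetical, and a real application would match on estimated propensity scores or coarsened covariates and check balance diagnostics, as the paper's guidance describes.

```python
def nn_match_att(treated, control):
    """1-nearest-neighbor covariate matching estimate of the ATT.

    treated/control: lists of (x, y) pairs with a scalar covariate x and
    outcome y. Each treated unit is matched (with replacement) to the
    control unit with the closest x; the ATT is the mean outcome gap
    across matched pairs. Illustrative sketch only.
    """
    gaps = []
    for x_t, y_t in treated:
        # closest control unit in covariate distance
        _, y_c = min(control, key=lambda c: abs(c[0] - x_t))
        gaps.append(y_t - y_c)
    return sum(gaps) / len(gaps)
```

Propensity score matching replaces the raw covariate x with an estimated treatment probability, which collapses many covariates into one matching dimension.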

Journal Article

To Earmark or to Nonearmark? The Role of Control, Transparency, and Warm-Glow

M&SOM: Manufacturing & Service Operations Management
Gloria Urrea, Sebastian Villa
Additional Authors: Özalp Özer
Mar./Apr. 2024, Vol. 26 Issue 2, p739-757.

Abstract: Problem definition: Charities face tension when deciding whether to earmark donations, that is, allow donors to restrict the use of their donations for a specific purpose. Research shows that earmarking decreases operational performance because it limits charities' flexibility to use donations. However, there is also a common belief that earmarking increases donations. Earmarking is assumed to increase donations through three mechanisms: by (i) giving donors control over their donations, (ii) increasing operational transparency of donations, and (iii) changing donors' levels of altruism and warm-glow. To resolve this tension, we study how, when, and why earmarking affects donors' decisions. We consider three important decisions donors make that impact the fundraising outcome: (i) preference between earmarking and nonearmarking, (ii) decision to donate or not (i.e., donor activation), and (iii) the donation amount. Methodology/results: We design three online experiments that allow us to quantify the effect of earmarking on donors' decisions and to investigate the role of the three aforementioned mechanisms in fundraising. Our results reveal, for example, that earmarking activates more donors but it does not always increase donation amounts. In addition, we determine the conditions under which the three mechanisms affect the outcome of fundraising campaigns. Managerial implications: Our findings provide actionable insights for how charities can design fundraising campaigns more effectively and suggest when to leverage earmarking and the three mechanisms depending on the charities' fundraising goals.

Journal Article

The Driver-Aide Problem: Coordinated Logistics for Last-Mile Delivery

M&SOM: Manufacturing & Service Operations Management
Rui Zhang
Additional Authors: S. Raghavan
Jan./Feb. 2024, Vol. 26 Issue 1, p291-311.

Abstract: Problem definition: Last-mile delivery is a critical component of logistics networks, accounting for approximately 30%–35% of costs. As delivery volumes have increased, truck route times have become unsustainably long. To address this issue, many logistics companies, including FedEx and UPS, have resorted to using a "driver aide" to assist with deliveries. The aide can assist the driver in two ways. As a "jumper," the aide works with the driver in preparing and delivering packages, thus reducing the service time at a given stop. As a "helper," the aide can independently work at a location delivering packages, and the driver can leave to deliver packages at other locations and then return. Given a set of delivery locations, travel times, service times, jumper's savings, and helper's service times, the goal is to determine both the delivery route and the most effective way to use the aide (e.g., sometimes as a jumper and sometimes as a helper) to minimize the total routing time. Methodology/results: We model this problem as an integer program with an exponential number of variables and an exponential number of constraints and propose a branch-cut-and-price approach for solving it. Our computational experiments are based on simulated instances built on real-world data provided by an industrial partner and a data set released by Amazon. The instances based on the Amazon data set show that this novel operation can lead to, on average, a 35.8% reduction in routing time and 22.0% in cost savings. More importantly, our results characterize the conditions under which this novel operation mode can lead to significant savings in terms of both the routing time and cost. Managerial implications: Our computational results show that the driver aide with both jumper and helper modes is most effective when there are denser service regions and when the truck's speed is higher (≥10 miles per hour).
Coupled with an economic analysis, we come up with rules of thumb (that have close to 100% accuracy) to predict whether to use the aide and in which mode. Empirically, we find that the service delivery routes with greater than 50% of the time devoted to delivery (as opposed to driving) are the ones that provide the greatest benefit. These routes are characterized by a high density of delivery locations.

Journal Article

Access all previous research publications in operations management

Are Employees Committed to Diversity at Work and in Their Personal Lives? The Role of Organizational Antiracist Signaling Following a Racial Injustice Event
Human Resource Management
Sabrina Volpone
Additional authors: Wendy J. Casper, Julie Holliday Wayne, Maria L. White
Sep./Oct. 2025, Vol. 64 Issue 5, p1401-1420

Abstract: Research on corporate sociopolitical activism (CSA) is in its infancy, and more research is needed to examine its effects on employees. We draw from the tenets of Signaling Theory to develop and test a model of how organizations' antiracist signaling after a racial injustice event, as a form of CSA, communicates that racial justice is valued sincerely by organizations, and in turn, motivates employee commitment to diversity—both at work and in their personal lives. We also explore boundary conditions (i.e., climate for inclusion, employee race) of this relationship. We test our model with data collected from 367 employees (37.6% Black, 62.4% White) across 4‐time waves, each 1 month apart, using a mixed‐methods (quantitative and qualitative) approach. Results suggest that organizations are viewed as most sincere when they engage in signaling that includes both words (i.e., releasing a statement) and actions (e.g., hiring a diversity officer) relative to when they don't engage in these words and/or actions. Moreover, when organizations signaled a sincere commitment to antiracism with both words and actions, employees were more committed to diversity at work and in their personal lives, though actions taken by the organization were especially important. Moreover, a strong climate for inclusion reduced the need for actions, while a weak climate for inclusion increased the need for a statement. Theoretical, research, and practical implications are discussed.

A novel parallel framework for scatter search
Knowledge-Based Systems
Manuel Laguna
Additional authors: Alejandra Casado, Sergio Perez-Pelo, Jesus Sanchez-Oro, Abraham Duarte
Apr. 2025, Vol. 314

Abstract: Scatter search (SS) is a well-established metaheuristic for hard combinatorial optimization problems. SS is characterized by its versatility and ease of context adaptation and implementation. Although the literature includes SS parallelization schemes for specific problems, a general parallel framework for scatter search has not been developed and tested. We introduce three SS parallel designs, each focusing on a different task, namely, reducing computational time, increasing search exploration, and balancing search intensification and diversification. The proposed designs are tested on problems where the state of the art is a traditional (sequential) SS approach. This testing platform helps us assess the contributions of the parallel computing strategies to solution speed and quality. Our publicly available code is designed to be adapted to optimization problems that are not considered here. The results show promising avenues for establishing a general framework of SS parallelization.
• Several designs for parallelizing scatter search are proposed.
• The new parallel designs are tested on three combinatorial optimization problems.
• Computational results assess the performance of the parallel designs.
• The advantages and disadvantages of each parallel design are discussed.
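The first of the three designs (cutting wall-clock time by evaluating reference-set combinations concurrently) can be sketched on a toy problem. Everything here is a stand-in: the objective, combination method, and improvement method are invented for illustration and are not the paper's framework.

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import combinations

def f(x):                      # toy objective: minimize (x - 3)^2
    return (x - 3) ** 2

def improve(x):                # improvement method: integer hill-climbing
    while True:
        best = min((x - 1, x, x + 1), key=f)
        if best == x:
            return x
        x = best

def combine(a, b):             # combination method: rounded midpoint
    return (a + b) // 2

def scatter_search_step(refset, workers=4):
    """One scatter-search iteration: combine all RefSet pairs and improve the
    trial solutions concurrently (the 'reduce computational time' design)."""
    pairs = list(combinations(refset, 2))
    with ThreadPoolExecutor(max_workers=workers) as pool:
        trials = list(pool.map(lambda p: improve(combine(*p)), pairs))
    # rebuild the reference set from the best solutions found so far
    return sorted(set(refset) | set(trials), key=f)[:len(refset)]

refset = [-8, 0, 10, 17]
print(scatter_search_step(refset))  # [3, 0, 10, -8]
```

Because each combine-and-improve task is independent, the design parallelizes without changing the search trajectory; the exploration- and intensification-oriented designs in the paper instead change what is searched.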

How Does Flexibility Affect Environmental Performance? Evidence from the Power Generation Industry
Manufacturing & Service Operations Management (M&SOM)
David Drake
Additional author: Suresh Muthulingam
Mar./Apr. 2025, Vol. 27 Issue 2, p569-587

Abstract: Problem definition: It is well known that the power sector requires flexibility, especially from fossil fuel–based power-generating units, to balance demand side variability and supply side fluctuations. However, the environmental consequences related to possessing and exercising such flexibility remain relatively unexplored. In this study, we examine the environmental impact of two forms of flexibility pertinent to power generation: fuel flexibility—that is, the ability to utilize multiple fuel types, and volume flexibility—that is, the ability to alter production volumes quickly. Methodology/results: We assembled a data set that spans 1998–2016 and includes details on fuel usage, power generation, generating unit characteristics, and carbon dioxide (i.e., CO2) emissions for 3,135 fossil fuel power-generating units that account for more than 92% of the United States' fossil fuel–generating capacity. Our empirical analysis reveals that power-generating units that possess fuel flexibility or volume flexibility generate greater CO2 emissions than comparable nonflexible generating units. Additionally, when power-generating units exercise fuel flexibility (i.e., they use multiple fuel types in a period) or volume flexibility (i.e., they vary production to a greater degree in a period), they generate greater CO2 emissions. By contrast, when power-generating units exercise both fuel and volume flexibility, we find that they diminish the aggregate emissions increases expected from exercising fuel or volume flexibility alone. Managerial implications: We add to the literature by exploring how flexibility affects environmental performance and by disambiguating the effects of possessing flexibility and exercising flexibility. 
These results are important when considering the penetration of renewables, the adoption of utility-grade storage, and demand response as each of these paths can significantly affect the flexibility burden placed on conventional sources of power. Thus, our results are relevant to policy makers and practitioners because they crystallize the environmental tradeoffs involved with deploying flexibility in fossil fuel–based power-generating units.

Approximate Linear Programming for a Queueing Control Problem
Computers & Operations Research
Dan Zhang, Rui Zhang
Additional authors: Saied Samiedaluie
Sep. 2024, Vol. 169, 106711.

Abstract: Admission decisions for loss systems accessed by multiple customer classes are a classical queueing control problem with a wide variety of applications. When a server is available, the decision is whether to admit an arriving customer and collect a lump-sum revenue. The system can be modeled as a continuous-time infinite-horizon dynamic program, but suffers from the curse of dimensionality when different customer classes have different service rates. We use approximate linear programming to solve the problem under three approximation architectures: affine, separable piecewise linear, and finite affine. The finite affine approximation is a recently proposed generalization of the affine approximation, which allows for non-stationary parameters. For both affine and finite affine approximations, we derive equivalent, but more compact, formulations that can be efficiently solved. We propose a column generation algorithm for the separable piecewise linear approximation. Our numerical results show that the finite affine approximation obtains the tightest bounds for 75% of the instances among the three approximations. In particular, when the number of servers is large and/or the load on the system is high, the finite affine approximation always achieves the tightest bounds. Regarding policy performance, the finite affine approximation has the best performance on average compared to the other two approximations and the achievable performance region method (Bertsimas et al., 1994; Kumar and Kumar, 1994). Furthermore, the finite affine approximation is 4 to 5 orders of magnitude faster than the achievable performance region method and the separable piecewise linear approximation for large-scale instances. Therefore, considering bounds, policy performance, and computational efficiency, the finite affine approximation emerges as a competitive approximation architecture for the class of problems studied here.
• We use approximate LP to solve a queueing control problem under three approximation architectures.
• We derive equivalent but more compact formulations for affine and finite affine approximations.
• We propose a column generation algorithm for the separable piecewise linear approximation.
• We show that the finite affine approximation can obtain the tightest bounds for 75% of the instances.
• We show that the finite affine approximation has the best policy performance and efficiency.
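As rough intuition for how an affine value approximation V(x) ≈ θ0 + Σi θi·xi turns into an admission policy (a sketch with made-up numbers, not the paper's formulations): each coefficient θi acts like the marginal opportunity cost of occupying a server with a class-i customer, so an arrival is admitted only if its lump-sum revenue covers that cost.

```python
def admit(state, cls, rewards, theta, capacity):
    """Index rule implied by an affine value approximation
    V(x) ~ theta0 + sum_i theta[i] * x_i: admit a class-`cls` arrival iff a
    server is free and its revenue covers the marginal value theta[cls]."""
    if sum(state) >= capacity:
        return False  # all servers busy: arrival is lost
    return rewards[cls] >= theta[cls]

# hypothetical 2-class, 3-server instance (coefficients would come from the ALP)
rewards = [10.0, 4.0]
theta = [2.5, 6.0]
state = [1, 1]  # customers of each class currently in service
print(admit(state, 0, rewards, theta, capacity=3))  # True: 10.0 >= 2.5
print(admit(state, 1, rewards, theta, capacity=3))  # False: 4.0 < 6.0
```

The finite affine architecture generalizes this by letting the coefficients vary over a finite horizon, which is what yields the tighter bounds reported above.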

An Approximate Dynamic Programming Approach to Dynamic Stochastic Matching
INFORMS Journal on Computing
Thomas Vossen
Additional authors: Fan You
Jul./Aug. 2024, Vol. 36 Issue 4, p1006-1022.

Abstract: Dynamic stochastic matching problems arise in a variety of recent applications, ranging from ridesharing and online video games to kidney exchange. Such problems are naturally formulated as Markov decision processes (MDPs) that are, however, intractable in general. To improve tractability, we investigate the linear programming-based approach to approximate dynamic programming. This approach can provide both feasible control policies and bounds on the MDPs' optimal policy value, which can be used to establish optimality gaps. However, the approximate linear programs (ALPs) resulting from this approach can often be difficult to solve. To address this computational challenge, we derive novel ALP reformulations that can be used for a broad class of dynamic stochastic matching problems that incorporate, among others, possible match failures and certain restrictions on feasible matchings. We show that these ALP reformulations can be solved efficiently and applied to a broad class of dynamic matching problems. In addition, our numerical results indicate that our ALP reformulations can produce tight bounds that allow us to establish near-optimal policy performance for a broad set of problem instances. Thus, ALP reformulations can present an attractive alternative for applications that involve dynamic stochastic matching.

UCB-Type Learning Algorithms with Kaplan–Meier Estimator for Lost-Sales Inventory Models with Lead Times
Operations Research
Huanan Zhang
Additional authors: Chengyi Lyu, Linwei Xin
Jul./Aug. 2024, Vol. 72 Issue 4, p1317-1332.

Abstract: Periodic-review, lost-sales inventory systems with lead times are notoriously challenging to optimize. Recently, the capped base-stock policy, which places orders to bring the inventory position up to the order-up-to level subject to the order cap, has demonstrated exceptional performance. In the paper "UCB-Type Learning Algorithms with Kaplan–Meier Estimator for Lost-Sales Inventory Models with Lead Times," Lyu, Zhang, and Xin propose an upper confidence bound–type learning framework. This framework, which incorporates simulations with the Kaplan–Meier estimator, works with censored demand observations. It can be applied to determine the optimal capped base-stock policy with a tight regret with respect to the planning horizon and the optimal base-stock policy with a regret that matches the best existing result. Both theoretical analysis and extensive numerical experiments demonstrate the effectiveness of the proposed learning framework. In this paper, we consider a classic periodic-review lost-sales inventory system with lead times, which is notoriously challenging to optimize and has a wide range of real-world applications. We consider a joint learning and optimization problem in which the decision maker does not know the demand distribution a priori and can only use past sales information (i.e., censored demand). Departing from existing learning algorithms for this problem that require convexity of the underlying system, we develop an upper confidence bound (UCB)-type learning framework that incorporates simulations with the Kaplan–Meier estimator and demonstrate its applicability to learning not only the optimal capped base-stock policy, for which convexity no longer holds, but also the optimal base-stock policy with a regret that matches the best existing result.
Compared with a classic multi-armed bandit problem, our problem has unique challenges because of the nature of the inventory system, because (1) each action has long-term impacts on future costs, and (2) the system state space is exponentially large in the lead time. As such, our learning algorithms are not naive adoptions of the classic UCB algorithm; in fact, the design of the simulation steps with the Kaplan–Meier estimator and averaging steps is novel in our algorithms, and the confidence width in the UCB index is also different from the classic one. We prove the regrets of our learning algorithms are tight up to a logarithmic term in the planning horizon T. Our extensive numerical experiments suggest the proposed algorithms (almost) dominate existing learning algorithms. We also demonstrate how to select which learning algorithm to use with limited demand data.
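The Kaplan–Meier ingredient is standard and easy to sketch: with censored demand (sales are capped at the stock level, so a stock-out only reveals that demand was at least that large), the estimator recovers the demand tail distribution P(D > d). The snippet below is a generic textbook Kaplan–Meier estimator on invented data, not the paper's coupled simulation/UCB procedure.

```python
from collections import Counter

def kaplan_meier(observations):
    """Kaplan-Meier estimate of P(D > d) from censored demand data.
    Each observation is (value, exact); exact=False means demand was
    censored at the stock level, i.e. true demand >= value."""
    events = Counter(v for v, exact in observations if exact)
    n_at_risk = len(observations)
    survival, s = {}, 1.0
    for t in sorted({v for v, _ in observations}):
        d = events.get(t, 0)          # exactly observed demands equal to t
        if d and n_at_risk:
            s *= 1 - d / n_at_risk    # KM product-limit update
        survival[t] = s
        # all observations at level t leave the risk set after t
        n_at_risk -= sum(1 for v, _ in observations if v == t)
    return survival

# five periods with stock level 4: demand observed exactly unless it hit 4
obs = [(2, True), (4, False), (3, True), (4, False), (1, True)]
S = kaplan_meier(obs)
print({d: round(s, 2) for d, s in S.items()})  # {1: 0.8, 2: 0.6, 3: 0.4, 4: 0.4}
```

The flat tail at d = 4 reflects exactly what censoring hides: the two stock-out periods say nothing about how far above 4 demand went, which is why the paper pairs the estimator with simulation inside the UCB loop.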

Causal Inference Under Selection on Observables in Operations Management Research: Matching Methods and Synthetic Controls
Journal of Operations Management
Övünç Yilmaz
Additional authors: Yoonseock Son, Guangzhi Shang, Hayri Arslan
Jul. 2024, Vol. 70 Issue 5, p831-859.

Abstract: The majority of recent empirical papers in operations management (OM) employ observational data to investigate the causal effects of a treatment, such as program or policy adoption. However, as observational data lacks the benefit of random treatment assignment, estimating causal effects poses challenges. In the specific scenario where one can reasonably assume that all confounding factors are observed—referred to as selection on observables—matching methods and synthetic controls can assist researchers to replicate a randomized experiment, the most desirable setting for drawing causal inferences. In this paper, we first present an overview of matching methods and their utilization in the OM literature. Subsequently, we establish the framework and provide pragmatic guidance for propensity score matching and coarsened exact matching, which have garnered considerable attention in recent OM studies. Following this, we conduct a comprehensive simulation study that compares diverse matching algorithms across various scenarios, providing practical insights derived from our findings. Finally, we discuss synthetic controls, a method that offers unique advantages over matching techniques in specific scenarios and is expected to become even more popular in the OM field in the near future. We hope that this paper will serve as a catalyst for promoting a more rigorous application of matching and synthetic control methodologies.
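Of the two matching methods the paper emphasizes, coarsened exact matching is simple enough to sketch directly (propensity score matching additionally requires a fitted treatment model). A minimal pure-Python illustration with invented data: coarsen each covariate into bins, then prune any stratum that lacks either treated or control units.

```python
from collections import defaultdict

def coarsened_exact_matching(units, bins):
    """CEM sketch: `units` is a list of (treated: bool, covariates: tuple);
    `bins` maps each covariate index to its bin width. Returns the strata
    (coarsened covariate signatures) that contain both treated and control
    unit indices; all other units are pruned from the analysis."""
    strata = defaultdict(lambda: ([], []))
    for i, (treated, x) in enumerate(units):
        key = tuple(int(v // bins[j]) for j, v in enumerate(x))
        strata[key][0 if treated else 1].append(i)
    return {k: v for k, v in strata.items() if v[0] and v[1]}

units = [
    (True,  (25, 40_000)),  # treated
    (False, (27, 42_000)),  # control in the same (age, income) stratum
    (True,  (60, 90_000)),  # treated with no comparable control: pruned
    (False, (31, 45_000)),  # control with no comparable treated unit: pruned
]
matched = coarsened_exact_matching(units, bins={0: 10, 1: 10_000})
print(matched)  # {(2, 4): ([0], [1])}
```

The bin widths play the role of the researcher's substantive judgment about which covariate differences are ignorable, which is the trade-off the paper's simulation study probes across matching algorithms.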

To Earmark or to Nonearmark? The Role of Control, Transparency, and Warm-Glow
Manufacturing & Service Operations Management (M&SOM)
Gloria Urrea, Sebastian Villa
Additional authors: Özalp Özer
Mar./Apr. 2024, Vol. 26 Issue 2, p739-757.

Abstract: Problem definition: Charities face tension when deciding whether to earmark donations, that is, allow donors to restrict the use of their donations for a specific purpose. Research shows that earmarking decreases operational performance because it limits charities' flexibility to use donations. However, there is also a common belief that earmarking increases donations. Earmarking is assumed to increase donations through three mechanisms: by (i) giving donors control over their donations, (ii) increasing operational transparency of donations, and (iii) changing donors' levels of altruism and warm-glow. To resolve this tension, we study how, when, and why earmarking affects donors' decisions. We consider three important decisions donors make that impact the fundraising outcome: (i) preference between earmarking and nonearmarking, (ii) decision to donate or not (i.e., donor activation), and (iii) the donation amount. Methodology/results: We design three online experiments that allow us to quantify the effect of earmarking on donors' decisions and to investigate the role of the three aforementioned mechanisms in fundraising. Our results reveal, for example, that earmarking activates more donors but it does not always increase donation amounts. In addition, we determine the conditions under which the three mechanisms affect the outcome of fundraising campaigns. Managerial implications: Our findings provide actionable insights for how charities can design fundraising campaigns more effectively and suggest when to leverage earmarking and the three mechanisms depending on the charities' fundraising goals.

The Driver-Aide Problem: Coordinated Logistics for Last-Mile Delivery
Manufacturing & Service Operations Management (M&SOM)
Rui Zhang
Additional authors: S. Raghavan
Jan./Feb. 2024, Vol. 26 Issue 1, p291-311.

Abstract: Problem definition: Last-mile delivery is a critical component of logistics networks, accounting for approximately 30%–35% of costs. As delivery volumes have increased, truck route times have become unsustainably long. To address this issue, many logistics companies, including FedEx and UPS, have resorted to using a "driver aide" to assist with deliveries. The aide can assist the driver in two ways. As a "jumper," the aide works with the driver in preparing and delivering packages, thus reducing the service time at a given stop. As a "helper," the aide can independently work at a location delivering packages, and the driver can leave to deliver packages at other locations and then return. Given a set of delivery locations, travel times, service times, jumper's savings, and helper's service times, the goal is to determine both the delivery route and the most effective way to use the aide (e.g., sometimes as a jumper and sometimes as a helper) to minimize the total routing time. Methodology/results: We model this problem as an integer program with an exponential number of variables and an exponential number of constraints and propose a branch-cut-and-price approach for solving it. Our computational experiments are based on simulated instances built on real-world data provided by an industrial partner and a data set released by Amazon. The instances based on the Amazon data set show that this novel operation can lead to, on average, a 35.8% reduction in routing time and a 22.0% cost saving. More importantly, our results characterize the conditions under which this novel operating mode can lead to significant savings in terms of both routing time and cost. Managerial implications: Our computational results show that the driver aide with both jumper and helper modes is most effective when there are denser service regions and when the truck's speed is higher (≥10 miles per hour).
Coupled with an economic analysis, we come up with rules of thumb (that have close to 100% accuracy) to predict whether to use the aide and in which mode. Empirically, we find that the service delivery routes with greater than 50% of the time devoted to delivery (as opposed to driving) are the ones that provide the greatest benefit. These routes are characterized by a high density of delivery locations.
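The trade-off between the two aide modes can be illustrated with a back-of-the-envelope calculation (invented numbers; the paper's model is an integer program solved by branch-cut-and-price): jumper mode shortens a stop, while helper mode lets a long stop and a side loop of other deliveries run in parallel.

```python
def best_mode(service, jumper_saving, driver_loop):
    """Total elapsed time to clear one long stop plus a side loop of other
    deliveries, under each way of using the aide (illustrative only):
    - no_aide: driver serves the stop, then drives the side loop
    - jumper:  aide helps at the stop (service time reduced), then the loop
    - helper:  aide serves the stop alone while the driver runs the loop;
               the pair reunites when both are done
    """
    times = {
        "no_aide": service + driver_loop,
        "jumper": (service - jumper_saving) + driver_loop,
        "helper": max(service, driver_loop),
    }
    return min(times, key=times.get), times

# a long stop next to a dense side loop: parallel (helper) mode dominates
mode, times = best_mode(service=20, jumper_saving=6, driver_loop=12)
print(mode, times)  # helper {'no_aide': 32, 'jumper': 26, 'helper': 20}

# a short stop with little nearby work: jumper mode dominates
print(best_mode(service=8, jumper_saving=6, driver_loop=2)[0])  # jumper
```

This mirrors the empirical finding quoted above: routes that spend most of their time delivering rather than driving offer more work to parallelize, so the aide (especially in helper mode) buys more.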

Previous Operations Research Publications

Operations Management Faculty Directory