Journal: Operations Research Letters

Abbreviation

Oper. Res. Lett.

Publisher

Elsevier

ISSN

0167-6377
1872-7468

Search Results

Publications 1–10 of 23
  • Karaca, Orçun; Delikaraoglou, Stefanos; Kamgarpour, Maryam (2021)
    Operations Research Letters
    Considering the sequential clearing of energy and reserves in Europe, enabling inter-area reserve exchange requires optimally allocating inter-area transmission capacities between these two markets. To achieve this, we provide a market-based allocation framework and derive payments with desirable properties. The proposed min-max least core selecting payments achieve individual rationality, budget balance, and approximate incentive compatibility and coalitional stability. The results extend existing work on private discrete items to a network of continuous public choices.
  • Banjac, Goran (2021)
    Operations Research Letters
    The Douglas–Rachford algorithm can be represented as the fixed point iteration of a firmly nonexpansive operator. When the operator has no fixed points, the algorithm's iterates diverge, but the difference between consecutive iterates converges to the so-called minimal displacement vector, which can be used to certify infeasibility of an optimization problem. In this paper, we establish new properties of the minimal displacement vector, which allow us to generalize some existing results.
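A minimal numerical sketch (not from the paper, and with made-up example sets) of the infeasibility certificate the abstract describes: running Douglas–Rachford on two disjoint intervals, the iterates diverge, but consecutive differences settle to the minimal displacement vector, here the gap between the sets.

```python
# Douglas-Rachford on A = [-2, 0] and B = [1, 3]. Since A and B are
# disjoint, the feasibility problem "find x in A ∩ B" is infeasible;
# the iterates diverge, but x_{k+1} - x_k converges to the minimal
# displacement vector (here 1.0, the gap between the two intervals).

def proj(x, lo, hi):
    """Euclidean projection of a scalar x onto the interval [lo, hi]."""
    return max(lo, min(hi, x))

def dr_step(x):
    """One Douglas-Rachford step T(x) = x + P_B(2 P_A(x) - x) - P_A(x)."""
    pa = proj(x, -2.0, 0.0)          # projection onto A
    return x + proj(2.0 * pa - x, 1.0, 3.0) - pa

x = 5.0
prev = x
for _ in range(50):
    prev, x = x, dr_step(x)

displacement = x - prev
print(displacement)  # → 1.0, certifying that A ∩ B is empty
```

The nonzero displacement is the certificate: for a feasible problem the differences would vanish instead.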
  • Zenklusen, Rico (2014)
    Operations Research Letters
    Connectivity interdiction
  • Cohen, Victor; Parmentier, Axel (2023)
    Operations Research Letters
    Optimal policies for partially observed Markov decision processes (POMDPs) are history-dependent: decisions are made based on the entire history of observations. Memoryless policies, which take decisions based on the last observation only, are generally considered useless in the literature because one can construct POMDP instances for which optimal memoryless policies are arbitrarily worse than history-dependent ones. Our purpose is to challenge this belief. We show that optimal memoryless policies can be computed efficiently using mixed integer linear programming (MILP), and perform reasonably well on a wide range of instances from the literature. When strengthened with valid inequalities, the linear relaxation of this MILP provides high-quality upper bounds on the value of an optimal history-dependent policy. Furthermore, when a finite-horizon POMDP problem with memoryless policies is used as the rolling optimization problem, a model predictive control approach leads to an efficient history-dependent policy, which we call the short memory in the future (SMF) policy. The SMF policy leverages these memoryless policies to build an approximation of the Bellman value function. Numerical experiments show the efficiency of our approach on benchmark instances from the literature.
  • Haus, Utz-Uwe; Pfeuffer, Frank (2012)
    Operations Research Letters
  • Haus, Utz-Uwe (2015)
    Operations Research Letters
  • Basu, Amitabh; Conforti, Michele; Cornuéjols, Gérard; et al. (2017)
    Operations Research Letters
  • Tejada, Oriol (2013)
    Operations Research Letters
  • Hildebrand, R.; Oertel, T.; Weismantel, R. (2015)
    Operations Research Letters
  • Oertel, Timm; Wagner, Christian; Weismantel, Robert (2014)
    Operations Research Letters