On the Optimality of (s, S) Policies
Eugene A. Feinberg
Department of Applied Mathematics and Statistics
State University of New York at Stony Brook
Mark E. Lewis
School of Operations Research and Information Engineering
Cornell University
Ithaca, New York 14853
This paper describes results on the existence of optimal policies and the convergence properties of optimal actions for discounted and average-cost Markov Decision Processes with weakly continuous transition probabilities. The cost functions may be unbounded, and the action sets need not be compact. The results are applied to stochastic periodic-review inventory control problems, for which they imply the existence of stationary optimal policies and certain optimality properties. The optimality of (s, S) policies is proved via the dynamic programming equations for discounted costs and the vanishing discount factor approach for average costs per unit time.
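As background, an (s, S) policy in a periodic-review inventory model orders up to level S whenever the inventory position falls below the reorder point s, and orders nothing otherwise. The following is a minimal illustrative sketch of this ordering rule; the function name and parameters are ours, not the paper's:

```python
def s_S_order_quantity(inventory: float, s: float, S: float) -> float:
    """Illustrative (s, S) ordering rule: if the inventory position is
    below the reorder point s, order up to the order-up-to level S;
    otherwise place no order."""
    if inventory < s:
        return S - inventory  # bring the inventory position up to S
    return 0.0
```

For example, with s = 5 and S = 10, an inventory position of 3 triggers an order of 7 units, while positions of 5 or above trigger no order.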