
Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion

  1.
    0449029 - ÚTIA 2016 RIV GB eng J - Journal Article
    Cavazos-Cadena, R. - Montes-de-Oca, R. - Sladký, Karel
    Sample-Path Optimal Stationary Policies in Stable Markov Decision Chains with Average Reward Criterion.
    Journal of Applied Probability. Vol. 52, No. 2 (2015), pp. 419-440. ISSN 0021-9002. E-ISSN 1475-6072
    Grant - others: GA AV ČR(CZ) 171396
    Institutional support: RVO:67985556
    Keywords : Dominated Convergence theorem for the expected average criterion * Discrepancy function * Kolmogorov inequality * Innovations * Strong sample-path optimality
    Subject RIV: BC - Control Systems Theory
    Impact factor: 0.665, year: 2015
    http://library.utia.cas.cz/separaty/2015/E/sladky-0449029.pdf

    This work concerns discrete-time Markov decision chains with denumerable state and compact action sets. Besides standard continuity requirements, the main assumption on the model is that it admits a Lyapunov function m. In this context, the average reward criterion is analyzed from the sample-path point of view. The main conclusion is that, if the expected average reward associated with m^2 is finite under every policy, then a stationary policy obtained from the optimality equation in the standard way is sample-path average optimal in a strong sense (the criterion and the optimality equation are sketched below the record).
    Permanent Link: http://hdl.handle.net/11104/0250631
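    A minimal sketch of the criterion discussed in the abstract, in generic average-reward notation (the symbols R, p, g, and h are illustrative assumptions, not taken from the paper): for a policy \pi and initial state x, the expected average reward is

        J(\pi, x) = \liminf_{n \to \infty} \frac{1}{n} \, E^{\pi}_{x}\left[ \sum_{t=0}^{n-1} R(X_t, A_t) \right],

    and the stationary policy mentioned in the abstract is obtained by selecting, at each state x, an action attaining the maximum in the average-reward optimality equation

        g + h(x) = \max_{a \in A(x)} \left[ R(x, a) + \sum_{y} p(y \mid x, a) \, h(y) \right].

    Strong sample-path optimality then means that, with probability one under this stationary policy, the pathwise averages (1/n) \sum_{t=0}^{n-1} R(X_t, A_t) converge to the optimal gain g, while under any other policy their limit superior does not exceed g.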

     
     