Number of records: 1

Central Moments and Risk-Sensitive Optimality in Markov Reward Chains

  1.
    SYSNO: ASEP0490663
    Document Type: C - Proceedings Paper (int. conf.)
    R&D Document Type: The record was not marked in the RIV
    Title: Central Moments and Risk-Sensitive Optimality in Markov Reward Chains
    Author(s): Sladký, Karel (UTIA-B) RID
    Number of authors: 1
    Source Title: Quantitative Methods in Economics: Multiple Criteria Decision Making XIX. - Bratislava : University of Economics, Bratislava, 2018 / Reiff Martin ; Gežík Pavel - ISBN 978-80-89962-07-5
    Pages: pp. 325-331
    Number of pages: 7
    Publication form: Print - P
    Action: Quantitative Methods in Economics: Multiple Criteria Decision Making XIX
    Event date: 23.05.2018 - 25.05.2018
    Event location: Trenčianské Teplice
    Country: SK - Slovakia
    Event type: EUR
    Language: eng - English
    Keywords: Discrete-time Markov reward chains ; exponential utility ; moment generating functions ; formulae for central moments
    Subject RIV: BB - Applied Statistics, Operational Research
    OECD category: Applied Economics, Econometrics
    R&D Projects: GA18-02739S GA ČR - Czech Science Foundation (CSF)
    Institutional support: UTIA-B - RVO:67985556
    UT WOS: 000455265500044
    Annotation: There is no doubt that the usual optimization criteria examined in the literature on optimization of Markov reward processes, e.g. total discounted or mean reward, may be quite insufficient to characterize the problem from the point of view of the decision maker. To this end it is necessary to select more sophisticated criteria that also reflect the variability-risk features of the problem (cf. Cavazos-Cadena and Fernandez-Gaucherand (1999), Cavazos-Cadena and Hernández-Hernández (2005), Howard and Matheson (1972), Jaquette (1976), Kawai (1987), Mandl (1971), Sladký (2005), (2008), (2013), van Dijk and Sladký (2006), White (1988)).
    In the present paper we consider unichain Markov reward processes with finite state spaces and assume that the generated reward is evaluated by an exponential utility function. Using the Taylor expansion, we present explicit formulae for calculating the variance and higher central moments of the total reward generated by the Markov reward chain, along with its asymptotic behavior and growth rates as the considered time horizon tends to infinity.
    Workplace: Institute of Information Theory and Automation
    Contact: Markéta Votavová, votavova@utia.cas.cz, Tel.: 266 052 201.
    Year of Publishing: 2019
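The paper's explicit formulae are not reproduced in this record. As an illustration only, the sketch below computes exact raw and central moments of the total reward of a finite Markov reward chain by a first-step binomial recursion, not the moment-generating-function/Taylor-expansion approach the annotation describes. The function names (`total_reward_moments`, `central_moments`) and the per-state reward vector `r` are assumptions introduced here, not notation from the paper.

```python
import numpy as np
from math import comb

def total_reward_moments(P, r, n, K=4):
    """Raw moments m[k][i] = E_i[xi_n^k] of the total reward
    xi_n = r_{X_0} + ... + r_{X_{n-1}}, via the first-step recursion
    m_i^{(k)}(n) = sum_{l=0}^{k} C(k,l) r_i^l (P @ m^{(k-l)}(n-1))_i."""
    P = np.asarray(P, dtype=float)
    r = np.asarray(r, dtype=float)
    m = np.zeros((K + 1, len(r)))
    m[0] = 1.0  # zeroth moment of xi_0 = 0
    for _ in range(n):
        new = np.zeros_like(m)
        new[0] = 1.0
        for k in range(1, K + 1):
            for l in range(k + 1):
                # reward r_i collected now, remaining reward from the next state
                new[k] += comb(k, l) * r**l * (P @ m[k - l])
        m = new
    return m

def central_moments(m):
    """Convert per-state raw moments to central moments E_i[(xi_n - mu_i)^k]."""
    K = m.shape[0] - 1
    mu = m[1]
    c = np.zeros_like(m)
    c[0] = 1.0
    for k in range(1, K + 1):
        for l in range(k + 1):
            c[k] += comb(k, l) * m[l] * (-mu) ** (k - l)
    return c
```

As a sanity check, with transition matrix `[[0.5, 0.5], [0.5, 0.5]]` and rewards `[0, 1]`, the reward accumulated after the first step is Binomial(n-1, 0.5), so the exact mean and variance are recovered.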