Number of records: 1

Similarity-based transfer learning of decision policies

  1.
    SYSNO ASEP: 0534000
    Document Type: C - Proceedings Paper (int. conf.)
    R&D Document Type: Conference Paper
    Title: Similarity-based transfer learning of decision policies
    Author(s): Zugarová, Eliška (UTIA-B)
    Guy, Tatiana Valentine (UTIA-B)
    Number of authors: 2
    Source Title: Proceedings of the IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS 2020. - Piscataway : IEEE, 2020 - ISSN 1062-922X - ISBN 978-1-7281-8527-9
    Pages: 37-44
    Number of pages: 8
    Publication form: Online - E
    Action: IEEE International Conference on Systems, Man and Cybernetics 2020
    Event date: 11.10.2020 - 14.10.2020
    Event location: Toronto
    Country: CA - Canada
    Event type: WRD
    Language: eng - English
    Country: US - United States
    Keywords: probabilistic model; fully probabilistic design; transfer learning; closed-loop behavior; Bayesian estimation; sequential decision making
    Subject RIV: BB - Applied Statistics, Operational Research
    OECD category: Statistics and probability
    R&D Projects: LTC18075 GA MŠMT - Ministry of Education, Youth and Sports (MEYS)
    Institutional support: UTIA-B - RVO:67985556
    EID SCOPUS: 85098853951
    DOI: 10.1109/SMC42975.2020.9283093
    Annotation: We consider the problem of learning a decision policy from available past experience. Using the Fully Probabilistic Design (FPD) formalism, we propose a new general approach to finding a stochastic policy from past data. The proposed approach assigns a degree of similarity to each of the past closed-loop behaviors. The degree of similarity expresses how close the current decision-making task is to a past task. It is then used by Bayesian estimation to learn an approximately optimal policy that comprises the best past experience. The approach learns the decision policy directly from the data, without interacting with any supervisor/expert or using any reinforcement signal. The past experience may concern a decision objective different from the current one. Moreover, the past decision policy need not be optimal with respect to the past objective. We demonstrate the approach on simulated examples and show that the learned policy achieves better performance than the optimal FPD policy whenever mismodeling is present. (An illustrative sketch of the similarity-weighted estimation step is given after this record.)
    Workplace: Institute of Information Theory and Automation
    Contact: Markéta Votavová, votavova@utia.cas.cz, Tel.: 266 052 201.
    Year of Publishing: 2021
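
For illustration only, the following minimal Python sketch shows one way the similarity-weighted Bayesian estimation described in the annotation could be realised for discrete states and actions. It is not the authors' implementation: the data layout, the estimate_policy function, and the precomputed per-episode similarity weights are assumptions; the paper derives the similarity degrees from the FPD comparison of closed-loop models, which is not reproduced here.

import numpy as np

def estimate_policy(episodes, similarities, n_states, n_actions, prior=1.0):
    """Estimate p(action | state) from past closed-loop data.

    episodes     -- list of episodes, each a list of (state, action) pairs
                    observed in a past closed-loop behavior (assumed format).
    similarities -- one weight per episode expressing how close that past
                    task is to the current one (assumed precomputed).
    prior        -- symmetric Dirichlet prior count (pseudo-observations).
    """
    counts = np.full((n_states, n_actions), float(prior))  # Dirichlet prior counts
    for episode, weight in zip(episodes, similarities):
        for state, action in episode:
            counts[state, action] += weight                 # similarity-weighted update
    return counts / counts.sum(axis=1, keepdims=True)       # posterior-mean policy

# Toy usage: two past episodes; the first is far more similar to the current
# task, so the estimated policy is dominated by its (state, action) statistics.
episodes = [[(0, 1), (1, 1), (0, 1)], [(0, 0), (1, 0)]]
similarities = [0.9, 0.1]
policy = estimate_policy(episodes, similarities, n_states=2, n_actions=2)
print(policy)   # rows: states, columns: action probabilities

Weighting the Dirichlet counts by the similarity degree is one simple way to let past tasks that are closer to the current one contribute more pseudo-observations to the estimated policy.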