Number of records: 1

Gradient Descent Parameter Learning of Bayesian Networks under Monotonicity Restrictions

  1.
    0490309 - ÚTIA 2019 RIV CZ eng C - Conference paper (international conference)
    Plajner, Martin - Vomlel, Jiří
    Gradient Descent Parameter Learning of Bayesian Networks under Monotonicity Restrictions.
    Proceedings of the 11th Workshop on Uncertainty Processing (WUPES’18). Praha: MatfyzPress, Publishing House of the Faculty of Mathematics and Physics Charles University, 2018 - (Kratochvíl, V.; Vejnarová, J.), pp. 153-164. ISBN 978-80-7378-361-7.
    [Workshop on Uncertainty Processing (WUPES’18). Třeboň (CZ), 06.06.2018-09.06.2018]
    Grant CEP: GA ČR(CZ) GA16-12010S
    Other grants: ČVUT(CZ) SGS17/198/OHK4/3T/14
    Institutional support: RVO:67985556
    Keywords: Bayesian networks * learning model parameters * monotonicity condition
    OECD field: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    http://library.utia.cas.cz/separaty/2018/MTR/plajner-0490309.pdf

    Learning the parameters of a probabilistic model is a necessary step in most machine learning modeling tasks. When the model is complex and the data volume is small, the learning process may fail to provide good results. In this paper we present a method that improves learning results for small data sets by using additional information about the modelled system. This additional information is represented by monotonicity conditions, which are restrictions on the parameters of the model. Monotonicity simplifies the learning process, and these conditions are moreover often required to hold by the users of the system.
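
    To make the restriction concrete, one commonly used form of an isotone monotonicity condition for a child variable Y with ordered states and an ordered parent X can be written as follows (an illustrative formulation; the exact conditions used in the paper may differ in detail):

        \[
          x_1 \le x_2 \;\Longrightarrow\; P(Y \le y \mid X = x_1) \;\ge\; P(Y \le y \mid X = x_2)
          \quad \text{for all child states } y,
        \]

    i.e., higher parent states shift probability mass towards higher child states.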

    In this paper we present a generalization of a previously used algorithm for parameter learning of Bayesian networks under monotonicity conditions. This generalization allows both parents and children in the network to have multiple states. Both the algorithm and the monotonicity conditions are described in detail.
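
    As a rough illustration of the idea, and not the authors' implementation, the following sketch runs gradient descent on the conditional probability table (CPT) of a single child node with one multi-state parent, penalizing violations of the isotone condition above. The softmax reparametrization, the penalty weight, and the finite-difference gradients are simplifying assumptions chosen for brevity:

        import numpy as np

        def softmax(z):
            e = np.exp(z - z.max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        def loss(theta, counts, penalty):
            # Negative log-likelihood of the observed counts under the CPT,
            # plus a quadratic penalty on monotonicity violations.
            p = softmax(theta)                  # row j is P(Y | X = x_j)
            nll = -np.sum(counts * np.log(p + 1e-12))
            cdf = np.cumsum(p, axis=1)          # P(Y <= y | X = x_j)
            # Isotone condition: the CDF must be non-increasing in the parent
            # state; positive differences between consecutive rows violate it.
            viol = np.maximum(0.0, cdf[1:] - cdf[:-1])
            return nll + penalty * np.sum(viol ** 2)

        def fit(counts, steps=2000, lr=0.05, eps=1e-5, penalty=10.0):
            # Plain gradient descent; finite-difference gradients keep the
            # example short at the cost of efficiency.
            theta = np.zeros_like(counts, dtype=float)
            for _ in range(steps):
                base = loss(theta, counts, penalty)
                grad = np.zeros_like(theta)
                for idx in np.ndindex(theta.shape):
                    theta[idx] += eps
                    grad[idx] = (loss(theta, counts, penalty) - base) / eps
                    theta[idx] -= eps
                theta -= lr * grad
            return softmax(theta)

        # Toy counts: rows index parent states, columns index child states.
        counts = np.array([[8., 2., 1.],
                           [4., 5., 2.],
                           [1., 3., 7.]])
        print(fit(counts).round(3))

    Because the descent works on the unconstrained logits theta, every iterate corresponds to a valid probability distribution, while the penalty term steers the optimization towards CPTs that satisfy the monotonicity condition.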

    The presented algorithm is tested on two different data sets. Models are trained on data subsamples of varying sizes with both the proposed method and the general EM algorithm, and the learned models are then compared by their ability to fit the data. We present empirical results showing the benefit of monotonicity conditions; the difference is especially significant when working with small data samples. The proposed method outperforms the EM algorithm on small sets and provides comparable results on larger sets.
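
    A hypothetical evaluation loop in the spirit of this experiment is sketched below; it reuses fit() (and the numpy import) from the previous sketch, and all data and subsample sizes are made up for illustration. Because the sampled data are complete, the unconstrained maximum-likelihood estimate (penalty = 0) coincides with the fixed point of EM, so it stands in for the EM baseline here:

        rng = np.random.default_rng(0)
        true_cpt = np.array([[0.7, 0.2, 0.1],   # a monotone ground-truth CPT
                             [0.3, 0.5, 0.2],
                             [0.1, 0.2, 0.7]])

        def sample_counts(n):
            # Draw n (parent, child) pairs with uniformly random parent states.
            counts = np.zeros_like(true_cpt)
            for _ in range(n):
                x = rng.integers(3)
                counts[x, rng.choice(3, p=true_cpt[x])] += 1
            return counts

        test = sample_counts(5000)
        for n in (20, 100, 500):
            train = sample_counts(n)
            for pen, label in ((0.0, "unconstrained ML"), (10.0, "monotone GD     ")):
                p = fit(train, penalty=pen)
                ll = np.sum(test * np.log(p + 1e-12)) / test.sum()
                print(f"n={n:4d}  {label}  test log-lik per sample: {ll:.4f}")
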
    Permanent link: http://hdl.handle.net/11104/0284592
