
Adaptive Selection of Gaussian Process Model for Active Learning in Expensive Optimization

    0493292 - ÚI 2019 RIV IE eng A - Abstract
    Repický, Jakub - Pitra, Zbyněk - Holeňa, Martin
    Adaptive Selection of Gaussian Process Model for Active Learning in Expensive Optimization.
    ECML PKDD 2018: Workshop on Interactive Adaptive Learning. Proceedings. Dublin, 2018 - (Krempl, G.; Lemaire, V.; Kottke, D.; Calma, A.; Holzinger, A.; Polikar, R.; Sick, B.), pp. 80-84
    [ECML PKDD 2018: The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. 10.09.2018-14.09.2018, Dublin]
    R&D Projects: GA ČR GA17-01251S
    Institutional support: RVO:67985807
    Keywords : Gaussian process * Surrogate model * Black-box optimization * Active Learning
    OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    https://www.ies.uni-kassel.de/p/ial2018/ialatecml2018.pdf

    ABSTRACT: Black-box optimization denotes the optimization of objective functions whose values are available only through empirical measurements or experiments. Such optimization tasks are most often tackled with evolutionary algorithms and other metaheuristics, which need to evaluate the objective function at many points. This is a serious problem when evaluation is expensive with respect to some resource, e.g., the cost of the required experiments. A standard way to circumvent this problem is to evaluate the original objective function at only a small fraction of those points and to evaluate a surrogate model of the original function at the remaining points. Once a model has been trained, the success of the optimization in the remaining iterations depends on a resource-aware selection of the points at which the original function will be evaluated, which is a typical active-learning task. The surrogate model used in the reported research is a Gaussian process (GP), which treats the values of the unknown function as jointly Gaussian random variables. The advantage of a GP compared to other kinds of surrogate models is its capability to quantify the uncertainty of its predictions by calculating the variance of the posterior distribution of function values.
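    The abstract describes the generic surrogate-assisted loop: fit a GP to the few expensive evaluations, then use its posterior uncertainty to decide where to spend the next expensive evaluation. The minimal Python sketch below illustrates that loop with a hand-rolled GP posterior and a simple maximum-posterior-variance selection rule; the toy objective, kernel hyperparameters, and helper names (expensive_objective, rbf_kernel, gp_posterior) are illustrative assumptions, not the adaptive model-selection procedure proposed in the paper.

import numpy as np

def rbf_kernel(A, B, length_scale=0.3, signal_var=1.0):
    # Squared-exponential covariance between two sets of 1-D points.
    d = A[:, None] - B[None, :]
    return signal_var * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior(X_train, y_train, X_test, noise_var=1e-6):
    # GP posterior mean and variance at X_test, given observed (X_train, y_train).
    K = rbf_kernel(X_train, X_train) + noise_var * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mean = K_s.T @ alpha
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss) - np.sum(v ** 2, axis=0)
    return mean, var

def expensive_objective(x):
    # Hypothetical stand-in for a costly measurement or experiment.
    return np.sin(3 * x) + 0.5 * x

rng = np.random.default_rng(0)
X_train = rng.uniform(0, 2, size=5)      # few expensive evaluations so far
y_train = expensive_objective(X_train)
X_cand = np.linspace(0, 2, 201)          # cheap candidate points for the surrogate

mean, var = gp_posterior(X_train, y_train, X_cand)
# Active-learning step: evaluate the original function where the surrogate
# is most uncertain, i.e., where the posterior variance is largest.
x_next = X_cand[np.argmax(var)]
print(f"next expensive evaluation at x = {x_next:.3f}")

    Acquisition criteria that also use the posterior mean (e.g., expected improvement) are common alternatives to the pure-variance rule shown here.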
    Permanent Link: http://hdl.handle.net/11104/0286678

    File: a0493292.pdf (7715 KB)
    Commentary: Proceedings available online.
    Version: Publisher's postprint
    Access: open-access