Number of records: 1

On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data

  1.
    SYSNO ASEP: 0562371
    ASEP type: C - Conference paper (international conference)
    RIV classification: D - Article in conference proceedings
    Title: On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
    Author(s): Kalina, Jan (UIVT-O) RID, SAI, ORCID
    Tumpach, Jiří (UIVT-O) ORCID, SAI
    Holeňa, Martin (UIVT-O) SAI, RID
    Total number of authors: 3
    Source document: 2022 International Joint Conference on Neural Networks (IJCNN) Proceedings. - Piscataway : IEEE, 2022 - ISBN 978-1-7281-8671-9
    Number of pages: 8 pp.
    Form of publication: Online - E
    Event: IJCNN 2022: International Joint Conference on Neural Networks /35./
    Event date: 18.07.2022 - 23.07.2022
    Venue: Padua
    Country: IT - Italy
    Event type: WRD
    Document language: eng - English
    Country of publication: US - United States of America
    Keywords: feedforward networks ; nonlinear regression ; outliers ; robust neural networks ; trend estimation
    OECD field: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    Grant (CEP): GA22-02067S GA ČR - Czech Science Foundation
    Institutional support: UIVT-O - RVO:67985807
    UT WOS: 000867070905022
    EID SCOPUS: 85140750378
    DOI: 10.1109/IJCNN55064.2022.9892510
    Abstract: Multilayer perceptrons (MLPs) remain commonly used for nonlinear regression modeling in numerous applications. Available robust approaches to training MLPs, which yield reliable results even for data contaminated by outliers, have so far seen little adoption in real applications. Moreover, systematic comparisons of the performance of robust MLPs trained with the regularization techniques available for standard MLPs to prevent overfitting are still lacking. This paper compares the performance of MLPs trained on small datasets with various combinations of robust loss functions and regularization types. The experiments start with MLPs trained on individual datasets, which allow graphical visualization, and proceed to a study of 163251 MLPs trained on well-known benchmarks using various combinations of robustness and regularization types. Huber loss combined with L2 regularization turns out to outperform the other choices. This combination is recommendable whenever the data do not contain a large proportion of outliers (see the sketch after this record).
    Workplace: Institute of Computer Science
    Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
    Collection year: 2023
    Electronic address: https://dx.doi.org/10.1109/IJCNN55064.2022.9892838
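
The abstract recommends combining the Huber loss with L2 regularization when training MLPs on small, possibly outlier-contaminated data. Below is a minimal sketch of that combination in PyTorch; it is not code from the paper, and the architecture, Huber delta, weight-decay strength, and synthetic data are illustrative assumptions.

    import torch
    from torch import nn

    # Synthetic toy data (illustrative only; the paper's benchmarks are not reproduced here).
    torch.manual_seed(0)
    X = torch.randn(200, 5)
    y = X[:, :1] ** 2 + 0.1 * torch.randn(200, 1)

    # Small MLP for nonlinear regression.
    model = nn.Sequential(nn.Linear(5, 16), nn.Tanh(), nn.Linear(16, 1))

    # Robust loss (Huber) combined with L2 regularization,
    # applied here through the optimizer's weight_decay parameter.
    loss_fn = nn.HuberLoss(delta=1.0)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-2, weight_decay=1e-3)

    for epoch in range(500):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()

    print(f"final Huber loss: {loss.item():.4f}")

An explicit L2 penalty (a sum of squared weights added to the loss) would serve the same purpose; the weight_decay argument is used here only for brevity.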
