Number of the records: 1  

On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data

  1.
    SYSNO: ASEP0562371
    Document Type: C - Proceedings Paper (int. conf.)
    R&D Document Type: Conference Paper
    Title: On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
    Author(s) Kalina, Jan (UIVT-O) RID, SAI, ORCID
    Tumpach, Jiří (UIVT-O) ORCID, SAI
    Holeňa, Martin (UIVT-O) SAI, RID
    Number of authors: 3
    Source Title: 2022 International Joint Conference on Neural Networks (IJCNN) Proceedings. - Piscataway : IEEE, 2022 - ISBN 978-1-7281-8671-9
    Number of pages: 8
    Publication form: Online - E
    Action: IJCNN 2022: International Joint Conference on Neural Networks /35./
    Event date: 18.07.2022 - 23.07.2022
    Event location: Padua
    Country: IT - Italy
    Event type: WRD
    Language: eng - English
    Country: US - United States
    Keywords: feedforward networks ; nonlinear regression ; outliers ; robust neural networks ; trend estimation
    OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    R&D Projects: GA22-02067S GA ČR - Czech Science Foundation (CSF)
    Institutional support: UIVT-O - RVO:67985807
    UT WOS: 000867070905022
    EID SCOPUS: 85140750378
    DOI: 10.1109/IJCNN55064.2022.9892510
    Annotation: Multilayer perceptrons (MLPs) remain widely used for nonlinear regression modeling in numerous applications. Available robust approaches to training MLPs, which can yield reliable results even for data contaminated by outliers, have so far seen little penetration into real applications. In addition, systematic comparisons of the performance of robust MLPs trained with one of the regularization techniques available for standard MLPs to prevent overfitting are still lacking. This paper compares the performance of MLPs trained with various combinations of robust loss functions and regularization types on small datasets. The experiments start with MLPs trained on individual datasets, which allow graphical visualization, and proceed to a study of 163251 MLPs trained on well-known benchmarks using various combinations of robustness and regularization types. The Huber loss combined with L2-regularization turns out to outperform the other choices. This combination is recommended whenever the data do not contain a large proportion of outliers.
    Workplace: Institute of Computer Science
    Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
    Year of Publishing: 2023
    Electronic address: https://dx.doi.org/10.1109/IJCNN55064.2022.9892838
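The annotation recommends combining the Huber loss (a robust data term) with L2-regularization of the network weights. The sketch below is an illustrative formulation of that combined training objective, not the authors' implementation; the threshold `delta` and regularization weight `lam` are assumed, illustrative parameters.

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    # Quadratic for small residuals, linear beyond delta,
    # which bounds the influence of outlying observations.
    abs_r = np.abs(residuals)
    quadratic = 0.5 * residuals ** 2
    linear = delta * (abs_r - 0.5 * delta)
    return np.where(abs_r <= delta, quadratic, linear)

def robust_regularized_objective(y_true, y_pred, weight_matrices,
                                 delta=1.0, lam=0.01):
    # Robust data term (mean Huber loss over residuals)
    # plus an L2 penalty on all MLP weight matrices.
    data_term = huber_loss(y_true - y_pred, delta).mean()
    l2_term = lam * sum((W ** 2).sum() for W in weight_matrices)
    return data_term + l2_term
```

Minimizing this objective with any gradient-based optimizer corresponds to the Huber + L2 combination the record's annotation found to perform best on data without a large proportion of outliers.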
