Number of the records: 1
On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
SYSNO ASEP: 0562371
Document Type: C - Proceedings Paper (int. conf.)
R&D Document Type: Conference Paper
Title: On Combining Robustness and Regularization in Training Multilayer Perceptrons over Small Data
Author(s): Kalina, Jan (UIVT-O) RID, SAI, ORCID
Tumpach, Jiří (UIVT-O) ORCID, SAI
Holeňa, Martin (UIVT-O) SAI, RID
Number of authors: 3
Source Title: 2022 International Joint Conference on Neural Networks (IJCNN) Proceedings. - Piscataway : IEEE, 2022 - ISBN 978-1-7281-8671-9
Number of pages: 8 s.
Publication form: Online - E
Action: IJCNN 2022: International Joint Conference on Neural Networks /35./
Event date: 18.07.2022 - 23.07.2022
Event location: Padua
Country: IT - Italy
Event type: WRD
Language: eng - English
Country: US - United States
Keywords: feedforward networks ; nonlinear regression ; outliers ; robust neural networks ; trend estimation
OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
R&D Projects: GA22-02067S GA ČR - Czech Science Foundation (CSF)
Institutional support: UIVT-O - RVO:67985807
UT WOS: 000867070905022
EID SCOPUS: 85140750378
DOI: https://doi.org/10.1109/IJCNN55064.2022.9892510
Annotation: Multilayer perceptrons (MLPs) remain in common use for nonlinear regression modeling in numerous applications. Available robust approaches to training MLPs, which can yield reliable results even for data contaminated by outliers, have so far seen little adoption in real applications. Moreover, there is a lack of systematic comparisons of the performance of robust MLPs when their training uses one of the regularization techniques available for standard MLPs to prevent overfitting. This paper compares the performance of MLPs trained with various combinations of robust loss functions and regularization types on small datasets. The experiments start with MLPs trained on individual datasets, which allow graphical visualization, and proceed to a study of 163251 MLPs trained on well-known benchmarks using various combinations of robustness and regularization types. Huber loss combined with L2 regularization turns out to outperform the other choices. This combination is recommendable whenever the data do not contain a large proportion of outliers.
Workplace: Institute of Computer Science
Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
Year of Publishing: 2023
Electronic address: https://dx.doi.org/10.1109/IJCNN55064.2022.9892838
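The combination the annotation recommends, Huber loss with an L2 (ridge) penalty, can be illustrated with a minimal NumPy sketch. This is not the authors' code; the function names, the delta threshold, and the penalty weight `lam` are illustrative assumptions. Huber loss is quadratic for small residuals and linear for large ones, which is what limits the influence of outliers on training.

```python
import numpy as np

def huber_loss(residuals, delta=1.0):
    """Huber loss: quadratic for |r| <= delta, linear beyond (robust to outliers).

    delta is an illustrative threshold, not a value from the paper."""
    r = np.abs(residuals)
    quadratic = 0.5 * r ** 2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

def objective(residuals, weights, lam=0.01, delta=1.0):
    """Training objective: mean Huber loss plus an L2 penalty on network weights."""
    return huber_loss(residuals, delta).mean() + lam * np.sum(weights ** 2)

# A large residual (here 8.0, playing the role of an outlier) contributes
# linearly rather than quadratically, so it cannot dominate the objective.
res = np.array([0.1, -0.2, 8.0])
w = np.array([0.5, -1.5])
print(objective(res, w))
```

In a full MLP training loop this objective would be minimized over the network weights; the L2 term is what standard frameworks expose as weight decay.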