A Comparison of Regularization Techniques for Shallow Neural Networks Trained on Small Datasets

  1.
    0546161 - ÚI 2022 RIV DE eng C - Conference Paper (international conference)
    Tumpach, Jiří - Kalina, Jan - Holeňa, Martin
    A Comparison of Regularization Techniques for Shallow Neural Networks Trained on Small Datasets.
    Proceedings of the 21st Conference Information Technologies – Applications and Theory (ITAT 2021). Aachen: Technical University & CreateSpace Independent Publishing, 2021 - (Brejová, B.; Ciencialová, L.; Holeňa, M.; Mráz, F.; Pardubská, D.; Plátek, M.; Vinař, T.), pp. 94-103. ISSN 1613-0073.
    [ITAT 2021: Information Technologies - Applications and Theory /21./. Heľpa (SK), 24.09.2021-28.09.2021]
    R&D Projects: GA ČR(CZ) GA18-18080S; GA ČR(CZ) GA19-05704S
    Grant - others: Ministry of Education, Youth and Sports (CZ) - GA MŠk(CZ) LM2018140
    Institutional support: RVO:67985807
    Keywords: artificial neural networks * regularization * robustness * optimization
    OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    https://ics.upjs.sk/~antoni/ceur-ws.org/Vol-0000/paper38.pdf

    Neural networks are frequently used as regression models. Training them is usually difficult when only a small training dataset is available and it contains numerous outliers. This paper investigates the effects of various regularization techniques that can help with this kind of problem. We analyzed the effects of model size, loss selection, L2 weight regularization, L2 activity regularization, Dropout, and Alpha Dropout. We collected 30 different datasets, each of which was split using ten-fold cross-validation. As an evaluation metric, we used cumulative distribution functions (CDFs) of L1 and L2 losses, which allow results from different datasets to be aggregated without considerable distortion. Distributions of the metrics are shown, and thorough statistical tests were conducted. Surprisingly, the results show that Dropout models are not suited for our objective. The most effective approaches are the choice of model size and the L2 types of regularization.
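
    For orientation, the following is a minimal Keras sketch of how the regularization options named in the abstract can be combined in a shallow regression network, together with an empirical-CDF helper for pooling per-sample losses across datasets. It is an illustration only, not the authors' setup: the layer sizes, rates, optimizer, and the SELU/ReLU pairing are assumptions.

    ```python
    # Sketch (not the paper's exact configuration) of a shallow regression
    # network combining the compared techniques: L2 weight regularization,
    # L2 activity regularization, Dropout, and Alpha Dropout.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, regularizers

    def build_model(n_features: int,
                    hidden_units: int = 32,        # "model size" factor
                    l2_weight: float = 1e-4,       # L2 weight regularization
                    l2_activity: float = 1e-5,     # L2 activity regularization
                    dropout_rate: float = 0.1,     # Dropout / Alpha Dropout rate
                    use_alpha_dropout: bool = False) -> tf.keras.Model:
        DropoutLayer = layers.AlphaDropout if use_alpha_dropout else layers.Dropout
        # Alpha Dropout is normally paired with SELU activations; with plain
        # Dropout, ReLU is a common choice. This pairing is an assumption.
        activation = "selu" if use_alpha_dropout else "relu"
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(n_features,)),
            layers.Dense(hidden_units,
                         activation=activation,
                         kernel_regularizer=regularizers.l2(l2_weight),
                         activity_regularizer=regularizers.l2(l2_activity)),
            DropoutLayer(dropout_rate),
            layers.Dense(1),                       # scalar regression output
        ])
        # "Loss selection": MAE (L1) is more robust to outliers than MSE (L2).
        model.compile(optimizer="adam", loss="mean_absolute_error")
        return model

    def empirical_cdf(errors: np.ndarray):
        """Empirical CDF of per-sample errors pooled over datasets and folds."""
        errors = np.sort(np.asarray(errors).ravel())
        return errors, np.arange(1, errors.size + 1) / errors.size
    ```

    Plotting the CDF of pooled per-sample L1 or L2 errors, as the abstract describes, lets results from datasets with very different error scales be compared on one curve without averaging away their differences.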
    Permanent Link: http://hdl.handle.net/11104/0322710

     
    File: 0546161-aoa.pdf
    Size: 16.1 MB
    Commentary: OA CC BY 4.0
    Version: Publisher's postprint
    Access: open-access
     