Number of records: 1
Limitations of Shallow Networks Representing Finite Mappings
1. 0485613 - ÚI 2020 RIV US eng J - Journal Article
Kůrková, Věra
Limitations of Shallow Networks Representing Finite Mappings.
Neural Computing & Applications. Vol. 31, No. 6 (2019), pp. 1783-1792. ISSN 0941-0643. E-ISSN 1433-3058
R&D Projects: GA ČR GA15-18108S; GA ČR (CZ) GA18-23827S
Institutional support: RVO:67985807
Keywords : shallow and deep networks * sparsity * variational norms * functions on large finite domains * finite dictionaries of computational units * pseudo-noise sequences * perceptron networks
OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
Impact factor: 4.774, year: 2019
Method of publishing: Open access
http://dx.doi.org/10.1007/s00521-018-3680-1
Limitations of the capability of shallow networks to efficiently compute real-valued functions on finite domains are investigated. Efficiency is studied in terms of network sparsity and its approximate measures. It is shown that when a dictionary of computational units is not sufficiently large, computation of almost any uniformly randomly chosen function either represents a well-conditioned task performed by a large network or an ill-conditioned task performed by a network of moderate size. The probabilistic results are complemented by a concrete example of a class of functions which cannot be efficiently computed by shallow perceptron networks. The class is constructed using pseudo-noise sequences, which have many features of random sequences but can be generated using special polynomials. Connections to the No Free Lunch Theorem and the central paradox of coding theory are discussed.
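As a sketch of how such pseudo-noise (maximum-length) sequences arise from special polynomials, the minimal Python example below runs a Fibonacci linear-feedback shift register. The choice of the primitive polynomial x^4 + x + 1, the all-ones seed, and the helper name lfsr_sequence are illustrative assumptions; the paper's own construction may differ.

    # Minimal sketch: a Fibonacci linear-feedback shift register (LFSR).
    # The taps [4, 1] encode the (assumed) primitive polynomial x^4 + x + 1,
    # so the 4-bit register cycles through all 15 nonzero states and emits a
    # pseudo-noise (maximum-length) sequence of period 2^4 - 1 = 15.
    def lfsr_sequence(taps, seed, length):
        state = list(seed)
        out = []
        for _ in range(length):
            out.append(state[-1])            # emit the last stage
            feedback = 0
            for t in taps:
                feedback ^= state[t - 1]     # XOR the tapped stages
            state = [feedback] + state[:-1]  # shift and insert the feedback bit
        return out

    # One full period; roughly half ones and half zeros, as in a random sequence.
    print(lfsr_sequence(taps=[4, 1], seed=[1, 1, 1, 1], length=15))
    # -> [1, 1, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 0, 0, 0]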
Permanent Link: http://hdl.handle.net/11104/0280569
File               Size        Commentary                 Version                Access
0485613-afin.pdf   12 608 KB   paginated, final version   Publisher's postprint  required
0485613.pdf        5 330.3 KB                             Author's preprint      required