Number of the records: 1
1. Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
SYSNO ASEP: 0536423
Document Type: J - Journal Article
R&D Document Type: Journal Article
Subsidiary: J - Article in WOS
Title: Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
Author(s): Šíma, Jiří (UIVT-O) RID, SAI, ORCID
Source Title: Neurocomputing. - Elsevier - ISSN 0925-2312, Vol. 493, July 2022 (2022), pp. 605-612
Number of Pages: 8
Language: eng - English
Country: NL - Netherlands
Keywords: Recurrent neural network; Analog neuron hierarchy; Deterministic context-free language; Chomsky hierarchy
OECD Category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
R&D Projects: GA19-05704S GA ČR - Czech Science Foundation (CSF)
Method of Publishing: Limited access
Institutional Support: UIVT-O - RVO:67985807
UT WOS: 000800351800012
EID SCOPUS: 85124166634
DOI: 10.1016/j.neucom.2021.12.107
Annotation: We analyze the computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function within the Chomsky hierarchy. This model restricted to integer weights coincides with binary-state NNs with the Heaviside activation function, which are equivalent to finite automata (Chomsky level 3) recognizing regular languages (REG), while rational weights make this model Turing-complete even for three analog-state units (Chomsky level 0). For the intermediate model αANN of a binary-state NN that is extended with α ≥ 0 extra analog-state neurons with rational weights, we have established the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs. The separation 1ANNs ⊊ 2ANNs has been witnessed by the non-regular deterministic context-free language (DCFL) L_# = {0^n 1^n | n ≥ 1}, which cannot be recognized by any 1ANN even with real weights, while any DCFL (Chomsky level 2) is accepted by a 2ANN with rational weights. In this paper, we strengthen this separation by showing that no non-regular DCFL can be recognized by a 1ANN with real weights, which means (DCFLs − REG) ⊂ (2ANNs − 1ANNs), implying 1ANNs ∩ DCFLs = 0ANNs. For this purpose, we show that L_# is the simplest non-regular DCFL by reducing L_# to any language in this class, which is in itself an interesting achievement in computability theory.
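As an aside on the annotation's witness language: L_# = {0^n 1^n | n ≥ 1} is non-regular yet deterministic context-free, so a single counter (a deterministic pushdown automaton with a one-symbol stack alphabet) suffices to recognize it. The sketch below is illustrative only and is not taken from the paper; the function name is a placeholder.

```python
def accepts_L_hash(w: str) -> bool:
    """Deterministic one-pass membership test for L_# = {0^n 1^n | n >= 1}.

    One counter suffices: increment on '0', decrement on '1', and reject
    any '0' that appears after the first '1' -- mimicking a deterministic
    pushdown automaton with a single stack symbol.
    """
    count = 0
    seen_one = False
    for ch in w:
        if ch == '0':
            if seen_one:
                return False  # a '0' after a '1' breaks the 0^n 1^n shape
            count += 1
        elif ch == '1':
            seen_one = True
            count -= 1
            if count < 0:
                return False  # more '1's than '0's seen so far
        else:
            return False  # symbol outside the alphabet {0, 1}
    return seen_one and count == 0  # requires n >= 1 and matched counts
```

Because the automaton is deterministic and reads the input once, the test runs in linear time and constant extra space beyond the counter, which is exactly what makes L_# a convenient separating witness.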
Workplace: Institute of Computer Science
Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
Year of Publishing: 2023
Electronic Address: http://dx.doi.org/10.1016/j.neucom.2021.12.107