Number of the records: 1  

Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages

  1.
    SYSNO ASEP: 0536423
    Document Type: J - Journal Article
    R&D Document Type: Journal Article
    Subsidiary J: Article in WOS
    Title: Stronger Separation of Analog Neuron Hierarchy by Deterministic Context-Free Languages
    Author(s): Šíma, Jiří (UIVT-O)
    Source Title: Neurocomputing. Elsevier. ISSN 0925-2312
    Vol. 493, July 2022 (2022), pp. 605-612
    Number of pages: 8
    Language: eng - English
    Country: NL - Netherlands
    Keywords: Recurrent neural network; Analog neuron hierarchy; Deterministic context-free language; Chomsky hierarchy
    OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    R&D Projects: GA19-05704S GA ČR - Czech Science Foundation (CSF)
    Method of publishing: Limited access
    Institutional support: UIVT-O - RVO:67985807
    UT WOS: 000800351800012
    EID SCOPUS: 85124166634
    DOI: 10.1016/j.neucom.2021.12.107
    Annotation: We analyze the computational power of discrete-time recurrent neural networks (NNs) with the saturated-linear activation function within the Chomsky hierarchy. This model restricted to integer weights coincides with binary-state NNs with the Heaviside activation function, which are equivalent to finite automata (Chomsky level 3) recognizing regular languages (REG), while rational weights make this model Turing-complete even for three analog-state units (Chomsky level 0). For the intermediate model αANN of a binary-state NN that is extended with α ≥ 0 extra analog-state neurons with rational weights, we have established the analog neuron hierarchy 0ANNs ⊂ 1ANNs ⊂ 2ANNs ⊆ 3ANNs. The separation 1ANNs ⊊ 2ANNs has been witnessed by the non-regular deterministic context-free language (DCFL) L_# = {0^n 1^n | n ≥ 1}, which cannot be recognized by any 1ANN even with real weights, while any DCFL (Chomsky level 2) is accepted by a 2ANN with rational weights. In this paper, we strengthen this separation by showing that no non-regular DCFL can be recognized by a 1ANN with real weights, which means (DCFLs ∖ REG) ⊂ (2ANNs ∖ 1ANNs), implying 1ANNs ∩ DCFLs = 0ANNs. For this purpose, we show that L_# is the simplest non-regular DCFL by reducing L_# to every language in this class, which is by itself an interesting achievement in computability theory. (An illustrative sketch of the activation function and the witness language L_# follows this record.)
    Workplace: Institute of Computer Science
    Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
    Year of Publishing: 2023
    Electronic address: http://dx.doi.org/10.1016/j.neucom.2021.12.107
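
The annotation above refers to the saturated-linear activation function and to the witness language L_# = {0^n 1^n | n ≥ 1}. Neither is spelled out in this record, so the following Python sketch is purely illustrative and uses names of my own choosing: it defines the saturated-linear activation sigma(x) = min(1, max(0, x)) and a plain deterministic one-counter recognizer for L_#, which makes it easy to see why L_# is a deterministic context-free language but not a regular one. It does not reproduce the paper's 1ANN or 2ANN constructions.

    # Illustrative sketch only (not from the paper): the saturated-linear
    # activation used by analog-state neurons, and a deterministic one-counter
    # recognizer for the witness language L_# = {0^n 1^n | n >= 1}.

    def saturated_linear(x: float) -> float:
        """Saturated-linear activation: sigma(x) = min(1, max(0, x))."""
        return min(1.0, max(0.0, x))

    def recognizes_L_hash(word: str) -> bool:
        """Deterministic one-counter (DPDA-style) recognizer for {0^n 1^n | n >= 1}."""
        count = 0          # number of 0s not yet matched by a 1
        seen_one = False   # becomes True once the block of 1s has started
        for symbol in word:
            if symbol == "0":
                if seen_one:       # a 0 after the 1s started -> reject
                    return False
                count += 1
            elif symbol == "1":
                if count == 0:     # more 1s than 0s -> reject
                    return False
                seen_one = True
                count -= 1
            else:
                return False       # the alphabet is {0, 1}
        return seen_one and count == 0   # every 0 matched by a 1, and n >= 1

    if __name__ == "__main__":
        assert recognizes_L_hash("01") and recognizes_L_hash("000111")
        assert not recognizes_L_hash("") and not recognizes_L_hash("0011100")
        print(saturated_linear(1.7), saturated_linear(-0.3), saturated_linear(0.42))

The unbounded counter is exactly the resource a finite automaton lacks, which is why L_# witnesses the non-regularity used in the separation result; any fixed number of states would be exhausted for large enough n.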