Number of the records: 1  

Can Dictionary-based Computational Models Outperform the Best Linear Ones?

  1.
    0360287 - ÚI 2012 RIV GB eng J - Journal Article
    Gnecco, G. - Kůrková, Věra - Sanguineti, M.
    Can Dictionary-based Computational Models Outperform the Best Linear Ones?
    Neural Networks. Vol. 24, No. 8 (2011), pp. 881-887. ISSN 0893-6080. E-ISSN 1879-2782
    R&D Projects: GA MŠMT OC10047
    Grant - others:CNR - AV ČR project 2010-2012(XE) Complexity of Neural-Network and Kernel Computational Models
    Institutional research plan: CEZ:AV0Z10300504
    Keywords : dictionary-based approximation * linear approximation * rates of approximation * worst-case error * Kolmogorov width * perceptron networks
    Subject RIV: IN - Informatics, Computer Science
    Impact factor: 2.182, year: 2011

    Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called a "dictionary") and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., the speeds of decrease of approximation errors as the number n of basis functions grows. Proofs of upper bounds on approximation rates by dictionary-based models are inspected to show that, for individual functions, they do not imply estimates for dictionary-based models that do not also hold for some linear models. By contrast, the possibility of faster approximation rates by dictionary-based models is demonstrated for worst-case errors in the approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
    Permanent Link: http://hdl.handle.net/11104/0197874

