Number of the records: 1  

Understanding and Controlling Artificial General Intelligent Systems

  1.
    0474350 - ÚI 2018 RIV GB eng C - Conference Paper (international conference)
    Wiedermann, Jiří - van Leeuwen, J.
    Understanding and Controlling Artificial General Intelligent Systems.
    Proceedings of AISB Annual Convention 2017. London: AISB, 2017 - (Bryson, J.; De Vos, M.; Padget, J.), pp. 356-363. ISBN 978-1-908187-81-9.
    [AISB 2017. Bath (GB), 18.04.2017-22.04.2017]
    Grant - others: GA ČR(CZ) GA15-04960S
    Institutional support: RVO:67985807
    Keywords : artificial intelligence * epistemic computation * artificial general intelligence (AGI) * self-improving epistemic theories * controlling AGI systems
    OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    http://aisb2017.cs.bath.ac.uk/proceedings.html

    Artificial general intelligence (AGI) systems are advancing into all parts of our society. The potential of autonomous systems that surpass the capabilities of human intelligence has stirred debates everywhere. How should ‘super-intelligent’ AGI systems be viewed so that they can be feasibly controlled? We approach this question from the viewpoint of the epistemic philosophy of computation, which treats AGI systems as computational systems processing knowledge over some domain. Rather than considering their autonomous development based on ‘self-improving software’, as is customary in the literature on super-intelligence, we consider AGI systems as operating with ‘self-improving epistemic theories’ that automatically increase their understanding of the world around them. We outline a number of algorithmic principles by which such self-improving theories can be constructed. We then discuss the problem of aligning the behavior of AGI systems with human values in order to make such systems safe. This issue arises concretely when one studies the social and ethical aspects of human-robot interaction in advanced AGI systems as they exist already today. No general solution to this problem is known. However, based on the principles of interactive proof systems, we design an architecture for AGI systems and an interactive scenario that enable one to detect deviations from the prescribed goals in their behavior. The conclusions from our analysis of AGI systems temper the over-optimistic expectations and over-pessimistic fears of singularity believers by grounding ideas about super-intelligent AGI systems on more realistic foundations.
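
    The interactive-proof-based monitoring idea mentioned in the abstract can be pictured with a minimal toy sketch. This is not the architecture from the paper; all names here (ToyProver, GoalConstraint, interrogate) are invented for illustration. The assumed setup: a verifier repeatedly poses randomly chosen challenge situations to the system, and flags a deviation whenever the answered action violates a prescribed goal constraint or contradicts an earlier answer, with repetition playing the confidence-amplifying role familiar from interactive proof systems.

    # Toy illustration (hypothetical, not the paper's architecture): a verifier
    # challenges an AGI "prover" about its planned actions and flags answers that
    # violate a prescribed goal constraint or contradict earlier answers.
    import random
    from typing import Callable, Dict, List

    # A goal constraint maps a proposed action to True (allowed) or False (deviation).
    GoalConstraint = Callable[[str], bool]

    class ToyProver:
        """Stand-in for an AGI system: reports its planned action for a given situation."""
        def __init__(self, policy: Dict[str, str]):
            self.policy = policy  # situation -> planned action

        def planned_action(self, situation: str) -> str:
            return self.policy.get(situation, "idle")

    def interrogate(prover: ToyProver,
                    situations: List[str],
                    constraints: List[GoalConstraint],
                    rounds: int = 20,
                    seed: int = 0) -> List[str]:
        """Randomized challenge loop: repeated, independently chosen challenges
        make persistent deviations hard to hide behind a few lucky answers."""
        rng = random.Random(seed)
        transcript: Dict[str, str] = {}
        deviations: List[str] = []
        for _ in range(rounds):
            situation = rng.choice(situations)
            answer = prover.planned_action(situation)
            # Consistency check: the same challenge must always receive the same answer.
            if situation in transcript and transcript[situation] != answer:
                deviations.append(f"inconsistent answers for '{situation}'")
            transcript[situation] = answer
            # Goal-alignment check: every planned action must satisfy all constraints.
            if not all(c(answer) for c in constraints):
                deviations.append(f"'{answer}' in '{situation}' violates a prescribed goal")
        return deviations

    if __name__ == "__main__":
        prover = ToyProver({"door locked": "force door", "door open": "enter"})
        no_property_damage: GoalConstraint = lambda action: action != "force door"
        for report in interrogate(prover, ["door locked", "door open"], [no_property_damage]):
            print("deviation detected:", report)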
    Permanent Link: http://hdl.handle.net/11104/0271429

     
    File            Size      Version                  Access
    a0474350.pdf    11.3 MB   Publisher’s postprint    require
     
