Number of the records: 1  

Semi-supervised and Active Learning in Video Scene Classification from Statistical Features

  1. SYSNO: ASEP0493293
    Document Type: C - Proceedings Paper (int. conf.)
    R&D Document Type: O - Other
    Title: Semi-supervised and Active Learning in Video Scene Classification from Statistical Features
    Author(s): Šabata, T. (CZ)
    Pulc, Petr (UIVT-O) SAI, ORCID
    Holeňa, Martin (UIVT-O) SAI, RID
    Source Title: ECML PKDD 2018: Workshop on Interactive Adaptive Learning. Proceedings. - Dublin, 2018 / Krempl G. ; Lemaire V. ; Kottke D. ; Calma A. ; Holzinger A. ; Polikar R. ; Sick B.
    Pages: 24-35
    Number of pages: 12
    Publication form: Online - E
    Action: ECML PKDD 2018: The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases
    Event date: 10.09.2018 - 14.09.2018
    Event location: Dublin
    Country: IE - Ireland
    Event type: EUR
    Language: eng - English
    Keywords: video data ; scene classification ; semi-supervised learning ; active learning ; colour statistics ; feedforward neural networks
    Subject RIV: IN - Informatics, Computer Science
    OECD category: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8)
    R&D Projects: GA18-18080S GA ČR - Czech Science Foundation (CSF)
    Institutional support: UIVT-O - RVO:67985807
    Annotation: In multimedia classification, the background is usually considered an unwanted part of the input data and is often modeled only so that it can be removed in later processing. Contrary to that, we believe that a background model (i.e., the scene in which the picture or video shot is taken) should be included as an essential feature for both indexing and follow-up content processing. Information about the image background, however, is usually not the main target of the labeling process, so the number of annotated samples is very limited. We therefore propose a combination of semi-supervised and active learning, specifically self-training combined with uncertainty sampling, to improve the performance of our scene classifier. We use a statistical feature extractor, a feed-forward neural network, and a support vector machine classifier, a combination that consistently achieves higher accuracy on less diverse data. With the proposed approach, we currently achieve precision over 80% on a dataset trained on a single series of a popular TV show.
    Workplace: Institute of Computer Science
    Contact: Tereza Šírová, sirova@cs.cas.cz, Tel.: 266 053 800
    Year of Publishing: 2019
    Electronic address: https://www.ies.uni-kassel.de/p/ial2018/ialatecml2018.pdf
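The annotation describes combining self-training (adopting the classifier's own confident pseudo-labels) with uncertainty sampling (querying an annotator for the least confident points). The sketch below illustrates that combination on a toy nearest-centroid classifier; it is an assumption-laden illustration, not the paper's neural-network/SVM pipeline, and all function names and thresholds are invented for the example.

```python
import numpy as np

def fit_centroids(X, y):
    # Toy stand-in for a classifier: one mean vector per class.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_proba(centroids, X):
    # Softmax over negative distances gives pseudo-probabilities.
    classes = sorted(centroids)
    d = np.stack([np.linalg.norm(X - centroids[c], axis=1) for c in classes], axis=1)
    e = np.exp(-d)
    return classes, e / e.sum(axis=1, keepdims=True)

def self_train_with_uncertainty(X_lab, y_lab, X_unlab, oracle,
                                rounds=3, conf_thresh=0.9, query_per_round=1):
    """Alternate self-training with uncertainty-sampling queries.

    `oracle(i)` returns a human label for unlabeled point i (the
    active-learning annotator); threshold values are illustrative.
    """
    X_lab, y_lab = X_lab.copy(), y_lab.copy()
    pool = list(range(len(X_unlab)))          # indices still unlabeled
    for _ in range(rounds):
        if not pool:
            break
        centroids = fit_centroids(X_lab, y_lab)
        classes, proba = predict_proba(centroids, X_unlab[pool])
        conf = proba.max(axis=1)
        pred = np.array(classes)[proba.argmax(axis=1)]
        new_X, new_y, used = [], [], set()
        # Active learning: ask the oracle about the least confident points.
        for i in np.argsort(conf)[:query_per_round]:
            new_X.append(X_unlab[pool[i]])
            new_y.append(oracle(pool[i]))
            used.add(int(i))
        # Self-training: adopt predictions above the confidence threshold.
        for i in np.where(conf >= conf_thresh)[0]:
            if int(i) not in used:
                new_X.append(X_unlab[pool[i]])
                new_y.append(pred[i])
                used.add(int(i))
        pool = [p for j, p in enumerate(pool) if j not in used]
        if new_X:
            X_lab = np.vstack([X_lab, np.array(new_X)])
            y_lab = np.concatenate([y_lab, np.array(new_y)])
    return fit_centroids(X_lab, y_lab)
```

In the paper's setting the centroid model would be replaced by the feed-forward network and SVM over statistical colour features, but the labeled-pool bookkeeping stays the same.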