Number of the records: 1  

Sparse robust portfolio optimization via NLP regularizations

    SYSNO ASEP: 0468834
    Document Type: V - Research Report
    R&D Document Type: The record was not marked in the RIV
    Title: Sparse robust portfolio optimization via NLP regularizations
    Author(s): Branda, Martin (UTIA-B)
    Červinka, Michal (UTIA-B)
    Schwartz, A. (DE)
    Number of authors: 3
    Issue data: Praha: ÚTIA AV ČR v. v. i., 2016
    Series: Research Report
    Series number: 2358
    Number of pages: 19
    Publication form: Print - P
    Language: eng - English
    Country: CZ - Czech Republic
    Keywords: Conditional Value-at-Risk ; Value-at-Risk ; risk measure
    Subject RIV: BB - Applied Statistics, Operational Research
    R&D Projects: GA15-00735S GA ČR - Czech Science Foundation (CSF)
    Institutional support: UTIA-B - RVO:67985556
    Annotation: We deal with investment problems in which we minimize a risk measure under a condition on the sparsity of the portfolio. Various risk measures are considered, including Value-at-Risk and Conditional Value-at-Risk under a normal distribution of returns; their robust counterparts are derived under moment conditions, all leading to nonconvex objective functions. We propose four solution approaches: a mixed-integer formulation, a relaxation of an alternative mixed-integer reformulation, and two NLP regularizations. In a numerical study, we compare their computational performance on a large number of simulated instances taken from the literature.
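    Illustrative model (a minimal sketch of the kind of problem the annotation describes, assuming the standard closed form of Conditional Value-at-Risk for normally distributed returns; the symbols μ, Σ, α and the cardinality bound K are illustrative assumptions, not taken from the report):

        \min_{x \in \mathbb{R}^n} \;\; -\mu^\top x \;+\; \frac{\varphi\!\left(\Phi^{-1}(\alpha)\right)}{1-\alpha}\,\sqrt{x^\top \Sigma x}
        \quad \text{s.t.} \quad \mathbf{1}^\top x = 1,\;\; x \ge 0,\;\; \|x\|_0 \le K,

    where φ and Φ denote the standard normal density and distribution function, μ and Σ are the mean and covariance of the asset returns, and ‖x‖₀ counts the nonzero portfolio weights, i.e. the sparsity condition that the proposed mixed-integer and regularization approaches are meant to handle.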
    Workplace: Institute of Information Theory and Automation
    Contact: Markéta Votavová, votavova@utia.cas.cz, Tel.: 266 052 201.
    Year of Publishing: 2017