Results for *

Showing results 1 to 10 of 10.

  1. Optimal monetary policy using reinforcement learning
    Published: [2021]
    Publisher: Deutsche Bundesbank, Frankfurt am Main

    Leibniz-Institut für Wirtschaftsforschung Halle, Bibliothek
    no interlibrary loan
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 12
    no interlibrary loan


    This paper introduces a reinforcement learning-based approach to compute optimal interest rate reaction functions in terms of fulfilling inflation and output gap targets. The method is generally flexible enough to incorporate restrictions like the zero lower bound, nonlinear economy structures or asymmetric preferences. We use quarterly U.S. data from 1987:Q3 to 2007:Q2 to estimate (nonlinear) model transition equations, train optimal policies and perform counterfactual analyses to evaluate them, assuming that the transition equations remain unchanged. All of our resulting policy rules outperform other common rules as well as the actual federal funds rate. Given a neural network representation of the economy, our optimized nonlinear policy rules reduce the central bank’s loss by over 43%. A DSGE model comparison exercise further indicates robustness of the optimized rules.


    Source: Union catalogues
    Language: English
    Media type: Ebook
    Format: Online
    ISBN: 9783957298614
    Further identifier:
    hdl: 10419/248736
    Series: Discussion paper / Deutsche Bundesbank ; no 2021, 51
    Keywords: Optimal Monetary Policy; Reinforcement Learning; Artificial Neural Network; Machine Learning; Reaction Function
    Extent: 1 online resource (circa 62 pages), illustrations
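
A hedged sketch of the setup this record's abstract describes: a policy rule is optimized against fixed, estimated transition equations to minimize a quadratic inflation and output-gap loss under a zero lower bound. Everything below (the toy linear economy, its coefficients, the random-search optimizer) is an invented placeholder, not the paper's estimated model or RL method.

```python
# Minimal sketch, assuming a toy linear economy: optimize a Taylor-type
# interest-rate rule i = a*pi + b*gap against fixed transition equations,
# echoing the abstract's counterfactual setup. All coefficients and the
# random-search "trainer" are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)

def average_loss(params, T=200):
    """Average quadratic central-bank loss under the rule (a, b)."""
    a, b = params
    pi, gap = 2.0, 0.0                      # initial inflation, output gap
    loss = 0.0
    for _ in range(T):
        i = max(0.0, a * pi + b * gap)      # zero lower bound restriction
        # Placeholder transition equations (assumed, not estimated):
        gap = 0.8 * gap - 0.3 * (i - pi) + rng.normal(0.0, 0.1)
        pi = 0.9 * pi + 0.2 * gap + rng.normal(0.0, 0.1)
        loss += (pi - 2.0) ** 2 + gap ** 2  # 2% inflation target
    return loss / T

# Crude random search standing in for RL policy training:
candidates = rng.uniform(0.0, 3.0, size=(500, 2))
best_loss, best_rule = min(((average_loss(p), p) for p in candidates),
                           key=lambda t: t[0])
print(f"best rule (a, b) = {best_rule}, loss = {best_loss:.3f}")
```
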
  2. Algorithmic collusion, genuine and spurious
    Published: 25 July 2021
    Publisher: Centre for Economic Policy Research, London

    Access:
    Publisher (licence required)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    LZ 161
    no interlibrary loan
    Universitätsbibliothek Mannheim
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: CEPR discussion papers ; DP16393
    Keywords: artificial intelligence; Reinforcement Learning; Collusion; exploration
    Extent: 1 online resource (circa 9 pages), illustrations
  3. Ambiguous dynamic treatment regimes
    a reinforcement learning approach
    Published: 2021
    Publisher: Harvard Kennedy School, John F. Kennedy School of Government, [Cambridge, MA]

    Access:
    Publisher (free of charge)
    Resolving system (free of charge)
    Helmut-Schmidt-Universität, Universität der Bundeswehr Hamburg, Universitätsbibliothek
    no interlibrary loan
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    VS 600
    no interlibrary loan


    A main research goal in various studies is to use an observational data set and provide a new set of counterfactual guidelines that can yield causal improvements. Dynamic Treatment Regimes (DTRs) are widely studied to formalize this process and enable researchers to find guidelines that are both personalized and dynamic. However, available methods for finding optimal DTRs often rely on assumptions that are violated in real-world applications (e.g., medical decision-making or public policy), especially when (a) the existence of unobserved confounders cannot be ignored, and (b) the unobserved confounders are time-varying (e.g., affected by previous actions). When such assumptions are violated, one often faces ambiguity regarding the underlying causal model that needs to be assumed to obtain an optimal DTR. This ambiguity is inevitable, since the dynamics of unobserved confounders and their causal impact on the observed part of the data cannot be understood from the observed data. Motivated by a case study of finding superior treatment regimes for patients who underwent transplantation in our partner hospital and faced a medical condition known as New Onset Diabetes After Transplantation (NODAT), we extend DTRs to a new class termed Ambiguous Dynamic Treatment Regimes (ADTRs), in which the causal impact of treatment regimes is evaluated based on a “cloud” of potential causal models. We then connect ADTRs to Ambiguous Partially Observable Markov Decision Processes (APOMDPs) proposed by Saghafian (2018), and consider unobserved confounders as latent variables but with ambiguous dynamics and causal effects on observed variables. Using this connection, we develop two Reinforcement Learning methods termed Direct Augmented V-Learning (DAV-Learning) and Safe Augmented V-Learning (SAV-Learning), which enable using the observed data to efficiently learn an optimal treatment regime. We establish theoretical results for these learning methods, including (weak) consistency and asymptotic normality. We further evaluate the performance of these learning methods both in our case study (using clinical data) and in simulation experiments (using synthetic data). We find promising results for our proposed approaches, showing that they perform well even compared to an imaginary oracle who knows both the true causal model (of the data generating process) and the optimal regime under that model.


    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Edition: Version: December 8, 2021
    Series: Faculty research working paper series / Harvard Kennedy School, John F. Kennedy School of Government ; RWP21, 034 (December 2021)
    Keywords: Observational Data; Dynamic Treatment Regimes; Unobserved Confounders; APOMDPs; Reinforcement Learning
    Extent: 1 online resource (circa 36 pages), illustrations
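
The "cloud of potential causal models" idea in this record's abstract can be illustrated with worst-case regime evaluation, loosely in the spirit of the paper's "safe" variant. The sketch below is a hypothetical toy: the candidate models, the threshold regimes, and the outcome equation are invented, not the authors' DAV/SAV-Learning estimators.

```python
# Hedged toy illustration: evaluate treatment regimes against a "cloud" of
# candidate causal models and rank them by worst-case value. Models,
# regimes, and the outcome equation are invented placeholders.
import numpy as np

rng = np.random.default_rng(1)

def regime_value(threshold, model, n=5000):
    """Mean outcome of the regime 'treat if biomarker > threshold'
    under one candidate (treatment effect, confounding) model."""
    effect, confounding = model
    biomarker = rng.normal(0.0, 1.0, n)
    treated = biomarker > threshold
    # Outcome mixes the causal effect with an unobserved-confounder term:
    outcome = (effect * treated - confounding * biomarker
               + rng.normal(0.0, 0.5, n))
    return outcome.mean()

cloud = [(1.0, 0.2), (0.6, 0.5), (1.4, -0.1)]   # ambiguity set of models
thresholds = np.linspace(-1.5, 1.5, 13)          # candidate regimes

# "Safe" selection: maximize value under the least favorable model.
safe_regime = max(thresholds,
                  key=lambda c: min(regime_value(c, m) for m in cloud))
print(f"worst-case-optimal threshold: {safe_regime:.2f}")
```
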
  4. Application of artificial intelligence for monetary policy-making
    Published: [2022]
    Publisher: National Bank of Georgia, Tbilisi, Georgia

    Access:
    Publisher (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: NBG working papers ; WP 2022, 02
    Keywords: Artificial Intelligence; Reinforcement Learning; Monetary policy
    Extent: 1 online resource (circa 60 pages), illustrations
  5. A reinforcement learning algorithm for trading commodities
    Published: [2023]
    Publisher: CEIS Tor Vergata, [Rome]

    Access:
    Publisher (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    VS 665
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: CEIS Tor Vergata research paper series ; vol. 21, issue 1 = no. 552 (February 2023)
    Keywords: Portfolio Optimization; Reinforcement Learning; SARSA; Commodities; Threshold Models
    Extent: 1 online resource (circa 22 pages), illustrations
  6. The weight of personal experience
    an experimental measurement
    Published: [2012]
    Publisher: IGIER, Università Bocconi, Milano, Italy

    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Content notes
    Full text (free of charge)
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Edition: This version: August 31, 2012
    Series: Working paper series / IGIER ; n. 452
    Keywords: Experiments; Learning; Observation; Reinforcement Learning; Belief-Based Learning
    Extent: 1 online resource (circa 33 pages), illustrations
  7. Reinforcement learning and portfolio allocation
    challenging traditional allocation methods?
    Published: 2 February 2023
    Publisher: Queen's University, Belfast, Management School, [Belfast]

    Access:
    Resolving system (free of charge)
    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    DS 843
    no interlibrary loan


    We test the out-of-sample trading performance of model-free reinforcement learning (RL) agents and compare them with the performance of equally-weighted portfolios and traditional mean-variance (MV) optimization benchmarks. By dividing European and U.S. indices constituents into factor datasets, the RL-generated portfolios face different scenarios defined by these factor environments. The RL approach is empirically evaluated based on a selection of measures and probabilistic assessments. Trained only on price data and features constructed from these prices, the RL approach yields better risk-adjusted returns as well as probabilistic Sharpe ratios compared to MV specifications. However, this performance varies across factor environments. RL models partially uncover the nonlinear structure of the stochastic discount factor. It is further demonstrated that RL models are successful at reducing left-tail risks in out-of-sample settings. These results indicate that these models are indeed useful in portfolio management applications.


    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Further identifier:
    hdl: 10419/271267
    Series: QMS working paper ; 2023, 01
    Keywords: Asset Allocation; Reinforcement Learning; Machine Learning; Portfolio Theory; Diversification
    Extent: 1 online resource (circa 49 pages), illustrations
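
This record's abstract compares RL-generated portfolios with an equally-weighted benchmark out of sample. The sketch below mimics only that evaluation setup under stated assumptions: the synthetic return process and the naive momentum policy (standing in for a learned allocation) are invented, not the paper's model-free RL agents.

```python
# Illustrative comparison only: a naive momentum policy versus an
# equally-weighted benchmark on synthetic daily returns.
import numpy as np

rng = np.random.default_rng(2)
T, n_assets = 1000, 5
returns = rng.normal(0.0004, 0.01, (T, n_assets))   # synthetic returns

def annualized_sharpe(r):
    return r.mean() / r.std() * np.sqrt(252)

equal_weight = returns.mean(axis=1)                  # benchmark portfolio

policy_returns = []
for t in range(20, T):
    signal = returns[t - 20:t].mean(axis=0)          # trailing 20-day mean
    w = np.clip(signal, 0.0, None)                   # long-only tilts
    if w.sum() == 0.0:
        w = np.full(n_assets, 1.0 / n_assets)        # fall back to equal weight
    else:
        w = w / w.sum()
    policy_returns.append(w @ returns[t])

print(f"equal-weight Sharpe: {annualized_sharpe(equal_weight):.2f}")
print(f"policy Sharpe:       {annualized_sharpe(np.array(policy_returns)):.2f}")
```
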
  8. Q-learning-based financial trading systems with applications
    Published: October 2014
    Publisher: Department of Economics, Ca’ Foscari University of Venice, Venice, Italy

    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Edition: First draft
    Series: Working paper / Ca' Foscari University of Venice, Department of Economics ; 2014, no. 15
    Keywords: Financial trading system; Reinforcement Learning; Q-Learning algorithm; daily stock price time series; FTSE MIB basket
    Extent: 1 online resource (circa 25 pages), illustrations
  9. Q-Learning and SARSA
    a comparison between two intelligent stochastic control approaches for financial trading
    Published: [2015]
    Publisher: Department of Economics, Ca’ Foscari University of Venice, Venice, Italy

    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: Working paper / Ca' Foscari University of Venice, Department of Economics ; 2015, no. 15
    Keywords: Financial trading system; Adaptive Market Hypothesis; model free machine learning; Reinforcement Learning; Q-Learning; SARSA; Italian stock market
    Extent: 1 online resource (circa 25 pages), illustrations
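
This record names a comparison of Q-learning and SARSA for trading but carries no abstract; for orientation, the sketch below shows the textbook difference between the two update rules. It is a generic illustration, not the paper's trading system.

```python
# Textbook forms of the two update rules named in the title. Q-learning is
# off-policy (target uses the greedy next action); SARSA is on-policy
# (target uses the action actually taken next). Q is a |S| x |A| table.
import numpy as np

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.99):
    """Off-policy: bootstrap from the greedy action in s_next."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.99):
    """On-policy: bootstrap from the action actually taken in s_next."""
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])

# Example: one transition (state 0, action 1, reward 0.5, next state 2).
Q = np.zeros((3, 2))
q_learning_update(Q, 0, 1, 0.5, 2)
sarsa_update(Q, 0, 1, 0.5, 2, a_next=0)
print(Q[0])
```
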
  10. Losing from naive reinforcement learning
    a survival analysis of individual repurchase decisions
    Author: Jiao, Peiran
    Published: November 2015
    Publisher: University of Oxford, Department of Economics, Oxford

    ZBW - Leibniz-Informationszentrum Wirtschaft, Standort Kiel
    no interlibrary loan
    Source: Union catalogues
    Language: English
    Media type: Book (monograph)
    Format: Online
    Series: Department of Economics discussion paper series / University of Oxford ; number 765
    Keywords: Repurchase Bias; Reinforcement Learning; Sophistication; Experience
    Extent: 1 online resource (34 pages)