Vast portfolio selection with gross-exposure constraints
J Fan, J Zhang, K Yu - Journal of the American Statistical …, 2012 - Taylor & Francis
This article introduces large portfolio selection using gross-exposure constraints. It
shows that with gross-exposure constraints, the empirically selected optimal portfolios based …
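The gross-exposure constraint in Fan, Zhang, and Yu (2012) caps the portfolio's total long plus short exposure, ||w||_1 ≤ c. A minimal sketch of a minimum-variance problem under this constraint, using the standard u − v splitting of the weights so the L1 constraint becomes linear (an illustrative formulation, not the authors' own algorithm):

```python
import numpy as np
from scipy.optimize import minimize

def min_variance_gross_exposure(Sigma, c):
    """Minimum-variance portfolio: min w' Sigma w
    subject to sum(w) = 1 and ||w||_1 <= c.
    Split w = u - v with u, v >= 0 so the L1 norm constraint is linear."""
    d = Sigma.shape[0]

    def obj(x):
        w = x[:d] - x[d:]
        return w @ Sigma @ w

    cons = [
        # Weights sum to one (fully invested).
        {"type": "eq", "fun": lambda x: np.sum(x[:d] - x[d:]) - 1.0},
        # Gross exposure: sum(u + v) bounds ||w||_1 from above.
        {"type": "ineq", "fun": lambda x: c - np.sum(x)},
    ]
    x0 = np.concatenate([np.ones(d) / d, np.zeros(d)])
    res = minimize(obj, x0, bounds=[(0, None)] * (2 * d), constraints=cons)
    return res.x[:d] - res.x[d:]
```

Setting c = 1 forbids short sales; larger c relaxes toward the unconstrained Markowitz solution.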
Nets: Network estimation for time series
We model a large panel of time series as a vector autoregression where the autoregressive
matrices and the inverse covariance matrix of the system innovations are assumed to be …
Deep learning for ψ-weakly dependent processes
W Kengne, M Wade - Journal of Statistical Planning and Inference, 2024 - Elsevier
In this paper, we develop deep neural networks for learning stationary ψ-weakly dependent
processes. Such weak-dependence property includes a class of weak dependence …
Bernstein inequality and moderate deviations under strong mixing conditions
In this paper we obtain a Bernstein type inequality for a class of weakly dependent and
bounded random variables. The proofs lead to a moderate deviations principle for sums of …
Fast approximation of the sliced-Wasserstein distance using concentration of random projections
Abstract The Sliced-Wasserstein distance (SW) is being increasingly used in machine
learning applications as an alternative to the Wasserstein distance and offers significant …
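The sliced-Wasserstein distance averages one-dimensional Wasserstein distances over random projection directions; for empirical samples of equal size, each one-dimensional distance reduces to sorting the projections. A minimal Monte Carlo sketch (function name and defaults are illustrative, not taken from the paper):

```python
import numpy as np

def sliced_wasserstein(X, Y, n_projections=100, seed=0):
    """Monte Carlo estimate of the sliced-Wasserstein-2 distance between
    two empirical samples X, Y of shape (n, d) with equal n."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Draw random directions uniformly on the unit sphere.
    theta = rng.normal(size=(n_projections, d))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project both samples onto each direction: shape (n, n_projections).
    X_proj = X @ theta.T
    Y_proj = Y @ theta.T
    # 1-D Wasserstein-2 via sorted projections, averaged over directions.
    sq = np.mean((np.sort(X_proj, axis=0) - np.sort(Y_proj, axis=0)) ** 2)
    return np.sqrt(sq)
```

Each projection costs O(n log n) for the sort, so the estimator scales far better with dimension than the full Wasserstein distance.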
Robust deep learning from weakly dependent data
W Kengne, M Wade - Neural Networks, 2025 - Elsevier
Recent developments in deep learning have established some theoretical properties of deep
neural network estimators. However, most existing work on this topic is restricted …
Multiple change point detection under serial dependence: Wild contrast maximisation and gappy Schwarz algorithm
We propose a methodology for detecting multiple change points in the mean of an otherwise
stationary, autocorrelated, linear time series. It combines solution path generation based on …
Sliced-Wasserstein distance for large-scale machine learning: theory, methodology and extensions
K Nadjahi - 2021 - theses.hal.science
Many methods for statistical inference and generative modeling rely on a probability
divergence to effectively compare two probability distributions. The Wasserstein distance …
The method of cumulants for the normal approximation
H Döring, S Jansen, K Schubert - Probability Surveys, 2022 - projecteuclid.org
The survey is dedicated to a celebrated series of quantitative results, developed by the
Lithuanian school of probability, on the normal approximation for a real-valued random …
A Berry–Esseen theorem and Edgeworth expansions for uniformly elliptic inhomogeneous Markov chains
Abstract We prove a Berry–Esseen theorem and Edgeworth expansions for partial sums of
the form S_N = ∑_{n=1}^{N} f_n(X_n, X_{n+1}), where {X_n} is a uniformly elliptic inhomogeneous …