Seminar on Probability and Statistics
Organizer(s): Nakahiro Yoshida, Hiroki Masuda, Teppei Ogihara, Yuta Koike
Seminar information archive
2016/08/09
13:00-16:30 Room #117 (Graduate School of Math. Sci. Bldg.)
David Nualart (Kansas University)
Malliavin calculus and normal approximations
[ Abstract ]
The purpose of these lectures is to introduce some recent results on the application of Malliavin calculus combined with Stein's method to normal approximation. The Malliavin calculus is a differential calculus on the Wiener space. First, we will present some elements of Malliavin calculus, defining the basic differential operators: the derivative, its adjoint called the divergence operator, and the generator of the Ornstein-Uhlenbeck semigroup. The behavior of these operators on the Wiener chaos expansion will be discussed. Then, we will introduce Stein's method for normal approximation, which leads to general bounds for the Kolmogorov and total variation distances between the law of a Brownian functional and the standard normal distribution. In this context, the integration by parts formula of Malliavin calculus will allow us to express these bounds in terms of the Malliavin operators. We will present the application of this methodology to derive the Fourth Moment Theorem for a sequence of multiple stochastic integrals, and we will discuss some results on the uniform convergence of densities obtained using Malliavin calculus techniques. Finally, examples of functionals of Gaussian processes, such as the fractional Brownian motion, will be discussed.
[ Reference URL ]
http://www2.ms.u-tokyo.ac.jp/probstat/?page_id=180
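For reference, the bounds and the Fourth Moment Theorem mentioned in the abstract can be stated (in the form due to Nourdin and Peccati) as follows: for a centered functional $F$ of an isonormal Gaussian process with $F \in \mathbb{D}^{1,2}$, $\mathbb{E}[F^2] = 1$ and an absolutely continuous law,
$$ d_{TV}\big(F, N(0,1)\big) \le 2\, \mathbb{E}\big| 1 - \langle DF, -DL^{-1}F \rangle_{\mathfrak{H}} \big|, $$
where $D$ is the Malliavin derivative and $L$ the generator of the Ornstein-Uhlenbeck semigroup. For multiple stochastic integrals $F_n = I_q(f_n)$, $q \ge 2$, with $\mathbb{E}[F_n^2] \to 1$, the Fourth Moment Theorem states that $F_n$ converges in law to $N(0,1)$ if and only if $\mathbb{E}[F_n^4] \to 3$.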
2016/08/06
10:00-17:10 Room #123 (Graduate School of Math. Sci. Bldg.)
Nakahiro Yoshida (University of Tokyo, Institute of Statistical Mathematics, and JST CREST) 10:00-10:50
Asymptotic expansion of variations
Teppei Ogihara (The Institute of Statistical Mathematics, JST PRESTO, and JST CREST) 11:00-11:50
LAMN property and optimal estimation for diffusion with non-synchronous observations
David Nualart (Kansas University) 13:10-14:00
Approximation schemes for stochastic differential equations driven by a fractional Brownian motion
David Nualart (Kansas University) 14:10-15:00
Parameter estimation for fractional Ornstein-Uhlenbeck processes
Seiichiro Kusuoka (Okayama University) 15:20-16:10
Stein's equations for invariant measures of diffusion processes and their applications via Malliavin calculus
Yasushi Ishikawa (Ehime University) 16:20-17:10
Asymptotic expansion of a nonlinear oscillator with a jump diffusion
[ Reference URL ]
http://www2.ms.u-tokyo.ac.jp/probstat/?page_id=179
2016/07/26
13:00-14:30 Room #052 (Graduate School of Math. Sci. Bldg.)
Ajay Jasra (National University of Singapore)
Multilevel Particle Filters
[ Abstract ]
In this talk the filtering of partially observed diffusions, with discrete-time observations, is considered. It is assumed that only biased approximations of the diffusion can be obtained, for a choice of an accuracy parameter indexed by $l$. A multilevel estimator is proposed, consisting of a telescopic sum of increment estimators associated to the successive levels. The work associated to a mean-square error of $\mathcal{O}(\varepsilon^2)$ between the multilevel estimator and the average with respect to the filtering distribution is shown to scale optimally, for example as $\mathcal{O}(\varepsilon^{-2})$ for optimal rates of convergence of the underlying diffusion approximation. The method is illustrated on several examples.
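For reference, the telescoping structure of the estimator is the standard multilevel decomposition: writing $\eta_l(\varphi)$ for the filtering expectation of a functional $\varphi$ computed with the level-$l$ approximation of the diffusion,
$$ \eta_L(\varphi) = \eta_0(\varphi) + \sum_{l=1}^{L} \big\{ \eta_l(\varphi) - \eta_{l-1}(\varphi) \big\}, $$
and each increment is estimated separately (with coupled particle systems at consecutive levels), so that most of the samples can be placed on the coarse, cheap levels.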
2016/06/21
13:00-15:00 Room #052 (Graduate School of Math. Sci. Bldg.)
Lorenzo Mercuri (University of Milan)
New Classes and Methods in YUIMA package
[ Abstract ]
In this talk, we present three new classes recently introduced in the YUIMA package.
These classes allow the user to manage three different problems:
・Construction of a multidimensional stochastic differential equation driven by a general multivariate Lévy process. In particular, we show how to define and then simulate an SDE driven by a multivariate Variance Gamma process.
・Definition and simulation of a functional of a general SDE.
・Definition and simulation of the integral of an object from the class yuima.model. In particular, we are able to evaluate Riemann-Stieltjes integrals, deterministic integrals with random integrand, and stochastic integrals.
Numerical examples are given in order to explain the new methods and classes.
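As a minimal illustration of the first item above, independent of the YUIMA implementation, the following Python sketch simulates a one-dimensional SDE driven by a variance gamma process via an Euler scheme, generating the variance gamma increments as a Brownian motion subordinated by a gamma clock; all function names and parameter values are illustrative.

import numpy as np

def simulate_vg_sde(x0, drift, diffusion, T=1.0, n=1000,
                    theta=0.0, sigma=1.0, nu=0.2, seed=0):
    """Euler scheme for dX_t = drift(X_t) dt + diffusion(X_t) dL_t, where L is a
    variance gamma process: dL = theta*G + sigma*sqrt(G)*Z, with G a gamma
    increment of mean dt and variance nu*dt, and Z standard normal."""
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        g = rng.gamma(shape=dt / nu, scale=nu)      # gamma time increment
        dL = theta * g + sigma * np.sqrt(g) * rng.standard_normal()
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dL
    return x

# Example: mean-reverting dynamics driven by a variance gamma process.
path = simulate_vg_sde(x0=0.0, drift=lambda x: -x, diffusion=lambda x: 1.0)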
2016/05/30
13:00-14:10 Room #052 (Graduate School of Math. Sci. Bldg.)
OKADA, Yukinori (Osaka University)
Statistical genetics contributes to elucidation of disease biology and genomic drug discovery
2016/04/26
16:10-17:10 Room #123 (Graduate School of Math. Sci. Bldg.)
Teppei Ogihara (Institute of Statistical Mathematics, JST PRESTO, JST CREST)
LAMN property and optimal estimation for diffusion with non-synchronous observations
[ Abstract ]
We study the so-called local asymptotic mixed normality (LAMN) property for a statistical model generated by nonsynchronously observed diffusion processes, using a Malliavin calculus technique. The LAMN property of the statistical model induces an asymptotic minimal variance of estimation errors for any estimator of the parameter. We also construct an optimal estimator which attains the best asymptotic variance.
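For reference, the LAMN property at $\theta$ with rate matrix $r_n$ and random information $\Gamma(\theta)$ means that the log-likelihood ratio admits the expansion
$$ \log \frac{dP^n_{\theta + r_n u}}{dP^n_{\theta}} = u^{\top} \Gamma(\theta)^{1/2} \mathcal{N} - \frac{1}{2}\, u^{\top} \Gamma(\theta)\, u + o_p(1), $$
where $\mathcal{N}$ is a standard normal vector independent of $\Gamma(\theta)$; a convolution theorem then identifies $\Gamma(\theta)^{-1}$ as the minimal asymptotic conditional covariance, which the optimal estimator attains.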
2016/04/26
13:00-14:20 Room #123 (Graduate School of Math. Sci. Bldg.)
Ciprian Tudor (Université de Lille 1)
Stochastic heat equation with fractional noise 1
[ Abstract ]
In the first part, we introduce the bifractional Brownian motion, which is a Gaussian process that generalizes the well-known fractional Brownian motion. We present the basic properties of this process and we also present its connection with the mild solution to the heat equation driven by a Gaussian noise that behaves as the Brownian motion in time.
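For reference, the bifractional Brownian motion $B^{H,K}$ with parameters $H \in (0,1)$ and $K \in (0,1]$ is the centered Gaussian process with covariance
$$ R(s,t) = \frac{1}{2^{K}} \Big( \big( t^{2H} + s^{2H} \big)^{K} - |t-s|^{2HK} \Big), $$
which reduces to the fractional Brownian motion with Hurst parameter $H$ when $K = 1$.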
2016/04/26
14:30-15:50 Room #123 (Graduate School of Math. Sci. Bldg.)
Ciprian Tudor (Université de Lille 1)
Stochastic heat equation with fractional noise 2
[ Abstract ]
We will present recent results concerning the heat equation driven by a Gaussian noise which behaves as a fractional Brownian motion in time and has a correlated spatial structure. We give the basic results concerning the existence and the properties of the solution. We will also focus on the distribution of this Gaussian process and its connection with other fractional-type processes.
2016/04/22
10:30-11:50 Room #002 (Graduate School of Math. Sci. Bldg.)
Ciprian Tudor (Université de Lille 1)
Stein method and Malliavin calculus : theory and some applications to limit theorems 1
[ Abstract ]
In this first part, we will present the basic ideas of Stein's method for normal approximation. We will also describe its connection with the Malliavin calculus and the Fourth Moment Theorem.
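For reference, the starting point of the method is Stein's equation: $Z \sim N(0,1)$ satisfies $\mathbb{E}[f'(Z) - Z f(Z)] = 0$ for all sufficiently smooth $f$, and for a given test function $h$ one solves
$$ f_h'(x) - x f_h(x) = h(x) - \mathbb{E}[h(Z)], $$
so that bounding $\mathbb{E}[f_h'(F) - F f_h(F)]$ over a class of test functions $h$ bounds the corresponding distance between the law of $F$ and the standard normal law.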
2016/04/22
12:50-14:10 Room #002 (Graduate School of Math. Sci. Bldg.)
Ciprian Tudor (Université de Lille 1)
Stein method and Malliavin calculus : theory and some applications to limit theorems 2
[ Abstract ]
In the second presentation, we intend to do the following: to illustrate the application of the Stein method to the limit behavior of the quadratic variation of Gaussian processes and its connection to statistics. We also intend to present the extension of the method to other target distributions.
2016/04/22
14:20-15:50 Room #002 (Graduate School of Math. Sci. Bldg.)
Seiichiro Kusuoka (Okayama University)
Equivalence between the convergence in total variation and that of the Stein factor to the invariant measures of diffusion processes
[ Abstract ]
We consider the characterization of the convergence of distributions to a given distribution in a certain class by using Stein's equation and Malliavin calculus with respect to the invariant measures of one-dimensional diffusion processes. Precisely speaking, we obtain an estimate between the so-called Stein factor and the total variation norm, and the equivalence between the convergence of the distributions in total variation and that of the Stein factor. This talk is based on the joint work with C.A.Tudor (arXiv:1310.3785).
2016/04/22
16:10-17:10 Room #002 (Graduate School of Math. Sci. Bldg.)
Nakahiro Yoshida (University of Tokyo, Institute of Statistical Mathematics, JST CREST)
Asymptotic expansion and estimation of volatility
[ Abstract ]
Parametric estimation of the volatility of an Ito process in a finite time horizon is discussed. An asymptotic expansion of the error distribution will be presented for the quasi-likelihood estimators, i.e., the quasi-MLE, the quasi-Bayesian estimator and the one-step quasi-MLE. The statistical setting is non-ergodic, and the limit distribution is mixed normal. Asymptotic expansion is a basic tool in various areas of traditional ergodic statistics, such as higher-order asymptotic decision theory, bootstrap and resampling plans, prediction theory, information criteria for model selection, information geometry, etc. A natural question is therefore to obtain asymptotic expansions in non-ergodic statistics. However, due to the randomness of the characteristics of the limit, the classical martingale expansion and the mixing method do not apply. Recently a new martingale expansion was developed and applied to a quadratic form of the Ito process. The higher-order terms are characterized by the adaptive random symbol and the anticipative random symbol. The Malliavin calculus is used for the description of the anticipative random symbols as well as for obtaining a decay of the characteristic functions. In this talk, the martingale expansion method and the quasi-likelihood analysis, with a polynomial-type large deviation estimate of the quasi-likelihood random field, collaborate to derive expansions for the quasi-likelihood estimators. Expansions of the realized volatility under microstructure noise, the power variation and the error of the Euler-Maruyama scheme are recent applications. Further, some extension of the martingale expansion to general martingales will be mentioned. References: SPA2013, arXiv:1212.5845, AISM2011, arXiv:1309.2071 (to appear in AAP), arXiv:1512.04716.
2016/01/27
13:00-14:10 Room #052 (Graduate School of Math. Sci. Bldg.)
Ajay Jasra (National University of Singapore)
Multilevel SMC Samplers
[ Abstract ]
The approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs) is considered herein; this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level $h_L$. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multi-level Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels $\infty > h_0 > h_1 > \cdots > h_L$. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained in the SMC context. The approach is numerically illustrated on a Bayesian inverse problem. This is a joint work with Kody Law (ORNL), Yan Zhou (NUS), Raul Tempone (KAUST) and Alex Beskos (UCL).
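As background (a standard multi-level Monte Carlo consideration, not specific to this talk): if the level-$l$ increment estimator has variance $V_l$ and per-sample cost $C_l$, allocating
$$ N_l \;\propto\; \sqrt{V_l / C_l} $$
samples to level $l$ minimizes the total cost for a prescribed overall variance, and the decay of $V_l$ across levels is what produces the reduction in computational effort compared with sampling only at the finest discretisation.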
2016/01/20
13:00-17:00 Room #123 (Graduate School of Math. Sci. Bldg.)
Enzo Orsingher (Sapienza University of Rome)
Fractional calculus and some applications to stochastic processes
[ Abstract ]
1) Riemann-Liouville fractional integrals and derivatives
2) integrals of derivatives and derivatives of integrals
3) Dzerbayshan-Caputo fractional derivatives
4) Marchaud derivative
5) Riesz potential and fractional derivatives
6) Hadamard derivatives and also Erdelyi-Kober derivatives
7) Laplace transforms of Riemann-Liouville and Dzerbayshan-Caputo fractional derivatives
8) Fractional diffusion equations and related special functions (Mittag-Leffler and Wright functions)
9) Fractional telegraph equations (space-time fractional equations and also their multidimensional versions)
10) Time-fractional telegraph Poisson process
11) Space fractional Poisson process
13) Other fractional point processes (birth and death processes)
14) We shall present the relationship between solutions of wave and Euler-Poisson-Darboux equations through the Erdelyi-Kober integrals.
In these lessons we will introduce the main ideas of the classical fractional calculus. The results and theorems will be presented with all details and calculations. We shall study some fundamental fractional equations and their interplay with stochastic processes. Some details on the iterated Brownian motion will also be given.
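For reference, the operators in items 1) and 3) are, for suitable $f$ and $0 < \alpha < 1$,
$$ (I^{\alpha}_{a+} f)(t) = \frac{1}{\Gamma(\alpha)} \int_a^t (t-s)^{\alpha-1} f(s)\, ds, \qquad (D^{\alpha}_{a+} f)(t) = \frac{d}{dt}\, (I^{1-\alpha}_{a+} f)(t), $$
the Riemann-Liouville fractional integral and derivative, while the Dzerbayshan-Caputo derivative interchanges differentiation and fractional integration: $({}^{C}D^{\alpha}_{a+} f)(t) = (I^{1-\alpha}_{a+} f')(t)$.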
2016/01/18
13:00-17:00 Room #123 (Graduate School of Math. Sci. Bldg.)
Enzo Orsingher (Sapienza University of Rome)
Fractional calculus and some applications to stochastic processes
[ Abstract ]
1) Riemann-Liouville fractional integrals and derivatives
2) integrals of derivatives and derivatives of integrals
3) Dzerbayshan-Caputo fractional derivatives
4) Marchaud derivative
5) Riesz potential and fractional derivatives
6) Hadamard derivatives and also Erdelyi-Kober derivatives
7) Laplace transforms of Riemann-Liouville and Dzerbayshan-Caputo fractional derivatives
8) Fractional diffusion equations and related special functions (Mittag-Leffler and Wright functions)
9) Fractional telegraph equations (space-time fractional equations and also their multidimensional versions)
10) Time-fractional telegraph Poisson process
11) Space fractional Poisson process
13) Other fractional point processes (birth and death processes)
14) We shall present the relationship between solutions of wave and Euler-Poisson-Darboux equations through the Erdelyi-Kober integrals.
In these lessons we will introduce the main ideas of the classical fractional calculus. The results and theorems will be presented with all details and calculations. We shall study some fundamental fractional equations and their interplay with stochastic processes. Some details on the iterated Brownian motion will also be given.
2016/01/15
13:00-17:00 Room #123 (Graduate School of Math. Sci. Bldg.)
Enzo Orsingher (Sapienza University of Rome)
Fractional calculus and some applications to stochastic processes
[ Abstract ]
1) Riemann-Liouville fractional integrals and derivatives
2) integrals of derivatives and derivatives of integrals
3) Dzerbayshan-Caputo fractional derivatives
4) Marchaud derivative
5) Riesz potential and fractional derivatives
6) Hadamard derivatives and also Erdelyi-Kober derivatives
7) Laplace transforms of Riemann-Liouville and Dzerbayshan-Caputo fractional derivatives
8) Fractional diffusion equations and related special functions (Mittag-Leffler and Wright functions)
9) Fractional telegraph equations (space-time fractional equations and also their multidimensional versions)
10) Time-fractional telegraph Poisson process
11) Space fractional Poisson process
13) Other fractional point processes (birth and death processes)
14) We shall present the relationship between solutions of wave and Euler-Poisson-Darboux equations through the Erdelyi-Kober integrals.
In these lessons we will introduce the main ideas of the classical fractional calculus. The results and theorems will be presented with all details and calculations. We shall study some fundamental fractional equations and their interplay with stochastic processes. Some details on the iterated Brownian motion will also be given.
2015/12/03
16:40-18:00 Room #123 (Graduate School of Math. Sci. Bldg.)
Arnak Dalalyan (ENSAE ParisTech)
Learning theory and sparsity ~ Sparsity and low rank matrix learning ~
[ Abstract ]
In this third lecture, we will present extensions of the previously introduced sparse recovery techniques to the problems of machine learning and statistics in which a large matrix should be learned from data. The analogue of the sparsity, in this context, is the low-rankness of the matrix. We will show that such matrices can be effectively learned by minimizing the empirical risk penalized by the nuclear norm. The resulting problem is a problem of semi-definite programming and can be solved efficiently even when the dimension is large. Theoretical guarantees for this method will be established in the case of matrix completion with known sampling distribution.
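For reference, the estimator described above takes the form
$$ \widehat{M} \in \arg\min_{M} \Big\{ \frac{1}{n} \sum_{i=1}^{n} \big( Y_i - \langle X_i, M \rangle \big)^2 + \lambda \, \| M \|_{*} \Big\}, $$
where $\| M \|_{*}$ is the nuclear norm (the sum of the singular values of $M$), the convex surrogate of the rank, and $\lambda > 0$ is a tuning parameter.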
2015/12/02
14:55-18:00 Room #056 (Graduate School of Math. Sci. Bldg.)
Arnak Dalalyan (ENSAE ParisTech)
Learning theory and sparsity ~ Lasso, Dantzig selector and their statistical properties ~
[ Abstract ]
In this second lecture, we will focus on the problem of high dimensional linear regression under the sparsity assumption and discuss the three main statistical problems: denoising, prediction and model selection. We will prove that convex programming based predictors such as the lasso and the Dantzig selector are provably consistent as soon as the dictionary elements are normalized and an appropriate upper bound on the noise-level is available. We will also show that under additional assumptions on the dictionary elements, the aforementioned methods are rate-optimal and model-selection consistent.
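For reference, with an $n \times p$ design matrix $X$ and response $y$, the two estimators read
$$ \widehat{\beta}^{\mathrm{lasso}} \in \arg\min_{\beta} \Big\{ \frac{1}{2n} \| y - X\beta \|_2^2 + \lambda \| \beta \|_1 \Big\}, \qquad \widehat{\beta}^{\mathrm{DS}} \in \arg\min \Big\{ \| \beta \|_1 : \ \tfrac{1}{n} \| X^{\top} (y - X\beta) \|_{\infty} \le \lambda \Big\}, $$
both of which are computable by convex programming.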
2015/11/25
14:55-18:00 Room #056 (Graduate School of Math. Sci. Bldg.)
Arnak Dalalyan (ENSAE ParisTech)
Learning theory and sparsity ~ Introduction into sparse recovery and compressed sensing ~
[ Abstract ]
In this introductory lecture, we will present the general framework of high-dimensional statistical modeling and its applications in machine learning and signal processing. Basic methods of sparse recovery, such as the hard and the soft thresholding, will be introduced in the context of orthonormal dictionaries and their statistical accuracy will be discussed in detail. We will also show the relation of these methods with compressed sensing and convex programming based procedures.
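As a minimal illustration of the two thresholding rules mentioned above (a self-contained Python sketch, not part of the lecture material), applied coordinatewise to the coefficients of a signal in an orthonormal dictionary:

import numpy as np

def soft_threshold(coeffs, lam):
    """Soft thresholding: shrink every coefficient toward zero by lam."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

def hard_threshold(coeffs, lam):
    """Hard thresholding: keep only coefficients whose magnitude exceeds lam."""
    return np.where(np.abs(coeffs) > lam, coeffs, 0.0)

# Example: only the large coefficients survive thresholding at level 0.5.
coeffs = np.array([0.1, -2.3, 0.05, 1.7, -0.4])
print(soft_threshold(coeffs, 0.5))   # surviving coefficients are shrunk by 0.5
print(hard_threshold(coeffs, 0.5))   # surviving coefficients are kept as they are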
2015/11/18
17:00-18:10 Room #056 (Graduate School of Math. Sci. Bldg.)
Ioane Muni Toke (University of New Caledonia)
Order flow intensities for limit order book modelling
[ Abstract ]
Limit order books are at the core of electronic financial markets. Mathematical models of limit order books use point processes to model the arrival of limit, market and cancellation orders in the order book, but it is not clear what a "good" parametric model for the intensities of these point processes should be.
In the first part of the talk, we show that despite their simplicity basic Poisson processes can be used to accurately model a few features of the order book that more advanced models reproduce with volume-dependent intensities.
In the second part of the talk we present ongoing investigations in a more advanced statistical modelling of these order flow intensities using in particular normal mixture distributions and exponential models.
2015/10/19
13:00-16:40 Room #052 (Graduate School of Math. Sci. Bldg.)
2015/09/17
15:00-16:10 Room #052 (Graduate School of Math. Sci. Bldg.)
Stefano Iacus (University of Milan)
The use of S4 classes and methods in the Yuima R package
[ Abstract ]
In this talk we present the basic concepts of the S4 classes and methods approach for object-oriented programming in R. As a working example, we introduce the structure of the Yuima package for simulation and inference of stochastic differential equations. We will describe the basic classes and objects as well as some recent extensions which allow for handling CARMA and COGARCH processes in Yuima.
2015/08/07
14:40-15:50 Room #052 (Graduate School of Math. Sci. Bldg.)
UBUKATA, Masato (Kushiro Public University of Economics)
Effectiveness of time-varying minimum value at risk and expected shortfall hedging
[ Abstract ]
This paper assesses the incremental value of time-varying minimum value at risk (VaR) and expected shortfall (ES) hedging strategies over an unconditional hedging strategy. The conditional futures hedge ratios are calculated through estimation of multivariate volatility models under a skewed and leptokurtic distribution and Monte Carlo simulation for conditional skewness and kurtosis of hedged portfolio returns. We examine DCC-GJR models with or without encompassing a realized covariance measure (RCM) from high-frequency data under a multivariate skewed Student's t-distribution. In the out-of-sample analysis with a daily rebalancing approach, the empirical results show that the conditional minimum VaR and ES hedging strategies outperform the unconditional hedging strategy. We find that the use of the RCM improves the futures hedging performance for a short hedge, although the degree of improvement is small relative to that obtained when switching from unconditional to conditional hedging.
2015/08/07
13:20-14:30 Room #052 (Graduate School of Math. Sci. Bldg.)
Yoann Potiron (University of Chicago)
Estimation of integrated quadratic covariation between two assets with endogenous sampling times
[ Abstract ]
When estimating the integrated covariation between two assets based on high-frequency data, simple assumptions are usually imposed on the relationship between the price processes and the observation times. In this paper, we introduce an endogenous 2-dimensional model and show that it is more general than the existing endogenous models in the literature. In addition, we establish a central limit theorem for the Hayashi-Yoshida estimator in this general endogenous model in the case where prices follow pure-diffusion processes.
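For reference, with observation intervals $I_i = (S_{i-1}, S_i]$ for the first asset and $J_j = (T_{j-1}, T_j]$ for the second, the Hayashi-Yoshida estimator of the integrated covariation is
$$ \widehat{\langle X, Y \rangle} = \sum_{i,j} \big( X_{S_i} - X_{S_{i-1}} \big)\big( Y_{T_j} - Y_{T_{j-1}} \big)\, \mathbf{1}\{ I_i \cap J_j \neq \emptyset \}, $$
which sums products of increments over overlapping observation intervals and requires no synchronization of the two sampling schemes.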
2015/06/05
16:20-17:30 Room #056 (Graduate School of Math. Sci. Bldg.)