## Seminar on Probability and Statistics


**Organizer(s):** Nakahiro Yoshida, Teppei Ogihara, Yuta Koike

**Seminar information archive**

### 2016/05/30

13:00-14:10 Room #052 (Graduate School of Math. Sci. Bldg.)

Statistical genetics contributes to elucidation of disease biology and genomic drug discovery

**OKADA, Yukinori** (Osaka University)

### 2016/04/26

16:10-17:10 Room #123 (Graduate School of Math. Sci. Bldg.)

LAMN property and optimal estimation for diffusions with non-synchronous observations

**Teppei Ogihara** (Institute of Statistical Mathematics, JST PRESTO, JST CREST)

[ Abstract ]

We study the so-called local asymptotic mixed normality (LAMN) property for a statistical model generated by nonsynchronously observed diffusion processes, using a Malliavin calculus technique. The LAMN property of the statistical model induces an asymptotic minimal variance of the estimation error for any estimator of the parameter. We also construct an optimal estimator which attains this best asymptotic variance.
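
For reference, a sketch of what the LAMN property asserts, in a standard formulation (the notation here is mine, not necessarily the speaker's):

```latex
% LAMN at \theta, with rate matrix r_n \to 0 and random information \Gamma(\theta):
% for every u, the log-likelihood ratio admits the expansion
\log \frac{dP^n_{\theta + r_n u}}{dP^n_{\theta}}
  = u^{\top} \Gamma(\theta)^{1/2} N
    - \tfrac{1}{2}\, u^{\top} \Gamma(\theta)\, u + o_p(1),
% where N \sim N(0, I) is independent of the positive definite
% random matrix \Gamma(\theta); the limit law of rescaled estimation
% errors is then mixed normal with conditional variance \Gamma(\theta)^{-1}.
```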


### 2016/04/26

13:00-14:20 Room #123 (Graduate School of Math. Sci. Bldg.)

Stochastic heat equation with fractional noise 1

**Ciprian Tudor** (Université de Lille 1)

[ Abstract ]

In the first part, we introduce the bifractional Brownian motion, a Gaussian process that generalizes the well-known fractional Brownian motion. We present the basic properties of this process, and we also present its connection with the mild solution to the heat equation driven by a Gaussian noise that behaves as a Brownian motion in time.
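
For orientation, the standard definition of the process named above (parameter names are the usual ones, not taken from the talk):

```latex
% Bifractional Brownian motion B^{H,K}, H \in (0,1), K \in (0,1]:
% the centered Gaussian process with covariance
R(t,s) = \frac{1}{2^{K}}
  \left( \left(t^{2H} + s^{2H}\right)^{K} - |t-s|^{2HK} \right),
% which reduces to fractional Brownian motion when K = 1.
```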


### 2016/04/26

14:30-15:50 Room #123 (Graduate School of Math. Sci. Bldg.)

Stochastic heat equation with fractional noise 2

**Ciprian Tudor** (Université de Lille 1)

[ Abstract ]

We will present recent results concerning the heat equation driven by a Gaussian noise which behaves as a fractional Brownian motion in time and has a correlated spatial structure. We give the basic results concerning the existence and the properties of the solution. We will also focus on the distribution of this Gaussian process and its connection with other fractional-type processes.


### 2016/04/22

10:30-11:50 Room #002 (Graduate School of Math. Sci. Bldg.)

Stein method and Malliavin calculus: theory and some applications to limit theorems 1

**Ciprian Tudor** (Université de Lille 1)

[ Abstract ]

In this first part, we will present the basic ideas of the Stein method for the normal approximation. We will also describe its connection with the Malliavin calculus and the Fourth Moment Theorem.
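
The two objects named above can be stated compactly (standard formulations; notation is mine):

```latex
% Stein's characterization of the standard normal: Z \sim N(0,1) iff
\mathbb{E}[f'(Z)] = \mathbb{E}[Z f(Z)] \quad \text{for all smooth bounded } f,
% with the associated Stein equation for a test function h:
f'(x) - x f(x) = h(x) - \mathbb{E}[h(Z)].
% Fourth Moment Theorem (Nualart--Peccati): if each F_n lives in a fixed
% Wiener chaos and \mathbb{E}[F_n^2] \to 1, then
% F_n \to N(0,1) in law \iff \mathbb{E}[F_n^4] \to 3.
```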


### 2016/04/22

12:50-14:10 Room #002 (Graduate School of Math. Sci. Bldg.)

Stein method and Malliavin calculus: theory and some applications to limit theorems 2

**Ciprian Tudor** (Université de Lille 1)

[ Abstract ]

In the second presentation, we illustrate the application of the Stein method to the limit behavior of the quadratic variation of Gaussian processes and its connection to statistics. We also intend to present the extension of the method to other target distributions.


### 2016/04/22

14:20-15:50 Room #002 (Graduate School of Math. Sci. Bldg.)

Equivalence between the convergence in total variation and that of the Stein factor to the invariant measures of diffusion processes

**Seiichiro Kusuoka** (Okayama University)

[ Abstract ]

We consider the characterization of the convergence of distributions to a given distribution in a certain class by using Stein's equation and Malliavin calculus, with respect to the invariant measures of one-dimensional diffusion processes. More precisely, we obtain an estimate relating the so-called Stein factor and the total variation norm, and the equivalence between convergence of the distributions in total variation and convergence of the Stein factor. This talk is based on joint work with C. A. Tudor (arXiv:1310.3785).


### 2016/04/22

16:10-17:10 Room #002 (Graduate School of Math. Sci. Bldg.)

Asymptotic expansion and estimation of volatility

**Nakahiro Yoshida** (University of Tokyo, Institute of Statistical Mathematics, JST CREST)

[ Abstract ]

Parametric estimation of the volatility of an Ito process over a finite time horizon is discussed. An asymptotic expansion of the error distribution will be presented for the quasi-likelihood estimators, i.e., the quasi-MLE, the quasi-Bayesian estimator and the one-step quasi-MLE. The statistics here are non-ergodic: the limit distribution is mixed normal.

Asymptotic expansion is a basic tool in various areas of traditional ergodic statistics, such as higher-order asymptotic decision theory, bootstrap and resampling plans, prediction theory, information criteria for model selection, information geometry, etc. A natural question is then to obtain asymptotic expansions in the non-ergodic setting. However, due to the randomness of the characteristics of the limit, the classical martingale expansion and the mixing method cannot apply. Recently a new martingale expansion was developed and applied to a quadratic form of the Ito process. The higher-order terms are characterized by the adaptive random symbol and the anticipative random symbol. The Malliavin calculus is used both to describe the anticipative random symbols and to obtain a decay of the characteristic functions.

In this talk, the martingale expansion method and the quasi-likelihood analysis, with a polynomial-type large deviation estimate of the quasi-likelihood random field, collaborate to derive expansions for the quasi-likelihood estimators. Expansions of the realized volatility under microstructure noise, the power variation and the error of the Euler-Maruyama scheme are recent applications. Further, some extension of the martingale expansion to general martingales will be mentioned. References: SPA2013, arXiv:1212.5845, AISM2011, arXiv:1309.2071 (to appear in AAP), arXiv:1512.04716.
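
Since the abstract mentions realized volatility and the Euler-Maruyama scheme, here is a small illustrative Python sketch (the model and parameters are mine, not the talk's): simulate an Ito process by Euler-Maruyama and estimate its volatility parameter from high-frequency increments via realized variance.

```python
import numpy as np

# Illustrative model: dX_t = theta*X_t dt + sigma*X_t dW_t on [0, T],
# simulated by the Euler-Maruyama scheme.
rng = np.random.default_rng(0)
n, T = 100_000, 1.0
dt = T / n
theta, sigma = 0.5, 0.3

X = np.empty(n + 1)
X[0] = 1.0
dW = rng.normal(0.0, np.sqrt(dt), size=n)
for i in range(n):
    X[i + 1] = X[i] + theta * X[i] * dt + sigma * X[i] * dW[i]

# The realized variance of log-returns estimates the integrated
# variance sigma^2 * T, giving a simple volatility estimator.
rv = np.sum(np.diff(np.log(X)) ** 2)
sigma_hat = np.sqrt(rv / T)
```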


### 2016/01/27

13:00-14:10 Room #052 (Graduate School of Math. Sci. Bldg.)

Multilevel SMC Samplers

**Ajay Jasra** (National University of Singapore)

[ Abstract ]

The approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs) is considered herein; this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods, leading to a discretisation bias with step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multi-level Monte Carlo (MLMC) method can reduce the amount of computational effort needed to estimate expectations for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretisation levels \infty > h_0 > h_1 > \cdots > h_L. In many practical problems of interest, one cannot achieve i.i.d. sampling from the associated sequence of probability distributions. A sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that, under appropriate assumptions, the attractive reduction in computational effort needed to estimate expectations for a given level of error can be maintained in the SMC context. The approach is numerically illustrated on a Bayesian inverse problem. This is joint work with Kody Law (ORNL), Yan Zhou (NUS), Raul Tempone (KAUST) and Alex Beskos (UCL).
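
The telescoping identity at the heart of MLMC can be sketched in a few lines. This is a plain-MLMC toy in Python for geometric Brownian motion (not the SMC variant of the talk, and all parameters are illustrative): coarse and fine Euler discretisations are coupled by sharing the same Brownian increments.

```python
import numpy as np

# Estimate E[X_1] for dX = mu*X dt + sig*X dW, X_0 = 1 (exact: exp(mu)),
# via the MLMC telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}].
rng = np.random.default_rng(1)
mu, sig = 0.05, 0.2

def euler_pair(n_paths, level):
    """Coupled fine/coarse Euler estimates of X_1 at one level."""
    n_fine = 2 ** level
    dt = 1.0 / n_fine
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
    Xf = np.ones(n_paths)
    for i in range(n_fine):
        Xf += mu * Xf * dt + sig * Xf * dW[:, i]
    if level == 0:
        return Xf, np.zeros(n_paths)        # no coarser level below
    Xc = np.ones(n_paths)
    dWc = dW[:, 0::2] + dW[:, 1::2]         # same Brownian path, coarser grid
    for i in range(n_fine // 2):
        Xc += mu * Xc * (2 * dt) + sig * Xc * dWc[:, i]
    return Xf, Xc

estimate = 0.0
for level in range(6):
    fine, coarse = euler_pair(20_000, level)
    estimate += np.mean(fine - coarse)      # one term of the telescope
```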


### 2016/01/20

13:00-17:00 Room #123 (Graduate School of Math. Sci. Bldg.)

Fractional calculus and some applications to stochastic processes

**Enzo Orsingher** (Sapienza University of Rome)

[ Abstract ]

1) Riemann-Liouville fractional integrals and derivatives

2) integrals of derivatives and derivatives of integrals

3) Dzerbayshan-Caputo fractional derivatives

4) Marchaud derivative

5) Riesz potential and fractional derivatives

6) Hadamard derivatives and also Erdelyi-Kober derivatives

7) Laplace transforms of Riemann-Liouville and Dzerbayshan-Caputo fractional derivatives

8) Fractional diffusion equations and related special functions (Mittag-Leffler and Wright functions)

9) Fractional telegraph equations (space-time fractional equations and also their multidimensional versions)

10) Time-fractional telegraph Poisson process

11) Space fractional Poisson process

13) Other fractional point processes (birth and death processes)

14) We shall present the relationship between solutions of wave and Euler-Poisson-Darboux equations through the Erdelyi-Kober integrals.

In these lessons we will introduce the main ideas of the classical fractional calculus. The results and theorems will be presented with all details and calculations. We shall study some fundamental fractional equations and their interplay with stochastic processes. Some details on the iterated Brownian motion will also be given.
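
As a pointer for items 1) and 3) of the outline, the standard definitions read as follows (my notation):

```latex
% Riemann-Liouville fractional integral of order \alpha > 0:
(I^{\alpha} f)(t) = \frac{1}{\Gamma(\alpha)} \int_0^t (t-s)^{\alpha-1} f(s)\, ds,
% Riemann-Liouville derivative, with m = \lceil \alpha \rceil:
(D^{\alpha} f)(t) = \frac{d^m}{dt^m}\, (I^{m-\alpha} f)(t),
% Dzerbayshan-Caputo derivative: differentiate first, then integrate:
({}^{C}\!D^{\alpha} f)(t) = (I^{m-\alpha} f^{(m)})(t).
```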


### 2016/01/18

13:00-17:00 Room #123 (Graduate School of Math. Sci. Bldg.)

Fractional calculus and some applications to stochastic processes

**Enzo Orsingher** (Sapienza University of Rome)

[ Abstract ]

1) Riemann-Liouville fractional integrals and derivatives

2) integrals of derivatives and derivatives of integrals

3) Dzerbayshan-Caputo fractional derivatives

4) Marchaud derivative

5) Riesz potential and fractional derivatives

6) Hadamard derivatives and also Erdelyi-Kober derivatives

7) Laplace transforms of Riemann-Liouville and Dzerbayshan-Caputo fractional derivatives

8) Fractional diffusion equations and related special functions (Mittag-Leffler and Wright functions)

9) Fractional telegraph equations (space-time fractional equations and also their multidimensional versions)

10) Time-fractional telegraph Poisson process

11) Space fractional Poisson process

13) Other fractional point processes (birth and death processes)

14) We shall present the relationship between solutions of wave and Euler-Poisson-Darboux equations through the Erdelyi-Kober integrals.

In these lessons we will introduce the main ideas of the classical fractional calculus. The results and theorems will be presented with all details and calculations. We shall study some fundamental fractional equations and their interplay with stochastic processes. Some details on the iterated Brownian motion will also be given.


### 2016/01/15

13:00-17:00 Room #123 (Graduate School of Math. Sci. Bldg.)

Fractional calculus and some applications to stochastic processes

**Enzo Orsingher** (Sapienza University of Rome)

[ Abstract ]

1) Riemann-Liouville fractional integrals and derivatives

2) integrals of derivatives and derivatives of integrals

3) Dzerbayshan-Caputo fractional derivatives

4) Marchaud derivative

5) Riesz potential and fractional derivatives

6) Hadamard derivatives and also Erdelyi-Kober derivatives

7) Laplace transforms of Riemann-Liouville and Dzerbayshan-Caputo fractional derivatives

8) Fractional diffusion equations and related special functions (Mittag-Leffler and Wright functions)

9) Fractional telegraph equations (space-time fractional equations and also their multidimensional versions)

10) Time-fractional telegraph Poisson process

11) Space fractional Poisson process

13) Other fractional point processes (birth and death processes)

14) We shall present the relationship between solutions of wave and Euler-Poisson-Darboux equations through the Erdelyi-Kober integrals.

In these lessons we will introduce the main ideas of the classical fractional calculus. The results and theorems will be presented with all details and calculations. We shall study some fundamental fractional equations and their interplay with stochastic processes. Some details on the iterated Brownian motion will also be given.


### 2015/12/03

16:40-18:00 Room #123 (Graduate School of Math. Sci. Bldg.)

Learning theory and sparsity ～ Sparsity and low rank matrix learning ～

**Arnak Dalalyan** (ENSAE ParisTech)

[ Abstract ]

In this third lecture, we will present extensions of the previously introduced sparse recovery techniques to problems of machine learning and statistics in which a large matrix should be learned from data. The analogue of sparsity in this context is the low-rankness of the matrix. We will show that such matrices can be effectively learned by minimizing the empirical risk penalized by the nuclear norm. The resulting problem is a semidefinite program and can be solved efficiently even when the dimension is large. Theoretical guarantees for this method will be established in the case of matrix completion with known sampling distribution.
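
As an illustration of nuclear-norm penalized learning of a low-rank matrix, here is a minimal Python sketch of matrix completion via the classical Soft-Impute iteration (singular value soft-thresholding, the proximal operator of the nuclear norm). The algorithm choice, data and tuning parameter are my own assumptions, not necessarily those of the lecture.

```python
import numpy as np

# Rank-2 target matrix, 70% of entries observed uniformly at random.
rng = np.random.default_rng(2)
n, r = 30, 2
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n))
mask = rng.random((n, n)) < 0.7

def soft_impute(M, mask, lam=1.0, iters=200):
    X = np.zeros_like(M)
    for _ in range(iters):
        # Fill unobserved entries with the current estimate, then shrink
        # all singular values by lam (prox step for the nuclear norm).
        Y = np.where(mask, M, X)
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = (U * np.maximum(s - lam, 0.0)) @ Vt
    return X

X_hat = soft_impute(M, mask)
rel_err = np.linalg.norm(X_hat - M) / np.linalg.norm(M)
```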


### 2015/12/02

14:55-18:00 Room #056 (Graduate School of Math. Sci. Bldg.)

Learning theory and sparsity ～ Lasso, Dantzig selector and their statistical properties ～

**Arnak Dalalyan** (ENSAE ParisTech)

[ Abstract ]

In this second lecture, we will focus on the problem of high-dimensional linear regression under the sparsity assumption and discuss the three main statistical problems: denoising, prediction and model selection. We will prove that convex-programming-based predictors such as the lasso and the Dantzig selector are provably consistent as soon as the dictionary elements are normalized and an appropriate upper bound on the noise level is available. We will also show that under additional assumptions on the dictionary elements, the aforementioned methods are rate-optimal and model-selection consistent.
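
A minimal Python illustration of the lasso on a sparse high-dimensional design with normalized dictionary elements, solved by ISTA (proximal gradient with soft-thresholding). The data-generating process and the tuning parameter are illustrative assumptions, not the lecture's.

```python
import numpy as np

# Sparse regression: n = 100 samples, p = 200 features, s = 5 nonzeros.
rng = np.random.default_rng(3)
n, p, s = 100, 200, 5
X = rng.normal(size=(n, p)) / np.sqrt(n)   # columns roughly unit norm
beta = np.zeros(p)
beta[:s] = 3.0
y = X @ beta + 0.1 * rng.normal(size=n)

def ista(X, y, lam, iters=500):
    """Minimize 0.5*||y - Xb||^2 + lam*||b||_1 by proximal gradient."""
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        z = b - X.T @ (X @ b - y) / L      # gradient step
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return b

beta_hat = ista(X, y, lam=0.3)
support = np.flatnonzero(np.abs(beta_hat) > 1e-6)
```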


### 2015/11/25

14:55-18:00 Room #056 (Graduate School of Math. Sci. Bldg.)

Learning theory and sparsity ～ Introduction to sparse recovery and compressed sensing ～

**Arnak Dalalyan** (ENSAE ParisTech)

[ Abstract ]

In this introductory lecture, we will present the general framework of high-dimensional statistical modeling and its applications in machine learning and signal processing. Basic methods of sparse recovery, such as the hard and the soft thresholding, will be introduced in the context of orthonormal dictionaries and their statistical accuracy will be discussed in detail. We will also show the relation of these methods with compressed sensing and convex programming based procedures.
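
The two basic rules named above can be written down directly. In the orthonormal (sequence-space) model y_i = theta_i + noise they act coordinatewise: hard thresholding keeps large coefficients unchanged, soft thresholding additionally shrinks them toward zero.

```python
import numpy as np

def hard_threshold(y, t):
    """Keep coefficients with |y_i| > t, zero out the rest."""
    return np.where(np.abs(y) > t, y, 0.0)

def soft_threshold(y, t):
    """Shrink every coefficient toward zero by t, clipping at zero."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)
```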


### 2015/11/18

17:00-18:10 Room #056 (Graduate School of Math. Sci. Bldg.)

Order flow intensities for limit order book modelling

**Ioane Muni Toke** (University of New Caledonia)

[ Abstract ]

Limit order books are at the core of electronic financial markets. Mathematical models of limit order books use point processes to model the arrival of limit, market and cancellation orders in the order book, but it is not clear what a "good" parametric model for the intensities of these point processes should be.

In the first part of the talk, we show that, despite their simplicity, basic Poisson processes can be used to accurately model a few features of the order book that more advanced models reproduce with volume-dependent intensities.

In the second part of the talk we present ongoing investigations into a more advanced statistical modelling of these order flow intensities, using in particular normal mixture distributions and exponential models.
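
The baseline model of the first part can be sketched in a few lines of Python: independent homogeneous Poisson streams for limit, market and cancellation orders, simulated via exponential inter-arrival times. The intensities below are illustrative placeholders, not estimates from data.

```python
import numpy as np

rng = np.random.default_rng(4)
intensities = {"limit": 5.0, "market": 1.0, "cancel": 3.0}  # orders per second
horizon = 1000.0  # seconds of simulated order flow

# A homogeneous Poisson process has i.i.d. exponential inter-arrival
# times with mean 1/lambda; cumulative sums give the arrival times.
arrival_times = {
    kind: np.cumsum(rng.exponential(1.0 / lam, size=int(2 * lam * horizon)))
    for kind, lam in intensities.items()
}
counts = {k: int(np.searchsorted(t, horizon)) for k, t in arrival_times.items()}
```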


### 2015/10/19

13:00-16:40 Room #052 (Graduate School of Math. Sci. Bldg.)

### 2015/09/17

15:00-16:10 Room #052 (Graduate School of Math. Sci. Bldg.)

The use of S4 classes and methods in the Yuima R package

**Stefano Iacus** (University of Milan)

[ Abstract ]

In this talk we present the basic concepts of the S4 classes and methods approach to object-oriented programming in R. As a working example, we introduce the structure of the Yuima package for simulation and inference of stochastic differential equations. We will describe the basic classes and objects, as well as some recent extensions which allow for handling CARMA and COGARCH processes in Yuima.
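
Yuima itself is written in R around S4 classes (a model object bundling the equation with simulation and inference methods). As a language-neutral analogue of that design idea, and explicitly not the actual Yuima API, the same pattern looks like this in Python:

```python
import numpy as np
from dataclasses import dataclass

# Hypothetical analogue of a model class: the object holds the SDE
# coefficients, and simulation is a method dispatched on the object.
@dataclass
class SdeModel:
    drift: callable       # b(x) in dX = b(X) dt + s(X) dW
    diffusion: callable   # s(x)

    def simulate(self, x0, T=1.0, n=1000, seed=0):
        """Euler-Maruyama path of the model on [0, T] with n steps."""
        rng = np.random.default_rng(seed)
        dt = T / n
        x = np.empty(n + 1)
        x[0] = x0
        for i in range(n):
            dW = rng.normal(0.0, np.sqrt(dt))
            x[i + 1] = x[i] + self.drift(x[i]) * dt + self.diffusion(x[i]) * dW
        return x

# Usage: an Ornstein-Uhlenbeck model, defined once, simulated on demand.
ou = SdeModel(drift=lambda x: -2.0 * x, diffusion=lambda x: 0.5)
path = ou.simulate(x0=1.0, T=5.0, n=5000)
```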


### 2015/08/07

14:40-15:50 Room #052 (Graduate School of Math. Sci. Bldg.)

Effectiveness of time-varying minimum value at risk and expected shortfall hedging

**UBUKATA, Masato** (Kushiro Public University of Economics)

[ Abstract ]

This paper assesses the incremental value of time-varying minimum value-at-risk (VaR) and expected shortfall (ES) hedging strategies over an unconditional hedging strategy. The conditional futures hedge ratios are calculated through estimation of multivariate volatility models under a skewed and leptokurtic distribution, and Monte Carlo simulation of the conditional skewness and kurtosis of hedged portfolio returns. We examine DCC-GJR models, with or without an encompassed realized covariance measure (RCM) from high-frequency data, under a multivariate skewed Student's t-distribution. In the out-of-sample analysis with a daily rebalancing approach, the empirical results show that the conditional minimum VaR and ES hedging strategies outperform the unconditional hedging strategy. We find that the use of the RCM improves the futures hedging performance for a short hedge, although the degree of improvement is small relative to that obtained when switching from unconditional to conditional hedging.
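
For reference, the two risk measures being minimized can be computed empirically in a few lines of Python (the return sample below is synthetic and heavy-tailed; loss is the negative return, and the 5% level is illustrative):

```python
import numpy as np

def var_es(returns, alpha=0.05):
    """Empirical value at risk and expected shortfall at level alpha."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)          # VaR: upper loss quantile
    es = losses[losses >= var].mean()             # ES: mean loss beyond VaR
    return var, es

rng = np.random.default_rng(5)
sample = rng.standard_t(df=5, size=100_000) * 0.01   # fat-tailed daily returns
var5, es5 = var_es(sample, alpha=0.05)
```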


### 2015/08/07

13:20-14:30 Room #052 (Graduate School of Math. Sci. Bldg.)

Estimation of integrated quadratic covariation between two assets with endogenous sampling times

**Yoann Potiron** (University of Chicago)

[ Abstract ]

When estimating the integrated covariation between two assets based on high-frequency data, simple assumptions are usually imposed on the relationship between the price processes and the observation times. In this paper, we introduce an endogenous two-dimensional model and show that it is more general than the existing endogenous models in the literature. In addition, we establish a central limit theorem for the Hayashi-Yoshida estimator in this general endogenous model in the case where prices follow pure-diffusion processes.
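
The Hayashi-Yoshida estimator named above has a simple form: sum the products of increments over every pair of observation intervals that overlap, with no synchronization of the two sampling grids. A direct (quadratic-time) Python sketch:

```python
def hayashi_yoshida(t1, x1, t2, x2):
    """Covariation estimate from two asynchronously sampled paths.

    t1, t2: increasing observation times; x1, x2: prices at those times.
    """
    hy = 0.0
    for i in range(len(t1) - 1):
        dx = x1[i + 1] - x1[i]
        for j in range(len(t2) - 1):
            # Add the product only if (t1[i], t1[i+1]] and (t2[j], t2[j+1]]
            # overlap; on synchronous grids this is realized covariance.
            if t1[i] < t2[j + 1] and t2[j] < t1[i + 1]:
                hy += dx * (x2[j + 1] - x2[j])
    return hy
```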

### 2015/06/05

16:20-17:30 Room #056 (Graduate School of Math. Sci. Bldg.)

### 2015/04/10

14:50-16:00 Room #128 (Graduate School of Math. Sci. Bldg.)

Principal Component Analysis of High Frequency Data (joint with Dacheng Xiu)

**Yacine Ait-Sahalia**(Princeton University)Principal Component Analysis of High Frequency Data (joint with Dacheng Xiu)

[ Abstract ]

We develop a methodology for conducting principal component analysis of high-frequency financial data. The procedure involves estimation of realized eigenvalues, realized eigenvectors, and realized principal components, and we provide the asymptotic distribution of these estimators. Empirically, we study the constituents of the Dow Jones Industrial Average Index in a high-frequency version, with jumps, of the Fama-French analysis. Our findings show that, excluding jump variation, three Brownian factors explain between 50 and 60% of the continuous variation of the stock returns. Their explanatory power varies over time. During crises, the first principal component becomes increasingly dominant, explaining up to 70% of the variation on its own, a clear sign of systemic risk.
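
As a toy illustration of "realized" PCA, the sketch below builds a realized covariance matrix from simulated high-frequency returns driven by a few Brownian factors and reads off the fraction of variation carried by the leading components. The factor model and all parameters are hypothetical stand-ins, and the jump-truncation step of the talk is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical intraday returns for d stocks: k common Brownian
# factors plus idiosyncratic noise (illustrative parameters).
n, d, k = 23_400, 30, 3          # e.g. one day of 1-second returns
loadings = rng.normal(size=(d, k))
factors = rng.normal(scale=np.sqrt(1 / n), size=(n, k))
idio = rng.normal(scale=0.2 * np.sqrt(1 / n), size=(n, d))
returns = factors @ loadings.T + idio

# Realized covariance matrix and its spectral decomposition give
# realized eigenvalues / eigenvectors; the leading eigenvectors are
# the realized principal components' weights.
rcov = returns.T @ returns
eigvals, eigvecs = np.linalg.eigh(rcov)
eigvals = eigvals[::-1]          # eigh returns ascending order

explained = eigvals[:k].sum() / eigvals.sum()
print(explained)  # share of variation in the top k components
```

With three true factors and small idiosyncratic noise, the top three realized eigenvalues capture almost all of the variation; with real data the share drops to the 50-60% range the abstract reports.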

### 2015/02/19

16:30-17:40 Room #052 (Graduate School of Math. Sci. Bldg.)

TBA

**Dobrislav Dobrev**(Board of Governors of the Federal Reserve System, Division of International Finance)TBA

[ Abstract ]

TBA

### 2015/02/10

16:30-17:40 Room #052 (Graduate School of Math. Sci. Bldg.)

Zero-intelligence modelling of limit order books

**Ioane Muni Toke**(Ecole Centrale Paris and University of New Caledonia)Zero-intelligence modelling of limit order books

[ Abstract ]

Limit order books (LOBs) are at the core of electronic financial markets. A LOB centralizes all orders of all market participants on a given exchange, matching buy and sell orders of all types.

In the first part, we observe that a LOB is a queueing system and that this analogy is fruitful for deriving stationary properties of these structures. Using a basic Poisson model, we compute analytical formulas for the average shape of the LOB. Our model allows for non-unit sizes of limit orders, leading to new predictions on the granularity of financial markets that turn out to be empirically valid.

In the second part, we study the LOB during the call auction, a market design often used during the opening and closing phases of the trading day. We show that in a basic Poisson model of the call auction, the distributions of the traded volume and of the range of clearing prices are analytically computable. In the case of a liquid market, we derive weak limits of these distributions and test them empirically.
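
A hedged sketch of the zero-intelligence idea: one side of the book, with unit-size limit orders arriving at rate `lam` at each of `levels` price levels, each resting order cancelled at rate `theta`, and market orders at rate `mu` consuming one unit at the best non-empty level. All parameters are illustrative, not those of the talk, and the talk's non-unit order sizes are not modelled.

```python
import random

def simulate_lob_shape(lam=1.0, theta=0.2, mu=0.8, levels=10,
                       n_events=200_000, seed=7):
    """Event-sampled average depth profile of one side of the book."""
    rng = random.Random(seed)
    depth = [0] * levels
    shape = [0.0] * levels
    for _ in range(n_events):
        total_cancel = theta * sum(depth)
        rate = lam * levels + total_cancel + mu
        u = rng.random() * rate
        if u < lam * levels:
            depth[int(u / lam)] += 1           # limit order, uniform level
        elif u < lam * levels + total_cancel:
            u -= lam * levels                  # cancellation, prob. ∝ depth
            for i in range(levels):
                u -= theta * depth[i]
                if u < 0:
                    depth[i] -= 1
                    break
        else:                                  # market order at best level
            for i in range(levels):
                if depth[i] > 0:
                    depth[i] -= 1
                    break
        for i in range(levels):
            shape[i] += depth[i]
    return [s / n_events for s in shape]

shape = simulate_lob_shape()
print([round(s, 2) for s in shape])
```

Away from the best quote, each level behaves like an M/M/∞ queue with mean depth `lam / theta`; market orders deplete only the best levels, producing the hump-shaped average profile that the analytical formulas of the talk describe exactly.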

### 2015/01/16

14:00-15:30 Room #052 (Graduate School of Math. Sci. Bldg.)

A stable particle filter in high-dimensions

**Ajay Jasra**(National University of Singapore)A stable particle filter in high-dimensions

[ Abstract ]

We consider the numerical approximation of the filtering problem in high dimensions, that is, when the hidden state lies in $\mathbb{R}^d$ with $d$ large. For low-dimensional problems, one of the most popular numerical procedures for consistent inference is the class of approximations termed particle filters or sequential Monte Carlo methods. However, in high dimensions, standard particle filters (e.g. the bootstrap particle filter) can have a cost that is exponential in $d$ for the algorithm to be stable in an appropriate sense. We develop a new particle filter, called the space-time particle filter, for a specific family of state-space models in discrete time. This new class of particle filters provides consistent Monte Carlo estimates for any fixed $d$, as do standard particle filters. Moreover, under a simple i.i.d. model structure, we show that in order to achieve certain stability properties this new filter has cost $\mathcal{O}(nNd^2)$, where $n$ is the time parameter and $N$ is the number of Monte Carlo samples, both fixed and independent of $d$. Similar results hold under a more general structure than the i.i.d. one. Here we show that, under additional assumptions and with the same cost, the asymptotic variance of the relative estimate of the normalizing constant grows at most linearly in time and independently of the dimension. Our theoretical results are supported by numerical simulations. The results suggest that the space-time particle filter can tackle some high-dimensional filtering problems that standard particle filters cannot.

This is joint work with: Alex Beskos (UCL), Dan Crisan (Imperial), Kengo Kamatani (Osaka) and Yan Zhou (NUS).
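
To see why the standard approach degrades in high dimensions, the sketch below runs a plain bootstrap particle filter (a hypothetical toy, not the space-time filter of the talk) on an i.i.d.-across-coordinates linear-Gaussian model and tracks the effective sample size (ESS) of the weights, which collapses as $d$ grows.

```python
import numpy as np

def bootstrap_filter_ess(d, n_steps=10, n_particles=500, seed=3):
    """Minimum ESS of a bootstrap particle filter over n_steps for the
    toy model x_t = 0.9 x_{t-1} + N(0, I_d), y_t = x_t + N(0, I_d)."""
    rng = np.random.default_rng(seed)
    x_true = np.zeros(d)
    particles = rng.normal(size=(n_particles, d))
    min_ess = float(n_particles)
    for _ in range(n_steps):
        x_true = 0.9 * x_true + rng.normal(size=d)
        y = x_true + rng.normal(size=d)
        # propagate through the transition density (bootstrap proposal)
        particles = 0.9 * particles + rng.normal(size=(n_particles, d))
        # Gaussian observation log-weights, normalized stably
        logw = -0.5 * ((y - particles) ** 2).sum(axis=1)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / (w ** 2).sum()   # effective sample size
        min_ess = min(min_ess, ess)
        # multinomial resampling
        idx = rng.choice(n_particles, n_particles, p=w)
        particles = particles[idx]
    return min_ess

print(bootstrap_filter_ess(d=2), bootstrap_filter_ess(d=100))
```

For $d=2$ a healthy fraction of the 500 particles carries weight, while for $d=100$ the ESS drops to a handful of particles: the weight degeneracy whose cure, without exponential cost in $d$, is the subject of the talk.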

[ Reference URL ]

http://www.sigmath.es.osaka-u.ac.jp/~kamatani/statseminar/2014/06.html