Numerical Analysis Seminar
| Date, time & place | Tuesday 16:30 - 18:00 Room #002 (Graduate School of Math. Sci. Bldg.) |
|---|---|
| Organizer(s) | Norikazu Saito, Takahito Kashiwabara |
Seminar information archive
2025/12/16
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Laurent Mertz (City University of Hong Kong)
A Control Variate Method Driven by Diffusion Approximation (English)
[ Abstract ]
We present a control variate estimator for a quantity that can be expressed as the expectation of a functional of a random process, that is itself the solution of a differential equation driven by fast mean-reverting ergodic forces. The control variate is the expectation of the same functional for the limit diffusion process that approximates the original process when the mean-reversion time goes to zero. To get an efficient control variate estimator, we propose a coupling method to build the original process and the limit diffusion process. We show that the correlation between the two processes indeed goes to one when the mean reversion time goes to zero and we quantify the convergence rate, which makes it possible to characterize the variance reduction of the proposed control variate method. The efficiency of the method is illustrated on a few examples. This is joint work with Josselin Garnier (École Polytechnique, France). Link to the paper: https://doi.org/10.1002/cpa.21976
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
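The variance-reduction mechanism described in the abstract above can be illustrated with a generic control variate estimator. The sketch below is a toy illustration of the principle only (a perturbed variable coupled to a reference variable with known expectation), not the diffusion-approximation coupling constructed in the talk; the names and the 0.1 noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy illustration: estimate E[f(X)], where X is a small perturbation of a
# coupled reference variable Y whose expectation E[f(Y)] is known exactly.
# The control variate estimator averages f(X) - f(Y) and adds back E[f(Y)];
# the stronger the coupling, the smaller the variance of f(X) - f(Y).
n = 100_000
y = rng.normal(size=n)            # stands in for the limit diffusion
x = y + 0.1 * rng.normal(size=n)  # stands in for the original process

f = lambda z: z ** 2
ef_y = 1.0                        # E[Y^2] = 1 for a standard normal

plain_mc = f(x).mean()               # plain Monte Carlo estimate
cv_mc = (f(x) - f(y)).mean() + ef_y  # control variate estimate

print(f(x).var(), (f(x) - f(y)).var())  # per-sample variances
```

Because X and Y share the driving noise, f(X) and f(Y) are strongly correlated, so the per-sample variance of the control variate estimator is far smaller than that of plain Monte Carlo in this toy setting.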
2025/12/09
16:30-18:00 Room #122 (Graduate School of Math. Sci. Bldg.)
Dorin Bucur (Université Savoie Mont Blanc)
On polygonal nonlocal isoperimetric inequalities: Hardy-Littlewood, Riesz, Faber-Krahn (English)
[ Abstract ]
The starting point is the Faber-Krahn inequality for the first eigenvalue of the Dirichlet Laplacian. Many refinements have been obtained in recent years, mainly through recent techniques based on the analysis of vectorial free boundary problems. It turns out that the polygonal version of this inequality, very easy to state, is extremely hard to prove and has remained open since 1947, when it was conjectured by Pólya. I will connect this question to somewhat easier problems, such as polygonal versions of the Hardy-Littlewood and Riesz inequalities, and I will discuss the local minimality of regular polygons and the possibility of proving the conjecture by a mixed approach. This talk is based on joint works with Beniamin Bogosel and Ilaria Fragalà.
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2025/11/25
16:30-18:00 Room #117 (Graduate School of Math. Sci. Bldg.)
Lars Diening (Bielefeld University)
Sobolev stability of the $L^2$-projection (English)
[ Abstract ]
We prove the $W^{1,2}$-stability of the $L^2$-projection on Lagrange elements for adaptive meshes and arbitrary polynomial degree. This property is especially important for the numerical analysis of parabolic problems. We will explain that the stability of the projection is connected to the grading constants of the underlying adaptive refinement routine. For arbitrary dimensions, we show that the bisection algorithm of Maubach and Traxler produces meshes with a grading constant 2. This implies $W^{1,2}$-stability of the $L^2$-projection up to dimension six.
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
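In two space dimensions the Maubach-Traxler bisection reduces to newest-vertex bisection of triangles. The following minimal sketch is our own illustration of one bisection step under an assumed vertex-ordering convention (the third vertex is the newest one and the refinement edge is the edge opposite it); it is not code from the talk.

```python
def bisect(tri):
    """One newest-vertex bisection step. tri = (v0, v1, v2) with v2 the
    newest vertex; the refinement edge v0-v1 lies opposite the newest vertex."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    m = ((x0 + x1) / 2, (y0 + y1) / 2)  # midpoint of the refinement edge
    # The midpoint becomes the newest vertex of both children, which fixes
    # the refinement edges of the next generation.
    return [((x0, y0), (x2, y2), m), ((x2, y2), (x1, y1), m)]

def area(tri):
    (x0, y0), (x1, y1), (x2, y2) = tri
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2

parent = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
children = bisect(parent)
```

Iterating this rule produces only finitely many similarity classes of triangles, the kind of mesh-regularity property that underlies the grading-constant results mentioned in the abstract.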
2025/11/18
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Guanyu Zhou (University of Electronic Science and Technology of China)
The mixed methods for the variational inequalities (English)
[ Abstract ]
We propose new mixed formulations for variational inequalities arising from contact problems, aimed at improving the approximation of the stress tensor and displacement in numerical simulations. We establish the well-posedness of these mixed variational inequalities. Furthermore, we will present their finite element analysis.
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2025/07/29
16:30-18:00 Room #123 (Graduate School of Math. Sci. Bldg.)
Takashi Suzuki (Osaka University)
An analytic proof of the Hodge decomposition on bounded domains in Euclidean space and its applications (Japanese)
2025/07/08
16:30-18:00 Room #126 (Graduate School of Math. Sci. Bldg.)
Masaki Imagawa (Kyoto University)
Convergence analysis of perturbed advection equations in a bounded domain (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2025/06/10
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Nobuyuki Oshima (Faculty of Engineering, Hokkaido University)
Immersed-boundary Navier-Stokes equation and its application to image data (Japanese)
2025/04/22
16:30-18:00 Room #126 (Graduate School of Math. Sci. Bldg.)
Yasutoshi Taniguchi (Graduate School of Mathematical Sciences, The University of Tokyo)
A Hyperelastic Extended Kirchhoff–Love Shell Model: Formulation and Isogeometric Discretization (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2025/04/15
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Yuji Ito (TOYOTA CENTRAL R&D LABS., INC.)
Control of uncertain and unknown systems (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/11/27
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Yumiharu Nakano (Institute of Science Tokyo)
Schrödinger problems and diffusion generative models (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/10/16
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Kengo Nakai (Okayama University)
Data-driven modeling from biased small training data (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/07/09
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Bernardo Cockburn (University of Minnesota)
The transformation of stabilizations into spaces for Galerkin methods for PDEs (English)
[ Abstract ]
We describe a novel technique that allows us to transform the terms which render Galerkin methods stable into spaces (JJIAM, 2023). We begin by applying this technique to show that the Continuous and Discontinuous Galerkin (DG) methods for ODEs produce the very same approximation of the time derivative, and use this to obtain superconvergence points of the DG method. We then apply this technique to mixed methods for second-order elliptic equations to show that they can always be recast as hybridizable DG (HDG) methods. We then show that this recasting improves the implementation by 10% to 20% for polynomial degrees ranging from 1 to 20. We end by sketching our ongoing and future work.
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/05/29
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Satoshi Hayakawa (Sony Group Corporation)
Random convex hulls and kernel quadrature (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/05/15
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Koya Sakakibara (Kanazawa University)
Regularization via Bregman divergence for the discrete optimal transport problem (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/04/24
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Yuka Hashimoto (NTT Network Service Systems Laboratories)
Generalization analysis of neural networks based on Koopman operators (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/03/13
16:30-17:30 Online
David Sommer (Weierstrass Institute for Applied Analysis and Stochastics)
Approximating Langevin Monte Carlo with ResNet-like neural network architectures (English)
[ Abstract ]
We analyse a method to sample from a given target distribution by constructing a neural network which maps samples from a simple reference distribution, e.g. the standard normal, to samples from the target distribution. For this, we propose using a neural network architecture inspired by the Langevin Monte Carlo (LMC) algorithm. Based on LMC perturbation results, approximation rates of the proposed architecture for smooth, log-concave target distributions measured in the Wasserstein-2 distance are shown. The analysis heavily relies on the notion of sub-Gaussianity of the intermediate measures of the perturbed LMC process. In particular, we derive bounds on the growth of the intermediate variance proxies under different assumptions on the perturbations. Moreover, we propose an architecture similar to deep residual neural networks (ResNets) and derive expressivity results for approximating the sample to target distribution map.
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
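The analogy between LMC iterations and residual blocks can be made concrete: each Langevin step has the form "identity plus drift increment plus noise", which mirrors a ResNet layer. Below is a minimal sketch of plain LMC for a one-dimensional Gaussian target (our own toy example; the target, step size, and iteration count are illustrative assumptions, not parameters from the talk).

```python
import numpy as np

rng = np.random.default_rng(1)

# Langevin Monte Carlo for the log-concave target pi = N(mu, 1), whose
# score is grad log pi(x) = mu - x. Each update
#   x_{k+1} = x_k + h * score(x_k) + sqrt(2h) * xi_k
# is "identity + residual", the structure a ResNet-like network can mimic.
mu, h, n_steps = 3.0, 0.1, 500
x = rng.normal(size=10_000)  # samples from the reference distribution N(0, 1)
for _ in range(n_steps):
    x = x + h * (mu - x) + np.sqrt(2 * h) * rng.normal(size=x.size)
# x now approximates samples from the target, up to a discretization bias
# (here the stationary variance of the scheme is slightly above 1).
```

Mapping reference samples through a fixed number of such steps is exactly a map from the reference distribution to (approximately) the target distribution, which is the object the ResNet-like architecture in the talk is built to approximate.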
2024/03/13
17:30-18:30 Online
Andreas Rathsfeld (Weierstrass Institute for Applied Analysis and Stochastics)
Analysis of the Scattering Matrix Algorithm (RCWA) for Diffraction by Periodic Surface Structures (English)
[ Abstract ]
The scattering matrix algorithm is a popular numerical method for the diffraction of optical waves by periodic surfaces. The computational domain is divided into horizontal slices and, by a clever recursion, an approximated operator, mapping incoming into outgoing waves, is obtained. Combining this with numerical schemes inside the slices, methods like RCWA and FMM have been designed.
The key to the analysis is the scattering problem with special radiation conditions for inhomogeneous cover materials. If the numerical scheme inside the slices is the FEM, then the scattering matrix algorithm is nothing but a clever version of a domain decomposition method.
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2024/01/09
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Takashi Matsubara (Osaka University)
Deep learning that learns from, becomes part of, or replaces numerical methods for differential equations (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2023/11/14
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Ken Furukawa (RIKEN)
On some dynamical systems and their prediction using data assimilation (Japanese)
[ Reference URL ]
This seminar will be held in hybrid format. See the reference URL for participation details.
2023/10/24
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Kazuaki Tanaka (Waseda University)
Neural Network-based Enclosure of Solutions to Differential Equations and Reconsideration of the Sub- and Super-solution Method (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2023/10/17
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Makoto Okumura (Konan University)
Structure-preserving schemes for the Cahn-Hilliard equation with dynamic boundary conditions in two spatial dimensions (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2023/06/27
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Toshihiro Yamada (Hitotsubashi University)
Solving high-dimensional partial differential equations via deep learning and probabilistic methods (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2023/06/06
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Hideyuki Azegami (Nagoya Industrial Science Research Institute)
Relation between regularity and numerical solutions of shape optimization problems (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2023/05/23
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Masaaki Imaizumi (The University of Tokyo)
Theory of Deep Learning and Over-Parameterization (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/
2023/05/16
16:30-18:00 Room #002 (Graduate School of Math. Sci. Bldg.)
Yuuki Shimizu (The University of Tokyo)
Numerical analysis of the Plateau problem by the method of fundamental solutions (Japanese)
[ Reference URL ]
https://sites.google.com/g.ecc.u-tokyo.ac.jp/utnas-bulletin-board/



