Talk Title:

Vocabulary for Universal Approximation: A Linguistic Perspective of Mapping Compositions

Speaker:

Yongqiang Cai (Beijing Normal University)

Time:

Venue:

Lecture Hall (Room 404), 4th Floor, Northeast Building, School of Science

Abstract:

In recent years, deep learning-based sequence models, such as language models, have attracted considerable attention and achieved great success, motivating researchers to explore the possibility of transforming non-sequential problems into sequential form. Following this line of thought, deep neural networks can be represented as composite functions of a sequence of mappings, linear or nonlinear, where each composition can be viewed as a \emph{word}. However, the weights of the linear mappings are undetermined, and hence an infinite number of words would be required. In this talk, we investigate the finite case and constructively prove the existence of a finite \emph{vocabulary} $V=\{\phi_i: \mathbb{R}^d \to \mathbb{R}^d \mid i=1,\dots,n\}$ with $n=O(d^2)$ for universal approximation. That is, for any continuous mapping $f: \mathbb{R}^d \to \mathbb{R}^d$, compact domain $\Omega$, and $\varepsilon>0$, there is a sequence of mappings $\phi_{i_1}, \dots, \phi_{i_m} \in V$, $m \in \mathbb{Z}_+$, such that the composition $\phi_{i_m} \circ \cdots \circ \phi_{i_1}$ approximates $f$ on $\Omega$ with an error less than $\varepsilon$. Our results provide a linguistic perspective on composite mappings and suggest a cross-disciplinary study between linguistics and approximation theory.
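To make the "words and compositions" picture concrete, here is a minimal toy sketch in Python. It is not the construction from the talk: the vocabulary below (the names shift+, scale-, lrelu, the step size delta, and the greedy word-selection heuristic) is a hand-picked illustrative assumption, used only to show how a sequence of words from a fixed finite vocabulary composes into an approximation of a target $f$ on a compact domain.

```python
import numpy as np

# Toy illustration (not the talk's construction): a small, hand-picked
# vocabulary of maps R -> R, composed word-by-word to approximate a
# target function on a compact domain.
delta = 0.05
vocab = {
    "shift+": lambda x: x + delta,               # small translation
    "shift-": lambda x: x - delta,
    "scale+": lambda x: 1.05 * x,                # mild expansion
    "scale-": lambda x: x / 1.05,                # mild contraction
    "lrelu":  lambda x: np.maximum(x, 0.2 * x),  # leaky-ReLU nonlinearity
}

def compose(words, x):
    """Apply phi_{i_1}, ..., phi_{i_m} from left to right."""
    for w in words:
        x = vocab[w](x)
    return x

# Target mapping f and a grid discretizing the compact domain Omega = [-1, 1].
f = np.tanh
xs = np.linspace(-1.0, 1.0, 201)

def sup_error(words):
    return np.max(np.abs(compose(words, xs) - f(xs)))

# Greedy heuristic: repeatedly append whichever word most reduces the
# sup-norm error, stopping when no single word helps.
words = []
while len(words) < 200:
    best = min(vocab, key=lambda w: sup_error(words + [w]))
    if sup_error(words + [best]) >= sup_error(words):
        break
    words.append(best)

print(f"m = {len(words)} words, sup error = {sup_error(words):.4f}")
```

This greedy search may stall at a modest error, since the vocabulary here is arbitrary; the point of the result stated above is that a suitably constructed vocabulary of size $O(d^2)$ suffices to drive the error below any prescribed $\varepsilon$.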

About the Speaker:

Dr. Yongqiang Cai is with the School of Mathematical Sciences, Beijing Normal University. He received his Ph.D. in Computational Mathematics from Peking University in 2017 and worked as a postdoctoral researcher in the Faculty of Science at the National University of Singapore from 2017 to 2020. He joined the School of Mathematical Sciences at Beijing Normal University in September 2020. His research area is computational mathematics, and his interests include the numerical simulation of polymer self-assembly and liquid-crystalline bilayer membranes, as well as the analysis and application of algorithms in deep learning.