Anming Gu
I'm a recent graduate of Boston University. I am currently working with Prof. Edward Chien and Kristjan Greenewald on applied optimal transport in statistics and machine learning, and with Atsushi Nitanda on mean-field optimization.
My graduate coursework includes:
- Mathematics: Functional Analysis, Stochastic Calculus, Mathematics of Deep Learning, PDEs, Stochastic PDEs
- Computer Science: Complexity Theory, Mathematical Methods for Theoretical Computer Science
Teaching experience:
- Algebraic Algorithms, F24
- Analysis of Algorithms, S22, F24
- Theory of Computation, S24
- Concepts of Programming Languages, F23
CV / Google Scholar / Github
Research
I'm interested in optimal transport, optimization (stochastic, mean-field), sampling, and machine learning (theory).
General research directions that seem interesting to me:
- Applications of sampling for diffusion models
- Connections between diffusion models and spin glasses
- Optimal transport for discrete objects (e.g. graphs, spin glasses), learning theory (e.g. generalization via PAC-Bayes), and differential privacy
Partially Observed Trajectory Inference using Optimal Transport and a Dynamics Prior
Anming Gu, Edward Chien, Kristjan Greenewald
Accepted to the OPT workshop at NeurIPS 2024.
arXiv / code to come / thesis slides
Trajectory inference is the problem of recovering a stochastic process from samples of its temporal marginals. We consider the setting where we cannot observe the process directly but have access to a known velocity field. Using tools from optimal transport, stochastic calculus, and optimization theory, we show that a minimum-entropy estimator recovers the latent trajectory of the process. We provide theoretical guarantees that our estimator converges to the ground truth as the observation times become dense in the time domain. We also provide empirical results demonstrating the robustness of our method.
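As a rough guide to the kind of objective involved, the display below is a schematic min-entropy functional in the style of the trajectory inference literature; the notation (data-fitting term Fit, weight λ, reference SDE built from the known velocity field v) is illustrative, and the paper's precise formulation may differ.

```latex
% Schematic objective (notation illustrative, not the paper's exact formulation):
% R ranges over path measures, \hat\mu_{t_i} are the observed temporal marginals,
% and W_v^\sigma is the law of the reference SDE  dX_t = v(X_t)\,dt + \sigma\,dW_t
% built from the known velocity field v.
\min_{R}\; \sum_{i=1}^{N} \mathrm{Fit}\!\left(R_{t_i},\, \hat\mu_{t_i}\right)
\;+\; \lambda\, H\!\left(R \,\middle\|\, W_v^\sigma\right)
```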
k-Mixup Regularization for Deep Learning via Optimal Transport
Kristjan Greenewald, Anming Gu, Mikhail Yurochkin, Justin Solomon, Edward Chien
Transactions on Machine Learning Research, 2023.
arXiv / code
Mixup is a regularization technique for training neural networks that perturbs each training input in the direction of another randomly chosen training point. We propose a variant of mixup that uses optimal transport to match batches of k training points, so that each point is perturbed toward a matched point that is similar to it rather than toward a random one. We show theoretically and experimentally that our method improves generalization more effectively than standard mixup.
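The sketch below illustrates the batch-matching idea under simplifying assumptions (flattened feature vectors, squared-Euclidean cost, one-hot labels, exact matching via the Hungarian algorithm); the function name and arguments are illustrative, and the released code linked above is the authoritative implementation.

```python
# Minimal sketch of the k-mixup idea, not the paper's reference implementation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def k_mixup_batch(x1, y1, x2, y2, alpha=1.0):
    """Mix two minibatches of k samples along an optimal-transport matching.

    x1, x2: (k, d) feature arrays; y1, y2: (k, c) one-hot label arrays.
    Returns mixed features and labels.
    """
    # Squared-Euclidean cost between the two batches.
    cost = ((x1[:, None, :] - x2[None, :, :]) ** 2).sum(-1)
    # Exact OT between two uniform discrete measures of equal size reduces to
    # an assignment problem, solved here with the Hungarian algorithm.
    rows, cols = linear_sum_assignment(cost)
    # Interpolate matched pairs with a Beta-distributed mixing weight, as in mixup.
    lam = np.random.beta(alpha, alpha)
    x_mix = lam * x1[rows] + (1 - lam) * x2[cols]
    y_mix = lam * y1[rows] + (1 - lam) * y2[cols]
    return x_mix, y_mix
```

Because the matching minimizes the transport cost between the two batches, each point is mixed with a nearby point, which is the "more similar" perturbation described above.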