Research at the Interface of Applied Mathematics and Machine Learning
CBMS Conference
Department of Mathematics, University of Houston
The Department of Mathematics at the University of Houston will host the CBMS Conference "Research at the Interface of Applied Mathematics and Machine Learning" from 12/08/2025 to 12/12/2025.
Schedule Overview
Time | Monday | Tuesday | Wednesday | Thursday | Friday |
---|---|---|---|---|---|
0830 - 0900 | Welcome | Coffee | Coffee | Coffee | Coffee |
0900 - 1000 | Lecture 1 | Lecture 3 | Lecture 5 | Lecture 7 | Lecture 9 |
1000 - 1030 | Break | Break | Break | Break | Break |
1030 - 1130 | Lecture 2 | Lecture 4 | Lecture 6 | Lecture 8 | Lecture 10 |
1130 - 1300 | Lunch | Lunch (**) | Lunch | Lunch | Lunch |
1300 - 1345 | Talk | Talk | Talk | Talk | |
1345 - 1430 | Talk | Talk | Talk | Talk | |
1430 - 1500 | Break | Break | Break | Break | |
1500 - 1545 | Talk | Panel | Discussion | Talk | |
1545 - 1630 | Talk | Panel | Discussion | Talk | |
1630 - 1800 | | | Poster | | |
1800 - 1900 | Dinner | | Poster | Social Event | |
(**) Group Picture
Talks
We have scheduled 12 talks by leading authorities in (Sci)ML from Monday 12/08 through Thursday 12/11.
Monday, 12/08/2025
Tuesday, 12/09/2025
Wednesday, 12/10/2025
Thursday, 12/11/2025
Lectures
The workshop features ten lectures organized into three modules. An outline can be found below.
Module 1: Machine Learning Crash Course (3 Lectures)
This module aims to define ML techniques and commonly used terms mathematically to make them accessible to applied and computational mathematicians.
Lecture 1: Overview
This lecture sets the main notation for the conference and discusses the forms of ML most relevant to this workshop. For simplicity, we define learning as designing a function or operator $f_\theta$ and learning its weights $\theta$ to accomplish a given task. In this workshop, we focus on the case where $f_\theta$ is a neural network. This lecture overviews terminology and the different learning tasks that arise in applied mathematics. The remaining lectures in this module discuss the two crucial aspects of learning, designing the architecture and formulating the learning problem, in more detail. To give participants a broad context, we provide motivating examples from unsupervised, semi-supervised, and supervised learning, generative modeling, reinforcement learning, and operator learning. Where possible, we link the examples to corresponding problems in applied mathematics: supervised learning corresponds to data fitting, generative modeling has links to optimal transport, reinforcement learning is tied to optimal control, and operator learning arises in solving PDEs and inverse problems.
Lecture 2: Neural Networks
This lecture is devoted to different ways of designing the neural network architecture that defines $f_\theta$, a crucial step in solving practical problems. We review classical multilayer perceptron architectures, where $f_\theta$ is defined by composing affine transformations with pointwise nonlinear activation functions. While these architectures are universal function approximators and can be effective in many applications, they have difficulty approximating even simple functions, such as the identity map, and their training becomes challenging with increasing depth. Adding skip connections, as done in residual networks, overcomes this disadvantage and allows networks with hundreds or thousands of layers to be trained. These architectures also provide links to optimal control and will be revisited in the next module. We also present graph neural networks, which are crucial for handling unstructured data, and give a mathematical description of transformers, including their attention mechanism.
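The contrast can be made concrete in a minimal NumPy sketch (the width, weights, and data below are invented for illustration; a practical implementation would use a deep learning framework): a plain multilayer perceptron layer applies a pointwise nonlinearity to an affine map, while a residual block adds a skip connection, making the identity map trivial to represent.

```python
import numpy as np

def mlp_layer(x, W, b):
    """One multilayer perceptron layer: pointwise tanh of an affine map."""
    return np.tanh(W @ x + b)

def residual_block(x, W, b):
    """Residual block: the skip connection means the block only needs to
    learn a perturbation of the identity, which eases training at depth."""
    return x + np.tanh(W @ x + b)

rng = np.random.default_rng(0)
d = 4                                  # feature dimension (hypothetical)
x = rng.standard_normal(d)
W, b = np.zeros((d, d)), np.zeros(d)   # zero weights

print(np.allclose(mlp_layer(x, W, b), 0.0))     # True: the plain layer collapses
print(np.allclose(residual_block(x, W, b), x))  # True: the block is the identity
```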
Lecture 3: The Learning Problem
This lecture introduces the loss functions used to train neural network architectures for a given task. For supervised learning, we discuss regression and cross-entropy losses. For generative modeling, we discuss maximum likelihood training and variational inference with the evidence lower bound (ELBO). As an example of unsupervised learning, we discuss PDE losses in physics-informed neural networks (PINNs). We illustrate the difference between minimizing the loss function and learning, which requires generalization to unseen data, using examples from polynomial data fitting that most participants will recall. We then provide further insights by discussing the integration error of the Monte Carlo approximation of the loss.
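In the spirit of the polynomial data-fitting example mentioned above, the following sketch (with invented data and degrees) shows that driving the training loss to zero does not imply learning: a degree-9 polynomial interpolates ten noisy samples exactly, yet typically generalizes worse than a lower-degree fit.

```python
import numpy as np

rng = np.random.default_rng(1)
x_train = np.linspace(-1.0, 1.0, 10)
y_train = np.sin(np.pi * x_train) + 0.1 * rng.standard_normal(10)  # noisy samples
x_test = np.linspace(-1.0, 1.0, 200)
y_test = np.sin(np.pi * x_test)                                    # unseen data

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    # The degree-9 fit interpolates (train MSE near zero) but tends to have
    # the larger test MSE: minimizing the loss is not the same as learning.
    print(f"degree {degree}: train MSE {train_mse:.2e}, test MSE {test_mse:.2e}")
```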
Module 2: Applied Mathematics for Machine Learning (3 Lectures)
This module discusses three themes of applied mathematics research in ML. We spend one lecture per theme and aim to introduce the core principles underlying recent advances that we expect several invited presenters to discuss.
Lecture 4: Stochastic Optimization
Having defined the ML model and loss functions, this lecture discusses optimization algorithms for identifying the weights. Since the loss functions usually involve high-dimensional integrals, we approximate them using Monte Carlo integration. This naturally leads to stochastic optimization algorithms such as stochastic gradient descent and its variants. We discuss their convergence properties, theoretical and empirical results showing convergence to global minimizers of highly nonconvex functions, and the ability of these methods to regularize the problem.
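A minimal sketch of the idea, on an invented least-squares problem: each step replaces the full gradient by an unbiased Monte Carlo estimate computed from a small random batch.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 1000, 5
A = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = A @ w_true + 0.01 * rng.standard_normal(n)   # synthetic regression data

w = np.zeros(d)
lr, batch = 0.1, 32
for step in range(500):
    idx = rng.integers(0, n, size=batch)         # random mini-batch
    residual = A[idx] @ w - y[idx]
    grad = A[idx].T @ residual / batch           # unbiased estimate of the gradient
    w -= lr * grad                               # stochastic gradient descent step

print(np.linalg.norm(w - w_true))                # small: the weights are recovered
```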
Lecture 5: Regularization
This lecture investigates the relation between generalization and regularization in more depth. Building upon advances in applied mathematics, we discuss iterative regularization, direct regularization, and hybrid approaches in the context of ill-posed inverse problems. From this perspective, we offer new insights into the double-descent phenomenon arising in modern ML. We illustrate this using random feature models and demonstrate that adequate regularization can help those models generalize to unseen data. We also review recent results on the benefits and challenges of incorporating ideas from regularization theory into stochastic optimization schemes.
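The random feature experiment can be sketched as follows (the target function, feature frequencies, and regularization strengths are invented; results vary with the seed). At the interpolation threshold, where the number of features matches the number of training points, the unregularized least-squares solution typically generalizes poorly, while a modest ridge penalty restores generalization.

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_test, d = 50, 500, 1
f = lambda x: np.cos(2.0 * np.pi * x)                # target to learn
x_tr = rng.uniform(-1, 1, (n_train, d)); y_tr = f(x_tr).ravel()
x_te = rng.uniform(-1, 1, (n_test, d)); y_te = f(x_te).ravel()

m = 50                                               # width = n_train (threshold)
W = 3.0 * rng.standard_normal((d, m))                # fixed random frequencies
b = rng.uniform(0.0, 2.0 * np.pi, m)
phi = lambda x: np.cos(x @ W + b)                    # random (Fourier-type) features
Phi_tr, Phi_te = phi(x_tr), phi(x_te)

for lam in (1e-12, 1e-3):                            # (almost) none vs moderate ridge
    theta = np.linalg.solve(Phi_tr.T @ Phi_tr + lam * np.eye(m), Phi_tr.T @ y_tr)
    print(f"lambda = {lam:.0e}: test MSE {np.mean((Phi_te @ theta - y_te) ** 2):.3f}")
```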
Lecture 6: Continuous-in-Time Architectures
This lecture surveys neural network architectures whose depth corresponds to artificial time and whose forward propagation is given by differential equations. As a starting point, we view residual networks, in the infinite-depth limit, as forward Euler discretizations of a nonstationary, nonlinear initial value problem. We discuss the theoretical implications for supervised learning and the opportunities in generative modeling. We present extensions to PDEs when the features correspond to image, speech, or video data, as well as the benefits of generalizing the framework to stochastic dynamics. The latter allows us to discuss image-generation algorithms like DALL-E 2 and other techniques based on score-based diffusion.
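The residual network/forward Euler correspondence is easy to spell out in code; the sketch below uses invented dimensions and weights. Each layer advances the features by one explicit Euler step of the initial value problem $\dot{z} = f(z, \theta(t))$.

```python
import numpy as np

def dynamics(z, W, b):
    """Right-hand side f(z, theta) of the ODE that the network discretizes."""
    return np.tanh(W @ z + b)

def resnet_forward(z0, weights, biases, T=1.0):
    """Forward Euler with step h = T / depth: each update
    z <- z + h * f(z, theta_k) is exactly one residual block."""
    h = T / len(weights)
    z = z0
    for W, b in zip(weights, biases):    # theta varies with k: nonstationary dynamics
        z = z + h * dynamics(z, W, b)
    return z

rng = np.random.default_rng(4)
d, depth = 3, 64                          # hypothetical width and depth
weights = [rng.standard_normal((d, d)) / d for _ in range(depth)]
biases = [np.zeros(d) for _ in range(depth)]
print(resnet_forward(rng.standard_normal(d), weights, biases))
```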
Module 3: Machine Learning for Applied Mathematics (4 Lectures)
This module discusses four avenues for applying ML techniques to problems in applied mathematics.
Lecture 7: Scientific Machine Learning
This lecture demonstrates the use cases and challenges of applying ML in scientific contexts. Unlike traditional big-data applications, specific challenges arise from scarce datasets, the complex nature of observations, and the lack of experience with similar problems, which makes choosing hyperparameters difficult. We also overview important research directions not covered in the remaining lectures, such as PINNs, neural operator learning, and reinforcement learning.
Lecture 8: High-dimensional PDEs
This lecture introduces several high-dimensional PDE problems for which ML techniques provide promising avenues. While many PDE problems are phrased in physical coordinates and are thus limited to three dimensions plus time, there are several areas in which the dimension becomes considerably higher and the curse of dimensionality limits traditional numerical techniques. Our examples include the Black-Scholes equations in finance and Hamilton-Jacobi-Bellman equations in optimal control. In those cases, the curse of dimensionality can be mitigated (but not entirely overcome) by appropriate numerical integration, adequate neural network models, and careful training.
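To indicate why such problems are tractable at all, here is a sketch (with an invented initial condition chosen so the exact solution is known) of a Monte Carlo solver for the heat equation $u_t = \Delta u$, $u(x, 0) = g(x)$, based on the Feynman-Kac representation $u(x,t) = \mathbb{E}[g(x + \sqrt{2t}\,Z)]$ with $Z \sim \mathcal{N}(0, I_d)$. Its cost is independent of any spatial grid, unlike mesh-based methods.

```python
import numpy as np

rng = np.random.default_rng(5)
d, t, samples = 100, 0.5, 200_000
x = np.ones(d)                           # evaluation point in d = 100 dimensions
g = lambda z: np.sum(z ** 2, axis=-1)    # initial condition with a known solution

Z = rng.standard_normal((samples, d))
u_mc = np.mean(g(x + np.sqrt(2.0 * t) * Z))  # Feynman-Kac Monte Carlo estimate
u_exact = g(x) + 2.0 * t * d                 # closed form for this particular g
print(u_mc, u_exact)                         # agree to a fraction of a percent
```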
Lecture 9: Inverse Problems
Owing to their ill-posedness and, in some cases, the abundance of training data, inverse problems have become a promising area for applying ML techniques. In this lecture, we first demonstrate that training a neural network in a supervised way to approximate the inverse map of an ill-posed problem leads to unstable approximations. We then consider Bayesian formulations of the inverse problem, which rely on modeling and analyzing complex, high-dimensional probability distributions. This provides links to generative modeling, and we show how generative techniques can overcome the limitations of the traditional Bayesian setting and improve computational efficiency.
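A linear toy problem already shows the instability (the blurring operator and noise level below are invented, and Tikhonov regularization stands in for the stabilization strategies the lecture covers): inverting a smoothing forward operator amplifies even tiny data noise, while a regularized inverse remains stable.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 100
s = np.linspace(0.0, 1.0, n)
A = np.exp(-((s[:, None] - s[None, :]) ** 2) / 0.01)  # Gaussian blur (smoothing)
A /= A.sum(axis=1, keepdims=True)

x_true = (np.abs(s - 0.5) < 0.2).astype(float)        # piecewise-constant signal
y = A @ x_true + 1e-3 * rng.standard_normal(n)        # slightly noisy measurements

x_naive = np.linalg.solve(A, y)                       # unregularized inversion
lam = 1e-3                                            # Tikhonov parameter
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

print(np.linalg.norm(x_naive - x_true))   # enormous: the noise is amplified
print(np.linalg.norm(x_tik - x_true))     # moderate: regularization stabilizes
```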
Lecture 10: Mathematical Reasoning and Discovery
This lecture highlights recent trends in mathematics that employ large language models (LLMs) to discover and reason about mathematics. For example, combining proof assistants with LLMs assists in formalizing mathematical theorems, proofs, and even proof sketches. Another example is the use of reinforcement learning to discover counterexamples in combinatorics. We discuss FunSearch and its ability to develop interpretable solutions to challenging discrete math problems. In the context of computational mathematics, reinforcement learning is noteworthy for having discovered more efficient algorithms for matrix-matrix products.
Panel
The conference will host a panel in which experts and scientists share their experiences and discuss current and future trends in machine learning research and applications. The panel discussion is scheduled for Tuesday 12/09, 1500 to 1630. Invited panelists include:
Detlef Hohl, Shell Technology Center Houston
Dr. Hohl spent his entire Shell career in R&D, both in the US and in Europe, working first in geophysical seismic imaging and then in probabilistic seismic inversion before becoming R&D team leader for “Quantitative Reservoir Management”. From 2010 to 2017, he was R&D General Manager for Computation and Modeling, where he led a project portfolio in data analytics, computational engineering and materials science, geoscience, and petroleum engineering.
Dr. Hohl was appointed Shell’s Chief Scientist for Computational and Data Science in 2017. In this role, he oversees and guides Shell’s entire computational and computer science portfolio, including elements of artificial intelligence, physical systems simulation at all spatial and temporal scales, chemicals and chemical engineering modeling, future energy systems optimization, and atmospheric and Earth science modeling.
Dr. Hohl has always used the largest available high-performance computers of their time to do big things that cannot be done otherwise. He is active in the academic, national laboratory, and joint industry research communities and is a member of APS, ACM, SIAM, SPE, SEG, and AGU. He is an adjunct professor at Rice University, where he teaches courses in computational and applied mathematics, and has held various temporary and visiting positions at NCSA, SISSA Trieste, NIST, and Stanford University. In his free time, science remains his biggest hobby.
Xiaoqian Jiang, School of Biomedical Informatics, UTHealth Houston
Dr. Xiaoqian Jiang is the Associate Vice President for Medical AI, Chair of the Department of Health Data Science and Artificial Intelligence, and the Christopher Sarofim Professor at The University of Texas Health Science Center at Houston (UTHealth). He also directs the Center for Secure Artificial Intelligence for Healthcare (SAFE) at McWilliams School of Biomedical Informatics. Dr. Jiang is a leading expert in privacy-preserving data mining, federated learning, and explainable machine learning, with over $35 million in grant funding from NIH and other prestigious awards such as the CPRIT Rising Stars and UT Stars. His research spans a range of critical health AI applications, from Alzheimer’s prevention to COVID-19 patient tracking, and his innovative work in human-in-the-loop AI models and computational phenotyping has earned several best paper awards from AMIA. Dr. Jiang's mission aligns with advancing healthcare AI, fostering high-quality education, and improving patient outcomes through AI-driven innovations.
Peter Kochunov, Psychiatry and Behavioral Sciences, UTHealth Houston
Dr. Kochunov is a board-certified MRI physicist with over two decades of experience in developing novel data analysis protocols, with an emphasis on quantitative, multimodal analyses of the genetic factors responsible for structural and functional variability.
Dr. Kochunov has a background in neuroimaging, electrical engineering, software development, and statistics. He has participated in the development of many popular neuroimaging tools and formats, including the SOLAR-Eclipse, ENIGMA-Viewer, ENIGMA-DTI, and ENIGMA-rsFMRI analysis pipelines, the Talairach Daemon, BrainMap, Mango, BrainVisa Morphologist, NIfTI, and others.
Dr. Kochunov’s research is described in over 300 scientific manuscripts, including some of the first manuscripts on the heritability of white matter integrity, gray matter thickness, and resting-state connectivity, and on the application of these approaches in severe mental illness research.
Javad Razjouyan, Baylor College of Medicine
Javad Razjouyan, Ph.D., is a tenure-track Assistant Professor of Medicine in Health Services Research and the Institute for Clinical & Translational Research (ICTR) at Baylor College of Medicine (BCM). He is also a health research scientist at the Implementation Science & Innovation Core, Center for Innovations in Quality, Effectiveness and Safety (IQuESt) at the Michael E. DeBakey VA Medical Center. He is a faculty member of the Big Data-Scientist Training Enhancement Program (BD-STEP) of the Department of Veterans Affairs (VA) and the National Cancer Institute (NCI), and an adjunct Assistant Professor of Epidemiology at the UTHealth Houston School of Public Health.
Dr. Razjouyan serves as the Director of the Artificial Intelligence in Health Lab (AIH-Lab) at BCM and as co-director of the BD-STEP advanced fellowship program at the Houston site.
He has published more than 40 scientific papers in peer-reviewed journals and more than 50 conference proceedings or abstracts, holds three filed patents, and has written a textbook for undergraduate students in biomedical engineering. He received a young investigator award from the Gerontological Society of America conference in 2014 for digital biomarker development, and he won a junior investigator travel award at the American Heart Association Quality of Care & Outcomes Research 2019 conference for developing an EMR-based frailty index with machine learning techniques. He has mentored three advanced fellows in the BD-STEP program on the use of EMR data and advanced machine learning algorithms and is currently mentoring three advanced postdoctoral fellows at the AIH-Lab. His postdoctoral fellows apply artificial intelligence (AI), machine learning (ML) algorithms, and natural language processing (NLP) tools in medical fields such as sleep medicine, psychology, dementia, heart failure, and frailty.
Amir Sharafkhaneh, Baylor College of Medicine
Dr. Amir Sharafkhaneh is a Professor of Medicine (tenured) at Baylor College of Medicine and a leading authority in sleep medicine. He completed his medical degree at Tehran University of Medical Sciences, followed by an internal medicine residency at Long Island College Hospital and a fellowship in Pulmonary, Critical Care, and Sleep Medicine at Baylor College of Medicine, where he also earned a PhD in medical research.
With over 25 years of clinical and academic experience, Dr. Sharafkhaneh has authored numerous peer-reviewed publications and book chapters in the fields of pulmonary and sleep medicine. He founded the first accredited sleep medicine fellowship program in Texas and has since trained more than 100 sleep specialists. His work has been supported by multiple federal grants, including initiatives to develop telemedicine programs that expand access to sleep care in underserved areas.
Dr. Sharafkhaneh currently co-chairs the VA clinical practice guideline committees for obstructive sleep apnea, insomnia, asthma, and COPD. His research team applies artificial intelligence and advanced data analytics to large-scale electronic health record data to advance the understanding and treatment of sleep and respiratory disorders. He also co-leads the AI Interest Group of the World Sleep Society.
Xiao-Hui Wu, ExxonMobil Technology & Engineering
Xiao-Hui Wu joined ExxonMobil Upstream Research Company in 1997. His research experience covers geologic modeling, unstructured gridding, upscaling, reduced-order modeling, and uncertainty quantification. He is a Senior Earth Modeling Advisor in the Computational Science Function. Xiao-Hui received his Ph.D. in Mechanical Engineering from the University of Tennessee and worked as a postdoc in Applied Mathematics at Caltech before joining ExxonMobil. He is a member of SPE and SIAM and a technical editor/reviewer for the SPE Journal, the Journal of Computational Physics, and Multiscale Modeling and Simulation. He has served on the program committees of several conferences, including the Reservoir Simulation Symposium.
Poster Session
We have scheduled a poster session on Wednesday 12/10, 1700 to 1900.