DeepMath
DeepMath 2024 Day 2
8:45:37
DeepMath 2024
8:54:30
DeepMath 2023 Day 2 Live Stream
8:47:09
DeepMath 2023 Day 1 Live Stream
8:15:05
2022 11 17 DeepMath Day 2 Ben Shaul
17:47
2022 11 17 DeepMath Day 2 Galanti
19:40
2022 11 17 DeepMath Day 2 Solla
1:00:22
2022 11 17 DeepMath Day 2 Soudry
58:14
2022 11 17 DeepMath Day 2 Tarmoun
16:07
2022 11 17 DeepMath Day 1 Bach
1:02:08
2022 11 17 DeepMath Day 1 Bombari
23:35
2022 11 17 DeepMath Day 1 Ghosh
17:45
2022 11 17 DeepMath Day 1 Kiani
16:40
DeepMath 2022 Live Stream
7:25:32
DeepMath 2022 Day 1
4:10:13
DeepMath 2021: Day 1 Session 1 (Opening Remarks, Mallat, Gauthier)
1:58:12
DeepMath 2021: Day 1 Session 2 (Singh, Bordelon, Zhu)
1:38:04
DeepMath 2021: Day 1 Session 3 (Ergen & Pilanci, Willett)
1:42:17
DeepMath 2021: Day 2 Session 1 (Uhler, Kevrekidis, Loureiro)
2:25:30
DeepMath 2021: Day 2 Session 2 (Arora, Wang)
1:24:00
DeepMath 2021: Day 2 Session 3 (Refinetti, Chaudhuri)
1:17:36
Lenka Zdeborova - Insights on gradient-based algorithms in high-dimensional non-convex learning
1:00:56
Francesca Mignacco - Dynamical Mean-Field Theory for SGD in Gaussian Mixture Classification
23:33
Stefanie Jegelka - Representation and Learning in Graph Neural Networks
57:17
Rika Antonova - Analytic Manifold Learning with Neural Networks
20:56
Yi Sun - Data Augmentation as Stochastic Optimization
16:56
Rene Vidal - Keynote: Mathematics of Deep Learning
1:04:24
Gitta Kutyniok - Spectral Graph Convolutional Neural Networks Do Generalize
56:03
Maksim Maydanskiy - Spatial Transformations in Convolutional Networks and Invariant Recognition
23:18
Stéphane d'Ascoli - Reconciling Double Descent With Older Ideas
22:01
Demba Ba - Deeply-Sparse Signal Representations
1:01:22
Eero Simoncelli - Making use of the Prior Implicit in a Denoiser
1:03:57
Melanie Weber - Learning a Robust Large-Margin Classifier in Hyperbolic Space
18:18
Amartya Mitra - LEAD: Least Action Dynamics for Min-Max Optimization
20:57
Sejun Park - Expressive Power of Narrow Networks
17:03
Abdulkadir Canatar - Statistical Mechanics of Generalization in Kernel Regression
20:57
David Wipf - No Bad VAE Local Minima when Learning Optimal Sparse Representations
20:31
Niru Maheswaranathan - Understanding the Dynamics of Learned Optimizers
20:19
Rich Baraniuk - Mad Max: Affine Spline Insights into Deep Learning
58:24
Misha Belkin - Toward a theory of optimization for deep learning
1:04:56
Boris Hanin - NTK in ReLU Nets with Finite Depth and Width
19:41
Lukas Balles - The Geometry of Sign Gradient Descent
17:13
Tan Minh Nguyen - Neural Rendering Model: The Joint Generation and Prediction Perspective
22:02
David Schwab - How Noise Affects the Hessian Spectrum in Overparameterized Neural Networks
36:28
Mikio Aoi - Concluding remarks
1:42
Ahmed El Hady - Introductory remarks
10:33
Eran Malach - Deep Learning on the border between success and failure
40:07
Haim Sompolinsky - Deep Manifolds: Geometry of Computation in Deep Networks
1:08:48
Dan Roberts - Robust Learning with Jacobian Regularization
20:06
Tomaso Poggio - Dynamics and Generalization in Deep Neural Networks
51:56
Surya Ganguli - Deep Learning Theory: From Generalization to the Brain
1:00:22
Naftali Tishby - The Information Bottleneck View of Deep Learning: Why do we need it?
1:00:09
Michael Elad - Sparse Modelling of Data and its Relation to Deep Learning
59:32
Yasaman Bahri - Towards an Understanding of Wide Neural Networks
1:11:37
Jeffrey Pennington - Deep Learning and Operator-Valued Free Probability: Dynamics in High Dimensions
1:00:49
Sanjeev Arora - Is Optimization the Right Language to Understand Deep Learning?
1:03:01
Nadav Cohen: On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
1:03:57
Andrew Saxe: A theory of deep learning dynamics: Insights from the linear case
59:44
Anna Gilbert: Toward Understanding the Invertibility of Convolutional Neural Networks
51:13
Sebastian Musslick: Multitasking Capability vs Learning Efficiency in Neural Network Architectures
59:34
Joan Bruna: On the Optimization Landscape of Neural Networks
48:01
Sanjeev Arora: Why do deep nets generalize, that is, predict well on unseen data?
56:17
Adam Charles: Introductory remarks
9:10