Dimitri Meunier
@dimitrimeunier.bsky.social
📤 101 · 📥 129 · 📝 15
PhD, Gatsby, UCL
Reposted by Dimitri Meunier
Arthur Gretton · about 1 month ago
At #NeurIPS? Visit our posters! 🧵
Demystifying Spectral Feature Learning for Instrumental Variable Regression: #2600, Wed 11am
Regularized least squares learning with heavy-tailed noise is minimax optimal: #3012, Wed 4:30pm ✨spotlight✨
1/2
Reposted by Dimitri Meunier
Le Monde · 4 months ago
Solenne Gaucher, the mathematician taking gender out of the equation
"La Relève". Each month, "Le Monde Campus" meets a young person shaking up the norms in their field. At 31, the mathematics PhD is tackling the algorithmic biases of artificial intelligence, and in 2024 received a prize for her work.
https://www.lemonde.fr/campus/article/2025/09/21/solenne-gaucher-la-mathematicienne-qui-sort-le-genre-de-l-equation_6642263_4401467.html
Reposted by Dimitri Meunier
Emtiyaz Khan · 5 months ago
AISTATS 2026 will be in Morocco!
Reposted by Dimitri Meunier
Motonobu Kanagawa · 7 months ago
We've written a monograph on Gaussian processes and reproducing kernel methods (with @philipphennig.bsky.social, @sejdino.bsky.social and Bharath Sriperumbudur).
arxiv.org/abs/2506.17366
Gaussian Processes and Reproducing Kernels: Connections and Equivalences
This monograph studies the relations between two approaches using positive definite kernels: probabilistic methods using Gaussian processes, and non-probabilistic methods using reproducing kernel Hilb...
https://arxiv.org/abs/2506.17366
Reposted by Dimitri Meunier
Rémi Flamary · 6 months ago
Distributional Reduction paper with H. Van Assel, @ncourty.bsky.social, T. Vayer, C. Vincent-Cuaz, and @pfrossard.bsky.social is accepted at TMLR. We show that both dimensionality reduction and clustering can be seen as minimizing an optimal transport loss 🧵 1/5.
openreview.net/forum?id=cll...
Reposted by Dimitri Meunier
arxiv stat.ML · 7 months ago
Dimitri Meunier, Antoine Moulin, Jakub Wornbard, Vladimir R. Kostic, Arthur Gretton: Demystifying Spectral Feature Learning for Instrumental Variable Regression
https://arxiv.org/abs/2506.10899
Very much looking forward to this! 🙌 Stellar line-up
7 months ago
Reposted by Dimitri Meunier
Antoine Moulin · 7 months ago
new preprint with the amazing @lviano.bsky.social and @neu-rips.bsky.social on offline imitation learning! learned a lot :) when the expert is hard to represent but the environment is simple, estimating a Q-value rather than the expert directly may be beneficial. lots of open questions left though!
🚨 New paper accepted at SIMODS! 🚨 “Nonlinear Meta-learning Can Guarantee Faster Rates”
arxiv.org/abs/2307.10870
When does meta-learning work? Spoiler: generalise to new tasks by overfitting on your training tasks! Here is why: 🧵👇
Nonlinear Meta-Learning Can Guarantee Faster Rates
Many recent theoretical works on \emph{meta-learning} aim to achieve guarantees in leveraging similar representational structures from related tasks towards simplifying a target task. The main aim of ...
https://arxiv.org/abs/2307.10870
7 months ago
Reposted by Dimitri Meunier
arxiv stat.ML · over 1 year ago
Dimitri Meunier, Zikai Shen, Mattes Mollenhauer, Arthur Gretton, Zhu Li: Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms
https://arxiv.org/abs/2405.14778
Reposted by Dimitri Meunier
arXiv cs.LG Machine Learning · 8 months ago
Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier, Arthur Gretton: Regularized least squares learning with heavy-tailed noise is minimax optimal
https://arxiv.org/abs/2505.14214
https://arxiv.org/pdf/2505.14214
https://arxiv.org/html/2505.14214
Reposted by Dimitri Meunier
Gabriel Peyré · 8 months ago
I have updated my slides on the maths of AI with an optimal pairing between AI and maths researchers ...
speakerdeck.com/gpeyre/the-m...
Reposted by Dimitri Meunier
Gabriel Peyré · 8 months ago
I have cleaned up my lecture notes on Optimal Transport for Machine Learners a bit
arxiv.org/abs/2505.06589
Optimal Transport for Machine Learners
Optimal Transport is a foundational mathematical theory that connects optimization, partial differential equations, and probability. It offers a powerful framework for comparing probability distributi...
http://arxiv.org/abs/2505.06589
Reposted by Dimitri Meunier
arxiv stat.ML · 8 months ago
Gabriel Peyré: Optimal Transport for Machine Learners
https://arxiv.org/abs/2505.06589
Reposted by Dimitri Meunier
François-Xavier Briol · 8 months ago
New ICML 2025 paper: Nested expectations with kernel quadrature. We propose an algorithm for estimating nested expectations that delivers orders-of-magnitude improvements on smooth, low-to-mid-dimensional problems using kernel ridge regression / kernel quadrature.
arxiv.org/abs/2502.18284
Great talk by Aapo Hyvärinen on nonlinear ICA at AISTATS '25!
8 months ago
Reposted by Dimitri Meunier
Arthur Gretton · 8 months ago
Density Ratio-based Proxy Causal Learning Without Density Ratios 🤔 at #AISTATS2025
An alternative bridge function for proxy causal learning with hidden confounders.
arxiv.org/abs/2503.08371
Bozkurt, Deaner, @dimitrimeunier.bsky.social, Xu
Reposted by Dimitri Meunier
Charles Riou · 8 months ago
Link to the video:
youtu.be/nLGBTMfTvr8?...
Interview of Statistics and ML Expert - Pierre Alquier
YouTube video by ML New Papers
https://youtu.be/nLGBTMfTvr8?si=OEXwnjrfazavVehP
Reposted by Dimitri Meunier
Pierre Alquier · 8 months ago
Dinner in Siglap yesterday evening with the members of the ABI team & friends who are attending ICLR.
Reposted by Dimitri Meunier
Arthur Gretton · 9 months ago
Optimality and Adaptivity of Deep Neural Features for Instrumental Variable Regression #ICLR25
openreview.net/forum?id=ReI...
NNs ✨ better than fixed-feature (kernel, sieve) when target has low spatial homogeneity, ✨ more sample-efficient wrt Stage 1
Kim, @dimitrimeunier.bsky.social, Suzuki, Li
Reposted by Dimitri Meunier
Pierre Alquier · 10 months ago
Our joint paper with Geoffrey Wolfer @gwolfer.bsky.social, "Variance-Aware Estimation of the Kernel Mean Embedding", accepted for publication in the Journal of Machine Learning Research 🥳
arxiv.org/abs/2210.06672
Variance-Aware Estimation of Kernel Mean Embedding
An important feature of kernel mean embeddings (KME) is that the rate of convergence of the empirical KME to the true distribution KME can be bounded independently of the dimension of the space, prope...
https://arxiv.org/abs/2210.06672
Reposted by Dimitri Meunier
arxiv stat.ML · 12 months ago
Juno Kim, Dimitri Meunier, Arthur Gretton, Taiji Suzuki, Zhu Li: Optimality and Adaptivity of Deep Neural Features for Instrumental Variable Regression
https://arxiv.org/abs/2501.04898