Arthur Gretton
@arthurgretton.bsky.social
📤 436
📥 169
📝 28
Thank you for a fantastic conference!
28 days ago
0 replies · 3 reposts · 0 likes
Sequential kernel embedding for mediated and time-varying dose response curves
Appearing in Bernoulli:
projecteuclid.org/journals/ber...
with preprint here:
arxiv.org/abs/2111.03950
...along with code!
github.com/liyuan9988/K...
Rahul Singh, Liyuan Xu
about 2 months ago
0 replies · 6 reposts · 1 like
Research Fellow position open at
@gatsbyucl.bsky.social
to work with me and Jason Hartford on Causality in Biological Systems! Apply at link, deadline is 27 August:
www.ucl.ac.uk/work-at-ucl/...
UCL – University College London
UCL is consistently ranked as one of the top ten universities in the world (QS World University Rankings 2010-2022) and is No.2 in the UK for research power (Research Excellence Framework 2021).
https://www.ucl.ac.uk/work-at-ucl/search-ucl-jobs/details?jobId=36867&jobTitle=Research+Fellow+-+Causality+in+Biological+Systems
about 2 months ago
0 replies · 13 reposts · 5 likes
Accelerated Diffusion Models via Speculative Sampling, at
#icml25
! 16:30 Tuesday July 15 poster E-3012
arxiv.org/abs/2501.05370
@vdebortoli.bsky.social
Galashov
@arnauddoucet.bsky.social
3 months ago
1 reply · 25 reposts · 6 likes
Distributional diffusion models with scoring rules at
#icml25
Fewer, larger denoising steps using distributional losses! Wednesday 11am poster E-1910
arxiv.org/pdf/2502.02483
@vdebortoli.bsky.social
Galashov Guntupalli Zhou
@sirbayes.bsky.social
@arnauddoucet.bsky.social
3 months ago
0 replies · 8 reposts · 3 likes
reposted by
Arthur Gretton
Rémi Flamary
3 months ago
Distributional Reduction paper with H. Van Assel,
@ncourty.bsky.social
, T. Vayer , C. Vincent-Cuaz, and
@pfrossard.bsky.social
is accepted at TMLR. We show that both dimensionality reduction and clustering can be seen as minimizing an optimal transport loss 🧵1/5.
openreview.net/forum?id=cll...
1 reply · 33 reposts · 10 likes
Composite Goodness-of-fit Tests with Kernels, now out in JMLR!
www.jmlr.org/papers/v26/2...
Test if your distribution comes from ✨any✨ member of a parametric family. Comes in MMD and KSD flavours, and with code.
@oscarkey.bsky.social
@fxbriol.bsky.social
Tamara Fernandez
Composite Goodness-of-fit Tests with Kernels
https://www.jmlr.org/papers/v26/24-0276.html
4 months ago
0 replies · 19 reposts · 5 likes
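The MMD flavour of such tests compares the data against samples drawn from the fitted model. A minimal sketch of the unbiased squared-MMD statistic underlying this comparison (illustrative only, not the paper's implementation; the Gaussian kernel and bandwidth `sigma` are assumptions):

```python
import numpy as np

def mmd2_unbiased(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD with a Gaussian kernel of bandwidth sigma."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    n, m = len(X), len(Y)
    term_x = (Kxx.sum() - np.trace(Kxx)) / (n * (n - 1))  # off-diagonal mean
    term_y = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_x + term_y - 2 * Kxy.mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1))  # observed data
Y = rng.normal(size=(200, 1))  # hypothetical draws from a fitted model
print(mmd2_unbiased(X, Y))     # near zero when the model matches the data
```

In the composite setting the model parameter is itself estimated from the data, which is what makes calibrating the test non-trivial; see the paper for the actual procedure.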
Turns out that overfitting is the right approach when you want to generalize to new tasks!
4 months ago
0 replies · 15 reposts · 0 likes
reposted by
Arthur Gretton
arXiv cs.LG Machine Learning
4 months ago
Mattes Mollenhauer, Nicole Mücke, Dimitri Meunier, Arthur Gretton: Regularized least squares learning with heavy-tailed noise is minimax optimal
https://arxiv.org/abs/2505.14214
https://arxiv.org/pdf/2505.14214
https://arxiv.org/html/2505.14214
1 reply · 6 reposts · 7 likes
Looking forward to this!
5 months ago
0 replies · 7 reposts · 0 likes
Kernel Single Proxy Control for Deterministic Confounding at
#AISTATS25
Proxy causal learning generally requires two proxy variables: a treatment proxy and an outcome proxy. When is it possible to use just one?
arxiv.org/abs/2308.04585
Liyuan Xu
5 months ago
0 replies · 2 reposts · 1 like
Credal Two-Sample Tests of Epistemic Uncertainty at
#AISTATS25
Compare credal sets: convex sets of probability measures whose elements capture aleatoric uncertainty, while the set itself represents epistemic uncertainty.
arxiv.org/abs/2410.12921
@slchau.bsky.social
Schrab
@sejdino.bsky.social
@krikamol.bsky.social
5 months ago
0 replies · 13 reposts · 4 likes
Spectral Representation for Causal Estimation with Hidden Confounders at
#AISTATS2025
A spectral method for causal effect estimation with hidden confounders, for instrumental variable and proxy causal learning
arxiv.org/abs/2407.10448
Haotian Sun,
@antoine-mln.bsky.social
, Tongzheng Ren, Bo Dai
5 months ago
1 reply · 3 reposts · 3 likes
Density Ratio-based Proxy Causal Learning Without Density Ratios 🤔 at
#AISTATS2025
An alternative bridge function for proxy causal learning with hidden confounders.
arxiv.org/abs/2503.08371
Bozkurt, Deaner,
@dimitrimeunier.bsky.social
, Xu
5 months ago
0 replies · 7 reposts · 4 likes
reposted by
Arthur Gretton
Lénaïc Chizat
6 months ago
Announcing: The 2nd International Summer School on Mathematical Aspects of Data Science
mathsdata2025.github.io
EPFL, Sept 1–5, 2025. Speakers: Bach
@bachfrancis.bsky.social
Bandeira Mallat Montanari Peyré
@gabrielpeyre.bsky.social
For PhD students & early-career researchers. Apply before May 15!
Mathematical Aspects of Data Science
Graduate Summer School - EPFL - Sept. 1-5, 2025
https://mathsdata2025.github.io
1 reply · 45 reposts · 25 likes
Optimality and Adaptivity of Deep Neural Features for Instrumental Variable Regression
#ICLR25
openreview.net/forum?id=ReI...
NNs ✨better than fixed-feature (kernel, sieve) when target has low spatial homogeneity, ✨more sample-efficient wrt Stage 1.
Kim,
@dimitrimeunier.bsky.social
, Suzuki, Li
5 months ago
0 replies · 8 reposts · 3 likes
Deep MMD Gradient Flow Without Adversarial Training at
#ICLR2025
openreview.net/forum?id=Pf8...
Do you have a GAN critic? Then you have a diffusion! Adaptive MMD gradient flow trained on a forward diffusion, competitive performance on image generation! Galashov,
@vdebortoli.bsky.social
5 months ago
0 replies · 3 reposts · 1 like
Looking forward to this!
6 months ago
0 replies · 5 reposts · 2 likes
reposted by
Arthur Gretton
Pierre Alquier
7 months ago
Our joint paper with Geoffrey Wolfer
@gwolfer.bsky.social
"Variance-Aware Estimation of the Kernel Mean Embedding" accepted for publication in the Journal of Machine Learning Research 🥳
arxiv.org/abs/2210.06672
Variance-Aware Estimation of Kernel Mean Embedding
An important feature of kernel mean embeddings (KME) is that the rate of convergence of the empirical KME to the true distribution KME can be bounded independently of the dimension of the space, prope...
https://arxiv.org/abs/2210.06672
1 reply · 29 reposts · 4 likes
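The quantity being estimated here, the kernel mean embedding, has a very simple plug-in form. A minimal sketch with a Gaussian kernel (the paper's variance-aware estimator is more refined; the bandwidth `sigma` is an assumption):

```python
import numpy as np

def empirical_kme(X, sigma=1.0):
    """Plain empirical kernel mean embedding of a sample X:
    returns the function t -> (1/n) * sum_i k(x_i, t)."""
    def mu_hat(t):
        d2 = ((X - t) ** 2).sum(axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2)).mean()
    return mu_hat

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 1))
mu = empirical_kme(X)
print(mu(np.zeros(1)))  # approximates E[k(x, 0)] under the data distribution
```

As the abstract notes, the convergence of this estimator to the population embedding can be bounded independently of the dimension of the input space.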
Congratulations
@lestermackey.bsky.social
!!
7 months ago
1 reply · 4 reposts · 1 like
reposted by
Arthur Gretton
ELLIS
7 months ago
Hey ELLIS PhD students, need to travel but low on funds? Learn how ELSA can help with that:
bit.ly/4kqjyel
#ELLISPhD
#MobilityFund
#SustainableAI
#ProjectsBuildingOnELLIS
Travel the world with ELSA: our Mobility Fund in Action – ELSA
https://bit.ly/4kqjyel
0 replies · 21 reposts · 4 likes
reposted by
Arthur Gretton
Pierre Alquier
7 months ago
I already advertised this document when I posted it on arXiv, and again when it was published. This week, with the publisher's agreement, I uploaded the published version to arXiv: fewer typos, more references, and additional sections, including PAC-Bayes Bernstein.
arxiv.org/abs/2110.11216
1 reply · 109 reposts · 25 likes
reposted by
Arthur Gretton
Pierre Alquier
7 months ago
The slides of my talk at OIST are online:
pierrealquier.github.io/slides/okina...
Thanks to the organisers, and thanks to Frank Nielsen for the photo of my talk 🙏 Link to the paper:
arxiv.org/abs/2412.18539
0 replies · 16 reposts · 1 like
Video now public from the talk "Learning to act in noisy contexts using deep proxy learning" at the NeurIPS'24 Workshop on Causal Representation Learning! Video:
neurips.cc/virtual/2024...
Slides:
www.gatsby.ucl.ac.uk/~gretton/cou...
7 months ago
1 reply · 8 reposts · 5 likes
reposted by
Arthur Gretton
Antoine Moulin
7 months ago
super happy about this preprint! we can *finally* perform efficient exploration and find near-optimal stationary policies in infinite-horizon linear MDPs, and even use it for imitation learning :) working with
@neu-rips.bsky.social
and
@lviano.bsky.social
on this was so much fun!!
2 replies · 23 reposts · 3 likes
reposted by
Arthur Gretton
Tim van Erven
7 months ago
With Jack Mayo we are organizing a symposium on theory of bandit algorithms on March 10 at the University of Amsterdam. Talks by Jack Mayo, Wouter Koolen, Julia Olkhovskaya, Dirk van der Hoeven and, tentatively, Tor Lattimore.
www.timvanerven.nl/events/bandi...
has details and (free) registration
Bandit Theory Symposium
Tim van Erven’s website
https://www.timvanerven.nl/events/bandit-symposium2025/
0 replies · 12 reposts · 3 likes
Better diffusions with scoring rules! Fewer, larger denoising steps using distributional losses; learn the posterior distribution of clean samples given the noisy versions.
arxiv.org/pdf/2502.02483
@vdebortoli.bsky.social
Galashov Guntupalli Zhou
@sirbayes.bsky.social
@arnauddoucet.bsky.social
https://arxiv.org/pdf/2502.02483
8 months ago
1 reply · 29 reposts · 11 likes
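One well-known scoring rule in this family is the energy score, which rewards a sample cloud for being both close to the observation and appropriately spread out. A sample-based sketch for intuition (not the paper's training objective):

```python
import numpy as np

def energy_score(samples, y):
    """Monte Carlo energy score: E||X - y|| - 0.5 * E||X - X'||.
    A strictly proper scoring rule; lower is better."""
    fit = np.linalg.norm(samples - y, axis=-1).mean()        # closeness to observation
    spread = np.linalg.norm(
        samples[:, None, :] - samples[None, :, :], axis=-1   # pairwise distances
    ).mean()
    return fit - 0.5 * spread

rng = np.random.default_rng(0)
y = np.zeros(2)                              # a "clean" observation
good = rng.normal(0.0, 0.5, size=(100, 2))   # samples concentrated near y
bad = rng.normal(5.0, 0.5, size=(100, 2))    # samples far from y
print(energy_score(good, y), energy_score(bad, y))
```

Because the score is minimized in expectation only by the true conditional distribution, a denoiser trained with such a loss is pushed toward the full posterior of clean samples given noisy ones, rather than its mean.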
reposted by
Arthur Gretton
ELLIS
8 months ago
Last week, the MFO Oberwolfach workshop on Overparametrization, Regularization, Identifiability & Uncertainty in ML united 48 researchers for 29 talks & plenaries. Organized by two ELLIS Programs, it advanced discussions in theoretical ML. Get details:
ellis.eu/news/ellis-p...
Exploring Overparametrization, Regularization, and Uncertainty in Machine Learning: Insights from the Oberwolfach Workshop
The ELLIS mission is to create a diverse European network that promotes research excellence and advances breakthroughs in AI, as well as a pan-European PhD program to educate the next generation of AI...
https://ellis.eu/news/ellis-program-workshop-on-overparametrization-regularization-identifiability-and-uncertainty-in-machine-learning
0 replies · 18 reposts · 3 likes
reposted by
Arthur Gretton
Sara Magliacane
8 months ago
Sad after
#AISTATS2025
and
#ICLR2025
notifications? As we say in Italy, when a door closes, a bigger one opens ;) If you have a fantastic paper on
#uncertainty
#AI
#ML
#causality
#statML
#probabilisticmodels
#reasoning
#impreciseprobabilities
etc, consider submitting to
#UAI2025
🇧🇷 deadline 10 Feb 💥
2 replies · 42 reposts · 16 likes
reposted by
Arthur Gretton
ICLR Conference
9 months ago
Financial Assistance applications are now open! If you face financial barriers to attending ICLR 2025, we encourage you to apply. The program offers prepay and reimbursement options. Applications are due March 2nd with decisions announced March 9th.
iclr.cc/Conferences/...
ICLR 2025 Financial Assistance
https://iclr.cc/Conferences/2025/FinancialAssistance
0 replies · 30 reposts · 22 likes
reposted by
Arthur Gretton
Arnaud Doucet
9 months ago
Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With
@vdebortoli.bsky.social
, A. Galashov &
@arthurgretton.bsky.social
, we extend this approach to (continuous-space) diffusion models:
arxiv.org/abs/2501.05370
0 replies · 45 reposts · 10 likes
reposted by
Arthur Gretton
9 months ago
📢 ICML Call for Papers is out The CfP is live here:
icml.cc/Conferences/...
, and we also just released a blog post summarizing major updates this year:
medium.com/@icml2025pc/...
ICML 2025 Call for Papers
https://icml.cc/Conferences/2025/CallForPapers
0 replies · 48 reposts · 20 likes
reposted by
Arthur Gretton
Pierre Alquier
10 months ago
ALT 2025: list of accepted papers. Congratulations to the authors !
openreview.net/group?id=alg...
ALT 2025 Conference
Welcome to the OpenReview homepage for ALT 2025 Conference
https://openreview.net/group?id=algorithmiclearningtheory.org/ALT/2025/Conference#tab-accept
1 reply · 24 reposts · 7 likes
reposted by
Arthur Gretton
Arnaud Doucet
10 months ago
The slides of my NeurIPS lecture "From Diffusion Models to Schrödinger Bridges - Generative Modeling meets Optimal Transport" can be found here
drive.google.com/file/d/1eLa3...
BreimanLectureNeurIPS2024_Doucet.pdf
https://drive.google.com/file/d/1eLa3y2Xprtjmq4cIiPD9hxevra-wy9k4/view?usp=sharing
9 replies · 327 reposts · 73 likes
"Learning to Act in Noisy Contexts Using Deep Proxy Learning" Talk at the
#NeurIPS2024
Causal Representation Learning Workshop
neurips.cc/virtual/2024...
Sunday 15th December (tomorrow!), 8:45am, East Exhibition Hall C
10 months ago
0 replies · 9 reposts · 0 likes
reposted by
Arthur Gretton
Harley Wiltzer
10 months ago
How can you 0-shot transfer predictions of long-term performance across reward functions *and* risk-sensitive utilities? We can do this via Distributional Successor Features. Our recent work introduces the 1st tractable & provably convergent algos for learning DSFs.
#NeurIPS2024
#6704 12 Dec, 11-2
3 replies · 16 reposts · 6 likes
Data balancing for fairness: when does it work? When does it not? "Mind the Graph When Balancing Data for Fairness or Robustness"
#NeurIPS2024
West Ballroom A-D #5507 Fri 13 Dec 11 a.m. PST — 2 p.m. PST
neurips.cc/virtual/2024...
10 months ago
0 replies · 14 reposts · 1 like
Distributional SFs enable 0-shot generalization of return *distribution* functions across a finite-dimensional reward function class "Foundations of Multivariate Distributional Reinforcement Learning"
#NeurIPS2024
#6704 12 Dec 11am-2pm
neurips.cc/virtual/2024...
Wiltzer Farebrother Rowland
10 months ago
0 replies · 5 reposts · 2 likes
Conditional mean embeddings without Tikhonov: no saturation effect! "Optimal Rates for Vector-Valued Spectral Regularization Learning Algorithms"
#NeurIPS2024
Poster #5709, 12 Dec, 11 a.m. – 2 p.m.
neurips.cc/virtual/2024...
Meunier, Shen, Mollenhauer, Li
10 months ago
0 replies · 3 reposts · 1 like
Use contrastive divergence to learn energy-based models as well as maximum likelihood does! "Near-Optimality of Contrastive Divergence Algorithms" Pierre Glaser, Kevin Han Huang at
#NeurIPS2024
West Ballroom A-D #5609 Wed 11 Dec 4:30 — 7:30 p.m. PST
neurips.cc/virtual/2024...
10 months ago
0
3
0
reposted by
Arthur Gretton
UCL CSML/ELLIS
10 months ago
Join us this Friday, 6th of December at noon, for Xidong Feng's (Google DeepMind) talk titled "Language Model from the view of Reinforcement Learning". Happening in-person and on zoom. More details and zoom link here:
ucl-ellis.github.io/dm_csml_semi...
0 replies · 4 reposts · 3 likes
reposted by
Arthur Gretton
UCL CSML/ELLIS
10 months ago
Join us this Friday, 29th of November at noon, in person or on zoom to hear Takanori Maehara from Roku Inc talk about the Expressive Power of Graph Neural Networks. More details and zoom link here:
ucl-ellis.github.io/dm_csml_semi...
0 replies · 4 reposts · 4 likes
reposted by
Arthur Gretton
UCL CSML/ELLIS
10 months ago
It was our pleasure to host Jeremias Knoblauch at the Seminar kindly sponsored by Jump Trading. In case you missed it, the recording of his talk on Post-Bayesian Machine Learning is available here:
youtu.be/0kyI3UfD1Uw
Jeremias Knoblauch (University College London): Post-Bayesian Machine Learning
YouTube video by JumpTrading ELLIS UCL CSML Seminar Series
https://youtu.be/0kyI3UfD1Uw
0 replies · 14 reposts · 6 likes
A fun interview with Charles Riou of ML New Papers, recorded at
#MLSS2024
#MLSS2024Okinawa
!
youtu.be/wbVoH9QUm80
Interview of Arthur Gretton ML Researcher at Google DeepMind
YouTube video by ML New Papers
https://youtu.be/wbVoH9QUm80
about 1 year ago
0 replies · 1 repost · 0 likes