Michael Kirchhof (ICML)
@mkirchhof.bsky.social
📤 350
📥 186
📝 67
Research Scientist at Apple working on uncertainty quantification.
pinned post!
Can LLMs access and describe their own internal distributions? With my colleagues at Apple, I invite you to take a leap forward and make LLM uncertainty quantification what it can be. 📄
arxiv.org/abs/2505.20295
💻
github.com/apple/ml-sel...
🧵1/9
6 months ago
If you want some holiday reflections: This is not just a blogpost, but an insight into the philosophy of one of the best scientific minds (and best humans, really) I had the honor to share a bit of my life with.
16 days ago
Our research team is hiring PhD interns 🍏 Spend your next summer in Paris and explore the next frontiers of LLMs for uncertainty quantification, calibration, RL and post-training, and Bayesian experimental design. Details & Application ➡️
jobs.apple.com/en-my/detail...
Internship - Machine Learning Research on Uncertainty - Jobs at Apple (MY)
https://jobs.apple.com/en-my/details/200632015-2911/internship-machine-learning-research-on-uncertainty
about 2 months ago
reposted by
Michael Kirchhof (ICML)
about 2 months ago
📢 We’re looking for a researcher in cogsci, neuroscience, linguistics, or related disciplines to work with us at Apple Machine Learning Research! We're hiring a one-year interdisciplinary AIML Resident to work on understanding reasoning and decision making in LLMs. 🧵
reposted by
Michael Kirchhof (ICML)
Marco Cuturi
2 months ago
We have been working with Michal Klein on pushing a module to train *flow matching* models using JAX. This is shipped as part of our new release of the OTT-JAX toolbox (
github.com/ott-jax/ott
) The tutorial to do so is here:
ott-jax.readthedocs.io/tutorials/ne...
reposted by
Michael Kirchhof (ICML)
Marco Cuturi
3 months ago
It's that time of the year! 🎁 The Apple Machine Learning Research (MLR) team in Paris is hiring a few interns, to do cool research for ±6 months 🚀🚀 & work towards publications/OSS. Check requirements and apply: ➡️
jobs.apple.com/en-us/detail...
More❓→ ✉️
[email protected]
LLMs are currently this one big parameter block that stores all sorts of facts. In our new preprint, we add context-specific memory parameters to the model, and pretrain the model along with a big bank of memories. 📑
arxiv.org/abs/2510.02375
[1/10]🧵
3 months ago
reposted by
Michael Kirchhof (ICML)
Marco Cuturi
3 months ago
Our two phenomenal interns, Alireza Mousavi-Hosseini and Stephen Zhang
@syz.bsky.social
have been cooking some really cool work with Michal Klein and me over the summer. Relying on optimal transport couplings (to pick noise and data pairs) should, in principle, be helpful to guide flow matching 🧵
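For the curious, the coupling idea can be sketched in a few lines of plain NumPy/SciPy (an illustrative toy, not the OTT-JAX implementation): solve a discrete assignment between a noise batch and a data batch, then build the usual flow-matching regression targets from the matched pairs.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
noise = rng.normal(size=(64, 2))            # x0: source/noise batch
data = rng.normal(loc=3.0, size=(64, 2))    # x1: data batch

# Pairwise squared-Euclidean costs between all noise/data pairs.
cost = ((noise[:, None, :] - data[None, :, :]) ** 2).sum(-1)

# A minimum-cost assignment is the discrete OT coupling of two
# equal-weight batches; it replaces the usual random pairing.
row, col = linear_sum_assignment(cost)
x0, x1 = noise[row], data[col]

# Standard flow-matching targets built on the matched pairs:
t = rng.uniform(size=(64, 1))
x_t = (1 - t) * x0 + t * x1    # point on the straight path at time t
v_target = x1 - x0             # velocity the network would regress onto
```

Because the OT pairing minimizes transport cost, the straight paths cross less than with random pairs, which is the intuition for why it should guide training.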
Many treat uncertainty = a number. At Apple, we're rethinking this: LLMs should output strings that reveal all information of their internal distributions. We find that Reasoning, SFT, CoT can't do it - yet. To get there, we introduce the SelfReflect benchmark.
arxiv.org/pdf/2505.20295
3 months ago
reposted by
Michael Kirchhof (ICML)
Shubhendu Trivedi
4 months ago
Natural idea. Looks like a nice paper too.
arxiv.org/abs/2508.21184
BED-LLM: Intelligent Information Gathering with LLMs and Bayesian Experimental Design
We propose a general-purpose approach for improving the ability of Large Language Models (LLMs) to intelligently and adaptively gather information from a user or other external source using the framew...
https://arxiv.org/abs/2508.21184
I'll present my view on the future of uncertainties in LLMs and vision models at
@icmlconf.bsky.social
, in panel discussions, posters, and workshops. Reach out if you wanna chat :) Here's everything from me and other folks at Apple:
machinelearning.apple.com/updates/appl...
6 months ago
reposted by
Michael Kirchhof (ICML)
7 months ago
NEW PAPER ALERT: Recent studies have shown that LLMs often lack robustness to distribution shifts in their reasoning. Our paper proposes a new method, AbstRaL, to augment LLMs’ reasoning robustness, by promoting their abstract thinking with granular reinforcement learning.
I'll talk today about our latest research on uncertainty quantification at Apple (papers are 2 weeks old) and what I see as the future for UQ in vision and LLMs. See you at 102B, 4:30pm! PS: Lmk if you wanna chat :)
7 months ago
At the end of my PhD, I reflected on uncertainty quantification research, and what might change with chatbots and LLM agents. This was now accepted as position paper at
@icmlconf.bsky.social
. Some of those future topics are already picking up pace, so have an evening read ☕
arxiv.org/abs/2505.22655
7 months ago
reposted by
Michael Kirchhof (ICML)
Maureen de Seyssel
7 months ago
Now that
@interspeech.bsky.social
registration is open, time for some shameless promo! Sign-up and join our Interspeech tutorial: Speech Technology Meets Early Language Acquisition: How Interdisciplinary Efforts Benefit Both Fields. 🗣️👶
www.interspeech2025.org/tutorials
⬇️ (1/2)
Aleatoric and epistemic uncertainty are clear-cut concepts, right? ... right? 😵💫 In our new ICLR blogpost we let different schools of thought speak and contradict each other, and revisit chatbots where “the character of aleatory ‘transforms’ into epistemic”
iclr-blogposts.github.io/2025/blog/re...
8 months ago
reposted by
Michael Kirchhof (ICML)
Cem Koç
8 months ago
Today we have released the code and a demo iOS application for FastVLM - our extremely efficient and fast vision language model which runs on your device using MLX! You can check out the code and the app here:
github.com/apple/ml-fas...
reposted by
Michael Kirchhof (ICML)
Preetum Nakkiran
11 months ago
Paper🧵 (cross-posted at X): When does composition of diffusion models "work"? Intuitively, the reason dog+hat works and dog+horse doesn’t has something to do with independence between the concepts being composed. The tricky part is to formalize exactly what this means. 1/
reposted by
Michael Kirchhof (ICML)
Vimal Thilak
11 months ago
🚨 Apple Machine Learning Research Internship opportunity! My colleagues in Apple MLR are looking for a PhD research intern with a strong interest in reinforcement learning/post-training for LLMs. If interested, apply by sending an email to Etai Littwin (elittwin at apple dot com)
Wow, OpenAI's o1 has a whopping 93% ECE on Humanity's Last Exam. So if you just prompt o1 to tell you how sure it is about its answer, it will basically produce gibberish. And that's how most users will ask for uncertainties. We have work to do!
12 months ago
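For context, expected calibration error (ECE) bins predictions by stated confidence and measures the size-weighted gap between confidence and accuracy in each bin. A minimal sketch with toy numbers (the binning scheme and the made-up model below are illustrative, not the Humanity's Last Exam evaluation):

```python
import numpy as np

def expected_calibration_error(confs, correct, n_bins=10):
    """Binned ECE: size-weighted average gap between stated confidence
    and observed accuracy within each confidence bin."""
    confs = np.asarray(confs, float)
    correct = np.asarray(correct, float)
    # Bin index per prediction; clamp 1.0 into the top bin.
    bins = np.minimum((confs * n_bins).astype(int), n_bins - 1)
    ece = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confs[mask].mean())
    return ece

# A model that always says "90% sure" but is right only 10% of the time:
confs = np.full(100, 0.9)
correct = np.zeros(100)
correct[:10] = 1.0
print(expected_calibration_error(confs, correct))  # 0.8
```

An ECE of 93% means the stated confidences are about as far from the observed accuracies as they could be.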
reposted by
Michael Kirchhof (ICML)
Marco Cuturi
12 months ago
Today is a great day for optimal transport 🎉! Lots of gratitude 🙏 for all folks who contributed to
ott-jax.readthedocs.io
and pushed for the MOSCOT (now @ nature!) paper, from visionaries
@dominik1klein.bsky.social
, G. Palla, Z. Piran to the magician, Michal Klein! ❤️
www.nature.com/articles/s41...
Many LLM uncertainty estimators perform similarly, but does that mean they work the same way? No! We find that they rely on different cues, and combining them gives even better performance. 🧵1/5 📄
openreview.net/forum?id=QKR...
NeurIPS: Sunday, East Exhibition Hall A, Safe Gen AI workshop
about 1 year ago
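A toy sketch of the combination idea (synthetic scores, not the paper's estimators): two estimators that each track errors through different, independent noise separate correct from incorrect answers better once z-scored and summed.

```python
import numpy as np

def auroc(scores, labels):
    """Rank-based AUROC: probability that a random wrong answer receives
    a higher uncertainty score than a random correct one (no tie handling)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

rng = np.random.default_rng(0)
err = (rng.random(500) < 0.5).astype(float)   # 1 = model answered incorrectly
# Two estimators that see the same errors through independent noise:
u1 = err + rng.normal(scale=1.0, size=500)
u2 = err + rng.normal(scale=1.0, size=500)
# Z-score each signal, then sum -- the simplest possible combination.
combined = (u1 - u1.mean()) / u1.std() + (u2 - u2.mean()) / u2.std()
print(auroc(u1, err), auroc(combined, err))   # the combination ranks errors better
```

The two estimators have similar AUROCs on their own, yet the sum beats either one, because their noise partly cancels while their shared signal adds up.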
reposted by
Michael Kirchhof (ICML)
Andrea Santilli
about 1 year ago
Interested in learning how to evaluate uncertainty in LLMs? Check out our work at NeurIPS! Feel free to reach out for a chat!
reposted by
Michael Kirchhof (ICML)
Bálint Mucsányi
about 1 year ago
Excited to present our spotlight paper on uncertainty disentanglement at
#NeurIPS
! Drop by today between 11 am and 2 pm PST at West Ballroom A-D #5509 and let's chat!
Evaluating your LLM uncertainties with ROUGE-L will show clear winners... except that they aren't actually good. We find that ROUGE-L spuriously favors some methods over others. 🧵1/4 📄
openreview.net/forum?id=jGt...
NeurIPS: Sunday, East Exhibition Hall A, Safe Gen AI workshop
about 1 year ago
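To see how such spurious preferences can arise, here is ROUGE-L in a few lines (the LCS-based F-measure; the example sentences are made up): a wrong answer that shares filler words with the reference outscores a terse correct one.

```python
def lcs_len(a, b):
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def rouge_l_f(candidate, reference):
    """ROUGE-L F-measure on whitespace tokens."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

ref = "paris is the capital of france"
print(rouge_l_f("paris", ref))                             # correct answer, low score
print(rouge_l_f("london is the capital of england", ref))  # wrong answer, higher score
```

Since the score only sees token overlap, a method whose answers happen to echo the reference's phrasing gets rewarded regardless of correctness.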
reposted by
Michael Kirchhof (ICML)
Alexander Kolesnikov
about 1 year ago
Ok, it is yesterday's news already, but a good night's sleep is important. After 7 amazing years at Google Brain/DM, I am joining OpenAI. Together with
@xzhai.bsky.social
and
@giffmana.ai
, we will establish OpenAI Zurich office. Proud of our past work and looking forward to the future.
reposted by
Michael Kirchhof (ICML)
Bálint Mucsányi
about 1 year ago
Thrilled to share our NeurIPS spotlight on uncertainty disentanglement! ✨ We study how well existing methods disentangle different sources of uncertainty, like epistemic and aleatoric. While all tested methods fail at this task, there are promising avenues ahead. 🧵 👇 1/7 📖:
arxiv.org/abs/2402.19460
Proud to announce our NeurIPS spotlight, which was in the works for over a year now :) We dig into why decomposing aleatoric and epistemic uncertainty is hard, and what this means for the future of uncertainty quantification. 📖
arxiv.org/abs/2402.19460
🧵1/10
about 1 year ago
My last week as an Apple intern was insane. 3 paper deadlines? Sure, I can do it. "Wanna interview this week?" Sure, I can do it! "Wanna present your side project to the senior VP?" Sure, I... wait 🤯 I had such a flow! It was so fun! I want more. I'm joining Apple as a Research Scientist 🍎
about 1 year ago
Hello there 🦋 I'll continue my uncertainty quantification research here :) Expect a NeurIPS spotlight and some work-in-progress papers here in the next days. You'll hear it here first!
about 1 year ago