Stephanie Hyland
@hylandsl.bsky.social
📤 2228
📥 890
📝 73
machine learning for health at microsoft research, based in cambridge UK 🌻 she/her
uv is so good
9 days ago
0
4
0
reposted by
Stephanie Hyland
Alexander Doria
20 days ago
Some papers really have a good intro
4
16
1
reposted by
Stephanie Hyland
Eugene Vinitsky 🍒
about 1 month ago
The more rigorous peer review happens in conversations and reading groups after the paper is out with reputational costs for publishing bad work
2
49
8
reposted by
Stephanie Hyland
Jack Lynch
about 1 month ago
I'll admit, I was skeptical when they said Gemini was just like a bunch of PhDs. But I gotta admit they nailed it.
71
7318
1837
what is the purpose of VQA datasets where text-only models do better than random?
about 2 months ago
0
1
0
lads can we stop
about 2 months ago
0
4
0
reposted by
Stephanie Hyland
Tim Kellogg
about 2 months ago
quick diagram of Bluesky’s architecture and why it’s nicer here
4
72
6
it's frustrating how inefficient review assignments are: we target a minimum number of completed reviews per paper, but in accounting for inevitable no-shows, some people end up doing technically unnecessary (if still beneficial) reviews
2 months ago
1
1
0
New work from my team!
arxiv.org/abs/2507.12950
Intersecting mechanistic interpretability and health AI 😎 We trained and interpreted sparse autoencoders on MAIRA-2, our radiology MLLM. We found a range of human-interpretable radiology reporting concepts, but also many uninterpretable SAE features.
Insights into a radiology-specialised multimodal large language model with sparse autoencoders
Interpretability can improve the safety, transparency and trust of AI models, which is especially important in healthcare applications where decisions often carry significant consequences. Mechanistic...
https://arxiv.org/abs/2507.12950
2 months ago
1
10
4
reposted by
Stephanie Hyland
NeurIPS Conference
3 months ago
We're excited to announce a second physical location for NeurIPS 2025, in Mexico City, which we hope will address concerns around skyrocketing attendance and difficulties in travel visas that some attendees have experienced in previous years. Read more in our blog:
blog.neurips.cc/2025/07/16/n...
1
45
23
reposted by
Stephanie Hyland
Sebastian Bordt
3 months ago
During the last couple of years, we have read a lot of papers on explainability and often felt that something was fundamentally missing🤔 This led us to write a position paper (accepted at
#ICML2025
) that attempts to identify the problem and to propose a solution.
arxiv.org/abs/2402.02870
👇🧵
1
12
6
reposted by
Stephanie Hyland
Jessica Hullman
3 months ago
ExplainableAI has long frustrated me by lacking a clear theory of what an explanation should do. Improve use of a model for what? How? Given a task what's max effect explanation could have? It's complicated bc most methods are functions of features & prediction but not true state being predicted 1/
2
44
8
reposted by
Stephanie Hyland
Ida Momennejad
3 months ago
Pleased to share our ICML Spotlight with
@eberleoliver.bsky.social
, Thomas McGee, Hamza Giaffar,
@taylorwwebb.bsky.social
. Position: "We Need An Algorithmic Understanding of Generative AI." What algorithms do LLMs actually learn and use to solve problems? 🧵1/n
openreview.net/forum?id=eax...
3
151
40
This is cool
arxiv.org/abs/2505.18235
. Linear representation hypothesis discourse needs more differential geometry i m o
The Origins of Representation Manifolds in Large Language Models
There is a large ongoing scientific effort in mechanistic interpretability to map embeddings and internal representations of AI systems into human-understandable concepts. A key element of this effort...
https://arxiv.org/abs/2505.18235
3 months ago
0
7
0
reposted by
Stephanie Hyland
Jan Hermann
3 months ago
🚀 After two+ years of intense research, we’re thrilled to introduce Skala — a scalable deep learning density functional that hits chemical accuracy on atomization energies and matches hybrid-level accuracy on main group chemistry — all at the cost of semi-local DFT ⚛️🔥🧪🧬
3
72
32
Limited time offer: I get an email every time someone fills in this form, so act now to add to the chaos of my inbox before I figure out how to turn this off
3 months ago
1
1
1
A strange thing about living near Duxford is having WW2-era planes flying overhead on a regular basis.
4 months ago
0
1
0
reposted by
Stephanie Hyland
Itai Yanai
4 months ago
90% of doing science is being open to new ideas.
2
258
76
NeurIPS 2025 call for ethics reviewers
neurips.cc/Conferences/...
#neurips2025
2025 Call For Ethics Reviewers
https://neurips.cc/Conferences/2025/CallForEthicsReviewers
4 months ago
0
4
2
reposted by
Stephanie Hyland
Steve Klabnik
4 months ago
I am disappointed in the AI discourse
steveklabnik.com/writing/i-am...
I am disappointed in the AI discourse
https://steveklabnik.com/writing/i-am-disappointed-in-the-ai-discourse/
212
915
269
reposted by
Stephanie Hyland
Irene Chen
5 months ago
What happens in SAIL 2025 stays in SAIL 2025 -- except for these anonymized hot takes! 🔥 Jotted down 17 de-identified quotes on AI and medicine from medical executives, journal editors, and academics in off-the-record discussions in Puerto Rico
irenechen.net/sail2025/
0
25
7
reposted by
Stephanie Hyland
Association for Health Learning and Inference (AHLI)
5 months ago
Registration is now open for
#CHIL2025
! This year's program will feature 🔹Keynote presentations 🔹Panel discussions 🔹Year in Review 🔹Poster sessions 🔹Doctoral Symposium lightning talks 👉 Schedule:
chil.ahli.cc/attend/sched...
👉 Register:
ahli.cc/chil25-regis...
0
2
1
reposted by
Stephanie Hyland
Naomi Saphra
5 months ago
I wrote something up for AI people who want to get into bluesky and either couldn't assemble an exciting feed or gave up doomscrolling when their Following feed switched to talking politics 24/7.
The AI Researcher's Guide to a Non-Boring Bluesky Feed | Naomi Saphra
How to migrate to bsky without a boring feed.
https://nsaphra.net/post/bsky
23
313
106
taking a break from job 2 (roller derby spreadsheets) to work on job 3 (reviewing papers)
7 months ago
1
1
0
found a copy of the NeurIPS 1989 proceedings at the office
8 months ago
0
21
2
reposted by
Stephanie Hyland
Naomi Saphra
8 months ago
One of my grand interpretability goals is to improve human scientific understanding by analyzing scientific discovery models, but this is the most convincing case yet that we CAN learn from model interpretation: Chess grandmasters learned new play concepts from AlphaZero's internal representations.
Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero
Artificial Intelligence (AI) systems have made remarkable progress, attaining super-human performance across various domains. This presents us with an opportunity to further human knowledge and improv...
https://arxiv.org/abs/2310.16410
2
108
23
I genuinely think about this book all the time
9 months ago
0
2
0
Happy to announce a collaboration with the Mayo Clinic to advance our research in radiology report generation!
newsnetwork.mayoclinic.org/discussion/m...
Tagging some of the core team:
@valesalvatelli.bsky.social
@fepegar.com
@maxilse.bsky.social
@sambondtaylor.bsky.social
@anton-sc.bsky.social
https://newsnetwork.mayoclinic.org/discussion/mayo-clinic-accelerates-personalized-medicine-through-foundation-models-with-microsoft-research-and-cerebras-systems/
9 months ago
1
7
2
me, eating potato waffles straight from the toaster: see, vegan diets are healthier
9 months ago
0
4
0
enjoying listening to the BBC Reith Lectures (
www.bbc.co.uk/programmes/b...
). In the 2021 edition, Stuart Russell offhandedly predicted AlphaFold winning the Nobel prize!
BBC Radio 4 - The Reith Lectures
Significant international thinkers deliver the BBC's flagship annual lecture series
https://www.bbc.co.uk/programmes/b00729d9
9 months ago
0
2
1
Working on a "special" kind of data (health; medical images etc.) means experiencing the disappointment of realising the method in a paper relies critically on "having a very good image captioner", "asking GPT-4V", etc.
9 months ago
2
5
0
I love having a very specific question I'm trying to answer and just going to town on semantic scholar.
9 months ago
1
2
0
reposted by
Stephanie Hyland
Maarten van Smeden
9 months ago
Let us start 2025 in a positive mood: here are 10 methods things researchers can worry *less* about in 2025
15
261
137
reposted by
Stephanie Hyland
Altmetric
10 months ago
THREAD This Is The Story Of The Pernicious Rise of AI-Generated Papers and their Online Impact An Incomplete History Told In The Voice of Documentarian Adam Curtis 1/31
30
604
508
reposted by
Stephanie Hyland
9 months ago
feeling a bit under the weather this week … thus an increased level of activity on social media and blog:
kyunghyuncho.me/i-sensed-anx...
i sensed anxiety and frustration at NeurIPS’24 – Kyunghyun Cho
https://kyunghyuncho.me/i-sensed-anxiety-and-frustration-at-neurips24/
19
181
49
reposted by
Stephanie Hyland
Dr Zoë Ayres
10 months ago
A word of caution: it might feel like that grant/paper/review cannot possibly wait over the Christmas period, but you can't get time with friends and family back. You likely won't be like "really wish I'd submitted that" in 5 years time but you might regret not spending time with those you love.
3
200
51
reposted by
Stephanie Hyland
Dan Goldstein
10 months ago
Microsoft's Computational Social Science group may have the opportunity to hire one researcher.
Senior: 0-3 yrs post PhD
jobs.careers.microsoft.com/global/en/jo...
Principal: 3+ yrs post PhD
jobs.careers.microsoft.com/global/en/sh...
Please note: our ability to hire this season is not certain
4
130
55
Isaac Kohane highlighting that AI is already being used by health insurance companies to deny claims
#ML4H
10 months ago
2
7
1
Simulating a skin lesion dataset for evaluating dermatology models, from Adarsh Subbaswamy at ML4H
10 months ago
0
6
1
“Most people who have died from malaria were never in an EHR system” - Megan Coffee on the panel (“The Promise of Foundation Models in Healthcare”) at ML4H
10 months ago
0
5
1
Max Tegmark pointing out the benefits of “tool AI” over “AGI” at the
#NeurIPS2024
Safe Generative AI workshop
10 months ago
4
23
6
my roller derby name is Kernel Panic 💅
10 months ago
0
7
0
It’s kicking off in the
#NeurIPS2024
town hall slido
10 months ago
1
5
0
Rosalind Picard on involving the real people who will be impacted/benefit from your research *in your research*. (One of many excellent lessons from her talk on “How to optimise what matters most?”)
#NeurIPS2024
10 months ago
1
7
0
Lidong Zhou giving lessons on scaling from the systems community
#NeurIPS2024
10 months ago
0
6
1
“How would you describe your research?”
#NeurIPS2024
10 months ago
1
36
1
not enough people are talking about the concerning proliferation of radar charts in machine learning research
towardsdatascience.com/ghosts-on-th...
Ghosts on the Radar — Why Radar Charts Are Easily Misread
Mali Akmanalp just wrote an interesting piece on how he used radar charts to solve a particular data viz problem. In general though, I…
https://towardsdatascience.com/ghosts-on-the-radar-why-radar-charts-are-easily-misread-dba00fc399ef
10 months ago
1
5
0
whenever someone presents a “here’s the history of this field” slide I can’t help but notice an absence of women
10 months ago
1
7
1
profound questions in the
#NeurIPS2024
slido
10 months ago
0
10
0
Nice to see machine learning for health in the top 20 topics in NeurIPS papers
#NeurIPS2024
10 months ago
1
9
2