Andrew Saxe
@saxelab.bsky.social
📤 4068
📥 467
📝 30
Professor at the Gatsby Unit and Sainsbury Wellcome Centre, UCL, trying to figure out how we learn
pinned post!
Excited to share new work
@icmlconf.bsky.social
by Loek van Rossem exploring the development of computational algorithms in recurrent neural networks. Hear it live tomorrow, Oral 1D, Tues 15 Jul West Exhibition Hall C:
icml.cc/virtual/2025...
Paper:
openreview.net/forum?id=3go...
(1/11)
ICML Poster: Algorithm Development in Neural Networks: Insights from the Streaming Parity Task (ICML 2025)
https://icml.cc/virtual/2025/poster/46526
2 months ago
1
39
12
reposted by
Andrew Saxe
Dirk Gütlin
about 8 hours ago
Structure transfer and consolidation in visual implicit learning 🧠🟦 🧠🤖
elifesciences.org/articles/100...
Structure transfer and consolidation in visual implicit learning
Sleep is essential for consolidating implicitly acquired perceptual knowledge that enables the knowledge transfer effect via newly learned structured information observed in prior studies of explicit ...
https://elifesciences.org/articles/100785
0
10
5
reposted by
Andrew Saxe
Abhilasha Joshi, PhD
5 days ago
1/ 🚨 New preprint! 🚨 Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇 📄
www.biorxiv.org/content/10.1...
💻 code + data 🔗 below 🤩
#neuroskyence
8
104
53
reposted by
Andrew Saxe
Chris Baldassano
6 days ago
What happens when we learn a new shortcut between places we thought were unconnected? Hannah found that the hippocampus rapidly adjusts its representations of environments to join them into a connected map - excited to share this final paper from her PhD work with me and @mariamaly.bsky.social!
2
40
10
reposted by
Andrew Saxe
Dan Yamins
7 days ago
Here is our best thinking about how to make world models. I would apologize for it being a massive 40-page behemoth, but it's worth reading.
arxiv.org/pdf/2509.09737
https://arxiv.org/pdf/2509.09737
2
70
19
reposted by
Andrew Saxe
Joey Rudoler
15 days ago
Just spent two wonderful weeks in London for the Analytical Connectionism Summer School (hosted at Gatsby/UCL this year). Met lots of wonderful scientists at the intersection of cog neuro and machine learning. Learned a lot and can’t recommend more highly! Small meetings rule
1
8
1
reposted by
Andrew Saxe
Joao Barbosa
17 days ago
I recently learned: w/ lesioned 8A, you can do many WM tasks but not one👇 Guess what happens when you decode from 8A during each of these tasks? They are all the same. Decoding is like a quality check; it provides almost no info about function.
scholar.google.com/citations?vi...
2
6
2
reposted by
Andrew Saxe
Timothy O'Leary
16 days ago
We all know that correlation doesn't imply causation. So we took some correlations and tested if they were causal. Here's what happened:
www.cell.com/cell-reports...
An optical brain-machine interface reveals a causal role of posterior parietal cortex in goal-directed navigation
Relating neural circuitry to behavior is challenging due to closed loop interactions between neural activity, actions, and sensations. Sorrell et al. present evidence for a causal role of mouse PPC in...
https://www.cell.com/cell-reports/fulltext/S2211-1247(25)00633-3?_returnURL=https%3A%2F%2Flinkinghub.elsevier.com%2Fretrieve%2Fpii%2FS2211124725006333%3Fshowall%3Dtrue
3
45
12
reposted by
Andrew Saxe
Erin Grant
about 1 month ago
Our #CCN2025 GAC debate w/ @gretatuckute.bsky.social, Gemma Roig (www.cvai.cs.uni-frankfurt.de), Jacqueline Gottlieb (gottlieblab.com), Klaus Oberauer, @mschrimpf.bsky.social & @brittawestner.bsky.social asks: 📊 What benchmarks are useful for cognitive science? 💭
2025.ccneuro.org/gac
1
49
17
reposted by
Andrew Saxe
Erin Grant
about 1 month ago
Are similar representations in neural nets evidence of shared computation? In new theory work w/ Lukas Braun (lukasbraun.com) & @saxelab.bsky.social, we prove that representational comparisons are ill-posed in general, unless networks are efficient.
@icmlconf.bsky.social
@cogcompneuro.bsky.social
3
72
20
reposted by
Andrew Saxe
Denis Lan
about 2 months ago
My first PhD paper - with @lhuntneuro.bsky.social and @summerfieldlab.bsky.social - is now out in @plosbiology.org! We ask: how do humans (and deep neural networks) navigate flexibly even in unfamiliar environments, such as a new city? Link:
plos.io/45uSwNm
🧵 (1/6)
1
30
11
reposted by
Andrew Saxe
Guido Meijer
about 2 months ago
🚨Pre-print alert🚨 We stimulated serotonin with optogenetics while doing large-scale Neuropixel recordings across the mouse brain. We found strong widespread modulation of neural activity, but no effect on the choices of the mouse 🐭 How is this possible? Strap in! (1/9) 👇🧵
doi.org/10.1101/2025...
Serotonin drives choice-independent reconfiguration of distributed neural activity
Serotonin (5-HT) is a central neuromodulator which is implicated in, amongst other functions, cognitive flexibility. 5-HT is released from the dorsal raphe nucleus (DRN) throughout nearly the entire f...
https://doi.org/10.1101/2025.08.01.668048
3
90
36
reposted by
Andrew Saxe
Juan Gallego
about 2 months ago
Very happy about my former mentor Sara Solla having received the Valentin Braitenberg Award for her lifelong contributions to computational neuroscience! Sara will be giving a lecture at the upcoming
@bernsteinneuro.bsky.social
meeting which you shouldn't miss.
bernstein-network.de/en/newsroom/...
Sara A. Solla receives the Valentin Braitenberg Award for Computational Neuroscience 2025 – Bernstein Network Computational Neuroscience
https://bernstein-network.de/en/newsroom/news/sara-solla-receives-valentin-braitenberg-award-2025/
1
73
16
reposted by
Andrew Saxe
Sainsbury Wellcome Centre
about 2 months ago
12 leading neuroscientists tackle a big question: Will we ever understand the brain? Their reflections span philosophy, complexity, and the limits of scientific explanation.
www.sainsburywellcome.org/web/blog/wil...
Illustration by
@gilcosta.bsky.social
&
@joanagcc.bsky.social
2
13
4
reposted by
Andrew Saxe
David Sussillo
about 2 months ago
Coming March 17, 2026! Just got my advance copy of Emergence — a memoir about growing up in group homes and somehow ending up in neuroscience and AI. It’s personal, it’s scientific, and it’s been a wild thing to write. Grateful and excited to share it soon.
7
183
37
reposted by
Andrew Saxe
Laurence Hunt
about 2 months ago
Our new paper is out! When navigating through an environment, how do we combine our general sense of direction with known landmark states? To explore this, @denislan.bsky.social used a task that allowed subjects (or neural networks) to choose either their next action or next state at each step.
add a skeleton here at some point
0
70
25
reposted by
Andrew Saxe
Clementine Domine 🍊 @CCN
about 2 months ago
🎓Thrilled to share I’ve officially defended my PhD!🥳 At @gatsbyucl.bsky.social, my research explored how prior knowledge shapes neural representations. I’m deeply grateful to my mentors, @saxelab.bsky.social and @caswell.bsky.social, my incredible collaborators, and everyone who supported me!
5
38
2
reposted by
Andrew Saxe
Claudia Clopath Lab
2 months ago
Trying to train RNNs in a biologically plausible (local) way? Well, try our new method using predictive alignment. Paper just out in Nature Communications. Toshitake Asabuki deserves all the credit!
www.nature.com/articles/s41...
https://www.nature.com/articles/s41467-025-61309-9.epdf?sharing_token=EuyHHIaDvrnv6e3b6saPLtRgN0jAjWel9jnR3ZoTv0N_dFCpkrqI8B6Eap2WyDj7lu2LQau1BlRBmaM6qidIpGzKkgISccdH8hgHgwkKrG7DjDAZB7c5PF-eiGaFPL9JsgsjYd5Hio4MCUqo4gany-bNSjetxlQYKogS1mQ-z2A%3D
1
56
16
reposted by
Andrew Saxe
Gatsby Computational Neuroscience Unit
2 months ago
🥳 Congratulations to Rodrigo Carrasco-Davison on passing his PhD viva with minor corrections! 🎉 📜 Principles of Optimal Learning Control in Biological and Artificial Agents.
0
16
1
reposted by
Andrew Saxe
Matteo Carandini
2 months ago
Welcome to Bluesky
@kenneth-harris.bsky.social
!
0
20
3
reposted by
Andrew Saxe
Athena Akrami
2 months ago
🎉 Heron is finally out @elife.bsky.social! Led by George Dimitriadis, with Ella Svahn & @macaskillaf.bsky.social 🧪 🧠 🐭 🤖 If you wonder why yet another tool for experimental pipelines, read the 🧵 below:
#neuroscience
#neuroskyence
#OpenSource
1/
elifesciences.org/articles/91915
1
38
12
Come chat about this at the poster @icmlconf.bsky.social, 11:00-13:30 on Wednesday in the West Exhibition Hall #W-902!
2 months ago
0
7
1
reposted by
Andrew Saxe
Gatsby Computational Neuroscience Unit
2 months ago
👋 Attending
#ICML2025
next week? Don't forget to check out work involving our researchers!
1
9
2
reposted by
Andrew Saxe
Blake Richards
2 months ago
Super excited to see this paper from Armin Lak & colleagues out! (I've seen @saxelab.bsky.social present it before.)
www.cell.com/cell/fulltex...
tl;dr: The learning trajectories that individual mice take correspond to different saddle points in a deep net's loss landscape. 🧠📈 🧪
#NeuroAI
Dopamine encodes deep network teaching signals for individual learning trajectories
Longitudinal tracking of long-term learning behavior and striatal dopamine reveals that dopamine teaching signals shape individually diverse yet systematic learning trajectories, captured mathematical...
https://www.cell.com/cell/fulltext/S0092-8674(25)00575-6
5
84
18
reposted by
Andrew Saxe
Tim Kietzmann
3 months ago
Exciting new preprint from the lab: “Adopting a human developmental visual diet yields robust, shape-based AI vision”. A most wonderful case where brain inspiration massively improved AI solutions. Work with @zejinlu.bsky.social @sushrutthorat.bsky.social and Radek Cichy
arxiv.org/abs/2507.03168
https://arxiv.org/abs/2507.03168
3
138
69
reposted by
Andrew Saxe
Rhodri Cusack
3 months ago
Thanks to fellow defenders @clionaod.bsky.social, Marc'Aurelio Ranzato, and @charvetcj.bsky.social.
Defending the foundation model view of infant development
www.sciencedirect.com/science/arti...
Defending the foundation model view of infant development
https://www.sciencedirect.com/science/article/pii/S1364661325001196
3
16
9
reposted by
Andrew Saxe
Rhodri Cusack
3 months ago
Not just one, but two fantastic chances to discuss how infant development can inform machine learning and vice-versa at CCN 2025 in Amsterdam!!! Satellite workshop
sites.google.com/view/child2m...
and Generative Adversarial Collaboration
sites.google.com/ccneuro.org/...
CCN 2025 Satellite Event
Background The human visual system is full of optimisations—mechanisms designed to extract the most useful information from a constant stream of incoming data. The field of neuro-AI has made significa...
https://sites.google.com/view/child2mlandback?usp=sharing
0
31
16
reposted by
Andrew Saxe
λ³🎲
3 months ago
Shunichi Amari has been awarded the 40th (2025) Kyoto Prize in recognition of his pioneering research in the fields of artificial neural networks, machine learning, and information geometry
www.riken.jp/pr/news/2025...
Honorary Researcher Shun-ichi Amari receives the Kyoto Prize
Honorary Researcher Shun-ichi Amari (primary affiliation: Specially Appointed Professor, Advanced Comprehensive Research Organization, Teikyo University) has been awarded the 40th (2025) Kyoto Prize (Advanced Technology category; field: Information Science) in recognition of his pioneering research in artificial neural networks, machine learning, and information geometry.
https://www.riken.jp/pr/news/2025/20250620_1/index.html
2
35
12
reposted by
Andrew Saxe
Alexandra Proca
3 months ago
How do task dynamics impact learning in networks with internal dynamics? Excited to share our ICML Oral paper on learning dynamics in linear RNNs! With @clementinedomine.bsky.social @mpshanahan.bsky.social and Pedro Mediano
openreview.net/forum?id=KGO...
Learning dynamics in linear recurrent neural networks
Recurrent neural networks (RNNs) are powerful models used widely in both machine learning and neuroscience to learn tasks with temporal dependencies and to model neural dynamics. However, despite...
https://openreview.net/forum?id=KGOcrIWYnx
1
34
12
reposted by
Andrew Saxe
Samuel Liebana
3 months ago
Does the brain learn by gradient descent? It's a pleasure to share our paper at @cp-cell.bsky.social, showing how mice learning over long timescales display key hallmarks of gradient descent (GD). The culmination of my PhD supervised by @laklab.bsky.social, @saxelab.bsky.social and Rafal Bogacz!
Dopamine encodes deep network teaching signals for individual learning trajectories
Longitudinal tracking of long-term learning behavior and striatal dopamine reveals that dopamine teaching signals shape individually diverse yet systematic learning trajectories, captured mathematical...
https://www.cell.com/cell/fulltext/S0092-8674(25)00575-6
3
71
20
reposted by
Andrew Saxe
Sainsbury Wellcome Centre
3 months ago
New research shows long-term learning is shaped by dopamine signals that act as partial reward prediction errors. The study in mice reveals how early behavioural biases predict individual learning trajectories. Find out more ⬇️
www.sainsburywellcome.org/web/blog/lon...
1
11
4
reposted by
Andrew Saxe
Sainsbury Wellcome Centre
3 months ago
Read the full paper ‘Dopamine encodes deep network teaching signals for individual learning trajectories’ in @cellpress.bsky.social ⬇️
www.cell.com/cell/fulltex...
@yulonglilab.bsky.social
@saxelab.bsky.social
@oxforddpag.bsky.social
@laklab.bsky.social
Dopamine encodes deep network teaching signals for individual learning trajectories
Longitudinal tracking of long-term learning behavior and striatal dopamine reveals that dopamine teaching signals shape individually diverse yet systematic learning trajectories, captured mathematical...
https://www.cell.com/cell/fulltext/S0092-8674(25)00575-6
0
6
3
reposted by
Andrew Saxe
Armin Lak
3 months ago
Our work, out at Cell, shows that the brain’s dopamine signals teach each individual a unique learning trajectory. Collaborative experiment-theory effort, led by Sam Liebana in the lab. The first experiment my lab started just shy of 6y ago & v excited to see it out:
www.cell.com/cell/fulltex...
7
208
73
reposted by
Andrew Saxe
Nico Schuck
4 months ago
One way to tackle a new task is to reuse solutions from the past. Check out Sam Hall-McMaster's latest finding that strategy reuse is accompanied by neural reactivation of prior solutions @plosbiology.org. Collab w/ M Tomov & @gershbrain.bsky.social
#neuroskyence
journals.plos.org/plosbiology/...
Neural evidence that humans reuse strategies to solve new tasks
Humans can apply solutions used in past problems to new problems. In this study, the authors reveal the neural correlates of this process, known as generalization, and show that humans apply past poli...
https://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.3003174
1
73
19
How does in-context learning emerge in attention models during gradient descent training? Sharing our new Spotlight paper @icmlconf.bsky.social: Training Dynamics of In-Context Learning in Linear Attention
arxiv.org/abs/2501.16265
Led by Yedi Zhang with @aaditya6284.bsky.social and Peter Latham
4 months ago
1
52
18
reposted by
Andrew Saxe
Tim Vogels
4 months ago
We just pushed “Memory by a 1000 rules” onto bioRxiv, where we use clever #ML to find #plasticity quadruplets (EE, EI, IE, II) that learn basic stability in spiking nets. Why is it cool? We find 1000s!! of solutions, and they don’t just stabilise. They #memorise!
www.biorxiv.org/content/10.1...
Memory by a thousand rules: Automated discovery of functional multi-type plasticity rules reveals variety & degeneracy at the heart of learning
Synaptic plasticity is the basis of learning and memory, but the link between synaptic changes and neural function remains elusive. Here, we used automated search algorithms to obtain thousands of str...
https://www.biorxiv.org/content/10.1101/2025.05.28.656584v1
3
133
51
reposted by
Andrew Saxe
Friedemann Zenke
4 months ago
1/6 Why does the brain maintain such precise excitatory-inhibitory balance? Our new preprint explores a provocative idea: small, targeted deviations from this balance may serve a purpose: to encode local error signals for learning.
www.biorxiv.org/content/10.1...
Led by @jrbch.bsky.social
4
174
59
reposted by
Andrew Saxe
Ted Underwood
4 months ago
Most of us in higher ed in the US are going to lose something over the next four years (opportunities, or funding, or time). But if we believe universities exist, in part, to preserve historical memory and create a space for dissent, our primary job right now is to fight and take the hit.
19
979
187
reposted by
Andrew Saxe
Stefano Sarao Mannelli
4 months ago
Our paper just came out in PRX! Congrats to Nishil Patel and the rest of the team* TL;DR: We analyse policy learning through the lens of statphys, revealing distinct scaling regimes with sharp transitions. 🔗
journals.aps.org/prx/abstract...
*Seb Lee
@sebgoldt.bsky.social
@saxelab.bsky.social
RL Perceptron: Generalization Dynamics of Policy Learning in High Dimensions
A solvable model for reinforcement learning, the RL perceptron, provides a mathematical framework to analyze learning dynamics, revealing key efficiency factors and a speed-accuracy trade-off that can...
https://journals.aps.org/prx/abstract/10.1103/PhysRevX.15.021051
0
11
5
reposted by
Andrew Saxe
Paul Frankland
4 months ago
Sharing a new paper from the lab. This paper, led by Sangyoon Ko, represents a merging of two longstanding research themes in the lab-- adult neurogenesis and systems consolidation.
rdcu.be/el18q
A short thread follows for those interested. 1/n
Systems consolidation reorganizes hippocampal engram circuitry
Nature - A study shows that loss of memory precision associated with systems consolidation can be explained by neurogenesis-dependent reorganization of engram circuitry within the hippocampus over...
https://rdcu.be/el18q
15
220
103
reposted by
Andrew Saxe
Prof Rick Adams
5 months ago
📢 Fantastic post doc job opportunity in my group, co-supervised by DeepMind's @mariaeckstein.bsky.social - now live! Ad here: tinyurl.com/26rafzdc - deadline May 29th. This is part of a very exciting collaboration with @melgaby.bsky.social and Matt Nour, funded by @wellcometrust.bsky.social.
UCL – University College London
UCL is consistently ranked as one of the top ten universities in the world (QS World University Rankings 2010-2022) and is No.2 in the UK for research power (Research Excellence Framework 2021).
https://tinyurl.com/26rafzdc
2
31
24
reposted by
Andrew Saxe
Lisa Schmors
5 months ago
🧠🤖 Computational Neuroscience summer school IMBIZO in Cape Town is open for applications again! 💻🧬 3 weeks of intense coursework & projects with support from expert tutors and faculty 📈Apply until July 1st! 🔗https://imbizo.africa/
1
36
33
reposted by
Andrew Saxe
Ruairidh McLennan Battleday
5 months ago
Today we honor one of computational #neuroscience’s founding parents, Professor Jay McClelland. Jay, along with Rumelhart and Hinton @nobelprize.bsky.social, is broadly thought to be responsible for ushering in the second wave of connectionism with their Parallel Distributed Processing book...
1
7
2
reposted by
Andrew Saxe
Qihong (Q) Lu
5 months ago
I’m thrilled to announce that I will start as a presidential assistant professor in Neuroscience at the City U of Hong Kong in Jan 2026! I have RA, PhD, and postdoc positions available! Come work with me on neural network models + experiments on human memory! RT appreciated! (1/5)
14
129
43
reposted by
Andrew Saxe
Clementine Domine 🍊 @CCN
5 months ago
Our paper "Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena" has been accepted as a position paper to ICML 2025!
arxiv.org/abs/2502.21009
These models offer a tractable path to understanding complex neural dynamics—before diving into full nonlinearity. 1/4
Position: Solve Layerwise Linear Models First to Understand Neural Dynamical Phenomena (Neural Collapse, Emergence, Lazy/Rich Regime, and Grokking)
In physics, complex systems are often simplified into minimal, solvable models that retain only the core principles. In machine learning, layerwise linear models (e.g., linear neural networks) act as ...
https://arxiv.org/abs/2502.21009
1
19
3
reposted by
Andrew Saxe
Tomás Ryan
5 months ago
Excited to share our new study: "Cold memories control whole-body thermoregulatory responses" by @andreamunozz.bsky.social, @aaron-douglas.bsky.social & team at @tcddublin.bsky.social, in collaboration with @lydialynch.bsky.social & @drchristineannd.bsky.social
www.nature.com/articles/s41...
Cold memories control whole-body thermoregulatory responses - Nature
Cold-sensitive engrams contribute to learned thermoregulation in mice that are returned to an environment in which they previously experienced a cold challenge, through a network formed betw...
https://www.nature.com/articles/s41586-025-08902-6
5
100
41
reposted by
Andrew Saxe
Ken Miller
5 months ago
A good step. Brown, Harvard, Yale, Princeton, Cornell, Penn, MIT among the 150+ signatories. But Columbia, Northwestern, Hopkins, Stanford, Caltech, UC except for Riverside, among those who fail to sign.
2
49
16
reposted by
Andrew Saxe
Alexander Mathis
5 months ago
My lab has several postdoctoral fellow and software engineering positions available. Please email me if you’re interested.
1
29
24
reposted by
Andrew Saxe
UCL NeuroAI
6 months ago
For our next UCL #NeuroAI online seminar, we are happy to host Dr Shaul Druckmann (@stanforduniversity.bsky.social, @stanfordpsy.bsky.social). 🗓️ Wed 16 April 2025 ⏰ 4-5pm BST Talk title: 'Relating circuit dynamics to computation: robustness and dimension-specific computation in cortical dynamics'
1
11
5
reposted by
Andrew Saxe
Blake Richards
5 months ago
Interested in foundation models for #neuroscience? Want to contribute to the development of the next-generation of multi-modal models? Come join us at IVADO in Montreal! We're hiring a full-time machine learning specialist for this work. Please share widely! #NeuroAI 🧠📈 🧪
1
57
32