A Erdem Sagtekin
@aesagtekin.bsky.social
📤 705
📥 542
📝 9
theoretical neuroscience phd student at columbia
reposted by
A Erdem Sagtekin
Alex Williams
19 days ago
Cosyne invited me to give a long tutorial (4 hours!) on methods to quantify differences between high-d neural recordings across animals, brain regions, deep neural nets, etc. The recording is up on YouTube. I hope it inspires more research on this fundamental topic!
www.youtube.com/watch?v=n44x...
Cosyne 2026 - Cosyne Tutorial: Comparative Analysis of Neural Population Codes
YouTube video by Cosyne Talks
https://www.youtube.com/watch?v=n44xqrZ5j9U
3
160
57
reposted by
A Erdem Sagtekin
David G. Clark
about 1 month ago
I am totally pumped about this new work. "Task-trained RNNs" are a powerful and influential framework in neuroscience, but have lacked a firm theoretical footing. This work provides one, and makes direct contact with the classical theory of random RNNs:
www.biorxiv.org/content/10.6...
2
87
36
1/7 How should feedback signals influence a network during learning? Should they first adjust synaptic weights, which then indirectly change neural activity (as in backprop.)? Or should they first adjust neural activity to guide synaptic updates (e.g., target prop.)?
openreview.net/forum?id=xVI...
3 months ago
1
40
5
reposted by
A Erdem Sagtekin
Owen Marschall
4 months ago
1/X Excited to present this preprint on multi-tasking, with
@david-g-clark.bsky.social
and Ashok Litwin-Kumar! Timely too, as “low-D manifold” has been trending again. (If you read through to the end, we escape Flatland and return to the glorious high-D world we deserve.)
www.biorxiv.org/content/10.6...
A theory of multi-task computation and task selection
Neural activity during the performance of a stereotyped behavioral task is often described as low-dimensional, occupying only a limited region in the space of all firing-rate patterns. This region has...
https://www.biorxiv.org/content/10.64898/2025.12.12.693832v1
1
85
22
reposted by
A Erdem Sagtekin
Friedemann Zenke
11 months ago
1/6 Why does the brain maintain such precise excitatory-inhibitory balance? Our new preprint explores a provocative idea: Small, targeted deviations from this balance may serve a purpose: to encode local error signals for learning.
www.biorxiv.org/content/10.1...
led by
@jrbch.bsky.social
5
181
60
reposted by
A Erdem Sagtekin
Matthijs Pals
over 1 year ago
How to find all fixed points in piecewise-linear recurrent neural networks (RNNs)? A short thread 🧵 In RNNs with N units and ReLU(x−b) activations, the phase space is partitioned into 2^N regions by hyperplanes at x=b 1/7
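The region-enumeration idea above can be sketched in a few lines of NumPy. This is a toy illustration, not the thread's actual method: it assumes the fixed-point equation x = W·ReLU(x − b) (the thread's exact RNN dynamics may differ), and it only scales to small N since it visits all 2^N regions. Within each region the active units are fixed, so the equation becomes linear and solvable directly; a candidate is kept only if its sign pattern matches the assumed region.

```python
import itertools
import numpy as np

def fixed_points(W, b):
    """Enumerate all 2^N activation regions of x = W @ relu(x - b)
    and solve the linear fixed-point equation in each region."""
    N = W.shape[0]
    fps = []
    for active in itertools.product([0, 1], repeat=N):
        # In this region, relu(x - b) = D @ (x - b) with D a 0/1 diagonal,
        # so the fixed-point equation is (I - W D) x = -W D b.
        D = np.diag(active).astype(float)
        A = np.eye(N) - W @ D
        if abs(np.linalg.det(A)) < 1e-12:
            continue  # singular: no isolated fixed point in this region
        x = np.linalg.solve(A, -W @ D @ b)
        # Consistency check: the assumed active set must match x > b.
        if np.all((x > b) == np.array(active, dtype=bool)):
            fps.append(x)
    return fps
```

For example, with W = [[0.5]] and b = [−1], the only consistent solution is x = 1 (since 0.5·ReLU(1 + 1) = 1); the inactive-region candidate x = 0 is rejected because 0 > −1 contradicts the assumed inactivity.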
1
63
12
reposted by
A Erdem Sagtekin
David G. Clark
over 1 year ago
(1/5) Fun fact: Several classic results in the stat. mech. of learning can be derived in a couple lines of simple algebra! In this paper with Haim Sompolinsky, we simplify and unify derivations for high-dimensional convex learning problems using a bipartite cavity method.
arxiv.org/abs/2412.01110
Simplified derivations for high-dimensional convex learning problems
Statistical physics provides tools for analyzing high-dimensional problems in machine learning and theoretical neuroscience. These calculations, particularly those using the replica method, often invo...
https://arxiv.org/abs/2412.01110
2
57
17
This list mainly reflects my own interests and circle, and I’m sure I’ve missed many people, but I gave it a try (I’ll be slowly editing it until it reaches 150/150):
go.bsky.app/7VFUkdn
(also, I tried but couldn't remove my profile...)
over 1 year ago
48
81
59
I enjoyed reading the geometry-of-plasticity paper and felt that something important was coming. This is it:
over 1 year ago
1
4
1