A Erdem Sagtekin
@aesagtekin.bsky.social
theoretical neuroscience PhD student at Columbia
1/7 How should feedback signals influence a network during learning? Should they first adjust synaptic weights, which then indirectly change neural activity (as in backprop.)? Or should they first adjust neural activity to guide synaptic updates (e.g., target prop.)?
openreview.net/forum?id=xVI...
about 1 month ago
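To make the contrast concrete, here is a minimal sketch (my own toy, not the paper's model) of the two update orders in a two-layer linear network; the target-prop branch uses a simplified difference-target-style rule, and all variable names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, lr = 10, 20, 5, 0.1
W1 = rng.normal(size=(n_hid, n_in)) / np.sqrt(n_in)
W2 = rng.normal(size=(n_out, n_hid)) / np.sqrt(n_hid)
x, y = rng.normal(size=n_in), rng.normal(size=n_out)

h = W1 @ x                 # hidden activity
e = W2 @ h - y             # output error

# Backprop order: the error becomes weight updates directly; activity
# changes only on the next forward pass, via the new weights.
dW1_bp = np.outer(W2.T @ e, x)
dW2_bp = np.outer(e, h)

# Target-prop order: the error first sets a target *activity* for the
# hidden layer; each layer then learns locally toward its target.
h_tgt = h - lr * (W2.T @ e)        # feedback adjusts activity first
dW1_tp = np.outer(h - h_tgt, x)    # local delta rule pulling W1 @ x toward h_tgt
dW2_tp = np.outer(e, h)

W1, W2 = W1 - lr * dW1_tp, W2 - lr * dW2_tp
print("error norm after one target-prop step:",
      np.linalg.norm(W2 @ (W1 @ x) - y))
```

In the linear case the two branches give proportional updates; the papers' question is which ordering is the better description of what feedback does in the brain.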
reposted by
A Erdem Sagtekin
Owen Marschall
2 months ago
1/X Excited to present this preprint on multi-tasking, with
@david-g-clark.bsky.social
and Ashok Litwin-Kumar! Timely too, as “low-D manifold” has been trending again. (If you read through to the end, we escape Flatland and return to the glorious high-D world we deserve.)
www.biorxiv.org/content/10.6...
A theory of multi-task computation and task selection
Neural activity during the performance of a stereotyped behavioral task is often described as low-dimensional, occupying only a limited region in the space of all firing-rate patterns. This region has...
https://www.biorxiv.org/content/10.64898/2025.12.12.693832v1
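For a concrete handle on what "low-dimensional" means here, a quick sketch (mine, not from the paper) of the standard participation-ratio measure of dimensionality, PR = (sum_i lambda_i)^2 / sum_i lambda_i^2 over the covariance eigenvalues; rates built from a few latent dimensions give a PR near that latent count, far below N:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, D = 200, 5000, 5             # N neurons, T time points, D latent dims
latents = rng.normal(size=(T, D))  # low-D latent trajectory
mixing = rng.normal(size=(D, N))   # random embedding into neuron space
rates = latents @ mixing + 0.1 * rng.normal(size=(T, N))  # plus noise

lam = np.linalg.eigvalsh(np.cov(rates.T))  # covariance eigenvalues
pr = lam.sum() ** 2 / (lam ** 2).sum()     # participation ratio
print(f"participation ratio ~ {pr:.1f} out of N = {N}")   # close to D
```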
reposted by
A Erdem Sagtekin
Friedemann Zenke
9 months ago
1/6 Why does the brain maintain such precise excitatory-inhibitory balance? Our new preprint explores a provocative idea: small, targeted deviations from this balance may serve a purpose, encoding local error signals for learning.
www.biorxiv.org/content/10.1...
led by
@jrbch.bsky.social
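A cartoon of the idea (my own toy, not the preprint's model): if excitatory and inhibitory currents onto a unit cancel on average, the leftover current is a small, signed, purely local quantity, which is exactly the shape of signal a synaptic learning rule could read out. Here a simple inhibitory-plasticity rule consumes that residual to restore balance:

```python
import numpy as np

rng = np.random.default_rng(2)
n_e, n_i, lr = 100, 25, 0.01
w_e = rng.uniform(0, 1, n_e)           # fixed excitatory weights
w_i = rng.uniform(0, 1, n_i)           # plastic inhibitory weights

residuals = []
for _ in range(2000):
    r_e = rng.poisson(5.0, n_e)        # presynaptic excitatory rates
    r_i = rng.poisson(5.0, n_i)        # presynaptic inhibitory rates
    residual = w_e @ r_e - w_i @ r_i   # deviation from E-I balance
    residuals.append(residual)
    # The signed residual doubles as a local error signal: inhibitory
    # weights grow when E transiently exceeds I, restoring balance.
    w_i = np.clip(w_i + lr * residual * r_i / n_i, 0, None)

print("mean residual, first vs last 200 steps:",
      np.mean(residuals[:200]), np.mean(residuals[-200:]))
```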
reposted by
A Erdem Sagtekin
Matthijs Pals
about 1 year ago
How to find all fixed points in piecewise-linear recurrent neural networks (RNNs)? A short thread 🧵 In RNNs with N units and ReLU(x-b) activations, the phase space is partitioned into 2^N regions by hyperplanes at x=b. 1/7
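Before the rest of the thread, a brute-force version of that observation (my sketch, assuming continuous-time dynamics dx/dt = -x + W·ReLU(x - b)): inside the region where a fixed subset of units is active the dynamics are linear, so each of the 2^N regions yields at most one candidate fixed point, kept only if it actually lies in its own region:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
N = 6
W = 1.5 * rng.normal(size=(N, N)) / np.sqrt(N)
b = rng.normal(size=N)

fixed_points = []
for active in itertools.product([0, 1], repeat=N):
    D = np.diag(active).astype(float)  # which units are above threshold
    A = np.eye(N) - W @ D              # fixed point solves (I - W D) x = -W D b
    try:
        x = np.linalg.solve(A, -W @ D @ b)
    except np.linalg.LinAlgError:
        continue                       # singular region: no isolated fixed point
    # Keep the candidate only if it is consistent with its own region.
    if all((x[i] > b[i]) == bool(active[i]) for i in range(N)):
        fixed_points.append(x)

print(f"{len(fixed_points)} fixed points across {2**N} regions")
```

Enumerating all 2^N regions is exponential in N, which is presumably what the rest of the thread improves on.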
reposted by
A Erdem Sagtekin
David G. Clark
about 1 year ago
(1/5) Fun fact: several classic results in the stat. mech. of learning can be derived in a couple of lines of simple algebra! In this paper with Haim Sompolinsky, we simplify and unify derivations for high-dimensional convex learning problems using a bipartite cavity method.
arxiv.org/abs/2412.01110
Simplified derivations for high-dimensional convex learning problems
Statistical physics provides tools for analyzing high-dimensional problems in machine learning and theoretical neuroscience. These calculations, particularly those using the replica method, often invo...
https://arxiv.org/abs/2412.01110
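One such classic result, checked numerically (my illustration, independent of the paper's method): a perceptron can realize random ±1 labels on Gaussian patterns with high probability up to capacity alpha = P/N = 2 (Cover/Gardner), and separability is a linear-programming feasibility question:

```python
import numpy as np
from scipy.optimize import linprog

def separable(N, P, rng):
    X = rng.normal(size=(P, N))            # P random patterns in N dims
    y = rng.choice([-1.0, 1.0], size=P)    # random binary labels
    # Separable iff some w satisfies y_i (w . x_i) >= 1 for all i,
    # i.e. the LP with constraints -(y_i x_i) . w <= -1 is feasible.
    A_ub = -(y[:, None] * X)
    res = linprog(c=np.zeros(N), A_ub=A_ub, b_ub=-np.ones(P),
                  bounds=[(None, None)] * N, method="highs")
    return res.status == 0                 # 0 = feasible optimum found

rng = np.random.default_rng(4)
N = 50
for alpha in [1.0, 1.5, 2.0, 2.5, 3.0]:
    hits = sum(separable(N, int(alpha * N), rng) for _ in range(20))
    print(f"alpha = {alpha:.1f}: separable in {hits}/20 trials")
```

For large N the fraction of separable trials should drop from 1 to 0 around alpha = 2, with a finite-size crossover at N = 50.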
This list likely reflects mainly my interests and circle, and I'm sure I've missed many people, but I gave it a try (I'll keep editing it slowly until it reaches 150/150):
go.bsky.app/7VFUkdn
(also, I tried but couldn't remove my profile...)
over 1 year ago
I enjoyed reading the geometry-of-plasticity paper and felt that something important was coming; this is it:
over 1 year ago