Stephanie Chan
@scychan.bsky.social
944 followers · 276 following · 15 posts
Staff Research Scientist at Google DeepMind. Artificial and biological brains 🤖 🧠
reposted by Stephanie Chan
Andrew Lampinen
3 months ago
In neuroscience, we often try to understand systems by analyzing their representations, using tools like regression or RSA. But are these analyses biased towards discovering a subset of what a system represents? If you're interested in this question, check out our new commentary! Thread:
5 replies · 163 reposts · 53 likes
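The post above mentions RSA (representational similarity analysis) as one of the standard tools for comparing representations. A minimal sketch of the idea, using hypothetical random data in place of real model activations or neural recordings:

```python
# Minimal RSA sketch: compare two systems' representations by rank-correlating
# their representational dissimilarity matrices (RDMs). The data here is
# random and purely illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 20
reps_a = rng.normal(size=(n_stimuli, 64))   # e.g. one model layer's activations
reps_b = rng.normal(size=(n_stimuli, 128))  # e.g. recordings from another system

# Condensed RDMs: pairwise correlation distance between stimulus representations.
rdm_a = pdist(reps_a, metric="correlation")
rdm_b = pdist(reps_b, metric="correlation")

# RSA score: Spearman rank correlation between the two RDMs.
rho, _ = spearmanr(rdm_a, rdm_b)
print(round(rho, 3))
```

Because RSA only compares distance structure, it is invariant to the two systems' dimensionality (64 vs. 128 here), which is exactly what makes questions about its biases interesting.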
Great new paper by
@jessegeerts.bsky.social
, looking at a certain type of generalization in transformers -- transitive inference -- and what conditions induce it.
5 months ago
0 replies · 1 repost · 0 likes
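The transitive-inference setup the post refers to can be sketched in a few lines: train on adjacent pairs from a hidden order, then test on unseen non-adjacent pairs that can only be answered by chaining the premises. The item labels are illustrative, not from the paper:

```python
# Transitive inference, sketched: given a hidden order A > B > C > D > E,
# training shows only adjacent "premise" pairs; the test probes unseen
# non-adjacent pairs (e.g. A vs. C), which require transitivity.
items = ["A", "B", "C", "D", "E"]  # hidden order, best to worst

train_pairs = [(items[i], items[i + 1]) for i in range(len(items) - 1)]
test_pairs = [(items[i], items[j])
              for i in range(len(items))
              for j in range(i + 2, len(items))]

print(train_pairs)    # adjacent pairs seen during training
print(test_pairs[0])  # ('A', 'C'): answerable only via transitive chaining
```

Whether a transformer generalizes to the held-out pairs, and under what training conditions, is the question the paper studies.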
New paper: Generalization from context often outperforms generalization from finetuning. And you might get the best of both worlds by spending extra compute and training time to augment finetuning.
6 months ago
0 replies · 4 reposts · 0 likes
New work led by
@aaditya6284.bsky.social
"Strategy coopetition explains the emergence and transience of in-context learning in transformers." We find some surprising things!! E.g. that circuits can simultaneously compete AND cooperate ("coopetition") π― π§΅π
8 months ago
1 reply · 9 reposts · 4 likes
Sadly, we have lost a brilliant researcher and colleague, Felix Hill. Please see this note, where I have tried to compile some of his writings:
docs.google.com/document/d/1...
For Felix
Devastatingly, we have lost a bright light in our field. Felix Hill was not only a deeply insightful thinker -- he was also a generous, thoughtful mentor to many researchers. He majorly changed my lif...
https://docs.google.com/document/d/1LrMKQtDald0D3sKcD3prMY2eTOrrKMxuKeKT9UnrF-0/edit?usp=drivesdk
10 months ago
0 replies · 17 reposts · 1 like
reposted by Stephanie Chan
Andrew Lampinen
11 months ago
What counts as in-context learning (ICL)? Typically, you might think of it as learning a task from a few examples. However, we've just written a perspective (
arxiv.org/abs/2412.03782
) suggesting interpreting a much broader spectrum of behaviors as ICL! Quick summary thread: 1/7
The broader spectrum of in-context learning
The ability of language models to learn a task from a few examples in context has generated substantial interest. Here, we provide a perspective that situates this type of supervised few-shot learning...
https://arxiv.org/abs/2412.03782
2 replies · 123 reposts · 32 likes
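The "learning a task from a few examples in context" framing above can be made concrete with a toy few-shot prompt, where the task (here, antonyms) is specified purely by in-context examples. The pairs are illustrative, not from the paper:

```python
# Toy few-shot ICL prompt: the task is defined only by the in-context
# example pairs; a model is expected to complete the final query by
# inferring the pattern. Example pairs are purely illustrative.
examples = [("hot", "cold"), ("tall", "short"), ("fast", "slow")]
query = "light"

prompt = "\n".join(f"{a} -> {b}" for a, b in examples)
prompt += f"\n{query} ->"
print(prompt)
```

The perspective's point is that this supervised few-shot format is just one end of a much broader spectrum of behaviors that can be read as learning from context.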
reposted by Stephanie Chan
NoΓ©mi ΓltetΕ
11 months ago
Introducing the :milkfoamo: emoji
0 replies · 1 repost · 1 like
I won't be at NeurIPS this week. Let's grab coffee if you want to fomo-commiserate with me.
11 months ago
0 replies · 12 reposts · 1 like
Hello hello. Testing testing 123
11 months ago
3 replies · 9 reposts · 0 likes