Lukas Galke
@lukasgalke.bsky.social
Assistant Professor @SDU tracing connectionist mechanisms.
https://lgalke.github.io
Pinned post
🔥 Now finally out in Nature Communications: "Deep neural networks and humans both benefit from compositional language structure", with Yoav Ram and
@limorraviv.bsky.social
Paper link: rdcu.be/d5f2e 🧵⬇️
Deep neural networks and humans both benefit from compositional language structure
Nature Communications - This study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on languages with more structured linguistic input, resulting in...
https://rdcu.be/d5f2e
about 1 year ago
reposted by
Lukas Galke
Danial Namazifard
5 months ago
🎉 Excited to share our latest work, "Isolating Culture Neurons in Multilingual Large Language Models". 💻 Data & code:
github.com/namazifard/C...
Preprint:
arxiv.org/abs/2508.02241
Isolating Culture Neurons in Multilingual Large Language Models
Language and culture are deeply intertwined, yet it is so far unclear how and where multilingual large language models encode culture. Here, we extend upon an established methodology for identifying l...
https://arxiv.org/abs/2508.02241
reposted by
Lukas Galke
Arianna Bisazza
10 months ago
RAG is a powerful way to improve LLMs' answering abilities across many languages. But how do LLMs deal with multilingual contexts? Do they answer consistently when the retrieved info is provided to them in different languages? Joint work w/
@jiruiqi.bsky.social
& Raquel Fernández. See thread!
reposted by
Lukas Galke
Ansgar Scherp
12 months ago
A simple trick improves embedding retrieval performance, even without further training: ZCA whitening increases the isotropy of the embedding space and thereby helps retrieval. Paper by Andor Diera with
@lukasgalke.bsky.social
at ESANN 2025. Preprint:
arxiv.org/abs/2411.17538
Isotropy Matters: Soft-ZCA Whitening of Embeddings for Semantic Code Search
Low isotropy in an embedding space impairs performance on tasks involving semantic inference. Our study investigates the impact of isotropy on semantic code search performance and explores post-proces...
https://arxiv.org/abs/2411.17538
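For intuition, here is a minimal NumPy sketch of plain ZCA whitening applied to a matrix of embeddings. The paper's Soft-ZCA variant softens the eigenvalue scaling, so treat this as the generic idea rather than the exact method; the function name and epsilon are illustrative.

```python
import numpy as np

def zca_whiten(X: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """Apply a ZCA whitening transform to embeddings X (n_samples x dim).

    Centering plus W = U diag(1 / sqrt(eigval + eps)) U^T drives the
    covariance toward the identity, i.e. makes the space more isotropic.
    """
    Xc = X - X.mean(axis=0, keepdims=True)
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)  # covariance is symmetric
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W

# Usage: fit the transform on the corpus embeddings once; in a retrieval
# setting, queries would be centered and projected with the same W.
embeddings = np.random.randn(1000, 768)  # stand-in for model embeddings
whitened = zca_whiten(embeddings)
```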
reposted by
Lukas Galke
Kristian Kersting
about 1 year ago
Thrilled to share our
#ICLR2025
work on Meta-Causal States! 🎉 Causal graphs evolve with dynamic systems & agent actions. We show how to cluster causal models by qualitative behavior, revealing hidden dynamics & emergent relationships.
#Causality
#ML
https://arxiv.org/abs/2410.13054
reposted by
Lukas Galke
Max Planck Institute for Psycholinguistics
about 1 year ago
Deep neural networks and humans both benefit from compositional language structure. New paper by
@lukasgalke.bsky.social
, Yoav Ram, and
@limorraviv.bsky.social
.
doi.org/10.1038/s414...
Deep neural networks and humans both benefit from compositional language structure - Nature Communications
This study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on languages with more structured linguistic input, resulting in closer alignment with human ...
https://doi.org/10.1038/s41467-024-55158-1
Two more days left to apply for PhD positions on training multilingual language models at the Centre for Machine Learning in the Department of Mathematics and Computer Science (IMADA), University of Southern Denmark (SDU).
tinyurl.com/dfm2025phd
Application deadline: Dec 19, 2024
Several PhD scholarships in Artificial Intelligence
Application deadline: 19 December 2024 at 23:59 hours local Danish time
https://tinyurl.com/dfm2025phd
about 1 year ago
reposted by
Lukas Galke
Yoav Goldberg
about 1 year ago
Tell me about LLM tool-use best practices. I know the high level, and want to learn about implementation/prompting details, e.g.:
- how do you best feed in the tool specs or DSL to the LLM?
- how do you ask it to indicate a tool use (which wrapper / indicator)?
- how do you ask for nested calls?
etc.
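For concreteness, one common convention (a hedged sketch, not any particular vendor's API): tool specs go into the system prompt as JSON Schemas, the model is told to wrap calls in sentinel tags, and nested calls fall out of looping until no tag appears. All names below are hypothetical.

```python
import json

# Hypothetical tool spec, serialized as JSON Schema into the system prompt.
TOOLS = [{
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

SYSTEM_PROMPT = (
    "You may call tools. Available tools (JSON Schema):\n"
    + json.dumps(TOOLS, indent=2)
    + "\nTo call a tool, reply with exactly one line of the form:\n"
    + '<tool_call>{"name": "...", "arguments": {...}}</tool_call>\n'
    + "After receiving a <tool_result>...</tool_result> message, you may call again."
)

def extract_tool_call(completion: str):
    """Parse a wrapped tool call out of a model completion, if present."""
    start = completion.find("<tool_call>")
    end = completion.find("</tool_call>")
    if start == -1 or end == -1:
        return None  # plain answer, no tool use this turn
    return json.loads(completion[start + len("<tool_call>"):end])

# A runner loop executes the parsed call, appends the result as a new
# <tool_result> message, and re-queries the model until no tag is emitted;
# nesting then comes for free from the loop, not from special syntax.
```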
We have some openings for PhD/Postdoc positions on multilingual language modeling at SDU's Centre for Machine Learning, Denmark. Topics range from the core of pre-training and instruction tuning to adjacent areas such as efficient language modeling. Please consider applying :)
about 1 year ago
reposted by
Lukas Galke
about 1 year ago
Research positions on LLMs available at the SDU Centre for ML:
tinyurl.com/dfm2025phd
tinyurl.com/dfm2025postdoc
Several PhD scholarships in Artificial Intelligence
Application deadline: 19 December 2024 at 23:59 hours local Danish time
https://tinyurl.com/dfm2025phd
I'm Lukas, working on machine learning and natural language processing. I'm particularly interested in interpretability of language models, efficient language models, continual learning, OOD generalization, and machine communication. I hope to find a community like the ex-Twitter ML community here.
about 1 year ago