Lukas Galke
@lukasgalke.bsky.social
📤 112
📥 367
📝 10
Assistant Professor @SDU tracing connectionist mechanisms.
https://lgalke.github.io
pinned post!
🔥 Now finally out in Nature Communications: Deep neural networks and humans both benefit from compositional structure with Yoav Ram and
@limorraviv.bsky.social
Paper link right away:
rdcu.be/d5f2e
🧵⬇️
Deep neural networks and humans both benefit from compositional language structure
Nature Communications - This study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on languages with more structured linguistic input, resulting in...
https://rdcu.be/d5f2e
9 months ago
1
10
4
reposted by
Lukas Galke
Arianna Bisazza
5 months ago
RAG is a powerful way to improve LLMs' answering abilities across many languages. But how do LLMs deal with multilingual contexts? Do they answer consistently when the retrieved info is provided to them in different languages? Joint work w/
@jiruiqi.bsky.social
& Raquel_Fernández. See thread! ⤵️
0
6
2
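(For illustration, a minimal sketch of the setup the post above describes: the question and the retrieved passage may be in different languages, and both go into a single prompt. The template and the build_rag_prompt function are assumptions for illustration, not taken from the paper.)

```python
# Minimal multilingual RAG prompt sketch. The prompt template and function
# name are illustrative assumptions, not the paper's actual setup.

def build_rag_prompt(question: str, passages: list[str]) -> str:
    """Concatenate retrieved passages (in any language) with the question."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# English question, Danish passage: does the model answer consistently
# compared to receiving the same passage in English?
prompt = build_rag_prompt(
    "When was the University of Southern Denmark founded?",
    ["Syddansk Universitet blev grundlagt i 1998 ved en fusion."],
)
print(prompt)
```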
reposted by
Lukas Galke
Ansgar Scherp
7 months ago
🗞️ A simple trick improves embedding retrieval performance even without further training: ZCA whitening increases the isotropy of the embedding space and thereby helps retrieval. Paper by Andor Diera with
@lukasgalke.bsky.social
at ESANN 2025. Preprint:
arxiv.org/abs/2411.17538
Isotropy Matters: Soft-ZCA Whitening of Embeddings for Semantic Code Search
Low isotropy in an embedding space impairs performance on tasks involving semantic inference. Our study investigates the impact of isotropy on semantic code search performance and explores post-proces...
https://arxiv.org/abs/2411.17538
0
1
1
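(A minimal NumPy sketch of ZCA whitening applied post-hoc to embeddings, assuming rows are vectors. The eps term below stands in for the "soft" regularization of small eigenvalues; see the paper for the exact Soft-ZCA formulation.)

```python
import numpy as np

def zca_whiten(X: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    """ZCA-whiten a matrix of embeddings (rows = vectors).

    Whitening decorrelates dimensions and equalizes their variance,
    which increases the isotropy of the embedding space. A larger eps
    gives a "softer" whitening (my reading of the Soft-ZCA idea).
    """
    Xc = X - X.mean(axis=0)                      # center the embeddings
    cov = Xc.T @ Xc / (len(Xc) - 1)              # covariance of dimensions
    eigvals, eigvecs = np.linalg.eigh(cov)       # symmetric eigendecomposition
    W = eigvecs @ np.diag(1.0 / np.sqrt(eigvals + eps)) @ eigvecs.T
    return Xc @ W                                # rotate back: ZCA, not PCA

# Whitened embeddings can then be used for retrieval with no retraining:
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 128)) * rng.uniform(0.1, 5.0, size=128)
white = zca_whiten(emb)
print(np.round(np.cov(white.T)[:3, :3], 2))      # ~identity covariance
```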
reposted by
Lukas Galke
Kristian Kersting
8 months ago
Thrilled to share our
#ICLR2025
work on Meta-Causal States! 🌟 Causal graphs evolve with dynamic systems & agent actions. We show how to cluster causal models by qualitative behavior, revealing hidden dynamics & emergent relationships 🚀
#Causality
#ML
https://arxiv.org/abs/2410.13054
0
12
6
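(A toy sketch of the general idea only: grouping the causal graphs a dynamic system passes through into recurring states. The graphs, the flattening, and plain k-means below are placeholder choices for illustration, not the paper's algorithm.)

```python
import numpy as np
from sklearn.cluster import KMeans

# Each "causal state" is a directed adjacency matrix over 3 variables.
regime_a = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # chain X->Y->Z
regime_b = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])  # fork  Y<-X->Z

# A trajectory of estimated causal graphs with small estimation noise.
rng = np.random.default_rng(0)
graphs = [g + rng.normal(0, 0.05, g.shape)
          for g in ([regime_a] * 5 + [regime_b] * 5)]

# Flatten each graph and cluster: graphs with the same qualitative
# behavior should land in the same cluster ("meta-causal state").
features = np.stack([g.ravel() for g in graphs])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(labels)  # e.g. [0 0 0 0 0 1 1 1 1 1]
```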
reposted by
Lukas Galke
Max Planck Institute for Psycholinguistics
9 months ago
Deep neural networks and humans both benefit from compositional language structure. New paper by
@lukasgalke.bsky.social
, Yoav Ram, and
@limorraviv.bsky.social.
doi.org/10.1038/s414...
Deep neural networks and humans both benefit from compositional language structure - Nature Communications
This study demonstrates that deep neural networks, like humans, show a learnability advantage when trained on languages with more structured linguistic input, resulting in closer alignment with human ...
https://doi.org/10.1038/s41467-024-55158-1
1
8
5
Two more days left to apply for PhD positions on training multilingual language models at the Centre for Machine Learning in the Department of Mathematics and Computer Science (IMADA), University of Southern Denmark (SDU).
tinyurl.com/dfm2025phd
Application deadline: Dec 19, 2024
Several PhD scholarships in Artificial Intelligence
Application deadline: 19 December 2024 at 23:59 hours local Danish time
https://tinyurl.com/dfm2025phd
10 months ago
0
2
2
reposted by
Lukas Galke
Yoav Goldberg
10 months ago
Tell me about LLM tool-use best practices. I know the high level, and want to learn about implementation/prompting details, e.g.:
- how do you best feed the tool specs or DSL to the LLM?
- how do you ask it to indicate a tool use (which wrapper / indicator)?
- how do you ask for nested calls?
etc.
9
38
4
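(One common convention, sketched as an illustration rather than a definitive best practice: tool specs go into the system prompt as JSON schemas, the model signals a call with a <tool_call>...</tool_call> wrapper, and nested calls are handled by looping execute-and-feed-back. The template and names like run_turn are hypothetical.)

```python
import json
import re

# Tool specs as JSON schemas, embedded in the system prompt.
TOOLS = [{
    "name": "get_weather",
    "description": "Current weather for a city.",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
}]

SYSTEM_PROMPT = (
    "You may call these tools:\n" + json.dumps(TOOLS, indent=2) +
    '\nTo call one, reply with <tool_call>{"name": ..., "arguments": ...}'
    "</tool_call> and nothing else."
)

# The wrapper the model uses to indicate a tool call.
CALL_RE = re.compile(r"<tool_call>(.*?)</tool_call>", re.DOTALL)

def run_turn(model_output: str) -> str | None:
    """Return the tool result to feed back, or None if no call was made."""
    match = CALL_RE.search(model_output)
    if match is None:
        return None                      # plain answer, conversation is done
    call = json.loads(match.group(1))
    if call["name"] == "get_weather":    # dispatch to the (stubbed) tool
        return json.dumps({"city": call["arguments"]["city"], "temp_c": 7})
    raise ValueError(f"unknown tool {call['name']}")

# The runtime loop appends the result as a new message and queries again,
# which also covers nested calls (the model can call another tool next).
print(run_turn('<tool_call>{"name": "get_weather", '
               '"arguments": {"city": "Odense"}}</tool_call>'))
```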
We have some openings for PhD/Postdoc positions on multilingual language modeling at SDU's Centre for Machine Learning, Denmark. Topics range from the core of pre-training and instruction tuning to adjacent areas such as efficient language modeling. Please consider applying :)
10 months ago
1
0
0
reposted by
Lukas Galke
10 months ago
Research positions on LLMs are available at the SDU Centre for ML:
tinyurl.com/dfm2025phd
tinyurl.com/dfm2025postdoc
Several PhD scholarships in Artificial Intelligence
Application deadline: 19 December 2024 at 23:59 hours local Danish time
https://tinyurl.com/dfm2025phd
0
2
2
I'm Lukas, working on machine learning and natural language processing. I'm particularly interested in interpretability of language models, efficient language models, continual learning, out-of-distribution generalization, and machine communication. I hope to find a community like the ex-Twitter ML community here.
10 months ago
1
6
0