Reduan Achtibat
@reduanachtibat.bsky.social
PhD student @ Fraunhofer HHI. XAI and Interpretability for NLP & Vision.
reposted by Reduan Achtibat
Sebastian Lapuschkin
10 months ago
Had enough of the fake "sources" "cited" by ChatGPT? We have a solution in the form of low-cost causal citations for LLMs. Go check it out!
arxiv.org/abs/2505.15807
Thanks to my amazing co-authors @pkhdipraja.bsky.social, @reduanachtibat.bsky.social, Thomas Wiegand and Wojciech Samek!
reposted by Reduan Achtibat
Patrick Kahardipraja
11 months ago
ICL allows LLMs to adapt to new tasks and, at the same time, to access external knowledge through RAG. How does the latter work? TL;DR: we find that certain attention heads perform distinct, specialized operations on the input prompt for QA!
arxiv.org/abs/2505.15807
1/
The Atlas of In-Context Learning: How Attention Heads Shape In-Context Retrieval Augmentation
Large language models are able to exploit in-context learning to access external knowledge beyond their training data through retrieval-augmentation. While promising, its inner workings remain unclear...
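The head-level view the thread describes can be illustrated with a toy multi-head attention computation — a minimal NumPy sketch, not the paper's actual analysis; all shapes, projections, and the random data here are made up for illustration:

```python
# Toy sketch (NOT the paper's method): per-head attention weights over a
# prompt, showing that different heads can focus on different positions.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model, n_heads = 6, 16, 4      # made-up sizes for illustration
d_head = d_model // n_heads

x = rng.normal(size=(seq_len, d_model))           # token representations
Wq = rng.normal(size=(n_heads, d_model, d_head))  # per-head query projections
Wk = rng.normal(size=(n_heads, d_model, d_head))  # per-head key projections

def softmax(a, axis=-1):
    e = np.exp(a - a.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# per-head attention weights: shape (n_heads, seq_len, seq_len)
q = np.einsum("sd,hde->hse", x, Wq)
k = np.einsum("sd,hde->hse", x, Wk)
attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d_head))

# which source position each head focuses on, per query position;
# with distinct projections the rows can differ head to head
focus = attn.argmax(axis=-1)
print(focus)
```

Inspecting such per-head weight maps (e.g. via `output_attentions=True` in Hugging Face Transformers on a real model) is one common starting point for the kind of head-level analysis the post refers to.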