Yanai Elazar
@yanai.bsky.social
Assistant Professor at Bar-Ilan University
https://yanaiela.github.io/
You had one job!
15 days ago
0
7
1
reposted by
Yanai Elazar
Chantal
about 2 months ago
"AI slop" seems to be everywhere, but what exactly makes text feel like "slop"? In our new work (w/
@tuhinchakr.bsky.social
, Diego Garcia-Olano,
@byron.bsky.social
) we provide a systematic attempt at measuring AI "slop" in text!
arxiv.org/abs/2509.19163
🧵 (1/7)
1
31
17
reposted by
Yanai Elazar
Naomi Saphra
about 2 months ago
I did a Q&A with Quanta about interpretability and training dynamics! I got to talk about a bunch of research hobby horses and how I got into them.
2
66
13
The new Tahoe UI is so bad. Everything just looks so blurred...
about 2 months ago
1
1
0
Interesting opportunity for a research visit through the Azrieli foundation -
azrielifoundation.org/azrieli-fell...
Let me know if you're interested!
Call for Applications for the Azrieli International Visiting PhD Fellowship - The Azrieli Foundation
The Azrieli International Visiting PhD Fellowship offers outstanding international PhD candidates the opportunity to conduct short-term research in Israel, fostering academic collaboration and strengt...
https://azrielifoundation.org/azrieli-fellows-news/call-for-applications-for-the-azrieli-international-visiting-phd-fellowship-2/
about 2 months ago
0
1
0
Organizing a workshop? Check out our compiled materials for organizing one:
www.bigpictureworkshop.com/open-workshop
(and hopefully we'll be back for another iteration of the Big Picture next year w/ Allyson Ettinger,
@norakassner.bsky.social
,
@sebruder.bsky.social
)
Big Picture Workshop - Open Workshop
Open sourcing the workshop
https://www.bigpictureworkshop.com/open-workshop
2 months ago
0
17
3
I'm excited to share that I'm joining Bar-Ilan University as an assistant professor!
3 months ago
6
58
0
A strange trend I've noticed at
#ACL2025
is that people are hesitant to reach out to the authors of papers and other "academic products". This is unfortunate for both parties! A simple email can save the sender a lot of time, and it's also one of my favorite kinds of email to receive!
3 months ago
0
6
0
reposted by
Yanai Elazar
Antoine Bosselut
3 months ago
The EPFL NLP lab is looking to hire a postdoctoral researcher on the topic of designing, training, and evaluating multilingual LLMs:
docs.google.com/document/d/1...
Come join our dynamic group in beautiful Lausanne!
EPFL NLP Postdoctoral Scholar Posting - Swiss AI LLMs
The EPFL Natural Language Processing (NLP) lab is looking to hire a postdoctoral researcher candidate in the area of multilingual LLM design, training, and evaluation. This postdoctoral position is as...
https://docs.google.com/document/d/1m0hE-0kfCNP29lFZptEWryV9SLAI78pF7Ua0SfZm-PM/edit?tab=t.0
0
21
13
reposted by
Yanai Elazar
Pietro Lesci
3 months ago
Had a really great and fun time with
@yanai.bsky.social
, Niloofar Mireshghallah, and Reza Shokri discussing memorisation at the
@l2m2workshop.bsky.social
panel. Thanks to the entire organising team and attendees for making this such a fantastic workshop!
#ACL2025
0
8
1
I had a lot of fun contemplating memorization questions at the
@l2m2workshop.bsky.social
panel yesterday together with Niloofar Mireshghallah and Reza Shokri, moderated by
@pietrolesci.bsky.social
who did a fantastic job!
#ACL2025
3 months ago
1
12
3
reposted by
Yanai Elazar
Shanshan Xu
3 months ago
I'll present our work w/
@santosh-tyss.bsky.social
@yanai.bsky.social
@barbaraplank.bsky.social
on LLMs' memorization of distributions of political leanings in their pretraining data! Catch us at the L2M2 workshop
@l2m2workshop.bsky.social
#ACL2025
tomorrow, Aug 1, 14:00–15:30
arxiv.org/pdf/2502.18282
0
6
2
reposted by
Yanai Elazar
Ai2
3 months ago
Ai2 is excited to be at
#ACL2025
in Vienna, Austria this week. Come say hello, meet the team, and chat about the future of NLP. See you there!
0
9
3
It's crazy that people give more than a single invited talk during the same conference (diff workshops). A single talk (done right) is challenging enough.
4 months ago
1
10
0
reposted by
Yanai Elazar
Leqi Liu
4 months ago
What if you could understand and control an LLM by studying its *smaller* sibling? Our new paper introduces the Linear Representation Transferability Hypothesis. We find that the internal representations of different-sized models can be translated into one another using a simple linear (affine) map.
1
25
11
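To make the hypothesis above concrete, here is a minimal sketch of fitting an affine map between the hidden states of two models. The arrays below are random stand-ins, and the shapes, layers, and fitting procedure are assumptions for illustration, not the paper's actual setup.

```python
import numpy as np

# Random stand-ins for hidden states of the *same* N inputs from a small and
# a large model; in practice these would be real layer activations.
rng = np.random.default_rng(0)
N, d_small, d_large = 1000, 256, 1024
H_small = rng.normal(size=(N, d_small))
H_large = rng.normal(size=(N, d_large))

# Fit the affine map  H_small @ W + b ~ H_large  with ordinary least squares.
X = np.hstack([H_small, np.ones((N, 1))])            # append a bias column
coef, *_ = np.linalg.lstsq(X, H_large, rcond=None)   # shape (d_small + 1, d_large)
W, b = coef[:-1], coef[-1]

# On real activations one would check held-out reconstruction quality (R^2);
# here we only report the in-sample relative error of the fitted map.
pred = H_small @ W + b
print(np.linalg.norm(pred - H_large) / np.linalg.norm(H_large))
```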
I really like this take. Academia and the open-source community should embrace transparency in data, even at the cost of the issues that come with it. These issues should of course be studied and documented, but not used as an indicator or a signal to shut down the whole operation.
4 months ago
0
5
0
reposted by
Yanai Elazar
4 months ago
Prompting is our most successful tool for exploring LLMs, but the term evokes eye-rolls and grimaces from scientists. Why? Because prompting as scientific inquiry has become conflated with prompt engineering. This is holding us back. 🧵 and new paper with
@ari-holtzman.bsky.social
.
2
37
15
What's up with
@arxiv-cs-cl.bsky.social
Wasn't the entire premise of this website to allow uploading of papers w/o the official peer review process??
4 months ago
0
0
0
Check out our take on Chain-of-Thought. I really like this paper as a survey of the current literature on what CoT is and, more importantly, what it's not. It also serves as a cautionary tale about the (apparently quite common) misuse of CoT as an interpretability method.
4 months ago
1
13
5
reposted by
Yanai Elazar
Somnath Basu Roy Chowdhury
7 months ago
How can we perfectly erase concepts from LLMs? Our method, Perfect Erasure Functions (PEF), erases concepts perfectly from LLM representations. We analytically derive PEF w/o parameter estimation. PEFs achieve a Pareto-optimal erasure-utility tradeoff backed by theoretical guarantees.
#AISTATS2025
🧵
2
39
11
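For context on what "concept erasure" means here, below is a minimal sketch of a much simpler linear baseline (projecting out a mean-difference direction), not the paper's PEF construction; the data and labels are synthetic placeholders.

```python
import numpy as np

# Synthetic representations carrying a binary "concept" (e.g., a protected attribute).
rng = np.random.default_rng(0)
n, d = 500, 64
z = rng.integers(0, 2, size=n)                          # concept labels
X = rng.normal(size=(n, d)) + 2.0 * np.outer(z, rng.normal(size=d))

# Mean-difference erasure: remove the direction separating the two concept groups.
# This is a simple illustrative baseline, not the PEF method from the paper.
w = X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0)
w /= np.linalg.norm(w)
X_erased = X - np.outer(X @ w, w)

# No variance is left along the removed direction (max |projection| ~ 0).
print(np.abs(X_erased @ w).max())
```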
reposted by
Yanai Elazar
Valentina Pyatkin
7 months ago
I'll be at
#NAACL2025
: to present my paper "Superlatives in Context", showing how the interpretation of superlatives is very context-dependent and often implicit, and how LLMs handle such semantic underspecification. We will also present RewardBench on Friday. Reach out if you want to chat!
1
28
6
💡 New ICLR paper! 💡 "On Linear Representations and Pretraining Data Frequency in Language Models": We provide an explanation for when & why linear representations form in large (or small) language models. Led by
@jackmerullo.bsky.social
, w/
@nlpnoah.bsky.social
&
@sarah-nlp.bsky.social
7 months ago
3
42
15
I'm on my way to ICLR! Let me know if you want to meet and/or hang out 🥳
7 months ago
0
9
1
reposted by
Yanai Elazar
Kyle Mahowald
7 months ago
I might be able to hire a postdoc for this fall in computational linguistics at UT Austin. Topics in the general LLM + cognitive space (particularly reasoning, chain of thought, LLMs + code) and LLM + linguistic space. If this could be of interest, feel free to get in touch!
0
60
32
How many _final_s will we end up with?
7 months ago
1
5
0
I'm curious if someone actually found those useful... All the "feedback" I got was just rephrasing my reviews.
7 months ago
2
2
0
The R1 paper claims it requires (mostly) no supervised data to enable these capabilities with RL, but it doesn't report any details about its data or how much instruction data it contains. It also doesn't tackle contamination...
10 months ago
2
7
1
Sponsor ACL 🥳 It is an AI conference, even though some say otherwise.
10 months ago
0
8
0
@
#NeurIPS2024
Come say hi!
11 months ago
0
6
0
reposted by
Yanai Elazar
Ian Magnusson
11 months ago
Come chat with me at
#NeurIPS2024
and learn about how to use Paloma to evaluate perplexity over hundreds of domains! ✨ We have stickers too ✨
1
21
4
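For readers unfamiliar with the metric: per-domain perplexity is just the exponentiated average token-level negative log-likelihood. Here is a rough sketch with a Hugging Face causal LM; the model name and the two tiny in-line "domains" are placeholders, not Paloma's actual data or harness.

```python
import math
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder model; Paloma is meant for comparing many LMs
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# Tiny stand-in "domains"; Paloma evaluates hundreds of real ones.
domains = {
    "news": ["The central bank raised interest rates again today."],
    "code": ["def add(a, b):\n    return a + b"],
}

for name, docs in domains.items():
    total_nll, total_tokens = 0.0, 0
    for doc in docs:
        enc = tok(doc, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        n_tokens = enc["input_ids"].shape[1] - 1   # positions that receive a loss
        total_nll += out.loss.item() * n_tokens    # out.loss is mean NLL per token
        total_tokens += n_tokens
    print(name, "perplexity:", round(math.exp(total_nll / total_tokens), 2))
```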
If I don't post about being on the job market, am I really on the job market?
11 months ago
1
16
0
reposted by
Yanai Elazar
Mechanical Dirk
11 months ago
We just updated the OLMo repo at
github.com/allenai/OLMo
! There are now several training configs that together reproduce the training runs that led to the final OLMo 2 models. In particular, all the training data is available, tokenized and shuffled exactly as we trained on it!
GitHub - allenai/OLMo: Modeling, training, eval, and inference code for OLMo
Modeling, training, eval, and inference code for OLMo - allenai/OLMo
https://github.com/allenai/OLMo
0
54
11
ICLR ACs - save us!
11 months ago
1
13
2
Has anyone figured out how to create a keyboard shortcut in Keynote to add an animation to an object?
11 months ago
0
2
0
reposted by
Yanai Elazar
Sohee Yang
12 months ago
🚨 New Paper 🚨 Can LLMs perform latent multi-hop reasoning without exploiting shortcuts? We find the answer is yes: they can recall and compose facts not seen together in training rather than guessing the answer, but success greatly depends on the type of the bridge entity (80% for country, 6% for year)! 1/N
3
67
15
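A toy illustration of the "bridge entity" setup described above: the two-hop query composes two facts through an entity it never names. The facts and prompt template here are invented for illustration, not the paper's data.

```python
# Two single-hop facts sharing a bridge entity (here: France).
fact_1 = ("Victor Hugo", "country of citizenship", "France")  # e1 -> bridge
fact_2 = ("France", "capital", "Paris")                       # bridge -> answer

# The latent two-hop query never states the bridge entity explicitly;
# the model must recall it internally and then compose the second hop.
prompt = f"The capital of the country of citizenship of {fact_1[0]} is"
print(prompt)                                   # expected continuation: "Paris"
print("bridge entity (never stated):", fact_1[2])
```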
Interfolio has the incredibly sophisticated feature of saving my statements, so why can't it save my demographic information? I assure you, my US-centric race won't change from one application to another.
12 months ago
1
6
0
I mean, if you ask about my desired position...
12 months ago
2
17
0
Where are the agents who can fill out these applications for me?
12 months ago
0
16
0
Back in the day, I used to go on Twitter while waiting for my RNN models to finish training. These days, it's going on bsky (ok, ok, X as well) while waiting for ChatGPT to re-write my statement.
12 months ago
2
15
0
Dear EiC, Get some sleep, Yours truly, Yanai
12 months ago
0
15
0
Training LLMs still has a way to go in terms of the number of authors (
blog.google/outreach-ini...
)
12 months ago
2
8
0
Happening now!
12 months ago
1
8
0
11:30 at Brickell: Will Merrill will introduce DAWGs and how they can be used for estimating the novelty of LLM generations
12 months ago
1
7
0
reposted by
Yanai Elazar
Maria Antoniak
12 months ago
What's In My Big Data is such a great toolkit. It lets you easily look inside big pretraining datasets for specific queries. You can try looking up your own name or whatever text you want. Link to the demo with the n-gram viewer:
wimbd.apps.allenai.org
1
25
6
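As a miniature of the kind of query WIMBD answers, here is a sketch of counting substring matches over a toy in-memory corpus. This is not the WIMBD API (which runs such queries over indexed pretraining corpora at scale); the corpus and queries are made up for illustration.

```python
# Toy in-memory corpus; WIMBD runs such counts over full pretraining corpora.
corpus = [
    "What's In My Big Data lets you query big pretraining data.",
    "Pretraining data often contains duplicated boilerplate.",
    "You can look up your own name in the demo.",
]

def count_occurrences(query: str, documents: list[str]) -> int:
    """Count case-insensitive substring matches of `query` across documents."""
    q = query.lower()
    return sum(doc.lower().count(q) for doc in documents)

for query in ["pretraining data", "your own name", "WIMBD"]:
    print(f"{query!r} -> {count_occurrences(query, corpus)}")
```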
I also have more WIMBD stickers! Ping me if you want some
12 months ago
0
6
2
In Miami for
#EMNLP2024
My collaborators and I are presenting a bunch of interesting papers! DM me if you wanna hear more about them, chat about life, job market, or other topics.
12 months ago
1
14
2
Is someone organizing a workshop at
#ACL
2025 and wants to switch with a
#NAACL
2025 slot?
12 months ago
0
2
0
reposted by
Yanai Elazar
Byron Wallace
12 months ago
Chantal Shaib reports on syntactic "templates" that LLMs like to repeat:
arxiv.org/abs/2407.00211
(w/@yanai.bsky.social and
@jessyjli.bsky.social
)
1
6
1
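The "templates" in that paper are recurring syntactic patterns. As a rough sketch of the idea (not the paper's exact pipeline), one can count part-of-speech n-grams with spaCy; this assumes the `en_core_web_sm` model is installed, and the example sentences are invented.

```python
from collections import Counter
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

def pos_ngrams(text: str, n: int = 4) -> Counter:
    """Count length-n part-of-speech patterns ("templates") in the text."""
    tags = [tok.pos_ for tok in nlp(text)]
    return Counter(tuple(tags[i:i + n]) for i in range(len(tags) - n + 1))

generated = (
    "The results highlight the importance of robust evaluation. "
    "The findings underscore the importance of careful analysis."
)
# POS sequences that recur across many generations are candidate "templates".
print(pos_ngrams(generated).most_common(3))
```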