Sarah Griebel
@sgriebel.bsky.social
118 followers
125 following
3 posts
IS PhD student at UIUC
This new study applies continued pretraining on historical documents to Qwen2.5, along with supervised fine-tuning and reinforcement learning, to produce a more historically accurate CoT-tuned model. Cool methods!
https://arxiv.org/pdf/2504.09488
4 months ago
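As a rough illustration of the first stage described in the post above, here is a minimal sketch of continued pretraining on historical documents using the Hugging Face Trainer. The corpus file name, model size, and hyperparameters are illustrative assumptions, not details from the paper.

```python
# Minimal sketch: continued (causal LM) pretraining of Qwen2.5 on a
# historical-text corpus. File name and hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "Qwen/Qwen2.5-7B"  # base model named in the post
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text historical documents, one passage per line (hypothetical file).
raw = load_dataset("text", data_files={"train": "historical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

# Ordinary next-token prediction, i.e. no masked-LM objective.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

args = TrainingArguments(
    output_dir="qwen2.5-historical-cpt",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=16,
    num_train_epochs=1,
    learning_rate=1e-5,
    bf16=True,
    logging_steps=50,
)

Trainer(model=model, args=args, train_dataset=tokenized,
        data_collator=collator).train()
```

The supervised fine-tuning and reinforcement learning stages the post mentions would follow this step; they are not sketched here.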
reposted by Sarah Griebel
Ted Underwood · 5 months ago
New preprint from @lauraknelson.bsky.social, @mattwilkens.bsky.social, and myself tests different ways of simulating the past with LLMs. We don't fully answer the title question here; we just show that simple strategies based on prompting and fine-tuning are insufficient. +
Can Language Models Represent the Past without Anachronism?
Before researchers can use language models to simulate the past, they need to understand the risk of anachronism. We find that prompting a contemporary model with examples of period prose does not pro...
https://arxiv.org/abs/2505.00030
There are many ways to identify texts that seem ahead of their time. Our CHR 2024 paper asks which measures of textual precocity align best with social evidence about influence and change.
10 months ago
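To make the "prompting a contemporary model with examples of period prose" strategy evaluated in the preprint above concrete, here is a minimal sketch of that kind of few-shot prompting with an off-the-shelf instruct model. The example passages, model choice, and decoding settings are illustrative assumptions, not the paper's actual setup.

```python
# Minimal sketch: condition a contemporary instruct model on a couple of
# period-style passages and ask it to continue in that register.
from transformers import pipeline

generator = pipeline("text-generation", model="Qwen/Qwen2.5-7B-Instruct")

# Hypothetical 19th-century-style passages used as in-context examples.
period_examples = [
    "It was a truth I had long suspected, though never till now confessed.",
    "The coach rattled over the cobbles as the lamplighter made his rounds.",
]

prompt = (
    "Continue the passage in the style of an 1850s novel, "
    "using only vocabulary and references available at that time.\n\n"
    + "\n\n".join(period_examples)
    + "\n\n"
)

out = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.8)
print(out[0]["generated_text"])
```

Per the thread above, the preprint's finding is that simple strategies like this, prompting or light fine-tuning alone, are not sufficient to keep generated text free of anachronism.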