Eddie Yang
@eddieyang.bsky.social
New paper out at AJPS: "The limits of AI for authoritarian control." The more repression there is, the less information exists in AI's training data, and the worse the AI performs.
13 days ago
We've updated the localLLM package on CRAN (
cran.r-project.org/package=loca...
). It allows you to run LLMs locally and natively in R. Everything is reproducible, and it's free. A thread on some of its features for reproducibility and validation 🧵👇
localLLM: Running Local LLMs with 'llama.cpp' Backend
Provides R bindings to the 'llama.cpp' library for running large language models. The package uses a lightweight architecture where the C++ backend library is downloaded at runtime rather than bundled...
https://CRAN.R-project.org/package=localLLM
15 days ago
reposted by
Eddie Yang
Adam Scharpf
16 days ago
Russia, Venezuela, Iran, China, the Sahel region, the United States ... Want to know why state agents carry out brutal repression, or participate in illegal coups? Our new book "Making a Career in Dictatorship" provides answers; it just got published by
@academic.oup.com
:
tinyurl.com/ystwm3tf
Why the slow uptake in political science?
30 days ago
reposted by
Eddie Yang
Dan de Kadt
2 months ago
So, randomization is not a *sufficient* condition for good research. Far from it. The best experimental social science is work in which the theory, the operationalization, or both are the emphasis. Randomization is "easy" - the challenge is what you randomize and why.
reposted by
Eddie Yang
Melissa Sands
3 months ago
Can Large Multimodal Models (LMMs) extract features of urban neighborhoods from street-view images? New with Paige Bollen (OSU) and
@joehigton.bsky.social
(NYU): Sometimes, but the models recover national assessments better than local ones, even with additional prompting (which can make things worse!)
New paper: LLMs are increasingly used to label data in political science. But how reliable are these annotations, and what are the consequences for scientific findings? What are best practices? Some new findings from a large empirical evaluation. Paper:
eddieyang.net/research/llm_annotation.pdf
4 months ago
Great analogy to connect AI to many canonical political science questions. Political behavior has led the way in studying AI. Excited to see institutions catch up!
9 months ago
If no resource constraint, what open-weight LLM would you use in your research (for data labeling, coding etc.)?
10 months ago
Awesome work! Love to see different approaches to this problem.
12 months ago
Really interesting read. Refreshing perspective.
12 months ago