Myra Cheng
@myra.bsky.social
PhD candidate @ Stanford NLP
https://myracheng.github.io/
reposted by
Myra Cheng
Kaitlyn Zhou
about 15 hours ago
No better time to start learning about that #AI thing everyone's talking about... 📢 I'm recruiting PhD students in Computer Science or Information Science
@cornellbowers.bsky.social
! If you're interested, apply to either department (yes, either program!) and list me as a potential advisor!
1
16
9
reposted by
Myra Cheng
Kaitlyn Zhou
17 days ago
As of June 2025, 66% of Americans have never used ChatGPT. Our new position paper, Attention to Non-Adopters, explores why this matters: AI research is being shaped around adopters—leaving non-adopters’ needs, and key LLM research opportunities, behind.
arxiv.org/abs/2510.15951
2
32
12
reposted by
Myra Cheng
Kaitlyn Zhou
about 1 month ago
I'll be at COLM next week! Let me know if you want to chat!
@colmweb.org
@neilrathi.bsky.social
will be presenting our work on multilingual overconfidence in language models and the effects on human overreliance!
arxiv.org/pdf/2507.06306
0
7
1
reposted by
Myra Cheng
Steve Rathje
about 1 month ago
🚨 New preprint 🚨 Across 3 experiments (n = 3,285), we found that interacting with sycophantic (or overly agreeable) AI chatbots entrenched attitudes and led to inflated self-perceptions. Yet, people preferred sycophantic chatbots and viewed them as unbiased!
osf.io/preprints/ps...
Thread 🧵
3
161
100
AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.
about 1 month ago
2
115
53
Thoughtful NPR piece about ChatGPT relationship advice! Thanks for mentioning our research :)
3 months ago
0
12
0
reposted by
Myra Cheng
Alexandra Olteanu
3 months ago
#acl2025
I think there is plenty of evidence for the risks of anthropomorphic AI behavior and design (re: keynote) -- find
@myra.bsky.social
and me if you want to chat more about this or our "Dehumanizing Machines" ACL 2025 paper
0
11
1
reposted by
Myra Cheng
Dr Abeba Birhane
4 months ago
New paper hot off the press
www.nature.com/articles/s41...
We analysed over 40,000 computer vision papers from CVPR (the longest standing CV conf) & associated patents tracing pathways from research to application. We found that 90% of papers & 86% of downstream patents power surveillance 1/
Computer-vision research powers surveillance technology - Nature
An analysis of research papers and citing patents indicates the extensive ties between computer-vision research and surveillance.
https://www.nature.com/articles/s41586-025-08972-6#MOESM1
27
820
545
Do people actually like human-like LLMs? In our
#ACL2025
paper HumT DumT, we find a kind of uncanny valley effect: users dislike LLM outputs that are *too human-like*. We thus develop methods to reduce human-likeness without sacrificing performance.
5 months ago
1
23
6
Dear ChatGPT, Am I the Asshole? While Reddit users might say yes, your favorite LLM probably won’t. We present Social Sycophancy: a new way to understand and measure sycophancy as how LLMs overly preserve users' self-image.
6 months ago
6
136
35
How does the public conceptualize AI? Rather than self-reported measures, we use metaphors to understand the nuance and complexity of people’s mental models. In our
#FAccT2025
paper, we analyzed 12,000 metaphors collected over 12 months to track shifts in public perceptions.
6 months ago
3
49
15
New ICLR blogpost! 🎉 We argue that understanding the impact of anthropomorphic AI is critical to understanding the impact of AI.
6 months ago
1
15
5
reposted by
Myra Cheng
Alicia DeVrio
8 months ago
How can we better think and talk about human-like qualities attributed to language technologies like LLMs? In our
#CHI2025
paper, we taxonomize how text outputs from cases of user interactions with language technologies can contribute to anthropomorphism.
arxiv.org/abs/2502.09870
1/n
2
42
14
Check out our recent work studying anthropomorphic AI!
8 months ago
1
14
3
reposted by
Myra Cheng
Julia Mendelsohn
9 months ago
New preprint! Metaphors shape how people understand politics, but measuring them (& their real-world effects) is hard. We develop a new method to measure metaphor & use it to study dehumanizing metaphor in 400K immigration tweets Link:
bit.ly/4i3PGm3
#NLP
#NLProc
#polisky
#polcom
#compsocialsci
🐦🐦
6
180
75
Love this!!
8 months ago
0
2
0