Nari Johnson
@narijohnson.bsky.social
📤 196
📥 210
📝 24
researching AI [evaluation, governance, accountability]
reposted by
Nari Johnson
Josh Dzieza
25 days ago
AI companies are paying screenwriters, lawyers, and other white-collar professionals to produce the training data needed to automate their jobs. I spoke with more than 30 workers about conditions inside this fast-growing and extremely secretive new gig economy.
The laid-off lawyers and PhDs training AI to steal their careers
Experienced white-collar workers are now part of a miserable gig economy.
https://www.theverge.com/cs/features/877388/white-collar-workers-training-ai-mercor
11
248
169
reposted by
Nari Johnson
Jordan Taylor
25 days ago
🎨💻 What is a “high-quality” or “aesthetic” image according to generative AI developers? Happy to share that our investigation of the LAION-Aesthetics Predictor has been accepted at
#FAccT2026
! 🧵 (1/5) Take a look at a preprint here:
arxiv.org/abs/2601.09896
1
24
5
reposted by
Nari Johnson
Cas (Stephen Casper)
about 1 month ago
🚨The 2025 AI Agent Index is out! 🚨 Amidst recent buzz over 🦀 and NIST's new agent initiative, we find: - Selective reporting – esp. on safety - Almost all agents are backed by just 3 model families - Many agents don’t ID themselves as bots online - Big US/China gaps - And more…
1
3
4
reposted by
Nari Johnson
Serena Booth
about 2 months ago
We opened up applications for the Brown AI Policy Summer School! Please share with any computing or computational social science students who want to engage substantively with policymaking in the United States:
cntr.brown.edu/summer-school
. Deadline March 27th!!! Funding available!
CNTR Tech & Policy Summer School
https://cntr.brown.edu/summer-school
1
10
8
reposted by
Nari Johnson
Alexandra Olteanu
about 2 months ago
The deadline for the 2026 FAccT DC is next Tuesday, February 24! If you are a student working on topics relevant to FAccT's scope, this is an opportunity to interact with a diverse set of peers and mentors!
#facct2026
#facct26
#facct
Details here:
facctconference.org/2026/callfor...
0
8
10
reposted by
Nari Johnson
Tzu-Sheng Kuo 郭子生
about 2 months ago
🔮 How can we empower online communities to design AI agents tailored to their unique needs and norms? In our
#CHI2026
paper, we introduce
#Botender
, a system that enables collaborative design of AI agents through 🔥case-based provocation🔥
1
16
6
reposted by
Nari Johnson
Dr. Casey Fiesler
about 2 months ago
PhD admissions visits/open houses are starting to happen, and I got a comment on an old Reddit post where I was offering advice, and realized that it's actually really good advice. So here it is! (And this applies whether you've already been admitted to the program or not.) 🧵
1
29
11
reposted by
Nari Johnson
Michael Saxon
about 2 months ago
Yep and it gets worse! Owner doesn't even care to remove hundreds of skills which directly instruct the model to install malware
opensourcemalware.com/blog/clawdbo...
0
6
2
reposted by
Nari Johnson
ACM FAccT
about 2 months ago
Our call for craft and tutorial sessions for
#FAccT2026
is now live! ▶️ Craft CfP:
facctconference.org/2026/cfpcraf...
▶️ Tutorials CfP:
facctconference.org/2026/cft.html
Both kinds of proposals are due March 25!
0
2
7
reposted by
Nari Johnson
Shaily
2 months ago
🎭 How do LLMs (mis)represent culture? 🧮 How often? 🧠 Misrepresentations = missing knowledge? spoiler: NO! At
#CHI2026
we are bringing ✨TALES✨ a participatory evaluation of cultural (mis)reps & knowledge in multilingual LLM-stories for India 📜
arxiv.org/abs/2511.21322
1/10
1
46
24
reposted by
Nari Johnson
Hanna Wallach
2 months ago
Microsoft Research NYC is hiring a researcher in the space of AI and society!
2
62
42
reposted by
Nari Johnson
Justin Hendrix
3 months ago
A new report by the Center for Tech Responsibility at Brown University and the ACLU uses computational tools to analyze legislative trends on AI across 1,804 state and federal bills, while offering recommendations for how to integrate the technology into policy analysis.
Making Sense of AI Policy Using Computational Tools | TechPolicy.Press
A new report examines how to use computational tools to evaluate policy, with AI policy as a case study.
https://www.techpolicy.press/making-sense-of-ai-policy-using-computational-tools/
0
13
2
reposted by
Nari Johnson
Willie Agnew
4 months ago
We are studying the sentiments of visual artists towards generative AI in the workplace and their impacts on creative careers. If you're an artist, please consider filling out this recruitment form for access to our survey!
cmu.ca1.qualtrics.com/jfe/form/SV_...
1
6
4
reposted by
Nari Johnson
Angelina Wang
4 months ago
Most LLM evals use API calls or offline inference, testing models in a memory-less silo. Our new Patterns paper shows this misses how LLMs actually behave in real user interfaces, where personalization and interaction history shape responses:
arxiv.org/abs/2509.19364
1
38
12
reposted by
Nari Johnson
Deb Raji
4 months ago
US CAISI is hiring -- the internal govt name for the role is "IT Specialist" but it is effectively a research scientist role! Salary is $120,579 to $195,200 per year, and you get to work on AI evaluation within government agencies! Job posting (**closes EOD 12/28/2025**):
lnkd.in/exJgkqr5
1
24
11
reposted by
Nari Johnson
Ryan Steed
4 months ago
Also, our team is hiring an AI Research Scientist!
www.usajobs.gov/job/851528400
1
10
7
reposted by
Nari Johnson
Ryan Steed
4 months ago
Our team at NIST's Center for AI Standards and Innovation (CAISI) just released a blog post with open questions for AI measurement science:
www.nist.gov/blogs/caisi-...
Accelerating AI Innovation Through Measurement Science
Building gold-standard AI systems requires gold-standard AI measurement science – the scientific study of methods used to assess AI systems’ properties and impacts. NIST works to improve measurements ...
https://www.nist.gov/blogs/caisi-research-blog/accelerating-ai-innovation-through-measurement-science
1
5
3
reposted by
Nari Johnson
Cas (Stephen Casper)
4 months ago
Did you know that one base model is responsible for 94% of model-tagged NSFW AI videos on CivitAI? This new paper studies how a small number of models power the non-consensual AI video deepfake ecosystem and why their developers could have predicted and mitigated this.
1
6
4
reposted by
Nari Johnson
Dr Abeba Birhane
4 months ago
I appreciate this sympathetic position. People's feelings of emotional dependency on these "human-like" bots are real. Ridiculing them doesn't help anyone
1
54
8
reposted by
Nari Johnson
J. Nathan Matias
5 months ago
Can public involvement in AI evaluation improve the science? Or does it compromise quality, speed, cost? In
@pnas.org
, Megan Price & I summarize challenges of AI evaluation, review strengths/weaknesses, & suggest how participatory methods can improve the science of AI
www.pnas.org/doi/10.1073/...
How public involvement can improve the science of AI | PNAS
As AI systems from decision-making algorithms to generative AI are deployed more widely, computer scientists and social scientists alike are being ...
https://www.pnas.org/doi/10.1073/pnas.2421111122
1
19
12
reposted by
Nari Johnson
Amanda Bertsch
5 months ago
Can LLMs accurately aggregate information over long, information-dense texts? Not yet… We introduce Oolong, a dataset of simple-to-verify information aggregation questions over long inputs. No model achieves >50% accuracy at 128K on Oolong!
3
50
23
reposted by
Nari Johnson
Data & Society
5 months ago
📣 Our method for conducting community-based algorithmic impact assessments is now available! We’ve just launched a new section on our website where you can find an extensive toolkit, documentation of our pilots, and a series of reflections on lessons learned.
datasociety.net/research/alg...
0
21
8
reposted by
Nari Johnson
Wesley Hanwen Deng
6 months ago
𝐒𝐨𝐜𝐢𝐞𝐭𝐚𝐥 𝐈𝐦𝐩𝐚𝐜𝐭 𝐀𝐬𝐬𝐞𝐬𝐬𝐦𝐞𝐧𝐭 𝐟𝐨𝐫 𝐈𝐧𝐝𝐮𝐬𝐭𝐫𝐲 𝐂𝐨𝐦𝐩𝐮𝐭𝐢𝐧𝐠 𝐑𝐞𝐬𝐞𝐚𝐫𝐜𝐡𝐞𝐫𝐬 🏅 Best Paper Honorable Mention (Top 3% Submissions) 🔗
dl.acm.org/doi/10.1145/...
📆 Wed, 22 Oct | 9:00 AM, CET: Toward More Ethical and Transparent Systems and Environments
Supporting Industry Computing Researchers in Assessing, Articulating, and Addressing the Potential Negative Societal Impact of Their Work | Proceedings of the ACM on Human-Computer Interaction
Recent years have witnessed increasing calls for computing researchers to grapple with the societal impacts of their work. Tools such as impact assessments have gained prominence as a method to uncover potential impacts, and a number of publication ...
https://dl.acm.org/doi/10.1145/3711076
0
6
2
reposted by
Nari Johnson
Emily Byun
6 months ago
💡Can we trust synthetic data for statistical inference? We show that synthetic data (e.g., LLM simulations) can significantly improve the performance of inference tasks. The key intuition lies in the interactions between the moment residuals of synthetic data and those of real data
2
36
14
reposted by
Nari Johnson
Sunnie S. Y. Kim ☀️
6 months ago
Our Responsible AI team at Apple is looking for spring/summer 2026 PhD research interns! Please apply at
jobs.apple.com/en-us/detail...
and email
[email protected]
. Do not send extra info (e.g., CV), just drop us a line so we can find your application in the central pool!
Machine Learning / AI Internships - Jobs - Careers at Apple
Apply for a Machine Learning / AI Internships job at Apple. Read about the role and find out if it’s right for you.
https://jobs.apple.com/en-us/details/200606469-3810/machine-learning-ai-internships?team=STDNT
2
29
11
reposted by
Nari Johnson
Cella
6 months ago
✨I’m on the academic job market ✨ I’m a PhD candidate at
@hcii.cmu.edu
studying tech, labor, and resistance 👩🏻💻💪🏽💥 I research how workers and communities contest harmful sociotechnical systems and shape alternative futures through everyday resistance and collective action More info:
cella.io
Cella M. Sum –
https://cella.io
3
72
40
reposted by
Nari Johnson
Tzu-Sheng Kuo 郭子生
6 months ago
🌟 If you’re applying to CMU SCS PhD programs, and come from a background that would bring additional dimensions to the CMU community, our PhD students are here to help! Apply to the Graduate Applicant Support Program by Oct 13 to receive feedback on your application materials:
1
7
5
reposted by
Nari Johnson
Cas (Stephen Casper)
7 months ago
📌📌📌 I'm excited to be on the faculty job market this fall. I just updated my website with my CV.
stephencasper.com
Stephen Casper
Visit the post for more.
https://stephencasper.com/
0
18
5
reposted by
Nari Johnson
Tech Policy Press
7 months ago
📢2026 Fellowship applications are OPEN!📢 If you are someone looking to inform technology policy through rigorous original reporting or policy analyses, we want to hear from you! Apply here:
airtable.com/appIrc1F9M5d...
2
18
10
reposted by
Nari Johnson
Cella
7 months ago
What can
#CSCW
learn from tech workers who have been involved in collective action and unionization about how to make transformative change within our field? My new
#CSCW2025
paper with Mona Wang, Anna Konvicka, and Sarah Fox seeks to answer this question. Pre-print:
arxiv.org/pdf/2508.12579
3
43
21
reposted by
Nari Johnson
Kashmir Hill
7 months ago
The exchanges between Adam and ChatGPT are devastating. This, in my mind, is the worst one. One of his last messages was a photo of the noose hung in his bedroom closet, asking if it was "good." ChatGPT offered a technical analysis of the setup and told him it "could potentially suspend a human."
35
1715
528
reposted by
Nari Johnson
Kashmir Hill
7 months ago
Adam Raine, 16, died from suicide in April after months on ChatGPT discussing plans to end his life. His parents have filed the first known case against OpenAI for wrongful death. Overwhelming at times to work on this story, but here it is. My latest on AI chatbots:
www.nytimes.com/2025/08/26/t...
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
https://www.nytimes.com/2025/08/26/technology/chatgpt-openai-suicide.html?unlocked_article_code=1.hE8.T-3v.bPoDlWD8z5vo&smid=url-share
110
4638
2285
reposted by
Nari Johnson
Reuters
8 months ago
A cognitively impaired New Jersey man grew infatuated with a Meta chatbot originally created in partnership with celebrity influencer Kendall Jenner. His fatal attraction shines a light on Meta's guidelines for its AI chatbots
reut.rs/45DQIRj
@jeffhorwitz.bsky.social
55
814
659
reposted by
Nari Johnson
Cas (Stephen Casper)
8 months ago
🧵 New paper from UK AISI x
@eleutherai.bsky.social
that I led with
@kyletokens.bsky.social
: Open-weight LLM safety is both important & neglected. But filtering dual-use knowledge from pre-training data improves tamper resistance *>10x* over post-training baselines.
2
12
6
reposted by
Nari Johnson
Sarah Fox
8 months ago
* STS folks! * CMU is hiring up to 2 tenure track faculty focused on: the intersection of tech & social change, the environmental and social impacts of science, tech, and medicine. They will be housed in History, a department of both historians and anthropologists.
apply.interfolio.com/170040
0
16
7
reposted by
Nari Johnson
Willie Agnew
9 months ago
One of the largest text-image datasets is full of PII, including credit card numbers and birth certificates. Excellent writeup by
@eileenguo.bsky.social
www.technologyreview.com/2025/07/18/1...
Read our full audit and legal analysis
arxiv.org/pdf/2506.17185
A major AI training data set contains millions of examples of personal data
Personally identifiable information has been found in DataComp CommonPool, one of the largest open-source data sets used to train image generation models.
https://www.technologyreview.com/2025/07/18/1120466/a-major-ai-training-data-set-contains-millions-of-examples-of-personal-data/
1
49
21
reposted by
Nari Johnson
Hanna Wallach
9 months ago
If you're at
@icmlconf.bsky.social
this week, come check out our poster on "Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge" presented by the amazing
@afedercooper.bsky.social
from 11:30am--1:30pm PDT on Weds!!!
icml.cc/virtual/2025...
ICML Poster Position: Evaluating Generative AI Systems Is a Social Science Measurement Challenge | ICML 2025
https://icml.cc/virtual/2025/poster/40182
1
32
12
reposted by
Nari Johnson
Emma Harvey
9 months ago
🏦 Legacy Procurement Practices Shape How U.S. Cities Govern AI: Understanding Government Employees’ Practices, Challenges, and Needs by
@narijohnson.bsky.social
et al. explores procurement in the context of recent calls for governments to use their "purchasing power" to incentivize responsible AI.
2
8
1
reposted by
Nari Johnson
Emma Strubell
9 months ago
I did an interview w/ Pittsburgh's NPR station to share some of my views on the topic of the McCormick/Trump AI & Energy summit at CMU tomorrow. Despite being hosted at the university, there will not be opportunities for our university experts to contribute viewpoints at the event.
1
16
9
New article out today, covering our past two years of research asking US cities how they govern AI ✨ The future of AI governance in public services is being shaped right now, through public procurement
9 months ago
0
21
9
reposted by
Nari Johnson
Tech Policy Press
9 months ago
In the absence of federal regulation of AI vendors, procurement remains one of the few levers governments have to push for public values, such as safety, non-discrimination, privacy, and accountability, Nari Johnson, Elise Silva, and Hoda Heidari write.
Want Accountable AI in Government? Start with Procurement | TechPolicy.Press
Procurement plays a powerful role in shaping critical decisions about artificial intelligence, Nari Johnson, Elise Silva, and Hoda Heidari write.
https://www.techpolicy.press/want-accountable-ai-in-government-start-with-procurement/
0
10
5
reposted by
Nari Johnson
Teanna Barrett
9 months ago
FAccT Day 4 word of the day was impasse. As a field, AI ethics is at a critical moment in which our "big tent" and pluralistic research directions can either complement or cancel each other out. Molly Crockett gave an amazing final keynote calling out genAI-human performance research.
1
2
1
reposted by
Nari Johnson
Emma Harvey
10 months ago
I am so excited to be in 🇬🇷Athens🇬🇷 to present "A Framework for Auditing Chatbots for Dialect-Based Quality-of-Service Harms" by me,
@kizilcec.bsky.social
, and
@allisonkoe.bsky.social
, at
#FAccT2025
!! 🔗:
arxiv.org/pdf/2506.04419
1
31
12
reposted by
Nari Johnson
ACM FAccT
10 months ago
🏆 Announcing the
#FAccT2025
best paper awards! 🏆 Congratulations to all the authors of the three best papers and three honorable mention papers. Be sure to check out their presentations at the conference next week!
facct-blog.github.io/2025-06-20/b...
Announcing Best Paper Awards
The Best Paper Award Committee was chaired this year by Alex Chouldechova and included six Area Chairs. The committee selected three papers for the Best Paper Award and recognized three additional pap...
https://facct-blog.github.io/2025-06-20/best-papers
0
36
21
reposted by
Nari Johnson
Morgan Klaus Scheuerman
10 months ago
How can ethical principles translate to the massive data used to train foundation models, like generative AI? Our
#CSCW2025
workshop aims to explore how best to define the future of ethical responsibility in large-scale datasets for foundation model training. Apply here:
tinyurl.com/CSCW-data
Workshop on Responsibly Training Foundation Models @ CSCW2025
https://tinyurl.com/CSCW-data
0
13
7
reposted by
Nari Johnson
Emma Harvey
10 months ago
📣 "Understanding and Meeting Practitioner Needs When Measuring Representational Harms Caused by LLM-Based Systems" is forthcoming at
#ACL2025NLP
- and you can read it now on arXiv! 🔗:
arxiv.org/pdf/2506.04482
🧵: ⬇️
1
17
5
reposted by
Nari Johnson
Shaily
10 months ago
🖋️ Curious how writing differs across (research) cultures? 🚩 Tired of “cultural” evals that don't consult people? We engaged with interdisciplinary researchers to identify & measure ✨cultural norms✨in scientific writing, and show that❗LLMs flatten them❗ 📜
arxiv.org/abs/2506.00784
[1/11]
1
72
35
reposted by
Nari Johnson
nitasha tiku
10 months ago
A lot of people say generative AI shouldn't infringe on copyright. These researchers actually tried to do it. The result: an 8 terabyte dataset of text that's openly licensed or in the public domain & a 7B parameter model that performs as well as Meta's Llama 7B
www.washingtonpost.com/politics/202...
Analysis | AI firms say they can’t respect copyright. These researchers tried.
A new effort using only openly licensed data may have implications on thorny policy disputes around copyright and AI
https://www.washingtonpost.com/politics/2025/06/05/tech-brief-ai-copyright-report/
15
766
275
reposted by
Nari Johnson
Ben Green
10 months ago
In
@techpolicypress.bsky.social
, I explain why using AI to reform government is much harder than policymakers and technologists assume. It's most directly about DOGE, but also raises warnings about the bipartisan push to adopt AI at all levels of government.
www.techpolicy.press/using-ai-to-...
Using AI to Reform Government is Much Harder Than it Looks | TechPolicy.Press
Policymakers need to rethink the values that justify government AI adoption in the first place, writes Ben Green.
https://www.techpolicy.press/using-ai-to-reform-government-is-much-harder-than-it-looks/
6
78
36
reposted by
Nari Johnson
Suresh Venkatasubramanian
11 months ago
There's a lot of chatter around the proposal being inserted into a budget bill that would put a moratorium on any AI legislation being passed by the states for the next 10 years. I thought I'd say a bit about why this is an absolutely disastrous move.
www.404media.co/republicans-...
1/n
Republicans Try to Cram Ban on AI Regulation Into Budget Reconciliation Bill
Republicans try to use the Budget Reconciliation bill to stop states from regulating AI entirely for 10 years.
https://www.404media.co/republicans-try-to-cram-ban-on-ai-regulation-into-budget-reconciliation-bill/
3
293
253