Miranda Bogen
@mbogen.bsky.social
📤 340
📥 198
📝 6
Director of the AI Governance Lab
@cendemtech.bsky.social
/ responsible AI + policy
reposted by
Miranda Bogen
logan koepke
2 days ago
a recent New York State audit of NYC's Local Law 144 — ostensibly designed to regulate bias and discrimination in automated employment tools — is fairly scathing in its assessment of how implementation and enforcement of the law are going. simply put, LL 144 does not work.
reposted by
Miranda Bogen
Nathalie Maréchal, PhD
3 days ago
New report from
@mbogen.bsky.social
& yours truly, on how the big AI companies are trying to make money and what it means for all of us. I am more proud of the title than I have any right to be.
reposted by
Miranda Bogen
Center for Democracy & Technology
30 days ago
New from CDT: “A Roadmap for Responsible Approaches to AI Memory” by @mbogen.bsky.social & Ruchika Joshi explores how AI systems store, recall, and use info—and what that means for privacy, transparency, and user control.
cdt.org/insights/a-r...
reposted by
Miranda Bogen
A. Feder Cooper
about 1 month ago
[NeurIPS '25] Our oral slot and poster session on "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research" are tomorrow, December 4! [https://arxiv.org/abs/2412.06966] Oral: 3:30-4pm PST, Upper Level Ballroom 20AB. Poster 1307: 4:30-7:30pm PST, Exhibit Hall C-E.
The CFPB has proposed a new rule under which it would no longer recognize disparate impact liability when enforcing the Equal Credit Opportunity Act. This would eliminate a key protection against discrimination in access to credit, including when AI is involved.
www.federalregister.gov/documents/20...
Equal Credit Opportunity Act (Regulation B)
The Consumer Financial Protection Bureau (Bureau or CFPB) is issuing a proposed rule for public comment that amends provisions related to disparate impact, discouragement of applicants or prospective ...
https://www.federalregister.gov/documents/2025/11/13/2025-19864/equal-credit-opportunity-act-regulation-b
about 2 months ago
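Disparate impact is ultimately a statistical question: do facially neutral criteria produce meaningfully different outcomes across groups? As a rough illustration of why the doctrine matters for automated credit decisions, here is a minimal Python sketch of the common "four-fifths rule" screening heuristic. The approval counts, group names, and use of the 0.8 threshold are invented for the example; nothing here is drawn from the CFPB's proposed rule or from ECOA itself.

```python
# Hypothetical sketch of a "four-fifths rule" disparate impact screen.
# All numbers are invented for illustration; this is not from the CFPB
# rule or ECOA.

# Credit approvals out of total applicants, by group (made-up data).
approved = {"group_a": 180, "group_b": 110}
applied = {"group_a": 400, "group_b": 400}

# Per-group approval rates and the ratio of the lowest to the highest.
rates = {g: approved[g] / applied[g] for g in approved}
impact_ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.1%}")
print(f"impact ratio: {impact_ratio:.2f}")

# Under the four-fifths heuristic, a ratio below 0.8 flags the model
# for closer review; it is a screen, not a legal conclusion.
if impact_ratio < 0.8:
    print("flag: potential disparate impact")
else:
    print("no flag under the four-fifths heuristic")
```

In practice, auditors and regulators pair screens like this with significance testing and substantive legal analysis; the heuristic only surfaces candidates for further scrutiny.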
reposted by
Miranda Bogen
Federation of American Scientists
3 months ago
🚨 Call for policy proposals: If AI adoption is not slowing down, policy governing safety and security practices needs to speed up. This is where you come in.
reposted by
Miranda Bogen
Helen Toner
6 months ago
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts. Delighted to feature
@mbogen.bsky.social
on Rising Tide today, on what's being built and why we should care:
AI companies are starting to promise personalized assistants that “know you.” We’ve seen this playbook before — it didn’t end well. In a guest post for
@hlntnr.bsky.social
’s Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media's mistakes.
loading . . .
Personalized AI is rerunning the worst part of social media's playbook
The incentives, risks, and complications of AI that knows you
https://open.substack.com/pub/helentoner/p/personalized-ai-social-media-playbook?utm_campaign=post&utm_medium=web
6 months ago
reposted by
Miranda Bogen
p. sampson
8 months ago
Personalization is political. Very excited to share a piece I co-authored with
@mbogen.bsky.social
as a Google Public Policy Fellow
@cendemtech.bsky.social
!
cdt.org/insights/its...
It’s (Getting) Personal: How Advanced AI Systems Are Personalized
This brief was co-authored by Princess Sampson. Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search e...
https://cdt.org/insights/its-getting-personal-how-advanced-ai-systems-are-personalized/
reposted by
Miranda Bogen
Center for Democracy & Technology
9 months ago
From CDT’s @mbogen.bsky.social: “As #AI companies are racing to put out increasingly advanced systems, they also seem to be cutting more and more corners on safety, which doesn’t add up.”
www.ft.com/content/8...
OpenAI slashes AI model safety testing time
Testers have raised concerns that its technology is being rushed out without sufficient safeguards
https://www.ft.com/content/8253b66e-ade7-4d1f-993b-2d0779c7e7d8
reposted by
Miranda Bogen
Center for Democracy & Technology
12 months ago
To truly understand AI’s risks & impacts, we need sociotechnical frameworks that connect the technical with the societal. Holistic assessments can guide responsible AI deployment & safeguard safety and rights. 📖 Read more:
cdt.org/insights/ado...
Adopting More Holistic Approaches to Assess the Impacts of AI Systems
by Evani Radiya-Dixit, CDT Summer Fellow As artificial intelligence (AI) continues to advance and gain widespread adoption, the topic of how to hold developers and deployers accountable for the AI systems they implement remains pivotal. Assessments of the risks and impacts of AI systems tend to evaluate a system’s outcomes or performance through methods like […]
https://cdt.org/insights/adopting-more-holistic-approaches-to-assess-the-impacts-of-ai-systems/
reposted by
Miranda Bogen
Center for Democracy & Technology
12 months ago
A new explainer from CDT’s Amy Winecoff + @mbogen.bsky.social dives into the fundamentals of hypothesis testing, how auditors can apply it to AI systems, & where it might fall short. Using simulations, we show its role in detecting bias in a hypothetical hiring algorithm.
cdt.org/insights/hyp...
Hypothesis Testing for AI Audits
Introduction AI systems are used in a range of settings, from low-stakes scenarios like recommending movies based on a user’s viewing history to high-stakes areas such as employment, healthcare, finance, and autonomous vehicles. These systems can offer a variety of benefits, but they do not always behave as intended. For instance, ChatGPT has demonstrated bias […]
https://cdt.org/insights/hypothesis-testing-for-ai-audits/
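As a concrete illustration of the approach the explainer describes, here is a small simulated audit (not the explainer's own code; the data, selection rates, and 0.05 threshold are all invented) that uses a two-proportion z-test to ask whether a hypothetical hiring algorithm selects candidates from two groups at different rates.

```python
# Illustrative simulation in the spirit of the explainer (not its actual
# code): testing whether a hypothetical hiring algorithm's selection
# rates differ between two groups. All data are simulated.
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

rng = np.random.default_rng(42)

# Simulated audit: 500 applicants per group, with a built-in disparity.
n_a, n_b = 500, 500
selected_a = rng.binomial(n_a, 0.30)  # group A selected at ~30%
selected_b = rng.binomial(n_b, 0.22)  # group B selected at ~22%

# H0: the two selection rates are equal; H1: they differ.
z_stat, p_value = proportions_ztest([selected_a, selected_b], [n_a, n_b])

print(f"selection rates: A={selected_a / n_a:.1%}, B={selected_b / n_b:.1%}")
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("reject H0: the disparity is unlikely to be chance alone")
else:
    print("fail to reject H0: no statistically detectable disparity")
```

With samples this size, the built-in eight-point gap will typically be flagged as significant; rerunning the simulation with equal underlying rates shows how often the same test correctly fails to reject the null.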
reposted by
Miranda Bogen
Center for Democracy & Technology
12 months ago
NEW REPORT: CDT AI Governance Lab's report Assessing AI looks at the rise of complex automated systems, which demand a robust ecosystem for managing risks and ensuring accountability.
cdt.org/insights/ass...
cc:
@mbogen.bsky.social
reposted by
Miranda Bogen
Kendra Albert @ MAGFest
about 1 year ago
@upturn.org
is hiring for a research associate! Excellent opportunity to work with some fantastic folks!
www.upturn.org/join/researc...
Upturn Seeks a Research Associate
This position is ideal for someone who is excited about sharp, interdisciplinary research on a range of topics related to technology, policy, and justice.
https://www.upturn.org/join/research-associate/
reposted by
Miranda Bogen
logan koepke
about 1 year ago
howdy! the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu. i hope you give it a read — the article is just the beginning of this line of work.
www.law.georgetown.edu/georgetown-l...