Mia Hoffmann
@miahoffmann.bsky.social
AI governance, harms and assessment | Research fellow
@csetgeorgetown.bsky.social
✨ New report with
@partnershipai.bsky.social
! AI agents pose new risks. Monitoring is essential to ensure effective oversight and intervention when needed. Our paper presents a framework for real-time failure detection that takes into account stakes, reversibility and affordances of agent actions.
18 days ago
1
1
1
reposted by
Mia Hoffmann
CSET
2 months ago
✨New Analysis✨ Can the new EU AI Code of Practice change the global AI safety landscape? As companies like Anthropic, OpenAI, and Google sign on, CSET's
@miahoffmann.bsky.social
explores the code's Safety and Security chapter.
cset.georgetown.edu/article/eu-a...
AI Safety under the EU AI Code of Practice – A New Global Standard? | Center for Security and Emerging Technology
To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general purpose AI comply ...
https://cset.georgetown.edu/article/eu-ai-code-safety/
0
1
2
reposted by
Mia Hoffmann
Vikram Venkatram
2 months ago
Yesterday's new AI Action Plan has a lot worth discussing! One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations." This could be cause for concern.
1
5
3
reposted by
Mia Hoffmann
CSET
4 months ago
New Explainer! Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work? In their new explainer,
@jessicaji.bsky.social
,
@vikramvenkatram.bsky.social
&
@stephbatalis.bsky.social
break down the different fundamental types of AI safety evaluations.
1
4
4
reposted by
Mia Hoffmann
Helen Toner
4 months ago
Funding opportunity – share with your AI research networks! Internal deployments of frontier AI models are an underexplored source of risk. My program at
@csetgeorgetown.bsky.social
just opened a call for research ideas – EOIs due Jun 30. Full details:
cset.georgetown.edu/wp-content/u...
Summary below.
1
9
6
Today,
@csetgeorgetown.bsky.social
published our recommendations for the U.S. AI Action Plan. One of them is a CSET evergreen: implement an AI incident reporting regime for AI used by the federal government. Why? Short answer: because we can learn a ton from incidents! Long answer below.
7 months ago
1
5
2
reposted by
Mia Hoffmann
CSET
7 months ago
We're hiring – only a few days left to apply! CSET is looking for a Media Engagement Specialist to amplify our research. If you're a strategic communicator who can craft press releases, media pitches, & social content, apply by March 17, 2025!
cset.georgetown.edu/job/media-en...
Media Engagement Specialist | Center for Security and Emerging Technology
The Center for Security and Emerging Technology, under the School of Foreign Service, is a research organization focused on studying the security impacts of emerging technologies, supporting academic ...
https://cset.georgetown.edu/job/media-engagement-specialist/
0
0
2
reposted by
Mia Hoffmann
CSET
7 months ago
What: CSET Webinar. When: Tuesday, 3/25 at 12PM ET. What's next for AI red-teaming? And how do we make it more useful? Join Tori Westerhoff, Christina Liaghati, Marius Hobbhahn, and CSET's
@dr-bly.bsky.social
&
@jessicaji.bsky.social
for a great discussion:
cset.georgetown.edu/event/whats-...
What's Next for AI Red-Teaming? | Center for Security and Emerging Technology
On March 25, CSET will host an in-depth discussion about AI red-teaming – what it is, how it works in practice, and how to make it more useful in the future.
https://cset.georgetown.edu/event/whats-next-for-ai-red-teaming/
0
4
5
reposted by
Mia Hoffmann
CSET
7 months ago
What does the EU's shifting strategy mean for AI? CSET's
@miahoffmann.bsky.social
&
@ojdaniels.bsky.social
have a new piece out for
@techpolicypress.bsky.social
. Read it now:
0
4
4
reposted by
Mia Hoffmann
Tech Policy Press
7 months ago
Mia Hoffmann and Owen J. Daniels from Georgetown's Center for Security and Emerging Technology say Europe's apparent shift on AI policy could change the global landscape for AI governance.
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown's Center for Security and Emerging Technology say Europe's movements could change the global landscape.
https://buff.ly/PX4A3pq
0
6
3
If you've ever wondered what the EU and elephants have in common - or are wondering now - read my latest piece with
@ojdaniels.bsky.social
! We take a look at what the EU's new innovation-friendly regulatory approach might mean for the global AI policy ecosystem.
www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown's Center for Security and Emerging Technology say Europe's movements could change the global landscape.
https://www.techpolicy.press/out-of-balance-what-the-eus-strategy-shift-means-for-the-ai-ecosystem/
7 months ago
0
2
3
reposted by
Mia Hoffmann
CSET
7 months ago
CSET is hiring! We're hiring a software engineer to support
@emergingtechobs.bsky.social
. Help build high-quality public tools and datasets to inform critical decisions on emerging tech issues. Interested or know someone who would be? Learn more and apply:
cset.georgetown.edu/job/software...
Software Engineer | Center for Security and Emerging Technology
The Center for Security and Emerging Technology (CSET), under the School of Foreign Service, is hiring a Software Engineer. The Software Engineer will be a generalist who can flex between full-stack w...
https://cset.georgetown.edu/job/software-engineer/
0
3
4
There have been a ton of AI policy developments coming out of the EU these past weeks, but a deeply concerning one is the withdrawal of the AI Liability Directive (AILD) by the European Commission. Here's why:
8 months ago
1
4
4
reposted by
Mia Hoffmann
Mina Narayanan
8 months ago
@miahoffmann.bsky.social
,
@ojdaniels.bsky.social
, and I wrote a piece on key AI governance areas to watch in 2025 with the upcoming AI Action Summit in mind. Check it out here!
thebulletin.org/2025/02/will...
Will the Paris artificial intelligence summit set a unified approach to AI governance – or just be another conference?
AI innovations and governments' preferences can make international consensus on governance at the Paris Summit challenging.
https://thebulletin.org/2025/02/will-the-paris-artificial-intelligence-summit-set-a-unified-approach-to-ai-governance-or-just-be-another-conference/
0
5
3
reposted by
Mia Hoffmann
Bulletin of the Atomic Scientists
8 months ago
Will the Paris
#AIActionSummit
set a unified approach to AI governance – or just be another conference? A new article from
@miahoffmann.bsky.social
,
@minanrn.bsky.social
, and
@ojdaniels.bsky.social
.
Will the Paris artificial intelligence summit set a unified approach to AI governance – or just be another conference?
AI innovations and governments' preferences can make international consensus on governance at the Paris Summit challenging.
https://thebulletin.org/2025/02/will-the-paris-artificial-intelligence-summit-set-a-unified-approach-to-ai-governance-or-just-be-another-conference/?utm_source=Bluesky&utm_medium=SocialMedia&utm_campaign=BlueskyPost022025&utm_content=DisruptiveTechnologies_ParisAISummit_02062025
0
7
6
reposted by
Mia Hoffmann
Owen J. Daniels
8 months ago
With the government portion of the AI Action Summit next week,
@minanrn.bsky.social
,
@miahoffmann.bsky.social
and I wrote for
@thebulletin.org
about some key AI governance questions for the year ahead
thebulletin.org/2025/02/will...
Will the Paris artificial intelligence summit set a unified approach to AI governance – or just be another conference?
AI innovations and governments' preferences can make international consensus on governance at the Paris Summit challenging.
https://thebulletin.org/2025/02/will-the-paris-artificial-intelligence-summit-set-a-unified-approach-to-ai-governance-or-just-be-another-conference/
1
8
4
Yesterday, the EU AI Act's first few provisions came into effect. The General Provisions and the prohibitions on unacceptable-risk AI systems are applicable from now on. Here's what that means:
8 months ago
1
0
0
US leadership in AI has been a goal of the past Trump & Biden administrations. But that concept of leadership focused too much on "AGI" and too little on AI diffusion. The DeepSeek release - a model that was immediately widely adopted - is a reminder to adjust these priorities. Here's why:
8 months ago
1
5
1
reposted by
Mia Hoffmann
Karen Hao
8 months ago
As someone who has reported on AI for 7 years and covered China tech as well, I think the biggest lesson to be drawn from DeepSeek is the huge cracks it illustrates with the current dominant paradigm of AI development. A long thread. 1/
212
6209
3113
Do you care about AI? Wonder what it means for the workforce? Worried about biorisk or tech competition with China? Curious about AI governance? If you answered Yes to any of these, check out our Starter Pack and follow my brilliant colleagues working on these topics!
bsky.app/starter-pack...
8 months ago
0
2
0
reposted by
Mia Hoffmann
Justin Hendrix
10 months ago
"Internal company documents... show that Amazon health and safety personnel recommended relaxing enforcement of the production quotas to lower injury rates, but that senior executives rejected the recommendations apparently because they worried about the effect on the company's performance."
Amazon Disregarded Internal Warnings on Injuries, Senate Investigation Claims (Gift Article)
A staff report by the Senate labor committee, led by Bernie Sanders, uncovered evidence of internal concern about high injury rates at the e-commerce giant.
https://www.nytimes.com/2024/12/16/business/economy/amazon-warehouse-injuries.html?unlocked_article_code=1.h04.4o0F.0Wz9AlQpIZRI&smid=url-share
13
282
146
reposted by
Mia Hoffmann
Alondra Nelson
10 months ago
"Denied by AI," the multi-part STAT News investigation of how
#UnitedHealthcare
used an opaque algorithmic system to deny care to people who needed it is a
#mustread
www.statnews.com/2023/03/13/m...
26
1143
581
reposted by
Mia Hoffmann
Hypervisible
10 months ago
"An internal assessment of a machine-learning programme used to vet thousands of claims for universal credit payments across England found it incorrectly selected people from some groups more than others when recommending whom to investigate for possible fraud."
Revealed: bias found in AI system used to detect UK benefits fraud
Exclusive: Age, disability, marital status and nationality influence decisions to investigate claims, prompting fears of "hurt first, fix later" approach
https://www.theguardian.com/society/2024/dec/06/revealed-bias-found-in-ai-system-used-to-detect-uk-benefits
3
61
27
reposted by
Mia Hoffmann
CSET
10 months ago
New Report! There are many steps in the pathway to biological harm, including risks posed by AI. CSET Fellow
@stephbatalis.bsky.social
offers a suite of corresponding policy and governance tools to help mitigate biorisk. Read more here:
cset.georgetown.edu/publication/...
Anticipating Biological Risk: A Toolkit for Strategic Biosecurity Policy | Center for Security and Emerging Technology
Artificial intelligence (AI) tools pose exciting possibilities to advance scientific, biomedical, and public health research. At the same time, these tools have raised concerns about their potential t...
https://cset.georgetown.edu/publication/anticipating-biological-risk-a-toolkit-for-strategic-biosecurity-policy/
0
2
2