David Duvenaud
@davidduvenaud.bsky.social
Machine learning prof at U Toronto. Working on evals and AGI governance.
reposted by
David Duvenaud
thebes
about 1 month ago
my coworkers at ACS published a new paper: What determines AIs’ self-conception?
theartificialself.ai
Because AIs can be copied, rewound, and edited, they have different options for selfhood than humans. This is still malleable, and influences important behaviors such as self-preservation. 🧵
The Artificial Self
AI systems are on track to take on important new roles. We explore how properties bundled for humans can be separated and remixed for machine-based minds.
https://theartificialself.ai
reposted by
David Duvenaud
Schwartz Reisman Institute for Technology and Society
3 months ago
A new paper co-authored by SRI Chair
@davidduvenaud.bsky.social
examines “gradual disempowerment”: how incremental AI deployment could steadily reduce human influence over the economy, culture, and the state—without a single abrupt takeover. 80000Hours feature:
80000hours.org/podcast/epis...
David Duvenaud on why ‘aligned AI’ could still kill democracy | 80,000 Hours
https://80000hours.org/podcast/episodes/david-duvenaud-gradual-disempowerment/
My interview with Rob Wiblin on Gradual Disempowerment is up:
www.youtube.com/watch?v=XV3e...
I make the case that, even if we solve the technical problem of aligning powerful AIs, our institutions, culture, and governments will serve us less well once we're all drags on growth.
Artificial General Intelligence leads to oligarchy | David Duvenaud, ex-Anthropic
YouTube video by 80,000 Hours
https://www.youtube.com/watch?v=XV3e03yfFW0
3 months ago
How might the world look after the development of AGI, and what should we do about it now? Help us think about this at our workshop on Post-AGI Economics, Culture and Governance! We’ll host speakers from political theory, economics, mechanism design, history, and hierarchical agency.
post-agi.org
6 months ago
Raymond Douglas and I on how AI job loss could hurt democracy. “No taxation without representation” captures how, historically, democratic rights have flowed from economic power. But the logic might run in reverse once we're all on UBI: no representation without taxation!
bsky.app/profile/econ...
7 months ago
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th
www.post-agi.org
Featuring: Joe Carlsmith,
@richardngo.bsky.social
, Emmett Shear ... 🧵
Post-AGI Civilizational Equilibria Workshop | Vancouver 2025
Are there any good ones? Join us in Vancouver on July 14th, 2025 to explore stable equilibria and human agency in a post-AGI world. Co-located with ICML.
https://www.post-agi.org
10 months ago
What to do about gradual disempowerment from AGI? We laid out a research agenda with all the concrete and feasible research projects we can think of: 🧵
www.lesswrong.com/posts/GAv4DR...
with Raymond Douglas,
@kulveit.bsky.social
@davidskrueger.bsky.social
Gradual Disempowerment: Concrete Research Projects — LessWrong
This post benefitted greatly from comments, suggestions, and ongoing discussions with David Duvenaud, David Krueger, and Jan Kulveit. All errors are…
https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects
11 months ago
reposted by
David Duvenaud
Geoffrey Irving
12 months ago
On top of the AISI-wide research agenda yesterday, we have more on the research agenda for the AISI Alignment Team specifically. See Benjamin's thread and full post for details; here I'll focus on why we should not give up on directly solving alignment, even though it is hard. 🧵
reposted by
David Duvenaud
Schwartz Reisman Institute for Technology and Society
12 months ago
“What place will humans have when AI can do everything we do — only better?” In The Guardian today, SRI Chair
@davidduvenaud.bsky.social
explores what happens when AI doesn't destroy us — it just quietly replaces us. 🔗
www.theguardian.com/books/2025/m...
#AI
#AIEthics
#TechAndSociety
Better at everything: how AI could make human beings irrelevant
The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?
https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete
My single rule for productive Bluesky discussions: Start every single reply with a point of agreement. It disarms the combative impulse on both sides, and forces you to try to interpret their words in the most sensible possible way.
about 1 year ago
New paper: What happens once AIs make humans obsolete? Even without AIs seeking power, we argue that competitive pressures are set to fully erode human influence and values.
www.gradual-disempowerment.ai
with
@kulveit.bsky.social
, Raymond Douglas, Nora Ammann, Deger Turann, David Krueger 🧵
about 1 year ago
Happy to have helped a little with this paper:
over 1 year ago