David Duvenaud
@davidduvenaud.bsky.social
📤 919
📥 153
📝 56
Machine learning prof at U Toronto. Working on evals and AGI governance.
Raymond Douglas and I on how AI job loss could hurt democracy. “No taxation without representation” captures how, historically, democratic rights have flowed from economic power. But this might work in reverse once we’re all on UBI: no representation without taxation!
bsky.app/profile/econ...
3 days ago
3
6
0
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop! Post-AGI Civilizational Equilibria: Are there any good ones? Vancouver, July 14th
www.post-agi.org
Featuring: Joe Carlsmith,
@richardngo.bsky.social
, Emmett Shear ... 🧵
Post-AGI Civilizational Equilibria Workshop | Vancouver 2025
Are there any good ones? Join us in Vancouver on July 14th, 2025 to explore stable equilibria and human agency in a post-AGI world. Co-located with ICML.
https://www.post-agi.org
3 months ago
1
8
3
What to do about gradual disempowerment from AGI? We laid out a research agenda with all the concrete and feasible research projects we can think of: 🧵
www.lesswrong.com/posts/GAv4DR...
with Raymond Douglas,
@kulveit.bsky.social
@davidskrueger.bsky.social
Gradual Disempowerment: Concrete Research Projects — LessWrong
This post benefitted greatly from comments, suggestions, and ongoing discussions with David Duvenaud, David Krueger, and Jan Kulveit. All errors are…
https://www.lesswrong.com/posts/GAv4DRGyDHe2orvwB/gradual-disempowerment-concrete-research-projects
4 months ago
1
8
1
reposted by
David Duvenaud
Geoffrey Irving
5 months ago
On top of the AISI-wide research agenda yesterday, we have more on the research agenda for the AISI Alignment Team specifically. See Benjamin's thread and full post for details; here I'll focus on why we should not give up on directly solving alignment, even though it is hard. 🧵
1
4
2
reposted by
David Duvenaud
Schwartz Reisman Institute for Technology and Society
5 months ago
“What place will humans have when AI can do everything we do — only better?” In The Guardian today, SRI Chair
@davidduvenaud.bsky.social
explores what happens when AI doesn't destroy us — it just quietly replaces us. 🔗
www.theguardian.com/books/2025/m...
#AI
#AIEthics
#TechAndSociety
Better at everything: how AI could make human beings irrelevant
The end of civilisation might look less like a war, and more like a love story. Can we avoid being willing participants in our own downfall?
https://www.theguardian.com/books/2025/may/04/the-big-idea-can-we-stop-ai-making-humans-obsolete
2
4
6
My single rule for productive Bluesky discussions: Start every single reply with a point of agreement. It disarms the combative impulse on both sides, and forces you to try to interpret their words in the most sensible possible way.
6 months ago
2
7
0
New paper: What happens once AIs make humans obsolete? Even without AIs seeking power, we argue that competitive pressures are set to fully erode human influence and values.
www.gradual-disempowerment.ai
with
@kulveit.bsky.social
, Raymond Douglas, Nora Ammann, Deger Turan, David Krueger 🧵
8 months ago
1
17
5
Happy to have helped a little with this paper:
9 months ago
0
9
0