Razan Baltaji
@razanbaltaji.bsky.social
Reposted by Razan Baltaji
Anita Keshmirian, PhD. · 3 months ago
New preprint: Do LLMs make different moral decisions when reasoning collectively? We find that LLM collectives endorse welfare-maximizing actions more often in groups than in solo runs, even at the cost of harming a minority. 📄
arxiv.org/abs/2507.00814
@razanbaltaji.bsky.social
Many LLMs Are More Utilitarian Than One
Moral judgment is integral to large language model (LLM) alignment and social reasoning. As multi-agent systems gain prominence, it becomes crucial to understand how LLMs function collectively during ...
https://arxiv.org/abs/2507.00814
Reposted by Razan Baltaji
Kush Varshney कुश वार्ष्णेय · 8 months ago
I'm presenting a seminar at the University of Illinois tomorrow at 4 pm. It will cover human-centered trustworthy AI in the age of agentic AI, and how a systems theory might help us understand and govern risks such as loss of dignity and loss of control.
calendars.illinois.edu/detail/5528?...
Toward a Systems Theory for Human-Centered Trustworthy Agentic AI
https://calendars.illinois.edu/detail/5528?eventId=33509435
Reposted by Razan Baltaji
Anita Keshmirian, PhD. · 9 months ago
I'll present "Many Minds, Diverging Morals: Human Groups vs. AI in Moral Decision-Making," my recent work with Eric Schulz & Razan Baltaji, next week at SCIoI, Berlin. Join us! Open to the public!
www.scienceofintelligence.de/event/anita-...
Anita Keshmirian (Forward College, Berlin): "Many Minds, Diverging Morals: Human Groups vs. AI in Moral Decision-Making" - scienceofintelligence.de
Moral judgments are inherently social, shaped by interactions with others in everyday life. Despite this, psychological research has rarely examined the
https://www.scienceofintelligence.de/event/anita-keshmirian-many-minds-diverging-morals-human-groups-vs-ai-in-moral-decision-making/