Angie Boggust
@angieboggust.bsky.social
📤 419 · 📥 250 · 📝 11
MIT PhD candidate in the VIS group working on interpretability and human-AI alignment
reposted by Angie Boggust
VISxAI
9 months ago
#VISxAI IS BACK!! 🤖📊 Submit your interactive “explainables” and “explorables” that visualize, interpret, and explain AI. #IEEEVIS
📆 Deadline: July 30, 2025
visxai.io
Workshop on Visualization for AI Explainability
The role of visualization in artificial intelligence (AI) has gained significant attention in recent years. With the growing complexity of AI models, the critical need for understanding their inner-workings...
https://visxai.io
I’ll be at #CHI2025 🌸 If you are excited about interpretability and human-AI alignment, let’s chat! And come see Abstraction Alignment ⬇️ in the Explainable AI paper session on Monday at 4:20 JST.
10 months ago
#CHI2025 paper on human–AI alignment! 🧵 Models can learn the right concepts but still be wrong in how they relate them. ✨Abstraction Alignment✨ evaluates whether models learn human-aligned conceptual relationships. It reveals misalignments in LLMs 💬 and medical datasets 🏥. 🔗 arxiv.org/abs/2407.12543
10 months ago
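The post above condenses the paper's framing into one line: a model can represent the right concepts while relating them in ways a human hierarchy would not. As an illustration only, here is a minimal, hypothetical Python sketch of that framing. The hierarchy, the embeddings, and the nearest-neighbor check are all invented for this sketch and are not the paper's actual method; see arxiv.org/abs/2407.12543 for that.

import numpy as np

# Hypothetical human abstraction: each concept maps to a parent category.
human_hierarchy = {
    "dog": "animal",
    "cat": "animal",
    "car": "vehicle",
    "truck": "vehicle",
}

# Hypothetical model concept representations (e.g., mean class activations).
rng = np.random.default_rng(0)
embeddings = {c: rng.normal(size=16) for c in human_hierarchy}

def cosine(a, b):
    # Cosine similarity between two vectors.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Crude alignment proxy: does each concept's nearest neighbor in the model's
# representation space share its human-assigned parent?
concepts = list(embeddings)
aligned = 0
for c in concepts:
    others = [o for o in concepts if o != c]
    nearest = max(others, key=lambda o: cosine(embeddings[c], embeddings[o]))
    aligned += human_hierarchy[c] == human_hierarchy[nearest]

print(f"concepts whose nearest neighbor shares their parent: {aligned}/{len(concepts)}")

With random embeddings this proxy hovers near chance; the point is only to show the shape of the question the paper asks, not its answer.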