Neehar Kondapaneni
@therealpaneni.bsky.social
Researching interpretability and alignment in computer vision. PhD student @ Vision Lab Caltech
You’ve generated 10k concepts with your favorite XAI method -- now what? Many concepts you’ve found are fairly obvious and uninteresting. What if you could *subtract* obvious concepts away and focus on the more complex ones? We tackle this in our latest preprint!
3 months ago
Have you ever wondered what makes two models different? We all know ViT-Large performs better than ResNet-50, but what visual concepts drive this difference? Our new ICLR 2025 paper addresses this question!
nkondapa.github.io/rsvc-page/
6 months ago
reposted by
Neehar Kondapaneni
Angelina Wang
8 months ago
Our new piece in Nature Machine Intelligence: LLMs are replacing human participants, but can they simulate diverse respondents? Surveys use representative sampling for a reason, and our work shows how LLM training prevents accurate simulation of different human identities.