Paul Gavrikov
@paulgavrikov.bsky.social
Independent Researcher | Machine Learning & Computer Vision paulgavrikov.github.io
🚨 Our submission deadline is approaching (March 12, 2026) 🚨 Submit your work to CV4Edu: Computer Vision for Education, our official workshop at
#CVPR2026
! 📚👁️ 👉 Learn more and submit here:
cv4edu.github.io
about 15 hours ago
Proud to announce that VisualOverload was accepted to
#CVPR2026
! Overall 2/2 accepted.
16 days ago
We are proud to announce the first CV4Edu workshop at
#CVPR2026
in Denver. We are bringing together researchers in AI in education, computer vision, and human-centered AI, and invite you to submit a full, short, or position paper to our workshop.
about 1 month ago
Tracking the spookiest runs
@weightsbiases.bsky.social
4 months ago
The worst part about leaving academia is that you lose eduroam access :(
5 months ago
🚨 New paper out! "VisualOverload: Probing Visual Understanding of VLMs in Really Dense Scenes" 👉
arxiv.org/abs/2509.25339
We test 37 VLMs on 2,700+ VQA questions about dense scenes. Findings: even top models fumble badly—<20% on the hardest split and key failure modes in counting, OCR & consistency.
5 months ago
Is basic image understanding solved in today’s SOTA VLMs? Not quite. We present VisualOverload, a VQA benchmark testing simple vision skills (like counting & OCR) in dense scenes. Even the best model (o3) only scores 19.8% on our hardest split.
6 months ago
reposted by
Paul Gavrikov
Keuper Labs
8 months ago
Congratulations to
@paulgavrikov.bsky.social
for an excellent PhD defense today!
Yesterday, I had the great honor of delivering a talk on feature biases in vision models at the VAL Lab at the Indian Institute of Science (IISc). I covered our ICLR 2025 paper and a few older works in the same realm.
youtu.be/9efpCs1ltcM
Paul Gavrikov - Feature Biases in Vision Models (Research Talk @ IISc, Bengaluru)
10 months ago
What an incredible week at
#ICLR
2025! 🌟 I had an amazing time presenting our poster "Can We Talk Models Into Seeing the World Differently?" with
@jovitalukasik.bsky.social
. Huge thanks to everyone who stopped by — your questions, insights, and conversations made it such a rewarding experience.
10 months ago
Today at 3pm - poster #328. See you there!
11 months ago
On Thursday, I'll be presenting our paper "Can We Talk Models Into Seeing the World Differently?" (#328) at ICLR 2025 in Singapore! If you're attending or just around Singapore, I'd love to connect—feel free to reach out! Also, I'm exploring postdoc or industry opportunities—happy to chat!
11 months ago
We should start boycotting proceedings that do not have a "cite this article" button.
about 1 year ago
Proud to announce that our paper "Can We Talk Models Into Seeing the World Differently?" was accepted at
#ICLR2025
🇸🇬. This marks my last PhD paper, and we are honored that all 4 reviewers recommended acceptance, placing us in the top 6% of all submissions.
about 1 year ago
I analyzed the 20 most common words in the titles of all computer vision (cs.CV) papers on arXiv since 2007. A lot of "learning" papers ...
about 1 year ago
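The analysis above boils down to tokenizing titles and counting. A minimal stdlib sketch of that counting step, using a tiny hardcoded sample in place of the real corpus (in practice the titles would be fetched from the arXiv API; the sample titles and stopword list here are illustrative assumptions):

```python
import re
from collections import Counter

# Hypothetical sample; the real analysis would pull all cs.CV titles
# from the arXiv API in batches.
titles = [
    "Deep Residual Learning for Image Recognition",
    "Learning Transferable Visual Models From Natural Language Supervision",
    "Self-Supervised Learning of Visual Features",
]

STOPWORDS = {"a", "the", "and", "for", "of", "from", "with", "on", "in", "to"}

def top_words(titles, k=20):
    """Return the k most common non-stopword title words (lowercased)."""
    words = (
        w
        for t in titles
        for w in re.findall(r"[a-z]+", t.lower())
        if w not in STOPWORDS
    )
    return Counter(words).most_common(k)

print(top_words(titles, k=2))  # → [('learning', 3), ('visual', 2)]
```

On the real corpus, the same `Counter.most_common(20)` call yields the top-20 list from the post.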
10 years ago
#ICLR
(2014) only had 69 submissions, today nearly 12k. So many good papers:
openreview.net/group?id=ICLR.cc/2014/conference
about 1 year ago
Yeah, I am sure my paper on Feature Biases in ImageNet classifiers is a great match for a journal on lifestyle and sustainable goals.
about 1 year ago
Today at
#NeurIPS2024
Interpretable AI Workshop! Poster sessions are at 10am and 4pm. "How Do Training Methods Influence the Utilization of Vision Models?"
arxiv.org/abs/2410.14470
We won’t be there, but Ruta Binkyte is presenting for us!
about 1 year ago
Gemini 2.0 Flash is the best free-to-use LLM for writing tasks, blazing fast, and, last but not least, available in the EU ...
about 1 year ago
One of the most interesting aspects of adversarial training to me is not that it increases robustness, but that it can revert biases. If a model is strongly biased toward X, it will likely ignore X entirely at a sufficiently large eps. This is clearly visible in the texture/shape bias!
over 1 year ago
The adoption of ChatGPT among non-tech users is incredible - I see so many people on public transport using the app.
over 1 year ago
Tired of showing the same old adversarial attack examples? Generate your own with this little Foolbox-based Colab:
colab.research.google.com/drive/1WfqEW...
over 1 year ago
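The Colab uses Foolbox on a pretrained network; for intuition, the core FGSM idea it applies can be sketched dependency-free on a toy logistic model (all weights and inputs below are made-up illustrations, not from the Colab):

```python
def predict(x, w, b):
    """Signed logit of a logistic model p(y=+1|x) = sigmoid(w.x + b)."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, w, b, y, eps):
    """One FGSM step: x_adv = x + eps * sign(grad_x loss).

    For the logistic loss -log sigmoid(y * (w.x + b)) with y in {-1, +1},
    sign(grad_x)_i = -y * sign(w_i), so no autodiff is needed here.
    """
    return [xi + eps * (-y) * sign(wi) for xi, wi in zip(x, w)]

# Toy example: the clean input is classified positive, FGSM flips it.
w, b = [2.0, -1.0], 0.0
x, y = [0.3, -0.2], +1
x_adv = fgsm(x, w, b, y, eps=0.5)
print(predict(x, w, b) > 0, predict(x_adv, w, b) > 0)  # → True False
```

Foolbox wraps exactly this pattern for real models: it computes the loss gradient via autodiff and perturbs the input within an eps-ball.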
WILD! Some researchers have republished ResNet under their own names in a predatory journal.
@csprofkgd.bsky.social
Predatory:
ijircce.com/admin/main/s...
over 1 year ago
#ICLR
best rebuttal award goes to ...
over 1 year ago