Augustin Godinot
@grodino.bsky.social
Algorithm Auditing | CS PhD student @ INRIA/IRISA/PEReN | Visiting UofT @ CleverHans Lab!
PhDone! A few weeks ago, I defended my contributions to avoid a dieselgate moment for AI regulation. After some rest, I am now entering a blissful period of pure academic freedom, courtesy of the French unemployment benefits 🇫🇷 Time to look for a new academic home!
10 days ago
1
7
1
Interested in how AI providers can alter their products to evade regulation, the possible defenses, and what it means for AI governance? Or just want a nice soothing voice to nap to after lunch? I'm defending on Feb. 10 at 2:30PM, Paris time. 3 years of PhD research distilled into 45 minutes, see the link below to tune in!
Thinking Out of the (Black)-Box: Tools for machine learning audits in the presence of deceptive model providers
https://grodino.github.io/defense/
about 2 months ago
1
2
0
reposted by
Augustin Godinot
Typst
7 months ago
In the past two years, Typst has become the foundation of document writing for so many people. Building on the lessons from their experience, we are launching our new website today.
1
44
11
We are presenting the paper with Milos tomorrow at 11am in East Exhibition Hall A-B. Come chat at our poster!
#E-1911
8 months ago
0
0
0
A new example of a discrepancy between what is shown to auditors and what really happens on the system (and this time, it's not Meta). Thank you
@aiforensics.org
! Can't wait for the "sorry it was an intern's mistake, we fired them" answer.
9 months ago
0
1
0
reposted by
Augustin Godinot
AI Forensics
9 months ago
🧵 We just exposed how TikTok's "Research API" is systematically hiding content from researchers. Despite promises of transparency under EU law, the platform is missing 1 in 8 videos from its own research tools:
aiforensics.org/work/tk-api
TikTok Research API - Availability Dashboard
https://playground.tiktok-audit.com/api-na/
1
6
3
ICML25 spotlight! How to detect and prevent audit manipulations? Do you remember Dieselgate? The car's computer would detect when it was on a test bench and reduce the engine power to fake environmental compliance. Well, this can happen in AI too. How can we avoid it? 🧵 1/6
10 months ago
1
4
1
I will be presenting this work at
#AAAI
on Saturday (board 39). Come and chat about model comparison, auditing, and manipulations!
about 1 year ago
0
1
0
Queries, Representation, Detection: the next 100 model fingerprinting schemes. I made some Lemon QuRD, I hope you like it! Code:
github.com/grodino/QuRD/
Paper:
arxiv.org/abs/2412.13021
TL;DR: we show that a simple baseline meets or beats existing model fingerprints and investigate why.
about 1 year ago
0
0
1
reposted by
Augustin Godinot
Nicolas Papernot
about 1 year ago
If you work at the intersection of security, privacy, and machine learning, or more broadly on how to trust ML, SaTML is a small-scale conference with highly relevant work where you'll be able to have high-quality conversations with colleagues working in your area.
2
12
5