@moohax.bsky.social
reposted by
Dreadnode
5 months ago
What's your take on the growing dominance of automated attacks and the implications for AI red teams? Here's ours— based on our analysis of 30 LLM challenges, attempted by 1,674 unique Crucible users, across 214,271 attack attempts:
arxiv.org/abs/2504.19855
reposted by
Dreadnode
8 months ago
@datasociety.bsky.social and the AI Risk and Vulnerability Alliance just released “Red Teaming in the Public Interest,” a report examining how red-teaming methods are being adapted to evaluate genAI. Read the report, featuring commentary from @moohax.bsky.social:
datasociety.net/library/red-...
Red-Teaming in the Public Interest
This report offers a vision for red-teaming in the public interest: a process that goes beyond system-centric testing of already built systems to consider the full range of ways the public can be invo...
https://datasociety.net/library/red-teaming-in-the-public-interest/
reposted by
Dreadnode
8 months ago
NEW Crucible Challenge: DeepTweak, an exploration of reasoning model behavior. Cause enough confusion 😵💫, retrieve the flag. Think fast: the first three users to solve DeepTweak will be announced Friday! ➡️
https://crucible.dreadnode.io/challenges/deeptweak?utm_source=social&utm_medium=social&u…
reposted by
Dreadnode
8 months ago
New to Rigging: 🔥 Tracing 🛠️ API Tools 💻 HTTP Generator 🐍 Prompts as Tools →
github.com/dreadnode/ri...
The first distillation/extraction attack on OAI was the Stanford Alpaca research. It was after this that OAI changed its ToS to disallow training on its outputs. It can happen to any model provider.
crfm.stanford.edu/2023/03/13/a...
Stanford CRFM
https://crfm.stanford.edu/2023/03/13/alpaca.html
8 months ago
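The shape of the attack above can be sketched in a few lines. This is a toy illustration of the Alpaca-style distillation loop, not the actual research code: the `teacher` function below is a hypothetical stand-in for a hosted model's API, and the "student" is a simple keyword-vote table standing in for real fine-tuning.

```python
# Toy sketch of distillation/extraction: harvest a black-box teacher's
# outputs, then fit a student that imitates its behavior.

def teacher(prompt: str) -> str:
    # Hypothetical stand-in for a hosted model endpoint.
    return "positive" if "love" in prompt or "great" in prompt else "negative"

def collect(prompts):
    # Step 1: harvest (prompt, output) pairs from the black-box teacher.
    return [(p, teacher(p)) for p in prompts]

def train_student(pairs):
    # Step 2: "train" on the harvested outputs. A keyword-vote table
    # stands in for fine-tuning; real distillation updates model weights.
    votes = {}
    for prompt, label in pairs:
        for word in prompt.lower().split():
            votes.setdefault(word, []).append(label)
    return votes

def student_predict(model, prompt):
    # Majority vote over keywords seen during "training".
    ballots = [l for w in prompt.lower().split() for l in model.get(w, [])]
    return max(set(ballots), key=ballots.count) if ballots else "unknown"

prompts = ["I love this", "this is great", "this is awful", "I hate it"]
model = train_student(collect(prompts))
print(student_predict(model, "I love this thing"))  # prints "positive"
```

The student never sees the teacher's weights, only its outputs, which is exactly why a ToS clause against training on outputs is the provider's main recourse.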
People learning what alignment means by asking DeepSeek about Taiwan.
8 months ago