Filippo Stocco
@filippostocco.bsky.social
Manuel Irimia · about 2 months ago · reposted by Filippo Stocco
🧬🧬🧬 New review from the lab: Evolution of comparative transcriptomics: biological scales, phylogenetic spans, and modeling frameworks
authors.elsevier.com/sd/article/S...
By @mattezambon.bsky.social & @fedemantica.bsky.social, together with @jonnyfrazer.bsky.social & Mafalda Dias.
Martin Steinegger 🇺🇦 · about 2 months ago · reposted by Filippo Stocco
MMseqs2 v18 is out:
- SIMD forward/backward (FW/BW) alignment (preprint soon!)
- Substitution matrix λ calculator by Eric Dawson
- Faster ARM Smith-Waterman (SW) by Alexander Nesterovskiy
- MSA-Pairformer’s proximity-based pairing for multimer prediction (www.biorxiv.org/content/10.1...; available in the ColabFold API)
💾 github.com/soedinglab/M... & 🐍
MLSB (in San Diego + Copenhagen) · 2 months ago · reposted by Filippo Stocco
Stay tuned for details on the 6th edition of MLSB, officially happening this December in downtown San Diego, CA!
Martin Pacesa · 3 months ago · reposted by Filippo Stocco
We have written up a tutorial on how to run BindCraft, how to prepare your input PDB, how to select hotspots, and various other tips and tricks to get the most out of binder design!
github.com/martinpacesa...
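As a loose illustration of the kind of inspection that goes into picking hotspots on a target (this is not BindCraft's code; the PDB file name, chain ID, and neighbour cutoffs below are placeholder assumptions), here is a short Biopython sketch that flags relatively exposed residues on a chain of interest:

```python
# Illustrative only: flag relatively exposed residues on a target chain as a crude
# shortlist for hotspot picking. File name, chain ID, and cutoffs are placeholders.
from Bio.PDB import PDBParser, NeighborSearch

parser = PDBParser(QUIET=True)
structure = parser.get_structure("target", "target.pdb")   # hypothetical input PDB
model = structure[0]
chain = model["A"]                                          # chain of interest (assumption)

search = NeighborSearch(list(model.get_atoms()))

for residue in chain:
    if residue.id[0] != " ":            # skip waters and hetero residues
        continue
    if "CA" not in residue:
        continue
    ca = residue["CA"]
    # Fewer atoms around the C-alpha is a rough proxy for surface exposure.
    n_neighbours = len(search.search(ca.coord, 10.0))
    if n_neighbours < 60:               # arbitrary exposure threshold
        print(f"{chain.id} {residue.get_resname()}{residue.id[1]} neighbours={n_neighbours}")
```

The tutorial linked above is the authoritative guide for the actual input preparation and hotspot settings.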
Filippo Stocco · 3 months ago
It’s never been easier to align your protein language model (pLM)! We’ve released a major update to our ProtRL repo:
✅ GRPO via the Hugging Face Trainer
✅ New support for weighted DPO
Built for flexible, scalable RL on top of the HF Trainer. Check it out here:
github.com/AI4PDLab/Pro...
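For a sense of what GRPO-style alignment of a pLM looks like on the Hugging Face stack, here is a minimal sketch using TRL's GRPOTrainer; the reward function, prompts, and the choice of nferruz/ProtGPT2 as the base model are assumptions for illustration, not necessarily how ProtRL wires things up:

```python
# A minimal GRPO sketch on the Hugging Face stack (TRL); the reward, prompts, and
# base checkpoint are illustrative assumptions, not ProtRL's actual defaults.
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

def reward_fn(completions, **kwargs):
    # Placeholder reward: fraction of tryptophans in each generated sequence.
    # A real run would score completions with an activity/stability predictor.
    return [c.count("W") / max(len(c), 1) for c in completions]

# Short sequence prefixes used as prompts (assumption).
dataset = Dataset.from_dict({"prompt": ["M", "MA", "MK"] * 100})

config = GRPOConfig(
    output_dir="plm-grpo-sketch",
    per_device_train_batch_size=8,
    num_generations=8,          # group size used for the relative advantage
    max_completion_length=64,
    learning_rate=1e-5,
)

trainer = GRPOTrainer(
    model="nferruz/ProtGPT2",   # example public protein LM, not necessarily ProtRL's choice
    reward_funcs=reward_fn,
    args=config,
    train_dataset=dataset,
)
trainer.train()
```

Swapping the toy reward for a real activity or stability predictor is where the alignment signal would actually come from.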
Andrea Hunklinger · 3 months ago · reposted by Filippo Stocco
In our newest preprint, we review current explainable AI (XAI) methods. We divide the workflow of a generative decoder-only model into four information contexts for XAI: training dataset, input query, model components, and output sequence. See here:
arxiv.org/abs/2506.19532
@aichemist.bsky.social
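As one concrete example of attribution in the "input query" context, here is a generic gradient-saliency sketch for a decoder-only protein LM; this is not the preprint's method, and the checkpoint and toy sequence are assumptions:

```python
# Generic gradient-saliency sketch for the "input query" context of a decoder-only
# protein LM. Checkpoint and input sequence are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nferruz/ProtGPT2"          # any causal pLM works the same way
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

inputs = tokenizer("MKTAYIAKQR", return_tensors="pt")    # toy query sequence

# Embed the tokens ourselves so gradients can be taken w.r.t. the input representation.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

outputs = model(inputs_embeds=embeddings, attention_mask=inputs["attention_mask"])
# Attribute the log-probability of the most likely next token back to each input position.
next_token_logprobs = outputs.logits[0, -1].log_softmax(-1)
next_token_logprobs.max().backward()

saliency = embeddings.grad.norm(dim=-1).squeeze(0)       # one score per input token
for token, score in zip(tokenizer.convert_ids_to_tokens(inputs["input_ids"][0]), saliency.tolist()):
    print(f"{token}\t{score:.4f}")
```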
Noelia Ferruz · 10 months ago · reposted by Filippo Stocco
Protein language models excel at generating functional yet remarkably diverse artificial sequences. However, they fail to naturally sample rare data points, such as sequences with very high activities. In our new preprint, we show that RL can solve this without the need for additional data:
arxiv.org/abs/2412.12979
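To make the idea concrete, here is a toy, generic REINFORCE-style loop: sample from the pLM, score the samples with an external predictor, and up-weight the likelihood of high-reward samples. The reward function, checkpoint, and hyperparameters are placeholders; this is not the algorithm from the preprint.

```python
# Toy REINFORCE-style loop: sample sequences, score them with a placeholder "activity"
# oracle, and up-weight the likelihood of high-reward samples. Not the preprint's
# algorithm; checkpoint, reward, and hyperparameters are illustrative assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "nferruz/ProtGPT2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def activity_oracle(sequence: str) -> float:
    # Placeholder reward; a real run would call a trained activity/fitness predictor.
    return sequence.count("K") / max(len(sequence), 1)

prompt_ids = tokenizer("M", return_tensors="pt").input_ids

for step in range(10):
    # Sample a small batch of sequences from the current policy.
    samples = model.generate(prompt_ids, do_sample=True, max_new_tokens=32,
                             num_return_sequences=4, pad_token_id=tokenizer.eos_token_id)
    texts = tokenizer.batch_decode(samples, skip_special_tokens=True)
    rewards = torch.tensor([activity_oracle(t) for t in texts])
    advantages = rewards - rewards.mean()            # simple mean baseline

    # Re-score the sampled tokens and weight their log-likelihood by the advantage
    # (a fuller implementation would mask prompt and padding positions).
    logits = model(samples).logits[:, :-1]
    targets = samples[:, 1:]
    logp = torch.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    loss = -(advantages.unsqueeze(1) * logp).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: mean reward {rewards.mean():.3f}")
```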