Quentin Gallouédec
@qgallouedec.hf.co
📤 907
📥 170
📝 17
PhD - Research
@hf.co
🤗 TRL maintainer
It started as a modest project offering a free, open-source alternative to MuJoCo environments; today, panda-gym has been downloaded over 100k times and cited in over 100 papers. 🦾
5 months ago
0
7
1
just pip install trl
5 months ago
0
4
0
How many of these 8 things did you know?
huggingface.co/blog/qgallou...
Gotchas in Tokenizer Behavior Every Developer Should Know
A Blog post by Quentin Gallouédec on Hugging Face
https://huggingface.co/blog/qgallouedec/gotchas-in-tokenizer-behavior
5 months ago
1
4
0
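Not from the post itself, but two well-known gotchas of this kind, sketched with a stock `transformers` tokenizer (the model choice is illustrative):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")

# Gotcha: encode() adds special tokens by default, so encoding two strings
# separately and concatenating the ids is NOT the same as encoding the
# concatenated string.
print(tok.encode("hello"))                        # [101, 7592, 102] -> [CLS] hello [SEP]
print(tok.encode("hello") + tok.encode("world"))  # extra [SEP][CLS] in the middle
print(tok.encode("hello world"))                  # [101, 7592, 2088, 102]

# Gotcha: decode(encode(text)) is not guaranteed to round-trip the original
# text; casing, whitespace, and special tokens can all change.
print(tok.decode(tok.encode("Hello   World")))    # "[CLS] hello world [SEP]"
```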
🚀 TRL 0.14 – Featuring GRPO! 🚀 TRL 0.14 brings *GRPO*, the RL algorithm behind 🐳 DeepSeek-R1. ⚡ Blazing-fast generation with vLLM integration. 📉 Optimized training with DeepSpeed ZeRO 1/2/3.
8 months ago
0
4
0
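For context (not from the post): a minimal GRPO sketch assuming TRL ≥ 0.14, with an illustrative model, dataset, and toy reward function:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

# Prompt-only dataset; GRPO samples a group of completions per prompt.
dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward function: prefer completions close to 20 characters (illustrative only).
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="qwen-grpo", use_vllm=True)  # vLLM-backed generation
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

The same script can then be launched under DeepSpeed ZeRO through an accelerate config for multi-GPU training.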
reposted by
Quentin Gallouédec
Thomas Wolf
8 months ago
The most impactful open-source project of today (according to Vercel's VP of AI) =>
huggingface.co/blog/open-r1
Open-R1: a fully open reproduction of DeepSeek-R1
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
https://huggingface.co/blog/open-r1
0
81
18
Last moments of closed-source AI 🪦: Hugging Face is openly reproducing the pipeline of 🐳 DeepSeek-R1. Open data, open training, open models, open collaboration. 🫵 Let's go!
github.com/huggingface/...
GitHub - huggingface/open-r1: Fully open reproduction of DeepSeek-R1
Fully open reproduction of DeepSeek-R1. Contribute to huggingface/open-r1 development by creating an account on GitHub.
https://github.com/huggingface/open-r1
8 months ago
0
32
8
The algorithm behind DeepSeek's R1 model (aka GRPO) now lives in TRL's main branch! Go and test it!
8 months ago
0
4
0
[Stonks] TRL is a Python library for training language models, and it has seen impressive growth this year: lots of new features, an improved codebase, and a corresponding rise in usage. You can count on us to do even more in 2025.
9 months ago
0
1
0
🎅 Santa Claus has delivered the ultimate guide to understanding OOM errors (link in comment)
9 months ago
2
16
5
#1 Python dev today. Third time since September 🫨
10 months ago
0
4
0
🚨 TRL 0.13 is out! 🤗 Featuring a Process-supervised Reward Model (PRM) Trainer 🏋️ PRMs empower LLMs to "think before answering", a key capability behind OpenAI's o1 launch just two weeks ago. 🚀
10 months ago
0
1
0
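Roughly how the new trainer is used (a sketch based on my understanding of TRL 0.13; the model and dataset choices are illustrative):

```python
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer
from trl import PRMConfig, PRMTrainer

# A PRM scores each intermediate reasoning step, so the backbone is a
# token-classification model with two labels (step correct / incorrect).
model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Stepwise-labelled dataset: each example has a prompt, a list of completion
# steps, and a per-step correctness label.
train_dataset = load_dataset("trl-lib/math_shepherd", split="train[:10%]")

training_args = PRMConfig(output_dir="qwen-prm")
trainer = PRMTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```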
reposted by
Quentin Gallouédec
Lewis Tunstall
10 months ago
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥 How? By combining step-wise reward models with tree search algorithms :) We're open sourcing the full recipe and sharing a detailed blog post 👇
4
109
22
The number of TRL models on the 🤗 Hub has grown 60x this year! 📈 How about doing the same next year?
10 months ago
0
2
0
reposted by
Quentin Gallouédec
Ben Burtenshaw
10 months ago
We took those TRL notebooks from last week and made a page out of them. So if you're upskilling on fine-tuning or aligning LLMs and want examples from the community (like Maxime Labonne, Philipp Schmid, Sergio Paniego Blanco), check it out!
bsky.app/profile/benb...
>>
huggingface.co/docs/trl/mai...
1
21
4
Join us at Hugging Face as an intern if you want to contribute to amazing open-source projects and help develop the best LLM fine-tuning library, aka TRL. 🧑‍💻 Fully remote 🤯 Exciting subjects 🌍 Anywhere in the world 🤸🏻 Flexible working hours. Link to apply in comment 👇
10 months ago
1
7
0
reposted by
Quentin Gallouédec
Elie
10 months ago
We’re looking for an intern to join our SmolLM team! If you’re excited about training LLMs and building high-quality datasets, we’d love to hear from you. 🤗 US:
apply.workable.com/huggingface/...
EMEA:
apply.workable.com/huggingface/...
ML Research Engineer Internship, SmolLMs pretraining and datasets - EMEA Remote - Hugging Face
Here at Hugging Face, we’re on a journey to advance good Machine Learning and make it more accessible. Along the way, we contribute to the development of technology for the better. We have built the fa...
https://apply.workable.com/huggingface/j/0643507FC5/
7
64
14
I'd love to! We have a lot of room for improvement here!
10 months ago
0
3
0
reposted by
Quentin Gallouédec
Ben Burtenshaw
10 months ago
These tutorials provide a comprehensive but concise roadmap through TRL across the main fine-tuning and alignment classes. 🤔 Let me know if you would like a dedicated course on TRL basics.
1
5
2
reposted by
Quentin Gallouédec
Thomas Wolf
10 months ago
It's Sunday morning, so I'm taking a minute for a nerdy thread (on math, tokenizers, and LLMs) about the work of our intern Garreth. By adding a few lines of code to the base Llama 3 tokenizer, he got a free boost in arithmetic performance 😮 [thread]
5
272
39
How can you avoid the temptation to use a subprocess for sub-commands? This blog post from
@muellerzr.bsky.social
saved my day.
muellerzr.github.io/til/argparse...
Zach Mueller - Calling argparse without subprocess
How to use argparse without the CLI
https://muellerzr.github.io/til/argparse.html
10 months ago
1
4
1
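The gist, as I understand it (illustrative parser, not from the linked post): instead of shelling out to a CLI with subprocess, hand the argument list to argparse in-process.

```python
import argparse

def main(argv=None):
    parser = argparse.ArgumentParser(prog="train")
    parser.add_argument("--epochs", type=int, default=1)
    parser.add_argument("--lr", type=float, default=1e-4)
    args = parser.parse_args(argv)  # argv=None falls back to sys.argv[1:]
    print(f"training for {args.epochs} epochs at lr={args.lr}")

# Instead of subprocess.run(["python", "train.py", "--epochs", "3"]),
# call the entry point directly with an explicit argument list:
main(["--epochs", "3", "--lr", "3e-4"])
```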
Finetune SmolLM2 with TRL!
10 months ago
0
12
1
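A minimal sketch of what such a finetune can look like with TRL's SFTTrainer (the dataset and output directory are illustrative, not from the original notebook):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Any conversational/instruction dataset works; this one is illustrative.
dataset = load_dataset("trl-lib/Capybara", split="train")

trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M",
    args=SFTConfig(output_dir="smollm2-sft"),
    train_dataset=dataset,
)
trainer.train()
```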
reposted by
Quentin Gallouédec
jsulz
11 months ago
When XetHub joined Hugging Face, we brainstormed how to share our tech with the community. The magic? Versioning chunks, not files, giving rise to: 🧠 Smarter storage ⏩ Faster uploads 🚀 Efficient downloads Curious? Read the blog and let us know how it could help your workflows!
From Files to Chunks: Improving HF Storage Efficiency
We’re on a journey to advance and democratize artificial intelligence through open source and open science.
https://huggingface.co/blog/from-files-to-chunks
1
33
17