Nico Bohlinger
@nicobohlinger.bsky.social
Morphology-aware Robotics, RL Research | PhD student at
@ias-tudarmstadt.bsky.social
I'm presenting four different works at IROS 2025 this week in Hangzhou 🤖
4 months ago
⚡️ Can one unified policy control 10 million different robots and zero-shot transfer to completely unseen robots, even humanoids? Yes! 🔗 Check out our paper:
arxiv.org/abs/2509.02815
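For a sense of how one set of weights can drive bodies with different joint counts, here is a minimal sketch of a common recipe: treat each joint as a token and let a shared transformer mix them. This is illustrative only, not necessarily the paper's architecture; all names and sizes are my assumptions.

```python
import torch
import torch.nn as nn

class JointTokenPolicy(nn.Module):
    """One policy for any number of joints: each joint is a token."""
    def __init__(self, joint_obs_dim: int, d_model: int = 128):
        super().__init__()
        self.embed = nn.Linear(joint_obs_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.action_head = nn.Linear(d_model, 1)  # one action per joint token

    def forward(self, joint_obs: torch.Tensor) -> torch.Tensor:
        # joint_obs: (batch, num_joints, joint_obs_dim); num_joints may vary
        tokens = self.encoder(self.embed(joint_obs))
        return self.action_head(tokens).squeeze(-1)  # (batch, num_joints)

policy = JointTokenPolicy(joint_obs_dim=8)
quadruped = torch.randn(1, 12, 8)  # 12 actuated joints
humanoid = torch.randn(1, 23, 8)   # 23 actuated joints, same weights
print(policy(quadruped).shape, policy(humanoid).shape)
```

Because the weights never depend on the number of tokens, the same network can in principle be evaluated on an embodiment it has never seen.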
4 months ago
🇰🇷 Conferences are about finally meeting your collaborators from all around the world! Check out our work on Embodiment Scaling Laws @CoRL2025 We investigate cross-embodiment learning as the next axis of scaling for truly generalist policies 📈 🔗 All details:
embodiment-scaling-laws.github.io
4 months ago
reposted by
Nico Bohlinger
Antonin Raffin
7 months ago
Need for Speed or: How I Learned to Stop Worrying About Sample Efficiency. Part II of my blog series "Getting SAC to Work on a Massive Parallel Simulator" is out! I've included everything I tried that didn't work (and why JAX PPO was different from PyTorch PPO).
araffin.github.io/post/tune-sa...
Getting SAC to Work on a Massive Parallel Simulator: Tuning for Speed (Part II) | Antonin Raffin | Homepage
This second post details how I tuned the Soft-Actor Critic (SAC) algorithm to learn as fast as PPO in the context of a massively parallel simulator (thousands of robots simulated in parallel).
https://araffin.github.io/post/tune-sac-isaac-sim/
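A quick back-of-the-envelope on why speed tuning matters here: with thousands of parallel robots, SAC's classic one-gradient-update-per-env-step ratio becomes unaffordable. The numbers below are illustrative assumptions, not values from the blog post.

```python
# Throughput arithmetic for off-policy RL on a massively parallel simulator.
num_envs = 4096          # robots stepped in parallel (assumed)
per_env_hz = 50          # control steps per env per second (assumed)
utd = 1 / 64             # gradient updates per env step, vs. the classic 1.0

transitions_per_sec = num_envs * per_env_hz   # 204800 new samples/s
updates_per_sec = transitions_per_sec * utd   # 3200 gradient steps/s
print(transitions_per_sec, round(updates_per_sec))
```

At this data rate the bottleneck shifts from sample efficiency to update cost, which is why the batch size and update-to-data ratio need retuning.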
Robot Randomization is fun!
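For readers new to the idea: "robot randomization" in sim-to-real RL usually means resampling physical properties every episode so the policy cannot overfit one body. A hedged sketch, with purely illustrative parameter names and ranges:

```python
import random

def randomize_robot() -> dict:
    """Resample physical properties for the next episode."""
    return {
        "link_mass_scale": random.uniform(0.8, 1.2),
        "joint_friction": random.uniform(0.0, 0.05),
        "motor_strength_scale": random.uniform(0.9, 1.1),
        "action_latency_steps": random.randint(0, 2),
    }

print(randomize_robot())  # new physics for the next rollout
```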
7 months ago
🚀 Check out our new work at
@rldmdublin2025.bsky.social
today at poster #16! We're showing how to make Explicit Policy-conditioned Value Functions V(θ) (originating from Faccio & Schmidhuber) work for more complex control tasks. The secret? Massive scaling!
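The core idea of V(θ) is a value function whose input is the policy's parameter vector itself, so the policy can be improved by gradient ascent directly in parameter space. A minimal sketch, not the paper's exact setup; sizes and the toy policy are my assumptions:

```python
import torch
import torch.nn as nn

# A tiny policy whose flattened parameters are the input to V(theta).
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
theta_dim = sum(p.numel() for p in policy.parameters())

# Policy-conditioned value function: parameters in, expected return out.
V = nn.Sequential(
    nn.Linear(theta_dim, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)

theta = torch.cat([p.detach().flatten() for p in policy.parameters()])
theta.requires_grad_(True)
value = V(theta).squeeze()   # predicted return of this particular policy
value.backward()             # dV/dtheta gives an ascent direction
theta_new = (theta + 1e-3 * theta.grad).detach()  # one step in parameter space
```

In practice V is trained by regressing (θ, return) pairs collected from many evaluated policies, which is where the massive scaling comes in.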
8 months ago
reposted by
Nico Bohlinger
Intelligent Autonomous Systems
8 months ago
IAS is at RLDM 2025! We have many exciting works to share (see 👇), so come to our posters and talk to us!
⚡️ Do you think training robot locomotion needs large-scale simulation? Think again! We train an omnidirectional locomotion policy directly on a real quadruped in just a few minutes 🚀 Top speeds of 0.85 m/s, two different control approaches, indoor and outdoor experiments, and more! 🤖🏃‍♂️
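A rough sense of why minutes on hardware is hard (my numbers, not the paper's): at a typical control rate, a few minutes of real time yields only tens of thousands of transitions, orders of magnitude less than a simulation run.

```python
# Back-of-the-envelope sample budget for on-robot training (assumed numbers).
control_hz = 50                          # assumed control frequency
minutes = 5                              # assumed wall-clock training budget
transitions = control_hz * 60 * minutes
print(transitions)                       # 15000 transitions total
```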
11 months ago
reposted by
Nico Bohlinger
Antonin Raffin
11 months ago
"As researchers, we tend to publish only positive results, but I think a lot of valuable insights are lost in our unpublished failures." New blog post: Getting SAC to Work on a Massive Parallel Simulator (part I)
araffin.github.io/post/sac-mas...
Getting SAC to Work on a Massive Parallel Simulator: An RL Journey With Off-Policy Algorithms (Part I) | Antonin Raffin | Homepage
This post details how I managed to get the Soft-Actor Critic (SAC) and other off-policy reinforcement learning algorithms to work on massively parallel simulators (think Isaac Sim with thousands of robots simulated in parallel).
https://araffin.github.io/post/sac-massive-sim/
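For context, the data-collection side of such a setup looks like the sketch below: one policy call drives all environments, and every step lands num_envs transitions at once. This is a toy illustration using gymnasium's vector API with Pendulum-v1 standing in for the robots, shrunk so it runs anywhere; it is not code from the blog post.

```python
import gymnasium as gym

num_envs = 8  # thousands in Isaac Sim; tiny here so the sketch runs anywhere
envs = gym.vector.SyncVectorEnv(
    [lambda: gym.make("Pendulum-v1") for _ in range(num_envs)])

obs, _ = envs.reset(seed=0)
for _ in range(10):
    actions = envs.action_space.sample()  # stand-in for the policy's output
    obs, rewards, terms, truncs, infos = envs.step(actions)
    # Every step yields num_envs transitions; an off-policy learner stores
    # them all and must rebalance gradient updates against incoming data.
envs.close()
```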