Yue Chen
@fanegg.bsky.social
📤 81
📥 162
📝 11
PhD Student at Westlake University. 3D/4D Reconstruction, Virtual Humans. fanegg.github.io
#Human3R: Everyone Everywhere All at Once
Just input an RGB video; we reconstruct 4D humans and the scene online, in 𝗢𝗻𝗲 model and 𝗢𝗻𝗲 stage. Training this versatile model is easier than you think – it just takes 𝗢𝗻𝗲 day using 𝗢𝗻𝗲 GPU!
🔗Page: fanegg.github.io/Human3R/
about 2 months ago
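To unpack what "𝗢𝗻𝗲 model and 𝗢𝗻𝗲 stage", applied online, means in practice, here is a minimal PyTorch sketch of that inference pattern: a single network reads the video one frame at a time, carries a recurrent state forward, and emits scene geometry and human parameters in the same forward pass. Every name here (OnlineReconstructor, step, the SMPL-style 24×6 pose output) is an illustrative assumption, not the actual Human3R code; see the project page for the real model.

```python
# Minimal sketch of an online, single-pass reconstructor (NOT the Human3R code;
# all module names, sizes, and output heads are illustrative assumptions).
import torch
import torch.nn as nn

class OnlineReconstructor(nn.Module):
    """One network that, per incoming frame, updates a running memory and
    emits scene points and human pose parameters in a single forward pass."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.encoder = nn.Conv2d(3, feat_dim, kernel_size=16, stride=16)  # patchify the RGB frame
        self.memory = nn.GRUCell(feat_dim, feat_dim)                      # online temporal state
        self.scene_head = nn.Linear(feat_dim, 3)                          # toy scene output (a 3D point)
        self.human_head = nn.Linear(feat_dim, 24 * 6)                     # toy SMPL-style pose (assumption)

    def step(self, frame, state):
        feats = self.encoder(frame).flatten(2).mean(-1)  # (B, feat_dim) pooled frame token
        state = self.memory(feats, state)                # carry information across frames
        return self.scene_head(state), self.human_head(state), state

model = OnlineReconstructor()
state = torch.zeros(1, 256)
video = torch.rand(8, 1, 3, 224, 224)                    # dummy 8-frame RGB stream
for frame in video:                                      # streaming: one pass per frame, no second stage
    scene_pts, human_pose, state = model.step(frame, state)
print(scene_pts.shape, human_pose.shape)                 # torch.Size([1, 3]) torch.Size([1, 144])
```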
Again, training-free is all you need.
about 2 months ago
Reposted by Yue Chen · Haiwen Huang · 7 months ago
Excited to introduce LoftUp! A stronger-than-ever yet lightweight feature upsampler for vision encoders that can boost performance on dense prediction tasks by 20%–100%! Easy to plug into models like DINOv2, CLIP, SigLIP — simple design, big gains. Try it out!
github.com/andrehuang/l...
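To make "easy to plug in" concrete, here is a hedged sketch of the plug-in pattern: take coarse patch features from a frozen DINOv2 backbone (loaded through torch.hub, so the weights must be downloaded once) and upsample them to pixel resolution before a dense prediction head. The bilinear interpolation is only a placeholder for the learned LoftUp upsampler from the linked repository, whose actual API may differ.

```python
# Sketch of upsampling frozen encoder features for dense prediction.
# The bilinear step is a stand-in for LoftUp's learned upsampler (assumption).
import torch
import torch.nn.functional as F

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14").eval()

img = torch.rand(1, 3, 224, 224)                          # dummy image, 14-pixel patches -> 16x16 grid
with torch.no_grad():
    tokens = backbone.forward_features(img)["x_norm_patchtokens"]  # (1, 256, 384) coarse patch features

h = w = 224 // 14                                          # patch-grid size for ViT-S/14
coarse = tokens.transpose(1, 2).reshape(1, 384, h, w)      # (1, 384, 16, 16) feature map

# Placeholder upsampling to full resolution; LoftUp replaces this with a learned,
# input-conditioned upsampler, which is where the reported dense-task gains come from.
dense = F.interpolate(coarse, size=(224, 224), mode="bilinear", align_corners=False)
print(dense.shape)                                         # (1, 384, 224, 224), ready for a dense head
```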
Reposted by Yue Chen · Andreas Geiger · 8 months ago
I was really surprised when I saw this. DUSt3R has learned to segment objects very well without any supervision. This knowledge can be extracted post-hoc, enabling accurate 4D reconstruction instantly.
Just "dissect" the cross-attention mechanism of
#DUSt3R
, making 4D reconstruction easier.
8 months ago
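A rough illustration of what "dissecting" cross-attention can look like, written with a generic nn.MultiheadAttention layer rather than DUSt3R itself: expose the attention weights between tokens of two views and pool them into a per-patch map with no training at all. The pooling rule used here (peak attention as a matchability cue, low peaks flagged as candidate dynamic patches) is an assumption for illustration; the actual DUSt3R/Easi3R attention aggregation differs.

```python
# Toy "dissection" of two-view cross-attention (NOT DUSt3R): read the attention
# matrix and pool it into a per-patch map, training-free.
import torch
import torch.nn as nn

dim, n_patches = 64, 196                       # toy sizes, e.g. a 14x14 patch grid
cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

feats_view1 = torch.rand(1, n_patches, dim)    # stand-ins for decoder tokens of view 1
feats_view2 = torch.rand(1, n_patches, dim)    # and of view 2

# need_weights=True returns the (query x key) attention matrix, averaged over heads.
_, attn = cross_attn(feats_view1, feats_view2, feats_view2, need_weights=True)  # (1, 196, 196)

# Heuristic pooling (assumption): a patch whose attention over the other view is
# diffuse (low peak) is hard to match across views, a plausible cue for motion.
peak = attn.amax(dim=-1).reshape(1, 14, 14)    # per-patch matchability score
mask = (peak < peak.mean()).float()            # crude training-free threshold -> candidate dynamic patches
print(mask.shape)                              # torch.Size([1, 14, 14])
```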
#Easi3R is a simple training-free approach adapting DUSt3R for dynamic scenes.
8 months ago
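Continuing the toy sketch above, one training-free way to use such a mask is to hide the flagged patches from a second attention pass, so the two-view matching is driven by static content only. This only conveys the spirit of adapting a frozen two-view model to dynamic scenes; it uses PyTorch's standard key_padding_mask on a toy layer, not Easi3R's actual mechanism.

```python
# Training-free masking of "dynamic" tokens in a second attention pass
# (toy sketch, not the Easi3R implementation).
import torch
import torch.nn as nn

dim, n_patches = 64, 196
cross_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
feats_view1 = torch.rand(1, n_patches, dim)
feats_view2 = torch.rand(1, n_patches, dim)

# Suppose 30 patches of view 2 were flagged as dynamic by the attention analysis.
dynamic = torch.zeros(1, n_patches, dtype=torch.bool)
dynamic[:, :30] = True

# key_padding_mask=True means "do not attend to this key": dynamic patches are
# excluded from the matching without retraining anything.
out, _ = cross_attn(feats_view1, feats_view2, feats_view2, key_padding_mask=dynamic)
print(out.shape)                               # (1, 196, 64), informed by static patches only
```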
How much 3D do visual foundation models (VFMs) know? Previous work requires 3D data for probing → expensive to collect!
#Feat2GS @cvprconference.bsky.social 2025 – our idea is to read out 3D Gaussians from VFM features, thus probing 3D with novel view synthesis.
🔗Page: fanegg.github.io/Feat2GS
8 months ago
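The probing recipe can be sketched in a few lines: keep the VFM frozen, train only a light readout that maps each patch feature to 3D Gaussian parameters, and score the features by how well the resulting Gaussians render held-out views. The parameter layout (position, scale, quaternion, opacity, color) follows the common 3DGS convention, and the linear readout plus feature dimension below are assumptions for illustration, not the actual Feat2GS architecture.

```python
# Toy readout from frozen VFM patch features to per-patch 3D Gaussian parameters
# (illustrative assumption, not the Feat2GS implementation).
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim = 384                        # e.g. DINOv2 ViT-S/14 patch features (assumption)
n_gauss_params = 3 + 3 + 4 + 1 + 3    # position, scale, rotation quaternion, opacity, RGB

readout = nn.Linear(feat_dim, n_gauss_params)   # the only trainable part; the VFM stays frozen

patch_feats = torch.rand(1, 256, feat_dim)      # frozen VFM features for a 16x16 patch grid
g = readout(patch_feats)                        # (1, 256, 14): one Gaussian per patch

xyz     = g[..., 0:3]                           # 3D means
scale   = g[..., 3:6].exp()                     # positive scales
rot     = F.normalize(g[..., 6:10], dim=-1)     # unit quaternion
opacity = g[..., 10:11].sigmoid()
color   = g[..., 11:14].sigmoid()

# These Gaussians would then be splatted to a held-out camera; novel-view quality
# serves as the probe of how much 3D the frozen features already encode.
print(xyz.shape, scale.shape, rot.shape, opacity.shape, color.shape)
```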