Felix Petersen
@petersen.ai
551 followers · 25 following · 18 posts
Machine learning researcher @Stanford.
https://petersen.ai/
Pinned post
Excited to share our
#NeurIPS
2024 Oral, Convolutional Differentiable Logic Gate Networks, leading to a range of inference efficiency records, including inference in only 4 nanoseconds. We reduce model sizes by factors of 29x-61x over the SOTA. Paper:
arxiv.org/abs/2411.04732
12 months ago
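Not from the paper itself, but a minimal sketch of the core idea behind differentiable logic gate networks as presented in the authors' line of work: each "neuron" is a softmax-weighted mixture over real-valued relaxations of all 16 two-input logic gates, which is differentiable during training and collapses to a single hardware gate (the argmax) at inference. Function names here are illustrative, not the paper's API.

```python
import numpy as np

def softmax(w):
    e = np.exp(w - w.max())
    return e / e.sum()

# Real-valued relaxations of all 16 two-input logic gates,
# operating on inputs a, b in [0, 1] (probabilistic interpretation).
GATE_OPS = [
    lambda a, b: np.zeros_like(a),           # FALSE
    lambda a, b: a * b,                      # a AND b
    lambda a, b: a - a * b,                  # a AND NOT b
    lambda a, b: a,                          # a
    lambda a, b: b - a * b,                  # NOT a AND b
    lambda a, b: b,                          # b
    lambda a, b: a + b - 2 * a * b,          # a XOR b
    lambda a, b: a + b - a * b,              # a OR b
    lambda a, b: 1 - (a + b - a * b),        # NOR
    lambda a, b: 1 - (a + b - 2 * a * b),    # XNOR
    lambda a, b: 1 - b,                      # NOT b
    lambda a, b: a + (1 - b) - a * (1 - b),  # a OR NOT b
    lambda a, b: 1 - a,                      # NOT a
    lambda a, b: (1 - a) + b - (1 - a) * b,  # NOT a OR b
    lambda a, b: 1 - a * b,                  # NAND
    lambda a, b: np.ones_like(a),            # TRUE
]

def soft_gate(a, b, w):
    """One differentiable gate: a softmax(w)-weighted mixture over all
    16 relaxed ops. During training, w (16 logits) is learned; at
    inference, the argmax op becomes a single physical logic gate."""
    p = softmax(w)
    return sum(p_i * op(a, b) for p_i, op in zip(p, GATE_OPS))
```

With logits strongly favoring the AND op, `soft_gate(a, b, w)` approaches `a * b`; the mixture is what makes gradient-based training of the gate choice possible.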
I'm excited to share that our work on Convolutional Differentiable Logic Gate Networks was covered by MIT Technology Review.
www.technologyreview.com/2024/12/20/1...
@hildekuehne.bsky.social
The next generation of neural networks could live in hardware
Researchers have devised a way to make computer vision systems more efficient by building networks out of computer chips' logic gates.
https://www.technologyreview.com/2024/12/20/1109183/the-next-generation-of-neural-networks-could-live-in-hardware/
11 months ago
reposted by
Felix Petersen
Alfredo Canziani
11 months ago
Convolutional Differentiable Logic Gate Networks @FHKPetersen
reposted by
Felix Petersen
Alfredo Canziani
11 months ago
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms @FHKPetersen
Join us at our poster session today, 11am-2pm, at East Exhibit Hall A-C *#1502*.
11 months ago
reposted by
Felix Petersen
Hendrik Strobelt
11 months ago
Most innovative paper at
#NeurIPS
imho. Can we create a network that becomes the physical chip instead of running on a chip? Inference speedups and energy preservation are through the roof! Oral on Friday at 10am PT
neurips.cc/virtual/2024...
NeurIPS Poster: Convolutional Differentiable Logic Gate Networks (NeurIPS 2024)
https://neurips.cc/virtual/2024/poster/96650
Join us on Wednesday, 11am-2pm for our poster session on Newton Losses in *West Ballroom A-D #6207*.
neurips.cc/virtual/2024...
11 months ago
Have you ever wondered how training dynamics differ between LLMs and vision models? We explore this and close the gap between VMs and LLMs in our
#NeurIPS2024
paper "TrAct: Making First-layer Pre-Activations Trainable". Paper link:
arxiv.org/abs/2410.23970
Video link:
youtu.be/ZjTAjjxbkRY
🧵
12 months ago
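A small illustration of one first-layer training dynamic the TrAct framing draws attention to (my sketch, not code from the paper): for a first layer z = Wx, the chain rule gives dL/dW = (dL/dz) xᵀ, so the first layer's gradient scales directly with the raw input magnitude, while deeper layers work on activations. The variable names below are purely illustrative.

```python
import numpy as np

# For a first layer z = W x, the chain rule gives dL/dW = (dL/dz) x^T.
# Scaling the input by c therefore scales the first layer's gradient
# (its effective learning rate) by c, unlike deeper layers.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))      # first-layer weights (unused in the grad)
x = rng.normal(size=8)           # raw input, e.g. pixel values
dz = rng.normal(size=4)          # upstream gradient at the pre-activations

grad_W = np.outer(dz, x)             # dL/dW for the original input
grad_W_scaled = np.outer(dz, 10 * x) # same input, 10x the magnitude

ratio = np.linalg.norm(grad_W_scaled) / np.linalg.norm(grad_W)
# ratio is exactly 10: input scale directly modulates the first layer's step size
```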
I'm excited to share our NeurIPS 2024 paper "Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms". Paper link:
arxiv.org/abs/2410.19055
Newton Losses: Using Curvature Information for Learning with Differentiable Algorithms
When training neural networks with custom objectives, such as ranking losses and shortest-path losses, a common problem is that they are, per se, non-differentiable. A popular approach is to continuou...
https://arxiv.org/abs/2410.19055
12 months ago
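A hedged sketch of the underlying idea, as I read the abstract: take a Newton step on the (hard-to-optimize) loss in the network's *output space* to obtain a curvature-informed target, then train the network with a plain MSE toward that target. The function names and the Tikhonov damping `lam` are my assumptions, not the paper's API.

```python
import numpy as np

def newton_loss_target(y, grad_fn, hess_fn, lam=1e-3):
    """Newton step in output space: y* = y - (H + lam*I)^-1 g, where g
    and H are the gradient and Hessian of the original loss at the
    current predictions y. The network is then trained with an MSE
    toward y*, injecting curvature information from the original loss.
    lam is Tikhonov damping for ill-conditioned Hessians (an assumption)."""
    g = grad_fn(y)
    H = hess_fn(y)
    y_star = y - np.linalg.solve(H + lam * np.eye(y.size), g)
    return y_star

def surrogate_mse(y, y_star):
    """The easy-to-optimize surrogate the network actually minimizes."""
    return 0.5 * np.sum((y - y_star) ** 2)
```

For a quadratic loss l(y) = 1/2 (y - t)ᵀ A (y - t), the Newton target recovers the minimizer t (up to damping), so the surrogate MSE pulls the network straight toward the optimum instead of following a poorly scaled gradient.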