First bsky post!
Super excited about the new MatQuant work! It lets you train a single quantized model in which the 2-bit weights are nested inside the 4-bit ones, and so on. You can then "read off" accurate models that mix precisions, e.g. 2-bit quantization in the first layer, 4-bit in the second, etc. [1/n]
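Roughly, the nesting can be sketched as slicing the most significant bits of the integer weight codes — a minimal illustration of the nested representation, not the paper's training recipe:

```python
import numpy as np

def slice_msbs(q8, bits):
    # Keep the top `bits` most significant bits of an unsigned 8-bit code,
    # yielding a lower-precision code nested inside the 8-bit one.
    return q8 >> (8 - bits)

# Toy example: weights already quantized to unsigned 8-bit codes.
rng = np.random.default_rng(0)
q8 = rng.integers(0, 256, size=6, dtype=np.uint8)

q4 = slice_msbs(q8, 4)  # nested 4-bit model: codes in [0, 15]
q2 = slice_msbs(q8, 2)  # nested 2-bit model: codes in [0, 3]

# The nesting property: the 2-bit codes are the top bits of the 4-bit codes,
# which are the top bits of the 8-bit codes.
assert (q2 == (q4 >> 2)).all()
```

So one set of trained 8-bit weights carries the 4-bit and 2-bit models inside it for free, and you can pick a different precision per layer at serving time.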