New paper from our team at the Lamarr Institute and Fraunhofer IAIS. We investigate a core tradeoff in large language models: knowledge manipulation vs. knowledge capacity. We propose a novel looped transformer model that combines adaptive loops and memory banks to address both.
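For readers curious what "adaptive loops plus a memory bank" can look like in practice, here is a minimal, hypothetical sketch (not the paper's actual architecture): a single weight-tied block is applied repeatedly, the hidden state reads from an external memory bank via soft attention, and an ACT-style halting score decides how many loop iterations to spend. All names, sizes, and the halting rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8          # hidden size (illustrative)
mem_slots = 4  # number of memory-bank slots (illustrative)

# Shared block weights, reused at every loop iteration (weight tying).
W = rng.normal(scale=0.1, size=(d, d))
# External memory bank the hidden state can read from via attention.
memory = rng.normal(size=(mem_slots, d))
# Linear head producing a per-step halting score (ACT-style, an assumption).
w_halt = rng.normal(scale=0.1, size=d)

def read_memory(h):
    """Soft attention read over the memory bank."""
    scores = memory @ h / np.sqrt(d)
    attn = np.exp(scores - scores.max())
    attn /= attn.sum()
    return attn @ memory  # convex combination of memory slots

def looped_forward(h, max_loops=16, threshold=0.99):
    """Apply the same block repeatedly; stop once the cumulative
    halting probability crosses the threshold (adaptive loop count)."""
    cum_halt = 0.0
    steps = 0
    for _ in range(max_loops):
        h = np.tanh(W @ (h + read_memory(h)))        # shared block + memory read
        cum_halt += 1.0 / (1.0 + np.exp(-w_halt @ h))  # sigmoid halting score
        steps += 1
        if cum_halt >= threshold:
            break
    return h, steps

h, steps = looped_forward(rng.normal(size=d))
print(h.shape, steps)
```

The intuition the sketch captures: looping a shared block adds depth (manipulation ability) without adding parameters, while the memory bank stores retrievable facts (capacity); the adaptive halt lets easy inputs exit early.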