Ben Lipkin
@benlipkin.bsky.social
📤 1023
📥 277
📝 25
phd @ mit, research @ genlm, intern @ apple
https://benlipkin.github.io/
pinned post!
Many LM applications may be formulated as text generation conditional on some (Boolean) constraint.
Generate a…
- Python program that passes a test suite.
- PDDL plan that satisfies a goal.
- CoT trajectory that yields a positive reward.
The list goes on…
How can we efficiently satisfy these? 🧵👇
4 months ago
1
11
6
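To make the pinned framing concrete, here is a minimal sketch of the naive baseline it improves on: exact conditioning by rejection sampling. The propose and constraint callables are hypothetical stand-ins for an LM sampler and a Boolean checker (e.g., a test suite); this is not the thread's own method, just the reference point that motivates it.

```python
import random
from typing import Callable, Optional

def rejection_sample(
    propose: Callable[[], str],          # hypothetical stand-in for an LM sampler
    constraint: Callable[[str], bool],   # Boolean predicate, e.g. "passes the test suite"
    max_tries: int = 1000,
) -> Optional[str]:
    """Draw samples until one satisfies the constraint.

    This samples exactly from p(text | constraint(text) is True), but the
    expected number of draws is 1 / p(constraint), which blows up when the
    constraint is rarely satisfied by chance -- hence the need for smarter
    inference.
    """
    for _ in range(max_tries):
        candidate = propose()
        if constraint(candidate):
            return candidate
    return None

# Toy usage: "generate" digit strings, condition on divisibility by 7.
if __name__ == "__main__":
    propose = lambda: str(random.randint(0, 999))
    constraint = lambda s: int(s) % 7 == 0
    print(rejection_sample(propose, constraint))
```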
reposted by
Ben Lipkin
Greta Tuckute
about 1 month ago
Humans largely learn language through speech. In contrast, most LLMs learn from pre-tokenized text. In our
#Interspeech2025
paper, we introduce AuriStream: a simple, causal model that learns phoneme, word & semantic information from speech. Poster P6, tomorrow (Aug 19) at 1:30 pm, Foyer 2.2!
1
51
11
reposted by
Ben Lipkin
Hope Kean
about 2 months ago
Is the Language of Thought == Language? A Thread 🧵 New Preprint (link:
tinyurl.com/LangLOT
) with
@alexanderfung.bsky.social
, Paris Jaggers, Jason Chen, Josh Rule, Yael Benn,
@joshtenenbaum.bsky.social
, ‪@spiantado.bsky.social‬, Rosemary Varley,
@evfedorenko.bsky.social
1/8
Evidence from Formal Logical Reasoning Reveals that the Language of Thought is not Natural Language
Humans are endowed with a powerful capacity for both inductive and deductive logical thought: we easily form generalizations based on a few examples and draw conclusions from known premises. Humans al...
https://tinyurl.com/LangLOT
5
70
33
reposted by
Ben Lipkin
Thomas Hikaru Clark
about 2 months ago
1/7 If you're at CogSci 2025, I'd love to see you at my talk on Friday 1pm PDT in Nob Hill A! I'll be talking about our work towards an implemented computational model of noisy-channel comprehension (with
@postylem.bsky.social
, Ted Gibson, and
@rplevy.bsky.social
).
1
18
7
reposted by
Ben Lipkin
João Loula
5 months ago
#ICLR2025
Oral
How can we control LMs using diverse signals such as static analyses, test cases, and simulations? In our paper “Syntactic and Semantic Control of Large Language Models via Sequential Monte Carlo” (w/
@benlipkin.bsky.social
,
@alexlew.bsky.social
,
@xtimv.bsky.social
) we:
1
7
6
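As a rough illustration of the idea in the title (not the paper's implementation; the step and potential callables below are hypothetical stand-ins for an LM's next-token distribution and a cheap signal such as an incremental parser or static check), sequential Monte Carlo keeps a population of partial sequences, reweights them as they grow, and resamples so compute concentrates on prefixes likely to satisfy the constraint:

```python
import random
from typing import Callable, List, Tuple

def smc_generate(
    step: Callable[[str], List[Tuple[str, float]]],  # hypothetical: prefix -> [(token, prob), ...]
    potential: Callable[[str], float],               # hypothetical: cheap score in (0, 1], e.g. a parser check
    n_particles: int = 16,
    max_steps: int = 32,
) -> str:
    """Toy sequential Monte Carlo over token sequences.

    Particles are partial strings; each is extended with a token sampled from
    the LM, reweighted by the incremental change in the potential, and the
    population is resampled so that promising prefixes get more compute.
    """
    particles = [""] * n_particles
    weights = [1.0] * n_particles
    for _ in range(max_steps):
        extended, new_weights = [], []
        for prefix, w in zip(particles, weights):
            options = step(prefix)
            token, _ = random.choices(options, weights=[p for _, p in options])[0]
            nxt = prefix + token
            # incremental weight: how much more (or less) promising the longer prefix looks
            new_weights.append(w * potential(nxt) / max(potential(prefix), 1e-9))
            extended.append(nxt)
        total = sum(new_weights) or 1e-9
        idx = random.choices(range(n_particles), weights=[w / total for w in new_weights], k=n_particles)
        particles = [extended[i] for i in idx]
        weights = [1.0] * n_particles  # weights reset after resampling
    return max(particles, key=potential)
```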
reposted by
Ben Lipkin
Kyle Mahowald
5 months ago
I might be able to hire a postdoc for this fall in computational linguistics at UT Austin. Topics in the general LLM + cognitive space (particularly reasoning, chain of thought, LLMs + code) and LLM + linguistic space. If this could be of interest, feel free to get in touch!
0
60
32
reposted by
Ben Lipkin
JHU Computer Science
5 months ago
Jason Eisner & Li Du’s “Syntactic and semantic control of large language models via sequential Monte Carlo” with
@joaoloula.bsky.social
,
@benlipkin.bsky.social
,
@yahyaemara.bsky.social
,
@alexlew.bsky.social
,
@xtimv.bsky.social
, & more presents an architecture for controlled LM generation: (11/12)
Syntactic and Semantic Control of Large Language Models via...
A wide range of LM applications require generating text that conforms to syntactic or semantic constraints. Imposing such constraints can be naturally framed as _probabilistic conditioning_, but...
https://openreview.net/forum?id=xoXn62FzD0
1
5
2
reposted by
Ben Lipkin
Colton Casto
5 months ago
New paper! 🧠 **The cerebellar components of the human language network** with:
@hsmall.bsky.social
@moshepoliak.bsky.social
@gretatuckute.bsky.social
@benlipkin.bsky.social
@awolna.bsky.social
@aniladmello.bsky.social
and
@evfedorenko.bsky.social
www.biorxiv.org/content/10.1...
1/n 🧵
The cerebellar components of the human language network
The cerebellum's capacity for neural computation is arguably unmatched. Yet despite evidence of cerebellar contributions to cognition, including language, its precise role remains debated. Here, we sy...
https://www.biorxiv.org/content/10.1101/2025.04.14.645351v1
2
49
24
New preprint on controlled generation from LMs! I’ll be presenting at NENLP tomorrow, 12:50-2:00pm. Longer thread coming soon :)
6 months ago
1
19
9
reposted by
Ben Lipkin
Greta Tuckute
9 months ago
I defended my PhD at MIT Brain&Cog last week--so much gratitude to my advisor
@evfedorenko.bsky.social
, as well as my committee
@nancykanwisher.bsky.social
,
@joshhmcdermott.bsky.social
and Yoon Kim. Thank you to all my brilliant collaborators and the MIT community. I have loved this journey so much.
5
99
6
reposted by
Ben Lipkin
Alex Lew
10 months ago
If you're interested in a PhD at the intersection of machine learning and programming languages, consider applying to Yale CS! We're exploring new approaches to building software that draws inferences and makes predictions. See
alexlew.net
for details & apply at
gsas.yale.edu/admissions/
by Dec. 15
1
71
24
Lots of folks have been talking about scaling LLM inference over this last year. Internally, I’ve been developing and using a library that makes this extremely easy, and I decided to open-source it. Meet the decoding library:
github.com/benlipkin/de...
1/7
GitHub - benlipkin/decoding: Composable inference algorithms with LLMs and programmable logic
Composable inference algorithms with LLMs and programmable logic - benlipkin/decoding
https://github.com/benlipkin/decoding
10 months ago
1
26
5
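The repo is the place to look for the actual API; as a generic sketch of the pattern the description points at ("composable inference algorithms with LLMs and programmable logic"), the simplest composition is best-of-N: sample candidates from an LM and rank them with an arbitrary scoring function. The sample and score callables here are hypothetical placeholders, not names from the library.

```python
from typing import Callable, List

def best_of_n(
    sample: Callable[[], str],      # hypothetical LM sampler
    score: Callable[[str], float],  # programmable logic: any scoring function
    n: int = 8,
) -> str:
    """Sample n candidates and keep the highest-scoring one.

    Richer strategies (beam variants, tree search, SMC) compose the same two
    ingredients -- a sampler and a scorer -- in more elaborate ways.
    """
    candidates: List[str] = [sample() for _ in range(n)]
    return max(candidates, key=score)
```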
reposted by
Ben Lipkin
Robert Hawkins
about 2 years ago
✨ Now out in Psychological Review! ✨ We present a new account of relevance that weighs both *epistemic* and *decision-theoretic* utility: statements are relevant if they improve the accuracy of the listener’s beliefs *and* their future decision-making.
www.tedsumers.info/_files/ugd/2...
3
43
15
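One way to read that summary as a formula (an illustrative reading, not the paper's own equation or notation): the value of an utterance u to a listener L mixes a belief-accuracy term and a decision-quality term.

```latex
U(u) \;=\; \beta_{\mathrm{epi}}\,\Big[\mathrm{Acc}\big(b_L(\cdot \mid u)\big) - \mathrm{Acc}\big(b_L\big)\Big]
\;+\; \beta_{\mathrm{dec}}\,\Big[\mathbb{E}_{a \sim \pi_L(\cdot \mid u)}\,R(a) - \mathbb{E}_{a \sim \pi_L}\,R(a)\Big]
```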
reposted by
Ben Lipkin
Anna (Anya) Ivanova
almost 2 years ago
First post and big news - I am starting as an Assistant Professor in Psychology at Georgia Tech in Jan 2024!
www.language-intelligence-thought.net
LIT Lab
Our goal is to understand the relationship between language and human thought. How does the language network in the brain interact with other systems to interpret meaning in the world? Can models...
https://www.language-intelligence-thought.net/
4
67
14
reposted by
Ben Lipkin
Jennifer Hu
almost 2 years ago
To researchers doing LLM evaluation: prompting is *not a substitute* for direct probability measurements. Check out the camera-ready version of our work, to appear at EMNLP 2023! (w/
@rplevy.bsky.social
) Paper:
arxiv.org/abs/2305.13264
Original thread:
twitter.com/_jennhu/stat...
17
293
73
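For readers wondering what "direct probability measurements" look like in practice, here is a minimal sketch (the model choice and example sentences are illustrative, not from the paper): read token log-probabilities off a causal LM's logits rather than prompting the model to judge the sentence.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is just an illustrative model choice
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sequence_logprob(text: str) -> float:
    """Sum of log p(token_t | tokens_<t) under the model, read directly from
    the logits rather than asked for via a metalinguistic prompt."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits
    # logits at position t predict the token at position t + 1
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    target = ids[:, 1:]
    token_lp = log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    return token_lp.sum().item()

# e.g., compare a minimal pair directly by probability:
print(sequence_logprob("The keys to the cabinet are on the table."))
print(sequence_logprob("The keys to the cabinet is on the table."))
```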
reposted by
Ben Lipkin
Cory Shain
almost 2 years ago
👋 Hi
#bsky
! Just wanted to let everyone know I'll be arriving at
#Stanford
in fall of 2024, and I’m looking for awesome people to help me figure out language. I’m new here and my network is small, so RTs would be great! 🧵👇
1
51
42
reposted by
Ben Lipkin
Cognitive Science Society
almost 2 years ago
🚀 We've officially landed on Bluesky! Excited to join this vibrant new platform and (re)connect with all of you! 🗣️💬 Help us spread the word by retweeting and following.
4
151
100