Marco Zocca
@ocramz.bsky.social
📤 292 · 📥 697 · 📝 543
ML, λ • language and the machines that understand it •
https://ocramz.github.io
pinned post!
CERN for frontier AI >>>
3 months ago
0
2
0
reposted by
Marco Zocca
Prasad Jallepalli, MD, PhD
about 14 hours ago
in retrospect, it was a mistake to teach algebra and geometry but not "statistics for everyone" in high school. We are drowning in lies, partly because we never learned to swim
4
83
21
"amateurs" I scoff as I return to vibe-tabbing a web app with no tests
about 12 hours ago
0
1
0
academia creates an infinite supply of methodological beef by withholding the value of "strong"
about 15 hours ago
0
2
0
reposted by
Marco Zocca
Kyle Cranmer
1 day ago
Is scale all you need? Or is there still a role for incorporating domain knowledge and inductive bias? While I was in Heidelberg, I took some time to write a short essay on this question called "The Bittersweet Lesson".
theoryandpractice.org/2025/09/The%...
#HLF25
2
29
14
(from Naming the Mind - K Danziger:
www.goodreads.com/book/show/81...
)
1 day ago
0
1
0
Wait, isn't review always double-blind? Why are author details a factor?
1 day ago
1
0
0
ICLR deadline vs solo parenting 2 kids and the whole family having a flu 🤬
4 days ago
1
6
0
reposted by
Marco Zocca
Adam Becker
4 days ago
The real existential threat isn't AI — it's billionaires. My latest, for
@theatlantic.com
:
www.theatlantic.com/books/archiv...
What AI’s Doomers and Utopians Have in Common
Those who predict that superintelligence will destroy humanity serve the same interests as those who believe that it will solve all of our problems.
https://www.theatlantic.com/books/archive/2025/09/what-ais-doomers-and-utopians-have-in-common/684270/
3
92
36
computer, connect the dots for once
5 days ago
0
1
0
do hashtags work on bsky? I'm not trying to farm engagement, just to find people who might know answers 🥶
5 days ago
1
0
0
say I take N multinomial samples x1..xN from two histograms p1 and p2, and have a function f : X -> Bool that evaluates some sample fitness. There could be many "right" samples. Is it sensible to compare p1 with p2 by the MRR of the results of f on the samples?
#informationretrieval
#ranking
#search
#generative
5 days ago
1
1
0
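A quick Monte Carlo sketch of the question in the post above, in Python. The histograms `p1`/`p2`, the fitness predicate `fit`, and all parameter values here are hypothetical, just to make the comparison concrete:

```python
import random

def mrr(histogram, f, n_samples=10, n_trials=1000, seed=0):
    """Estimate mean reciprocal rank: draw n_samples from the histogram,
    take the rank of the first draw satisfying f, and average 1/rank
    (0 when no draw qualifies) over n_trials."""
    rng = random.Random(seed)
    items, weights = zip(*histogram.items())
    total = 0.0
    for _ in range(n_trials):
        draws = rng.choices(items, weights=weights, k=n_samples)
        rank = next((i + 1 for i, x in enumerate(draws) if f(x)), None)
        total += 1.0 / rank if rank else 0.0
    return total / n_trials

# hypothetical histograms: p1 puts more mass on "fit" outcomes than p2
p1 = {"good": 0.6, "bad": 0.4}
p2 = {"good": 0.2, "bad": 0.8}
fit = lambda x: x == "good"
assert mrr(p1, fit) > mrr(p2, fit)
```

Under this setup a higher MRR mostly reflects how much probability mass a histogram puts on "fit" samples, so it is a sensible one-number comparison whenever rank position actually matters for the use case.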
more likely: mode-collapsing on old FIXME comments, filling PRs with Freud quotes
5 days ago
0
2
0
www.nature.com/articles/s41...
kudos to the DeepSeek team; more foundation model teams should put in the effort of submitting to a journal. The review team in this case was a mix of industry and academic experts. The review log is public too, so ✍️
DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning - Nature
A new artificial intelligence model, DeepSeek-R1, is introduced, demonstrating that the reasoning abilities of large language models can be incentivized through pure reinforcement learning, removing t...
https://www.nature.com/articles/s41586-025-09422-z
6 days ago
0
2
0
reposted by
Marco Zocca
ELLIOT Project
9 days ago
🗣️ Can generative AI speak more than English? Cees Snoek presents NeoBabel — a multilingual, open-source text-to-image model in 🇬🇧 🇨🇳 🇳🇱 🇫🇷 🇮🇳 🇮🇷, built for inclusivity and fairness. ▶️ Watch now:
youtu.be/qUadAcykUiI?...
NeoBabel: A Multilingual Open Foundation Model for Visual Generation, by Cees Snoek
YouTube video by ELLIOT Project
https://youtu.be/qUadAcykUiI?list=PL-1Qqni6NKJz40gDG9qAtLZpLXLtZRjTL
0
2
3
reposted by
Marco Zocca
Sam Adams
28 days ago
the most useful thing LLMs can do is force us to reconsider the way we use linguistic dexterity as a proxy for intelligence
15
420
95
reposted by
Marco Zocca
Grace Lindsay
10 days ago
"Evolution produces sufficient rather than optimal adaptations." - so true and so annoying
3
44
7
I got nerd-sniped by this one; some puzzles are definitely doable with logict in
#haskell
, without turning to an SMT/SAT solver, e.g.
lobste.rs/s/8ubfdd/man...
11 days ago
1
2
0
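logict itself is a Haskell library; for readers without a Haskell toolchain, the same lazy backtracking-search flavour can be mimicked in Python with generators. A toy constraint puzzle (the constraints here are made up, not the ones from the linked post):

```python
from itertools import product

def toy_constraint_search():
    """Enumerate all single-digit pairs (a, b) with a < b and a * b == 12,
    yielding solutions lazily, much as a logict computation would."""
    for a, b in product(range(1, 10), repeat=2):
        if a < b and a * b == 12:
            yield (a, b)

assert list(toy_constraint_search()) == [(2, 6), (3, 4)]
```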
Not only then; also once the risk scenarios are no longer entirely made up, as they are now.
11 days ago
0
0
0
reposted by
Marco Zocca
Leshem (Legend) Choshen @ICML @ACL
11 days ago
The most expensive part of training is the data, not the compute. Nikhil Kandpal & Colin Raffel calculate a really low bar for how much it would cost to produce LLM training data at $3.80/hour: still several orders of magnitude more than the compute. Luckily (?), companies don't pay for the data 🤖📈🧠
1
1
3
reposted by
Marco Zocca
Yoshitomo Matsubara
15 days ago
As an Action Editor of TMLR, I am looking for researchers who want to serve as reviewers for TMLR (NOTE: this is not for a specific paper in my batch). If you have read the guidelines and want to serve as a reviewer for TMLR, send me your Google Scholar and OpenReview profiles 🙋
jmlr.org/tmlr/reviewe...
Transactions on Machine Learning Research
https://jmlr.org/tmlr/reviewer-guide.html
0
4
2
one I saw recently, while landing in the fog
16 days ago
2
4
0
sqlite fails a transaction? skill issue (yours)
17 days ago
0
1
0
train stan account logging on
21 days ago
0
3
0
@nsaphra.bsky.social
can you confirm it's enough to ask GPT for an unbiased logic check? losing my gd mind at this GuardrailsAI example
26 days ago
1
0
0
I think mainly because they don't experience consequence, at all. They don't experience the irreversibility of time like we do.
29 days ago
0
1
0
4yo: (tries literally anything for the first time) SEE? I'm GOOD at this!
29 days ago
0
0
0
SLP 3rd edition when
30 days ago
0
2
0
tapping the sign
about 1 month ago
1
19
2
ICLR 1 month challenge let's go
about 1 month ago
0
1
0
youtu.be/ncHctrAluLM
jefferson airplane in a rubadub style
Prince Fatty & Shniece - Black Rabbit (with Dub)
YouTube video by Beats & Culture
https://youtu.be/ncHctrAluLM?si=NtFJbGicYfEpHK2y
about 1 month ago
0
0
0
my brain is a SEGA AI computer, it has a tape cassette drive and runs Prolog
about 1 month ago
1
4
0
7.05, family is done with breakfast 4yo is jumping on the couch wearing a tutu and skates the baby is covered in blueberry jam all shoes are covered in blueberry jam the baby is crying it is now 7.05:10
about 1 month ago
0
2
0
Hasukeru ga hanasemasu ka? ("Do you speak Haskell?")
about 1 month ago
0
0
0
the real question to ask is: how many Ġ's are there in blueberry?
about 2 months ago
0
4
0
"oh no I'm averaging attention head activations!" but then you recall fMRI has wayyyy lower resolution
about 2 months ago
0
0
0
reposted by
Marco Zocca
Hope Kean
about 2 months ago
Is the Language of Thought == Language? A Thread 🧵 New Preprint (link:
tinyurl.com/LangLOT
) with
@alexanderfung.bsky.social
, Paris Jaggers, Jason Chen, Josh Rule, Yael Benn,
@joshtenenbaum.bsky.social
, @spiantado.bsky.social, Rosemary Varley,
@evfedorenko.bsky.social
1/8
Evidence from Formal Logical Reasoning Reveals that the Language of Thought is not Natural Language
Humans are endowed with a powerful capacity for both inductive and deductive logical thought: we easily form generalizations based on a few examples and draw conclusions from known premises. Humans al...
https://tinyurl.com/LangLOT
5
70
33
reposted by
Marco Zocca
Julia M. Rohrer
about 2 months ago
Me at work: you shouldn’t condition on posttreatment variables as it biases inferences, see Montgomery et al. (2018). Me at home: look at how sweet the kids are with each other when they’re not fighting!
1
72
4
I tried my hand at a sparse linalg library once*, learned a lot about numerics and data structures, but for performance and stability it's best to use the pros *
github.com/ocramz/spars...
about 2 months ago
0
4
0
reposted by
Marco Zocca
Julie Carpenter, PhD
about 2 months ago
I'm quoted in this piece. In short: do not use ChatGPT or any generative AI as a “therapist.” It’s not sentient. It doesn’t care. It may even lead people toward harm. And nothing shared is protected by HIPAA or real privacy standards; it's fodder for the machine.
www.dazeddigital.com/life-culture...
AI is transforming how we communicate in relationships
‘All I wanted was to feel seen, heard and understood by him, but instead, he was sending me a robot’s questions’
https://www.dazeddigital.com/life-culture/article/68306/1/how-ai-chatgpt-communicate-love-dating-romance-robot-prompts
2
276
138
reposted by
Marco Zocca
Luca Bertuzzi
2 months ago
BREAKING: The EU Commission has released a mandatory template for AI developers to disclose training data. Unlike the Code of Practice, this is not optional. It could have global fallout, as rights holders abroad might use it to sue over copyright.
digital-strategy.ec.europa.eu/en/library/e...
Explanatory Notice and Template for the Public Summary of Training Content for general-purpose AI models
The Template annexed to this Explanatory Notice aims to provide a common minimal baseline for the information to be made publicly available in the Summary of Training Content for general-purpose AI mo...
https://digital-strategy.ec.europa.eu/en/library/explanatory-notice-and-template-public-summary-training-content-general-purpose-ai-models
6
470
224
reposted by
Marco Zocca
Karen K. Ho
2 months ago
We should have listened when the modems screamed at us.
121
11413
2642
2 months ago
0
3
0
the title riffs on an ACL'14 keynote on semantic parsing - good overview of the field up to then:
yoavartzi.com/sp14/slides/...
2 months ago
0
2
0
relevant:
aclanthology.org/P18-1198/
2 months ago
1
11
2
one of many 🤷‍♂️ it's also past time to understand the world as a multipolar place
2 months ago
1
0
0
you know the "flow" mental state? has anybody studied how parenting babies is the _opposite_ of that? the constant stream of long-tail novelty, risk of harm, etc.
2 months ago
1
2
0
any proposals for interpretability workshops at
#EurIPS
?
2 months ago
0
0
0
reposted by
Marco Zocca
Ondrej Zika
2 months ago
In my view, these results really caution against the use of LLMs for peer review. Not only are LLMs easily swayed by covert injections, they are also influenced by the formulation of the "peer review" prompts. More in the write-up here:
osf.io/preprints/ps...
OSF
https://osf.io/preprints/psyarxiv/zxuwf_v2
1
6
3
not sure this is the most efficient implementation, but I'm quite happy it's so compact. With a tree traversal and a left scan we get an annotated AST with a top-down static program analysis.
#haskell
#plt
#compilers
#staticanalysis
2 months ago
1
2
0
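As a language-agnostic illustration of the pattern in the post above (the mini-AST, the node kinds, and the scope analysis are all hypothetical, not the original Haskell code): a top-down traversal threads an accumulated environment through the tree, annotating each node with the names bound above it.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                        # "let" binds a name, "var" uses one
    name: str
    children: list = field(default_factory=list)
    scope: frozenset = frozenset()   # annotation filled in by the pass

def annotate(node, bound=frozenset()):
    """Top-down pass: record the names bound so far at each node, then
    extend the environment with this node's binding for its subtree."""
    node.scope = bound
    if node.kind == "let":
        bound = bound | {node.name}
    for child in node.children:
        annotate(child, bound)
    return node

# let x in (x, let y in y)
tree = annotate(Node("let", "x",
                     [Node("var", "x"),
                      Node("let", "y", [Node("var", "y")])]))
assert tree.children[0].scope == frozenset({"x"})
assert tree.children[1].children[0].scope == frozenset({"x", "y"})
```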
I distinctly recall the tens of additional km you have to walk in Tokyo if you have to push a stroller. Turns out (as is often the case in JP) the answer is in a densely printed table, in Japanese only ofc.
2 months ago
0
1
0