Rob Horning
@robhorning.bsky.social
📤 4618
📥 257
📝 482
robhorning.substack.com
I wrote something about generative music and reading Vilém Flusser
robhorning.substack.com/p/the-songs-...
The songs remain the same
Into the universe of posthistorical music
https://robhorning.substack.com/p/the-songs-remain-the-same
about 11 hours ago
1
7
3
seeing LLMs as "ideology that can pretend to be a person" helps make it clear what "talking to chatbots" accomplishes: Althusserian "interpellation"
3 days ago
4
198
68
this seems right, and always makes me wonder what the "suckers" get out of being on "sucker lists," why they enjoy being on them. Is it just some combination of masochism and getting paid attention to? Confirmation bias as a service?
maxread.substack.com/p/prediction...
7 days ago
2
22
4
regardless of what else they convey in a specific image, image generators offer images that tell those people who want to hear it that "effort/craft is a sham and a waste of time"
10 days ago
1
9
1
seems like one of the main use cases for genAI is to eliminate the joy that other people take in thinking and actually doing things
www.thecut.com/article/woul...
10 days ago
2
71
25
reposted by
Rob Horning
Hypervisible
10 days ago
Sora 2 is not being “misused” when people use it to produce racist, transphobic, misogynistic, & otherwise vile images—it exists to make it easier to put these images out into the world. Calling it a “misuse” is a grave misunderstanding of what these companies are up to, & lets OpenAI off the hook.
8
558
206
reposted by
Rob Horning
Roland Meyer
22 days ago
However, the problem with these images is not just their genericness. It's the deeply populist idea that politics can be reduced to its immediately visible effects: politics is not judged by how it affects people's concrete daily lives, but rather by its aesthetics—by what image it produces 4/
4
183
27
hope that is the actual cover
23 days ago
0
8
0
analysis of political AI slop as "generated déjà vu"
slavoj.substack.com/p/welcome-to...
23 days ago
0
2
0
but the existence of text and video generators will also fortify those modes of social verification that don't amount to "it's acceptable for me to believe everything I see and read"
www.nytimes.com/2025/10/19/o...
24 days ago
1
14
5
when any person's face can be pasted into any face-shaped hole, it doesn't make things feel "personalized"; rather it negates the idea that there is anything personal about a face
29 days ago
2
22
6
steal your face right off your head
29 days ago
1
12
4
or you could say the entire station has been made into one big ad for this photographer
news.artnet.com/art-world/hu...
about 1 month ago
1
9
0
reposted by
Rob Horning
Roland Meyer
about 1 month ago
«Video generators allow people to experience ideas or beliefs as content without their having to invest their imagination into making them real, into ‹really› believing in them and coming to terms with the implications of their beliefs.»
@robhorning.bsky.social
on Sora Slop Feeds
There are too many waterfalls here
Sora slop feeds
https://robhorning.substack.com/p/there-are-too-many-waterfalls-here?utm_source=post-email-title&publication_id=1073994&post_id=175641108&utm_campaign=email-post-title&isFreemail=false&r=12vt6x&triedRedirect=true&utm_medium=email
0
24
6
reposted by
Rob Horning
👁️
about 1 month ago
hedgehogreview.com/issues/the-u...
1
22
5
that users could prefer a generated simulation to actual old clips for nostalgia purposes clarifies how nostalgia is about consuming "decontextualization" in itself — nostalgia negates history under the auspices of longing for it
about 1 month ago
2
79
25
"new conspiracism" doesn't explain anything but is a means for isolated individuals to experience "social validation" on demand, in the absence of a verifiable public — a way to intensify the gratification of parasociality
www.nplusonemag.com/issue-51/pol...
about 1 month ago
1
8
1
“infinite video” means not infinite entertainment but infinite boredom; the death drive incarnate
about 1 month ago
0
55
16
the idea that some videos are intrinsically interesting to watch (regardless of whether they have any reference to events or things in themselves, any kind of auratic appeal) feels like it can't survive generative models, which make all forms of mere seeing trivial
about 1 month ago
1
10
2
this from Yves Citton's Mythocracy is maybe useful for thinking about Sora 2 and other slop feeds: Generated video constitutes an "imaginary of power" that gives consumers pictures of how they've been trained to believe things are "supposed to be"
about 1 month ago
1
22
4
wonder if the ease and rapidity with which "AI" can generate right-wing fantasy images and propaganda makes them more convincing for their consumers — as though one shouldn't have to use one's own imagination to manifest the bigotry one insists on
www.lrb.co.uk/blog/2025/se...
about 2 months ago
0
20
4
LLMs mean that no one has to write anything they don't care about, but they also mean that "writing anything" will get equated with "not caring" for most people. (If you really cared, you would video yourself talking about it on your phone.)
about 2 months ago
2
16
5
not bad advice, but presumes that most people read and write to experience "charm, surprise, and strangeness" when the opposite may be the case
www.nplusonemag.com/issue-51/the...
about 2 months ago
2
8
1
What does it mean to "optimize" for this condition — to train users to enjoy it? Why is it most profitable for companies to train us in wanting to pay attention as a way of avoiding rather than seeking meaning?
www.noemamag.com/the-last-day...
2 months ago
1
12
3
seems indicative of how stagnant the ideas behind "AI" are that Baudrillard could write a critique of them in 1995 (The Perfect Crime) and none of it seems dated
2 months ago
1
10
0
reposted by
Rob Horning
Hypervisible
2 months ago
And how does the machine know your intent, you might ask? Well by constant surveillance. Just kidding, the machine does not “know” your fucking “intent.”
2
27
11
algorithmic recommendation tries to make it impossible for us to escape our own predictability; continued interaction with these sorts of surveillance systems changes our relationship to our own capability to want things—makes it alien, fully externalized
www.theguardian.com/media/2025/a...
2 months ago
8
83
27
about the commodification of "participation", making "communal vibe" into a set of signs and formal tropes so that listeners can enjoy "participation" by merely consuming and recognizing signs (as with most fashion trends)
3 months ago
2
27
3
seems like a distinction without a difference
www.nytimes.com/2025/08/08/t...
3 months ago
1
10
1
AI companies want us to think their products are like "The Entertainment" from Infinite Jest, capable of chatting us into terminal inertia
nymag.com/intelligence...
3 months ago
1
9
3
what does "slop" (as a general phenomenon) serve as an advertisement for? it seems to impress on people not that "AI" is smart but that it is "stupid" in all the ways already familiar from marketing
3 months ago
2
4
0
reposted by
Rob Horning
kelly pendergrast
4 months ago
Great new entry into the annals of slop theory from
@kneelingbus.bsky.social
(for those still ok with reading ss links):
kneelingbus.substack.com/p/slop-as-a-way-of-life
Inspired me to note down my current working theory on the genesis of the slop era:
1
24
6
reposted by
Rob Horning
Rusty Foster
4 months ago
Did I once again smuggle post-structuralist theory into my internet jokes newsletter? 👁️🫦👁️
www.todayintabs.com/p/we-need-to...
loading . . .
We Need To Talk About Sloppers
The best ever death metal bot out of Denton
https://www.todayintabs.com/p/we-need-to-talk-about-sloppers-b732
9
97
24
9% seems really low actually
www.teenvogue.com/story/teens-...
4 months ago
2
2
0
reminds me of people on livestreams who do whatever the commenters demand
4 months ago
1
8
1
"What such machines offer is the spectacle of thought, and in manipulating them people devote themselves more to the spectacle of thought than to thought itself." (from Baudrillard, The Transparency of Evil)
4 months ago
3
27
11
from Baudrillard's "In the Shadow of the Silent Majorities" — similar stakes in culture moving from "posting" to "AI": the model speaks in place of "the masses"
4 months ago
1
35
7
"posting" could seem like it was a way for users to experience or express agency, but at the same time "participation" is typically an attempt to advertise one's conformity, normality
4 months ago
1
6
0
perhaps tech companies no longer coerce people to post publicly because overall surveillance has improved
www.newyorker.com/culture/infi...
4 months ago
4
20
4
and at the same time, prompters get to be the master manipulators, goading the answers they want out of the "truthbot" and seeming to change what is true about the world
4 months ago
1
15
4
the "content" of reading and writing becomes the processes themselves: they become deliberate exercises to expand attention spans in the face of technologies designed to shorten them
kevinmunger.substack.com/p/attention-...
4 months ago
2
19
5
"vibes" against AI make human capabilities into some mystical inexplicable magic, doomed to be undermined when inevitably the "vibes" no longer can tell what is "real" and what is "AI"
4 months ago
2
19
5
reposted by
Rob Horning
Jeremy Antley
5 months ago
Consumer subjectivity is, at its heart, a mimetic process of desire that cloaks its actualization as one of unique, individual choice when, in fact, its end goal is to manufacture governable copies of the ideal capitalist subject. AI Chatbots turn mimetic desire into a frictionless experience.
1
7
1
Trusting another person is hard; you have to earn their trust in return—trusting a machine or a brand is one-sided and a form of delusion, a belief that it mirrors back effortlessly your desire to trust and be trusted
5 months ago
1
8
5
reposted by
Rob Horning
Roland Meyer
5 months ago
LLMs aren't subjects, but large socio-technical apparatuses, similar to factories, bureaucracies, or corporations, and our critique should not begin by comparing ourselves to them, but by asking what position they impose on us: How do these apparatuses model and interpellate subjectivity? 2/
1
106
25
the theory of "interpassivity" perhaps helps answer this question; despising the morons in the commercial allows you to use AI without despising yourself
www.nytimes.com/2025/06/25/m...
5 months ago
1
36
7
very eager to read this
5 months ago
0
9
0
reposted by
Rob Horning
Casey Explosion
5 months ago
I'm going to be honest here, I think AI is going to do so much damage to the usable internet that even after it inevitably crashes out, it's going to take years to disentangle it from every space it's polluted; like digital asbestos, it's going to be a time-consuming effort and likely an expensive venture.
100
6064
2140