Can a leftist position on the use of generative AI be materially held? I posted a question on Bluesky about the placement that leftists can occupy when it comes to generative artificial intelligence. Since 2023, I've passively explored the use cases of generative AI, specifically in software development, my primary field of work. My background has forced me to take a collectivist perspective when looking at its impact on the balance of labor's power, and at how the tech industry, despite its brief support of progressive campaigns, has a knack for falling back to its roots of putting the levers of finance over anything else. Putting it more plainly, I'm less interested in the shiny things that come from the industry I'm in and more focused on the immediate and planned impact of what comes from it[1]. I've also taken some time to observe the opinions and stances of folks who have more sway and influence than I do in this industry, as well as folks who've existed in my orbit for some time, to juxtapose against the position I'm forming here.
First things first: if the following things don't click, then I don't think much can be said going forward that you'd be able to glean from what I'm expressing here:
* The technology industry, within the United States, has routinely required extraction of resources and labor from other parts of the world with very little being given back to those places.
* The technology industry has reshaped dozens of social norms — dramatically in my life (as a 1990s millennial), from payphones to cellphones, and even more so for my younger siblings (of the 2000s Gen-Z contingent).
* The technology industry has yet to be held accountable to _any sort_ of power for any of the (harmful) actions that have happened under its watch.
This also won't be the last thing I write on this topic, unfortunately, because it's a complex one. With that said, let's get into it.
## When Rich Workers and Executives Post on Hacker News
Despite what someone who _probably_ makes more than half of Americans said about the weakness of tech labor[2], the scene for such organizing has been swelling since 2018 — something I've helped contribute to. That rise has me looking at what opportunities management in the industry can, will and have taken with their levers of control. This became important after the sale of Twitter by Jack Dorsey and its board to Elon Musk, notably when we saw how the company was able to keep running at a moderately okay pace despite firing 6,000 people. The company dipped in market valuation for some time, but capital management, especially when in balance with the State, helped bring it back up, according to the Wall Street Journal. Fortunately, a lot of the workers who _had_ the ability to find new work ended up in places like TikTok, Facebook and Google — all places (sans TikTok?) that have also experienced a wave of layoffs. Keep in mind that as generative AI becomes better[3] at convincing management, there's more need to put in levers _against_ hot-swapping folks in favor of it. Although Musk didn't declare this with his move, as far as we know, that kind of behavior proved something that Thomas Ptacek of fly.io and Elon Musk agree on (emphasis mine):
> LLMs really _might displace many software developers_. That’s not a high horse we get to ride. Our [software developers] jobs are just as much in tech’s line of fire as everybody else’s have been for the last 3 decades. We’re not East Coast dockworkers; we won’t stop progress on our own.
Despite being a statist (which Musk would also consider himself to be), Thomas seems to be more in favor of private governance than public. Their lack of depth on what tech workers have been fighting for (something even ChatGPT can poorly reproduce in a query about tech worker organizing) highlights this disconnect — even to the point where tech workers in the organizing space _supported_ (fiscally and otherwise) the East Coast dockworkers and their strikes. That's not something you'll find if you treat places like Hacker News as the sole perspective of the tech industry. I invite them to reevaluate this position after reading this in full (if they ever do). Statists are conventionally folks who are in favor of big governance, and to ignore how Musk _relied_ on strong (capitalist-centric) governance, the same way a _lot of_ American tech companies do, seems like an oversight by Thomas.
Steve Klabnik has written about his dismay with the generative AI discourse, which reads as a want for "both sides" to do better in how they approach conversations around the topic. He linked to another piece, by James Dennis, that takes a perspective on art and creativity to highlight that humans (people?) will continue to create and produce novel things _in spite of_ generative works. Another one is more specific to software engineering: it covers the eventual decay of the "craft" of software engineering that books and conferences have formed around peoples' cleverness, through the lens of the software engineer's identity crisis. These sit closer to the "center-right" (bear with me) position on how one can look at this technology and how it impacts the craft. Unfortunately, the latter ends in a way that reinforces the notion of forced evolution of a field as necessary to growth. As someone who's worked in public consulting for a short period of time, I'd say the last thing you want to do is _rush ahead_ with a trend or sense of progress because Hacker News prescribes it. In fact, it's always wiser to give it time to iron out.

However, there are positions that lean more towards something you'd expect Ptacek to agree with, like a piece by Campos on the notion that AI criticism has become lazy. These stances tend to lean toward what you'd find as you read _The Network State_, a book that overindexes on techno-solutionism as the end-all-be-all and a means of saving us from ourselves. Notably, this piece would fit in around the third chapter of Balaji's book, on tripolarity in power, since it declares points about surrendering education to technology even though the strongest proponents of "ethical AI in education" tend to be the biggest bullshitters. It does end with a semi-honest point that capitalism currently dictates the direction of this industry, but with no real call-to-action despite demanding more from the space of criticism, which is disappointing because it gives AI proponents _more of an excuse_ to do nothing about most of the issues Campos outlined.
## Leveraging Generative AI for the Public?
The thing that folks do like to mention, especially in my left-leaning circles, when it comes to generative AI is China's introduction of smaller, cheaper and more efficient LLMs that can, at times, outperform the American-made ones. This seems to be a habit with Eastern technology, from cars to computer manufacturing. The most notable are the models produced by DeepSeek. As mentioned, I've been testing some use cases with these solutions at home, most recently with `ollama` and `aider`, allowing me to flip between different downloaded models when working with them[4] (a rough sketch of that setup follows the quote below). The output is moderately okay — if I give it a "solved" problem, it can get a particular distance (~40% to 60%) before I need to intervene and correct things. I struggle to replicate the level of performance that Harper's company produced with his journey into social "agentic" coding[5]. Despite the output being described as not comparable to that of an actual software engineer[6], folks are comfortable doing the software engineering equivalent of what they're doing with OpenAI's Sora (emphasis mine):
> What’s also happening here is a massive _outsourcing of labor_. OpenAI has cleverly packaged what would otherwise be expensive training and evaluation work as a "fun social experience." Every video prompt, every video tweak, every video that gets shared or discarded, what goes viral, what doesn’t, is training their video generation model. That’s all free labor _that would cost millions_ to replicate in a controlled environment with paid testers. They’re essentially getting millions of people to volunteer as unpaid quality assurance testers, prompt engineers, and data labelers. **They have gamified reinforcement learning at scale**.
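As an aside, here's roughly what that local `ollama` + `aider` setup looks like in practice. Treat this as a minimal sketch that assumes a local `ollama` daemon on its default port; the model tags are illustrative stand-ins for whatever you've actually downloaded.

```sh
# Pull a couple of local models to flip between; these tags are
# illustrative, not an endorsement of any particular model.
ollama pull deepseek-coder:6.7b
ollama pull qwen2.5-coder:7b

# aider reaches the local ollama daemon over HTTP
# (11434 is ollama's default port).
export OLLAMA_API_BASE=http://127.0.0.1:11434

# Swap models per session by changing the --model flag.
aider --model ollama_chat/deepseek-coder:6.7b
# ...or, in another session:
aider --model ollama_chat/qwen2.5-coder:7b
```

Nothing here leaves the machine, which is part of the appeal when you're poking at efficacy rather than feeding someone's training pipeline.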
The focus in that quote is what I see mirrored in public sector work: a want to "increase response times" (or efficiency, or whichever business-centric term you'd like to improve) while not taking into consideration what _human_ decisions (almost always policy) cause slowdowns and the like[7]. What's happening — as it tends to, and as was even noted in Thomas Ptacek's company blog and ignored in Campos' piece — is that folks who champion these technologies _rarely_ stop to consider how other people can use their tools for malice. The Wright brothers didn't (couldn't?), and look how that turned out for the future of war and invasion. A particular United States Marine Corps colonel, however, was already operating from a position of violence, on behalf of the state and its interests, and had no issue asking for even _more_ efficiency in how the M45 MEUSOC semi-auto pistol could be used — especially in places like Iraq or by the Los Angeles Police Department.
In none of the pieces mentioned above were there any strong considerations of how generative artificial intelligence has increased the difficulty folks face in finding work, due to the (now speculative but not improbable) case of lower-ranking software engineering positions being made redundant. In fact, in Annie's aforementioned post on the software engineer's identity crisis, they needed to rewrite history just a bit in order to justify a transformation in labor (emphasis mine):
> The pendulum metaphor offers us wisdom here. Just as many of us have swung between engineering and management roles, we can embrace a similar fluidity with AI. Some periods we’ll dive deep into the code, experiencing that thrill of crafting elegant solutions. Other times we’ll step back to guide AI systems - **not as overseers, but as master builders who understand every part of their craft**. Like the Industrial Revolution’s workers who became experts at optimising the machines that transformed their craft, we can master these AI systems - making them instruments of our creativity, not replacements for it.
Ironically, the term "overseer" _is more apt_, since what an AI engineer is doing is "guiding" the outputs of a machine without any requirement of understanding the depth of the craft — that's the whole premise of vibe coding. This rephrasing also helps to ignore what many workers of the Industrial Revolution were against: hyper-specialized machinery that _directly_ threatened their ability _to work_ and _to negotiate the terms of work_. This is discussed at length in Brian Merchant's latest book, Blood in the Machine, in the brief chapter "The Machinery Question", which lays out how one's perspective on the impact of machines on work depended on one's position (and desired placement) in class. In short, did one want to be a worker of merit or an entrepreneur of control? Software engineering, post the dot-com boom, has enjoyed a comfortable place in pay, especially in the United States, that has warped people's understanding of how that loyalty is bought (and can be easily retracted). Digging into the folks who helped craft the _concept_ of modern overseers, or the professional managerial class, we can see how that also leaned on a system that mimicked Aristotelian philosophy on the need for human exploitation in service of automation (warning for those uncomfortable with the linkage of technology and plantations due to their identity and nationality):
> To understand the link between Babbage’s engines and his theories of labor control, we can first look to his view on automation itself. During Babbage’s time, the term “engine” was a synonym for “machine” and was applied to the swell of industrial machinery that was used to transform traditional labor practices. His engines take their place alongside other mechanical tools for labor automation, distinguished by their purposive automation of mental (rather than manual) labor. Babbage understood automation generally—including his engines—as dependent on the division of labor. He observed that “[t]he division of labour suggests the contrivance of tools and machinery to execute its processes,” reasoning that “[w]hen each process has been reduced to the use of some simple tool, the union of all these tools, actuated by one moving power, constitutes a machine.” Division and rationalization of labor—specification of each piece of a given job in order to render the work process (and the people doing it) observable, quantifiable, and controllable “from above”—was, for Babbage, the enabling condition for automation. Thus, in order to design engines to automate mental labor, Babbage first needed to borrow (or develop) systems of labor division and control.
You can't divorce these notions without _actively_ ignoring history and the present-day impacts of technological innovation. Doing so is tremendously easy because no such reckoning is required to download Microsoft's Visual Studio Code or to install Windsurf — the same way we see no lapse in judgement when facilitating genocide for profit in the form of state interest. The _point_ here is that by choosing to narrow the scope of production and impact to a point of comfort for one's discourse, folks in left spaces are doing the work of the alt-right in technology. This has to be something folks acknowledge lest we cede more and more developmental progress in their favor.
## Countermeasures in genAI
I opened this piece mentioning a collectivist perspective. It's a novel one for me, frankly, because I once did see technology as a means of giving folks more power in a world where it's held and hoarded by a few. It took me moving to California, closer to the American crux of technology worship, to come to terms with the fact that it is largely an extension of the means by which American capitalism operates. In fact, Ruha Benjamin's book, Race After Technology, makes _many_ cases — new to many, old to some — about how the most immediate deployments of technology tend to have racial underpinnings that operate on behalf of a larger agenda of integrating what she calls the New Jim Code. Before we can begin to talk about democratizing AI, making it fit some definition of open, we have to be honest about who it's being made open for and what we're defining as accessible. Routinely, this is not for the folks who could benefit from having more control over their indirectly leased technologies but for the folks who can afford thousand-dollar machines and phones _off_-lease[8]. To this day, technology is made and optimized from the perspective of a white man. We have small efforts towards changing this, but they're not just few and far between — they're intentionally underfunded and driven out of sight. Relying on the mimicry of capital to drive a new direction will result in its usual trend of burnout, or worse, for the founders and workers involved.
There are quite a few folks working on this from a perspective that recenters people over the outputs of the machine. One of note is _AI as Normal Technology_, a longer read that yoyos between a neoindustrial agenda that would push AI to be more deeply integrated into society and development, and a want for that development to not be controlled by a few industry titans. They're more honest about the progression of AI than most advocates:
> According to the normal technology view, such sudden economic impacts are implausible. In the previous sections, we discussed one reason: Sudden improvements in AI methods are certainly possible but do not directly translate to economic impacts, which require innovation (in the sense of application development) and diffusion.
>
> Innovation and diffusion happen in a feedback loop. In safety-critical applications, this feedback loop is always slow, but even beyond safety, there are many reasons why it is likely to be slow. With past general-purpose technologies such as electricity, computers, and the internet, the respective feedback loops unfolded over several decades, and we should expect the same to happen with AI as well.
>
> Another argument for gradual economic impacts: Once we automate something, its cost of production, and its value, tend to drop drastically over time compared to the cost of human labor. As automation increases, humans will adapt, and will focus on tasks that are not yet automated, perhaps tasks that do not exist today (in Part II we describe what those might look like).
They highlight a forecast of what job closure and restructuring will eventually look like, given how generative artificial intelligence operates as a "labor maximizer", towards the end of Part II:
> In addition to AI control, task specification is likely to become a bigger part of what human jobs entail (depending on how broadly we conceive of control, specification could be considered part of control). As anyone who has tried to outsource software or product development knows, unambiguously specifying what is desired turns out to be a surprisingly big part of the overall effort. Thus, human labor—specification and oversight—will operate at the boundary between AI systems performing different tasks. Eliminating some of these efficiency bottlenecks and having AI systems autonomously accomplish larger tasks “end-to-end” will be an ever-present temptation, but this will increase safety risks since it will decrease legibility and control. These risks will act as a natural check against ceding too much control.
It links to one paper that I've shared while working on an LLM project to highlight my concern about the echo chamber of technology and government:
> What’s most notable is that McDermott’s warning is from 1984, when, like today, the field of AI was awash with confident optimism about the near future of machine intelligence. McDermott was writing about a cyclical pattern in the field. New, apparent breakthroughs would lead AI practitioners to predict rapid progress, successful commercialization, and the near-term prospects of "true AI." **Governments and companies would get caught up in the enthusiasm, and would shower the field with research and development funding. AI Spring would be in bloom**. When progress stalled, the enthusiasm, funding, and jobs would dry up. AI Winter would arrive. Indeed, about five years after McDermott’s warning, a new AI winter set in.
Anil Dash wrote a post on their blog that runs counter to the aforementioned claim that artificial intelligence criticism has become lazy: more on the point that a "moderate" position is nearly not possible (or available) in most spaces[9]. I disagree with this for a number of reasons, made clear by the number of conferences, product launches (if one scrolls LinkedIn) and capital raised _in favor_ of promoting generative artificial intelligence. They've themselves written in enthusiasm about retrofitting an API standard for models to communicate, framing it as _as groundbreaking_ as Web 2.0 itself — disrespectful to the actual gains of that space, since that was something done collectively (despite corporate capture), whereas the Model Context Protocol was an amplifying tool for Amazon-backed Anthropic to enable what Doctorow describes as the flywheel effect of platform capitalism in his book, Chokepoint Capitalism. He's also written what _I think_ is the clearest definition of the MIT-license equivalent of what good generative artificial intelligence model development could look like, but this would require what China's doing: some level of state intervention or a wealthy benefactor to fund the basis of this research and work. This wouldn't happen in a capitalist society, especially in the United States, without some sort of nationalistic agenda to ramp up domestic talent[10].
## Wait, so can there be a leftist position on AI?
I actually don't think so — at least, not in a completely puritanical way[11]. As I've mentioned, I've worked on providing generative AI solutions to government at work, and I experiment with its efficacy largely to prevent the hype from clouding my perspective, at least at the individual level. The individual perspective also tends to be the limiting scope from which most of the folks I've mentioned above approach it. There's been little mention of how we can reshape policy to handle this transition[12]. Relying on anyone from executives down to middle management to take a firmer stance yoyos between being beholden to investors and leaning into that Aristotelian stance mentioned earlier. So how do we move from that to a collectivist, people-centric position?
### From a Labor Organizing Perspective
It is disappointing that _AI as Normal Technology_ danced around labor and softly ignored the impact of said productivity gains in relation to the sociopolitical evolution of the landscape, as well as which regions of the world have had to operate as the battery and labor[13]. This tends to result in the inherent utopian perspective of trusting industry leaders (or developers) to do The Right Thing. That doesn't tend to work out in favor of the people who need it the most: folks who don't have a fleet of lawyers at their disposal or, like me, folks who live in a state whose legislature, down to the local level, is against any sort of progressive stance. So that returns us to what we can do together as workers. I would love to see a **sectoral bargaining unit** across engineers, designers, lower management, product managers, researchers — the whole plethora of folks — so we can stand shoulder to shoulder like the folks who keep your smartphone's network running, the power that fuels your home and hobbies, and the construction of the data centers where you can run your instance of Headscale to get back to your homelab from wherever you are in the worker-built world. This would push back on what Ptacek initially mentioned about our inability to do what the dockworkers did, but it requires political education and a commitment to folks you don't know as well. That's why events like _Circuit Breakers_ are important: so folks _can_ bond, learn what steps we need to take to get there and learn meaningful tech labor history[14].
### From a Community Perspective
I don't expect much to shift here, especially since the soft decline of people-centric community events has been overtaken by corporate cosplay of them. By cosplay, I mean the developer relations community spearheading, with corporate funding, moves to "reboot" community spaces that went dormant during the (still ongoing) COVID-19 pandemic. Events like WaffleJS have been usurped by Google Developer communities and the like. And with the advent of generative AI, sidecar events are all about the things folks are spending money to make that they could have built themselves with 30 more minutes of development — or a bit more curiosity.
Instead, more work and effort needs to be spent on countering the systems that rely on the inputs of generative AI. This enters a level of "black-hat" work, since it would also pollute public datasets that folks would be using; but unfortunately, until the larger actors that fund companies like https://brightdata.com/ (or even Google's own search-proxying infrastructure) are dealt with, this is necessary. More effort should go into making things like Glaze and Nightshade more integrated into the tools folks use on a regular basis, along with a means of submitting content to extend the efficacy of said tools[15]; a small sketch of the watermarking end of this idea closes out this section. Social media networks could allow folks to opt in to such protections, as they're a hot target for non-consensual scraping. It's weird; these projects technically fall under generative AI, since they also modify images, but since they're adversarial to _further_ modifications, you'll rarely find any advocates pushing in favor of them. That highlights how the advent of such production isn't necessarily about making the act of "generating art" more accessible but mirrors the plantation-like behavior mentioned before (though coded with race — as technology inherently is):
> The specter of the plantation that hangs over computation and industrial labor regimes also speaks to the need to revisit the terms of "free" industrial labor, and to recognize the contested process through which this particular category of "freedom" was created and guaranteed. To do so, we must directly confront the unmarked presence of Black unfreedom that haunts "free" labor and reweave links that have been strategically severed between race, labor, and computational technologies.
Put differently, the cost of producing and training this work is non-zero, and the need to move with such a veneer helps justify further extraction of people's work for the sake of "scratching a visual itch".
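And here's the sketch promised above: a minimal illustration of embedding and recovering an invisible watermark with the `invisible-watermark` library mentioned in the footnotes. The file names and payload are hypothetical, and this is a far weaker primitive than Glaze or Nightshade; it only embeds a recoverable signature, rather than adversarially perturbing the image.

```python
# A minimal sketch using https://github.com/ShieldMnt/invisible-watermark;
# the file names and payload below are hypothetical.
import cv2
from imwatermark import WatermarkEncoder, WatermarkDecoder

payload = b'do-not-train'  # hypothetical marker

# Embed the mark with the DWT+DCT frequency-domain method.
image = cv2.imread('artwork.png')
encoder = WatermarkEncoder()
encoder.set_watermark('bytes', payload)
marked = encoder.encode(image, 'dwtDct')
cv2.imwrite('artwork-marked.png', marked)

# Later: recover the mark to show the image was taken.
# The decoder needs the payload length in bits.
decoder = WatermarkDecoder('bytes', len(payload) * 8)
recovered = decoder.decode(cv2.imread('artwork-marked.png'), 'dwtDct')
print(recovered)  # expected: b'do-not-train'
```

The frequency-domain embedding is what lets the mark survive casual resizing and recompression better than metadata would, which is the bare minimum for anything meant to follow an image through a scraping pipeline.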
* * *
If you've read this all the way through, I appreciate any feedback and corrections. As I said at the start, my politics lead my stances, and that means taking a critical lens to the industry, its impact and the players within it. If I had to propose a "critical" reading list on AI that's balanced between its development and its denigration, the following would be a start:
* Race After Technology by Dr. Ruha Benjamin
* Empire of AI by Karen Hao
* The Alignment Problem by Brian Christian
* The AI Con by Dr. Emily Bender and Dr. Alex Hanna
* Weapons of Math Destruction by Cathy O'Neil
* AI Engineering: Building Applications with Foundation Models by Chip Huyen
I hope this'll help advocates understand the contentions and history behind the push against this work. I also hope this gives anti-use proponents a sense of the scope of the space, so they can avoid repeating things that have either been debunked or made irrelevant. I don't think criticism or advocacy has gotten lazy in its delivery, but I do think that we need to consider more: not just the economic impact but the sociopolitical, cultural and societal impacts of this technology. We missed this opportunity, to a degree, with cellphones and the Internet, so let's try now.
* * *
1. This really began for me when I was working at Lyft. Some folks have a better ability at separating the technology from its deployment in practice, but seeing how it could create an apparatus for surveillance via the internal tool named Helium made that separation impossible for me. That, coupled with experiences I had internally around how regions were labeled as "high risk" and how the platform collects _way more information_ than necessary, got to a point where users of the rival application would even assume it can control pricing based on health and location. Folks aren't wrong in being suspect — even in other countries. ↩︎
2. As far as I can dig, this person has _never_ contributed heavily to the nascent labor scene. If you're going to talk shit about tech labor, at least be better than Marc Andreessen and donate to the folks working to make it as strong as the dockworkers' so we can work to protect _everyone_. I really loathe folks who know nothing about labor but speak on it as if they led the Kodak strike. If you want to be better, fund it. https://www.patreon.com/collectiveaction and https://workerorganizing.org/donate/ are two places you can get started with. ↩︎
3. I should note that "better" is limited to _simulating_ reasoning (which, in turn, allows it to be better at generating code and decision trees). Said reasoning also seems limited to English; I routinely attempt to get it working in Spanish and Haitian Kreyol, and the quality of the machine drops aggressively. This highlights a hidden perspective: the folks who are going to need "training" are those who are also notably kept outside of the Anglospheric zone of technology. It doesn't even have to perform _well_; just well enough that they can cut out roles and eliminate the most expensive thing to profits: _human_ labor. ↩︎
4. My homelab is a donated PC from a friend running Debian and using a graphics card that's old as dirt, according to some. The fact that I get better performance with DeepSeek versus whatever American companies are cranking out makes me wonder if, privately, they're aiming to wrap Chinese models under lower tier plans at OpenAI. That'd run into issues if they use that in the GovTech space — but they rely on government officials not knowing how things work in TPP (technical product proposals) to sell them those multi-year contracts. ↩︎
5. Granted, if you look at the organization that he runs, their focus _seems_ to be on seeking a justification to integrate these kinds of tools to a place where they could eventually fire the one Black person (and only woman) who works there and replace them with `OpsBoard.biz`. I guess there's an inherent _want_ to have generative AI become something more "useful", but their outputs don't necessarily seem terribly aligned with their politics (unless said politics are more cosmetic). It shouldn't be a terrible surprise that the "crowning" achievement of this technology is that it's "lowering the barrier": https://bsky.app/profile/revolution.social/post/3m3i3oihink25. I'll check out the book they suggested, https://www.penguinrandomhouse.com/books/741805/co-intelligence-by-ethan-mollick/, mentioned in their post about their eagerness to use LLMs at https://harper.blog/2025/02/16/my-llm-codegen-workflow-atm/. ↩︎
6. This is something you can find documented from folks like the creator of Redis, although they admit that they have more of a preference for "AI engineering" _despite_ their recorded need to constantly adjust the scope and range of what Google's LLM was suggesting. It is a bit disappointing to see a point of "uncertainty" from the same person, but this is endemic in the tech industry: folks can tell you the history of Monty Python or Steve Wozniak but can't tell you about the driving forces behind computing as a whole. ↩︎
7. My personal axe on this is the lack of a federal registry that removes the need for a DMV (or its equivalent) in every state — take the post office, for example. Having different standards on how to drive in each state makes sense for terrain and the like, but the _registration_ and adaptation, which turns into a state-by-state identification system, has (in my opinion, with little research) slowed down the United States' ability to roll out something like https://login.gov/ sooner. States' rights are more important when the federal government becomes belligerent (as is the case today), but said belligerency is contextual. ↩︎
8. There's a subtly paternalistic (and racist) thing that I see (white, but not all) folks in tech do when they point to non-Western places and speak about the "growth" of technology. In the case of places like Nigeria, it tends to focus on regions where capital accumulation is _already_ higher than elsewhere and where Western standards of living have been adopted (if not completely assimilated). It's rarely done from a place of consensual integration but from a place of covert imperialism. By leaning on the wealthy and integrated, as well as the diaspora who've landed in the West, to parrot the multinationals' lines about progress, we get situations where folks can be easily exploited, as reported here: https://www.technologyreview.com/2022/04/06/1048981/worldcoin-cryptocurrency-biometrics-web3/, or lied to, as with Facebook's cryptocurrency project: https://www.theverge.com/2019/10/4/20899310/facebook-libra-paypal-online-currency-payment-system-cryptocurrency. ↩︎
9. I would attribute the inability to talk on this to a number of reasons: social media being an accelerant of opinions (good _and_ bad), the underlying polarization that existed _before_ generative AI made a splash with companies creating less reliable technology (to some; to others it might have been revolutionary) and the increasing visibility of the politicization of the tech industry. I'd be lying if I said I haven't contributed to this in some capacity. ↩︎
10. Funnily enough, this is something the Trump administration seems to be in favor of. This is most likely driven by corporate interests, because there's no mention in said plan of it being state-run or state-backed, only "guided" by an organization that no longer has a leader — thus returning it to the directives of industry. ↩︎
11. This stems from the word "leftist" becoming more and more diluted in its definition. Democrats weren't really considered to be left-leaning, if one takes the time to read and examine their policy. They exist in what I've previously written as the center-right of the establishment: they'll make strides to prevent all-out rebellion against the state but also work to keep the right as complacent as possible. That's not progress and that's why I always squint at folks who hint at being comfortable with being close to the center — that's establishment holding. ↩︎
12. Yes, with the effective ban on regulation towards artificial intelligence, this is a non-starter from the federal level. That doesn't mean that those with influence by way of capital can't still try. That also hasn't stopped labor groups from attempting to introduce policy, similar to what SAG-AFTRA did under Biden. We (software engineers) don't have to copy — we can remix what's done and add our own improvements, which ends up helping everyone. That's how wins in labor organizing work. ↩︎
13. I am expecting _a bit too much_ from technologists to understand the industry's genesis in colonialism and exploitation. However, I can't go two weeks without some reference to Neal Stephenson, William Gibson or the more prevalent Cory Doctorow. Why is that? ↩︎
14. I see it as the _lowest_ level of entry towards meaningful work in this space, and it can be applied to things beyond worker protections: it can hold companies to account for political causes and for stabilization of work as we see their eagerness to shake down the coffers for the sake of enrichment. I understand that some cling to the idea that companies are infinitely deserving of profit extraction — I want you to see the parallel between that and the current landscape of model training and ask yourself how that can be sustainable, especially when training is still kept in an Anglophonic perspective (which still might be more beneficial to you — to which nothing much more can be said). ↩︎
15. There's also https://artshield.io, which seems to use https://github.com/ShieldMnt/invisible-watermark. I can't imagine that it's as up-to-date as projects like https://github.com/Shilin-LU/VINE. ↩︎