Generative AI vegetarianism | Sean Boots
Generative AI vegetarianism, simply put, is avoiding generative AI tools as much as you can in your day-to-day life.
There’s a power imbalance at work here that’s hard to ignore. Large “AI” companies, the ones with billions in venture capital, send their bots to harvest free content. Not only from big publishers or Wikipedia, but from small, independent websites, too. But we, the people running these sites – often as passion projects, as ways to freely share what we’ve learned, as digital gardens we tend in our spare time – we’re the ones paying for the bandwidth and server resources to handle all those additional requests while those companies profit from the training data they extract. It’s an asymmetric battle: small systems absorbing the demands generated at an entirely different, industrial scale.
This superb essay by Anil Seth won the 2025 Berggruen Prize Essay Competition.
The future history of AI is not yet written. There is no inevitability to the directions AI might yet take. To think otherwise is to be overly constrained by our conceptual inheritance, weighed down by the baggage of bad science fiction and submissive to the self-serving narrative of tech companies laboring to make it to the next financial quarter. Time is short, but collectively we can still decide which kinds of AI we really want and which we really don’t.
- I don’t and won’t use “AI” in the text of any of my published work.
- I’m not worried about “AI” replacing me as a novelist.
- People in general are burning out on “AI.”
- I’m supporting human artists, including as they relate to my own work.
- “AI” is probably sticking around in some form.
- “AI” is a marketing term, not a technical one, and encompasses different technologies.
- There were and are ethical ways to have trained generative “AI”, but because they weren’t used, the entire field is suspect.
- The various processes lumped into “AI” are likely to be integrated into programs and applications that are in business and creative workflows.
- It’s all right to be informed about the state of the art when it comes to “AI.”
- Some people are being made to use “AI” as a condition of their jobs. Maybe don’t give them too much shit for it.
Appealing to data as the ultimate authority — especially when fueled by engineered desire — isn’t neutrality, it’s an abdication of responsibility.
I suppose it’s not clear to me what a ‘good’ window into unreliable, systemically toxic systems accomplishes, or how it changes anything that matters for the better, or what that idea even means at all. I don’t understand how “ethical AI” isn’t just “clean coal” or “natural gas.” The power of normalization as four generations are raised breathing low doses of aerosolized neurotoxins; the alternative was called “unleaded”, but the poison was called “regular gas”.
There’s a real technology here, somewhere. Stochastic pattern recognition seems like a powerful tool for solving some problems. But solving a problem starts at the problem, not working backwards from the tools.
I write here for you, not for the benefit of building the machines producing a firehose of spam, scams, and slop. The artificial intelligence companies have already violated the expectations of even a public web. Regardless of the benefits they have created — and I do believe there are benefits to these technologies — they have behaved unethically. Defensive action is the only control a publisher can assume right now.
I wanted to quote an excerpt of this post, but honestly I couldn’t choose just one part—the whole thing is perfect. You should read it for the beauty of the language alone.
(This is Anthony Moser’s first blog post. I fear he has created his Citizen Kane.)
This website is for humans, and LLMs are not welcome here.
Cosigned.
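For what it’s worth, the bluntest tool a publisher has for saying so is robots.txt. Here’s a minimal sketch that asks the best-known AI crawlers to stay away (these user-agent strings are the publicly documented ones; compliance is entirely voluntary on the crawlers’ part):

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

It’s opt-out rather than opt-in, and it only works on bots polite enough to check, which is rather the point of the complaint.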
Substack willingly platforms, and allows bad actors to monetize, hate speech and misinformation.
Says who?
Here are some well-reasoned pieces on the subject for you to educate yourself and decide.
People advancing an inevitabilist world view state that the future they perceive will inevitably come to pass. It follows, relatively straightforwardly, that the only sensible way to respond to this is to prepare as best you can for that future.
This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “ignoring reality”, and the only conversations worth engaging are those that already accept your premise.
Most obviously, aliveness is what generally feels absent from the written and visual outputs of ChatGPT and its ilk, even when they’re otherwise of high quality. I’m not claiming I couldn’t be fooled into thinking AI writing or art was made by a human (I’m sure I already have been); but that when I realise something’s AI, either because it’s blindingly obvious or when I find out, it no longer feels so alive to me. And that this change in my feelings about it isn’t irrelevant: that it means something.
More subtly, it feels like our own aliveness is what’s at stake when we’re urged to get better at prompting LLMs to provide the most useful responses. Maybe that’s a necessary modern skill; but still, the fact is that we’re being asked to think less like ourselves and more like our tools.
It feels like someone just harvested lumber from a forest I helped grow, and now wants to sell me the furniture they made with it.
AI presents design leaders with a quandary, requiring us to tread a fine line between what is acceptable and useful, and what is problematic and harmful.
This document is not a manifesto or an agenda. It is a series of prompts written by design leaders for design leaders, conceived to help us navigate these tricky waters.
Here’s what the “AI will replace developers” crowd fundamentally misunderstands: code is not an asset—it’s a liability. Every line must be maintained, debugged, secured, and eventually replaced. The real asset is the business capability that code enables.
If AI makes writing code faster and cheaper, it’s really making it easier to create liability. When you can generate liability at unprecedented speed, the ability to manage and minimize that liability strategically becomes exponentially more valuable.
This is particularly true because AI excels at local optimization but fails at global design. It can optimize individual functions but can’t determine whether a service should exist in the first place, or how it should interact with the broader system. When implementation speed increases dramatically, architectural mistakes get baked in before you realize they’re mistakes.
Frankly, I’d rather quit my career than live in the future they’re selling. It’s the sheer dystopian drabness of it. Mediocrity as a service.
I tried the tab-completion slot machines; not my cup of tea. I tried image generation and was overcome with literal depression. I don’t want a future as a “prompt artist”.
I’m mostly linking this for what it says, but oh boy, do I love the way it says it with this wonderful HTML web component.
I don’t use large language models. My objection to using them is ethical. I know how the sausage is made.
I wanted to clarify that. I’m not rejecting large language models because they’re useless. They can absolutely be useful. I just don’t think the usefulness outweighs the ethical issues in how they’re trained.
Molly White came to the same conclusion:
The benefits, though extant, seem to pale in comparison to the costs.
What I do know is that I find LLMs useful on occasion, but every time I use one I die a little inside.
I genuinely look forward to being able to use a large language model with a clear conscience. Such a model would need to be trained ethically. When we get a free-range organic large language model I’ll be the first in line to use it. Until then, I’ll abstain. Remember:
You don’t get companies to change their behaviour by rewarding them for it. If you really want better behaviour from the purveyors of generative tools, you should be boycotting the current offerings.
Still, in anticipation of an ethical large language model someday becoming reality, I think it’s good for me to have an understanding of which tasks these tools are good at.
Prototyping seems like a good use case. My general attitude to prototyping is the exact opposite of my attitude to production code: use absolutely any tool you want and prioritise speed over quality.
When it comes to coding in general, I think Laurie is really onto something when he says:
Is what you’re doing taking a large amount of text and asking the LLM to convert it into a smaller amount of text? Then it’s probably going to be great at it. If you’re asking it to convert into a roughly equal amount of text it will be so-so. If you’re asking it to create more text than you gave it, forget about it.
In other words, despite what the hype says, these tools are far better at transforming than they are at generating.
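To make that distinction concrete, here’s a minimal sketch of a transformation task (assuming the official openai Python client; the model name and the CSV task are my own illustrative choices, not a recommendation): a large blob of text goes in, a smaller and more structured one comes out.

```python
# A minimal sketch, assuming the official `openai` package (v1+) and an
# OPENAI_API_KEY in the environment; the model name is illustrative.
from openai import OpenAI

client = OpenAI()

def dump_to_csv(data_dump: str) -> str:
    """Large text in, smaller text out: the 'transforming' end of
    Laurie's spectrum, where these tools tend to do well."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; swap in whatever model you'd use
        messages=[
            {
                "role": "system",
                "content": (
                    "Convert the following messy log dump into CSV with "
                    "the columns timestamp, level, message. "
                    "Output only the CSV."
                ),
            },
            {"role": "user", "content": data_dump},
        ],
    )
    return response.choices[0].message.content
```

The inverse task, expanding a short brief into a long document, sits at the “forget about it” end of the same spectrum.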
Iris Meredith goes deeper into this distinction between transformative and compositional work:
Compositionality relies (among other things) on two core values or functions: choice and precision, both of which are antithetical to LLM functioning.
My own take on this is that transformative work is often the drudge work—take this data dump and convert it to some other format; take this mock-up and make a disposable prototype. I want my tools to help me with that.
But compositional work that relies on judgement, taste, and choice? Not only would I not use a large language model for that, it’s exactly the kind of work that I don’t want to automate away.
Transformative work is done with broad brushstrokes. Compositional work is done with a scalpel.
Large language models are big messy brushes, not scalpels.
If I’m understanding Greg correctly here, he’s saying it’s okay for people to use large language models …because they’re being forced to?