rob's answer covers the policy viewpoint.
But I think sharing opinions, or even rants, can be productive here, because this issue strikes at the very heart of why LLM content is so subtly harmful. Words matter, not for their objective content, but because they're the way one human mind connects to another.
To me, the web was always meant to be a web of humans, a tapestry of minds all connecting to each other, free of physical boundaries.
And now, this open web as a place where humans interact as humans is dying. Traffic to sites like ours is falling, and the gamification mechanics that were meant to incentivize people to participate now incentivize them to post generated content and watch a number go up with no effort of their own.
Actually defending against generated content is difficult, time-consuming, and frankly depressing: where once posts with well-constructed sentences meant that at least someone had cared enough about the question to write them, now I find myself constantly wondering whether I'm wasting my time reading something no one could be bothered to write.
As much as SE has always intentionally de-emphasized the social aspects of its sites, the part where people with genuine questions ask and people with genuine knowledge answer has always mattered to me. So much individual effort has gone into all the wonderful answers on this site, each with the particular, peculiar voice of the person who wrote it, each offered without tangible reward.
And now we get answers that have no voice, that play at being shared knowledge when there is no mind behind them sharing it, no one caring whether they are right or wrong. Just pure bullshit in Frankfurt's sense: words whose only purpose is to exist as words, to provoke someone into clicking the upvote button, words that play at being communication but communicate nothing.
In a comment, knzhou says:

> My question is just to ask what other people think about it. I personally don't have a strong opinion against this.
I have a strong opinion. I'd rather read a dozen answers that are wrong in the interesting ways in which humans can be wrong than this slop where no one cared whether it was right. I want to talk to people, not algorithms.
I don't really know what to do about this.
There cannot be automated defenses against the output of LLMs (or at least I have not seen any plausible strategy for one), and every time I suspend someone for using generated text, there's this little voice in the back of my head that goes "Maybe this person just talks like that?" or "Maybe I just don't know this topic well enough to judge." I really don't want to turn away genuine people whose only sin was writing a little weirdly. (Of course there are clear-cut cases, but those are effectively no more troublesome than traditional spam.)
But I also don't want to stop trying. If the web dies, if it all turns into machines talking to machines, I want to be able to say that I did my best to at least preserve this little part of it for as long as possible.