there is no future for the network at all.
You might be closer to the truth than you think, or fear.
Regardless of the above, no Community Manager will tell you that removing 7% of the users who try to actively participate in a community per week is remotely tenable for a healthy community.
Is "trying to actively participate" the only bar a Community Manager sets for the members of a healthy community? You know what Stack Overflow is missing, partially by design? Because it is a "no chit-chat" site, because members aren't allowed to talk on-site, but only post answers, that's what you get: a bunch of uncoordinated (as far as it looks on-site) people with no common goal. You cannot throw hundreds of thousands of members together without hardly any guidance (literally nobody reads FAQs and Help Centers), and expect them to generate answers and to have those answers have something in common:
Quality
Quality comes in many forms. Attention to detail and language, for one. Eagerness to teach, for another. Providing readable code and reasonable examples. Using abstraction and experience to craft an answer that not only answers the explicitly asked question but also addresses any implicit properties of the problem, translating that into knowledge that applies to the question as asked and also serves as a useful resource for later visitors.
There is an abhorrent lack of such educators. I asked about this on Meta Stack Overflow three years ago: Where's the new boatload of experts who can explain stuff to me like I'm five? It's very problematic for the network in the long run.
Apart from that, there's another problem on Stack Overflow:
Quality control
In my 13 years and 4 months of membership I have posted 3,591 answers, averaging 5 per week. Apart from answering, I downvote unclear questions and incorrect answers. My voting stats (visible to everyone on my profile):
- 3,475 upvotes
- 14,543 downvotes (80%)
- 11,649 question votes
- 6,369 answer votes
I mainly browse Stack Overflow from the homepage. It has a filtered view with questions that match my areas of interest and expertise. I click many questions a day, and I often abstain from voting. From this you can deduce that I have read tens of thousands of human-written questions and answers. I downvote answers that don't explain what they contain. I downvote when an answer contains code that glosses over a glaring problem in the question. I downvote when an answer contains an incorrect claim, especially when there's easy-to-find documentation that proves otherwise.
People do not downvote enough. Worse, people counter-upvote downvotes as they see them come in. When I downvote an answer "too soon" after it gets posted, it sometimes immediately gets a counter-upvote, even though it isn't that great at all, or is flat-out wrong.
none of the hypotheses generated by the company can explain away the relationship between % of frequent answerer suspensions and the decrease in frequent answerers, in the context of falling actual GPT post rates.
I have one for you. It's actually pretty trivial. Most posts on Stack Overflow come from people who do not care about quality. Either through lack of knowledge, lack of interest, different education or upbringing, not enough experience, or some combination of these factors. Those people can answer only basic questions. They can do so unchecked, because nobody reads those posts anyway. Those questions now get asked to ChatGPT instead of on the network. Now those people have nothing left to answer. So they venture into different tags, different topics, in which they have no actual experience, but for which GPT can generate nonsense resembling an answer. So that gets posted instead. Those people get suspended. They learn this is not the way they should behave. They leave.
This message from the moderators is akin to "Hey, you know what you're doing? We don't want that here", and for some users that might be the first time they've ever heard that here.
Quality control in measurements
I have no idea how you came to this conclusion:
Based on the data, we would hazard a guess that Stack Overflow currently sees 10-15 GPT answers in the typical day, or 70-100 answers per week. There is room for error due to the inherent uncertainty in the measurement method, but not room for magnitudes of error.
But you could not be more wrong. Take this user. On May 30th, they posted 6 answers. I suspect all of them to be GPT-generated based on writing style. For two of their answers, I know they are GPT-generated because of what's in them:
- How to set tags for Secret in Azure Key Vault using C#: they hallucinated a class and a method (`SecretCreateOrUpdateOptions`, `CreateOrUpdateSecretAsync()`) which don't exist anywhere on the internet except in that post (a sketch of what the real SDK actually looks like follows below this list).
- write List<T> to File: they answered the question to a T and perfectly explained what they changed, but missed a glaring problem: OP is asking for an explanation about generics. What use does a generic method have when it accepts a `List<T>`, but then constrains `T` to be `PERSON` using an `if` statement (see the second sketch below)? Only an AI could make that up and be so meticulously reasonable about it.
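For comparison, this is roughly what setting tags on a secret looks like with the actual Azure.Security.KeyVault.Secrets SDK, to the best of my knowledge; the vault URI, secret name, and tag values are placeholders, and note that nothing in it resembles the hallucinated class or method:

```csharp
using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class SetSecretTags
{
    static async Task Main()
    {
        // The real SDK exposes SecretClient and KeyVaultSecret; there is no
        // SecretCreateOrUpdateOptions and no CreateOrUpdateSecretAsync().
        var client = new SecretClient(
            new Uri("https://example-vault.vault.azure.net/"), // placeholder vault URI
            new DefaultAzureCredential());

        var secret = new KeyVaultSecret("example-secret", "example-value");
        secret.Properties.Tags.Add("environment", "test"); // placeholder tag

        // SetSecretAsync creates or updates the secret, tags included.
        await client.SetSecretAsync(secret);
    }
}
```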
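And this is a hypothetical reconstruction of the generics anti-pattern from the second answer, next to what a genuinely generic method would look like; the `Person` type and the method names are mine, purely for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

record Person(string Name);

static class FileWriter
{
    // Reconstruction of the criticized pattern: the method is declared generic,
    // yet the body only does anything when T happens to be Person.
    public static void WriteToFile<T>(List<T> items, string path)
    {
        if (typeof(T) == typeof(Person))
        {
            File.WriteAllLines(path, items.Cast<Person>().Select(p => p.Name));
        }
        // Every other T is silently ignored, which defeats the point of generics.
    }

    // A genuinely generic version: the caller decides how any T is formatted.
    public static void WriteAllToFile<T>(List<T> items, string path, Func<T, string> format)
    {
        File.WriteAllLines(path, items.Select(format));
    }
}
```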
You believe this user to be single-handedly responsible for generating 50% of the GPT answers for May 30th? That is what your estimate implies: if the whole site really sees only 10-15 GPT answers on a typical day, this one user's six suspected answers already account for roughly half of that day's total. If so, it should be trivial to find the other user and ban them both.
You know what Stack Overflow can do? Change. Trust your community-elected moderators to do the right thing. They know what their sites stand for. They know what they, and their community members, do and don't want to happen on their site.