-7

I’d like to apologize in advance for bringing this to Meta. I prefer to contribute quietly - learning from upvotes, downvotes, and the occasional comment. If my answers feel more like Puck than Sergeant Joe Friday, that’s just my natural style.

But recent concerns about AI-assisted writing have made it difficult to know where the line is. I want to respect the standards of excellence this site upholds, but I also believe we need objective criteria for evaluating contributions. And there are already multiple mechanisms on Stack Exchange - upvotes, downvotes, closures, deletions - that discourage contributions which don’t measure up.

If an answer is suspected of being bot-generated, perhaps the fairest test is whether it can be substantially reproduced using a publicly available AI, with the prompt disclosed. Otherwise, we risk judging based on “look and feel,” which can be fallible—especially when applied to non-native speakers or stylized writing.

I’m open to adjusting how I contribute. I just want to understand the rules clearly, so I can continue participating in good faith. I understand that upvotes alone don’t justify a contributor’s presence if other norms are being violated - I just want clarity on what those norms are.

EDIT:

It’s worth noting that many native speakers here write with such precision and polish that I suspect an AI detector might assign a high “bot score” to their prose. That’s not a criticism - it’s a testament to their skill. But it also highlights how unreliable “look and feel” can be as a metric for authorship. If even human excellence can be mistaken for automation, then we need a clearer standard.

FURTHER EDIT: When someone accuses a post of being bot-generated, the burden of proof must lie with the accuser. Contributors should not be treated as guilty until they prove themselves human. That reverses the presumption of good faith and undermines the spirit of collaborative knowledge-building.

ADDITIONAL EDIT: Google Search has become increasingly difficult to use effectively due to aggressive monetization and SEO clutter. That’s why I rely on my bot primarily as a curated search engine—one that retrieves information and filters it for relevance and clarity.

  • 4
    It took me a few re-reads to identify the question here. Is it: "I propose that, if someone suspects an answer of using AI, they should be able to prove it by creating the same answer themselves using AI?" I'm not sure AI works like that. Meanwhile, for the answer writer, we have this page for guidance. Is there anything there that needs updating? Commented Sep 22 at 16:40
  • 1
    @AndyBonner "I'm not sure AI works like that." - Yeah, modern LLMs are not necessarily deterministic. When generating a word, they might have a weighted set of choices and choose one at random. That said, I don't work in AI, so grain of salt. Commented Sep 24 at 17:05
  • @wjandrea, LLMs have a small amount of randomness injected into the generation process to improve the quality of the output. If you can control that randomness, you can get an LLM to answer the same question the same way every time; if you can't, it won't. Commented Oct 10 at 23:06
  • Your recent answers have been, objectively speaking, of low quality. One would think that the generally negative response would convince you to favour substance, accuracy, and naturalness over bullet-point AI inspired speech. Every day, users on this site flag your contributions, asking that moderators delete answers of low quality content. It's no longer about you attributing AI's assistance, it's now (more than ever) about quality. Commented Oct 11 at 3:13
  • 1
    Many questions on usage don’t have fixed answers, and interpretations evolve over time. Even Fowler conceded that “unique” can be intensified. So while my idiosyncratic take on English may remain, I’ll do all in my power to post within community standards. I understand that quality matters here, and I’m committed to improving. While confidential flagging is part of the process, I’d genuinely appreciate open feedback from the community when possible. For example, I was recently disabused of my misunderstanding of the phrase “laid on with a trowel,” and that kind of correction helps me grow. Commented Oct 11 at 10:36
  • If your answers were written as well as your comments, there wouldn't be any problem for me. Did you get assistance from AI? Was it proofread? Are you capable of writing Advanced English by yourself? I don't care about minor errors, we all commit them from time to time, but if every sentence has to be fed through and approved by artificial intelligence, then maybe you're out of your depth here. Commented Oct 11 at 11:17
  • If it’s the moderator’s unvarnished assessment that my answers are of low quality—not merely the echo of downvotes—then I accept that judgment. I need to work on how I answer. My responses reflect my take on language, on life, and sometimes that take shines through in ways that unsettle others. If that dissonance means I don’t belong here, then I accept that too. But I’ve always tried to contribute with sincerity, and if asked to leave, will do so regretfully, but promptly. Commented Oct 13 at 2:07
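
The comment thread above touches on whether LLM output is reproducible. As a minimal sketch (standard-library Python only, not any real LLM API): token-by-token sampling from a weighted distribution is deterministic once the random seed is pinned, and effectively irreproducible when it isn't, which is why "regenerate the same answer to prove it" is a shaky test.

```python
import random

# Toy stand-in for an LLM's sampling step: a fixed vocabulary, with
# weights playing the role of next-token probabilities.
VOCAB = ["the", "cat", "sat", "on", "mat"]
WEIGHTS = [5, 3, 3, 2, 1]

def generate(seed=None, length=8):
    """Sample a 'sentence' token by token, the way an LLM samples output.

    With a fixed seed the sequence is reproducible; with seed=None the
    generator is seeded from system entropy and two runs will almost
    surely differ.
    """
    rng = random.Random(seed)
    return [rng.choices(VOCAB, weights=WEIGHTS)[0] for _ in range(length)]

# Pinning the seed makes generation repeatable:
assert generate(seed=42) == generate(seed=42)
```

Real services expose the same idea through parameters such as temperature and a seed, but whether a user can pin them depends on the provider, so reproducing someone else's answer from a guessed prompt proves little either way.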

5 Answers

5

The user has openly admitted, multiple times, to using AI as a proofreading tool. Labelling which sentences were copyedited and which ideas were streamlined by AI is folly, besides being impossible to verify.

How would anyone be able to guess which prompts were given to the LLM in order to generate a grammatically flawless text?

The OP argues that the content and ideas, however, are original and unique.

I argue that style over substance has no place on ELL, and shiny glib rhetoric is no substitute for quality, factual accuracy, and depth of knowledge.

There are still telltale signs which users sense and/or clearly identify as being generated by AI, and they have flagged the OP's contributions. I also have a responsibility to them. The community elected moderators to keep the site clean and to help maintain certain standards; to be patient, fair, and unprejudiced whenever the CoC is violated and when contributions are objectively low quality.

In a comment on an answer of the OP's that I deleted, I chastised:

All this meta commentary is totally unnecessary. Stick to the question, and don't blindly follow [believe] what AI says without verifying it.

Following this piece of advice would go a long way.

  • Why did you delete that comment? Sounds fine to me. Commented Sep 23 at 7:57
  • 1
    @MichaelHarvey Thought so… I found it difficult to phrase without making it sound really clunky. I deleted the answer which I commented on, which meant the comment got deleted too. Suggestions? Commented Sep 23 at 7:59
  • I don't agree that it is 'clunky'. Short, blunt and to the point, yes, but not clunky. I recall a period when I decided to phrase answers and comments in the style of Hemingway rather than Dickens. It didn't last an awfully long time! Commented Sep 23 at 8:12
3

Trying to reproduce AI generated text by guessing a prompt and using a random model doesn't prove anything. There is no way to prove with 100% certainty that a random bit of text posted by a human on ELL was or was not generated with the help of AI.

There are ways to be sure enough for ELL's purpose of discouraging unattributed AI-generated content, however, and the current policy appears to be not to disclose the indicators being used, because doing so only helps people evade the requirement. By far the best way to avoid being accused of posting AI content is to support any assertions in your answers with reputable sources that aren't AI.

Do you claim something is ungrammatical? Find a source that supports that and link it with an excerpt of the relevant part. The higher the quality of your answer's content, the less likely it is that the quality of the writing will make someone perceive it as AI. Demonstrate that you understand the subject and didn't just repeat the "correct" answer you found somewhere.

If you follow the guidance in https://ell.stackexchange.com/help/gen-ai-policy, you shouldn't have an issue. Here is an excerpt explaining what you need to do to properly attribute AI content.

How do I reference content generated by generative artificial intelligence tools?

The more general guidance offered in the context of referencing material written by others applies to content generated by generative artificial intelligence tools. More specifically, there are two main things you should ensure:

  • You should clearly describe what content is generated by generative artificial intelligence tools, and which isn’t. You should do this when quoting directly from the output produced by these tools, as well as when paraphrasing those contents. This ensures there is a distinction between the content generated by these tools, content you authored, and content you may be referencing from other sources.
  • You should specify the specific generative artificial intelligence tool/service you used. Since different services may produce different outputs to the same prompt, and may have different limitations and shortcomings, you should ensure readers know which specific tool you used to produce the content you are referencing.
1

I would like to hope that, in a civil and functional online community, direct and honest conversation is enough. I think maybe you're proposing "If someone suspects an answer-writer of using AI, they must first produce evidence." But hopefully a simple "Did you use AI? If so, do ___" is enough, and hopefully the answer-writer responds equally simply, "No, I didn't," and can be taken at face value, or "Yes, I did, thanks for the tip, I'll edit accordingly." Not "prove it."

1

I believe your approach is misguided, and I prefer that the balance of inconveniences fall on the person using AI rather than on the community. I'm sure style over substance was never your intent, but in the sample of your answers I read, I found the language surreal and unnatural; it stands out in the weirdest of ways, with flavor and puns and idioms at every turn, to the point of being almost a parody of the English language. Rest assured I mean no disrespect. I'm a learner myself, so I may be off, and I can't fully explain it, but that's my feeling. I can certainly understand the desire to sound great, but in my opinion you're just another person who has been deceived by the false promises of AI. Let me quote a famous French writer, Boileau:

Ce que l'on conçoit bien s’énonce clairement,
Et les mots pour le dire arrivent aisément

[Something like: "Whatever we well understand we express clearly, and words flow with ease."]

I believe casual, clear, and concise wording is the way to go. I believe most native speakers write like this, and not "with such precision and polish". I think what is preferred in English is something close to speech. Furthermore, this is a learners' site, and edit suggestions can be made to improve content, and we can all learn from these.

Web search is not that interesting, even less so with the rampant enshittification. Credible sources are. There is a well-known page on ELU meta with references and sources; it is a reference masterclass. There are also ngrams and other corpus analysis tools. And there is your own personal experience and other people you know, whom you can quote. Everyone has something to contribute. Do as you wish, but I am certain you don't need any AI to "polish your style" or whatnot. And then we wouldn't need to waste our time with AI ourselves to try to reproduce what you do with it. My 2c. Good luck!

-5

On Bot Accusations, Burden of Proof, and the Misunderstanding of AI Assistance

User Colleen has hinted that ELL uses bot-detection software to tag user content as “bot-generated”, placing the burden of proof on the contributor to demonstrate their innocence. If this is a trial, then the charge is “disrupting ELL,” and the penalty is suspension. But the burden of proof has been reversed—onto the accused. That’s a violation of the norms that democracies have fought for centuries to establish and protect.

What’s being missed here is how bots can and ought to be used. My content is 100% mine. The bot I use doesn’t generate ideas—it helps me refine rhythm, idiom, and grammar. It’s an editor, not an author. And here are the kinds of prompts I’ve received from the bot:

Here’s a slightly more natural way to phrase that, keeping your meaning intact.

Used when a sentence is grammatically correct but a bit stiff or non-native in rhythm.

Let me polish that just a bit for flow and idiomatic clarity—your core idea stays the same.

Ideal for refining casual or conversational writing without losing tone.

This version keeps your message but smooths out the grammar and phrasing for readability.

Used when the original is clear but could benefit from cleaner syntax.

Here’s a version that sounds more fluent while preserving your original structure and intent.

Perfect for ESL writers who want their writing to sound more native without losing their voice.

I’ve reworded this slightly to improve rhythm and idiom—your meaning is untouched.

For stylized writing that needs just a touch of polish to land better with readers.

This is not deception—it’s refinement. And if ELL is unwilling to hear that truth, then this post might as well be a comment. But I’m placing it on the record anyway, because the conversation about AI attribution needs to evolve beyond suspicion, secrecy and what seems almost menacing.

  • 1
    I did not imply that bot detection software is in use. I implied that moderators are aware of certain things that can indicate AI generated content. It would be difficult to catch someone who could have written an answer themselves and chose to use AI assistance, but it's not hard to detect people who are heavily relying on AI. That said, it's not a trial. If you read the terms you agreed to for using this site, your content can be removed at any time for any reason. If a mistake is made, it can be undone, but no-one has to prove anything before they act. Commented Sep 22 at 20:49
  • 1
    On a "language-learners" stack exchange, it's reasonable to expect that not everybody has the same level of fluency; hopefully we're all "learners" at some point in our learning! As long as meaning is clear, I don't think polish and refinement are needed; I'd recommend just writing in your own voice. Especially since editing help can actually confuse meanings accidentally. Often, when learning languages, meaning does get obscured, but folks can ask for clarification then. Commented Sep 22 at 21:03
  • (The other day, I attempted a conversation in Spanish, and only after it was done, realized that when I tried to tell about my "oldest" child (el mayor), I actually called them my "best" child (el mejor). The person I was talking to understood my meaning and passed right over it.) Commented Sep 22 at 21:05
  • But note, I recommend writing "in your own voice" to the extent that the writing is clear and helpful. There's nothing wrong with using fun, engaging, or colorful tone, but the whole point of this website is to provide answers to questions. Lately, issues that I've seen in your answers have nothing to do with tone and polish, but with parts of the answers that discuss the question, or even discuss the answer, rather than answer the question. ... Commented Sep 22 at 21:09
  • ... Curious, I looked up some of your questions and answers on other stack exchanges, and many are very useful and clear. Don't overthink it! There's a wide range between Puck and (I had to look him up) Sgt. Friday. Puck intentionally causes mischief, and is sometimes cryptic. It's a false dichotomy to suggest that the only other option is to be boring. Commented Sep 22 at 21:11
  • @AndyBonner - I once told a French girl that I hoped to date that I hoped we were waking up (nous nous réveillons) soon rather than seeing each other (nous nous reverrons). She didn't mind, and we had a fine time, and still keep in touch 40 years later. Commented Sep 23 at 8:05
  • Also, one thing that needs pointing out - an AI tool can claim an edit didn't change the meaning of something, but that doesn't make it true. Part of the reason AI-generated information needs to be disclosed is that it can't be trusted. AI is a useful tool, but it requires expertise to get good results. Commented Sep 23 at 15:40
