
  • Re "who doesn't want their GPT answer deleted": Yes, but fortunately the whole point is to do as little work as possible. They have a minimum-effort attitude, and doing the least work possible is the top priority. There is a reason old-school plagiarism was replaced by ChatGPT: only a tool that automated the process could make it widespread, I think. Commented Jun 8, 2023 at 17:46
  • 3
    Obfuscating AI generated text isn't driven by people posting on SE. It's driven by all the other places where submitting AI generated content as your own work is unacceptable (e.g. schools), which, in aggregate, are orders of magnitude larger than SE. For those, tools have been and continue to be created which perform various obfuscations for people automatically. Obviously, it's not all users who obfuscate, but the tools exist and usage of them is increasing. So, it is that such things do have coordination outside of SE, it's just that the coordination isn't, necessarily, SE specific. Commented Jun 8, 2023 at 17:47
  • 6
    @This_is_NOT_a_forum Don't forget about the 30 minute answer rate limit for users. This forces them to wait, which means at least some are board staring at an already created answer. Might as well spend some of that forced-time making it harder to detect. Commented Jun 8, 2023 at 17:49
  • 3
    In the weeks after the suspensions started, I started seeing a couple of recurring patterns: 1) Users would still post ChatGPT content, but make efforts to carefully mark all the code correctly (which ChatGPT did not (yet?) do). 2) Users would post ChatGPT content, but make an effort to remove the telltale ChatGPT introduction/conclusion phrases (in most cases replacing them with their own, rife with spelling/capitalization/punctuation errors). 3) Users would post ChatGPT content but remove all prose and only post the (commented) code. (Verifiable by asking the same question to ChatGPT.) Commented Jun 9, 2023 at 5:38
  • Re "develop some sort of rubric or standard for evaluating GPT flags": But even then, the result could still have been that the content isn't repairable and that not acting on GPT flags is the best option. In your model, who decides in case of a difference in opinion? Commented Jun 9, 2023 at 11:42
  • @Trilarion: Consensus decision-making is hard. Ideally, they would talk it out. Realistically, they would try to reach some sort of compromise that leaves everyone a bit unhappy. Commented Jun 9, 2023 at 16:45