Timeline for *What is "Answer Bot" and what is it doing?*
Current License: CC BY-SA 4.0
43 events
| when | what | action | by | comment | license |
|---|---|---|---|---|---|
| Feb 9 at 8:01 | review | Suggested edits | |||
| Feb 4 at 22:32 | comment | added | CPlus | I don’t like it at all. | |
| Feb 4 at 14:04 | comment | added | tenfour | The best content on SE is from those who love to teach. Those people don't want to sift through AI slop hallucinations. Maybe try listening to your best contributors? | |
| Jan 24 at 8:13 | comment | added | Franck Dernoncourt | AI-assisted answers make 100% sense to me. No point in wasting human time when AI can produce a decent draft or even a final correct answer. Looking forward to it. | |
| Jan 6 at 17:26 | comment | added | SNBS | This is IMHO a waste of moderators' time... too many AI-generated answers are useless. And if a human needed it, they could ask AI themselves. | |
| Jan 6 at 15:05 | comment | added | Starship | Re: "It's a safe assumption that LLMs [...] will likely get better over time". It's also a safe assumption that computers will be able to run more intensive websites over time. Should Stack Exchange make their site inaccessible today to anyone without a supercomputer because, eventually, normal computers will be able to run the site in a few years? | |
| Dec 12, 2024 at 16:20 | comment | added | Lundin | Hint: the completely random AI experiments popping up every second month or so aren't impressing anyone. Surely Prosus & co can't be so oblivious that they confuse completely random AI experiments for some sort of viable business plan with profits to be made in the end. Why the investors haven't put a stop to all this is beyond me; I guess they hate money? | |
| Dec 12, 2024 at 16:07 | comment | added | Lundin | Since not doing "AI-something-something" is evidently not an option for the company... At some point the company has to decide if SO should be used as training material for AI, or as some sort of AI prompt. It can't be both at once - this is common sense! You desperately need to come up with a plan slightly more detailed than "let's do AI-something-something", and you needed this plan 2 years ago. Offering ChatGPT as the training material to ChatGPT is probably not going to impress anyone. | |
| Dec 10, 2024 at 23:42 | comment | added | Makyen | If 50 more notifications come in before the moderator clicks on the orange diamond (at the top of every page on their site), the contents of the orange-diamond notification will be shuffled out of the visible moderator inbox. That's rare on the vast majority of sites, but common on SO, due to volume, which is the reason it's typically copied into a pinned chat message. Given the notifications, there was significant discussion in the Teachers' Lounge. SE went way beyond the minimum notification required on this one. If a mod didn't know about the AI concept, that's entirely on the moderator. | |
| Dec 10, 2024 at 23:42 | comment | added | Makyen | @ꓢPArcheon On 2024-09-11 SE posted an announcement in the Mod Team asking moderators to participate in giving feedback on this AI concept, offering $20 to every moderator who participated in learning about it and giving feedback. On the same day, SE sent out an orange-diamond, global moderator notification about that Mod Team announcement. Moderators copied the orange-diamond notification into pinned chat messages in both the Teachers' Lounge and SO Mod site chatroom (and maybe other rooms too). Orange-diamond notifications are shown to all moderators and are not dismissed until viewed. | |
| Dec 10, 2024 at 12:26 | comment | added | hkotsubo | Why not use AI to achieve things like this? IMO, it'd be way more useful and less controversial... | |
| Dec 10, 2024 at 10:34 | comment | added | Mad Scientist | @ꓢPArcheon No, there was actually a lot of information about the planned implementation. The parts we did not know are when exactly the first experiments were planned and which sites would participate. | |
| Dec 10, 2024 at 10:32 | comment | added | ꓢPArcheon | @MetaAndrewT. It seems both your answer and MadScientist's kinda confirm what I assumed: the concept has been discussed, but the "implementation" wasn't disclosed. | |
| Dec 10, 2024 at 10:14 | comment | added | Meta Andrew T. | @ꓢPArcheon the "Answer bot" concept has been discussed at length in the TL and Mod Teams. However, I guess it can't be helped if some moderators aren't following the news because they are reluctant to participate in TL, Mod Teams, or even their per-site Mod chat room. | |
| Dec 10, 2024 at 9:01 | comment | added | Mad Scientist | @SPArcheon all mods are potentially aware of these planned experiments, this was communicated in a space available to all mods. We were not aware this particular test was done and neither do we know which sites they will test it on or which sites agreed to the experiment. And there was feedback on this idea from the mods, lots of feedback. | |
| Dec 10, 2024 at 8:58 | comment | added | ꓢPArcheon | @Makyen The little issue with that is that apparently only some moderators have been made aware, and I suspect not even all moderators of the specific site were included, given the "mods who are open to considering the test" thing. This has the potential to change the whole network, so IMHO discussing the experiment with a few cherry-picked friends is not enough. It should be clear to everybody that multiple mods posting on this question are only discovering this thing now. | |
| Dec 9, 2024 at 23:04 | comment | added | ColleenV | So your AI is not going to provide citations for the sources it used? Or does ethical AI only need to provide citations because y’all weren’t happy about Google’s AI siphoning away traffic? | |
| Dec 9, 2024 at 22:44 | comment | added | Proud anti-zionist | @Bryan Eh, I don’t think I argued for what you’re arguing against, here, so I’m just gonna disengage. | |
| Dec 9, 2024 at 22:22 | comment | added | NoDataDumpNoContribution | @Zoe-Savethedatadump "It's already well-established that even if they try, these kinds of bots cannot reliably and accurately cite sources" Then maybe somebody will sue for license violations. Unfortunately courts will take years for these cases and in the meantime it's a bit wild, wild West. | |
| Dec 9, 2024 at 21:56 | comment | added | M-- | @BryanKrause If the last time I tried doing something related to a specific matter it resulted in a strike, I would at least solicit feedback or announce the changes related to that specific matter. | |
| Dec 9, 2024 at 21:52 | comment | added | Bryan Krause | @AndreascondemnsIsrael I think you're holding the bar unreasonably high. I don't know what kind of work you do, but I don't imagine you'd get anything done if every time you breathed or had an idea you had to solicit opinion from hundreds of people before your next move. It's a recipe for complete paralysis. | |
| Dec 9, 2024 at 21:30 | comment | added | Proud anti-zionist | @Bryan If you're building a community-based entity, you owe the community transparency and a hand on the wheel. SE has stolen access from the valuable community members to vital pieces of current and future development in the community. That's immoral. It isn't a matter of putting this specific incident up to scrutiny, it's a matter of withholding information and plans from community members at all. The whole practice of sharing community matters only with moderators, or select people, is quite nasty, and not compatible with ethical community curation. Future plans are the matter of everyone. | |
| Dec 9, 2024 at 21:11 | comment | added | Rob | @Philippe, thanks for stepping up and accepting the "produce pelting", but no upvote. Essentially this duplicates someone asking Chat-GPT and hand-editing the answer; we know it won't be CMs tasked with the job after this lands, so it's more work for something that isn't popular. It's a polite LMGTFY, plus posting the answer. | |
| Dec 9, 2024 at 21:10 | comment | added | user152859 | Quick question then: did you plan to keep it secret until January, and the only reason we now know about this "experiment" is a bug? Didn't you once say the company is being transparent with the community? Because doing things in secret behind the community back and against their core wishes is... quite the opposite. (Sorry, didn't end up the short and quick comment I meant.) | |
| Dec 9, 2024 at 20:57 | comment | added | Bryan Krause | @AndreascondemnsIsrael It wasn't a change to the site or platform, it was a pre-test of a test. It's not feasible to have the company put absolutely everything they do through a community code review. They made a mistake here that has no consequences for operation of the site except for leaking some possible future plans. Be upset with those plans if you'd like to, sure, I think there's a lot this test reveals that doesn't look good. But please don't be upset that they're actually trying to test things out with input just because it isn't brought to the whole community from the start. | |
| Dec 9, 2024 at 20:52 | comment | added | Proud anti-zionist | @BryanKrause The problem is that SE is making changes to the site in secret with the upper social class. What M-- is getting at, is that changes to the platform are the interest of the whole community, not just its ruling class. This whole arrangement and its secrecy involved, is one of the reasons why I don't have any faith in the company and the community model practiced here, anymore. | |
| Dec 9, 2024 at 20:49 | comment | added | Anerdw | "we’re investigating why some elements became visible beyond the moderators of those sites" - this is not the first time this has happened. You probably don't need me to tell you this, but if WIP features are being released completely by accident once every few months, something's going on with yall's deployment model. | |
| Dec 9, 2024 at 20:36 | comment | added | Zoe - Save the data dump | @NoDataDumpNoContribution It's already well-established that even if they try, these kinds of bots cannot reliably and accurately cite sources | |
| Dec 9, 2024 at 20:33 | comment | added | NoDataDumpNoContribution | I really hope this bot does give attribution to the sources it uses. Because if not, it might violate some licenses. From the posted examples it unfortunately doesn't look like it. | |
| Dec 9, 2024 at 20:13 | comment | added | Bryan Krause | @M-- If you read the post, releasing this at all was a mistake; not really possible to post ahead of time about a mistake you don't know you will make. It was supposed to be a demo, apparently just for the mods at WebApps, no one else was supposed to see it. | |
| Dec 9, 2024 at 20:03 | comment | added | Resistance Is Futile | @Zoe-Savethedatadump I know. This whole thing is aimed at SO. Experimenting on other sites is just a pretext like it was with 1-rep voting. | |
| Dec 9, 2024 at 19:52 | comment | added | Zoe - Save the data dump | @ResistanceIsFutile SE is going to interpret the results however they see fit. Even if this experiment turns the victim sites into complete dumpster fires, the experiment will either be a success that they'll roll out to SO, or require more data that they only can get from SO | |
| Dec 9, 2024 at 19:46 | comment | added | M-- | There should have been a post about this on Meta WebApps, not an answer after the fact on MSE. | |
| Dec 9, 2024 at 19:25 | vote | accept | cocomac | ||
| Dec 9, 2024 at 18:58 | comment | added | Resistance Is Futile | "We won’t know whether or not those are viable without experimentation." Except we do know. And we told you that. Moderators, especially SO ones, have seen and have removed thousands of AI posts. Most of those were utter crap. For the same reason we don't allow users to post AI content, this experiment will also fail. The results will be just a load of junk containing potentially dangerous and harmful information. | |
| Dec 9, 2024 at 18:54 | comment | added | Resistance Is Futile | "Already we see many examples of SE contributors utilizing LLM resources, and the added workload on moderators and curators that this creates." Are you seriously going to use that as an excuse? Moderators are fully capable of handling and removing LLM content posted by users. It was the company that literally prevented moderators from doing so, which led to last year's strike. Almost all of the problems and the extreme workload for moderators were caused by the company's actions. Please don't try to sell this feature, which was strongly opposed by moderators, as something that will help us. It will not. | |
| Dec 9, 2024 at 18:40 | comment | added | Makyen | @ꓢPArcheon Moderators have been aware of this for a few months, and have been able to provide feedback on the concept. This post already indicates that the company has been working with moderators. | |
| Dec 9, 2024 at 18:30 | comment | added | ꓢPArcheon | Others already commented about how foolish this feels, so I'll instead focus on a different question. Exactly when did the company plan to disclose this "experiment", had not yet another meddling user ruined the plan? Didn't the spirit behind the mod agreement (please, spare me the wording technicalities...) require some sort of discussion at least in the Teachers' Lounge? | |
| Dec 9, 2024 at 17:41 | comment | added | Mad Scientist | If this experiment is only about preparing for the future, can you state that you will not enable this on live sites outside of beta tests with the current set of mainstream LLMs (or LLMs of similar performance)? And how much better would they have to be before you would consider them good enough? | |
| Dec 9, 2024 at 17:35 | comment | added | Proud anti-zionist | @Jeremy Here goes my time, I’ll never get it back. It’s the experiment that’s limited to specific sites, not the overarching goal. We know from experience that SE wants this on every SE site, and it’s on Stack Overflow where it’ll be most valuable. With the idea that LLMs are here to stay, you can take a pretty good guess that this is indicative of a wider entry. Yet again, it’s «more details later», because sharing them now would be counterproductive in their eyes. SE is still not over their love for useless genAI; it doesn’t seem they learned, so how can you expect a good outcome this time? | |
| Dec 9, 2024 at 17:15 | comment | added | user1114 | @Zoe-Savethedatadump I think this sounds pretty reasonable. Not all communities on the network are as blanket opposed to this as we are on Stack Overflow, and experimenting on a potential implementation in partnership with communities who agree to it seems like a responsible way to proceed. | |
| Dec 9, 2024 at 17:13 | comment | added | Zoe - Save the data dump | The fact this is being legitimately considered is a massive middle-finger to the community. Nothing more, nothing less | |
| Dec 9, 2024 at 17:05 | history | answered | Philippe (Staff, Mod) | | CC BY-SA 4.0 |