Timeline for Labs experiment launch: stackoverflow.ai
Current License: CC BY-SA 4.0
9 events
| when | what | | by | license | comment |
|---|---|---|---|---|---|
| Jul 1 at 7:35 | comment | added | Maarten Bodewes | @KevinB To be fair, testing infra is a valid reason and it is in the post. However, that doesn't explain the use case for which it is being tested. I presume the load is use-case specific, so that brings us back to square one: what the hey is this for? There are plenty of AI systems that do this already, and do it at least as well. | |
| Jun 28 at 18:01 | history | edited | Franck Dernoncourt | CC BY-SA 4.0 | added 2 characters in body |
| Jun 27 at 19:26 | comment | added | user400654 | It's a chatbot that you're trying to find posts on SO to support, for reasons. If it isn't to create what would effectively be attribution, what is it? It's certainly not a great way to find content on SO or to support people visiting SO, for the same reasons we've seen a massive decline in traffic. So what other reason could there be, other than trying to provide attribution? A bullet point for marketing to sell? | |
| Jun 27 at 19:12 | comment | added | user400654 | @AshZade I don't see how any of that explains the point of this experiment. You're testing infrastructure viability? You mean testing whether or not a server can handle some load, or... what? Identify and fix bugs... for an AI bot that you are phoning in? Or the fake attribution solution you're building in the right column? Thus far I don't think your team has provided a reason for running this experiment beyond seeing if it works... which isn't reasoning. It still just looks like an attempt at using AI for the sake of using AI. | |
| Jun 27 at 17:43 | comment | added | Ash Zade (Staff) | I need to correct the statement that “the whole point of this experiment is to fake attribution”. I helped write the post: “This limited release is a first iteration to understand infrastructure viability, identify and fix bugs, assess core functionality, and gather initial feedback”. The alpha is doing its job thanks to the feedback re: strict moderation, the incorrect response re: SO's AI policy, and a few API issues we've seen over the last few days. | |
| Jun 27 at 16:04 | comment | added | Maarten Bodewes | I'm currently not seeing it happening. Somehow I can see the merit in automatically giving some rep if an AI parrots an answer, but: 1. the answer often seems to be wrong, 2. the attributions are wrong, and 3. at least for the moment, it will just post links, not add rep, and programmers will just ignore the links. Finally, if it cannot do better than existing tools then nobody will use it either. So eh, both not good and I'm missing the point. Note that I'm a big proponent of using AI - the right way. | |
| Jun 27 at 15:54 | history | edited | Journeyman Geek | CC BY-SA 4.0 | added 1 character in body |
| Jun 27 at 15:49 | comment | added | user400654 | The whole point of this experiment, as I understand it, is to see if they can fake attribution, and the majority of their effort will be on tuning that right column. The first iteration is certainly quite poor. I'm seeing similar results, where it can't cite highly popular common sources and tends to favor irrelevant content. Even the AI chatbot side of it seems to miss the mark, often failing to return a correct answer when other tools return a correct answer on the first attempt. | |
| Jun 27 at 15:44 | history | answered | Maarten Bodewes | CC BY-SA 4.0 |