Timeline for How can I make a "random" generator that is biased by prior events?
Current License: CC BY-SA 3.0
9 events
| when | what | by | license | comment |
|---|---|---|---|---|
| Jun 16, 2020 at 10:15 | history edited | CommunityBot | | Commonmark migration |
| Mar 1, 2015 at 16:39 | comment added | David C Ellis | | The implementation as I described it above produces an equal distribution, as @DMGregory pointed out, so it isn't a valid answer to this problem. A similar model I came up with later was to reduce a weight by a fixed amount when it occurred and reset all other weights to their original values, but unless you reduce the weight to zero (which results in zero streaks), this does nothing to reduce the probability of a streak, only the maximum and average length of streaks. |
| Feb 28, 2015 at 15:46 | comment added | dimo414 | | @DavidCEllis are you saying your implementation was flawed, or that the idea itself is? My back-of-a-napkin intuition came to roughly the model you describe (adjust a probability down when drawn, gradually restore all probabilities to their original values over time), and it still makes sense to me. |
| Feb 27, 2015 at 18:07 | history edited | David C Ellis | CC BY-SA 3.0 | Pointing out that my theory was incorrect, leaving the answer for the comments which point this out |
| Feb 27, 2015 at 18:04 | comment added | David C Ellis | | Dangit, indeed it was bugged as you pointed out. Well, strike this answer. |
| Feb 27, 2015 at 16:24 | comment added | DMGregory♦ | | If that was your intent, then your implementation has a bug. Look at the graph: the fumble weight only ever bounces between 7 and 11, with no values outside that range. I ran a simulation using the continuous modification that you describe, and the graphs are drastically different, with the probabilities of each state converging toward 25% each within the first hundred trials. |
| Feb 27, 2015 at 15:46 | comment added | David C Ellis | | Actually, the data presented above applies the +1/-3 to the most recent weight each time a roll is processed. So if you miss once at the initial 50% weight, the next miss weight would be 47%, and if you miss again, the following weight would be 44%, and so on. It does reduce runs (a separate metric tracked runs and found as much as a 24% reduction), but they are still inevitable, as this scheme still has a strong chance of leaving each of the 4 weights with a non-zero probability (e.g. four crits in a row would leave the crit weight with zero chance of occurring). |
| Feb 27, 2015 at 3:59 | comment added | DMGregory♦ | | Note that this only works if your +1s/-3s are applied relative to the original weights, rather than to the most recently used weights. (Continuously modifying the weights uniformly like this makes them drift toward being equiprobable.) While this keeps the probability on-target over the long run, it does very little to reduce runs. Given that I've missed once, the chance that I'll miss twice more in a row is 22% with this scheme, vs. 25% with independent draws. Increasing the weight shift for a bigger effect (say, to +3/-9) results in biasing the long-run probability. |
| Feb 26, 2015 at 23:13 | history answered | David C Ellis | CC BY-SA 3.0 | |
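
The exchange between DMGregory and David C Ellis on Feb 27, 2015 turns on whether the +1/-3 adjustment is applied relative to the original weights or accumulated on the most recently used weights. The deleted answer's code is not part of this timeline, so the sketch below is only a minimal Python illustration of the two variants: the outcome names and base weights are assumptions (only the 50% miss weight comes from the comments).

```python
import random

# Hypothetical outcome table: four outcomes, with "miss" at the 50% weight
# mentioned in the comments. The other names and weights are assumptions.
BASE_WEIGHTS = {"miss": 50, "hit": 30, "crit": 10, "fumble": 10}

def weights_from_base(last_outcome):
    """DMGregory's reading: apply +1/-3 to the ORIGINAL weights, using only
    the single most recent outcome, so adjustments never accumulate."""
    weights = dict(BASE_WEIGHTS)
    if last_outcome is not None:
        for name in weights:
            weights[name] += 1
        weights[last_outcome] -= 4  # net -3 for the outcome that just occurred
    return weights

def adjust_in_place(weights, last_outcome):
    """What the answer's data did, per the Feb 27, 15:46 comment: apply +1/-3
    to the CURRENT weights after every roll, so adjustments accumulate."""
    for name in weights:
        weights[name] += 1
    weights[last_outcome] -= 4
    return weights

def draw(weights, rng):
    names = list(weights)
    # Clamp at zero so an accumulated weight that went negative is just excluded.
    return rng.choices(names, weights=[max(weights[n], 0) for n in names])[0]

def simulate(variant, trials=10_000, seed=1):
    rng = random.Random(seed)
    counts = dict.fromkeys(BASE_WEIGHTS, 0)
    weights = dict(BASE_WEIGHTS)
    last = None
    for _ in range(trials):
        if variant == "from_base":
            weights = weights_from_base(last)
        outcome = draw(weights, rng)
        if variant == "in_place":
            weights = adjust_in_place(weights, outcome)
        counts[outcome] += 1
        last = outcome
    return {name: count / trials for name, count in counts.items()}

if __name__ == "__main__":
    print("+1/-3 relative to original weights:", simulate("from_base"))
    print("+1/-3 accumulated on current weights:", simulate("in_place"))
```

Running this should reproduce the behaviour DMGregory describes: the from-base variant keeps long-run frequencies near the original distribution, while the accumulated variant drifts toward roughly 25% per outcome.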
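
The Mar 1, 2015 comment describes a second model: reduce only the weight of the outcome that just occurred by a fixed amount and reset every other weight to its original value. Below is a brief, hypothetical sketch of that idea; the penalty size and weight table are assumed values, and the streak counter is there only to check the comment's claim that streaks get shorter but do not disappear unless a weight is driven to zero.

```python
import random

BASE_WEIGHTS = {"miss": 50, "hit": 30, "crit": 10, "fumble": 10}  # assumed values
PENALTY = 5  # assumed fixed reduction for the outcome that just occurred

def next_weights(last_outcome):
    """Reduce the last outcome's weight by a fixed amount; reset the rest."""
    weights = dict(BASE_WEIGHTS)
    if last_outcome is not None:
        weights[last_outcome] = max(BASE_WEIGHTS[last_outcome] - PENALTY, 0)
    return weights

def longest_streak(trials=100_000, seed=2):
    """Longest run of identical consecutive outcomes over many rolls."""
    rng = random.Random(seed)
    last, run, best = None, 0, 0
    for _ in range(trials):
        weights = next_weights(last)
        names = list(weights)
        outcome = rng.choices(names, weights=[weights[n] for n in names])[0]
        run = run + 1 if outcome == last else 1
        best = max(best, run)
        last = outcome
    return best

if __name__ == "__main__":
    print("longest streak over 100k rolls:", longest_streak())
```

Because the reduced weight stays non-zero here, repeats remain possible, which matches the comment's conclusion that this scheme only shortens streaks rather than preventing them.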