Yes, they are: rolling a 15 or a 2 lands in the same bag, but rolling a 1 lands in the other bag, so it's a 19/20 probability against a 1/20 probability. It's funny how people don't realise that rolling two 2s in a row is exactly as unlikely as rolling two 1s, yet nobody pays attention to that either.
The odds of rolling a 15 and then a 2 specifically are also the same.
People only attach special attention to very specific patterns they believe are somehow mathematically different.
They aren't.
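You can check this by brute force: enumerate every equally likely ordered pair of two d20 rolls and count how often each specific sequence shows up (a quick sketch in Python; the d20 is the only assumption):

```python
from itertools import product

# All 400 equally likely ordered outcomes of rolling a d20 twice.
pairs = list(product(range(1, 21), repeat=2))

# Every specific sequence occurs exactly once out of 400.
print(pairs.count((1, 1)) / len(pairs))   # two 1s:     1/400
print(pairs.count((2, 2)) / len(pairs))   # two 2s:     1/400
print(pairs.count((15, 2)) / len(pairs))  # 15 then 2:  1/400
```

Each line prints 0.0025, which is the point: no ordered sequence is special.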
It all depends on how the random generator is made and used. Making a good RNG from scratch is complicated; there are many books and articles on the subject. But programmers usually rely on library functions that return an n-bit integer with uniform probability, and then it's up to them to use those correctly. Basically, unless you start logging thousands of results, your numbers are going to look random unless the RNG is badly built. With modern computers, getting a good RNG isn't that complicated anymore.
Even Excel's RNG works quite well.
If you generate 100 sets of 5 numbers from 1 to 20, you will end up with quite a few distributions that don't look random.
But if you generate 100 sets of 1000 numbers, most sets will have pretty even distributions. And if you then generate 100 sets of 100k numbers, it's very unlikely any of them will look skewed, let alone turn out to be a whole bunch of only 1s.
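A quick way to see this is to roll a seeded d20 many times and measure how far the worst face's frequency strays from the ideal 5% as the sample grows (a sketch; the seed is arbitrary, just for reproducibility):

```python
import random
from collections import Counter

rng = random.Random(42)  # arbitrary seed so the run is reproducible

def max_deviation(n):
    """Roll n d20s and return the largest gap between any face's
    observed frequency and the ideal 1/20."""
    counts = Counter(rng.randint(1, 20) for _ in range(n))
    return max(abs(counts[face] / n - 1 / 20) for face in range(1, 21))

print(max_deviation(100))      # small samples stray noticeably from 5%
print(max_deviation(100_000))  # big samples hug 5% closely
```

The deviation shrinks roughly with the square root of the sample size, which is why small samples "look broken" and huge ones don't.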
If you need to generate a 1d20 roll, you hit the first problem: the typical random value is n bits, giving 2^n possible results, and that's never a multiple of 20. If you take a 16-bit random value modulo 20, you introduce a bias, because 20 doesn't divide 65536; it's small enough that you won't notice it. But with 8-bit values the bias is a few percent, so some faces come up (very) slightly more or less often than others.
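The 8-bit case is small enough to check exhaustively: map all 256 equally likely byte values onto a d20 with a modulo and count where they land (plain Python, nothing assumed beyond the modulo trick itself):

```python
from collections import Counter

# Map every possible byte value onto a d20 face via modulo.
counts = Counter(b % 20 + 1 for b in range(256))

print(counts[1], counts[20])
# 13 vs 12: since 256 = 12*20 + 16, faces 1..16 each get 13 of the
# 256 bytes while faces 17..20 only get 12, i.e. ~8% fewer chances.
```

With 16 bits the same leftover exists (65536 = 3276*20 + 16) but it's spread over 65536 values, so the bias drops to a fraction of a percent.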
Many programmers also have the impression that a modulo is costly because it involves a division (which isn't true for constant divisors, which compilers optimize away), so they'll find creative workarounds to 'optimize' it, and who knows what bias they introduce then. Though for dice rolls the modulo isn't executed often enough to be worth 'optimizing' anyway.
Some libraries offer floating-point random values instead, which can be tricky to use or not well balanced. It's also easy to mess up when rounding those values, or by abusing the seed, and so on.
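One classic rounding mistake: mapping a uniform float in [0, 1) onto a die with `round()` gives the two endpoint faces only half the probability of the others. You can see it deterministically by sweeping a fine grid in place of actual random draws (a sketch; the grid just stands in for uniform floats):

```python
from collections import Counter

N = 100_000
# WRONG: round(u * 19) + 1 gives faces 2..19 a full 1/19-wide slice
# of [0, 1), but faces 1 and 20 only a half-width slice each.
counts = Counter(round(i / N * 19) + 1 for i in range(N))

print(counts[1], counts[10], counts[20])
# the endpoint faces show up about half as often as the middle ones
```

The correct idiom is `int(u * 20) + 1`, which cuts [0, 1) into twenty equal slices instead of nineteen unequal ones.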