Except, Maplestone, when you can use probability analysis and show results so far off the curve that the probability of the incidence is below 0.00000001, and you see it happening in multiple experiments, multiple times, it's pretty obvious something is wrong, and the premise that the "die" used for the experiment is a "fair" one pretty much goes out the window.
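To put a concrete (and entirely made-up) number on the kind of check I mean: say a drop is advertised at 1%, you log 2000 kills, and you see zero drops. Under a fair 1% rate that works out to roughly 2 in a billion, which you can verify with a few lines of Python (the drop rate and kill count here are hypothetical placeholders, not data from any actual game):

```python
# Rough sketch of an "off the curve" check. All numbers are made up
# for illustration -- plug in your own logged data.
from math import comb

def prob_at_most(k, n, p):
    """Chance of seeing k or fewer successes in n fair trials at rate p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

# Hypothetical: advertised 1% drop rate, 2000 kills, zero drops.
print(f"{prob_at_most(0, 2000, 0.01):.2e}")  # about 1.9e-09
# Well below the 0.00000001 line -- and if that keeps happening across
# independent runs, the "fair die" premise is in real trouble.
```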
Besides, it's a known fact in computer science that you can't make a truly "fair die" with a computer RNG (which is why they are technically referred to as pseudo-RNGs); all you can do is make more and more complicated algorithms to try to make them less biased.
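For anyone who hasn't looked under the hood, a pseudo-RNG is just a deterministic formula fed back into itself. Here's a toy sketch using the textbook linear congruential generator constants, purely for illustration (it's not the generator any particular game uses):

```python
# Toy linear congruential generator with the classic "Numerical Recipes"
# constants -- shown only to illustrate the idea, nobody should ship this.
class ToyLCG:
    def __init__(self, seed):
        self.state = seed

    def next_uint32(self):
        # Pure arithmetic: same seed in, same sequence out, every time.
        self.state = (1664525 * self.state + 1013904223) % 2**32
        return self.state

    def roll_d6(self):
        # Even the mapping to a "die" adds a tiny bias, because 2**32
        # doesn't divide evenly by 6.
        return self.next_uint32() % 6 + 1

rng = ToyLCG(seed=12345)
print([rng.roll_d6() for _ in range(10)])  # identical "rolls" on every run
```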
Sometimes patterns aren't really there, but sometimes they are.
After all, it is quite possible for a pseudo-RNG to be fair over tens of millions of tests, but if you look at stretches of 1000 (or even 100) consecutive samples (as opposed to samples picked at random, which is what's normally used for analysis), you'll see small-scale streaking. The fact that streaking is happening at every level of the testing range ends up being lost when you only look at the random sampling.
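If you want to see that effect for yourself, here's a quick Python sketch. The "streaky" generator in it is a deliberately contrived toy (no claim that any real game's RNG is built this way); the point is just that a stream can average out fair, and look fair under random sampling, while consecutive windows of 1000 swing all over the place:

```python
# Why consecutive windows can look streaky even when the overall average
# (and any random subsample) looks fair. The generator is a contrived toy
# with heavy serial correlation, used only to demonstrate the effect.
import random

def streaky_bits(n, flip_chance=0.001, seed=1):
    """0/1 stream where each bit usually repeats the previous one (runs ~1000 long)."""
    src = random.Random(seed)
    bits, current = [], src.randint(0, 1)
    for _ in range(n):
        if src.random() < flip_chance:
            current = 1 - current
        bits.append(current)
    return bits

data = streaky_bits(10_000_000)
sampler = random.Random(2)

# Overall mean and a big random subsample both land near the "fair" 0.5 ...
print("overall mean:      ", sum(data) / len(data))
subsample = sampler.sample(data, 100_000)
print("random-sample mean:", sum(subsample) / len(subsample))

# ... but consecutive windows of 1000 are wildly one-sided.
window_means = [sum(data[i:i + 1000]) / 1000 for i in range(0, len(data), 1000)]
lopsided = sum(1 for m in window_means if m < 0.1 or m > 0.9)
print("windows of 1000 that are nearly all-0 or all-1:", lopsided, "of", len(window_means))
```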
Think of it like looking at the mean water level below a dam with a hydro-electric plant that only runs 4 hours on, 4 hours off. Sure, it will average "x" feet, consistently, but any stretch of consecutive level readings (as opposed to random samples taken throughout the day and night) is almost certain to read wildly all-above or all-below the mean, unless the stretch happens to fall shortly after a startup or shutdown of the water flow to the turbines (in which case the readings are likely to be chaotic, but still show an ascending or descending curve).