# Understanding "Randomness"

`rand()`

`rand() * rand()`

# Just a clarification

## Example

```
BarChart[BinCounts[RandomReal[{0, 1}, 50000], 0.01]]
```

```
BarChart[BinCounts[Table[RandomReal[{0, 1}] * RandomReal[{0, 1}], {50000}], 0.01]]
```

## Another example

2 * Random() is uniformly distributed:

```
BarChart[BinCounts[2 * RandomReal[{0, 1}, 50000], 0.01]]
```

while Random() + Random() is not:

```
BarChart[BinCounts[Table[RandomReal[{0, 1}] + RandomReal[{0, 1}], {50000}], 0.01]]
```

## Central Limit Theorem

```
BarChart[BinCounts[Table[RandomReal[{0, 1}] + RandomReal[{0, 1}] + RandomReal[{0, 1}] + RandomReal[{0, 1}], {50000}], 0.01]]
```

`rand()` generates a predictable set of numbers based on a pseudo-random seed (usually derived from the current time, which is always changing). Multiplying two consecutive numbers in the sequence generates a different, but equally predictable, sequence of numbers.

```
0.3, 0.6, 0.2, 0.4, 0.8, 0.1, 0.7, 0.3, ...
```

If `rand()` generates the list above, then `rand() * rand()` will generate:

```
0.18, 0.08, 0.08, 0.21, ...
```

`random()` is uniformly distributed on `(0, 1)`, but `random() * random()` is heavily skewed toward 0.

Whether it is `random()`, `random() * random()`, `random() + random()`, `(random() + 1) / 2`, or any other combination that does not lead to a fixed result, as long as the source of entropy is the same (or the initial state of the pseudo-random generator is the same), the answer is that they are equally random (the difference lies in their distribution). A perfect example is the game of craps. The number you get from two dice is `random(1,6) + random(1,6)`, and we all know that 7 has the highest chance, but that does not mean the outcome of rolling two dice is any more or less random than the outcome of rolling one.

(Oddly enough, multiplication also boosts low rolls. Assuming your randomness starts at 0, you will see a spike at 0, because multiplication turns whatever the other roll was into 0. Consider two random numbers between 0 and 1 and multiply them: if either result is 0, the whole thing becomes 0 regardless of the other outcome, and the only way to get 1 is for both rolls to be 1. In practice this probably doesn't matter, but it makes for a strange graph.)

```
      1   2   3   4   5   6
   ------------------------
1 |   1   2   3   4   5   6
2 |   2   4   6   8  10  12
3 |   3   6   9  12  15  18
4 |   4   8  12  16  20  24
5 |   5  10  15  20  25  30
6 |   6  12  18  24  30  36
```
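The multiplication table above can be tallied directly; here is a small Python sketch (Python used only for illustration) counting how often each product occurs among the 36 equally likely outcomes:

```python
from collections import Counter
from itertools import product

# Tally the 36 equally likely two-die outcomes by their product
# (one entry per cell of the multiplication table above).
product_counts = Counter(a * b for a, b in product(range(1, 7), repeat=2))
# Products like 6 and 12 occur 4 ways each, while 1 and 36 occur only once,
# so the product is far from uniformly distributed.
```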

• High bias: `sqrt(rand(range^2))`
• Peak in the middle: `(rand(range) + rand(range))/2`
• Low bias: `range - sqrt(rand(range^2))`
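These shaping tricks can be sketched in Python (an illustration only; the function names are my own, `rng` is a `random.Random` instance, and `r` stands in for `range` to avoid shadowing the builtin):

```python
import math
import random

def high_bias(rng, r):
    # sqrt of a uniform on [0, r^2] skews values toward r
    return math.sqrt(rng.uniform(0, r * r))

def middle_peak(rng, r):
    # average of two uniforms gives a triangular distribution peaking at r/2
    return (rng.uniform(0, r) + rng.uniform(0, r)) / 2

def low_bias(rng, r):
    # mirror of high_bias: skews values toward 0
    return r - math.sqrt(rng.uniform(0, r * r))
```

The means come out near 2r/3, r/2, and r/3 respectively, which is a quick way to see the bias.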

Asking about "random" vs. "more random" is a bit like asking which zero is more "zero".

```
010110101110010
```

So how do we come up with 1/3 and 2/3? When the contestant originally picked a door, he had a 1/3 chance of picking the winner. I think that much is obvious. That means there was a 2/3 chance that one of the other doors is the winner. If the host gave him the opportunity to switch without giving any additional information, there would be no gain. Again, this should be obvious. But one way to look at it is to say that there is a 2/3 chance that he would win by switching. But he has 2 alternatives. So each one has only 2/3 divided by 2 = 1/3 chance of being the winner, which is no better than his original pick. Of course we already knew the final result, this just calculates it a different way.

But now the host reveals that one of those two choices is not the winner. So of the 2/3 chance that a door he didn't pick is the winner, he now knows that 1 of the 2 alternatives isn't it. The other might or might not be. So he no longer has 2/3 divided by 2. He has zero for the open door and 2/3 for the closed door.

Consider you have a simple coin flip problem where even is considered heads and odd is considered tails. The logical implementation is:

```
rand() mod 2
```

Over a large enough distribution, the number of even numbers should equal the number of odd numbers.

Now consider a slight tweak:

```
rand() * rand() mod 2
```

If one of the results is even, then the entire result should be even. Consider the 4 possible outcomes (even * even = even, even * odd = even, odd * even = even, odd * odd = odd). Now, over a large enough distribution, the answer should be even 75% of the time.

I'd bet heads if I were you.
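The parity argument above is easy to check empirically. A rough Python sketch, assuming an integer-valued `rand()` (the range 1,000,000 and the seed are arbitrary choices of mine):

```python
import random

def even_fractions(n=100_000, seed=0):
    """Fraction of even results for rand() mod 2 vs (rand()*rand()) mod 2."""
    rng = random.Random(seed)
    single = sum(rng.randrange(1_000_000) % 2 == 0 for _ in range(n)) / n
    prod = sum(
        (rng.randrange(1_000_000) * rng.randrange(1_000_000)) % 2 == 0
        for _ in range(n)
    ) / n
    return single, prod
```

The single draw lands even about half the time, while the product lands even about 75% of the time, matching the four-outcome analysis.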

This comment is really more of an explanation of why you shouldn't implement a custom random function based on your method than a discussion on the mathematical properties of randomness.

When in doubt about what will happen to the combinations of your random numbers, you can use the lessons you learned in statistical theory.

In the OP's situation, he wants to know the distribution of X * X = X^2, where X is a random variable distributed as Uniform[0,1]. We'll use the CDF technique since it's a one-to-one mapping.

Since X ~ Uniform[0,1], its pdf is f_X(x) = 1. We want the transformation Y = X^2, so y = x^2. Find the inverse x(y): x = sqrt(y), which gives us x as a function of y. Next, find the derivative dx/dy: d/dy (sqrt(y)) = 1/(2 sqrt(y)).

The distribution of Y is given by: f_Y(y) = f_X(x(y)) |dx/dy| = 1/(2 sqrt(y))

We're not done yet; we have to get the domain of Y. Since 0 <= x < 1, we have 0 <= x^2 < 1, so Y is in the range [0, 1). If you want to check that the pdf of Y is indeed a pdf, integrate it over the domain: the integral of 1/(2 sqrt(y)) from 0 to 1 is indeed 1. Also, notice that the shape of this function looks like what belisarius posted.
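The derivation can be sanity-checked numerically: if the pdf of Y is 1/(2 sqrt(y)), then its CDF is sqrt(y), which should match the empirical CDF of squared uniform samples. A Python sketch (sample size and seed are arbitrary):

```python
import math
import random

def empirical_cdf_of_square(y, n=100_000, seed=1):
    """Fraction of X^2 samples <= y, for X ~ Uniform[0,1).
    Should approximate the analytic CDF sqrt(y)."""
    rng = random.Random(seed)
    return sum(rng.random() ** 2 <= y for _ in range(n)) / n
```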

As for things like X_1 + X_2 + … + X_n (where X_i ~ Uniform[0,1]), we can just appeal to the Central Limit Theorem, which works for any distribution whose moments exist. This is actually why the Z-test exists.
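A quick illustration of this: the classic trick of summing 12 Uniform(0,1) draws and subtracting 6 gives an approximately standard normal variable, since the sum has mean 6 and variance 12 · (1/12) = 1. A Python sketch (sample size and seed are my own choices):

```python
import random

def approx_standard_normal_samples(n=100_000, seed=3):
    """Each value is a sum of 12 Uniform(0,1) draws minus 6:
    approximately N(0, 1) by the Central Limit Theorem."""
    rng = random.Random(seed)
    return [sum(rng.random() for _ in range(12)) - 6 for _ in range(n)]
```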

Other techniques for determining the resulting pdf include the Jacobian transformation (which is the generalized version of the cdf technique) and MGF technique.

EDIT: As a clarification, do note that I'm talking about the distribution of the resulting transformation and not its randomness. That's actually for a separate discussion. Also, what I actually derived was for (rand())^2. For rand() * rand() it's much more complicated, and in any case it won't result in a uniform distribution of any sort.

It's not exactly obvious, but `rand()` is typically more random than `rand()*rand()` . What's important is that this isn't actually very important for most uses.

But firstly, they produce different distributions. This is not a problem if that is what you want, but it does matter. If you need a particular distribution, then ignore the whole “which is more random” question. So why is `rand()` more random?

The core of why `rand()` is more random (under the assumption that it is producing floating-point random numbers with the range [0..1], which is very common) is that when you multiply two FP numbers together with lots of information in the mantissa, you get some loss of information off the end; there just aren't enough bits in an IEEE double-precision float to hold all the information that was in two IEEE double-precision floats uniformly randomly selected from [0..1], and those extra bits of information are lost. Of course, it doesn't matter that much since you (probably) weren't going to use that information, but the loss is real. It also doesn't really matter which distribution you produce (ie, which operation you use to do the combination). Each of those random numbers has (at best) 52 bits of random information – that's how much an IEEE double can hold – and if you combine two or more into one, you're still limited to having at most 52 bits of random information.

Most uses of random numbers don't use even close to as much randomness as is actually available in the random source. Get a good PRNG and don't worry too much about it. (The level of “goodness” depends on what you're doing with it; you have to be careful when doing Monte Carlo simulation or cryptography, but otherwise you can probably use the standard PRNG as that's usually much quicker.)

Floating-point randoms are generally based on an algorithm that produces an integer between zero and a certain range. As such, by using rand() * rand(), you are essentially computing int_rand() * int_rand() / rand_max^2 – meaning you are excluding any prime number / rand_max^2.

That changes the randomized distribution significantly.

rand() is uniformly distributed on most systems, and difficult to predict if properly seeded. Use that unless you have a particular reason to do math on it (ie, shaping the distribution to a needed curve).

Multiplying numbers ends up in a smaller solution range, depending on your computer architecture.

If the display of your computer shows 16 digits, `rand()` would be, say, 0.1234567890123; multiplied by a second `rand()`, 0.1234567890123, this would give roughly 0.0152415…, and you'd definitely find fewer distinct solutions if you repeated the experiment 10^14 times.

Most of these distributions happen because you have to limit or normalize the random number.

We normalize it to be all positive, fit within a range, and even to fit within the constraints of the memory size for the assigned variable type.

In other words, because we have to limit the random call between 0 and X (X being the size limit of our variable) we will have a group of "random" numbers between 0 and X.

Now when you add one random number to another, the sum will be somewhere between 0 and 2X… this skews the values away from the edge points (the probability of adding two small numbers together, or two big numbers together, is very small when you have two random numbers over a large range).

Think of the case where you have a number close to zero and you add it to another random number: the sum will almost certainly get bigger and move away from 0. This is true of large numbers as well, since it is unlikely that the Random function returns two large numbers (numbers close to X) twice in a row.

Now if you were to set up the random method with negative and positive numbers (spanning equally across the zero axis), this would no longer be the case.

Say, for instance, `RandomReal[{-x, x}, 50000]`: then you would get an even distribution of numbers on the negative and positive side, and if you were to add the random numbers together they would maintain their "randomness".

Now I'm not sure what would happen with the `Random() * Random()` with the negative to positive span…that would be an interesting graph to see…but I have to get back to writing code now. 😛

1. There is no such thing as more random. It is either random or not. Random means "hard to predict". It does not mean non-deterministic. Both random() and random() * random() are equally random if random() is random. Distribution is irrelevant as far as randomness goes. If a non-uniform distribution occurs, it just means that some values are more likely than others; they are still unpredictable.

2. Since pseudo-randomness is involved, the numbers are very much deterministic. However, pseudo-randomness is often sufficient in probability models and simulations. It is pretty well known that making a pseudo-random number generator complicated only makes it difficult to analyze. It is unlikely to improve randomness; it often causes it to fail statistical tests.

3. The desired properties of the random numbers are important: repeatability and reproducibility, statistical randomness, (usually) uniformly distributed, and a large period are a few.

4. Concerning transformations on random numbers: As someone said, the sum of two or more uniformly distributed random variables tends toward a normal distribution. This is the additive central limit theorem. It applies regardless of the source distribution as long as all distributions are independent and identical. The multiplicative central limit theorem says the product of two or more independent and identically distributed random variables is lognormal. The graph someone else created looks exponential, but it is really lognormal. So random() * random() is lognormally distributed (although it may not be independent since numbers are pulled from the same stream). This may be desirable in some applications. However, it is usually better to generate one random number and transform it to a lognormally-distributed number. Random() * random() may be difficult to analyze.
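The lognormal claim is easiest to see on the log scale, where the product becomes a sum: for U ~ Uniform(0,1), E[ln U] = -1 and Var[ln U] = 1, so the log of a product of n independent uniforms has mean -n and variance n. A Python sketch (n and the seed are my own choices):

```python
import math
import random

def log_of_uniform_product(n, rng):
    """ln(U_1 * ... * U_n) computed as a sum of logs,
    for independent U_i ~ Uniform(0,1)."""
    return sum(math.log(rng.random()) for _ in range(n))
```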

For more information, consult my book at http://www.performorama.org. The book is under construction, but the relevant material is there. Note that chapter and section numbers may change over time. Chapter 8 (probability theory) — sections 8.3.1 and 8.3.3, chapter 10 (random numbers).

We can compare two arrays of numbers with respect to randomness by using Kolmogorov complexity: if the sequence of numbers cannot be compressed, then it is the most random we can reach at this length… I know that this type of measurement is more of a theoretical option…

Use a linear feedback shift register (LFSR) that implements a primitive polynomial.

The result will be a sequence of 2^n - 1 pseudo-random numbers (the all-zero state never occurs), ie none repeating within the sequence, where n is the number of bits in the LFSR… resulting in a uniform distribution.

Use a "random" seed based on microsecs of your computer clock or maybe a subset of the md5 result on some continuously changing data in your file system.

For example, a 32-bit LFSR will generate 2^32 - 1 unique numbers in sequence (no 2 alike) starting from a given seed. The sequence will always be in the same order, but the starting point will be different (obviously) for different seeds. So, if a possibly repeating sequence between seedings is not a problem, this might be a good choice.

I've used 128-bit LFSR's to generate random tests in hardware simulators using a seed which is the md5 results on continuously changing system data.
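A minimal software sketch of such an LFSR (a 16-bit Fibonacci LFSR with taps 16, 14, 13, 11 – a well-known primitive polynomial; the seed value is arbitrary but must be nonzero):

```python
def lfsr16_states(seed=0xACE1):
    """Yield the states of a 16-bit Fibonacci LFSR (taps 16, 14, 13, 11)
    until the cycle returns to the seed: 2**16 - 1 nonzero states."""
    state = seed
    while True:
        # XOR the tapped bits to form the feedback bit.
        bit = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        # Shift right and feed the new bit in at the top.
        state = ((state >> 1) | (bit << 15)) & 0xFFFF
        yield state
        if state == seed:
            return
```

Since the polynomial is primitive, the generator visits every nonzero 16-bit state exactly once before repeating.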

Actually, when you think about it, `rand() * rand()` is less random than `rand()`. Here's why.

Essentially, there are the same number of odd numbers as even numbers. Say that 0.04325 is odd, 0.388 is even, 0.4 is even, and 0.15 is odd (going by the last digit).

That means that `rand()` has an equal chance of being an even or odd decimal.

On the other hand, `rand() * rand()` has its odds stacked a bit differently. Let's say:

```
double a = rand();
double b = rand();
double c = a * b;
```

`a` and `b` each have a 50% chance of being even or odd. Knowing that

• even * even = even
• even * odd = even
• odd * odd = odd
• odd * even = even

means that there is a 75% chance that `c` is even, while only a 25% chance it's odd, making the value of `rand() * rand()` more predictable than `rand()`, and therefore less random.

OK, so I will try to add some value to complement the other answers by pointing out that you are creating and using a random number generator.

Random number generators are devices (in a very general sense) that have multiple characteristics which can be modified to fit a purpose. Some of them (off the top of my head) are:

• Entropy: as in Shannon Entropy
• Distribution: statistical distribution (poisson, normal, etc.)
• Type: what is the source of the numbers (algorithm, natural event, combination of, etc.) and algorithm applied.
• Efficiency: rapidity or complexity of execution.
• Patterns: periodicity, sequences, runs, etc.
• and probably more…

In most answers here, distribution is the main point of interest, but by mixing and matching functions and parameters you create new ways of generating random numbers, which will have different characteristics, some of which may not be obvious to evaluate at first glance.

It's easy to show that the sum of two random numbers is not necessarily random. Imagine you have a 6-sided die and roll it: each number has a 1/6 chance of appearing. Now say you had 2 dice and summed the result. The distribution of those sums is not uniform. Why? Because certain numbers appear more than others: they have multiple partitions. For example, the number 2 is the sum of 1+1 only, but 7 can be formed by 3+4 or 4+3 or 5+2, etc., so it has a larger chance of coming up.

Therefore, applying a transform – in this case, addition – to a random function does not make it more random, or necessarily preserve randomness. In the case of the dice above, the distribution is skewed toward 7 and therefore less random.
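The partition-counting argument can be written out directly; a small Python tally of the 36 equally likely two-die outcomes by their sum:

```python
from collections import Counter
from itertools import product

# Count how many of the 36 equally likely (die1, die2) outcomes give each sum.
sum_counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
# 7 has six partitions (1+6, 2+5, 3+4, 4+3, 5+2, 6+1) while 2 has only one,
# so the distribution of the sum peaks at 7.
```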

As others have already pointed out, this question is hard to answer since every one of us has his own picture of randomness in his head.

That is why I would highly recommend you take some time and read through this site to get a better idea of randomness:

To get back to the real question: there is no "more" or "less" random in these terms:

Both only appear random!

In both cases – just rand() or rand() * rand() – the situation is the same: after a few billion numbers the sequence will repeat(!). It appears random to the observer because he does not know the whole sequence, but the computer has no true random source – so it cannot produce real randomness either.

eg: Is the weather random? We do not have enough sensors or knowledge to determine whether the weather is random or not.

The answer would be: it depends. Hopefully rand() * rand() would be more random than rand(), but consider that:

• both answers depend on the bit size of your value
• in most cases you generate numbers with a pseudo-random algorithm (which is mostly a number generator that depends on your computer clock, and is not that random)
• you should make your code more readable (and not invoke some random voodoo god of random with this kind of mantra)

Well, if you checked any of the above, I suggest you go for the simple "rand()". Your code would be more readable (you wouldn't ask yourself why you wrote this for… well… more than 2 seconds) and easier to maintain (if you want to replace your rand function with a super_rand).

If you want better randomness, I would recommend streaming it from any source that provides enough noise (radio static, for example); after that, a simple `rand()` should be enough.