I presume that you are referring to discrete numbers (integers) rather than real numbers.
For continuous distributions there are formulae from calculus that give the probability of a value
falling within particular bounds, obtained by integrating the probability density over that interval.
Within any interval of real numbers there is of course an infinity of other real numbers.
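As a minimal sketch of what I mean, taking the standard normal distribution purely as my own example,
the probability of landing in a bounded interval is the integral of the density over that interval,
which Python's standard library can express through the error function:

# Sketch: P(a <= X <= b) for a standard normal X, i.e. the integral of
# exp(-x**2 / 2) / sqrt(2 * pi) over [a, b]; the normal is just an illustrative choice.
from math import erf, sqrt

def normal_prob(a, b):
    """Probability that X ~ N(0, 1) falls in [a, b], via the cumulative distribution."""
    cdf = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return cdf(b) - cdf(a)

print(normal_prob(-1.0, 1.0))  # roughly 0.6827, the familiar one-sigma figure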
I presume that you are using the frequentist definition of probability.
Perhaps all that is required to show the difficulty of selecting an integer from an infinite set of
integers is to argue that each number needs an index to enable its selection, and to stipulate that
each number has an equal chance of being selected. But in an infinite set of integers the indexes are
also going to be infinite, which means that no upper or lower bound can be placed on the index
numbers. The formulae for selecting a number at random, however, require upper and lower index
boundaries. Therefore the formula for selecting a random number from an infinite set of integers is
undefined.
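To make the difficulty concrete, here is a minimal Python sketch (the choice of Python and of its
random.randint function is my own illustration): an actual selection routine demands finite bounds,
and if each of N integers is to receive an equal share of probability, that share is 1/N, which
shrinks towards zero as N grows, so no equal share could ever sum to 1 over an infinite set.

# Sketch: uniform selection needs finite index bounds, and an equal share
# of probability per integer (1/N) vanishes as the set grows without bound.
import random

def pick_uniform(lower, upper):
    """Select one integer uniformly at random; only possible because the bounds are finite."""
    return random.randint(lower, upper)

print(pick_uniform(1, 1000))      # fine: the index set is bounded

for n in (10, 10**6, 10**12):
    print(n, 1.0 / n)             # the equal share per integer tends towards zero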
If you adopt a Bayesian definition of probability (probability relates to belief), then perhaps a
random number can be defined as a number which will surprise you. Benford's Law suggests that human
beings are heavily biased towards smaller numbers, and that very large numbers have a vanishingly
small chance of ever being contemplated by a human being. Suppose we define a set of humanly
accessible numbers (numbers which we can think about, even if only very rarely). Since the set from
which we have to select the random number is infinite, its integers will be indexed by an infinite
set of integers, and it follows that a number whose index falls outside the set of humanly accessible
numbers carries the maximum possible surprise. But since the set of index numbers is infinite, there
must be an infinity of numbers with the maximum possible surprise. Therefore it is not possible to
select a single random number from an infinite set of integers.
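For what it is worth, the bias I am appealing to can be made concrete: Benford's Law predicts that
the leading digit d appears with frequency log10(1 + 1/d). This little Python sketch (my own
illustration, nothing more) prints how sharply those frequencies fall away from 1 towards 9:

# Sketch: Benford's predicted frequency for leading digit d is log10(1 + 1/d);
# digit 1 leads roughly 30% of the time, digit 9 under 5%.
from math import log10

for d in range(1, 10):
    print(d, round(log10(1 + 1 / d), 3))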
Lance