To work with big integers, we have to make registers to contain them. A register is a one-dimensional array of a particular data type.

Say we have a register consisting of 10,000 BYTEs. That register can then hold an integer of 10,000 digits.
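
For example, in C that register might be declared like this (REG_SIZE and reg are just names I'm making up for illustration):

    #define REG_SIZE 10000          /* how many digits the register holds */

    typedef unsigned char BYTE;     /* one digit of the big integer */

    /* The register itself.  A common convention is to store the least
       significant digit in reg[0]. */
    BYTE reg[REG_SIZE];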

We could use base 10, but then we would be wasting most of each BYTE. If our data type is BYTE, the natural base to use is 256, because a byte is 8 bits and 2^8 = 256. We don't need to make up any special symbols; a base-256 digit simply ranges from 0 to 255.
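
To make that concrete, here is a little sketch that breaks an ordinary unsigned long into its base-256 digits (to_base256 is a made-up helper name, not a library function):

    #include <stdio.h>

    typedef unsigned char BYTE;

    /* Decompose n into base-256 digits, least significant first.
       Returns how many digits were produced. */
    int to_base256(unsigned long n, BYTE digits[], int max_digits)
    {
        int count = 0;
        while (n > 0 && count < max_digits) {
            digits[count++] = (BYTE)(n % 256);  /* low 8 bits = one digit */
            n /= 256;                           /* move to the next digit */
        }
        return count;
    }

    int main(void)
    {
        BYTE d[8];
        int i, k = to_base256(1000000UL, d, 8);
        for (i = k - 1; i >= 0; i--)            /* most significant first */
            printf("%u ", (unsigned)d[i]);      /* prints: 15 66 64 */
        printf("\n");
        return 0;
    }

So the three base-256 digits 15, 66, 64 mean 15*65536 + 66*256 + 64 = 1,000,000.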

The larger the base, the fewer digits are needed to represent a given integer. A multiplication then involves fewer digit products and fewer carries, so it seems to me that, generally, the bigger the base, the better. On the other hand, multiplication in the smallest base, 2, is amazingly simple.
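
To show what I mean about base 2: multiplication reduces to shift-and-add, because each base-2 digit of the multiplier is either 0 or 1, so there are no single-digit products to compute at all. A sketch, using an ordinary unsigned long just to show the idea (it assumes the product fits):

    /* Shift-and-add multiplication, i.e. long multiplication in base 2. */
    unsigned long mul_base2(unsigned long a, unsigned long b)
    {
        unsigned long product = 0;
        while (b != 0) {
            if (b & 1)            /* current base-2 digit of b is 1 */
                product += a;     /* add the shifted multiplicand */
            a <<= 1;              /* shift multiplicand one place left */
            b >>= 1;              /* move to the next digit of b */
        }
        return product;
    }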

On the other thread I tried using base 65536 (2^16). I was using C, and that was the biggest base I could implement: on a 32-bit system, an unsigned long int is 4 bytes, so the largest value it can represent is 2^32 - 1 = 4,294,967,295, which is 65536 * 65536 - 1. In base 65536, when you multiply two digits, the largest value you can get is 65535 * 65535 = 4,294,836,225. If I used a base larger than 65536, the product of two digits would sometimes overflow the unsigned long int.
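
In code, one digit-by-digit multiply in base 65536 looks something like this (DIGIT and mul_digits are my own names; the point is that the intermediate product never overflows a 32-bit unsigned long):

    typedef unsigned short DIGIT;       /* one base-65536 digit, 0..65535 */

    /* Multiply two digits; the 32-bit product splits into a low digit
       and a carry.  Worst case: 65535 * 65535 = 4,294,836,225, which
       still fits in an unsigned long on a 32-bit system. */
    void mul_digits(DIGIT a, DIGIT b, DIGIT *low, DIGIT *carry)
    {
        unsigned long p = (unsigned long)a * (unsigned long)b;
        *low   = (DIGIT)(p % 65536UL);  /* digit for this column */
        *carry = (DIGIT)(p / 65536UL);  /* carry into the next column */
    }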

But of course, if we use a base other than 10, we will most likely be confronted with the problem of converting back and forth to base 10, since the user presumably expects input and output to be in base 10.
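
The output half of that conversion can be done by repeatedly dividing the whole register by 10 and collecting the remainders. Here is a simple O(n^2) sketch under the conventions above (digits stored least significant first); a serious library would do something faster, but it shows the idea:

    #include <stdio.h>

    typedef unsigned short DIGIT;           /* one base-65536 digit */

    /* Divide the big integer in place by 10; return the remainder 0..9. */
    int div10(DIGIT digits[], int n)
    {
        unsigned long rem = 0;
        int i;
        for (i = n - 1; i >= 0; i--) {      /* most significant digit first */
            unsigned long cur = rem * 65536UL + digits[i];
            digits[i] = (DIGIT)(cur / 10);
            rem = cur % 10;
        }
        return (int)rem;
    }

    /* True if every digit is zero. */
    int is_zero(DIGIT digits[], int n)
    {
        int i;
        for (i = 0; i < n; i++)
            if (digits[i] != 0)
                return 0;
        return 1;
    }

    int main(void)
    {
        DIGIT num[2] = { 0x4240, 0x000F }; /* 15*65536 + 16960 = 1,000,000 */
        char buf[16];
        int len = 0;

        if (is_zero(num, 2))
            buf[len++] = '0';
        while (!is_zero(num, 2))
            buf[len++] = (char)('0' + div10(num, 2));

        while (len > 0)
            putchar(buf[--len]);            /* prints: 1000000 */
        putchar('\n');
        return 0;
    }

Input goes the other way: start the register at zero, and for each decimal digit of the input, multiply the register by 10 and add that digit.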