Native bit size, Arbitrary-Precision Arithmetic

Posted by justpeachypay@reddit | Python | View on Reddit | 13 comments

Okay, so I have a homework question to compare the int factorial and the float factorial of 200 and to explain what I find. For reference, 200! returns an answer, while 200.0! results in ∞. I'm like 90% sure the professor wants an answer along the lines of "the calculated value exceeds (overflows) the maximum value that can be represented by the data type," especially since this is not a compsci class. Great, that's easy.

BUT I wanted to know *why*, and it took me a while because I'm pretty new to programming and struggle with a lot of the terminology, but I eventually came across bignum. The general idea of bignum makes sense to me: I understand how the number gets stored as an array once it grows beyond the bit size of the CPU. What I don't understand is where the bit size comes in. Here's my thought process: I understand that Python encodes the number in binary — I think of this as the base array, where each individual 0 or 1 is an element. To maximize performance, Python can assign values anywhere from 0 to 1073741823 = 2^30 − 1 to each element, i.e. up to 10^1073741823 (I'm not sure why this helps me visualize the number, but it does)...

Now here's where I get tripped up: how are you able to assign another number to a binary system (I know we've turned them into arrays, but that's still not making sense to me)? And why does this result in 2^31 − 1 possible int digits? (I'm assuming this is because it's using 32 bits as parent arrays? I have 64-bit Python, so I'm not really understanding the correlation.) Am I thinking about this completely wrong?
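To make the int-vs-float comparison concrete, here's a minimal sketch of the homework experiment (assuming the float version is computed by repeated multiplication, since `math.factorial` only accepts ints): the int result is exact because Python ints are arbitrary precision, while the float loop overflows the IEEE 754 double range (max ≈ 1.8 × 10^308) and saturates to `inf`.

```python
import math
import sys

# Exact integer factorial: Python ints are arbitrary precision,
# so 200! is computed exactly -- a 375-digit number, about 7.9e374.
n = math.factorial(200)
print(len(str(n)))  # number of decimal digits in 200!

# Float "factorial": repeated float multiplication. Once the running
# product exceeds the largest finite double, it becomes inf and stays inf.
f = 1.0
for k in range(1, 201):
    f *= k
print(f)                   # inf
print(sys.float_info.max)  # largest finite double, ~1.8e308
```

Since 200! ≈ 7.9 × 10^374 is far beyond 1.8 × 10^308, the float version has no choice but to overflow, which is exactly the "exceeds the maximum value the data type can represent" answer.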

I’m sorry if some of this is unclear.