Change my mind: Floating point should not be the default number representation in high-level programming languages.

Posted by KingSupernova@reddit | programming


Floating point is useful because it's fast. But the whole point of high-level languages is to trade performance for programmer time. A Python program is never going to be as fast as one written in C, but it will be a lot easier to write.

Floating point is already a tradeoff in this direction; integer arithmetic is even faster, but less useful, so floating point was a reasonable compromise... 80 years ago. Nowadays, hardware improvements have made performance a non-issue for a huge number of cases. I wouldn't be surprised if more than half of today's actively-developed software could take a 10x speed penalty on its arithmetic operations without breaking. (Just look at how slow most modern apps and webpages are; performance is clearly not a priority for most large companies.)
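To put a rough number on the kind of penalty I mean, here's a sketch of a micro-benchmark using Python's `timeit` and the standard-library `Fraction` type (the `sum_floats`/`sum_fractions` names are just illustrative, and the exact figures will obviously vary by machine):

```python
# Rough micro-benchmark: hardware floats vs. exact rationals.
# Results are machine-dependent; this only illustrates the *kind* of gap involved.
import timeit
from fractions import Fraction

def sum_floats(n=2_000):
    total = 0.0
    for i in range(1, n):
        total += 1.0 / i
    return total

def sum_fractions(n=2_000):
    total = Fraction(0)
    for i in range(1, n):
        total += Fraction(1, i)
    return total

print("float:   ", timeit.timeit(sum_floats, number=5))
print("Fraction:", timeit.timeit(sum_fractions, number=5))
```

Yes, the exact version is slower, sometimes by a lot. My claim is that for most of the software being written today, that slowdown simply wouldn't matter.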

Floating point has severe drawbacks! The fact that you can't use it for exact math with non-integers is something most programmers have gotten used to, but in reality it's an absolutely *massive* cost. Millions of person-hours have been wasted on learning, debugging, and implementing workarounds for floating point's quirks, all of which would be unnecessary if programming languages supported arbitrary-precision arithmetic out of the box.
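To make the cost concrete, here's the canonical example in Python, plus the manual workaround (`decimal.Decimal`) you have to reach for explicitly today:

```python
# The classic floating-point surprise, and the opt-in exact alternative.
from decimal import Decimal

print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False

# Exact decimal arithmetic exists, but only if you ask for it by name:
print(Decimal("0.1") + Decimal("0.2"))                     # 0.3
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
```

Every programmer eventually trips over this, and every codebase ends up sprinkled with epsilon comparisons or explicit conversions to paper over it.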

Indeed, many programming languages already have separate types for integers and floating-point numbers. I don't think this division makes sense. The default number type in any high-level language should be one that roughly matches how numbers actually work in the real world, with floating point as a secondary alternative for performance-critical cases.
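As a sketch of what that default could feel like, Python's standard-library `fractions` module already gives exact rational arithmetic; a hypothetical "numbers behave like math" default would look roughly like this, with floats kept as an explicit opt-in for hot paths:

```python
# Sketch: exact rational arithmetic as the default behaviour one might want.
from fractions import Fraction

third = Fraction(1, 3)
print(third * 3 == 1)                                          # True, no rounding error
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))    # True

# Floating point stays available when you genuinely need the speed:
print(float(third))                                            # 0.3333333333333333
```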