Wait, so comparisons in floating point only just KINDA work? What DOES work?

An introduction to floating point numbers (IEEE-754), and some of the oddities surrounding them.

Disclaimer: Commission is earned from qualifying purchases on Amazon links.

Comments

Btw, please support me for more videos!

simondev

In the 70's and 80's we called floating point computer math: "floating point approximation". Someone in marketing dropped the word "approximation" sometime over the years.

KevinInPhoenix

Many years ago when designing the Sheerpower programming language for business applications, we spent a ton of money (over $100K) on this exact problem. We ended up with a data type called "real" with integer and fractional components located in their own memory locations. The hard part was making the runtime performance fast. Once done, it has been enjoyable never worrying about all of the FP pitfalls that you very well explained. In fact, this is the best explanation and clarity I have ever seen! Thank you.

MisterDan

My favorite floating point hack is that 7/3 - 4/3 - 1 will always give you machine epsilon. I don't quite remember how, but I found a comment in the depths of stackoverflow that claimed it worked regardless of programming language, OS and computer. As long as it's using the IEEE standard it works.
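For anyone curious, the trick is easy to check in Python; the printed value in the comment assumes IEEE-754 doubles, which CPython uses on essentially every platform:

```python
# 4/3 rounds down and 7/3 rounds up by just the right amounts, so the
# subtraction lands exactly one ulp above 1.0, i.e. machine epsilon.
import sys

eps = 7/3 - 4/3 - 1
print(eps)                             # 2.220446049250313e-16
print(eps == sys.float_info.epsilon)   # True
```

The result is exactly 2**-52, the gap between 1.0 and the next representable double.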

Kebabrulle

My "favorite" thing about floats is that float operations are nonassociatve. That is, (a + b) + c need not equal a + (b + c), and same for multiplication.

petrie

I think this is why Sun did that big push to evangelize interval arithmetic. It basically covered for all the imprecision of floats by simply treating them as fuzzy intervals. Things like == comparisons are now interval overlap checks and operations that make the error worse actually make the intervals grow. You basically avoid a lot of these headaches by just assuming that error will always be there and developing your arithmetic around that assumption.
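A toy Python sketch of the idea (this is a hypothetical mini-class, not Sun's actual interval library, and it ignores the directed rounding a rigorous implementation would apply to the bounds themselves):

```python
class Interval:
    """A value carried as a [lo, hi] range instead of a point."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # Error bounds can only grow as operations accumulate.
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def overlaps(self, other):
        # The interval version of an == comparison.
        return self.lo <= other.hi and other.lo <= self.hi

a = Interval(0.1 - 1e-12, 0.1 + 1e-12)
b = Interval(0.2 - 1e-12, 0.2 + 1e-12)
total = a + b   # bounds widen to roughly 0.3 +/- 2e-12
print(total.overlaps(Interval(0.3 - 1e-12, 0.3 + 1e-12)))  # True
```

The point-value comparison 0.1 + 0.2 == 0.3 fails, but the interval overlap test succeeds because the error is baked into the representation.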

parasharkchari

My boss recently told me a story about a game he once worked on. If you left it running for about 28 hours or so, all kinds of weird shit would start happening: the rendering would break completely, certain things would stop moving, etc. The reason was that certain things in the game kept some kind of ongoing timer, usually an accumulation of delta times, measured in seconds. It turns out that after the amount of time mentioned above, these accumulated timers got so big that a delta time of 1/60 was no longer large enough to affect them in any way, so they froze entirely. It's basically one of the floating point issues you mentioned in the video.

This specific bug never got a proper fix, just a workaround, which was to simply pause the game on inactivity.
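The freeze is easy to reproduce in Python with doubles; the accumulator value below is chosen only to make the absorption obvious, not taken from the game in question:

```python
# Once the gap between adjacent representable values (the ulp) around
# the accumulator exceeds twice the increment, adding the increment
# rounds straight back to the old value, so the timer stops advancing.
timer = 2.0 ** 53       # ulp here is 2.0, far larger than 1/60
dt = 1 / 60
print(timer + dt == timer)   # True: the frame delta is absorbed
print(timer + 1.0 == timer)  # True: even a full second vanishes
```

With 32-bit floats, as games often use, the threshold is hit vastly sooner, which is why the bug surfaced within days rather than millennia.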

RPG_Hacker

My rule of thumb when programming with floating point numbers is to just assume that two floating point numbers are never equal. The only time an FP is equal to another FP is when one was obtained by copying the other. FPs can be compared as "less than" or "greater than" as a sort of "inside/outside" check, with the "equals" case implicitly bundled into one of those two.
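In Python, this rule of thumb is what the standard library's `math.isclose` packages up; a small sketch:

```python
import math

a = 0.1 + 0.2
b = 0.3
print(a == b)              # False: exact equality is too strict
print(math.isclose(a, b))  # True: equal within a relative tolerance
# Explicit tolerances work too, when the defaults don't fit the scale:
print(math.isclose(a, b, rel_tol=1e-9, abs_tol=1e-12))  # True
```

The relative tolerance handles values of any magnitude, while `abs_tol` covers comparisons against zero, where a relative test alone breaks down.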

Niohimself

It kind of makes sense that floating point values don’t play well with equality, because the real numbers are infinitely divisible. In the real world, when you’re comparing things, you are always working to a certain degree of precision.

The only way for two objects to be the exact same length would be for them to have the same number of atoms, which is an integer comparison.

scaredyfish

This is an amazingly dense video! I'll have to watch it multiple times to completely absorb it.

ellaa_nashwara

I love the little HTML changes you make in the websites.

The description at 7:32 makes the title even better :D

neintonine

Years ago, I recall reading the specs for a Java3D library, and I think it had a 256-bit fixed-point type. IIRC, you could represent Planck lengths in the same model as the observable universe. Though I imagine there would be performance costs for that, with 32-byte numbers. A spacetime coordinate system would use 128 bytes for 3 space coordinates and 1 time coordinate, or just as well for homogeneous space coordinates.

zemoxian

This was so helpful! I always wondered why in MATLAB I sometimes have to do abs(a - b) < 0.00001 instead of a == b to compare two values.

jakesto

12:27 A perfect example of this in effect is Minecraft (specifically something called the Far Lands on the wiki), back before there was a world border. I'd encourage you to go check it out! It's really interesting and works wonderfully to visualise these floating point errors in action.

Sollace

The fundamental problem with FP arithmetic is that real numbers are not a natural fit for binary computers. There's no way to directly map values with a moving radix point into a register, since the register has a fixed length, without accumulating large errors. That leaves the fixed-point format as the alternative, where you have to choose between limited range and limited precision, but you can't have both. The convoluted way FP arithmetic is implemented within the constraints of binary logic makes it possible to have both range and precision, at the cost of added complexity and a thick book of rules and limitations -- the IEEE-754 standard -- that historically made high-performance FP hardware even more expensive to implement.
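For illustration, a hypothetical Q16.16 fixed-point sketch in Python shows the trade being described: precision is uniform across the range, but the range itself is capped by the integer bits (the format and helper names here are made up for the example):

```python
SCALE = 1 << 16              # Q16.16: 16 integer bits, 16 fraction bits

def to_fixed(x):
    return round(x * SCALE)  # store the value as a scaled integer

def from_fixed(n):
    return n / SCALE

# Uniform precision: the step is 2**-16 everywhere in the range...
print(from_fixed(1))                  # 1.52587890625e-05
# ...but the signed range tops out near 2**15 = 32768.
print(from_fixed(to_fixed(3.14159)))  # ~3.14159, within 2**-17
```

A float spends its bits differently: the step size scales with the magnitude, buying enormous range at the cost of uniform precision.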

Ivan-prku

Omg I can't believe you are a time traveller. What other exciting computer science moments have you witnessed?

nprograms

A minor niggle: What is described as a “mantissa” here is really a significand, one which is linear within the range allowed for a given exponent value. Mantissas, as in log tables, are logarithmic. If the exponent in a floating point format were represented as a binary fixed point (so that the usual significand would no longer be needed), the fractional part of the exponent would truly be the mantissa (and in the language of log and antilog tables, the integer part of the exponent would be called the “characteristic”). (Watch out for negative exponents, since the mantissa still has a positive sense in log tables. For M=0.113943, C = [−1, 0, 1, 2], 10^(C+M) yields [0.13, 1.3, 13, 130].)
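The log-table arithmetic at the end checks out numerically; a quick Python verification, rounding to two decimal places as in the comment:

```python
# With mantissa M ≈ log10(1.3) and characteristics C from -1 to 2,
# 10**(C + M) walks through 0.13, 1.3, 13, 130: the mantissa fixes
# the digits, the characteristic only moves the decimal point.
M = 0.113943
vals = [round(10 ** (C + M), 2) for C in (-1, 0, 1, 2)]
print(vals)   # [0.13, 1.3, 13.0, 130.0]
```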

shaggygoat

I don't know if you have ever been a lecturer, but I can tell you you are really good, and all people who have had you around are very lucky. Great coverage, great dissecting of the subject, extremely well presented. Thank you, subscribed just from one video. :)

BillDemos

I understand the basics of floats, and I've decided to just use integers whenever possible. Far fewer issues.

gloweye

I really appreciated this video; I've thought about it on and off over the last week. Thanks for the quality content. I'm always excited for your new videos. Keep it up, Simon!

Awezify