CppCon 2015: John Farrier “Demystifying Floating Point”



Every day we develop software that relies on math, yet we often overlook the implications of using IEEE floats. From the often cited “floating point error” to unstable algorithms, this talk explains why floats matter, how they are stored, how IEEE arithmetic affects your math, and how to design better algorithms around it. It concludes with a quick case study of storing time for games and simulations.
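
A minimal sketch (not from the talk) of why that time-storage case study matters: accumulating frame time in a 32-bit float drifts, because the spacing between representable values grows as the running total grows. The frame rate and duration below are arbitrary choices for illustration.

#include <cstdio>

int main()
{
    const float dt = 1.0f / 60.0f;            // one frame at 60 Hz
    const long  frames = 60L * 60 * 60 * 4;   // roughly four hours of frames

    // Accumulate in float: each addition is rounded to the (growing) ULP of the total.
    float t = 0.0f;
    for (long i = 0; i < frames; ++i)
        t += dt;

    // Reference accumulation in double.
    double td = 0.0;
    for (long i = 0; i < frames; ++i)
        td += 1.0 / 60.0;

    std::printf("float total:  %.6f s\n", t);
    std::printf("double total: %.6f s\n", td);
    std::printf("drift:        %.6f s\n", td - t);
    return 0;
}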

John Farrier is a software engineer, researcher, and musician. He designs software architectures for military modeling and simulation activities. His projects support efforts across the U.S. Department of Defense ranging from lab-based experimental software to fielded software on live fire test ranges.


Comments

There are the same number of floats between 0.5 and 0.25, between 0.25 and 0.125, between 0.125 and 0.0625, and so on, so if you count the floats between 0 and 1, there are a lot more than between 1 and 2. In fact, there are 255 "groups" of numbers, each running from N to N*2 and each containing 8388608 values.

dascandy
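
A quick way to check the claim above: for positive floats, consecutive values have consecutive bit patterns, so the count of floats in [a, b) is simply the difference of the bit patterns of b and a. This is a sketch of my own, not material from the talk.

#include <cstdint>
#include <cstdio>
#include <cstring>

static std::uint32_t bits(float f)
{
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);   // well-defined way to read the bit pattern
    return u;
}

int main()
{
    std::printf("floats in [1, 2):   %u\n", bits(2.0f) - bits(1.0f));   // 8388608 = 2^23
    std::printf("floats in [0.5, 1): %u\n", bits(1.0f) - bits(0.5f));   // also 2^23
    std::printf("floats in [0, 1):   %u\n", bits(1.0f) - bits(0.0f));   // many binades plus the subnormals
    return 0;
}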

The comment about stack overflow is fairly on point.

skilz

The difference between float and double for the Kahan version of the triangle-area formula comes from the decimal-to-binary conversion of the inputs. If you cast the float inputs to double, the results are almost identical.

hanyouchu
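
For readers who want to try this, here is a sketch of Kahan's rearrangement next to the comparison the comment describes. The side lengths are my own illustrative values for a needle-like triangle, not taken from the slides; Kahan's version requires a >= b >= c and the parentheses must not be rearranged.

#include <algorithm>
#include <cmath>
#include <cstdio>

template <typename T>
T kahan_area(T a, T b, T c)
{
    // Sort so that a >= b >= c.
    if (a < b) std::swap(a, b);
    if (b < c) std::swap(b, c);
    if (a < b) std::swap(a, b);
    // Keep the parentheses exactly as written.
    return std::sqrt((a + (b + c)) * (c - (a - b)) * (c + (a - b)) * (a + (b - c))) / T(4);
}

int main()
{
    const double ad = 100000.0,  bd = 99999.99979,  cd = 0.00029;
    const float  af = 100000.0f, bf = 99999.99979f, cf = 0.00029f;

    std::printf("float inputs, float math:   %.9g\n", kahan_area(af, bf, cf));
    std::printf("float inputs, double math:  %.9g\n", kahan_area<double>(af, bf, cf));
    std::printf("double inputs, double math: %.9g\n", kahan_area(ad, bd, cd));
    return 0;
}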

Can anybody provide a link to those 50 equations (test cases?) that must work out the same on all IEEE 754 machines?

alexloktionoff

Why does the range 0.0 to 0.1 have more precision?
Is it because every floating-point number is unique, and there's a lot of overlap up to 2^23?

enhex
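
One way to see this concretely: the gap between adjacent floats (one ULP) scales with the magnitude of the number, so an interval near zero contains far more representable values than an interval of the same width further out. A small sketch using std::nextafterf:

#include <cmath>
#include <cstdio>

static void show_spacing(float x)
{
    float next = std::nextafterf(x, INFINITY);   // the adjacent float above x
    std::printf("spacing just above %-10g is %g\n", x, next - x);
}

int main()
{
    show_spacing(0.0f);
    show_spacing(0.1f);
    show_spacing(1.0f);
    show_spacing(1000.0f);
    show_spacing(1.0e7f);
    return 0;
}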

@4:31 and @8:10: why does (-1)^0 compute to zero? Shouldn't it be 1? Is it a typo, or am I missing something?

ehsanamini
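
For reference, the standard single-precision decomposition is value = (-1)^sign * 1.fraction * 2^(exponent - 127) for normal numbers, and mathematically (-1)^0 is 1, not 0. A sketch (mine, not from the slides) that pulls the fields apart and reconstructs the value:

#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    float f = 6.5f;
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);

    std::uint32_t sign     = u >> 31;
    std::uint32_t exponent = (u >> 23) & 0xFF;   // biased by 127
    std::uint32_t fraction = u & 0x7FFFFF;       // 23 stored bits

    // Reconstruct the value of a normal number from its fields.
    double value = std::pow(-1.0, sign)
                 * (1.0 + fraction / 8388608.0)          // 1.fraction, 8388608 = 2^23
                 * std::pow(2.0, (int)exponent - 127);

    std::printf("bits 0x%08X  sign %u  exponent %u  fraction 0x%06X\n", u, sign, exponent, fraction);
    std::printf("reconstructed %g, original %g\n", value, f);
    return 0;
}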

At 11:08 (slide 24), the binary representation of 1.0e-37 shown here is different from what I got using Visual Studio: EA1C0802 (little-endian). Why is that? The latter does not seem to be a denormalized number.

janasandeep
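
A sketch for checking this locally (not an answer about the slide itself): print the bit pattern of 1.0e-37f and classify it. Since FLT_MIN is about 1.1755e-38, 1.0e-37f should still classify as a normal number.

#include <cfloat>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main()
{
    float f = 1.0e-37f;
    std::uint32_t u;
    std::memcpy(&u, &f, sizeof u);

    std::printf("1.0e-37f bits:  0x%08X\n", u);
    std::printf("FLT_MIN:        %g\n", FLT_MIN);
    std::printf("classification: %s\n",
                std::fpclassify(f) == FP_SUBNORMAL ? "subnormal" : "not subnormal");
    return 0;
}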

Very good talk. It wouldn't have occurred to me to group terms with like exponents.

georganatoly
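
A small sketch of what that grouping buys you (my own example, not a slide from the talk): adding many tiny terms to a large value one at a time loses them to rounding, while summing the tiny terms among themselves first preserves their contribution.

#include <cstdio>

int main()
{
    const float big = 1.0e8f;   // the ULP of 1.0e8f is 8, so a lone 1.0f is below the rounding threshold
    const int   n   = 1000000;

    float naive = big;
    for (int i = 0; i < n; ++i)
        naive += 1.0f;          // each addition rounds straight back to 1.0e8f

    float grouped = 0.0f;
    for (int i = 0; i < n; ++i)
        grouped += 1.0f;        // group the small terms first
    grouped += big;             // then add the large one once

    std::printf("naive:   %.1f\n", naive);    // 100000000.0
    std::printf("grouped: %.1f\n", grouped);  // 101000000.0
    return 0;
}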

Units in the last place = ULPs. That's a measurement I'd never seen before. Some compilers can also control how rounding works via options.

Calm_Energy
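
Two small sketches of those ideas, assuming a typical toolchain: the ULP distance between two positive doubles can be read straight off their bit patterns, and <cfenv> changes the rounding direction at runtime (strictly this also wants #pragma STDC FENV_ACCESS ON; the volatile operands discourage compile-time folding).

#include <cfenv>
#include <cstdint>
#include <cstdio>
#include <cstring>

// Distance in ULPs between two positive finite doubles: consecutive
// positive doubles have consecutive bit patterns.
static long long ulp_distance(double a, double b)
{
    std::uint64_t ua, ub;
    std::memcpy(&ua, &a, sizeof ua);
    std::memcpy(&ub, &b, sizeof ub);
    return (long long)ua - (long long)ub;
}

int main()
{
    std::printf("ULPs between 0.1 + 0.2 and 0.3: %lld\n",
                ulp_distance(0.1 + 0.2, 0.3));

    volatile double num = 1.0, den = 3.0;
    std::fesetround(FE_UPWARD);
    std::printf("1/3 rounded upward:     %.17f\n", num / den);
    std::fesetround(FE_TONEAREST);
    std::printf("1/3 rounded to nearest: %.17f\n", num / den);
    return 0;
}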

5:22: for 64 bits it should be 11 bits of exponent.

RajeshKumarsep
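
For reference, a 64-bit IEEE double has 1 sign bit, 11 exponent bits, and 52 stored fraction bits. A tiny sketch that confirms the layout from std::numeric_limits instead of from memory:

#include <cstdio>
#include <limits>

int main()
{
    // digits counts the implicit leading bit, so the stored fraction is digits - 1.
    std::printf("double fraction bits: %d\n", std::numeric_limits<double>::digits - 1);   // 52
    std::printf("double max exponent:  %d\n", std::numeric_limits<double>::max_exponent); // 1024, i.e. an 11-bit field
    std::printf("float fraction bits:  %d\n", std::numeric_limits<float>::digits - 1);    // 23
    std::printf("float max exponent:   %d\n", std::numeric_limits<float>::max_exponent);  // 128, i.e. an 8-bit field
    return 0;
}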

The talk covers some very important maths topics.

Courserasrikanthdrk

16:56: 'on the CPU math is done exactly and then rounded to give it back to you'. Did you really say that? Do you really mean that?

andik
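
What that line most likely refers to is IEEE correct rounding: each basic operation behaves as if it were computed exactly and then rounded once to the destination format. A sketch of the idea (mine, not from the talk) using std::fma, which performs a*b + c with a single rounding, while the separate expression rounds the product before the addition:

#include <cmath>
#include <cstdio>

int main()
{
    // Values chosen so that a*b is not exactly representable as a double.
    double a = 1.0 + 1e-8;
    double b = 1.0 - 1e-8;
    double c = -1.0;

    volatile double p = a * b;                 // first rounding (volatile blocks fusion into an fma)
    double two_roundings = p + c;              // then the addition is rounded

    double one_rounding = std::fma(a, b, c);   // exact a*b + c, rounded once

    std::printf("a*b + c      (two roundings): %.20g\n", two_roundings);
    std::printf("fma(a, b, c) (one rounding):  %.20g\n", one_rounding);
    return 0;
}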

Confusing talk. This guy is all over the place; no slide connects to the previous one.

UpstreamNL