Floating Point Numbers (Part 2: FP Addition) - Computerphile



This video was filmed and edited by Sean Riley.

Comments

I'm quite surprised there is no video on regular expressions yet. Would love one about their history and why the syntax is so cryptic.

StefanH

The animations in every Computerphile video are one of the most underrated yet most important components. They make the explanations much easier to grasp by visually showing what the speaker wants to convey. In this video especially, the animations were spot on!

amkhrjee

I find the easiest way to learn about floating point is via 8-bit floating point. While impractical for actual use, it's helpful to be able to actually see the whole domain. There's a PDF by Dr. William T. Verts which lists a value for each of the 256 combinations.

robspiess
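To make the "whole domain" idea above concrete: a minimal Python sketch that decodes every byte as a 1-4-3 minifloat (1 sign bit, 4 exponent bits with bias 7, 3 mantissa bits). The exact layout is an assumption here; the format in Dr. Verts's PDF may differ in bias or field widths.

```python
def decode_minifloat8(b):
    """Decode one byte as an assumed 1-4-3 minifloat:
    1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits."""
    sign = -1.0 if b & 0x80 else 1.0
    exp = (b >> 3) & 0xF
    man = b & 0x7
    if exp == 0:                         # subnormal: no implicit leading 1
        return sign * (man / 8) * 2.0 ** -6
    if exp == 15:                        # all-ones exponent: infinity or NaN
        return sign * float("inf") if man == 0 else float("nan")
    return sign * (1 + man / 8) * 2.0 ** (exp - 7)

# Print the positive half of the domain (128 patterns).
for b in range(128):
    print(f"{b:08b} -> {decode_minifloat8(b)}")
```

With only 256 patterns, the gaps between representable values (and the cliff into infinity) are plainly visible in the printout.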

Thank you for explicitly covering this topic. Better than anything else I've found online.

cmscoby

I can't remember if they've done one on radix sorting, but understanding the representational bit pattern of floats is very helpful for sorting them with that family of algorithms.

cacheman
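The trick the comment above alludes to is the classic monotone key transform: flip all bits of a negative float's pattern, and just the sign bit of a positive one, so unsigned integer order matches numeric order. A sketch for float32:

```python
import struct

def float_to_sortable_u32(x):
    """Map a float32 bit pattern to an unsigned int whose
    order matches numeric order (suitable as a radix-sort key)."""
    (u,) = struct.unpack("<I", struct.pack("<f", x))
    if u & 0x80000000:           # negative: flip all bits so big negatives sort first
        return u ^ 0xFFFFFFFF
    return u | 0x80000000        # positive: just set the sign bit

vals = [3.5, -1.25, 0.0, 2.0, -100.0]
assert sorted(vals) == sorted(vals, key=float_to_sortable_u32)
```

A radix sort can then bucket on the bytes of these keys exactly as it would for plain unsigned integers. (NaNs have no meaningful order and would need special handling.)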

Did anyone notice that he wrote the first two 0s in the table at 3:35? :D

VruzZWG

There have been quite a few processors historically where the FPU cheated: instead of the full 48 bits needed, it used something much smaller, say 36 or 38 bits, rounding off the last ones.

People who wrote software, especially in the '90s, had to be very careful with this and not trust it too much. It was also one reason why 64-bit became so popular: even if the hardware cheats, the result is more accurate anyway.

Sadly, it's still common today for software developers to use 64-bit when it's really not needed. This is especially problematic with GPU acceleration, since some cards emulate 64-bit and run at much less than half speed.

Also worth saying: 16-bit floating point is actually quite a bit more accurate than people think, and twice as fast on most modern CPUs and some modern GPUs.

There are even 8-bit floating point formats, four times as fast again. While they are really inaccurate and have a very slim range, when they can be used the performance gain is huge.

matsv
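On the 16-bit point above: Python's struct module can round-trip a value through IEEE 754 binary16 (the "e" format code), which makes half precision's coarseness easy to probe directly. A small sketch:

```python
import struct

def to_half(x):
    """Round-trip a Python float through IEEE 754 half precision (binary16)."""
    return struct.unpack("<e", struct.pack("<e", x))[0]

print(to_half(1.0))      # 1.0 exactly
print(to_half(0.1))      # 0.0999755859375: only ~3 decimal digits survive
print(to_half(2049.0))   # 2048.0: above 2048 the spacing between halfs is 2
```

So halfs are fine for, say, pixel intensities or neural-network weights, but a running sum of thousands of them loses whole integers once it passes 2048.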

These guys are so good at explaining things to us not-so-smart people. Well done, mate.

valuedhumanoid

FWIW: I did an FPU for an experimental CPU core I was working on (targeting an FPGA). It normally works with doubles, but only has an ~64-bit intermediate mantissa (for FADD), mostly because the FADD unit was also being used for Int64->Float conversion (reusing the same normalizer; otherwise it could have been narrower). The rest of the bits just "fell off the bottom". The same goes for FMUL, which only produced a 54-bit intermediate result (with a little bit-twiddling, mostly to fix up rounding). Similarly: FDIV was done in software, rounding was hard-coded, it used "denormal as zero" behavior, ... Most of this was to make it more affordable (albeit not strictly IEEE conformant; most code wouldn't notice).

BGBTech
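The "fell off the bottom" behavior described above can be modeled in a few lines: align the mantissas by right-shifting the smaller operand, losing its low bits, then add and renormalize. This toy Python sketch truncates rather than rounds, so it is cruder than a real IEEE adder; the width parameter is just an illustrative knob.

```python
import math

def fadd_sketch(a, b, mantissa_bits=24):
    """Toy float add: align mantissas in fixed point, drop bits off
    the bottom, then renormalize. Truncates instead of rounding."""
    ma, ea = math.frexp(a)                 # a == ma * 2**ea, 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    ia = int(ma * (1 << mantissa_bits))    # fixed-point mantissas
    ib = int(mb * (1 << mantissa_bits))
    shift = ea - eb
    if shift >= 0:
        ib >>= shift                       # smaller operand loses its low bits here
        e = ea
    else:
        ia >>= -shift
        e = eb
    return math.ldexp(ia + ib, e - mantissa_bits)

print(fadd_sketch(1.0, 2.0 ** -30))   # the tiny addend's bits all fall off: 1.0
```

The last line shows exactly the failure mode from the video: an addend more than `mantissa_bits` binary places smaller than the other operand contributes nothing.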

Very informative! Thank you for this explanation.

thiswasleft

My machine organization class is doing this as an assignment right now. Thank you!

ryananderson

Fascinating subject. I have simulated 32-bit floating point addition, subtraction, and multiplication in Excel VBA, then built the 'circuits' in Logisim. Implementing rounding, subnormals, and special values, then testing it all, is quite involved and can really waste a lot of time. I chased 1s and 0s for months. My coding skills are basic, but I got things working well (I think?). Comprehending it mathematically first is the way forward.

RossMcgowanMaths

The explanation is nice and shows why floats are as coarse as they are.

lisamariefan

Can you do a video about rounding and rounding errors?

alen

Multiplication isn't really simpler for floats, though, because multiplying the mantissas is pretty much the same as multiplying two integers. It's just that the extra step (adding the exponents) is almost trivial.

JaccovanSchaik
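To illustrate the split the comment describes, here is a throwaway Python sketch using math.frexp/math.ldexp to separate a float multiply into its two halves. (The mantissa product below is itself done in floating point, purely for demonstration; real hardware does it as an integer multiply.)

```python
import math

def fmul_sketch(a, b):
    """Toy float multiply: multiply the mantissas (an integer multiply
    at heart) and add the exponents; ldexp renormalizes the result."""
    ma, ea = math.frexp(a)   # a == ma * 2**ea, 0.5 <= |ma| < 1
    mb, eb = math.frexp(b)
    return math.ldexp(ma * mb, ea + eb)   # mantissa product, exponent sum

assert fmul_sketch(3.0, 5.0) == 15.0
assert fmul_sketch(0.1, 0.2) == 0.1 * 0.2
```

The exponent addition really is the trivial part; all the hard work (and the rounding) lives in the mantissa product.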

Amazing... A Computerphile video that uses pen and paper to visualize addition, and not a nice CGI... In the year 2019...

enantiodromia

Noob question: when they measure FLOPS (on a computer), are they performing additions, subtractions, or multiplications...?

TheTwick

Even more great stuff. How can I thank you?

brahmcdude

What about infinities and NaNs? Will there be another video?

JakubH

You did the "Double Dabble" video to explain going from a bit representation to a string. Could you do a video explaining how to do it for floating point?

Adamarla
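Not quite Double Dabble, but a related sketch: a hypothetical helper (the name and field handling are my own, not from any video) that expands a 32-bit IEEE 754 pattern into its exact decimal string. This works because every finite binary fraction is also a finite decimal fraction (2 divides 10).

```python
import struct
from fractions import Fraction

def float32_bits_to_exact_decimal(u):
    """Turn a 32-bit IEEE 754 pattern into its exact decimal value as a string."""
    sign = -1 if u >> 31 else 1
    exp = (u >> 23) & 0xFF
    man = u & 0x7FFFFF
    if exp == 0:                         # zero or subnormal: no implicit 1
        frac = Fraction(sign * man, 1 << 23) * Fraction(2) ** -126
    elif exp == 255:
        return "inf/nan"
    else:                                # normal: implicit leading 1
        frac = Fraction(sign * ((1 << 23) | man), 1 << 23) * Fraction(2) ** (exp - 127)
    # Expand to decimal: scale by 10 until the denominator is 1.
    k = 0
    while frac.denominator != 1:
        frac *= 10
        k += 1
    digits = str(abs(frac.numerator)).rjust(k + 1, "0")
    s = "-" if sign < 0 else ""
    return s + (digits[:-k] + "." + digits[-k:] if k else digits)

print(float32_bits_to_exact_decimal(0x3DCCCCCD))
# float32 "0.1" is really 0.100000001490116119384765625
```

Real float-to-string routines don't print all those digits, of course; they search for the shortest decimal that rounds back to the same bits, which would make a nice follow-up video on its own.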