Why Computers are Bad at Algebra | Infinite Series


The answer lies in the weirdness of floating-point numbers and the computer's perception of a number line.

Tweet at us! @pbsinfinite
Email us! pbsinfiniteseries [at] gmail [dot] com

Previous Episode
Making Probability Mathematical

Written and Hosted by Kelsey Houston-Edwards
Produced by Rusty Ward
Graphics by Ray Lux
Assistant Editing and Sound Design by Mike Petrow

Resources:
Random ASCII

Special thanks to Professor Alex Townsend

In 1994, Intel recalled - to the tune of $475 million - an early model of its Pentium processor after discovering that it was making arithmetic errors. Arithmetic mistakes like this one are often rooted in a computer's unusual version of the real number line.

Comments answered by Kelsey:

Neroox05

Joshua Sherwin

Joshua Hillerup

Comments

I don't think "bad at algebra" is an accurate description of the computer shortcomings discussed in the video.

lllll

So why do you say algebra (in the title) when you mean arithmetic? Computers can do symbolic manipulation just fine assuming they're programmed well.

dWHOHWb

You're saying "computers" can't do this or that, but what you really mean is that the IEEE floating point format can or can't represent certain things. "Computers" can be programmed to count in any way you want, to any precision or scale.

MrTeknotronic
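
As a quick illustration of that point, here is a short Python sketch using the standard library's arbitrary-precision integers and the decimal module (the precision setting is an arbitrary choice for the example):

```python
from decimal import Decimal, getcontext

# Python's built-in integers are arbitrary precision: this is exact, no overflow.
print(2 ** 200)

# The decimal module lets you choose the working precision instead of being
# stuck with the 53-bit significand of an IEEE 754 double.
getcontext().prec = 50          # 50 significant digits (arbitrary choice here)
print(Decimal(1) / Decimal(3))  # 0.33333... to 50 digits

# The classic binary-float surprise does not occur with Decimal:
print(0.1 + 0.2 == 0.3)                                   # False
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True
```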

Summary of all the comments below:

1. Your title should say Arithmetic and not Algebra.

2. Exponents come before the mantissa according to IEEE standards.

zebulongriggs

I did not see anything about algebra in the video. I thought it was going to be about the errors that computer algebra programs still make, sometimes. But despite the misleading title it was an interesting video.

johanrichter

This representation is incorrect.
According to the IEEE 754 standard, the exponent bits come before the mantissa bits.

michaelevans
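
For anyone who wants to check the bit order themselves, here is a small Python sketch (the helper name is made up for this example) that unpacks a 64-bit double into its fields; the 11 exponent bits sit between the sign bit and the 52 mantissa bits:

```python
import struct

def double_fields(x):
    """Split a 64-bit IEEE 754 double into its (sign, exponent, fraction) fields."""
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64 bits, big-endian
    sign = bits >> 63                   # highest bit: sign
    exponent = (bits >> 52) & 0x7FF     # next 11 bits: biased exponent
    fraction = bits & ((1 << 52) - 1)   # lowest 52 bits: mantissa without the implied 1
    return sign, exponent, fraction

print(double_fields(1.0))   # (0, 1023, 0) - the exponent is stored with a bias of 1023
print(double_fields(-2.0))  # (1, 1024, 0)
```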

2:20

I'd like to note that computers can store negative numbers without using floating point. They are stored in 8/16/32/64 bits as signed integers, with the exception that the most significant bit represents the sign (0 for positive, 1 for negative). Positive signed integers work exactly like unsigned ones, but ones that start with 1 are interpreted differently.

If you have an 8-bit integer, 00000010 is 2, but -2 is 11111110. This is called two's complement. It may look strange, but the nice thing about it is that you can do exactly the same operation to change from positive to negative, and from negative to positive: invert all bits and add 1.

aozora
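
A small Python sketch of that invert-and-add-1 rule for 8-bit values (the helper names are invented for the example):

```python
def to_bits(n, width=8):
    """Two's complement bit pattern of n at the given width."""
    return format(n & ((1 << width) - 1), f"0{width}b")

def negate(pattern):
    """Negate a two's complement pattern: invert every bit, then add 1."""
    width = len(pattern)
    inverted = int(pattern, 2) ^ ((1 << width) - 1)
    return format((inverted + 1) & ((1 << width) - 1), f"0{width}b")

print(to_bits(2))          # 00000010
print(to_bits(-2))         # 11111110
print(negate("00000010"))  # 11111110  (and negate("11111110") gives back 00000010)
```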

This video talks about the problems of the IEEE 754 floating-point format, but those are problems of one particular number format, not of computers as a whole. You can store rational numbers as pairs of integers, and with an appropriate data structure those integers can be made as large as you like.

The same goes for fractions: with an appropriate data structure their precision can grow arbitrarily as well.

And finally, these issues have nothing to do with algebra; they are about numerical calculation.

subhoghosal
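
Python's standard library in fact ships exactly this pairs-of-integers representation; a minimal sketch using fractions.Fraction:

```python
from fractions import Fraction

# Each Fraction is a pair of arbitrary-precision integers, so rational
# arithmetic is exact - ten tenths really is one.
tenth = Fraction(1, 10)
print(sum([tenth] * 10) == 1)   # True

# The same sum with binary floating point accumulates rounding error.
print(sum([0.1] * 10) == 1.0)   # False (it comes out as 0.9999999999999999)
```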

LOL! My video player crashed immediately after she said "computers make mistakes too." For a second I thought it might have been part of the presentation.

DavidVitez

Thank you! I find it so hard to explain floats to "normal" people. I'll try to use your video in the future :)

vtechk

Did she say 'Algebra' anywhere in the video?

sweethater

The diagram at 10:22 brings to mind another odd aspect of floating point numbers: the hole at zero.

See the bunch of 3 lines in the middle of the diagram, that burst into 3 trees that stretch up to the number line? If you implement floating point in the most natural, simple way, you only get 2 of them, the outer 2. In between them there is a gap, and the gap is much larger than any of the little gaps in the trees around it.

Some numbers will make it clearer. The smallest positive number that you can represent with an 11 bit signed exponent is 1/1024, which is 9007199254740992 / 9223372036854775808, but the next number you can represent after that is just 9007199254740993 / 9223372036854775808, then 9007199254740994 / 9223372036854775808, and so on. The denominator stays the same for a long time while the numerator increases by 1. So you can see where there is a relatively large gap at zero, surrounded by much smaller gaps.

To get around the hole at zero, floating point standards generally specify that one special value of the exponent is treated as having the same magnitude as the smallest ordinary exponent, but this time the mantissa doesn't have an implied leading 1. That lets us fill in the hole at zero, with small gaps all the way down to zero.

This magic exponent corresponds to the third, middle line in the diagram. It could also have been drawn as two lines, since the other exponents get two lines for the positive and negative parts, but I see what the diagrammer means.

Understanding this device also tells us why floating point has positive and negative zeroes: there are two cases where the exponent has the magic value and the mantissa is zero, and they both have to mean zero. So we get positive and negative zeroes.

Tehom
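
A few of these properties can be poked at directly from Python, assuming the platform's float is an IEEE 754 double (which it is on essentially all current hardware):

```python
import math
import sys

# Smallest positive *normal* double, and the smallest positive subnormal.
print(sys.float_info.min)        # about 2.2250738585072014e-308
print(5e-324)                    # smallest subnormal - these fill the "hole at zero"

# Precision degrades gradually in the subnormal range, until we hit zero itself.
print(5e-324 / 2)                # 0.0

# And the two zeroes: they compare equal, but carry different signs.
print(0.0 == -0.0)               # True
print(math.copysign(1.0, -0.0))  # -1.0
```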

In IEEE notation, the exponent comes before the mantissa. And, as I recall, the Pentium chip had to be recalled because it was giving answers that were wrong even after rounding considerations.

But ultimately the title seems misleading. When I think of algebra, I think of solving for variables; and this video only covered basic arithmetic. The limits of floating-point representation are interesting and all. But it seems it's a different topic.

PvblivsAelivs

This is such a common issue to run into as a programmer: you constantly have to design around direct float comparisons and round values into a range by hand to avoid these problems.

Songfugel
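
A typical work-around, sketched in Python: compare within a tolerance rather than with ==, for example via math.isclose (the tolerances below are arbitrary example values):

```python
import math

a = 0.1 + 0.2
b = 0.3

print(a == b)                            # False: the two values were rounded differently
print(math.isclose(a, b, rel_tol=1e-9))  # True: compare within a relative tolerance
print(abs(a - b) < 1e-12)                # the same idea, written out by hand
```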

It's worth noting that floating point numbers are only one of several ways of representing numbers available to computers. Integers can be stored unambiguously and compared reliably, provided they fall within the available range (which is determined by whether or not the type is signed and by how many bits have been assigned to it). By extension, fixed-point numbers are possible by scaling an integer down by a power of ten.

For example, if you're dealing with money, you might want to use integers scaled down by a factor of 100 instead of floating point numbers in order to avoid the £0.10 * 10 != £1.00 problem. Essentially what you're actually doing is storing the currency in its minor unit, and only converting to major units when it comes time to display it to the user.

TheJamesM
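
A minimal sketch of that minor-unit approach in Python (the variable names and prices are just illustrative):

```python
# Prices kept in pence (the minor unit) as exact integers.
price_pence = 10                     # £0.10
total_pence = 10 * price_pence       # exactly 100
print(f"£{total_pence / 100:.2f}")   # £1.00 - convert to pounds only for display

# Doing the bookkeeping in binary floats can silently lose a penny:
print(int(0.29 * 100))               # 28, because 0.29 can't be stored exactly
```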

@6:47
9,812 has changed a bit since I went to school :p

seanm

I was an engineering student when the Pentium problem came out. I had a CAD program on my PC (with a Pentium) that would occasionally throw "random lines" around. When I traded my processor in to Intel for a replacement, the issue stopped.

AnonymousFreakYT

12:10 - it's not only chip designers who have to worry about floating-point precision. Actually, this is quite important for software developers too, especially those writing scientific software. Even during some of my bachelor physics classes we had to do multiplication by adding logarithms, so that we don't run into these large-number problems.

onlynamelefthere
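
A sketch of that logarithm trick in Python (the factor values are arbitrary, chosen only so that the naive product overflows a double):

```python
import math

factors = [1e250, 1e250, 1e-300]      # the true product is 1e200, well within range

naive = 1.0
for f in factors:
    naive *= f
print(naive)                          # inf - the intermediate value 1e500 overflows

# Add logarithms instead, and exponentiate only once at the end.
log_sum = sum(math.log(f) for f in factors)
print(math.exp(log_sum))              # roughly 1e200
```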

PBS: "Computers are bad at algebra"
Wolfram Alpha: (see profile pic)

johnjohnson

I am surprised and delighted that you decided to explain the nitty-gritty details of floating-point math on your show, as I've met fellow computer programmers who didn't know how floating point numbers were stored and manipulated. It was also one of the clearest and most concise explanations of floating-point I've ever seen. Bravo!

StirlingWestrup