iAPX: The first time Intel tried to kill x86

Intel seems to like trying to kill off its most successful line of x86 CPUs. Itanium might be the attempt we all remember, but the first was iAPX.

Let's look at the architecture that was once Intel's future, until it crashed and burnt.

0:00 - Introduction
0:16 - A word from our sponsor
0:43 - The background
1:20 - What iAPX stands for (Intel can't spell)
2:33 - iAPX and high level languages
4:19 - Ada
5:59 - iMAX
6:55 - iAPX 432 (The lemon)
9:00 - The compiler is rubbish too
11:00 - IBM PC to the rescue
13:30 - 80286 the end of the iAPX 432
16:00 - Thanks
Comments

5:23 Ada was designed for writing highly reliable code that could be used in safety-critical applications. Fun fact: the life-support system on the International Space Station is written in Ada, and runs on ancient Intel 80386 processors. So human lives depend on the reliability of all that.

lawrencedoliveiro

The 68000 was a nice chip! It had hardware multiply and divide: an assembly programmer's dream. I know, because I coded in Z-80 and 8080 assembly. Good reporting!

antonnym

4:42 Ada didn't program the Difference Engine: it wasn't programmable (it works by repeatedly calculating with finite differences), and it was only built after her death anyway. What Ada wrote was a program for the Analytical Engine, a more advanced Babbage design that would have been the first programmable computing machine.

sundhaug

Actually, Microsoft didn't write DOS. They bought QDOS from Seattle Computer Products after telling IBM they already had an OS for the 8088.

davidfrischknecht

That programming reminds me very much of a "professional" I used to know; he worked on a Navy base and wrote his own code. Professional, in this case, meant he received a paycheck. The dataset came from COBOL, very structured, each column holding a specific set of codes. He wrote some interpreted BASIC to break down the dataset, and it went like this: first he converted each ASCII character into a number. There are 26 lowercase characters, 26 uppercase characters, and 10 digits (zero through nine), so he then used 62 IF...THEN statements to make his comparison. Mind you, it had to go through ALL 62 IF...THEN statements every time. This was done for several columns of data PER line, so the instruction count really stacked up. Then, based on the output, he'd reconvert the number back to a character or digit using another 62 IF...THEN statements. And there were about 160 columns of data for each line of the report he was processing, with a single report running from a few thousand lines all the way up to a few hundred thousand rows. Needless to say, with all of those IF...THEN statements (not IF...THEN...ELSE) it was painstakingly slow. A small report might take 2 to 3 hours to process; larger reports were an all-day affair, with ZERO output until completion, so most people who ran it assumed the code had crashed.
In comparison, I wrote a separate routine in QuickBASIC from scratch, not even knowing his code existed, using ON...GOSUB and ON...GOTO statements; a complete report took about 1 1/2 minutes, with every tenth line echoed to the screen so you knew it was actually working. Only later did I find out his code already existed, but no one used it because of how long it took to run. Before that, the office in question had 8 to 10 people fully engaged, 5 to 6 days a week, 8 to 10 hours a day, manually processing these reports: reading the computer printouts and transferring the needed rows to another report, by hand, on electric typewriters. Yes, this is what they did... as stupid as that seems...
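For anyone curious about the difference, here is a minimal sketch of the two decoding strategies described above. It is hypothetical Python rather than the original interpreted BASIC or QuickBASIC, but it shows why the no-ELSE IF chain is so slow compared to a single table lookup (the ON...GOSUB trick amounts to the same jump-straight-to-the-answer idea):

    # Hypothetical sketch, not the original code: the originals were
    # interpreted BASIC (slow version) and QuickBASIC (fast version).
    import string

    CODES = string.digits + string.ascii_uppercase + string.ascii_lowercase  # 62 symbols

    def decode_slow(ch):
        """Mimics 62 consecutive IF...THEN tests with no ELSE:
        every comparison runs even after a match is found."""
        result = None
        for index, symbol in enumerate(CODES):  # all 62 tests, for every character
            if ch == symbol:
                result = index
        return result

    # One table lookup replaces the whole IF chain, much as ON...GOSUB
    # jumps straight to the right handler instead of testing in sequence.
    LOOKUP = {symbol: index for index, symbol in enumerate(CODES)}

    def decode_fast(ch):
        return LOOKUP.get(ch)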

MrPirfree

Intel building the iAPX: Oh no, this massively complicated large-die CPU with lots of high-level language support that relies on the compiler being good *sucks*
Intel 20 years later: What if we built a massively complicated large-die CPU with lots of high-level language support that relies on the compiler being good?

Vanders

It wasn't just that the processor returned by value. Values were literally wrapped in system objects, each carrying all sorts of context. Put all this together and a subroutine exit would take hundreds of nanoseconds.

This is when all the system designers who WERE interested in this tell the Intel rep, "This is garbage," and leave the room.

tschak

Ha, great video. I love the hilarious background footage... that tape drive at 2 minutes in looks like it's creasing the hell out of the tape. And lots of PET and TRS-80 footage too!

leeselectronicwidgets

I'm guessing one of the subsequent attempts at Intel shooting itself in the foot would be the 1989 release of the i860 processor. For that one they decided the 432 had been such a disaster that they needed to do the exact opposite: implement a RISC that would make any other RISC chip look like a VAX. So we ended up with a processor where the pipelines and delays were all visible to the programmer. If you issued an instruction, the result would only appear in the result register a couple of instructions later, and for a jump you had to remember that the following sequential instructions were still in the pipeline and would get executed anyway. Context switch state saves were also the responsibility of the programmer. This state included the pipelines (yes, there were several), and saving and restoring them was your job. All this meant that the code to do a context switch to handle an interrupt ran into several hundred instructions. Not great when one of the touted use cases was real-time systems. Again Intel expected compiler writers to save the day, with the same results as before.

On paper, and for some carefully chosen and crafted assembly code examples, the processor's performance was blistering. For everyday use, less so, and debugging was a nightmare.

serifini

The 8086/8088 was a stopgap stretch of the 8-bit 8080, started as a student project in the middle of iAPX development. iAPX was taking too long, and interesting 16-bit devices were being developed by Intel's competitors, so they badly needed that stopgap. A lot of the most successful CPU designs were only intended as stopgap measures while the grand plan took shape. The video doesn't mention the other, and more interesting, Intel design that went nowhere: the i860. Conceptually it had good potential for HPC-type applications, but the cost structure never really worked.

steveunderwood

Back in 1980 I was off to university to do a Computer Science degree, and was building a computer at the same time. I'd been mucking about with digital electronics for a number of years and, with the confidence of a teenager who didn't know whether he had the ability, I had designed several processor cards based on the TMS9900, 6809 and others (all on paper, untested!). I had read about the iAPX 432 in several trade magazines and was quite excited by what I was seeing, although it was mostly marketing. I tried getting more technical data through the university (because Intel wouldn't talk to hobbyists!) but found information thin on the ground, and six months or so later all the news was basically saying what a lemon the architecture was, so I lost interest, as it appears the rest of the world did.
About a decade later my homebrew computer booted up with its 6809 processor. I played with it for a few weeks, then it sat in storage for 20 years, because "real" computers didn't require me to write operating systems, compilers etc :D

IanSlothieRolfe

Intel also used the term iAPX 286 for the 80286, alongside the 432, in sales literature. Intel had intended the 286 for small to medium time-sharing systems (think of the Altos systems) and did not have personal computer use anywhere on the radar. It was IBM's use of it in the AT that changed this strategy.

tschak

Have you ever heard the tragedy of iAPX the Slow? I thought not. It's not a story Intel would tell.

Great video, loved it.

brandonm

One more thing that really hurt the 432's performance was the 16-bit packet bus. This was a 32-bit processor, but it used only 16 lines for everything. Reading a word from memory meant sending the command and part of the address, then the rest of the address, then getting half of the data, and finally getting the rest of the data. There were no user-visible registers, so each instruction meant 3 or 4 memory accesses, all over these highly multiplexed lines.
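To put rough numbers on that, here is a back-of-the-envelope sketch; the four-slot breakdown follows the comment above, but the counts are illustrative assumptions, not datasheet figures:

    # Illustrative arithmetic only, not measured 432 timings.
    TRANSFERS_PER_32BIT_READ = 4  # cmd + addr low, then addr high, data low, data high
    for accesses in (3, 4):       # no user-visible registers: operands live in memory
        print(f"{accesses} memory accesses -> "
              f"{accesses * TRANSFERS_PER_32BIT_READ} bus transfers per instruction")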

jecelassumpcaojr

IBM ultimately chose the 8088 because it meant that the I/O work that had been done on the Datamaster/23 could be lifted and brought over. (No joke, this was the tie-breaker)

tschak

Allegedly the "X" stood for arCHItecture, as in the Greek letter chi. Feels like grasping at straws, though.

ChannelSho

Someone I knew in my area had a type of iAPX development machine. He remarked that its performance was very underrated, and that it had higher IPC than a 386 when coded properly, and he estimated that a single-chip derivative with tweaks could outperform a 486 or 68040 in IPC. Sadly he developed dementia and passed away about 10 years ago, and I have never been able to track down the hardware or what happened to it. He had one of the most exotic collections I had ever seen, and he was a very brilliant man before mental illness overtook him. I only got to meet him at the tail end of his lucidity, which was such a shame.

wishusknight

You should do an episode on the Intel i860 and i960. I remember them being interesting processors, but I never saw anything use them.

MrWoohoo

You forgot the 80186 processor! Never used in the IBM PC. And the 286 had its own huge failure: it couldn't jump back out of protected mode. That needs a reboot!

RonCromberge

Yet again a super interesting video. Thank you. I had no idea that this processor existed.

a