Sophie Wilson - The Future of Microprocessors

Comments

Sophie Wilson is a legend. Nobody else has influenced processor design as much as she did with the ARM. And her quips about Brexit were spot on too.

johnsim

It's funny how this is still super relevant despite being 6 years old.

tangentfox

Superb. I love the ease with which Sophie explains microprocessor matters. This is the clearest explanation of the factors behind the faltering of Moore's law that we've seen over the last decade. Thanks Sophie!

llaith

Yes, Sophie, thank you for an amazing talk. :D

klaxoncow

I recently got back into assembler. I did 6502 as a kid, then 68000, and a step back to Z80 and 8086 at school. When I graduated college I became a developer, but it's all very high-level languages; the lowest I get is C. So as an academic and retro trip down memory lane, I started doing assembler at home again, and for some reason I got tickled by the ARM. I remembered a colleague of mine had a RISC OS system back in 1997, while I had an Alpha (he too moved to Alpha after that). But he wrote Boulder Dash in assembler on his then 6-year-old RISC OS machine, and I remember being really impressed by that OS. It booted from ROM, only some drivers came from disk, and it's all written in assembler. So I started looking into ARM assembler only recently. Alpha was already brilliant in its design and so much easier, but ARM now is almost like a nicer language. I love the instructions' readability and fixed length; it's very predictable. The fixed length can be quirky with immediate assignment, but it's logical. It's really quite an impressive little cheap CPU and easy to program at a low level.

rdoetjes

Still relevant today, a nice video. I grew up writing machine code on the 6502 (not even using an assembler until I wrote one later on... I entered the hex directly in the machine language monitor!). There were many tricks that one could play to get past the 8-bit data width limit by using self-modifying code and the two indirect memory modes. The best thing about the processor was that the instruction timing was 100% deterministic, so you could actually transfer data between two machines simply by synchronizing the two processors at the start and then shoving data out the I/O (and in the I/O on the other system) without any further handshakes. Literally as fast as the CPU could issue the writes.

The part about more of the silicon having to be dark turned out not to be true. Right now it's something like one gate in five (20%) for high-density logic, and the issue is more about signal integrity and not so much about power density. Power density for the chip as a whole is mitigated by the cache-to-logic ratio. The power density of the CPU data and instruction caches is very low, whereas the power density of the compute logic is very high, so the area ratio between the two can be used to regulate the power density of the die as a whole without having to resort to adding tons of dark silicon.

Since modern CPUs need huge caches to operate efficiently, it turns out to work quite well. The early CPUs had no data or instruction caches at all.

-Matt

junkerzn

Great talk! Thanks Sophie Wilson and Erlang Solutions

thrillscience

Coming to this in early 2020, I thought it was going to be boring but it is worth watching through.

paulmilligan

Sophie Wilson, my hat is off. Wonderful talk, full of fascinating insight and history delightfully delivered. My hero (non sexist usage).

martinda

It's great to see she is still going strong.

Frisenette

One thing not mentioned was the issue of EMF radiation within the processor. This causes design issues too.

jburr

Blimey I'm impressed with silicon wafer chips and hello Computer World engaged in amazing Sophie Wilson✔.

garyproffitt

35:00, amazing slide. What I'm missing, though, is cost per instruction per second.

wtthehll

There is one more fundamental problem that comes from quantum mechanics: even if, say, we could get around the thermal issues with better materials (nanotechnology is improving), we would still hit a limit due to tunneling once the gate becomes really narrow.

alexandrugheorghe

Keep it up. More Nostalgia is all we need.

mouseminer

What an uncommonly sensible and intelligent woman! She refuses to fall into the logical error of reification, in this case the reification of Moore's Observation.

The most commonly seen and heard variety of this error is probably the asinine "We ended up in the nice comfortable place we are today because of reversion to the mean." Reversion to the mean is a powerful engine with exactly the same amount of driving force as Moore's "law." None.

She is also a huge example of that olde rule: "There's no limit to what you can do if you don't care who gets the credit." The whole lecture is littered with her generosity to her coworkers, to other innovators, and to people over on the hardware side. What a winner!

TheDavidlloydjones

A bit of a mis-speak there, saying the 6502 32-bit add example takes 26 clock cycles. A rule of thumb for the 6502 is that the execution time in cycles equals the number of bytes of memory read or written, with a few exceptions that take a little more. Those LDA, ADC, and STA instructions are each two bytes of code plus the data byte read or written, and so take no fewer than 3 clock cycles each (in fact exactly three). There are four of each of them, twelve instructions, so that's 36 clock cycles. Plus two for the CLC, for a total of 38 clock cycles, not 26.
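The cycle arithmetic in this comment can be tallied in a quick sketch. This assumes the zero-page addressing mode that the 3-cycles-per-instruction figure implies; the instruction list is a hypothetical reconstruction of the routine being discussed, not the slide's actual code.

```python
# Tally the cycle count of a 6502 32-bit add done byte by byte:
# CLC once, then LDA/ADC/STA (zero-page) for each of the four bytes.
routine = [("CLC", 2)]                 # clear carry: 1 byte, 2 cycles
for byte in range(4):                  # 32 bits = 4 bytes, low byte first
    for op in ("LDA", "ADC", "STA"):   # load, add-with-carry, store
        routine.append((op, 3))        # zero-page: 2 code bytes + 1 data byte = 3 cycles

total = sum(cycles for _, cycles in routine)
print(total)  # 38
```

Twelve 3-cycle instructions plus a 2-cycle CLC gives the 38 cycles claimed, versus the 26 mentioned in the talk.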

BruceHoult

When Wilson says "We'll give you lots of cores but most of them will be turned off to avoid overheating" that's kind of true but it's also failing to join up the dots of some of the other points she made individually. First, you *can* run all the cores if you reduce the voltage and frequency. For example the i9-7980XE can run all 18 cores continuously forever at 2.6 GHz, or one core at 4.4 GHz. Obviously if you do this then 18 cores are not 18 times faster than 1 core .. it's more like 10.6x faster. So why not just lock the thing to 2.6 GHz and call it a 2.6 GHz chip? Well, because Amdahl's law. If you take that 95% parallel, 5% serial task and run it on an 18 core processor at 2.6 GHz then it's 9.7x faster than a single core at 2.6 GHz, or 5.75x faster than a single core at 4.4 GHz. BUT ... if you run the 95% at 2.6 GHz and the 5% at 4.4 GHz then you get 7.16x faster than a single core at 4.4 GHz or 12.12x faster than a single core at 2.6 GHz. In the limit, you *can* get a 20x speedup compared to a single core at 4.4 GHz by adding enough cores for the parallel part, even if you have to run them at 2.0 GHz or even 1.0 GHz.
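The arithmetic in this comment can be reproduced in a few lines. The 95%/5% split and the i9-7980XE clocks (2.6 GHz all-core, 4.4 GHz single-core) are taken from the comment itself; the results land within rounding distance of the quoted 12.12x and 7.16x figures.

```python
def amdahl_speedup(p, n):
    """Classic Amdahl's law: the serial fraction (1 - p) runs on one core."""
    return 1.0 / ((1 - p) + p / n)

p, n = 0.95, 18          # parallel fraction, core count
base, turbo = 2.6, 4.4   # GHz: all-core clock vs single-core turbo clock

# All 18 cores at 2.6 GHz, compared with one core at 2.6 GHz:
print(round(amdahl_speedup(p, n), 2))                 # 9.73

# Same run, but compared with one core turboing at 4.4 GHz:
print(round(amdahl_speedup(p, n) * base / turbo, 2))  # 5.75

# Mixed mode: serial part turbos to 4.4 GHz, parallel part on 18 cores at 2.6 GHz.
# Time is normalized so 1.0 = one core at 2.6 GHz running the whole task.
t_mixed = (1 - p) * base / turbo + p / n
print(round(1 / t_mixed, 2))                          # 12.15 vs a 2.6 GHz core
print(round((base / turbo) / t_mixed, 2))             # 7.18 vs a 4.4 GHz core
```

The point of the mixed-mode line is exactly the one the comment makes: boosting only the serial 5% to the turbo clock lifts the overall speedup well past what either fixed-clock configuration achieves.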

BruceHoult

Very interesting. What I'm taking from this is that, even though hardware has advanced into multi-core, software hasn't followed. Maybe we need a new high-level parallel programming language, so that we could finally get rid of obsolete sequential C++ and take advantage of actually using all of those available cores that we pay so much money for? We can use computers to help us design very efficient hardware; why can't we use computers to help us design a better, more efficient, high-level parallel programming language better suited to current technology? C is a nearly 50-year-old programming language. It equates to using binary toggle switches to program an Altair 8800 in machine language all over again.

AlainHubert

12:35 TIL that RISC is actually Reduced Instruction Set Complexity (not Computer) and that makes more sense! Such a great talk :)

LewisCollard