Everyone is racing to copy Apple. Here's why.

Apple Silicon is changing how consumer tech is designed thanks to software-defined hardware blocks for heterogeneous compute. The end of the general-purpose CPU is near.

“Can’t innovate anymore, my ass.”

Comments

As a computer engineer who helps build ASICs for a living, I am not used to this level of nuance and understanding in tech YouTube. What a breath of fresh air. Your hypothesis on the need for, and consequent rise of, ASICs and the upcoming tight-loop software hindrance (verticality) is absolutely spot on. Loved this. ❤

DibakarBarua_mattbusbyway

In the beginning, we used ASICs to perform any computation.
Then came reprogrammable circuits, which later became modern CPUs, GPUs, controllers, etc.
In the end, we will see a comeback of ASICs, in the form of application-specific accelerators integrated into our chips more and more.

shapelessed

Having a computer science degree, I was expecting some fluff and inaccuracies, but this is really good!

MakeWeirdMusic

Fun fact: the ML and video accelerator cores are already running in Asahi Linux for development. Even the speakers have been working for almost a year now but were disabled out of safety concerns (fear of blowing them up...). And there has been lots of progress in getting display out to work over USB-C. Asahi is just WILD.

maxkofler

Domain-specific acceleration seems to be the inevitable future of integrated computing, simply because of physics and economics. In the late 1990s we already handed the task of rendering 3D graphics to dedicated ASICs, and over time those GPUs evolved to be much more versatile and programmable while still incorporating domain-specific logic blocks. The cycle of technological evolution repeats in a different tune every now and then, but the general trend is what we see in the mobile market with ever more powerful SoC designs.

Ivan-prku

The Amiga did it first? At the time, though, there were no operating systems robust enough to abstract away these hardware accelerators, so each application had to code to the bare metal, which really painted them into a corner. Each new chipset had to support the previous chipsets, and that just wasn't sustainable.

yarnosh

That's how Nintendo used to design their consoles at one point: figure out what the console should be able to do, and from there decide on the hardware that would let them do what they envisioned.

DavidGoscinny

VERY interesting content. In audio recording there are a lot of boxes by Universal Audio and others that use custom CPUs for custom tasks. They're state of the art, but they're basically no faster than a Core 2 Duo - they're just built for a specific purpose.

budgetkeyboardist

I've felt for at least a decade that software engineers have been letting things slide with regard to efficiency. Even Python itself is basically a usability upgrade to C that pays for itself by losing efficiency. When you see how well third-party code can compress and streamline Windows 11, and when you think about how little productivity gain there has been from Office 95 to 365 despite exponentially higher spec requirements, I think the situation will be improved not only by hardware and software working in closer tandem but also by software engineers working harder to make their code more efficient and streamlined.

I really don't want to go back to the late '80s and early '90s, when the IBM PC, the Macintosh, and the Amiga all used custom hardware to squeeze out unique software experiences.

Crusader

What I worry about is the right to repair. Apple moving the SSD controller into the SoC (and the T2) has been pretty terrible for consumers, as the one component most likely to die is tethered to the Secure Enclave. So far there haven't been any real advantages: hardware-encrypted NVMe already exists, and Apple doesn't have any performance gains to show for it.

dmug

I find it funny how Intel has no plans to ditch x64, which pretty much peaked in 2020, yet ARM64 has yet to reach its halfway point.

PowerOfNod

Not a big issue, _IF_ there's wide adoption of standards-based APIs (Vulkan, OpenCL, etc.).
Great video Quinn, but you missed a part of the story. Apple actually invented OpenCL and made it available as a vendor-agnostic heterogeneous computing platform (in 2009). Years down the track they've now abandoned it, while the rest of the industry supports it. It's a story, just like Metal vs Vulkan/OpenGL, of how Apple has completely shifted gears from open to proprietary frameworks. That mentality is not the Apple I used to know.
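
For reference, the vendor-agnostic model OpenCL was built around looks roughly like this. A minimal C sketch (assuming the OpenCL headers and ICD loader are installed; it simply lists whatever platforms and devices the drivers expose):

```c
#include <stdio.h>
#include <CL/cl.h>   /* on macOS the header was <OpenCL/opencl.h> */

int main(void) {
    cl_platform_id platforms[8];
    cl_uint nplat = 0;
    clGetPlatformIDs(8, platforms, &nplat);
    if (nplat > 8) nplat = 8;

    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devices[16];
        cl_uint ndev = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL, 16, devices, &ndev);
        if (ndev > 16) ndev = 16;

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256] = {0};
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME, sizeof name, name, NULL);
            /* CPUs, GPUs and other accelerators all show up through the
               same API, regardless of vendor. */
            printf("platform %u, device %u: %s\n", p, d, name);
        }
    }
    return 0;
}
```

Build with something like `cc list_cl.c -lOpenCL` (the file name is just an example); the point is that one binary can discover Intel, AMD, NVidia, or Apple devices at run time.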

Toothily

Sadly, companies tend to copy all the bad things from Apple while neglecting what truly sets Apple apart. Acer, for example, copied the single-mouse-button design without matching the quality of Apple's touchpad and touchpad software, and on a thick laptop at that, while Apple only dropped the two-button standard because they kept wanting to make the laptop thinner. Dumb: that single mouse button is a flaw; Apple simply did the rest well enough to get away with it. Another example is the gluing and soldering of RAM and SSDs. That is something you should not copy; most customers don't like it. Apple only gets away with it because of their strong marketing.

peterjansen

Nice video, but I'm not sure everyone is racing to copy Apple's M-series SoCs with dedicated ASICs. Rather, I think the move is to add general-purpose blocks that accelerate those workloads, like AVX for SIMD matrix calculations, and then gradually add more and more cores, like Intel 13th-gen and AMD Zen 5 efficiency cores, to run massively parallel calculations that don't need high clock speeds.
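
As a rough illustration of that kind of general-purpose SIMD acceleration, here is a minimal C sketch using AVX intrinsics (assumes an x86 CPU with AVX and a build flag such as -mavx; the function name is arbitrary and, for brevity, n is assumed to be a multiple of 8):

```c
#include <immintrin.h>  /* AVX intrinsics */
#include <stddef.h>

/* Dot product, 8 float lanes per iteration instead of 1. */
float dot_avx(const float *a, const float *b, size_t n) {
    __m256 acc = _mm256_setzero_ps();
    for (size_t i = 0; i < n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        acc = _mm256_add_ps(acc, _mm256_mul_ps(va, vb));
    }
    /* Horizontal sum of the 8 partial sums. */
    float lanes[8];
    _mm256_storeu_ps(lanes, acc);
    return lanes[0] + lanes[1] + lanes[2] + lanes[3]
         + lanes[4] + lanes[5] + lanes[6] + lanes[7];
}
```

The same general-purpose block accelerates any workload a compiler or library can express as wide vector math, which is the flexibility being contrasted here with fixed-function ASICs.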

As someone who's been in this industry for 30+ years and has seen the ebbs and flows of CPU architectures, my issues with ASICs are obsolescence and compatibility.

Compatibility: the Apple M2 still doesn't support AV1, while NVidia/AMD/Intel GPUs, the Snapdragon 8 Gen 2, and the MediaTek Dimensity 1000 do. The M2 also doesn't support hardware ray tracing and newer shader features, so most modern games wouldn't look as good as they could (if they run at all). And no, you can't emulate those RT cores or that GPU pipeline in the M2's graphics system.

Another ASIC disadvantage is silicon limitation. The image signal processor (ISP) restricts how much processing can be done for a particular image sensor, or in today's context, sensor package. For example, Vivo phones from the X70 onward have a dedicated V1 co-processor alongside the mighty Snapdragon 8 to handle low-light scenes, because the SD8's ISP + AI cores aren't powerful enough. (Most likely the Snapdragon's frame depth is too shallow for frame stacking. The Vivo X70 and later X-series phones all produce incredible low-light photos that are better than the Galaxy S23's and iPhone 14's.)

ASICs are built with a rather inflexible buffer, if they have one at all. Many ASICs use unbuffered buses to connect to other components, like the ISP to the sensors. If the sensor package is not connected to the CPU via buffers, software developers cannot access raw sensor data and can only use data from the ISP (RAW images != raw data). So the capability of the camera is fixed at the design stage of the silicon. ASICs also can't access main memory on their own because they don't have a memory controller. The interconnect between the CPU and the ASICs also costs silicon real estate, which was a major issue with the Infinity Fabric in AMD's Radeon 7000-series GPUs when they moved to an MCM design.

On the flip side, you can decode AV1 video on modern Intel/AMD CPUs, but it takes a comparatively huge amount of power running off the SSE units in the CPU. I wonder if the ARM CPU cores in the M2 can do it.

Old formats like QuickTime, AVI, RealMedia, WMA, etc. will not be playable if the SoC maker takes out the ASIC and doesn't supply compatible libraries. Sure, cross-platform players like MPC/VLC will probably support playback of old formats using the ALU/FPU/DSP, but you lose the advantage of lower power consumption. And no, most people aren't going to transcode their old files to a newer supported format, because transcoding loses image quality and is a hassle in general.

As for emulation? You cannot emulate more powerful hardware and expect playable FPS. So you can emulate old platforms and games, but modern DX12 games on the M2? Forget it. The M2 probably has 1,536 ALUs in its graphics system, compared to the RTX 4090's 16,384 CUDA cores, each with one ALU + FPU.

There's a lot of overhead in x86-to-ARM emulation. Rosetta is fantastic, but there's only so much it can do. For x86 Windows/Linux/Chromebooks it's not even emulation, since it's all x86 + iGPU; it's more like a hypervisor or container, with very little overhead.

So IMO, general-purpose CPUs/GPUs aren't going away, and that's why you don't see any companies in the laptop space following Apple. The advantages of ASICs in the laptop space are minimal. There are many ways to accelerate certain functions in the CPU/GPU without resorting to ASICs, like making SSE/AVX instructions faster: lengthen the registers, add more transistors performing operations on those registers, increase buffer depth, and increase the L1/L2 caches.
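
And because those faster paths aren't available on every chip, software typically detects them at run time rather than hard-coding one. A hedged C sketch of that dispatch pattern, using the GCC/Clang `__builtin_cpu_supports` helper (the `scale_*` names are hypothetical, and real code would also handle lengths that aren't a multiple of 8):

```c
#include <immintrin.h>
#include <stddef.h>

/* Scalar fallback: runs on any x86 CPU. */
static void scale_scalar(float *x, size_t n, float k) {
    for (size_t i = 0; i < n; i++) x[i] *= k;
}

/* AVX path: 8 floats per iteration; built for AVX only in this function. */
__attribute__((target("avx")))
static void scale_avx(float *x, size_t n, float k) {
    __m256 vk = _mm256_set1_ps(k);
    for (size_t i = 0; i < n; i += 8)
        _mm256_storeu_ps(x + i, _mm256_mul_ps(_mm256_loadu_ps(x + i), vk));
}

typedef void (*scale_fn)(float *, size_t, float);

/* Query the CPU once and keep the widest path it actually supports. */
static scale_fn pick_scale(void) {
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx") ? scale_avx : scale_scalar;
}
```

Whether the extra throughput comes from wider registers or more cores, the idea is the same: ship one binary and light up whatever the silicon provides.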

For mobile, ARM licensees will continue to utilise ASICs for power-hungry functions, because their ALUs/FPUs/DSPs are weak due to thermal ceilings. But these SoCs become obsolete in less than 5 years due to emerging technologies. Since battery size is the biggest limiting factor, ASICs make sense because of the reduction in power draw, so I think mobile is the perfect use case for ASICs.

As for laptops, I will never buy an Apple Mac now, because I know that M1/M2 Macs will never be able to run modern DX12 games or handle development in Unreal Engine 5.2. Even if you translate DX12 calls into Vulkan/Metal calls, the hardware just isn't there in the M2 SoC to process all that data.

If you managed to finish reading this, pat yourself on the back! There's much more to say, but this comment is too long already and I've spent too much time. [putting on flame suit]

erictayet

If only Apple open-sourced the M1 drivers :/ (that's never gonna happen, but imagine how much more enjoyable gaming on an M1 would be without much tweaking)

alexlexo

Quality time and time again... Just Brilliant

varunbobba

On Windows, those equivalents could be provided by Intel, AMD, or NVidia, or, in many cases, by a choice of two of them.
They are not binary compatible, but software generally still manages to work on all three, or at least on AMD or NVidia in situations where the Intel option isn't powerful enough. This is because the graphics/ML/etc. libraries the software developers use have versions compatible with all of the hardware options and can transparently detect which one to use at run time or install time (see the sketch below).
For that reason, I wouldn't be too worried about fragmentation in the long term.
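
That run-time detection is reasonably simple with a standards-based API. A minimal C sketch using Vulkan (error handling trimmed; preferring a discrete GPU over an integrated one is just one possible policy, not what any particular library does):

```c
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    /* Create a bare instance so the loader can tell us what hardware exists. */
    VkInstanceCreateInfo ici = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance inst;
    if (vkCreateInstance(&ici, NULL, &inst) != VK_SUCCESS) return 1;

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(inst, &count, NULL);
    if (count == 0) { vkDestroyInstance(inst, NULL); return 1; }
    if (count > 16) count = 16;
    VkPhysicalDevice gpus[16];
    vkEnumeratePhysicalDevices(inst, &count, gpus);

    /* Prefer a discrete GPU (AMD/NVidia) over an integrated one (often Intel). */
    VkPhysicalDevice chosen = gpus[0];
    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(gpus[i], &props);
        printf("found: %s\n", props.deviceName);
        if (props.deviceType == VK_PHYSICAL_DEVICE_TYPE_DISCRETE_GPU)
            chosen = gpus[i];
    }
    (void)chosen;  /* a real library would build its logical device from this */
    vkDestroyInstance(inst, NULL);
    return 0;
}
```

Nothing above names a vendor; that is what keeps fragmentation manageable on the Windows side.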

katrinabryce

Asahi is actually a really interesting case rn, Lina is a fricking genius and programmer god

inlineskr

I predict that within five years, particularly with chips like the M1 Ultra, we’re going to see AI models which can send every possible signal to a piece of hardware, log the response, and write a driver for you as a starting point.

citywitt

This is basically the architecture of the Amiga back in the 80s.
Specialised custom chips galore.
It was so ahead of its time.

DigitalNomadOnFIRE