
Intel and AMD have been the two primary processor companies for more than 50 years now. Although both use the x86 ISA to design their chips, over the last decade or so, their CPUs have taken completely different paths. In the early 2010s, with the introduction of the Bulldozer chips, AMD started losing ground against Intel. A combination of low IPC and inefficient design almost drove the company into the ground. The tables started turning in 2017 with the arrival of the Zen microarchitecture. The new Ryzen processors marked a complete re-imagining of AMD’s approach to CPUs, with a focus on IPC, single-threaded performance, and, most notably, a shift to an MCM, or modular chiplet, design.

It all Started with Zen

First- and second-gen Ryzen played spoiler to Intel’s midrange efforts by offering more cores and more threads than parts like the Core i5-7600K. But a combination of hardware-side issues like latency and a lack of Ryzen-optimized games meant that Intel still commanded a significant performance lead in gaming workloads. Things started to improve for AMD on the gaming front with the introduction of the Zen 2-based Ryzen 3000 CPUs, and Intel’s gaming crown was finally snatched with the release of the Zen 3-based Ryzen 5000 CPUs. A drastic improvement in IPC meant that AMD was able to not only offer more cores, but also match Intel in single-threaded workloads. Intel, meanwhile, continues to do things more or less exactly as it has since the arrival of Sandy Bridge in 2011. Buying into Skylake refresh-refresh-refresh-refresh wouldn’t necessarily net you better framerates.

AMD and Intel have (or used to have) fundamentally different processor design philosophies. Here’s an annoying elementary school analogy that might help you understand the difference. Which one’s more fruit: a watermelon or a kilo of apples? One’s a really big fruit. And the other’s, well, a lot of small fruit. You’ll want to keep that in mind as we take a deep dive in the next section.

Intel Monolithic Processor Design vs AMD Ryzen Chiplets

Intel follows what’s called a monolithic approach to processor design. What this means, essentially, is that all cores, cache, and I/O resources for a given processor are physically on the same monolithic chip. There are some clear advantages to this approach. Since everything is on the same physical substrate, different cores take much less time to communicate, access the cache, and access system memory. All else being equal, the monolithic approach will always net you the best performance. So why move away from it? To answer that, we need to take a quick look at the economics of silicon yields.

Monolithic CPUs Offer Best Performance but are Expensive and…

Strap in: things are going to get a little complicated. When foundries manufacture CPUs (or any piece of silicon, for that matter), they almost never manage 100 percent yields. Yield refers to the proportion of usable parts made. If you’re on a mature process node like Intel’s 14nm+++, your silicon yields will be in excess of 70 percent. The inverse, though, is that for every 10 CPUs you manufacture, you have to discard 2-3 defective units. Those discarded units obviously cost money to make, so that cost has to factor into the final selling price.

At low core counts, a monolithic approach works fine. When you increase core count, though, the monolithic approach results in exponentially greater costs. On a monolithic die, every core has to be functional: if you’re fabbing an eight-core chip and 7 out of 8 cores work, you still can’t use it. This in large part explains why Intel’s mainstream consumer CPU line, until recently, topped out at 4 cores.

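The "exponentially greater costs" point about yields can be made concrete with the classic Poisson yield model, in which the chance of a defect-free die falls off exponentially with die area. Here's a rough back-of-the-envelope sketch — the defect density, per-core area, and wafer cost below are made-up illustrative numbers, not real foundry figures:

```python
import math

def die_yield(area_cm2: float, defects_per_cm2: float) -> float:
    """Poisson yield model: probability that a die of a given area has zero defects."""
    return math.exp(-defects_per_cm2 * area_cm2)

# Illustrative assumptions only (not real foundry data):
DEFECT_DENSITY = 0.5       # defects per cm^2
AREA_PER_CORE = 0.25       # cm^2 of die area per core
WAFER_COST_PER_CM2 = 100.0 # dollars of wafer cost per cm^2 of silicon

def cost_per_good_cpu(cores_per_die: int, dies_needed: int) -> float:
    """Silicon cost of one working CPU built from `dies_needed` identical dies.

    Defective dies are discarded, so their cost is amortized
    across the good ones by dividing by the yield.
    """
    area = cores_per_die * AREA_PER_CORE
    y = die_yield(area, DEFECT_DENSITY)
    return dies_needed * (area * WAFER_COST_PER_CM2) / y

# An 8-core CPU, built two ways:
monolithic_8core = cost_per_good_cpu(cores_per_die=8, dies_needed=1)
chiplet_8core = cost_per_good_cpu(cores_per_die=4, dies_needed=2)

print(f"One monolithic 8-core die:  ${monolithic_8core:.2f}")
print(f"Two 4-core chiplets:        ${chiplet_8core:.2f}")
```

Because yield decays exponentially with area, two small dies waste far less silicon than one big one under these assumptions — which is exactly the bet AMD made with chiplets.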