Intel Makes Strong Statement on Data Center Front with Sapphire Rapids


Over the past two years, data center computing has been dominated by two storylines: Arm gaining a foothold in the cloud and AMD gaining a foothold everywhere. Yes, Intel still owns the vast majority of the data center, but it certainly doesn’t feel that way. This sentiment is partly due to AMD EPYC’s remarkable run and partly due to the lack of a significant response from Intel. Of course, when AMD first reappeared in 2017, it was understandable that Intel was a little dismissive. But after two successive generational launches that saw AMD nibble away at Intel’s data center share, it started to look like 2005-2006, when AMD gained over 20% market share with Opteron.

2021 has brought a lot of change at Intel. Pat Gelsinger left VMware to return to the company as CEO, and we have started to see green shoots as he gets the business back on track. What Intel showed at its recent Architecture Day and Hot Chips presentations suggests the company is getting back on track in the data center, too.

Intel has always seemed to “get it”

Before going into details on Intel’s Architecture Day disclosures, it’s essential to point out something about Intel that can sometimes get lost. The company understands something very critical to its long-term success. It is this: the applications that will run in our data centers tomorrow will be very different from those running today, and the infrastructure required to run these applications and workloads must be different as well.

Just as Intel supplanted the “big iron” in the data center decades ago, new architectures with different performance and power profiles are making their way into today’s data center. The company that wins the next generation will be the one with the portfolio to support these emerging workloads – and the confidence of enterprise IT to support what comes next.

The efficient core

Intel has introduced two new core architectures as part of its disclosures: the Efficient core (E-core) and the Performance core (P-core). As the names suggest, the Efficient core’s design goals revolve around scalability and density, while the Performance core targets workloads that require the best performance.

Although the Efficient core only seems to show up in Intel’s client (Alder Lake) slides, I can see where this core might fit well with different deployment models, such as large-scale non-cloud environments and some edge instances.

The efficiency gains that Intel demonstrates in its Efficient core are quite spectacular. Intel claims that the Efficient core can deliver 40% more performance than a single Skylake core. And four single-threaded Efficient cores can deliver the same performance as two Skylake cores running four threads – at 80% less power. I like how Intel describes its Efficient core: “built for throughput, enables scalable multithreaded performance for modern multitasking.”

Performance is more than IPC

Intel’s Performance core reveal shows what the company thinks about the performance characteristics of the workloads that will power the future data center. Specifically, there will be requirements covering the scalar, vector, and spatial spaces – and specific microarchitecture design considerations can enable this wide range of workloads.

Intel’s new Advanced Matrix Extensions (Intel AMX) are a critical component in enabling compute-intensive workloads such as machine learning by dramatically increasing the number of instructions per clock cycle per core. It is precisely this type of enablement that demonstrates Intel’s understanding that top performance requires more than solid integer performance. Instead, it’s about solid performance combined with a compute complex capable of supporting the unique demands of the increasingly important workloads in the data center.
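To make the idea concrete, here is a minimal sketch of the tiled INT8 multiply-accumulate dataflow that AMX implements in hardware. This is purely a software model in NumPy – no actual AMX instructions are used – with tile dimensions following AMX’s published limits of 16 rows by 64 bytes:

```python
# Software model of the AMX-style tiled INT8 matmul (illustrative only;
# real AMX uses tile registers and instructions such as TDPBSSD).
import numpy as np

TILE_ROWS = 16   # max AMX tile rows
TILE_K = 64      # max AMX tile width in bytes (64 int8 values)

def tiled_int8_matmul(a, b):
    """Multiply int8 matrices a (M x K) and b (K x N), accumulating in
    int32, one (TILE_ROWS x TILE_K) tile of 'a' at a time."""
    m, k = a.shape
    k2, n = b.shape
    assert k == k2, "inner dimensions must match"
    c = np.zeros((m, n), dtype=np.int32)
    for i in range(0, m, TILE_ROWS):          # walk tiles of rows
        for kk in range(0, k, TILE_K):        # walk tiles of the K dim
            a_tile = a[i:i + TILE_ROWS, kk:kk + TILE_K].astype(np.int32)
            b_tile = b[kk:kk + TILE_K, :].astype(np.int32)
            c[i:i + TILE_ROWS, :] += a_tile @ b_tile  # multiply-accumulate
    return c

rng = np.random.default_rng(0)
a = rng.integers(-128, 128, size=(32, 128), dtype=np.int8)
b = rng.integers(-128, 128, size=(128, 8), dtype=np.int8)
ref = a.astype(np.int32) @ b.astype(np.int32)
assert np.array_equal(tiled_int8_matmul(a, b), ref)
```

The point of the hardware version is that an entire tile-by-tile multiply-accumulate – every iteration of the inner loop above – collapses into a handful of instructions, which is where the per-core throughput gains for ML inference come from.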

With the Performance core, Intel is making some bold performance claims. The company claims a 19% performance improvement over the 11th Gen Intel Core at the same frequency.

Sapphire Rapids – the data center SoC

Sapphire Rapids is the code name for the processor that will follow Ice Lake in the Scalable Xeon roadmap. While we’ve established that Intel’s performance core will be a well-designed foundational part of Sapphire Rapids, there is much more to it.

As you can see in the graphic above, Sapphire Rapids includes four compute tiles interconnected through a high-speed interconnect. Each tile has a complete compute complex (cores, acceleration engines), I/O, and memory. This design should give Intel greater packaging flexibility and should support better performance and performance per watt in the data center.

One might look at the Sapphire Rapids complex and think it looks like other chiplet designs. And from a very high level, it does. What stands out is what Intel integrates into and around the complex to deliver optimal performance. For example, Sapphire Rapids will include two acceleration engines to offload common functions (data streaming, cryptography, and data compression/decompression). These engines offload considerable compute overhead from the cores, enabling faster performance and more balanced workloads – with no application modification and no special architecture required.
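For a sense of the kind of work being offloaded, here is a small, purely illustrative Python sketch using standard-library routines (not any accelerator API) of the bulk compression and cryptographic hashing that would otherwise consume core cycles:

```python
# Illustration only: (de)compression and crypto are the kinds of bulk,
# byte-crunching functions Sapphire Rapids' engines take off the cores.
# Here the CPU does the work via the standard library; the accelerators
# themselves are not modeled.
import hashlib
import zlib

payload = b"data center telemetry record " * 4096  # ~116 KB of repetitive data

compressed = zlib.compress(payload, level=6)       # bulk compression work
digest = hashlib.sha256(payload).hexdigest()       # bulk cryptographic work

ratio = len(payload) / len(compressed)
print(f"compressed {len(payload)} -> {len(compressed)} bytes ({ratio:.0f}x)")
print(f"sha256: {digest[:16]}...")

# Run on the CPU, every byte of this stream passes through the core's
# execution units; a dedicated engine performs the same transform off-core.
assert zlib.decompress(compressed) == payload
```

The transform itself is identical either way; the value of an on-package engine is that the cores never see those bytes, leaving their cycles for the application.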

Two other notable improvements in Sapphire Rapids relate to AI and container support. AMX (described previously) is expected to result in substantial performance improvements for AI workloads (with native support for major frameworks and libraries).

Intel claims up to 69% better container performance over Cascade Lake for Kubernetes workloads. The company attributes this to improved instructions, improved telemetry, and Sapphire Rapids’ use of its acceleration engines.

At the end of the proverbial day, I don’t think many IT pros will wonder whether AMX delivers 8x the performance of AVX-512, or whether Sapphire Rapids’ QuickAssist Technology (Intel QAT) enables a 98% reduction in core utilization for cryptographic functions. But they will notice three things if Sapphire Rapids lives up to Intel’s positioning:

  1. Better performance for data center workloads
  2. More consistent performance for those same workloads
  3. Support for a wider variety of workloads

Closing thoughts

Intel entered the data center as a disruptor. Starting off as a CPU relegated to light tasks like directory services and file/print serving, the x86 architecture seemingly took over the corporate data center overnight (any NetWare fans out there?). The cloudification of the data center has led IT architects to worry less about processor architectures and more about workload-friendly compute platforms. In the age of cloud-native and runtime environments, x86 vs. Arm vs. something else? It doesn’t matter. What matters is performance, consistency of performance, power, price, and security.

I’m writing this because Intel’s Architecture Day seems to indicate that the company understands it has to go back to its innovative roots to win in this processor market. The foundational designs have been revealed, and Sapphire Rapids should position Intel quite competitively. I look forward to the next reveal, when we can learn more about speeds, feeds, and time to market. Stay tuned.

Note: Moor Insights & Strategy writers and editors may have contributed to this article.

Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including 8×8, Advanced Micro Devices, Amazon, Applied Micro, ARM, Aruba Networks, AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Google (Nest-Revolve), Google Cloud, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Ion VR, Inseego, Infosys, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MapBox, Marvell, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesosphere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nuvia, ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Poly, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Residio, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak, SONY, Springpath, Spirent, Splunk, Sprint, Stratus Technologies, Symantec, Synaptics, Synverse, Synopsys, Tanium, TE Connectivity, TensTorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zebra, Zededa, and Zoho, which may be cited in blogs and research.

