Why Intel discontinued its Optane memory business

Analysis Intel CEO Pat Gelsinger has confirmed that Intel is exiting its Optane business, ending the chipmaker's attempt to create and promote a tier of memory slightly slower than RAM but with the benefits of persistence and high IOPS.

The news, however, shouldn't come as a surprise. The division has been on borrowed time since Micron's 2018 decision to end its 3D XPoint partnership with Intel, followed by the sale of the factory where the 3D XPoint chips that go into Optane drives and modules were made. While Intel signaled it was open to using third-party foundries, without the means to produce its own Optane silicon the writing was on the wall.

As our sister site Blocks & Files reported in May, the fab sale came only after Micron had saddled Intel with an excess of 3D XPoint memory, more than the chipmaker could sell. Estimates put Intel's inventory at around two years' worth of supply.

In its poor second-quarter earnings report, Intel said the wind-down of Optane will result in a $559 million inventory write-down. In other words, the company is giving up on the project and writing off the inventory as a loss.

The move also marks the end of Intel's SSD business. Two years ago, Intel sold its NAND flash production and SSD operations to SK hynix in order to focus its storage efforts on Optane.

Announced in 2015, 3D XPoint memory made its debut in Intel's Optane SSDs two years later. Unlike rival NAND-based SSDs, however, Optane drives could never compete on capacity or cost. What the devices did offer was some of the highest I/O performance on the market, a quality that made them particularly attractive for latency-sensitive applications where raw IOPS mattered more than throughput. Intel said its PCIe 4.0-based P5800X SSDs could achieve up to 1.6 million IOPS.
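
For a sense of what "latency-sensitive" means here, below is a rough sketch in C of a queue-depth-1 random 4K read loop, the access pattern where Optane drives stood out. The /dev/nvme0n1 device path is a placeholder, and this is generic Linux code rather than anything Intel-specific.

/* Rough sketch: queue-depth-1 random 4K reads against a block device.
   Needs root to open the raw device; the path below is a placeholder. */
#define _GNU_SOURCE            /* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

int main(void) {
    const size_t blk = 4096;                    /* 4K, the usual IOPS block size */
    const int iters = 10000;
    int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);   /* placeholder path */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, blk, blk) != 0) return 1;    /* O_DIRECT needs alignment */

    off_t span = lseek(fd, 0, SEEK_END);                  /* device size in bytes */
    if (span < (off_t)blk) { fprintf(stderr, "device too small\n"); return 1; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < iters; i++) {
        off_t off = ((off_t)rand() % (span / (off_t)blk)) * (off_t)blk;
        if (pread(fd, buf, blk, off) != (ssize_t)blk) { perror("pread"); break; }
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double us = ((t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec)) / 1e3;
    printf("average QD1 4K read latency: %.1f us\n", us / iters);
    free(buf);
    close(fd);
    return 0;
}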

Intel also used 3D XPoint in its Optane persistent memory DIMMs, launched most notably alongside its second- and third-generation Xeon Scalable processors.

From a distance, Intel's Optane DIMMs looked no different from ordinary DDR4, aside, perhaps, from their heatsinks. Look a little closer, though, and the modules came in capacities far in excess of what is possible with DDR4 memory today. Capacities of 512GB per DIMM were not uncommon.

The DIMMs were installed alongside standard DDR4 and enabled a number of new use cases, including a tiered memory architecture that was essentially transparent to the operating system and software. Deployed this way, the DDR memory was treated as a large level-4 cache, with the Optane memory acting as system memory.

While it didn't offer performance comparable to DRAM, the approach made it possible to run very large, memory-hungry workloads, such as databases, at a fraction of the cost of an equivalent amount of DDR4, and without requiring software customization. That was the idea, anyway.
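
That transparency is the key point: in what Intel called Memory Mode, an application allocates memory exactly as it always has, and anything that spills out of the DRAM cache simply lands on the Optane media. A minimal sketch, with an arbitrary 64GiB allocation standing in for a working set bigger than the installed DRAM:

/* Minimal sketch: in Memory Mode no pmem-specific API is needed.
   The 64GiB figure is an arbitrary example, not an Intel limit. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t len = 64UL << 30;            /* 64GiB, larger than typical DRAM */
    char *buf = malloc(len);            /* ordinary allocation, no special calls */
    if (!buf) { perror("malloc"); return 1; }

    memset(buf, 0xAB, len);             /* touching pages forces real placement */
    printf("allocated and touched %zu GiB\n", len >> 30);

    free(buf);
    return 0;
}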

Optane DIMMs could also be configured to act as a high-performance storage device, or as a combination of storage and memory.
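
In the storage-style configuration, which Intel branded App Direct mode, the modules are typically exposed as a DAX-capable filesystem that applications can map straight into their address space. A rough sketch, assuming a persistent-memory filesystem is already mounted at the placeholder path /mnt/pmem:

/* Rough sketch: memory-mapping a file on a DAX-mounted pmem filesystem.
   /mnt/pmem is an assumed mount point; production code would typically use
   MAP_SYNC or Intel's PMDK libraries for fine-grained persistence. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    const size_t len = 1 << 20;                    /* 1MiB region */
    int fd = open("/mnt/pmem/example", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, (off_t)len) != 0) { perror("ftruncate"); return 1; }

    /* With a DAX mount, loads and stores bypass the page cache and go to
       the persistent media directly. */
    char *pmem = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

    strcpy(pmem, "hello, persistent world");
    msync(pmem, len, MS_SYNC);                     /* flush writes to media */

    munmap(pmem, len);
    close(fd);
    return 0;
}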

And now?

While DDR5 promises to address some of the capacity challenges that Optane persistent memory set out to solve, with 512GB DIMMs on the roadmap, it is unlikely to be competitively priced.

DDR isn't getting cheaper, at least not quickly, but NAND flash prices are plummeting as supply outstrips demand. Meanwhile, SSDs are getting faster, fast.

Micron this week began mass production of 232-layer NAND that will push consumer SSDs into 10GB/sec-plus territory. That's still not fast or low-latency enough to replace Optane for large in-memory workloads, analysts tell The Register, but it's getting awfully close to the 17GB/sec offered by a single channel of low-end DDR4.

So if NAND isn't the answer, then what? Well, there is actually an alternative to Optane memory on the horizon. It's called Compute Express Link (CXL), and Intel has already invested heavily in the technology. Introduced in 2019, CXL defines a cache-coherent interface for connecting CPUs, memory, accelerators, and other peripherals.

CXL 1.1, which will ship alongside Intel's long-delayed Sapphire Rapids Xeon Scalable processors and AMD's fourth-generation Epyc Genoa and Bergamo processors later this year, allows memory to be attached directly to the CPU over a PCIe 5.0 link.

Vendors including Samsung and Marvell are already planning memory-expansion modules that slot into PCIe much like a GPU and provide a large pool of additional capacity for memory-intensive workloads.

Marvell's acquisition of Tanzanite this spring will also allow the vendor to offer memory capabilities along similar lines to Optane.

And because the memory is managed by a CXL controller on the expansion board, older, cheaper DDR4 or even DDR3 modules can be used alongside modern DDR5 DIMMs. In this regard, CXL-based memory tiering may prove superior, since it doesn't rely on a specialized memory technology like 3D XPoint.
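
One practical consequence, at least on Linux, is that CXL expansion memory is generally expected to show up as a CPU-less NUMA node, so existing NUMA tooling can handle the tiering. A rough sketch using libnuma, where node 1 is an assumed ID for the expander (link with -lnuma):

/* Rough sketch: steering a large, colder buffer onto a specific NUMA node,
   the way an application might target CXL-attached expansion memory.
   Node 1 is an assumed node ID used here for illustration only. */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "NUMA is not available on this system\n");
        return 1;
    }
    int node = 1;                        /* assumed ID of the expansion-memory node */
    size_t len = 256UL << 20;            /* 256MiB of "cold" data */

    char *cold = numa_alloc_onnode(len, node);
    if (!cold) { fprintf(stderr, "allocation on node %d failed\n", node); return 1; }

    memset(cold, 0, len);                /* touch the pages so they are actually placed */
    printf("placed %zu MiB on node %d\n", len >> 20, node);

    numa_free(cold, len);
    return 0;
}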

VMware, meanwhile, is evaluating software-defined memory tech that lets one server share its memory with other boxes, an effort that will be far more powerful if it can lean on a standard like CXL.

However, replicating some aspects of Intel's Optane persistent memory may have to wait until the first CXL 2.0-compatible CPUs, which add support for memory pooling and switching, arrive on the market. It also remains to be seen how software will interact with CXL memory modules in tiered-memory applications. ®
