
Hmm, I've seen a report by Fujitsu that it was fine if used with DRAM:

>Intel Optane persistent memory is blurring the line between DRAM and persistent storage for in-memory computing. Unlike DRAM, Intel Optane persistent memory retains its data if power to the server is lost or the server reboots, but it still provides *near-DRAM performance*. In SAP HANA operation this results in tangible benefits

>Speeds up shutdowns, starts and restarts many times over – significantly reduce system downtime and lower operational costs

>Process more data in real-time with increased memory capacity

>Lower total cost of ownership by transforming the data storage hierarchy

>Improve business continuity with persistent memory and fast data loads at startup

https://sp.ts.fujitsu.com/dmsp/Publications/public/wp-perfor...




It sounds like Intel Optane persistent memory could work here.

Disclosure: I work on Google Cloud.

As I replied in a sub-thread, I think Intel's marketing diagram [1] is probably useful to help separate the Optane flavors. This is about the "near DRAM" variant.

While the blog post highlights running SAP HANA (SAP's in-memory focused database), you can use them for whatever you want. The persistent part is that it's persistent across reboots. The hope is that this might make it easier to have tiered database/caching systems, since the gap between DRAM and this new "memory" is much closer than say DRAM and SSD.

[1] https://newsroom.intel.com/wp-content/uploads/sites/11/2018/...


> This is an interesting option as it allows in-memory databases to be atomically persistent without having to add any code.

It isn't quite that simple. To start with, you need either a cache line flush or a cache line writeback instruction to get your data out to persistent storage. And the Optane DIMMs are currently used in one of two modes: as a transparent expansion of DRAM, or as separately addressed and managed storage. In the DRAM-like mode, the system's actual DRAM is used as a cache, and the Optane storage does not persist across reboots. In the storage mode, you need extra code to access it, though you can treat it as simply a block device instead of using special-purpose persistent memory programming interfaces.
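To make that concrete, here's a minimal sketch (my own, not Intel's reference code) of what the "extra code" path looks like in the storage (App Direct) mode, assuming Linux with a reasonably recent kernel/glibc, a file on a DAX-mounted pmem filesystem (the path is hypothetical), and a CPU with CLWB:

    /* Minimal sketch: durable store to a DAX-mapped pmem file.
       Compile with: gcc -mclwb persist.c */
    #define _GNU_SOURCE /* for MAP_SYNC / MAP_SHARED_VALIDATE */
    #include <fcntl.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <immintrin.h> /* _mm_clwb, _mm_sfence */

    int main(void) {
        /* Hypothetical file on a DAX-mounted pmem filesystem. */
        int fd = open("/mnt/pmem0/example", O_CREAT | O_RDWR, 0644);
        if (fd < 0 || ftruncate(fd, 4096) != 0) return 1;

        /* MAP_SYNC: once flushed from the CPU cache, stores are durable
           without any fsync() or msync() call. */
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                       MAP_SHARED_VALIDATE | MAP_SYNC, fd, 0);
        if (p == MAP_FAILED) return 1;

        strcpy(p, "hello, persistent world");

        /* The store may still sit in a cache line: write it back and
           fence before claiming the data survives a power cut. */
        _mm_clwb(p);
        _mm_sfence();

        munmap(p, 4096);
        close(fd);
        return 0;
    }

In the DRAM-like mode none of this applies, since, as noted above, nothing persists across reboots anyway.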


All of that stuff is really interesting from a technical perspective, but if the persistent memory isn't cheaper than RAM then 95% of machines won't need it, or won't need more than a small buffer of it. It'll be amazing for database servers and almost nobody else.

And Optane failed that test.


I don't think Intel really knew what they were going for with Optane / persistent memory. They had an invention, it had a bunch of interesting stats, and they were hoping someone out there would figure out a use.

Databases seem like the obvious answer. Persistent memory / Optane is much faster than NAND flash... but less dense. It's also slower than DRAM, but denser, so it sits in between.

Whether or not that's useful remains to be seen.


I don't know. I feel like instead of reducing memory tiers, Optane will only add another layer to the stack. It is still 10x slower than DRAM, which sounds quite significant. Besides, I'm not sure persistence is really the most important aspect of storage anyway, especially with modern-day always-on clustered systems.

Yep, but Optane DIMMs / NVDIMMs are not client products; they're on the data center product list. Persistent memory isn't just a textbook concept anymore. It's been realized now.

That is assuming Optane is the only persistent memory solution. Micron and Samsung both have something similar (whether by technological design or by function) in the works.

Optane used to be very attractive when it promised all that performance at a cheaper-than-DRAM price. But now DRAM prices have sunk, and we will have to see whether 2nd-gen Optane delivers what Intel promised.


Don't know specifically about HANA, but any database with high throughput would benefit from high-bandwidth PCIe non-volatile storage. Databases with significant random accesses in their workload would see a large improvement in performance from Optane. Nothing apart from main memory matches Optane's insane random access performance.

ERP applications which use HANA typically do have significant random accesses in their workload; that is why HANA was initially marketed as an in-memory database, since the random access performance of RAM was necessary for high performance. If HANA were just being used in in-memory mode, the speed increase would probably not be that impressive. But if there is non-volatile storage involved, then yes, the speed increase would be impressive.
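If anyone wants to see that access-pattern gap on their own hardware, here's a rough microbenchmark sketch (the device path, block size, and read count are my assumptions, not from the comment above). O_DIRECT bypasses the page cache so the device itself is what gets measured:

    /* Rough sketch: sequential vs random 8 KiB reads with O_DIRECT. */
    #define _GNU_SOURCE /* for O_DIRECT */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <unistd.h>

    #define BLOCK 8192
    #define READS 10000

    static double bench(int fd, int randomize) {
        void *buf;
        if (posix_memalign(&buf, 4096, BLOCK)) /* O_DIRECT needs alignment */
            return 0;
        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < READS; i++) {
            off_t off = (off_t)(randomize ? rand() % READS : i) * BLOCK;
            pread(fd, buf, BLOCK, off);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        free(buf);
        return ((t1.tv_sec - t0.tv_sec) * 1e9
                + (t1.tv_nsec - t0.tv_nsec)) / READS / 1000.0;
    }

    int main(void) {
        /* Hypothetical device; point it at whatever you want to test. */
        int fd = open("/dev/nvme0n1", O_RDONLY | O_DIRECT);
        if (fd < 0) return 1;
        printf("sequential: %.1f us/read\n", bench(fd, 0));
        printf("random:     %.1f us/read\n", bench(fd, 1));
        close(fd);
        return 0;
    }

On NAND flash the two numbers diverge sharply; the claim above is that Optane keeps them close.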


Please forget Optane and just think about Intel persistent memory, if you really care about performance/latency.

Seems like you wanted Optane memory before it was cancelled.

>Intel® Optane™ persistent memory is available in capacities of 128 GiB, 256 GiB, and 512 GiB and is a much larger alternative to DRAM which currently caps at 128 GiB.


SemiAccurate's singing a different tune this year, now that Intel has stated that the Optane DC Persistent Memory modules are warrantied for 5 years regardless of workload. Intel's gotten write endurance up to sufficient levels for use as main memory, though if you use those DIMMs as memory rather than storage, then your DRAM will be used as a cache.

Several of the folks I've talked to who were most excited about Optane (and disappointed about its delayed introduction & market rejection) were interested primarily in the greater per-chip density, with persistence as at best a "nice-to-have".

Take your typical in-memory database: The primary scalability constraint is how much "fast enough" memory you can park close to your CPU cores. Persistence lets you make some faster / broader failure & recovery guarantees, but if those are major application concerns, you probably wouldn't be using an in-memory database in the first place.


Optane is persistent and byte-addressable, with mean latency under 1 microsecond, which is an order of magnitude faster than other SSDs.

I don't think the market has figured out the right use case for Optane yet. The majority of desktop applications won't benefit from lower-latency IOPS, and it's too expensive to use for general-purpose storage on servers. It does make sense for constant-write or seek-heavy applications like database journals, but most databases are optimized for bulk sequential reads/writes and won't take advantage of Optane's byte-addressable storage.

Intel's recently started shipping Optane DIMM modules that act like slow, cheap, high-density RAM. This is an interesting option as it allows in-memory databases to be atomically persistent without having to add any code.
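The journal case mentioned above is easy to sketch: a WAL commit is just a small synchronous append, so the device's sync-write latency is the whole game. A hedged example (the path and record size are my assumptions):

    /* Sketch: WAL-style commits, one durable append per write().
       O_DSYNC makes each write() return only once the data is stable. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <time.h>
    #include <unistd.h>

    int main(void) {
        /* Hypothetical log file on an Optane-backed filesystem. */
        int fd = open("/mnt/optane/wal.log",
                      O_CREAT | O_WRONLY | O_APPEND | O_DSYNC, 0644);
        if (fd < 0) return 1;

        char record[128];
        memset(record, 'A', sizeof record);

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 1000; i++)
            if (write(fd, record, sizeof record) < 0) return 1;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("avg commit latency: %.1f us\n",
               ((t1.tv_sec - t0.tv_sec) * 1e9
                + (t1.tv_nsec - t0.tv_nsec)) / 1000 / 1000.0);
        close(fd);
        return 0;
    }

Run it against an Optane device and a NAND SSD and the per-commit latency difference is exactly what the journal argument is about.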


Yeah, I'm sad to see Optane go. I bought a few DCPMMs myself (and had to upgrade my workstation CPUs to support them) to test out the capabilities and maybe even write a little toy OS/KV-store engine that runs as a multithreaded process in Linux, using mmap'ed Optane memory as its persistent datastore (never got to the latter part). The "do not use mmap for databases" argument would not apply here, as you wouldn't be caching a persistent block device in RAM; the "RAM" itself is persistent, so no writeback or block disk I/O is needed.

Intel discontinued/deprecated Optane before I could do anything really cool with it. But Intel can probably still reuse lots of its cache coherency logic for external CXL.mem device access.

One serious early adopter and enterprise user of Optane tech was Oracle, more specifically Oracle's Exadata clusters (where database compute nodes are disaggregated from storage nodes that contained Optane), connected via InfiniBand or RoCE. And since Optane is memory (memory-addressable), they could skip OS involvement and some of the interrupt handling when doing RDMA ops directly to/from Optane memory located inside different nodes of the Exadata cluster. I think they could do 19-microsecond 8 kB block reads from storage cells, and WAL sync/commit times were also measurable in microseconds (if you didn't hit other bottlenecks). They could send concurrent RDMA write ops (to remote Optane memory) for WAL writes to multiple different storage nodes, so you could get very short commit times (of course, when you need sync commits for disaster recovery, you'd have to pay higher latency).

With my Optane kit I tested Oracle's Optane use in single-node DB mode for local commit latency (measured in a handful of microseconds, where the actual log file "I/O" writes were sometimes even sub-microsecond). But again, if you need sync commits across buildings/regions for disaster recovery and have to pay 1+ ms latency anyway, then the local commit latency advantage somewhat diminishes. I have written a couple of articles about this, if anyone cares: [1][2].

Fast forward a few years: Oracle is not selling Exadata clusters with PMEM anymore, but with XMEM, which is just RAM in remote cluster nodes; by using syscall-less RDMA ops, you can access it with even lower latency.

[1] - https://tanelpoder.com/posts/testing-oracles-use-of-optane-p... [2] - https://tanelpoder.com/posts/testing-oracles-use-of-optane-p...


Optane was a very interesting technology, but I don't think it was as revolutionary as the article makes it out to be. First, Optane does not make the memory hierarchy simpler, as you still need fast RAM to solve the write endurance problem. Second, it is not obvious that Optane would be cost-competitive with SSDs. Third, you don't need a tech like that to implement a persistent memory system like the one described in the article: just use RAM backed by an SSD and a battery/capacitor to flush the dirty pages in the event of power failure. I think Apple does something like this in their newer devices.

This indeed leaves Optane as a limited-use enterprise persistent cache technology.


FYI, Intel Optane Memory is a wildly different product from the one under discussion, which is Optane DC Persistent Memory or Optane DCPMM. The former is a low-capacity consumer SSD caching software solution, and the latter is persistent memory modules in a DIMM form factor.

Persistent memory is going to be a big thing in the cloud computing space in the coming years. Without an answer to Optane, AMD is going to have a tough time competing with Intel.

That's about the same durability as Intel Optane had, so the first thing it could do is replace Optane where it has been used.

Optane did inspire a lot of R&D into persistent data structures, databases, and file systems that started to challenge the traditional model of local memory and persistent storage. IMHO, a few of those projects were a little overoptimistic and used NVRAM as DRAM without many restrictions. For NVRAM to be viable, I think it still needs overprovisioning, wear levelling, memory protection, and transactions, provided by hardware and/or the OS, but not necessarily through traditional interfaces. It is mostly a matter of mapping it CoW via a paging scheme instead of directly, and it will still run at near-DRAM speed.
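The dirty-page-tracking half of such a CoW/paging scheme can be sketched in userspace today (everything below is my own toy illustration, not from any shipping system): map the region read-only, record the page on its first write fault, then unprotect it and let the store retry.

    /* Toy sketch: CoW-style dirty page tracking via mprotect + SIGSEGV.
       A real NVRAM scheme would do this in the OS/hardware, per the above. */
    #define _GNU_SOURCE
    #include <signal.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define REGION (1 << 20)
    #define PAGE   4096

    static char *base;
    static unsigned char dirty[REGION / PAGE];

    static void on_fault(int sig, siginfo_t *si, void *ctx) {
        (void)sig; (void)ctx;
        char *p = (char *)((uintptr_t)si->si_addr & ~(uintptr_t)(PAGE - 1));
        if (p < base || p >= base + REGION) _exit(1); /* real crash: bail */
        dirty[(p - base) / PAGE] = 1;                 /* record the write... */
        mprotect(p, PAGE, PROT_READ | PROT_WRITE);    /* ...then allow it */
    }

    int main(void) {
        base = mmap(NULL, REGION, PROT_READ,
                    MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED) return 1;

        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_sigaction = on_fault;
        sa.sa_flags = SA_SIGINFO;
        sigaction(SIGSEGV, &sa, NULL);

        base[0] = 'x';        /* each store faults once, then retries */
        base[PAGE * 3] = 'y';

        for (unsigned i = 0; i < REGION / PAGE; i++)
            if (dirty[i]) printf("page %u dirty\n", i);
        return 0;
    }

Point the dirty-page set at a copy-out/commit step and you have the skeleton of the CoW mapping described above, with the hardware/OS version presumably doing the same bookkeeping without the signal round-trip.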

