CajunArson - Wednesday, May 30, 2018 - link
Persistent memory event? This live blog will self-destruct in 5 seconds.
A5 - Wednesday, May 30, 2018 - link
So this is the product form of the Optane DIMMs they showed all those years ago? Neat, it'll be fun when I can get one at home in like 5 years :P
HStewart - Wednesday, May 30, 2018 - link
I think this is different from today's Optane stuff. It sounds like the 512GB modules are added to the system's RAM and, under special control, allowed to be persistent - which means a huge multi-GB database could be stored on them and have instant loading and RAM-like speeds.
haukionkannel - Wednesday, May 30, 2018 - link
Yep, these replace the RAM. Very different from normal SSD variants!
nonoverclock - Thursday, May 31, 2018 - link
One clarification: they supplement the memory. This is a new layer of the memory hierarchy.
HStewart - Thursday, May 31, 2018 - link
But they are different from normal RAM - they are persistent - which would be good for larger databases that need faster loading and access.
rocky12345 - Wednesday, May 30, 2018 - link
Would this be why we are getting the likes of Dell now advertising a system with 24GB of memory which is actually 8GB of DDR4 and 16GB of Optane storage, trying to make it look like the system has 24GB of memory installed? There was a write-up about it on PCPer, and he said he had mixed feelings about the way they were advertising the system as having 24GB of memory installed when in fact it was only 8GB, and that it was a bit misleading.
DanNeely - Wednesday, May 30, 2018 - link
This is completely different in how it appears to the OS. The Optane in the Dell is an SSD cache.
It's possible it inspired the scumbag marketeer to start listing it that way, but it's every bit as much lying scumbag marketing as if they were to add the 8GB of DDR4 to the 11GB of GDDR5X on a 1080 Ti and claim 19GB of memory.
rocky12345 - Thursday, May 31, 2018 - link
Yes, you are right. After reading more into it from other sites, as well as AnandTech's write-up here today, I see that it would be impossible for Dell to have this setup in their system. I am thinking some marketing person probably got their facts messed up and just wrote out bad specs, or maybe it was on purpose to confuse customers.
wumpus - Saturday, June 2, 2018 - link
The sad thing is that it would probably work if you just set the drive as a permanent swap file. On the other hand, the type of person who would fall for such a trick probably would see a tiny speed-up.
Don't tell me: the thing also has no SSD. So even 16GB isn't enough to cache the hard drive, all the while having to page virtual RAM through the Optane cache. So it's a disaster as well as false advertising.
sharath.naik - Wednesday, May 30, 2018 - link
Not sure if the industry will adopt this. The whole idea of RAM is fast access at RUN TIME, with the hard drive as permanent storage (5,000 write cycles is manageable, as most writes happen in memory due to transient data). But having a write-cycle life on the RAM does not make any sense, especially one as low as 10,000 writes. Boot-up time for servers has never been an issue since the advent of SSDs.
Seems like a pointless implementation, unless there is no write-cycle limit.
haukionkannel - Wednesday, May 30, 2018 - link
Because these work directly as system memory and are also faster than any SSD solution. They are very useful in situations where you have to handle very large files, or anything that eats a lot of memory - so big that it does not fit in normal RAM. This is much faster than loading that data from any normal storage. Some heavy video editing would be quite optimal for this type of memory, or some very large databases, so that the whole thing would stay in system memory.
CaedenV - Wednesday, May 30, 2018 - link
Think about having a huge database, and how no amount of RAM can really hold it. So instead you keep the active bits of the DB in RAM, and the rest of it on the fastest storage possible. It used to be HDDs, then SSDs, then Optane, and now this. Much cheaper than RAM, and much faster than SSDs, this is what Optane/XPoint was supposed to be from day 1. Buy a server with a 'mere' 128GB of RAM and 512GB of this Optane, and you can save a ton of money while taking virtually no performance hit. And with the ability to quickly back up RAM to the Optane, you can get away with smaller battery backups and easier power-recovery options.
This is what Optane should have been from day 1. There is a real use for this, rather than the BS released with previous Optane SSDs. Glad to see it finally come out, and hopefully they launch it well.
name99 - Wednesday, May 30, 2018 - link
Venues like Flash Memory Summit have been talking about this for years. That's the point of the references to SNIA.
The point is that what you are supposed to get is PERSISTENT RANDOM BYTE access. This has a number of consequences, but most importantly it allows you to design things like databases or file systems that can utilize "in-memory" optimized data structures based on pointers rather than IO-optimized data structures built around 4K atomic units; and to have reads and writes that are pure function calls without having to go through OS (and then IO) overhead.
Technically there are a number of challenges that make this non-trivial; in particular, to ensure ACID transactions, just like with a traditional database, you have to ensure that metadata is written out in a very particular order (so that if something goes wrong partway, the persistent state is not inconsistent), and this means both new cache control instructions needed in the CPU, and new algorithms + data structures needed for the design and manipulation of the database/file system.
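To make the ordering point concrete, here is a minimal sketch in C of the usual flush-then-fence-then-commit pattern - purely illustrative, assuming a CLWB-capable CPU, a platform where flushed lines actually reach the persistence domain, and a record_t struct that is made up for the example:

    #include <immintrin.h>   /* _mm_clwb, _mm_sfence; build with -mclwb */
    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint64_t valid;        /* commit flag: readers trust payload only when set */
        char     payload[256];
    } record_t;

    /* Write back every cache line covering [addr, addr+len). */
    static void flush_range(const void *addr, size_t len) {
        uintptr_t p = (uintptr_t)addr & ~(uintptr_t)63;
        for (; p < (uintptr_t)addr + len; p += 64)
            _mm_clwb((void *)p);
    }

    /* Persist the payload first, fence, and only then persist the commit flag,
       so a crash in between never leaves a "valid" flag over garbage data. */
    void persist_record(record_t *rec, const char *data, size_t len) {
        memcpy(rec->payload, data, len);       /* caller ensures len <= 256 */
        flush_range(rec->payload, len);
        _mm_sfence();                          /* order payload before flag */

        rec->valid = 1;
        flush_range(&rec->valid, sizeof rec->valid);
        _mm_sfence();                          /* record is now committed */
    }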
Now, that's the dream. Because Intel has been SO slow in releasing actual nvDIMMs people are (justifiably) more than a little skeptical of exactly what they ARE releasing and its characteristics. IF Intel were to release what they promised years ago, everything I said above would work just great. You'd have the equivalent of say 500GB persistent storage in your computer with the performance (more or less) of DRAM and, rather than the most simple-minded solutions of using all that space as a cache for the file system, you would use it to host actual databases used by the OS and/or some applications for even better performance.
BUT no-one (sane) trusts Intel any more so, yeah, your skepticism is warranted. WHEN real, decent, persistent DRAM arrives at DRAM speeds and acceptable power levels, what I said above will happen. IS that what Intel is delivering? Hmm. Their vagueness about details and specs makes one wonder...
It certainly SEEMS like they are pushing this as a faster (old-style) file system that's on the DRAM bus rather than the PCI bus, so that it's much closer to the CPU; but they are NOT pushing the fact that you can use the persistence for memory-style (pointer-based) data structures rather than as a file system. (The stuff they call DAX.)
This may just reflect that those (fancier) use cases are not yet ready --- or it may reflect the fact that their write rates and write endurance are not good enough...
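For what DAX access looks like from an application, a minimal sketch (the /mnt/pmem/pool path is made up, and it assumes a filesystem mounted with -o dax on persistent memory) - the point being that after mmap() every access is an ordinary load or store through a pointer, with no read()/write() syscalls or page cache in between:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define POOL_SIZE (64UL * 1024 * 1024)   /* arbitrary size for the example */

    int main(void) {
        /* Hypothetical file on a DAX-mounted pmem filesystem. */
        int fd = open("/mnt/pmem/pool", O_RDWR);
        if (fd < 0) { perror("open"); return 1; }

        /* Map the persistent region straight into the address space. */
        char *pool = mmap(NULL, POOL_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pool == MAP_FAILED) { perror("mmap"); return 1; }

        /* From here on, it's plain loads and stores through a pointer. */
        pool[0] = 'x';

        munmap(pool, POOL_SIZE);
        close(fd);
        return 0;
    }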
peevee - Thursday, May 31, 2018 - link
"The whole idea of ram is fast access at RUN TIME"That WAS the idea of RAM, up to about 20-25 years ago. Since then, CPU speeds and DRAM speeds started diverging more and more, and now performance profile of DRAM looks like the performance profile of HDD 25 years ago (you can read 512B sector quite quickly after a very significant latency).
Real RAM now is the last-level cache - unfortunately, it is as small as RAM used to be 25 years ago, while the requirements of modern, terribly unoptimized software are much higher.
sharath.naik - Wednesday, May 30, 2018 - link
Samsung's RAPID RAM-cached SSD, with some capacitors to push the data to the SSD in case of power failure, will achieve a better solution than this.
rocky12345 - Wednesday, May 30, 2018 - link
Yea, I use Samsung RAPID cache on my Sammy SSD. It works fairly well too; I have not had any problems with it yet... "knocks on wood"... lol
Billy Tallis - Wednesday, May 30, 2018 - link
The whole point of these Optane DIMMs is to offer capacities beyond the amount of DRAM you can afford. NVDIMMs that use DRAM as working memory and backup to NAND flash on power loss are only at 32GB per module and require external capacitor banks. These Optane DIMMs will offer 4-16x the capacity and don't need any supercaps to be non-volatile.
SharpEars - Wednesday, May 30, 2018 - link
What are the price points? Are we talking $3999 per module, here?
Ian Cutress - Wednesday, May 30, 2018 - link
This is looking more like a disclosure than a launch. The launch is in the future, as it requires next-generation Xeons. Still waiting for the Q&A; news is breaking :)
emvonline - Wednesday, May 30, 2018 - link
Another delay, hidden with comments like "select customers" and "we are sampling". They were sampling last year... and the year before.
Whenever it comes out, it will enable huge memory spaces. But since 90% of servers use less than 600GB of memory, I am not sure it is a huge impact. For the 1TB database niche, Optane DC Persistent Memory is perfect. I predict a great niche market with little volume through 2024.
futrtrubl - Wednesday, May 30, 2018 - link
But why do 90% of servers use less than 600GB of memory? Could it be that the limitations of the system limited how they use resources?
Billy Tallis - Wednesday, May 30, 2018 - link
The "select customers" bit isn't a delay, it's how the enterprise storage market works. Big cloud providers that make huge orders get first dibs on the new tech, and the leftovers eventually make it to retailers after there are already large-scale production deployments. That's how the market works for flash-based SSDs, too.emvonline - Thursday, May 31, 2018 - link
The select customers in this case are not the big cloud providers; it's limited volume to small applications. We will know that when Intel announces revenue for Optane DIMMs is <2% of NVM solutions revenue in Q4. It will actually be very close to zero.
tomatotree - Thursday, May 31, 2018 - link
AFAIK most servers have less than 600GB only because above 512GB the cost goes up pretty astronomically, and it's often more cost effective to scale out to more servers than to scale up to an extremely expensive server with huge RAM support + cutting-edge high-density RAM modules. It's driven more by economics than software needs, and the economics is part of what Intel is trying to address here.
emvonline - Thursday, May 31, 2018 - link
If memory is the issue, then a new server is not cheaper than adding more memory (even though DRAM prices are insane). Memory is not usually the main issue, and added servers help.
BTW: Most servers have less than 200GB. 90% are less than 600GB. Current Purley supports 12 DIMMs (2 per channel, 6 channels per CPU).
dark4181 - Thursday, May 31, 2018 - link
FFS, Intel, just join the Gen-Z Consortium.