bit-tech.net

Intel announces Optane system requirements

Intel's 3D XPoint-based Optane accelerators are, the company has hinted, just around the corner, but you're going to need a Kaby Lake processor or better to use them.

Intel has revealed the system requirements for running its delayed Optane non-volatile memory devices, and if your system isn't Kaby Lake or newer you're going to be disappointed.

Intel first unveiled Optane in 2015, promising that the technology would launch the year after - a date the company has since missed. Developed in partnership with memory specialist Micron, Optane is an implementation of 3D XPoint technology and an attempt at bridging the gap between fast-but-small volatile DRAM and slow-but-capacious non-volatile mass storage. At the time, Intel claimed Optane would offer performance similar to dynamic RAM (DRAM) but with the benefit of keeping its data when the system is powered off, like a traditional solid-state drive (SSD).

While similar non-volatile memory products have been launched by competing companies, Intel became the first to promise a product range that would benefit not only the server market but also desktop users. During the Intel Developer Forum in 2015, Intel chief executive Brian Krzanich revealed a slide demonstrating that an Optane-equipped desktop PC running a sixth-generation Skylake processor could expect to see performance gains in gaming applications - but now that the company is getting closer to launching the devices, Skylake support appears to have been dropped.

According to a freshly launched microsite detailing the company's plans for the 3D XPoint technology, Optane will require a seventh-generation Kaby Lake processor at minimum. Additionally, anyone looking to add Optane to their system will need an Intel 200-series chipset motherboard with an M.2 type 2280-S1-B-M or 2242-S1-B-M storage connector linked to a PCH-remapped PCI-E controller with two or four lanes and B-M keys meeting the Non-Volatile Memory Express (NVMe) v1.1 specification. Said motherboard will also need a BIOS supporting Intel's Rapid Storage Technology (RST) driver version 15.5 or above.
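
For the curious, here is a rough, purely illustrative Python sketch of what checking for a Kaby Lake part might look like on Linux, using Intel's published family and model numbers; actual Optane enablement also depends on the chipset, BIOS, and RST driver, none of which are checked here.

    # Rough sketch: does /proc/cpuinfo report an Intel family 6 CPU with a
    # Kaby Lake model number? (0x8E mobile, 0x9E desktop; illustrative only,
    # since some later parts reuse these models and 'or newer' isn't handled.)
    KABY_LAKE_MODELS = {0x8E, 0x9E}

    def cpu_looks_like_kaby_lake(cpuinfo_path="/proc/cpuinfo"):
        family = model = None
        with open(cpuinfo_path) as f:
            for line in f:
                key, _, value = line.partition(":")
                key = key.strip()
                if key == "cpu family":
                    family = int(value)
                elif key == "model":
                    model = int(value)
        return family == 6 and model in KABY_LAKE_MODELS

    print("Kaby Lake-class CPU" if cpu_looks_like_kaby_lake()
          else "Optane caching unsupported")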

While Intel is clearly getting ready to launch its Optane modules for the desktop, the company is keeping quiet on exactly when the devices will hit retail and, critically, how much they will cost.

13 Comments

Pookie 21st February 2017, 11:07 Quote
So is it like an SSD and RAM all in one?
Gareth Halfacree 21st February 2017, 11:10 Quote
Quote:
Originally Posted by Pookie
So is it like an SSD and RAM all in one?
It's an SSD that's (supposed to be almost) as fast as RAM. It'll come in two flavours, when it eventually comes: M.2 for PCs and DIMM for servers. The DIMM stuff is designed to effectively replace dynamic RAM in certain applications; the M.2 stuff is designed to act as a cache between RAM and mass storage (although, theoretically, there shouldn't be anything to stop you using it as a plain storage device if you really wanted to.)

The end-game, though, is to basically do away with DRAM altogether and just have 'storage' which acts as both working memory and long-term storage. That'd bring with it some major advantages: it doesn't matter if your computer loses power 'cos when it comes back it's exactly the same as when it went off (assuming the CPU caches are also non-volatile, of course, or there's a battery backup for 'em), you don't have to load anything into memory 'cos it's already in memory (which effectively means zero loading times), and you no longer have to worry about buying enough memory to cope with your working data 'cos all your storage is memory (so that 1TB database you're playing with is now an in-memory database.)
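
If you want a toy feel for that 'storage is memory' idea, here's a minimal Python sketch: a counter that lives in a memory-mapped file, so it's 'in memory' yet survives the process dying. Plain mmap on a disk file is only an analogy - real persistent-memory programming adds cache-flushing and ordering guarantees this doesn't have.

    import mmap, os, struct

    # A counter that lives in a memory-mapped file and so survives the
    # process (and the power) going away. The path is made up for the example.
    PATH = "counter.bin"

    if not os.path.exists(PATH):
        with open(PATH, "wb") as f:
            f.write(struct.pack("<Q", 0))  # 8-byte counter, starts at zero

    with open(PATH, "r+b") as f:
        mem = mmap.mmap(f.fileno(), 8)
        (count,) = struct.unpack("<Q", mem[:8])
        mem[:8] = struct.pack("<Q", count + 1)
        mem.flush()  # push the change back to the backing store
        print("this 'in-memory' counter has survived", count, "previous runs")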

It also does away with one of the major complexities of DRAM: the fact you have to keep refreshing the bloody stuff or it forgets what it's doing. Back in the day we used to use static RAM (SRAM), which was technically volatile in that without power it'd eventually lose what it was doing but didn't need constant refreshing. It was faster than DRAM and could be made sort-of-non-volatile simply by sticking a battery on the PCB (something a lot of PCMCIA SRAM cards used to do). Trouble is, it was also expensive so we switched to DRAM and the joys of constant refresh cycles. Without the need to constantly refresh the contents of RAM, performance will increase and the complexity of a computer will drop - both of which are noble goals.
jb0 21st February 2017, 13:55 Quote
Quote:
Originally Posted by Gareth Halfacree

Back in the day we used to use static RAM (SRAM), which was technically volatile in that without power it'd eventually lose what it was doing but didn't need constant refreshing. It was faster than DRAM and could be made sort-of-non-volatile simply by sticking a battery on the PCB (something a lot of PCMCIA SRAM cards used to do).
Trufax: I have a RAM disk card for my ancient TI home computer covered with a buttload of SRAM ICs and some NiCad AAs for data retention. Solid state drive, 1980s-style. (And next to the floppy drives of the era, it is blazing fast storage)

Also a bunch of old Nintendo carts with slowly-decaying CR2032s soldered in. I should really bust them all open and replace the batteries before they leak...


ALL HAIL THE SRAM!
Gareth Halfacree 21st February 2017, 14:47 Quote
Quote:
Originally Posted by jb0
Trufax: I have a RAM disk card for my ancient TI home computer covered with a buttload of SRAM ICs and some NiCad AAs for data retention. Solid state drive, 1980s-style. (And next to the floppy drives of the era, it is blazing fast storage)
I've got the cheaper version in my Cambridge Computers (which was founded by Uncle Clive after Amstrad bought Sinclair Research) Z88: 32KB EPROM carts. Solid state storage, but you can only write to it once. When it's full, you take it out of the Z88 and put it under a UV light to wipe it entirely, then you can write another 32KB to it. No editing of files you've stored on there (just load 'em then save 'em as FILENAME-2, FILENAME-3 and so forth) and no deleting individual files - and it takes a full half-hour to wipe properly if you're using a period-appropriate eraser (or about five to ten minutes if you're using a modern one).

Total pain in the harris, but it worked.
jb0 22nd February 2017, 07:36 Quote
Actually, the way we historically used our RAM disk, that would've worked well for us. We set it up with a disk image that had a set of commonly-used utilities (the Funnelweb suite, for any 99ers out there), and ran all those from the RAM disk while our user files were all on floppies. And occasionally reloaded the utility image from a floppy because the RAM disk got corrupted.
Burning Funnelweb to EPROM would've avoided the post-corruption reloads.

*looks up the Z88*
And EPROM seems like a sound decision for a portable computer of the era, actually. It definitely avoids the data loss that plagued some other portables when the AAs died.
edzieba 22nd February 2017, 10:09 Quote
With the 200 series chipset always being the requirement, I'd just assumed that Kaby Lake was also a requirement from the start (i.e. if you need to add new hardware to the chipset to support this, you probably need to add it to the CPU too due to the CPU taking over more and more PCH duties).
Gareth Halfacree 22nd February 2017, 10:13 Quote
Quote:
Originally Posted by edzieba
With the 200 series chipset always being the requirement, I'd just assumed that Kaby Lake was also a requirement from the start (i.e. if you need to add new hardware to the chipset to support this, you probably need to add it to the CPU too due to the CPU taking over more and more PCH duties).
Except you appear to have missed the bit where Intel demonstrated Optane running on a Skylake system in 2015, as detailed in the article. So, the decision to make it a Kaby Lake-or-newer exclusive does not appear to be a technological one.
edzieba 22nd February 2017, 12:43 Quote
Quote:
Originally Posted by Gareth Halfacree
Quote:
Originally Posted by edzieba
With the 200 series chipset always being the requirement, I'd just assumed that Kaby Lake was also a requirement from the start (i.e. if you need to add new hardware to the chipset to support this, you probably need to add it to the CPU too due to the CPU taking over more and more PCH duties).
Except you appear to have missed the bit where Intel demonstrated Optane running on a Skylake system in 2015, as detailed in the article. So, the decision to make it a Kaby Lake-or-newer exclusive does not appear to be a technological one.
Except Kaby Lake wasn't available in 2015 to demo with. There's a big difference between demoing a system and having it working in practice (like the lack of memory remapping for general use). When requirements for Octane were first announced at the end of 2015, the 200-series chipset was confirmed but there was no mention of 100-series support.
Cthippo 23rd February 2017, 05:37 Quote
Dumb question here, but how do you reboot this? Currently if your working memory gets screwed up you reboot, which wipes the memory and reloads the OS from storage. If the memory is non-volatile and your computer locks up, how do you wipe it?
Gareth Halfacree 23rd February 2017, 09:40 Quote
Quote:
Originally Posted by edzieba
Except Kaby Lake wasn't available in 2015 to demo with. There's a big difference between demoing a system and having it working in practice (like the lack of memory remapping for general use). When requirements for Octane were first announced at the end of 2015, the 200-series chipset was confirmed but there was no mention of 100-series support.
Optane, not Octane, and my point was that a 200-series chipset wasn't always a requirement, 'cos Intel demoed the platform running on a Skylake board.
Quote:
Originally Posted by Cthippo
Dumb question here, but how do you reboot this? Currently if your working memory gets screwed up you reboot, which wipes the memory and reloads the OS from storage. If the memory is non-volatile and your computer locks up, how do you wipe it?
In the case of Da Future, where volatile memory is no longer a thing, you'd simply use your choice of a journaled filesystem or a copy-on-write filesystem. If everything goes non-linear, just reboot and the system will roll back the corruption (either using the journal or by using the last copy, depending on your chosen filesystem).

My desktop uses both options: the system drive is formatted ext4, which is a journaled filesystem, and the data drive is formatted Btrfs, which is a copy-on-write filesystem. The former basically says "dear diary, today I'm going to modify file foo to say bar" then if it screws up it can look at the journal and say either "oh, it was supposed to say bar, let me fix that" or "everything went wrong, let's make the file say foo again"; the latter says "I'm modifying foo and saving it as bar; foo still exists, but the reference points to bar" then if it screws up it can say "sod it, I'm pointing the reference back to foo again."
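
Here's the copy-on-write half of that in miniature - a hypothetical Python helper that writes the new version in full and then atomically repoints the name at it, so a crash leaves you with either the old file or the complete new one. (Real CoW filesystems do this with block references, not whole files.)

    import os, tempfile

    # Hypothetical CoW-style update: write the new version in full, make it
    # durable, then atomically repoint the name. A crash before os.replace()
    # leaves the old file untouched; after it, you have the new one.
    def cow_update(path, new_contents):
        directory = os.path.dirname(os.path.abspath(path))
        fd, tmp_path = tempfile.mkstemp(dir=directory)
        try:
            with os.fdopen(fd, "w") as tmp:
                tmp.write(new_contents)   # "saving foo as bar"
                tmp.flush()
                os.fsync(tmp.fileno())    # make sure bar hits the disk first
            os.replace(tmp_path, path)    # "pointing the reference at bar"
        except BaseException:
            os.unlink(tmp_path)
            raise

    cow_update("config.txt", "version=2\n")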

The nice part of not having volatile memory any more, though, is that if you reboot your computer now 'cos one application hung the system you lose everything that was in memory at the time; in our theoretical non-volatile PC it'd only have to roll back the thing that got corrupted, so everything else pops back up as though nothing happened.
Paradigm Shifter 23rd February 2017, 10:45 Quote
Quote:
Originally Posted by Gareth Halfacree
The nice part of not having volatile memory any more, though, is that if you reboot your computer now 'cos one application hung the system you lose everything that was in memory at the time; in our theoretical non-volatile PC it'd only have to roll back the thing that got corrupted, so everything else pops back up as though nothing happened.

In an ideal world, yes. Sadly, the world in which we live is far from ideal, and especially with computers, sometimes issues can snowball... ;)

I would guess that to avoid issues like minor corruption (like, say, bit flips) becoming endemic, the error checking/correcting routines would have to be pretty extreme. Otherwise 'random' errors would propagate throughout the system until the whole thing had to be wiped...? (Which, as you point out, is easy on volatile memory as it's a simple power cycle...)
Gareth Halfacree 23rd February 2017, 10:59 Quote
Quote:
Originally Posted by Paradigm Shifter
I would guess that to avoid issues like minor corruption (like, say, bit flips) becoming endemic, the error checking/correcting routines would have to be pretty extreme. Otherwise 'random' errors would propagate throughout the system until the whole thing had to be wiped...? (Which, as you point out, is easy on volatile memory as it's a simple power cycle...)
No more extreme than the error checking and correction that your current SSD has. At least, I hope your data isn't being eaten by random errors propagating through the system!
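
For a feel of how that correction works, here's textbook Hamming(7,4) in Python - four data bits, three parity bits, and any single flipped bit gets repaired. Real SSD controllers use far stronger BCH or LDPC codes over whole pages, but the principle is the same:

    # Textbook Hamming(7,4): encode four data bits with three parity bits,
    # then locate and repair any single-bit error via the syndrome.
    def hamming_encode(d):               # d = [d1, d2, d3, d4]
        p1 = d[0] ^ d[1] ^ d[3]
        p2 = d[0] ^ d[2] ^ d[3]
        p4 = d[1] ^ d[2] ^ d[3]
        return [p1, p2, d[0], p4, d[1], d[2], d[3]]  # positions 1..7

    def hamming_decode(c):
        s1 = c[0] ^ c[2] ^ c[4] ^ c[6]   # parity over positions 1,3,5,7
        s2 = c[1] ^ c[2] ^ c[5] ^ c[6]   # positions 2,3,6,7
        s4 = c[3] ^ c[4] ^ c[5] ^ c[6]   # positions 4,5,6,7
        syndrome = s1 + 2 * s2 + 4 * s4  # points at the flipped position
        if syndrome:
            c = c[:]
            c[syndrome - 1] ^= 1         # repair the single-bit error
        return [c[2], c[4], c[5], c[6]]  # recover d1..d4

    word = hamming_encode([1, 0, 1, 1])
    word[5] ^= 1                          # simulate a random bit flip
    assert hamming_decode(word) == [1, 0, 1, 1]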
jb0 24th February 2017, 11:11 Quote
Quote:
Originally Posted by Cthippo
Dumb question here, but how do you reboot this? Currently if your working memory gets screwed up you reboot, which wipes the memory and reloads the OS from storage. If the memory is non-volatile and your computer locks up, how do you wipe it?

My thought is that there would be selective memory initialization at boot time. Smack reset, and the OS loader comes back and resets all the RAM it considers non-permanent.

Effectively, instead of partitioning a hard disk or flash drive, you would be partitioning your RAM... which takes us right back down memory lane to the good old days, where upgraded systems with a then-abundance of RAM would let you set part of the system RAM up as a (volatile) RAM disk to spare you the pain of slow floppy access times. Because how else could you possibly make use of 128 kbytes of RAM?
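
A toy Python model of that selective initialization, with entirely made-up names and layout:

    # The loader tags regions of non-volatile memory as scratch or
    # persistent; 'rebooting' only reinitializes the scratch regions.
    class NVRam:
        def __init__(self, layout):
            # layout maps region name -> (size_in_bytes, persistent?)
            self.layout = layout
            self.regions = {name: bytearray(size)
                            for name, (size, _) in layout.items()}

        def boot(self):
            for name, (size, persistent) in self.layout.items():
                if not persistent:
                    self.regions[name] = bytearray(size)  # wipe scratch only

    ram = NVRam({"os_working_set": (4096, False), "user_files": (4096, True)})
    ram.regions["user_files"][:5] = b"hello"
    ram.regions["os_working_set"][:4] = b"junk"
    ram.boot()
    assert bytes(ram.regions["user_files"][:5]) == b"hello"  # survived the reboot
    assert not any(ram.regions["os_working_set"])            # got wiped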