
Intel teases 1.6Tb/s optical interconnect tech

Intel's upcoming MXC optical interconnect, to be unveiled at next month's IDF, boasts peak transfer rates of an eye-popping 1.6Tb/s.

Intel has announced plans to unveil a new optical interconnect for servers at the Intel Developer Forum in San Francisco next month, boasting a peak transfer rate of 1.6Tb/s.

Designed to replace existing inter-server optical interconnects, the new standard is dubbed MXC, with Intel promising numerous improvements including smaller connectors and, crucially, significantly improved data throughput. While MXC development began two years ago, Intel is only now making it fully public, with a presentation planned for the IDF in September.

Ahead of the unveiling, the company has published a teaser for the presentation which makes some bold claims about the technology's capabilities. Based on Intel's silicon photonics work and glass specialist Corning's latest-generation optical fibre technology, dubbed ClearCurve LW, Intel claims MXC allows for peak transfer rates of 1.6Tb/s over short distances from a compact connector.

High-speed inter-server communication is critical to high-performance computing. Any given HPC system or supercomputer is typically constructed from multiple server nodes operating as a massively parallel cluster - and all the compute performance in the world is of absolutely no use if you can't get the data from one server to another quick enough. Intel has already made significant investments in interconnect technology, picking up Cray's Aries and Gemini interconnect assets and QLogic's InfiniBand business last year alone.

A look at InfiniBand, for which Intel paid an impressive $125 million in cash, gives a clue as to just how advanced MXC really is: the top-end 12x EDR InfiniBand link provides 300Gb/s of throughput, around a fifth of the peak performance promised by Intel's MXC. The technology also has applications in longer-range communications: while the 1.6Tb/s speed requires very short fibre-optic cabling, the company has tested MXC over ClearCurve LW fibres at distances of up to 300m, achieving 25Gb/s.
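
For a sense of scale, here's a quick back-of-the-envelope comparison in Python based on the figures quoted above. Note that the fibre count is an inference from those quoted rates, not an official Intel specification:

# Back-of-the-envelope arithmetic on the peak link rates quoted above.
# The fibre-count figure is inferred, not an official Intel spec.

MXC_PEAK_GBPS = 1600     # 1.6Tb/s quoted for MXC over short runs
IB_12X_EDR_GBPS = 300    # top-end 12x EDR InfiniBand link
LANE_GBPS = 25           # per-fibre rate Intel quotes at 300m

print(f"MXC vs 12x EDR InfiniBand: {MXC_PEAK_GBPS / IB_12X_EDR_GBPS:.1f}x")
# -> 5.3x, i.e. InfiniBand manages around a fifth of MXC's peak

print(f"Implied fibre count at 25Gb/s per lane: {MXC_PEAK_GBPS // LANE_GBPS}")
# -> 64, if the aggregate rate is simply lanes x 25Gb/s

# Time to shift a 1TB (8,000Gb) dataset at each link's peak rate:
for name, gbps in (("MXC", MXC_PEAK_GBPS), ("12x EDR InfiniBand", IB_12X_EDR_GBPS)):
    print(f"{name}: {8000 / gbps:.1f}s per TB")
# -> MXC: 5.0s per TB; 12x EDR InfiniBand: 26.7s per TB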

Sadly, Intel is keeping a few things under wraps for the presentation itself - in particular pricing, the production schedule and when data centre customers are likely to be able to pick up hardware with MXC connections. Thus far, it's also not clear whether Intel will release MXC as a licensable standard for all to use or keep it exclusive to Intel-powered servers - although we'd guess it will opt for the former in order to encourage widespread adoption.

6 Comments

iwod 15th August 2013, 10:43
Can a lower yield of these be used for Optical Thunderbolt? Even 1/20 of it, 80Gbps will do.
edzieba 15th August 2013, 11:08
They're billing it specifically as "The Next Generation Optical Connector", not just as a new interconnect. I'm guessing that they've bonded the laser emitter and receiver to the ends of the fibre itself, to cut transmission losses from the fibre/port interface at each end, and to allow each fibre to be 'tuned' at the factory to compensate for any slight variance in each. If each cable is characterised, then that's going to be a real pain for splicing your own fibre (you'll either need to buy cables already manufactured to the right length, or have the cable reanalysed after every splice), but for that massive increase in data rate companies may happily eat the cost.
ashchap 15th August 2013, 16:44
Quote:
Originally Posted by Article
all the compute performance in the world is of absolutely no use if you can't get the data from one server to another quick enough.

Not necessarily, for example: Folding@Home - Vast compute power, extremely slow to get the data from one server to another.
Gareth Halfacree 15th August 2013, 19:09
Quote:
Originally Posted by ashchap
Not necessarily, for example: Folding@Home - Vast compute power, extremely slow to get the data from one server to another.
True, but that's neither HPC nor supercomputing - that's distributed computing, and only suitable for selected workloads.
Gradius 19th August 2013, 22:18
1.6Tb/s sure is nice, but WAYYY too expensive atm.
ch424 19th August 2013, 23:08
200GB/sec makes remote DMA a reasonable option - this might completely change how people look at HPC server architecture!