
Rumour Control: Intel's Light Peak to ditch light

Intel's Light Peak will launch early next year, but using copper rather than optical cabling.

Intel might be betting heavily on its Light Peak technology, but it looks like initial versions of what senior fellow Kevin Kahn famously described as 'the last cable you'll ever need' won't actually use light at all.

That's the claim being put forward by an unnamed 'industry source' familiar with Intel's plans for its ultra-fast interconnect technology, who told CNET that initial versions of Light Peak will be based on copper cabling rather than fibre-optic.

Interestingly, the source's comments, which Intel has neither confirmed nor denied, indicate that the shift away from optical networking to traditional electrical interconnects won't affect Light Peak's planned speed, which is still set at 10Gb/sec in each direction.

Despite the move to copper cabling, Light Peak is still reported to be on track, with a launch expected in the first half of 2011. Both Sony and Apple are believed to be planning devices based around Light Peak technology, although it's not known whether the purported systems will include USB 3 alongside Light Peak.

Do you think that Light Peak is sounding more and more unlikely to be the killer cable that Intel is clearly hoping for, or will you be watching the launch with interest in the hope of 10Gb/sec external storage devices? Share your thoughts over in the forums.

24 Comments

mi1ez 13th December 2010, 15:59 Quote
Well there goes backwards compatibility then!
Fordy 13th December 2010, 16:10 Quote
I'm more worried about 10Gbsec^-1 internal storage devices, before external storage devices.

Give me a Light Peak SSD any day. Note "give", no way I'll pay the crazy high price tag for that!
HourBeforeDawn 13th December 2010, 16:11 Quote
ya well when I see it and it works well then I will be impressed.
Lazy_Amp 13th December 2010, 16:14 Quote
"Initial" Versions?

This is probably to shorten the gap between USB3 and LightPeak, and because customers started calling. But how can you manage backwards compatibility when one uses electrical connections and the other is light?

LightPeak 2 anyone? Now with actual optics!


... on another note, was there ever supposed to be a power supply connection on the originally planned lightpeak so that the target device didn't need external power connectors?

Anyway I am simply inquisitive and skeptical.
play_boy_2000 13th December 2010, 17:40 Quote
I don't mean to make a '640K ought to...' type statement, but I fail to see exactly what Intel intends to accomplish here. In the next decade, I can see the need for internal speeds of 10Gbit/s (for SSDs); however, I would think that SATA has one more revision in it, which should push it into the 1 Gigabyte/s range. When I stop to think about it, that's pretty much the same speed as main memory (PC133) a decade ago.

External?
eSATA at 1.5Gbit/s is more than enough, and USB3 will be nice if SSDs ever challenge HDDs on $/GB.
I just finished getting all the devices in my house onto GigE ports. I'll look at 10GigE in five years or so.

Where does that leave Light Peak?
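To put rough numbers on the comparison above, here is a minimal Python sketch using headline peak rates only; the doubled "next SATA revision" is purely a hypothetical, and encoding overheads are simplified.

```python
# Back-of-the-envelope peak-bandwidth check (bytes per second).
GB = 1e9

pc133_sdram = 133e6 * 8            # 133 MHz x 64-bit bus          -> ~1.06 GB/s
sata_6g     = 6e9 * 8 / 10 / 8     # 6 Gbit/s line rate, 8b/10b    -> 0.60 GB/s
sata_next   = 12e9 * 8 / 10 / 8    # hypothetical doubled revision -> 1.20 GB/s
light_peak  = 10e9 / 8             # 10 Gbit/s as quoted above     -> 1.25 GB/s

rates = [("PC133 SDRAM", pc133_sdram), ("SATA 6Gb/s", sata_6g),
         ("Hypothetical next SATA", sata_next), ("Light Peak", light_peak)]
for name, rate in rates:
    print(f"{name:<24}{rate / GB:5.2f} GB/s")
```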
Lazy_Amp 13th December 2010, 18:20 Quote
Honestly, I'd be more interested in lightpeak as a fast interconnect onboard between chips on a motherboard rather than external storage. Or perhaps even CPUs in a server environment.

But then again, there's no reason to link it to main memory, since work is being done to stack memory dies on top of CPU dies... making caches in the gigabyte range.

Rambling now. Just odd really.
Yslen 13th December 2010, 19:46 Quote
Presumably the speed they're intending to get is limited by something other than the medium used to shift the data, so someone pointed out they could just use copper wire and save piles of cash. That's my interpretation of this, anyway.
Deadpunkdave 13th December 2010, 19:57 Quote
Quote:
Originally Posted by Fordy
I'm more worried about 10Gbsec^-1 internal storage devices, before external storage devices.

Give me a Light Peak SSD any day. Note "give", no way I'll pay the crazy high price tag for that!

There's nothing inherently expensive involved in LightPeak, whether or not a fibre-optic cable is used.
Quote:
Originally Posted by Yslen
Presumably the speed they're intending to get is limited by something other than the medium used to shift the data, so someone pointed out they could just use copper wire and save piles of cash.

The only cost to Intel is the R&D and they won't waste the work they've already done. If the non-optical version is launched then it just means a delay so that they can launch the optical version at a higher bit-rate. The plan was always to develop the standard far beyond the initial launch anyway. Launching a copper cable version will just mean that they can get the controllers and ports into products before they release an optical standard that they're happy with.
play_boy_2000 13th December 2010, 21:19 Quote
Quote:
Originally Posted by Lazy_Amp
Honestly, I'd be more interested in lightpeak as a fast interconnect onboard between chips on a motherboard

PCI Express 3.0 is already 8Gbit/s for just one lane, never mind x16.
Quote:
Originally Posted by Lazy_Amp
Or perhaps even CPUs in a server environment.

InfiniBand already has that covered.
Quote:
Originally Posted by Lazy_Amp
But then again, there's no reason to link it to main memory, since work is being done to stack memory dies on top of CPU dies... making caches in the gigabyte range.

Interesting...
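For anyone checking the PCIe figure quoted above, a quick sketch of the per-lane maths (8 GT/s with 128b/130b encoding) and the x16 total:

```python
# PCIe 3.0 per-lane and x16 throughput from the raw transfer rate.
raw_gt_per_s = 8.0                         # PCIe 3.0: 8 GT/s per lane
usable_gbit  = raw_gt_per_s * 128 / 130    # 128b/130b encoding overhead

print(f"x1 : {usable_gbit:6.2f} Gbit/s (~{usable_gbit / 8:5.2f} GB/s)")
print(f"x16: {usable_gbit * 16:6.1f} Gbit/s (~{usable_gbit * 16 / 8:5.1f} GB/s)")
```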
aron311 14th December 2010, 01:06 Quote
I just want Intel to stop stalling on USB3. It's becoming a joke!
Cthippo 14th December 2010, 01:57 Quote
It's going to be like Firewire. Theoretically better than USB, but never really goes anywhere.
Dazza007 14th December 2010, 04:21 Quote
I think the general PC user might be attracted to the optics of Light Peak if Intel markets it properly; they'll see it's completely different to USB, and the speed gains that come with it will be the reason to buy. If Intel sticks to copper, people would probably relate it to the USB name and buy that instead; the average user knows nothing about FireWire, but everybody knows about USB. Perhaps they're talking about Light Peak using copper wire as the power source for the optics and somebody's put 2+2 together but isn't very good at maths.
Altron 14th December 2010, 04:41 Quote
Moving to fiber for 10gbit is just silly.

You can do that in copper just fine, and without needing any expensive transceivers. Latency is probably improved as well.

What tends to go over people's heads is that the huge advantage of fiber is very low attenuation and good resistance to interference, not speed or bandwidth. You can move a lot of data very quickly over copper. The only issues with copper arise when you get to long distances and have to fight attenuation and interference. At a couple inches, copper can move hundreds of gigabits per second, as seen in PCIe 3.0 speeds. At a hundred meters, you'll need good shielded cable to do 10 gigabits. At a kilometer, forget about it. Fiber won't even notice a kilometer. You need to get into the hundred km range before a significant amount of attenuation occurs.

But at the consumer level, dealing with a normal sized room, there's minimal benefit to choosing a fiber interconnect over a copper interconnect. It's just a whole lot of extra expense for little gain.
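To put ballpark numbers on the attenuation argument above, here is a minimal sketch; the two coefficients are rough illustrative values (shielded copper cabling at 10GbE-class frequencies, single-mode fibre at 1550nm), not a claim about any specific cable.

```python
# Illustrative only: signal loss grows linearly with distance, but the
# per-metre coefficient differs by orders of magnitude between media.
COPPER_DB_PER_M = 0.45     # roughly 45 dB per 100 m of shielded twisted pair
FIBRE_DB_PER_M  = 0.0002   # roughly 0.2 dB per km of single-mode fibre

for metres in (2, 10, 100, 1000, 100_000):
    print(f"{metres:>7} m: copper ~{COPPER_DB_PER_M * metres:9.1f} dB"
          f"   fibre ~{FIBRE_DB_PER_M * metres:6.2f} dB")
```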
fluxtatic 14th December 2010, 07:20 Quote
Quote:
Originally Posted by Cthippo
It's going to be like Firewire. Theoretically better than USB, but never really goes anywhere.

To an extent, yes. Apple shot themselves in the foot with Firewire, wanting a ridiculously high royalty ($1/port or so). Motherboard manufacturers didn't bite (aside from high-end boards targeted at bleeding-edgers and DAW folks), since the royalty on USB is $0 (http://www.usb.org/developers/usb20/faq20/).

LightPeak may come on some enthusiast boards, but I doubt we'll ever see wide-spread adoption, especially after Intel pissed off every motherboard manufacturer on the planet by making them go off the farm for USB 3.0 chips. Intel was hoping to have everyone on board with their shiny new tech, but now that USB 3.0 is here (and backwards-compatible), there doesn't seem to be a lot of point to LightPeak, especially if it isn't released as an open standard. So far, I haven't seen any indication that Intel intends to do that.
Snips 14th December 2010, 07:32 Quote
Could this be thought of as IDE-to-SATA 1 being like USB3-to-Light Peak?

How long has it taken to remove IDE ports from motherboards?

There's clearly room for one more, and no one knows how much the royalty is. If Sony and Apple are interested, then with them being THE two high-end (expensive) brands, maybe there is something in this Intel technology.
Altron 14th December 2010, 08:49 Quote
Think of it like going to a place that's 200 miles away. You could drive there in 4 hours at 50mph. Or, you could drive a half hour to the airport, spend a half hour going through security, a half hour taxiing on the runway, an hour in the air at 200mph, a half hour getting off the plane, a half hour getting your baggage, and a half hour driving another car to the destination.

You think, well, airplanes are a lot faster and higher-tech than cars, so the second way is better. But it's not. It took just as long. The actual transmission was more efficient, but the amount of intermediate steps required was higher, so there was not a difference in overall time, and it was more expensive.

That's what using fiber for 10 meter or less connections like peripherals and stuff is like.

Now, let's change it up. What if your destination is 2,000 miles away? In that case, it would take 40 hours to drive there. Or, it would take 10 hours to fly there. The amount of time it takes to get to the airport and take off is the same regardless of how far you fly, 1.5 hours, as is the amount of time it takes to get out of the airport (1.5 hours), so your total time is 13 hours, which is much faster. Now flying is a much more appealing option. Even though the plane ticket is more expensive, the plane can fly nonstop. If you were to drive, you'd get tired and need to stop and rent a hotel room part of the way on the journey, which would be more expensive than just flying.

That's what using fiber for 100 meter or longer connections is like.
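The analogy translates directly into a fixed-overhead-plus-rate formula; this sketch just reproduces the trip numbers above, with the airport steps standing in for the electrical-to-optical conversion at each end of a fibre link.

```python
# total time = fixed overhead + distance / speed
def trip_hours(miles, mph, fixed_overhead_h=0.0):
    return fixed_overhead_h + miles / mph

for miles in (200, 2000):
    drive = trip_hours(miles, mph=50)
    fly   = trip_hours(miles, mph=200, fixed_overhead_h=3.0)  # airport steps, both ends
    print(f"{miles:>5} miles: drive {drive:5.1f} h, fly {fly:5.1f} h")
```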
Boogle 14th December 2010, 09:53 Quote
Quote:
Originally Posted by Altron
Think of it like going to a place that's 200 miles away. You could drive there in 4 hours at 50mpg. Or, you could drive a half hour to the airport, spend a half hour going through security, a half hour taxiing on the runway, an hour in the air at 200mph, a half hour getting off the plane, a half hour getting your baggage, and a half hour driving another car to the destination.

You think, well, airplanes are a lot faster and higher-tech than cars, so the second way is better. But it's not. It took just as long. The actual transmission was more efficient, but the amount of intermediate steps required was higher, so there was not a difference in overall time, and it was more expensive.

That's what using fiber for 10 meter or less connections like peripherals and stuff is like.

Now, let's change it up. What if your destination is 2,000 miles away? In that case, it would take 40 hours to drive there. Or, it would take 10 hours to fly there. The amount of time it takes to get to the airport and take off is the same regardless of how far you fly, 1.5 hours, as is the amount of time it takes to get out of the airport (1.5 hours), so your total time is 13 hours, which is much faster. Now flying is a much more appealing option. Even though the plane ticket is more expensive, the plane can fly nonstop. If you were to drive, you'd get tired and need to stop and rent a hotel room part of the way on the journey, which would be more expensive than just flying.

That's what using fiber for 100 meter or longer connections is like.

Very good post. I just want to know why mpg was mentioned?
ShahJahan 14th December 2010, 10:54 Quote
Quote:
I just want to know why mpg was mentioned?

LOL! I think he meant mph, and I think you knew that already?!
bobwya 14th December 2010, 18:40 Quote
Quote:
Originally Posted by Altron
Think of it like going to a place that's 200 miles away. You could drive there in 4 hours at 50mph. Or, you could drive a half hour to the airport, spend a half hour going through security, a half hour taxiing on the runway, an hour in the air at 200mph, a half hour getting off the plane, a half hour getting your baggage, and a half hour driving another car to the destination.

I would take the train. 200 miles @100mph = 2 hours.
Altron 15th December 2010, 08:49 Quote
Quote:
Originally Posted by bobwya
Quote:
Originally Posted by Altron
Think of it like going to a place that's 200 miles away. You could drive there in 4 hours at 50mph. Or, you could drive a half hour to the airport, spend a half hour going through security, a half hour taxiing on the runway, an hour in the air at 200mph, a half hour getting off the plane, a half hour getting your baggage, and a half hour driving another car to the destination.

I would take the train. 200 miles @100mph = 2 hours.

Yes, but you still have to drive to the train station and find parking at their garage.

My point is the intermediate steps required for this conversion. Your mobo and peripherals and expansion cards are all electrical on the inside. It's very easy for them to communicate over an electronic interface, because their internal circuitry is all electrical. They can just keep pushing electrons around, no need to convert to a fundamentally different interface. With LightPeak, your circuitry is still electrical. You're just converting the electrical signals to optical ones, transmitting the optical ones, then converting them back to electrical so you can use them.

I got my "panties in a twist" when a previous LightPeak article claimed it as an advance in "optical computing". LightPeak is not optical computing. No calculations are being done with photons. It's not computing anything, just transmitting data.

These letters I type are converted to binary by my computer, sent to your computer, and converted back into English. You and I are communicating in English. We can't speak binary, but our communications are converted to and from binary by our computers.

LightPeak is the same idea. The computers aren't computing optically. They're computing electrically, then using a piece of equipment to convert it to optical, then using a similar piece of equipment to convert it back to electrical.

My aim isn't to criticize LightPeak. It seems fine and dandy. My aim is to clear up some myths about it. It's not fundamentally a better technology than copper for short distances that the average user would encounter (<100m), and it's not a revolutionary advance in computing any more than being able to convert to and from binary was a revolutionary advance for the English language.

Also, I LOL'ed at the picture with the rainbow of light. Optical communication occurs at 1.3 microns and 1.5 microns, both well outside the range of human vision. This is because of a dispersion minimum in fused silica fibers around those wavelengths which reduces pulse broadening and other signal degradation effects.
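Altron's English-to-binary comparison can be made literal with a few lines of Python: the text round-trips through a bit representation for transport, but nothing about that hop makes the conversation itself "binary", just as an electrical-to-optical-to-electrical hop doesn't make the computer optical.

```python
# Round-trip a message through its bit representation and back.
message = "Light Peak is a transport, not optical computing"

bits = "".join(f"{byte:08b}" for byte in message.encode("utf-8"))
restored = bytes(int(bits[i:i + 8], 2)
                 for i in range(0, len(bits), 8)).decode("utf-8")

assert restored == message
print(bits[:32] + "...")   # what actually travels over the link
print(restored)            # what either end works with
```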
Deadpunkdave 15th December 2010, 12:12 Quote
It shouldn't be forgotten that half of Light Peak is about making a single technology do everything transmission-wise, replacing both USB and HDMI by having a multi-protocol controller. Have a read at http://techresearch.intel.com/ProjectDetails.aspx?Id=143 and click around if you're interested. This is why this rumour kind of makes sense, though the name certainly won't if they launch with copper.

Also, no one beats the BBC for completely nonsensical pictures of optical fibres:
http://news.bbcimg.co.uk/media/images/49644000/jpg/_49644864_cables432-1.jpg

and even better...
http://news.bbcimg.co.uk/media/images/49309000/jpg/_49309946_49309944.jpg

I kid you not.
tugboat 16th December 2010, 07:35 Quote
@ Altron:

In some respects your analogy is correct, but it is too narrow in scope and not, I feel, quite accurate.

In each and every current data communication standard/protocol there are latencies introduced by everything from controllers to drivers to the quality of the connections, be they soldered or mechanical, shielded or not. So the latencies introduced by changing venues getting from point A to point B are, I think, a wash. Personally, I think Light Peak, or whatever the final version is, is the answer to completely overhauling the way our computers do business in the future.

Consider that it can in one fell swoop exceed the capacity of virtually every bus on the mobo: SATA, PCIe, memory, USB, all of it. Talk about speeding things up. Regarding the comment above by play_boy_2000, what is wrong with SATA exceeding 10-year-old memory bus speeds? Hell, let's put the memory on Light Peak as well.

The main point I'm trying to make is that there are at least three standards that could be replaced now: SATA, USB, and PCIe. There are others. They could all be rolled into one now (relatively speaking, time-wise). Why would you want to pay for a couple more iterations of USB or SATA just because they can make it past a couple of Gb/sec? Personally, I hate nickel-and-dime tech upgrades. I wouldn't hesitate to dump legacy USB, SATA, and whatever else. Manufacturers can continue to make legacy mobos and components, and you can bet your ass they would as long as there was a dollar in it. Those building new could go new tech and never look back.

I also submit to you that the sooner the PC world accepts this new technology, the sooner we can keep it from getting locked up by the likes of Apple or Sony. Hell, just look at Blu-ray: as good as it is, it will always be too damned expensive for what it is. And Sony would, I think, rather die than let the price go down and let it become a mainstream component. I won't even talk about Apple so as not to start a flame war.
Altron 16th December 2010, 19:41 Quote
Quote:
Originally Posted by tugboat
@ Altron:

In some respects your analogy is correct, but it is too narrow in scope and not, I feel, quite accurate.

In each and every current data communication standard/protocol there are latencies introduced by everything from controllers to drivers to the quality of the connections, be they soldered or mechanical, shielded or not. So the latencies introduced by changing venues getting from point A to point B are, I think, a wash. Personally, I think Light Peak, or whatever the final version is, is the answer to completely overhauling the way our computers do business in the future.

Consider that it can in one fell swoop exceed the capacity of virtually every bus on the mobo: SATA, PCIe, memory, USB, all of it. Talk about speeding things up. Regarding the comment above by play_boy_2000, what is wrong with SATA exceeding 10-year-old memory bus speeds? Hell, let's put the memory on Light Peak as well.

The main point I'm trying to make is that there are at least three standards that could be replaced now: SATA, USB, and PCIe. There are others. They could all be rolled into one now (relatively speaking, time-wise). Why would you want to pay for a couple more iterations of USB or SATA just because they can make it past a couple of Gb/sec? Personally, I hate nickel-and-dime tech upgrades. I wouldn't hesitate to dump legacy USB, SATA, and whatever else. Manufacturers can continue to make legacy mobos and components, and you can bet your ass they would as long as there was a dollar in it. Those building new could go new tech and never look back.

I also submit to you that the sooner the PC world accepts this new technology, the sooner we can keep it from getting locked up by the likes of Apple or Sony. Hell, just look at Blu-ray: as good as it is, it will always be too damned expensive for what it is. And Sony would, I think, rather die than let the price go down and let it become a mainstream component. I won't even talk about Apple so as not to start a flame war.

I'd rather have my dual-channel DDR3 that gets 340 gigabits per second through two inches of copper than memory connected via 10 gigabit fiber.

It's a more complicated connection. Instead of traces on a PCB meeting a pin, which meets traces on another PCB, it's a PCB trace to miniaturized light source transmitting through a fiber into a small optical detector, which then puts its signal through a PCB trace.

Having one interface for everything is a poor idea. Different interfaces are optimized for different tasks.

High data rate, long distance, low cost. Pick any two.

Look at PCIe x16. It's capable of an astonishing 128 gbps with the new 3.0 standard, and is found on a $40 motherboard. It's a cheap interface that's very very fast, because it only has to go a couple of inches.

Look at USB. It's a pathetic 480mbps, but it's also very inexpensive, and can signal over much longer distances, 10m between repeaters.

Future advances are not going to have PC components linked by fiber (until processors are optical and not electrical). They're going to have very fast and very short copper links.

Look at older motherboards. They have a processor, a northbridge that has a memory controller and handles expansion slots, a southbridge, and a graphics processor.

The current generation of processors has all of that functionality integrated onto the die. The CPU is on it. The memory controller is on it. A significant amount of L3 cache is on it. A basic graphics processing unit is on it. The PCIe controller is on it.

We're taking more and more functions onto the die, because it makes them fast. That's where the tech is heading. Make the memory controller and PCIe controller on the die. Put a GPU on the die. Put more and more cache on the die.

Fiber optics have one great area of strength, and that's distances and networking.

I'd like to see a standard that integrates the functionality of USB, Ethernet, and Displayport. Give me the ability to stream audio and video, send and receive data, and control my computer with a single cable. Make it capable of 100m without repeaters, and make it inexpensive. Fiber can do it, and that's what we need.

What we don't need is fiber inside of the computer connecting one circuit board to another, until the circuits themselves are optical.
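As a rough illustration of the "different interfaces for different tasks" point, here's a sketch of how long a hypothetical 50GB transfer would take at the headline rates mentioned in the thread (USB 3.0's nominal 5Gbit/s added for comparison, and no protocol overhead, so real figures would be worse):

```python
# Transfer time at headline link rates for an arbitrary 50 GB payload.
FILE_BYTES = 50e9

headline_gbit = {
    "USB 2.0":      0.48,
    "USB 3.0":      5.0,
    "Light Peak":   10.0,
    "PCIe 3.0 x16": 128.0,
}

for name, gbit in headline_gbit.items():
    seconds = FILE_BYTES * 8 / (gbit * 1e9)
    print(f"{name:<14}{seconds:8.1f} s  ({seconds / 60:5.1f} min)")
```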
Landy_Ed 17th December 2010, 12:33 Quote
Meh, it's all out of date! I bet Siemens are already all over the microwave photon solution, as that has now been proven to be faster than light....

http://www.telegraph.co.uk/science/science-news/3303699/We-have-broken-speed-of-light.html


Great posts, Altron, very informative & well considered.