bit-tech.net

Noise cancellation tech quadruples optical bandwidth

A noise cancellation system for optical fibres has been shown to quadruple the speed of long-distance data communication, promising a leap in performance for the internet.

A system of sending twin waves through an optical fibre cable to boost the signal-to-noise ratio at the far end could lead to long-range communications with a peak data transfer rate some four times greater than is currently possible.

Developed by engineers working at Bell Labs, now a subsidiary of communications giant Alcatel-Lucent, the technology of using what the team calls 'phase-conjugated twin waves' has been successfully used to transmit data at 400Gb/s over 12,800km of optical fibre - or, to put it into relative terms, around four times the data transfer rate of current long-distance optical fibres over a significantly greater distance.

The team's paper, published online this weekend following its submission to the Nature Photonics journal, is hard going for non-scientists. 'We show that the nonlinear distortions of a pair of phase-conjugated twin waves are essentially anticorrelated, so cancellation of signal-to-signal nonlinear interactions can be achieved by coherently superimposing the twin waves at the end of the transmission line,' the paper's abstract reads. 'We demonstrate that by applying this approach to fibre communication, nonlinear distortions can be reduced by >8.5 dB. In dispersive nonlinear transmission, the nonlinearity cancellation additionally requires a dispersion-symmetry condition that can be satisfied by appropriately predispersing the signals.'

Simplified, the team has found a way of significantly boosting the signal-to-noise ratio in an optical fibre communication system by sending two versions of the signal, one a mirror image of the other - the 'phase conjugation' of the abstract. As the signal travels, distortion picked up along the way affects the two waves in opposite senses: a disturbance that nudges one wave towards flipping a 0 into a 1 nudges its mirror image the other way. When the two waves are received at the far end of the signal path, they are superimposed on one another to form a single wave - and the distortion picked up along the way largely cancels itself out.
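
To see the idea in miniature, here is a toy numerical sketch in Python. It uses an invented first-order distortion model to stand in for the fibre's nonlinearity - purely illustrative, and not the paper's actual propagation physics - and shows that superimposing the received twin waves recovers the original signal:

Code:
import numpy as np

rng = np.random.default_rng(0)

# Toy complex symbols standing in for the optical signal (purely illustrative).
E = (rng.choice([-1.0, 1.0], 1000) + 1j * rng.choice([-1.0, 1.0], 1000)) / np.sqrt(2)

# Assume a first-order, Kerr-like perturbation picked up along the fibre. The paper's
# key observation is that the distortion on the phase-conjugated twin is, to first
# order, the negated conjugate of the distortion on the original wave.
distortion = 0.1j * np.abs(E) ** 2 * E
wave_rx = E + distortion                    # received original wave
twin_rx = np.conj(E) - np.conj(distortion)  # received phase-conjugated twin

# Coherent superposition at the receiver: conjugate the twin back and average.
recovered = 0.5 * (wave_rx + np.conj(twin_rx))

print(np.abs(wave_rx - E).max())    # single wave alone: visibly distorted
print(np.abs(recovered - E).max())  # superposed pair: distortion cancels (~0)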

The basic concept is similar to that of active noise-cancelling headphones, which use microphones to generate an inverted - phase-conjugated - version of external noise. This waveform is then played back along with the music you're trying to listen to, interfering with the external noise and cancelling much of it out. The team's work, therefore, can be thought of as noise cancellation for optical communications systems - where the noise in question is light, rather than sound.

Previous systems attempting to achieve the same goal - the eradication or reduction of noise in optical fibre communication systems - have existed, but the Bell Labs team is the first to have created a working prototype that can operate without intermediate hardware. As a result, the team's technology can in theory be applied to existing long-distance communications links - such as the undersea cables that connect much of the world together - without the need for massive re-engineering work and a team of divers.

'Nowadays everybody is consuming more and more bandwidth - demanding more and more communication,' explained Dr Xiang Liu, lead researcher on the project, in a statement to the BBC. 'We need to solve some of the fundamental problems to sustain the capacity growth.'

The team's work is published in Nature Photonics, with the full text available for purchase on the journal website.

17 Comments

SchizoFrog 28th May 2013, 10:46 Quote
While I am all for advancements in technology, this is one area I am fed up with. We don't need significantly faster broadband, we need a significantly better network to provide what we already have to more people. The usual argument is that people in rural areas cannot get high broadband speeds or even a regular connection, but I think the problem is far worse than that. People seem to take it for granted that those who live in built-up areas can access the full potential of services, and that just is not the truth: there are vast areas of London - the capital city, I'll remind you - that have no fibre networks laid at all. I live near Victoria Park in the East End, which is about 1.5 miles from the City's financial square mile, and there is no fibre near me even if I wanted it. I live less than half a mile from a hub and the broadband speed I get is about 6-8Mb, and even BT's high speed service is quoted to only give me around 15Mb (while costing an extra £30 a month).

So while this is good for the future main lines that handle the business end of data transfer for the providers, only those who already have more than enough speed will see any benefit, and those struggling will be left further behind.
Gareth Halfacree 28th May 2013, 10:58 Quote
Quote:
Originally Posted by SchizoFrog
While I am all for advancements in technology, this is one area I am fed up with. We don't need significantly faster broadband, we need a significantly better network to provide what we already have to more people.
This isn't about faster broadband; this is about faster backbones. If you'd care to re-read the article, you'll see that the team is targeting undersea cables - the links that connect the US to the UK, the UK to mainland Europe, Australia to everywhere else and so forth. They are the absolute heart of the internet, and they're struggling.

As increasing numbers of consumers get online, these links become saturated. Add in the fact that we're now dealing with quantities of data that would have been unthinkable a decade ago, as everyone with a smartphone suddenly starts streaming 1080p movies, and the problem becomes massive. The team's aim is to create a drop-in solution that you stick at either end of an undersea intercontinental fibre-optic cable and see an immediate quadrupling of available bandwidth - meaning you can fit four times as many people onto the network as you could before.

In other words: this area is exactly the area you're not fed up with; the team is looking to create a 'significantly better network to provide what we already have to more people.' Without an upgraded backbone, higher deployment of last-mile services will just cripple the 'net. I don't know about you, but I don't particularly want to go back to the dark old days of having to buy a chunk of international data when I want to browse US websites...
xrain 28th May 2013, 11:13 Quote
I wonder if this requires Polarization Maintaining fiber, or if it still works with standard single/multi-mode fiber.
liratheal 28th May 2013, 11:35 Quote
God I love technology.
Jerz 28th May 2013, 11:39 Quote
Correct me if I'm wrong, but isn't this very similar to how noise is cancelled on copper Ethernet connections? The color pairs?
Gareth Halfacree 28th May 2013, 11:43 Quote
Quote:
Originally Posted by xrain
I wonder if this requires Polarization Maintaining fiber, or if it still works with standard single/multi-mode fiber.
Good question, and one I'm afraid I can't answer. Well, short of blowing £22 on a PDF of the paper itself, of course...
Quote:
Originally Posted by Jerz
Correct me if I'm wrong, but isn't this very similar to how noise is cancelled on copper Ethernet connections? The color pairs?
Not really. That's known as twisted pair, and it's used to reduce near-end crosstalk and electromagnetic interference - not really something optical cables have a problem with. They're both methods of reducing noise, of course, but very different in operation - there's no overlaying of inverse waveforms with twisted pair, and there are no EMI concerns with optical fibre.
Phil Rhodes 28th May 2013, 11:51 Quote
Strictly speaking, Ethernet (and USB, and firewire, and PCIe, etc, etc) does use differential signalling over its twisted pairs, which appears to be what they're talking about here. Or at least what you're talking about here. Are you sure that's actually what they're talking about here?
Gareth Halfacree 28th May 2013, 11:53 Quote
Quote:
Originally Posted by Phil Rhodes
Strictly speaking, Ethernet does use differential signalling over its twisted pairs, which appears to be what they're talking about here. Or at least what you're talking about here. Are you sure that's actually what they're talking about here?
I know as much as I have written in the article; if you'd like more details, feel free to buy the paper for £22. Let me know what you find - I'd be interested in the finer points of the team's work.

EDIT: Actually, it turns out you can rent the paper for $2.99 for 48 hours, which is a better deal. Alternatively, anyone who works or studies at a university with ReadCube access can read it for free.

FURTHER EDIT: Plus, differential signalling is specifically about reducing EMI and crosstalk in electrical signalling. One thing an optical cable is not is electrical. The team's work - to my understanding - is not differential signalling, but shares similarities (using two signals to detect and cancel out unwanted noise). Differential signalling uses the difference between the wires to communicate information, as the linked Wikipedia article explains; in the team's work on optical transmission, the two waves (analogous, you seem to be suggesting, to the paired wires in electrical differential signalling applications) are identical apart from being the inverse of each other. The data is transmitted as part of the wave, not as the difference between the two waves: if you received a single one of the two waves without interference, you would have all the data - something that is not the case with differential signalling, which requires that you receive (or transmit) on both wires in order for any data to make it intact.
Phil Rhodes 28th May 2013, 13:21 Quote
From the first page that's available free, what they're proposing is actually nothing to do with differential signalling, but they're using some fairly deep jargon to describe what might actually be reasonably simple quadrature techniques.

Although you could do differential, I suppose, with multispectral optical signals down a single fibre. I have no idea if that'd be in any way useful.
Shirty 28th May 2013, 13:31 Quote
Quote:
Originally Posted by liratheal
God I love technology.

Sexy innit?
play_boy_2000 28th May 2013, 16:36 Quote
This doesn't make any sense. It sounds like it is an enhancement to error correction, and may allow longer distances between optical regenerators, but I totally fail to see how it could increase transmission capacity. I imagine that the 400gbit/s link speed was only mentioned to show that it works on a production DWDM network.
-Xp- 28th May 2013, 17:59 Quote
This sounds exactly like Common Mode Rejection, which has been used in professional audio equipment for decades. Why has it taken so long to apply this principle?
Jerz 28th May 2013, 21:18 Quote
You're right, I overlooked that fundamental difference between the two mediums, electrical and optical. What I failed to get from the article snippet link is what this optical "interference" is actually from. My, albeit limited, understanding of fiber optics is that the limiting factor in transmissions over distance is attenuation and optical dispersion/refraction along the cable. Wouldn't two slightly different frequencies or wavelengths travel slightly different paths in the cable and therefore get affected slightly differently?
ch424 28th May 2013, 22:07 Quote
Quote:
Originally Posted by Gareth Halfacree
there's no overlaying of inverse waveforms with twisted pair

In high-speed serial stuff (SATA, PCIe, USB, DVI...), overlaying inverse waveforms is exactly what you're doing. The two signals are 180 degrees out of phase.
Quote:
Originally Posted by Gareth Halfacree

FURTHER EDIT: Plus, differential signalling is specifically about reducing EMI and crosstalk in electrical signalling. One thing an optical cable is not is electrical. The team's work - to my understanding - is not differential signalling, but shares similarities (using two signals to detect and cancel out unwanted noise).


Yup, this is the difference - but I'll explain why further down...
Quote:
Originally Posted by Gareth Halfacree

Differential signalling uses the difference between the wires to communicate information, as the linked Wikipedia article explains; in the team's work on optical transmission, the two waves (analogous, you seem to be suggesting, to the paired wires in electrical differential signalling applications) are identical apart from being the inverse of each other. The data is transmitted as part of the wave, not as the difference between the two waves: if you received a single one of the two waves without interference, you would have all the data - something that is not the case with differential signalling, which requires that you receive (or transmit) on both wires in order for any data to make it intact.

Not quite. Differential pairs are just there to increase your signal-to-noise ratio. If you just had one of the wires it would still work a lot of the time, as long as you had a way of maintaining DC balance. Adding the second, inverted, signal just helps reduce crosstalk (net field emitted is smaller) and gives you common-mode EMI rejection, allowing you to turn the data rate up.
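
A quick toy sketch in Python, with made-up numbers, just to show why the subtraction kills the common-mode noise while a single wire still carries the data:

Code:
import numpy as np

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, 16)
signal = 2.0 * bits - 1.0                # data as +1/-1 voltage levels (arbitrary units)

wire_p = +signal                         # true wire
wire_n = -signal                         # inverted (complementary) wire
noise = 0.4 * rng.standard_normal(16)    # common-mode noise coupling equally onto both wires

rx_p = wire_p + noise
rx_n = wire_n + noise

single_ended = rx_p                      # one wire on its own: data present, but noisy
differential = (rx_p - rx_n) / 2.0       # subtract the pair: common-mode term cancels exactly

print(np.abs(single_ended - signal).max())  # residual noise remains
print(np.abs(differential - signal).max())  # ~0: common-mode noise rejected
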
Quote:
Originally Posted by play_boy_2000
This doesn't make any sense. It sounds like it is an enhancement to error correction, and may allow longer distances between optical regenerators, but I totally fail to see how it could increase transmission capacity.

If you reduce signal losses, your SNR increases, so your channel capacity increases - see the Shannon-Hartley theorem.
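
Back-of-the-envelope version in Python - the 8.5dB figure is from the abstract, while the channel bandwidth and starting SNR are invented purely for illustration:

Code:
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N)
    return bandwidth_hz * math.log2(1 + snr_linear)

bandwidth = 50e9                 # 50GHz optical channel (illustrative assumption)
snr_before = 10 ** (10.0 / 10)   # assume 10dB SNR limited by nonlinear distortion
snr_after = 10 ** (18.5 / 10)    # +8.5dB once the nonlinear distortion is cancelled

print(shannon_capacity(bandwidth, snr_before) / 1e9)  # ~173 Gbit/s
print(shannon_capacity(bandwidth, snr_after) / 1e9)   # ~308 Gbit/s
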
Quote:
Originally Posted by -Xp-
This sounds exactly like Common Mode Rejection, which has been used in professional audio equipment for decades. Why has it taken so long to apply this principle?

It is basically common-mode rejection. You can get common-mode rejection pretty much for free in differential signalling, but only because differential signals are all transmitted as baseband. USB, SATA, PCIe and DVI are just serialised versions of the data, with a bit of scrambling (either to increase the number of edges to help clock recovery or reduce the number of edges to save bandwidth) and sometimes some error detection/correction, but the transmitted bit rate is still about the same as the signal frequency. In optical stuff, you're not transmitting at baseband. The symbol rate is Mbit or Gbit/sec, but the frequency of the light is in THz. You therefore can't do a simple inversion to get rid of common-mode noise.

So, because you can't easily control the phase of your optical carrier, these people have found the next-best equivalent, which is to use the phase conjugate of the optical waves. They then found that it does work just the same as common-mode rejection once it gets to the far end. The reason that it's not been done before is presumably because it's much higher frequency and therefore quite hard to produce phase-conjugate signals, then re-combine them accurately.
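
Put another way, with toy numbers rather than anything from the paper: at baseband you can literally negate the waveform, whereas with an optical field the nearest equivalent is conjugating the complex envelope, which mirrors the phase of each symbol:

Code:
import numpy as np

# Baseband electrical signalling: the complementary wire is just the negated voltage.
v = np.array([1.0, -1.0, -1.0, 1.0])
print(v + (-v))                            # all zeros - exact cancellation is easy

# Optical signalling: the data lives in a complex envelope on a carrier in the THz range,
# so the closest thing to 'inverting' it is the complex conjugate, which mirrors the
# phase of every symbol (made-up phase-modulated symbols below).
E = np.exp(1j * np.array([0.25, 1.75, -2.4, 3.0]))
print(np.angle(E) + np.angle(np.conj(E)))  # all zeros - phases are mirror images
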
Bonedoctor 29th May 2013, 03:51 Quote
Quote:
Originally Posted by ch424
In high-speed serial stuff ...therefore quite hard to produce phase-conjugate signals, then re-combine them accurately.

What he said... lol.
Gareth Halfacree 29th May 2013, 07:55 Quote
Quote:
Originally Posted by ch424
Not quite. Differential pairs are just there to increase your signal-to-noise ratio. If you just had one of the wires it would still work a lot of the time, as long as you had a way of maintaining DC balance. Adding the second, inverted, signal just helps reduce crosstalk (net field emitted is smaller) and gives you common-mode EMI rejection, allowing you to turn the data rate up. [more explanation snipped]
I wasn't aware of that - thanks for the clarification!
SchizoFrog 29th May 2013, 13:46 Quote
Quote:
Originally Posted by Gareth Halfacree
Quote:
Originally Posted by SchizoFrog
While I am all for advancements in technology, this is one area I am fed up with. We don't need significantly faster broadband, we need a significantly better network to provide what we already have to more people.
This isn't about faster broadband; this is about faster backbones. If you'd care to re-read the article, you'll see that the team is targeting undersea cables - the links that connect the US to the UK, the UK to mainland Europe, Australia to everywhere else and so forth. They are the absolute heart of the internet, and they're struggling.

As increasing numbers of consumers get online, these links become saturated. Add in the fact that we're now dealing with quantities of data that would have been unthinkable a decade ago, as everyone with a smartphone suddenly starts streaming 1080p movies, and the problem becomes massive. The team's aim is to create a drop-in solution that you stick at either end of an undersea intercontinental fibre-optic cable and see an immediate quadrupling of available bandwidth - meaning you can fit four times as many people onto the network as you could before.

In other words: this area is exactly the area you're not fed up with; the team is looking to create a 'significantly better network to provide what we already have to more people.' Without an upgraded backbone, higher deployment of last-mile services will just cripple the 'net. I don't know about you, but I don't particularly want to go back to the dark old days of having to buy a chunk of international data when I want to browse US websites...

Gareth, my final paragraph did say that it was better for the main lines... I still find it a joke that you can get up to 100Mb from cable providers while others struggle to get bandwidth fast enough to stream the BBC iPlayer or YouTube properly. As for smartphones, even many 3G services outperform many household connections. So my point was that while this may help with the ever-overloaded main lines, when it comes to home users those who have will get more and those who don't, won't.