PCI-Express 3.0 explained
Manufacturer: PCI-SIG
Can you really believe it's been six years since we first saw PCI-Express? Even PCI-Express 2.0 has been around since 2007, and three years later, here we are with the next generation ratified and ready to roll.
Well, we won't see it until next year (maybe), when the first hardware to include PCI-Express 3.0, Sandy Bridge-E with the 'Patsburg' chipset on LGA2011, is rumoured to start shipping. At this early stage AMD looks set to miss out, as its Bulldozer-compatible 900-series chipsets are rumoured to support only PCI-Express 2.0.
Depending on how consumer-orientated LGA1366's replacement, Sandy Bridge-E, turns out to be next year, most of us will probably first see PCI-Express 3.0 when Ivy Bridge arrives in early 2012. Again, that's still speculation, as Intel has not officially announced the details of either.
What we do know though is what PCI-Express 3.0 will offer.
PCI-Express 3.0 looks the same as PCI-Express 2.0 from the outside
A Doubling of Bandwidth...
Following the tradition that goes back to AGP (remember that?), the bandwidth is again doubled: from 500MB/sec (4Gb/sec) per lane on PCI-Express 2.0 to 1GB/sec (8Gb/sec) per lane on PCI-Express 3.0, in each direction. This means the total bandwidth of a 16x PCI-Express graphics slot goes up from 16GB/sec to 32GB/sec, so it should cope better with the future demands of high-performance graphics cards.
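As a quick back-of-the-envelope check, the per-lane figures scale up to those slot totals like this (a Python sketch using the article's rounded numbers, counting both directions of the full-duplex link):

```python
# Per-lane, per-direction bandwidth figures from the article, in GB/sec
LANES = 16
DIRECTIONS = 2  # PCI-Express is full duplex: data flows both ways at once

pcie2_per_lane_gb = 0.5   # PCI-Express 2.0: 500MB/sec per lane
pcie3_per_lane_gb = 1.0   # PCI-Express 3.0: 1GB/sec per lane

# Total bandwidth of a 16x slot, both directions combined
pcie2_slot = pcie2_per_lane_gb * LANES * DIRECTIONS   # 16GB/sec
pcie3_slot = pcie3_per_lane_gb * LANES * DIRECTIONS   # 32GB/sec

print(f"16x PCI-E 2.0 slot: {pcie2_slot:g}GB/sec total")
print(f"16x PCI-E 3.0 slot: {pcie3_slot:g}GB/sec total")
```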
To double the bandwidth, the PCI-SIG hasn't simply doubled the transfer frequency, though; instead, it has reduced the encoding overhead to make transfers more efficient.
| Generation | Bit rate | Interconnect bandwidth | Bandwidth (per lane) | Maximum bandwidth (16 lanes, both directions) |
|---|---|---|---|---|
| PCI-Express 1.x | 2.5GT/sec | 2Gb/sec | 250MB/sec | 8GB/sec |
| PCI-Express 2.0 | 5GT/sec | 4Gb/sec | 500MB/sec | 16GB/sec |
| PCI-Express 3.0 | 8GT/sec | ~8Gb/sec | ~1GB/sec | ~32GB/sec |
... but not a doubling of transfer rate?
Typically, PCI-Express (like a whole lot of other signalling buses) uses '8b/10b' encoding, which means ten bits are transferred for every eight bits (one byte) of actual data. That 20 per cent overhead is considerable, and as we've seen from CPUs, frequencies cannot increase perpetually, so the overhead becomes ever more of an issue as bandwidth demands grow.
To work around this problem, PCI-Express 3.0 encodes data in much larger '128b/130b' blocks, and then 'a known polynomial is applied to a data stream in a feedback topology', with an 'inverse polynomial' sat at the other end to decode the data. In more human terms, this is basically a hard-coded mathematical function designed to spread the 0s and 1s evenly, so the electrical signal keeps enough transitions for the receiver to stay in sync and the bits don't interfere with each other in transit. This technique is called 'scrambling'.
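The scrambling idea can be sketched with a toy linear-feedback shift register (LFSR) in Python. The tap positions and seed below are purely illustrative - the PCI-E spec mandates its own polynomial - but the key property is the same: because scrambling is just an XOR with a predictable pseudo-random stream, running the identical function at the receiver undoes it.

```python
def lfsr_stream(seed, taps, nbits):
    """Yield nbits of pseudo-random bits from a 16-bit Fibonacci LFSR.

    The tap positions (the 'polynomial') here are illustrative, not the
    ones the PCI-E spec actually mandates.
    """
    state = seed
    for _ in range(nbits):
        # XOR the tapped bits together to form the feedback bit
        feedback = 0
        for t in taps:
            feedback ^= (state >> t) & 1
        yield state & 1
        state = (state >> 1) | (feedback << 15)

def scramble(data_bits, seed=0xACE1, taps=(15, 4, 2, 1)):
    """XOR each data bit with the LFSR stream. Applying the exact same
    function again descrambles, because XOR is its own inverse."""
    keystream = lfsr_stream(seed, taps, len(data_bits))
    return [d ^ k for d, k in zip(data_bits, keystream)]

data = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
scrambled = scramble(data)            # transmitted bits
assert scramble(scrambled) == data    # the receiver's 'inverse' recovers them
```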
This means the transfer rate can be lower while still achieving the same real bandwidth: instead of the 5GT/sec of PCI-Express 2.0 doubling to 10GT/sec, PCI-Express 3.0 only needs 8GT/sec. Fewer transfers mean greater efficiency - so PCI-Express 3.0 also uses less power - and it doesn't have to use higher-grade materials - so the products are cheaper to make. Win-win.
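Running the numbers (a quick Python sketch) shows how the smaller encoding overhead lets 8GT/sec deliver roughly double PCI-Express 2.0's payload bandwidth:

```python
# Payload bandwidth per lane, per direction, under each encoding scheme.
# 128b/130b wastes only 2 bits in every 130 (~1.5 per cent), versus
# 2 bits in every 10 (20 per cent) for 8b/10b.
pcie2_payload = 5e9 * (8 / 10)      # 5GT/sec with 8b/10b   -> 4Gb/sec
pcie3_payload = 8e9 * (128 / 130)   # 8GT/sec with 128b/130b -> ~7.88Gb/sec

speedup = pcie3_payload / pcie2_payload   # ~1.97x from only a 1.6x clock bump
print(f"{pcie3_payload / 1e9:.2f}Gb/sec per lane, {speedup:.2f}x PCI-E 2.0")
```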
However, PCI-Express 3.0 provides no more power than its predecessors - but that's actually a good thing. Having more than 300W drawn through each 16x PCI-E slot would drive up the cost of motherboards, as the copper traces on the motherboard would need to be thicker. Manufacturers might even need to add extra layers to route these high-power traces and their associated electromagnetic interference away from sensitive data traces.
Finally, PCI-Express 3.0 has the same physical characteristics as PCI-Express 2.0, so it's backward compatible with previous versions of PCI-Express, despite the data encoding change.