Intel sheds more light on Nehalem

Intel offers some insights into Nehalem, but we don't know final performance numbers or clock frequencies yet.

In a conference call about Intel technologies at the VLSI Symposia 2008 yesterday, Intel let loose a few more Nehalem details for us.

First off, the Nehalem processor family will cover everything from mobile to server, much as the Core 2 architecture has done for the past few years.

Nehalem will have 25GB/sec of inter-socket bandwidth using 6.4GT/sec QPI links, which is currently "three times larger than our best competition today," said Rajesh Kumar, Director of Intel Circuit and Low Power Technologies. It'll also have 32GB/sec of memory bandwidth thanks to triple-channel DDR3-1333 (at least on Bloomfield, which launches in Q4).
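Those figures sanity-check with a quick back-of-the-envelope calculation (a sketch of ours, assuming QPI's two bytes per transfer per direction and 64-bit-wide DDR3 channels):

```python
# Rough bandwidth arithmetic behind the quoted Nehalem figures.
# Assumptions: QPI moves 2 bytes per transfer in each direction,
# and each DDR3 channel is 64 bits (8 bytes) wide.

def qpi_bandwidth_gb_s(gt_per_s, bytes_per_transfer=2, directions=2):
    """Aggregate bandwidth of one QPI link in GB/s."""
    return gt_per_s * bytes_per_transfer * directions

def memory_bandwidth_gb_s(mt_per_s, channels=3, bytes_per_channel=8):
    """Theoretical peak memory bandwidth in GB/s."""
    return mt_per_s * channels * bytes_per_channel / 1000

qpi = qpi_bandwidth_gb_s(6.4)      # 6.4GT/sec link -> 25.6GB/sec aggregate
mem = memory_bandwidth_gb_s(1333)  # triple-channel DDR3-1333 -> ~32GB/sec
print(qpi, round(mem))
```

The ~25.6GB/sec result matches Intel's rounded "25GB/sec" claim, and triple-channel DDR3-1333 works out to just under 32GB/sec.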

Nehalem has four enhanced cores, an uncore connecting the cores to the I/O, and a third-level cache. In addition, it includes:
  • Configurable clocking
  • Fastlock low-skew PLLs
  • High reference clock frequencies
  • An analogue supply tracking system
  • Adaptive frequency clocking
  • Low-jitter Intel QuickPath Interconnect
  • Integrated memory controller clock generation
  • Jitter-attenuating DLLs
The memory, processing cores and I/O centre are all completely decoupled in terms of voltages and frequencies, so each can optimise its own working environment for performance and power. However, Intel was keen to point out that, unlike the competition's asynchronous design, these three are linked by synchronous interfaces to offer lower latencies and higher performance.

Nehalem's memory-to-cache latency, for example, will be "drastically smaller" than the competition's. The decoupling also brings the benefit of a modular design: extra components can easily be dropped in because they are essentially self-contained.

Intel's EIST and C1E power-saving clock changes will now work 56 percent faster in Nehalem, and the chip frequency will also adapt to power supply voltage changes and vDroop. This should make a system even more stable, but we think it might push enthusiasts into hunting for the motherboard and PSU combinations that minimise this clock-down effect, especially if it affects performance figures.

Intel even dropped an interesting titbit: it had considered decoupling itself from rated frequencies entirely because of the constant clock changes, but found customers and retailers were very much against the move. Although the CPU is constantly adjusting its clock speed internally, from the outside it appears to run at a fixed frequency because the changes average out. No doubt this continual variation will make our job of testing hardware reliably that much more difficult, though - it depends on the level of clock changes and the quality of motherboard and power supply.

Finally, Intel also mentions in its documents that the duty cycle adapts to transistor variation and lifetime stress. Does this mean that if your CPU isn't made as well as the next guy's, instead of dying outright it will reduce the time that part is working? Does this translate into the core frequency falling over time?

In other words, after 12 months of overvolting and overclocking, might your CPU end up running at a lower speed than you bought it at, or with less cache available, as the chip turns down the use of these tired transistors? Considering CPUs die very rarely these days, we can't see it being much of a problem - unless you put some silly voltages through it, that is. However, the long-term implications and effect on resale value might concern some end users. We will endeavour to find out the answers from Intel.

Discuss in the forums.


Discuss in the forums
Timmy_the_tortoise 17th June 2008, 12:25 Quote
Core 2 should suit me for the next 2+ years, to be honest. Unless I need 8 or more threads (for whatever ridiculous programme I might be running), I see no need for Nehalem yet.
liratheal 17th June 2008, 12:26 Quote
How would server hardware deal with the Duty Cycle stuff?

I mean, I like the idea of the CPU not flat out dying, but how would that impact high-stress always on situations?

Would RMA warranties and the like be altered to deal with the server market, or would there be an entirely different Duty Cycle for the application of these in server environments?
bowman 17th June 2008, 12:30 Quote
More hippie crap to shut down in the BIOS. They can keep all this energy saving malookey for themselves, it just adds another variable to the mix, one I can do fine without.

I don't even have a Core 2 now so I'll be grabbing one of these.
yakyb 17th June 2008, 13:24 Quote
i'll aim for one of these for april next year (bonus time!!!) along with a new gpu hopefully

cant wait to fire up 8 cores when video encoding
Hugo 17th June 2008, 13:39 Quote
Hi I'm <so and so> from <such and such a company> and I'd like to ask a question of no real importance the answer to which I don't really want to know but which will make me sound really smart...

Some really interesting stuff in the VLSI papers it has to be said.
amacieli 17th June 2008, 15:27 Quote
why get 8 cores when you can video encode faster with a gtx 280/260?
B3CK 18th June 2008, 03:08 Quote
What's with all this "jitter reduction here, jitter reduction there" business?

As to the "the Duty Cycle adapts to transistor variation and lifetime stress" I just hope this doesn't turn out to be a way Intel can say, well, your brand new $400.00 chip is still working at 96% capacity and our return service only kicks in at 94% or below.
I would just hound intel to everyone I know if something like that happened. On the other hand, it should allow the chip to still plug away after years of power outages, and the never ending supply of Microsoft hard reboots due to driver non-interaction.