bit-tech.net

Intel unveils 32nm process technology

Intel has demonstrated the world's first 32nm processors and announced massive plans for its latest process technology.

You probably thought that a 45nm transistor was pretty small, but Intel has taken its silicon technology even further into the realms of the infinitesimal today, demonstrating the world's first 32nm processors and announcing massive plans for the technology.

The company plans to spend a whopping $7 billion US over the next two years on building four 32nm fabrication plants, creating 7,000 high-skill jobs in the US. One fab is already up and running in Oregon, where a second is scheduled to come online by the end of 2009, while two further fabs will be built in Arizona and New Mexico in 2010.

The 32nm processors are based on the same materials used in Intel’s 45nm chips, with a high-k gate dielectric and a metal gate rather than the old SiO2 dielectric and polysilicon gate found in Intel’s previous 65nm chips. However, Intel was keen to point out that it has refined the high-k + metal gate technology, which the company says is now in its second generation.

The refinements include a reduction in the oxide thickness of the high-k dielectric from 1.0nm on a 45nm chip to 0.9nm on a 32nm chip, while the gate length has been squeezed down from 35nm to 30nm. As a result, Intel says that it has seen performance improvements of over 22 per cent from the new transistors. The company also claims that the second-generation high-k + metal gate technology has cut source-to-drain leakage even further than the 45nm-generation technology, meaning that the transistors waste less power when they're switched off.
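As a rough sanity check on those figures, the textbook first-order MOSFET relations (a back-of-the-envelope sketch, not Intel's own model) tie drive current to both of those tweaks:

$$ C_{ox} = \frac{k\,\varepsilon_0}{t_{ox}}, \qquad I_{D,\mathrm{sat}} \propto \frac{W}{L}\,C_{ox}\,(V_{GS}-V_T)^2 $$

On this model, thinning the dielectric from 1.0nm to 0.9nm raises the gate capacitance by roughly 11 per cent, and cutting the gate length from 35nm to 30nm raises W/L by roughly 17 per cent at a fixed width, so a drive-current (and therefore switching-speed) gain in the 20-plus per cent region Intel quotes is plausible from these two changes alone.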

Interestingly, Intel also says that the 32nm chips will be made using immersion lithography on ‘critical layers’, meaning that a fluid with a higher refractive index than air fills the gap between the lens and the wafer during the fabrication process. AMD is already using immersion lithography to make its 45nm CPUs, but Intel has so far stuck with dry lithography for its own 45nm parts.
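The optics behind that choice follow from the standard Rayleigh resolution criterion (a textbook relation, not an Intel-specific figure):

$$ \mathrm{CD} = k_1\,\frac{\lambda}{\mathrm{NA}}, \qquad \mathrm{NA} = n\sin\theta $$

With purified water (n ≈ 1.44 at the 193nm wavelength these scanners use) in place of air (n = 1) between the final lens element and the wafer, the numerical aperture can climb above 1, so the same light source can resolve the smaller critical dimensions that the tightest 32nm layers demand.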

Commenting on the manufacturing facilities, Intel’s CEO Paul Otellini said that the factories would "produce the most advanced computing technology in the world." He added that "the chips they produce will become the basic building blocks of the digital world, generating economic returns far beyond our industry."

Intel says that its first 32nm chips will be ready for production in the fourth quarter of this year, and has announced a number of new products that will be based on the technology.

Got a thought on the announcement? Discuss in the forums.

8 Comments

n3mo 10th February 2009, 21:32
Since the most intensive tasks are slowly being moved towards GPGPU and CPUs will never even get close to GPUs in efficiency and power, designing faster CPUs is just a waste of time. Even if they get to 11nm, the CPUs will be too slow for anything, although the process will be useful for GPUs.
perplekks45 10th February 2009, 22:14
For any highly threadable task you're right. But if a task isn't breakable into many threads, CPUs still have a purpose.
Joeymac 10th February 2009, 22:20
Quote:
Originally Posted by n3mo
Since the most intensive tasks are slowly being moved towards GPGPU and CPUs will never even get close to GPUs in efficiency and power, designing faster CPUs is just a waste of time. Even if they get to 11nm, the CPUs will be too slow for anything, although the process will be useful for GPUs.

Until of course they integrate the GPU into the CPU... oh wait.
Still, maybe Intel should work on a better GPU to put in with the CPU... oh wait, they're doing that as well, aren't they?
Agamer 10th February 2009, 22:22
Except the CPUs will still be needed to do the tasks that can't be put onto the GPU. After all, not all types of processing can be pushed to the GPUs.

Hence any speed advantage in CPUs will still benefit what remains on them.
dogdude16 11th February 2009, 05:46
I'm just excited that they are putting $7 billion into the U.S. economy.
n3mo 11th February 2009, 16:17
Quote:
Originally Posted by Agamer
Except the CPUs will still be needed to do the tasks that can't be put onto the GPU. After all, not all types of processing can be pushed to the GPUs.

Hence any speed advantage in CPUs will still benefit what remains on them.

Not always. As soon as the Linux kernel can be compiled for a GPU architecture, the CPU will be useless. GPUs are, in the worst case, faster than CPUs by a factor of ~15 and can easily do anything a CPU can - faster. Well, they will be eventually. CUDA is, for now, only a frontend; the GPU architecture needs some changes, not to mention Micro$oft's inability to adapt to anything new. So, for now we will use slow, outdated x86, while wasting all the real power on stupid, repetitive games. But some day they will reach 11nm, the core clock won't go any further and no more cores will fit on the die. And then maybe someone will say "x86 was outdated and underperforming anyway, let's do something new". It's like with oil - the less there is left, the more money and interest goes into alternatives.
JumpingJack 12th February 2009, 08:47
Quote:
Originally Posted by n3mo
Since the most intensive tasks are slowly being moved towards GPGPU and CPUs will never even get close to GPUs in efficiency and power, designing faster CPUs is just a waste of time. Even if they get to 11nm, the CPUs will be too slow for anything, although the process will be useful for GPUs.

As mentioned above, for highly parallel and regular data structures your statement makes sense, but for irregular data structures, branched logic and integer performance the GPU is a waste of time and power. The GPU also has to provide full double precision, where I believe they have made massive progress, but it's still not fully IEEE-754 compliant -- I need to check that, not sure if this is true anymore.

Anyway, each has strengths and weaknesses. As we are witnessing, computing is evolving to bring the two together. In the near term, nVidia has a good argument with CUDA and some real killer applications/results, but they have no CPU to pair with it to round it out -- i.e. you cannot currently run an OS on a GPU, for example. Intel, on the other hand, as well as AMD with the ATI acquisition, have the two major pieces to bring the best of both together.

In the long term nVidia is gonna get squeezed (my prediction)...
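To make the split JumpingJack describes concrete, here is a minimal CUDA sketch (an illustrative example written for this thread; vectorAdd and serialChain are invented names, not anything from Intel or nVidia) contrasting a workload that fans out across thousands of GPU threads with one whose loop-carried dependency keeps it pinned to a single CPU core:

#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Illustrative sketch only: a contrived example for the CPU-vs-GPU discussion above.

// Embarrassingly parallel: every output element is independent, so thousands
// of GPU threads can work on it at once -- the "highly threadable" case.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

// Loop-carried dependency: each step needs the previous result, so extra
// threads don't help -- this is where a fast CPU core still earns its keep.
float serialChain(const float *a, int n)
{
    float x = 0.0f;
    for (int i = 0; i < n; ++i)
        x = 0.5f * x + a[i];
    return x;
}

int main(void)
{
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers with dummy data.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers, copy inputs over, run the parallel kernel, copy back.
    float *da, *db, *dc;
    cudaMalloc((void **)&da, bytes);
    cudaMalloc((void **)&db, bytes);
    cudaMalloc((void **)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    vectorAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);

    printf("GPU add sample: %f, CPU serial chain: %f\n", hc[0], serialChain(ha, n));

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}

The first function gets faster the more threads you can throw at it; the second gains nothing from extra threads and lives or dies on single-core speed, which is the usual counterpoint to the "CPUs are a waste of time" argument.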
devdevil85 13th February 2009, 20:15
Quote:
Originally Posted by dogdude16
I'm just excited that they are putting $7 billion into the U.S. economy.
Amen to that, brother.