Why do we need teraflop computing?

Written by Wil Harris

September 28, 2006 | 22:13

Tags: #idf #tera-scale

Companies: #intel

We spoke to Intel's Steve Pawlowski today about the possibilities for tera-scale computing in the future.


Whilst Intel has been making a song and dance about having 80 cores on one chip, we were a little shaky as to what the actual application of this would be. Steve told us that there will be a diverse range of applications, some of which we haven't even thought of yet. One example: if you are a financial trader, you rely on complex models of markets and data being computed, often overnight. With tera-scale computing, those models could be generated for you in real time, allowing you to predict and simulate the market in an instant.

Alternatively, he gave an example of game graphics and physics. Cloth, water and hair simulations all require enormous amounts of rendering power in movies, with massive server farms rendering CGI for films like The Lord of the Rings. With this amount of computing power, this could be done in real time on the desktop within a game. Problems that previously required supercomputing power will arrive on the desktop.


The cores themselves will probably be less complex than they are today, we were told, whilst retaining full compatibility with the x86 instruction set. The cores will be optimised for threaded performance at the expense of scalar performance.

The chips that eventually arrive will be more likely to have 64 cores than the 80 suggested. Steve suggested that the power of two had served computing well up until now, and saw no reason to change away from that. "I happen to believe that a power of two makes a lot of sense - 64 cores, then 128. 80 is not a natural number unless you're using IEEE floating point."

Manufacturing will be made possible by the shift to new process technologies. Whilst the processors could technically be built today, they would be huge, expensive and unreliable. As we move down to 32nm production and beyond, they will become feasible for mass production.


However, this new era is not without its problems. Number one is building the ecosystem. Helping developers create massively threaded applications and operating systems will be a huge challenge within that time scale. "The biggest issue for ISVs is synchronisation of the cores. We have engineers, tools, libraries - people whose job it is to enable the technology we have going out. There are some challenges we have in terms of supporting shared memory models in this class of system. Debugging is probably a more interesting problem."
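To see why synchronisation of cores is such a headache for ISVs, consider the classic shared-memory pitfall: many threads updating one value. This is a minimal Python sketch of our own devising, not anything from Intel - but the same read-modify-write hazard applies whether you have 8 threads or 80 cores sharing memory.

```python
import threading

# A counter shared by many threads. Without a lock, the read-modify-write
# on `count` can interleave across threads and silently lose increments -
# the kind of bug Pawlowski notes is hard to debug.
count = 0
lock = threading.Lock()

def work(iterations):
    global count
    for _ in range(iterations):
        with lock:  # serialise access to the shared value
            count += 1

threads = [threading.Thread(target=work, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(count)  # with the lock, every increment survives: 80000
```

Remove the lock and the final count can come up short on a real multi-core machine - and, worse, only sometimes, which is exactly why debugging these systems is "a more interesting problem".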

Which leads us to the next issue - Paul Otellini's promise that we would see these chips within five years. Whilst Pawlowski did admit that this was possible, he said it would keep him awake for years to come.

Whilst the manufacturing may be possible, getting the rest of the system - the I/O, the system bus, the memory - up to speed would be difficult.

"We need to move the ecosystem along in a very efficient way. Will it happen? We'll see. There are challenges to overcome. We have people on that, but we couldn't tell you if it's enough yet."
