Huang talks Larrabee, x86 compatibility

Nvidia CEO Jen-Hsun Huang shared his feelings on exactly what x86 compatibility means to him.

During a roundtable discussion with Nvidia CEO Jen-Hsun Huang earlier today, he commented on Larrabee once more.

“We don’t know what Larrabee is because it hasn’t shipped yet, so we can’t really talk about it. And also, by the time that Larrabee ships, Nvidia’s technology will be so much more advanced,” said Huang.

“The reason why Larrabee is so important to Intel is because GPU computing is really important. Our GPUs today are completely programmable, completely general purpose and they support C – the modern computing language.”

“What is the number one benefit of x86?” he asked in his typical rhetorical fashion. “x86 is all about binary compatibility. Nobody ever said that if I built a new computer architecture, the one I would create is x86 – unless I wanted binary compatibility.”

“So then the question becomes: is Larrabee binary compatible with Windows? Is Larrabee binary compatible with x86 and 64-bit x86? Is Larrabee binary compatible with SSE 3, 4, 5, 6? The answer is no,” said Huang.

He then went on to ask what the benefit of x86 really is. “It has tools, and so does every other CPU. ARM, PowerPC, Cell – they all have tools. The reason why these tools exist is because most of them are high-level language tools. Therefore, I think that part of it is a bit of a distraction and a bit of a smokescreen.”

“The secondary thing is that we also believe in x86 – we believe in heterogeneous computing. The CPU and GPU should work together, and the CPU is x86, so I’m x86 as well,” Huang added.

Nvidia can’t seem to help itself when it comes to talking about Larrabee these days, but I guess that’s what you get when a company like Intel starts to bang on your door.

With that said though, some of the points Huang makes are very important – how many developers code in machine code or SSE these days? Most use a high-level language and then a compiler with switches for hardware with different capabilities (like newer versions of SSE). In many respects, CUDA is very similar because it’s essentially C with an Open64 compiler and some additional threaded optimisations.

While I can’t say I’m pleased that Nvidia is banging the Larrabee drum once more, I’m starting to feel that the x86 drum is getting a bit tired as well – at least Huang decided against referring to Larrabee as ‘slideware’ again. Discuss in the forums.


PQuiff 26th August 2008, 12:54 Quote
Meh, I don’t care.

I just want faster computers that don’t crash every 10 minutes.
Saivert 27th August 2008, 13:24 Quote
I think Intel wants to move towards single-processor systems, i.e. one CPU to rule them all. Therefore x86 cores are important, because that's what Intel has used in the past and will continue to use. It's THEIR architecture, after all.
If they succeed at getting x86 cores to render graphics at high speeds, AMD and NVIDIA will have to seriously start looking around for something new. This is a big game of chicken: both camps think the other is on losing ground, and they continue to talk down each other's offerings.

The PS3 was going to use the Cell processor for graphics rendering, but the engineers couldn't pull it off and had to resort to a discrete graphics chip (from NVIDIA). This shows there is still some time to go before you can get away with doing everything on one processor.

Don't underestimate Intel even though you don't believe in Larrabee.

Hope I have cleared up the reasons why both NVIDIA and Intel think they are right in this matter.
wuyanxu 27th August 2008, 13:59 Quote
The importance of x86 in GPUs is all Intel's marketing.

RISC architectures are far better and more elegant than x86 (CISC). Look at Intel's failed attempt with the Atom: its power consumption versus ARM's.

Many people say ATI's 4870 is a great architecture and the GTX 280 is a brute-force approach, and this is just like ARM/PowerPC vs Intel: Intel gives you better performance, while the ARM/PowerPC architectures are more efficient and elegant (which is why most mainframes still use PowerPC CPUs, for efficiency).

As long as the driver allows the CPU and GPU to work together, there's no need for x86 instruction sets (the "tools" in Huang's words).
As long as there's a capable shader compiler, there's no difference between x86 instruction sets and the other (ATI/Nvidia) approaches.