Nvidia's David Kirk on CUDA, CPUs and GPUs

Written by Tim Smalley

April 30, 2008 | 13:00


More CUDA

If you move away from a model where every product has the same feature set – and, as you say, consumers don’t need double precision – will Nvidia find itself in a situation where certain CUDA-compiled programs won’t run on every Nvidia GPU?

“Every CUDA program will run everywhere. What I mean by that is that there will be forward compatibility – but new programs won’t necessarily run on older hardware, because the older hardware might lack features that were introduced later on. This is just like running SSE code on pre-SSE hardware – it just can’t run.

“But it may run at vastly different speeds on different hardware – and that’s also the case with, for example, GeForce and Quadro. Professional OpenGL applications can typically run on GeForce, just really slowly. This is because OpenGL has good software emulation in the driver for any hardware feature that doesn’t exist – although some applications can detect that and refuse to run.


“That’s going to be the case when we really want to create a better experience for HPC and all the things those people need. We probably won’t support those features at the same speed on the consumer products, but we will at some rate. I think that the right balance is that maybe there’s something that people will need double precision for in the consumer products – but they won’t need a petaflop,” explained Kirk.
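To put Kirk’s forward-compatibility point in concrete terms: a CUDA program can query the compute capability of the device it is running on and adapt to the features available, and double precision requires compute capability 1.3 or later. The sketch below is not taken from the interview – it is a minimal, illustrative host-side check using the CUDA runtime’s cudaGetDeviceProperties, with hypothetical scale_double and scale_float kernels, that falls back to single precision on consumer hardware without double-precision units.

    // Minimal sketch: pick a double- or single-precision path at runtime
    // based on the device's compute capability (FP64 needs >= 1.3).
    #include <cstdio>
    #include <cuda_runtime.h>

    // Hypothetical kernels, purely for illustration.
    __global__ void scale_double(double *data, double factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    __global__ void scale_float(float *data, float factor, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) data[i] *= factor;
    }

    int main()
    {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);

        // Double precision first appeared with compute capability 1.3.
        bool has_fp64 = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("%s (compute %d.%d), double precision: %s\n",
               prop.name, prop.major, prop.minor, has_fp64 ? "yes" : "no");

        const int n = 1 << 20;
        const int block = 256, grid = (n + block - 1) / block;

        if (has_fp64) {
            double *d;
            cudaMalloc(&d, n * sizeof(double));
            scale_double<<<grid, block>>>(d, 0.5, n);
            cudaDeviceSynchronize();
            cudaFree(d);
        } else {
            // Fall back to single precision where FP64 units are absent.
            float *d;
            cudaMalloc(&d, n * sizeof(float));
            scale_float<<<grid, block>>>(d, 0.5f, n);
            cudaDeviceSynchronize();
            cudaFree(d);
        }
        return 0;
    }

The same split shows up at compile time: nvcc’s -arch flag targets a particular compute capability, which is why a binary built for newer hardware may simply refuse to run on an older part – the SSE analogy Kirk draws above.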

But surely there are problems out there that could make use of a petaflop of compute power in the consumer space? “Sure, there are loads of graphics problems we can come up with, but they probably won’t need double precision,” he responded.

During David’s presentation to the Imperial College Department of Computing students, he said that 99.9 percent of Nvidia’s revenue doesn’t come from the high-performance computing market. I asked David if that makes CUDA quite a big risk for Nvidia.

“It’s at the beginning of the exponential and we’re not moving away from our core markets,” said Kirk. “The opportunity is that we’ve got access to a big market already and we can put this massively parallel computing device in more than 200 million PCs by the end of this year. I’d say that gives us a pretty big installed base for developers to write applications.

“For professional products to have a big impact on high-performance computing, you have to put things into perspective. If Nvidia completely captured every single node in the world of high-performance computing, it would not be enough to be in our top 50 customers,” explained Kirk.

That’s a pretty big thing to say, but it shows the difference between the two markets. “That’s right, it’s just the difference between mass market and special purpose. Every scientist and engineer needs a desk-side box with a couple of Tesla units in it to do their analysis.”

Surely that’s not enough to give Nvidia a big enough installed base though? “Not in isolation, no, but gamers account for two to four million units [per year], while the consumer market is one to two hundred million units,” he added.