bit-tech.net

Problems with Multiple GPU Architectures

Basically, if you're bypassing the API and programming direct-to-metal, then you'll also need to do a fair amount of QA testing and optimisation for different GPU architectures. Isn't this going to make life harder for developers?

'Absolutely, yes, it would,' says Huddy. 'The problem with the PC is that you ideally want a PC that doesn't crash too much, and if a games developer is over-enthusiastic about the way they program direct to the metal, they can produce all sorts of difficulties for us as a hardware company trying to keep the PC stable.

The idea of going direct-to-metal is probably more appealing to cutting-edge game developers than to indie developers such as Introversion, the company behind Multiwinia

'So there are difficulties there in terms of making sure that the access to the hardware is kind of virtualised, I guess, to make sure that a PC developer can get the same kind of capabilities as a console developer. But in terms of doing the very best for the platform, that's how they would actually achieve that.'

Of course, programming direct-to-metal isn't for every games developer. We mentioned the possibility to Introversion's lead designer and developer Chris Delay, for example, who simply said: 'I don't want anything to do with that, but presumably it depends on what you're developing. If you're making Crysis 3 or something like that, then it may be exactly what you want.'

Indeed, Crytek's R&D technical director, Michael Glueck, said 'yes, that would appeal to us,' when he heard about Huddy's claims. However, he also seemed surprised, pointing out that 'AMD is the biggest force driving the OpenCL standard; it even released drivers to run OpenCL on SSE2 and that's kind of a way to remove low-level hardware access for simple compute tasks.'

Glueck also points out that 'some years ago, all CPU performance-critical tasks were done by the programmer, from low-level assembly optimisations to high-level thread and memory management. Now that the "compute world" merges CPUs and GPUs, you waste a lot of time by using higher-level API layers such as OpenCL, CUDA or DirectCompute to execute less smart algorithms on GPUs, because they still run faster than on CPUs.

'Having direct access to hardware would mean no drivers magically translating your byte code once again, and also having low-level memory management available, which you have to some degree with CUDA, and also your own thread scheduler would be really enjoyable. It definitely makes sense to have a standardised, vendor-independent API as an abstraction layer over the hardware, but we would also prefer this API to be really thin and allow more low-level access to the hardware. This will not only improve performance, but it will also allow better use of the available hardware features.'


Could Crysis have run more efficiently if Crytek had bypassed Direct3D and programmed direct-to-metal instead?

Either way, Glueck agrees that APIs are likely to become less important to game developers in the future, especially as GPUs start to behave more like general purpose units than fixed function graphics hardware.

APIs are currently 'made to suit a special hardware design,' explains Glueck, 'but as more and more GPUs become general purpose, I doubt there will be more fixed function units added. Even nowadays, rasterisation and tessellation in software on OpenCL can work quite nicely. It won't match the speed of the fixed hardware, but, just like alpha-test, perspective interpolation and dedicated vertex and fragment units, it will be done by GPGPU units at some point.'

Whether this actually happens remains to be seen, but Huddy certainly seems convinced that the idea of having no API at all is going to put pressure on Microsoft in the future. By getting low-level access to the hardware, games developers could potentially make many more draw calls, and push PCs way ahead of consoles in terms of graphics, which is where they should be, given the hardware we've got. However, this could also come at a cost in terms of stability and compatibility.

Either way, it looks as though DirectX's future as the primary gateway to PC graphics hardware is no longer 100 per cent assured, especially when it comes to cutting edge graphics.
