bit-tech.net

AMD announces support for Havok Physics

AMD says Havok will be accelerated across its full range of products - how long will it be before we see GPU-accelerated Havok Physics though?

In a move that will no doubt add an interesting twist to today's industry, AMD has announced that it will work with Havok to optimise the Havok Physics engine across the company's full range of products.

AMD says that there are "over 100 developers and 300 leading titles already using Havok's physics engine," making it "the leading developer of game physics."

Naturally, this includes both AMD's CPUs and its ATI Radeon graphics cards and looks to be a case of two fingers at Nvidia. Indeed, Nvidia recently told bit-tech that PhysX has around 150 supported titles across all platforms.

We spoke to an Nvidia spokesperson who turned down the opportunity to comment at the time, but said that he'd come back to us with more information "very soon."

There are some things to think about here – the first is that the industry is starting to split, with proprietary technologies appearing on either side of the fence. I don't think that's a good thing for the PC gaming industry, as you'll get a situation where developers choose to develop for one particular hardware vendor (or two in the case of Havok).

Another thing to think about is that Havok is a wholly owned subsidiary of Intel, and Intel doesn't have a GPU at the moment – its bread and butter is in CPUs and will continue to be that way until 2009-2010. We've got to wait for Larrabee until Intel joins the fray and that makes me wonder how long it's going to be before we see GPU-accelerated Havok physics.

A lot of the groundwork would have already been done because ATI first talked about GPU-accelerated physics back at Computex 2005; however, the question is whether Intel will want to allow Havok physics to be accelerated on the GPU at the moment. After all, the company seemed upset by Nvidia's push to move video encoding and transcoding tasks onto the GPU—and to deliver massive performance increases in the process—so I can't help but wonder if Intel will hold GPU-accelerated physics off until it releases Larrabee.

Intel will want a bunch of killer applications (or at least middleware) available for developers to use as soon as Larrabee is available – one of those will very likely be physics, and another will be something revolving around Project Offset.

What do you make of all this? Discuss in the forums.

22 Comments

p3n 12th June 2008, 14:51
Source and Havok already utilise multi-threading?
Tim S 12th June 2008, 14:59
Quote:
Originally Posted by p3n
Source and Havok already utilise multi-threading?

yep, and that's all done on the CPU AFAIK.
kempez 12th June 2008, 15:32
Haven't AMD/ATI also announced support for PhysX as well?
Timmy_the_tortoise 12th June 2008, 15:39
I thought Nvidia was soldering an entirely separate PPU onto their newest cards next to the GPU, meaning that valuable GPU power isn't taken up by physics processing.. That sounds more exciting than this ATI GPU acceleration.

Or, did I get that wrong?
Tim S 12th June 2008, 15:48
Quote:
Originally Posted by kempez
Haven't AMD/ATI also announced support for PhysX as well?

not as yet, no.
Jojii 12th June 2008, 17:38
Here is my guess:

Intel is worried that if the install base for PhysX gets large, developers will create titles for it, so maybe they brokered a deal to get ATI into the Havok game, acting as a stop-gap on PhysX's install base percentage so developers will continue to create new games that feature the Havok engine.

then intel shafts ati later, without astroglide
Tim S 12th June 2008, 17:41
Quote:
Originally Posted by Jojii
then intel shafts ati later, without astroglide

Ouch.
johnmustrule 12th June 2008, 18:19
PhysX is a superior simulation; I can't understand why AMD would support Intel with this.
TreeDude 12th June 2008, 18:47
Quote:
Originally Posted by johnmustrule
PhysX is a superior simulation; I can't understand why AMD would support Intel with this.

PhysX is not superior. Ever play a game with support for PhysX? The whole idea was that great physics wouldn't cause an fps hit. But it does. A bad one at that. PhysX was a good idea with a terrible follow-through. Not to mention that it hardly even added anything to the games. Most of the time it just gave you a bit more debris.

AMD is only doing this because they do not want to add a third option. There are already two clear physics solutions and they needed to pick one to back. PhysX is no longer an option, due to the Nvidia buyout, so their choice was already made. This comes as no surprise to me.
byronrock 12th June 2008, 19:32
There's no sense in this if AMD won't use it (right now) on its GPUs. AMD would have to port Havok to its poor "Close to Metal" (if that isn't dead already) to run Havok on the GPU.
I don't know, this decision is kind of weird; I thought AMD was closer to choosing Nvidia's PhysX (even if CUDA is needed) than Intel's Havok.

But I suppose AMD's decision isn't final, and it could change with time. :)

Could AMD take both options, Havok and PhysX? A kind of hybrid decision.

That way it would have full compatibility with all future game titles.
Icy EyeG 12th June 2008, 19:59
Quote:
Originally Posted by Timmy_the_tortoise
I thought Nvidia was soldering an entirely separate PPU onto their newest cards next to the GPU, meaning that valuable GPU power isn't taken up by physics processing.. That sounds more exciting than this ATI GPU acceleration.

Or, did I get that wrong?

No, nVIDIA is porting PhysX to CUDA.
BioSniper 12th June 2008, 20:06
It reminds me of 3dfx vs OpenGL vs Direct3D all over again.
Timmy_the_tortoise 12th June 2008, 21:11
Quote:
Originally Posted by Icy EyeG
Quote:
Originally Posted by Timmy_the_tortoise
I thought Nvidia was soldering an entirely separate PPU onto their newest cards next to the GPU, meaning that valuable GPU power isn't taken up by physics processing.. That sounds more exciting than this ATI GPU acceleration.

Or, did I get that wrong?

No, nVIDIA is porting PhysX to CUDA.

Ahh... That makes sense.

Havok is the much better engine anyway. I've seen GRAW with and without PhysX.. and there is pretty much no difference, it's literally just a few extra black bits flying out from the centre of an explosion...

Sure, the tech demos were great... But I have yet to see a decent implementation. And Havok is reaching a point where it pretty much matches PhysX in terms of physics objects on screen at once.
Brett89 12th June 2008, 23:14
I'd love to see the Havok engine go further, seeing as it's so good.
Quote:
Up yours Nvidia
Priceless.
Passarinhuu 13th June 2008, 00:09
Quote:
Originally Posted by TreeDude
PhysX is not superior. Ever play a game with support for PhysX? The whole idea was that great physics wouldn't cause an fps hit. But it does. A bad one at that. PhysX was a good idea with a terrible follow-through. Not to mention that it hardly even added anything to the games. Most of the time it just gave you a bit more debris.

You can't add more debris and expect not to have a performance hit. You're rendering more objects at the same time, so there has to be lower performance... Unless PhysX is also offloading work from the GPU, there is no way you can avoid it.
TreeDude 13th June 2008, 04:46
Quote:
Originally Posted by Passarinhuu
Quote:
Originally Posted by TreeDude
PhysX is not superior. Ever play a game with support for PhysX? The whole idea was that great physics wouldn't cause an fps hit. But it does. A bad one at that. PhysX was a good idea with a terrible follow-through. Not to mention that it hardly even added anything to the games. Most of the time it just gave you a bit more debris.

You can't add more debris and expect not to have a performance hit. You're rendering more objects at the same time, so there has to be lower performance... Unless PhysX is also offloading work from the GPU, there is no way you can avoid it.

Dude, PhysX was an actual add-on card, made by Ageia before Nvidia bought them out. It WAS offloading and it STILL had a performance hit.
HourBeforeDawn 13th June 2008, 04:52
Hmm, well I think this is the better way to go, as Havok was the better physics engine over PhysX.
notatoad 13th June 2008, 05:07
Quote:
Originally Posted by TreeDude
Dude, PhysX was an actual add-on card, made by Ageia before Nvidia bought them out. It WAS offloading and it STILL had a performance hit.
Yes, but it only offloaded the physics calculations, which I believe relieves the CPU, not the GPU. The system could calculate the trajectories for 50 pieces of debris at once instead of 5, but that forces the graphics card to render 10x the polygons, which is where the framerate hit comes from.
Sebbo 13th June 2008, 07:38
ATI was originally looking at running the Havok engine on their GPUs back when they had a physics-on-CrossFire solution coming, but that ground to a halt when Intel bought Havok. Going from that, continuing to work with the Havok engine seems an easy decision compared to forking out money to their main competitor in the GPU market or developing their own solution.
Tim S 13th June 2008, 07:46
Some of the PhysX content I've seen looks really good now it's on the GPU and you know how critical we were of PhysX on the PPU card... it was pants. And Ageia had no installed base, which meant no games could use the effects in such a way that'd break gameplay if you didn't have the PPU add-in card. We'll have to see how good it ends up being when it's actually shipping code running PhysX on the GPU now there's a big installed base behind the engine (70m+ CUDA GPUs).

Havok has had some great implementations in the past, but I can't help but feel a large portion of the engine really needs to be accelerated on the GPU to get the best performance/most realistic effects. The CPU will still be used for certain effects, but not all, because it's a better fit in some cases.
[USRF]Obiwan 13th June 2008, 09:52
seeing is believing...
Cthippo 14th June 2008, 17:46
So are you going to have to have two computers with different CPUs to get the physics in all your games, or will these be cross-compatible? Or will game manufacturers have to code for both Havok and PhysX in every game to ensure that customers can play their game regardless of what CPU they have?

While I think the advancement of physics for gaming is great, I'm afraid that the need to support two different physics engines without a standardized API will be another argument for developers to work with consoles first.