Nvidia dismisses AMD's Batman accusations

The two figureheads behind Nvidia's The Way It's Meant To Be Played program have dismissed AMD's Batman accusations, saying AMD could have implemented AA support itself if it had wanted to.

Late last week, AMD publicly blasted Nvidia and a number of game developers because of some issues in a few recent games that have shipped as part of Nvidia's The Way It's Meant To Be Played program.

Ian McNaughton, a senior manager in Advanced Marketing at AMD, claimed that Nvidia blocked AMD from working on Batman: Arkham Asylum, Need for Speed: Shift and Resident Evil 5, describing them as "proprietary TWIMTBP titles". Ouch.

McNaughton complained that Batman: Arkham Asylum has an anti-aliasing mode on Nvidia hardware which disappears when an ATI Radeon is recognised as the primary GPU in the system. The game also implements Nvidia's PhysX technology.
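
To illustrate the kind of check being described (a hypothetical sketch for illustration only, not Rocksteady's actual code), a game can identify the primary adapter's PCI vendor ID through Direct3D 9 and gate a menu option on it:

// Hypothetical sketch of a vendor check under Direct3D 9 - illustrative only,
// not taken from Batman: Arkham Asylum.
#include <d3d9.h>
#include <cstdio>

int main()
{
    IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
    if (!d3d) return 1;

    D3DADAPTER_IDENTIFIER9 id = {};
    d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id);

    // Well-known PCI vendor IDs: 0x10DE = Nvidia, 0x1002 = ATI/AMD.
    const bool isNvidia = (id.VendorId == 0x10DE);
    std::printf("Primary adapter: %s (vendor 0x%04X)\n", id.Description,
                static_cast<unsigned>(id.VendorId));
    std::printf("In-game AA option: %s\n", isNvidia ? "shown" : "hidden");

    d3d->Release();
    return 0;
}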

However, he neglected to mention that Batman: AA is based on Unreal Engine 3, which uses a deferred renderer on DirectX 9.0. Deferred renderers don't support MSAA in DirectX 9.0 without a driver workaround, which is exactly what Nvidia's DevTech team helped to implement (and test). Because of the tight development schedule, though, this couldn't be tested on ATI's Radeon graphics cards.
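
The limitation can be shown with a couple of plain D3D9 calls (an assumed, simplified setup, not the game's actual renderer): a multisampled render target can only be created as a surface to be resolved, while a texture the lighting pass can sample from has no multisample parameter at all, which is why a DX9 deferred G-buffer and ordinary MSAA don't mix without vendor-specific driver help.

#include <d3d9.h>

// Simplified sketch of the D3D9 constraint on deferred renderers (illustrative only).
HRESULT CreateTargets(IDirect3DDevice9* dev, UINT width, UINT height,
                      IDirect3DSurface9** msaaTarget, IDirect3DTexture9** gbufferTex)
{
    // A multisampled colour target is allowed, but it can only be resolved, never sampled.
    HRESULT hr = dev->CreateRenderTarget(width, height, D3DFMT_A8R8G8B8,
                                         D3DMULTISAMPLE_4_SAMPLES, 0, FALSE,
                                         msaaTarget, NULL);
    if (FAILED(hr)) return hr;

    // The G-buffer texture the lighting shader reads has no multisample option in D3D9,
    // so it is always single-sampled.
    return dev->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                              D3DFMT_A16B16G16R16F, D3DPOOL_DEFAULT,
                              gbufferTex, NULL);
}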

Ashu Rege, director of Nvidia's DevTech team, said that "you have no idea how tight the schedule was on Batman." He explained that there was less than a week to fix several critical bugs in the physics effects before the game went off for Games for Windows approval, because the Nvidia engineer working with developer Rocksteady on the PhysX implementation was just about to go on holiday when the bugs came to light.

Rege said "we had absolutely no time to go 'oh yeah, how can we screw ATI by the way?' Seriously, nobody ever has time to think about those kinds of things. In this situation, had we enabled something that was not tested on ATI GPUs [and broke the game as a result], there were a number of things that could have happened. The worst thing from my perspective is that the developers won't want our help in the future because we broke their game."

Tony Tamasi, senior vice president of Content and Technology at Nvidia, chimed in: "Even if we wished we could, we can't possibly expect to really support ATI's drivers. What if ATI changes the way its drivers apply AA and that breaks Rocksteady's game - whose fault is that?"

Tamasi later went on to say that "no game developer on the planet is going to let us do anything to a game which prevents it from running on ATI, or having a good experience. Whenever we go to do something, the first principle we apply is 'do no harm' - you never make it worse than before you went in. Ever."

"If we did that, next time, the developer is going to say 'sorry, we don't want to work with you guys' and that's the end of our existence," added Rege.

Discuss in the forums.

56 Comments

she'shighvoltage! 3rd October 2009, 04:44 Quote
...wow, fail and then some.
"WELL YOU'RE AT FAULT BECAUSE WE DIDN'T GET ENOUGH TIME"
dyzophoria 3rd October 2009, 04:49 Quote
so the game was rushed, or semi-rushed? from what i'm understanding, that's the case
Yoy0YO 3rd October 2009, 06:02 Quote
They should kiss and make up and release a patch.
I'm really annoyed at being caught up in this damned company politics. First PhysX, now AA.
Elton 3rd October 2009, 06:13 Quote
How does that even work? Nvidia doesn't own the tech rights to AA.
Goty 3rd October 2009, 06:37 Quote
This really isn't a response to AMD's claims at all; If anything it's just a sidestep.

AMD claims that NVIDIA actively blocked them from working with developers on the game and NVIDIA says that there just wasn't time to test on ATI hardware. That's not the same thing!
erratum1 3rd October 2009, 07:41 Quote
As part of 'the way it's meant to be played program' Nvidia send in their engineers to work with the developer and Ati don't. They couldn't possibly share what they had done to get AA with Ati due to the fact that it possibly could have made the game unplayable on Ati hardware. And if that had happened there would have been all HELL TO PAY. Don't forget that games cost millions to develop, can you imagine if they broke it, GET THE HELL OUTTA MY STUDIO AND NEVER COME BACK ! lol.
Evildead666 3rd October 2009, 08:37 Quote
No. Nvidia are probably just pulling a "Call of Juarez"...
It wouldn't surprise me if a patch came out after all the reviews were done...

Not many sites go back and re-review a game again......
impar 3rd October 2009, 09:53 Quote
Greetings!
Quote:
Originally Posted by Goty
This really isn't a response to AMD's claims at all; If anything it's just a sidestep.

AMD claims that NVIDIA actively blocked them from working with developers on the game and NVIDIA says that there just wasn't time to test on ATI hardware. That's not the same thing!
Agree.

And then this:
Quote:
He explained that there was less than a week to fix several critical bugs in the physics effects before the game went off for Games for Windows approval, because the Nvidia engineer working with developer Rocksteady on the PhysX implementation was just about to go on holiday when the bugs came to light.
Why use GFWL?
general22 3rd October 2009, 10:37 Quote
Looks like AMD isn't willing to improve their own developer relations program and rather they would prefer to complain about the extremely successful NVIDIA program. It seems this time around they didn't have time to test this AA mode on ATI hardware but then again this is NVIDIA here so there may have been more insidious motives.
frontline 3rd October 2009, 10:40 Quote
What's next? Nvidia telling developers to exclude DX11 versions of games until their hardware is out in the marketplace?

"We didn't have time to implement the DX11 code due to tight release deadlines......"

I guess it's to be expected if one company or another is throwing money at software developers to get an edge, but I just wish developers were able to concentrate on optimising the software for the latest hardware out there, regardless of who manufactures it. Although, now that we have mostly console ports, it's debatable whether any new PC GPU hardware will be fully exploited in the near future.
wuyanxu 3rd October 2009, 11:03 Quote
Quote:
Originally Posted by general22
Looks like AMD isn't willing to improve their own developer relations program and rather they would prefer to complain about the extremely successful NVIDIA program.

This is the problem: ATI should stop complaining and get a developer support program set up. That way, the customer gains.
gavomatic57 3rd October 2009, 14:35 Quote
Games take years to develop, so it seems Nvidia are to blame because they engaged with the developer whilst AMD didn't bother at any point during those years. Unless Nvidia are paying for the development, they can't legally block a rival from working with the developer. The developer can block ATI's driver bods, but Nvidia can't.
perplekks45 3rd October 2009, 14:52 Quote
Yay! More nVidia bashing!

If AMD wants their cards to be supported properly by games maybe they should stop whining and start a program similar to TWIMTBP. That way they could block nVidia support and we would have a world in which you can only play half the games due to artificial limitations by the graphics card manufacturers.

Sure, nVidia wasn't unhappy about the fact that AMD hardware isn't fully supported but if AMD set up their own team properly they could possibly work together and make games enjoyable for everyone... no matter what hardware you have.
LonDom 3rd October 2009, 15:26 Quote
It's pathetic, EA released a game that wasn't out of beta testing, jesus the tuning resets if you replay a race. But the ATI thing has to stop, they also need a TWIMTBP program.

my 2 cents.
Pterodon 3rd October 2009, 15:59 Quote
Quote:
However, he neglected to mention that Batman: AA is based on Unreal Engine 3, which uses a deferred renderer on DirectX 9.0. Deferred renderers don't support MSAA in DirectX 9.0 without a driver workaround, which is exactly what Nvidia's DevTech team helped to implement (and test). Because of the tight development schedule, though, this couldn't be tested on ATI's Radeon graphics cards

False.
UE3 does not support MSAA on DX9 HARDWARE. There is no driver workaround to do that; it's technically impossible. What you see on DX9 hardware is not MSAA, it's just blur.
But it should support MSAA on DX10/10.1 and DX11 cards without any workaround. That's standard support. And in B:AA, MSAA on DX10/DX10.1 hardware only works with nVidia cards.
mi1ez 3rd October 2009, 16:02 Quote
I'm disliking nvidia more and more at the moment with their PhysX, their naming scheme, that fake graphics card they were demoing and now this...
wuyanxu 3rd October 2009, 17:07 Quote
Quote:
Originally Posted by LonDom
It's pathetic, EA released a game that wasn't out of beta testing, jesus the tuning resets if you replay a race. But the ATI thing has to stop, they also need a TWIMTBP program.

my 2 cents.
I agree completely, people should look at ATI/AMD's strategy of bashing competitors and think about how to make sure the customer gets the most out of their card.

nVidia is doing its part by investing in and working with developers for a better gaming experience. There's nothing wrong with this, and it should not receive internet hatred for trying to improve the gaming experience.

In the end, you are not going to care which company tries to block the other. All that customers should care about is that the cards they've bought work with all the features, and nVidia has delivered that.
Elton 3rd October 2009, 17:51 Quote
The only problem I have with this is this: Nvidia's TWIMTBP program is proprietary, if it was open to all that would be fine. But it clearly isn't.

The other thing that really bothers me is how unrealistic some of you are, at least in terms of saying ATI should get a program like this. They do, they have developer support, it's just not paraded around like Nvidia's.

That said, it does give me a bit of incentive to buy an Nvidia card.
gavomatic57 3rd October 2009, 18:07 Quote
Quote:
Originally Posted by Elton


That said, it does give me a bit of incentive to buy an Nvidia card.

Exactly.

And if AMD have a programme like TWIMTBP, why keep it so secret? The only game I recall seeing an ATI logo on in my whole collection is Call of Juarez.
Evildead666 3rd October 2009, 18:30 Quote
Half Life 2 was an ATi game if I remember....with the coupons because it wasn't ready.
Tim S 3rd October 2009, 18:48 Quote
Quote:
Originally Posted by Pterodon
Quote:
However, he neglected to mention that Batman: AA is based on Unreal Engine 3, which uses a deferred renderer on DirectX 9.0. Deferred renderers don't support MSAA in DirectX 9.0 without a driver workaround, which is exactly what Nvidia's DevTech team helped to implement (and test). Because of the tight development schedule, though, this couldn't be tested on ATI's Radeon graphics cards

False.
UE3 does not support MSAA on DX9 HARDWARE. There is no driver workaround to do that; it's technically impossible. What you see on DX9 hardware is not MSAA, it's just blur.
But it should support MSAA on DX10/10.1 and DX11 cards without any workaround. That's standard support. And in B:AA, MSAA on DX10/DX10.1 hardware only works with nVidia cards.

They are supporting driver-forced MSAA. The option in game just enables the driver workaround that Nvidia implemented for all UE3 games. It was announced back at the G92 launch and it's the same capability in Batman.
Quote:
Originally Posted by Elton
The only problem I have with this is this: Nvidia's TWIMTBP program is proprietary, if it was open to all that would be fine. But it clearly isn't.

I think you're getting confused. :)

PhysX is proprietary, because it only works with Nvidia hardware (I don't like the fact that the game physics are limited to one vendor... and neither do you guys!), but the work they do on TWIMTBP goes well beyond just adding PhysX (or support for AA). Quite a lot of it is fixing the game (Nvidia's developer tools are pretty popular and things like NVPerfHUD work on ATI hardware), getting the developer to implement a proper PC interface and actually adding more so that the port from the consoles is less of a port.

Developers often have a minuscule budget to spend on the PC version of their game because that's not where the money is and the publishers are all about ROI. They often have very little time in the development schedule for the PC because of that, and it's why some console ports are exactly that - they've had no TLC and some even have the console button mapping on screen (press A to start the game, etc).
rickysio 3rd October 2009, 19:42 Quote
SO nVidia invests money, and ATI can't be half arsed to do it, and they complain?

AMD = Advanced Marketing Drivel.
gavomatic57 3rd October 2009, 21:09 Quote
Quote:
Originally Posted by rickysio
SO nVidia invests money, and ATI can't be half arsed to do it, and they complain?

Pretty much.
I think ATI forget why a lot of people buy powerful graphics cards...for games! The drivers and the optimisations are every bit as important as the hardware.
Goty 3rd October 2009, 22:22 Quote
Quote:
Originally Posted by rickysio
SO nVidia invests money, and ATI can't be half arsed to do it, and they complain?

AMD = Advanced Marketing Drivel.

ATI invests money in making better hardware for the consumer instead of giving money to developers to lock out any features that might give ATI a greater edge in performance.
Tim S 3rd October 2009, 23:39 Quote
Quote:
Originally Posted by Goty
Quote:
Originally Posted by rickysio
SO nVidia invests money, and ATI can't be half arsed to do it, and they complain?

AMD = Advanced Marketing Drivel.

ATI invests money in making better hardware for the consumer instead of giving money to developers to lock out any features that might give ATI a greater edge in performance.

It's not quite as clear cut as that. Nvidia spends a lot of money with a lot of developers, but actual hard cash changes hands "very rarely" according to Tony Tamasi, who runs the group. Most of the time it is spent on developer support with engineers, developer tools, the Game Test Labs in Moscow for debugging code and much more.

AMD also spends money on content, but it's less widespread than Nvidia. Here's a couple of recent games that AMD has spent money on: Codemasters for bundling Dirt 2 (first DX11 game... I know Battleforge has DX11 content via a patch, but I'm talking about the first game to ship with DX11 in the box); Techland on co-marketing and bundling of Call of Juarez (first real DX10 game... yes, there was Lost Planet and CoH, but they were again patched or had a novel DX10 implementation).

AMD has worked with other developers, but it's not clear whether they're entering into financial agreements where money changes hands.

One thing that's worth thinking about is something an Intel exec said to me a few years ago (this isn't a direct quote, but the meaning is all there): "without great software, the hardware is nothing, no matter how great the hardware is". Nvidia's largest engineering group is its software team... there are over 1,100 driver engineers, 200 engineers working on content/developer relations (across HPC/CUDA, mobile phones, PC gaming, etc). I know Intel also employs more software engineers than hardware engineers (not including process technology) and I presume the same is true for AMD - they unfortunately don't talk a lot about it though.
gavomatic57 4th October 2009, 11:42 Quote
Quote:
Originally Posted by Tim S


One thing that's worth thinking about is something an Intel exec said to me a few years ago (this isn't a direct quote, but the meaning is all there): "without great software, the hardware is nothing, no matter how great the hardware is". Nvidia's largest engineering group is its software team... there are over 1,100 driver engineers, 200 engineers working on content/developer relations (across HPC/CUDA, mobile phones, PC gaming, etc). I know Intel also employs more software engineers than hardware engineers (not including process technology) and I presume the same is true for AMD - they unfortunately don't talk a lot about it though.

Exactly, so while ATI may get a bit of a hardware edge from time to time, its performance is hampered by dreadful drivers and games that are released without the driver optimisations needed to make them shine, whereas Nvidia and their labs work with the developers to get the most out of the hardware. You pay more for an Nvidia card, but at least you know the software is going to work well with it because the vendor has made the effort.
Combatus 4th October 2009, 19:32 Quote
Quote:
Originally Posted by gavomatic57
Quote:
Originally Posted by Tim S


One thing that's worth thinking about is something an Intel exec said to me a few years ago (this isn't a direct quote, but the meaning is all there): "without great software, the hardware is nothing, no matter how great the hardware is". Nvidia's largest engineering group is its software team... there are over 1,100 driver engineers, 200 engineers working on content/developer relations (across HPC/CUDA, mobile phones, PC gaming, etc). I know Intel also employs more software engineers than hardware engineers (not including process technology) and I presume the same is true for AMD - they unfortunately don't talk a lot about it though.

Exactly, so while ATI may get a bit of a hardware edge from time to time, its performance is hampered by dreadful drivers and games that are released without the driver optimisations needed to make them shine, whereas Nvidia and their labs work with the developers to get the most out of the hardware. You pay more for an Nvidia card, but at least you know the software is going to work well with it because the vendor has made the effort.

I wouldn't go as far as saying current ATI drivers are dreadful. Batman: AA aside we've actually seen very few problems with ATI drivers in the lab with the 4xxx series cards (except the HD 4870X2). Shoddy drivers are more of an excuse with the HD 3800 series and earlier. In fact we've actually seen more anomalies with Nvidia drivers in our benchmarks with GTX 275s outperforming GTX 285s and stuttering in games like Fallout 3 where ATI cards breeze through them.
Tim S 4th October 2009, 19:46 Quote
Quote:
Originally Posted by Combatus
Quote:
Originally Posted by gavomatic57
Quote:
Originally Posted by Tim S


One thing that's worth thinking about is something an Intel exec said to me a few years ago (this isn't a direct quote, but the meaning is all there): "without great software, the hardware is nothing, no matter how great the hardware is". Nvidia's largest engineering group is its software team... there are over 1,100 driver engineers, 200 engineers working on content/developer relations (across HPC/CUDA, mobile phones, PC gaming, etc). I know Intel also employs more software engineers than hardware engineers (not including process technology) and I presume the same is true for AMD - they unfortunately don't talk a lot about it though.

Exactly, so while ATI may get a bit of a hardware edge from time to time, its performance is hampered by dreadful drivers and games that are released without the driver optimisations needed to make them shine, whereas Nvidia and their labs work with the developers to get the most out of the hardware. You pay more for an Nvidia card, but at least you know the software is going to work well with it because the vendor has made the effort.

I wouldn't go as far as saying current ATI drivers are dreadful. Batman: AA aside we've actually seen very few problems with ATI drivers in the lab with the 4xxx series cards (except the HD 4870X2). Shoddy drivers are more of an excuse with the HD 3800 series and earlier. In fact we've actually seen more anomalies with Nvidia drivers in our benchmarks with GTX 275s outperforming GTX 285s and stuttering in games like Fallout 3 where ATI cards breeze through them.

Absolutely, we wouldn't say the 5870/5850 were the cards to buy at the moment if the drivers sucked the big one! :)
Elton 4th October 2009, 20:02 Quote
Yeah I apologize I was a bit confused. And at any rate, I'm only waiting for the GT300 to see if their image quality in AA/AF has improved. Once that happens I'll be pumping frames in Oblivion with max filters.
thehippoz 4th October 2009, 20:46 Quote
reading the evga forums.. as you know evga is an nvidia reseller- and guys on the forum are getting new members to buy the gtx295 over the ati cards there =\ I mean there's fanboy.. and then there's those guys
perplekks45 4th October 2009, 20:57 Quote
I'll wait for the GT300 series cards and then I'll be in the market for a replacement for my 8800GTS/640. Until then, I'm impressed by the HD5xxx series, especially the power draw [idle is w0000t!?], but I won't make a move.
Initialised 4th October 2009, 21:14 Quote
PhysX in this title is also "The Way it's Meant to be Gimped". Beyond3D have a workaround here http://forum.beyond3d.com/showthread.php?p=1332461 that allows the PhysX load to be spread across all cores instead of being limited to just one. So if you have a decent quad, or even better an HT-enabled quad, you should be able to get decent performance with PhysX running on the CPU. For AA just force it in CCC, but honestly, the last game I had to do that on was TCOR: Butcher Bay.
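
For illustration only - the Beyond3D tweak itself is a configuration change, not code - the idea it exploits is simply splitting the per-frame physics work across every available core instead of one. A minimal sketch of that idea (hypothetical, not the actual PhysX scheduler):

#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Body { float pos[3]; float vel[3]; };

// Advance every body by one timestep, with the work divided across all hardware threads.
void integrate(std::vector<Body>& bodies, float dt)
{
    const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (bodies.size() + workers - 1) / workers;
    std::vector<std::thread> pool;

    for (unsigned w = 0; w < workers; ++w) {
        pool.emplace_back([&bodies, dt, w, chunk] {
            const std::size_t begin = w * chunk;
            const std::size_t end = std::min(bodies.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)      // each core owns its own slice
                for (int axis = 0; axis < 3; ++axis)
                    bodies[i].pos[axis] += bodies[i].vel[axis] * dt;
        });
    }
    for (auto& t : pool) t.join();
}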

As I see it this is a way for nVidia to feed or revive the (now outdated) assumption that you need to wait for drivers and patches for games to work well on ATi hardware. TWIMTBP is a good thing in general, but it loses its value when it is used to promote nVidia hardware (which is now a generation behind) by limiting the features available on alternative hardware, whether it is PhysX on the CPU or AA on the GPU.

NVIDIA: The Way it's meant to be Gimped.
ZERO <ibis> 5th October 2009, 05:42 Quote
Basically ati says that it is unfair for NVIDIA to invest money on its end to make games play better on its hardware and believes that NVIDIA should also make the games work on ATI hardware. Seeing as how ATI owns NVIDIA this clearly is logical and therefore NVIDIA is not supporting its parent company. But let's not forget that AMD also owns ATI and therefore NVIDIA, and so NVIDIA should make it work better on AMD cpus, and because Intel owns all of them NVIDIA needs to add special i7 support. But most importantly, because all of these companies use silicon to make their chips I think that it is logical that sand also be supported in the games by NVIDIA, and because Parrotfish make sand in tropical reefs clearly NVIDIA needs to focus on the interests of the Parrotfish above all else. In fact all game development should be delayed, and all projects from all companies, until the needs and demands of Parrotfish are discovered and provided for.

Only in this way can everything be fair! Thank you ATI for stating the clear and true logic stated above. How could we think of this without you :D
DarkLord7854 5th October 2009, 07:02 Quote
Quote:
Originally Posted by general22
Looks like AMD isn't willing to improve their own developer relations program and rather they would prefer to complain about the extremely successful NVIDIA program. It seems this time around they didn't have time to test this AA mode on ATI hardware but then again this is NVIDIA here so there may have been more insidious motives.

It's more like nVidia really had no reason to test it on ATI. Why in the world would you do your competitor a favor by fixing stuff to work with their hardware when your job is to implement it to make sure your hardware works?

I see it as a case of butthurt because AMD can't get their sh*t together.

Like how Opera complains IE ships with Windows. Get the f*ck over yourself and do a better job at marketing your crap then FFS, don't sit there and cry like a little b*tch because you're not doing anything to improve the experience on your hardware.

AMD seem to expect things to fall into their lap.
thehippoz 5th October 2009, 07:47 Quote
it was the physx blocking on batman where they went too far imo..

I know folders love nvidia because it's just better.. and programs like badaboom and vreveal (I use these) are very nice to speed things up- something ati has to get on with dx11.. I'm positive nvidia could have had something out to match this card if they weren't so milky.. just be happy ati is around to keep pressure up, without them nvidia would be full of overpriced fail waiting on Larrabee to mature- glad ati's come out with a good one this time finally

I mean these arguments like on evga.. seen a guy just the other day buy a gtx295 on a mass forum recommendation over the 5870- he was asking about the two.. I mean that to me is straight fanboy- you're buying a dx10 part that works off of sli and that somehow makes sense to these guys

and look, the 5870 caused them to turn around and attempt to make a 3 billion transistor part =] but then listening to them say it will be out end of the year- you gotta start wearing boots..
B1GBUD 5th October 2009, 09:24 Quote
Quote:
Originally Posted by Combatus

I wouldn't go as far as saying current ATI drivers are dreadful. Batman: AA aside we've actually seen very few problems with ATI drivers in the lab with the 4xxx series cards (except the HD 4870X2). Shoddy drivers are more of an excuse with the HD 3800 series and earlier. In fact we've actually seen more anomalies with Nvidia drivers in our benchmarks with GTX 275s outperforming GTX 285s and stuttering in games like Fallout 3 where ATI cards breeze through them.

You're kidding, right? I can remember problems with ATI's X1900 drivers, especially for Crossfire. The performance was pathetic and I questioned if a dual card setup was right. I stuck in a couple of 8800GTXs and Nvidia have proved they can get their drivers working a damn sight better than ATI ever could.

I'm not a fanboy, I'm prepared to give either Nvidia or ATI another crack of the whip... but the way it's going I think I'll stick with Nvidia for my next purchase. PhysX isn't the show stopper it was promised to be, but it would be handy to have support for it anyway!
chizow 5th October 2009, 16:47 Quote
Quote:
Originally Posted by Combatus

I wouldn't go as far as saying current ATI drivers are dreadful. Batman: AA aside we've actually seen very few problems with ATI drivers in the lab with the 4xxx series cards (except the HD 4870X2). Shoddy drivers are more of an excuse with the HD 3800 series and earlier. In fact we've actually seen more anomalies with Nvidia drivers in our benchmarks with GTX 275s outperforming GTX 285s and stuttering in games like Fallout 3 where ATI cards breeze through them.
But isn't this news bit all about shoddy drivers? While you guys have done a better job than most news outlets of dismissing their laughable claims and allegations against TWIMTBP, it's amazing how much play this story has gotten, as if somehow Nvidia has done something wrong. They've simply done what any successful company would: invested resources to support and improve their own products. Somehow AMD and their supporters think these benefits should automatically transfer to their own hardware, as if Nvidia has some obligation to support them also. It makes no sense whatsoever.

In any case, it should be obvious this guy McNaughton has no credibility. To claim they couldn't get early enough access to RE5 is not only a joke, but an insult to the reader's intelligence. I guess AMD is going to claim this one just popped up on their radar last minute, given it's in the top 5 for console sales this year, has been complete for months and already had a PC benchmark released months ago. Same for NFS: Shift, another highly anticipated title from a major publisher, EA, so really there shouldn't have been surprises there. Lastly there's Batman: AA, which has generated a lot of buzz for at least a year and of course is garnering deserved GOTY buzz.

For AMD to claim they didn't have time to work on these titles, but instead spent their time and resources focusing on garbage features for garbage titles like BattleForge, Stormrise, HAWX and STALKER: Pripyat under their own Get in the Game DevRel label, is a slap in the face to their customers, plain and simple. They have poured money into Dirt 2 for DX11 support, which is a promising title, but I think it's obvious Nvidia is better managing their resources with their selection of TWIMTBP titles, as demonstrated by the resulting additional features and product.
chizow 5th October 2009, 17:48 Quote
Quote:
Originally Posted by Initialised
PhysX in this title is also "The Way it's Meant to be Gimped". Beyond3D have a workaround here http://forum.beyond3d.com/showthread.php?p=1332461 that allows the PhysX load to be spread across all cores instead of being limited to just one. So if you have a decent quad or evan better an HT-enabled quad you should be able to get decent performance with PhysX running on the CPU. For AA just force it in CCC, but honestly, the last game I had to do that on was TCOR: Butcher Bay.

FYI, this workaround simply reduces the processing load by decreasing the number of calculations used for PhysX, most noticeably collision detection. This leads to some unexpected and undesirable results....

But to better illustrate how poorly the CPU handles advanced physics calculations, you can see below:

http://www.youtube.com/watch?v=AUOr4cFWY-s#t=1m10s

Looks like batman stepped in some gum....then some toilet paper....hell I guess he just figured he might as well tar and feather himself for the fun of it. Also notice the horrendous frame rates. But that's what you get with these hack job workarounds instead of proper vendor support. You can run PhysX in full fidelity by turning off GPU acceleration and running it on the CPU as well; you just get unplayable 5-10 FPS framerates even with the fastest CPUs on the planet.

Also, the common fallacy that Nvidia's TWIMTBP somehow hurts all consumers is clearly false; it benefits all Nvidia's customers, which by any metric is the vast majority (2:1) of gamers and PC gaming hardware purchasers. As for the AA issue, it's more of the same with UE 3.0: if you own a UE 3.0 game, having to force MSAA via the driver shouldn't be anything new, especially given UE 3.0 has always required compatibility flags that aren't exposed normally for both AA and SLI/CF. For Nvidia users that means relying on nHancer until a patch flags those bits in a driver update. For AMD users, that means renaming your game .exe to UT3 or Bioshock until they get around to flagging the proper bits in their (hidden) game-specific profile.
perplekks45 5th October 2009, 19:58 Quote
Quote:
Originally Posted by ZERO <ibis>
Basically ati says that it is unfair for NVIDIA to invest money on its end to make games play better on its hardware and believes that NVIDIA should also make the games work on ATI hardware. Seeing as how ATI owns NVIDIA this clearly is logical and therefore NVIDIA is not supporting its parent company. But let's not forget that AMD also owns ATI and therefore NVIDIA, and so NVIDIA should make it work better on AMD cpus, and because Intel owns all of them NVIDIA needs to add special i7 support. But most importantly, because all of these companies use silicon to make their chips I think that it is logical that sand also be supported in the games by NVIDIA, and because Parrotfish make sand in tropical reefs clearly NVIDIA needs to focus on the interests of the Parrotfish above all else. In fact all game development should be delayed, and all projects from all companies, until the needs and demands of Parrotfish are discovered and provided for.

Only in this way can everything be fair! Thank you ATI for stating the clear and true logic stated above. How could we think of this without you :D
rep++ for that! I lol'd.
Quote:
Originally Posted by B1GBUD
You're kidding, right? I can remember problems with ATI's X1900 drivers, especially for Crossfire. The performance was pathetic and I questioned if a dual card setup was right. I stuck in a couple of 8800GTXs and Nvidia have proved they can get their drivers working a damn sight better than ATI ever could.

You do realize we're in 2009, right? And it's pretty close to the end as well...
AMD's [oh, yeah... in case you didn't know: they bought ATi a while ago] drivers are a lot better than in the days when you could still buy new x1xxx cards, whereas nVidia seems to have gone from near-perfect driver support to pretty awful in some cases and a-bit-better-than-average in the vast majority. Resting on their laurels, it seems.
LordPyrinc 6th October 2009, 01:36 Quote
I just picked up the game yesterday and am installing it now. I do have an Nvidia card, but I would be a bit miffed if I had an ATI card and found out that not all the features are supported. I guess I made the right choice in manufacturer when I went with Nvidia.
Elton 6th October 2009, 06:59 Quote
AA at any rate isn't too big of an issue really. Seeing as ATT can force it easily. I mean I do that for almost all games, Oblivion, Mass Effect, Bioshock. It's not too surprising.
crazyceo 6th October 2009, 08:15 Quote
AMD say "WHAAAA! WHAAAA! WHAAAA! I'm telling my mommy!"

Nvidia say "Go tell ya mommy and ask her how my kids are doing!"

Batman: AA is a very good game and was sure to be a big hit on the back of The Dark Knight movie. Don't you think it would have been in AMD's best interest to get on board at the very beginning and help their hardware customers?

AMD are to blame here, not Nvidia. AMD could have said "Here, have all this equipment and development tools and let's see if we can get it working on our hardware" but no, they sat on their hands and instead got the marketing department to plan a strategy to fill the media with more whining.

Why not put that energy into helping their customers instead of just pissing them off?
ElMoIsEviL 6th October 2009, 09:55 Quote
Quote:
Originally Posted by chizow
FYI, this workaround simply reduces the processing load by decreasing the number of calculations used for PhysX, most noticeably collision detection. This leads to some unexpected and undesirable results....

But to better illustrate how poorly the CPU handles advanced physics calculations, you can see below:

http://www.youtube.com/watch?v=AUOr4cFWY-s#t=1m10s

Looks like batman stepped in some gum....then some toilet paper....hell I guess he just figured he might as well tar and feather himself for the fun of it. Also notice the horrendous frame rates. But that's what you get with these hack job workarounds instead of proper vendor support. You can run PhysX in full fidelity by turning off GPU acceleration and running it on the CPU as well; you just get unplayable 5-10 FPS framerates even with the fastest CPUs on the planet.

Also, the common fallacy that Nvidia's TWIMTBP somehow hurts all consumers is clearly false; it benefits all Nvidia's customers, which by any metric is the vast majority (2:1) of gamers and PC gaming hardware purchasers. As for the AA issue, it's more of the same with UE 3.0: if you own a UE 3.0 game, having to force MSAA via the driver shouldn't be anything new, especially given UE 3.0 has always required compatibility flags that aren't exposed normally for both AA and SLI/CF. For Nvidia users that means relying on nHancer until a patch flags those bits in a driver update. For AMD users, that means renaming your game .exe to UT3 or Bioshock until they get around to flagging the proper bits in their (hidden) game-specific profile.

You're full of it.

I am the author of that video and you must be that retarded fanboi from HardOCP (Atech I believe is your name there).

The problem you have here, my son, is that you have challenged individuals who are your intellectual superiors. Here is why your argument doesn't hold water.

A. PhysX is written in CUDA (Which is a variant of either Fortran, C++ or C with special nVIDIA CUDA extensions).

B. The framerate is actually in the 40-50s. The video is locked to 29FPS (FRAPS video recording). So not only is the CPU handling the PhysX in that clip, it's also recording and encoding the clip into an AVI file.

So let's see how both of these two pieces of evidence relate to your argument. You claim that because CPUs can't execute special CUDA code as well as CUDA architecture based GPUs can; that GPU > CPU in terms of Physics? Correct?

You are correct to mention that the level of Physical Interactions (the precision so to speak) is lowered. But what this does show is that you can get the same effects (the way it looks) contrary to those nVIDIA comparisons.

Let's have a look here:
http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230

Do you notice something? Oh yes, a Nehalem based Core i7 is pretty much on par with a GT200 based nVIDIA GPU when it comes to horsepower. This also explains why the PhysX engine used in Ghostbusters (Infernal Engine) produces far more collisions than we see in Batman yet remains playable:
http://www.youtube.com/watch?v=KGQue3ruGVw

PhysX CUDA Libraries are coded to run like crap on CPUs and deliberately limited to show CPUs in a bad light (thus selling more nVIDIA cards). When you code a Physics engine properly (for a CPU as the Infernal Engine demonstrates) you can get damn close to GPU Physics without the hassle.

Know your place :)
Tim S 6th October 2009, 16:11 Quote
Quote:
Originally Posted by ElMoIsEviL
Quote:
Originally Posted by chizow
FYI, this workaround simply reduces the processing load by decreasing the number of calculations used for PhysX, most noticeably collision detection. This leads to some unexpected and undesirable results....

But to better illustrate how poorly the CPU handles advanced physics calculations, you can see below:

http://www.youtube.com/watch?v=AUOr4cFWY-s#t=1m10s

Looks like batman stepped in some gum....then some toilet paper....hell I guess he just figured he might as well tar and feather himself for the fun of it. Also notice the horrendous frame rates. But that's what you get with these hack job workarounds instead of proper vendor support. You can run PhysX in full fidelity by turning off GPU acceleration and running it on the CPU as well; you just get unplayable 5-10 FPS framerates even with the fastest CPUs on the planet.

Also, the common fallacy that Nvidia's TWIMTBP somehow hurts all consumers is clearly false; it benefits all Nvidia's customers, which by any metric is the vast majority (2:1) of gamers and PC gaming hardware purchasers. As for the AA issue, it's more of the same with UE 3.0: if you own a UE 3.0 game, having to force MSAA via the driver shouldn't be anything new, especially given UE 3.0 has always required compatibility flags that aren't exposed normally for both AA and SLI/CF. For Nvidia users that means relying on nHancer until a patch flags those bits in a driver update. For AMD users, that means renaming your game .exe to UT3 or Bioshock until they get around to flagging the proper bits in their (hidden) game-specific profile.

You're full of it.

I am the author of that video and you must be that retarded fanboi from HardOCP (Atech I believe is your name there).

The problem you have here, my son, is that you have challenged individuals who are your intellectual superiors. Here is why your argument doesn't hold water.

A. PhysX is written in CUDA (Which is a variant of either Fortran, C++ or C with special nVIDIA CUDA extensions).

B. The framerate is actually in the 40-50s. The video is locked to 29FPS (FRAPS video recording). So not only is the CPU handling the PhysX in that clip, it's also recording and encoding the clip into an AVI file.

So let's see how both of these two pieces of evidence relate to your argument. You claim that because CPUs can't execute special CUDA code as well as CUDA architecture based GPUs can; that GPU > CPU in terms of Physics? Correct?

You are correct to mention that the level of Physical Interactions (the precision so to speak) is lowered. But what this does show is that you can get the same effects (the way it looks) contrary to those nVIDIA comparisons.

Let's have a look here:
http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230

Do you notice something? Oh yes, a Nehalem based Core i7 is pretty much on par with a GT200 based nVIDIA GPU when it comes to horsepower. This also explains why the PhysX engine used in Ghostbusters (Infernal Engine) produces far more collisions than we see in Batman yet remains playable:
http://www.youtube.com/watch?v=KGQue3ruGVw

PhysX CUDA Libraries are coded to run like crap on CPUs and deliberately limited to show CPUs in a bad light (thus selling more nVIDIA cards). When you code a Physics engine properly (for a CPU as the Infernal Engine demonstrates) you can get damn close to GPU Physics without the hassle.

Know your place :)

I don't disagree that PhysX is poorly written for CPUs, but what I do disagree with is that you're using theoretical double precision throughput as a measure of efficiency when it comes to PhysX calculations. Because PhysX is supported by all Nvidia GPUs since G80, it doesn't make use of double precision FP ops - G80 doesn't support double precision.

RV770 is still a more efficient GPU when it comes to overall peak theoretical throughput in single precision ops, but the VLIW architecture makes things a little more interesting for the developer if they want to achieve maximum throughput.
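
For reference, the peak single-precision arithmetic behind that point, assuming an HD 4870 (RV770) and a GTX 280 (GT200) - back-of-the-envelope peak rates, not benchmarks:

$$\text{RV770: } 800 \times 0.75\,\text{GHz} \times 2\ \text{FLOPs/clock (MAD)} = 1200\ \text{GFLOPS}$$
$$\text{GT200: } 240 \times 1.296\,\text{GHz} \times 3\ \text{FLOPs/clock (MAD+MUL)} \approx 933\ \text{GFLOPS}$$

The RV770 figure only materialises if the compiler keeps all five slots of each VLIW unit busy, which is the caveat about maximum throughput above.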
chizow 6th October 2009, 17:18 Quote
Quote:
Originally Posted by ElMoIsEviL

You're full of it.

I am the author of that video and you must be that retarded fanboi from HardOCP (Atech I believe is your name there).
Full of it? Only a "retarded fanboi" would claim "hardware PhysX runs just fine on the CPU" with that horrid demonstration of a Batman covered in crap. Not only that, but in other areas the effects are even worse, like steam behaving like liquid in zero-G environments. I guess it's simulating PhysX in space on the CPU? What a joke!
Quote:
The problem you have here, my son, is that you have challenged individuals who are your intellectual superiors. Here is why your argument doesn't hold water.
Make sure to let me know when they arrive, my child. /endthulsadoomvoice.
Quote:
A. PhysX is written in CUDA (Which is a variant of either Fortran, C++ or C with special nVIDIA CUDA extensions).
Wrong, my child, PhysX is written in C and compiled for [insert whatever backend API is required]. Honestly, do you think PhysX was really written in CUDA when just about every platform it supports pre-dates CUDA and GeForce Acceleration? DO SOME RESEARCH before commenting ignorantly, especially if you're going to take some comical patronizing tone.
Quote:
B. The framerate is actually in the 40-50s. The video is locked to 29FPS (FRAPS video recording). So not only is the CPU handling the PhysX in that clip, it's also recording and encoding the clip into an AVI file.
It might be 40-50FPS at the start, but at the point I highlighted, @1m10s, it clearly drops to below 20 FPS; the transition is obvious once you actually have a sufficient CPU load, once again showing the CPU is inadequate when it comes to accelerating PhysX effects. And yes, the effects look like utter rubbish, so again, you'd have to be a "retarded fanboi" to think this is a valid workaround or substitute for actual GPU PhysX acceleration.
Quote:
So let's see how both of these two pieces of evidence relate to your argument. You claim that because CPUs can't execute special CUDA code as well as CUDA architecture based GPUs can; that GPU > CPU in terms of Physics? Correct?
PhysX (like Havok) has run on x86 platforms far longer than CUDA existed, so once again, any assumptions you're making about CUDA being unoptimized is clearly inaccurate. The PhysX runtime libraries are the same for both the CPU and GPU for PC releases.
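
For context, in the PhysX 2.x SDK of that era the same scene code could be pointed at software (CPU) or hardware simulation with a single flag in the scene descriptor. A rough sketch from memory - treat the exact names as assumptions to check against the SDK headers:

#include "NxPhysics.h"  // PhysX 2.x SDK header (assumed name)

NxScene* createScene(NxPhysicsSDK* sdk, bool useHardware)
{
    NxSceneDesc desc;
    desc.gravity = NxVec3(0.0f, -9.81f, 0.0f);
    // Same engine, same calling code; only this flag selects CPU or PPU/GPU simulation.
    desc.simType = useHardware ? NX_SIMULATION_HW : NX_SIMULATION_SW;
    return sdk->createScene(desc);
}

void stepScene(NxScene* scene, float dt)
{
    scene->simulate(dt);
    scene->flushStream();
    scene->fetchResults(NX_RIGID_BODY_FINISHED, true);  // block until the step completes
}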
Quote:
You are correct to mention that the level of Physical Interactions (the precision so to speak) is lowered. But what this does show is that you can get the same effects (the way it looks) contrary to those nVIDIA comparisons.
No you don't. Your video makes PhysX look like garbage; a proper GPU accelerated implementation looks great. It's not even close really. And you don't need to be a "retarded fanboi" to see the difference. Compare these paper effects to those in your video. Not only are per-object nodes and fidelity clearly increased, but it also runs at a constant frame rate throughout:

http://www.youtube.com/watch?v=6GyKCM-Bpuw#t=5m25s
Quote:
Let's have a look here:
http://www.realworldtech.com/page.cfm?ArticleID=RWT090909050230

Do you notice something? Oh yes, a Nehalem based Core i7 is pretty much on par with a GT200 based nVIDIA GPU when it comes to horsepower. This also explains why the PhysX engine used in Ghostbusters (Infernal Engine) produces far more collisions than we see in Batman yet remains playable:
http://www.youtube.com/watch?v=KGQue3ruGVw
ROFL, that bit of irrelevant info might actually hold some significance if double-precision floating point was actually used for current physics engines, but it's not. The relevant point in your link confirms the GPU handles single precision floating point math better than the CPU by ten-fold, with a GT200 clocking in at ~1 TeraFLOP and the CPU only ~100 GigaFLOPs. In actual practice, that ten-fold difference is much greater, because the much lower throughput quickly becomes the bottleneck for any game engine, as it would have to wait for all physics calculations to complete before rendering the frame. In your example video the results are obvious: not only is PhysX running at reduced precision, leading to the artifacting and collision detection problems, it STILL runs slower than GPU acceleration.
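
The rough peak-rate arithmetic behind that ten-fold figure, assuming a GTX 285 (GT200) and a 3.2GHz quad-core Core i7 with 4-wide SSE - illustrative peaks only:

$$\text{GT200: } 240 \times 1.476\,\text{GHz} \times 3\ \text{FLOPs/clock} \approx 1063\ \text{GFLOPS}$$
$$\text{Core i7: } 4 \times 3.2\,\text{GHz} \times 8\ \text{FLOPs/clock} \approx 102\ \text{GFLOPS}$$
$$1063 / 102 \approx 10\times$$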
Quote:
PhysX CUDA Libraries are coded to run like crap on CPUs and deliberately limited to show CPUs in a bad light (thus selling more nVIDIA cards). When you code a Physics engine properly (for a CPU as the Infernal Engine demonstrates) you can get damn close to GPU Physics without the hassle.
Once again, PhysX pre-dates GPU acceleration; if you look in your games folders, the PhysX libraries use the same compiled binaries for both the CPU and GPU. It's not coded "like crap" any more than any other software/CPU solution like Havok, Velocity or CryEngine; it's just limited by clearly inferior floating point throughput.
Quote:
Know your place :)
LMAO, the irony coming from someone who thinks that workaround video is somehow proof of anything other than the limitations of the CPU when it comes to physics acceleration. Maybe one day, when you can buy a CPU capable of 10x greater floating point throughput, you'll be able to play Batman with PhysX at full fidelity, but by that point the physics load will be such an insignificant portion of GPU load that no one will even notice.
chizow 6th October 2009, 17:32 Quote
Quote:
Originally Posted by Tim S

I don't disagree that PhysX is poorly written for CPUs, but what I do disagree with is the fact you're using theoretical double precision throughput as a measure for efficiency when it comes to PhysX calculations. Since PhysX is supported by all Nvidia GPUs since G80, it doesn't make use of double precision FP ops since G80 doesn't support double precision.
As noted above, PhysX pre-dates GPU acceleration by a long shot; it's not even close. It ONLY ran on CPUs for years, so if there are any inefficiencies, it'd be the GPU path that's poorly written. From a uarch standpoint, it should be obvious the CPU isn't as well equipped as a GPU to handle physics. Physics benefits from highly math-intensive parallel operations and in-order instructions, which the GPU excels at, whereas the CPU excels at OoO instructions and has difficulty extracting parallelism to even keep multiple cores occupied.
Quote:
RV770 is still a more efficient GPU when it comes to overall peak theoretical throughput in single precision ops, but the VLIW architecture makes things a little more interesting for the developer if they want to achieve maximum throughput.
Not sure how the Vec5 shaders can be seen as more efficient when this rarely bears out in practice. As you mentioned, the VLIW Vec5 arch relies heavily on optimisation from the developer and compiler; unfortunately that rarely occurs, so the solution for AMD has been a brute-force approach of just throwing more mostly brain-dead shaders at the problem.

With Cypress we're starting to see diminishing returns and, as a result, the 5870 scales much worse than expected. Nvidia on the other hand has made numerous changes to their core arch by doubling SFU and dispatch, decoupling those units from the SPs within an SM, and allowing concurrent kernels to be run across each SM in the GPU. And on top of all that, they doubled the SP count just to be safe. I guess we'll see which turns out to be a better design decision, but I'd say Nvidia's design has clearly been more efficient given it typically outperforms ATI in just about every GPGPU application to date.
thehippoz 6th October 2009, 17:48 Quote
well I remember when the first physx demo game came out- cellfactor.. me and my buddy were pretty excited about Ageia's physics card after watching some video on the cloth effects.. anyway that first demo game was a total flop, we modified the ini file to run the demo off the cpu instead of the ppu.. it ran fine rofl

I mean I remember looking at my buddy and going wth is this.. we were getting full frames too

when nvidia bought them, it was pretty common knowledge physx was a gimmick- was a great idea.. but I have to agree with elmo- if coded correctly physics works fine off the cpu.. heck look at crysis

if you look at how much cpu is used gaming on the core 2's and up- throwing physics at current cpu's is perfect.. I see it as nothing but marketing to tell you the truth.. seen nothing in any physx game that I said- wow they couldn't do that with Havok.. it just sells more nvidia cards
ZERO <ibis> 8th October 2009, 17:57 Quote
Maybe I should find a game with PhysX and do my own benchmarks to see the difference. If I ever do this test I will post here so that we can see the fps difference between using gpu vs cpu.
thehippoz 8th October 2009, 18:03 Quote
yeah I believe elmo is right zero.. that video he posted- batman is in a dream (drugged by scarecrow that part) and you can't walk faster than that.. he's getting good frames considering it's using physx on the cpu- we experienced the same thing on cellfactor
Elton 9th October 2009, 06:44 Quote
Thing about it is, you're going to have to re-write the ini a bit...

Still, PhysX was always a gimmick.
D3lta 11th October 2009, 21:01 Quote
Hi guys,

tutorial: how to activate AA on the demo release 1.0 and HD4000 w/o CCC

link <--Google translation, sorry......
gavomatic57 11th October 2009, 21:29 Quote
Quote:
Originally Posted by Elton
Thing about it is, you're going to have to re-write the ini a bit...

Still, PhysX was always a gimmick.

To an ATI user yes, to a Nvidia user it is added value/polish.
Elton 13th October 2009, 05:57 Quote
Quote:
Originally Posted by gavomatic57
To an ATI user yes, to a Nvidia user it is added value/polish.

I can't really agree, if only because PhysX hasn't changed the gameplay much, so far, it's pretty cloth movements and corpses. Perhaps a window here and there and a few bricks, but there's no real gameplay change with PhysX to warrant a purchase of a second card..
impar 13th October 2009, 12:18 Quote
Greetings!
Quote:
Originally Posted by gavomatic57
To an ATI user yes, to a Nvidia user it is added value/polish.
GTX260 user here, had to disable PhysX to play Mirror's Edge without choppiness.
Get the same 60FPS (vsync on) with it disabled or enabled, but the experience is much better with it disabled.
gavomatic57 14th October 2009, 14:07 Quote
Quote:
Originally Posted by impar
Greetings!

GTX260 user here, had to disable PhysX to play Mirror's Edge without choppiness.
Get the same 60FPS (vsync on) with it disabled or enabled, but the experience is much better with it disabled.

How odd, I had the 192-core 260 and it ran smoothly with PhysX on.
gavomatic57 14th October 2009, 14:10 Quote
Quote:
Originally Posted by Elton
I can't really agree, if only because PhysX hasn't changed the gameplay much, so far, it's pretty cloth movements and corpses. Perhaps a window here and there and a few bricks, but there's no real gameplay change with PhysX to warrant a purchase of a second card..

It isn't supposed to change gameplay - there are always going to be people who bought ATI cards who can't use it. AA and AF don't change gameplay either yet we still crank them up.