AMD says PhysX will die

Godfrey Cheng, Director of Technical Marketing in AMD's Graphics Products Group, has said that PhysX will die. We're back to the good old days - let's grab some popcorn as there's some life left in this little brawl.

Godfrey Cheng, Director of Technical Marketing in AMD's Graphics Products Group, has said that PhysX will die if it remains a closed and proprietary standard.

"There is no plan for closed and proprietary standards like PhysX," said Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."

This was part of AMD's response to our questions about EA's and 2K's decision to adopt PhysX across all of their worldwide studios earlier this week.

When asked what impact major publishers adopting PhysX across all of their studios would have on the PC gaming industry as a whole, Cheng responded by saying:

"We cannot provide comments on our competitor's business model except that it is awfully hard to sustain support by monetary incentives. The product itself must be competitive. We view Havok technologies and products to be the leaders in physics simulation and this is why we are working with them. Games developers share this view. We will also invest in technologies and partnerships beyond Havok that enhances gameplay."

It's interesting that Cheng says game developers share the view that Havok is the market leader in physics simulation - does that mean both Electronic Arts and 2K Games adopted PhysX against their development studios' wishes? Cheng pointed out that "People need to scrutinize various announcements on what is being "licensed". Is it to replace the whole physics simulation / tool stack within a game or within the whole studio? Is it for a specific physics simulation product or just a couple of titles? Remember PhysX also has game physics libraries in addition to its new GPU based products.

"An agreement to support PhysX may be for a limited portfolio of features. If you recall, Ageia had tremendous difficulty giving away its technologies and products for free whereas Havok could charge licensing fees. The quality of Havok's product and support hasn't changed nor has the preference by the developers for Havok. Games developers and studios are always interested in NRE and co-marketing deals which may be the catalyst for recent PhysX announcements."

So, what about the promise of GPU-accelerated Havok Physics on ATI Radeon graphics cards? It's still coming, apparently, and Cheng said that "[AMD] will provide more clarity to our work once more milestones have been achieved between AMD and Havok."

"Our guidance was end of this year or early next year but, first and foremost, it will be driven by the milestones that we hit. To put some context behind GPU based physics acceleration, it is really just at the beginning. Like 3D back in the early 1990s. Our competition has made some aggressive claims about support for GPU physics acceleration by the end of this year. I.e. Support in many titles....but we can count the titles on one hand or just one or two fingers," added Cheng.

"It should be noted that title support for GPU accelerated physics simulation is NOT the end game. The end game is having GPU physics as an integral part of game play and not just eye candy. If it is optional eye candy, GPU physics will not gain traction. The titles we have seen today with shattering glass and cloth waving in the wind is not integral to game play and the impact on the game's experience is minimal. We are looking for ways to integrate GPU physics better into game play. Or even things like AI instead of focusing on eye candy / effects physics."

Cheng's final words make a lot of sense and I find myself agreeing with him. We said something similar when Nvidia announced that the PC version of Mirror's Edge had been delayed because of the PhysX implementation which, following a brief hands-on preview last week, does nothing but add some additional eye candy. None of it influences the actual gameplay experience.

However, it still remains to be seen when we're actually going to see Havok Physics running on Radeon GPUs - we get the feeling that Intel is holding its aces back until Larrabee hits the scene. All we're hearing at the moment are buzzwords like OpenCL and DirectX 11 Compute - but neither is here today, and Radeon owners are expected to play a waiting game.

What do you think of AMD's response to Nvidia's recent success with PhysX? Share your thoughts in the forums.

pimlicosound 11th December 2008, 17:55 Quote
On the one hand, it's sort of his job not to congratulate nVidia for the work they've done with PhysX, so one could look at it cynically, but he does have a good point - the implementations of PhysX we've seen so far aren't game-changing. And I can't see the gaming world being revolutionised by such a closed, proprietary technology that only some developers / hardware manufacturers / gamers will have access to.
Mentai 11th December 2008, 18:20 Quote
Yes he has a point, and yes it is only eye candy at this stage, but when you are in the market of creating eye candy, surely it holds some importance. To me the cloth and glass effects looked really good in the new Mirror's Edge trailer, and I'm glad I'll be able to see them, whereas my friends with newer, more powerful rigs (ATI) aren't too happy that I can achieve higher graphical settings because of my hardware vendor.
I myself am pleased to see something being done until DX11 gets here. Physics like this has been a long time coming; the fact that the green team is getting a head start is just how business goes. They definitely made a very good move by getting publishers on their side, even if it may be a temporary investment.
Tim S 11th December 2008, 18:25 Quote
Mentai, I agree that the effects look cool in Mirror's Edge - there's no denying that - but there's also no denying that the effects don't actually add much to the experience. I still look back at Cell Factor (yes, rubbish game I know) as the holy grail for game physics - where the physics is actually integral to the gameplay and makes things look more realistic... not just prettier!

Nvidia is heading in the right direction with PhysX (studio adoption), but without widespread support across other vendors' GPUs/parallel processors - yes, it runs on the CPU, but very slowly - it's not going to take off in a big way.
sui_winbolo 11th December 2008, 18:56 Quote
I completely agree that it will die if it remains closed.

What game developer wants to make games that only work with Nvidia and not ATI?

I can pretty much guarantee that no company developing a game would do this for the PC platform. Even if you split the market, say, 50/50 between Nvidia and ATI owners, that's 50% of the PC population unable to play a game with awesome, amazing physics effects because they own ATI.

I know my statement doesn't really hold water because there are so many factors I'm not considering, but my point is that a developer is less likely to make a game that only works with PhysX because its target population would be rather small. Sure, their game might support it, but it won't be to the grand extent it could be. It'll be cool glass and wind effects, not a game built completely on it.

However, what if ATI and Nvidia both had PhysX? What if that became a standard for PC gaming? That would be pretty damn cool to see the games churned out for the PC.
Mankz 11th December 2008, 19:24 Quote
I thought that PhysX was dead before it even began?
tejas 11th December 2008, 19:51 Quote
AMD will die as a company and go under before PhysX ever dies... I give AMD two years before they are liquidated...

Jen Hsun Huang will get the last laugh... Sad but True ;)
teamtd11 11th December 2008, 19:52 Quote
I would have thought that as CPUs get more and more powerful, this will easily be done with spare CPU time.
n3mo 11th December 2008, 20:07 Quote
nVidia likes to think that everyone loves them and is willing to do as they wish (it's like a smaller version of Intel), but sadly it is not like that. I also think that if it remains a technology available only for nVidia it is doomed to fail.

Ideally we would see both Havok and PhysX available on nV and ATI GPUs; this would be the best option for everyone.
Quote:
Originally Posted by tejas
AMD will die as a company and go under before PhysX ever dies... I give AMD two years before they are liquidated...

Jen Hsun Huang will get the last laugh... Sad but True ;)

Wat? AMD won't die in your lifetime. Yeah, they might not be so popular nowadays after the not-so-good Phenoms, but Phenom II might change that... and remember that AMD is far more popular where high performance computing is concerned - seven out of ten top supercomputers run on AMD chips, and ATI gave them a huge financial boost. And anyway, pray that AMD does well, because the day after AMD dies you will pay 1000 pounds for a Celeron, and for an Intel quad-core you will have to sell your children.

Now, having a multicore GPU with PhysX running on one or two cores would be sweet.
bowman 11th December 2008, 20:21 Quote
Once OpenCL and/or DX11 compute shaders are ubiquitous, no one will care what API runs the physics, as long as it's there.
drakanious 11th December 2008, 20:37 Quote
Quote:
Originally Posted by bowman
Once OpenCL and/or DX11 compute shaders are ubiquitous, no one will care what API runs the physics, as long as it's there.

I agree with this, mostly, but there is no implementation of physics [or anything at the moment] on OpenCL or DX11 compute shaders. What I want to see is physics APIs [or even better, entire game engines] coded to take advantage of OpenCL and DX11, if present.
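Just to illustrate the "if present" part - a rough sketch of the kind of runtime probe I mean (my own made-up fallback logic, assuming the OpenCL headers and runtime are installed; nothing like this ships today):

Code:
#include <CL/cl.h>
#include <cstdio>

// Probe for an OpenCL runtime at startup; if none is found,
// an engine would fall back to its CPU physics path instead.
bool opencl_available() {
    cl_uint num_platforms = 0;
    // clGetPlatformIDs succeeds only if an OpenCL implementation
    // is installed and reports at least one platform.
    if (clGetPlatformIDs(0, nullptr, &num_platforms) != CL_SUCCESS)
        return false;
    return num_platforms > 0;
}

int main() {
    if (opencl_available())
        std::printf("OpenCL present - take the GPU physics path\n");
    else
        std::printf("No OpenCL - CPU physics fallback\n");
    return 0;
}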
UrbanMarine 11th December 2008, 20:37 Quote
Don't you have to buy a separate card to use PhysX anyway? I never really looked into it because, like Mankz thought, it died before it began.
B1GBUD 11th December 2008, 21:18 Quote
Quote:
Originally Posted by UrbanMarine
Don't you have to buy a separate card to use PhysX anyway? I never really looked into it because, like Mankz thought, it died before it began.

You can still run PhysX on an ATI-shod PC with an Ageia PPU PCI-e / PCI card; nVidia bought Ageia (or the license to develop it) and released GPU acceleration support in its drivers (180.xx I believe).

But I agree with most that it won't get many votes if it stays closed.
wuyanxu 11th December 2008, 21:41 Quote
Bring us working Havok physics on your ATI cards before predicting your competitor's doom.

This is just bad corporate practice.
ChrisRay 11th December 2008, 21:53 Quote
Hey Tim. This is Chris. ((We know each other from editors day and have shared a drink or two)). But anyway. I have to admit I am very dubious of this interview. A couple of things I'd like to point out.
Quote:
"There is no plan for closed and proprietary standards like PhysX," said Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."

If AMD was serious about supporting PhysX and had approached Nvidia about it, Nvidia, by all accounts I've heard, would have worked with them to get it supported. The exact same thing can be said about CUDA. It should be pointed out that OpenCL is basically based off CUDA, and DX11 is following a very similar path. AMD would like you to believe that Nvidia is holding them back from support for things like that. Nvidia is focused on the GPU, while AMD is still a CPU company - and is riding the tail of Intel, supporting their far more dangerous competitor.

When Larrabee actually ships and Intel is doing GPU computing and GPU physics too, we will see how things play out.

Quote:
"People need to scrutinize various announcements on what is being "licensed". Is it to replace the whole physics simulation / tool stack within a game or within the whole studio? Is it for a specific physics simulation product or just a couple of titles? Remember PhysX also has game physics libraries in addition to its new GPU based products.

This is just a sidestep. Of course PhysX won't be on every single title that comes from Electronic Arts, but now EA has the base tools to implement it as they see fit. As we are both aware, Nvidia's TWIMTBP campaign has been very successful at getting features and technology implemented. The fact that Nvidia is promoting this shouldn't be a surprise to anyone.
Quote:

[AMD] will provide more clarity to our work once more milestones have been achieved between AMD and Havok."

What does that even mean?
Quote:

"It should be noted that title support for GPU accelerated physics simulation is NOT the end game. The end game is having GPU physics as an integral part of game play and not just eye candy. If it is optional eye candy, GPU physics will not gain traction. The titles we have seen today with shattering glass and cloth waving in the wind is not integral to game play and the impact on the game's experience is minimal. We are looking for ways to integrate GPU physics better into game play. Or even things like AI instead of focusing on eye candy / effects physics."

What do you buy a GPU or hardware for if not for more eye candy and gameplay? We could be playing Crysis with the Quake 3 engine and get the exact same gameplay experience, couldn't we? Physics, as you know, is just a way to enhance the visual and gaming experience. Visuals alone don't make a game, but they certainly do help. Especially in this industry.


Tim, I know you didn't make these responses yourself, but I honestly think you should at least contact Nvidia and get their counter-marketing arguments to these claims, to keep the article balanced.

Cheers
Chris

SLIZONE Administrator
Nvidia User Group Member
HourBeforeDawn 11th December 2008, 22:10 Quote
I have to agree as well that PhysX, on the route it's taking, will die. Besides, it's not even that great of a platform to begin with compared to Havok and other built-in physics engines like those seen on Steam.
Tim S 11th December 2008, 22:45 Quote
Quote:
Originally Posted by ChrisRay
Hey Tim. This is Chris. ((We know each other from editors day and have shared a drink or two)). But anyway. I have to admit I am very dubious of this interview. A couple of things I'd like to point out.
Quote:
"There is no plan for closed and proprietary standards like PhysX," said Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."

If AMD was serious about supporting PhysX and had approached Nvidia about it, Nvidia, by all accounts I've heard, would have worked with them to get it supported. The exact same thing can be said about CUDA. It should be pointed out that OpenCL is basically based off CUDA, and DX11 is following a very similar path. AMD would like you to believe that Nvidia is holding them back from support for things like that. Nvidia is focused on the GPU, while AMD is still a CPU company - and is riding the tail of Intel, supporting their far more dangerous competitor.

When Larrabee actually ships and Intel is doing GPU computing and GPU physics too, we will see how things play out.
Hey Chris,

I am merely reporting AMD's response to the announcement, as I said I would during my initial piece on the PhysX news from EA and 2K. I asked Nvidia to comment, but that offer was declined.

I am as dubious as the next person about AMD's plans to support Havok on the GPU. It was promised for the end of this year but it hasn't happened - I believe Intel is holding it back for Larrabee, as stated in the article. Only then will we know if AMD was true to its word about supporting Havok on the GPU. Right now, there is nothing at all and we're just hearing buzzwords like OpenCL and DirectX 11 Compute as the saviours for AMD - that's exactly what I said at the end of this article.
Quote:
Quote:
"People need to scrutinize various announcements on what is being "licensed". Is it to replace the whole physics simulation / tool stack within a game or within the whole studio? Is it for a specific physics simulation product or just a couple of titles? Remember PhysX also has game physics libraries in addition to its new GPU based products.

This is just a sidestep. Of course PhysX won't be on every single title that comes from Electronic Arts, but now EA has the base tools to implement it as they see fit. As we are both aware, Nvidia's TWIMTBP campaign has been very successful at getting features and technology implemented. The fact that Nvidia is promoting this shouldn't be a surprise to anyone.

It is a sidestep - it was a question that popped up following Godfrey's decision to not-comment-but-comment on Nvidia's push with PhysX. But I am merely reporting what he said.
Quote:
Quote:

[AMD] will provide more clarity to our work once more milestones have been achieved between AMD and Havok."

What does that even mean?
IMO, it's clear that they're not ready to talk and they're waiting for Intel to release its milestone (i.e. Larrabee). Subtext: AMD is not sure how to react to this yet. I do know that there has been at least one face-to-face meeting between AMD and Electronic Arts since the announcement, but I do not know the outcome of said meeting.
Quote:
Quote:

"It should be noted that title support for GPU accelerated physics simulation is NOT the end game. The end game is having GPU physics as an integral part of game play and not just eye candy. If it is optional eye candy, GPU physics will not gain traction. The titles we have seen today with shattering glass and cloth waving in the wind is not integral to game play and the impact on the game's experience is minimal. We are looking for ways to integrate GPU physics better into game play. Or even things like AI instead of focusing on eye candy / effects physics."

What do you buy a GPU or hardware for if not for more eye candy and gameplay? We could be playing Crysis with the Quake 3 engine and get the exact same gameplay experience, couldn't we? Physics, as you know, is just a way to enhance the visual and gaming experience. Visuals alone don't make a game, but they certainly do help. Especially in this industry.

The holy grail of physics, and the way I want to see it implemented into games, is like we saw in Cell Factor... where the physics is actually part of the gameplay. However, out at Nvision, I sat down in several very interesting discussions with a number of top developers, physics middleware companies (many of whom are supported by TWIMTBP) and engineers (including several members of Nvidia's DevRel team) and they all agreed with me: until whatever physics engine is used is supported by every major hardware vendor, we will see nothing more than effects physics.

The reason? There has to be a lowest common denominator in every system the game is played on, and that, sadly, is not an Nvidia GPU - it is the CPU. The developer can't break gameplay on systems that don't have an Nvidia GPU; the game has to work when one is not present, falling back onto the CPU. Unfortunately for PhysX, while it runs on the CPU thanks to some great work by Nvidia's engineers who ported CUDA to the CPU, it is too slow to be usable in a game from the developer's perspective. Many of these developers are part of TWIMTBP, but they spoke freely and openly with me about the problems they face with GPU accelerated physics - if it is only available through CUDA, PhysX will not take off in the way it deserves to.
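To put that lowest common denominator point in concrete terms, here's a very rough sketch of the decision developers are stuck making - all hypothetical names, not any real middleware API:

Code:
#include <cstdio>
#include <memory>

// Hypothetical abstraction: the game codes against one interface
// and the runtime picks whichever backend the machine can support.
struct PhysicsBackend {
    virtual ~PhysicsBackend() = default;
    virtual void step(float dt) = 0;
};

struct GpuPhysics : PhysicsBackend {   // fast, but vendor-specific
    void step(float) override { std::puts("GPU physics step"); }
};

struct CpuPhysics : PhysicsBackend {   // slow, but present in every PC
    void step(float) override { std::puts("CPU physics step"); }
};

// Stand-in for a real driver/vendor capability query.
bool gpu_physics_supported() { return false; }

std::unique_ptr<PhysicsBackend> make_physics() {
    // Gameplay-critical simulation must still work on the CPU path,
    // so the slowest backend caps what the game design can rely on.
    if (gpu_physics_supported())
        return std::make_unique<GpuPhysics>();
    return std::make_unique<CpuPhysics>();
}

int main() {
    auto physics = make_physics();
    physics->step(1.0f / 60.0f);
    return 0;
}

The point being: whatever the GPU path can do, the design can only depend on what the CpuPhysics fallback can manage at playable speed.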

Please don't get me wrong, I am not slamming CUDA here because I think it has done great things for the GPU computing industry - without it, we wouldn't be on the verge of a revolution. I write about GPUs almost exclusively these days, so having more stuff to write about is fantastic - I have Nvidia to thank for that following the introduction of CUDA. Both OpenCL and DirectX 11 Compute are very similar to CUDA in many ways and there's good reason for that - it's because CUDA did the right thing. Its problem is not the technology itself; it's that the software industry relies on cross-platform compatibility.

Tim
ChrisRay 11th December 2008, 23:09 Quote
Tim, first - please don't feel I am selectively quoting, I'm just highlighting topics I find interesting.
Quote:
I asked Nvidia to comment, but that offer was declined.

Interesting. I'll ask them about it.

Quote:
I believe Intel is holding it back for Larrabee, as stated in the article.

I agree with this. The problem I see for AMD is that they typically play follow-the-leader in this industry, and they tend to make a lot of claims without actual substance. I'll be interested to see when Intel releases Larrabee - I do strongly believe their GPU physics will be around the corner - and if AMD isn't on board they're going to look foolish for comments like this.

Quote:

Please don't get me wrong, I am not slamming CUDA here because I think it has done great things for the GPU computing industry - without it, we wouldn't be on the verge of a revolution. I write about GPUs almost exclusively these days, so having more stuff to write about is fantastic - I have Nvidia to thank for that following the introduction of CUDA. Both OpenCL and DirectX 11 Compute are very similar to CUDA in many ways and there's good reason for that - it's because CUDA did the right thing. Its problem is not the technology itself; it's that the software industry relies on cross-platform compatibility.

In this particular respect, I do not believe it's Nvidia's fault. CUDA, for instance, is not "Nvidia only" so much as it is simply compatible with Nvidia's architecture. I think it's fairly clear that the current AMD architecture is not suited for CUDA, but that doesn't mean AMD couldn't make it work - or that Nvidia would go out of their way to stop them.

Quote:
Right now, there is nothing at all and we're just hearing buzzwords like OpenCL and DirectX 11 Compute as the saviours for AMD - that's exactly what I said at the end of this article.

I also look forward to DX11 and OpenCL, and I do think they are gonna be truly revealing about who has taken the "big steps" with GPU computing in the first place. But other than Havok, there's no real API out there for physics other than PhysX.
Quote:
IMO, it's clear that they're not ready to talk and they're waiting for Intel to release its milestone (i.e. Larrabee). Subtext: AMD is not sure how to react to this yet. I do know that there has been at least one face-to-face meeting between AMD and Electronic Arts since the announcement, but I do not know the outcome of said meeting.

I just have a hard time believing AMD was just sideswiped and surprised by this announcement. They have known Nvidia is supporting and promoting PhysX in a big way.

Quote:
The holy grail of physics, and the way I want to see it implemented into games, is like we saw in Cell Factor... where the physics is actually part of the gameplay. However, out at Nvision, I sat down in several very interesting discussions with a number of top developers, physics middleware companies (many of whom are supported by TWIMTBP) and engineers (including several members of Nvidia's DevRel team) and they all agreed with me: until whatever physics engine is used is supported by every major hardware vendor, we will see nothing more than effects physics.

I guess this is just a difference of opinion between us, and I don't know if this is the way Nvidia feels. But graphic effects, and the way they interact with the scene, are immersion to me. When the GTX 280 first released, I wasn't nearly as impressed with it as I was with PhysX. Seeing glass shattering and spacemen flying off the pattern was all "cool" to me, and much more interesting than the actual GT200 hardware was at the time. The one thing that really is important to me is that PhysX is basically free to any Nvidia user with DX10+ hardware. There's no detriment to Nvidia users in having support for it and I only see positives, even if it's just shiny little effects like those seen in Mirror's Edge. The other option is simply to disable them.

In the long run this is, to me, just the same as turning off high-quality shaders or other quality enhancements. The better the game looks and feels, the more fun it can be to play. And I do agree that Nvidia's CPU CUDA compiler is relatively weak at the moment; their GPU one is soaring, and in a year's time they have made some tremendous strides. Ten months after the purchase of Ageia, the progress made has been amazing in my eyes.

I somewhat understand what you are saying about physics and gameplay and how it actually "affects" gameplay, but I think we're a long way away from that currently. And Havok hasn't really been delivering it either.

Regards
Chris

SLIZONE Administrator
Nvidia User Group Member


*Edit* I made a change to the timing of when they bought Ageia. For some reason I thought it was back in September 2007, so it's even faster than I remembered.
B3CK 11th December 2008, 23:21 Quote
I never read up on DX (any version) very in depth. I was under the impression it was to help allow MS (Windows) to provide a foundation that all visual/audio hardware could use, so that the disparity between hardware vendors wouldn't cripple games because of one's installed hardware choices.
I think AMD/ATI are either trying to hold announcements of products until their version is at a more finished state with a launch date in sight - although this is quite questionable, as if they were sure it would be done in less than a year they wouldn't reply with a marketing jab, more of a counter "look what we got".
Or AMD/ATI is having milestone issues with getting vendors on board.

My opinion is that PhysX isn't dead, and that even realistic cloth or glass is just another feature that in four years will be a de facto standard in games - like how much lighting and shadows affect our experience in-game. Try playing a game that has great shadows or realistic lighting effects for an hour, then replay it without those effects on... it makes a huge difference in bringing people into the game.
Actual bullet travel in FPS games that can be predicted by watching grass/foliage direction is a great way of bringing this into game dynamics when long-range 'sniper' shots are being taken, as is bullet trajectory drop over distance.
Or take explosions: when things blow up in games, it seems to me that each object thrown or moved by the explosion is coded to that particular explosion, by distance and per object. With PhysX you should be able to apply size and weight to an object and let the physics take control of how far, and at what angle, those objects get pushed, instead of coding each object through tweaking and experimenting. That should allow the devs to set up many more objects in the same amount of time than testing and tweaking each object individually.
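Very roughly, something like this is what I have in mind - completely made-up names, not PhysX's actual API, just the idea that mass and distance drive the result instead of per-object tweaking:

Code:
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

struct Body {
    Vec3  pos;
    Vec3  vel;
    float mass;   // the "size and weight" an artist assigns once
};

// Push every body away from the blast: strength falls off with
// distance squared, and heavier objects move less (dv = impulse/mass).
void apply_explosion(Body* bodies, int count, Vec3 centre, float strength) {
    for (int i = 0; i < count; ++i) {
        Vec3 d = { bodies[i].pos.x - centre.x,
                   bodies[i].pos.y - centre.y,
                   bodies[i].pos.z - centre.z };
        float dist2 = d.x * d.x + d.y * d.y + d.z * d.z;
        float dist  = std::sqrt(dist2) + 1e-4f;      // avoid divide-by-zero
        float impulse = strength / (1.0f + dist2);   // distance falloff
        float dv = impulse / bodies[i].mass;         // heavier = slower
        bodies[i].vel.x += d.x / dist * dv;
        bodies[i].vel.y += d.y / dist * dv;
        bodies[i].vel.z += d.z / dist * dv;
    }
}

int main() {
    Body scene[] = { { { 2, 0, 0 }, { 0, 0, 0 }, 10.0f },    // heavy crate
                     { { 5, 0, 0 }, { 0, 0, 0 },  2.0f } };  // light barrel
    apply_explosion(scene, 2, { 0, 0, 0 }, 500.0f);
    std::printf("crate vx=%.2f, barrel vx=%.2f\n",
                scene[0].vel.x, scene[1].vel.x);
    return 0;
}

Add an object, give it a mass, and the explosion just works on it - no per-object coding.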

As I haven't had time to read up on all these technologies, my experience/knowledge of this subject may be bent by marketing, so go easy on me when reading this.
Tim S 11th December 2008, 23:40 Quote
Chris, don't worry about selective quotes - it's fine. :)

Nobody is delivering gameplay physics at the moment and they won't for a while. Part of that is Nvidia's fault for keeping PhysX exclusive to Nvidia hardware (I understand why it is kept exclusive in the short term, but not in the long term, and I doubt it will remain exclusive to CUDA in the long term) and some is AMD's for not adopting it. Stream is actually very similar to - but not the same as - CUDA on RV770 and it wouldn't take that much effort to get it running IMO, but it's a matter of principle. The problem for developers though is that Stream and CUDA aren't the same, so they find themselves having to write two pieces of code to solve one problem. We're back in the days of SM2.0 and SM3.0 all over again and I thought we'd just fixed all that malarkey with DirectX 10.

With all of that said, it doesn't mean gameplay physics shouldn't be a holy grail for the industry to aspire to, and it's something I want to see because I think, if done properly, it could introduce some new gameplay ideas to a slew of bland titles. At the same time though, it could become a gimmick - I think the same could happen to effects physics, because if every developer uses exactly the same effects in their games, the effects are no longer cool - they just become bland and uninspired in my opinion. Really speaking, AMD and Nvidia need to bang their heads together and think this through, because the list of top PC games this year isn't as long as it should be. I think that's related in part to the uncertainty surrounding things like this, and of course the uncertainty around piracy is another issue that needs to be resolved (I didn't want to bring it up in this thread, but it's a big factor in publishers' decision making).

Getting back on topic though, Cell Factor wasn't a good game but it was a good demo, because it showed what was possible with a relatively simple piece of silicon like the PhysX PPU - it should therefore be possible on the GPU if the industry can sort itself out and align behind one physics API. AI event branching is something else I really want to see in games, but that doesn't mean it's going to happen yet. Unlike physics though, it needs truckloads more computational power. The good thing is that the industry appears to be thinking about more than just pretty pixels right now. I think that's a good thing because finally we're going to go beyond photorealism and actually think about other parts of the environment, which will enable the industry to move towards those 'cinematic' gaming experiences it has been promising for a long time. :)
HourBeforeDawn 11th December 2008, 23:42 Quote
For me, afterthoughts that become pretty because of PhysX are pointless. I want something that involves interaction - like, say, you have to shoot a certain barrel made from a certain material to make it hit something else, with physics playing a role in the weight of the item and the condition of how it was shot and so forth. Making the game more interactive should be the focus, and that's why it needs to be open, so that it's not limited. Right now, like others have said, it's simply just eye candy and provides no real use or need, as the games already look pretty great even without the eye candy physics. So until it's cross-platform capable, physics won't be made an integral part of the story - just a turn on/off pretty feature. ~_~
ChrisRay 12th December 2008, 00:27 Quote
Tim, did you attend the round-table meetings in 2007 where they discussed the way they try to get developers to implement new features into games?

Be back later.

Chris
HourBeforeDawn 12th December 2008, 00:31 Quote
edit: comment retracted dont need to fuel any fires lol
chizow 12th December 2008, 01:08 Quote
Tim, have you read any previous writings on the subject of PhysX? Much of what you have written and what Godfrey Cheng has said is actually a step backward from previous articles. I think at this point it's very clear that AMD/ATI is willing to say and do anything to impede the adoption of PhysX as a standard. Why is obvious: because they have nothing in the way of accelerated PhysX and they probably don't want to directly support their rival's IP.
Quote:
Originally Posted by Tim S
Part of that is Nvidia's fault for keeping PhysX exclusive to Nvidia hardware (I understand why it is kept exclusive in the short term, but not in the long term, and I doubt it will remain exclusive to CUDA in the long term) and some is AMD's for not adopting it. Stream is actually very similar to - but not the same as - CUDA on RV770 and it wouldn't take that much effort to get it running IMO, but it's a matter of principle. The problem for developers though is that Stream and CUDA aren't the same, so they find themselves having to write two pieces of code to solve one problem.

This particular passage to me seems completely off. From all indications I've read, PhysX is not exclusive to Nvidia hardware; ATI/AMD has simply refused to get behind it. If ATI/AMD is refusing to get behind it "as a matter of principle" then the blame lies solely with them. Nvidia has a working PhysX API that developers are actually implementing and AMD/ATI has nothing. They have a PR blurb about a CPU-based physics engine and that's it.

http://www.tgdaily.com/content/view/38392/118/
http://www.extremetech.com/article2/0,2845,2324555,00.asp
http://www.custompc.co.uk/news/602205/nvidia-offers-physx-support-to-amd--ati.html

All sources seem to indicate Nvidia has extended the olive branch numerous times but AMD/ATI has either ignored all dialogue or actively blocked CUDA/PhysX development on their parts.

Also, I completely disagree with your assessment that Nvidia will struggle with PhysX implementation if they go it alone. NV owns some 60-65% of the discrete GPU market, and at PhysX's debut they estimated a CUDA-capable GPU install base of some 70 million. That's more than all current-gen consoles combined. Another proprietary API that has done pretty well over the years is EAX on Creative's SoundBlaster cards, despite a much smaller market share that has only diminished over the years.

Lastly, PhysX and CUDA are open standards and will completely support both OpenCL and DX11. This is where I actually agree with you. I know for sure AMD/ATI is just blowing smoke, buying time until DX11 when compute shaders become standard and they'll get access to GPU-accelerated physics for free.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=10870&Itemid=1
http://en.expreview.com/2008/12/11/roadmap-indicates-cuda-30-in-q4-2009.html
"CUDA has integrated OpenCL 1.0 standard support, and is about to add DirectX 11 Compute Shader support. As an integration of OpenCL and DX11 computing, CUDA offers the best GPU computing choice for developers"

I look forward to a follow-up if you manage to get a response from Nvidia! Cheers!
frontline 12th December 2008, 01:12 Quote
OMG, bit-tech daring to report the questioning of the need for physics (or is that PhysX) over gameplay! Hang them now!

FFS, the one-trick pony that is PhysX NEEDS to be questioned and queried, as it currently doesn't offer any solution to around 30-40% of gamers, who use ATI cards (or whatever the current figure is). Physics effects should be unobtrusive in a game, but add to the experience, and they should be available to ALL players. Until there is a solution that is universal and a matter of ticking a box, what incentive is there for developers to concentrate on gameplay?

"I wasnt nearly as impressed with it as I was PhysX. Seeing Glass shattering." ooo, shiny
Tim S 12th December 2008, 01:47 Quote
Quote:
Originally Posted by chizow
Tim, have you read any previous writings on the subject of PhysX? Much of what you have written and what Godfrey Cheng has said is actually a step backward from previous articles. I think at this point it's very clear that AMD/ATI is willing to say and do anything to impede the adoption of PhysX as a standard. Why is obvious: because they have nothing in the way of accelerated PhysX and they probably don't want to directly support their rival's IP.
I've listened to plenty of industry insiders talk on the subject of PhysX - we've also written plenty about the subject as well. It is open, but not open and Havok isn't open either.

AMD doesn't want to adopt it on principle, not because it is bad technology. Just imagine an ATI graphics card box with an Nvidia CUDA/PhysX logo on it. Yeah, it's not going to happen and, like you say, it would put the game right into Nvidia's hands.
Quote:
Quote:
Originally Posted by Tim S
Part of that is Nvidia's fault for keeping PhysX exclusive to Nvidia hardware (I understand why it is kept exclusive in the short term, but not in the long term, and I doubt it will remain exclusive to CUDA in the long term) and some is AMD's for not adopting it. Stream is actually very similar to - but not the same as - CUDA on RV770 and it wouldn't take that much effort to get it running IMO, but it's a matter of principle. The problem for developers though is that Stream and CUDA aren't the same, so they find themselves having to write two pieces of code to solve one problem.

This particular passage to me seems completely off. From all indications I've read, PhysX is not exclusive to Nvidia hardware; ATI/AMD has simply refused to get behind it. If ATI/AMD is refusing to get behind it "as a matter of principle" then the blame lies solely with them. Nvidia has a working PhysX API that developers are actually implementing and AMD/ATI has nothing. They have a PR blurb about a CPU-based physics engine and that's it.

http://www.tgdaily.com/content/view/38392/118/
http://www.extremetech.com/article2/0,2845,2324555,00.asp
http://www.custompc.co.uk/news/602205/nvidia-offers-physx-support-to-amd--ati.html

All sources seem to indicate Nvidia has extended the olive branch numerous times but AMD/ATI has either ignored all dialogue or actively blocked CUDA/PhysX development on their parts.
I've heard conflicting stories from both sides. Nvidia says yes we have offered it but AMD doesn't want it, while AMD says yes and talks progressed far enough for us to get our engineers involved before Nvidia then pulled back. The truth will be somewhere in the middle I imagine.

You could argue that neither is to blame in some respects because, while AMD Stream is similar to CUDA, it's not the same, which makes it hard for AMD to do the port (I'm hypothesizing here) because it doesn't have intricate knowledge of the workings of CUDA (and the Nvidia driver). On the other side, there wasn't an open standard compute language until just the other day, so Nvidia couldn't open it up. However, what concerns me is the commentary from Nvidia when I ask whether there are plans to port PhysX to OpenCL. Not for AMD really, more for the developers. You know, the guys that keep this industry ticking over with content.

Please don't misunderstand me - Nvidia has a PhysX API that developers are adopting and that is A Good Thing. All that I have asked for is it to be 'opened up' so that we can actually start to do more than just pretty effects that have no real impact on gameplay. Roy Taylor said in an interview with me that "Physics is gameplay" - everything I am seeing from the PhysX implementations is anything but gameplay (see here: http://www.bit-tech.net/bits/2008/09/25/roy-taylor-on-physics-ai-making-games-fun/1). It's just fancy effects that have no impact on the way you play the game. Call me a sceptic, but when I'm promised something, that's what I expect - I'm just doing my job.
Quote:
Also, I completely disagree with your assessment that Nvidia will struggle with PhysX implementation if they go it alone. NV owns some 60-65% of the discrete GPU market, and at PhysX's debut they estimated a CUDA-capable GPU install base of some 70 million. That's more than all current-gen consoles combined. Another proprietary API that has done pretty well over the years is EAX on Creative's SoundBlaster cards, despite a much smaller market share that has only diminished over the years.

If Nvidia goes alone, PhysX will never be more than fancy graphical effects that don't actually make the game any better. They make it prettier. I will be playing Mirror's Edge on an Nvidia card because it looks prettier, not because it makes the game any more compelling. There are more than 100 million CUDA enabled GPUs. Of those, I estimate no more than 40 million are able to accelerate PhysX to an acceptable level. Nvidia sold between 20-30 million 8800 GTs - the rest of that 40 million is made up of anything faster than an 8800 GTS.

PhysX is very easily portable to OpenCL because OpenCL actually isn't too far from CUDA. CUDA is open in the sense that you can download the specification, but it is not open in the sense that no other hardware vendor supports it or contributed to its development. The question is whether Nvidia wants PhysX to be a checkbox feature on its graphics cards that may disappear, or whether Nvidia wants PhysX to be the industry's leading physics API in five or more years. PhysX has nothing to do with the hardware, or with CUDA in fact - you've already seen Nvidia port it from the PPU to CUDA, and there's nothing stopping Nvidia porting it to OpenCL or DirectX 11 Compute.
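To show just how close the two are, here's the same trivial kernel (a made-up position-integration step, not anything from PhysX) written for both APIs - nearly a line-for-line translation:

Code:
// CUDA version of a toy 'integrate positions' kernel.
__global__ void integrate(float* pos, const float* vel, float dt, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) pos[i] += vel[i] * dt;
}

// The same kernel in OpenCL C.
__kernel void integrate(__global float* pos, __global const float* vel,
                        float dt, int n) {
    int i = get_global_id(0);
    if (i < n) pos[i] += vel[i] * dt;
}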
Quote:
Lastly, PhysX and CUDA are open standards and will completely support both OpenCL and DX11. This is where I actually agree with you. I know for sure AMD/ATI is just blowing smoke, buying time until DX11 when compute shaders become standard and they'll get access to GPU-accelerated physics for free.

http://www.fudzilla.com/index.php?option=com_content&task=view&id=10870&Itemid=1
http://en.expreview.com/2008/12/11/roadmap-indicates-cuda-30-in-q4-2009.html
"CUDA has integrated OpenCL 1.0 standard support, and is about to add DirectX 11 Compute Shader support. As an integration of OpenCL and DX11 computing, CUDA offers the best GPU computing choice for developers"

I look forward to a follow-up if you manage to get a response from Nvidia! Cheers!
Based on a recent email thread I've had with Microsoft's David Blythe (one of the chief architects on DirectX), DX11 Compute support isn't fully enabled on any current SM4.x GPU (regardless of whether it's CUDA or Stream compatible) - a subset of DX11 Compute will be exposed to DX10 GPUs accessing the DX11 Compute Shader. I'll have some more details on this tomorrow, all being well, when I get around to collating the information I've obtained from various sources.
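For the curious, here's a hypothetical sketch of what such a downlevel capability check could look like against a D3D11-style device - the feature names here are my own guess at how it may be exposed and could easily change before the API ships:

Code:
#include <d3d11.h>
#include <cstdio>

// Ask a D3D11 device created on DX10-class (SM4.x) hardware whether
// the optional downlevel Compute Shader subset is exposed.
bool downlevel_compute_available(ID3D11Device* device) {
    D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(
            D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
            &opts, sizeof(opts))))
        return false;
    // Non-zero only when the CS 4.x subset is there; full DX11
    // hardware exposes the complete compute shader model instead.
    return opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x != 0;
}

int main() {
    ID3D11Device* device = nullptr;
    if (SUCCEEDED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE,
                                    nullptr, 0, nullptr, 0,
                                    D3D11_SDK_VERSION, &device,
                                    nullptr, nullptr))) {
        std::printf("CS on 4.x hardware: %s\n",
                    downlevel_compute_available(device) ? "yes" : "no");
        device->Release();
    }
    return 0;
}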

Anyway, it's almost 2am here and I should have been in bed hours ago.

Tim
ChrisRay 12th December 2008, 01:55 Quote
Quote:
AMD doesn't want to adopt it on principle, not because it is bad technology. Just imagine an ATI graphics card box with an Nvidia CUDA/PhysX logo on it. Yeah, it's not going to happen and like you say it would put the game right into Nvidia's hand.

Tim, this is what bothers me the most. They don't want to adopt it on "principle"? Well, either way that's a loss to AMD users. It's also completely the opposite of what they are saying: "It's Nvidia only". The most glaring point to me is that AMD isn't being up front or honest about this, and is making claims ((and has been for a while)) that Nvidia is blocking them off.

Honestly, I don't see how they can support their biggest competitor (aka Havok/Intel) and then not support PhysX based on principle. I just don't buy it.

Regards
Chris

SLIZONE Administrator
Nvidia User Group Member.
HourBeforeDawn 12th December 2008, 01:59 Quote
Quote:
Originally Posted by ChrisRay
Tim, this is what bothers me the most. They don't want to adopt it on "principle"? Well, either way that's a loss to AMD users. It's also completely the opposite of what they are saying: "It's Nvidia only". The most glaring point to me is that AMD isn't being up front or honest about this, and is making claims ((and has been for a while)) that Nvidia is blocking them off.

Honestly, I don't see how they can support their biggest competitor (aka Havok/Intel) and then not support PhysX based on principle. I just don't buy it.

Regards
Chris

SLIZONE Administrator
Nvidia User Group Member.

Probably because, from the beginning when Intel bought Havok, they stated that it was going to remain free and open to all platforms, whereas when nVidia got hold of PhysX they only made claims of support for their own cards, and only later (probably for PR reasons) changed their tune - at the start it was all about Nvidia support and no mention of being open.
Tim S 12th December 2008, 02:01 Quote
Quote:
Originally Posted by ChrisRay
Tim, this is what bothers me the most. They don't want to adopt it on "principle"? Well, either way that's a loss to AMD users. It's also completely the opposite of what they are saying: "It's Nvidia only". The most glaring point to me is that AMD isn't being up front or honest about this, and is making claims ((and has been for a while)) that Nvidia is blocking them off.

Honestly, I don't see how they can support their biggest competitor (aka Havok/Intel) and then not support PhysX based on principle. I just don't buy it.

Regards
Chris

SLIZONE Administrator
Nvidia User Group Member.

I honestly have no idea why, Chris - I often get the "Nvidia is buying out development studios" argument when I talk to AMD. Nvidia does this, Nvidia does that, Nvidia is evil, etc. It's frustrating... I don't like politics... I prefer to focus on the technology myself because that's what makes the hairs on the back of my neck stand up. If there was one thing I wanted to see happen in the graphics industry over the next 6 to 12 months, it would be for the industry to adopt a ubiquitous physics API and that's all that gamers should want too. That way, developers can actually start to make some progression on the things that we should all really be caring about - the games.

It took the graphics industry six years to get behind a graphics API properly (DX7 was around the time when those who didn't adopt started to fall away IIRC) and I seriously hope it doesn't take six years for the industry to decide which physics API is the winning horse. Sigh.
ChrisRay 12th December 2008, 02:01 Quote
Havok is not an open standard. To say Havok is open and PhysX isn't is just a twist of the facts. Intel owns Havok and will license it; AMD licensed Havok.
Quote:

It took the graphics industry six years to get behind a graphics API properly (DX7 was around the time when those who didn't adopt started to fall away IIRC) and I seriously hope it doesn't take six years for the industry to decide which physics API is the winning horse. Sigh.

Agree with you completely here.
Tim S 12th December 2008, 02:10 Quote
Quote:
Originally Posted by HourBeforeDawn
Quote:
Originally Posted by ChrisRay
Tim, this is what bothers me the most. They don't want to adopt it on "principle"? Well, either way that's a loss to AMD users. It's also completely the opposite of what they are saying: "It's Nvidia only". The most glaring point to me is that AMD isn't being up front or honest about this, and is making claims ((and has been for a while)) that Nvidia is blocking them off.

Honestly, I don't see how they can support their biggest competitor (aka Havok/Intel) and then not support PhysX based on principle. I just don't buy it.

Regards
Chris

SLIZONE Administrator
Nvidia User Group Member.

Probably because, from the beginning when Intel bought Havok, they stated that it was going to remain free and open to all platforms, whereas when nVidia got hold of PhysX they only made claims of support for their own cards, and only later (probably for PR reasons) changed their tune - at the start it was all about Nvidia support and no mention of being open.

Quote:
Originally Posted by ChrisRay
Havok is not an open standard. To say Havok is open and PhysX isn't is just a twist of the facts. Intel owns Havok and will license it; AMD licensed Havok.

Currently, neither PhysX nor Havok are open standards - they both have to be licensed (that's what EA/2K have just done according to Nvidia's own PRs, right?). Hopefully, the industry will get past that at some point though and realise that openness prevails in the long run. :)
ChrisRay 12th December 2008, 02:44 Quote
Quote:
Originally Posted by Tim S
Currently, neither PhysX nor Havok are open standards - they both have to be licensed (that's what EA/2K have just done according to Nvidia's own PRs, right?). Hopefully, the industry will get past that at some point though and realise that openness prevails in the long run. :)


Yes PhysX is licensed to EA/2K. :) I was just disputing that Havok was more "open" than PhysX. Since neither are "open" standards. But are able to be licensed by Nvidia/Intel.

Regards
Chris

SLIZONE Administrator
Nvidia User Group.
B3CK 12th December 2008, 08:15 Quote
Quote:
Originally Posted by Tim S
I don't like politics... I prefer to focus on the technology myself because that's what makes the hairs on the back of my neck stand up. If there was one thing I wanted to see happen in the graphics industry over the next 6 to 12 months, it would be for the industry to adopt a ubiquitous physics API and that's all that gamers should want too. That way, developers can actually start to make some progression on the things that we should all really be caring about - the games.

Nail on the head. Even with the not-so-good economy here in the U.S., in my little PC repair shop we are starting to see more and more customers purchasing retail video cards for us to install in their PCs. Even a couple of small business clients are upgrading their machines to run 3D architectural software (AutoCAD or some such). Whereas most of my advice is based on what real functions they use the PCs for, and the system requirements of the programs they want, prices on video cards are at a great place right now and I can recommend with confidence.

For the gamers that come in, and their parents, it is much harder to offer advice on the top end of the cards. The biggest reason is that they are willing to spend top dollar on the big performance cards, but they don't want to be left out of the next big tech that drops in three months according to the rumours out on the net. Core clocks or memory frequencies so close to each other that they wouldn't be able to tell the difference if I installed both and let them play are not the concern. PhysX or CUDA support is a bigger change than adding 15MHz to the clock, in my opinion.
[USRF]Obiwan 12th December 2008, 11:02 Quote
The same can be said about DX10, available since 2007. How many games are now using the FULL potential of DX10? Two or three? Both AMD and Nvidia support it, yet its full use can be counted on one hand.

So how can AMD say Nvidia's physics, just a couple of months old, is not used in games, and claim DX11 and OpenCL are the answer? DX11 and OpenCL are not even used now and won't be until maybe 2012. And we all know what happens in 2012, so it's too late anyway...
Xkaliber 12th December 2008, 14:15 Quote
A quick point about why PhysX won't die: it's built into the Wii, PS3 and Xbox 360.

Just look at Gears of War 2, for example. That's a genuine example of where PhysX is going.

Also some other fairly major titles:
City of Villains
Mass Effect
Gothic 3
Bladestorm: The Hundred Years' War
Heavy Rain
Unreal Tournament 3

And however many other Unreal Engine 3 games there are to come.

Suggesting that PhysX is just going to die is a joke in itself. DX10 has barely been taken on board by most people, due to generalised shock and horror over Vista, so who even cares about DX11 yet? Most people just want to keep on using DX9 and Windows XP for stability and simplicity's sake. If anything, PhysX has made the right move by being accessible on the consoles, the place where games can drive its usage forward faster than PCs.

At the moment, as previously noted, in-game physics are more just a novelty than anything else; however, PhysX is building a user base worthy of note, which will draw more attention. This is not to say that Havok will die either, or any such thing. Havok has double the number of games using it as PhysX, but has been around for donkey's years.
Hamish 12th December 2008, 16:08 Quote
Don't both Nvidia and ATI cards support OpenCL now?
Surely the obvious answer is to do PhysX, Havok and whatever other physics engine a developer might want to use in OpenCL and be done with it?
Virtuman 12th December 2008, 16:30 Quote
I can only hope that it's true about AMD not supporting PhysX on principle. It would truly be refreshing for a company to have some principles. In fact, this would be in keeping with the way AMD and ATI do/have been doing business for some time.

PhysX will definitely succeed in the short term but I'm more dubious about the long term. It is very expensive to pay development houses to use your software. TWIMTBP has been buying titles for a long time and I get the distinct impression the up-to-now almost bottomless pit of money they've been drawing on to press AMD out of the picture is running low.

Here's another way to look at it. NV is providing support for PhysX to developers on TWIMTBP titles. Which titles that use PhysX are NOT part of TWIMTBP? Even if AMD was to support PhysX, you wouldn't see it marketed on those titles because NV doesn't allow it. That's right: titles that are part of TWIMTBP are not allowed to do any marketing with AMD.

NV doesn't want AMD supporting PhysX. Especially now that AMD is crushing NV in the price to performance game. Ok, so NV has released a new driver that brings them back into the game. It also adds a bunch of problems that didn't previously exist. Also, if just changing the driver can add (however dubiously) that much performance, what is to prevent AMD from tweaking their drivers as well and restoring themselves to the top seat?

There have been arguments all over the internet about console vs. PC for gaming. One thing is for sure, we are seeing more and more titles that go both ways. I don't see any consoles that use an NV-based CPU. Any Physics solution that can dynamically take advantage of BOTH the CPU and the GPU is going to be superior to one that doesn't. Would anyone care to describe how PhysX runs on both an AMD and Intel CPU?

I have no doubt that NV MUST continue to buy developers just to keep Havok out as much as they can. PhysX just isn't that compelling from a technology standpoint when compared to Havok.

The bottom line is that until there is something both AMD and NV can use, no games will run physics as more than eye candy in any important title. As was said earlier here, physics can't be an integral part of the game if every player can't do it - and in practice that means every player in a multiplayer game. Sure, single-player games like Oblivion might be able to survive, but why would they want to eliminate almost half of the people that could potentially play their game by requiring PhysX?

And when is NV going to try to support Havok? Why do people expect AMD to support NV software but don’t expect NV to support what AMD is using? Asking developers to code two ways isn’t really the best idea but it’s better than making them choose between two. It’s not personal, it’s just business.

I’d also like to see a review of the two technologies- Havok and PhysX. Each one is to some degree being touted as the best one. But which one is actually best? What if there isn’t a best one? What if one is better at some things than the other? Why couldn’t developers use BOTH and be able to write their code in the way that works best for them? By them I mean us, the gamers.
frontline 12th December 2008, 17:16 Quote
It makes clear sense for AMD to work with Intel on developing CPU-based physics in the short term, as they are both trying to sell the benefits of multi-core CPUs to the masses. Getting CrossFire support included as standard on Intel chipset motherboards is also a bonus for AMD/ATI (at least until Larrabee makes an appearance).
Tim S 12th December 2008, 17:43 Quote
Quote:
Originally Posted by Hamish
Don't both Nvidia and ATI cards support OpenCL now?
Surely the obvious answer is to do PhysX, Havok and whatever other physics engine a developer might want to use in OpenCL and be done with it?

That's what I'm pushing for... and I still haven't had the right answer from either Red or Green. :)
HourBeforeDawn 12th December 2008, 18:39 Quote
Quote:
Originally Posted by Tim S
That's what I'm pushing for... and I still haven't had the right answer from either Red or Green. :)

Agreed, I am hoping for this as well.
chizow 12th December 2008, 23:26 Quote
Quote:
Originally Posted by Tim S
I've listened to plenty of industry insiders talk on the subject of PhysX - we've also written plenty about the subject as well. It is open, but not open and Havok isn't open either.
Thanks for the reply Tim. As you and Chris covered, PhysX and Havok are open but not open; the main distinction is that they are not free and need to be licensed. DX11 and even OpenCL aren't technically open either, as they have Microsoft/Khronos overseeing their development, but they are free. If AMD means "free" when they say "open", then their apprehension about adopting PhysX becomes more transparent.
Quote:
AMD doesn't want to adopt it on principle, not because it is bad technology. Just imagine an ATI graphics card box with an Nvidia CUDA/PhysX logo on it. Yeah, it's not going to happen and, like you say, it would put the game right into Nvidia's hands.
See, this is my main issue with the tone of your interview. You seem to acknowledge many of the business decisions behind AMD's refusal to back PhysX, yet you fail to condemn them. Let's call it what it is: by hiding behind this Havok smokescreen, AMD is punishing the supporters of their products and the PC gaming industry in general, because they lacked the foresight and cash to pull the trigger on Ageia years ago. It might be a matter of principle, but it sure isn't a noble or altruistic one.
Quote:
I've heard conflicting stories from both sides. Nvidia says yes we have offered it but AMD doesn't want it, while AMD says yes and talks progressed far enough for us to get our engineers involved before Nvidia then pulled back. The truth will be somewhere in the middle I imagine.
To me, asking Godfrey (and the appropriate Nvidia counterpart) these questions point-blank would've made for a more interesting article. It's quite obvious there is no technical, software or proprietary obstacle blocking AMD hardware from supporting CUDA and PhysX, so it comes down to politics, as you've said.
Quote:
You could argue that neither is to blame in some respects because, while AMD Stream is similar to CUDA, it's not the same, which makes it hard for AMD to do the port (I'm hypothesizing here) because it doesn't have intricate knowledge of the workings of CUDA (and the Nvidia driver). On the other side, there wasn't an open standard compute language until just the other day, so Nvidia couldn't open it up. However, what concerns me is the commentary from Nvidia when I ask whether there are plans to port PhysX to OpenCL. Not for AMD really - more for the developers. You know, the guys that keep this industry ticking over with content.
From what I've read, all the competing standards are C-based and compatible with a bit of work. I've already linked to articles showing Nvidia plans for CUDA to fully support both OpenCL and DirectX 11. Given the past history of proprietary APIs and DirectX (again, EAX and even the various software physics implementations), I'm pretty sure all the necessary wrappers and extensions needed to make PhysX compatible with DX11 will be in place by the time it's launched. At which point I'm sure AMD will suddenly do an about-face and proclaim DX11 Compute physics the greatest thing since sliced bread. Except another 8-9 months and another development cycle will have passed, delaying universal physics implementation in games.
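To illustrate how mechanical the kernel-level translation is - a hypothetical snippet, not anything from the actual PhysX source:

    /* CUDA version of a trivial kernel:
         __global__ void scale(float *v, float k, int n) {
             int i = blockIdx.x * blockDim.x + threadIdx.x;
             if (i < n) v[i] *= k;
         }
       The same kernel in OpenCL C: */
    __kernel void scale(__global float *v, const float k, const int n)
    {
        int i = get_global_id(0);  /* replaces the blockIdx/threadIdx math */
        if (i < n) v[i] *= k;      /* body is identical */
    }

The real porting work lives in the host-side plumbing and tooling, not in kernels like these.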
Quote:
Please don't misunderstand me - Nvidia has a PhysX API that developers are adopting and that is A Good Thing. All that I have asked for is for it to be 'opened up' so that we can actually start to do more than just pretty effects that have no real impact on gameplay. Roy Taylor said in an interview with me that "Physics is gameplay" - everything I am seeing from the PhysX implementations is anything but gameplay (see here: http://www.bit-tech.net/bits/2008/09/25/roy-taylor-on-physics-ai-making-games-fun/1). It's just fancy effects that have no impact on the way you play the game. Call me a sceptic, but when I'm promised something, that's what I expect - I'm just doing my job.

If Nvidia goes it alone, PhysX will never be more than fancy graphical effects that don't actually make the game any better - they make it prettier. I will be playing Mirror's Edge on an Nvidia card because it looks prettier, not because it makes the game any more compelling. There are more than 100 million CUDA-enabled GPUs. Of those, I estimate no more than 40 million are able to accelerate PhysX to an acceptable level. Nvidia sold between 20 and 30 million 8800 GTs - the rest of that 40 million is made up of anything faster than an 8800 GTS.
I agree with you that gameplay with PhysX may not be interactive and dynamic if Nvidia is the only one supporting it, but that doesn't mean games won't be more compelling. I guess it's more semantics than anything else, but better visuals and realistic interactions are always going to make games more immersive and compelling than without.

One example that I'm sure not all readers will remember is the original Wing Commander with himem.sys and more than 640KB of RAM. Sure, that extra memory didn't do anything in terms of changing gameplay, but the interactive hand/joystick and the HUD portraits certainly made the game more compelling. Similarly with the Speech Packs and a Sound Blaster... I didn't get any additional dialogue, but actually hearing the voices definitely made the experience more enjoyable. And those upgrades cost around $500! PhysX is free if you have a CUDA-capable GPU!

So yes, it will take some time before PhysX noticeably impacts gameplay if the installed user base doesn't have capable hardware, but you've acknowledged Nvidia already has an impressive base of nearly 100m parts and ~60% of discrete GPUs. Even if AMD signed on tomorrow, your lowest common denominator would be pre-GF8 and pre-R600 cards that wouldn't be able to accelerate physics.

In the meantime we can only wonder how much better games could be if dynamic and interactive physics were enabled. For example, some of those Mirror's Edge demos with flags/tarps flying from buildings and ledges would make for a perfect opportunity to jump off the side, grab hold, shoot the flag and let gravity tear it, dropping you to the next platform below. Or perhaps rappel through the glass into the office below like they do in the movies, a la The Matrix. Obviously this can't be integrated into gameplay if ATI owners don't see a flag, jump to grab onto nothing and plummet horribly to their death. I could see a compromise, though, where ATI owners run up to the edge and press some generic interact button like "F", which triggers a pre-rendered cutscene showing them navigating the edge. In the meantime I suppose we'll have to settle for interactive eye candy, which is still certainly better than the software physics we see now.
Quote:
Based on a recent email thread I've had with Microsoft's David Blythe (one of the chief architects on DirectX), DX11 Compute support isn't fully enabled on any current SM4.x GPU (regardless of whether it's CUDA or Stream compatible) - a subset of DX11 Compute will be exposed to DX10 GPUs accessing the DX11 Compute Shader. I'll have some more details on this tomorrow, all being well, when I get around to collating the information I've obtained from various sources.

Anyway, it's almost 2am here and I should have been in bed hours ago.

Tim
I'll certainly look forward to any updates and thanks for reading if you made it this far! :)
Virtuman 13th December 2008, 00:21 Quote
Quote:
Let's call it what it is: by hiding behind this Havok smokescreen, AMD is punishing the supporters of its products, and the PC gaming industry in general, because it lacked the foresight and cash to pull the trigger on Ageia years ago. It might be a matter of principle, but it sure isn't a noble or altruistic one.

PhysX is at best the number-two player in this field. Recognizing Havok as the industry leader and going that direction, INSTEAD of taking the NV way out and simply buying whatever was available so they could keep all the branding and marketing rights for themselves, is pretty compelling evidence that there is at least a little altruism going on here.

Add to that the fact that ATI was working with Havok before AMD bought them, and that AMD continues the Havok relationship in the face of the most well-known and longest-lasting hardware rivalry in the PC industry, and I think it demonstrates they are going with what they really believe is the best solution.

Compare that attitude with NV's historical approach to this sort of thing. They tried to hold Intel hostage over SLI chipsets and got their proverbial asses handed to them. It wasn't until they saw so much of their market share slip away on the chipset side that they finally wised up and sold their chipsets/tech for a more reasonable price. Just another case of NV starting to care about the consumer only when its wallet gets hurt. Definitely no altruism there.
Quote:
At which point I'm sure AMD will suddenly do an about-face and proclaim DX11 Compute Physics as the greatest thing since sliced bread. Except another 8-9 months and another development cycle will have passed and retarded universal physics implementation in games.

The very first Godfrey quote from this article seems to show you didn't read the article. He clearly states AMD will support OpenCL and DX11.
Quote:
"There is no plan for closed and proprietary standards like PhysX," said Cheng. "As we have emphasised with our support for OpenCL and DX11, closed and proprietary standards will die."
Quote:
For example, some of those Mirror's Edge demos with flags/tarps flying from buildings and ledges would make for a perfect opportunity to jump off the side, grab hold, shoot the flag and let gravity tear it, dropping you to the next platform below. Or perhaps rappel through the glass into the office below like they do in the movies, a la The Matrix. Obviously this can't be integrated into gameplay if ATI owners don't see a flag, jump to grab onto nothing and plummet horribly to their death.

Ugh. If I hear any more buzz about this stupid game I'm just going to throw up. Yes, it's a cute idea. Yes, it will be fun to play for a while. Yes, there will be more copies included in bundles than will EVER sell at retail (at least for the PC). It's really nothing more than a twitch-finger Prince of Persia knock-off with better graphics.

Take CS for example. It looks like crap and doesn't take advantage of any fancy hardware abilities with cool sounding names but there are still ridiculous numbers of people playing it on machines that can actually run Far Cry 2 maxed out. WHY?

Gameplay. They like the way the game plays. Monopoly is still pretty fun to play and certainly doesn't require physics of any sort. Why do people play it? It's fun to play.

Until physics can be something that all developers can BOTH code AND use to have an impact on the game, there just isn't enough of a market to make this conversation even worth having. This whole thing is just a short-lived data point if they can't make that happen.

So if you were going to make a wager on whether PhysX or Havok will win, try to remember that it's NV against the world. The world, BTW, includes Intel.
chizow 13th December 2008, 02:36 Quote
Quote:
Originally Posted by Virtuman
PhysX is at best the number-two player in this field. Recognizing Havok as the industry leader and going that direction, INSTEAD of taking the NV way out and simply buying whatever was available so they could keep all the branding and marketing rights for themselves, is pretty compelling evidence that there is at least a little altruism going on here.
Actually, PhysX has no competition when it comes to physics beyond CPU acceleration. The good news for Nvidia is that their GPUs can and will always run alongside every CPU out there, and therefore they will always benefit from Havok as well.

Not sure why you think Havok is the clear leader when it yields no advantage, but as I've already noted, PhysX is widely used in the same capacity, as are various other software physics engines:

http://www.nzone.com/object/nzone_physxgames_home.html
http://www.havok.com/content/blogcategory/29/73/

You can see both engines are quite popular and heavily utilized in both PC and console titles.
Quote:
Add to that the fact that ATI was working with Havok before AMD bought them, and that AMD continues the Havok relationship in the face of the most well-known and longest-lasting hardware rivalry in the PC industry, and I think it demonstrates they are going with what they really believe is the best solution.
The fact that Nvidia has produced more in 10 months than ATI/AMD/Havok have in 2 years shows you just how little effort the latter have put into making hardware-accelerated physics a reality.
Quote:
Compare that attitude with NV's historical approach to this sort of thing. They tried to hold Intel hostage over SLI chipsets and got their proverbial asses handed to them. It wasn't until they saw so much of their market share slip away on the chipset side that they finally wised up and sold their chipsets/tech for a more reasonable price. Just another case of NV starting to care about the consumer only when its wallet gets hurt. Definitely no altruism there.
Well there's no doubt they held onto SLI licensing to protect their chipset business, but the difference is, they actually added value and a viable alternative. The same cannot be said of AMD's position on GPU physics.
Quote:
The very first Godfrey quote from this article seems to show you didn't read the article. He clearly states AMD will support OpenCL and DX11.
No, I did read the article. And it also happens that CUDA and PhysX will fully support OpenCL and DX11 as well. So while Brook+ doesn't support CUDA now, it will once OpenCL and DX11 are released, which will give ATI parts access to PhysX as well. The point is that AMD isn't making any effort now, for very different reasons than the ones they're giving.
Quote:
Ugh. If I hear any more buzz about this stupid game I'm just going to throw up. Yes, it's a cute idea. Yes, it will be fun to play for a while. Yes, there will be more copies included in bundles than will EVER sell at retail (at least for the PC). It's really nothing more than a twitch-finger Prince of Persia knock-off with better graphics.
No matter what you think of the game, it will without a doubt be better on a machine with a CUDA PhysX-capable GPU. Nvidia is adding value to your gaming experience for free.
Quote:
Take CS for example. It looks like crap and doesn't take advantage of any fancy hardware abilities with cool sounding names but there are still ridiculous numbers of people playing it on machines that can actually run Far Cry 2 maxed out. WHY?
And many of those gamers prefer CS:S to 1.4. WHY?
Quote:
Gameplay. They like the way the game plays. Monopoly is still pretty fun to play and certainly doesn't require physics of any sort. Why do people play it? It's fun to play.
Gameplay certainly has its place, but you lose a lot of your target audience when you're referring to people who spend $300-$600 annually on PC graphics card upgrades.
Quote:
Until physics can be something that all developers can BOTH code AND use to have an impact on the game, there just isn't enough of a market to make this conversation even worth having. This whole thing is just a short-lived data point if they can't make that happen.

So if you were going to make a wager on whether PhysX or Havok will win, try to remember that it's NV against the world. The world, BTW, includes Intel.
Again, there is no exclusive advantage to Intel's solution: it will run on both AMD and Intel CPUs, and both will happily pair with Nvidia GPUs. PhysX is already well established with both software and hardware acceleration, and I fully expect it to continue gaining momentum, especially given Nvidia controls a majority of the discrete GPU market. That's not to say DX11 and Havok won't have their place; they'll just be co-existing, and all will be cross-compatible. The only thing AMD is doing now is trying to slow the development and adoption rate, but there's no doubt GPU-accelerated physics is the next big thing for PC gaming.
Virtuman 13th December 2008, 03:08 Quote
Quote:
Actually, PhysX has no competition when it comes to physics beyond CPU acceleration. The good news for Nvidia is that their GPUs can and will always run alongside every CPU out there, and therefore they will always benefit from Havok as well.

Are you saying that AMD GPUs won't run alongside some CPUs?

Quote:
Not sure why you think Havok is the clear leader when it yields no advantage, but as I've already noted, PhysX is widely used in the same capacity, as are various other software physics engines:

One way to gauge it is how many titles use each solution. Havok has a significant lead over PhysX. Until NV came along and bailed PhysX out, the number of titles it was adding was decreasing. Now that NV is on the scene, they are spending considerable effort and money to get PhysX back into the game.

Since you brought up the development timeline earlier, we could use that to gauge the relative position of each company: Ageia was founded in 2002 (http://en.wikipedia.org/wiki/Ageia) and Havok in 1998 (http://www.havok.com/content/blogcategory/20/37/).

You could also use the number of platforms each supports. Ageia supports four: Sony PlayStation 3, Microsoft Xbox 360, Nintendo Wii and PC. Havok supports eight: Wii™, PLAYSTATION®3, PLAYSTATION®2, PSP™, Xbox™, Xbox 360™, GameCube™ and the PC. That's right about twice as many.

As a side note, NV didn't even bother to show the trademark and copyright symbols when they listed their supported platforms. They did, however, take the time to put them in for their own trademarked and copyrighted things.


As for the other stuff: duh. Of course physics allows developers to make games look better. But you should not be using this as a premise for justifying the same old crap we've all come to know and love from the green Kool-Aid-drinking crowd in Santa Clara. Until there is something that works across the board for everyone, there is very little point in anyone going on about it. Nothing will ever come of it.

There are only three players in this game: AMD, Intel and NV. Two out of the three agree on the engine, and all three agree on the implementation (despite your attempt to get people to think otherwise by implying AMD doesn't support OpenCL and DX11).

AMD just released a FREE video transcoder and it crushes CUDA's performance. CRUSHES IT. Yes, there are some minor issues. It took BadaboomAlongDingDong months to get their stuff right, and they just got upstaged by AMD right out of the gate. It's only going to get worse. Rather than derailing this thread into an argument over how minor the issues are, I would simply invite everyone to try AMD's solution and compare it to Badaboom's. That's assuming you're willing to spend $30 to do so.

This is what's important to understand: it's going to get worse for NVIDIA, not for AMD. AMD has chosen a technology to support FIRST on the basis of its ability to perform and SECOND on its ability to succeed commercially. NVIDIA, on the other hand, chose PhysX FIRST on its availability (at no point has Havok been up for sale, unlike Ageia, which was failing miserably) and SECOND because they believe they can market the software into a commercial success regardless of its ability to actually perform.
Virtuman 13th December 2008, 03:14 Quote
I should add that I own both AMD-based and NV-based cards. I have more than one PC, and I typically buy whatever I felt represented the best value to me at the time. So don't go thinking I'm just some AMD fanboi blathering on about nothing. I would very much like to see physics hit the mainstream gaming segment as much as anyone. But I don't see the need to rush into this with a dysfunctional plan whose SOLE design goal is revenue generation.

I have yet to see any motivation from NV in this whole PhysX debacle that really represents any good faith attempt to take any action on any product just because it's the right thing to do. Nothing.

They didn't even extend the CUDA coverage down to the 8xxx series of cards until reviewers and news people caught them with their hand in the cookie jar. It was just some code changes required to make it work, and NV purposefully withheld those changes.

Why, you ask? Well, to get people to upgrade their card from something like an 8600GT to a 9500GT, or maybe go from an 8800GT to a 9800GT. What's that you say? Those are all the same cards on the inside (respectively)? Sure they are.

What's funny is that no self-respecting computer enthusiast/human being would recommend to someone else that they upgrade from an 8800GT to a 9800GT. But NV did. Well, only if you want to get PhysX and CUDA. Well, at least until they got caught.

Again.
chizow 13th December 2008, 03:42 Quote
Quote:
Originally Posted by Virtuman
So don't go thinking I'm just some AMD fanboi blathering on about nothing. I would very much like to see physics hit the mainstream gaming segment as much as anyone.

Actually, it's quite clear that you are. There's far too much propaganda and too many inconsistencies in your postings to correct on a point-by-point basis. In any case, I didn't post here to trade jabs with someone who can barely maintain a coherent thought, so believe what you will. :)
Virtuman 13th December 2008, 03:46 Quote
Yay. I win.

LOL
chizow 13th December 2008, 04:05 Quote
Quote:
Originally Posted by Virtuman
Yay. I win.

LOL

Sure you do.

http://www.nvidia.com/object/physx_8.06.12_whql.html

The driver page clearly states that only 3 parts were supported, so it was no secret at all. Nvidia often does that with new drivers for new parts, before they validate older parts. All 8-series and higher parts supported PhysX in the next driver release.

Transcoding is nice, but it's certainly a secondary consideration to features that improve your gaming experience. Oh, and how much does AVIVO crush Badaboom in Vista 64? ;)

Also, I've said nothing that would indicate AMD GPUs won't support OpenCL or DirectX 11, or that they can't run alongside both Intel and AMD CPUs. In fact, I've stated quite the opposite.

It's obvious that you're more interested in disinformation and peripheral noise than the actual topic being discussed, so again, believe what you will. :)
Virtuman 13th December 2008, 04:28 Quote
Quote:
Originally Posted by chizow
From what I've read, all the competing standards are C-based and compatible with a bit of work. I've already linked to articles showing Nvidia plans for CUDA to fully support both OpenCL and DirectX 11. Given the past history of proprietary APIs and DirectX (again, EAX and even the various software physics implementations), I'm pretty sure all the necessary wrappers and extensions needed to make PhysX compatible with DX11 will be in place by the time it's launched. At which point I'm sure AMD will suddenly do an about-face and proclaim DX11 Compute physics the greatest thing since sliced bread. Except another 8-9 months and another development cycle will have passed, delaying universal physics implementation in games.

This was the statement I was referring to specifically. Saying that AMD would do an about-face concerning DX11 means to me that they don't support the idea now. Since Godfrey Cheng clearly said just the opposite, I'm a bit confused.

I am unaware of any inconsistencies in any of my posts. I am not so arrogant as to think I can't be wrong, and I'm brave enough to admit my errors after I learn better. Please feel free to point out the inconsistencies and provide any supporting information, so that I won't continue to be wrong if that is in fact the case.
ChrisRay 13th December 2008, 05:28 Quote
Quote:
What's funny is that no self-respecting computer enthusiast/human being would recommend to someone else that they upgrade from an 8800GT to a 9800GT. But NV did. Well, only if you want to get PhysX and CUDA. Well, at least until they got caught.

What? There might be a few drivers out there which are GeForce 9 series only, but that's only a testing/QA thing. There are drivers out there which only support the GeForce 8/200 series as well. Nvidia has been talking about GPU PhysX for the GeForce 8/9/GT200 cards since they bought Ageia back in February. They were never dishonest about which cards would support it; it simply took more time to get all cards QA'd for PhysX. The 9800GTX, for instance, came out with PhysX support first because it was one of the first cards to go through QA. But the entire time, Nvidia was saying that "ALL" of their DX10 GPUs would support PhysX. Never once did they suggest that only the GeForce 9 series would support it and the 8 series would not.


Regards

Chris

SLIZONE Administrator
Nvidia User Group Member
Virtuman 13th December 2008, 06:45 Quote
You're correct on that. However, the delivery was misleading IMO. It was all a mishmash of G80 vs. G92, then which cards could and could not do it, followed by which functionality worked on which cards in a given group... It was all very confusing. In fact, I'm still not sure if it works on a G80 or not.

But in the end I'll give you that and rephrase as above.
ChrisRay 13th December 2008, 08:28 Quote
No, it wasn't. If anything was misleading, it was the fault of the editors. Nvidia was extremely clear about the way PhysX would roll out. It was very clearly laid out to editors that the 9800GTX+ was the first card to go through PhysX QA testing, so that reviewers could use it and talk about PhysX in their 9800GTX+ reviews. The big issue was getting the PhysX driver out in the product review timeframe. Further G92 cards were enabled in the not-too-distant future, and the G80 series a month after.


Regards
Chris


SLIZONE Administrator
Nvidia User Group Member
frontline 13th December 2008, 15:56 Quote
I wonder why Valve think that PhysX is a waste of development time at the minute? Maybe because they have invested a lot of time and effort into performance scaling across a range of hardware, and don't see any benefit in coding a bit of eye candy for a minority of their end users. Their hardware surveys make interesting reading too: http://store.steampowered.com/hwsurvey/

46% have a DX10-capable card (but less than 22% of those are running Vista), compared to nearly 60% with a multi-core CPU. Assuming that around two-thirds of the DX10 cards are Nvidia cards, that's still only around 1 in 3 users who would benefit from GPU-based physics at the present time.
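Spelling that estimate out - these are just the assumed survey figures above, not hard data:

    #include <stdio.h>

    int main(void)
    {
        double dx10_cards   = 0.46;     /* share with a DX10-capable card */
        double nvidia_share = 2.0 / 3;  /* assumed NV share of those cards */

        /* 0.46 * 0.667 is roughly 0.31, i.e. about 1 user in 3 */
        printf("GPU PhysX capable: %.0f%%\n",
               100.0 * dx10_cards * nvidia_share);
        return 0;
    }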
Virtuman 13th December 2008, 17:09 Quote
Quote:
ChrisRay
06-17-08, 12:12 PM
That was a mistake on my part. I have to go through my editor to get it fixed and we havent had a chance. Currently. I'm not entirely sure. Nvidia owns the API for PhysX. But I only know of the demo'd games Nvidia showed at editors day running PhysX. None of the older PhysX supporting titles were included or Demo'd except an UT3 Mod which may also be exclusive to GPU physX. So those are the only games I am willing to comment on specifically. All of which were titles Nvidia was heavily influencing the introduction of GPU PhysX into. I have been trying to find out the answer to that very question but Nvidia has been slower than usual with responses due to how busy this launch has made them.


Chris

http://www.nvnews.net/vbulletin/archive/index.php/t-114856-p-6.html

This was your response to the following question:
Quote:
there are plenty of PhysX games on the market that worked with old Ageia cards. Does this mean that this new nvidia PhysX will only work with new games? no support for all the older games?

This conversation took place a month after Editor's Day and four months after the PhysX announcement.

I'm not saying that you said anything incorrect or less than factual. But it is confusing considering how clear NVIDIA's messaging was about what would be covered. Bear in mind I'm not talking about now - I'm talking about what was going on back then. Even you, someone very well connected (ostensibly) to NV, didn't really know how things were playing out.

You also didn't have anything to say when the questioner completely contradicted you by saying there were plenty of PhysX games on the market, after you had said there weren't any PhysX-supported games currently available. I was at Editor's Day too, and I heard them say they would be supporting the existing stuff and that the new stuff would run on existing Ageia PPUs as well.

That's the kind of thing that allows confusion to proliferate.
Virtuman 13th December 2008, 17:52 Quote
There's some more history to the whole physics debacle that is also interesting. NV started out on board with Havok at very close to the same time ATI did.

NV announces here-
http://www.reghardware.co.uk/2006/03/20/nvidia_havok_gpu_physics/

ATI announces here-
http://www.reghardware.co.uk/2006/06/06/ati_gpu_physics_pitch/

Despite the meaningless argument over who was there first, they were both on board.

What wasn't meaningless was the fact that ATI hardware was, again, outperforming NV hardware. Don't worry, wait a bit and it will swing the other way. This time it took quite a while longer for that to happen. Actually, concerning physics, it isn't clear yet whether or not NV has actually taken that title. At any rate...

For those that continue to want AMD to support PhysX and CUDA, I'm still asking why no one is asking why NV won't support Havok - especially considering they used to. It's where they both started.

AMD still supports Havok, and NV bailed even though they didn't have to. Had they not, AMD and NV could both have been working to get developers to start including Havok physics in their code, and we would likely be having a very different argument right now.

From my point of view, Havok is supported by more games already as evidenced by counting the number of titles both companies claim. They've been doing it longer and better than Ageia and it exists on more platforms than PhysX.

NV has decided that rather than being part of the solution they will create a problem. In return they get all the branding and marketing rights and just ignore the inferiority of the product.

As usual, AMD/ATI just cedes this marketing coup to NV. But this time I'm not going to fault them too much. The reality of physics is such that, until there is a single method for getting the job done, no developers are going to do anything really significant with it. So AMD plods along quietly.

Which is kind of cool in a marketing sort of way. Here we are going on about NV over a non-subject, but at least we're talking about NV. Given the current state of their hardware, that's exactly what they need to happen if they are going to slow their loss of market share to AMD.

AMD on the other hand isn't saying anything because there isn't much to say. Yet. Rather than confusing everyone with meaningless drivel about products that don't exist they just shut up.
ChrisRay 13th December 2008, 20:25 Quote
Virtua, that was regarding how older games that were PPU-compatible would be ported to GPU compatibility, or whether they would be at all, and it was a mistake on my part when I listed certain games that worked and didn't. It has nothing to do with the driver rollout for GPU PhysX support. UT3 was the first game they supported with a GPU PhysX driver, at a very early driver stage. Even now, with a full-fledged and working GPU PhysX driver, not everything that was made with the PPU in mind works with GPU PhysX. Some of it has been ported over; some of it hasn't been. GPU PhysX was something that most developers intended to program for in the future - again, this is something Nvidia heavily emphasized to editors. But it has nothing to do with the fact that all their presentations and documents said the 9800GTX would be the first to roll out with drivers, with the GeForce 8 series shortly after.

You are really grasping at straws here. You've literally started talking about something entirely different to the subject at hand. You make something out of nothing. PhysX was a freebie for anyone who currently owned an Nvidia card, and has remained so since. You are rambling like a politician, changing your argument and points as you see fit. The fact that Nvidia was busy on a product launch (which is common for any company during a product launch) doesn't mean they didn't adequately prepare reviewers and editors alike. Yes, I made a mistake; it won't be the first or last time I do, either. But judging from your comments about Editor's Day, it sounds to me like you were the one who got really confused.
Quote:
You also didn't have anything to say when the questioner completely contradicted you by saying there were plenty of PhysX games on the market, after you had said there weren't any PhysX-supported games currently available. I was at Editor's Day too, and I heard them say they would be supporting the existing stuff and that the new stuff would run on existing Ageia PPUs as well.

Have I touched a nerve here? Nvidia "did" say they would be supporting "some" games that were PhysX-capable: UT3/GRAW. I said there were no current PhysX games available (I was speaking of GPU PhysX games, as I didn't follow the PPU scene very closely). They never said that all games would be supported. I followed the ones Nvidia was promoting; I never followed the PPU before. When Nvidia demo'd its PhysX support at Editor's Day, they specifically showed "future" games, such as a football game, a marine specs game, etc. If you were at Editor's Day then you'd know this. I admit I did not follow PPU games closely because I never intended on owning a PPU. I also admitted I made a mistake. Let it go. I have no interest in exchanging jabs with you. For someone without bias, you seem to be going out of your way to show how Nvidia has done harm with PhysX. Everything I said here is exactly what I said in the nvnews thread, which you have posted here way out of context.

http://www.nvnews.net/vbulletin/showthread.php?t=114856&page=9
Quote:
AMD on the other hand isn't saying anything because there isn't much to say.

You're right, they have nothing to talk about right now. All this DX11/OpenCL/Havok talk is nothing but smoke and mirrors to hide their actual lack of product.
Quote:
Rather than confusing everyone with meaningless drivel about products that don't exist they just shut up.

Oh, they seem to have no problem running their mouth about PhysX. AMD is not the innocent bystander you are making them out to be.


I'm pretty much done conversing with you. Your ramblings are incoherent at best, and you appear to have a serious axe to grind. I'm not interested in trading posts with you for the rest of the week until this thread eventually, and probably, gets locked.


Regards
Chris

SLIZONE Administrator
Nvidia User Group Member.
Virtuman 13th December 2008, 22:51 Quote
Nah, it's all good. We can have a difference of opinion and still get along and it's not my intent to start any kind of flame war or anything silly like that.

I agree with you that NV has not misrepresented the facts of what they were doing or what their intent was or is. The issue I have is that they allow too many assumptions and too much incorrect information to float around, and do very little to set anyone straight out in the community.

That's just a judgement call on my part. The only reason I referenced what you said in a different forum was as an example of how this sort of thing takes place and how they indirectly direct it. It's great marketing and it really works.

You're an SLIZONE admin and are known to be very active in the community. That's a great thing. But NV leaving you out of half of their plans - i.e. you didn't have the answer and couldn't get it in a reasonable amount of time - produced an inaccurate post.

I agree the particular content of that post wasn't of any major importance and it's just some guys talking. I also used that as an example because it doesn't make you look bad and I don't want you to think I'm going after you (which I'm not). It's just an example.

I appreciate you taking the time to respond to my posts. I spent a fair amount of time researching a few things: some were originally opinions that I had to let go of because I found out they weren't true, and a couple were things I wasn't aware of. So all around it's another good learning experience for me.

That is always a pleasure.
maha_x 14th December 2008, 13:21 Quote
Gosh... is it not obvious that PhysX is Nvidia's crossover product until DX11 arrives, and that AMD doesn't have one? Nvidia will benefit from PhysX as long as DX11 isn't out; once it is, we'll all just quietly bury these things and go with DX11, as we're all on Bill's leash. At least PC gamers are. It's just not happening without M$ and Vista/Win7. There's just no point debating beyond the obvious. These days nearly all games run DX, as OpenGL is too slow to keep up and all hardware is designed for DX in the first place anyway. On that note, M$ might as well (and will) dictate how the API works, and NV/AMD engineers will go quietly to work and produce hardware designed to the spec M$ gave them. Ooh, but NV/ATI have had [insert amount] influence on the DX spec. So? It's still M$'s product, and those other guys can't design or sell any hardware unless it complies with M$.

Maybe PhysX has a life as a PS3-specific API in the future; it's fitting, as the PS3 has an Nvidia GPU too. The 360 SDK will be upgraded to include DX11 physics for easier porting.

I don't really see where Havok is going to fall in all of this. Maybe it faces integration into Intel hardware, ending up as only an SSE-style extension to functionality in future Intel CPUs and GPUs.
Chocobollz 25th December 2008, 07:35 Quote
Tim, the problem is, there are too many nVidiot fanboys here, so you should take their words with a grain of salt. Their main intention is to mock AMD whenever they have the opportunity, so please don't listen to what they say.

Chizow, you're a loser, man~ ATI's HD 4000 has beaten your nVidia graphics card - can't you live with it, man? Stop saying negative things about AMD; you'd do better to do something positive with your nVidia camp. If you're a loser, act like a loser, because if you don't, you'll look more and more like a loser. I'm sick of your rants about AMD.
Nitro808 25th December 2008, 20:02 Quote
A lot of good points have been made in these posts, but calling someone a loser for having a preference for one product or another is just ignorant. They are both good companies making competitive cards. I have had cards from each, which have all worked well. I admit there are some things that I like and don't like from each, but you gotta take the good with the bad. Wise up, Chocobollz, and respect others.
Chocobollz 26th December 2008, 04:24 Quote
Yeah, I might be a little ignorant. You see, I don't have any negative feelings about either company, but what annoys me here is that I've seen too many green-team supporters trolling on an ATI thread; that's what I don't like. I know we all have our own preferences, so I respect your choices. I didn't tell you to buy an ATI graphics card, or say that an nVidia graphics card (or nVidia itself) is bad. I'm just saying that you all should take ChrisRay's (and Chizow's) words with a grain of salt because, obviously, they're nVidia supporters (and Chizow has been known to troll on whatever thread about ATI he finds; I think he wants to give negative press to ATI). I think we should give support to whatever company we prefer, so if ChrisRay and Chizow like nVidia, that's fine; just DON'T give negative press about AMD/ATI. That's all.

P.S. You can see that most of AT's forum members considered Chizow to be trolling in this thread: http://forums.anandtech.com/messageview.aspx?catid=31&threadid=2179958
Virtuman 26th December 2008, 14:24 Quote
Quote:
Originally Posted by Nitro808
A lot of good points have been made in these posts, but calling someone a loser for having a preference for one product or another is just ignorant. They are both good companies making competitive cards. I have had cards from each, which have all worked well. I admit there are some things that I like and don't like from each, but you gotta take the good with the bad. Wise up, Chocobollz, and respect others.

+1
thehippoz 26th December 2008, 20:44 Quote
I dunno about them both being good companies.. they were in cahoots for a long time, price-fixing GPUs.. that's why prices skyrocketed and $500+ GPUs are now the norm.. I think nvidia is even worse, with the 200 series releasing at outrageous prices (I was an early adopter)

the 9 series was pretty weak.. I owned the 8800gtx and went to a 9800gtx on release - it was faster but gimped with AA.. their 780i was no practical improvement over the 680 other than it could run Yorkfield.. call me crazy, but nvidia has been dipping into the consumer till for every penny it can this year while offering nothing good

ati does have better image quality currently.. I don't want to hear the fanboys because I know they don't own anything ati - I remember they also had better image quality before the 8800gtx was released.. physx just makes me laugh, just another gimmick dipping into the till.. I would rather see havok (which runs on the cpu) take over as the standard and get worked into all gpus instead of proprietary apple tactics - it's been around a while.. physx was a failure from the moment it was created - and CellFactor was rigged.. you could play it just fine minus things like the cloth effects when they first released the demo way back - you just had to edit the ini file with physx=off - after me and my buddy saw that, we understood exactly what it was - physx is just a gimmick in its purest form, unneeded - a money-making tactic :) yeah, sorry, I figured it out - I mean, a whole gpu just dedicated to physx :) they hope you are that stupid.. more money
Virtuman 26th December 2008, 20:58 Quote
Both companies certainly do some things from time to time that aren't actually in the consumer's best interest. To some degree that's the nature of business. IMO, NV just does it as a matter of routine, whereas AMD tends to use it as a fallback position, and only when they really have to.

As for PhysX, I'm in your camp. I don't think that waving flags or better-flowing water is going to make people (that have any sense) go one way or the other. The real key with physics and other computational models running on the GPU is making them do things that are actually useful.

CUDA is trying to go down that path with simulations and transcoding. So NV releases this code for people to use for free (as has been stated here) and someone like Badaboom comes along and builds a transcoding app.

But if using the code is free, why do they charge so much for the program, and why hasn't anyone else used the code to do the same thing and just undercut them on price? If NV wants to convince everyone that CUDA is so mainstream, why haven't they identified the other people making software that does the same thing and supported them as well?

I suppose it's because there just aren't as many people actually coding with CUDA as NV's marketing machine would have us believe. On the other hand, ATI builds support for transcoding into their driver and releases it for free. I think it's pretty clear that ATI has won this round.