bit-tech.net

Nvidia set to acquire Ageia

Nvidia has announced its intention to acquire Ageia Technologies after signing a definitive agreement with the gaming physics specialist.

In a shock announcement just a few moments ago, Nvidia has revealed that it has signed an agreement to acquire Ageia Technologies—the company that raised awareness of in-game physics with the launch of the PhysX physics processing unit in 2005—for an undisclosed sum.

Details are thin on the ground at the moment, but Nvidia says that more information on the acquisition will be disclosed during the company's quarterly earnings call on February 13th at 10:00PM GMT.

Jen-Hsun Huang, Nvidia's president and CEO, hinted at bringing "GeForce-accelerated PhysX to hundreds of millions of gamers around the world" in the press release just issued.

"The computer industry is moving towards a heterogeneous computing model, combining a flexible CPU and a massively parallel processor like the GPU to perform computationally intensive applications like real-time computer graphics," claimed Huang.

"Nvidia's CUDA technology, which is rapidly becoming the most pervasive parallel programming environment in history, broadens the parallel processing world to hundreds of applications desperate for a giant step in computational performance," Huang said. "Applications such as physics, computer vision, and video/image processing are enabled through CUDA and heterogeneous computing."
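Huang's pitch is essentially about data parallelism: per-particle physics updates are independent of one another, which is exactly the shape of work a massively parallel processor handles well. As a rough illustration only (plain NumPy standing in for a GPU kernel, with a made-up particle count and a hypothetical `step()` helper, none of which come from Nvidia's materials), one frame of particle integration looks like this:

```python
import numpy as np

# Illustrative only: 100k particles, each updated independently per frame.
# NumPy's whole-array operations stand in for a massively parallel GPU kernel.
N = 100_000
rng = np.random.default_rng(0)
pos = np.zeros((N, 3))                 # positions (m)
vel = rng.standard_normal((N, 3))      # velocities (m/s)
gravity = np.array([0.0, -9.81, 0.0])  # constant acceleration (m/s^2)
dt = 1.0 / 60.0                        # one 60 Hz frame

def step(pos, vel):
    """Semi-implicit Euler integration: every row is independent,
    so the whole update maps naturally onto many parallel processors."""
    vel = vel + gravity * dt
    pos = pos + vel * dt
    return pos, vel

pos, vel = step(pos, vel)
```

Because no particle's update depends on any other's, the same code parallelises across however many processors are available — the "heterogeneous computing" split Huang describes is the CPU orchestrating frames while the parallel part does the arithmetic.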

The acquisition, Nvidia says, is still subject to customary closing conditions, but we expect this to go through fairly quickly.

I'm not quite sure what to think at the moment, as Ageia was a company almost waiting to be acquired by a bigger fish. That said, the prospect of on-GPU physics is an interesting one when you consider that both ATI and Nvidia blew a lot of GPU-physics-induced hot air in 2005. Back in September, Intel announced that it had bought Havok—the industry's leading physics middleware developer—so now that Nvidia is set to acquire Ageia, where does that leave AMD?

Share your thoughts with us in the forums.

30 Comments

Firehed 5th February 2008, 00:00 Quote
Good. I was always hoping PhysX would take off but knew it couldn't really go mainstream until it was built right into GPUs.
E.E.L. Ambiense 5th February 2008, 00:07 Quote
^QFT.
Sebbo 5th February 2008, 00:10 Quote
unfortunately, it now looks like it will be even longer before we see physics on Crossfire... first Havok, now this
only big physics company i can think of that's left is Crytek (though they aren't exclusively physics, as we know) and they've got their hands in nvidia's pocket anyway (or should that be the other way around?:?)
Cobalt 5th February 2008, 00:16 Quote
I don't think the PhysX technology in itself will take off. Maybe something similar, but I doubt that particular chip will make it out of the GPU physics tunnel alive.

Realistically a solution like havok is more likely because you can build general hardware and run it on any supporting piece of kit like DX. Hardware based solutions may be more efficient in individual cases but you can't convince most people to buy add in sound cards let alone something like this.

I reckon software APIs processed by GPUs will lead the way, which will then be expanded so that a discrete add-in card will be available at the high end. That way nobody has to buy any extra hardware to get the concept off the ground, but once it's established, better solutions will be available for those who have the cash. Pretty much like 3D graphics came onto the scene.
FeRaL 5th February 2008, 00:16 Quote
"Intel announced that it had bought Havok—the industry's leading physics middleware developer—so now that Nvidia is set to acquire Ageia, where does that leave AMD?"

A$$ed out I would guess. But, who knows...
Saivert 5th February 2008, 00:35 Quote
I never had any hope for this Ageia PhysX thing. Having yet another discrete processing card was just gonna fail. Only a select few ultra-enthusiasts have bought this PPU card, and not many games use the PhysX middleware, which was a requirement in order to utilize the PPU card. Havok is used by more games and VALVe is heavily invested in it (they actually improved a lot on it, though, as it's part of the Source engine).

I'm not all that sure what assets Ageia can bring to the table. NVIDIA surely must have a lot of physics technology already planned. At least now there is a clear direction and we don't have to have this distraction anymore.

As for what AMD is left with I don't know. Do they really need their own physics technology?

The best thing would be for Microsoft to implement a common physics API in DirectX 10.2 or a later version. Of course then there would be no technology competition which may be a bad thing too. But you can say that with the graphics subsystem as well.
DXR_13KE 5th February 2008, 01:13 Quote
"where does that leave AMD?"
in deep water without a fish.....
johnmustrule 5th February 2008, 01:37 Quote
I don't think AMD's in a lot of trouble; it's been said by all the leading tech companies that physics simulation can be done on a GPU or an in-house designed "specialty" processor. Of course this raises the question: why buy Ageia unless their software is something really special?
Otis1337 5th February 2008, 01:51 Quote
AMD is doing VERY poorly atm... but they will probs still keep going till they run out of money..... that's all they can do really, or sell themselves like a hooker
Anakha 5th February 2008, 03:35 Quote
It looks like poor AMD can't catch a break.

Phenom under-performs its older competitor (Core/Core 2)
ATi gets its ass kicked by 1.5-year-old graphics cards (8800 GTX)
Heck, for that matter, ATi's very best cards, in Crossfire, get their ass kicked by a SINGLE 1.5-year-old card
Intel buys HaVoK, NVidia buys PhysX (Leaving ATi with no major-developer-supported GPU-Physics system to leverage)

So, in pretty much every market they've put a stake in, AMD are getting their backsides handed to them. Here's hoping they can do something (ANYTHING!) to get them back on track.
Amon 5th February 2008, 03:43 Quote
Anyone else see this coming from a couple years ago?
Bluephoenix 5th February 2008, 04:33 Quote
only one direction to go from here:

intel starts making discrete GPUs, and nvidia buys AMD

you then have 2 companies that both make the CPU, GPU, and chipset competing for one market.


not a rosy future, but seems inevitable at this stage. :( :'(
mutznutz 5th February 2008, 07:40 Quote
Quote:
Originally Posted by Bluephoenix
only one direction to go from here:

intel starts making discrete GPUs, and nvidia buys AMD

you then have 2 companies that both make the CPU, GPU, and chipset competing for one market.


not a rosy future, but seems inevitable at this stage. :( :'(

Or Intel nabs AMD; I don't think Nvidia would be allowed to buy it, else it'd be a monopoly surely?

I think AMD/Ati has taken the right direction with the new gfx cards; they just need to focus on getting the cash back in and making a good CPU like they used to
rhuitron 5th February 2008, 07:41 Quote
Hey AMD!

BOOOM

Headshot! ;)

I am seriously psyched (no pun intended) about this join!
Imagine playing an 8800 GTS with Ageia???!!!

Freaking Awesome! Go Nvidia!
Bindibadgi 5th February 2008, 09:01 Quote
I expect a certain generation of Nvidia IGPs to have physics hardware built in - building on hybrid SLI. Ageia needed to have platform-level integration - the specific hardware will die, but the middleware will remain strong and be included into NV CUDA for stream processing.

I can also see Nvidia doing a lot better on the workstation front with hardware acceleration for movie physics etc
The_Pope 5th February 2008, 09:16 Quote
It's certainly an interesting development. I too don't see a future for their PPU tech, and amusingly, since AGEIA's business model was always selling chips, the PhysX API was largely free to developers (unlike Havok, which costs $100,000s per game, PER PLATFORM I believe).

SO, now we have NVIDIA buying the company, potentially binning the PPU tech (and/or integrating it into future GPU designs), but otherwise we gamers can use off-the-shelf NVIDIA graphics cards to hardware-accelerate PhysX in games. Enthusiasts are excited in spite of the complete lack of decent titles, but here is THE key reason why this is Good News: once GPU PhysX is enabled in thousands/millions of graphics cards worldwide, developers will FINALLY have enough of a userbase to justify the extra dev time to integrate PhysX into their titles properly (rather than as a token add-on afterthought).

Indeed, NVIDIA has a tremendous Developer Relations team full of skilled coders, so it's conceivable that they will just farm those engineers out to help developers build hardware PhysX support as part of The Way It's Meant To Be Played scheme.

Winner!
Bauul 5th February 2008, 09:39 Quote
I personally think this is good news. Ageia were never going to hit the big time with their own, undersupported, dedicated physics card, but by integrating it into GPUs, I can see the whole concept of a dedicated physics chip becoming more of a reality. The main barrier to physics chips is that, unlike graphics cards, the gameplay is different depending on whether you have the physics chips. Graphics cards were around for years before games were released that required them (Q3 was the first, wasn't it?); until then they were optional extras. I can't see that happening with physics chips, as games either utilise the physics, or don't. It'll be interesting to see how nVidia utilise the new tech (and brand name) in the future.
Lurks 5th February 2008, 10:04 Quote
Ageia were pretty shagged by the poor hardware sales and poor developer uptake chicken-and-egg situation. But why would you look at Ageia as a game developer when you can use the much more mature cross-platform Havok engine? And of course now Havok are proper big boys, having been acquired by Intel.

Even on the PC, I really need to call into question why we needed this thing when right now PC gamers are generally looking at at least one idle core on their multi-Gigahertz CPU doing absolutely nothing... Granted, specialist hardware can do some things that aren't necessarily plausible on general CPUs, but game developers are writing physics code threads to run on general CPUs (for 360 etc). So the central prospect was to use a kooky third-party PhysX API and support hardware that no one owns, which isn't remotely applicable to console development and therefore requires a lot of extra PC-specific work for like 10 people. Not really a solid case, is it?

So what are Nvidia after? Well, a Physics API probably and some engineering expertise which can be combined/shared with their GPU guys. <shrug>
[USRF]Obiwan 5th February 2008, 11:05 Quote
You could see this coming from miles away. I only hope Nvidia will make this PPU integration as 'universal' as possible, because that was the problem with the Ageia setup. You needed the Ageia PPU and a game with the Ageia code, and not 'buy game, play game, see ppu, enhance game fx'. If this was the case everybody would have had a PPU card.

As for AMD, AMD has been working for a while now on their Fusion project, a PPU based on the Phenom CPU. And AMD is working on a massively parallel generic instruction scaler in the form of a hybrid 'swift' chip based on the Phenom core.

And AMD is also working with Nvidia on their joint nature-calculation GPU project. It's possible that this is to be expected/introduced with DX11 in 2009. Maybe DX11 will introduce interactive physics then.


I am also wondering if Nvidia is going to do an 'all in one chip' solution or place a PPU alongside the GPU on the 'next' GeForce card. I guess there are enough Ageia PPUs on the shelves to solder these onto the cards. Maybe an 8800GTPPU edition?
Lurks 5th February 2008, 12:54 Quote
It would be *far* more sensible to integrate logic into the GPUs.
zr_ox 5th February 2008, 12:54 Quote
The best way for AMD to get back into the game after this news is to release a quad-core GPU, with 1 core dedicated to PPU, passively cooled and consuming 1 W.
Bluephoenix 5th February 2008, 15:43 Quote
Quote:
Originally Posted by mutznutz
Or intel nabs AMD, I don't think Nvidia would be allowed to buy it else it'd be a monopoly on it surely?

I think AMD/Ati has taken the right direction with the new gfx cards they just need to focus on getting the cash back in and make a good CPU like they used to


no, it's Intel that can't have AMD, or they'd have the entire CPU market to themselves

the GPU market would be relatively split if you take onboard GPUs into consideration.
Tyinsar 5th February 2008, 15:54 Quote
Using GPU as PPU = more burden on an already overtaxed part.

As long as we've had discrete GPUs game developers have been pushing them to (or past) their limits. Given the choice of extra power on the GPU I'm guessing most of it will still go to better screenshots (which are never "real" enough) and not better motion (which is already fairly acceptable). The way I see to change that is to make the motion (physics) so convincing that we see what we've been missing and rave about the "new realism" that "just has to be experienced".
TreeDude 5th February 2008, 21:26 Quote
I'll never understand why we need a dedicated physics processor. Quad-core CPUs are slowly becoming more and more a part of a gamer's rig. Why can't they just dedicate one CPU core to physics? I feel like this is the easiest solution.

Anyone who actually has a physics card knows that you still lose FPS when the physics are set to full. So what the hell is the point? The in-game physics are too complex for even the latest in physics hardware. Why can't we step it back and just optimize for a single dedicated CPU core?

We need to be able to set in game processor affinity for multithreaded games. That would rock. Now we just need multithreaded games.......
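For what it's worth, per-process core pinning already exists at the OS level, so a game (or user) could restrict a helper process to one core today. A minimal sketch, assuming Linux and Python's `os.sched_setaffinity` (the call is not available on Windows, where `SetProcessAffinityMask` plays the same role):

```python
import os

# Linux-only sketch: pin the current process (pid 0 means "self") to core 0,
# i.e. the "dedicate one core to physics" idea from the comment above.
if hasattr(os, "sched_setaffinity"):   # guard: not present on every OS
    os.sched_setaffinity(0, {0})       # run this process on core 0 only
    print(os.sched_getaffinity(0))     # the allowed-core set for this process
```

This only fences a process onto a core; whether the game's engine actually runs its physics in a separate process or thread is up to the developer, which is the real gap the comment is pointing at.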
Tyinsar 5th February 2008, 23:50 Quote
Quote:
Originally Posted by TreeDude
I'll never understand why we need a dedicated physics processor. ... Why can't they just dedicate 1 CPU core to physics? I feel like this is the easiest solution.
...
Because the CPU, being a general purpose processor, is nowhere near as fast at this task as a custom designed, single purpose, processor.

By your logic we could also dedicate one core to graphics and replace the GPU - hey, we could even take two and replace Crossfire / SLI. This may be done in the future BUT it won't work without a major change in the nature of the CPUs we use today.
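Tyinsar's point can be demonstrated even without special hardware: touching elements one at a time costs far more than doing the same arithmetic over a whole array at once. In this sketch (workload and numbers are arbitrary), NumPy's vectorised path stands in for wide, special-purpose hardware:

```python
import time
import numpy as np

data = np.arange(1_000_000, dtype=np.float64)

# One element at a time, the way naive general-purpose CPU code runs.
t0 = time.perf_counter()
serial = [x * 0.5 + 1.0 for x in data]
t_serial = time.perf_counter() - t0

# Whole array at once, standing in for a wide, special-purpose pipeline.
t0 = time.perf_counter()
vectorised = data * 0.5 + 1.0
t_vector = time.perf_counter() - t0

assert np.allclose(serial, vectorised)  # identical answers, very different cost
```

On a typical machine the vectorised path is orders of magnitude faster; the same gap, magnified by fixed-function silicon, is why a PPU or GPU can outrun a general core on physics workloads.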
outlawaol 6th February 2008, 22:35 Quote
Sweetness-ness...
Whalemeister 11th February 2008, 09:47 Quote
Quote:
Originally Posted by mutznutz
Quote:
Originally Posted by Bluephoenix
only one direction to go from here:

intel starts making discrete GPUs, and nvidia buys AMD

you then have 2 companies that both make the CPU, GPU, and chipset competing for one market.


not a rosy future, but seems inevitable at this stage. :( :'(

Or intel nabs AMD, I don't think Nvidia would be allowed to buy it else it'd be a monopoly on it surely?

I think AMD/Ati has taken the right direction with the new gfx cards they just need to focus on getting the cash back in and make a good CPU like they used to

What, and Intel buying the only other serious CPU manufacturer isn't a monopoly?!!
Bindibadgi 11th February 2008, 10:11 Quote
Let me make this clear - Nvidia can't make a CPU (as we know one) and Intel and AMD will never, ever join or be bought out.

Intel will make a GPU part, although I expect a workstation GPGPU built on IA, and it will continue to funnel money into Havok.
Sebbo 11th February 2008, 10:57 Quote
Quote:
Originally Posted by TreeDude

Anyone who actually has a physics card knows that you still lose FPS when the physics are set to full. So what the hell is the point? The in-game physics are too complex for even the latest in physics hardware. Why can't we step it back and just optimize for a single dedicated CPU core?

i think the slowdown is partly due to the extra coordination the CPU and GPU need to make with the PPU, as well as all the extra stuff the GPU now needs to draw each frame, rather than the PPU itself being taxed too much. anyone know for sure?

i think we'll see something similar to ATI's Physics on Crossfire (http://ati.amd.com/technology/crossfire/physics/index.html) from NVidia first, since GPUs are supposed to be able to do general purpose calculations anyway (write a compiler/recompile the engine, or design a whole new card/chipset; which sounds easier and cheaper to you?:p).
Bindibadgi 11th February 2008, 11:07 Quote
It's because the PPU generates more things for the GPU to render, as well as inter-CPU/PPU/GPU/engine co-ordination; hence the slowdown.