bit-tech.net

Nvidia announces world's 'most complex' GPU


Nvidia's Kepler-based 28nm GK110 boasts 7.1 billion transistors, making it the most complex commercially-available integrated circuit on the planet.

Nvidia has revealed details of what it claims is the most complex commercially-available integrated circuit on the planet, the Kepler-based GK110 GPU.

Before you get too excited, however: this chip won't be making it into any high-end gaming cards. Instead, the company is aiming the GK110 firmly at high-performance computing (HPC) and supercomputing applications where a GPU's ability to rapidly churn through highly-parallel tasks is welcomed with open arms and blank cheques.

Unveiled at the GPU Technology Conference (GTC) this week, the GK110 is manufactured on a 28nm process node and boasts a whopping 7.1 billion transistors - making it, in Nvidia head Jen-Hsun Huang's own words, 'the most complex IC [integrated circuit] commercially available on the planet.'

In comparison, Nvidia rival AMD's commercial-grade Tahiti GPU, as found in the Radeon HD 7900 family, features just 4.3 billion transistors created on the same 28nm process node. For a real giggle: Intel's 4004 processor, the first commercially available microprocessor from the company released back in 1971, featured 2,300 transistors on a 10µm process node.
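That 41-year gap invites a quick back-of-the-envelope check. The sketch below treats transistor count as a clean exponential, which real products only roughly follow:

```python
import math

# Transistor counts from the article: Intel 4004 (1971) vs Nvidia GK110 (2012).
t_4004 = 2_300
t_gk110 = 7_100_000_000
years = 2012 - 1971

# Number of doublings needed to get from 2,300 to 7.1 billion transistors.
doublings = math.log2(t_gk110 / t_4004)

# Average time per doubling over those 41 years.
years_per_doubling = years / doublings

print(f"{doublings:.1f} doublings, one every {years_per_doubling:.1f} years")
```

That works out to roughly 21.6 doublings, one every ~1.9 years - remarkably close to the classic Moore's-law cadence.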

The GK110 itself boasts 15 Streaming Multiprocessor (SMX) units featuring 192 CUDA cores each, for a total of 2,880 CUDA cores in each GPU. Comments made by Huang at the event suggest that several grades of products will be made available, each with fewer SMX units enabled, as Nvidia seeks to increase its yields on what will be a very complex chip to manufacture.
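The core-count arithmetic above, plus the salvage-part grades Huang hinted at, can be sketched like this. The 14- and 13-SMX bins are illustrative assumptions, not announced products:

```python
CUDA_CORES_PER_SMX = 192  # per the article
FULL_SMX_COUNT = 15

def cuda_cores(enabled_smx: int) -> int:
    """CUDA cores for a GK110 part with the given number of SMX units enabled."""
    return enabled_smx * CUDA_CORES_PER_SMX

print(cuda_cores(FULL_SMX_COUNT))  # full chip: 2880 cores, as quoted
print(cuda_cores(14))              # hypothetical salvage bin: 2688 cores
print(cuda_cores(13))              # hypothetical salvage bin: 2496 cores
```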

Nvidia's first outing for the GK110 will be the HPC-centric Tesla K20 series of products, featuring a 384-bit memory bus made up of six 64-bit controllers running in parallel. The company has yet to indicate the quantity of memory available, but given the target market it seems likely that each GK110 will have between 2GB and 4GB of GDDR5 to play with.
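As a rough illustration of what that 384-bit bus buys: peak memory bandwidth is bus width multiplied by the effective data rate per pin. The 5Gbit/s figure below is an assumption typical of 2012-era GDDR5, not a number Nvidia quoted:

```python
BUS_WIDTH_BITS = 6 * 64       # six 64-bit controllers running in parallel
DATA_RATE_GBPS_PER_PIN = 5.0  # assumed effective GDDR5 rate (Gbit/s per pin)

# Peak bandwidth in GB/s: pins * Gbit/s per pin, divided by 8 bits per byte.
peak_gb_s = BUS_WIDTH_BITS * DATA_RATE_GBPS_PER_PIN / 8
print(f"{peak_gb_s:.0f} GB/s")  # 240 GB/s at the assumed data rate
```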

The Tesla K20 boards won't be out until the end of the year, but Nvidia also unveiled a dual-GK104 Kepler-based Tesla in the form of the K10, offering 4.58 teraflops of single-precision floating-point performance. The GK110 itself introduces Dynamic Parallelism, which Nvidia claims allows the GPU to adapt dynamically to data by spawning new threads without returning to the CPU, and Hyper-Q, which allows multiple CPU cores to address the CUDA cores on a single Kepler-based GPU simultaneously.
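The K10's quoted 4.58 teraflops is easy to sanity-check: two GK104 GPUs, 1,536 CUDA cores each, with each core executing one single-precision fused multiply-add (two FLOPs) per clock. The ~745MHz clock below is inferred from the quoted figure rather than taken from the article:

```python
GPUS = 2                  # dual-GK104 board
CORES_PER_GPU = 1536      # GK104 CUDA core count
FLOPS_PER_CORE_CLOCK = 2  # one single-precision FMA = 2 FLOPs
CLOCK_HZ = 745e6          # inferred clock speed (assumption)

tflops = GPUS * CORES_PER_GPU * FLOPS_PER_CORE_CLOCK * CLOCK_HZ / 1e12
print(f"{tflops:.2f} TFLOPS single precision")  # ~4.58, matching the quoted figure
```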

Nvidia, naturally, did not share pricing information at the event.

35 Comments

iwod 18th May 2012, 12:10 Quote
If Nvidia can master the technique of building 7B-transistor chips at 28nm, it makes you wonder how powerful a GK114 would be - i.e. a GK104 scaled up to 7B transistors.
shrop 18th May 2012, 12:19 Quote
Ah finally, the PROPER 580 replacement.
SaNdCrAwLeR 18th May 2012, 12:24 Quote
Quote:
Originally Posted by shrop
Ah finally, the PROPER 580 replacement.

Ahh, a comment without reading the story :D
faugusztin 18th May 2012, 12:30 Quote
Quote:
Originally Posted by shrop
Ah finally, the PROPER 580 replacement.

You could maybe buy one card for the price of the whole setup in your signature. Maybe.
shrop 18th May 2012, 12:31 Quote
Quote:
Originally Posted by SaNdCrAwLeR
Ahh, a comment without reading the story :D

Whoops! :D
GoodBytes 18th May 2012, 12:51 Quote
Quote:
Originally Posted by shrop
Whoops! :D

Imagine your face after you buy it and open the box.
http://images.anandtech.com/doci/5840/Tesla_GK104_K10_3Qtr_Covr_575px.jpg
Good luck plugging your monitor into that
shrop 18th May 2012, 12:53 Quote
hahah! D'oh! :)
bowman 18th May 2012, 13:01 Quote
wonder what the die size is.
r3loaded 18th May 2012, 13:11 Quote
It does make you wonder if they'll make a GK110 based GTX 685 or something.
AmEv 18th May 2012, 14:28 Quote
Quote:
Originally Posted by GoodBytes
Good Luck plugging that monitor

WHDI. Duh :p
Eldorado 18th May 2012, 15:41 Quote
Quote:
Originally Posted by r3loaded
It does make you wonder if they'll make a GK110 based GTX 685 or something.

I reckon summat will come of this, but it won't be cheap!
GoodBytes 18th May 2012, 15:47 Quote
Not really - they can price it like the 680 and drop the other card's price.
It will be out as soon as Nvidia solves its chip production issues and AMD releases a better GPU.
sb1991 18th May 2012, 16:04 Quote
Not really the most complex IC, just the biggest. Most x86 CPUs are quite a bit more complex.
faugusztin 18th May 2012, 16:12 Quote
Quote:
Originally Posted by GoodBytes
Not really - they can price it like the 680 and drop the other card's price.
It will be out as soon as Nvidia solves its chip production issues and AMD releases a better GPU.

I don't think we will see the GK110 in consumer graphics cards. If they need a single-chip card quicker than the GTX 680, they will supersize the GK104 with 1-2 additional blocks. It is pointless even for NVIDIA to bring such an enormous chip to the consumer market. Sure, folders will celebrate, but 70% of the extra performance is useless for the majority of the buyers who would buy that card - that is why we have the gaming-oriented GTX680, after all.

My opinion is that we won't see a consumer GeForce card based on the GK110 - it will show up as a Quadro at best.
Gradius 18th May 2012, 16:50 Quote
Die size: ~550mm²
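Taking that ~550mm² estimate at face value, the implied transistor density is straightforward to work out. Bear in mind the die size was not officially confirmed at the time, so this is a rough figure:

```python
TRANSISTORS = 7_100_000_000  # from the article
DIE_AREA_MM2 = 550           # the estimate above

density_millions_per_mm2 = TRANSISTORS / DIE_AREA_MM2 / 1e6
print(f"~{density_millions_per_mm2:.1f}M transistors/mm^2")  # ~12.9M per mm^2
```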
thehippoz 18th May 2012, 17:02 Quote
most complex.. there he goes again- he's back xD you gotta love huang or hate him because he's full of ****.. now the wood screws
GoodBytes 18th May 2012, 18:21 Quote
Quote:
Originally Posted by thehippoz
most complex.. there he goes again- he's back xD you gotta love huang or hate him because he's full of ****.. now the wood screws

I don't see what's to hate about him. Sure, he doesn't rehearse his presentation for three months; you feel he just comes on stage, no practice, and talks, so he says many silly things, and his enthusiasm for his company is so visible that he gets over-excited, which results in him saying/doing silly things on stage. But at least it's known that he actively works in the company as an engineer, rather than playing golf.

And at the end of the day... it doesn't really matter. I mean, he is not aiming at the general population. He aims at people who already don't trust anything a company says until the reviews come out, and who look at gaming/research/simulation performance over anything else.

I enjoy his presentations; in all honesty, it's always fun to watch him. As long as he is proud and over-excited, and you can see the passion, excitement and motivation in his eyes, you know that the company will not go down, and will keep trying to innovate.
r3loaded 18th May 2012, 18:36 Quote
I'd also add that he believes in his company enough to have its logo tattooed on his left arm!
thehippoz 18th May 2012, 18:51 Quote
Quote:
Originally Posted by GoodBytes
I don't see what's to hate about him. Sure, he doesn't rehearse his presentation for three months; you feel he just comes on stage, no practice, and talks, so he says many silly things, and his enthusiasm for his company is so visible that he gets over-excited, which results in him saying/doing silly things on stage. But at least it's known that he actively works in the company as an engineer, rather than playing golf.

And at the end of the day... it doesn't really matter. I mean, he is not aiming at the general population. He aims at people who already don't trust anything a company says until the reviews come out, and who look at gaming/research/simulation performance over anything else.

I enjoy his presentations; in all honesty, it's always fun to watch him. As long as he is proud and over-excited, and you can see the passion, excitement and motivation in his eyes, you know that the company will not go down, and will keep trying to innovate.

yeah me too good.. I like to poke fun at his marketing though :D the wood screws in a fake video card was the funniest.. who does that?

seems he's one of those rare people like jobs who can draw in fans and really charge whatever he wants.. don't forget he was in cahoots with ati inflating gpu prices back in the day (before g80).. and look at where the prices are at today
Ayrto 19th May 2012, 12:22 Quote
Would this have been the 680 had AMD released a beast?
edzieba 19th May 2012, 12:29 Quote
Quote:
Originally Posted by thehippoz
wood screws in a fake video card was the funniest.. who does that?
Someone who has to reassemble the test card in the next five minutes, finds the bolts have gone walkabout, knows the nearest supplier who might have some is probably half an hour away, and doesn't care if the holes get threaded to hell as long as it holds together for an hour or two of photos.
HourBeforeDawn 19th May 2012, 20:15 Quote
So, we all know how well nVidia does with complexity - my guess is yields for this chip will be somewhere in the low 40%, with the rest being failed chips lol.
maverik-sg1 20th May 2012, 23:38 Quote
It's always good to remind ourselves that the current GTX680 was actually made from the replacement GPU for the GTX560, and that for every GK104 GPU sold, Nvidia's margins go through the roof.

I am surprised, though - this GPU sounds far too big for home computing, doesn't it? Well, some would argue no, but I really fear for the cost of this, and the cost to us when it finally materialises into the real GTX580 replacement.....

Maybe that's another reason for the artificially high price of the GK104 derivatives - best-case scenario, I don't expect to see much change from £550 when this is released in its gaming GPU form.

Performance should be better than the GTX690's, though. I just hope, sincerely hope, that AMD has a strong enough replacement for the 7970 ready to help push prices into the realms of the affordable, or to provide a viable lower-cost alternative.
K404 21st May 2012, 00:42 Quote
Won't be out until the end of the year.... I figure there will be something else for the desktop around Christmas though
GoodBytes 21st May 2012, 01:39 Quote
Quote:
Originally Posted by maverik-sg1
It's always good to remind ourselves that the current GTX680 was actually made from the replacement GPU for the GTX560, and that for every GK104 GPU sold, Nvidia's margins go through the roof.
huh? I am not following you, sorry. :/

Quote:
I am surprised, though - this GPU sounds far too big for home computing, doesn't it? Well, some would argue no, but I really fear for the cost of this, and the cost to us when it finally materialises into the real GTX580 replacement.....
Double huh?
Sorry if my following remark is inappropriate, as I am not sure I follow you, but: Tesla GPUs aren't designed for video output. They are designed to be used as processors - to, well, process stuff. It's like adding another GPU for PhysX, except instead of PhysX it's for CUDA or any GPU computing. It's specially designed for that, with no focus on gaming at all, nor even on CAD software. It's really a GPU for research and simulation over anything else - the 'supercomputer' of GPUs, if you will.

Nvidia builds them itself, as the market is very, very small. The reason Nvidia showcases it to a wider public is to show off what Nvidia can do - to show that Nvidia does more than just gaming GPUs, and that it is actively working on graphical and technological improvements.
Quote:

Maybe that's another reason for the artificially high price of the GK104 derivatives - best-case scenario, I don't expect to see much change from £550 when this is released in its gaming GPU form.
As disgust, the GK110 is the "true" Kepler; the GK104 is the cut-down model. A game BOTH AMD and Nvidia have played since day one is not releasing their best stuff - otherwise it's easy to beat, and you reach a point where everyone loses, and then you have 3-4 or even 5 years with no new GPUs while the companies are hard at work on a new architecture, having run out of things to release in the meantime. 3dfx did that... it ended up with cancelled GPU after cancelled GPU, as it was unable to release something more powerful quickly enough to attract sales, especially since its products were expensive. And now they are gone.

An example of this with Nvidia is when it released the GeForce 8800GT. The rest of the series was a magma spilling GPU, and then, out of nowhere, you have the 8800GT, which is significantly cooler, cheaper and performs better. Then the 9000 series came out, in which the 9800 offers practically the same performance as the 8800GT. Did Nvidia re-brand the 8800GT, or did it release that chip early under an 8000-series model number to compete against AMD's offerings, as Nvidia's sales were hurting at the time? Clearly, the 9000 series was finished shortly after the 8000 was out - it came that quickly because its architecture is based on the 8000 series. They were tweaking it (changing part of the GPU architecture) to solve the 8000 series' problem (heat).

In this case, Maxwell is far from done (as we can see), so Nvidia needs a model in between to counter AMD's reply.
Quote:

Performance should be better than the GTX690's, though. I just hope, sincerely hope, that AMD has a strong enough replacement for the 7970 ready to help push prices into the realms of the affordable, or to provide a viable lower-cost alternative.
If AMD has covered the cost of engineering a new GPU, has the budget to work on the next one, and no one (AMD and its card builders) wants to make money, then yes, they can sell their highest-end GPU at $30, if not less.
Elton 21st May 2012, 02:13 Quote
Fascinating - 7 billion transistors? I wonder how they'll cool these monsters, because that's quite a bit of silicon and heat output, especially in terms of density.

I doubt they'll ever actually release a version of this. Unless, of course, they release it GTX285-style or 7900/7950-style (G71), where they shrink and tweak the die and release it as next gen.

Actually, I'd say it's pretty brilliant of them to make a derivative for gamers that's more shader-focused than compute-focused. It makes for a cheaper card (the cost of the GTX 480 comes to mind - it was both, and it brute-forced its way through) and a cooler one, since there's less compute but more shader power.

Speaking of which, I need to try out a Green Team card, haven't done so yet in ages. Last card I had from the Green Team was a 7600GT.
maverik-sg1 21st May 2012, 10:10 Quote
@goodbytes - you're clearly looking for a chance to get your own back for your Vista-isms, right? Let me help you in your time of "huh" and "double huh".

Disgust? You meant discussed right?

I will respond in reverse order, because one is easier to answer than the other bit of incredibly OT drivel.

To the double 'huh': the GK110 is not only the base GPGPU for the parallel-compute K20; a GK110 derivative will also be the Keplar high-performance desktop chip. I accept I did not make it clear that they won't just take the K20 chip and put "TWIMTBP" on the cover, but a GK110 derivative will be used for the high-performance desktop GPU...... the mistake I made was assuming you would already know this.


In response to the 'huh':

Any GPU with GK in front of it means it's Keplar, right (GF = Geforce Fermi, GK = Geforce Keplar)? So I trust that when you say 'true' Keplar, you actually refer to the high-performance model based on the Keplar architecture - which is what I posted, and what others had also inferred in previous posts.

What does "magma spilling" mean when you refer to the other GPUs after the 8800GT? The 9800GT is identical to an 8800GT, although some were manufactured on a 55nm process instead of the 65nm that first debuted on the 8800GT. The 55nm version supports HybridPower, while the 65nm one doesn't. The G92 design was later re-badged a second time and sold as the GTS 250, and later a third time, by OEMs only, as the GT 330.... As for the heat, most GPUs on 65nm were pretty toasty, but most came with good enough cooling to overcome that. Although I'm sure you're right that some people upgraded from the 8800GT to the 9800GT because of it (although a change of cooler may have sufficed)....

It's fair to say, though, that each revision was optimised from an existing architecture which was itself revolutionary in its time; the G80 GPGPU has its place among the 'greatest moments of GPU evolution'.

Back from your history lesson and into May 2012 and the original post:

The GK104 was originally meant to be the GTX660, but was re-badged because they found it outperformed the 7970 (that, and the GK110 was nowhere near ready). The GK110 is (from an architecture PoV) the replacement for the GF110, the GPU used in the GTX580. The desktop derivative will be called something like GK111 or GK110.X - hope this helps clear up your confusion.

Summary:
I see your point about the 8800GT, but it has no relevance to what I was saying - that version of the G80 was always going to be the 8800GT, and subsequent revisions were always destined to be that model's successors. The GK104 was designated as such to be the replacement for the GF104 GPU used in the GTX560, so the GK104 was to be the GTX660 - the natural successor by designation and chip design (mid-range Fermi to mid-range Keplar). That is not what happened: Nvidia upgraded its name for all the reasons stated in my posts.

PS: In late 2000, not long after the Voodoo 4's launch, several of 3dfx's creditors decided to initiate bankruptcy proceedings. 3dfx as a whole would have had virtually no chance of successfully contesting them, and instead opted to be bought by Nvidia - so they're not gone; like ATI, they were absorbed into a larger entity.
maverik-sg1 21st May 2012, 11:00 Quote
Quote:
Originally Posted by K404
Won't be out until the end of the year.... I figure there will be something else for the desktop around Christmas though

Aye Kenny mate, probably the desktop version of the GK110 (which the media suggests will probably be called the GTX780) and something from ATI perhaps (I really hope so)?

In terms of value, anything released in Q4 is such bad value; prices drop dramatically halfway through the following quarter. As exciting as the products may be, patience really does pay off.

PS: Come back we miss you :)
Elton 22nd May 2012, 06:57 Quote
Well then, Maverik, I'd say they did a damned good job, if a mid-range chip propelled itself that high.

Mind you, at this point the GK110 should be released either as an improvement (not likely) or improved and then released as the G?1xx. In other words, next gen is going to be a GK110 on crack and refined a tad.
maverik-sg1 23rd May 2012, 19:59 Quote
Quote:
Originally Posted by Elton
Well then, Maverik, I'd say they did a damned good job, if a mid-range chip propelled itself that high.

Mind you, at this point the GK110 should be released either as an improvement (not likely) or improved and then released as the G?1xx. In other words, next gen is going to be a GK110 on crack and refined a tad.

I agree the GK104 is mighty impressive, but a performance increase of 20% for the high-performance chipset over the previous generation is, by what we have been used to in recent generations (at least a 50% increase over the outgoing model), a tad underwhelming. In today's games there's not much reason for gamers with a GTX580 running at 1920x1200 to upgrade.

I am sure the GK11X gamers' GPU will be exactly as you described, but also, because of that, it's unlikely to have a 600-series badge on it.
faugusztin 23rd May 2012, 20:45 Quote
I am just wondering why you guys think the GTX780 will be a downsized version of the GK110? It will more likely be an upgraded version of the GK104 instead.
What is easier to do:
a) add two or four more blocks of what you can find in the GTX680 - the GTX680 has 8 of them, the GTX670 has 7:
http://www.geforce.com/Active/en_US/shared/images/articles/introducing-the-geforce-gtx-680-gpu/Die2.png
b) take the GK110, which is not designed for gaming and has features useless for gamers, cut out the features you don't need for gaming (but which are important for CUDA), and add the features you do need for gaming (video outputs?).

In the end, the GTX780 will more likely be based on the GK104 than on the GK110. But I won't stop you dreaming about a GK110 gaming card - it just probably won't happen.
maverik-sg1 25th May 2012, 11:59 Quote
Quote:
Originally Posted by faugusztin
I am just wondering why you guys think the GTX780 will be a downsized version of the GK110? It will more likely be an upgraded version of the GK104 instead.
What is easier to do:
a) add two or four more blocks of what you can find in the GTX680 - the GTX680 has 8 of them, the GTX670 has 7:
http://www.geforce.com/Active/en_US/shared/images/articles/introducing-the-geforce-gtx-680-gpu/Die2.png
b) take the GK110, which is not designed for gaming and has features useless for gamers, cut out the features you don't need for gaming (but which are important for CUDA), and add the features you do need for gaming (video outputs?).

In the end, the GTX780 will more likely be based on the GK104 than on the GK110. But I won't stop you dreaming about a GK110 gaming card - it just probably won't happen.

I bet you 20 'Scott Mills Points' that the next flagship performance gaming GPU will be based on a GK110 - let's wait and see :)
faugusztin 25th May 2012, 13:01 Quote
I don't think NVIDIA will have a hard time choosing between a limited number of GeForce GK110-based cards sold at $500 and a limited number of Tesla GK110-based cards/systems sold for thousands of dollars (just check out the prices of the Fermi-based Tesla cards). And considering GK110-based Tesla cards are already on order in quantities that will probably only be fulfilled in 2013, the chance of a GK110-based GeForce is very slim: first because of the limited number of chips available at all, and then because the chip has features useless for a GeForce card, which would have to be disabled anyway to stop it competing with the Tesla cards...

The only real option for GK110-based GeForce cards is if NVIDIA decides to use not-fully-functional GK110 chips for them - say, with 1 of the 5 blocks disabled because of damage. That would still give them 4x3=12 working blocks and a 256-bit memory interface. But again, you hit the issue of competing with the Tesla cards: why would they use those damaged cores for GeForce when they can just use them for the lower-end Tesla cards (which usually have fewer cores than the high-end model)?

So, if the GK110 is not going to make it into a GeForce, what options do we have? Take the huge GK110 and cut out the compute features and cores that would make it too powerful; or take the optimised GK104, increase the block count from 8 to 12, increase the memory interface from 256 to 384 bits, optimise it a bit more, and end up with a ~400mm² chip?
maverik-sg1 29th May 2012, 17:56 Quote
Some interesting points there, and a compelling case for sure.

Although Quadro and FirePro GPUs were (I think) based on the same tech as the gaming GPUs, and they were considerably more expensive too - so we won't know until Q4 this year, and both camps have made compelling arguments for and against a gaming GeForce GPU based on the GK110..... my money is on it happening.
GoodBytes 29th May 2012, 18:07 Quote
Quote:
Originally Posted by maverik-sg1
Some interesting points there, and a compelling case for sure.

Although Quadro and FirePro GPUs were (I think) based on the same tech as the gaming GPUs, and they were considerably more expensive too - so we won't know until Q4 this year, and both camps have made compelling arguments for and against a gaming GeForce GPU based on the GK110..... my money is on it happening.

The way it usually works is that Nvidia/AMD make the big chip with everything in it, and make the circuit board super fancy and super high quality, with all the fancy features like ECC memory. Once that's done, they remove everything not needed for gaming, cut the component quality, simplify the circuitry to make it less reliable but much cheaper to produce, and so on, and call it their high-end Radeon/GeForce. That is why Quadro/Tesla/FirePro cards are more expensive (of course, due to reduced demand they also cost more to make, and Nvidia/AMD charge a lot more to maximise revenue and try to pay back a large part of the R&D; the GeForce/Radeon pays back the rest... and hopefully some profit at the end of the cycle).