Published on 14th April 2008 by
Originally Posted by r4tch3t: Hmm, that bit about video encoding on the GPU sounds very interesting. My current choice of CPU is based heavily on the fact that I will be encoding DVDs fairly regularly.
... that will be known as Tesla2 and there have been several hints in the presentations at new features that we're likely to see included in that architecture.
Originally Posted by Anakha: Again, little mention that ray tracing + CUDA = massive win! Ray tracing is a very simple algorithm that is naturally parallel, and the more cores (or in this case, "stream processors") you can throw at it, the better.
With 128 "cores", each tracing rays for a 1600x1200 screen with 10 "bounce" rays per pixel, each core would only need the equivalent of a 9MHz processor to sustain a 60Hz display ((1600 * 1200 * 60 * 10) / 128 = 9,000,000 rays per second per core, or 9MHz at one ray per cycle). Considering most GPUs run in the hundreds of MHz (if not thousands), this leaves headroom for a VERY detailed scene with LOTS of reflection. All you'd really need is a way to use that ray-traced image directly on the card (i.e., output from the stream processors straight to the frame buffer to be displayed) and you have real-time ray tracing out of the box.
I'd put a dollar down for that. Anyone else?
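The back-of-envelope figure in the comment above can be checked directly. This is just a sketch of the poster's own arithmetic, assuming (as the comment does) an idealised one ray per core per clock cycle; real ray throughput per clock would be far lower.

```python
# Reproduce the comment's throughput estimate:
# 1600x1200 pixels, 60 Hz refresh, 10 bounce rays per pixel, 128 cores.
width, height = 1600, 1200
refresh_hz = 60
bounces = 10
cores = 128

primary_rays_per_second = width * height * refresh_hz      # 115,200,000
total_rays_per_second = primary_rays_per_second * bounces  # 1,152,000,000
rays_per_core = total_rays_per_second // cores             # 9,000,000

# At one ray per clock cycle, each core would need a 9 MHz clock.
print(f"{rays_per_core / 1e6:.1f} MHz per core")  # prints "9.0 MHz per core"
```

The point of the comparison is only that 9 MHz is orders of magnitude below the clock speed of a 2008-era GPU shader core, which is what makes the claim sound plausible on paper.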
Originally Posted by sbenrap: I'm quoting the last part of the article:
"Huang doesn't seem fazed by Intel's push into his territory at the moment, but he said he remembers the scars he got following the release of the GeForceFX architecture"
Can someone please elaborate what "scars" he's talking about?
I don't remember any big deal between nVidia and Intel when the GeForceFX was released...
© Copyright bit-tech