bit-tech.net

Intel’s vision for ray tracing exposed

Chien believes that ray tracing gives developers much more creative freedom than they have using traditional raster graphics techniques.

Andrew Chien, director of research at Intel, has said that ray tracing techniques will not just show up and replace traditional raster graphics pipelines – instead we’ll see a combination of the two for quite some time.

He admitted that “ray tracing presents many challenges from a performance perspective,” but believes that Intel can overcome this to deliver products that help to make ray tracing a desirable tool for developers.

The problem for Intel, though, is that 3D engines based on traditional raster graphics pipelines are forever getting better, and with developers now starting to get familiar with the capabilities of DirectX 10, they’re likely to move forwards at an even faster rate.

Year on year, new benchmarks for realism are set with traditional raster graphics pipelines and both AMD and Nvidia believe there is still a lot left to achieve with raster graphics. Crysis was last year’s poster child and it looks like FarCry 2 will at least match it—and may even take things to another level again.

This means that Intel’s task of making ray tracing a reality is an even bigger one than you’d think.

“The traditional graphics pipeline has always been about the end result and the quality of the scenes rendered,” said Chien. “I think there is a compelling case for the authoring flexibility [that ray tracing will enable].”

“Developers spend a lot of effort crafting models in order to get acceptable performance [in raster graphics engines],” he continued. “If you were really to get outside of the box of what it allows, there are a lot of problems in the flexibility of content.”

We asked whether it would replace raster graphics straight away or whether it would take time to become an indispensable tool, especially given the challenges of competing with current raster graphics engines. “We expect it to first penetrate areas where the additional flexibility is of benefit to developers,” said Chien. “If the image quality benefits are there, but performance isn’t acceptable, developers aren’t going to use it.”

His view of where ray tracing is going was quite a bit more down to earth than what we’ve heard in the past and, more to the point, it appears to make some sense. It’s an opportunity for Intel to deliver more flexibility and creative freedom to developers and to make development a much simpler task, and I’m sure that’s something developers will be pleased to hear.

The question now is whether or not Intel will be able to deliver products with enough performance fast enough for ray tracing to make the leap required to catch up with traditional raster graphics architectures and engines.

The more I talk about this topic with Intel, the more the size of the task at hand becomes apparent. Simply put, it’s not going to be easy and there is a long way to go before the transition even starts to happen, but the company may well pull this off in the long run if it can get the support of the development community.

12 Comments

Flibblebot 3rd April 2008, 12:04 Quote
What are the realistic timescales before we see a mainstream graphics chipset capable of raytracing, or even a hybrid raster/ray tracer? Is it likely to be within the next decade, or are we talking long-term research here?
TreeDude 3rd April 2008, 13:41 Quote
Well more creative freedom is great. But it also means much more work must be done in order to get the desired result. This is why OpenGL is not used as much as DirectX. It requires more time and effort to get what you want. But OpenGL is more robust and allows for more creative freedom than DirectX (plus you can still use DX along with OpenGL).

I don't see ray tracing taking over as the main way to create games any time soon. I think within the coming years we will see the big budget games start using it though. Maybe Crysis 2.....
johnmustrule 4th April 2008, 06:11 Quote
Although I know it's not a very good comparison, the image quality in 3ds max when using ray tracing is simply so much higher than alternatives it's always been worth it to me to wait a week for a scene to render vs just a day.
zero0ne 4th April 2008, 06:24 Quote
in 5 years we will have processors that are clocked at ~30GHz going by moores law (doubling every 18 months, so 3.3 cycles of doubling with a start point of 3Ghz)

Of course this will probably be slowed down since the major companies are looking less towards clock speed and more towards multi core processors.

in 20 years we will have a quantum computer (since moores law will be failing us at about this point; quantum mechanics easily proves that this WILL happen), and in 50, that quantum computer could theoretically run at the speed our brain does. (of course our brain is a "learning neural network" and trying to simulate it will probably not happen)

check out "michio kaku" for more about these farther reach points I made, He is one of the better Geniuses of our time, since he is able to gracefully explain all these crazy topics in laymen's terms. (and don't confuse crazy with not-true, every idea of his is coming from one research paper or another about quantum mechanics, string theory, etc etc)
wuyanxu 4th April 2008, 11:11 Quote
so, in about 10 years' time, we'll probably have to cough up lots of money for a ray-tracing CPU to play ray-tracing games smoothly, while also paying through the nose for nVidia's monster graphics card to play traditionally programmed games properly?
TreeDude 4th April 2008, 13:15 Quote
Quote:
Originally Posted by zero0ne
in 5 years we will have processors that are clocked at ~30GHz going by moores law (doubling every 18 months, so 3.3 cycles of doubling with a start point of 3Ghz)

By that logic we should already have 5ghz+ CPUs. Yet we do not. Speed isn't everything. Efficiency means a lot more.
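
Just to spell out the arithmetic being assumed there (a throwaway sketch, taking a 3GHz chip as today's starting point and applying the 18-month doubling to clock speed, which is exactly the part that hasn't held):

[code]
// Naive clock-speed projection: assumes the "doubling every 18 months"
// figure applies to clock speed (it doesn't; Moore's observation was
// about transistor counts).
#include <cmath>
#include <cstdio>

int main() {
    const double base_clock_ghz = 3.0;    // assumed starting point
    const double doubling_months = 18.0;  // the oft-quoted doubling period

    for (int years = 1; years <= 5; ++years) {
        double doublings = (years * 12.0) / doubling_months;
        double ghz = base_clock_ghz * std::pow(2.0, doublings);
        std::printf("year %d: ~%.1f GHz\n", years, ghz);
    }
    return 0;
}
[/code]

The five-year figure really does come out at roughly 30GHz; the catch is that clock speeds fell off that curve a few years back, even though transistor counts haven't.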
knyghtryda 4th April 2008, 14:42 Quote
Quote:
Originally Posted by zero0ne
in 5 years we will have processors that are clocked at ~30GHz going by moores law (doubling every 18 months, so 3.3 cycles of doubling with a start point of 3Ghz)

Number 1 misconception about moore's law. It has NOTHING to do with speed. The original Moore's law is that the density of processors will double every 18 months. By doubling density one can normally get ~2x in performance, but that doesn't mean clock speed/memory speed/any speed will go up during that process.

As for ray tracing... a hybrid render path would be interesting (more realistic reflections, shadows, and particles comes to mind), but remember, games like Crysis are still currently very CPU bound as well as graphics bound, so speeds are going to need to increase significantly in order to really utilize this hybrid render path.
Flibblebot 4th April 2008, 14:51 Quote
Moore's law doesn't say anything about speed: it states that the number of transistors on a similar-sized chip will double roughly every 18 months.

But it's not a law in the strict scientific sense, it's more of an observation.

EDIT:
Quote:
Originally Posted by knyghtryda
Number 1 misconception about moore's law. It has NOTHING to do with speed. The original Moore's law is that the density of processors will double every 18 months. By doubling density one can normally get ~2x in performance, but that doesn't mean clock speed/memory speed/any speed will go up during that process.
Damn, got there before me!
Quote:
Originally Posted by knyghtryda
As for ray tracing... a hybrid render path would be interesting (more realistic reflections, shadows, and particles comes to mind), but remember, games like Crysis are still currently very CPU bound as well as graphics bound, so speeds are going to need to increase significantly in order to really utilize this hybrid render path.
Not necessarily true. The trend for processors at the moment is towards multiple cores, so it's not inconceivable to imagine CPUs with 8 or 16 cores within the next 5 years or so. If that's the case, you could quite easily use one or more of those cores solely for managing your render path.
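
Something along these lines is all it would take to farm the per-pixel work out across however many cores you have (a rough sketch using modern C++ threads, nothing Intel has shown; the trace function here is a dummy standing in for the real ray casting):

[code]
// Split a frame's rows across all available cores; each worker "traces"
// its own slice independently, so extra cores scale the render path.
#include <thread>
#include <vector>

// Dummy stand-in for per-pixel ray tracing: just writes a gradient.
void trace_rows(int y_start, int y_end, int width, float* framebuffer) {
    for (int y = y_start; y < y_end; ++y)
        for (int x = 0; x < width; ++x)
            framebuffer[y * width + x] = static_cast<float>(x) / width;
}

int main() {
    const int width = 640, height = 480;
    std::vector<float> framebuffer(width * height);

    unsigned cores = std::thread::hardware_concurrency();  // e.g. 8 or 16
    if (cores == 0) cores = 4;                              // fallback guess

    std::vector<std::thread> workers;
    int rows_per_core = height / static_cast<int>(cores);

    for (unsigned i = 0; i < cores; ++i) {
        int y0 = static_cast<int>(i) * rows_per_core;
        int y1 = (i + 1 == cores) ? height : y0 + rows_per_core;
        workers.emplace_back(trace_rows, y0, y1, width, framebuffer.data());
    }
    for (auto& w : workers) w.join();  // every core finishes its own slice
    return 0;
}
[/code]

Rays don't depend on each other, so the work splits almost perfectly across cores, which is exactly why many-core CPUs keep coming up in these ray tracing discussions.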

Parallelism ftw :D
Hamish 4th April 2008, 14:55 Quote
Quote:
Originally Posted by knyghtryda
The original Moore's law is that the density of processors will double every 18 months.

http://www.anandtech.com/cpuchipsets/showdoc.aspx?i=3276 table on the first page here shows that quite nicely
original pentium: 3.1m transistors and 294mm^2
penryn: 410m and 107mm^2
wuyanxu 4th April 2008, 15:00 Quote
Moore's Law is a marketing tool generated by Intel; Intel put the 18 months there. All Moore did was make a very, very rough observation, and Intel marketing people did the rest.
(IET magazine and Wiki both agree on this)
E.E.L. Ambiense 4th April 2008, 15:19 Quote
Quote:
Originally Posted by wuyanxu
so, in about 10 years' time, we'll probably have to cough up lots of money for a ray-tracing CPU to play ray-tracing games smoothly, while also paying through the nose for nVidia's monster graphics card to play traditionally programmed games properly?

[sarcasm] Didn't you know? By then PC gaming will be dead and it'll all be console-based drivel. [/sarcasm]

:)
Phil Rhodes 4th April 2008, 17:01 Quote
This is a good idea if it can be made to work well - imagine graphics cards with arrays of raycasting processors rather than stream processors.

The DirectX/OpenGL triangle renderers are very lovely, but the only reason they look as good as they do is that they've had an awful lot of work put into a limited number of goals, and that's why computer games are all starting to look the same. I can't be the only person around who remembers the variety of gaming that was available on the Amiga, from top-down to side scrollers to platform games and even the beginnings of first-person shooters; it went on and on. These days you have FPS or FPS, and they all have the same effects and tricks because that's what the systems will do.

In short, they aren't very general. Every time someone wanted to do effect X, be it refraction or anisotropics or glows or whatever, they put in an extension for it. You can't do anything that someone else hasn't thought of. Everything's basically being faked, and there are a million examples of how you can make that rather obvious. The best one that comes to mind is the stack-o-planes effect you can get when smoke intersects other geometry. DX10 tries to solve this and is partially successful, but it's a hack on top of a hack to try and get away with not really doing it properly in the first place. I think you would certainly get fewer of these sorts of glitches - depth sorting problems with distant objects, intersection issues - with a raytracer.
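
For anyone who hasn't poked at one, the core of a ray tracer really is tiny (a toy sketch, nothing to do with whatever Intel is actually building): each pixel fires a ray and the nearest positive hit wins, so visibility falls out of the maths rather than out of depth-sorting and blending tricks.

[code]
// Toy ray-sphere intersection: solve the quadratic and take the nearest
// positive root. No sorting, no hacks -- the closest hit is the answer.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Distance along the ray to the sphere, or -1.0 on a miss.
static double hit_sphere(Vec3 origin, Vec3 dir, Vec3 centre, double radius) {
    Vec3 oc = sub(origin, centre);
    double a = dot(dir, dir);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return -1.0;                // the ray misses entirely
    return (-b - std::sqrt(disc)) / (2.0 * a);  // nearest intersection point
}

int main() {
    Vec3 eye{0, 0, 0}, dir{0, 0, -1}, centre{0, 0, -5};
    double t = hit_sphere(eye, dir, centre, 1.0);
    std::printf("hit at t = %.2f\n", t);        // prints "hit at t = 4.00"
    return 0;
}
[/code]

Reflections, refraction and shadows are just more rays fired from that hit point, which is why none of them need their own special-case extension.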

If Intel are saying that this is not the way to go in the future, I fully agree with them.

P