bit-tech.net

Intel says video encoding belongs on the CPU

Intel and Nvidia are still at loggerheads in their war of facts and friction. This time, Intel snaps back by claiming that video encoding is better done on the CPU.

Last night over dinner here in Taipei, Intel said that video encoding has always been a task designed for the CPU – and nothing will change that in the future.

We were talking about CUDA and how Nvidia’s claimed massive performance increases could potentially change the paradigm for software developers, especially with industry heavyweights like Adobe already on-board.

Intel’s representatives said that DivX uses the CPU to dynamically adjust how bits are spent across the scene, ensuring that the most detailed areas get the fidelity they require.

“When you’re encoding on the CPU, the quality will be higher because we’re determining which parts of the scene need higher bit-rates applying to them,” said François Piednoel, senior performance analyst at Intel.

Piednoel claimed that the CUDA video encoder will likely deliver poor quality video encodes because it uses a brute force method of splitting the scene up and treating each pixel the same. It’s interesting that Intel is taking this route, because one thing Nvidia hasn’t really talked about so far is video quality.

“The science of video encoding is about making smarter use of the bits and not brute force,” added Piednoel.
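To put the ‘smarter use of the bits’ argument in concrete terms: an adaptive encoder measures how much detail each region of a frame contains and spends more of its per-frame bit budget there. What follows is a minimal, hypothetical sketch of that idea – it maps the luma variance of each 16x16 macroblock to a quantiser value so that busier blocks get more bits. It is purely illustrative and is nothing like DivX’s actual rate control.

Code:
// Hypothetical sketch of detail-driven bit allocation – not DivX's real algorithm.
// High-variance (detailed) macroblocks get a lower quantiser, i.e. more bits.
#include <cstdint>
#include <cmath>
#include <vector>

// Luma variance of one 16x16 macroblock within a frame that is 'width' pixels wide.
static double block_variance(const uint8_t* luma, int width, int bx, int by) {
    double sum = 0.0, sum_sq = 0.0;
    for (int y = 0; y < 16; ++y) {
        for (int x = 0; x < 16; ++x) {
            double p = luma[(by * 16 + y) * width + (bx * 16 + x)];
            sum += p;
            sum_sq += p * p;
        }
    }
    double mean = sum / 256.0;
    return sum_sq / 256.0 - mean * mean;
}

// Map each macroblock's variance to a quantiser in [q_min, q_max]:
// detailed blocks -> small q (more bits), flat blocks -> large q (fewer bits).
std::vector<int> allocate_quantisers(const uint8_t* luma, int width, int height,
                                     int q_min = 2, int q_max = 31) {
    int blocks_x = width / 16, blocks_y = height / 16;   // assumes 16-pixel-aligned frames
    std::vector<int> q(blocks_x * blocks_y);
    for (int by = 0; by < blocks_y; ++by) {
        for (int bx = 0; bx < blocks_x; ++bx) {
            // Crude log mapping to 0..1; a real encoder also weighs motion and rate control.
            double detail = std::log2(1.0 + block_variance(luma, width, bx, by)) / 14.0;
            if (detail > 1.0) detail = 1.0;
            q[by * blocks_x + bx] = q_max - static_cast<int>(detail * (q_max - q_min));
        }
    }
    return q;
}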

I asked Piednoel what will happen when Larrabee turns up because that is, after all, a massively parallel processor. I thought it’d be interesting to see if Intel would change its tune in the future once it had something that had the raw processing power to deliver similar application performance to what is being claimed with CUDA. Intel said that comparing this to a GPU is impossible, because the GPU doesn’t have full x86 cores. With CUDA, you can only code programs in C and C++, while x86 allows the developer to choose whatever programming language they prefer – that’s obviously a massive boon to anyone that doesn’t code in C.

Intel claimed that not every developer understands C or C++ – while that may be true to an extent, anyone who has done a Computer Science degree is likely to have learned C at some point. The first language I learned during my own Computer Science degree was, you guessed it, C – and the procedural programming skills I picked up there carried straight over to writing procedural programs in other languages.

What’s more, Intel said that GPUs aren’t very good at branching and scheduling, and that the execution units themselves are suited only to graphics processing. This was something I disagreed with rather vocally: today’s GPUs not only juggle tens of thousands of threads at once, they’re also designed to handle branching, as it’s part of the spec for both major graphics APIs – and that capability certainly isn’t limited to graphics workloads. From what I can make of this, Intel believes that the stream processing units in AMD’s and Nvidia’s latest graphics architectures are too dumb and not accurate enough to do anything other than push pixels – and I guess that’s why Intel is using fully functional x86 cores in its Larrabee architecture.
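For reference, ‘branching’ on a GPU simply means conditional code inside a kernel. The hardware handles it, but threads in the same group that take different paths are executed one path after the other, which is where the efficiency argument comes from. The hypothetical CUDA kernel below shows what such a data-dependent branch looks like; it isn’t taken from any real encoder or from Nvidia’s own examples.

Code:
// Illustration of per-thread branching in a CUDA kernel (hypothetical example).
// Both branches produce correct results, but if threads within one 32-thread warp
// disagree on the condition, the warp runs the two paths one after the other.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void threshold_kernel(const float* in, float* out, int n, float cutoff) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    if (in[i] > cutoff) {          // data-dependent branch: allowed, but can diverge
        out[i] = in[i] * 2.0f;
    } else {
        out[i] = 0.0f;
    }
}

int main() {
    const int n = 1 << 20;
    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemset(d_in, 0, n * sizeof(float));   // dummy input data

    threshold_kernel<<<(n + 255) / 256, 256>>>(d_in, d_out, n, 0.5f);
    cudaDeviceSynchronize();
    printf("kernel status: %s\n", cudaGetErrorString(cudaGetLastError()));

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}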

What do you make of all of this? Share your thoughts in the forums.

26 Comments

bowman 4th June 2008, 16:00 Quote
So, does someone want to test this and see if it holds any water? Is the quality really better?

I have an AMD CPU, and one thing that makes me cringe is other AMD users defending themselves from Intel fans by claiming that AMD 'feels smoother'. It's patently ridiculous. And until I see some sort of tangible proof of this, Intel's claim seems like just that from a different perspective.
amacieli 4th June 2008, 16:05 Quote
nVidia is clearly encroaching on one of the key Intel marketing messages for its new multi-core procs. Of course Intel is going to hit back, even if what they say doesn't make 100% sense (although I believe that you'll still need a good processor to keep the GPU fed).

With Photoshop soon to be nVidia-powered, and (gasp!) even a Folding@Home client on the way, let's judge the actual results.

But think about it - Adobe's Photoshop is one of the best in its class, used by image pros everywhere - why would Adobe include nVidia acceleration if it produced poor-quality pics? Can anyone tell me why a Premiere accelerator would be any different?
MrMonroe 4th June 2008, 16:10 Quote
They are way scared.

Intel's strategy through this entire fight has been to compare current nVidia and ATI technology to future Intel tech. (And to misrepresent current GPU tech to begin with) When they finally do release Larrabee, CUDA will have even more force behind it and even ATI cards will have jumped radically ahead of what Intel has projected.

Long story short, Intel will succeed in making their products better at doing the things they are already good at, and they'll get nowhere near pushing nVidia or ATI out of their established markets. Too bad they will have wasted time trying.
amacieli 4th June 2008, 16:29 Quote
@MrMonroe: I don't think they're wasting their time - the vast majority of PC users use integrated graphics, and Larrabee will probably have the most impact in that segment. Why not bandy about comparisons with super hi-tech to make yours look great ("hey, little guy, the stuff we're pumping out compares well with what the hard-core gamers use!!"). But like you say, the day that Intel starts to overtake nVidia for regular gamers will be when a certain Biblical place becomes a magnet for skiing, if you catch my drift (no pun intended). Unless Intel just ups and buys nVidia (once it's had a chance to swallow its pride).
chicorasia 4th June 2008, 16:32 Quote
Quote:
Originally Posted by MrMonroe

Intel's strategy through this entire fight has been to compare current nVidia and ATI technology to future Intel tech.

Exactly. It is like claiming that current cars are nothing compared to future flying cars. Has anyone seen Larrabee-accelerated Photoshop? Or Premiere? Or Final Cut?

Anyway, videos are made of pixels AND frames. If splitting the frame into pixels yields "poor" results, why not send a whole frame to each stream processor?
salesman 4th June 2008, 16:35 Quote
Intel better do something amazing with statements like this.
mclean007 4th June 2008, 16:48 Quote
Quote:
Originally Posted by amacieli
why would Adobe include nVidia acceleration if it produced poor-quality pics? Can anyone tell me why a Premiere accelerator would be any different?
Photoshop editing is a different animal to video encoding. Applying a Photoshop transform or filter performs a specific mathematical calculation on each pixel. This is exactly what GPUs are excellent at doing. As stated in the article, video encoding requires two things, because it is a lossy compression process - first, you need to analyse the moving image to determine how best to deploy your available "budget" of bits per frame. Then you use that "budget" to apply various mathematical models to pixels. In other words, the different parts of the image interact more with one another than in a Photoshop operation, and Intel is suggesting that this makes the CPU a superior platform for encoding.

That said, there is no reason why you can't use the CPU for the analysis and the GPU for the brute force required to run the compression algorithms afterwards. I don't see how Intel can justify a statement that the image quality of GPU encoded video will necessarily be inferior.
badders 4th June 2008, 17:03 Quote
Some clever dick will apply this well – a multi-threaded process running on a multi-core CPU decides which parts of the frame need what bitrates. The encoding is then done in CUDA on the GPU, leaving the CPU free to check the next part of the frame.
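Something along those lines might look like the hypothetical sketch below: the CPU picks per-block quantisers for each frame, then the GPU applies them to every pixel in parallel (overlapping the two stages across frames, as badders suggests, is left out for brevity). All of the function names are invented for illustration – this isn't a real encoder.

Code:
// Hypothetical CPU-analysis / GPU-encode pipeline (illustrative names only).
// The CPU decides per-macroblock quantisers; the GPU quantises every pixel in parallel.
#include <cuda_runtime.h>
#include <cstdint>
#include <vector>

// Stand-in for the CPU-side analysis pass (e.g. the variance mapping sketched earlier).
std::vector<int> analyse_frame_on_cpu(const std::vector<uint8_t>& luma) {
    return std::vector<int>(luma.size() / 256, 16);   // dummy: mid quantiser everywhere
}

// Stand-in for the GPU-side number crunching: quantise each pixel with its block's q.
__global__ void quantise_kernel(const uint8_t* luma, const int* q_per_block,
                                int16_t* coeffs, int width, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    int block = ((i / width) / 16) * (width / 16) + (i % width) / 16;
    coeffs[i] = static_cast<int16_t>(luma[i] / q_per_block[block]);
}

// Frames are assumed to be 16-pixel-aligned greyscale buffers of width * height bytes.
void encode_sequence(const std::vector<std::vector<uint8_t>>& frames, int width, int height) {
    int n = width * height;
    uint8_t* d_luma;  int* d_q;  int16_t* d_coeffs;
    cudaMalloc(&d_luma, n);
    cudaMalloc(&d_q, (n / 256) * sizeof(int));
    cudaMalloc(&d_coeffs, n * sizeof(int16_t));

    for (const auto& frame : frames) {
        // CPU: decide how to spend the bit budget for this frame.
        std::vector<int> q = analyse_frame_on_cpu(frame);

        // GPU: apply those decisions across every pixel in parallel.
        cudaMemcpy(d_luma, frame.data(), n, cudaMemcpyHostToDevice);
        cudaMemcpy(d_q, q.data(), q.size() * sizeof(int), cudaMemcpyHostToDevice);
        quantise_kernel<<<(n + 255) / 256, 256>>>(d_luma, d_q, d_coeffs, width, n);
        cudaDeviceSynchronize();
        // Entropy coding of d_coeffs would follow here in a real encoder.
    }
    cudaFree(d_luma);  cudaFree(d_q);  cudaFree(d_coeffs);
}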
Zurechial 4th June 2008, 17:08 Quote
Not every developer knows C?
I'm nothing more than an amateur coder, but I personally wouldn't imagine there are many professional developers who don't have a grasp of C..
amacieli 4th June 2008, 17:28 Quote
@mclean007 - your second paragraph is certainly what I had in mind - hits the issue on the head - and like I said, CPU will certainly be required too. On the other side of the coin, nVidia is also spinning way more than it should about GPU capabilities. GPU excels at many (but not all) math problems, but nobody's saying that the CPU is going to die in favor of the GPU.

badders +1
Bluephoenix 4th June 2008, 18:28 Quote
badders +2
Tyinsar 4th June 2008, 19:38 Quote
Quote:
Originally Posted by Article
I thought it’d be interesting to see if Intel would change its tune in the future once it had something that had the raw processing power to deliver similar application performance to what is being claimed with CUDA. Intel said that comparing this to a GPU is impossible, because the GPU doesn’t have full x86 cores.
Shades of Windows 3 vs. OS/2 - tons of F.U.D. and little substance - I hope it doesn't turn out the same.
Icy EyeG 4th June 2008, 21:01 Quote
badders +3, indeed!
Quote:
With CUDA, you can only code programs in C and C++, while x86 allows the developer to choose whatever programming language they prefer – that’s obviously a massive boon to anyone that doesn’t code in C.

How about GPULib, a library of mathematical functions for Very High Level Languages (Java, Python, MATLAB, IDL)?

My guess is that more examples like this will follow, that will allow the use of CUDA with Very High Level Languages (I use Perl a lot, so I would definitely make use of a "CUDA extension"...).

Moreover, here's a crazy thought: couldn't nVIDIA develop a Larrabee-like GPU using cores with ARM or Power architecture (since they don't have an x86 license)?
Tyinsar 4th June 2008, 21:09 Quote
Quote:
Originally Posted by Icy EyeG
...Moreover, here's a crazy thought: couldn't nVIDIA develop a Larrabee-like GPU using cores with ARM or Power architecture (since they don't have an x86 license)?
or buy Via
HourBeforeDawn 4th June 2008, 21:12 Quote
lol why can't we all get along and have both the cpu and gpu work together instead of fighting each other lol
DXR_13KE 5th June 2008, 00:21 Quote
is it just me or does intel have a bull's anus for a mouth... they are always talking BS....
Haltech 5th June 2008, 01:19 Quote
Summary of both companies
Intel - Jack of all trades, master in nothing
Nvidia - Master in Graphics and Parallel processing, doesn't do anything else

Why cant there be a middle ground????
Mentai 5th June 2008, 02:09 Quote
So intel are losing a couple of major apps to GPUs. Big deal. People are still going to want/need fairly decent CPUs; they're probably not going to sell any less, it just means nvidia is going to sell more. Given that intel are directly invading nvidia's market space, and not the other way round, I don't see why they're always bitching about everything. Sheesh.
C0nKer 5th June 2008, 03:08 Quote
Quote:
Originally Posted by Zurechial
Not every developer knows C?
I'm nothing more than an amateur coder, but I personally wouldn't imagine there are many professional developers who don't have a grasp of C..

Exactly. Engineers of all disciplines, heck, even high school kids in India know C.
desertstalker 5th June 2008, 06:02 Quote
Don't be so sure, my uni just changed its 1st year CS stuff to Python....
Tim S 5th June 2008, 06:52 Quote
Quote:
Originally Posted by desertstalker
Don't be so sure, my uni just changed its 1st year CS stuff to Python....

interesting info!
Cthippo 5th June 2008, 08:42 Quote
It's going to take nVidia about a week to come up with a CUDA video encoder which will make Intel look pretty stupid. The bit about quality is pure FUD because that's going to come down to how the software is written.
wuyanxu 5th June 2008, 09:36 Quote
Quote:
Originally Posted by C0nKer
Exactly. Engineers of all disciplines, heck, even high school kids in India know C.
come on! C is the most basic language – how can any computer/embedded systems/hardware developer or engineer not know C? How would they survive?

although the accuracy point is taken. The reason BOINC distributed computing doesn't use the GPU is that they say it's not accurate enough to produce reliable results.
perplekks45 5th June 2008, 09:37 Quote
Not knowing C? We get to learn C in the 5th semester at the earliest over here. First is Scheme, then Java. If you're interested you can learn all that web-based stuff as well, but the main language we're taught is Java.
[/offtopic]

I think this whole "battle of encoding" is pure marketing. We'll see who'll come out as a winner in this one but I have a feeling that both are right... somewhere... a bit... maybe. :)

Oh, and badders +4. ;)
yakyb 5th June 2008, 10:25 Quote
afaic, CUDA and video encoding via CUDA are in the early stages of development compared to the seasoned encoders we have for the CPU – heck, the DivX codec is ten years old and people have been writing encoders for it all that time.

As HDD sizes increase people will be asking for greater quality (therefore less compression) – could this be a realm for the GPU or the CPU? Who knows. I personally believe that the GPU is the next major leap, and as soon as a few freeware encoding apps (along the lines of staxrip or handbrake) are released that utilise CUDA, Nvidia will start to win out.
ecktt 10th June 2008, 20:50 Quote
Sigh...
If Nvidia's GPU has all the branching power, then they're wasting silicon that would have been better utilized for pushing pixels or massively parallel calculations. Intel is right in that a general processor is much better suited to branching. How much of that is required for video encoding is beyond me, because I've not written any code for that type of application. I do understand Intel's point about the IQ of the video, as I have some understanding of what goes on during video encoding/compression. That said, any programmer worth his salt knows ANSI C.
Anyway, a hybrid approach to video encoding utilizing both types of processors is probably the best approach: the CPU for the branching and the GPU for the parallel calculations. Someone asked why not have a hybrid processor. Well, you have to ask how much silicon would have to be partitioned for matrix-type calculations and how much for branching – for every type of task the answer would be different. As is, CPUs have some degree of Single Instruction Multiple Data (SIMD); Intel calls it SSE/2/3/4/... and AMD called it 3DNow!, though I think AMD adopted SSE a while ago. My point here is that while Intel might be full of marketing, what they're saying isn't necessarily BS, and their processors have been capable of doing similar types of calculations to a GPU for quite some time (even before GPUs were marketed, although nowhere near as fast).
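For context on the SIMD point: a single SSE instruction operates on four packed floats at once, which is the same one-instruction-many-data idea that a GPU scales up by orders of magnitude. A tiny, self-contained example (nothing to do with any particular encoder):

Code:
// Minimal SSE example: one instruction adds four packed floats at a time.
#include <xmmintrin.h>   // SSE intrinsics
#include <cstdio>

int main() {
    alignas(16) float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    alignas(16) float b[4] = {0.5f, 0.5f, 0.5f, 0.5f};
    alignas(16) float r[4];

    __m128 va = _mm_load_ps(a);      // load four floats into one 128-bit register
    __m128 vb = _mm_load_ps(b);
    __m128 vr = _mm_add_ps(va, vb);  // a single ADDPS instruction, four lanes at once
    _mm_store_ps(r, vr);

    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}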