In defence of multi-core

Written by Wil Harris

November 5, 2006 // 12:06 p.m.

Tags: #kentsfield #multi-core #quad #quad-core #source #valve

This week has been all about multi-core. We were hit by one big announcement that we knew was coming - Intel's Core 2 Quad chip, codenamed Kentsfield. This has been on the books for a while, and we've been playing with the chip since the beginning of October - and having a lot of fun.

This announcement was preceded by the news, which we reported on in the early hours of Thursday morning, of Valve's move to multi-core for the Source engine. We weren't quite sure what to expect when we rocked up in Bellevue to see the legendary software developer, but when we'd finished the meeting, it was clear that Valve's team is wholly committed to a multi-core future and is beginning to work towards that vision now.

"I've heard more than one person say - 'but why?'"

There's been something of a negative tone, however, towards all this stuff recently. I've heard more than one person say - 'but why?' When I appeared on CrankyGeeks a few weeks back, one of the questions I faced was - 'Why do we need 80 or more cores in the future? Why is Intel working on this stuff?' We saw the same thing in much of the technical press this week - 'Kentsfield is great, but it's overkill, why bother?' This is, in my opinion, a very short-sighted way of looking at things.

Throughout computing history, invention has been blighted by what I like to call the Why-ners. (See what I did there?) Why do we need quad core? Why do we need 512MB of graphics RAM? Most famously, why would anyone need more than 640k of system memory?

The answer to this question is, and always has been, to facilitate progress. The relationship between software and hardware is very much chicken and egg - we need software to be created to provide a use for uber hardware, but we also need that hardware to exist in the first place. Which comes first? Well, unlike the chicken and the egg, we can trace back through time to the hardware coming first - software isn't a necessary condition for the creation of hardware, but the converse is true. Thus, we must innovate hardware to drive software forward.

And software brings functionality and usage. Most people do not doubt that dual core brings benefits to a system - the ability to multi-task further, to get things done quicker. The argument gets more complicated when we consider quad-core and, going further, multi-core, because if the Why-ners have one thing right, it's that traditional software does not really need to go faster. Word doesn't need to go faster, so what use is faster hardware? The internet and email don't require faster hardware, so what use? To some degree, even the OS is about as fast as we could possibly require it to be, so what use?

The answer is, as always, new uses. Sure, the current functionality in Microsoft Word doesn't require faster hardware. But what about the functionality we'll see in a year or two's time? Wouldn't it be great to have Word parse a document, go out to Wikipedia and automatically pull in references and citations, which could then update in real time? What about if your operating system could do an on-the-fly search of every image on your hard disk to identify a particular photo you wanted to find, based only on your description of what the photo looked like? How about natural language synthesis that actually worked?

For all these things to be possible, hardware needs to keep up a rapid pace of development. Sure, right now, multi-core will probably be overkill for a lot of people. Does that make it a bad product? No, it just makes it a good product for those that have the requirement. For instance, we've already seen use for it at the office - Tim, our hardware guru, is a major multitasker and has seen a decent performance boost in general desktop usage through Kentsfield. Quad-core is already a no-brainer for server installations - who wouldn't jump at the chance to drop in a new chip and double performance in key applications?
So for people to take it to task for being 'overkill', for me, is like saying that a Ferrari is overkill for driving to work. So what? That doesn't make the Ferrari a bad car, it doesn't make it bad value, it just means that most people will be buying a Clio, or whatever. The Ferrari still rocks out.

"As technology moves on, quad-core will become mainstream"

The big difference is that a Ferrari will stay the car of the super-rich, whereas Kentsfield, or at least the technology inside it, will swiftly become the everyday kit of many people. As technology moves on, quad-core will become mainstream.

And that is just another one of the reasons why people scoffing at technological progress aren't thinking straight. Without this never-ending march up the high-end, stuff on the low-end won't ever get better. The technology used to implement quad-core - the processes, manufacturing, design and the like - can be used in the future on different applications, such as chips for mobile phones. There are plenty of areas where technology really does need to progress further to become useful, and the high-end pulls all these areas up further. Think of graphics cards - if the GeForce 8800 wasn't about to hit the shelves and decimate everything in its path, you guys wouldn't then have the mainstream 8600 and 8300 technology (assuming that's what NVIDIA will call them) which will be far more reasonably priced (we imagine).

So I think the latest Intel chip has come in for some unjust criticism. Now, a different argument against it is that, architecturally, it can be seen as inefficient. This is a far harder criticism to simply shrug off, since it has some merit.

Kentsfield is, in fairly simple terms, a dual-dual-core. It's two Conroe dies whacked next to each other. To communicate with each other, the two pairs of cores have to go out to the front side bus for a chat, rather than just having a natter over the hedge, so to speak. Not only does this add latency to the inter-core comms, it also occupies valuable bandwidth on the front side bus which could otherwise be used for main memory traffic. Technically, this makes Kentsfield a slower chip than a native quad-core implementation.
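The general principle at work here can be sketched in code. This is not Kentsfield-specific - it's an illustrative Python example of why communication cost between cores (or here, processes) matters: exchanging data item-by-item pays the round-trip latency every time, while batching pays it once. On a chip where inter-core traffic has to cross the front side bus, that per-trip cost is exactly what gets more expensive.

```python
# Illustrative sketch only: two ways for a worker on another core to
# square a list of numbers. Same answer either way, but the streaming
# version pays one communication round trip per item, the batched
# version pays one round trip total. Higher inter-core latency punishes
# the first style far more than the second.
import multiprocessing as mp

def square_stream(conn, n):
    # Worker: receive numbers one at a time, reply one at a time.
    for _ in range(n):
        x = conn.recv()
        conn.send(x * x)
    conn.close()

def square_batch(conn):
    # Worker: receive the whole batch once, reply once.
    xs = conn.recv()
    conn.send([x * x for x in xs])
    conn.close()

def run_stream(xs):
    parent, child = mp.Pipe()
    p = mp.Process(target=square_stream, args=(child, len(xs)))
    p.start()
    out = []
    for x in xs:
        parent.send(x)        # latency paid once per item
        out.append(parent.recv())
    p.join()
    return out

def run_batch(xs):
    parent, child = mp.Pipe()
    p = mp.Process(target=square_batch, args=(child,))
    p.start()
    parent.send(xs)           # latency paid once in total
    out = parent.recv()
    p.join()
    return out

if __name__ == "__main__":
    xs = list(range(200))
    assert run_stream(xs) == run_batch(xs)
```

The same trade-off is why software written with Kentsfield-style latencies in mind tends to hand threads big, independent chunks of work rather than chattering between cores.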

AMD has been making a lot of noise about the fact that its first quad-core chip will be native, and is saying that it will be faster. I'm sure K8L will be faster than Kentsfield, but by then, there will be new revisions to Intel's architecture, and who's to say that Intel won't have something to compete?

There's also the question of just how much difference it is really going to make to the people creating the software. Sure, the differences between K8L and Kentsfield might make a small performance difference to enthusiasts - but to programmers, they're just high-end quad-core architectures. They are going to be programming for four cores, and not caring too much about the actual implementation. Think about how programmers work on games - they program to DirectX specifications and let the hardware manufacturers sort out the relative performance of their DirectX implementations. We can argue over NVIDIA v ATI all day, but the balance swings back and forth and it cannot be denied that, fundamentally, the results are pretty close.
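To make the "programmers just see four cores" point concrete, here's a minimal, hypothetical Python sketch (not anyone's actual engine code): the application asks the OS how many cores exist and splits a CPU-bound job evenly across them. Whether those cores are two dies glued together Kentsfield-style or a native quad is invisible at this level.

```python
# Sketch of topology-agnostic parallelism: ask for the core count,
# carve the work into that many chunks, and let the hardware sort
# out the rest. The toy workload (counting primes) is arbitrary.
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(lo, hi):
    """Count primes in [lo, hi) by trial division - simple CPU-bound work."""
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True
    return sum(1 for n in range(lo, hi) if is_prime(n))

def parallel_count_primes(limit):
    workers = os.cpu_count() or 1       # 2, 4, 80... this code doesn't care
    step = -(-limit // workers)         # ceiling division into even chunks
    ranges = [(i, min(i + step, limit)) for i in range(0, limit, step)]
    los, his = zip(*ranges)
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, los, his))

if __name__ == "__main__":
    print(parallel_count_primes(10_000))   # 1229 primes below 10,000
```

Nothing in that code would change between Kentsfield and K8L - only the wall-clock time, which is exactly the enthusiast-versus-programmer distinction made above.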

Which isn't to say that AMD won't surprise us all next year and come out of the gates with a stonkingly fast quad-core that pwns Intel to within an inch of its life. But does that make Kentsfield worse? No, it just means that if you're buying quad-core in six months' time, you might have a better choice.

Besides, being a hardware enthusiast isn't always about value. It's about performance, hardware, technology for technology's sake. If all we ever cared about was pricing, we'd end up with rubbish hardware that had costs cut to keep the masses happy. Multi-core processing is going to do some very cool things, and we should be excited about that - I know that I certainly am. Why wouldn't you want it as soon as humanly possible?

