bit-tech.net

IDF Day 3 - 32nm Westmere Performance

Yoy0YO 25th September 2009, 03:52
This definitely looks good for an SG05 super-portable LAN box build with DDR3 and 32nm. Thanks again, Bit-Tech, for the performance review.
Darkraven 25th September 2009, 05:30
Wait-wait-wait, and BAM, your head gets dizzy. So much for napping till Sandy Bridge. Going to get even more interesting by Christmas, I'll bet.
perplekks45 25th September 2009, 09:22
3.33GHz against that competition? Of course it can keep up! :|

Still very nice to have it run at 70W max. But I think I'll wait for it to hit retail before I make a decision about whether I like it or not.
Jack_Pepsi 25th September 2009, 09:45
Mmmm... just imagine a low-powered 32nm quad-based setup. Now that is something I'd be interested in purchasing.
[PUNK] crompers 25th September 2009, 12:06
You'd think they might do that: stick two of the dual cores together again. With Hyper-Threading enabled too, that would mean eight logical cores. It would be a great chip.

The integrated GPU seems like a good idea if it can handle multimedia at low power. The chances of these being an option for gamers are fairly slim though, I should imagine.
HourBeforeDawn 25th September 2009, 21:29
Hmm, alrighty, this I think is what I'll go for when I build my carputer. It's coming out around the right time too, as I wanted to build this as an Xmas gift to my car and myself =p
Andy Mc 26th September 2009, 03:50
Quote:
Originally Posted by Article
CPUs based on the Westmere architecture introduce something else that's never been seen before, as they're the first processors to integrate the GPU onto the same packaging as the CPU.

I'm pretty sure that's not the first x86 chip to have done that: Cyrix beat Intel to this back in 1996 with its MediaGX range of chips. I remember the local PC builder I worked for sold a load of cheap systems based on the MediaGXm chip.
hiei-warrior 26th September 2009, 08:56
I wonder why it's always when I upgrade that Intel releases a bunch of lower-nm CPUs :(
I think the next time I upgrade I'm gonna get a 22nm CPU :)
TSR2 26th September 2009, 17:26
I don't really see the point of the integrated GPU; it's just another unupgradeable feature that adds cost, although fortunately you say Intel has made it possible to turn it off so it won't use power.
aussiebear 27th September 2009, 20:37
Quote:
Originally Posted by TSR2
I don't really see the point of the integrated GPU; it's just another unupgradeable feature that adds cost, although fortunately you say Intel has made it possible to turn it off so it won't use power.

Because the future direction of computing is a heterogeneous processor that combines various processors in one package. There's a reason why software frameworks like DirectCompute and OpenCL exist. (OpenCL lets you grab whatever processors you have, CPU, GPU, etc., and use them in a unified manner. In later versions it'll interoperate with OpenGL, so you'll be able to dynamically switch the GPU between graphics, physics and other GPU roles.)
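
To put that in concrete terms, here's a minimal C sketch against the OpenCL host API (my own illustration, not from the article): the same handful of calls enumerates every compute device in the box, whether it's a CPU or a GPU.
Code:
/* Minimal sketch: list every OpenCL device in the system, CPU or GPU,
   through the same API. Error checking omitted for brevity. */
#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint num_platforms = 0;
    clGetPlatformIDs(4, platforms, &num_platforms);

    for (cl_uint p = 0; p < num_platforms; p++) {
        cl_device_id devices[8];
        cl_uint num_devices = 0;
        clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_ALL,
                       8, devices, &num_devices);

        for (cl_uint d = 0; d < num_devices; d++) {
            char name[256];
            cl_device_type type;
            clGetDeviceInfo(devices[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devices[d], CL_DEVICE_TYPE,
                            sizeof(type), &type, NULL);
            printf("%s [%s]\n", name,
                   (type & CL_DEVICE_TYPE_GPU) ? "GPU" : "CPU");
        }
    }
    return 0;
}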

Clarkdale's IGP is basically an evolutionary improvement over the X4500HD.
(Increased performance and some HD playback features... still no match for Nvidia's or ATI's IGPs in gaming performance.)

The next one, Sandy Bridge (2011?), also uses the same IGP technology, improved again.
(Some say this is when AMD will release Fusion... it's a big question mark whether they can achieve the rumoured 2011 date.)

The one after, Haswell (2012?), uses technology from Intel's Larrabee project.
(Larrabee technology is, in layman's terms, just lots of highly vectorised x86 cores used for GPU and GPGPU roles. It won't mean much to the end user; it's more interesting for programmers, though, due to its flexibility.)
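
If "highly vectorised x86" sounds abstract, the toy C snippet below shows the idea using today's 128-bit SSE intrinsics. It's purely my own illustration; Larrabee's vector units are reportedly much wider, but the principle of one instruction operating on many data lanes is the same.
Code:
/* Illustration only: "vectorised x86" with ordinary 128-bit SSE.
   Adds two float arrays four elements at a time. Assumes n is a
   multiple of 4 and the pointers are 16-byte aligned. */
#include <xmmintrin.h>

void vec_add(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_load_ps(a + i);            /* load 4 floats   */
        __m128 vb = _mm_load_ps(b + i);
        _mm_store_ps(out + i, _mm_add_ps(va, vb)); /* 4 adds at once  */
    }
}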

What makes Intel's eventual approach (2012) interesting is that their design will be completely x86. Not sure about AMD's approach, as they've revealed very little.
Chocobollz 27th September 2009, 21:18
Quote:
Originally Posted by TSR2
I don't really see the point of the integrated GPU; it's just another unupgradeable feature that adds cost, although fortunately you say Intel has made it possible to turn it off so it won't use power.

Though the cost would be so negligible you'd think Intel had given it to you for free! (I think :p)
Quote:
Originally Posted by hiei-warrior
I think the next time I upgrade I'm gonna get a 22nm CPU :)

And then a few days after you get your upgrade, you're gonna end up here again saying the same thing, because Intel will have just released a 16nm CPU :p
Tim S 29th September 2009, 05:42
Quote:
Originally Posted by aussiebear
What makes Intel's eventual approach (2012) interesting, is that their design will be completely x86. Not sure about AMD's approach, as they've revealed very little.

Richard Huddy, AMD's head of worldwide developer relations, effectively said to me last year that "it doesn't make sense" to implement x86 on a GPU since virtually everything is API-based. For once, he agreed with David Kirk (Nvidia CTO), who told me that x86 "was a waste of die space" on a GPU.

Of course, GPU compute is something that can be API-based, but it doesn't have to be if there's a suitable compiler available (like the C compiler Nvidia uses with C for CUDA) - that's an area where x86 could be useful on a GPU (hell, it's why Intel thinks that is the right direction). Time will tell, I guess, and we'll have to see what happens when Larrabee eventually makes it out of the door.
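
To illustrate the distinction for anyone following along: in the API-based model the kernel is just C-like source handed to the driver at runtime, which compiles it for whatever ISA the device actually speaks - the programmer never needs it to be x86. A rough OpenCL sketch in C (my own illustration, nothing Huddy or Kirk showed me):
Code:
/* API-based GPU compute: the kernel travels as source text and the
   driver compiles it for the device's native ISA at runtime. */
#include <CL/cl.h>

static const char *kernel_src =
    "__kernel void scale(__global float *v, float k) {\n"
    "    v[get_global_id(0)] *= k;\n"
    "}\n";

/* Given an existing context and device, build the kernel from source.
   Error checking omitted for brevity. */
cl_kernel build_scale_kernel(cl_context ctx, cl_device_id dev)
{
    cl_int err;
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src,
                                                NULL, &err);
    clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
    return clCreateKernel(prog, "scale", &err);
}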
tqiw 29th September 2009, 09:19
Which case is pictured in the article?