bit-tech.net

Intel releases 48-core processor

Intel's prototype CPU builds on the work carried out by its Tera-scale research unit and provides 48 physical processing cores.

Intel has announced that "limited quantities" of an experimental processor featuring 48 physical processing cores will be shipping to researchers by the middle of the year.

As reported over on ITworld, Intel Labs evangelist Sean Hoehl announced the company's plans to provide academic researchers with early versions of its massively multi-core processors as part of its Tera-scale research programme - the chip is almost certainly a variant of its Single-chip Cloud Computer project.

Whilst hard details of the processor's precise specifications weren't made available, engineer Christopher Anderson hinted that each physical core on the chip would clock in at around the same speed as an Atom processor - so we're looking at anything between 1.2GHz and 1.8GHz per core. While that might not sound like much, remember that Intel is building forty-eight of these cores onto a single processor - and there'll be nothing to stop dual- and even quad-processor motherboards being manufactured to accept the new chips.

Interestingly, one fact that did come out during the announcement was the power draw of the new chip: depending on workload, the 48-core processor is expected to draw between 25W and 125W - the wide range being down to the chip's ability to shut down groups of processing cores when they're not actively in use.
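To put that range in context, here's a rough, purely illustrative model of how gating idle core groups could produce such a spread. Only the 25W floor, 125W ceiling and 48-core count come from the announcement; the linear scaling assumption (and the Python sketch below) is ours, not Intel's.

    # Toy model: assume the ~100W gap between idle (25W) and fully loaded
    # (125W) scales linearly with the number of active cores. The endpoints
    # come from the article; the linear assumption is illustrative only.
    IDLE_WATTS = 25.0
    MAX_WATTS = 125.0
    TOTAL_CORES = 48

    def estimated_draw(active_cores):
        """Estimate package power for a given number of active cores."""
        active_cores = max(0, min(active_cores, TOTAL_CORES))
        return IDLE_WATTS + (MAX_WATTS - IDLE_WATTS) * active_cores / TOTAL_CORES

    for cores in (0, 12, 24, 48):
        print(f"{cores:2d} active cores -> ~{estimated_draw(cores):.0f}W")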

The prototype chip is based around the same mesh technology behind the more impressive 80-core version that the Tera-scale research team developed back in 2007, but unlike previous efforts this latest version is a fully working processor - albeit in prototype form.

Sadly, Intel hasn't offered any hints on precisely when processors with this many cores will become a commercial reality - but at least this will give researchers time to develop the tools and skills required to take advantage of such a massively parallel processing platform.

Are you impressed that Intel has made it to a viable 48-core processor already, or are you only going to take notice when the triple-figure chips hit shop shelves? Is the future truly in many low-powered cores, or should the company be concentrating on fewer, higher-speed cores in its processors? Share your thoughts over in the forums.

31 Comments

liratheal 8th April 2010, 10:58 Quote
How long until some madman puts one in a regular PC?
Fod 8th April 2010, 11:08 Quote
Of course, those who need 48 physical processing cores in a workstation right now can go off and build an AMD Magny-Cours system. OK, so that's going to be a multi-CPU machine, but that's the price you pay for massive scalability, damnit!
EnglishLion 8th April 2010, 11:12 Quote
Well it's all very interesting but as to whether or not we'll see these in home PCs I'm not really sure. I don't see the need unless home computing takes a major step in some direction or other.

48 cores won't be much use to the average internet surfer. Maybe the technology will help with the production of very low-energy single- and dual-core models that will pop up in all sorts of embedded applications?
scawp 8th April 2010, 11:13 Quote
Will it run crysis on high settings?
livenoise 8th April 2010, 11:14 Quote
But can it run Crysis?
proxess 8th April 2010, 11:20 Quote
Can you play Kirby Fun Pack 8 in 1 on it?!
Hugo 8th April 2010, 11:23 Quote
Quote:
Originally Posted by liratheal
How long until some madman puts one in a regular PC?
Depends how long it takes for Intel to ship one to me :)
xaser04 8th April 2010, 11:28 Quote
Quick question - What are all of the leads (black and red) going into the motherboard on each side of the cpu socket?
Fod 8th April 2010, 11:34 Quote
they deliver the magic smoke to the CPU for it to work. Because it's a prototype they haven't finalised the way the smoke is held permanently inside the CPU package, so it's always letting magic smoke out, therefore requiring a constant fresh supply in order to continue operating.
confusis 8th April 2010, 11:47 Quote
Quote:
Originally Posted by liratheal
How long until some madman puts one in a regular PC?

How about a folding rig?
rickysio 8th April 2010, 12:20 Quote
How about properly multithreaded general applications?
Xir 8th April 2010, 12:36 Quote
Quote:
Originally Posted by Fod
they deliver the magic smoke to the CPU for it to work. Because it's a prototype they haven't finalised the way the smoke is held permanently inside the CPU package, so it's always letting magic smoke out, therefore requiring a constant fresh supply in order to continue operating.

+1
Have you never noticed how in EVERY sci-fi setting, everything always smokes?
Star Trek, Star Wars, Stargate... as soon as someone hits a console with a kid's toy gun, the thing smokes and explodes.
(I'd always thought it was steam, but nobody's ever burnt, just blast wounds... so it must be smoke... Smoke's the future!)
cgthomas 8th April 2010, 12:37 Quote
The question is: will it run Paint?
borandi 8th April 2010, 12:43 Quote
The Cell almost had it good. A few fast cores, then a ton of slower cores.

Think about it, that's almost identical to the CPU/GPU setup - quad core + 1600 shaders is 4 powerful cores and 1600 smaller, slower cores.

If the 48 cores are all x86/64-bit compatible though with reasonable logic, that's the middle ground. Expect to see workstations with a quad/hex/octo core and then 4/8 of these chips inside.
true_gamer 8th April 2010, 13:02 Quote
The question is: will it run DOS? Ha ha.
RichCreedy 8th April 2010, 13:42 Quote
can i become an academy so i can get one?
cgthomas 8th April 2010, 13:48 Quote
I can be in your academy
javaman 8th April 2010, 14:47 Quote
Does it support hyperthreading?

I read recently that they've managed to multi-thread Word.
Adnoctum 8th April 2010, 15:18 Quote
Sounds like it will only be strong in massively threaded applications or large numbers of applications, and incredibly weak for lightly threaded applications, such as games. Plus, the cores are probably quite simple given the die and socket constraints, so they're probably weaker than even Atom cores despite the similar clocks. The only advantage is that there are lots of them. And I don't see any mention of the cores being x86-based, so they may not support those instructions.
Quote:
Originally Posted by borandi
Quad core + 1600 shaders is 4 powerful cores and 1600 smaller, slower cores.

That sounds like AMD's Fusion/Llano.
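Adnoctum's point about lightly threaded software is essentially Amdahl's law: if only part of a program's work can be spread across cores, adding more of them quickly stops helping. A minimal sketch in Python - the 80%-parallel figure is an arbitrary example, not a measurement of any real game:

    # Amdahl's law: speedup = 1 / (serial_fraction + parallel_fraction / n_cores)
    # The parallel fraction below is an arbitrary example, not measured data.
    def amdahl_speedup(parallel_fraction, n_cores):
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / n_cores)

    for cores in (2, 4, 8, 48):
        print(f"{cores:2d} cores, 80% parallel: {amdahl_speedup(0.8, cores):.2f}x")

Even with 48 cores, an 80%-parallel workload tops out below 5x - which is why many slow cores only pay off for heavily threaded work.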
Bloodburgers 8th April 2010, 15:37 Quote
I think the most important thing is whether it can tell the difference between 0 and 1.

Watch this 1110011 1110000 1100001 1100011 1100101!
Adnoctum 8th April 2010, 16:07 Quote
Quote:
Originally Posted by Bloodburgers
I think the most important thing is whether it can tell the difference between 0 and 1.

Watch this 1110011 1110000 1100001 1100011 1100101!

Binary fail!

Read this (ASCII): 01000010 01101001 01101110 01100001 01110010 01111001 00100000 01100011 01101111 01100100 01100101 00100000 01101101 01110101 01110011 01110100 00100000 01100010 01100101 00100000 01100100 01101001 01110110 01101001 01110011 01101001 01100010 01101100 01100101 00100000 01100010 01111001 00100000 00111000

I think x86 compatibility would be important to most users. I don't think the market is particularly large for a processor that requires a custom OS.
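For anyone who wants to check the exchange above, decoding space-separated binary groups as ASCII character codes takes only a couple of lines; this little Python sketch handles both the 7-bit and 8-bit groups posted:

    # Decode space-separated binary groups as ASCII character codes.
    def decode_binary(groups):
        return "".join(chr(int(bits, 2)) for bits in groups.split())

    print(decode_binary("1110011 1110000 1100001 1100011 1100101"))                 # -> space
    print(decode_binary("01000010 01101001 01101110 01100001 01110010 01111001"))   # -> Binary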
eddtox 8th April 2010, 16:41 Quote
Am I the only one who is impressed by the TDP of this thing? I'm sure I've owned single-core processors with higher TDPs than that.

As for multi-core systems, I've said it before and I'll say it again: I really hope it's a long time before anything more than 4 cores becomes mainstream.
I think we are already starting to have problems with lazy developers not optimizing applications because dual-core, 2GB RAM systems are becoming the norm. I dread to think what would happen if 20 cores and 20 gigs of RAM became the norm. We'd probably end up with notepad apps taking up 100% of a 2GHz core and 6 gigs of RAM. Imagine iTunes... :P
Aracos 8th April 2010, 17:51 Quote
Quote:
Originally Posted by livenoise
But can it run Crysis?

I'd have to say not very well - since Crysis only uses 2 cores, it should perform miserably compared to an i7/i5 :P
knutjb 8th April 2010, 20:58 Quote
Sometimes it IS necessary for hardware to pull software up to its capabilities. If we followed the idea that a couple of cores is all we'll ever need, we would still be using, and be happy with, TRS-80s. Current programming really must catch up with the hardware. How many programs scale well? Not enough.
Gradius 8th April 2010, 21:02 Quote
I'm pretty sure this will be fully x64 compatible, x86 will fade out from now on.
eddtox 8th April 2010, 21:07 Quote
Quote:
Originally Posted by knutjb
Sometimes it IS necessary for hardware to pull software up to its capabilities. If we followed the idea that a couple of cores is all we'll ever need, we would still be using, and be happy with, TRS-80s. Current programming really must catch up with the hardware. How many programs scale well? Not enough.

True, but ample processing power should not be an excuse for sloppy programming. It is, always has been, and always should be, imperative that executable code is optimized to run as economically as possible (within reason).

I didn't buy a dual core processor and 2 gig of ram so iTunes could stretch itself to 50%+ cpu cycles and hundreds of megs of ram. Put another way, if my computer is capable of running even something as old as FarCry (original) at 1280x1024 at full quality, cover flow in iTunes should not stutter.

(Sorry to pick on iTunes, I know it's not the only one (zune software anyone?))
knutjb 8th April 2010, 23:21 Quote
Quote:
Originally Posted by eddtox


True, but ample processing power should not be an excuse for sloppy programming. It is, always has been, and always should be, imperative that executable code is optimized to run as economically as possible (within reason).

I didn't buy a dual core processor and 2 gig of ram so iTunes could stretch itself to 50%+ cpu cycles and hundreds of megs of ram. Put another way, if my computer is capable of running even something as old as FarCry (original) at 1280x1024 at full quality, cover flow in iTunes should not stutter.

(Sorry to pick on iTunes, I know it's not the only one (zune software anyone?))

I wasn't implying that programmers should be reckless, but that they should catch up with where the hardware is going. If 4 cores are commonplace, and 6, 8, 12, etc. are going to be commonplace very soon, programmers must catch up with that. They must also use it as efficiently as possible. I think that efficiency is part of a program's ability to scale and respond to the user's load on the system. But with all of the hardware capacity available, it's negligent to the consumer, commercial or private, not to take advantage of their purchase of new technology. I have experienced this with some commercial computer-controlled equipment.

NO, I AM NOT PICKING ON PROGRAMMERS. (This is a generalization.) I hold the MANAGEMENT who set the WORK priorities for the programmers responsible. It seems to me that the programming paradigm is stuck on dated ideals, i.e. it works in XP, why update it? I do recognize the complexities and difficulties involved with programming and am not marginalizing that; however, it can't be used as an excuse to live with merely acceptable performance because it still works. Maybe Microsoft needs to drop more of its legacy support in the name of efficiency.
Farfalho 9th April 2010, 02:18 Quote
The question is:

Can it run Ubisoft's servers 24/7?
proxess 9th April 2010, 10:22 Quote
Quote:
Originally Posted by Farfalho
The question is:

Can it run Ubisoft's servers 24/7?

I lolled so much to that one.
BabyJhonny 14th January 2011, 23:28 Quote
Software can no longer be developed for a single platform architecture. Software needs to be developed in such a way that allows for universal access to that software on multiple platforms with multiple architecture configurations. Software needs to be both adaptable and interchangeable; for example, you can play Final Fantasy 7 on a PlayStation, but you can not play it on a laptop or desktop computer, and you can run Windows 95 software on Windows 95 but you can not run it on Windows 7. If software was made to be both adaptable and interchangeable the problems of compatibility would not even exist. You could run your PlayStation 3 games on your Windows 98 computer, you could watch your DVDs from your CD drive, and so on and so forth. Software emulation is much easier to do when it is programmed into nanoscale EEPROMs, and while many would still call that hardware emulation it's still software emulation because it still requires coding. One chip, with one EEPROM for every OS, architecture, CPU type and clock speed, and a few EEPROMs interconnected with some logic circuitry and coded with data for adaptability and interchangeability, and you will have a chip that will potentially defeat the problems of compatibility.
eddtox 15th January 2011, 15:56 Quote
Quote:
Originally Posted by BabyJhonny
Software can no longer be developed for a single platform architecture. Software needs to be developed in such a way that allows for universal access to that software on multiple platforms with multiple architecture configurations. Software needs to be both adaptable and interchangeable; for example, you can play Final Fantasy 7 on a PlayStation, but you can not play it on a laptop or desktop computer, and you can run Windows 95 software on Windows 95 but you can not run it on Windows 7. If software was made to be both adaptable and interchangeable the problems of compatibility would not even exist. You could run your PlayStation 3 games on your Windows 98 computer, you could watch your DVDs from your CD drive, and so on and so forth. Software emulation is much easier to do when it is programmed into nanoscale EEPROMs, and while many would still call that hardware emulation it's still software emulation because it still requires coding. One chip, with one EEPROM for every OS, architecture, CPU type and clock speed, and a few EEPROMs interconnected with some logic circuitry and coded with data for adaptability and interchangeability, and you will have a chip that will potentially defeat the problems of compatibility.

But at what cost?

One of the main reasons we have so many different platforms is the manufacturers' desire to exercise full control over their platform, including deciding what software can be released on the platform, by whom, and what royalties they will be charged. Developer kits for some platforms (Nintendo/PlayStation for example) can cost thousands of pounds, and I don't see those manufacturers giving up that cash cow willingly.

Another consideration is the cost of actually developing and adding such hardware and whether people would actually care enough to pay for it. Most things that I could do on Windows 95, I can do better in Windows 7, and for the few things that I can't I could use an emulator, if I was really desperate. The point is that saying to a customer "This is £50 more expensive because it can run Windows 95 software" is likely to get you laughed at.

Finally, it is important to consider the different hardware platforms and their relationship to the software running on them. Having the ability to run iOS software on my desktop would be of very limited use, as the software is specifically tailored for iOS devices. The same goes for running PlayStation software on iOS, or Windows NT on Android.

In short, while this sort of thing sounds good in theory, and it would be nice to have one piece of software that runs on everything, in practice we find that it rarely works very well.