
Intel admits to Moore's Law hiccough

Intel has admitted that its adherence to Moore's Law has hit a speed bump, but has reassured investors that while things are "getting really hard", its competitors face the same struggles.

Intel has warned investors that following Moore's Law is getting increasingly difficult, admitting to the first blip on its roadmap in several generations.

Moore's Law, named for Intel co-founder Gordon Moore, is the observation that the number of transistors found in complex integrated circuits like microprocessors doubles roughly every two years - a figure often quoted as eighteen months once the accompanying performance gains are factored in. While originally couched as a historical observation, Moore's rule has become a self-fulfilling law for the semiconductor industry, which seeks to meet or exceed its expectations - and, generally, succeeds in doing so.
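
To see how quickly that compounds, here's a minimal back-of-the-envelope sketch in Python, assuming an idealised two-year doubling and taking the original Intel 4004's roughly 2,300 transistors as a starting point - illustrative numbers only, not a model of any real product line:

# Idealised Moore's Law projection: transistor count doubling
# every two years from the Intel 4004's ~2,300 transistors (1971).
base_year, base_count = 1971, 2300
doubling_years = 2.0  # assumed cadence

for year in (1981, 1991, 2001, 2011):
    count = base_count * 2 ** ((year - base_year) / doubling_years)
    print(f"{year}: ~{count:,.0f} transistors")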

There are problems on the horizon, however. Doubling the number of transistors on a given integrated circuit typically requires halving the area each transistor occupies - otherwise we'd all be using CPUs the size of our houses by now. In lithography - the means by which the design of the processor is shrunk and applied to the semiconductor itself - the target feature size is known as the process node, with Intel's next-generation chips targeting a 14nm node.
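
Because a chip's area scales with the square of its linear dimensions, doubling density needs only a roughly 0.7x linear shrink. A quick Python sketch of the idealised geometry, treating the node names as literal feature sizes - which, in modern practice, they only loosely are:

# Idealised scaling from a 22nm node to a 14nm node.
old_node, new_node = 22.0, 14.0
linear = new_node / old_node  # ~0.64x linear shrink
area = linear ** 2            # ~0.40x - each transistor needs ~40% of the old area
print(f"linear {linear:.2f}x, area {area:.2f}x, density {1 / area:.1f}x")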

As the size of the transistors themselves decreases, so does the size of everything else. There's a lower limit to how far the process node can shrink: silicon, the material typically used for modern semiconductors, suffers from current leakage when feature sizes get small enough - meaning components begin to interfere with their neighbours, turning what should be neatly-ordered data into gibberish. The lithography machines themselves also struggle, needing ever-smaller wavelengths of light to etch their patterns onto the substrate.

Traditionally, Intel has met these challenges head-on, and in doing so continued its adherence to Moore's Law. At its most recent investor day, however, the company revealed a slide showing its struggles with getting the 14nm process right. These struggles, which caused higher-than-expected defect rates in the chips, are no surprise - they are directly responsible for the 14nm Broadwell being delayed into 2014 - but to hear Intel talk candidly on the subject is rare indeed.

'It’s the first time [we've had these troubles] in quite a number of generations,' Intel's William Holt told the Wall Street Journal at the invitation-only investor day. 'It’s just getting really hard. [The problem is] just getting the size down. As hard as it is, it’s going to be just as hard for everybody else,' Holt added, reassuring investors that the problems aren't exclusive to the company.

An end to Moore's Law could spell trouble for the company. Traditionally, it's been that doubling in performance that has kept its customers upgrading year after year - and each process node shrink brings a corresponding decrease in base manufacturing costs, which helps offset the research and development spending needed to reach a given node. 'I’m not about to start predicting the end [to Moore's Law],' Holt told the paper, 'since anybody who has tried has been wrong.'

The issues surrounding the continuation of Moore's Law have led to several potential solutions over the past decade, from silicon oxide chips and nanowire grids to carbon nanotube transistors and purely optical computing. Few, however, are ready for mass production just yet.

[USRF]Obiwan 22nd November 2013, 12:08 Quote
Then make larger chips, problem solved.
GuilleAcoustic 22nd November 2013, 12:13 Quote
Quote:
Originally Posted by [USRF]Obiwan
Then make larger chips, problem solved.

Larger chip = fewer chips per slice of silicon = higher price per die.
Gareth Halfacree 22nd November 2013, 12:17 Quote
Quote:
Originally Posted by [USRF]Obiwan
Then make larger chips, problem solved.
Doesn't scale. Intel's predicting that its 14nm chips will cost 27 per cent more to make - but the drop in node size means they're smaller, bringing the cost back down. If they weren't smaller, every processor generation would cost more and more - until they're entirely unaffordable. Let's say you can fit 100 Pentium 5 processors on a single wafer; the Pentium 6, as required by Moore's Law, has roughly double the transistors and so, by your plan, is double the size - meaning you can only fit 50 on the wafer. Whoops, your per-chip manufacturing costs just doubled. The Pentium 7? Doubled again. Pentium 8? That's another doubling.

If the entry-level Pentium 5 cost £50, you're looking at £100 for the entry-level Pentium 6, £200 for the Pentium 7, and £400 for the Pentium 8. Eventually, by the Pentium 11, you can only get one chip per wafer - and that's the end of that game, unless you're going to make the Pentium 12 span two wafers and glue 'em together. Which is fine, except you're looking at £6,400 (plus the cost of the interconnect) per chip at this point for the entry level model.
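
If you want to watch that compound, here's the same arithmetic as a quick Python sketch - the starting figures are the made-up ones above, not real fab numbers:

# "Just make bigger chips": die area doubles each generation, so dies
# per wafer halve and the per-chip manufacturing cost doubles.
dies_per_wafer, entry_price = 100, 50  # hypothetical Pentium 5, GBP

for gen in range(5, 12):  # Pentium 5 through Pentium 11
    print(f"Pentium {gen}: {dies_per_wafer} dies/wafer, ~GBP {entry_price}")
    dies_per_wafer = max(1, dies_per_wafer // 2)  # double the area, halve the dies
    entry_price *= 2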

TL;DR: Shrink or get off the pot, to coin a phrase.
kenco_uk 22nd November 2013, 12:39 Quote
Imagine the heatsink to cool that bugger though.
Corky42 22nd November 2013, 13:00 Quote
It had to come to an end one day, or at least slow down.
Physics starts getting in the way when things get small; the brains need to work their magic and come up with new ways to do things.
GuilleAcoustic 22nd November 2013, 13:03 Quote
Quote:
Originally Posted by Corky42
It had to come to an end one day, or at least slow down.
Physics starts getting in the way when things get small; the brains need to work their magic and come up with new ways to do things.

That's when it starts to become interesting :D. Return of the co-processors? I see more and more co-processors being released / tested (Xeon Phi, Caustic R2500, etc.). It's probably time to offload some calculations to specialised chips / cards and let the CPU do the basic stuff (dispatch the workload, do the single-threaded computing, etc.).
benji2412 22nd November 2013, 13:05 Quote
Quote:
Originally Posted by GuilleAcoustic
That's when it starts to become interesting :D

Or perhaps pretty stagnant!
GuilleAcoustic 22nd November 2013, 13:10 Quote
Quote:
Originally Posted by benji2412
Or perhaps pretty stagnant!

I meant interesting on the "chip designer side". They have to find innovative stuff instead of just shrinking the size and throwing more transistors at it.
Pete J 22nd November 2013, 13:30 Quote
Does Moore's Law really matter? As long as progress is still being made, who cares?

I'm sure quantum computing will blow Moore's Law out of the water when it's tamed.
Dave Lister 22nd November 2013, 14:06 Quote
I don't believe MS for one minute that "it's getting really hard". The reason they are not pushing harder is that AMD IS finding it really hard and MS don't have to try anymore.
Gareth Halfacree 22nd November 2013, 14:10 Quote
Quote:
Originally Posted by Dave Lister
I don't believe MS for one minute that "it's getting really hard". The reason they are not pushing harder is that AMD IS finding it really hard and MS don't have to try anymore.
s/MS/Intel/g ;) (Also, Intel is a process node in front of everyone else - so they'll hit these problems first. They're not making it up, you know: at these sizes, every single node transition is like getting blood out of a stone.)
Dave Lister 22nd November 2013, 15:50 Quote
Quote:
Originally Posted by Gareth Halfacree
Quote:
Originally Posted by Dave Lister
I don't believe MS for one minute that "it's getting really hard". The reason they are not pushing harder is that AMD IS finding it really hard and MS don't have to try anymore.
s/MS/Intel/g ;) (Also, Intel is a process node in front of everyone else - so they'll hit these problems first. They're not making it up, you know: at these sizes, every single node transition is like getting blood out of a stone.)

I'm sure it's difficult, but I still feel Intel are being lazy because they don't have any real competition anymore (not in the mid-to-high-end x86 market, anyway).
GeorgeK 22nd November 2013, 15:52 Quote
Quote:
Originally Posted by Dave Lister
I don't believe MS for one minute that "it's getting really hard". The reason they are not pushing harder is that AMD IS finding it really hard and MS don't have to try anymore.

By MS don't you mean Intel? MS don't make chips...
Nexxo 22nd November 2013, 15:56 Quote
That is a subjective feeling, not an informed opinion (unless you are a chip designer, in which case I take it all back).

There are other technologies coming our way: optical computing, quantum computing, new substrates. They'll find a way around it. But just as the leap from valve to transistor, and then transistor to silicon took time and effort, so will the leap to the next technology.
Dave Lister 22nd November 2013, 16:08 Quote
Quote:
Originally Posted by GeorgeK
Quote:
Originally Posted by Dave Lister
I don't believe MS for one minute that "it's getting really hard". The reason they are not pushing harder is that AMD IS finding it really hard and MS don't have to try anymore.

By MS don't you mean Intel? MS don't make chips...

lol well spotted, for some reason i get those two mixed up from time to time.

@Nexxo, yes, that is just my gut feeling. When Intel first came out with Core 2 I was reading stuff saying they were predicting they would have over 16 cores on a chip in the next few years - that must have been over five years ago now and I'm pretty sure they are still shoveling out dual core parts at the lower end and (I may be wrong) quad and hex core parts at the top end. So my gut feeling does have a reason for the mistrust.
Margo Baggins 22nd November 2013, 16:13 Quote
Quote:
Originally Posted by Dave Lister
That must have been over five years ago now and I'm pretty sure they are still shoveling out dual core parts at the lower end and (I may be wrong) quad and hex core parts at the top end. So my gut feeling does have a reason for the mistrust.

8-core/16-thread Xeons are out there.
Gareth Halfacree 22nd November 2013, 16:17 Quote
Quote:
Originally Posted by Margo Baggins
8-core/16-thread Xeons are out there.
Never mind that: how about the 12-core, 24-thread Xeon that's going into the new Mac Pro? Plus, if core count is your thing, there's Xeon Phi: 50 Pentium-class cores on a single board, with Knights Landing bringing the same tech to dedicated CPUs in the near future.
Dave Lister 22nd November 2013, 16:21 Quote
I stand corrected, but it's still nowhere near what they said to expect. I guess I just have deep trust issues towards big corporations!
Nexxo 22nd November 2013, 16:31 Quote
Intel may be doing well in the desktop/laptop market, but ARM is already encroaching on the server market and handing Intel its ass in the fast-growing mobile market. Intel rightly perceives that more powerful chips are not the priority --current products comfortably meet most people's needs. Low consumption chips is where it's at. Their focus lies there for the moment.
rollo 22nd November 2013, 17:02 Quote
If they did not control 90% of said server market you might have valid points. Till that time you just don't. They have dropped some in the last year, but they need to lose another 10% by my maths before the competition board complains about them.

They have enough money to make one good chip for mobile. If they do, Samsung or Apple will jump at them - that's 200 million units of said product a year (that's basically the profitable section of the smartphone / tablet market).

ARM itself does not really do a lot, just sells its licences to others like Qualcomm, Samsung and Apple, who between the three of them control a lot of the ARM market (close to 100% in phones and pretty similar in tablets).

My gut feeling is if ARM is less than twice the performance per watt of the x86 chip, it's not going to catch on. It needs a big advantage. It needs to be three times the throughput per watt or better to make a big impact.

Facebook and Google will determine if ARM will be a player in the server market; if they take on ARM as their chip supplier it could take off in a big way.
bawjaws 22nd November 2013, 17:02 Quote
Quote:
Originally Posted by Dave Lister
I stand corrected, but it's still nowhere near what they said to expect. I guess I just have deep trust issues towards big corporations!

There's also a big difference between what you expect to acheive and what you actually do achieve - maybe it's just proved more difficult than they thought?
Gareth Halfacree 22nd November 2013, 17:29 Quote
Quote:
Originally Posted by rollo
They have enough money to make one good chip for mobile. If they do, Samsung or Apple will jump at them - that's 200 million units of said product a year (that's basically the profitable section of the smartphone / tablet market).
No, they won't. Samsung is an ARM licensee, and currently builds its own ARM chips for high-end devices while farming out other stuff to fellow ARM licensees. If Samsung switched to Intel chips, it'd be giving up on the ability to make its own processors and moving to a single-source vendor - which is not a winning move in any way, shape or form.

As for Apple, the company is again an ARM licensee and designs its own chips. It, too, isn't going to want to give that control up, and even if it did iOS is tied to the ARM instruction set architecture at a very low level; porting that to x86 is not straightforward, and then you've got the apps to worry about.

This, in point of fact, is why Intel is struggling in mobile. I tested a Motorola Razr i with an Atom chip in it once. Ran Android nice and fast, but a big chunk of the apps I used on my ARM-based Android devices simply wouldn't run on x86. It's a Catch 22 situation: Intel struggles to convince manufacturers to switch to x86 because software vendors support ARM, and software vendors support ARM 'cos the number of x86 devices in the mobile market is vanishingly small.
Quote:
Originally Posted by rollo
ARM itself does not really do a lot, just sells its licences to others like Qualcomm, Samsung and Apple, who between the three of them control a lot of the ARM market (close to 100% in phones and pretty similar in tablets).
"ARM doesn't do a lot, it just creates the entire technology behind every single ARM processor in the world." What would you want it to do?!
Quote:
Originally Posted by rollo
Facebook and Google will determine if ARM will be a player in the server market; if they take on ARM as their chip supplier it could take off in a big way.
Interesting you should name those companies: Facebook has been testing ARM in the datacentre for years, and last month had its head of hardware design and supply chain operations join the board of ARM server specialist Calxeda. Google is doing the same, but it's keeping that rather more quiet - unlike its Chinese equivalent, Baidu, which has trumpeted its use of ARM chips to all and sundry. It's not just low-complexity, high-concurrency web types making the move, either: when I was researching a piece on supercomputing for PC Pro, every single company or expert I spoke to except Intel said that ARM as the serial processor is the next big thing in high-performance computing.
rollo 22nd November 2013, 17:36 Quote
I listed those companies as that's what my own research had shown as the likely companies who could take on the ARM chips and actually have the manpower to recode as needed.
Gareth Halfacree 22nd November 2013, 17:40 Quote
Quote:
Originally Posted by rollo
I listed those companies as that's what my own research had shown as the likely companies who could take on the ARM chips and actually have the manpower to recode as needed.
Recode? What's to recode? Everything Google and Facebook uses is already ARM-compatible.
GuilleAcoustic 22nd November 2013, 17:54 Quote
Recompile ;-) ... all my Linux software can run on an ARM rig using Linux. That's prolly my next working / toying rig. I'm bored of x86, need something new :D - Freescale can have my money for an i.MX6 quad.
rollo 22nd November 2013, 18:26 Quote
Quote:
Originally Posted by Gareth Halfacree
Recode? What's to recode? Everything Google and Facebook uses is already ARM-compatible.

So their x86 software will auto-run on ARM? They would already be on ARM servers if that was the case.

The software will need recompiling and recoding to ARM if they plan to use it.

That's also why they are one of the few companies who could do it.

Big business won't, that's for sure, as they cannot afford to recode that software to ARM.
Nexxo 22nd November 2013, 18:28 Quote
If that is so, why is ARM making such good inroads into the server market (which it is very new to)? It is not that hard to recompile for ARM --just look at Windows RT.

There are rumours that Apple may migrate its OS X devices to ARM --it can then make its own CPUs (and tie people into its ecosystem on another level --no more Hackintoshes) and converge its iOS and OS X devices.

Meanwhile we already saw how Microsoft was disadvantaged in entering the mobile arena by Intel's complacency in developing a decent Atom CPU, hence Windows RT. Make no mistake --if Microsoft hadn't made the asinine move to lock the desktop, all that publishers would have to do is recompile their existing software to ARM and away you go: capable, long-lasting Windows laptops and tablets. But despite its dismal launch Microsoft is persisting with Windows RT. It does that for a reason.

There is no reason whatsoever for the mobile market to move from ARM to x86. It already has a capable platform, which is highly customisable and allows manufacturers a high level of control, and all their OS and apps run on it just fine. So what advantage does switching to a dependence on Intel, a total n00b in this market, bring them?

Intel has a lot of catching up to do. The only reason people are pining for powerful Atoms is because they want full-fat Windows devices with decent battery life. But the day Microsoft unlocks the desktop on Windows RT it's curtains for Intel in that department.
Gareth Halfacree 22nd November 2013, 18:54 Quote
Quote:
Originally Posted by rollo
So their x86 software will auto-run on ARM? They would already be on ARM servers if that was the case.
I thought you'd done research on this topic? They *are* running some ARM servers; they're not running more yet because when Facebook first got big there weren't any ARM servers to buy. And yes, their software will auto-run on ARM: compilation is automatic and the codebase architecture independent. Hell, my ARM-based Raspberry Pi is running the same webserver software as Facebook right now. To suggest only big companies have the "manpower" to "recode" for ARM is, I'll be honest, staggeringly ignorant.
Quote:
Originally Posted by rollo
The software will need recompiling and recoding to ARM if they plan to use it.
No, it won't - because that step has already been done, long ago.
Quote:
Originally Posted by rollo
That's also why they are one of the few companies who could do it.
Really? Care to explain, if it's such a massive challenge, how a single person with no financial backing whatsoever made Raspbian for the Raspberry Pi? I can run Firefox, LibreOffice, Minecraft, Apache, Midori, Scratch, Chrome and thousands of other packages right damn now on an ARM chip. Where did they come from?
Quote:
Originally Posted by rollo
Big business won't, that's for sure, as they cannot afford to recode that software to ARM.
Seriously, all I ask is a little basic research before you post. Start with Calxeda's white papers and client stories - see how many companies are *already* using ARM and you'll see how far off base you really are.

EDIT:
Actually, you've reminded me of a story: when I first started running an ARM home server, there was no precompiled DLNA client available. So, I grabbed the code to an open source one written with x86 in mind. Do you know how I "recoded" and "recompiled" that? Three commands: ./configure; make; make install. That's the process in its entirety. Total amount of time spent: five minutes, including downloading the source. Good job I didn't know it was impossible back then!

When I switched to a Pi, I didn't even have to do that. One command: apt-get install minidlna. Such challenge! Much recoding! Very doge!
Blackshark 22nd November 2013, 19:50 Quote
Erm... what are you young-uns doing with your PCs to need a faster CPU? Seriously? I can understand the continued need to upgrade graphics cards, but I doubt there is anyone here who regularly has their CPU pegged at 100% for more than a moment.

As long as we can encode video to the latest standard, at a reasonable speed, there is no other task I can think of that needs CPUs to double in speed every 18 months.
Pete J 22nd November 2013, 20:30 Quote
Quote:
Originally Posted by Blackshark
Erm... what are you young-uns doing with your PCs to need a faster CPU? Seriously? I can understand the continued need to upgrade graphics cards, but I doubt there is anyone here who regularly has their CPU pegged at 100% for more than a moment.

As long as we can encode video to the latest standard, at a reasonable speed, there is no other task I can think of that needs CPUs to double in speed every 18 months.
I was thinking of posting something similar but then I thought about the great '640K is enough for anyone' statement. The boundaries still have to be pushed for faster performance and more graphics-intensive games.
qualalol 22nd November 2013, 20:46 Quote
Quote:
Originally Posted by rollo
So their x86 software will auto-run on ARM? They would already be on ARM servers if that was the case.

The software will need recompiling and recoding to ARM if they plan to use it.

That's also why they are one of the few companies who could do it.

Big business won't, that's for sure, as they cannot afford to recode that software to ARM.

Most servers run Apache; nginx's share is also increasing. Both can be built for ARM.

Linux is available for ARM - e.g. Debian, which is a popular server distro, is available for ARM. It has precompiled packages for Apache, nginx, most scripting languages and in fact just about any Linux software, hence most websites can... run on ARM.

(Most websites are written in various scripting languages: PHP, Python, Ruby, Perl -- so they are architecture independent.)
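
A trivial sketch of what architecture independence means in practice: the very same script runs unmodified on an x86 box and an ARM one, and only the interpreter underneath is built per-architecture.

# Same script on any architecture; only the interpreter binary differs.
import platform
print(platform.machine())  # e.g. 'x86_64' on a desktop, 'armv6l' on a Raspberry Pi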

You probably won't get Windows and IIS running on an ARM server anytime soon, but Windows and IIS are not the biggest players in the server market.

Even if you have custom C/C++ software, most of it should be easily recompilable for ARM if you have followed good practices. If it can be built for x86 and x64, ARM isn't a huge step away. In fact, trying to keep up with changing build environments is much trickier than architecture agnosticism (changing compilers, especially with newer versions of MSVS, which like to break things, is in fact more of an issue than architectures).

I.e. it's a complete non-issue for most of the server market.

The desktop market on the other hand... Who knows. I'm quite happy with my ARM-based desktop board (although I've used Linux since forever); others might want to stick with Windows, but MS seem to be shooting themselves in the foot with Windows 8, so maybe that'll change too.
Corky42 22nd November 2013, 21:25 Quote
Quote:
Originally Posted by Pete J
I was thinking of posting something similar but then I thought about the great '640K is enough for anyone' statement. The boundaries still have to be pushed for faster performance and more graphics-intensive games.

Indeed they do; if we took an "it's good enough" attitude to everything we would still be riding around on horses. :D
bawjaws 22nd November 2013, 22:46 Quote
Quote:
Originally Posted by Pete J
I was thinking of posting something similar but then I thought about the great '640K is enough for anyone' statement. The boundaries still have to be pushed for faster performance and more graphics-intensive games.

I'm still rocking a C2D E8500 :D Haven't felt the need to upgrade (or rather, haven't been able to justify the price/performance cost-benefit analysis :D)
rollo 23rd November 2013, 00:26 Quote
As of September 2013, Intel held 90% of the server market, with AMD at 5% and others the remainder. At best ARM has 5% of the server market. One or two companies ain't gonna change that percentage to something massive.

If one of the big three companies in tech (Apple / Google / Microsoft) shows that ARM is viable and uses it in their servers then things might change. Intel and Apple are pretty well knit - hence the Iris GPU being made in the first place, as that's who it was made for. They even basically have it exclusive till Broadwell, from what I remember reading.

ARM, as I mentioned in my first post, needs to show a CPU that is 2-3 times the performance per watt for the market to take notice. Most people looking into ARM realise that as well. Its share price would be several times its current price if that was not the case. It's actually dropped from its peak of 1100p (£11) - currently it's just below £10. ARM themselves get approx 4.9 cents per CPU chip sold, for the record (it's in their Q3 financials).

Intel also hold a one-year process lead over Samsung, their biggest rival in that regard. Who's gonna make and sell these chips in the quantities that would really bother Intel? ARM themselves do not have that level of resources. (Let's say Google asked ARM Holdings for 100k of its chips tomorrow - could they really fulfil that order anytime soon?)

If Samsung or Qualcomm wanted to sell server chips they would surely have done so by now.

The question I'd put out there is how many really want to compete with Intel. A much bigger company in AMD already lost and ran away from that battle. It's not like we will see a release of Windows RT to the desktop space, which would help ARM.
Harlequin 23rd November 2013, 00:49 Quote
/facepalm.

Performance per watt? You can stuff three times the number of ARM chips into the same power budget as one Intel chip... that's what's happening.


Please stop the Intel fanboyisms - again.

Intel are very aware the world is moving away from the huge limitations of x86 - Intel don't want this and are doing everything, including (allegedly) bribing anything that moves not to buy into the new and cheaper-to-run tech.


And Google are already trying an ARM data centre, and the results are very good.


edit:

IBM are the world's largest supplier of servers:

http://www.forbes.com/sites/chuckjones/2013/08/29/ibm-regains-1-server-market-share-position/

and guess who they recently got a licence from to make chips?

That's right - ARM.
Fordy 23rd November 2013, 02:39 Quote
Quote:
Originally Posted by GuilleAcoustic
Quote:
Originally Posted by benji2412
Or perhaps pretty stagnant!

I meant interesting on the "chip designer side". They have to find innovative stuff instead of just shrinking the size and throwing more transistors at it.

Yup, I agree. As a student, I've never really seriously considered a career in chip design (or rather, considered and quickly dismissed) - but actually an end to "just make it smaller" could make it a more interesting career move.

Quite frankly, making things smaller is boring, IMO. If you can't make things smaller, you have to be more clever about the architecture, think of new ways to do things, and that's what's interesting. Don't get me wrong - of course Intel and ARM etc. continue to do 'clever things' as well as 'just making it smaller', but that's more to the tune of Haswell's power efficiency; the performance/price increase still comes from Moore's. At least as far as I understand it.
ch424 23rd November 2013, 02:49 Quote
So, everyone's gone off-topic on the whole ARM vs Intel thing...

The "Moore's law doesn't work any more" thing is old news. Cost/transistor has been rising since 45nm; it's only because a big semi person has admitted it that it's in the news now ;)
Initialised 23rd November 2013, 15:39 Quote
The last time I worked in a semiconductor fab on transistor design was during the 90nm node transition, so I may be a little out of touch. Something will come along - a new idea, like copper interconnects and chemical-mechanical polishing back then. Maybe a way of integrating graphene, or optical clocking. Or maybe instead of PCs, in future we'll hook up a collection of NUCs with different functions.

In the meantime, designers have to make their silicon do more for less power or size without a major node transition, until the next step is sifted out of all the promising but mainly impractical bits of research out there. On the other hand, maybe software needs to get smart rather than lazily assuming that next year's hardware will be that much faster.