Intel's Steve Smith.
This afternoon, on the last day of IDF Spring 2007 in Beijing, bit-tech, along with a few other European journalists, got the opportunity to sit down with Steve Smith, Vice President and Director of Digital Enterprise Group Operations at Intel, to ask questions on a variety of topics. Over the past six years he has overseen Intel’s ramps of next generation technology on desktop, mobile and server, from 130nm to 45nm.
The following is in no particular order, because questions were thrown out at random, but the most interesting answers were recorded for bit-tech.
When will we see a quad-core notebook?
This will arrive with the Penryn generation, but is not to be expected at the initial launch. 45nm, Penryn-based quad-core notebooks will obviously have a slightly higher thermal envelope than the dual-core variants, so don’t expect one in a thin and light; they will more likely appear in desktop replacements.
How does Intel deal with DRM being the middle man between consumer and content provider?
All media is in the process of going digital, so the digital home group is focusing on this and is working with the digital entertainment industry to provide a legitimate business model. Intel has helped them understand what has transpired in music and digital distribution to form a successful business model.
Intel supports secure data distribution, but it’s up to the publisher to decide how it’s implemented.
When will we see discrete graphics from Intel again?
Intel’s engineers have had video-oriented projects going on for a while, but there is no product timeframe yet.
Does Nehalem use CSI?
By moving to a future generation of silicon, Intel can make fundamental power savings by reorienting the board signalling to use fewer silicon nodes. Intel’s serial “point to point” technology is synonymous with CSI, but whether it’ll be explicitly discussed as “CSI” in the future remains to be seen.
As for the fundamental differences between Intel’s “point to point” (or CSI) interconnect and HyperTransport, Intel believes its technology offers higher performance and higher bandwidth by being specifically tailored to its products.
Will we see a change in 3D graphics with an integrated core that will enable alternative rendering techniques like ray tracing?
It’ll be an evolution of the current “programmable” G965 architecture. The integrated cores in general are fundamentally different from a CPU architecture.
What is the expected adoption ratio of quad-core to dual-core in the future?
For servers, there has been, and will continue to be, a very strong uptake because of the task-driven, high workloads typical of the segment.
In the desktop environment, fewer applications are inherently set up for multi-threading, so it’s a more limited audience until we see a progression of gaming and media software that utilises it. Dual- and quad-core will co-exist for quite a while. In the short term, Intel expects to continue seeing single-digit percentage uptake for its quad-core series.
Has there been any revision in Intel’s future in the multi-GPU arena, given that Nvidia is content to lock down SLI and AMD has now owned ATI for a little while?
Intel would love to support both CrossFire and SLI, but it’s down to the driver support decisions of Nvidia and ATI. Intel already has the hardware support.
What does Intel see as the future of CPU performance, given it’s already found that clock speeds can’t keep scaling and that adding ever more cores offers diminishing returns?
Intel is committed to driving single-thread performance, whether by increasing the clock rate, widening the machine or adding optimisations, as well as by going increasingly multi-core.
Optimisations include SSE4, and you typically have algorithm specialists in media, such as DivX. They’ve come up with a new version of DivX that gives a 2x performance increase with SSE4. It took a few weeks’ worth of work for Intel’s engineers to advise DivX, but it wasn’t a huge project; compared to a graphics driver it’s a very small amount of work.
How do you decide what goes into these optimisations then?
Intel has an architecture team that works with OS developers and other software developers, getting constant feedback on which algorithms are needed and should be included in the next SSE set in order to accelerate the next calculation. If enough of the industry asks for it, and there’s wide enough future performance potential, it’ll make an SSE set.
Intel’s engineers have a certain complexity budget, set by the cost of integration. This leads to a priority list of candidates for inclusion in the next set of SSE instructions. It’s a rolling priority list year on year, so if something didn’t make SSE4 it might make SSE5.
Intel seems to have changed its position on overclocking, what’s the view now?
Whilst Intel still suffers from the problem of CPU rebadging, despite laser etching and a hard-coded CPUID string, we have listened to the enthusiast community and have seen that it has been growing consistently. Two to three years ago we changed our thinking and allowed overclocking on our Extreme Edition processors, and we will continue to do that.