Though we've moved from one core to multiple cores and to ever-smaller die sizes, one thing has stayed constant: by and large, processors have kept the same intrinsic design. L1 cache, L2 cache, pipeline. This basic arrangement has been with us for as long as the CPU itself, and though tweaks to the system have delivered steady gains, the underlying technology has stayed the same. Now it appears IBM will make the first real component change in a very long time: the SRAM normally used for cache is being replaced with DRAM.
The alteration is subtle, but its effects could be very potent. DRAM needs considerably fewer transistors than an equivalent amount of SRAM, taking up only a third of the footprint. That greatly reduces both energy usage and, consequently, thermal output, especially in standby modes: in standby, the chip should draw only a fifth of the current power, since far less energy is needed to keep data stored in DRAM.
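As a rough illustration of where that footprint gap comes from, the sketch below compares transistor counts for the two cell types. The figures are textbook assumptions, not IBM's numbers: a standard SRAM cell uses 6 transistors per bit, while a DRAM cell uses 1 transistor plus a capacitor per bit.

```python
# Back-of-envelope comparison of SRAM vs. DRAM cache data arrays.
# Illustrative assumptions only: 6T SRAM cell vs. 1T1C DRAM cell.
# (The actual area ratio is smaller than the transistor ratio,
# since the DRAM capacitor also takes up space.)

SRAM_TRANSISTORS_PER_BIT = 6
DRAM_TRANSISTORS_PER_BIT = 1

def cache_transistors(cache_bytes: int, transistors_per_bit: int) -> int:
    """Rough transistor count for the data array of a cache."""
    return cache_bytes * 8 * transistors_per_bit

size = 8 * 1024 * 1024  # a hypothetical 8 MB last-level cache
sram = cache_transistors(size, SRAM_TRANSISTORS_PER_BIT)
dram = cache_transistors(size, DRAM_TRANSISTORS_PER_BIT)

print(f"SRAM: {sram:,} transistors")
print(f"DRAM: {dram:,} transistors")
print(f"DRAM needs {dram / sram:.0%} of the SRAM transistor count")
```

Even before accounting for capacitor area, the raw transistor savings make clear why SRAM dominates die budgets as caches grow.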
The change has been long overdue, but until now the technology didn't exist to lay DRAM's transistor arrangements out directly on a logic die. As the die-shrink race has continued, though, chip makers have been running out of options: SRAM is simply too big. That size brings other problems as well, with transistor leakage and latency increasingly affecting overall chip designs as everything else scales down.
DRAM will allow a lot more cache, which, though slightly slower, will provide excellent buffering for large working sets such as games and heavy multitasking. According to IBM, the change is already in place for the move to 45nm, and we can expect to see it appear in the company's processors starting in 2008.
Could we be seeing IBM come back strong into the consumer chip market? Only time will tell. In the meantime, leave us your thoughts on DRAM-on-die in our forums.