Tabula Rasa Semantics in Microprocessor Burn-in, Part III

CPU by KeithSuppe @ 2003-08-04

Tabula Rasa Semantics in Microprocessor Burn-in, Part III:
There are a great many overclockers who swear by the benefits of Burn-in, and even from a prima facie perspective it "seems" logical. Yet the truth of the matter is that the moment current begins to pass through an electronic device, its finite existence is predestined.


Tabula Rasa, and Burn-in

3-part article!
Part I - Part II - Part III


I'm going to conclude this article by addressing a somewhat controversial issue. I say somewhat controversial because, even though there's not an engineer who would condone it, the subject and practice of "Burn-in" has been a long-held doctrine among enthusiasts. Recently I came across an article at Anandtech concerning Burn-in, and whether the end-user's practice of Burn-in yields real-world performance increases. There are a great many overclockers who swear by the benefits of Burn-in, and even from a prima facie perspective it "seems" logical. Yet the truth of the matter is that the moment current begins to pass through an electronic device, its finite existence is predestined. Here's a quote from the Anandtech article:


There is no practical physical method that could cause a CPU to speed up after being run at an elevated voltage for an extended period of time. There may be some effect that people are seeing at the system level, but I'm not aware of what it could be. Several years ago when this issue was at its height on the Internet, I walked around and talked to quite a few senior engineers at Intel asking if they had heard of this and what they thought might be occurring. All I got were strange looks followed by reiterations of the same facts as to why this couldn't work that I had already figured out by myself. Finally, I was motivated enough to ask for and receive the burn-in reports for frequency degradation for products that I was working on at the time. I looked at approximately 25,000 200MHz Pentium CPU's, and approximately 18,000 Pentium II (Deschutes) CPU's and found that, with practically no exceptions at all, they all got slower after coming out of burn-in by a substantial percentage.



Although the results are conclusive, we seem to have stumbled into another conundrum. With so many people experiencing positive results after following the universally accepted burn-in methods, which usually entail a prime-computation or algorithm-type program and/or increased voltage (many also employ a graphics-intensive program, killing two pieces of silicon with one algorithm), it would be difficult to dispel the occultism of "Burn-in," and I'm certainly not going to try. I would, however, like to offer newer enthusiasts and experienced overclockers alike a point of view which is fully grounded in electrical theory and microprocessor design. There's not one article or abstract authored by an engineer which advises, or empirically verifies, the benefits of end-user Burn-in; I've done an exhaustive search in this respect and haven't found any conclusive technical material supporting burn-in. When overclocking, the ideal scenario is simply running a processor at its theoretical limit, not necessarily above it.
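
For readers who have never run one of these programs, the sketch below illustrates the principle behind a Prime95-style torture test: perform heavy, deterministic arithmetic in a loop and compare every pass against a reference result, so an unstable or overstressed CPU betrays itself with a mismatch. The workload, duration, and names are illustrative assumptions, not a real burn-in tool.

```python
# Minimal sketch of what a torture-test style burn-in program does: run
# heavy, deterministic arithmetic over and over and compare each pass
# against a reference result. A stable CPU returns identical results every
# time; an overstressed one eventually produces a mismatch.

import math
import time

def workload() -> float:
    """Deterministic floating-point heavy loop."""
    acc = 0.0
    for i in range(1, 200_000):
        acc += math.sqrt(i) * math.sin(i)
    return acc

def torture_test(duration_s: int = 3600) -> bool:
    reference = workload()              # result from the first pass
    deadline = time.time() + duration_s
    while time.time() < deadline:
        if workload() != reference:     # any deviation is a computation error
            return False
    return True

if __name__ == "__main__":
    print("stable" if torture_test(3600) else "computational error detected")
```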

There is of course a ceiling above the default speed, which is set by the multiplier and Front Side Bus values the manufacturer predetermines (locks). In AMD's case, one can often manipulate (increase) this frequency by changing the multiplier in conjunction with the FSB. In Intel's case only the latter (FSB) is adjustable if one wishes to increase or even decrease the CPU's frequency (except in the case of ES chips). Overclocking is simply circumventing the manufacturer's constraints, whether multiplier or FSB, raising or lowering one or the other, and in many circumstances Vcore as well. Where voltage is raised, however, thermal demands increase, and the experienced overclocker constantly monitors temperature (as should anyone using a PC). The counterpoint being: if the ideal overclocking scenario is to take the slowest processor from a specific core architecture (design) and run it as fast as the fastest processor in that line without increasing Vcore, then one is attempting to keep voltage and temperature at a minimum. In extreme overclocking, the first part of the equation is similar, except one utilizes Vcore much more readily to attain even greater speeds.
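
To make the multiplier/FSB relationship concrete, the short sketch below derives core clock as FSB times multiplier. The 200 MHz bus and 12x multiplier correspond to a stock 2.4 GHz Northwood-C; the raised FSB value is simply an illustrative assumption.

```python
# Core clock = front side bus (base clock) x multiplier.
# Example values: a 2.4 GHz Northwood-C runs a 200 MHz bus with a 12x
# multiplier; the 250 MHz figure below is just an illustrative overclock.

def core_clock_mhz(fsb_mhz: float, multiplier: float) -> float:
    return fsb_mhz * multiplier

stock = core_clock_mhz(200, 12)        # 2400 MHz at default settings
overclocked = core_clock_mhz(250, 12)  # 3000 MHz; Intel locks the multiplier,
                                       # so only the FSB term can be raised

print(f"stock: {stock:.0f} MHz, overclocked: {overclocked:.0f} MHz")
```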

In this scenario thermal dissipation becomes the greater challenge, and is proportionately related to the clock speed achieved. The accomplished overclocker seeks to overclock beyond the fastest processor's default speed, yet focuses on keeping the voltage and temperature at or even below default settings. Experienced overclockers focus on stability and reliability whilst attaining the highest speeds possible, and without raising Vcore. This is perhaps more a matter of careful silicon selection than anything else, which can only come with experience. Almost anyone can pour LN2 on a processor and push unsafe voltages through it, running a quick WCPUID, SiSoft Sandra, and a few other benchmarks, but I don't see much of an accomplishment in that. LN2 has no "real world" benefits, as its effects upon the microchip are so temporary as to be insignificant, and its benchmarks (in my opinion) shouldn't even be considered. Of course this becomes somewhat of a philosophical issue, as there are those who may consider phase-change cooling "unnatural" as well. Phase-change is at least permanent, and definitely more permanent than air or H2O cooling in prolonging a processor's life. I would therefore consider any permanent solution acceptable, and if one were to find a method for permanent LN2 overclocking, I'd have a different outlook on it.
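
The link between clock speed, Vcore, and heat can be put in rough numbers with the textbook CMOS dynamic-power approximation, P ~ C x V^2 x f (a standard relation, not something taken from this article). The ratios below are illustrative assumptions, not measurements.

```python
# Dynamic power in CMOS scales roughly as P ~ C * V^2 * f, which is why a
# Vcore bump heats the chip far more than a frequency bump alone. The
# effective switched capacitance C cancels out when comparing ratios, and
# all ratios below are illustrative assumptions.

def relative_power(v_ratio: float, f_ratio: float) -> float:
    """Power relative to stock, given Vcore and clock ratios."""
    return (v_ratio ** 2) * f_ratio

print(relative_power(1.00, 1.25))   # 25% overclock at stock Vcore -> ~1.25x heat
print(relative_power(1.10, 1.25))   # same overclock, +10% Vcore   -> ~1.51x heat
```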

What necessitates reiteration here is the basis on which overclocking exists. Theoretically, the experienced overclocker is seeking to exploit the manufacturing process, in that all CPUs in a particular model line (i.e. Northwood-C) are manufactured from one set of photo-masks, one SSOI wafer process (purity), and stringent lithographic controls. There is a concerted effort to attain strict uniformity along the entire production line. Every core should be capable of running as fast as the line's fastest processor, plus additional headroom; whether the slowest CPU or the fastest, they are consanguineous, products of the same Fab.

Once you realize these similarities exist, one's focus should be to understand the core architecture, especially the manufacturing processes behind it, and what technologies the manufacturer is implementing in a given core. The goal is not to overstress the processor such that computational errors occur. What's even more insidious is the unseen damage occurring at the transistor, intra-architectural level, where error correction is in a constant state of recompense, adversely affecting performance. It is therefore essential one understands that raising the voltage for burn-in and/or overclocking essentially stresses the processor, and that in the long term, and/or in a very high Vcore environment, it causes irreversible thermal damage:


Heat adversely affects the processor in many ways. Firstly, as the temperature in the conducting pathways of the processor increases, the atoms in the lattice structure of the conductor vibrate violently; this increases their collision cross-section open to the charge-carrying electrons and therefore increases the electrical resistance of the conductors, causing further heat dissipation. This vibration of atoms also causes electromigration, in which the vibrating atoms cause pathways between conducting strands to form, causing short-circuiting on a microscopic scale. These inter-conducting pathways cause system instability and eventually permanent damage to the processor itself.
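
The resistance increase described above can be estimated with the linear temperature-coefficient model R = R0(1 + alpha * dT). This is the standard textbook relation, not a figure from the article; the coefficient below is the approximate value for copper interconnect, and the temperatures and resistances are illustrative assumptions.

```python
# Rough model of the feedback loop described above: conductor resistance
# rises with temperature (R = R0 * (1 + alpha * dT)), which increases I^2*R
# dissipation, which raises temperature further. alpha is the approximate
# textbook value for copper; the other figures are illustrative.

ALPHA_CU = 0.0039  # per degree Celsius, approximate for copper

def resistance(r0_ohm: float, delta_t_c: float) -> float:
    return r0_ohm * (1 + ALPHA_CU * delta_t_c)

r_cool = resistance(1.0, 0)    # 1.00 ohm at the reference temperature
r_hot = resistance(1.0, 40)    # ~1.16 ohm after a 40 C rise

print(f"+{(r_hot / r_cool - 1) * 100:.0f}% resistance (and I^2*R heat) at +40 C")
```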
