
Madshrimps Forum Madness (https://www.madshrimps.be/vbulletin/)
-   Articles & Howto's (https://www.madshrimps.be/vbulletin/f6/)
-   -   AMD ingots, sliced "TBread" with the crusts cut off (https://www.madshrimps.be/vbulletin/f6/amd-ingnots-sliced-tbread-crusts-cut-off-2324/)

Liquid3D 28th May 2003 03:45

Quote:

Originally posted by jmke
....really can tick someone off, I find it very :super: you remain cool all along....I've seen the nick Liquid3D around for some time now, as I read on XS, both [M] and XS are homes away from home. What is your real home forum? www.ninjamicros.com ?
Just to answer that, jmke: actually my original forum was at MajorGeeks, then I tried ForumOC & OCIA, and I really didn't have a "home". If I can say any forum is my home, it would have to be Xtremesys, and then AOA (although I've only recently joined); I also participate at VRZone, TechPC, Bitbender, LiquidNinjas and finally [M].
But I have to say, I may have had the most fun yet here at Madshrimps. Now I understand why so many Xtremesys members are also LiquidNinjas and Madshrimps members as well. There's a great group of people here. Hallelujah!

I found some more comments I wanted to share with RobT (wafer thief (kidding)), not to defer responsibility for my theories (as I've said earlier), but to show some of the basis for my premises.

I was thinking back to the point at which I swayed from my conviction that manufacturers fabricate at a certain process (i.e. .13 micron) and that wafer yield is consistent across the entire model line as far as speed goes: all cores capable of reaching the same top-end performance, then assigned PR models by multiplier adjustment, voltage, etc. As RobT pointed out, photolithographic consistency across the entire wafer implies there should be no variation. However, there are many theories in direct contradiction to this. Ed Stroglio makes the following assertions on how to tell them apart:
"There are high-end and low-end TBredBs. They aren't all the same. The high-end ones on average perform several hundred MHz better than the low-end ones. Update 3/28/03: The gap between the two has narrowed somewhat with the latest (week 8 and thereafter) "J" chips. You can identify which type of TBredB it is by looking at code that begins the second line of coding on the processor. If you see a code like "AIUHB" that begins with the letter "A," that's a high-end TBredB. If you see a code like "JIUCB" that begins with the letter "J," that's a low-end TBredB"
Although this doesn't specifically attribute the differences to the location of the cores on the wafer, perhaps RobT can understand where my stupidities derive from. Between articles like this from authorities in the field, such as the editor of Overclockers.com, and many other sources, maybe RobT can see where I might become misdirected, and possibly forgive my (as well as Ed Stroglio's, by definition) "stupidities."
:grin:
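Purely as an illustration of the rule of thumb quoted above (and not any official AMD decoder), a trivial sketch that sorts a second-line stepping code by its first letter might look like this; the example codes are the ones Ed's article mentions:

```python
# Illustrative sketch only: classify a Thoroughbred-B by the first letter of
# the code on the second line of the CPU markings, per the rule quoted above.
def classify_tbred_b(code: str) -> str:
    if code.startswith("A"):
        return "high-end TBredB"
    if code.startswith("J"):
        return "low-end TBredB"
    return "not covered by the rule of thumb"

print(classify_tbred_b("AIUHB"))  # -> high-end TBredB
print(classify_tbred_b("JIUCB"))  # -> low-end TBredB
```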

Unregistered 28th May 2003 07:23

Your attempt to uncover the mysteries of the coding is indeed well intentioned; however, there are a few queries I would have made before others rather rudely rebuffed your hypothesis. Namely, if A is better than J etc. (meaning a high-end chip), why did all Durons start with the letter A in their 5-digit coding? Also, with regard to the 8th and 9th numbers, this one is also clear cut: Bartons have existed with numbers lower than their rating, and when the XP2200 TBred A first came out (and was the highest-rated CPU for AMD at the time), the CPU evaluated at Tom's Hardware had the numbers 25 in these two positions.

Unregistered 28th May 2003 09:46

Quote:

Originally posted by TeuS


*RobT is a chip designer and gets quite some info on the manufacturing.


Dust? Sorry, but those chips are made in a class 10 environment, if it isn't already a class 1 by now (the number represents the amount of dust particles bigger than 1 micron, for every cubic metre); you can have a few bad chips on a wafer due to such impurities, but half of them?? In that case I'd want fewer people smoking in the wafer stepper. …

Not saying his other points are incorrect, but this definition of a class 10 environment is totally wrong: a class 10 environment is a room which has 10 or fewer particles bigger than 0.5 microns per cubic FOOT! I think he has his definition confused with the ISO 14644-1 standard, whereby an ISO 1 room has fewer than 10 particles bigger than 0.1 micron per cubic metre. The reason for the difference is that the Class 10/100 etc. standard is American-based (hence the use of the cubic foot), whereas the ISO one is European-based.
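To make the two definitions concrete: both standards scale the allowed count with particle size, and the sketch below evaluates them side by side. The formulas are my reading of FED-STD-209E and ISO 14644-1, so treat the numbers as illustrative rather than authoritative.

```python
# Rough sketch of the two cleanroom definitions discussed above.
def fed_std_209e_limit(us_class: float, particle_um: float) -> float:
    """Max particles >= particle_um per CUBIC FOOT for a US (FED-STD-209E) class."""
    return us_class * (0.5 / particle_um) ** 2.2

def iso_14644_limit(iso_class: int, particle_um: float) -> float:
    """Max particles >= particle_um per CUBIC METRE for an ISO 14644-1 class."""
    return 10 ** iso_class * (0.1 / particle_um) ** 2.08

# US Class 10: 10 particles >= 0.5 micron per cubic foot, by definition.
print(fed_std_209e_limit(10, 0.5))   # -> 10.0 per ft^3
# Roughly the same air quality expressed as ISO Class 4, per cubic metre.
print(iso_14644_limit(4, 0.5))       # -> ~350 per m^3 (1 m^3 is ~35.3 ft^3)
```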

TeuS 28th May 2003 11:14

thx for that info, Rob should check that

by the way, feel free to register!

jmke 28th May 2003 23:02

Quote:

Okay, a few points.

One of your big questions seems to be why can I overclock a 1700 by 100%, but I can't do the same with a 2400. To me, this doesn't seem all that surprising. They're the same processor, right? Same design, same masks, same process. Some just come out faster and others slower. This isn't due to mistakes or impurities, it's normal. There is always random variation in the fab process, and some wafers come out fast and some slow. AMD tests the finished wafers to see how fast they end up. Now, if a certain fab is 'fast' for a month or two, there may not be any slow parts, but they still have to sell 1700's. So they just label them 1700, even though they may actually qualify for a higher speed. Other 1700's get the label because that's all they can do. There's no conspiracy, it's just AMD's way of selling all of the processors they manufacture, and getting more money for the good ones. If you get lucky, you'll buy a 1700 that's actually fast enough to be a 2400. You probably will not, however, buy a 2400 that is actually fast enough to be a '3300'. It's already towards the higher end of the speed distribution, so there isn't much left to gain by overclocking.
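As a toy illustration of the binning argument above (the speed distribution and cut-offs are invented for the example, not AMD's real numbers), one could simulate a spread of maximum speeds and label the dies the way the quote describes:

```python
# Toy speed-binning sketch; distribution and cut-offs are made up.
import random

random.seed(0)
BINS = [(2250, "2800+"), (2133, "2600+"), (2000, "2400+"), (1467, "1700+")]

def label(max_mhz: float) -> str:
    for cutoff, name in BINS:
        if max_mhz >= cutoff:
            return name
    return "reject"

# Suppose finished dies top out around 2100 MHz with a 150 MHz spread.
for mhz in (random.gauss(2100, 150) for _ in range(8)):
    print(f"{mhz:6.0f} MHz capable -> sold as {label(mhz)}")

# If demand for 1700+ parts outstrips the supply of genuinely slow dies, a
# 2200 MHz-capable die still gets the 1700+ label: the lucky overclocker's chip.
```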

A lot of what you refer to as 'insider' information is actually publicly available, you just have to look for it. I think many of the factual errors you made were not confidential information at all: the information has been published. I'm not saying it's always easy to find information, but you can't just throw up your hands and despair of understanding the process because you're not an insider. Many good books have been written that explain silicon fabrication techniques, and journal articles go into great detail about modern methods. It's work to wade through the literature, but if you're going to present yourself as someone knowledgeable about fabrication, then it rests upon you to make sure your facts are straight.

When you publish an article, people trust you to have checked your facts and assumptions. By presenting what you imagine to be true as actual truth, you are misleading your readers and doing them a disservice. I hope you don't feel I'm attacking you, that's not my purpose. You're curious and want to know more, like many of us, and that's great. I'm by no means an expert either, it's very difficult to become one. When you publish an article, though, you're effectively saying "I know what I'm talking about." It's not acceptable to do your fact-checking after the article is published, by that time the damage is done.

A few more minor points:

You state that the price for one mask is $1 million. Actually, for .13um, it's about $1 million for the entire set of masks. One individual mask is about $30,000.

The fact that Intel processors can be clocked at >3GHz while AMD hits the ceiling at considerably less than that has nothing to do with a conspiracy on AMD's part to keep you from overclocking, or anything else mysterious. The processors are fundamentally different designs. The P4 has a longer pipeline, so it does less per clock cycle, roughly speaking. This lets them clock the chips faster. Athlons just can't be clocked as fast as P4's.
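A crude back-of-envelope way to see the pipeline point: splitting the same total amount of logic into more stages shortens the per-stage delay, so the clock can rise, minus a fixed latch/skew overhead per stage. The numbers below are invented purely for illustration.

```python
# Back-of-envelope sketch: deeper pipeline -> shorter stage delay -> higher clock.
def max_clock_ghz(total_logic_delay_ns: float, stages: int,
                  overhead_per_stage_ns: float = 0.05) -> float:
    stage_delay_ns = total_logic_delay_ns / stages + overhead_per_stage_ns
    return 1.0 / stage_delay_ns  # delays in ns, so the result is in GHz

print(max_clock_ghz(10.0, 10))  # shorter pipeline -> ~0.95 GHz
print(max_clock_ghz(10.0, 20))  # longer pipeline  -> ~1.8 GHz, less work per clock
```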

There certainly are 'physical' limitations to making faster processors, but I wouldn't say there are 'metaphysical' limitations. Metaphysics is the branch of philosophy that explores the nature of reality. So I guess an AMD engineer could question whether her experience of processor design is reality, or just a dream, but I doubt that's what you meant.

Actually many of us think processor design is a dream job, but that's another matter altogether.

As always, take any of this information with a grain of salt. I think I'm correct in my assertions, but I may not be, and appreciate corrections.

Regards.


jmke 29th May 2003 13:00

SGtroyer @ Anandtech:

Quote:

I'll take a whack.

First, a little basic information. Any integrated circuit consists of transistors and wires to hook them up. For a given process, there are rules describing the minimum size of a transistor, the minimum width of wires, the minimum spacing, etc. Moore's Law has been enabled by the exponential decrease of all of those dimensions. So why is smaller faster? Intuition tells us that a signal travelling a shorter distance takes less time, but this isn't actually the dominant factor. Larger wires present a larger capacitive load, which means they take longer to transition from a zero to one, or one to zero. Smaller wires = less capacitance = faster chip. There's a second reason why smaller processes are faster, but first we'll have to talk about the transistor.

The transistor is what drives signals to zero and one, it does the actual work of a processor. A faster transistor can produce more current, discharging the capacitance more quickly. The speed of a transistor is proportional to the Width divided by the Length. A wider transistor provides a wider path for the electrons, so more electrons flow. A shorter transistor sort of produces a shorter path for the electrons, so more electrons flow. (It's actually a little more complicated than that, but we'll brush over the details.) So the fastest transistor is very wide and has a very short length. We can make both the width and length as large as we want, but the minimum is determined by the design rules I talked about above. So for fast circuits, you almost always use the minimum length, to produce the fastest circuit. Since the minimum channel length, as it's called, is the dominant factor in circuit speed, that's the rule we use to refer to a fabrication process. The .13um process has a minimum channel length of .13um.
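A rough way to put those two paragraphs together numerically (first-order approximations with invented values, not real process data): gate delay scales roughly as C·V divided by drive current, and drive current scales roughly with W/L.

```python
# First-order sketch: delay ~ C*V / I_drive, and I_drive ~ W/L (constants invented).
def drive_current_ma(width_um: float, length_um: float, k_ma: float = 0.5) -> float:
    # k_ma lumps mobility, oxide capacitance and overdrive into one made-up constant
    return k_ma * (width_um / length_um)

def gate_delay_ps(load_ff: float, vdd_v: float, current_ma: float) -> float:
    # fF * V / mA conveniently comes out in picoseconds
    return load_ff * vdd_v / current_ma

# Shrinking L from 0.18um to 0.13um and the wire load from 20fF to 14fF:
old = gate_delay_ps(20.0, 1.75, drive_current_ma(2.0, 0.18))
new = gate_delay_ps(14.0, 1.50, drive_current_ma(2.0, 0.13))
print(f"0.18um-ish delay: {old:.1f} ps, 0.13um-ish delay: {new:.1f} ps")
```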

So how does this relate to 157nm or 248nm, or whatever? .13u, or 130nm, refers to the dimensions of a physical part of the circuit, the channel length. 157nm refers to the wavelength of light used to produce the image to fabricate the part. 157nm light can create a picture of a .13um gate, or a .18um gate, or a 5um gate, if you so desire. If you try to go much lower, however, it gets really tricky, because light doesn't image well at all below its own wavelength. This is one of the challenges of going to smaller processes, although by far not the only one. By using light with a smaller wavelength, smaller features can be resolved. Smaller features enable faster circuits.
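The usual rule of thumb for what a given wavelength can print is the Rayleigh criterion, minimum feature ≈ k1 · λ / NA. The k1 and NA values in this sketch are plausible assumptions, not any particular fab's recipe.

```python
# Rayleigh-criterion sketch: smallest printable feature ~ k1 * wavelength / NA.
def min_feature_nm(wavelength_nm: float, k1: float = 0.5, na: float = 0.6) -> float:
    return k1 * wavelength_nm / na

for wl_nm in (248, 193, 157):
    print(f"{wl_nm} nm light -> roughly {min_feature_nm(wl_nm):.0f} nm features")
# Pushing k1 lower with tricks like OPC and phase shifting is what lets fabs
# print features below these naive limits.
```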

Is that clear? It's complicated stuff. I'll try and find a link that goes into more detail on device physics, for those who are interested.

On a side note, I want to say that I'm very suspicious of the theory that chips from the center of the wafer are much faster than the outer chips. I won't say it's not possible, but I've never heard such a thing. In general, there is a small variation between circuits on the same chip, a somewhat larger variation between chips on the same wafer, a much larger variation between wafers in the same lot, and a very large variation between wafers from different lots. In other words, I would expect a much larger variation between a chip produced on Wednesday and one produced the next Friday than between a chip at the center of the wafer and one at the edge of the same wafer.
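One way to picture the hierarchy described above is as nested random offsets: largest between lots, smaller between wafers in a lot, smallest between dies on one wafer. The sigmas below are invented purely to show the shape of the idea.

```python
# Toy model of the variation hierarchy: lot > wafer > die (all numbers invented).
import random

random.seed(1)
NOMINAL_MHZ = 2000.0

for lot in range(2):
    lot_offset = random.gauss(0, 120)        # lot-to-lot (largest spread)
    for wafer in range(2):
        wafer_offset = random.gauss(0, 60)   # wafer-to-wafer within a lot
        dies = [NOMINAL_MHZ + lot_offset + wafer_offset + random.gauss(0, 20)
                for _ in range(4)]           # die-to-die (smallest spread)
        print(f"lot {lot}, wafer {wafer}: " + ", ".join(f"{d:.0f}" for d in dies))
```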

It seems there are a lot of theories floating around that are based on intuition or guesswork, but with no physics to back them up. This is complicated stuff, and you can't just reason based on what seems right.

Magazines like EE Times are great. I read it too. But EE Times mostly talks about the business of electrical engineering, not the technology. EE Times doesn't really teach you anything, it's only useful if you already understand the technology. I'd recommend looking at some textbooks to learn about device physics and fabrication. That will give you the theoretical background to understand what you read in the journals. If you live near a university and have access to their library, that would be the easiest route. The other great source is online course materials from some universities, such as Stanford, MIT, and Berkeley. They have notes and often even videos of the lectures online. It's great what you can come up with for free.

Again I prattle on...


jmke 29th May 2003 13:02

and PM@Anandtech also shared some very good info!
This is in reply to SGtroyer:

I agree with everything Steve wrote above me. I'll try to add additional comments around what he wrote so eloquently.

Quote:

--------------------------------------------------------------------------------
And insofar as my ascribing higher-quality cores to the center of the wafer, there still seems to be some disagreement. Lynx516 wrote: "The whole wafer is used, but because the probability of errors in the wafer increases the further out you go, you are more likely to get dead chips, not ones which work but don't OC so far." This in fact promulgates that theory, which was so strongly criticized, and this was one reason I believed Austin's Guide to be accurate. It was from similar descriptions that I extrapolated perhaps the "reason" the outer wafer yielded slightly "less overclockable" cores; this was perhaps how different speed processors were chosen. And it seems that theory is sound in its premises, ergo its conclusion.
--------------------------------------------------------------------------------
A long time ago, in an industry very young, the idea that the fastest chips came from the center of the wafer was correct. Now, it's not. On an 8" or 12" wafer, there is a small but somewhat higher probability of a failed chip on the periphery of the wafer, typically due to dislocations/defects in the silicon crystalline matrix caused by stress/strain, but once you get in from the immediate edge, the statistical likelihood of failure (or of a slower-than-typical part) is no higher in any one part of the wafer than another. Generally, what determines a chip's speed on the same wafer more than anything else is which part of the stepped image it is in. The stepper works by "stepping" a lithographic image of usually multiple chips (depending on the size of the chip) like a stamp. Using the case of having four chips in a stepped image, usually each one will consistently be faster or slower than the others depending on where it is in the image. For example, the upper left one may consistently be the fastest. This has to do with transistor orientation on the critical paths, and the incidence of light into the mask that creates the transistors for that path.

Still, Steve's right. Variation is smallest between parts on the same wafer, no matter where they are on the stepped image. Then there is a larger statistical variation between wafers in the same lot. And the largest between different lots.


Quote:


--------------------------------------------------------------------------------
1.) In my describing the origin of the .13 micron measurement, I defined this as the gate "WIDTH" across those "lines" along which voltage (binary information) travels in the core's microcircuitry. I've read twice in the last few days that this measurement actually describes the gate "LENGTH". If the latter is in fact true, could this be explained to me, using a visual analogy?
--------------------------------------------------------------------------------
Width and length on a transistor are, in my opinion, backwards from how a normal person would probably term them. The "length" of a gate is the distance between the source and the drain and is the smallest resolvable feature on a given process. When people talk about what process generation something is, they are referring to the transistor length. The width of the transistor is the distance that the gate travels across the transistor in between the source and the drain. I have hunted for a diagram, but I can't seem to find one.
So I hand-drew one and put it on my website. It's extremely rudimentary, but it will work for purposes of illustration. It's here

I've always thought this length/width thing is backwards... but not as backwards as the whole "source/drain" "current direction" thing.


Quote:


--------------------------------------------------------------------------------
2.) In my description of the 157nm process (actually it's 248nm for .13 micron): is this the ultraviolet light wave-LENGTH? Hence, the shorter the wavelength, the smaller the gate "WIDTH/LENGTH"? And the smaller these lines imaged, then etched into the wafer surface as the resist is washed away, the less voltage required and the faster these voltage (binary?) pulses travel? So that the smaller the die shrink, the less voltage is required, and the faster the processor becomes?
--------------------------------------------------------------------------------
Steve answered this very well, but I'll add my comments.

157nm is the wavelength of the light that comes out of the laser. The shorter the wavelength, the smaller the features you can lithographically draw with the light. If you try to draw a transistor that is smaller than the wavelength of the light used to draw it through the mask, the image of the transistor will be blurry. So if you are drawing a 0.13um square using a 193nm laser, it will end up looking like a circle. Through the use of various tricks such as OPC, and to a lesser extent phase shifting, you can still get the shape that you want without upgrading the laser to a smaller (and more expensive) wavelength... up to a point anyway. Link to OPC company webpage and to another one on Synopsys's page with a more graphic example.

As far as voltage, signalling and die shrinks, I wouldn't have termed it the way that you did, but what you said is fundamentally correct. It's worth noting that reducing voltage is something that is forced on the industry due to reliability concerns. The circuitry doesn't "require" less voltage; it's more like if you give the circuit a higher voltage, it will break either immediately or in time. Voltage reductions reduce the operating power of the part and improve reliability. But it is much harder to design robust circuitry for lower voltages, even with the improvements in process technology.
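For the power side of that point, the standard first-order relation is dynamic power ≈ α·C·V²·f, so even a modest voltage reduction pays off quadratically. The values in the sketch are invented purely to show the V² effect.

```python
# First-order CMOS dynamic power: P ~ alpha * C * V^2 * f (all values invented).
def dynamic_power_w(alpha: float, switched_cap_nf: float,
                    vdd_v: float, freq_mhz: float) -> float:
    return alpha * (switched_cap_nf * 1e-9) * vdd_v ** 2 * (freq_mhz * 1e6)

print(dynamic_power_w(0.2, 30, 1.75, 2000))  # ~36.8 W at 1.75 V
print(dynamic_power_w(0.2, 30, 1.50, 2000))  # ~27.0 W at 1.50 V, same clock
```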

