"Where do microprocessors come from, Daddy?" That's an awkward question we all must answer at some stage in our careers. What mysterious process converts elemental silicon into elemental forces like Intel's Itanium or Motorola's PowerPC? Let us explore the wonder that is semiconductor creation.
"When a customer and a vendor love each other very much . . ." they make a commitment to produce new chips. It's a big commitment, too. New chips generally cost a few million dollars to design, but that's small beer compared to what it costs to build a new chip-making factory. Fabs or foundries, as they're called, cost upwards of $2 billion to build. You could buy a lot of cruise missiles for that kind of money or several small Caribbean republics (island not included).
The amortization sucks, too. That $2 billion foundry will be obsolete in less than five years, so you're looking at more than $1 million of depreciation every single day. Very little of that cost goes into the silicon itself. You're mostly paying for the exotic equipment inside, including the neat-o air conditioners in the clean room.
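If you want to check that depreciation figure yourself, the arithmetic fits in a few lines of Python (a minimal sketch, assuming simple straight-line depreciation over five years):

```python
# A $2 billion fab, written off evenly over five years
cost_of_fab = 2_000_000_000          # dollars (assumed round figure)
lifetime_days = 5 * 365              # ignoring leap days

print(f"${cost_of_fab / lifetime_days:,.0f} per day")   # ~$1,095,890
```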
In the beginning
Silicon chips all start out with, well, silicon. It's one of Earth's basic immutable chemical elements (element 14 in the periodic table, for those keeping score at home) and is basically purified beach sand. We're not likely to run out of this resource anytime soon. Tell your in-laws, by the way, that silicon is not the same as silicone. Silicone makes good weather stripping, a lubricant for squeaky hinges, and a source of income for cosmetic surgeons. It's not good for making microprocessor chips.
Raw silicon is grown into crystal ingots, which look like giant silver bolognas. Then it's sliced into exceptionally thin wafers about 8 to 12 inches (200 to 300mm) across, depending on the diameter of the ingot. Wafer (and ingot) diameters are standardized so that anyone's wafers can be processed in anyone's fab. A 300mm wafer is about as big around as a dinner plate and large enough for about 500 average-size chips.
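Where does that "about 500" figure come from? Here's a back-of-the-envelope sketch in Python using a common rule-of-thumb estimate: gross wafer area divided by die area, minus a correction for the unusable round edge. The 130mm2 "average" die size is an assumption for illustration:

```python
import math

diameter = 300.0   # wafer diameter in mm
die_area = 130.0   # assumed "average" die size in mm^2

wafer_area = math.pi * (diameter / 2) ** 2                 # ~70,686 mm^2
gross_dice = wafer_area / die_area                         # ~544 if nothing were wasted
edge_loss = math.pi * diameter / math.sqrt(2 * die_area)   # rule-of-thumb edge correction

print(round(gross_dice - edge_loss), "dice per wafer")     # ~485
```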
From this point on, everything else happens in the fab's fancy clean room. "Clean" understates the case; these rooms are astonishingly, unbelievably sanitary. The best clean rooms are 1,000 times more pure and unpolluted than a hospital operating room. Stainless steel is everywhere; the floors and ceiling are perforated to promote air circulation; horizontal surfaces are sloped to avoid trapping dust; and the yellow lighting is there because it won't expose the light-sensitive chemicals used to print the chips.
Clean room workers wear the now-familiar bunny suits. Looking like astronauts, these people are fully encapsulated and learn to recognize coworkers by their eyes. Getting in or out of a bunny suit takes about 15 minutes and involves walking across sticky floor mats and through an air shower. Breaks need to be carefully planned.
Let's see what develops
If you're a photographer or develop film in your own darkroom, you'll already be familiar with what comes next. Silicon chips are made the same way that black-and-white prints are made. The entire fab is basically an enormous one-hour photo lab. The silicon wafer is the photographic print paper and the chip design is the negative. Mass-producing chips involves exposing the same negative a few hundred times over the entire surface of the wafer. When the wafer's been completely covered with chip "prints," you're done.
A whole lot of things make this process more complicated than it sounds. First off, silicon wafers aren't photosensitive, so simply exposing them to light doesn't do anything. The wafers have to be coated with photoresist, a light-sensitive chemical concoction. After the wafer is evenly coated with resist—which is itself a tricky process—you can expose it by shining light through your chip's film "negative." That casts a chip-shaped shadow and imprints one copy of the chip onto the resist-covered wafer. After you wash away the exposed resist using ultrapure water and some other chemicals, you've made one layer of one chip.
The idea here is to build up a three-dimensional stack of silicon, metal, and insulators. Chips are wired in 3D; they're not flat. They appear flat to the naked eye—extraordinarily flat, in fact—but they're actually more like layered wedding cakes. A low-cost 8-bit microcontroller might have 8 to 10 layers, while an exotic Athlon or Itanium has more than 40. Each of these is called a mask step or mask layer, and they all have to be done in sequence, from bottom to top.
Which brings us to the next problem. Each chip design has multiple layers, each with its own film negative. These layers need to be exposed one after another onto the same piece of silicon, lining up exactly. If the registration isn't perfect, the layers of silicon, metal, and insulation will blur and the chip won't work. Unfortunately, you won't know that until after the chip's done and tested, and by that time you've already spent the time and money. Those chips wind up as paperweights, tie tacks, and sparkly souvenirs.
Superman, we need you
The other problem is that the film is invisible. Really. The patterns on each layer of the negative are so small and so fine that they're invisible—not just to the naked eye, but to anything. The features are literally smaller than the wavelength of visible light. Shining normal light through a film layer would be like aiming a spotlight at a spider web; it won't cast a shadow. No shadow, no developing photoresist.
X-ray vision comes to the rescue. Instead of using visible light, chip makers use X-rays, extreme ultraviolet light (EUV), or laserlike beams of electrons (e-beam) aimed at the film layers. Even these science fiction techniques only forestall the inevitable. Film features are vanishingly small and getting smaller. Some chip makers now rely on interference patterns, like moiré patterns, to "trick" the equipment into casting sharp shadows from blurry images.
How small are we talking? Current state-of-the-art processing can create lines in silicon just 90 nanometers wide. That's 0.09 microns (micrometers), or 0.0000035433 of an inch. It's also only about 300 atoms across. We're talking really small. This is what's known as the "feature size," and it describes the smallest feature that can be resolved or, in other words, the thinnest wire that you can make.
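Numbers that small defy intuition, so here are the conversions spelled out in Python. The 0.3nm figure for the width of a silicon atom is a convenient round number, not a precise lattice constant:

```python
feature_nm = 90

print(feature_nm / 1000, "microns")               # 0.09
print(feature_nm * 1e-9 / 0.0254, "inches")       # ~3.54e-06
print(round(feature_nm / 0.3), "atoms across")    # ~300, assuming ~0.3nm per atom
```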
Feature sizes shrink in discrete steps because only a few companies produce the breathtakingly expensive chip-making equipment. Before 90nm production the smallest reliable size was 130nm (0.13 micron), and before that, 180nm (0.18 micron). If you go back enough years, features were all bigger than a micron. Chip-making technology has improved by several orders of magnitude since the 1960s and shows no sign of letting up.
When people talk about a chip made in "point one-three" they're talking about the feature size (0.13 micron). When they talk about "200 millimeters" they're talking about the wafer size. There's no relationship between wafer size and feature size; you can make any size features on any size wafer. In practical terms, though, companies almost always use the largest wafers and the smallest features possible. Here's why.
Economics 101
Smaller features (finer lines) are a good thing because they make for smaller chips. Smaller chips run faster because the electricity has less distance to travel. More important, smaller chips mean more profit. And more profit is a good thing.
For an example, let's look at a 200mm silicon wafer, which has about 314cm2 of surface area. That's about the size of a salad plate. Let's say your chips are square (most are) and they measure 10mm on a side—that's 100mm2 per chip. If all that silicon could be used you could fit 314 chips on your wafer. Alas, wafers are round and chips are square, so you can really only get about 279 chips on a wafer. But if you could reduce the size of your chip by just 10% to 90mm2, you'd fit 312 chips on a wafer. That's 12% more chips on the same amount of silicon. Not a bad deal.
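The same rule-of-thumb die-per-wafer estimate from earlier makes the payoff easy to reproduce. Exact counts depend on how the squares are actually packed, so this sketch lands a few chips away from the figures above, but the roughly 12% gain comes out the same:

```python
import math

def dice_per_wafer(diameter_mm, die_mm2):
    """Gross dice minus a rule-of-thumb correction for the round edge."""
    wafer_area = math.pi * (diameter_mm / 2) ** 2
    edge_loss = math.pi * diameter_mm / math.sqrt(2 * die_mm2)
    return int(wafer_area / die_mm2 - edge_loss)

before = dice_per_wafer(200, 100)   # ~269 chips at 100mm^2
after = dice_per_wafer(200, 90)     # ~302 chips at 90mm^2
gain = 100 * (after - before) / before
print(f"{before} -> {after} chips per wafer ({gain:.0f}% more)")
```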
Realistically, shifting to the next-smaller feature size slashes the size of a chip by about half, doubling the number of chips produced per wafer. Smaller features also reduce power consumption and heat dissipation, so finer lines are a win all the way around. The only downside is cost. Outfitting your fab with the latest lithography equipment to make these fine lines is not an inexpensive proposition.
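That "about half" is just the square law at work: shrink both dimensions and the area drops with the square. A quick check against recent process nodes:

```python
for old_nm, new_nm in [(180, 130), (130, 90)]:
    area_ratio = (new_nm / old_nm) ** 2
    print(f"{old_nm}nm -> {new_nm}nm: die shrinks to {area_ratio:.0%} "
          f"of its old area, so ~{1 / area_ratio:.1f}x the chips per wafer")
```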
Expensive real estate
Because most of the cost of chip making is in the equipment, not the silicon, your profitability depends entirely on volume. It's fairly accurate to say that the first chip costs you $2 billion to make; all the chips after that are free. Once you've paid for the fab, the labor and materials are, uh, immaterial. That's why smaller chips don't cost less, per se. They cost less because they increase the volume of product your $2 billion factory can produce. Silicon is like real estate: you're not paying for the dirt. You're paying for the space.
Lather, rinse, repeat
So now we've made one chip on a big wafer; how do we make more? That's the job of a stepper, a machine that carefully moves the wafer side to side until it's been completely covered with images of our new chip. As we saw before, a few hundred images will fit on a typical wafer. A few dozen more will partially fit and overlap the edge of the wafer. That's okay; we'll cut them off and discard them later.
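You can mimic the stepper with a little grid walk: march a square exposure field across the wafer and count which images land entirely on the wafer and which hang over the edge. This sketch assumes one 10mm chip per exposure; real steppers usually expose a reticle holding several chips per shot:

```python
import math

WAFER_RADIUS = 100.0   # mm, i.e., a 200mm wafer
FIELD = 10.0           # mm, one square chip per exposure (assumed)

full, partial = 0, 0
steps = int(WAFER_RADIUS // FIELD) + 1
for i in range(-steps, steps):
    for j in range(-steps, steps):
        corners = [(x, y)
                   for x in (i * FIELD, (i + 1) * FIELD)
                   for y in (j * FIELD, (j + 1) * FIELD)]
        inside = sum(math.hypot(x, y) <= WAFER_RADIUS for x, y in corners)
        if inside == 4:
            full += 1      # the whole image lands on the wafer
        elif inside > 0:
            partial += 1   # overlaps the edge; cut off and discarded later

print(full, "complete images,", partial, "partial ones at the edge")
```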
Why not just use one big piece of film to expose the entire wafer at once? The problem is focus. As any photographer knows, the bigger the picture, the blurrier the image. That's why big-screen TVs don't look so great up close. Chip images need to be ultra-sharp, so a blurry "mega mask" wouldn't cut it.
Technically, today's chips are already slightly blurry at the edges. High-end chip designs compensate for this by putting less-critical circuitry in the corners. Intel's old i960MX microprocessor was octagonal. It was so big its corners had to be cut off.
Bringing out the diamonds
Once all our chips are exposed, rinsed, and exposed again, it's time to cut them apart into, well, chips. Up until now, our entire wafer has been handled all at once. Each chip gets a quick test while still on the wafer to see whether it works. If too few of them do, the entire wafer gets tossed. If enough pass, the chips get cut apart. Using a diamond-edged saw, the wafer is diced up into individual chips and the "silicon sawdust" gets vacuumed away to avoid contaminating the finished chips.
A chip that's been cut loose from its wafer is called a die, and several die together are also called die, not dice. There's no particularly good reason for this grammatical inconsistency.
After each chip is tested to see if it works, it's usually tested again to see how fast it runs. Surprisingly, a 500MHz processor and a 700MHz processor aren't really different chips. They're probably neighboring chips from the same wafer that happen to run at different speeds. Slight variations in chemistry, contamination, or the phase of the moon seemingly can affect a chip's speed. It's common for microprocessor companies to sort their chips into at least two or three speed grades. The fastest 10% get sold at a premium price, while the slowest ones go to the bargain basement—or get called something else.
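In software terms, speed binning is just a sort with thresholds. Here's a toy sketch; the speed grades and measured frequencies are invented for illustration, and a real test program measures far more than clock speed:

```python
measured_mhz = [712, 695, 534, 701, 488, 655, 720, 590, 610, 705]

def speed_grade(mhz):
    # Mark each part at the fastest grade it safely clears (grades assumed)
    if mhz >= 700:
        return "700MHz (premium price)"
    if mhz >= 600:
        return "600MHz"
    if mhz >= 500:
        return "500MHz (bargain basement)"
    return "scrap, or rebadged as something slower"

bins = {}
for mhz in measured_mhz:
    bins.setdefault(speed_grade(mhz), []).append(mhz)

for grade, parts in bins.items():
    print(f"{grade}: {len(parts)} parts")
```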
Moore's Law
No discussion of semiconductors would be complete without a gratuitous mention of Moore's Law, usually misquoted and generally misunderstood. So, for completeness, here goes. In 1965 Electronics magazine published an article by Fairchild's head of R&D, Gordon Moore. In it, he speculated that his firm, and probably others, would be able to squeeze twice as many transistors onto a given area of silicon every year. Ten years later (that would be 1975) his prediction was right on the money, but he watered down his doubling rate to once every two years. Over the years a number of people, including Moore himself, have predicted the end of his eponymous "law," which is really just an empirical observation. Moore never said anything about speed, performance, prices, computers, the Internet, or world peace—just packing density. All the other claims made in his name are the result of overzealous (or undereducated) marketing people.
Chip makers commonly lie about a chip's features. Well, maybe not lie exactly, but omit certain facts. You see, embedded processors with different features or peripherals often aren't different chips at all. Vendors will produce a single silicon design but then package and market it as different chips. For example, one version might have two UARTs and Ethernet while another version has five UARTs and no Ethernet. Chances are, they're really the same chip. Sometimes the "missing" features are disabled with a laser or by blowing a fuse. Sometimes they're disabled with firmware. As often as not, they aren't disabled at all, but just aren't mentioned on the data sheet. Programmers have occasionally found "secret" peripherals that aren't connected and aren't mentioned in the manuals.
Production quality tends to improve over time, so faster chips will become more plentiful. Sometimes it's not in the vendor's best interest to let customers know that, however. Even if half of the mature parts run at the peak speed, the vendor might arbitrarily limit the number of fast chips to, say, 15% of its volume to maintain an air of exclusivity. Enterprising customers have discovered this and over-clock their parts to gain a speed advantage.
Most chips are no bigger than your fingernail yet they contain the power and performance of room-sized mainframes from yesteryear. Any smaller and they'd be cheaper than the plastic package they're housed in; any bigger and they'd give off enough heat to melt themselves. Current semiconductor features are only a few hundred atoms thick in places. Surely we must be approaching the end of the road. But it doesn't look that way; new developments in lithography, epitaxy, and molecular manipulation should keep this family tree growing for many generations to come.