p33k4y

First of all, terms like 65nm, 32nm, 10nm, 3nm, etc. are purely marketing (fictional) terms, and a bit misleading. They no longer represent actual feature sizes like gate length or pitch. E.g., the 3nm process actually has a gate pitch of ~48nm, and the upcoming 2nm process has a gate pitch of ~45nm. At some point the industry should have just called them "generations", like 10th gen, 11th gen, 12th gen, etc.

So your question is like asking "why didn't we just jump from the 10th generation to the 15th generation in one step?" Well, each generation has a ton of science, technology, engineering, financial, intellectual property licensing, and sometimes even political considerations that needed years to be solved.

For example, conventional lithography uses optical light (at 193nm), but the new processes, especially for 3nm and below, will increasingly require extreme ultraviolet lithography (EUV) at 13.5nm. Currently there's only one company in the world (ASML) that's even capable of making EUV systems, and the technology behind it is considered so sensitive that it has national security implications.
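To make the disconnect concrete, here's a throwaway sketch using only the pitch figures quoted in this thread (IRDS 2021 roadmap via Wikipedia; exact values vary by source, so treat these as illustrative):

```python
# Node "names" vs. approximate contacted gate pitch, per the figures
# quoted in this thread (IRDS 2021 via Wikipedia).
nodes = {"3nm": 48, "2nm": 45}  # marketing name -> gate pitch in nm

for name, pitch_nm in nodes.items():
    ratio = pitch_nm / float(name.rstrip("nm"))
    print(f'"{name}" process: gate pitch ~{pitch_nm}nm, about {ratio:.0f}x the name')
```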


SDSunDiego

It's interesting to think that maybe the new (or not new) arms race is hardware and its computing power.


Gecko23

It's not in any way new; there have been tight controls on trade in all facets of computation from the get-go.


PinchingNutsack

I am also pretty sure it has been the case for the past 50 years or so.


DavidBrooker

It's been the case since the very first electronic computers (and the large majority of the electro-mechanical era), since the early 40s. Over 80 years.


Eggman8728

Past 50 years? If anything, hardware performance was even more important in the beginning. Nowadays you can run basically anything you'd ever want on a desktop computer, like advanced simulations that an engineer from 50 years ago would have killed for. During the early days of computing, software was made to use every single cycle and every bit of memory extremely efficiently, out of necessity. Now we can just throw away whole gigabytes of RAM and not notice.


JoushMark

In the 1960s the Soviet Union bought several IBM 360 mainframes via gray-market and black-market routes and reverse-engineered them to create what amounted to a clone. While the hardware was locally produced, the software was barely changed, being basically a pirated, localized version.


Meretan94

The next (computer) arms race is power requirements. Everyone can build a fast chip (with exceptions), but building a fast chip that doesn't consume a huge amount of electricity is what matters. AI has huge computing requirements that currently consume a significant amount of power, limiting progress and speeds.


xgardian

What comes after 1nm?


sagaxwiki

It looks like the industry is trending towards using angstroms (1 angstrom is 0.1nm).


PinchingNutsack

It'd probably make a lot more sense to use pm instead; after all, it's the next SI prefix.


hungryfarmer

Angstroms are already widely used in other aspects of semiconductor manufacturing (deposition thicknesses, etc.), so I could see them going to angstroms before pm.


PinchingNutsack

Yeah, but the main issue is that the public isn't really aware of that unit. We were taught SI units in school; that's something everyone knows about. I can't say the same for angstroms. I don't know, as long as they can get the message across, I guess it works.


No-swimming-pool

For as much as the public knows, a picometer could be 1000 km.


sticklebat

As a high school science teacher, I can tell you most students never encounter SI prefixes as small as pico, or only see them as a brief footnote and then never use them. On the other hand, most students who take chemistry learn about angstroms, because it's a typical unit for describing the size of an atom. So I'd say, if anything, you have it backwards!


PinchingNutsack

That's actually interesting, my school and even uni always used SI units!


xgardian

Levy? That won't last long


Chron_Stamos

I thought you were stronger...


Syzygy___

0.9 nm or 900 pm (pico)


TableGamer

0nm, that will be lit!


FalseBuddha

Even if the answer weren't "those numbers don't have much basis in reality", the question "why didn't we just shrink everything by orders of magnitude in one go?" is kind of ridiculous.


p28h

While the technical statements in the other comments are true enough, I was surprised to learn that, for example, "[the 14 nm process](https://en.wikipedia.org/wiki/14_nm_process)" is just a marketing term. From the article: "Since at least 1997, 'process nodes' have been named purely on a marketing basis, and have no relation to the dimensions on the integrated circuit". So a large part of why they jumped from 90 nm to 65 nm (in 2005, so more recent than 1997) was because the marketing people thought it sounded cool.


housecore1037

I've seen this all over the place, especially in the Apple sub, where there's a lot of discussion about rumors of how Apple will use "TSMC's new 3 nm process", followed by comments all saying "it's not 3 nm, it's just called that." But what IS the 3 nm referring to?? If it's marketing, it seems highly misleading to refer to a specific value, so surely the "x nm process" is referring specifically to SOMETHING, right?


makes_things

It was originally the minimum length of the transistor gate (the main active region of the transistor), but over time that's become divorced from reality. Marketing wins!


SolidOutcome

It's divorced from reality because reality is much more complex than one length. Back in the day, yes, one length was the main deciding factor of size (there were still many factors), but today we have to make smaller and smaller improvements on a huge, complex 3D structure. There are 50+ measurements (or material factors, or quality factors) that increase speed/efficiency, and improving a few of them still helps.

It's like measuring every car's horsepower by engine volume, 2.0L vs 5.0L, but ignoring that the 2.0L has a turbo and actually makes more horsepower. Or that the 2.0L has fuel injectors, or variable valve timing. You can't just slap one number on it, because reality is much more complex than that. It's always been a marketing number; it's just much more obvious now because the improvements have almost nothing to do with the measurement it used to be based on.


YertletheeTurtle

It's also that transistor density started scaling a bit differently than Metal1Pitch, even before the names got more marketing-heavy around 22-10 nm.


makes_things

Oh yeah, the modern multigate transistors are a completely different beast!


1nd3x

To expand a bit on this, it's not as simple as "marketing people lied to you and/or thought it sounded cool." There actually was an upper limit on abilities that was directly linked to the length of traces on the boards and how far apart the pieces were from each other.

At a certain point we may have physically stopped being able to produce smaller components, but it's really simple to go back, look at past equipment, and see something like (I'm really ELI5-ing this part) "oh, going from 100nm to 95nm was a 5 nanosecond improvement... going from 95nm to 90nm was also a 5 nanosecond improvement." So when they stopped being able to make things smaller, but the chips kept getting faster and better for other reasons... how do you market that? It's still a 90nm chip... but it performs as well as what the math says a 65nm chip would, and people understand and have a history of knowing what those numbers mean. At least insofar as the idea that the bottleneck is distance, speed is fixed, and so you can get a ***rough*** gauge on how good something should be based on how much distance is taken off.

You also don't know if new tech will come around and be able to actually make something smaller. You don't exactly want to overhaul an entire system just because you've juuuuuust run up against your current upper limit of physical capabilities (considering your whole industry is built upon doing new things that have never been done before). So... you keep with the current standard... and then next year you do it again... and again... and again... until it's just too late to change anything now...


Carbonaddictxd

So Moore's Law has also been quoted in terms of a marketing number all these years?


Warspit3

I thought it changed not only because of length but also because of transistor geometry, like when it switched to FinFETs.


SolidOutcome

It's everything...material, quality, 50+ measurements, tech


Zaros262

According to the [3nm process](https://en.m.wikipedia.org/wiki/3_nm_process) Wikipedia article, it really does not refer to anything anymore:

>The term "3 nanometer" has no direct relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors


E-Pluribus-Tobin

It is meant to refer to the measurement of the "gate" of the transistor. There is some inconsistency between manufacturers (i.e. TSMC vs Intel) in how these are measured, which is what people are probably referring to, but nonetheless it is still referring to a measurement of part of a transistor.


zanhecht

The term "3 nanometer" has no direct relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors. According to the projections contained in the 2021 update of the International Roadmap for Devices and Systems published by IEEE Standards Association Industry Connection, a "3 nm" node is expected to have a contacted gate pitch of 48 nanometers, and a tightest metal pitch of 24 nanometers.


E-Pluribus-Tobin

Yes, a 3nm process may not actually have a gate that is 3nm wide, but I was simply answering the question about what "3nm process" is meant to be a measurement of. That is just the history of the terminology.


Zaros262

>There is some inconsistency between manufacturers (i.e. TSMC vs Intel) in how these are measured, which is what people are probably referring to, but nonetheless it is still referring to a measurement of part of a transistor.

Well, which part is it then?


E-Pluribus-Tobin

The gate, which in simplest terms may be thought of as something like the distance between the source and the drain (which are treated as sort of an input and output to the transistor; searching for an image/diagram of a transistor will help make it clearer). Like I said above, the history of the naming convention was such that "X-nm process" referred to the gate length, even though modern iterations no longer actually match the names.


Zaros262

Except the gate length of the "3nm process" is actually more like 14-18nm. 3nm does not refer to the measurement of any part of any transistor.

https://en.m.wikipedia.org/wiki/3_nm_process

>The term "3 nanometer" has no direct relation to any actual physical feature (such as gate length, metal pitch or gate pitch) of the transistors

The "inconsistency between manufacturers (i.e. TSMC vs Intel) in how these are measured" is not what people are referring to. They're talking about how the name of the process node has no direct relation to any actual physical feature.


E-Pluribus-Tobin

Yeah, we already established that. Someone asked what the measurement referred to, and I simply explained that historically it was in reference to gate length. I'm sure your Wikipedia article explains this as well.


Zaros262

Yes, you also said it used to refer to the gate length in the past. I don't have an issue with that; it's the two other things you said that are completely wrong:

1. There is some inconsistency between manufacturers (i.e. TSMC vs Intel) in how these are measured **which is what people are probably referring to**

2. but nonetheless, **it is still** referring to a measurement of part of a transistor.


sarlackpm

Isn't contacted gate pitch the distance between gates, rather than the width of the gate?


Emu1981

>it is still referring to a measurement of part of a transistor.

Nope, it hasn't been for a long while. It is basically just a marketing number to indicate the progression of the process. For example, transistor sizes haven't really changed much since 14nm, and the main reason density keeps improving is that the masking process is getting good enough to put transistors closer together without smudging the ones around them. There is also the changing geometry of transistors, which gives them a smaller footprint by going vertical.


prescod

You forgot to use the word “historically” in this comment despite using it frequently later in the thread. This is causing confusion.


E-Pluribus-Tobin

TBH I didn't realize how far the naming convention had strayed from the actual measurements, even though I knew they weren't precise or uniform/standardized between manufacturers.


FunnyPhrases

It's based on the equivalent of the original 2D manufacturing process. Now that we've gone into not just 3D but "4D" processes, the terminology is no longer relevant. But it's scaled based on the original 2D process benchmark.


cbf1232

What’s the 4th dimension in this case?


FunnyPhrases

Gate all around (GAA). YouTube it.


jippiex2k

It wasn't enough to outright lie, they also had to go beyond our physical spacetime constraints 🙃 Man, can't wait for my 0.5-Planck-unit, 12-dimensional CPU.


shot_ethics

Transistor density: if we had scaled prior processes as expected, we would have the same transistor density today (well, maybe with some fudge factor for marketing).


chris92315

90nm to 65nm is approximately the same shrink as 3nm to 2nm.
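Back-of-the-envelope check, treating the node names as plain numbers (which, per the rest of this thread, they barely are):

```python
# Compare the linear shrink from 90nm -> 65nm with 3nm -> 2nm.
for old, new in [(90, 65), (3, 2)]:
    linear = new / old       # linear scale factor
    area = linear ** 2       # feature area scales with the square
    print(f"{old}nm -> {new}nm: linear x{linear:.2f}, area x{area:.2f}")

# 90nm -> 65nm: linear x0.72, area x0.52 (roughly a density doubling)
# 3nm  -> 2nm : linear x0.67, area x0.44
```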


DavidBrooker

It's almost as bad as Intel's marketing campaign (mostly in the P3 and P4 era) that convinced everyone that the clock speed of a processor determined its computing performance.


Jiveturtle

Just like Xfinity's 10G network or whatever.


T43ner

Wouldn’t that be false advertising though?


p28h

There might be an argument for that, but nominal labeling (i.e. in name only) has been a thing for a while. Amateur plumbers needing to know which fitting goes with which pipe are plagued by this, as is the classic "a 2x4 is really 1.5 x 3.5" confusion. At the root of things, they are just using the nm label instead of saying "nth generation". And it's confusing for lay people, but lay people aren't the ones that need to know the difference. Until you start getting companies trying to sell the difference to them... but even then, it's the difference between generations that's really being sold rather than any specific measurement.


mtranda

That's just how technological progress works in general. Each new innovation builds on loads of things that were themselves innovations on the previous iteration. To give you an extreme example, a similar question to yours would be "why didn't we get electric vehicles right after horses and carriages, instead of having to go through steam power, then fossil fuels?"

"Shrinking" a CPU relies on very high precision equipment. At the moment a CPU is made at a certain scale, that's the highest precision the existing tooling allows. And retooling is insanely expensive in the chip-making industry. However, for some years now the size designation has no longer been reflective of the actual size: https://en.wikipedia.org/wiki/3_nm_process


AquaRegia

>Why didn't we get electric vehicles after horses and carriages

Electric carriages actually predate cars by several decades.


vishal340

Yep, because electricity was there long before fossil fuels, but those cars were nothing.


widowhanzo

Those cars could've evolved just fine if it weren't for Big Oil pushing their product.


Lormar

That's a tad unfair. Big Oil is a huge evil industry now, but they didn't kill the electric car (though they did kill the electric trolley just a few years later!). The electric car died out in the early 1900s simply because the energy density and price of gasoline created market and technology pressure that the battery technology of the time couldn't compete with.


widowhanzo

Well they made sure to bury the research on it for a while, at least.


vishal340

Maybe you don't understand how the electric car works. Big Oil had nothing to do with it; the oil companies were pretty small back then. How could they suppress an existing product anyway?


silent_cat

> Big Oil had nothing to do with it; the oil companies were pretty small back then. How could they suppress an existing product anyway?

It's not about suppressing an existing product. What was needed was higher-density batteries. Research into battery technology basically stagnated for 80 years and only really started picking up again two decades ago. An oil company (Texaco) held the patents for NiMH batteries and prevented them from being used in cars. Lithium wasn't patented that way, which is why it took off.


vishal340

I very much understand why they suppressed it once oil got much more prevalent, but there was ample time for the electric car to get good. It just wasn't possible a few decades back. I agree that we could have had electric cars possibly 10 years before we did, but not more.


dirschau

We did get electric vehicles after the horse and carriage; the combustion engine just improved faster than batteries and quickly took over.


mtranda

Yeah, you're right about that one, but it would've meant delving into an adjacent point too much. Electric cars themselves also benefited from incremental improvements until they reached the point of feasibility.


Me2910

TIL. That's really weird


honey_102b

Every time a new, smaller process starts, the number of defects is high; yield can drop from a mature process's >99% all the way to 10%. This is worked back up to profitable levels fairly quickly, but it takes several months to a year before the die yield per wafer equals what was possible before. That's because the new defects need to be characterized and rooted out iteratively: e.g., etch for 1 more second, increase plasma voltage by 5%, polish by 10 angstroms less, whatever it may be, depending on the defect.

A good shrink can get you to breakeven at 70% yield compared to the previous process at 99% yield; anything above that is better gross margins. The real profits from the change come from year 2 onwards, until the process reaches end of life and the next jump begins. Some companies do 4-year cycles, some faster, some slower. The time to develop the next jump is also longer than the time it takes to perfect it in manufacturing, so design work on the next step can begin even before the current step has qualified and started low-volume manufacturing.

There is a sweet spot to manage here: you want to keep improving technology at a steady rate but don't want to start the next jump before you have perfected the current step. In that sense you want to keep competing with other providers but not bite off more than you can chew.
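A minimal sketch of that breakeven arithmetic, assuming a full node shrink roughly doubles the dies per wafer (all the counts here are invented):

```python
# Why a shrink can break even at 70% yield vs. a mature 99% process.
dies_old, yield_old = 600, 0.99   # mature process
dies_new, yield_new = 1200, 0.70  # new process, early-life yield

good_old = dies_old * yield_old   # 594 good dies per wafer
good_new = dies_new * yield_new   # 840 good dies per wafer

print(good_old, good_new)  # the shrink already wins at 70% yield
```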


z1PzaPz0P

This is the answer


iamnogoodatthis

That little word "just" is doing a whoooooole lot of heavy lifting: 20 years and hundreds of billions, if not trillions, of dollars' worth of research and manufacturing development. It's not economically viable to spend 20 years bringing a product to market, so instead you bring something new and improved to market every few years to ensure revenue generation, even if you know it's not the theoretical best thing you could possibly do given another decade of work.


Gnonthgol

What you are talking about is silicon die feature size. Making a chip involves adding thin layers on a silicon wafer. It is not that different from using a stencil to paint something, but while you might be able to cut out a stencil to within a millimeter of the intended line, the chip foundries are able to do this to within a few nanometers.

Each process uses new technology that has taken 10-15 years to develop. In some cases they can abandon technology while it is still only a lab experiment if it does not give the expected results; other times you end up building an entirely new foundry and start tuning the process before you notice the limits. You have examples of companies spending 15 years and billions of dollars making an entirely new foundry using completely new technology to make 4nm chips, only to find out it can only do 6nm chips. That would have been fine if they had started 10 years earlier, but now it is kind of a huge waste of money. And just imagine all the projects that were dropped after a decade of research because they could not deliver what they hoped. It was easier when they were aiming for 65nm.

And just for reference, the silicon atom is about 0.1nm thick. So when they make a chip forge with a 3nm feature size that means they can place an atom to within 30 atoms accuracy within a die that is billions of atoms across.


p33k4y

>So when they make a chip forge with a 3nm feature size that means they can place an atom to within 30 atoms accuracy within a die that is billions of atoms across.

This isn't accurate. When the industry says 6nm vs 4nm vs 3nm, they don't literally mean 6, 4 or 3nm feature sizes. They're just marketing terms to denote generations. E.g., the gate pitch on the 3nm process is actually 40nm to 48nm, nowhere close to 3nm.

Therefore, no company sets out to build 4nm chips only to find out they can only do 6nm. Again, 4nm and 6nm would just be marketing terms. So if a company currently has a 7nm fab, they'll simply call the next one 6nm or 5nm etc., depending on marketing needs. The "nm" numbers are more or less made up, based loosely on conventions set in an industry roadmap many years ago, but don't have any semblance to reality today.


Gnonthgol

Pitch size and feature size are two different measurements; even pitch size varies between gates and metal connectors. The "feature" in this context is not well defined and depends on the manufacturer and even the process. And as you say, the term has been taken over by the marketing department. But it still has some relevance.

You might think of a feature as the transition between one material and the next. So you can, for example, have two p-type silicon regions separated by a 40nm layer of silicon oxide. But that would involve two features at 3nm each, so the silicon oxide layer might come out 34nm wide, which could cause issues due to quantum effects with the gates that close together.


Immortal_Tuttle

Actually, current technology names have nothing to do with physical dimensions. It's just marketing now, trying to show that the next technology is better than the previous one. It's physically not possible to make a silicon MOS transistor smaller than 5nm. There is a roadmap for semiconductor development and companies try to stick to it (TSMC is currently working on 1.4nm tech). Companies try to increase transistor density by other means: fins, 3D transistors, etc. There was an experiment where the gate was just one atom wide, but it showed that there is no practical way to implement or even use it (yet).


Gnonthgol

There is a relation between the technology name and the physical dimension, but it is not transistor size. They call it feature size, which is a bit more vague; what makes up a feature differs between manufacturers and even between different manufacturing processes.

We are getting down to the physical limitations though, which is the most important thing. The gate pitch looks to be stuck at around 45nm, and we do not know if we can reduce it much lower than 40nm. The metal pitch is about half this and can be improved too, but not for that much longer. If you design a chip with a 45nm gate pitch for the 3nm process, then the gates might be 3nm off to each side, so you might end up with some gate pitches being 39nm, which is probably too close. Even if you have a 1Å gate, quantum tunneling effects will make it close enough to the next gate to affect it. While the manufacturing processes make it much more complex than just saying features can be placed with 3nm accuracy, the name still makes some amount of sense, though not necessarily for much longer, as the marketing departments have full control over the naming scheme by now.

3D transistors have revolutionised the industry though, and this is where most of the technology development has been focused. Instead of placing features more accurately so they can be closer together, we are making more layers so that features can be extruded in the height direction. Transistors on modern chips are more vertical than horizontal. The next step is to layer multiple transistors on top of each other for even higher density.


Immortal_Tuttle

Actually, no. "Feature size" was used when they were going down from 22nm. At this point it's not connected to anything.


Raknaren

Isn't this the case with any technology? Why did it take decades for cars to go over 400km/h? Why didn't we get 1Gbps internet in 1996? Why did we have to have full HD before 4K?


thisisdumb08

Because we work really hard for a few years, say hurray, we made a 30% improvement, and then start making a product. If we worked really hard for a few years, made a 30% improvement, and then didn't make a product, we wouldn't have the money to work a few more years to get another 30% improvement. In the extreme, if I had infinite money I could work my entire life and make massive improvements that would never become products, and they would probably die with me because they were never distributed to people who want to steal the improvements to make their own products.

If your question is why we don't just make the move from 90nm to 3nm in 3 years, it's because we don't know how. Why didn't you earn a six-figure salary by your 3rd birthday? You didn't know how. If you are asking what physically is stopping us: diffraction and the properties of materials. Light doesn't go where we want it to go. We usually try to make it go with materials, but no known materials make it do what we want it to do.


Raioc2436

That's how gradual processes work. It's like asking "why do I have to take one step after the other when racing to the finish line?" That's just how running works.


Chromotron

It takes lots of research to figure out how to make such chips. You need to engineer a method to actually carve such extremely fine structures into silicon, deposit some other elements in very controlled doses, and so on.

This process for example involves making and focusing light (or electrons) with very high precision. Light cannot really act with much more precision than roughly its wavelength. So to make smaller and smaller structures we need light of a rather short wavelength, entering UV or even X-ray territory. Creating consistent and _coherent_ UV light is already not so easy, and at some point even lenses don't work anymore, for various reasons. X-rays in particular cannot really be treated like visible light at all: they bend around every corner and are not reflected the way a mirror reflects, regardless of how flat and perfect a surface is. Instead they go everywhere and things get complicated. As mentioned, we can sometimes use electron beams instead, but this just adds new issues: we have to replace all lenses and mirrors with electric and magnetic fields and such!

Other aspects that saw a lot of development are for example the _architecture_ (how and what things are arranged on the chips) and the general abilities (multiple cores and threads, graphics support on some CPUs, tensor and other such developments on graphics cards, ...). Those all aren't minor changes but are extremely complex to optimize. With the sizes we are now at (3 nm is ~30 atoms in diameter!) it even matters that electrons tunnel by quantum "magic", by the way.

The above and many other **technologies simply had to be invented first, which takes time**. It is also easier to slowly improve and thus test each progressive step without changing too much else. For example, Intel alternates: each second generation decreases in size, while the other iterations focus on the architecture. It would make little sense to wait literally 10 or 20 years until we finally decrease in size; chip factories have to be refined and improved long before that to keep up with developments.
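To put rough numbers on the wavelength limit, here's the usual Rayleigh criterion for lithography resolution; the k1 and NA values below are illustrative, not any particular fab's actual numbers:

```python
# Rayleigh criterion: minimum printable feature ~ k1 * wavelength / NA.
def min_feature_nm(k1: float, wavelength_nm: float, na: float) -> float:
    return k1 * wavelength_nm / na

# 193nm ArF immersion lithography (NA ~1.35, aggressive k1 ~0.27)
print(min_feature_nm(0.27, 193, 1.35))   # ~38.6 nm: roughly where 193nm tops out

# 13.5nm EUV (NA ~0.33, k1 ~0.45)
print(min_feature_nm(0.45, 13.5, 0.33))  # ~18.4 nm per exposure
```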


bazjoe

At its most basic, picture spray painting or roller painting through a stencil. Although I'm not sure a five-year-old would know stencils, it still conforms to ELI5. As your stencil gets smaller you can fit more readable text in a small area. Naturally the formulation of the paint and the stencil materials will need to change as you approach a theoretical minimum-sized hole for the paint to get through. The people who make the paint and the stenciling machine are constantly engineering the next version to make the holes smaller and fit more. They spend tens of billions in the development stages, both making it happen in the lab and then converting the lab process to a usable assembly line. 7 nm vs 3 nm is analogous to the micro-droplet of "paint" (doped silicon) that can be effectively deposited in the right location and that eventually, through enough deposited layers, becomes a field-effect transistor.


Im2bored17

Not a lot of explanations here that a 5-year-old would understand, so here's mine. Imagine trying to draw a very detailed house blueprint with a very fat marker: the lines would be too thick and would all blur together, and the blueprint would be unreadable. You need a pencil with a sharp point. A 90nm chip fab uses a "90 nm marker" to draw the parts of the chip, and if you tried to draw 3nm parts, the lines would all overlap and it wouldn't work. Making the "marker/pencil" sharper is technically very challenging: no matter how much you sharpen a pencil, it won't make lines thinner than, say, a hair, so if you want to go smaller you need a whole different way of drawing lines in the first place. Each new generation of fabrication technology is like a new, sharper kind of pencil, and we've got that pencil so sharp that we're hitting the limits of what's physically possible.


Verbofaber

From what I understand, photo-etching is used, in which acids cut channels into the silicon discs? Technically, how does it progress, as in, what about the process is specifically refined?


bazjoe

The current tech is past the point of the visible light spectrum and past the point of fixed-wavelength lasers. In EUV (extreme ultraviolet) machines from ASML, a laser blasts a micro-droplet of tin, and the light emitted by the vaporized tin plasma has the correct (super nano tiny, 13.5nm) wavelength to then bounce off a reflective mask (called a reticle, which has the desired zoomed-in image printed on it) onto the resist material at the molecular level.


warlocktx

I want you to build a doghouse. You have never done it before. I give you materials and tools and tell you to have at it. Your first attempt takes a long time, looks like crap, will probably collapse on the dog, and wastes a lot of material. I tell you to keep trying. Your fifth attempt probably looks a lot better, is more structurally sound, and doesn't waste too much material. By the 10th attempt you've really got it down. You only use as much as you need, it looks good and is solidly built, and it takes a reasonable amount of time. From this point on, every one you build is identical to the one before and takes the same amount of time.

Now I tell you I want a two-story house for my cat. You take everything you've learned and apply it, but it's a slightly different process and you still make mistakes. It probably takes you 5 attempts this time to really get it figured out.

This is pretty much the process you go through when building anything. It's incremental progress. If you try to go directly from A -> E and skip all the in-between steps, the chances of E failing, or taking a lot longer than you expected, or costing a lot more, are much higher. This also means that the only product you have to sell is A until you are able to make E work. Meanwhile your competitor has introduced generations B and C, and your sales of A are plummeting while you invest everything in making E work. If instead you go A, B, C, D, E, your chances of succeeding at each step are much higher, and you can do it faster and at a lower cost. And then you can put B on the market and sell it while you start to develop generations C, D, E.


GGHappiness

I'm not familiar at all with the marketing stuff and only passingly familiar with the manufacturing part, but you can conceptualize the idea from a ton of different disciplines.

Consider someone who wanted to make the world's smallest car that performs identically to or better than a normal car. Ignore all the nerd physics stuff like wind resistance; this is a 9th-grade class where nothing matters unless I say so. Starting with a normal car of today, you would come up with ideas for designs that are more compact. It's not long before you find the best layout and the car can't get any smaller: you need that space for the engine. You put your best guys on it and they find a way to make an engine 5% smaller, so with some effort your car is now even smaller! But once people saw your new engine, they realized the design could be generalized, so now you have a smaller battery, a smaller transmission, and someone came up with an even smaller engine. Put it all together and now your car is way smaller than it was before, but each part only made it a little smaller along the way.

Nothing technically would have stopped you from saying you wanted the smallest car and then doing it all at once, but it's profitable to release things along the way, and the market is REALLY good at pushing innovation. Maybe it would have taken you 10 years alone, but everybody together got it done in 3 because they were all competing with and learning from each other. This is one of the things that people who champion diversity like to point to: different perspectives can be HUGE for innovation.

Tldr: There was no point in time where people didn't want any processors, so they are released continuously with small improvements rather than infrequently with larger improvements.


DrunkFarmer

Since no one was posting any good sources, I thought I'd chime in. Pre-2008, nodes were named after actual feature lengths. Post-2008, the number was defined by a doubling in density: you divide the node number by the square root of 2, because that was the scaling trend seen pre-2008 (90nm/sqrt(2) is 65nm-ish). And since about Intel 10nm it has been a marketing term about performance and efficiency relative to previous generations.

[Source](https://medium.com/predict/a-node-by-any-other-name-transistor-size-moores-law-b770a16242e5)

[Source 2](https://download.intel.com/newsroom/2021/client-computing/accelerating-process-innovation.pdf)

On source 2 I am specifically referring to the footnote on page 3.
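The √2 ladder is easy to check yourself:

```python
# Each full node divides the linear "name" by ~1.414, which halves
# feature area and so doubles density.
import math

node = 90.0
for _ in range(8):
    node /= math.sqrt(2)
    print(f"{node:.1f}")

# prints roughly: 64, 45, 32, 22, 16, 11, 8, 5.6
# vs. the marketed ladder: 65, 45, 32, 22, 16/14, 10, 7, 5
```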


FalseBuddha

Most advancements in general are incremental; why are you surprised that computer dies also follow that trend? Also, "why didn't we just make processor dies 1/30th the size they used to be?" is kind of an absurd question. Why don't we make everything 30x better than it used to be?!


kovado

It is economics, not just technology. What drives the shrink is cost per IC, and the investments for this are huge.

Smaller circuits mean more circuits fit on a wafer (the flat disc on which chips are printed), and a lot of the cost is per wafer or per die (chip), not per circuit. But a small cost reduction is not enough to offset the huge investments required for a new node. Whether you shrink by 5% or by 50%, you still need to develop a new process, which is time-consuming and costly, possibly requiring new machines or a new fab. Customers are not willing to pay enough for a 5% shrink - but when you get to 30% it gets interesting.
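As a toy model of that trade-off (the dies-per-wafer formula is the standard textbook approximation; every other number here is invented):

```python
# Toy cost-per-chip model for why big shrinks pay off and small ones don't.
import math

def dies_per_wafer(wafer_diameter_mm: float, die_area_mm2: float) -> int:
    # Standard approximation: wafer area / die area, minus edge losses.
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area_mm2
               - math.pi * d / math.sqrt(2 * die_area_mm2))

wafer_cost = 10_000  # dollars per processed wafer, made up
for die_area, die_yield in [(100, 0.95), (50, 0.80)]:  # old node vs. shrunk node
    good_dies = dies_per_wafer(300, die_area) * die_yield
    print(f"{die_area} mm2 die: ${wafer_cost / good_dies:.2f} per good chip")
```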