One thing you’ll generally hear about when you’re shopping for a CPU is the process node, measured in nanometers, and how a smaller one is better. Tech giants like Intel and AMD make headlines about how chipmakers are racing to cram more and more tiny transistors onto their processors.

Past:

More transistors mean better performance and efficiency. Yes, that’s true, because the electrons don’t have to travel as far to reach each transistor, so they can switch on and off and process information more quickly. The process node was originally a measure of how long the gate in the transistor was. This is the part that actually controls the flow of electrons from the source to the drain. This was considered an accurate enough proxy for transistor size up to about 1997, when the 350-nanometer process was popular. The reason this is important is that when you double the number of transistors on a chip, it’s fair to expect roughly double the performance at a given die size. And for a long time, these doublings took place at such predictable intervals that the number of transistors on a chip would double about every two years; that regular cadence is the famous Moore’s law.
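To get a feel for how fast that compounds, here’s a minimal Python sketch of the two-year doubling. The 1971 baseline of about 2,300 transistors (roughly the Intel 4004) is just an illustrative starting point, not a claim about any particular product line:

```python
# Toy illustration of a Moore's-law doubling cadence. The 1971 baseline of
# ~2,300 transistors (roughly the Intel 4004) is an illustrative assumption.

def transistor_count(year, base_year=1971, base_count=2_300, period=2):
    """Estimate transistor count assuming a doubling every `period` years."""
    doublings = (year - base_year) / period
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001):
    print(f"{year}: ~{transistor_count(year):,.0f} transistors")
```

Run it and the counts climb from a couple thousand to tens of millions in three decades, which is the whole point of the doubling story.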

Present:

This gave the chipmakers an easy rhythm to follow for naming each process node, because they could expect each one to be smaller by a factor of about 0.7. Why 0.7, you might ask? Well, the transistors are roughly square in shape, and if you multiply 0.7 by 0.7, you get 0.49, or roughly one half, so each shrink packs about twice as many transistors into the same area. That’s how, for example, the industry went from the 1000-nanometer process node to the 700-nanometer process node. Below about 350 nanometers, though, while manufacturers were able to keep shrinking the gate length by more than a factor of 0.7, other parts of the transistor weren’t shrinking as quickly anymore.
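Here’s a quick Python sketch of that naming arithmetic; it’s just the math behind the names, not real process data:

```python
# Why a ~0.7 linear shrink means a density doubling: scaling both sides of a
# roughly square transistor by 0.7 cuts its area to 0.7 * 0.7 = 0.49, about
# half, so roughly twice as many transistors fit in the same die area.

SHRINK = 0.7

node_nm = 1000.0  # starting at the 1000-nanometer node
for _ in range(5):
    next_nm = node_nm * SHRINK
    density_gain = 1 / SHRINK ** 2  # ~2x transistors per shrink
    print(f"{node_nm:6.0f} nm -> {next_nm:6.0f} nm  (density x{density_gain:.2f})")
    node_nm = next_nm
```

Rounded off, those values track the familiar ladder of 1000, 700, 500, 350, and 250-nanometer node names.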

So, gate length was no longer a good proxy for the overall transistor density of the entire chip, and therefore for its performance. Rather than changing the naming scheme outright, though, we started to see a process node defined by the size of a group of transistors called a cell. This was done to give people an estimate of the equivalent level of processing power, accounting for components that weren’t shrinking as quickly.

So, the first node we saw under this new naming system was the 250-nanometer process. The performance was about double that of the previous node, as you would expect from the name, but the gate length was actually around 190 nanometers, which is much smaller. It’s just that other components prevented the transistors from being packed more tightly than that. This game involving cell area lasted until around 2012 and the 22-nanometer process, when a whole new type of transistor was introduced.

FinFET:

Chipmakers found that at these sizes, the gates were so small that you could have electrons leaking through them due to quantum tunneling, which could cause undesirable behavior. So, engineers needed a way to make their chips more powerful without shrinking the gates even further.

The solution was to take the channel the electrons travel through and raise it up like a shark fin, hence the name FinFET, increasing the surface area of the channel and allowing many more electrons to pass through. Of course, this also meant that transistors were now three-dimensional instead of planar, making it much harder to accurately measure their size.
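For a rough sense of why the fin helps, here’s a back-of-the-envelope Python sketch; the fin dimensions are made-up ballpark numbers, not any real process spec:

```python
# With a planar transistor, the gate touches the channel on one face only.
# With a fin, the gate wraps around three sides, so the effective channel
# width is roughly fin_width + 2 * fin_height for the same silicon footprint.
# These dimensions are illustrative assumptions, not a real process spec.

fin_width_nm = 8
fin_height_nm = 42

planar_width_nm = fin_width_nm                       # same footprint on the die
finfet_width_nm = fin_width_nm + 2 * fin_height_nm   # three gated sides

print(f"planar effective width: {planar_width_nm} nm")
print(f"FinFET effective width: {finfet_width_nm} nm "
      f"(~{finfet_width_nm / planar_width_nm:.1f}x the channel in the same footprint)")
```

The exact ratio depends on the fin geometry, but the takeaway is that going vertical buys you a lot more channel without taking up more die area.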

Now, the industry has still continued to use that 0.7 factor to describe a generation of improvement, like going from 14- to 10- to 7-nanometer processes. But the truth of the matter is that these numbers don’t actually measure the real size of the transistor anymore, and they can even vary wildly between different manufacturers.

Intel, for example, attempts to measure a process node by taking the weighted average of the two most common standard cell sizes. Really, though, a more important consideration is transistor density.
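As a sketch of how a weighted-average metric like that might look, here’s a hedged Python example; the choice of cells (a NAND2 gate and a scan flip-flop), the 0.6/0.4 weights, and the sample figures are all illustrative assumptions rather than Intel’s published methodology:

```python
# A hedged sketch of a density metric built from two standard cells. The
# cells, weights, and figures below are illustrative assumptions, not
# Intel's published numbers. Handy identity: 1 transistor/um^2 = 1 MTr/mm^2.

def weighted_density(nand2_tr, nand2_area_um2, sff_tr, sff_area_um2,
                     w_nand2=0.6, w_sff=0.4):
    """Weighted transistor density in MTr/mm^2 from two cell types."""
    return (w_nand2 * (nand2_tr / nand2_area_um2)
            + w_sff * (sff_tr / sff_area_um2))

# Made-up cell figures, purely to show the calculation:
print(f"~{weighted_density(4, 0.05, 24, 0.40):.0f} MTr/mm^2")  # ~72 MTr/mm^2
```

The point isn’t the exact weights; it’s that a cell-based density number captures more of the real chip than gate length alone ever could.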

Finally:

Transistor density is how many transistors can be packed into the same space without decreasing the size of the actual transistor features very much, if at all. In addition to density, chipmakers are using other techniques, like improved materials, to boost performance. This can include everything from straining the crystal structure of the channel to make the electrons move through it faster, to lower-resistance traces between transistors, to gate materials with a high dielectric constant for better control of electron flow.

Of course, this process can require some trial and error. Intel’s well-publicized difficulties with their 10-nanometer process were due in large part to trying to overscale; in other words, to pack more than double the number of transistors into the same space. That required them to try out a lot of new technologies inside the chip all at one time, which caused delays and manufacturing problems.

But as our technology continues to improve, chipmakers look poised to keep Moore’s law alive to some extent, even if it runs a little slower, and to keep silicon as the base material for our processors for a long time to come before we have to really start considering more exotic solutions like carbon nanotubes.

Just remember that the process node isn’t the be-all and end-all when you’re shopping for a CPU anyway. It’s always more important to pay attention to the real-world performance that you’ll see in games and applications that you actually use.