
The End of Moore’s Law?

As chips get smaller, prices go up


For over half a century, Moore's Law, the steady doubling of the number of transistors on a chip, has been responsible for creating the most valuable companies on Earth while completely redefining humanity's relationship with technology. So it's particularly startling that all that is now in the past.

“Moore’s Law is dead,” pronounced Nvidia’s CEO Jensen Huang in September 2022, before adding, “The idea that the chip is going to go down in price is a story of the past.”

Today's chips come packed with billions of transistors, capable of executing billions of instructions per second. The smallest transistors on the market are now hitting the 3 nanometer mark, a length equivalent to about 15 silicon atoms laid end to end. But getting ever smaller presents tremendous financial challenges, to say nothing of physical ones, and many are bracing themselves for a world where achieving our digital dreams is only going to get more expensive from here.
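
A quick sanity check of that comparison, taking a silicon atom's diameter as roughly 0.2 nanometers (an approximate figure assumed here purely for illustration):

```python
# Rough scale check: how many silicon atoms span 3 nanometers?
silicon_atom_diameter_nm = 0.2   # approximate diameter of a silicon atom (assumption)
feature_size_nm = 3.0            # the "3 nanometer" figure quoted above
print(feature_size_nm / silicon_atom_diameter_nm)  # -> 15.0 atoms end to end
```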

Every major company that relies on computing is exploring ways to approach this new normal. Some are rethinking the designs of the chips in their devices, while others are beefing up the number of silicon experts they have in-house. We can trace the evolution of chip economics and design to better understand how we got here, and explore what the future might look like as a new chapter opens in the world of hardware.

The flow of electricity

Electricity plays a crucial role in computers: it represents the physical form of information. So to manipulate information, we first needed to figure out how to manipulate the flow of electricity.

In the early twentieth century, one of the biggest puzzles in electrical engineering was finding a mechanism that could switch electrical signals on and off—using electrical signals themselves. In 1911, Lee de Forest, then working in Palo Alto, developed one of the first devices that could do it: the triode vacuum tube.

A vacuum tube is a glass vial containing a cathode at one end, which heats up and gives off a stream of electrons, and an anode at the other end, which collects them. In the triode, a third element sits between the two: a small wire grid. By regulating whether or not a voltage is applied to the grid, you can control whether electrons are permitted to pass across the vacuum from the cathode end to the anode end, powering a circuit.

These circuits, made by soldering together vacuum tubes, copper wires, and other rudimentary electrical components, could be assembled to perform basic logic functions. If you put enough logic functions together, you could make a primitive calculator—one that could do basic addition, subtraction, multiplication, and division. The University of Pennsylvania's ENIAC, built in 1946, was one of the first large-scale electronic computers that could perform these calculations. But it was composed of some 18,000 vacuum tubes and miles of copper wire, and required 150,000 watts of power to operate. There was a rumor that whenever the ENIAC was switched on, the lights in Philadelphia dimmed.
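
To see how on/off switches add up to arithmetic, here is a toy sketch in Python (purely illustrative; ENIAC's actual circuits were far more involved): treat each switch as a boolean, wire switches into logic gates, and compose the gates into a one-bit adder.

```python
# Each "switch" (vacuum tube or transistor) either passes current (True) or blocks it (False).
def AND(a, b):   # switches in series: current flows only if both are on
    return a and b

def OR(a, b):    # switches in parallel: current flows if either is on
    return a or b

def XOR(a, b):   # exclusive-or, composed from the gates above
    return AND(OR(a, b), not AND(a, b))

def half_adder(a, b):
    """Add two one-bit numbers; chain enough of these and you have a calculator."""
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

for a in (False, True):
    for b in (False, True):
        s, carry = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum {int(s)}, carry {int(carry)}")
```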

The vacuum tubes would glow with heat, which required fans to cool them down. The glowing also attracted moths, which would short-circuit the machine, requiring engineers to literally de-bug it.

An improvement on glowing-hot vacuum tubes soon arrived from Bell Labs, where researchers were experimenting with a new class of materials they called semiconductors. Somewhere in between a conductor, like a metal, and an insulator, like glass, semiconductor materials selectively allowed electrical current to flow through them. The trick was to figure out how to make the semiconductors allow current to flow through them when prompted.

The breakthrough came in 1947 from John Bardeen and Walter Brattain, a duo working under the guidance of William Shockley at Bell Labs. Together, Bardeen and Brattain built the first working prototype of a solid-state electrical switch: the transistor. Transistors allowed electricity to flow selectively through solid material, rather than having to detour through fragile, hot glass bulbs. That meant smaller circuits, which could be turned on and off faster and use less energy to do it.

Two years later in an interview, Shockley said, "There has been a great deal of thought spent on electronic brains, or computing machines. It seems to me that in these robotic brains, the transistor is the ideal nerve cell."

The integrated circuit

Shockley realized the transistor would soon take over the world of electronics, replacing everything that was touched by vacuum tubes. He started his own company, Shockley Semiconductor Laboratory, to mass-manufacture the devices, and sought out America's finest minds to help him. That quest led him to Robert Noyce, a physicist from MIT, and Gordon Moore, a chemist from Caltech. Shockley certainly had the right idea, but he was deeply unpleasant to work for. Within a year of signing up, eight of his best engineers, later known as the "traitorous eight," walked out to start their own company instead.

That operation, named Fairchild Semiconductor after its main backer, Fairchild Camera and Instrument, was arguably Silicon Valley's first "startup." The opportunity of mass-producing transistors was unparalleled, but the experience of making them was hell. The transistors were tiny and still needed to be hand-soldered to other electrical components, which was a grueling, inaccurate, and frustrating feat of human labor.

Rather than spend hours hunched over a soldering iron, Robert Noyce figured it might be a better idea to etch the complete circuit—transistors, wires, and all—onto one solid wafer of semiconductor material. The integrated circuit was born. "I was lazy," said Noyce, reflecting on his stroke of genius. "It just didn't make sense having people soldering together these individual components."

The etching was done by a process called photolithography, in which light was shone through a patterned mask onto silicon coated with a light-sensitive material that hardened where it was exposed. The hardened pattern protected parts of the surface while the rest was etched away, producing the circuit's features. Year after year, photolithographic techniques got better and better, which allowed Fairchild to make its circuits more and more complex.

The first integrated circuit made by Fairchild Semiconductor had only a single transistor, capable of completing just one logic function. By 1968, the transistors on Fairchild's integrated circuits numbered over 1,000.

As integrated circuits got smaller, they started to be called chips, but despite the cutesy name, something remarkable was happening on their surface. The smaller the transistors got, the less current was needed to activate them, and the more of them you could fit on the same chip. In perhaps the greatest example of economies of scale in history, the tinier transistors got, the more computation you could do for less power. The economics of miniaturization were so good they were almost unbelievable: scale a transistor's linear dimensions down by a factor of two, and you could fit four times as many in the same area, each switching roughly twice as fast, for an eightfold increase in computational power.
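
As a back-of-the-envelope sketch of that argument (an idealized scaling model in the spirit of Dennard scaling, not a description of any real chip):

```python
# Idealized transistor scaling: shrink linear dimensions by a factor k.
def ideal_scaling(k):
    density = k ** 2            # k^2 times more transistors fit in the same area
    speed = k                   # each transistor switches roughly k times faster
    compute = density * speed   # total compute per chip grows as k^3
    return density, speed, compute

density, speed, compute = ideal_scaling(2)
print(f"2x shrink -> {density}x transistors, {speed}x speed, {compute}x compute")
# 2x shrink -> 4x transistors, 2x speed, 8x compute
```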

Naturally, this raised the question of how far miniaturization could go. Already by 1965, Gordon Moore had predicted, or maybe prophesied, that the number of components on a chip would double every year, a rate he later revised to every two years. But Moore's friend Carver Mead, an engineer and physicist at Caltech, wanted an answer to how small these transistors could get. After all, transistors were just little gates that allowed electrons to flow through when activated and blocked them when deactivated. The gates, in theory, could get very small indeed. Though the technology to make transistors that small didn't exist yet, Mead calculated that you could, in principle, fit something like 10^7 to 10^8 of them onto an area of 1 cm^2.
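
A quick check of what Mead's density figures imply (my arithmetic for illustration, not his derivation): at 10^8 devices per square centimeter, each transistor gets about one square micron.

```python
# What transistor size does Mead's estimate imply?
um2_per_cm2 = 10_000 ** 2            # 1 cm = 10^4 microns, so 1 cm^2 = 10^8 um^2
for n in (10**7, 10**8):
    area = um2_per_cm2 / n           # square microns per transistor
    feature = area ** 0.5            # implied feature size in microns
    print(f"{n:.0e} per cm^2 -> {area:.0f} um^2 each, ~{feature:.1f} um features")
# 1e+07 per cm^2 -> 10 um^2 each, ~3.2 um features
# 1e+08 per cm^2 -> 1 um^2 each, ~1.0 um features
```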

For 1972, these claims were outrageous. At the time, integrated circuits held only about 1,000 transistors, but Mead was predicting an improvement of four to five orders of magnitude. While many academics and computer specialists were busy doubting Mead's predictions, Gordon Moore was already working tirelessly to make them a reality.

The central processing unit

By 1968, Fairchild Semiconductor was a big, clunky company with 30,000 employees. It wasn't a startup anymore—it was, as Robert Noyce put it, a supertanker. To compete with the fast-moving world of electronics, Noyce and Moore left Fairchild to start a new, nimbler venture. They called their new company Integrated Electronics, which they shortened to Intel.

By the turn of the 1970s, integrated circuits were starting to count pretty well, but they couldn't remember much. Noyce and Moore figured that if transistors could increase the speed and energy efficiency of processing logic functions, maybe they could do the same for memory storage.

Intel's first commercially successful product was the 1103, a semiconductor-based memory chip. Transistors would open and close to either store or release an electric charge, thereby storing or releasing information.
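
The mechanism can be sketched in a few lines of Python (a toy model of a charge-storage memory cell, not the 1103's actual circuit design): a transistor gates charge onto a capacitor, the stored charge is the bit, and because the charge leaks, it must be periodically refreshed.

```python
# Toy model of a dynamic memory cell: stored charge represents one bit.
class MemoryCell:
    def __init__(self):
        self.charge = 0.0                   # capacitor charge; 1.0 means full

    def write(self, bit):
        self.charge = 1.0 if bit else 0.0   # transistor opens, charge flows in or out

    def leak(self):
        self.charge *= 0.5                  # real capacitors leak, hence "dynamic" memory

    def read(self):
        return self.charge > 0.25           # sense whether meaningful charge remains

cell = MemoryCell()
cell.write(True)
cell.leak()
print(cell.read())          # True: the bit survives a little leakage
cell.write(cell.read())     # a refresh: read the bit, write it back at full strength
```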

The insight for Intel's next great invention came from Ted Hoff, an engineer who wondered what might happen if you could integrate both memory, like the 1103's, and a processing unit into the same integrated circuit. The result was the 4004 chip, released in 1971, three years after Intel's founding. The 4004 wasn't just a chip—it was an entire computer on a chip. It cost $360, had 2,300 transistors, and could perform 60,000 instructions per second. Most importantly, it could execute instructions stored in a memory bank and had working "scratch pad" memory to help it complete more complicated calculations.
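
What "executing instructions stored in a memory bank" means can be illustrated with a toy fetch-decode-execute loop (a sketch of the stored-program idea only; the 4004's actual instruction set, registers, and 4-bit architecture were quite different):

```python
# Toy stored-program machine: program and data share one memory, and the
# processor repeatedly fetches an instruction, decodes it, and executes it.
def run(memory):
    acc = 0    # accumulator: the "scratch pad" for in-progress arithmetic
    pc = 0     # program counter: address of the next instruction
    while True:
        op, arg = memory[pc]         # fetch
        pc += 1
        if op == "LOAD":             # decode and execute
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "HALT":
            return memory

program = {
    0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
    10: 2, 11: 3, 12: 0,             # data lives alongside the instructions
}
print(run(program)[12])              # -> 5
```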
