Is It the End of the Road for Computing Power?

It may not be an actual law of nature, but a bold statement made decades ago has predicted the dizzying advance of technology. Now it is about to fail. The business that powers everything around us from cars to smartphones to our utilities faces a reckoning.

In the early days of semiconductor technology, Intel co-founder Gordon Moore posited that the number of components on an integrated circuit would double every year. The 1965 prediction, now known as Moore’s Law, was later revised to a doubling of the number of transistors roughly every two years. Progress has marched on for decades as the chip industry has cranked out once-unimaginable devices and then consistently one-upped itself.
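The prediction is simple exponential arithmetic. As a minimal illustrative sketch (not from the article; it uses the 1971 Intel 4004, which had roughly 2,300 transistors, as a baseline):

```python
def projected_transistors(base_count, base_year, target_year, doubling_years=2):
    """Project a transistor count assuming a doubling every `doubling_years`."""
    doublings = (target_year - base_year) / doubling_years
    return base_count * 2 ** doublings

# Intel's 4004 (1971) had roughly 2,300 transistors.
count_2021 = projected_transistors(2_300, 1971, 2021)
print(f"Projected for 2021: {count_2021:,.0f}")
```

Fifty years of doubling every two years from the 4004 baseline projects about 77 billion transistors, the same order of magnitude as the M1 Max figure mentioned below.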

For example, Apple’s M1 Max chip, which powers its high-end laptops, has an incredible 57 billion transistors. Technology has kept advancing to shrink the size of chips: Tens of thousands of transistors can fit in an area no wider than a human hair. Smaller transistors, which are also faster and cheaper, have enabled exponential progress in computing power and boosted productivity. The smartphone in your pocket is more capable than the massive computers that helped send men to the moon more than 50 years ago, at a fraction of the cost.

Many have written obituaries for the law over the years, but chip makers have kept pushing the technological envelope, finding new ways to cram in more computing power. One of the latest examples is extreme ultraviolet lithography, or EUV, which uses shorter-wavelength light to etch superfine features on chips. The Dutch firm ASML is the sole maker of EUV machines for chip manufacturers. Each one costs around $150 million, yet ASML has a major backlog as the world scrambles to add capacity.

But with transistor dimensions now approaching the atomic scale, the technology seems destined to hit physical limits soon. The distance between transistors is now measured in tens of nanometers, and a nanometer is only about five silicon atoms wide.
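The "about five atoms" figure is easy to sanity-check. A quick calculation, assuming an approximate silicon covalent diameter of 0.22 nm (a rounded textbook value, not a figure from the article):

```python
SI_ATOM_DIAMETER_NM = 0.22  # approximate covalent diameter of a silicon atom

# How many silicon atoms span one nanometer?
atoms_per_nm = 1 / SI_ATOM_DIAMETER_NM
print(f"Silicon atoms per nanometer: {atoms_per_nm:.1f}")
```

That works out to roughly 4.5 atoms, which rounds to the article's figure of about five.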

Even before the laws of physics finally end the trend Mr. Moore predicted 57 years ago, the laws of economics probably will. The cost to fabricate the most advanced chip has surged and is getting more expensive with each iteration of technology.

“Moore’s can probably continue, but just not economically at all,” says industry analyst Douglas O’Laughlin, who publishes a newsletter on semiconductors.

Each new generation of chips—called a process node—requires substantially more manufacturing steps. More steps mean a higher chance of errors in a process that already involves thousands of sophisticated operations, and therefore lower production yield. Costs per transistor, which fell for decades, have stopped falling and are now rising.
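The yield argument can be made concrete with a standard back-of-the-envelope model (an illustration, not the article's analysis): if each of n fabrication steps succeeds independently with probability p, overall yield is p^n, so more steps at a fixed per-step defect rate mean fewer good dies per wafer and a higher cost per good die. All numbers below are hypothetical.

```python
def yield_and_unit_cost(per_step_success, steps, wafer_cost, dies_per_wafer):
    """Overall yield p**n and the resulting cost per good die."""
    overall_yield = per_step_success ** steps
    good_dies = dies_per_wafer * overall_yield
    return overall_yield, wafer_cost / good_dies

# Hypothetical figures: 99.9% per-step success, $10,000 wafer, 500 die sites.
for steps in (1_000, 2_000, 3_000):
    y, cost = yield_and_unit_cost(0.999, steps, 10_000, 500)
    print(f"{steps:>5} steps: yield {y:6.1%}, cost per good die ${cost:,.2f}")
```

In this toy model, tripling the step count cuts yield from about 37% to about 5%, a more-than-sevenfold rise in cost per good die.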

That also means the cost of building a leading-edge fabrication plant can run well over $10 billion—prohibitively expensive for all but the firms with the deepest pockets. Currently only three are trying. Taiwan Semiconductor Manufacturing Co., the world’s largest contract semiconductor manufacturer, has planned capital spending of $40 billion to $44 billion for this year. South Korea’s Samsung Electronics plans to invest $17 billion to build a chip-making facility in Taylor, Texas. And Intel, the company Mr. Moore co-founded, will build two chip plants in Ohio for an initial investment of $20 billion.

But maybe Moore’s Law is no longer the right way to think about further advances. There is more than one way to skin a cat. Designers could, for example, make chips better suited for their tasks. Apple has demonstrated this by designing its own processors for the iPhone, iPad and most recently its Mac computers—replacing Intel’s processors in the latter. Custom-built chips can better use a device’s computing power than off-the-shelf silicon, especially when they are made by a company like Apple that also controls the device’s software.

Chip makers can also take a “chiplet” approach: shrinking only parts of a processor and packaging those chiplets with memory and other components made less expensively, while still achieving overall performance improvements.

“There is a Goldilocks zone in process technology now, where different nodes are actually better for different pieces of the puzzle,” says Dylan Patel, chief analyst at SemiAnalysis. “Shrinking the whole chip or putting everything on the most advanced node would actually cost more and doesn’t necessarily mean better performance and power.”

Using new materials, like gallium nitride, to replace silicon could be a way to keep packing more transistors at lower costs. Another possible candidate is carbon nanotubes—tubelike structures made from graphene that are only nanometers in diameter.

Calling something a “law” makes it sound immutable. Mr. Moore’s prediction was more like a prophecy—a self-fulfilling one that industry players strove to live up to as long as they could. That doesn’t mean today’s now-routine devices, which would have seemed magical a couple of decades ago, will still represent the cutting edge in 20 years. The rate of progress just might look less incredible in hindsight.

The narrow definition of the law—the regular doubling of transistor density—probably can’t continue, but human ingenuity knows no limits.

Write to Jacky Wong at jacky.wong@wsj.com

Copyright ©2022 Dow Jones & Company, Inc. All Rights Reserved.