The Legacy of Moore's Law

Flashback to 1965. The integrated circuit was already heralding a revolution in computation, and Fairchild Semiconductor had become its sole profitable manufacturer in the United States. The 35th anniversary issue of Electronics magazine carried a piece titled ‘Cramming more components onto integrated circuits.’ It was written by Gordon Moore (PhD ’54), then Director of Research and Development at Fairchild Semiconductor and at the helm of this burgeoning industry.

It was in this issue that Moore offered the first estimate of what Caltech professor Carver Mead (B.S. ’56; M.S. ’57; Ph.D. ’60) would eventually popularize as ‘Moore’s law.’ In that piece, with scant empirical support, Moore predicted that by 1975 the technology would exist to incorporate up to 65,000 components, whether transistors, resistors, diodes, or capacitors, on a single quarter-square-inch (~1.6 square centimeter) semiconductor.

Come 1975, that prophecy was fulfilled: as Moore had predicted, the number of components that could be placed on an integrated circuit had doubled roughly every year. Since then, Moore’s law has undoubtedly shaped science and technology. Although others have proposed similar estimates of the growing capability of dense integrated circuits, it is Moore’s that has remained the most famous.

That same year, 1975, Moore revised his original forecast. Based on the advent of new semiconductor materials, as well as numerous other factors, he predicted that the complexity of semiconductors would continue to double annually until 1980, after which the rate would slow to a doubling every two years. This trend has largely held. Indeed, thanks to a succession of new semiconductor materials and technologies, we have at times even exceeded it. These advances span everything bit-sized, including nanoelectronic devices such as nanoscale transistors and even alternative nanomaterials for computation.
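To make that cadence concrete, here is a minimal sketch, in Python, of what the revised schedule implies, taking the 65,000-component figure quoted above as a 1975 starting point; the function name, starting count, and sample years are illustrative choices, not values from Moore’s papers.

```python
def projected_components(year, start_year=1975, start_count=65_000):
    """Project component counts under the revised Moore schedule:
    doubling every year until 1980, then every two years."""
    if year <= 1980:
        doublings = year - start_year                         # one doubling per year
    else:
        doublings = (1980 - start_year) + (year - 1980) / 2   # then one per two years
    return start_count * 2 ** doublings

for y in (1975, 1980, 1990, 2000):
    print(y, f"{projected_components(y):,.0f}")   # e.g. 1980 -> 2,080,000
```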

The principal means of delivering on Moore’s Law has been making transistors ever smaller. Smaller transistors are faster and more energy-efficient, and have often been cheaper. Indeed, that falling cost for consumers is the flip side of Moore’s second law, which holds that as ever more, ever smaller components are ‘crammed’ onto each chip, the research, development, and fabrication-plant costs required to do so rise exponentially over time.


Gordon Moore and Robert Noyce at Intel in 1970

Today, Moore’s Law has become an almost self-fulfilling prophecy. Numerous organizations in the semiconductor industry have used it to set future targets, and advancements in digital electronics, from sensor performance and memory capacity to microprocessor prices, are now linked to and even governed by it. That relevance is remarkable for what is, strictly speaking, only a historical trend. The lucid log-linear relationship between device complexity and time still holds today. After all, computers have shrunk from machines that filled entire rooms to devices that fit in our pockets.
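Put quantitatively (an illustrative reading rather than Moore’s own formula): if the component count doubles every two years, then N(t) ≈ N(t0) × 2^((t − t0)/2), so the logarithm of device complexity grows linearly with time; a decade corresponds to five doublings, or roughly a 32-fold increase.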

Moore dedicated his life to building revolutionary computer chips, as one of the stalwarts of Fairchild Semiconductor and as a co-founder of what eventually became Intel. Since his passing one month ago today, on March 24, where does his eponymous law leave us?


Salmon fishing with Gordon Moore near Mavericks. Still searching for Moore’s Law of fishing…. “Moore Fish” by Steve Jurvetson CC BY 2.0

Scientists, industry leaders, and thinkers continue to debate when we will inevitably have to depart from Moore’s Law. The endless shrinking of computers and growth in computational complexity cannot continue forever. After all, we live in a discrete universe of particles, atoms, and electrons; we cannot build computers smaller than an electron!

The world of quantum computing, which harnesses quantum mechanics through qubits and quantum logic gates, and even exploits Einstein’s spooky action at a distance, threatens to divert us from Moore’s Law. Given how far today’s bit-based integrated circuits have already been pushed, some industry leaders, such as Nvidia CEO Jensen Huang, consider Moore’s Law to have already reached its predictive limit. For now, our chips still approximately follow it; the hard limit is simply that transistors cannot keep shrinking forever.

Yet, as computers approach the atomic limit, with ever-smaller chips in development, such as the two-nanometer chip IBM announced two years ago, many of us have begun to explore the benefits of the new quantum-scale world that Moore’s Law has already brought us to.

In any case, no matter where future forays into transistor development or quantum computing take us, we can still learn a great deal from the famous law that has guided us for decades. Moore’s law tells a fascinating story of how a historical prediction became an industrial, observational law, and of how even the earliest chapters in the history of computing still shape us today. It continues to inform future computing and may yet have a hand in technology when the very bit becomes spooky.