Why is Moore’s law not a law?


What is Moore’s law?

Moore’s Law is the observation that the number of transistors in an integrated circuit (IC) doubles roughly every two years.

This observation and projection are based on historical trends, and it’s not a law of physics but rather an empirical relationship tied to production experience.

The name comes from Gordon Moore, who co-founded Fairchild Semiconductor and Intel and later served as Intel’s CEO.

In 1965, he suggested that the number of components per integrated circuit would double every year and believed this pace would continue for at least a decade.

In 1975, he adjusted the prediction to a doubling every two years, which translates to a compound annual growth rate of 41%.
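That 41% figure follows directly from the doubling period: a doubling every two years means an annual growth factor of 2^(1/2) ≈ 1.414. A quick check in Python:

```python
# Doubling every 2 years implies an annual growth factor of 2**(1/2).
annual_factor = 2 ** (1 / 2)
cagr_percent = (annual_factor - 1) * 100
print(f"{cagr_percent:.0f}%")  # ~41%
```

The same arithmetic works for any doubling period: a doubling every N years gives an annual factor of 2^(1/N).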

Although Moore didn’t rely on concrete evidence to predict the trend’s continuation, his forecast has held true since 1975 and is now commonly referred to as a “law.”

Moore’s law became so important because it was a self-fulfilling prophecy. When Gordon Moore first made his prediction in 1965, it was based on the trend of transistor density doubling every year.

However, the semiconductor industry took Moore’s prediction as a challenge, and they invested heavily in research and development to make it happen.

As a result, Moore’s law has held true for over 50 years, and it has been a major driver of technological innovation.


Moore predicted, and it became a law

In 1959, Douglas Engelbart studied the potential downsizing of integrated circuit (IC) dimensions, sharing his findings in the piece titled “Microelectronics, and the Art of Similitude.” He presented his conclusions at the 1960 International Solid-State Circuits Conference, an event attended by Gordon Moore.

In 1965, Gordon Moore, then the director of research and development at Fairchild Semiconductor, was invited to contribute to Electronics magazine’s thirty-fifth-anniversary edition.

He was tasked with predicting the future of the semiconductor components industry for the next ten years.

Moore’s response came in the form of a concise article titled “Cramming more components onto integrated circuits.”

In it, he speculated that by 1975, it might be possible to accommodate as many as 65,000 components on a single quarter-square-inch (~1.6 square-centimeter) semiconductor.

The pace of increasing complexity while maintaining minimal component costs had been progressing at approximately a twofold rate per year.

In the short term, one could reasonably expect this rate to continue or even accelerate. Looking further ahead, the precise rate of increase became less certain, although there was no compelling reason to doubt its relative stability for at least a ten-year period.
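Moore’s 1965 extrapolation is easy to reproduce. Assuming a starting point of roughly 64 components on the most complex chips of 1965 (the commonly cited value, stated here as an assumption rather than taken from this article), ten annual doublings land almost exactly on his ~65,000 figure:

```python
components_1965 = 64   # assumed 1965 starting point (2**6), commonly cited
years = 10             # 1965 -> 1975, doubling every year
projected_1975 = components_1965 * 2 ** years
print(projected_1975)  # 65536, i.e. Moore's "as many as 65,000 components"
```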

Moore posited a logarithmic-linear relationship between device complexity (achieving higher circuit density at reduced cost) and time.

…I just made an ambitious projection, suggesting that it will keep doubling every year for the next 10 years

~Gordon Moore, in a 2015 interview, reflecting on the 1965 article


Moore’s law 10 years later

During the 1975 IEEE International Electron Devices Meeting, Moore revised his prediction.

He anticipated that semiconductor complexity would continue to double annually until around 1980, and then slow down to a rate of doubling about every two years.

He attributed this exponential trend to several factors:

  1. Introduction of metal-oxide-semiconductor (MOS) technology
  2. Growth in die sizes accompanied by reduced defect densities, allowing semiconductor manufacturers to use larger working areas without compromising yields
  3. Decreasing minimum dimensions
  4. Moore’s notion of “circuit and device cleverness”

Moore’s law became a guiding light

Shortly after 1975, Caltech professor Carver Mead made the term “Moore’s law” well-known.

Over time, Moore’s law became a widely embraced target for the semiconductor sector. Competitive manufacturers in the semiconductor industry pointed to it as they aimed to enhance processing power.

Moore saw his namesake law as an unexpectedly positive concept, stating,

“Moore’s law is a contradiction of Murphy’s law. Everything keeps improving and getting better.”

~Gordon Moore

This observation was even seen as a prediction that influenced its own realization.

The following are some of the reasons why Moore’s law became so important:

  • It has led to the development of smaller, faster, and more powerful computers.
  • It has made possible the development of new technologies such as artificial intelligence and machine learning.
  • It has helped to drive economic growth and innovation.
  • It has changed the way we live and work.

Is Moore’s law 18 or 24 months?

The interval of doubling is sometimes inaccurately cited as 18 months, which stems from a distinct forecast made by David House, an Intel executive and Moore’s colleague.

In 1975, House observed that Moore’s updated law of doubling transistor count every two years led to the implication that computer chip performance would approximately double every 18 months, all while power consumption remained unchanged.

Moore’s Law, in mathematical terms, projected the doubling of transistor count every two years due to advancements like shrinking transistor dimensions.

A concept known as Dennard scaling, arising from these reduced dimensions, predicted that power consumption per unit area would stay constant.

Taking these factors into account, David House concluded that computer chip performance would see a roughly 18-month doubling.

Furthermore, due to Dennard scaling, this heightened performance wouldn’t be accompanied by increased power usage—meaning the energy efficiency of silicon-based computer chips would roughly double every 18 months.
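House’s 18-month figure can be reconstructed with simple, illustrative inputs: take the 2× transistor-count gain every 24 months as given, and assume the faster switching of smaller transistors adds roughly a further 26% (a factor of 2^(1/3)) over the same period. The speed factor here is chosen to make the arithmetic land on 18 months, not taken from any source:

```python
import math

# Assumed, illustrative inputs:
count_factor_2yr = 2.0            # transistor count doubles every 24 months
speed_factor_2yr = 2 ** (1 / 3)   # ~26% speed gain over the same 24 months

# Combined performance gain over 24 months, and the implied doubling time.
performance_factor_2yr = count_factor_2yr * speed_factor_2yr  # ~2.52x
doubling_months = 24 * math.log(2) / math.log(performance_factor_2yr)
print(f"{doubling_months:.0f} months")  # 18 months
```

With these inputs the combined gain over two years is 2^(4/3) ≈ 2.52×, which corresponds to a doubling every 24 / (4/3) = 18 months.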

However, Dennard scaling came to an end in the 2000s.

Jonathan Koomey later demonstrated that a similar pace of efficiency improvement existed prior to silicon chips and Moore’s Law, encompassing technologies like vacuum tubes.

What kept Moore’s law going?

Numerous innovations by scientists and engineers have sustained Moore’s Law since the inception of integrated circuits (ICs).

Here are key examples of breakthroughs that have driven advancements in semiconductor fabrication, enabling transistor counts to grow significantly in a relatively short span:

Integrated Circuit (IC): The foundational basis of Moore’s Law. The germanium hybrid IC was created by Jack Kilby at Texas Instruments in 1958, followed by Robert Noyce’s invention of the silicon monolithic IC chip at Fairchild Semiconductor in 1959.

Complementary Metal-Oxide-Semiconductor (CMOS): Developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963.

Dynamic Random-Access Memory (DRAM): Robert H. Dennard’s work on DRAM took shape at IBM in 1967.

Chemically-Amplified Photoresist: Introduced by Hiroshi Ito, C. Grant Willson, and J. M. J. Fréchet at IBM around 1980. This innovation significantly increased sensitivity to ultraviolet light, finding use in DRAM production in the mid-1980s.

Deep UV Excimer Laser Photolithography: Developed by Kanti Jain at IBM circa 1980, this technology became instrumental in semiconductor production, particularly for lithography.

Interconnect Innovations: In the late 1990s, advancements like chemical-mechanical polishing (CMP), trench isolation, and copper interconnects improved wafer yield, allowed more metal wire layers, reduced device spacing, and lowered electrical resistance. While not directly related to shrinking transistors, these innovations had significant positive effects.

Computer industry roadmaps from 2001 anticipated the continuation of Moore’s Law for several generations of semiconductor chips.

Is Moore’s law a law?

The concept is not grounded in a physical law; rather, it emerged from an observation and projection made by Gordon Moore in the early days of microprocessor development at Intel.

Originally, it might have been more aptly termed “Moore’s Observation,” but that phrase wasn’t as linguistically smooth.

Both the technical community and the trade press swiftly embraced the term “Moore’s Law” despite its inaccuracy, and this label has since become firmly established.

Moore’s Law is not a true “law” in the scientific sense, but rather an observation made by Gordon Moore, a co-founder of Intel, in 1965.

He observed that the number of transistors on a microchip doubles about every two years, leading to a corresponding increase in computing power.

While this trend has held true for many years, it is becoming increasingly difficult to maintain as transistor dimensions approach the atomic scale and quantum effects become significant.

So, while it is not a “law” that can never be broken, it is more of a prediction that has held true for a long period of time and is likely to continue to hold true to some extent in the future.

The end of Moore’s law

Since around 2010, microprocessor architects have observed a widespread slowdown in semiconductor progress, falling below the projected pace of Moore’s Law. Former Intel CEO Brian Krzanich pointed to Moore’s 1975 revision as a precedent for this current deceleration.

He explained that technical hurdles have led to this slowdown, which is a natural progression in the history of Moore’s Law. The trajectory of improving physical dimensions, known as Dennard scaling, also came to an end in the mid-2000s.

Consequently, much of the semiconductor industry has shifted its focus towards addressing the demands of major computing applications rather than solely concentrating on semiconductor scaling.

Nonetheless, leading semiconductor manufacturers such as TSMC and Samsung Electronics assert that they have managed to maintain alignment with Moore’s Law, evident in the production of 10, 7, and 5 nm nodes.


The semiconductor industry has used Moore’s prediction for long-term planning and research and development goals, effectively making it somewhat self-fulfilling.

Advances in digital electronics, like cheaper microprocessors, increased memory capacity (both RAM and flash), better sensors, and even more and larger pixels in digital cameras, are closely connected to Moore’s Law.

These ongoing changes in digital electronics have been major drivers of technological, social shifts, productivity, and economic growth.

Experts in the industry don’t fully agree on when Moore’s Law will no longer apply.

Since around 2010, there has been a noticeable industry-wide slowdown in semiconductor progress, slightly falling short of Moore’s Law predictions.

In September 2022, Jensen Huang, the CEO of Nvidia, declared Moore’s Law as no longer applicable, while Pat Gelsinger, the CEO of Intel, held the opposite perspective.
