In 1965, Gordon Moore, Intel’s cofounder, predicted that the number of transistors on an integrated circuit would double at a steady pace, commonly quoted as every 18 months to two years. In some eyes, that prediction, which has become something of an accepted axiom in the computer industry as Moore’s Law, is beginning to falter.
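The doubling claim is simple compound growth; a minimal sketch of the arithmetic (the starting count, horizon, and two-year cadence below are illustrative assumptions, not figures from the article):

```python
# Illustrative compound growth under a Moore's-Law-style doubling cadence.
# The starting count and doubling period are hypothetical round numbers.
def transistor_count(start, years, doubling_period_years=2.0):
    """Project a transistor count forward assuming steady doubling."""
    return start * 2 ** (years / doubling_period_years)

# A chip with 1 million transistors, projected 10 years out
# at a two-year doubling cadence, reaches 32 million:
print(f"{transistor_count(1_000_000, 10):,.0f}")
```

The exponential form is what makes the industry’s planning around the law so consequential: a small change in the doubling period compounds into an enormous difference over a decade.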
There are some — Moore included, according to several accounts — who think there are limits to the growth potential of silicon-based semiconductors. But others, including researchers at Lucent’s semiconductor division, reckon that silicon technology still has enough capacity to handle many more generations of innovation.
Predicting the long-term viability of traditional semiconductor technology is important, for if U.S. technology companies cannot maintain the pace of innovation in computing, there could be dire consequences for the economy.
“The industry plans its manufacturing [according to] Moore’s Law and invests accordingly,” Martin Reynolds, general vice president and research fellow at Gartner, told TechNewsWorld. “The transistors can still get smaller, the die larger and the connections denser.”
Researcher M. Ashraf Alam, a member of Lucent’s Agere Systems division, last year proved Reynolds’ point.
Alam’s work showed that transistors could indeed continue to shrink without losing reliability. “Our discovery shows that silicon has more steam left in it and gives everyone a little breathing room while we try to discover new ways to continue to shrink transistor structures,” he told TechNewsWorld.
Researchers say there is an array of challenges involved in extending the performance of a circuit or a processor by putting more and more transistors onto individual chips. The most fundamental challenge involves a thin layer of insulation, called a dielectric film, that is an integral part of each transistor.
Semiconductor researchers had believed that silicon dioxide, the dielectric material traditionally used in semiconductor manufacturing, would not be usable at a thickness of less than 20 angstroms. (One centimeter is equivalent to 100 million angstroms.) The assumption stemmed from the behavior of thicker insulating films, in which any small breakdown quickly leads to other defects nearby, producing a dramatic increase in electric current leakage that can cause the entire chip to fail.
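The scales involved are easy to sanity-check with standard SI conversion factors (the snippet below is an illustrative check, not part of the research):

```python
# Unit sanity check: angstroms vs. centimeters and nanometers.
# Conversion factors are standard SI, not figures from the research itself.
ANGSTROM_M = 1e-10   # one angstrom in meters
CM_M = 1e-2          # one centimeter in meters

angstroms_per_cm = CM_M / ANGSTROM_M
print(f"{angstroms_per_cm:,.0f} angstroms per centimeter")  # 100,000,000

# The presumed 20-angstrom lower limit, expressed in nanometers:
film_thickness_nm = 20 * ANGSTROM_M * 1e9
print(f"{film_thickness_nm:.1f} nm")  # 2.0 nm
```

In other words, the insulating layer in question is only about two nanometers thick, a handful of atomic layers.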
Late last year, Alam and colleagues made a significant advance in understanding the behavior of these thin films. The research team found that breakdowns in insulators are independent of one another, so they do not “snowball” into the kind of sudden failure seen in devices with thicker films, as scientists previously had thought.
“As transistors get smaller, this film gets thinner and the time to breakdown becomes shorter,” Alam told TechNewsWorld. “Researchers had previously found that the breakdown in thin films was not as catastrophic as breakdowns in thicker films, but this work shows how fundamentally different a breakdown in a thin film is.”
Alam said this discovery not only allows engineers to calculate slight increases in leakage current, but also demonstrates that, in general, the increase in leakage will not affect a circuit’s performance. “This is good news for the communications semiconductor industry, but also for the scientific community at large,” he noted.
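As a rough illustration of why independence matters, the toy model below (a hypothetical sketch, not Agere’s actual reliability model) contrasts leakage that accumulates site by site with a thick-film-style cascade in which one breakdown drags down many neighbors:

```python
import random

# Toy statistical sketch (not Agere's actual model): contrast independent
# breakdown spots, each adding a small fixed leakage, with a cascading
# failure mode in which any breakdown triggers many neighboring defects.

LEAK_PER_SPOT = 1.0  # arbitrary leakage units contributed by one broken site

def independent_leakage(failures):
    """Thin-film picture: each broken site adds a little leakage on its own."""
    return sum(failures) * LEAK_PER_SPOT

def cascading_leakage(failures, cascade_factor=50):
    """Thick-film picture: a breakdown triggers neighbors; leakage jumps."""
    return sum(failures) * cascade_factor * LEAK_PER_SPOT if any(failures) else 0.0

# Simulate 1,000 candidate sites, each with a 1% breakdown chance.
random.seed(1)
failures = [random.random() < 0.01 for _ in range(1000)]
print("independent:", independent_leakage(failures))
print("cascading:  ", cascading_leakage(failures))
```

Under the independent model, leakage creeps up gradually and can be budgeted for; under the cascading model, the same handful of breakdowns produces an abrupt jump, which is why the old assumption made sub-20-angstrom films look unusable.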
Action on Thin Films
Alam’s discovery could lead to technologies that significantly extend the performance limits of silicon as a viable technology, allowing companies to continue to develop new, innovative electronic products.
“Without this breakthrough, [integrated circuit] technology could not be proven to meet standard reliability assurance,” Mark Pinto, vice president of Agere Systems, told TechNewsWorld. “Previous-generation process technologies could never offer the level of performance that’s being required by customers today.”
Other research teams, at Texas Instruments for example, are tackling the difficulties of making chips smaller and faster. Gene Frantz, a principal fellow at TI, disagrees with the skeptics, offering his own postulate that parallels Moore’s Law. He says processing power will increase and power consumption will decrease exponentially over time, meaning that the entire system built around chips, not just the chips themselves, will get smaller.
But these developments still have not quieted the skeptics, who note that the fabrication technology needed to make computer chips is being stretched to its limits.
At a roundtable sponsored by technology promoter Dubai Silicon Oasis earlier this week, Dave Chavoustie, executive vice president of semiconductor equipment manufacturer ASML, noted that customers are aggressively trying to extend the life of existing fabrication tools. “We’re being challenged by our customers to find more creative ways to take advantage of existing fabs,” said Chavoustie.
Migration to a new silicon wafer size historically has happened every two to three chip generations, roughly every three to five years. Today’s sluggish economy, however, has cut companies’ cash flow so deeply that they cannot afford to invest in the tools needed to take silicon chip fabrication to the next level.
Costs the Key
“The low return on capital means that the industry can’t afford to develop the tools and infrastructure,” said Don Mitchell, president and CEO of semiconductor equipment manufacturer FSI International.
But many, including Gartner’s Reynolds, still do not buy into this pessimism. Cost might be an issue, but most analysts agree that enough R&D spending exists to support the development of ever-smaller chips.
“A good example is Intel’s investment in [extreme ultraviolet] technology that will take the industry through another four iterations of technology,” Reynolds said.
“Along the way, we’ll see new techniques emerge to make ever-smaller transistors work,” he told TechNewsWorld. “And, ultimately, when we run out of land, the industry will build up by stacking chips.”