
Technology Quarterly: Q1 2014

Chipmaking

When silicon leaves the valley

Semiconductors: As it becomes harder to cram more transistors onto a slice of silicon, alternative ways of making chips are being sought

Mar 8th 2014 | From the print edition

THE computer industry is built on sand. For sand contains silicon, and silicon is an excellent material for building transistors, the tiny electrical switches used in a microprocessor chip. It is the ability to constantly shrink those transistors that has driven the industry. The components in Intel’s 4004, the first microprocessor, were 10,000 nanometres wide, about a tenth as wide as a human hair. The features in the company’s latest products, after decades of shrinkage, are just 22 nanometres across—about as wide as 100 of the silicon atoms from which they are made.

The obsession with size arises from the almost magical results of shrinking a transistor. More transistors mean more capable hardware. Smaller transistors switch on and off more quickly, so can carry out calculations faster. And they use less electricity. The result has been an explosion in computer power that has been one of the defining features of the past 50 years.

The rate of shrinkage has followed Moore’s law, an observation made in 1965 by Gordon Moore, one of Intel’s founders, that the number of transistors that can be crammed into a given area doubles every couple of years. One way to view that is to examine the cost of transistors. In 1982, when Intel launched its 80286 chip, $1 bought several thousand transistors. By 2002, you could get 2.6m for the same price. By 2012, when chips routinely sported more than a billion transistors, the price had fallen to 20m to the dollar.

But the old magic is no longer working as well. This year something unprecedented may happen: the Linley Group, a Silicon Valley consultancy, reckons the price of cutting-edge transistors will rise, leaving a dollar buying just 19m of them. Modern transistors are so tiny that shrinking them still further is becoming difficult and expensive. Nor are the benefits so great. Transistors’ tininess is beginning to turn against them, making chips misbehave and limiting how much extra performance can be wrung from them.
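The slowdown is visible in those numbers themselves. The back-of-the-envelope Python sketch below infers the doubling period implied by each pair of data points; the 1982 figure of 5,000 transistors per dollar is an assumed stand-in for "several thousand".

```python
import math

# Transistors per dollar quoted in the article; the 1982 value of
# 5,000 is an assumed midpoint for "several thousand".
data = {1982: 5_000, 2002: 2_600_000, 2012: 20_000_000, 2014: 19_000_000}

years = sorted(data)
for y0, y1 in zip(years, years[1:]):
    ratio = data[y1] / data[y0]
    doublings = math.log2(ratio)
    span = y1 - y0
    if doublings > 0:
        print(f"{y0}-{y1}: {ratio:7.1f}x -> one doubling every "
              f"{span / doublings:.1f} years")
    else:
        print(f"{y0}-{y1}: {ratio:7.2f}x -> transistors per dollar falling")
```

Run against the quoted figures, it shows the doubling period stretching from roughly two years (1982-2002) to nearly three and a half (2002-2012) before going into reverse.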

That does not mean that Moore’s law is coming to an end, at least not yet. But no exponential trend can carry on for ever, and some think the end of the silicon transistor may now be in sight. “Silicon will probably go down by another three or four [shrinkage steps],” says Supratik Guha, director of physical sciences at IBM’s Thomas J. Watson research laboratory. “After that, though, it’s going to hit a wall.”

Engineers have lots of ideas about how to build future transistors, and new research papers come out regularly. Many look at redesigning the transistor itself, or at changing the materials from which it is built, so that the shrinkage can continue. And when that proves no longer possible, there are new, exotic approaches to computing that may ride to the rescue.

The quick switch

Chipmakers are struggling with both physics and economics. Start with the physics. For all their usefulness, transistors are simple devices. Current flows from a “source”, through a “channel” and into a “drain”. Applying a separate voltage to a “gate” allows that flow to be switched on or off, providing the basic building-blocks of computing. As transistors shrink towards atomic dimensions, the gate’s control over the channel gets weaker. Modern transistors leak, with current flowing even when the device is meant to be switched off. That wastes power and generates heat, which must be disposed of somehow.
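How quickly leakage grows can be seen from the textbook subthreshold model, sketched below in Python; the slope factor and threshold voltages are assumed illustrative values, not figures from any particular chipmaking process.

```python
import math

KT_Q = 0.026   # thermal voltage at room temperature, ~26 mV
N = 1.5        # subthreshold slope factor (an assumed typical value)

def off_current(v_th, i0=1e-6):
    """Textbook subthreshold model: I_off ~ i0 * exp(-Vth / (n*kT/q)).

    i0 is an arbitrary reference current; only the ratios matter here.
    """
    return i0 * math.exp(-v_th / (N * KT_Q))

# As scaling pushes threshold voltages down, off-state leakage
# grows exponentially -- the wasted power described above.
for v_th in (0.5, 0.4, 0.3, 0.2):
    print(f"Vth = {v_th:.1f} V -> relative leakage "
          f"{off_current(v_th) / off_current(0.5):8.0f}x")
```

Each 0.1-volt cut in the threshold voltage multiplies off-state leakage by an order of magnitude or so, which is why ever-smaller transistors leak ever more.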

One way to get around the problem is to build upwards. In 2012 Intel introduced chips with upright transistors, in which the channel rises from the rest of the circuit, like a tall building above a cityscape. The gate is then wrapped around the channel’s three exposed sides, making it better able to impose its will. Other manufacturers have similar plans, although they have found it harder to master the technology. Taiwan Semiconductor Manufacturing Company, the world’s biggest contract chipmaker, will not have its version ready until 2015.

As these finned transistors shrink they will suffer leakage problems too. The next logical step, says Mike Mayberry, Intel’s vice-president of component research, is to surround the channel on all four sides, encasing it entirely within the gate. These “gate-all-around” transistors take the form of tiny, vertically stacked wires, with the gate threaded across the wire rather like a bead on a necklace. They have been built in laboratories, but will be difficult to mass produce. A related idea is to stack transistors on top of one another to form three-dimensional chips without actually shrinking the transistors themselves. That would help keep Moore’s law chugging along, but would exacerbate the heat problem.

Besides redesigning the transistor, another option is to use new materials, such as so-called III-V materials, compounds of elements that sit on either side of silicon in the periodic table. Some of these can conduct current through a transistor more efficiently than silicon, which would allow lower voltages, cutting power consumption and extending battery life. “The general consensus is you can expect a performance increase of between 20 and 50%,” says Richard Hill of Sematech, a chip-industry research consortium.
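The power argument follows from a standard rule of thumb: the energy dissipated each time a transistor switches scales with the square of the supply voltage. A minimal sketch, using assumed illustrative voltages rather than figures from the article:

```python
# Dynamic switching energy scales as C * V^2 (a standard CMOS rule of
# thumb), so a material that lets transistors run at a lower supply
# voltage cuts power consumption quadratically.
def relative_energy(v, v_ref=0.9):
    return (v / v_ref) ** 2

for v in (0.9, 0.8, 0.7, 0.6):
    saving = (1 - relative_energy(v)) * 100
    print(f"supply {v:.1f} V -> {saving:4.0f}% less switching energy")
```

Dropping from 0.9 to 0.7 volts, say, cuts switching energy by roughly 40%, in line with the sort of gains Dr Hill describes.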

Carbon is a favourite among many researchers, either in the form of tiny rolled-up “nanotubes” or flat, atom-thick sheets known as graphene. As in the III-V materials, electrons can flow through carbon with great ease. But carbon has significant downsides, too. The electrical properties of nanotubes depend crucially on their diameter, which means manufacturing must be flawless. Graphene is, in some ways, worse: in its natural state, the material lacks a “bandgap”, which means that its conductivity cannot be switched off—a showstopping problem in a transistor. Several research groups are investigating ways to create a bandgap in graphene, says Dr Hill. But he thinks that flat sheets of other materials such as molybdenum disulphide—which sport a bandgap intrinsically—may prove better candidates for future generations of ultra-small devices.
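Why a bandgap matters can be made concrete with the textbook relation that the density of thermally excited carriers falls off as exp(-Eg/2kT). The sketch below uses standard bandgap values (not taken from the article) to compare the three materials:

```python
import math

KT = 0.026  # thermal energy at room temperature, in eV

def thermal_carriers(gap_ev):
    """Relative density of thermally excited carriers, ~exp(-Eg / 2kT).

    A material with a sizeable bandgap has almost no carriers when the
    gate is off; graphene, with no gap, always conducts.
    """
    return math.exp(-gap_ev / (2 * KT))

# Bandgaps are textbook figures: silicon ~1.1 eV, monolayer
# molybdenum disulphide ~1.8 eV, pristine graphene 0 eV.
for name, gap in [("graphene", 0.0), ("silicon", 1.1), ("MoS2 monolayer", 1.8)]:
    print(f"{name:15s} Eg = {gap:.1f} eV -> relative carrier density "
          f"{thermal_carriers(gap):.1e}")
```

With no gap at all, graphene always has carriers available to conduct; silicon and monolayer molybdenum disulphide can be switched off by ten or more orders of magnitude.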

But it is a long road from the lab to the fab, as the factories which make chips on large discs of silicon are known. And just because a device is physically plausible does not mean it makes economic sense to put it into production. New technologies are likely to be integrated slowly, with hybrid chips featuring both standard silicon transistors and exotic new devices, expects Dr Mayberry.

Even a gradual approach will push prices up. Modern fabs are already eye-wateringly expensive: Intel recently mothballed one whose cost was rumoured to have topped $5 billion. If raw speed is vital, a customer might be willing to pay more for exotic, high-speed chips, says Linley Gwennap, who runs the Linley Group. But for many products, such as midrange smartphones, that will not make sense. All the big chipmakers intend to keep shrinking their circuits until at least 2020 or so, but if that comes at the expense of rapidly rising production costs, then economics could bring the curtain down on Moore’s law before physics does.

Even if that does happen, it need not be the end of faster computers. “Fifty years of Moore’s law has made the industry fat, dumb and happy,” says Dr Gwennap. Decades of ever-faster hardware have diverted firms from trying to squeeze performance increases out of software and clever programming. It may also be possible to use existing transistors more efficiently. Many modern chips are generalists, competent at any task but excelling at none. Specialist hardware, designed to do a small number of tasks well, could offer significant speed-ups. This approach is already employed in supercomputers, which use chips originally designed for the fast-action graphics of video games.
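A software analogy gives a feel for the gap between generalist and specialist execution. The Python sketch below times the same dot product as a general-purpose interpreted loop and as a call into NumPy's specialised numerical kernel; it is an analogy only, and the exact speed-up will vary from machine to machine.

```python
import timeit
import numpy as np

# The same dot product, computed two ways: a flexible, interpreted
# loop (the generalist) and a tuned numerical routine (the specialist).
a = list(range(1_000_000))
b = list(range(1_000_000))
xa = np.array(a, dtype=np.float64)
xb = np.array(b, dtype=np.float64)

generic = timeit.timeit(lambda: sum(x * y for x, y in zip(a, b)), number=5)
special = timeit.timeit(lambda: float(xa @ xb), number=5)
print(f"generic loop: {generic:.3f}s, specialised kernel: {special:.3f}s, "
      f"speed-up: {generic / special:.0f}x")
```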

And then it gets fuzzy

In the future it might even be possible to build a computer without devices that resemble the traditional transistor. The best-known approach is quantum computing, which harnesses the fuzzy nature of quantum mechanics to perform rapid computational work. But the hype far exceeds the reality, at least for now. Building a quantum-mechanical replacement for a traditional computer is very difficult, and even then the speed advantage is heavily restricted. Quantum computers are known to be faster than ordinary ones at only a few (admittedly quite useful) tasks, such as searching unsorted information or finding the prime factors of colossal numbers. For other jobs, they may offer no advantage at all.
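The classic example of that restricted advantage is unstructured search. A classical machine must examine on the order of N items; Grover's algorithm, the quantum search procedure alluded to above, needs only about the square root of that. A quick comparison in Python:

```python
import math

# Query counts for searching N unsorted items: a classical computer
# needs on the order of N looks, while Grover's quantum algorithm
# needs roughly (pi/4) * sqrt(N) iterations.
for power in (20, 30, 40):
    n = 2 ** power
    classical = n / 2                       # average-case classical lookups
    quantum = (math.pi / 4) * math.sqrt(n)  # Grover iterations
    print(f"N = 2^{power}: classical ~{classical:.1e}, "
          f"quantum ~{quantum:.1e} ({classical / quantum:.0f}x fewer)")
```

The advantage is real but merely quadratic, and it applies only to problems with the right structure.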

There are other ways to harness quantum effects. Several firms, including Intel, are investigating an idea called spintronics, in which the “spin” of subatomic particles—a quantum-mechanical property that bears little relation to the classical notion of spin—is used to perform computation. Spintronics offers much lower power consumption, and brings other advantages too. Some spintronic devices may be able to do more logical work with a given number of components than traditional chips can manage. That could allow machines to be built from fewer devices. A typical adder, one subcomponent of a modern chip, is built from around 30 separate transistors, says Intel’s Dr Mayberry. A spintronic one could be built from just five, allowing more computational power to be packed into a given area.
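Taken at face value, Dr Mayberry's figures imply a sizeable saving. A toy calculation for a 32-bit ripple-carry adder (an assumed, simplest-case layout, not a design Intel has described):

```python
# Scaling the quoted figures: roughly 30 transistors per one-bit full
# adder in a conventional chip, versus perhaps 5 spintronic devices.
BITS = 32
cmos_devices = BITS * 30       # ~30 transistors per full adder
spintronic_devices = BITS * 5  # ~5 spintronic devices per full adder
print(f"32-bit adder: ~{cmos_devices} transistors vs "
      f"~{spintronic_devices} spintronic devices "
      f"({cmos_devices // spintronic_devices}x fewer)")
```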

The most ambitious idea goes by the general name of “neuromorphic computing”. First proposed by Carver Mead, a computer scientist, in the 1980s, it looks to biology for inspiration. Biological brains are different from silicon computers in fundamental ways, says Dr Guha at IBM. Computers are electronic devices, whereas brains rely on a mixture of electricity and chemistry. The fundamental information-processing unit of the brain, the neuron, can be connected to thousands of other neurons, whereas a typical transistor connects to just a handful of peers. A transistor may switch on and off billions of times a second; neurons fire around a million times slower. Neurons can make or break connections on the fly, allowing brains to adapt themselves to a task, while the wiring of most silicon chips is fixed.

Biological brains are not a patch on traditional computers when it comes to raw number-crunching. But they excel at other useful tasks such as pattern recognition. And compared with modern computers, they are stunningly power-efficient, adds Dr Guha. A good comparison is IBM’s own Watson computer, which famously beat all human challengers at Jeopardy!, an American word-game, in 2011. Watson was built from 90 high-specification computers, lived in a special air-conditioned room and consumed dozens of kilowatts of power. By contrast, the brain of Ken Jennings, one of the human players that the machine defeated, weighs a few pounds and draws about 20 watts of power, all of which is provided by cornflakes and sandwiches.

The problem with learning from Mother Nature is that humans do not yet understand her subtleties. Despite decades of research, no one really knows how the brain works, and that makes it hard to apply its magic to technology. But supporters of neuromorphic computing may receive help from other quarters. Brain science is a hot topic with plenty of research money sloshing around. Two big new projects, Europe’s €1 billion ($1.4 billion) Human Brain Project and America’s similar SyNAPSE initiative, promise even more. And if such projects come to fruition, then today’s marvellous, intricate masterpieces in silicon may one day look as clunky and primitive as the hand-cranked mechanical calculators they replaced.

From the print edition: Technology Quarterly
