Jack Kilby’s revolutionary idea was to make all the
different components of a circuit out of the same flat block of
semiconductor material. Not only would this get rid of wires and
faulty connections, it would make the entire circuit much more
compact. Kilby demonstrated his first “integrated circuit” on
Sept. 12, 1958.
Six months later, in California, another engineer, Robert
Noyce, independently came up with the idea of making an
integrated circuit. Noyce’s chip was better suited to mass
production, and he went on to co-found Intel.
Thus was launched a revolution. The first chip-based
computer was built for the U.S. Air Force in 1961.
The true potential of the integrated circuit became clear when
Texas Instruments unveiled the pocket calculator. Previously,
calculators had been bulky devices that had to be plugged into
the electrical mains. The pocket calculator, small enough to
hold in one’s palm, had a chip inside and ran on batteries.
Progress was rapid thereafter. Many have heard of
Moore’s law, which has become a mantra of the digital age. First
put forward by the Intel co-founder Gordon Moore in the 1960s,
it says that the processing power of a chip doubles every two
years, while the price falls by half. For more than four decades,
Moore’s law has held, driving incredible growth and
miniaturization — and wealth.
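As a back-of-the-envelope illustration of what that doubling
implies (a sketch added here, not a figure from the article),
compounding one doubling every two years over the four decades
the article mentions multiplies processing power by 2^20, about
a million times:

    # Illustrative arithmetic only; the 40-year span loosely
    # follows the article's "more than four decades".
    years = 40
    doublings = years // 2          # one doubling every two years
    growth_factor = 2 ** doublings  # 2**20
    print(f"After {years} years: about {growth_factor:,}x "
          f"the processing power")
    # -> After 40 years: about 1,048,576x the processing power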
The question is whether the semiconductor industry can
sustain this pace. Further increasing the processing power of
chips is proving difficult as certain fundamental physical
barriers are reached. At the same time, new frontiers are
opening up. The quest is on to make chips powered by light
instead of electricity, which could enable much faster
computers.
Saswato Das. The Chip that Changed the World. Internet: <www.nytimes.com> (adapted).