The race is on to control the global supply chain for AI chips
The focus is no longer just on faster chips, but on more chips clustered together
In 1958 Jack Kilby at Texas Instruments engineered a silicon chip with a single transistor. By 1965 Fairchild Semiconductor had learned how to make a piece of silicon with 50 of the things. As Gordon Moore, one of Fairchild’s founders, observed that year, the number of transistors that could fit on a piece of silicon was doubling on a more or less annual basis.
In 2023 Apple released the iPhone 15 Pro, powered by the A17 Pro chip, which packs 19bn transistors. The number of transistors on a chip has thus doubled 34 times in the 65 years since Kilby's device. That exponential progress, loosely referred to as Moore's law, has been one of the engines of the computing revolution. As transistors became smaller they got cheaper (more fit on a chip) and faster, enabling all the hand-held supercomputing wonders of today. But the sheer number of numbers that AI programs need to crunch has been stretching Moore's law to its limits.
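As a rough check on that doubling count, here is a back-of-the-envelope sketch (not a calculation from the article itself), assuming the tally runs from the single transistor of Kilby's 1958 chip to the roughly 19bn on the A17 Pro in 2023:

```python
import math

# Back-of-the-envelope check of the doubling count quoted above.
# Assumptions: start from Kilby's single-transistor chip (1958) and
# end at the roughly 19bn transistors of Apple's A17 Pro (2023).
start_count, end_count = 1, 19e9
doublings = math.log2(end_count / start_count)   # ~34.1
years = 2023 - 1958                               # 65

print(f"{doublings:.1f} doublings over {years} years "
      f"(~{years / doublings:.1f} years per doubling)")
```

On those figures the transistor count has doubled roughly once every two years.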