Silicon Renaissance with AI

Hardware is changing to enable more applications of artificial intelligence

This article is based on a piece written by Rob Toews as a contributor to Forbes.

He argues that a new wave of innovation is under way in the semiconductor industry.

He traces this progress back to 1971, when Intel introduced the world’s first microprocessor.

Computer chips have improved enormously in the fifty years since.

He talks of the changing architecture of computer chips.

That is because AI, and machine learning in particular, places unusual demands on hardware.

Many hardware companies are therefore purpose-building chips for AI.

(Image caption: Bay Area startup Cerebras Systems recently unveiled the largest computer chip in history, purpose-built for AI. Photo by Jessica Chou, The New York Times.)

The central processing unit (CPU) has long been the dominant chip in computing.

Toews argues this is because of its flexibility.

Deep learning is fundamentally trial-and-error-based: parameters are tweaked, matrices are multiplied, and figures are summed over and over again across the neural network as the model gradually optimises itself.

“Parallelisation — the ability for a processor to carry out many calculations at the same time, rather than one by one — becomes critical.”

He says that CPUs are ill-equipped for this task: “CPUs process computations sequentially, not in parallel […] This creates a choke point in data movement known as the ‘von Neumann bottleneck’. The upshot: it is prohibitively inefficient to train a neural network on a CPU.”

The graphics processing unit (GPU) has become increasingly important.

Unlike CPUs, GPUs can complete many thousands of calculations in parallel.
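To make the contrast concrete, here is a small illustrative sketch (my own, not from Toews’ article) in Python with NumPy. It expresses the same matrix multiplication two ways: as nested loops that execute one multiply-add at a time, the way a single CPU core would naively step through it, and as a single vectorised operation of the kind that parallel hardware such as a GPU can spread across thousands of cores at once.

```python
import numpy as np

def matmul_sequential(a, b):
    """Multiply two matrices one multiply-add at a time.

    Every one of the n*m*k operations below happens strictly
    one after another, which is the sequential pattern that
    creates the bottleneck described above.
    """
    n, k = a.shape
    k2, m = b.shape
    assert k == k2, "inner dimensions must match"
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for p in range(k):
                out[i, j] += a[i, p] * b[p, j]
    return out

a = np.arange(6, dtype=float).reshape(2, 3)
b = np.arange(12, dtype=float).reshape(3, 4)

# The vectorised form (`a @ b`) performs identical arithmetic,
# but delegates it to optimised kernels that parallel hardware
# can execute many elements at a time.
assert np.allclose(matmul_sequential(a, b), a @ b)
```

The arithmetic is identical either way; the difference is purely in how much of it can happen simultaneously, which is exactly why deep learning workloads favour parallel hardware.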

Nvidia’s gaming chips were in fact well suited to handle the types of workloads that machine learning algorithms demanded.

The company has reaped incredible gains as a result: Nvidia’s market capitalization jumped twenty-fold from 2013 to 2018.

Toews mentions a number of AI chip companies that have emerged in recent years:

  • Nervana Systems (bought for $408M in April 2016).
  • Habana Labs (bought for $2B in December 2019).
  • Groq.
  • Lightmatter (raised $33M from GV, Spark Capital and Matrix Partners).
  • Horizon Robotics.
  • SambaNova Systems.
  • Graphcore.
  • Wave Computing.
  • Blaize.
  • Mythic.
  • Kneron.

He asks who will be the next Intel.

“Cerebras’ audacious approach is, to put it simply, to build the largest chip ever. Recently valued at $1.7B, the company has raised $200M from top investors including Benchmark and Sequoia. The specifications of Cerebras’ chip are mind-boggling. It is about 60 times larger than a typical microprocessor. It is the first chip in history to house over one trillion transistors (1.2 trillion, to be exact). It has 18 GB memory on-chip — again, the most ever.”

However, this is a hugely challenging (some might say ludicrous) engineering task.

Nevertheless, Cerebras’ AI chips are already in commercial use:

“…just last week, Argonne National Laboratory announced it is using Cerebras’ chip to help in the fight against coronavirus.”

Groq is building a chip designed for a batch size of one, meaning that it processes data samples one at a time rather than in large batches, an approach suited to low-latency, real-time inference.
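The difference between batched and batch-size-one processing can be sketched as follows. This is a toy illustration of the two computation patterns, assuming nothing about Groq’s actual hardware or API; the “model” here is just a made-up linear layer.

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.standard_normal((8, 4))   # a toy linear layer
samples = rng.standard_normal((32, 8))  # 32 incoming requests

# Batched: wait until many samples accumulate, then process them
# together in one large matrix multiplication (high throughput,
# but each request waits for the batch to fill).
batched_out = samples @ weights          # shape (32, 4)

# Batch size of one: process each sample the moment it arrives
# (lower per-request latency, no waiting for a batch).
single_outs = np.stack([s @ weights for s in samples])

# The numerical results are identical; only the scheduling differs.
assert np.allclose(batched_out, single_outs)
```

The outputs match either way; what changes is when each result becomes available, which is why batch-size-one designs are pitched at real-time workloads.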

Perhaps no company has a more mind-bending technology vision than Lightmatter. Founded by photonics experts, Boston-based Lightmatter is seeking to build an AI microprocessor powered not by electrical signals, but by beams of light. The company has raised $33M from GV, Spark Capital and Matrix Partners to pursue this vision. According to the company, the unique properties of light will enable its chip to outperform existing solutions by a factor of ten.

He briefly mentions Google’s Tensor Processing Unit (TPU), which I have covered previously. Apparently Tesla, Facebook and Alibaba, among other technology giants, all have in-house AI chip programs.

According to Toews, there is a race on in the hardware industry, and it will be an interesting one to keep an eye on.