Less than two decades of Google Data Centers – from corkboard x86 hardware via GPU to TPU – now they make their own chips too
Posted by jpluimers on 2016/05/20
Remember the image on the right? It was the first “corkboard” production server Google used in 1999 (it’s a museum piece now).
From there, they used commodity-class x86 server computers running customized versions of Linux for a “long” time. Around 2005, each machine even got its own 12V battery as a built-in UPS, and Google was running 1,160 machines in a single 1AAA shipping container.
Later they started using a mix of CPUs and GPUs to increase performance per watt, recently moved from 12V to 48V power distribution, and even contributed a 48V DC data center rack design to the Open Compute Project.
In the meantime, TensorFlow and AI became even more important to Google, and during the Google I/O 2016 keynote they revealed yet another step: TPUs, chips made especially for TensorFlow that provide even better performance per watt for machine learning than GPUs. The TPUs are not FPGAs (popular, for instance, for mining Bitcoin), but ASICs that perform orders of magnitude better.
So in about 18 years, Google moved from cleverly assembled commodity hardware to highly specialised custom chips.
Exciting times are ahead of us. I’m really looking forward to the next steps.
- Google platform – Wikipedia, the free encyclopedia
- Google “Corkboard” Server, 1999 | National Museum of American History
- Google uncloaks once-secret server – CNET
- Open Hardware Paves the Way to Commodity Water Cooled Servers | ClusterDesign.org
- Google, Intel Prep 48V Servers | EE Times
- Google Contributes 48V DC Data Center Rack to Open Compute | Data Center Knowledge
- Google Has Built Its Own Custom Chip for AI Servers | Data Center Knowledge
- Google’s Making Its Own Chips Now. Time for Intel to Freak Out | WIRED
- Google’s Tensor Processing Unit could advance Moore’s Law 7 years into the future | PCWorld
- TensorFlow – Wikipedia, the free encyclopedia