By Joe Osborne
When Google unveiled its Tensor Processing Unit (TPU) during this year’s Google I/O conference in Mountain View, California, it finally clicked for this editor in particular that machine learning is the future of computing hardware.
Of course, the TPU is only one part of the firm’s mission to push machine learning – the practice that powers chatbots, Siri and the like – forward. (It’s also the chip behind AlphaGo’s recent defeat of the world Go champion.) Google also has TensorFlow, its open source machine intelligence software library.
And sure, the chips we find in our laptops and smartphones will continue to get faster and more versatile. But it seems we’ve already seen the extent of the computing experiences these processors can provide, limited as they are by the devices they power.
Now, it’s the TPU – a meticulous amalgamation of silicon built for one specific purpose – and other specialized processors, both already here (like Apple’s M9 co-processor) and still to come, that stand to push the advancement of mankind’s processing power – and in turn our devices’ capabilities – further and faster than ever before.
So, we wanted to learn more about this new kind of chip and how exactly it’s different …read more
Source: techradar.com – Computing Components