By Joe Osborne
Move over, Watson – if you haven’t already. Nvidia has just unveiled the DGX-1, the “world’s first deep learning supercomputer” built on the firm’s newly announced Pascal architecture.
Designed to power businesses’ machine learning and artificial intelligence efforts with GPU-accelerated computing, the DGX-1 delivers throughput equal to that of 250 servers running Intel Xeon processors.
Specifically, the DGX-1 can pump out 170 teraflops, or 170 trillion floating-point operations per second, with its eight 16GB Tesla P100 graphics chips. By comparison, and while built for a different purpose, IBM’s Watson supercomputer is clocked at 80 teraflops.
“It’s like having a data center in a box,” Nvidia CEO and co-founder Jen-Hsun Huang said on stage during his GPU Technology Conference (GTC) keynote address.
As an example, Huang used the time taken to train AlexNet, a popular neural network for computer image recognition developed by University of Toronto graduate Alex Krizhevsky. These neural networks have to be “trained” by powerful computers to properly recognize images – or whatever their primary function is – on their own.
For the 250 servers running Intel Xeon chips, that training would take 150 hours of computation time. The DGX-1 can do the same work in two hours with its eight Tesla P100 GPUs.
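To give a sense of what “training” a network like AlexNet actually involves, here is a minimal, purely illustrative sketch of a single GPU training step. It assumes PyTorch and torchvision (not mentioned in the article, and not what Nvidia benchmarked) and feeds random tensors in place of the ImageNet images AlexNet was really trained on.

```python
# Illustrative sketch only -- not Nvidia's benchmark code.
# Assumes PyTorch and torchvision are installed; random tensors stand in
# for the real ImageNet training data.
import torch
import torch.nn as nn
from torchvision.models import alexnet

device = "cuda" if torch.cuda.is_available() else "cpu"

model = alexnet(num_classes=1000).to(device)   # the AlexNet architecture
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

# One toy training step: a batch of 32 fake 224x224 RGB images with fake labels.
images = torch.randn(32, 3, 224, 224, device=device)
labels = torch.randint(0, 1000, (32,), device=device)

optimizer.zero_grad()
loss = criterion(model(images), labels)   # forward pass
loss.backward()                           # backward pass
optimizer.step()                          # weight update
print(f"toy step complete, loss = {loss.item():.3f}")
```

Real training repeats steps like this over millions of images for many passes, which is why the raw throughput of the hardware dominates the wall-clock time.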
Source: techradar.com – Gaming