
This post from Sumit Gupta, our GM of accelerated computing, originally appeared on IBM’s Building a Smarter Planet blog.
Last week, while on a road trip to southern California with my family, I had one of those moments that parents treasure. I impressed my kids with what I do for a living.

They wanted to know what song was playing on the radio, so I ran the song through the Shazam music app on my phone. I proudly told my kids that Shazam uses a type of high-performance computer processor from my group at NVIDIA to rapidly search its 27-million-track database and identify songs. That lightning-quick computing task took place in a far-off data center in the cloud, but, for the kids, it seemed like magic happening in the palm of my hand. “Cool, dad!”
The moment was especially thrilling for me because I foresee an explosion of innovation taking place in cloud data centers. One of the forces fueling this phenomenon is an initiative called the OpenPOWER Foundation.
The Foundation, formed late last year, is widening the market reach of IBM’s Power microprocessor technology, which, until recently, was reserved for use in its high-end servers. By building on the performance of IBM’s POWER CPUs, more companies can innovate in data centers – creating computers and networks of systems custom-tailored to their needs.
As cloud data centers become a more pervasive part of our lives through mobile apps like Shazam, companies need more computational capability to harness massive amounts of data, detect patterns and extract insights. These new “big data” tasks require higher-performance processors and accelerators, which is where IBM POWER CPUs and NVIDIA GPU accelerators come in.
Along with IBM and the other members of the OpenPOWER Foundation, we can offer the high performance required to tap into the big data being generated every second and offer better insights, faster. The Foundation enables technology suppliers to offer solutions customized for cloud services, by means of optimized servers, interconnections and even microprocessor implementations. Think of it as a license to innovate.
It’s also a license to collaborate. NVIDIA is working closely with IBM to combine our next-generation Tesla GPU accelerators with IBM’s next-generation microprocessor technology, POWER8, in new high-performance computing systems. Working side by side, each of these two sophisticated processors – the GPU and the CPU – excels at tasks the other does not.
To get the most out of both, NVIDIA developed NVLink, a new high-speed interconnect for linking the GPU and CPU. NVLink will make it possible to move data back and forth between them 5 to 12 times faster than is possible today. NVIDIA will offer NVLink as licensed technology to OpenPOWER Foundation members and other processor vendors, paving the way for new systems 50 to 100 times faster than today’s most powerful ones.
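To get a feel for why interconnect speed matters, here is a rough back-of-the-envelope sketch in Python. The baseline bandwidth and dataset size are illustrative assumptions, not NVLink or PCIe specifications; only the 5x-to-12x speedup range comes from the figures above.

```python
# Back-of-the-envelope: time to move a dataset between CPU and GPU
# over interconnects of different speeds. The baseline bandwidth and
# dataset size below are illustrative assumptions, not real specs.

def transfer_time_seconds(data_gb, bandwidth_gb_per_s):
    """Time to move data_gb gigabytes over a link of the given bandwidth."""
    return data_gb / bandwidth_gb_per_s

dataset_gb = 64        # hypothetical working set an accelerator must ingest
baseline_gbps = 16     # assumed speed of a current CPU-GPU link

slow = transfer_time_seconds(dataset_gb, baseline_gbps)
for speedup in (5, 12):  # the 5x-12x range cited in the post
    fast = transfer_time_seconds(dataset_gb, baseline_gbps * speedup)
    print(f"{speedup}x link: {slow:.2f}s -> {fast:.2f}s per transfer")
```

The point of the sketch: for data-hungry workloads, the time spent shuttling data between CPU and GPU can dominate, so a several-fold faster link shrinks that overhead proportionally.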
With the marriage of POWER8 and NVIDIA GPUs connected via NVLink, these systems are going to be simply unbeatable when it comes to handling some of the most demanding computing tasks for enterprises and popular consumer Web services. One example: harvesting insights in real time from massive amounts of medical data. Also, IBM is working to accelerate a range of its enterprise data analytics applications with GPUs, allowing customers to take advantage of these high-performance POWER-GPU systems.
As the center of gravity in personal computing shifted from PCs to smartphones and tablets, new mobile operating systems took hold – unleashing innovation and giving device makers, web service providers and consumers plenty of technology choices. The same kind of shift is happening now in data center computing.
With the innovation unleashed by OpenPOWER, I foresee a practically endless stream of wonderful apps being invented. So heads up, kids. You won’t believe what comes next. And, by the way, the song playing in the car was “Burn” by Ellie Goulding.