Posts Tagged 'HPC'

July 10, 2015

GPU Accelerated Supercomputing Comes to IBM’s SoftLayer Cloud Service

NVIDIA GPU technology powers some of the world’s fastest supercomputers. In fact, GPU technology is at the heart of the current #1 U.S. system, Titan, located at Oak Ridge National Laboratory. It will also be an important part of Titan’s forthcoming successor, Summit, an advanced new supercomputer based on next-generation, ultra-high performance GPU-accelerated OpenPOWER servers.

But not everyone has access to these monster machines for their high-performance computing, deep learning, and scientific computing work. That’s why NVIDIA is working with IBM to make supercomputing-class technology more accessible to researchers, engineers, developers, and other HPC users.

IBM Cloud announced earlier this week that NVIDIA Tesla K80 dual-GPU accelerators are now available on SoftLayer bare metal cloud servers. The two companies worked closely together to test and tune the speedy delivery of NVIDIA Tesla K80 enabled servers. The Tesla K80 GPU accelerators are the flagship technology of our Tesla Accelerated Computing Platform, delivering 10 times higher performance than today’s fastest CPU for a range of deep learning, data analytics and HPC applications.

Bringing Tesla K80 GPUs to SoftLayer means that more researchers and engineers worldwide will have access to ultra-high-performance computing resources – without having to deal with the cost and time commitment of purchasing and maintaining their own HPC clusters. On-demand high-performance computing can now be delivered in a matter of hours instead of the weeks or months it takes to build and deploy a dedicated system. Never before has bare-metal compute infrastructure been so agile. Fully populated Tesla K80 GPU nodes can be provisioned and put to work in two to four hours, then de-provisioned or reassigned just as quickly.

With support for GPU accelerators, SoftLayer is providing full-scale data center resources for users to build a compute cluster, burst an existing cluster, or launch a compute-intensive project—all on easy-to-use, cost-effective, and readily accessible cloud infrastructure.

The strength of SoftLayer’s API and the experience of IBM Cloud make it easy for users to provision and reclaim resources, enabling true cloud bursting for compute clusters. Controlling resources is key to controlling costs.

We’re delighted to expand the reach of GPU-accelerated computing further than ever before. For more info on IBM Cloud’s GPU offerings on SoftLayer or to sign up, visit


Michael O’Neill is an established leader for NVIDIA. He provides specialized strategic thought leadership and technical guidance to customers on NVIDIA GRID and Tesla GPUs in virtualized environments. He works closely with business leaders to develop innovative solutions for graphical and compute heavy workloads. With over twenty years of experience in planning, developing, and implementing state of the art information systems, he has built a significant body of work empowering people to live, work and collaborate from anywhere on any device. His guidance has provided Fortune 500 companies with cloud computing solutions to help IT and service providers build private, hybrid and public clouds to deliver high-performance, elastic and cost-effective services for mobile workstyles.

August 21, 2012

High Performance Computing - GPU vs. CPU

Sometimes, technical conversations can sound like people are just making up tech-sounding words and acronyms: "If you want HPC to handle Gigaflops of computational operations, you probably need to supplement your server's CPU and RAM with a GPU or two." It's like hearing a shady auto mechanic talk about replacing gaskets on double overhead flange valves or hearing Chris Farley (in Tommy Boy) explain that he was "just checking the specs on the endline for the rotary girder" ... You don't know exactly what they're talking about, but you're pretty sure they're lying.

When we talk about high performance computing (HPC), a natural tendency is to go straight into technical specifications and acronyms, but that makes the learning curve steeper for people who are trying to understand why a solution is better suited for certain types of workloads than technology they are already familiar with. With that in mind, I thought I'd share a quick explanation of graphics processing units (GPUs) in the context of central processing units (CPUs).

The first thing that usually confuses people about GPUs is the name: "Why do I need a graphics processing unit on a server? I don't need to render the visual textures from Crysis on my database server ... A GPU is not going to benefit me." It's true that you don't need cutting-edge graphics on your server, but a GPU's power isn't limited to "graphics" operations. The "graphics" part of the name reflects the kind of processing GPUs were originally designed to perform, but over the last ten years or so, developers and engineers have adapted that processing power for general-purpose computing.

GPUs were designed with a highly parallel structure that allows large blocks of data to be processed at once: the same computation is performed on many pieces of data simultaneously rather than one after another. If you assigned the task of rendering a 3D environment to a CPU, it would slow to a crawl, because a CPU handles work more sequentially. Because GPUs are better than CPUs at performing repetitive tasks on large blocks of data, you start to see the benefit of enlisting a GPU in a server environment.
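To make that contrast concrete, here's a minimal sketch in plain Python (an illustrative analogy, not GPU code; the per-element function and pixel values are hypothetical). The pattern to notice is that each output depends only on its own input, which is exactly what a GPU spreads across thousands of threads:

```python
# Illustrative analogy of data parallelism (plain Python, not GPU code).
# Each output element depends only on its own input, so on a GPU every
# element could be handled by a separate thread at the same time.

def kernel(pixel):
    # Hypothetical per-element work: brighten one 8-bit pixel value.
    return min(pixel + 40, 255)

pixels = [10, 120, 200, 250]

# CPU-style: elements processed one after another.
sequential = []
for p in pixels:
    sequential.append(kernel(p))

# GPU-style: the same kernel expressed as one operation over the whole
# block of data; a GPU would launch one thread per element.
parallel = list(map(kernel, pixels))

print(sequential)  # [50, 160, 240, 255]
```

Both versions compute the same answer; the difference is that the second formulation has no ordering requirement, so the work can happen all at once.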

The Folding@home project and bitcoin mining are two of the most visible distributed computing efforts that GPUs are accelerating, and they're perfect examples of workloads made dramatically faster by the parallel processing power of graphics processing units. You don't need to be folding proteins or mining blocks to get the performance benefits, though; if you're taxing your CPUs with repetitive compute tasks, a GPU could make your life a lot easier.
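As a toy illustration of the repetitive task behind mining, the sketch below brute-forces nonces until a SHA-256 digest has a required prefix. The header and difficulty here are hypothetical and far easier than Bitcoin's real double-SHA-256 target, but the structure is the same: every candidate nonce is independent, which is why the search maps so naturally onto thousands of GPU threads.

```python
import hashlib

# Toy proof-of-work loop (hypothetical header, trivially low difficulty).
# Real mining hashes an 80-byte block header against a far smaller
# target, but the repetitive, independent structure is identical.
header = b"example block header"
prefix = "00"  # require two leading hex zeros in the digest

nonce = 0
while True:
    digest = hashlib.sha256(header + nonce.to_bytes(8, "big")).hexdigest()
    if digest.startswith(prefix):
        break
    nonce += 1

print(nonce, digest[:16])
```

On a CPU these hashes are tried one after another; a GPU can evaluate thousands of candidate nonces per clock cycle across its cores, which is the entire speedup.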

If that still doesn't make sense, I'll turn the floor over to the Mythbusters in a presentation for our friends at NVIDIA:

SoftLayer uses NVIDIA Tesla GPUs in our high performance computing servers, so developers can use "Compute Unified Device Architecture" (CUDA) to easily take advantage of their GPU's capabilities.

Hopefully, this quick rundown is helpful in demystifying the "technobabble" about GPUs and HPC ... As a quick test, see if this sentence makes more sense now than it did when you started this blog: "If you want HPC to handle Gigaflops of computational operations, you probably need to supplement your server's CPU and RAM with a GPU or two."


April 17, 2012

High Performance Computing for Everyone

This guest blog was submitted by Sumit Gupta, senior director of NVIDIA's Tesla High Performance Computing business.

The demand for greater levels of computational performance remains insatiable in the high performance computing (HPC) and technical computing industries, as researchers, geophysicists, biochemists, and financial quants continue to seek out and solve the world's most challenging computational problems.

However, access to high-powered HPC systems has been a constant problem. Researchers must compete for supercomputing time at popular open labs like Oak Ridge National Laboratory in Tennessee. And small and medium-sized businesses, and even large companies, cannot afford to constantly build out larger computing infrastructure for their engineers.

Imagine the new discoveries that could happen if every researcher had access to an HPC system. Imagine how dramatically the quality and durability of products would improve if every engineer could simulate product designs 20, 50, or 100 times more often.

This is where NVIDIA and SoftLayer come in. Together, we are bringing accessible and affordable HPC computing to a much broader universe of researchers, engineers and software developers from around the world.

GPUs: Accelerating Research

High-performance NVIDIA Tesla GPUs (graphics processing units) are quickly becoming the go-to solution for HPC users because of their ability to accelerate all types of commercial and scientific applications.

From Beijing to Silicon Valley — and just about everywhere in between — GPUs are enabling breakthroughs and discoveries in biology, chemistry, genomics, geophysics, data analytics, finance, and many other fields. They are also driving computationally intensive applications, like data mining and numerical analysis, to much higher levels of performance — as much as 100x faster.

The GPU's "secret sauce" is its unique ability to provide power-efficient HPC performance while working in conjunction with a system's CPU. With this "hybrid architecture" approach, each processor is free to do what it does best: GPUs accelerate the parallel research application work, while CPUs process the sequential work.

The result is an often dramatic increase in application performance.
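A minimal sketch of that division of labor, again in plain Python with a hypothetical per-sample function: the first stage is independent work per element (the part that would be offloaded to the GPU), while the second stage carries a dependency from step to step (the part that stays on the CPU).

```python
# Hedged sketch of the hybrid CPU+GPU split, in plain Python.
# Stage 1 is data-parallel (GPU-friendly); stage 2 is sequential
# (CPU-friendly). The function and inputs are hypothetical.

def simulate_sample(x):
    # Independent per-sample computation: ideal for GPU offload.
    return x * x + 1

samples = [0.5, 1.0, 1.5, 2.0]

# Stage 1 (offloadable): the same function over every sample,
# with no ordering between elements.
results = [simulate_sample(x) for x in samples]

# Stage 2 (CPU): a dependent chain, where each step needs the
# previous step's value, e.g. tracking the running best result.
best = float("-inf")
history = []
for r in results:
    best = max(best, r)
    history.append(best)

print(history)  # running best after each sample
```

In a real hybrid application the parallel stage dominates the runtime, so accelerating it on the GPU while the CPU handles the sequential bookkeeping is where the dramatic speedups come from.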

SoftLayer: Affordable, On-demand HPC for the Masses

Now, we're coupling GPUs with easy, real-time access to computing resources that don't break the bank. SoftLayer has created exactly that with a new GPU-accelerated hosted HPC solution. The service uses the same technology that powers some of the world's fastest HPC systems, including dual-processor Intel Xeon E5-2600 (Sandy Bridge) based servers with one or two NVIDIA Tesla M2090 GPUs.


SoftLayer also offers an on-demand, consumption-based billing model that allows users to access HPC resources when and how they need to. And, because SoftLayer is managing the systems, users can keep their own IT costs in check.

You can get more system details and pricing information here: SoftLayer HPC Servers

I'm thrilled that we are able to bring the value of hybrid HPC computing to larger numbers of users. And, I can't wait to see the amazing engineering and scientific advances they'll achieve.

-Sumit Gupta, NVIDIA - Tesla
