Posts Tagged 'Hybrid Cloud'

January 15, 2015

Hot in 2015: Trends and Predictions

As cloud technology moves into 2015, the pace of innovation in the cloud space continues to accelerate. Being no stranger to innovation ourselves, we’ve got our collective finger on the pulse of what’s up and coming. Here are some trends we see on the horizon for cloud in 2015.

Hybrid cloud
As more and more workloads move to the cloud, many companies are looking for a way to leverage all of the value and economies of scale that the cloud provides while still being able to keep sensitive data secure. Hybrid cloud solutions, which can mean an environment that employs both private and public cloud services, on- and off-prem resources, or a service that combines both bare metal and virtual servers, will continue to grow in popularity. With 70 percent of CIOs planning to change their company’s sourcing and technology relationship within the next three years, Gartner notes that hybrid IT environments will dominate the space as they offer many of the benefits of legacy, old-world environments but still operate within the new-world as-a-service model.

Read more:
+IBM Hybrid Clouds

Bare metal
In 2015, the term bare metal will be officially mainstream. Early on, bare metal servers were seen as a necessity for only a few users, but they have since become the ideal solution for processor-intensive and disk I/O-intensive workloads like big data and analytics. We’ve been in the business of bare metal (formerly called dedicated servers) for 10 years now, and we’re happy to see the term become a standard part of the cloud dialogue. As cloud workloads get tougher and more complex in 2015, companies will continue to turn to bare metal for its raw performance.

Security
Security has been a hot topic in the news. In 2014, major retailers were hacked, certain celebrity photos were leaked, and issues surrounding government surveillance were in the spotlight. More than ever, these incidents have reminded everyone that the underlying architectures of the Internet are not secure, and without protections like firewalls, private networks, encryption, and other security features, private data isn’t truly private. In response to these concerns, tech companies will offer even higher levels of security in order to protect consumers’ and merchants’ sensitive data.

Read more:
+SoftLayer Cloud Security

Big data
Big data moves from hype and buzzword status to the mainstream. The cloud industry has seen a change in the way big data is being put to work. It’s becoming more widely adopted by organizations of all types and sizes, in both the public and private sectors. One such organization is the Chicago Department of Public Health, which is using predictive analytics and data to experiment with and improve food inspection and sanitation work. The city’s team has developed a machine-learning program to mine Twitter for tweets that use words related to food poisoning so that they can reply directly to posters, encouraging them to file a formal report. We’ll see much more of this kind of smart application of big data analytics to real-life problems in the year to come.
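
As a rough illustration of how such a program might begin, here is a minimal Python sketch of keyword-based tweet filtering; the keyword list and the flag_tweet helper are hypothetical examples for illustration, not the city's actual implementation:

```python
# A minimal, hypothetical sketch of keyword-based tweet filtering, similar in
# spirit to the Chicago program described above. The keywords and sample
# tweets are illustrative placeholders.

FOOD_POISONING_KEYWORDS = {"food poisoning", "stomach ache", "vomiting", "#foodborne"}

def flag_tweet(tweet_text):
    """Return True if the tweet mentions any food-poisoning-related keyword."""
    text = tweet_text.lower()
    return any(keyword in text for keyword in FOOD_POISONING_KEYWORDS)

tweets = [
    "Pretty sure I got food poisoning from that taco stand last night",
    "Great dinner downtown, no complaints!",
]

for tweet in tweets:
    if flag_tweet(tweet):
        print("Candidate for follow-up:", tweet)
```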

Read more:
+ In Chicago, Food Inspectors are Guided by Big Data

Docker
Docker is an open platform for developers and system administrators to build, ship, and run distributed applications. It enables apps to be assembled quickly from components and eliminates the friction between development, QA, and production environments. A Docker container streamlines workflow by packaging an application together with all of its dependencies, so everyone works on the exact same deployment stack from development through production. Containers can also be moved between bare metal and hybrid cloud environments, positioning Docker to be the next big thing on the cloud scene in 2015. IBM has already capitalized on Docker’s simplicity and portability by launching its IBM Containers service, part of Bluemix, last month. IBM Containers will help enterprises launch Docker containers directly onto the IBM Cloud via bare metal servers from SoftLayer.
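
As a rough illustration of that portability, here is a minimal sketch using the Docker SDK for Python (an optional client library for driving a Docker daemon from Python, separate from the IBM Containers service); the image name and command are arbitrary examples:

```python
# A minimal sketch using the Docker SDK for Python ("pip install docker").
# Assumes a local Docker daemon is running; the image and command below are
# arbitrary examples.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The same container image runs unchanged on a laptop, a bare metal server,
# or a virtual server instance in the cloud.
output = client.containers.run("ubuntu:14.04", "echo 'hello from a container'", remove=True)
print(output.decode())
```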

Read more:
+Docker
+At DockerCon Amsterdam, an Under Fire Docker Makes a Raft of Announcements

Health care
The medical and health care industries will continue to adopt cloud in 2015 to store, compute, and analyze medical data as well as address public concerns about modernizing record-keeping and file-sharing practices. The challenge will be keeping patients’ sensitive medical data secure so that it can be shared among health care providers, but kept safely away from hackers.

Read more:
+Coriell Life Sciences

Data sovereignty
In order to comply with local data residency laws in certain regions, many global companies are finding it necessary to host data in-country. As new data centers are established worldwide, it’s becoming easier to meet data sovereignty requirements. In launching those data centers, cloud providers are also increasing the size and reach of their networks, creating lower-latency connections and an even more competitive cloud marketplace. As a result, smaller players might be left in the dust in 2015.

Read more:
+ Cloud Security Remains a Barrier for CIOs Across Europe

Enterprises
Last, but certainly not least, 2015 will see an aggressive move to the cloud by enterprise organizations. The cost- and time-saving benefits of cloud adoption will continue to win over large companies.

Read more:
+IBM Enterprise Cloud System

Looking Ahead
Martin Schroeter, senior vice president and CFO of finance and enterprise at IBM, has projected approximately $7 billion in total cloud-related sales in 2015, with $3 billion of that coming from new offerings and the rest from older products shifted to be delivered via the cloud.

SoftLayer will continue to match the pace of cloud adoption by providing innovative services and products, signing new customers, and launching new data centers worldwide. In Q1, our network of data centers will expand into Sydney, Australia, with more to come in 2015.

Read more:
+IBM’s Cloud-Based Future Rides on Newcomer Crosby
+InterConnect 2015

-Marc

June 30, 2014

OpenNebula 4.8: SoftLayer Integration

In the next month, the team of talented developers at C12G Labs will be rolling out OpenNebula 4.8, and in that release, they will be adding integration with SoftLayer! If you aren't familiar with OpenNebula, it's a full-featured open-source platform designed to bring simplicity to managing private and hybrid cloud environments. By combining existing virtualization technologies with advanced features for multi-tenancy, automatic provisioning, and elasticity, OpenNebula is built to meet the real needs of sysadmins and devops.

In OpenNebula 4.8, users can quickly and seamlessly provision and manage SoftLayer cloud infrastructure through OpenNebula's simple, flexible interface. From a single pane of glass, you can create virtual data center environments, configure and adjust cloud resources, and automate the execution and scaling of multi-tiered applications. If you don't want to leave the command line, you can access the same functionality from a powerful CLI tool or through the OpenNebula API.
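
If you'd like a feel for the API route, here is a minimal Python sketch against OpenNebula's XML-RPC interface; the endpoint, credentials, and VM template values are placeholder assumptions, and the full template syntax is covered in the OpenNebula documentation:

```python
# A minimal sketch against OpenNebula's XML-RPC API using Python's standard
# library. The endpoint, credentials, and VM template below are placeholder
# assumptions.
import xmlrpc.client

ONE_ENDPOINT = "http://frontend.example.com:2633/RPC2"  # assumed front-end address
SESSION = "oneadmin:password"                           # "user:password" session string

server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)

# Define a small VM; in a hybrid setup, the scheduler can place it on the
# SoftLayer provider that an administrator has configured.
template = """
NAME   = "demo-vm"
CPU    = 1
MEMORY = 1024
"""

response = server.one.vm.allocate(SESSION, template, False)
success, result = response[0], response[1]
if success:
    print("Created VM with ID", result)
else:
    print("OpenNebula returned an error:", result)
```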

When the C12G Labs team approached us with the opportunity to be featured in the next release of their platform, several folks from the office were happy to contribute their time to make the integration as seamless as possible. Some of our largest customers have already begun using OpenNebula to manage their hybrid cloud environments, so official support for the SoftLayer cloud in OpenNebula is a huge benefit to them (and to us). The result of this collaboration will be released under the Apache license, and as such, it will be freely available to the public.

To give you an idea of how easy OpenNebula is to use, the C12G Labs team created an animated GIF showing the process of creating and powering down virtual machines, creating a server image, and managing account settings.


We'd like to give a big shout-out to the C12G Labs team for all of the great work they've done on the newest version of OpenNebula, and we look forward to seeing how the platform continues to grow and improve in the future.

-@khazard

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, the heterogeneous elements are two distinct kinds of compute: bare metal and virtual server instances, both delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)


Figure 2: SoftLayer's Hybrid - Dedicated + Virtual


Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the two uses of the term are a lot more similar than they first appear. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx model to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.
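
To make that progression concrete, here is a minimal sketch using the SoftLayer Python client that provisions a virtual server instance and lists the bare metal servers already in the same account; the credentials, hostnames, sizing, and data center values are placeholder assumptions:

```python
# A minimal sketch using the SoftLayer Python client ("pip install softlayer")
# to manage virtual and bare metal resources through the same account and API.
# Credentials, hostname, sizing, and data center values are placeholders.
import SoftLayer

client = SoftLayer.create_client_from_env(username="myuser", api_key="myapikey")

# Provision a virtual server instance for the workloads you move first.
vs_manager = SoftLayer.VSManager(client)
instance = vs_manager.create_instance(
    hostname="web01",
    domain="example.com",
    cpus=2,
    memory=4096,          # MB
    datacenter="dal09",
    os_code="UBUNTU_14_64",
    hourly=True,
)
print("Provisioning virtual server:", instance["id"])

# The same account (and the same private network) also holds bare metal
# servers, so heavier workloads can be shifted over when the time comes.
hw_manager = SoftLayer.HardwareManager(client)
for server in hw_manager.list_hardware():
    print("Bare metal server:", server.get("hostname"))
```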

The real-world applications are pretty obvious: Your company is considering moving part of a workload to the cloud in order to handle peak-season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Compounding the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to transition slowly to a scalable, flexible cloud environment without sacrificing performance or the investments you've already made. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will shorten your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

September 24, 2012

Cloud Computing is not a 'Thing' ... It's a way of Doing Things.

I like to think that we are beyond 'defining' cloud, but what I find in reality is that we still argue over basics. I have conversations in which people still delineate things like "hosting" from "cloud computing" based on degrees of single-tenancy. Now I'm a stickler for definitions just like the next pedantic software-religious guy, but when it comes to arguing minutiae about cloud computing, it's easy to lose the forest for the trees. Instead of discussing underlying infrastructure and comparing hypervisors, we'll look at two well-cited definitions of cloud computing that may help us unify our understanding of the model.

I use the word "model" intentionally there because it's important to note that cloud computing is not a "thing" or a "product." It's a way of doing business. It's an operations model that is changing the fundamental economics of writing and deploying software applications. It's not about a strict definition of some underlying service provider architecture or whether multi-tenancy is at the data center edge, the server or the core. It's about enabling new technology to be tested and fail or succeed in blazing calendar time and being able to support super-fast growth and scale with little planning. Let's try to keep that in mind as we look at how NIST and Gartner define cloud computing.

The National Institute of Standards and Technology (NIST) is a government organization that develops standards, guidelines and minimum requirements as needed by industry or government programs. Given the confusion in the marketplace, there's a huge "need" for a simple, consistent definition of cloud computing, so NIST had a pretty high-profile topic on its hands. Their resulting Cloud Computing Definition describes five essential characteristics of cloud computing, three service models, and four deployment models. Let's table the service models and deployment models for now and look at the five essential characteristics of cloud computing. I'll summarize them here; follow the link if you want more context or detail on these points:

  • On-Demand Self Service: A user can automatically provision compute without human interaction.
  • Broad Network Access: Capabilities are available over the network.
  • Resource Pooling: Computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned.
  • Rapid Elasticity: Capabilities can be elastically provisioned and released.
  • Measured Service: Resource usage can be monitored, controlled and reported.

The characteristics NIST uses to define cloud computing are pretty straightforward, but they are still a little ambiguous: How quickly does an environment have to be provisioned for it to be considered "on-demand?" If "broad network access" could just mean "connected to the Internet," why include that as a characteristic? When it comes to "measured service," how granular does the resource monitoring and control need to be for something to be considered "cloud computing?" A year? A minute? These characteristics cast a broad net, and we can build on that foundation as we set out to create a more focused definition.

For our next stop, let's look at Gartner's view: "A style of computing in which scalable and elastic IT-enabled capabilities are delivered as a service using Internet infrastructure." From a philosophical perspective, I love their use of "style" when talking about cloud computing. Little differentiates the underlying IT capabilities of cloud computing from other types of computing, so when looking at cloud computing, we really just see a variation on how those capabilities are being leveraged. It's important to note that Gartner's definition includes "elastic" alongside "scalable" ... Cloud computing gets the most press for being able to scale remarkably, but the flip-side of that expansion is that it also needs to contract on-demand.

All of this describes a way of deploying compute power that is completely different from the way we've done it for the decades we've been writing software. It used to take months to get funding and order the hardware to deploy an application. That's a lot of time and risk that startups and enterprises alike can now erase from their business plans.

How do we wrap all of those characteristics up into a unified definition of cloud computing? The way I look at it, cloud computing is an operations model that yields seemingly unlimited compute power when you need it. It enables (scalable and elastic) capacity as you need it, and that capacity's pricing is based on consumption. That doesn't mean a provider should charge by the compute cycle, generator fan RPM or some other arcane measurement of usage ... It means that a customer should understand the resources that are being invoiced, and he/she should have the power to change those resources as needed. A cloud computing environment has to have self-service provisioning that doesn't require manual intervention from the provider, and I'd even push that requirement a little further: A cloud computing environment should have API accessibility so a customer doesn't even have to manually intervene in the provisioning process (the customer's app could use automated logic and API calls to scale infrastructure up or down based on resource usage).
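
To make that last point concrete, here is a hypothetical Python sketch of such automated scaling logic. The monitoring and provisioning calls (get_average_cpu, add_server, remove_server) are stand-ins for whatever API a given provider exposes; they are assumptions for illustration, not a specific vendor's interface:

```python
# A hypothetical sketch of an automated scaling loop driven by API calls.
# get_average_cpu(), add_server(), and remove_server() stand in for a
# provider's monitoring and provisioning APIs; they are not a real vendor SDK.
import time

SCALE_UP_THRESHOLD = 80    # percent CPU
SCALE_DOWN_THRESHOLD = 20  # percent CPU
CHECK_INTERVAL = 300       # seconds between checks

def autoscale_loop(get_average_cpu, add_server, remove_server, min_servers, servers):
    """Grow or shrink the pool of servers based on measured usage."""
    while True:
        cpu = get_average_cpu(servers)
        if cpu > SCALE_UP_THRESHOLD:
            servers.append(add_server())          # elastic expansion
        elif cpu < SCALE_DOWN_THRESHOLD and len(servers) > min_servers:
            remove_server(servers.pop())          # elastic contraction
        time.sleep(CHECK_INTERVAL)
```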

I had the opportunity to speak at Cloud Connect Chicago, and I shared SoftLayer's approach to cloud computing and how it has evolved into a few distinct products that speak directly to our customers' needs:

The session was about 45 minutes, so the video above has been slimmed down a bit for easier consumption. If you're interested in seeing the full session and getting into a little more detail, we've uploaded an un-cut version here.

-Duke
