Cloud Posts

July 14, 2014

London Just Got Cloudier—LON02 is LIVE!

Summer at SoftLayer is off to a great start. As of today, customers can order SoftLayer servers in our new London data center! This facility is SoftLayer's second data center in Europe (joining Amsterdam in the region), and it's one of the most anticipated facilities we've ever opened.

London is the second SoftLayer data center to go live this year, following last month's data center launch in Hong Kong. In January, IBM committed to investing $1.2 billion to expand our cloud footprint, and it's been humbling and thrilling at the same time to prepare for all of this growth. And this is just the beginning.

When it comes to the Europe, Middle East, and Africa (EMEA) region, SoftLayer's largest customer base is in the U.K. For the last two and a half years, I've been visiting London quite frequently, and I've met hundreds of customers who are ecstatic to finally have a SoftLayer data center in their own backyard. That's why I'm especially excited about this launch: those customers now get our global platform with a local address.

The SoftLayer Network

Customers with location-sensitive workloads can have their data reside within the U.K. Customers with infrastructure in Amsterdam can use London to add in-region redundancy to their environments. And businesses that target London's hyper-competitive markets can deliver unbelievable performance to their users. LON02 is fully integrated with the entire SoftLayer platform, so bare metal and virtual servers in the new data center are seamlessly connected to servers in every other SoftLayer data center around the world. In practice, that means you can replicate or integrate data between servers in the London and Amsterdam data centers at stunning transfer speeds, for free. You can run your databases on bare metal in London, keep backups in Amsterdam, and spin up virtual servers in Asia and the U.S. And your end users get consistent, reliable performance, as though the servers were in the same rack. Try beating that!

London is a vibrant, dynamic, and invigorating city. It's consistently voted one of the best places for business in the region. It's considered a springboard for Europe, attracting more foreign investors than any other location in the region. A third of the world's largest companies are headquartered in London, and with our new data center, we're able to serve them even more directly. London is also the biggest tech hub in the region and the biggest incubator for technology startups and entrepreneurs in Europe. These cloud-native organizations have been pushing the frontiers of technology, building their businesses on our Internet-scale platform for years, so we're giving them an even bigger sandbox to play in. My colleagues from Catalyst, our startup program, have established solid partnerships with organizations such as Techstars, Seedcamp, and Wayra UK, so (as you can imagine) this news is already making waves in the U.K. startup universe.

For me, London will always be the European capital of marketing and advertising (and a strong contender for the top spot in the global market). In fact, two-thirds of international advertising agencies have their European headquarters in London, and the city boasts a higher density of creative firms than any other city or region in the world. Because digital marketing and advertising use cases are some of the most demanding technological workloads, we're focused on meeting the needs of this market. These customers require speed, performance, and global reach, and we deliver. Can you imagine RTB (real-time bidding) with network lag? An ad pool for multinationals that is accessible in one region, but not so much in another? A live HD digital broadcast running on shared, low-I/O machines? Or 3D graphics rendering in a purely virtualized environment? Just thinking about those scenarios makes me cringe, and it reinforces my excitement for our new data center in London.

MobFox, a customer who happens to be the largest mobile ad platform in Europe and among the top five globally, shares my enthusiasm. MobFox serves more than 150 billion ad impressions per month for clients including Nike, Heineken, EA, eBay, BMW, Netflix, Expedia, and McDonald's (for comparison, I'm told Twitter handles about 7 billion a month). Julian Zehetmayr, the brilliant 23-year-old CEO of MobFox, agreed that London is a key location for businesses operating in the digital advertising space and expressed his excitement about the opportunity we're bringing his company.

I could go on and on about why this news is soooo good. But instead, I'll let you experience it yourself. Order bare metal or virtual servers in London, and save $500 on your first month's service.

Celebrate a cloudy summer in London!

-Michalina

July 1, 2014

The Cloud in 100 Years

Today’s cloud is still in its infancy, with less than 10 years under its belt, yet it has produced some of the most advanced products and solutions known to date. Cloud, in fact, has helped change how the world connects by making information, current events, and communication available globally, at the speed of light.

The Internet itself was born in the 1960s, and in just 44 years, look at what it has accomplished! Websites like Google, Bing, and Yahoo provide up-to-the-second information that is reinventing and replacing the role dictionaries and encyclopedias once played. Facebook, Twitter, and Instagram are revolutionizing how most of the world communicates. WordPress, Tumblr, and blogging platforms give voices to many journalists and writers who were once heard by few, if any. It is truly a new landscape today. Do you think that when Herman Hollerith invented the punch card in the 1890s, he imagined it would evolve data processing into "the cloud" in just over 100 years? IBM 100 explains:

One could argue that the information age began with the punch card, and that data processing as a transformational technology began with its 1928 redesign by IBM. This thin piece of cardboard, with 80 columns of tiny rectangular holes, made the world quantifiable. It allowed data to be recorded, stored, and analyzed. For nearly 50 years, it remained the primary vehicle for processing the essential facts and figures that comprised countless industries, in every corner of the globe. (IBM 100)

What about the future?

It's obvious that predicting 10 decades into the future is a difficult task, but one thing is for sure: this cloud thing is just getting started.

  • What will we call it? The Internet/World Wide Web is now almost synonymous with the term cloud. I predict that in the next 20 years it will take on another name. Something even more nebulous than the cloud … maybe even “The Nebula.” Or … quite possibly, Skynet!
  • How will it be accessed? In 100 years, I think the more fitting question will be, “how will you hide from it?” Today, we are voluntarily connected with our smartphones. You can be found and contacted through a variety of mediums from a single, handheld device. FaceTime, WhatsApp, Skype, Tango … you name it. You can make video calls to people halfway around the world in seconds. If Moore’s law still applies, the next 100 years would bring roughly 50 more doublings, so our devices could be unimaginably smaller and more powerful than they are today.
  • Ultimate Control: Nanotechnology will have the ability to control the weather, not only determining whether we get rain but regulating how much. Weather control could rid the world of drought and make uninhabitable areas of the world flourish.
  • Medicine: The term “antibiotics” will take on a whole new meaning for medicine in 100 years. Imagine instead of getting a shot of penicillin, you receive 50 mL of microscopic robots that attack the virus directly, from within. The robots then send a push notification to your ‘iPhone 47S’ notifying you that your flu bug has been located and eradicated and that you can press “OK” to send the final report to your physician. The Magic School Bus finally becomes a reality!

Without a doubt, cloud services will be everywhere in the future. The change is already taking place with early adopters and businesses. In the 10 years since the industry coined the term "cloud," it has become a birthplace for disruptive technologies and business models. This has caught the attention of traditional IT organizations as a way to save capital, shorten time to market, and increase research and development on their own products and services.

SoftLayer is dedicated to helping with the transformation of mid-market and enterprise companies alike. We understand that the cloud is virtually making the world smaller as companies expand into markets that were once out of reach, which is why we're in the process of doubling our data center footprint to serve those unreachable areas of the world. Don't be surprised when we announce our first data center on the moon!

-Harold

June 30, 2014

OpenNebula 4.8: SoftLayer Integration

In the next month, the team of talented developers at C12G Labs will be rolling out OpenNebula 4.8, and in that release, they will be adding integration with SoftLayer! If you aren't familiar with OpenNebula, it's a full-featured open-source platform designed to bring simplicity to managing private and hybrid cloud environments. Combining existing virtualization technologies with advanced features for multi-tenancy, automatic provisioning, and elasticity, OpenNebula is built to meet the real needs of sysadmins and devops.

In OpenNebula 4.8, users can quickly and seamlessly provision and manage SoftLayer cloud infrastructure through OpenNebula's simple, flexible interface. From a single pane of glass, you can create virtual data center environments, configure and adjust cloud resources, and automate the execution and scaling of multi-tiered applications. If you don't want to leave the command line, you can access the same functionality from a powerful CLI tool or through the OpenNebula API.
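
If you're curious what driving the OpenNebula API looks like, here's a minimal sketch that allocates a VM through OpenNebula's XML-RPC interface. The endpoint, credentials, and template values below are placeholders, and the SoftLayer-specific attributes in 4.8 may differ, so treat it as an illustration rather than a recipe:

    # Minimal sketch: allocating a VM via OpenNebula's XML-RPC API.
    # Endpoint, credentials, and template values are placeholders.
    import xmlrpc.client

    ONE_ENDPOINT = "http://opennebula.example.com:2633/RPC2"  # hypothetical host
    SESSION = "oneadmin:password"  # OpenNebula session string: "user:password"

    TEMPLATE = """
    NAME   = "web01"
    CPU    = 1
    MEMORY = 1024
    """

    server = xmlrpc.client.ServerProxy(ONE_ENDPOINT)
    # one.vm.allocate returns [success, vm_id_or_error_message, error_code]
    response = server.one.vm.allocate(SESSION, TEMPLATE, False)
    success, result = response[0], response[1]
    if success:
        print("Created VM with ID", result)
    else:
        print("Allocation failed:", result)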

When the C12G Labs team approached us with the opportunity to be featured in the next release of their platform, several folks from the office were happy to contribute their time to make the integration as seamless as possible. Some of our largest customers have already begun using OpenNebula to manage their hybrid cloud environments, so official support for the SoftLayer cloud in OpenNebula is a huge benefit to them (and to us). The result of this collaboration will be released under the Apache license, and as such, it will be freely available to the public.

To give you an idea of how easy OpenNebula is to use, they created an animated GIF to show the process of creating and powering down virtual machines, creating a server image, and managing account settings:

[Animated GIF: OpenNebula management demo]

We'd like to give a big shout-out to the C12G Labs team for all of the great work they've done on the newest version of OpenNebula, and we look forward to seeing how the platform continues to grow and improve in the future.

-@khazard

April 29, 2014

The Media Industry is Making the Move to Cloud

Rumor has it that the entire rendering of James Cameron’s “Avatar” using 3DFusion required more than 1 petabyte of storage space. This is equivalent to 500 hard drives of 2 terabytes each, or a 32-year-long MP3 file! The computing power behind this would consist of about 34 racks, each with 4 chassis containing 32 machines. All of that adds up to roughly 40,000 processors and 104 terabytes of RAM.

High-res, long-form media files that can reach hundreds of gigabytes in size are regular phenomena in the media industry. Whether it’s making the next “Avatar” or creating the next big viral ad campaign, technology is fundamental to this business. But the investment required to set it up is enough to boggle the mind and dissuade even high risk-takers. So, why buy when you can rent?

Cloud allows you to rent, use, and return infrastructure, rather than owning it, with no capex. That gives users access to virtually unlimited compute power, including servers, network, storage, firewalls, and ancillary services, all available on demand with pay-as-you-go billing offered hourly or monthly.

Cloud services are an increasingly viable avenue for the industry to leverage in supporting the performance needs of online media storage and collaboration environments. A customizable approach to the cloud supports use cases such as digital archives, production support, broadcast facility resiliency, high-intensity processing, and derivative manufacturing for transcoding and encryption. An on-demand, scalable infrastructure is the next step toward reducing production and operations costs, simplifying data access, and delivering content to end users faster.

This year at ad:tech asean, SoftLayer will present on how the media industry is utilizing cloud infrastructure. So, I thought this would be a good opportunity to share some interesting customer stories about media companies at the top of their games and successfully growing their businesses on the cloud. Here are two of those stories.

The Loft Group, an Australian creative digital agency, specializes in creating e-learning campaigns for global brands. The company won a contract with cosmetics giant L’Oreal but realized that, in order to go big with their platform, they needed technology that provided their support team with the necessary analytics. The Loft Group selected SoftLayer as the cloud platform for its digital e-learning campaigns. Moving their services to the cloud helped the company achieve global scale and consistent performance across multiple countries, and grow at a pace that slashed a three- to five-year transformation timeline down to just months.

According to eMarketer’s forecast, global e-commerce sales will top $1.2 trillion by 2016, and that growth is projected to continue at 20 percent every year. Ad personalization is playing a larger part in maximizing e-commerce business. To keep up with the demands of real-time ad personalization, companies like Struq, an ad personalization platform, require an infrastructure that can process high volumes at high speeds.

Struq offers highly targeted ad campaigns across a range of promotional platforms. The company often handles more than 2 terabytes of raw event data every day, processing more than 95 percent of requests in under 30 milliseconds. And when the company’s growing European customer base demanded immediate server allocation, Struq turned to SoftLayer for scalability. We were able to offer on-demand provisioning as well as the low latency their customers required. A detailed story of how Struq achieved the requisite scalability and success with SoftLayer is available here.

More stories to come, so stay tuned! In the meantime, you can hear more customer stories during the first leg of ad:tech asean, a preliminary roadshow in Jakarta, Kuala Lumpur, and Bangkok.

-@namrata_kapur

March 7, 2014

Why the Cloud Scares Traditional IT

My background is "traditional IT." I've been architecting and promoting enterprise virtualization solutions since 2002, and over the past few years, public and hybrid cloud solutions have become a serious topic of discussion ... and in many cases, contention. The customers who gasped with excitement when VMware rolled out a new feature for their on-premises virtualized environments would dismiss any recommendations of taking a public cloud or a hybrid cloud approach. Off-premises cloud environments were surrounded by marketing hype, and the IT departments considering them had legitimate concerns, especially around security and compliance.

I completely understood their concerns, and until recently, I often agreed with them. The cloud model is intimidating. If you've had control over every aspect of your IT environment for a few decades, you don't want to give up access to your infrastructure, much less have to trust another company to protect your business-critical information. But now, I think about those concerns as the start of a conversation about cloud, rather than a "no-go" zone. The cloud is different, but a company's approach to it should still be the same.

What do I mean by that? Enterprise developers and engineers still have to serve as architects to determine the functional and operational requirements for their services. In that process, they need to determine the suitability of a given platform for the computing workload and the company's business objectives and core competencies. Unfortunately, many IT decision-makers don't consider the bigger business context, and they choose to build their own "public" IaaS offerings to accommodate internal workloads and, in many cases, their own external clients.

This approach might make sense for service providers, integrators, and telcos, because infrastructure resources are core components of their businesses, but I've seen the same thing happen at financial institutions, rental companies, and even an airline. Over time, internal IT departments carved out infrastructure-services revenue streams that are totally unrelated to the company's core business. The success of enterprise virtualization often empowered IT departments through cost savings and automation, making the promise of delivering public cloud "in-house" a natural extension and a seemingly attractive proposition. Reshaping their perspectives around information security and compliance in that way is often a functional approach, but is it money well spent?

Instead of spending hundreds of thousands or millions of dollars in capital to build out (often commoditized) infrastructure, these businesses could be investing those resources in developing and marketing their core business areas. To give you an example of how a traditional IT task is performed in the cloud, I can share my experience from when I first accessed my SoftLayer account: I deployed a physical ESX host alongside a virtual compute instance, fully pre-configured with OS and vCenter, and I connected it via VPN to my existing (on-prem) vCenter environment. In the old model, that process would probably have taken a couple of days to complete; I got it done in three hours.
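
To make the self-service side of that story concrete, here's a rough sketch of ordering a single virtual server with the SoftLayer Python client. The credentials and parameter values are placeholders, and it covers only the ordering step, not the ESX and vCenter wiring from my anecdote:

    # Rough sketch: ordering a virtual server with the SoftLayer Python client.
    # Credentials and parameter values are placeholders.
    import SoftLayer

    client = SoftLayer.create_client_from_env(
        username="SL_USERNAME", api_key="SL_API_KEY")
    vs = SoftLayer.VSManager(client)

    instance = vs.create_instance(
        hostname="vcenter01",        # hypothetical names and sizes
        domain="example.com",
        cpus=4,
        memory=8192,                 # in MB
        datacenter="ams01",          # Amsterdam
        os_code="WIN_2012-STD_64",   # illustrative OS code
        hourly=True)
    print("Provisioning started, instance ID:", instance["id"])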

Now more than ever, it is the responsibility of the core business line to validate internal IT strategies and evaluate alternatives. Public cloud is not always the right answer for all workloads, but driven by the rapidly evolving maturity and proliferation of IaaS, PaaS and SaaS offerings, most organizations will see significant benefits from it. Ultimately, the best way to understand the potential value is just to give it a try.

-Andy

Andreas Groth is an IBM worldwide channel solutions architect, focusing primarily on SoftLayer. Follow him on Twitter: @andreasgroth

February 6, 2014

Building a Bridge to the OpenStack API

OpenStack is experiencing explosive growth in the cloud market. With more than 200 companies contributing code to the source and new installations coming online every day, OpenStack is pushing hard to become a global standard for cloud computing. Dozens of useful tools and software products have been developed using the OpenStack API, so a growing community of administrators, developers and IT organizations have access to easy-to-use, powerful cloud resources. This kind of OpenStack integration is great for users on a full OpenStack cloud, but it introduces a challenge to providers and users on other cloud platforms: Should we consider deploying or moving to an OpenStack environment to take advantage of these tools?

If a cloud provider spends years developing a unique platform with a proprietary API, implementing native support for the OpenStack API or deploying a full OpenStack solution may be cost prohibitive, even with significant customer and market demand. The provider can either bite the bullet and implement OpenStack compatibility, hope that a third-party library like libcloud or fog is updated to support its API, or go it alone and develop an ecosystem of products around its own API.

Introducing Jumpgate

When we were faced with this situation at SoftLayer, we chose a fourth option. We wanted to make the process of creating an OpenStack-compatible API simpler and more modular. That's where Jumpgate was born. Jumpgate is middleware that acts as a compatibility layer between the OpenStack API and a provider's proprietary API. Externally, it exposes endpoints that adhere to OpenStack's published and accepted API specification, which it then translates into the provider's API using a series of drivers. Think of it as a mechanism for passing from one realm into another, like the jumpgates featured in science fiction.


How Jumpgate Works
Let's take a look at a high-level example: When you want to create a new virtual instance on OpenStack, you might use the Horizon dashboard or the Nova command line client. When you issue the request, the tool first makes a REST call to a Keystone endpoint for authentication, which returns an authorization token. The client then makes another REST call to a Nova endpoint, which manages the computing instances, to create the actual virtual instance. Nova may then make calls to other tools within the cluster for networking (Neutron, formerly Quantum), image information (Glance), block storage (Cinder), and more. In addition, your client may also send requests directly to some of these endpoints to query for status updates, information about available resources, and so on.

With Jumpgate, your tool first hits the Jumpgate middleware, which exposes a Keystone endpoint. Jumpgate takes the request, breaks it apart into its relevant pieces, then loads up your provider's appropriate API driver. Next, Jumpgate reformats your request into a form that the driver supports and sends it to the provider's API endpoint. Once the response comes back, Jumpgate again uses the driver to break apart the proprietary API response, reformats it into an OpenStack-compatible JSON payload, and sends it back to your client. The result is that you interact with an OpenStack-compatible API, and your cloud provider processes those interactions on their own backend infrastructure.
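
To make that flow concrete, here's a hedged sketch of the two client-side calls described above, written against generic Keystone v2.0 and Nova v2 endpoints. The URLs, credentials, and tenant name are placeholders; the point is that the client speaks plain OpenStack while Jumpgate does the translation behind the scenes:

    # Sketch of the client-side flow described above: authenticate against a
    # Keystone v2.0 endpoint, then call a Nova v2 endpoint with the token.
    # URLs, credentials, and the tenant name are placeholders.
    import requests

    KEYSTONE = "http://jumpgate.example.com:5000/v2.0"

    # 1. Authenticate. Jumpgate translates this into the provider's own auth.
    resp = requests.post(KEYSTONE + "/tokens", json={
        "auth": {
            "tenantName": "123456",  # hypothetical tenant/account name
            "passwordCredentials": {
                "username": "myuser",
                "password": "my-api-key"}}})
    access = resp.json()["access"]
    token = access["token"]["id"]
    tenant_id = access["token"]["tenant"]["id"]

    # 2. List servers through the Nova-compatible endpoint.
    NOVA = "http://jumpgate.example.com:8774/v2/" + tenant_id
    servers = requests.get(NOVA + "/servers",
                           headers={"X-Auth-Token": token}).json()
    for s in servers["servers"]:
        print(s["id"], s["name"])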

Internally, Jumpgate is a lightweight middleware built in Python using the Falcon framework. It provides endpoints for nearly every documented OpenStack API call and allows drivers to attach handlers to those endpoints. This modular approach allows providers to implement only the endpoints that are of the highest importance, rolling out OpenStack API compatibility in stages rather than in one monumental effort. Since it sits alongside the provider's existing API, Jumpgate provides a new API interface without risking the stability already provided by the existing API. It's a value-add service that increases customer satisfaction without a huge increase in cost. Once a full implementation is finished, a provider with a proprietary cloud platform can benefit from and offer all the tools that are developed to work with the OpenStack API.
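
For the curious, here's a toy sketch of that pattern in Falcon: an OpenStack-shaped route whose handler delegates to a provider driver. The driver, route, and field names are illustrative, not Jumpgate's actual internals:

    # Toy sketch of the Jumpgate pattern: an OpenStack-shaped endpoint whose
    # handler delegates to a provider-specific driver. Driver, route, and
    # field names are illustrative, not Jumpgate's actual internals.
    import json
    import falcon  # current releases use falcon.App; older ones used falcon.API


    class FakeProviderDriver:
        """Stands in for a client of the provider's proprietary API."""
        def list_instances(self):
            return [{"guid": "abc-123", "label": "web01"}]


    class ServersResource:
        def __init__(self, driver):
            self.driver = driver

        def on_get(self, req, resp, tenant_id):
            # Translate the proprietary response into Nova-style JSON.
            servers = [{"id": i["guid"], "name": i["label"]}
                       for i in self.driver.list_instances()]
            resp.text = json.dumps({"servers": servers})
            resp.content_type = "application/json"


    app = falcon.App()
    app.add_route("/v2/{tenant_id}/servers",
                  ServersResource(FakeProviderDriver()))
    # Serve with any WSGI server, e.g.: gunicorn mymodule:app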

Jumpgate allows providers to verify their drivers' OpenStack compatibility by leveraging the OpenStack Tempest test suite. With these tests, developers run the full suite of calls used by OpenStack itself, highlighting edge cases or gaps in functionality. We've even included a helper script that allows Tempest to run only a subset of tests rather than the entire suite, to assist with a staged rollout.

Current Development
Jumpgate is currently in an early alpha stage. We've built the compatibility framework itself and started on the SoftLayer drivers as a reference. So far, we've implemented key endpoints within Nova (computing instances), Keystone (identity and authorization), and Glance (image management) to get most of the basic functionality within Horizon (the web dashboard) working. We've heard that several groups outside SoftLayer are successfully using Jumpgate to drive products like Trove and Heat directly on SoftLayer, which is exciting and shows that we're well beyond the "proof of concept" stage. That being said, there's still a lot of work to be done.

We chose to develop Jumpgate in the open with a tool set that would be familiar to developers working with OpenStack. We're excited to debut this project for the broader OpenStack community, and we're accepting pull requests if you're interested in contributing. Making more clouds compatible with the OpenStack API is important and shouldn't be an individual undertaking. If you're interested in learning more or contributing, head over to our in-flight project page on GitHub: SoftLayer Jumpgate. There, you'll find everything you need to get started, along with updates to the repository. We encourage everyone to contribute code or drivers ... or even just open issues with feature requests. The more community involvement we get, the better.

-Nathan

February 3, 2014

Risk Management: 5 Tips for Managing Risk in the Cloud

Security breaches have made front-page news in recent months. With stories about Target, Neiman Marcus, Yahoo! and GoDaddy in the headlines, the importance of good information security practices is becoming harder and harder to ignore, even for smaller businesses. Moving your business into the cloud offers a plethora of benefits; however, those benefits do not come without challenges. The cloud introduces risks such as multi-tenancy, so it's important to identify and manage those risks properly.

1. Know the Security Your Provider Offers
While some SaaS providers may have security baked in, most IaaS providers (including SoftLayer) leave much of the logical security responsibility for a customer's systems to the customer. For the security measures that an infrastructure provider does handle, the provider should be able to deliver documentation attesting to those controls. We perform an annual SOC 2 audit, so we can attest to the status of our security and availability controls as a service organization, and our customers use controls from our report as part of their own compliance requirements. Knowing a provider's security controls (and seeing proof of them) gives business owners and Chief Information Security Officers (CISOs) peace of mind that they can properly plan their control activities to better prevent or respond to a breach.

2. Use the Cloud to Distribute and Replicate Your Presence
The incredible scalability and geographical distribution of operating in the cloud can yield some surprising payoffs. Experts in the security industry are leveraging the cloud to reduce their patch cycles to days, not weeks or months. Most cloud providers have multiple sites, so you can spread your presence nationally or even globally. With this kind of infrastructure footprint, businesses can replicate failover systems and accommodate regional demand across multiple facilities with minimal incremental investment (and with nearly identical security controls).

3. Go Back to the Basics
Configuration management. Asset management. Separation of duties. Strong passwords. Many organizations get so distracted by the big picture of their security measures that they fail to get these basics right. Take advantage of your provider's tools to assist with the ‘mundane’ tasks that are vitally important to your business's overall security posture. For example, you can use image templates or post-provisioning scripts to deploy a standard baseline configuration to your systems, then track them down to the specific server room. You'll know what hardware is in your server at all times, and if you're using SoftLayer, you can even drill down to the serial numbers of your hard drives.
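
As a hedged illustration of that last point, here's a rough sketch of pulling those details with the SoftLayer Python client. The server ID is a placeholder, and the exact object mask shape may differ from what your account returns:

    # Hedged sketch: pulling hardware details, down to drive serial numbers,
    # with the SoftLayer Python client. The server ID is a placeholder, and
    # the object mask shape is illustrative.
    import SoftLayer

    client = SoftLayer.create_client_from_env(
        username="SL_USERNAME", api_key="SL_API_KEY")

    hw = client.call("Hardware_Server", "getObject",
                     id=12345,  # placeholder server ID
                     mask="mask[id,hostname,hardDrives]")
    for drive in hw.get("hardDrives", []):
        print(drive.get("serialNumber"))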

4. Have Sound Incident Response Plans
The industry is becoming increasingly cognizant of the fact that it’s not a matter of if, but when, a security threat will present itself. Even in environments with exceedingly high levels of baked-in security, most of the recent breaches resulted from a compromised employee. Be prepared to respond to security incidents with confidence. While you may be physically distant from your systems, you should still be able to meet defined Recovery Time Objectives (RTOs) for your services.

5. Maintain Constant Contact with Your Cloud Provider
Things happen. No amount of planning can prevent every incident, whether it's a natural disaster or a determined attacker. Know that your hosting provider has your back when things take an unexpected turn.

With proper planning and good practice, the cloud isn't as risky and frightening as most think. If you're interested in learning a little more about the best practices around security in the cloud, check out the Cloud Security Alliance (CSA). The CSA provides a wealth of knowledge to assist business owners and security professionals alike. Build on the strengths, compensate for the weaknesses, and you and your CISO will be able to sleep at night (and maybe even sneak in a beer after work).

-Matt

January 31, 2014

Simplified OpenStack Deployment on SoftLayer

"What is SoftLayer doing with OpenStack?" I can't even begin to count the number of times I've been asked that question over the last few years. In response, I'll usually explain how we've built our object storage platform on top of OpenStack Swift, or I'll give a few examples of how our customers have used SoftLayer infrastructure to build and scale their own OpenStack environments. Our virtual and bare metal cloud servers provide a powerful and flexible foundation for any OpenStack deployment, and our unique three-tiered network integrates perfectly with OpenStack's Compute and Network node architecture, so it's high time we make it easier to build an OpenStack environment on SoftLayer infrastructure.

To streamline and simplify OpenStack deployment for the open source community, we've published Opscode Chef recipes for both OpenStack Grizzly and OpenStack Havana on GitHub: SoftLayer Chef-Openstack. With Chef and SoftLayer, your own OpenStack cloud is a cookbook away. These recipes were designed with the needs of growth and scalability in mind. Let's take a deeper look into what exactly that means.

OpenStack has adopted a three-node design whereby a controller, compute, and network node make up its architecture:

[Diagram: OpenStack three-node architecture on SoftLayer]

Looking more closely at any one node reveals the services it provides. Scaling the infrastructure beyond a few dozen nodes using this model could create bottlenecks in services such as the block store (OpenStack Cinder) and the image store (OpenStack Glance), since they are traditionally located on the controller node. Infrastructure requirements also change from service to service. For example, OpenStack Neutron, the networking service, does not need much disk I/O, while the Cinder storage service might rely heavily on a node's hard disk. Our cookbook allows you to choose how and where to deploy each service, and it even lets you break apart the MySQL backend to further improve platform performance.

Quick Start: Local Demo Environment

To make it easy to get started, we've created a rapid prototyping and sandbox script for use with Vagrant and VirtualBox. With Vagrant, you can easily spin up a demo environment of Chef Server and OpenStack in about 15 minutes on a moderately powerful laptop or desktop. Check it out here. This demo environment is an all-in-one installation of our Chef OpenStack deployment. It also installs a basic Chef server as a sandbox to help you see how the SoftLayer recipes are deployed.

Creating a Custom OpenStack Deployment

The three-node OpenStack model does well at small scale and meets the needs of many consumers; however, control and customizability are the tenets of the SoftLayer OpenStack Chef cookbook's design. In our model, you have full control over the configuration and location of eleven different components in your deployed environment.

Our Chef recipes will take care of populating the configuration files with the necessary information, so you won't have to. When deploying, you merely add the role for the matching service to a hardware or virtual server node, and Chef will deploy the service to it with all the configuration done automatically, including adding multiple Neutron, Nova, and Cinder nodes. This approach allows you to tailor each service to the hardware it will be deployed to: you might put your Neutron node on a server with 10-gigabit network interfaces and configure your Cinder node with RAID 1+0 15K SAS drives.
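
As a hedged illustration of that role-assignment step, here's a minimal sketch using the third-party PyChef library. The node and role names are hypothetical; check the cookbook's documentation for the real role names:

    # Hedged sketch: assigning one of the cookbook's service roles to a node
    # using the third-party PyChef library. Node and role names are
    # hypothetical; check the cookbook's README for the real role names.
    from chef import autoconfigure, Node

    api = autoconfigure()  # reads your local knife/chef configuration
    node = Node("compute01.example.com")
    node.run_list.append("role[os-compute-worker]")  # hypothetical role name
    node.save()
    print(node.run_list)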

OpenStack is a fast-growing project for implementing IaaS in public and private clouds, but its deployment and configuration can be overwhelming. We created this cookbook to make the process of deploying a full OpenStack environment on SoftLayer quick and straightforward. With the simple configuration of eleven Chef roles, your OpenStack cloud can be deployed onto as few as one node and scaled up to as many as hundreds (or thousands).

To follow this project, visit SoftLayer on GitHub. Check out some of our other projects on GitHub, and let us know if you need any help or want to contribute.

-@marcalanjones

January 29, 2014

Get Your Pulse Racing

What will the future bring for SoftLayer and IBM? Over the past six months, you've probably asked that question more than a few times, and the answer you got may have been incomplete. You know that IBM is supercharging SoftLayer expansion and that our platform will be the foundation for IBM's most popular enterprise cloud products and services, but you've really only seen a glimpse of the big picture. At IBM Pulse, you'll get a much better view.

SoftLayer is no stranger to conferences and events. Last year alone, we were involved in around 70 different trade shows, and that number doesn't include the dozens of meetups, events, and parties we participated in without an official booth presence. It's pretty safe to say that Pulse is more important to us than any of the shows we've attended in the past. Why? Because Pulse is the first major conference where SoftLayer will be in the spotlight.

As a major component in IBM's cloud strategy, it's safe to assume that every attendee at IBM's "Premier Cloud Conference" will hear all about SoftLayer's platform and capabilities. We'll have the Server Challenge on the expo hall floor, we're going to play a huge part in connecting with developers at dev@Pulse, a number of SLayers are slated to lead technical sessions, and Wednesday's general session will be presented by our CEO, Lance Crosby.

If you're interested in what's next for IBM in the cloud, join us at Pulse 2014. SoftLayer customers are eligible for a significant discount on registration for the full conference, so if you need details on how to sign up, leave a comment on this blog or contact a SoftLayer sales rep, and we'll make sure you get all the information you need. To make it easier for first-time attendees to experience Pulse, IBM offers a special Pulse Peek pass that will get you into the general sessions and expo hall for free!

If you're a developer, we need to see you at dev@Pulse. Happening in parallel with the main Pulse show, dev@Pulse is focused on helping attendees design, develop, and deploy the next generation of cloud-based systems and applications. In addition to the lightning talks, hands-on labs, free certification testing, and code jam competition, you'll get to try out the Oculus Rift, meet a ton of brilliant people, and party with Elvis Costello and Fall Out Boy. The cost? A whopping $0.

Whether you're chairman of the board or a front-line application developer, you'll get a lot out of IBM Pulse. What happens in Vegas ... could change the way you do business. (Note: The parties, however, will stay in Vegas.)

-@khazard

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, more and more businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages a different pair of heterogeneous elements: bare metal and virtual server instances, both delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)


Figure 2: SoftLayer's Hybrid - Dedicated + Virtual


Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the two uses of the term are more similar than they first appear. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads on on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx to an OpEx model. In the past, adopting infrastructure as a service (IaaS) meant shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to the cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Compounding the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to transition slowly to a scalable, flexible cloud environment without sacrifice. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, and you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.
