Posts Tagged 'Cloud'

February 6, 2014

Building a Bridge to the OpenStack API

OpenStack is experiencing explosive growth in the cloud market. With more than 200 companies contributing code to the project and new installations coming online every day, OpenStack is pushing hard to become a global standard for cloud computing. Dozens of useful tools and software products have been developed using the OpenStack API, so a growing community of administrators, developers and IT organizations has access to easy-to-use, powerful cloud resources. This kind of OpenStack integration is great for users on a full OpenStack cloud, but it introduces a challenge to providers and users on other cloud platforms: Should we consider deploying or moving to an OpenStack environment to take advantage of these tools?

If a cloud provider spends years developing a unique platform with a proprietary API, implementing native support for the OpenStack API or deploying a full OpenStack solution may be cost prohibitive, even with significant customer and market demand. The provider can either bite the bullet and implement OpenStack compatibility, hope that a third-party library like libcloud or fog is updated to support its API, or choose to go it alone and develop an ecosystem of products around its own API.

Introducing Jumpgate

When we were faced with this situation at SoftLayer, we chose a fourth option. We wanted to make the process of creating an OpenStack-compatible API simpler and more modular. That's how Jumpgate was born. Jumpgate is middleware that acts as a compatibility layer between the OpenStack API and a provider's proprietary API. Externally, it exposes endpoints that adhere to OpenStack's published and accepted API specification, which it then translates into the provider's API using a series of drivers. Think of it as a mechanism to enable passing from one realm/space into another — like the jumpgates featured in science fiction works.


How Jumpgate Works
Let's take a look at a high-level example: When you want to create a new virtual instance on OpenStack, you might use the Horizon dashboard or the Nova command line client. When you issue the request, the tool first makes a REST call to a Keystone endpoint for authentication, which returns an authorization token. The client then makes another REST call to a Nova endpoint, which manages the computing instances, to create the actual virtual instance. Nova may then make calls to other tools within the cluster for networking (Quantum), image information (Glance), block storage (Cinder), or more. In addition, your client may also send requests directly to some of these endpoints to query for status updates, information about available resources, and so on.

With Jumpgate, your tool first hits the Jumpgate middleware, which exposes a Keystone endpoint. Jumpgate takes the request, breaks it apart into its relevant pieces, then loads up your provider's appropriate API driver. Next, Jumpgate reformats your request into a form that the driver supports and sends it to the provider's API endpoint. Once the response comes back, Jumpgate again uses the driver to break apart the proprietary API response, reformats it into an OpenStack compatible JSON payload, and sends it back to your client. The result is that you interact with an OpenStack-compatible API, and your cloud provider processes those interactions on their own backend infrastructure.
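
To make that flow concrete, here's a minimal sketch of the same round trip from the client's point of view, written in Python against the standard OpenStack Identity v2.0 and Compute v2 REST calls. The endpoint URL, credentials and tenant name are placeholders for illustration, not part of the project:

    import json
    import requests

    # Hypothetical Jumpgate deployment exposing a Keystone-compatible endpoint.
    KEYSTONE_URL = 'http://jumpgate.example.com:5000/v2.0'

    # 1. Authenticate; Jumpgate translates this into the provider's own auth call.
    payload = {'auth': {'passwordCredentials': {'username': 'demo',
                                                'password': 'secret'},
                        'tenantName': 'demo'}}
    resp = requests.post(KEYSTONE_URL + '/tokens',
                         data=json.dumps(payload),
                         headers={'Content-Type': 'application/json'})
    access = resp.json()['access']
    token = access['token']['id']

    # 2. Pull the compute (Nova) endpoint out of the returned service catalog.
    compute = next(s for s in access['serviceCatalog'] if s['type'] == 'compute')
    nova_url = compute['endpoints'][0]['publicURL']

    # 3. List servers; behind the scenes, the driver calls the provider's API
    #    and reformats the result as an OpenStack-compatible JSON payload.
    servers = requests.get(nova_url + '/servers',
                           headers={'X-Auth-Token': token}).json()
    for server in servers['servers']:
        print(server['id'], server['name'])

The client never knows (or cares) that a proprietary API is doing the real work a layer below.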

Internally, Jumpgate is a lightweight middleware built in Python using the Falcon Framework. It provides endpoints for nearly every documented OpenStack API call and allows drivers to attach handlers to these endpoints. This modular approach allows providers to implement only the endpoints that are of the highest importance, rolling out OpenStack API compatibility in stages rather than in one monumental effort. Since it sits alongside the provider's existing API, Jumpgate provides a new API interface without risking the stability already provided by the existing API. It's a value-add service that increases customer satisfaction without a huge increase in cost. Once full implementation is finished, a provider with a proprietary cloud platform can benefit from and offer all the tools that are developed to work with the OpenStack API.
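
For a sense of what "drivers attach handlers to endpoints" looks like in practice, here's a simplified sketch using the Falcon framework. The resource class, route and driver interface below are invented for illustration and are not Jumpgate's actual internals (the names follow the Falcon 0.x/1.x API that was current at the time; newer Falcon releases use falcon.App and resp.text):

    import json
    import falcon

    class ProviderComputeDriver(object):
        """Hypothetical driver that wraps a provider's proprietary API."""
        def list_instances(self, tenant_id):
            # A real driver would call the provider's own API client here.
            return [{'id': 'abc123', 'name': 'example-instance', 'status': 'ACTIVE'}]

    class ServersResource(object):
        """Serves an OpenStack-style GET /v2/{tenant_id}/servers endpoint."""
        def __init__(self, driver):
            self.driver = driver

        def on_get(self, req, resp, tenant_id):
            instances = self.driver.list_instances(tenant_id)
            # Reformat the provider's data into an OpenStack-compatible payload.
            resp.body = json.dumps({'servers': instances})
            resp.status = falcon.HTTP_200

    # A provider implements only the resources it needs and wires them to routes.
    api = falcon.API()
    api.add_route('/v2/{tenant_id}/servers', ServersResource(ProviderComputeDriver()))
    # Serve with any WSGI server, e.g.: gunicorn mymodule:api

Because each endpoint is just another route/resource pair, a provider can ship Nova compatibility first and add Glance, Cinder and the rest as separate drivers later.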

Jumpgate allows providers to test the proper OpenStack compatibility of their drivers by leveraging the OpenStack Tempest test suite. With these tests, developers run the full suite of calls used by OpenStack itself, highlighting edge cases or gaps in functionality. We've even included a helper script that allows Tempest to only run a subset of tests rather than the entire suite to assist with a staged rollout.

Current Development
Jumpgate is currently in an early alpha stage. We've built the compatibility framework itself and started on the SoftLayer drivers as a reference. So far, we've implemented key endpoints within Nova (computing instances), Keystone (identification and authorization), and Glance (image management) to get most of the basic functionality within Horizon (the web dashboard) working. We've heard that several groups outside SoftLayer are successfully using Jumpgate to drive products like Trove and Heat directly on SoftLayer, which is exciting and shows that we're well beyond the "proof of concept" stage. That being said, there's still a lot of work to be done.

We chose to develop Jumpgate in the open with a tool set that would be familiar to developers working with OpenStack. We're excited to debut this project for the broader OpenStack community, and we're accepting pull requests if you're interested in contributing. Making more clouds compatible with the OpenStack API is important and shouldn’t be an individual undertaking. If you're interested in learning more or contributing, head over to our in-flight project page on GitHub: SoftLayer Jumpgate. There, you'll find everything you need to get started along with the updates to our repository. We encourage everyone to contribute code or drivers ... or even just open issues with feature requests. The more community involvement we get, the better.

-Nathan

January 31, 2014

Simplified OpenStack Deployment on SoftLayer

"What is SoftLayer doing with OpenStack?" I can't even begin to count the number of times I've been asked that question over the last few years. In response, I'll usually explain how we've built our object storage platform on top of OpenStack Swift, or I'll give a few examples of how our customers have used SoftLayer infrastructure to build and scale their own OpenStack environments. Our virtual and bare metal cloud servers provide a powerful and flexible foundation for any OpenStack deployment, and our unique three-tiered network integrates perfectly with OpenStack's Compute and Network node architecture, so it's high time we make it easier to build an OpenStack environment on SoftLayer infrastructure.

To streamline and simplify OpenStack deployment for the open source community, we've published Opscode Chef recipes for both OpenStack Grizzly and OpenStack Havana on GitHub: SoftLayer Chef-Openstack. With Chef and SoftLayer, your own OpenStack cloud is a cookbook away. These recipes were designed with growth and scalability in mind. Let's take a deeper look into what exactly that means.

OpenStack has adopted a three-node design whereby a controller, compute, and network node make up its architecture:

OpenStack Architecture on SoftLayer

Looking more closely at any one node reveals the services it provides. Scaling the infrastructure beyond a few dozen nodes using this model could create bottlenecks in services such as the block store (OpenStack Cinder) and the image store (OpenStack Glance), since they are traditionally located on the controller node. Infrastructure requirements change from service to service as well. For example, OpenStack Neutron, the networking service, does not need much disk I/O, while the Cinder storage service might rely heavily on a node's hard disks. Our cookbook allows you to choose how and where to deploy the services, and it even lets you break apart the MySQL backend to further improve platform performance.

Quick Start: Local Demo Environment

To make it easy to get started, we've created a rapid prototype and sandbox script for use with Vagrant and VirtualBox. With Vagrant, you can easily spin up a demo environment of Chef Server and OpenStack in about 15 minutes on a moderately powerful laptop or desktop. Check it out here. This demo environment is an all-in-one installation of our Chef OpenStack deployment. It also installs a basic Chef server as a sandbox to help you see how the SoftLayer recipes are deployed.

Creating a Custom OpenStack Deployment

The three-node OpenStack model works well at small scale and meets the needs of many consumers; however, control and customizability are the tenets of the design of the SoftLayer OpenStack Chef cookbook. In our model, you have full control over the configuration and location of eleven different components in your deployed environment:

Our Chef recipes will take care of populating the configuration files with the necessary information so you won't have to. When deploying, you merely add the role for the matching service to a hardware or virtual server node, and Chef will deploy the service to it with all the configuration done automatically, including adding multiple Neutron, Nova, and Cinder nodes. This approach allows you to tailor each service to the hardware it will be deployed to: You might put your Neutron hardware node on a server with 10-gigabit network interfaces and configure your Cinder hardware node with RAID 1+0 15K SAS drives.

OpenStack is a fast-growing project for the implementation of IaaS in public and private clouds, but its deployment and configuration can be overwhelming. We created this cookbook to make the process of deploying a full OpenStack environment on SoftLayer quick and straightforward. With the simple configuration of eleven Chef roles, your OpenStack cloud can be deployed onto as few as one node and scaled up to hundreds (or even thousands).

To follow this project, visit SoftLayer on GitHub. Check out some of our other projects on GitHub, and let us know if you need any help or want to contribute.

-@marcalanjones

January 17, 2014

What's Next? $1.2 Billion Investment. 15 New Data Centers.

SoftLayer was founded in a living room on May 5, 2005. We bootstrapped our vision of becoming the de facto platform for cloud computing by maxing out our credit cards and draining our savings accounts. Over the course of eight years, we built a unique global offering, and in the middle of last year, our long-term vision was validated (and supercharged) by IBM.

When I posted about IBM acquiring SoftLayer last June, I explained that becoming part of IBM "will enable us to continue doing what we've done since 2005, but on an even bigger scale and with greater opportunities." To give you an idea of what "bigger scale" and "greater opportunities" look like, I need only direct you to today's press release: IBM Commits $1.2 Billion to Expand Global Cloud Footprint.

IBM Cloud Investment

It took us the better part of a decade to build a worldwide network of 13 data centers. As part of IBM, we'll more than double our data center footprint in a fraction of that time. In 2006, we were making big moves when we built facilities on the East and West coasts of the United States. Now, we're expanding into places like China, Hong Kong, London, Japan, India, Canada and Mexico City. We had a handful of founders pushing for SoftLayer's success, and now we've got 430,000+ IBM peers to help us reach our goal. This is a whole new ballgame.

The most important overarching story about this planned expansion is what each new facility will mean for our customers. When any cloud provider builds a data center in a new location, it's great news for customers and users in that geographic region: Content in that facility will be geographically closer to them, and they'll see lower pings and better performance from that data center. When SoftLayer builds a data center in a new location, customers and users in that geographic region see performance improvements from *all* of our data centers. The new facility serves as an on-ramp to our global network, so content on any server in any of our data centers can be accessed faster. To help illustrate that point, let's look at a specific example:

If you're in India, and you want to access content from a SoftLayer server in Singapore, you'll traverse the public Internet to reach our network, and the content will traverse the public Internet to get back to you. Third-party peering and transit providers pass the content to/from our network and your ISP, and you'll get the content you requested.

When we add a SoftLayer data center in India, you'll obviously access servers in that facility much more quickly, and when you want content from a server in our Singapore data center, you'll be routed through that new data center's network point of presence in India so that the long haul from India to Singapore will happen entirely on the private network we control and optimize.

Users around the world will have faster, more reliable access to servers in every other SoftLayer data center because we're bringing our network to their front doors. When you combine that kind of connectivity and access with our unique hybrid offering of powerful bare metal servers and scalable virtual server instances, it's easy to see how IBM, the most powerful technology company of the last 100 years, is positioned to remain the most powerful technology company in the world for the next century.

Now it's time to get to work.

-@lavosby

December 11, 2013

2013 at SoftLayer: Year in Review

I'm going into my third year at SoftLayer and it feels like "déjà vu all over again" to quote Yogi Berra. The breakneck pace of innovation, cloud adoption and market consolidation — it only seems to be accelerating.

The BIG NEWS for SoftLayer was announced in July when we became part of IBM. Plenty has already been written about the significance of this acquisition but as our CEO, Lance Crosby, eloquently put it in an earlier blog, "customers and clients from both companies will benefit from a higher level of choice and a higher level of service from a single partner. More important, the real significance will come as we merge technology that we developed within the SoftLayer platform with the power and vision that drives SmartCloud and pioneer next-generation cloud services."

We view our acquisition as an interesting inflection point for the entire cloud computing industry. The acquisition has ramifications that go beyond the IaaS market and include both PaaS and SaaS offerings. As the foundation for IBM's SmartCloud offerings, the one-stop shop for an entire portfolio of cloud services will resonate with startups and large enterprises alike. We're also seeing a market that is rapidly consolidating, and only those with global reach, deep pockets, and an established customer base will survive.

With IBM's support and resources, SoftLayer's plans for customer growth and geographic expansion have hit the fast track. News outlets are already abuzz with our plans to open a new data center facility in Hong Kong in the first quarter of next year, and that's just the tip of the iceberg for our extremely ambitious 2014 growth plans. Given the huge influx of opportunities our fellow IBMers are bringing to the table, we're going to be busy building data centers to stay one step ahead of customer demand.

The IBM acquisition generated enough news to devote an entire blog to, but because we've accomplished so much in 2013, I'd be remiss if I didn't create some space to highlight some of the other significant milestones we achieved this year. The primary reason SoftLayer was attractive to IBM in the first place was our history of innovation and technology development, and many of the product announcements and press releases we published this year tell that story.

Big Data and Analytics
Big data has been a key focus for SoftLayer in 2013. With the momentum we generated when we announced our partnership with MongoDB in December of 2012, we've been able to develop and roll out high-performance bare metal solution designers for Basho's Riak platform and Cloudera Hadoop. Server virtualization is a phenomenal boon to application servers, but disk-heavy, I/O-intensive operations can easily exhaust the resources of a virtualized environment. Because Riak and Hadoop are two of the most popular platforms for big data architectures, we teamed up with Basho and Cloudera to engineer server configurations that would streamline provisioning and supercharge the operations of their data-rich environments. From the newsroom in 2013:

  • SoftLayer announced the availability of Riak and Riak Enterprise on SoftLayer's IaaS platform. This partnership with Basho gives users the availability, fault tolerance, operational simplicity, and scalability of Riak combined with the flexibility, performance, and agility of SoftLayer's on-demand infrastructure.
  • SoftLayer announced a partnership with Cloudera to provide Hadoop big data solutions in a bare metal cloud environment. These on-demand solutions were designed with Cloudera best practices and are rapidly deployed with SoftLayer's easy-to-use solution designer tool.

Cutting-Edge Customers
Beyond the pure cloud innovation milestones we've hit this year, we've also seen a few key customers in vertical markets do their own innovating on our platform. These companies run the gamut from next generation e-commerce to interactive marketers and game developers who require high performance cloud infrastructure to build and scale the next leading application or game. Some of these game developers and cutting-edge tech companies are pretty amazing and we're glad we tapped into them to tell our story:

  • Asia's hottest tech companies looking to expand their reach globally are relying on SoftLayer's cloud infrastructure to break into new markets. Companies such as Distil Networks, Tiket.com, Simpli.fi, and 6waves are leveraging SoftLayer's Singapore data center to build out their customer bases while delivering their applications and games to users across the region with extremely low latency.
  • In March, we announced that hundreds of the top mobile, PC and social games, with more than 100 million active players, are now supported on SoftLayer's infrastructure platform. Gaming companies -- including Hothead Games, Geewa, Grinding Gear Games, Peak Games and Rumble Entertainment -- are flocking to SoftLayer because they can roll out virtual and bare metal servers along with a suite of networking, security and storage solutions on demand and in real time.

Industry Recognition
SoftLayer's success and growth is a collective effort; however, it is nice to see our founder and CEO, Lance Crosby, get some well-deserved recognition. In August, the Metroplex Technology Business Council (MTBC), the largest technology trade association in Texas, named him Corporate CEO of the Year during the 13th Annual Tech Titans Awards ceremony.

The prestigious annual contest recognizes outstanding information technology companies and individuals in the North Texas area who have made significant contributions during the past year locally, as well as to the technology industry overall.

We're using the momentum we've continued building in 2013 to propel us into 2014. An upcoming milestone, just around the corner, will be our participation at Pulse 2014 in late February. At this conference, we plan to unveil the ongoing integration efforts taking place between SoftLayer and IBM, including how:

  • SoftLayer provides flexible, secure, cloud-based infrastructure for running the toughest and most mission critical workloads on the cloud;
  • SoftLayer is the foundation of IBM's PaaS offerings for cloud-native application development and deployment;
  • SoftLayer is the platform for many of IBM's SaaS offerings supporting mobile, social and analytic applications. IBM has a growing portfolio of roughly 110 SaaS applications.

Joining forces with IBM will have its challenges, but the opportunities ahead look amazing. We encourage you to watch this space for even more activity next year and join us at Pulse 2014 in Las Vegas.

-Andre

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages a different pair of heterogeneous elements: bare metal and virtual server instances, both delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)


Figure 2: SoftLayer's Hybrid - Dedicated + Virtual


Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the two uses of the term are a lot more similar than I expected. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to the cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to transition your environment gradually to a scalable, flexible cloud environment without sacrificing performance. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressured to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will reduce your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

August 19, 2013

The 5 Mortal Sins of Launching a Social Game

Social network games have revolutionized the gaming industry and created an impressive footprint on the Web as a whole. 235 million people play games on Facebook every month, and some estimates say that by 2014, more than one-third of the Internet population will be playing social games. Given that market, it's no wonder that the vast majority of game studios, small or big, have prioritized games to be played on Facebook, Orkut, StudiVZ, VK and other social networks.

Developing and launching a game in general is not an easy task. It takes a lot of time, a lot of people, a lot of planning and a lot of assumptions. On top of those operational challenges, the social gaming market is a jungle where "survival of the fittest" is a very, VERY visible reality: One day everyone is growing tomatoes, the next they are bad guys taking over a city, and the next they are crushing candies. An army of genius developers with the most stunning designs and super-engaging game ideas can find it difficult to navigate the fickle social waters, but in the midst of all of that uncertainty, the most successful gaming studios have all avoided five of the most common mortal sins gaming companies commit when launching a social game.

SoftLayer isn't a gaming studio, and we don't have any blockbuster games of our own, but we support some of the most creative and successful gaming companies in the world, so we have a ton of indirect experience and perspective on the market. In fact, leading up to GDC Europe, I was speaking with a few of the brilliant people from KUULUU — an interactive entertainment company that creates social games for leading artists, celebrities and communities — about a new Facebook game they've been working on called LINKIN PARK RECHARGE:

After learning more about how KUULUU streamlines the process of developing and launching a new title, I started thinking about the market in general and the common mistakes most game developers make when they release a social game. So without further ado...

The 5 Mortal Sins of Launching a Social Game

1. Infinite Focus

Treat focus as a limited resource. If it helps, look at your team's cumulative capacity to focus as though it's a single cube. To dedicate focus to different parts of the game or application, you'll need to slice the cube. The more pieces you create, the thinner the slices will be, and you'll be devoting less focus to the most important pieces (which often results in worse quality). If you're diverting a significant amount of attention from building out the game's story line to perfecting the textures of a character's hair or the grass on the ground, you'll wind up with an aesthetically beautiful game that no one wants to play. Of course, that example is an extreme one, but it's not uncommon for game developers to fall into a less blatant trap, like spending time building and managing hosting infrastructure that could better be spent tweaking and improving in-game performance.

2. Eeny, Meeny, Miny, Moe – Geographic Targeting

Don't underestimate the power of the Internet and its social and viral drivers. You might believe your game will take off in Germany, but when you're publishing to a global social network, you need to be able to respond if your game becomes hugely popular in Seoul. A few enthusiastic tweets or wall posts from the alpha players in Korea might be the catalyst that takes your user base in the region from 1,000 to 80,000 overnight and to 2,000,000 in a week. With that boom in demand, you need to have the flexibility to supply that new market with the best quality service ... And having your entire infrastructure in a single facility in Europe won't make for the best user experience in Asia. Keep an eye on the traction your game has in various regions and geolocate your content closer to the markets where you're seeing the most success.

3. They Love Us, so They'll Forgive Us.

Often, a game's success can lure gaming companies into a false sense of security. Think about it in terms of the point above: 2,000,000 Koreans are trying to play your game a week after a great article is published about you, but you don't make any changes to serve that unexpected audience. What happens? Players time out, latency drags the performance of your game to a crawl, and 2,000,000 users are clicking away to play one of the other 10,000 games on Facebook or 160,000 games in a mobile app store. Gamers are fickle, and they demand high performance. If they experience anything less than a seamless experience, they're likely to spend their time and money elsewhere. Obviously, there's a unique balance for every game: A handful of players will be understanding of the fact that you underestimated the amount of incoming requests, that you need time to add extra infrastructure or move it elsewhere to decrease latency, but even those players will get impatient when they experience lag and downtime.

KUULUU took on this challenge in an innovative, automated way. They monitor the performance of all of their games and immediately ramp up infrastructure resources to accommodate growth in demand in specific areas. When demand shifts from one of their games to another, they're able to balance their infrastructure accordingly to deliver the best end-user experience at all times.

4. We Will Be Thiiiiiiiiiiis Successful.

Don't count your chickens before the eggs hatch. You never really, REALLY know how a social game will perform when the viral factor influences a game's popularity so dramatically. Your finite plans and expectations wind up being a list of guesstimates and wishes. It's great to be optimistic and have faith in your game, but you should never have to over-commit resources "just in case." If your game takes two months to get the significant traction you expect, the infrastructure you built to meet those expectations will be underutilized for two months. On the other hand, if your game attracts four times as many players as you expected, you risk overburdening your resources as you scramble to build out servers. This uncertainty is one of the biggest drivers of cloud computing adoption, and it leads us to the last mortal sin of launching a social game ...

5. Public Cloud Is the Answer to Everything.

To all those bravados who feel they have mastered the cloud and see it as the answer to all their problems: please, for your fans' sake, remember that the cloud has more than one flavor. Virtual instances in a public cloud environment can be provisioned within minutes and are awesome for your web servers, but they may not perform well for your databases or processor-intensive requirements. KUULUU chose to incorporate bare metal cloud into a hybrid environment where a combination of virtual and dedicated resources works together to provide incredible results:

LP RECHARGE

Avoiding these five mortal sins doesn't guarantee success for your social game, but at the very least, you'll sidestep a few common landmines. For more information on KUULUU's success with SoftLayer, check out this case study.

-Michalina

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.


In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow for more users to have their own connections, telco companies were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to allow for better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run your sites and applications. With virtualization, you can take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system would present all of the environment's resources as though those resources were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing" since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to make the cloud's benefits available to users who don't happen to have an abundance of physical servers on hand to create their own cloud computing infrastructure. Those users could order "cloud computing instances" (also known as "cloud servers") by ordering the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James

May 15, 2013

Secure Quorum: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we’re happy to welcome Gerard Ibarra from Secure Quorum. Secure Quorum is an easy-to-use emergency notification system and crisis management system that resides in the cloud.

Are You Prepared for an Emergency?

Every company's management team faces the challenge of having too many things going on with not enough time in the day. It's difficult to get everything done, so when push comes to shove, particular projects and issues need to be prioritized to be completed. What do we have to do today that can't be put off to tomorrow? Often, businesses fall into a reactionary rut where they are constantly "putting out the fires" first, and while it's vital for a business to put out those fires (literal or metaphorical), that approach makes it difficult to proactively prepare for those kinds of issues and streamline the process of resolving them. Secure Quorum was created to provide a simple, secure medium to deal with emergencies and incidents.

What we noticed was that businesses didn't often consider planning for emergencies as part of their operations. The emergencies I'm talking about thankfully don't happen often, but fires, accidents, power outages, workplace violence and denial of service attacks can severely impact the bottom line if they aren't addressed quickly ... They can make or break you. Are you prepared?

Every second that we fail to make informed and logical decisions during an emergency is time lost in taking action. Take these facts for a little perspective:

  • "Property destruction and business disruption due to disasters now rival warfare in terms of loss." (University Corporation for Atmospheric Research)
  • More than 10,000 severe thunderstorms, 2,500 floods, 1,000 tornadoes and 10 hurricanes affect the United States each year. On average, 500 people die yearly because of severe weather and floods. (National Weather News 2005)
  • The cost of natural disasters is rising. During the past two decades, natural disaster damage costs have exceeded the $500 billion mark. Only 17 percent of that figure was covered by insurance. (Dennis S. Mileti, Disasters by Design)
  • Losses as a result of global disasters continue to increase on average every year, with an estimated $360 billion USD lost in 2011. (Centre for Research in the Epidemiology of Disasters)
  • Natural disasters, power outages, IT failures and human error are common causes of disruptions to internal and external communications. They "can cause downtime and have a significant negative impact on employee productivity, customer retention, and the confidence of vendors, partners, and customers." (Debra Chin, Palmer Research, May 2011)

These kinds of "emergencies" are not going away, but because specific emergencies are difficult (if not impossible) to predict, it's not obvious how to deal with them. How do we reduce risk for our employees, vendors, customers and our business? The two best answers to that question are to have a business continuity plan (BCP) and to have a way to communicate and collaborate in the midst of an emergency.

Start with a BCP. A BCP is a strategic plan to help identify and mitigate risk. Investopedia gives a great explanation:

The creation of a strategy through the recognition of threats and risks facing a company, with an eye to ensure that personnel and assets are protected and able to function in the event of a disaster. Business continuity planning (BCP) involves defining potential risks, determining how those risks will affect operations, implementing safeguards and procedures designed to mitigate those risks, testing those procedures to ensure that they work, and periodically reviewing the process to make sure that it is up to date.

Make sure you understand the basics of a BCP, and look for cues from organizations like FEMA for examples of how to approach emergency situations: http://www.ready.gov/business-continuity-planning-suite.

Once you have a basic BCP in place, it's important to be able to execute it when necessary ... That's where an emergency communication and collaboration solution comes into play. You need to streamline how you communicate when an emergency occurs, and if you're relying on a manual process like a phone tree to spread the word and contact key stakeholders in the midst of an incident, you're wasting time that could better be spent focusing on the issue at hand. An emergency communication solution automates that process quickly and logically.

When you create a BCP, you consider which people in your organization are key to responding to specific types of emergencies, and if anything ever happens, you want to get all of those people together. An emergency communication system will collect the relevant information, send it to the relevant people in your organization and seamlessly bridge them into a secured conference call. What would take minutes to complete now takes seconds, and when it comes to responding to these kinds of issues, seconds count. With everyone on a secure call, decisions can be made quickly and recorded to inform employees and stakeholders of what occurred and what the next steps are.

Plan for emergencies and hope that you never have to use that plan. Think about preparing for emergencies strategically, and it could make all the difference in the world. Secure Quorum is a platform that makes it easy to communicate and collaborate quickly, reliably and securely in those high-stress situations, so if you're interested in getting help when it comes to responding to emergencies and incidents, visit our site at SecureQuorum.com and check out the whitepaper we just published with one of our customers: Ease of Use: Make it Part of Your Software Decision.

-Gerard Ibarra, CEO of Secure Quorum

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

February 18, 2013

What Happen[ed] in Vegas - Parallels Summit 2013

The Las Vegas Convention and Visitors Authority says, "What happens in Vegas, stays in Vegas," but we absconded from Caesars Palace with far too many pictures and videos from Parallels Summit to adhere to their suggestion. Over the course of three days, attendees stayed busy with presentations, networking sessions, parties, cocktails and (of course) the Server Challenge II. And thanks to Alan's astute questions in The Hangover, we didn't have to ask if the hotel was pager-friendly, whether a payphone bank was available or if Caesar actually lived at the hotel ... We could focus on the business at hand.

This year, Parallels structured the conference around three distinct tracks — Business, Technical and Developer — to focus all of the presentations for their most relevant audiences, and as a result, Parallels Summit engaged a broader, more diverse crowd than ever before. Many of the presentations were specifically geared toward the future of the cloud and how businesses can innovate to leverage the cloud's potential. With all of that buzz around the cloud and innovation, SoftLayer felt right at home. We were also right at home when it came to partying.

SoftLayer was a proud sponsor of the massive Parallels Summit party at PURE Nightclub in Caesars Palace on the second night of the conference. With respect to the "What Happens in Vegas" tagline, we actually powered down our recording devices to let the crowd enjoy the jugglers, acrobats, drinks and music without fear of incriminating pictures winding up on Facebook. Don't worry, though ... We made up for that radio silence by getting a little extra coverage of the epic Server Challenge II competition.

More than one hundred attendees stepped up to reassemble our rack of Supermicro servers, and the competition was fierce. The top two times were fifty-nine hundredths of a second apart from each other, and it took a blazingly fast time of 1:25.00 to even make the leader board. As the challenge heated up, we were able to capture video of the top three competitors (to be used as study materials for all competitors at future events):

It's pretty amazing to see the cult following that's starting to form around the Server Challenge, but it's not very surprising. Given how intense some of these contests have been, people are scouting our events page for their next opportunity to step up to the server rack, and I wouldn't be surprised to learn that people are mocking up their own Server Challenge racks at home to hone their strategy. A few of our friends on Twitter hinted that they're in training to dominate the next time they compete, so we're preparing for the crowds to get bigger and for the times to keep dropping.

If you weren't able to attend the show, Parallels posted video from two of the keynote presentations, and shared several of the presentation slide decks on the Parallels Summit Agenda. You might not get the full experience of networking, partying or competing in the Server Challenge, but you can still learn a lot.

Viva Las Vegas! Viva Parallels! Viva SoftLayer!

-Kevin

January 28, 2013

Catalyst: In the Startup Sauna and Slush

Slush.fi was a victim of its own success. In November 2012, the website home of Startup Sauna's early-stage startup conference was crippled by an unexpected flood of site traffic, and they had to take immediate action. Should they get a private MySQL instance from their current host to try to accommodate the traffic, or should they move their site to the SoftLayer cloud? Spoiler: You're reading this post on the SoftLayer Blog.

Let me back up for a second and tell you a little about Startup Sauna and Slush. Startup Sauna hosts (among other things) a Helsinki-based seed accelerator program for early-stage startup companies from Northern Europe and Russia. They run two five-week programs every year, with more than one hundred graduated companies to date. In addition to the accelerator program, Startup Sauna also puts on Slush, the biggest startup conference in Northern Europe, every year. Slush was founded in 2008 with the intent to bring the local startup scene together at least once every year. Now — five years later — Slush brings more international investors and media to the region than any other event out there. This year alone, 3,500 entrepreneurs, investors and partners converged on Slush to make connections and see the region's most creative and innovative businesses, products and services.

Slush Conference

In October of last year, we met the founders of Startup Sauna, and it was clear that they would be a perfect fit to join Catalyst. We offer their portfolio companies free credits for cloud and dedicated hosting, and we really try to get to know the teams and alumni. Because Startup Sauna signed on just before Slush 2012 in November, they didn't want to rock the boat by moving their site to SoftLayer before the conference. Little did we know that they'd end up needing to make the transition during the conference.

When the event started, the Slush website was inundated with traffic. Attendees were checking the agenda and learning about some of the featured startups, and the live stream of the presentation brought record numbers of unique visitors and views. That's all great news ... Until those "record numbers" pushed the site's infrastructure to its limit. Startup Sauna CTO Lari Haataja described what happened:

The number of participants had definitely most impact on our operations. The Slush website was hosted on a standard webhotel (not by SoftLayer), and due to the tremendous traffic we faced some major problems. Everyone was busy during the first morning, and it took until noon before we had time to respond to the messages about our website not responding. Our Google Analytics were on fire, especially when Jolla took the stage to announce their big launch. We were streaming the whole program live, and anyone who wasn't able to attend the conference wanted to be the first to know about what was happening.

The Slush website was hosted on a shared MySQL instance with a limited number of open connections, so when those connections were maxed out (quickly) by site visitors from 134 different countries, database errors abounded. The Startup Sauna team knew that a drastic change was needed to get the site back online and accessible, so they provisioned a SoftLayer cloud server and moved their site to its new home. In less than two hours (much of the time being spent waiting for files to be downloaded and for DNS changes to be recognized), the site was back online and able to accommodate the record volume of traffic.

You've seen a few of these cautionary tales before on the SoftLayer Blog, and that's because these kinds of experiences are all too common. You dream about getting hundreds of thousands of visitors, but when those visitors come, you have to be ready for them. If you have an awesome startup and you want to learn more about the Startup Sauna, swing by Helsinki this week. SoftLayer Chief Strategy Officer George Karidis will be in town, and we plan on taking the Sauna family (and anyone else interested) out for drinks on January 31! Drop me a line in a comment here or over on Twitter, and I'll make sure you get details.

-@EmilyBlitz
