Posts Tagged 'Cloud'

August 7, 2014

Deploy or Die

“Forget about being a futurist, become a now-ist.” With those words, Joi Ito, the director of the MIT Media Lab, ends his most recent talk at TED. What thrills me the most is his encouragement to apply agile principles throughout any innovation process. Creating in the moment, building quickly, and improving constantly is the story we’ve been advocating at SoftLayer for a long while.

Joi says that this new approach is possible thanks to the Internet. I actually want to take it further. Because the Internet has been around a lot longer than these agile principles, I argue that the real catalyst for the startups and technology disruptors we see nowadays was the widespread, affordable availability of cloud resources. The ability to deploy infrastructure on demand, anywhere in the world, without long-term commitments, and with the option to scale it up and down on the fly has decreased the cost of innovation dramatically. And fueling that innovation has always been SoftLayer's raison d'être.

Joi compares two innovation models: the one before the Internet (I will go ahead and replace “Internet” with “cloud,” which I believe makes the case even stronger) and the new model. The world seemed much more structured before the cloud, governed by a certain set of rules and laws. When the cloud happened, everything became complex, low cost, and fast, with Newtonian rules often being defied.

Before, creating something new would cost millions of dollars. The process started with commercial minds, aka MBAs, who’d write a business plan, look for money to support it, and then hire designers and engineers to build the thing. Recently, this MBA-driven model has flipped: first designers and engineers build a thing, then they look for money from VCs or larger organizations, then they write a business plan, and then they move on to hiring MBAs.

A couple of months ago, I started to share this same observation more loudly. In the past, if an organization wanted to bring something new to the market, or even just iterate on an existing offering, it involved a lot of resources, from time, to people, to supporting infrastructure. Only a handful of ideas, after cumbersome fights with processes, budget restrictions, and people (and their egos), saw the light of day. Change was a luxury.

Nowadays the creators are people who used to be in the shadows, mainly taking instructions from “management” and spinning the hamster wheel they were put on. Now, the “IT crowd” no longer sits in the basements of their offices. They are creating new revenue streams and becoming driving forces within their organizations, or they are rolling out their own businesses as startup founders. There is a whole new breed of technology entrepreneurs thriving on what the cloud offers.

Coming back to the TED talk, Joi brings great examples proving that this new designers/engineers-driven model has pushed innovation to the edges and beyond not only in software development, but also in manufacturing, medicine, and other disciplines. He describes bottom-up innovation as democratic, chaotic, and hard to control, where traditional rules don’t apply anymore. He replaces the demo-or-die motto with a new one: deploy or die, stating that you have to bring something to the real world for it to really count.

He walks us through the principles behind the new way of doing things, and for each of those, without any hesitation, I can add, “and that’s exactly what the cloud enables” as an ending to each statement:

  • Principle 1: Pull Over Push is about pulling the resources from the network as you need them, rather than stocking them in the center and controlling everything. And that’s exactly what the cloud enables.
  • Principle 2: Learning Over Education means drawing conclusions and learning on the go—not from static information, but by experimenting, testing things in real life, playing around with your idea, seeing what comes out of it, and applying the lessons moving forward. And that’s exactly what the cloud enables.
  • Principle 3: Compass Over Maps calls out the high cost of writing a plan or mapping the whole project, as it usually turns out not to be very accurate nor useful in the unpredictable world we live in. It’s better not to plan the whole thing with all the details ahead, but to know the direction you’re headed and leave yourself the freedom of flexibility, to adjust as you go, taking into account the changes resulting from each step. And that’s exactly what the cloud enables.

I dare say that all of the above is the true power of the cloud, without the fluff, leaving you with an easy choice when facing the deploy-or-die dilemma.

- Michalina

July 1, 2014

The Cloud in 100 Years

Today’s cloud is still in its infancy, with less than 10 years under its belt, yet it has produced some of the most advanced products and solutions known to date. Cloud, in fact, has helped change how the world connects by making information, current events, and communication available globally, at the speed of light.

The Internet itself was born in the 1960s and in just 44 years, look at what it has accomplished! Websites like Google, Bing, and Yahoo provide up-to-the-second information that is reinventing and replacing the role dictionaries and encyclopedias once played. Facebook, Twitter, and Instagram are revolutionizing how most of the world communicates. WordPress, Tumblr, and blogs give voices to many journalists and writers who were once heard by few, if any. It is truly a new landscape today. Do you think that when Herman Hollerith invented the punch card in the 1890s, he imagined it would evolve data processing into “the cloud” in a little over 100 years? IBM 100 explains:

One could argue that the information age began with the punch card, and that data processing as a transformational technology began with its 1928 redesign by IBM. This thin piece of cardboard, with 80 columns of tiny rectangular holes made the world quantifiable. It allowed data to be recorded, stored, and analyzed. For nearly 50 years, it remained the primary vehicle for processing the essential facts and figures that comprised countless industries, in every corner of the globe. (IBM 100)

What about the future?

It’s obvious that predicting 10 decades into the future is a difficult task, but one thing is for sure: this cloud thing is just getting started.

  • What will we call it? The Internet/World Wide Web is now almost synonymous with the term cloud. I predict that in the next 20 years it will take on another name. Something even more nebulous than the cloud … maybe even “The Nebula.” Or … quite possibly, Skynet!
  • How will it be accessed? In 100 years, I think the more fitting question will be, “how will you hide from it?” Today, we are voluntarily connected with our smart phones. You can be found and contacted through various mediums from a single, handheld device. FaceTime, WhatsApp, Skype, Tango … you name it. You can make video calls to people halfway around the world in seconds. If Moore’s law still applies in 100 years, our devices could potentially be 50 times smaller than they are today.
  • Ultimate Control: Nanotechnology will have the ability to control the weather and not only determine if we will have rain but regulate it. Weather control could rid the world of drought and make uninhabitable areas of the world flourish.
  • Medicine: The term “antibiotics” will take on a whole new meaning for medicine in 100 years. Imagine instead of getting a shot of penicillin, you receive 50 mL of microscopic robots that can attack the virus directly, from within. The robots then send a push notification to your ‘iPhone 47S’ notifying you that your flu bug has been located and eradicated and that you can press “OK” to send the final report to your physician. The Magic School Bus finally becomes a reality!

Without a doubt, cloud services will be everywhere in the future. The change is already taking place with early adopters and businesses. In the 10 years since the industry coined the term cloud, it’s become a birthplace for technology and industry disruption. That has caught the attention of traditional IT organizations, which see the cloud as a way to save capital, lower time to market, and increase research and development on their own products and services.

SoftLayer is dedicated to helping the transformation of mid-market and enterprise companies alike. We understand that the cloud is virtually making this world smaller as companies reach into markets that were once out of reach, which is why we’re in the process of doubling our data center footprint to reach those once-unreachable areas of the world. Don’t be surprised when we announce our first data center on the moon!

-Harold

April 29, 2014

The Media Industry is Making the Move to Cloud

Rumor has it that the entire rendering of James Cameron’s “Avatar” using 3DFusion required more than 1 petabyte of storage space. That is equivalent to 500 hard drives of 2 terabytes each, or a 32-year-long MP3 file! The computing power behind this would consist of about 34 racks, each with 4 chassis containing 32 machines. All of that adds up to roughly 40,000 processors and 104 terabytes of RAM.

High-res, long-form media files that can reach hundreds of gigabytes in size are a regular phenomenon in the media industry. Whether it’s making the next “Avatar” or creating the next big, viral ad campaign, technology is fundamental to the media industry. But the investment required to set all of this up is enough to boggle the mind and dissuade even the high risk-takers. So, why buy when you can rent?

The cloud allows you to rent, use, and return infrastructure with no CapEx. That gives users access to virtually unlimited compute power, including servers, network, storage, firewalls, and ancillary services, all available on demand, with pay-as-you-go billing offered hourly or monthly.
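To make that rental model concrete, here is a minimal sketch of ordering (and later returning) an hourly virtual server with SoftLayer's open source Python client. The hostname, domain, data center, and sizing values below are placeholders for illustration, not recommendations:

```python
# Minimal sketch using the SoftLayer Python client (pip install softlayer).
# Hostname, domain, data center, and sizing values are placeholders.
import SoftLayer

client = SoftLayer.create_client_from_env()   # reads credentials from your env or ~/.softlayer
vs_manager = SoftLayer.VSManager(client)

# "Rent": order an hourly virtual server on demand; no CapEx, pay as you go.
instance = vs_manager.create_instance(
    hostname='render-worker-01',
    domain='example.com',
    datacenter='sng01',        # Singapore, for example
    cpus=4,
    memory=8192,               # in MB
    hourly=True,               # set to False for monthly billing
    os_code='UBUNTU_LATEST')

print('Ordered virtual server with ID', instance['id'])

# "Return": cancel the instance when the job is done and stop paying for it.
# vs_manager.cancel_instance(instance['id'])
```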

Cloud services are an increasingly viable avenue for the industry to leverage and support the performance needs of online media storage, as well as collaboration environments. The benefits of a customizable approach to the cloud include digital archives, production support, broadcast facility resiliency, high-intensity processing, and derivatives manufacturing for transcoding and encrypting. An on-demand, scalable infrastructure is the next step toward reducing production and operations costs, simplifying data access, and delivering content faster to the end user.

This year at ad:tech asean, SoftLayer will present on how the media industry is utilizing cloud infrastructure. So, I thought this would be a good opportunity to share some interesting customer stories about media companies at the top of their games and successfully growing their businesses on the cloud. Here are two of those stories.

The Loft Group, an Australian creative digital agency, specializes in creating e-learning campaigns for global brands. The company won a contract with cosmetics giant L’Oreal but realized that in order to go big with its platform, it needed technology that provided its support team with the necessary analytics. The Loft Group selected SoftLayer as the cloud platform for its digital e-learning campaigns. Moving its services to the cloud helped the company achieve global scale and consistent performance across multiple countries, and grow at a pace that slashed a 3- to 5-year transformation timeline down to just months.

According to eMarketer’s forecast, global e-commerce sales will top $1.2 trillion by 2016, and that growth is projected to continue at 20 percent every year. Ad personalization is playing a larger part in maximizing e-commerce business. To keep up with the demands of real-time ad personalization, companies like Struq, an ad personalization platform, require an infrastructure that can process high volumes at high speeds.

Struq offers highly targeted ad campaigns across a range of promotional platforms. The company often handles more than 2 terabytes of raw event data every day, processing more than 95 percent of requests in less than 30 milliseconds. And when the company’s growing European customer base demanded immediate server allocation, Struq turned to SoftLayer for scalability. We were able to offer on-demand provisioning as well as the low latency their customers required. A detailed story of how Struq achieved the requisite scalability and success with SoftLayer is available here.

More stories to come, so stay tuned! In the meantime, you can hear more customer stories during the first leg of ad:tech asean, a preliminary roadshow in Jakarta, Kuala Lumpur, and Bangkok.

-@namrata_kapur

February 6, 2014

Building a Bridge to the OpenStack API

OpenStack is experiencing explosive growth in the cloud market. With more than 200 companies contributing code to the project and new installations coming online every day, OpenStack is pushing hard to become a global standard for cloud computing. Dozens of useful tools and software products have been developed using the OpenStack API, so a growing community of administrators, developers, and IT organizations has access to easy-to-use, powerful cloud resources. This kind of OpenStack integration is great for users on a full OpenStack cloud, but it introduces a challenge to providers and users on other cloud platforms: Should we consider deploying or moving to an OpenStack environment to take advantage of these tools?

If a cloud provider has spent years developing a unique platform with a proprietary API, implementing native support for the OpenStack API or deploying a full OpenStack solution may be cost prohibitive, even with significant customer and market demand. The provider can bite the bullet and implement OpenStack compatibility, hope that a third-party library like libcloud or fog is updated to support its API, or go it alone and develop an ecosystem of products around its own API.

Introducing Jumpgate

When we were faced with this situation at SoftLayer, we chose a fourth option. We wanted to make the process of creating an OpenStack-compatible API simpler and more modular, and that's how Jumpgate was born. Jumpgate is middleware that acts as a compatibility layer between the OpenStack API and a provider's proprietary API. Externally, it exposes endpoints that adhere to OpenStack's published and accepted API specification, which it then translates into the provider's API using a series of drivers. Think of it as a mechanism to enable passing from one realm/space into another — like the jumpgates featured in science fiction works.


How Jumpgate Works
Let's take a look at a high-level example: When you want to create a new virtual instance on OpenStack, you might use the Horizon dashboard or the Nova command line client. When you issue the request, the tool first makes a REST call to a Keystone endpoint for authentication, which returns an authorization token. The client then makes another REST call to a Nova endpoint, which manages the computing instances, to create the actual virtual instance. Nova may then make calls to other tools within the cluster for networking (Neutron, formerly Quantum), image information (Glance), block storage (Cinder), or more. In addition, your client may also send requests directly to some of these endpoints to query for status updates, information about available resources, and so on.
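To make that sequence concrete, here's a bare-bones sketch of those two REST calls using Python's requests library. The endpoint URLs, credentials, and image/flavor IDs are placeholders, and the payloads follow the Keystone v2.0 and Nova v2 formats that were current at the time of writing:

```python
# Bare-bones sketch of the native OpenStack request flow described above.
# Endpoint URLs, credentials, and image/flavor IDs are placeholders.
import requests

KEYSTONE = 'http://keystone.example.com:5000/v2.0'
NOVA = 'http://nova.example.com:8774/v2'

# 1. Authenticate against Keystone and receive an authorization token.
auth = requests.post(KEYSTONE + '/tokens', json={
    'auth': {
        'tenantName': 'demo',
        'passwordCredentials': {'username': 'demo', 'password': 'secret'},
    },
}).json()
token = auth['access']['token']['id']
tenant_id = auth['access']['token']['tenant']['id']

# 2. Ask Nova (compute) to create the virtual instance, passing the token along.
server = requests.post(
    NOVA + '/%s/servers' % tenant_id,
    headers={'X-Auth-Token': token},
    json={'server': {
        'name': 'demo-instance',
        'imageRef': 'IMAGE_UUID_PLACEHOLDER',
        'flavorRef': '1',
    }},
).json()

print('Created instance', server['server']['id'])
```

Horizon and the Nova client hide these calls behind friendlier interfaces, but this is roughly the traffic that reaches the endpoints.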

With Jumpgate, your tool first hits the Jumpgate middleware, which exposes a Keystone endpoint. Jumpgate takes the request, breaks it apart into its relevant pieces, then loads up your provider's appropriate API driver. Next, Jumpgate reformats your request into a form that the driver supports and sends it to the provider's API endpoint. Once the response comes back, Jumpgate again uses the driver to break apart the proprietary API response, reformats it into an OpenStack compatible JSON payload, and sends it back to your client. The result is that you interact with an OpenStack-compatible API, and your cloud provider processes those interactions on their own backend infrastructure.

Internally, Jumpgate is a lightweight middleware built in Python using the Falcon Framework. It provides endpoints for nearly every documented OpenStack API call and allows drivers to attach handlers to these endpoints. This modular approach allows providers to implement only the endpoints that are of the highest importance, rolling out OpenStack API compatibility in stages rather than in one monumental effort. Since it sits alongside the provider's existing API, Jumpgate provides a new API interface without risking the stability already provided by the existing API. It's a value-add service that increases customer satisfaction without a huge increase in cost. Once full implementation is finished, a provider with a proprietary cloud platform can benefit from, and offer, all the tools developed to work with the OpenStack API.
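To give a feel for that modular, driver-based approach, here's a heavily simplified sketch in Falcon. It is not Jumpgate's actual driver interface; the ExampleDriver class and its list_instances() method are made-up stand-ins for a provider's proprietary API client:

```python
# Heavily simplified sketch of the endpoint-plus-driver pattern.
# Not Jumpgate's actual driver interface; ExampleDriver is a made-up stand-in.
import falcon


class ExampleDriver(object):
    """Wraps a provider's proprietary API behind a simple method."""

    def list_instances(self):
        # A real driver would call the provider's own API here.
        return [{'id': 'abc123', 'hostname': 'web01', 'state': 'RUNNING'}]


class ServersResource(object):
    """Exposes an OpenStack Nova-shaped GET /servers endpoint."""

    def __init__(self, driver):
        self.driver = driver

    def on_get(self, req, resp, tenant_id):
        # Reformat the proprietary response into an OpenStack-compatible payload.
        resp.media = {'servers': [
            {'id': i['id'], 'name': i['hostname'], 'status': i['state']}
            for i in self.driver.list_instances()
        ]}


app = falcon.App()  # spelled falcon.API() in releases contemporary with this post
app.add_route('/v2/{tenant_id}/servers', ServersResource(ExampleDriver()))
```

A provider rolls out compatibility by registering handlers like this for whichever endpoints matter most first.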

Jumpgate allows providers to test the OpenStack compatibility of their drivers by leveraging the OpenStack Tempest test suite. With these tests, developers run the full suite of calls used by OpenStack itself, highlighting edge cases or gaps in functionality. To assist with a staged rollout, we've even included a helper script that allows Tempest to run only a subset of tests rather than the entire suite.

Current Development
Jumpgate is currently in an early alpha stage. We've built the compatibility framework itself and started on the SoftLayer drivers as a reference. So far, we've implemented key endpoints within Nova (computing instances), Keystone (identity and authorization), and Glance (image management) to get most of the basic functionality within Horizon (the web dashboard) working. We've heard that several groups outside SoftLayer are successfully using Jumpgate to drive products like Trove and Heat directly on SoftLayer, which is exciting and shows that we're well beyond the "proof of concept" stage. That being said, there's still a lot of work to be done.

We chose to develop Jumpgate in the open with a tool set that would be familiar to developers working with OpenStack. We're excited to debut this project for the broader OpenStack community, and we're accepting pull requests if you're interested in contributing. Making more clouds compatible with the OpenStack API is important and shouldn’t be an individual undertaking. If you're interested in learning more or contributing, head over to our in-flight project page on GitHub: SoftLayer Jumpgate. There, you'll find everything you need to get started along with the updates to our repository. We encourage everyone to contribute code or drivers ... or even just open issues with feature requests. The more community involvement we get, the better.

-Nathan

January 31, 2014

Simplified OpenStack Deployment on SoftLayer

"What is SoftLayer doing with OpenStack?" I can't even begin to count the number of times I've been asked that question over the last few years. In response, I'll usually explain how we've built our object storage platform on top of OpenStack Swift, or I'll give a few examples of how our customers have used SoftLayer infrastructure to build and scale their own OpenStack environments. Our virtual and bare metal cloud servers provide a powerful and flexible foundation for any OpenStack deployment, and our unique three-tiered network integrates perfectly with OpenStack's Compute and Network node architecture, so it's high time we make it easier to build an OpenStack environment on SoftLayer infrastructure.

To streamline and simplify OpenStack deployment for the open source community, we've published Opscode Chef recipes for both OpenStack Grizzly and OpenStack Havana on GitHub: SoftLayer Chef-Openstack. With Chef and SoftLayer, your own OpenStack cloud is a cookbook away. These recipes were designed with the needs of growth and scalability in mind. Let's take a deeper look into what exactly that means.

OpenStack has adopted a three-node design whereby a controller, compute, and network node make up its architecture:

OpenStack Architecture on SoftLayer

Looking more closely at any one node reveals the services it provides. Scaling the infrastructure beyond a few dozen nodes using this model could create bottlenecks in services such as the block store (OpenStack Cinder) and the image store (OpenStack Glance), since they are traditionally located on the controller node. Infrastructure requirements change from service to service as well. For example, OpenStack Neutron, the networking service, does not need much disk I/O, while the Cinder storage service might rely heavily on a node's hard disks. Our cookbook allows you to choose how and where to deploy the services, and it even lets you break apart the MySQL backend to further improve platform performance.

Quick Start: Local Demo Environment

To make it easy to get started, we've created a rapid prototype and sandbox script for use with Vagrant and VirtualBox. With Vagrant, you can easily spin up a demo environment of Chef Server and OpenStack in about 15 minutes on a moderately powerful laptop or desktop. Check it out here. This demo environment is an all-in-one installation of our Chef OpenStack deployment. It also installs a basic Chef server as a sandbox to help you see how the SoftLayer recipes were deployed.

Creating a Custom OpenStack Deployment

The three-node OpenStack model does well at small scale and meets the needs of many consumers; however, control and customizability are the tenets of the SoftLayer OpenStack Chef cookbook's design. In our model, you have full control over the configuration and location of eleven different components in your deployed environment:

Our Chef recipes will take care of populating the configuration files with the necessary information so you won't have to. When deploying, you merely add the role for the matching service to a hardware or virtual server node, and Chef will deploy the service to it with all the configuration done automatically, including adding multiple Neutron, Nova, and Cinder nodes. This approach allows you to tailor each service to the hardware it will be deployed to: you might put your Neutron node on a server with 10-gigabit network interfaces and configure your Cinder node with RAID 1+0 15K SAS drives.

OpenStack is a fast-growing project for implementing IaaS in public and private clouds, but its deployment and configuration can be overwhelming. We created this cookbook to make the process of deploying a full OpenStack environment on SoftLayer quick and straightforward. With the simple configuration of eleven Chef roles, your OpenStack cloud can be deployed onto as few as one node and scaled up to as many as hundreds (or thousands).

To follow this project, visit SoftLayer on GitHub. Check out some of our other projects on GitHub, and let us know if you need any help or want to contribute.

-@marcalanjones

January 17, 2014

What's Next? $1.2 Billion Investment. 15 New Data Centers.

SoftLayer was founded in a living room on May 5, 2005. We bootstrapped our vision of becoming the de facto platform for cloud computing by maxing out our credit cards and draining our savings accounts. Over the course of eight years, we built a unique global offering, and in the middle of last year, our long-term vision was validated (and supercharged) by IBM.

When I posted about IBM acquiring SoftLayer last June, I explained that becoming part of IBM "will enable us to continue doing what we've done since 2005, but on an even bigger scale and with greater opportunities." To give you an idea of what "bigger scale" and "greater opportunities" look like, I need only direct you to today's press release: IBM Commits $1.2 Billion to Expand Global Cloud Footprint.

IBM Cloud Investment

It took us the better part of a decade to build a worldwide network of 13 data centers. As part of IBM, we'll more than double our data center footprint in a fraction of that time. In 2006, we were making big moves when we built facilities on the East and West coasts of the United States. Now, we're expanding into places like China, Hong Kong, London, Japan, India, Canada and Mexico City. We had a handful of founders pushing for SoftLayer's success, and now we've got 430,000+ IBM peers to help us reach our goal. This is a whole new ballgame.

The most important overarching story about this planned expansion is what each new facility will mean for our customers. When any cloud provider builds a data center in a new location, it's great news for customers and users in that geographic region: Content in that facility will be geographically closer to them, and they'll see lower pings and better performance from that data center. When SoftLayer builds a data center in a new location, customers and users in that geographic region see performance improvements from *all* of our data centers. The new facility serves as an on-ramp to our global network, so content on any server in any of our data centers can be accessed faster. To help illustrate that point, let's look at a specific example:

If you're in India, and you want to access content from a SoftLayer server in Singapore, you'll traverse the public Internet to reach our network, and the content will traverse the public Internet to get back to you. Third-party peering and transit providers pass the content to/from our network and your ISP, and you'll get the content you requested.

When we add a SoftLayer data center in India, you'll obviously access servers in that facility much more quickly, and when you want content from a server in our Singapore data center, you'll be routed through that new data center's network point of presence in India so that the long haul from India to Singapore will happen entirely on the private network we control and optimize.

Users around the world will have faster, more reliable access to servers in every other SoftLayer data center because we're bringing our network to their front doors. When you combine that kind of connectivity and access with our unique hybrid offering of powerful bare metal servers and scalable virtual server instances, it's easy to see how IBM, the most powerful technology company of the last 100 years, is positioned to remain the most powerful technology company in the world for the next century.

Now it's time to get to work.

-@lavosby

December 11, 2013

2013 at SoftLayer: Year in Review

I'm going into my third year at SoftLayer and it feels like "déjà vu all over again" to quote Yogi Berra. The breakneck pace of innovation, cloud adoption and market consolidation — it only seems to be accelerating.

The BIG NEWS for SoftLayer was announced in July when we became part of IBM. Plenty has already been written about the significance of this acquisition but as our CEO, Lance Crosby, eloquently put it in an earlier blog, "customers and clients from both companies will benefit from a higher level of choice and a higher level of service from a single partner. More important, the real significance will come as we merge technology that we developed within the SoftLayer platform with the power and vision that drives SmartCloud and pioneer next-generation cloud services."

We view our acquisition as an interesting inflection point for the entire cloud computing industry. The acquisition has ramifications that go beyond the IaaS market and include both PaaS and SaaS offerings. With SoftLayer as the foundation for IBM's SmartCloud offerings, a one-stop shop for an entire portfolio of cloud services will resonate with startups and large enterprises alike. We're also seeing a market that is rapidly consolidating, where only those with global reach, deep pockets, and an established customer base will survive.

With IBM's support and resources, SoftLayer's plans for customer growth and geographic expansion have hit the fast track. News outlets are already abuzz with our plans to open a new data center facility in Hong Kong in the first quarter of next year, and that's just the tip of the iceberg for our extremely ambitious 2014 growth plans. Given the huge influx of opportunities our fellow IBMers are bringing to the table, we're going to be busy building data centers to stay one step ahead of customer demand.

The IBM acquisition generated enough news to devote an entire blog to, but because we've accomplished so much in 2013, I'd be remiss if I didn't create some space to highlight some of the other significant milestones we achieved this year. The primary reason SoftLayer was attractive to IBM in the first place was our history of innovation and technology development, and many of the product announcements and press releases we published this year tell that story.

Big Data and Analytics
Big data has been a key focus for SoftLayer in 2013. With the momentum we generated when we announced our partnership with MongoDB in December of 2012, we've been able to develop and roll out high-performance bare metal solution designers for Basho's Riak platform and Cloudera Hadoop. Server virtualization is a phenomenal boon to application servers, but disk-heavy, I/O-intensive operations can easily exhaust the resources of a virtualized environment. Because Riak and Hadoop are two of the most popular platforms for big data architectures, we teamed up with Basho and Cloudera to engineer server configurations that would streamline provisioning and supercharge the operations of their data-rich environments. From the newsroom in 2013:

  • SoftLayer announced the availability of Riak and Riak Enterprise on SoftLayer's IaaS platform. This partnership with Basho gives users the availability, fault tolerance, operational simplicity, and scalability of Riak combined with the flexibility, performance, and agility of SoftLayer's on-demand infrastructure.
  • SoftLayer announced a partnership with Cloudera to provide Hadoop big data solutions in a bare metal cloud environment. These on-demand solutions were designed with Cloudera best practices and are rapidly deployed with SoftLayer's easy-to-use solution designer tool.

Cutting-Edge Customers
Beyond the pure cloud innovation milestones we've hit this year, we've also seen a few key customers in vertical markets do their own innovating on our platform. These companies run the gamut from next-generation e-commerce to interactive marketers and game developers who require high-performance cloud infrastructure to build and scale the next leading application or game. Some of these game developers and cutting-edge tech companies are pretty amazing, and we're glad we tapped into them to tell our story:

  • Asia's hottest tech companies looking to expand their reach globally are relying on SoftLayer's cloud infrastructure to break into new markets. Companies such as Distil Networks, Tiket.com, Simpli.fi, and 6waves are leveraging SoftLayer's Singapore data center to build out their customer bases while delivering their applications or games to users across the region with extremely low latency.
  • In March, we announced that hundreds of the top mobile, PC, and social games, with more than 100 million active players, are now supported on SoftLayer's infrastructure platform. Gaming companies -- including Hothead Games, Geewa, Grinding Gear Games, Peak Games and Rumble Entertainment -- are flocking to SoftLayer because they can roll out virtual and bare-metal servers along with a suite of networking, security and storage solutions on demand and in real time.

Industry Recognition
SoftLayer's success and growth is a collective effort; however, it is nice to see our founder and CEO, Lance Crosby, get some well-deserved recognition. In August, the Metroplex Technology Business Council (MTBC), the largest technology trade association in Texas, named him Corporate CEO of the Year at the 13th Annual Tech Titans Awards ceremony.

The prestigious annual contest recognizes outstanding information technology companies and individuals in the North Texas area who have made significant contributions during the past year locally, as well as to the technology industry overall.

We're using the momentum we've continued building in 2013 to propel us into 2014. An upcoming milestone, just around the corner, will be our participation at Pulse 2014 in late February. At this conference, we plan to unveil the ongoing integration efforts taking place between SoftLayer and IBM, including how:

  • SoftLayer provides flexible, secure, cloud-based infrastructure for running the toughest and most mission-critical workloads on the cloud;
  • SoftLayer is the foundation of IBM's PaaS offerings for cloud-native application development and deployment;
  • SoftLayer is the platform for many of IBM's SaaS offerings supporting mobile, social, and analytics applications. IBM has a growing portfolio of roughly 110 SaaS applications.

Joining forces with IBM will have its challenges, but the opportunities ahead look amazing. We encourage you to watch this space for even more activity next year and to join us at Pulse 2014 in Las Vegas.

-Andre

October 24, 2013

Why Hybrid? Why Now?

As off-premise cloud computing adoption continues to grow in a non-linear fashion, a growing number of businesses running in-house IT environments are debating whether they should get on board as well. If you've been part of any of those conversations, you've tried to balance the hype with the most significant questions for your business: "How do we know if our company is ready to try cloud resources? And if we're ready, how do we actually get started?"

Your company is cloud-ready as soon as you understand and accept the ramifications of remote resources and scaling in the cloud model, and it doesn't have to be an "all-in" decision. If you need certain pieces of your infrastructure to reside in-house, you can start evaluating the cloud with workloads that don't have to be hosted internally. The traditional IT term for this approach is "hybrid," but that term might cause confusion these days.

In the simplest sense, a hybrid model is one in which a workload is handled by two or more heterogeneous elements. In the traditional IT sense, those heterogeneous elements are two distinct operating environments (on-prem and off-prem). In SoftLayer's world, a hybrid environment leverages different heterogeneous elements: bare metal and virtual server instances, delivered in the cloud.

Figure 1: Traditional Hybrid - On-Premise to Cloud (Through VPN, SSL or Open Communications)


Figure 2: SoftLayer's Hybrid - Dedicated + Virtual


Because SoftLayer's "hybrid" and traditional IT's "hybrid" are so different, it's easy to understand the confusion in the marketplace: If a hybrid environment is generally understood to involve the connection of on-premise infrastructure to cloud resources, SoftLayer's definition seems contrarian. Actually, the use of the term is a lot more similar than I expected. In a traditional hosting environment, most businesses think in terms of bare metal (dedicated) servers, and when those businesses move "to the cloud," they're generally thinking in terms of virtualized server instances. So SoftLayer's definition of a hybrid environment is very consistent with the market definition ... It's just all hosted off-premise.

The ability to have dedicated resources intermixed with virtual resources means that workloads from on-premise hypervisors that require native or near-native performance can be moved immediately. And because those workloads don't have to be powered by in-house servers, a company's IT infrastructure moves from a CapEx to an OpEx model. In the past, adopting infrastructure as a service (IaaS) involved shoehorning workloads into whichever virtual resource most closely matched an existing environment, but those days are gone. Now, on-premise resources can be replicated (and upgraded) on demand in a single off-premise environment, leveraging a mix of virtual and dedicated resources.

SoftLayer's environment simplifies the process for businesses looking to move IT infrastructure off-premise. Those businesses can start by leveraging virtual server instances in a cloud environment while maintaining the in-house resources for certain workloads, and when those in-house resources reach the end of their usable life (or need an upgrade), the businesses can shift those workloads onto bare metal servers in the same cloud environment as their virtual server instances.

The real-world applications are pretty obvious: Your company is considering moving part of a workload to cloud in order to handle peak season loads at the end of the year. You've contemplated transitioning parts of your environment to the cloud, but you've convinced yourself that shared resource pools are too inefficient and full of noisy neighbor problems, so you'd never be able to move your core infrastructure to the same environment. Furthering the dilemma, you have to capitalize on the assets you already have that are still of use to the company.

You finally have the flexibility to slowly transition your environment to a scalable, flexible cloud environment without sacrificing performance or the investments you've already made. While the initial setup phases for a hybrid environment may seem arduous, Rome wasn't built in a day, so you shouldn't feel pressure to rush the construction of your IT environment. Here are a few key points to consider when adopting a hybrid model that will make life easier:

  • Keep it simple. Don't overcomplicate your environment. Keep networks, topologies and methodologies simple, and they'll be much more manageable and scalable.
  • Keep it secure. Simple, robust security principles will shorten your deployment timeframe and reduce attack points.
  • Keep it sane. Hybrid mixes the best of both worlds, so choose the best assets to move over. "Best" does not necessarily mean the "easiest" or "cheapest" workload, but it doesn't exclude those workloads either.

With this in mind, you're ready to take on a hybrid approach for your infrastructure. There's no certification for when your company finally becomes a "cloud company." The moment you start leveraging off-premise resources, you've got a hybrid environment, and you can adjust your mix of on-premise, off-premise, virtual and bare metal resources as your business needs change and evolve.

-Jeff Klink

Jeff Klink is a senior technical staff member (STSM) with IBM Canada.

August 19, 2013

The 5 Mortal Sins of Launching a Social Game

Social network games have revolutionized the gaming industry and created an impressive footprint on the Web as a whole. 235 million people play games on Facebook every month, and some estimates say that by 2014, more than one third of the Internet population will be playing social games. Given that market, it's no wonder that the vast majority of game studios, small or big, have prioritized games to be played on Facebook, Orkut, StudiVZ, VK and other social networks.

Developing and launching a game in general is not an easy task. It takes a lot of time, a lot of people, a lot of planning and a lot of assumptions. On top of those operational challenges, the social gaming market is a jungle where "survival of the fittest" is a very, VERY visible reality: One day everyone is growing tomatoes, the next they are bad guys taking over a city, and the next they are crushing candies. An army of genius developers with the most stunning designs and super-engaging game ideas can find it difficult to navigate the fickle social waters, but in the midst of all of that uncertainty, the most successful gaming studios have all avoided five of the most common mortal sins gaming companies commit when launching a social game.

SoftLayer isn't a gaming studio, and we don't have any blockbuster games of our own, but we support some of the most creative and successful gaming companies in the world, so we have a ton of indirect experience and perspective on the market. In fact, leading up to GDC Europe, I was speaking with a few of the brilliant people from KUULUU — an interactive entertainment company that creates social games for leading artists, celebrities and communities — about a new Facebook game they've been working on called LINKIN PARK RECHARGE:

After learning more about how KUULUU streamlines the process of developing and launching a new title, I started thinking about the market in general and the common mistakes most game developers make when they release a social game. So without further ado...

The 5 Mortal Sins of Launching a Social Game

1. Infinite Focus

Treat focus as a limited resource. If it helps, look at your team's cumulative capacity to focus as though it's a single cube. To dedicate focus to different parts of the game or application, you'll need to slice the cube. The more pieces you create, the thinner the slices will be, and you'll be devoting less focus to the most important pieces (which often results in worse quality). If you're diverting a significant amount of attention from building out the game's story line to perfecting the textures of a character's hair or the grass on the ground, you'll wind up with an aesthetically beautiful game that no one wants to play. Of course, that example is an extreme, but it's not uncommon for game developers to fall into a less blatant trap like spending time building and managing hosting infrastructure that could better be spent tweaking and improving in-game performance.

2. Eeny, Meeny, Miny, Moe – Geographic Targeting

Don't underestimate the power of the Internet and its social and viral drivers. You might believe your game will take off in Germany, but when you're publishing to a global social network, you need to be able to respond if your game becomes hugely popular in Seoul. A few enthusiastic tweets or wall posts from the alpha players in Korea might be the catalyst that takes your user base in the region from 1,000 to 80,000 overnight, and to 2,000,000 in a week. With that boom in demand, you need to have the flexibility to supply that new market with the best quality service ... and having your entire infrastructure in a single facility in Europe won't make for the best user experience in Asia. Keep an eye on the traction your game has in various regions and geolocate your content closer to the markets where you're seeing the most success.

3. They Love Us, so They'll Forgive Us.

Often, a game's success can lure gaming companies into a false sense of security. Think about it in terms of the point above: 2,000,000 Koreans are trying to play your game a week after a great article is published about you, but you don't make any changes to serve that unexpected audience. What happens? Players time out, latency drags the performance of your game to a crawl, and 2,000,000 users are clicking away to play one of the other 10,000 games on Facebook or 160,000 games in a mobile app store. Gamers are fickle, and they demand high performance. If they experience anything less than a seamless experience, they're likely to spend their time and money elsewhere. Obviously, there's a unique balance for every game: A handful of players will be understanding of the fact that you underestimated the number of incoming requests and that you need time to add extra infrastructure or move it elsewhere to decrease latency, but even those players will get impatient when they experience lag and downtime.

KUULUU took on this challenge in an innovative, automated way. They monitor the performance of all of their games and immediately ramp up infrastructure resources to accommodate growth in demand in specific areas. When demand shifts from one of their games to another, they're able to balance their infrastructure accordingly to deliver the best end-user experience at all times.
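In pseudo-Python, that feedback loop boils down to something like the sketch below. This is purely illustrative and not KUULUU's actual tooling; get_request_rate() and provision_server() are hypothetical hooks you would wire up to your own monitoring and to your provider's provisioning API:

```python
# Purely illustrative demand-driven scaling loop; not KUULUU's actual tooling.
# get_request_rate() and provision_server() are hypothetical hooks.
CAPACITY_PER_SERVER = 5000  # requests/sec one game server handles (assumed figure)


def get_request_rate(game, region):
    """Hypothetical: current requests/sec for a game in a region, from monitoring."""
    raise NotImplementedError


def provision_server(game, region):
    """Hypothetical: order one more server close to that region's players."""
    raise NotImplementedError


def autoscale(games, regions, active_servers):
    """Add servers per game and region whenever observed demand outgrows capacity."""
    for game in games:
        for region in regions:
            needed = get_request_rate(game, region) // CAPACITY_PER_SERVER + 1
            while active_servers[(game, region)] < needed:
                provision_server(game, region)
                active_servers[(game, region)] += 1
```

A real system would also scale back down when demand shifts away from a game or region, which is what keeps the balancing cost-effective.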

4. We Will Be Thiiiiiiiiiiis Successful.

Don't count your chickens before the eggs hatch. You never really, REALLY know how a social game will perform when the viral factor influences a game's popularity so dramatically. Your finite plans and expectations wind up being a list of guesstimates and wishes. It's great to be optimistic and have faith in your game, but you should never have to over-commit resources "just in case." If your game takes two months to get the significant traction you expect, the infrastructure you built to meet those expectations will be underutilized for two months. On the other hand, if your game attracts four times as many players as you expected, you risk overburdening your resources as you scramble to build out servers. This uncertainty is one of the biggest drivers to cloud computing, and it leads us to the last mortal sin of launching a social game ...

5. Public Cloud Is the Answer to Everything.

To all those bravados who feel they are masters of the cloud and see it as the answer to all their problems: please, for your fans' sake, remember that the cloud comes in more than one flavor. Virtual instances in a public cloud environment can be provisioned within minutes and are awesome for your web servers, but they may not perform as well for your databases or processor-intensive workloads. KUULUU chose to incorporate bare metal cloud into a hybrid environment where a combination of virtual and dedicated resources work together to provide incredible results:

LP RECHARGE

Avoiding these five mortal sins doesn't guarantee success for your social game, but at the very least, you'll sidestep a few common landmines. For more information on KUULUU's success with SoftLayer, check out this case study.

-Michalina

July 29, 2013

A Brief History of Cloud Computing

Believe it or not, "cloud computing" concepts date back to the 1950s when large-scale mainframes were made available to schools and corporations. The mainframe's colossal hardware infrastructure was installed in what could literally be called a "server room" (since the room would generally only be able to hold a single mainframe), and multiple users were able to access the mainframe via "dumb terminals" – stations whose sole function was to facilitate access to the mainframes. Due to the cost of buying and maintaining mainframes, an organization wouldn't be able to afford a mainframe for each user, so it became practice to allow multiple users to share access to the same data storage layer and CPU power from any station. By enabling shared mainframe access, an organization would get a better return on its investment in this sophisticated piece of technology.

Mainframe Computer

A couple decades later in the 1970s, IBM released an operating system called VM that allowed admins on their System/370 mainframe systems to have multiple virtual systems, or "Virtual Machines" (VMs) on a single physical node. The VM operating system took the 1950s application of shared access of a mainframe to the next level by allowing multiple distinct compute environments to live in the same physical environment. Most of the basic functions of any virtualization software that you see nowadays can be traced back to this early VM OS: Every VM could run custom operating systems or guest operating systems that had their "own" memory, CPU, and hard drives along with CD-ROMs, keyboards and networking, despite the fact that all of those resources would be shared. "Virtualization" became a technology driver, and it became a huge catalyst for some of the biggest evolutions in communications and computing.

Mainframe Computer

In the 1990s, telecommunications companies that had historically only offered single dedicated point-to-point data connections started offering virtualized private network connections with the same service quality as their dedicated services at a reduced cost. Rather than building out physical infrastructure to allow more users to have their own connections, telcos were able to provide users with shared access to the same physical infrastructure. This change allowed the telcos to shift traffic as necessary to achieve better network balance and more control over bandwidth usage. Meanwhile, virtualization for PC-based systems started in earnest, and as the Internet became more accessible, the next logical step was to take virtualization online.

If you were in the market to buy servers ten or twenty years ago, you know that the costs of physical hardware, while not at the same level as the mainframes of the 1950s, were pretty outrageous. As more and more people expressed demand to get online, the costs had to come out of the stratosphere, and one of the ways that was made possible was by ... you guessed it ... virtualization. Servers were virtualized into shared hosting environments, Virtual Private Servers, and Virtual Dedicated Servers using the same types of functionality provided by the VM OS in the 1970s. As an example of what that looked like in practice, let's say your company required 13 physical systems to run your sites and applications. With virtualization, you could take those 13 distinct systems and split them up between two physical nodes. Obviously, this kind of environment saves on infrastructure costs and minimizes the amount of actual hardware you would need to meet your company's needs.

Virtualization

As the costs of server hardware slowly came down, more users were able to purchase their own dedicated servers, and they started running into a different kind of problem: One server isn't enough to provide the resources I need. The market shifted from a belief that "these servers are expensive, let's split them up" to "these servers are cheap, let's figure out how to combine them." Because of that shift, the most basic understanding of "cloud computing" was born online. By installing and configuring a piece of software called a hypervisor across multiple physical nodes, a system could present all of the environment's resources as though those resources were in a single physical node. To help visualize that environment, technologists used terms like "utility computing" and "cloud computing," since the sum of the parts seemed to become a nebulous blob of computing resources that you could then segment out as needed (like the telcos did in the 90s). In these cloud computing environments, it became easy to add resources to the "cloud": Just add another server to the rack and configure it to become part of the bigger system.

Clouds

As technologies and hypervisors got better at reliably sharing and delivering resources, many enterprising companies decided to start carving up the bigger environment to make the cloud's benefits available to users who don't happen to have an abundance of physical servers available to create their own cloud computing infrastructure. Those users could get "cloud computing instances" (also known as "cloud servers") by ordering the resources they need from the larger pool of available cloud resources, and because the servers are already online, the process of "powering up" a new instance or server is almost instantaneous. Because little overhead is involved for the owner of the cloud computing environment when a new instance is ordered or cancelled (since it's all handled by the cloud's software), management of the environment is much easier. Most companies today operate with this idea of "the cloud" as the current definition, but SoftLayer isn't "most companies."

SoftLayer took the idea of a cloud computing environment and pulled it back one more step: Instead of installing software on a cluster of machines to allow for users to grab pieces, we built a platform that could automate all of the manual aspects of bringing a server online without a hypervisor on the server. We call this platform "IMS." What hypervisors and virtualization do for a group of servers, IMS does for an entire data center. As a result, you can order a bare metal server with all of the resources you need and without any unnecessary software installed, and that server will be delivered to you in a matter of hours. Without a hypervisor layer between your operating system and the bare metal hardware, your servers perform better. Because we automate almost everything in our data centers, you're able to spin up load balancers and firewalls and storage devices on demand and turn them off when you're done with them. Other providers have cloud-enabled servers. We have cloud-enabled data centers.

SoftLayer Pod

IBM and SoftLayer are leading the drive toward wider adoption of innovative cloud services, and we have ambitious goals for the future. If you think we've come a long way from the mainframes of the 1950s, you ain't seen nothin' yet.

-James
