Executive Blog Posts

August 11, 2014

I PLEB Allegiance to My Data!

As a "techy turned marketing turned social media turned compliance turned security turned management" guy, I have had the pleasure of talking to many different customers over the years and have heard horror stories about data loss, data destruction, and data availability. I have also heard great stories about how to protect data and the differing ways to approach data protection.

On a daily basis, I deal with NIST 800-53 rev.4, PCI, HIPAA, CSA, FFIEC, and SOC controls among many others. I also deal with specific customer security worksheets that ask for information about how we (SoftLayer) protect their data in the cloud.

My first response is always, WE DON’T!

The looks I’ve seen in reaction to that response over the years have been priceless, and not just from customers but from auditors as well.

  • They ask how we back up customer data. We don’t.
  • They ask how we make it redundant. We don’t.
  • They ask how we make it available 99.99 percent of the time. We don’t.

I have to explain to them that SoftLayer is simply infrastructure as a service (IaaS), and we stop there. All other data planning should be done by the customer. OK, you busted me, we do offer managed services as an additional option. We help the customer using that service to configure and protect their data.

We hear from people about Protected Health Information (PHI), credit card data, government data, banking data, insurance data, proprietary information related to code and data structure, and APIs that should be protected with their lives, etc. What is the one running theme? It’s data. And data is data, folks, plain and simple!

Photographers want to protect their pictures, chefs want to protect their recipes, grandparents want to protect the pictures of their grandkids, and the Dallas Cowboys want to protect their playbook (not that it is exciting or anything). Data is data, and it should be protected.

So how do you go about doing that? That's where PLEB, the weird acronym in the title of this post, comes in!

PLEB stands for Physical, Logical, Encryption, Backups.

If you take those four topics into consideration when dealing with any type of data, you can limit the risk associated with data loss, destruction, and availability. Let’s look at the details of the four topics:

  • Physical Security—In a cloud model, the cloud service provider (CSP) shoulders the responsibility of meeting the strict physical security requirements of a regulated workload. Your CSP should have robust physical controls in place. They should be SOC 2 audited, and you should request the SOC 2 report and confirm it shows few or no exceptions. Think cameras, guards, key card access, biometric access, glass-break alarms, motion detectors, etc. Some, if not all, of these should make your list of must-haves.
  • Logical Access—This is likely a shared control family when dealing with cloud. If the CSP has a portal that can make changes to your systems and the portal has a permissions engine allowing you to add users, then that portion of logical access is a shared control. First, the CSP should protect its portal permission system, while the customer should protect admin access to the portal by creating new privileged users who can make changes to systems. Second, and just as important, after provisioning you must remove the initial credentials, set up new, private credentials, and restrict access accordingly. Note that this second piece is strictly a customer control.
  • Encryption—There are many ways to achieve encryption, both at rest and in transit. For data at rest you can use full disk encryption, virtual disk encryption, file or folder encryption, and/or volume encryption. Encryption at rest is required for many regulated workloads and is a great idea for any type of data with personal value. For public data in transit, you should consider SSL or TLS, depending on your needs. For backend connectivity from your office or home into your cloud infrastructure, you should consider a secure VPN tunnel. (For a taste of what file-level encryption at rest can look like, see the sketch just after this list.)
  • Backups—I can’t stress enough that backups are not just the right thing to do; they are essential, especially when using IaaS. You want a copy at the CSP that you can use to restore quickly, but you also want another copy in a different location in case of a disaster that WILL be out of your control.
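
To make the encryption point a little more concrete, here is a minimal sketch of file-level encryption at rest in Python, assuming the third-party cryptography package; the file names and key handling are purely illustrative:

    # A minimal sketch of file-level encryption at rest using the third-party
    # "cryptography" package (pip install cryptography). File names are made up.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # in real life, keep this key far away from the data
    cipher = Fernet(key)

    # Encrypt a hypothetical file before it ever leaves your hands
    with open('grandkids-photo.jpg', 'rb') as fh:
        ciphertext = cipher.encrypt(fh.read())

    with open('grandkids-photo.jpg.enc', 'wb') as fh:
        fh.write(ciphertext)

    # Later, with the same key, the data comes back intact
    plaintext = Fernet(key).decrypt(ciphertext)

Whatever tooling you choose, the principle is the same: the data is encrypted before it lands on anyone else's disk, and only you hold the key.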

So take the PLEB and mitigate risk related to data loss, data destruction, and data availability. Trust me—you will be glad you did.

-@skinman454

January 31, 2014

Simplified OpenStack Deployment on SoftLayer

"What is SoftLayer doing with OpenStack?" I can't even begin to count the number of times I've been asked that question over the last few years. In response, I'll usually explain how we've built our object storage platform on top of OpenStack Swift, or I'll give a few examples of how our customers have used SoftLayer infrastructure to build and scale their own OpenStack environments. Our virtual and bare metal cloud servers provide a powerful and flexible foundation for any OpenStack deployment, and our unique three-tiered network integrates perfectly with OpenStack's Compute and Network node architecture, so it's high time we make it easier to build an OpenStack environment on SoftLayer infrastructure.

To streamline and simplify OpenStack deployment for the open source community, we've published Opscode Chef recipes for both OpenStack Grizzly and OpenStack Havana on GitHub: SoftLayer Chef-Openstack. With Chef and SoftLayer, your own OpenStack cloud is a cookbook away. These recipes were designed with growth and scalability in mind. Let's take a deeper look into what exactly that means.

OpenStack has adopted a three-node design whereby a controller, compute, and network node make up its architecture:

OpenStack Architecture on SoftLayer

Looking more closely at any one node reveals the services it provides. Scaling the infrastructure beyond a few dozen nodes using this model could create bottlenecks in services such as your block store, OpenStack Cinder, and your image store, OpenStack Glance, since they are traditionally located on the controller node. Infrastructure requirements also change from service to service. For example, OpenStack Neutron, the networking service, does not need much disk I/O, while the Cinder storage service might rely heavily on a node's hard disks. Our cookbook allows you to choose how and where to deploy each service, and it even lets you break apart the MySQL backend to further improve platform performance.

Quick Start: Local Demo Environment

To make it easy to get started, we've created a rapid prototype and sandbox script for use with Vagrant and VirtualBox. With Vagrant, you can spin up a demo environment of Chef Server and OpenStack in about 15 minutes on a moderately powerful laptop or desktop. Check it out here. This demo environment is an all-in-one installation of our Chef OpenStack deployment. It also installs a basic Chef server as a sandbox to help you see how the SoftLayer recipes were deployed.

Creating a Custom OpenStack Deployment

The three-node OpenStack model does well at small scale and meets the needs of many consumers; however, control and customizability are the tenets of the SoftLayer OpenStack Chef cookbook's design. In our model, you have full control over the configuration and location of eleven different components in your deployed environment.

Our Chef recipes take care of populating the configuration files with the necessary information so you won't have to. When deploying, you merely add the role for the matching service to a hardware or virtual server node, and Chef will deploy the service to it with all the configuration done automatically, including adding multiple Neutron, Nova, and Cinder nodes. This approach allows you to tailor each service to the hardware it will be deployed on: you might put your Neutron node on a server with 10-gigabit network interfaces and configure your Cinder node with RAID 1+0 15K SAS drives.

OpenStack is a fast-growing project for the implementation of IaaS in public and private clouds, but its deployment and configuration can be overwhelming. We created this cookbook to make the process of deploying a full OpenStack environment on SoftLayer quick and straightforward. With the simple configuration of eleven Chef roles, your OpenStack cloud can be deployed onto as few as one node and scaled up to hundreds (or thousands) of nodes.

To follow this project, visit SoftLayer on GitHub. Check out some of our other projects on GitHub, and let us know if you need any help or want to contribute.

-@marcalanjones

January 17, 2014

What's Next? $1.2 Billion Investment. 15 New Data Centers.

SoftLayer was founded in a living room on May 5, 2005. We bootstrapped our vision of becoming the de facto platform for cloud computing by maxing out our credit cards and draining our savings accounts. Over the course of eight years, we built a unique global offering, and in the middle of last year, our long-term vision was validated (and supercharged) by IBM.

When I posted about IBM acquiring SoftLayer last June, I explained that becoming part of IBM "will enable us to continue doing what we've done since 2005, but on an even bigger scale and with greater opportunities." To give you an idea of what "bigger scale" and "greater opportunities" look like, I need only direct you to today's press release: IBM Commits $1.2 Billion to Expand Global Cloud Footprint.

IBM Cloud Investment

It took us the better part of a decade to build a worldwide network of 13 data centers. As part of IBM, we'll more than double our data center footprint in a fraction of that time. In 2006, we were making big moves when we built facilities on the East and West coasts of the United States. Now, we're expanding into places like China, Hong Kong, London, Japan, India, Canada and Mexico City. We had a handful of founders pushing for SoftLayer's success, and now we've got 430,000+ IBM peers to help us reach our goal. This is a whole new ballgame.

The most important overarching story about this planned expansion is what each new facility will mean for our customers. When any cloud provider builds a data center in a new location, it's great news for customers and users in that geographic region: Content in that facility will be geographically closer to them, and they'll see lower pings and better performance from that data center. When SoftLayer builds a data center in a new location, customers and users in that geographic region see performance improvements from *all* of our data centers. The new facility serves as an on-ramp to our global network, so content on any server in any of our data centers can be accessed faster. To help illustrate that point, let's look at a specific example:

If you're in India, and you want to access content from a SoftLayer server in Singapore, you'll traverse the public Internet to reach our network, and the content will traverse the public Internet to get back to you. Third-party peering and transit providers pass the content to/from our network and your ISP, and you'll get the content you requested.

When we add a SoftLayer data center in India, you'll obviously access servers in that facility much more quickly, and when you want content from a server in our Singapore data center, you'll be routed through that new data center's network point of presence in India so that the long haul from India to Singapore will happen entirely on the private network we control and optimize.

Users around the world will have faster, more reliable access to servers in every other SoftLayer data center because we're bringing our network to their front doors. When you combine that kind of connectivity and access with our unique hybrid offering of powerful bare metal servers and scalable virtual server instances, it's easy to see how IBM, the most powerful technology company of the last 100 years, is positioned to remain the most powerful technology company in the world for the next century.

Now it's time to get to work.

-@lavosby

January 10, 2014

Platform Improvements: VLAN Management

As director of product development, I'm tasked with providing SoftLayer customers greater usability and self-service tools on our platform. Often, that challenge involves finding, testing, and introducing new products, but a significant amount of my attention focuses on internal projects to tweak and improve our existing products and services. To give you an idea of what that kind of "behind the scenes" project looks like, I'll fill you in on a few of the updates we recently rolled out to improve the way customers interact with and manage their Virtual LANs (VLANs).

VLANs play a significant role in SoftLayer's platform. In the most basic sense, VLANs fool servers into thinking they're behind the same network switch. If you have multiple servers in the same data center and behind the same router, you could have them all on the same VLAN, and all traffic between the servers would be handled at the layer-2 network level. For customers with multi-tier applications, zones can be created to isolate specific servers into separate VLANs — database servers, app servers, and Web servers can all be isolated in their own security partitions to meet specific security and/or compliance requirements.

In the past, VLANs were all issued distinct numbers so that we could logically and consistently differentiate them from each other. That utilitarian approach has proven to be functional, but we noticed an opportunity to make the naming and management of VLANs more customer-friendly without losing that functionality. Because many of our customers operate large environments with multiple VLANs, they've had the challenge of remembering which servers live behind which VLAN number, and the process of organizing all of that information was pretty daunting. Imagine an old telephone switchboard with criss-crossing wires connecting several numbered jacks (and not connecting others). This is where our new improvements come in.

Customers now have the ability to name their VLANs, and we've made updates that increase visibility into the resources (servers, firewalls, gateways, and subnets) that reside inside specific VLANs. In practice, that means you can name the VLAN that houses your database servers "DB" or label it to pinpoint a specific department inside your organization. When you need to find one of those VLANs, you can search for it by name and make changes to it quickly.

VLAN List View

VLAN Detail Page
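
If you'd rather script these changes than click through the portal, VLAN names can also be managed through the SoftLayer API. Here's a rough sketch using the SoftLayer Python client; the credentials and VLAN id are placeholders, and the exact property names may differ:

    # A rough sketch using the SoftLayer Python client (pip install softlayer).
    # Username, API key, and the VLAN id below are placeholders.
    import SoftLayer

    client = SoftLayer.Client(username='myuser', api_key='myapikey')

    # List the account's VLANs with their numbers and (new) names
    vlans = client['Account'].getNetworkVlans(mask='mask[id,vlanNumber,name]')
    for vlan in vlans:
        print(vlan['id'], vlan['vlanNumber'], vlan.get('name'))

    # Rename a VLAN so it's easy to find later
    client['Network_Vlan'].editObject({'name': 'DB'}, id=123456)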

While these little improvements may seem simple, they make life much easier for IT departments and sysadmins with large, complex environments. If you don't need this kind of functionality, we don't throw it in your face, but if you do need it, we make it clear and easily accessible.

If you ever come across quirks in the portal that you'd like us to address, please let us know. We love making big waves by announcing new products and services, but we get as much (or more) joy from finding subtle ways to streamline and improve the way our customers interact with our platform.

-Bryce

November 1, 2013

Paving the Way for the DevOps Revolution

The traditional approach to software development has been very linear: Your development team codes a release and sends it over to a team of quality engineers to be tested. When everything looks good, the code gets passed over to IT operations to be released into production. Each of these teams operates within its own silo and makes changes independent of the other groups, and at any point in the process, it's possible a release can get kicked back to the starting line. With the meteoric rise of agile development — a development philosophy geared toward iterative and incremental code releases — that old waterfall-type development approach is being abandoned in favor of a DevOps approach.

DevOps — a fully integrated development and operations approach — streamlines the software development process in an agile development environment by consolidating development, testing and release responsibilities into one cohesive team. This way, ideas, features and other developments can be released very quickly and iteratively to respond to changing and growing market needs, avoiding the delays of long, drawn-out and timed dev releases.

To help you visualize the difference between the traditional approach and the DevOps approach, take a look at these two pictures:

Traditional Waterfall Development

DevOps

Unfortunately, many businesses struggle to adopt the DevOps approach because they simply update their org chart by merging their traditional teams without changing their development philosophy at the same time. As a result, I've encountered a lot of companies that have been jaded by previous attempts to move to a DevOps model, and I'm not alone. There is a significant need in the marketplace for some good old-fashioned DevOps expertise.

A couple months ago, my friend Raj Bhargava pinged me with a phenomenal idea to put on a DevOps "un-conference" in Boulder, Colorado, to address the obvious need he's observed for DevOps education and best practices. Raj is a serial, multiple-exit entrepreneur from Boulder, and he is the co-founder and CEO of a DevOps-focused startup there called JumpCloud. When he asked if I would like to co-chair the event and have SoftLayer as a headline sponsor alongside JumpCloud, the answer was a quick and easy "Yes!"

Sure, there have been other DevOps-related conferences around the world, but ours was designed to be different from the outset. As strange as it may sound, half of the conference intentionally occurred outside of the conference: One of our highest priorities was to strike up conversations between the participants before, during and after the event. If we're putting on a conference to encourage a collaborative development approach, it would be counterproductive for us to use a top-down, linear approach to engaging the attendees, right?

I'm happy to report that this inaugural attempt of our untested concept was an amazing success. We kept the event private for our first run at the concept, but it was bursting at the seams with brilliant developers and tech influencers. Brad Feld and our friends from the Foundry Group invited all of their portfolio CEOs and CTOs. David Cohen, co-founder of Techstars and head honcho at Bullet Time Ventures, did the same. JumpCloud and SoftLayer helped round out the attendee list with a few of our most innovative partners as well as a few technologists from within our own organizations. It was an incredible mix of super-smart tech pros, business leaders and VCs from all over the world.

With such a diverse group of attendees, the conversations at the event were engaging, energizing and profound. We discussed everything from how startups should incorporate automation into their business plans at the outset to how the practice of DevOps evolves as companies scale quickly. At the end of the day, we brought all of those theoretical discussions back down to the ground by sharing case studies of real companies that have had unbelievable success in incorporating DevOps into their businesses. I had the honor of wrapping up the event as moderator of a panel with Jon Prall from Sendgrid, Scott Engstrom from Gnip and Richard Miller of Mocavo, and I couldn't have been happier with the response.

I'd like to send a big thanks to everyone who participated, especially our cosponsors — JumpCloud, VictorOps, Authentic8, DH Capital, SendGrid, Cooley, Pivot Desk, SVP and Pantheon.

I'm looking forward to opening this up to the world next year!

-@PaulFord

October 3, 2013

Improving Communications for Customer-Affecting Events

Service disruptions are never a good thing. Though SoftLayer invests extensively in design, equipment, and personnel training to reduce the risk of disruptions to our customers, in the technology world there are times when scheduled events or unplanned incidents are inevitable. During those times, we understand that restoring service is the top priority, and almost as important is communicating with customers about the cause of the incident and the current status of our work to resolve it.

To date we've used a combination of tickets, emails, forum posts, portal "yellow" notifications, as well as RSS and Twitter feeds to provide status updates during service-affecting events. Many of these methods require customers to "come and get it," so we've been working on a more targeted, proactive approach to disseminating information.

I'm excited to report that our Development and Operations teams have collaborated on new functionality in the SoftLayer portal that will improve the way we share information with customers about unplanned infrastructure troubles or upcoming planned maintenances. With our new Event Communications toolset, we're able to pinpoint the accounts affected by an event and update users who opt-in to receive notifications about how these events may impact their services.

Notifications

As the development work is finalized, we plan to roll out a few phases of improvements. The first phase of implementation, which is ready today, enables email alerts for unplanned incidents, and any portal user account can opt-in to receive them. These emails provide details about the impact and current status of an unplanned incident in progress (UIP). In this phase, notifications can be sent for devices such as physical servers, CCIs and shared SLB VIPs, and we will be adding additional services over time.

In future phases of this project, we plan to include:

  • A new "Event" section of the Customer Portal which will allow customers to browse upcoming scheduled maintenances or current/recent unplanned incidents which may impact their services. In the past, we generated tickets for scheduled maintenances, so separating these event notifications will improve customer visibility.
  • Enhanced visibility for events in our mobile apps (phone/tablet).
  • Updates to affected services for a given event as customers add / change services.
  • Notification of newly added or newly updated events that have not been read by the user (similar email "inbox" functionality) in the portal.
  • Identification of any related current or recent events as a customer begins to open a ticket in the portal.
  • Reminders of upcoming scheduled maintenances along with progress updates to the event notification throughout the maintenance in some cases.
  • Improved ability to correlate specific incidents to customer service troubles.
  • Dissemination of RFO (reason-for-outage) statements to customers following a post-incident review of an unplanned service disruption.

Since we respect our customers' inboxes, these notifications will only be sent to user accounts that have opted in. If you'd like to receive them, simply log into the Customer Portal and navigate to "Notification Subscriptions" under the "Administration" menu (direct link). From that page, individual users can control event subscriptions, and portal logins that have administrative control over multiple users on the account can control the opt-in for themselves and their downstream users. For a more detailed walkthrough of the opt-in process, visit the KnowledgeLayer: "Update Subscription Settings for the Event Management System."

The Network Operations Center has already begun using this customer notification toolset for customer-affecting events, so we recommend that you opt-in as soon as possible to benefit from this new functionality.

-Dani

September 12, 2013

"Cloud First" or "Mobile First" - Which Development Strategy Comes First?

Company XYZ knows that the majority of its revenue will come from recurring subscriptions to its new SaaS service. To generate visibility and awareness of the SaaS offering, XYZ needs to develop a mobile presence to reach the offering's potential audience. Should XYZ focus on building a mobile presence first (since its timing is most critical), or should it prioritize the completion of the cloud service first (since its importance is most critical)? Do both have to be delivered simultaneously?

It's the theoretical equivalent of the "Which came first: The chicken or the egg?" causality dilemma for many technology companies today.

Several IBM customers have asked me recently about whether the implementation of a "cloud first" strategy or a "mobile first" strategy is most important, and it's a fantastic question. They know that cloud and mobile are not mutually exclusive, but their limited development resources demand that some sort of prioritization be in place. However, should this prioritization be done based on importance or urgency?

IBM MobileFirst

The answer is what you'd expect: It depends! If a company's cloud offering consists solely of back-end services (i.e. no requirement or desire to execute natively on a mobile device), then a cloud-first strategy is clearly needed, right? A mobile presence would only be effective in drawing customers to the back-end services if they are in place and work well. However, what if the cloud offering is targeting only mobile users? Not focusing on the mobile-first user experience could sabotage a great set of back-end services.

As this simple example illustrates, prioritizing one development strategy at the expense of the other can have devastating consequences. In this "Is there an app for that?" generation, a lack of predictable responsiveness, quality of service, or quality of experience can drive your customers to competitors who are only a click away.

Continuous delivery is an essential element of both "cloud first" and "mobile first" development. The ability to get feedback quickly from users of new services (and, more importantly, to incorporate that feedback quickly) allows a company to re-shape a service to turn existing users into advocates for the service as well as for adjacent or tiered services.

"Cloud first" developers need a cloud service provider that can deliver predictable, superior compute, storage and network services that can be optimized for the type of workload and can adapt to changing scale requirements. "Mobile first" developers need a mobile application development platform that can ensure the quality of the application's mobile user experience while allowing the application to also leverage back-end services. To accommodate both types of developers, IBM established two "centers of gravity" that allow our customers to strike the right balance between their "cloud first" and "mobile first" development.

It should come as no surprise that the cornerstone of IBM's "cloud first" offering is SoftLayer. SoftLayer's APIs to its infrastructure services allow companies to optimize their application services based on the needs of the application, and the SoftLayer network also optimizes delivery of the application services to the consumer of the service regardless of the location or the type of client access.

For developers looking to prioritize the delivery of services on mobile devices, we centered our MobileFirst initiative on Worklight. Worklight balances the native mobile application experience and integration with back-end services to streamline the development process for "mobile first" companies.

We are actively working on the convergence of our IBM Cloud First and Mobile First strategies via optimized integration of SoftLayer and Worklight services. IBM customers from small businesses through large enterprises will then be able to view "cloud first" and "mobile first" as two sides of the same development strategy coin.

-Mac

Mac Devine is an IBM distinguished engineer, director of cloud innovation and CTO, IBM Cloud Services Division. Follow him on Twitter: @mac_devine.

July 19, 2013

Innovation and Entrepreneurship: Transcending Borders

At Cloud World Forum in London, I did an interview with Rachel Downie of CloudMovesTV, and she asked some fantastic questions (full interview embedded at the bottom of this post). One that particularly jumped out at me was, "Does North America have a technology and talent advantage over Europe?" I've posted some thoughts on this topic on the SoftLayer Blog in the past, but I thought I'd reflect on it again after six months of traveling across Europe and the Middle East talking with customers, partners and prospects.

I was born just north of Silicon Valley in a little bohemian village called San Francisco. I earned a couple of trophies (and even more battle scars) during the original dot-com boom, so much of my early career was spent in an environment bursting at the seams with entrepreneurs and big ideas. The Valley tends to get most of the press (and all of the movie contracts), so it's easy to assume that the majority of the world's innovation is happening around there. I have first-hand experience that proves that assumption wrong. The talent level, motivation, innovation, technology and desire to make a difference are just as strong, if not stronger, in Europe and the Middle East as they are in the high-profile startup scenes in New York City or San Francisco. And given the complexity introduced by cultural and language differences, I would argue the innovation that happens in the Middle East and Europe tends to incorporate flexibility and global scalability earlier than its North American counterparts do.

A perfect example of this type of innovation is the ad personalization platform that London-based Struq created. Earlier this year, I presented with Struq CTO Aaron McKee at the TFM&A (Technology For Marketing and Advertising) show in London about how cloud computing helps their product improve online customer dialogue, and I was stunned by how uniquely and efficiently they were able to leverage the cloud to deliver meaningful, accurate results to their customers. Their technology profiles customers, matches them to desired brands, checks media relevance and submits an ad unit target price to auction. If there is a match, Struq then serves a hyper-relevant message to that customer. All of that happens in about 25 milliseconds, and it happens at scale (over two billion transactions per day). Add in the fact that they serve several different cultures and languages, and you start to understand the work that went into creating this kind of platform. Watch out Valley Boyz and Girlz, they're expanding into the US.

One data point of innovation and success doesn't mean a whole lot, but Struq's success isn't unique. I just got back from Istanbul, where I spent some time with Peak Games to learn more about how they became the third largest social gaming company in the world and what SoftLayer could do to help support their growth moving forward. Peak Games, headquartered in Turkey, is on an enviable growth trajectory, and much of their success has come from their lean, focused operations model and clear goals. With more than 30 million customers, it's clear that the team at Peak Games built a phenomenal platform (and some really fun games). Ten years ago, a development team from Turkey may have had to move into a cramped, expensive house in Palo Alto to get the resources and exposure they needed to reach a broader audience, but with the global nature of cloud computing, the need to relocate to succeed is antiquated.

I met a wild-eyed entrepreneur at another meeting in Istanbul who sees exactly what I saw. The region is full of brilliant developers and creative entrepreneurs, so he's on a mission to build out a more robust startup ecosystem to help foster the innovation potential of the region. I've met several people in different countries doing the same thing, but one thing that struck me as unusual about this vision was that he did not say anything about being like Silicon Valley. He almost laughed at me when I asked him about that, and he explained that he wanted his region to be better than Silicon Valley and that his market has unique needs and challenges that being "like Silicon Valley" wouldn't answer. North America is a big market, but it's one of many!

The startups and gaming companies I mentioned get a lot of the attention because they're fun and visible, but the unsung heroes of innovation, the intrapreneurs (people who behave like entrepreneurs within large organizations), are the clear and powerful heartbeat of the talent in markets outside of North America. These people are not driven by fame and fortune ... They just want to build innovative products because they can. A mad scientist from one of the largest consumer products firms in the world, based in the EU, just deployed a couple of servers to build an imaging ecosystem that is pushing the limits of technology to improve human health. Another entrepreneur at a large global media company is taking a Mobile First methodology to develop a new way to distribute and consume media in the emerging cross-platform marketplace. These intrapreneurs might not live in Palo Alto or Santa Clara, but they're just as capable of changing the world.

Silicon Valley still produces inspiring products and groundbreaking technology, but the skills and expertise that went into those developments aren't confined by borders. To all you innovators across the globe building the future, respect. Working with you is my favorite part of the job.

-@jpwisler

The full interview that inspired this blog post:

July 9, 2013

When to Consider Riak for Your Big Data Architecture

In my Breaking Down 'Big Data' – Database Models post, I briefly covered the most common database models, their strengths, and how they handle the CAP theorem — how a distributed storage system balances demands of consistency and availability while maintaining partition tolerance. Here's what I said about Dynamo-inspired databases:

  • What They Do: Distributed key/value stores inspired by Amazon's Dynamo paper. A key written to a Dynamo ring is persisted in several nodes at once before a successful write is reported. Riak also provides a native MapReduce implementation.
  • Horizontal Scaling: Dynamo-inspired databases usually provide for the best scale and extremely strong data durability.
  • CAP Balance: Prefer availability over consistency.
  • When to Use: When the system must always be available for writes and effectively cannot lose data.
  • Example Products: Cassandra, Riak, BigCouch

This type of key/value store architecture is very different from the document-oriented MongoDB solutions we launched at the end of last year, so we worked with Basho to prioritize development of high-performance Riak solutions on our global platform. Since you already know about MongoDB, let's take a few minutes to meet the new kid on the block.

Riak is a distributed database architected for availability, fault tolerance, operational simplicity and scalability. Riak is masterless, so each node in a Riak cluster is the same and contains a complete, independent copy of the Riak package. This design makes the Riak environment highly fault tolerant and scalable, and it also aids in replication — if a node goes down, you can still read, write and update data.
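
To give you a feel for the programming model, here's a minimal sketch using Basho's official Python client; the host, bucket and keys below are invented for illustration:

    # A minimal sketch using Basho's Python client (pip install riak).
    # Host, port, bucket, and keys are placeholders.
    import riak

    client = riak.RiakClient(host='127.0.0.1', pb_port=8087, protocol='pbc')
    bucket = client.bucket('player_profiles')

    # Store a value; the ring persists it on several nodes before
    # reporting a successful write
    obj = bucket.new('player-42', data={'name': 'Ada', 'rank': 7})
    obj.store()

    # Fetch it back; because Riak is masterless, any node can answer
    fetched = bucket.get('player-42')
    print(fetched.data)

Because every node in the cluster is the same, pointing the client at a different cluster member requires nothing more than changing the host.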

As you approach the daunting prospect of choosing a big data architecture, there are a few simple questions you need to answer:

  1. How much data do/will I have?
  2. In what format am I storing my data?
  3. How important is my data?

Riak may be the choice for you if [1] you're working with more than three terabytes of data, [2] your data is stored in multiple data formats, and [3] your data must always be available. What does that kind of need look like in real life, though? Luckily, we've had a number of customers kick Riak's tires on SoftLayer bare metal servers, so I can share a few of the use cases we've seen that have benefited significantly from Riak's unique architecture.

Use Case 1 – Digital Media
An advertising company that serves over 10 billion ads per month must be able to quickly deliver its content to millions of end users around the world. Meeting that demand with relational databases would require a complex configuration of expensive, vertically scaled hardware, but the environment can be scaled out horizontally much more easily with Riak. In a matter of only a few hours, the company is up and running with an ad-serving infrastructure that includes a back-end Riak cluster in Dallas with a replication cluster in Singapore, along with an application tier on the front end with Web servers, load balancers and CDN.

Use Case 2 – E-commerce
An e-commerce company needs 100-percent availability. If any part of a customer's experience fails, whether it be on the website or in the shopping cart, sales are lost. Riak's fault tolerance is a big draw for this kind of use case: Even if one node or component fails, the company's data is still accessible, and the customer's user experience is uninterrupted. The shopping cart structure is critical, and Riak is built to be available ... It's a perfect match.

As an additional safeguard, the company can take advantage of simple multi-datacenter replication in their Riak Enterprise environment to geographically disperse content closer to its customers (while also serving as an important tool for disaster recovery and backup).

Use Case 3 – Gaming
With customers like Broken Bulb and Peak Games, SoftLayer is no stranger to the gaming industry, so it should come as no surprise that we've seen interesting use cases for Riak from some of our gaming customers. When a game developer incorporated Riak into a new game to store player data like user profiles, statistics and rankings, the performance of the bare metal infrastructure blew him away. As a result, the game's infrastructure was redesigned to also pull gaming content like images, videos and sounds from the Riak database cluster. Since the environment is so easy to scale horizontally, the process on the infrastructure side took no time at all, and the multimedia content in the game is getting served as quickly as the player data.

Databases are common bottlenecks for many applications, but they don't have to be. Making the transition from scaling vertically (upgrading hardware, adding RAM, etc.) to scaling horizontally (spreading the work intelligently across multiple nodes) alleviates many of the pain points for a quickly growing database environment. Have you made that transition? If not, what's holding you back? Have you considered implementing Riak?

-@marcalanjones

June 4, 2013

IBM to Acquire SoftLayer

As most have seen by now, this morning we announced IBM's intent to acquire SoftLayer. It's not just big news, it's great news for SoftLayer and our customers. I'd like to take a moment and share a little background on the deal and pass along a few resources to answer questions you may have.

We founded SoftLayer in 2005 with the vision of becoming the de facto platform for the Internet. We committed ourselves to automation and innovation. We could have taken shortcuts to make a quick buck by creating manual processes or providing one-off services, but we invested in processes that would enable us to build the strongest, most scalable, most controllable foundation on which customers can build whatever they want. We created a network-within-a-network topology of three physical networks to every SoftLayer server, and all of our services live within a unified API. "Can it be automated?" was not the easiest question to ask, but it's the question that enabled us to grow at Internet scale.

As part of the newly created IBM Cloud Services division, customers and clients from both companies will benefit from a higher level of choice and a higher level of service from a single partner. More important, the real significance will come as we merge technology that we developed within the SoftLayer platform with the power and vision that drives SmartCloud and pioneer next-generation cloud services. It might seem like everyone is "in the cloud" now, but the reality is that we're still in the early days in this technology revolution. What the cloud looks like and what businesses are doing with it will change even more in the next two years than it has in the last five.

You might have questions in the midst of the buzz around this acquisition, and I want you to get answers. A great place to learn more about the deal is the SoftLayer page on IBM.com. From there, you can access a FAQ with more information, and you'll also learn more about the IBM SmartCloud portfolio that SoftLayer will complement.

A few questions that may be top of mind for the customers reading this blog:

How does this affect my SoftLayer services?
Between now and when the deal closes (expected in the third quarter of this year), SoftLayer will continue to operate as an independent company with no changes to SoftLayer services or delivery. Nothing will change for you in the foreseeable future.

Your SoftLayer account relationships and support infrastructure will remain unchanged, and your existing sales and technical representatives will continue to provide the support you need. At any time, please don't hesitate to reach out to your SoftLayer team members.

Over time as any changes occur, information will be communicated to customers and partners with ample time to allow for planning and a smooth transition. Our customers will benefit from the combined technologies and skills of both companies, including increased investment, global reach, industry expertise and support available from IBM, along with IBM and SoftLayer's joint commitment to innovation.

Once the acquisition has been completed, we will be able to provide more details.

What does it mean for me?
We entered this agreement because it will enable us to continue doing what we've done since 2005, but on an even bigger scale and with greater opportunities. We believe in its success and the opportunity it brings customers.

It's going to be a smooth integration. The executive leadership of both IBM and SoftLayer are committed to the long-term success of this acquisition. The SoftLayer management team will remain part of the integrated leadership team to drive the broader IBM SmartCloud strategy into the marketplace. And IBM is best-in-class at integration and has a significant track record of 26 successful acquisitions over the past three years.

IBM will continue to support and enhance SoftLayer's technologies while enabling clients to take advantage of the broader IBM portfolio, including SmartCloud Foundation, SmartCloud Services and SmartCloud Solutions.

-@lavosby

UPDATE: On July 8, 2013, IBM completed its acquisition of SoftLayer: http://sftlyr.com/30z
