
December 5, 2012

Breaking Down 'Big Data' - Database Models

Forrester defines big data as "techniques and technologies that make capturing value from data at an extreme scale economical." Gartner says, "Big data is the term adopted by the market to describe extreme information management and processing issues which exceed the capability of traditional information technology along one or multiple dimensions to support the use of the information assets." Big data demands extreme horizontal scale that traditional IT management can't handle, and it's not a challenge exclusive to the Facebooks, Twitters and Tumblrs of the world ... Just look at the Google search volume for "big data" over the past eight years:

Big Data Search Interest

Developers are collectively facing information overload. As storage has become more and more affordable, it's easier to justify collecting and saving more data. Users are more comfortable with creating and sharing content, and we're able to track, log and index metrics and activity that previously would have been deleted in consideration of space restraints or cost. As the information age progresses, we are collecting more and more data at an ever-accelerating pace, and we're sharing that data at an incredible rate.

To understand the different facets of this increased usage and demand, Gartner came up with the three V's of big data that vary significantly from traditional data requirements: Volume, Velocity and Variety. Larger, more abundant pieces of data ("Volume") are coming at a much faster speed ("Velocity") in formats like media and walls of text that don't easily fit into a column-and-row database structure ("Variety"). Given those equally important factors, many of the biggest players in the IT world have been hard at work to create solutions that provide the scale and speed developers need when they build social, analytics, gaming, financial or medical apps with large data sets.

When we talk about scaling databases here, we're talking about scaling horizontally across multiple servers rather than scaling vertically by upgrading a single server — adding more RAM, increasing HDD capacity, etc. It's important to make that distinction because it leads to a unique challenge shared by all distributed computer systems: The CAP Theorem. According to the CAP theorem, a distributed storage system must choose to sacrifice either consistency (that everyone sees the same data) or availability (that you can always read/write) while maintaining partition tolerance (the ability to continue operating despite arbitrary message loss or the failure of part of the system).

Let's take a look at a few of the most common database models, what their strengths are, and how they handle the CAP theorem compromise of consistency v. availability:

Relational Databases

What They Do: Stores data in rows/columns. Parent-child records can be joined remotely on the server. Provides speed over scale. Some capacity for vertical scaling, poor capacity for horizontal scaling. This type of database is where most people start.
Horizontal Scaling: In a relational database system, horizontal scaling is possible via replication — sharing data between redundant nodes to ensure consistency — and some people have success with sharding — horizontal partitioning of data — but those techniques add a lot of complexity.
CAP Balance: Prefer consistency over availability.
When to use: When you have highly structured data, and you know what you'll be storing. Great when production queries will be predictable.
Example Products: Oracle, SQLite, PostgreSQL, MySQL
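
To make the row-and-column model concrete, here's a minimal sketch of a server-side join from Node.js. It assumes the "mysql" npm client, and the table names, credentials and schema are hypothetical; any relational client in any language follows the same pattern:

var mysql = require('mysql'); // assumes the "mysql" npm package is installed

var connection = mysql.createConnection({
    host: 'localhost',
    user: 'appuser',
    password: 'secret',
    database: 'blog'
});

// Parent and child records live in separate tables, and the database
// server joins them for us in a single query:
connection.query(
    'SELECT p.title, c.body ' +
    'FROM posts p JOIN comments c ON c.post_id = p.id ' +
    'WHERE p.id = ?',
    [42],
    function(err, rows) {
        if (err) throw err;
        console.log(rows); // one row per matching post/comment pair
    }
);

connection.end();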

Document-Oriented Databases

What They Do: Stores data in documents. Parent-child records can be stored in the same document and returned in a single fetch operation with no join. The server is aware of the fields stored within a document, can query on them, and return their properties selectively.
Horizontal Scaling: Horizontal scaling is provided via replication, or replication + sharding. Document-oriented databases also usually support relatively low-performance MapReduce for ad-hoc querying.
CAP Balance: Generally prefer consistency over availability.
When to Use: When your concept of a "record" has relatively bounded growth, and can store all of its related properties in a single doc.
Example Products: MongoDB, CouchDB, BigCouch, Cloudant
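
For contrast with the relational sketch above, here's what the same parent-child data might look like in the MongoDB shell (which speaks JavaScript). The collection and field names are made up for illustration:

// The parent record and its children live in one document:
db.posts.insert({
    _id: 42,
    title: "Breaking Down 'Big Data'",
    comments: [
        { author: "kevin", body: "Great overview!" },
        { author: "dana", body: "What about graph databases?" }
    ]
});

// One fetch returns the post with its comments; no join needed,
// and the projection limits which fields come back:
db.posts.findOne({ _id: 42 }, { title: 1, comments: 1 });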

Key-Value Stores

What They Do: Stores an arbitrary value at a key. Most can perform simple operations on a single value. Typically, each property of a record must be fetched in multiple trips, with Redis being an exception. Very simple, and very fast.
Horizontal Scaling: Horizontal scale is provided via sharding.
CAP Balance: Generally prefer consistency over availability.
When to Use: Very simple schemas, caching of upstream query results, or extreme speed scenarios (like real-time counters)
Example Products: Couchbase, Redis, PostgreSQL hstore, LevelDB
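
As a quick illustration of the model, here's a hedged sketch using the node redis client (the key names are arbitrary). Notice that each logical record is just a value stored at a key:

var redis = require('redis'); // assumes the "redis" npm package is installed
var client = redis.createClient();

// Cache an upstream query result as a serialized value at a key:
client.set('user:42:profile', JSON.stringify({ name: 'Dana', plan: 'pro' }));
client.get('user:42:profile', function(err, reply) {
    console.log(JSON.parse(reply)); // { name: 'Dana', plan: 'pro' }
});

// Real-time counters are a single, very fast atomic operation:
client.incr('pageviews:home', function(err, count) {
    console.log('Home page views: ' + count);
});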

BigTable-Inspired Databases

What They Do: Stores data in column-oriented structures inspired by Google's BigTable paper. CAP parameters are tunable, so these databases can be adjusted to prefer either consistency or availability. Both of the example products below are fairly operationally intensive.
Horizontal Scaling: Good speed and very wide horizontal scale capabilities.
CAP Balance: Prefer consistency over availability.
When to Use: When you need consistency and write performance that scales past the capabilities of a single machine. HBase in particular has been used with around 1,000 nodes in production.
Example Products: HBase, Cassandra (inspired by both BigTable and Dynamo)

Dynamo-Inspired Databases

What They Do: Distributed key/value stores inspired by Amazon's Dynamo paper. A key written to a Dynamo ring is persisted in several nodes at once before a successful write is reported. Riak also provides a native MapReduce implementation.
Horizontal Scaling: Dynamo-inspired databases usually provide for the best scale and extremely strong data durability.
CAP Balance: Prefer availability over consistency.
When to Use: When the system must always be available for writes and effectively cannot lose data.
Example Products: Cassandra, Riak, BigCouch

Each of these database models has strengths and weaknesses, and there are huge communities supporting each of the open-source examples listed above. If your database is a bottleneck or you're not getting the flexibility and scalability you need to handle your application's volume, velocity and variety of data, start looking at some of these "big data" solutions.

Tried any of the above models and have feedback that differs from ours? Leave a comment below and tell us about it!

-@marcalanjones

December 4, 2012

Big Data at SoftLayer: MongoDB

In one day, Facebook's databases ingest more than 500 terabytes of data, Twitter processes 500 million Tweets and Tumblr users publish more than 75 million posts. With such an unprecedented volume of information, developers face significant challenges when it comes to building an application's architecture and choosing its infrastructure. As a result, demand has exploded for "big data" solutions — resources that make it possible to process, store, analyze, search and deliver data from large, complex data sets. In light of that demand, SoftLayer has been working in strategic partnership with 10gen — the creators of MongoDB — to develop a high-performance, on-demand, big data solution. Today, we're excited to announce the launch of specialized MongoDB servers at SoftLayer.

If you've configured an infrastructure to accommodate big data, you know how much of a pain it can be: You choose your hardware, you configure it to run NoSQL, you install an open source NoSQL project that you think will meet your needs, and you keep tweaking your environment to optimize its performance. Assuming you have the resources (and patience) to get everything running efficiently, you'll wind up with the horizontally scalable database infrastructure you need to handle the volume of content you and your users create and consume. SoftLayer and 10gen are making that process a whole lot easier.

Our new MongoDB solutions take the time and guesswork out of configuring a big data environment. We give you an easy-to-use system for designing and ordering everything you need. You can start with a single server or roll out multiple servers in a single replica set across multiple data centers, and in under two hours, an optimized MongoDB environment is provisioned and ready to be used. I stress that it's an "optimized" environment because that's been our key focus. We collaborated with 10gen engineers on hardware and software configurations that provide the most robust performance for MongoDB, and we incorporated many of their MongoDB best practices. The resulting "engineered servers" are big data powerhouses:

MongoDB Configs

From each engineered server base configuration, you can customize your MongoDB server to meet your application's needs, and as you choose your upgrades from the base configuration, you'll see the thresholds at which you should consider upgrading other components. As your data set's size and the number of indexes in your database increase, you'll need additional RAM, CPU, and storage resources, but you won't need them in the same proportions — certain components become bottlenecks before others. Sure, you could upgrade all of the components in a given database server at the same rate, but if, say, you update everything when you only need to upgrade RAM, you'd be adding (and paying for) unnecessary CPU and storage capacity.

Using our new Solution Designer, it's very easy to graphically design a complex multi-site replica set. Once you finalize your locations and server configurations, you'll click "Order," and our automated provisioning system will kick into high gear. It deploys your server hardware, installs CentOS (with OS optimizations to provide MongoDB performance enhancements), installs MongoDB, installs MMS (MongoDB Monitoring Service) and configures the network connection on each server to cluster it with the other servers in your environment. A process that may have taken days of work and months of tweaking is completed in less than four hours. And because everything is standardized and automated, you run much less risk of human error.
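
To give you an idea of what that automation handles behind the scenes, here's roughly what initiating a three-node replica set looks like if you do it by hand in the MongoDB shell. The hostnames below are placeholders, and the actual configuration our provisioning system applies may differ:

rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "mongo01.example.com:27017" },
        { _id: 1, host: "mongo02.example.com:27017" },
        { _id: 2, host: "mongo03.example.com:27017" }
    ]
});

// Verify that every member has come up as PRIMARY or SECONDARY:
rs.status();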

MongoDB Configs

One of the other massive benefits of working so closely with 10gen is that we've been able to integrate 10gen's MongoDB Cloud Subscriptions into our offering. Customers who opt for a MongoDB Cloud Subscription get additional MongoDB features (like SSL and SNMP support) and support direct from the MongoDB authority. As an added bonus, since the 10gen team has an intimate understanding of the SoftLayer environment, they'll be able to provide even better support to SoftLayer customers!

You shouldn't have to sacrifice agility for performance, and you shouldn't have to sacrifice performance for agility. Most of the "big data" offerings in the market today are built on virtual servers that can be provisioned quickly but offer meager performance levels relative to running the same database on bare metal infrastructure. To get the performance benefits of dedicated hardware, many users have chosen to build, roll out and tweak their own configurations. With our MongoDB offering, you get the on-demand availability and flexibility of a cloud infrastructure with the raw power and full control of dedicated hardware.

If you've been toying with the idea of rolling out your own big data infrastructure, life just got a lot better for you.

-Duke

November 27, 2012

Tips and Tricks - Building a jQuery Plugin (Part 1)

I've written several blogs detailing the use of different jQuery plugins (like Select2, LazyLoad and equalHeights), and in the process, I've noticed an increasing frustration among the development community when it comes to building jQuery plugins. The resources and documentation I've found online are not as clear and easy to follow as they could be, so in my next few posts, I'll break down the process to make jQuery plugin creation simple and straightforward. In this post, we'll cover the basic structure of a plugin and where to insert your own functionality, and in Part 2, we'll pick a simple task and add on to our already-made structure.

Before I go any further, it's probably important to address a question you might be asking yourself: "Why would I want to make my own plugin?" The best reason that comes to my mind is portability. If you've ever created a large-scale project, take a look back into your source code and note how many of the hundreds of lines of jQuery code you could put into a plugin to reuse on a different project. You probably invested a lot of time and energy into that code, so it doesn't make sense to reinvent the wheel if you ever need that functionality again. If that's not enough of a reason for you, I can also tell you that if you develop your own jQuery plugin, you'll level-up in cool points, and the jQuery community will love you.

For this post, let's create a jQuery plugin that simply returns, "This is our awesome plugin!" Our first step involves putting together the basic skeleton used by every plugin:

(function($) {
    $.fn.slPlugin = function() {
 
            // Awesome plugin stuff goes here
    };
}) (jQuery);

This is your template — your starting point. Practice it. Remember it. Love it. The "slPlugin" piece is what I chose to name this plugin. It's best to name your plugin something unique ... I always run a quick Google search to ensure I don't duplicate the name of a plugin I (or someone else) might need to use in a project alongside my plugin. In this case, we're calling the example plugin slPlugin because SoftLayer is awesome, and I like naming my plugins after awesome things. I'll save this code in a file called jquery.slPlugin.js.

Now that we have our plugin's skeleton, let's add some default values for variables:

(function($) {
    $.fn.slPlugin = function(options) {
            var defaults = {
                myVar: "default", // this will be the default value of this var
                anotherVar: 0,
                coolVar: "this is cool",                
            };
            var options = $.extend(defaults, options);
    };
}) (jQuery);

Let's look at the changes we made between the first example and this one. You'll notice that in our second line we added "options" to become $.fn.slPlugin = function(options) {. We do this because our function is now accepting arguments, and we need to let the function know that. The next difference you come across is the var defaults blurb. In this section, we're providing default values for our variables. If you don't define values for a given variable when you call the plugin, these default values will be used.
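
One subtlety worth knowing: $.extend copies properties into its first argument, so values the caller passes in overwrite the defaults, and the defaults object itself gets modified. Here's a quick sketch of that behavior:

// Caller-supplied values win over defaults:
var merged = $.extend({ myVar: "default", anotherVar: 0 }, { myVar: "custom" });
// merged.myVar      -> "custom"
// merged.anotherVar -> 0

// Inside the plugin, $.extend(defaults, options) writes into the defaults
// object itself. If you'd rather keep defaults pristine between calls,
// merge into a fresh target instead:
//     var options = $.extend({}, defaults, options);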

Now let's have our plugin return the message we want to send:

(function($) {
    $.fn.slPlugin = function(options) {
            var defaults = {
                myVar: "This is", // this will be the default value of this var
                anotherVar: "our awesome",
                coolVar: "plugin!"
            };
            var options = $.extend(defaults, options);
            var ourString;
            this.each(function() {
                // reference the merged values through the options object
                ourString = options.myVar + " " + options.anotherVar + " " + options.coolVar;
            });
            return ourString;
    };
}) (jQuery);

We've defined default values for our variables, concatenated them (via the options object) inside this.each(), and added a return statement after the loop. If our jQuery plugin is included in a project and no values are provided for our variables, slPlugin will return, "This is our awesome plugin!"
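
Here's an example of calling the plugin, assuming jquery.slPlugin.js is included after jQuery and the page has at least one matching element:

$(document).ready(function() {
    // With no options, the defaults are used:
    var message = $("p").slPlugin();
    alert(message); // "This is our awesome plugin!"

    // Override any default by passing an options object:
    var custom = $("p").slPlugin({ coolVar: "tool!" });
    alert(custom); // "This is our awesome tool!"
});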

It seems rather rudimentary at this point, but we have to crawl before we walk. This introductory post is laying the groundwork of coding a jQuery plugin, and we'll continue building on this example in the next installment of this series. As you've seen with LazyLoad, equalHeights and Select2, there are much more complicated things we can do with our plugin, and we'll get there. Sneak Preview: In the next installment, we'll be creating and implementing a truncation function for our plugin ... Get excited!

-Cassandra

November 21, 2012

Risk Management: The Importance of Redundant Backups

You (should) know the importance of having regular backups of your important data, but to what extent does data need to be backed up to be safe? With a crowbar and a shove, thieves broke into my apartment and stole the backups I've used for hundreds of gigabytes of home videos, photo files and archives of past computers. A Drobo RAID enclosure and an external drive used by Apple Time Machine were both stolen, and if I didn't have the originals on my laptop or a redundant offsite backup, I would have lost all of my data. My experience is not uncommon, and it's a perfect example of an often-overlooked principle that everyone should understand: You need redundant backups.

It's pretty simple: You need to back up your data regularly. When you've set up that backup schedule, you should figure out a way to back up your data again. After you've got a couple current backups of your files, you should consider backing up your backups off-site. It seems silly to think of backing up backups, but if anything happens — failed drives, theft, fire, flood, etc. — those backups could be lost forever, and if you've ever lost a significant amount of data due to a hard drive failure or experience like mine, you know that backups are worth their weight in gold.

Admittedly, there is a point of diminishing return when it comes to how much redundancy is needed — it's not worth the time/effort/cost to back up your backups ad infinitum — so here are the best practices I've come up with over the course of my career in the information technology industry:

  • Plan and schedule regular backups to keep your archives current. If your laptop's hard drive dies, having backups from last June probably won't help you as much as backups from last night.
  • Make sure your data exists on three different mediums. It might seem unnecessary, but if you're already being intentional about backing up your information, take it one step further to replicate those backups at least one more time.
  • Something might happen to your easy onsite backups, so it's important to consider off-site backups as well. There are plenty of companies offering secure online backups for home users, and those are generally easy to use (even if they can be a little slow).
  • Check your backups regularly. Having a backup is useless if it's not configured to back up the correct data and running on the correct schedule.
  • RAID is not a backup solution. Yes, RAID can duplicate data across hard drives, but that doesn't mean the data is "backed up" ... If the RAID array fails, all of the hard drives (and all of the data) in the array fail with it.

It's important to note here that "off-site" is a pretty relative term when it comes to backups. Many SoftLayer customers back up a primary drive on their server to a secondary drive on the same server (duplicating the data away from the original drive), and while that's better than nothing, it's also a little risky because it's possible that the server could fail and corrupt both drives. Every backup product SoftLayer offers for customers is off-site relative to the server itself (though it might be in the same facility), so we also make it easy to have your backup in another city or on a different continent.

As I've mentioned already, once you set up your backups, you're not done. You need to check your backups regularly for failures and test them to confirm that you can recover your data quickly in the event of a disaster. Don't just view a file listing. Try extracting files or restore the whole backup archive. If you're able to run a full restore without the pressure of an actual emergency, it'll prove that you're ready for the unexpected ... Like a fire drill for your backups.

Setting up a backup plan doesn't have to be scary or costly. If you don't feel like you could recover quickly after losing your data, spend a little time evaluating ways to make a recovery like that easy. It's crazy, but a big part of "risk management," "disaster recovery" and "business continuity" is simply making sure your data is securely backed up regularly and available to you when you need it.

Plan, prepare, back up.

-Lyndell

November 20, 2012

Community Development: Catalysing European Startups

SoftLayer works hard and plays hard. A few weeks ago, I traveled to Dallas for the first "Global Catalyst Summit"* where the community development teams in Europe, Asia and the United States all came together under one roof to learn, strategize and bond. What that really means is that we all experienced a week of hardcore information flow and brutal fun.

The onboarding process to become a part of SoftLayer's Community Development (Catalyst) team is pretty rigorous, and traveling to Dallas from Amsterdam for the training made it even more intense. In short order, I learned about the roots of the Catalyst program and why SoftLayer is so interested in investing in helping startups succeed. I got the low-down on the hundreds of companies that are taking advantage of the program right now, and I was inspired by the six incredible people who focus exclusively on the Catalyst program at SoftLayer ... And Big Tex:

SoftLayer Community Development Team and Big Tex

When the whirlwind week of orientation and training came to an end, I came to a solid conclusion: I am working at SoftLayer for a reason. I believe SoftLayer has the most kick-ass global on-demand technology platform out there, and our focus on innovation and automation is reflected in everything we do. On top of that, we give that platform to startups to help springboard their success. I get to work with a community of world-changers. Needless to say, that's an amazing conclusion to come to.

As a member of the Catalyst team in EMEA (Europe, Middle East, Africa), I can provide significant resources to entrepreneurs who are building awesome new applications and technologies that are making a difference locally, regionally and globally. Anna Bofill Bert and I work out of SoftLayer's Amsterdam office, and we are fully dedicated to helping startup and developer communities in our region.

As a review exercise and a way to educate the audience that may be unfamiliar with Catalyst, I thought I'd bullet out a few of the main ideas:

What is Catalyst?

The SoftLayer Catalyst Startup Program provides:

  • A generous monthly hosting credit toward dedicated, cloud or hybrid compute environments for a FULL YEAR (ideal for dev-ops and next-generation startups that want high-performance compute from the start).
  • Direct connection to the highest-level programming team at SoftLayer — our Innovation Team. Participating companies get help and advice from the people who are writing the book on highly scalable, global infrastructure environments.
  • Connection to the SoftLayer Marketing and PR Team for help spreading the word around the world about all the cool stuff participating startups are doing.

We reach startups by listening to them and meeting the needs they express. We are telling the SoftLayer story, networking, making friends, drinking too much and travelling like mad. In the course of a month, we went to Lean Startup Machine in Rotterdam, Structure Europe in Amsterdam, Pioneers Festival in Vienna and HowToWeb in Bucharest, and we managed to complete a quick tour of startup communities in Spain.

Like our peers on the US team, we partner with incubators and accelerators to make sure that when startups look for help getting started, they also find SoftLayer. We're already working with partners like Springboard, Seedcamp, GameFounders, Startup Sauna, the INLEA Foundation and Tetuan Valley, and the list of supported communities seems to grow daily. When the portfolio companies in each of these organizations are given access to the Catalyst program, that means SoftLayer's Catalyst customer base is growing pretty phenomenally as well.

What I like most about how we help startups, though, is the mentorship and office hours we provide participating companies. SoftLayer was founded by ten guys in a living room in 2005, and we've got hundreds of millions of dollars in annual revenue as of 2012. That success is what the SoftLayer team is excited to share insights about.

Hustling is a major part of startup culture, so it's only fitting that I feel like I had to hustle through this blog to get all of my thoughts down. Given that SoftLayer EMEA is a bit of a startup itself, I'm happy to be practicing what we preach. If you'd like more information about Catalyst or you want to apply, please feel free to hit me up: esampson@softlayer.com

We want to be part of your company's success story.

-@EmilyBlitz

*Note: As an homage to Big Tex after the fire, we referred to our meeting as the "Global Catalyst Summit with Big Tex" at the Texas State Fair. We hope to see you back in action in 2013, Big Tex!

November 19, 2012

How It's Made (and Won): The Server Challenge II

Every year, we attend more than fifty trade shows and conferences around the world. We want to spread the word about SoftLayer and connect with each conference's technical audience (also known as future SoftLayer customers). That goal is pretty straightforward on paper, but when it comes to executing on it, we're faced with the same challenge as all of our fellow exhibitors: How do we get our target audience to our booth?

Walk down any aisle of an expo hall, and you'll see collateral and swag beckoning to attendees like a candy bar at the grocery store register. Some exhibitors rely on Twitter to monitor an event's hashtag and swoop in at every opportunity to reach the show's influential attendees. Other exhibitors might send out emails to their clients and prospects in the area to invite them to the show. We see value in each of those approaches, but what we found to be most effective was to bring a SoftLayer data center to our booth ... or at least a piece of one.

The Server Challenge has come a long way over the years. Its meager beginnings involved installing RAM and hard drive cables in a tower server. Shortly thereafter, a rack-mount server replaced the tower server, but you were still tasked with "inside the server" challenges. As we started looking for ways to tell the bigger SoftLayer story with the Server Challenge, we moved to a miniature server rack, and the competition really started to pick up steam. This year, we made it our goal to take the Server Challenge to the next level, and when Supermicro stepped in to sponsor the next iteration of the competition, we started thinking BIG.

Why use a miniature version of a SoftLayer rack when we could use a full-size version? Why have a standalone screen when rack-mount monitors can make the display part of the unit? Why rely on speakers behind the booth to pump "Eye of the Tiger" while attendees are competing when we could easily build those into the next version of the challenge? What was initially intended to be a "tweak" of the first Server Challenge became a complete overhaul ... Hence the new "Server Challenge II" moniker.

Harkening back to the 8-bit glory days of Pac-Man and Space Invaders, the Server Challenge II uses a full-size 42U server rack with vintage arcade-style branding, a built-in timer and speakers that blast esoteric video game music. The bread and butter of the challenge is the actual server hardware, though ... Supermicro provided two new 2U servers to replace the previous version's five 1U servers, and we installed the same Cisco (public and private networks) and SMC (out-of-band management network) switches you see in SoftLayer's pods.

Server Challenge II

We had two instances of the original Server Challenge (one in the US, one in Amsterdam), so in order for the Server Challenge II to be bigger and better, we had to increase that total to five — one instance in Europe, one in Asia and three in the United States. Things might get a little crazier logistically, but as a potential conference attendee, it means you're even more likely to encounter the Server Challenge II if you attend any events with us.

The Server Challenge II's Internal Debut

The first instance of the Server Challenge II made its debut at GDC Online in Austin, and we immediately knew we had a hit. By the time the rack got back to our office, we had to get it ready for its next destination (Cloud Expo West), but before we sent it on its way, we gave it an official internal debut ... and raised some money for the American Heart Association in the process.

Server Challenge II at SoftLayer

SLayers at the SoftLayer HQ in Dallas could pay $3 for one attempt or $5 for two attempts to reach the top of the Server Challenge II leader board. Needless to say, it was competitive. If you click on the image above, you'll notice that our fearless leader, Lance Crosby, stopped by and gave tips to (and/or heckled) a few participants. Unsurprisingly, one of our very talented Server Build Technicians — Ellijah Fleites — took home a MacBook Air and bragging rights as SoftLayer champion with a record time of 1:03.79 ... But records are made to be broken.

In Two Places at Once

Immediately after the AHA fundraiser, we crated up the rack and sent it along to Cloud Expo West in Santa Clara. A few days later, we put the finishing touches on the second Server Challenge II rack, and because we got it done quickly, we were able to get it shipped to the other side of the country for ad:tech NYC. We would finally have the competition running in two places at the exact same time!

We weren't disappointed.

On both coasts, the retro style of the Server Challenge II lured some fantastic competitors (excellent!), and started a lot of great conversations (even better!). Here are the final leader boards from the shows:

Server Challenge II
Server Challenge II

You probably noticed that the times on the ad:tech leader board are a little higher than the times on the Cloud Expo leader board, and midway through the second day of the conference, our team figured out why ... The way we bound the network cables differed slightly between the two instances, and we were using different switches to time the competition (one required only one hand to activate/deactivate; the other required both hands). In order to have an "apples-to-apples" comparison between all of our shows, we're going to make sure everything is consistent across all of the instances, and we plan on keeping a running list of the fastest overall challenge times ... and maybe even a "World Championship" one day.

Given the early success of the Server Challenge II, you can bet that it's not going anywhere any time soon. If we have multiple shows running the challenge at one time, we might even fire up a video chat where you can compete against an attendee at a completely different conference ... so be prepared.

In the next year, we'll have all five of the Server Challenge II instances in rotation across three continents, and with the popularity of the competition growing by leaps and bounds after every show, we hope by next holiday season, a home version of the Server Challenge II is at the top of every wish list on the planet. :-)

For now, though, I'll just leave you with a glimpse at the action from Cloud Expo West (click for more pictures from the show):

Cloud Expo West

-Raleigh

November 16, 2012

Going Global: Domo Arigato, Japan

I'm SoftLayer's director of international operations, so I have the unique pleasure of spending a lot of time on airplanes and in hotels as I travel between Dallas, Amsterdam, Singapore and wherever else our event schedule dictates. In the past six months, I've spent most of my time in Asia, and I've tried to take advantage of the opportunity to relearn the culture to help shape SoftLayer Asia's business.

To really get a sense of the geographic distance between Dallas and Singapore, find a globe, put one index finger on Dallas and the other on Singapore. To travel from one location to the other, you fly to the other side of the planet. Given the space considerations, our network map uses a scaled-down representative topology to show our points of presence in a single view, and you get a sense of how much artistic license was used when you actually make the trip to Singapore.

Global Network

The longest currently scheduled commercial flight on the planet takes you from Singapore to Newark in a cool 19 hours, but I choose to maintain my sanity rather than set world records for time spent in a metal tube. I usually hop from Dallas to Tokyo (a mere 14 hours away), where I spend a few days, and then I get on another plane down to Singapore.

The break between the two legs of the trip serves a few different purposes ... I get a much-needed escape from the confines of an airplane, I'm able to spend time in an amazing city (where I lived 15 years ago), and I can use the opportunity to explore the market for SoftLayer. Proximity and headcount dictated that we spend most of our direct marketing and sales time focusing on the opportunities radiating from Singapore, so we haven't been able to spend as much time as we'd like in Japan. Fortunately, we've been able to organically grow our efforts in the country through community-based partnerships and sponsorships, and we owe a great deal of our success to our partners in the region and our new-found friends. I've observed from our experience in Japan that the culture breeds two contrasting business realities that create challenges and opportunities for companies like SoftLayer: Japan is insular and Japan is global.

When I say that Japan is insular, I mean that IT purchases are generally made in the realm of either Japanese firms or foreign firms that have spent decades building reputation in market. Becoming a trusted part of that market is a time-consuming (and expensive) endeavor, and it's easy for a business to be dissuaded as an outsider. The contrasting reality that Japanese businesses also have a huge need for global reach is where SoftLayer can make an immediate impact.

Consider the Japanese electronics and automobile industries. Both were built internally before making the leap to other geographies, and over the course of decades, they have established successful brands worldwide. Japanese gaming companies, social media companies and vibrant start-up communities follow a similar trend ... only faster. The capital investment required to go global is negligible compared to their forebears' because they don't need to build factories or put elaborate logistics operations in place anymore. Today, a Japanese company with a SaaS solution, a game or a social media experience can successfully share it with the world in a matter of minutes or hours at minimal cost, and that's where SoftLayer is able to immediately serve the Japanese market.

The process of building the SoftLayer brand in Asia has been accelerated by the market's needs, and we don't take that for granted. We plan to continue investing in local communities and working with our partners to become a trusted and respected resource in the market, and we are grateful for the opportunities those relationships have opened for us ... Or as Styx would say, "Domo Arigato, Mr. Roboto."

-@quigleymar

November 14, 2012

Risk Management: Securing Your Servers

How do you secure your home when you leave? If you're like most people, you make sure to lock the door you leave from, and you head off to your destination. If Phil is right about "locks keeping honest people honest," simply locking your front door may not be enough. When my family moved into a new house recently, we evaluated its physical security and tried to determine possible avenues of attack (garage, doors, windows, etc.), tools that could be used (a stolen key, a brick, a crowbar, etc.) and ways to mitigate the risk of each kind of attack ... We were effectively creating a risk management plan.

Every risk has different probabilities of occurrence, potential damages, and prevention costs, and the risk management process helps us balance the costs and benefits of various security methods. When it comes to securing a home, the most effective protection comes by using layers of different methods ... To prevent a home invasion, you might lock your door, train your dog to make intruders into chew toys and have an alarm system installed. Even if an attacker can get a key to the house and bring some leftover steaks to appease the dog, the motion detectors for the alarm are going to have the police on their way quickly. (Or you could violate every HOA regulation known to man by digging a moat around the house, filling it with sharks with laser beams attached to their heads, and building a medieval drawbridge over the moat.)

I use the example of securing a house because it's usually a little more accessible than talking about "server security." Server security doesn't have to be overly complex or difficult to implement, but its stigma of complexity usually prevents systems administrators from incorporating even the simplest of security measures. Let's take a look at the easiest steps to begin securing your servers in the context of their home security parallels, and you'll see what I'm talking about.

Keep "Bad People" Out: Have secure password requirements.

Passwords are your keys and your locks — the controls you put into place that ensure that only the people who should have access get it. There's no "catch all" method of keeping the bad people out of your systems, but employing a variety of authentication and identification measures can greatly enhance the security of your systems. A first line of defense for server security would be to set password complexity and minimum/maximum password age requirements.

If you want to add an additional layer of security at the authentication level, you can incorporate "Strong" or "Two-Factor" authentication. From there, you can learn about a dizzying array of authentication protocols (like TACACS+ and RADIUS) to centralize access control or you can use active directory groups to simplify the process of granting and/or restricting access to your systems. Each layer of authentication security has benefits and drawbacks, and most often, you'll want to weigh the security risk against your need for ease-of-use and availability as you plan your implementation.

Stay Current on your "Good People": When authorized users leave, make sure their access to your system leaves with them.

If you gave your neighbor a key to your tool shed while he was finishing his renovation and he stopped returning the tools he borrowed, you'd take the key back when you told him he couldn't borrow any more. If you don't, nothing is stopping him from walking over to the shed when you're not looking and taking more (all?) of your tools. I know it seems like a silly example, but that kind of thing is a big oversight when it comes to server security.

Employees are granted access to perform their duties (the principle of least privilege), and when they no longer require access, the "keys to the castle" should be revoked. Auditing who has access to what (whether it be for your systems or for your applications) should be continual.

You might have processes in place to grant and remove access, but it's also important to audit those privileges regularly to catch any breakdowns or oversights. The last thing you want is to have a disgruntled former employee wreak all sorts of havoc on your key systems, sell proprietary information or otherwise cost you revenue, fines, recovery efforts or lost reputation.

Catch Attackers: Monitor your systems closely and set up alerts if an intrusion is detected.

There is always a chance that bad people are going to keep looking for a way to get into your house. Maybe they'll walk around the house to try and open the doors and windows you don't use very often. Maybe they'll ring the doorbell and if no lights turn on, they'll break a window and get in that way.

You can never completely eliminate all risk. Security is a continual process, and eventually some determined, over-caffeinated hacker is going to find a way in. Thinking your security is impenetrable makes you vulnerable if, by some stretch of the imagination, an attacker breaches it (see: Trojan Horse). Continuous monitoring strategies can alert administrators if someone does things they shouldn't be doing. Think of it as a motion detector in your house ... "If someone gets in, I want to know where they are." When you implement monitoring, logging and alerting, you'll also be able to recover more quickly from security breaches because every file accessed will be documented.

Minimize the Damage: Lock down your system if it is breached.

A burglar smashes through your living room window, runs directly to your DVD collection, and takes your limited edition "Saved by the Bell" series box set. What can you do to prevent them from running back into the house to get the autographed poster of ALF off of your wall?

When you're monitoring your servers and you get alerted to malicious activity, you're already late to the game ... The damage has already started, and you need to minimize it. In a home security environment, that might involve an ear-piercing alarm or filling the moat around your house even higher so the sharks get a better angle to aim their laser beams. File integrity monitors and IDS software can mitigate damage in a security breach by reverting files when checksums don't match or stopping malicious behavior in its tracks.

These recommendations are only a few of the first-line layers of defense when it comes to server security. Even if you're only able to incorporate one or two of these tips into your environment, you should. When you look at server security in terms of a journey rather than a destination, you can celebrate the progress you make and look forward to the next steps down the road.

Now if you'll excuse me, I have to go to a meeting where I'm proposing moats, drawbridges, and sharks with laser beams on their heads to SamF for data center security ... Wish me luck!

-Matthew

November 8, 2012

Celebrating the First Anniversary of SoftLayer Going Global

In October, SoftLayer's data center in Singapore (SNG01) celebrated its first birthday, and our data center in Amsterdam (AMS01) turned one year old this week as well. In twelve short months, SoftLayer has completely transformed into a truly global operation with data centers and staff around the world. Our customer base has always had an international flavor to it, and our physical extension into Europe and Asia was a no-brainer.

At the end of 2011, somewhere in the neighborhood of 40% of our revenue was generated by companies outside of North America. Since then, both facilities have been fully staffed, and we've ratcheted up support in local startup communities through the Catalyst program. We've also aggressively promoted SoftLayer's global IaaS (Infrastructure-as-a-Service) platform on the trade show circuit, and the unanimous response has been that our decision to go global has been a boon to both our existing and new customers.

This blog is filled with posts about SoftLayer's culture and our SLayers' perspectives on what we're doing as a company, and that kind of openness is one of the biggest reasons we've been successful. SoftLayer's plans for global domination included driving that company culture deep into the heart of Europe and Asia, and we're extremely proud of how both of our international locations show the same SLayer passion and spirit. In Amsterdam, our office is truly pan-European — staffed by employees who hail from the US, Croatia, Greece, France, the Netherlands, Poland, Spain, Sweden, Ireland and England. In Singapore, the SoftLayer melting pot is filled with employees from the US, Singapore, Malaysia, Indonesia and New Zealand. The SoftLayer culture has flourished in the midst of that diversity, and we're a better company for it.

All of this is not to say the last year has been without challenges ... We've logged hundreds of thousands of air miles, spent far too many nights in hotels and juggled 13-hour and 6-hour time zone differences to make things work. Beyond these personal challenges, we've worked through professional challenges of how to make things happen outside of North America. It seems like everything is different — from dealing with local vendors to adjusting to the markedly different work cultures that put bounds around how and when we work (I wish I was Dutch and had as many vacation days...) — and while some adjustments have been more difficult than others, our team has pulled through and gotten stronger as a result.

As we celebrate our first anniversary of global operations, I reflect on a few of the funny "light bulb" moments I've experienced. From seeing switch balls get the same awed looks at trade shows on three different continents to realizing how to effectively complete simple tasks in the Asian business culture, I'm ecstatic about how far we've come ... And how far we're going to go.

To infinity and beyond?

-@quigleymar

November 6, 2012

Tips and Tricks - Pure CSS Sticky Footers

By now, if you've seen my other blog posts, you know that I'm fascinated with how much JavaScript has evolved and how much you can do with jQuery these days. I'm an advocate of working smarter, not harder, and that maxim knows no coding language limits. In this post, I want to share a pure CSS solution that allows for "sticky" footers on a web page. In comparing several different techniques to present this functionality, I found that all of the other routes were overkill when it came to processing time and resource usage.

Our objective is simple: Make the footer of our web page stay at the bottom even if the page's content area is shorter than the user's browser window.

This, by far, is one of my *favorite* things to do. It makes the web layout so much more appealing and creates a very professional feel. I ended up kicking myself the very first time I tried to add this functionality to a project early in my career (ten years ago ... already!?) when I found out just how easy it was. I take solace in knowing that I'm not alone, though ... A quick search for "footer stick bottom" still yields quite a few results from fellow developers who are wrestling with the same frustrating experience I did. If you're in that boat, fear no more! We're going to get your footers in shape in a snap.

Here's a diagram of the problem:

CSS Footer

Unfortunately, a lot of people try to handle it by setting a fixed height on the content to push the footer down. This may work when YOU view it, but there are so many different browser window heights, resolutions and other variables that this is an *extremely* unreliable solution (notice the emphasis on the word "extremely" ... this basically means "don't do it").

We need a dynamic solution that can adapt on the fly to the height of a user's browser window, regardless of whether they resize it, have Firebug open, use a unique resolution or just have a really, really weird browser!

Let's take a look at what the end results should look like:

CSS Footer

To make this happen, let's get our HTML structure in place first:

<div id="page">
 
      <div id="header"> </div>
 
      <div id="main"> </div>
 
      <div id="footer"> </div>
 
</div>

It's pretty simple so far ... Just a skeleton of a web page. The page div contains ALL elements and sits immediately inside the <body> tag in the page code hierarchy. The header div is going to be our top content, the main div will include all of our content, and the footer div is all of our copyrights and footer links.

Let's start by coding the CSS for the full page:

html, body {
      padding: 0;
      margin: 0;
      height: 100%;
}

Adding a 100% height allows us to set the height of the main div later; a div can only be as tall as the parent element encasing it. Now let's see how the rest of our ids are styled:

#page {
      min-height: 100%;
      position: relative;
}
 
#main {
      padding-bottom: 75px;   /* This value is the height of your footer */
}
 
#footer {
      position: absolute;
      width: 100%;
      bottom: 0;
      height: 75px;  /* This value is the height of your footer */
}

These rules position the footer "absolutely" at the bottom of the page, and because we set #page to min-height: 100%, the page always stretches to at least the full height of the browser's viewing space while #main's bottom padding reserves room for the footer. One of the best things about this little trick is that it's compliant with all major current browsers — including Firefox, Chrome, Safari *AND* Internet Explorer (after a little tweak). For Internet Explorer to not throw a fit, we need to concede that IE doesn't recognize min-height as a valid property, so we have to add height: 100%; to #page:

#page {
      min-height: 100%;  /* for all other browsers */
      height: 100%;  /* for IE */
      position: relative;
}

If the user does not have a modern, popular browser, it's still okay! Though their old browser won't detect the magic we've done here, it'll fail gracefully, and the footer will be positioned directly under the content, as it would have been without our little CSS trick.

I can't finish this blog without mentioning my FAVORITE perk of this trick: Should you not have a specially designed mobile version of your site, this trick even works on smart phones!

-Cassandra
