
December 20, 2012

MongoDB Performance Analysis: Bare Metal v. Virtual

Developers can be cynical. When "the next great thing in technology" is announced, I usually wait to see how it performs before I get too excited about it ... Show me how that "next great thing" compares apples-to-apples with the competition, and you'll get my attention. With the launch of MongoDB at SoftLayer, I'd guess a lot of developers outside of SoftLayer and 10gen have the same "wait and see" attitude about the new platform, so I put our new MongoDB engineered servers to the test.

When I shared MongoDB architectural best practices, I referenced a few of the significant optimizations our team worked with 10gen to incorporate into our engineered servers (cheat sheet). To illustrate the impact of these changes in MongoDB performance, we ran 10gen's recommended benchmarking harness (freely available for download and testing of your own environment) on our three tiers of engineered servers alongside equivalent shared virtual environments commonly deployed by the MongoDB community. We've made a pretty big deal about the performance impact of running MongoDB on optimized bare metal infrastructure, so it's time to put our money where our mouth is.

The Testing Environment

For each of the available SoftLayer MongoDB engineered servers, data sets of 512KB documents were preloaded onto single MongoDB instances. The data sets were sized relative to each server's available memory so that we could test with data sets both larger (2X) than available memory and smaller than available memory. Each test also altered the data set frequently enough during the run to prevent all of the data from being cached in memory.

Once the data sets were created, JMeter server instances with 4 cores and 16GB of RAM were used to drive 'benchrun' from the 10gen benchmarking harness. This diagram illustrates how we set up the testing environment (click for a better look):

MongoDB Performance Analysis Setup

These JMeter servers function as the clients generating traffic against the MongoDB instances. Each client generated random query and update requests at a ratio of six queries per update (the update requests ensured that the data set could never be fully cached in memory, which would have eliminated reads from disk). The tests were designed to create an extreme load on the servers from an exponentially increasing number of clients until system resources became saturated, and we recorded the resulting performance of the MongoDB application.
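
To make that workload concrete, here is a hedged sketch of what a benchRun call with the 6:1 mix might look like in the mongo shell. The namespace, host and document key are hypothetical placeholders; the actual 10gen harness defines its own operations, and the JMeter clients ramp the connection count automatically.

// Hedged sketch only: benchRun is the mongo shell helper the 10gen harness drives.
// All names and values below are placeholders, not the harness's real configuration.
var queryOp  = { ns: "bench.docs", op: "findOne", query: { _id: 12345 } };
var updateOp = { ns: "bench.docs", op: "update",  query: { _id: 12345 },
                 update: { $inc: { counter: 1 } } };

var result = benchRun({
    host: "127.0.0.1:27017",   // target MongoDB instance (assumed address)
    parallel: 32,              // concurrent client connections for this load step
    seconds: 60,               // each step runs for a fixed interval
    // listing the query op six times approximates the 6:1 query-to-update ratio
    ops: [queryOp, queryOp, queryOp, queryOp, queryOp, queryOp, updateOp]
});
printjson(result);             // reports per-operation throughput and latency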

At the Medium (MD) and Large (LG) engineered server tiers, performance metrics were run separately for servers using 15K SAS hard drive data mounts and servers using SSD hard drive data mounts. If you missed the post comparing the IOPS statistics between different engineered server hard drive configurations, be sure to check it out. For a better view of the results in a given graph, click the image included in the results below to see a larger version.

Test Case 1: Small MongoDB Engineered Servers vs Shared Virtual Instance

Servers

Small (SM) MongoDB Engineered Server
  • Single 4-core Intel 1270 CPU
  • 64-bit CentOS
  • 8GB RAM
  • 2 x 500GB SATA II - RAID1
  • 1Gb Network

Virtual Provider Instance
  • 4 Virtual Compute Units
  • 64-bit CentOS
  • 7.5GB RAM
  • 2 x 500GB Network Storage - RAID1
  • 1Gb Network

Tests Performed

  • Small Data Set (8GB of 0.5MB documents)
  • 200 iterations of 6:1 query-to-update operations
  • Concurrent client connections exponentially increased from 1 to 32
  • Test duration spanned 48 hours

[Charts: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Test Case 2: Medium MongoDB Engineered Servers vs Shared Virtual Instance

Servers (15K SAS Data Mount Comparison)

Medium (MD) MongoDB Engineered Server
  • Dual 6-core Intel 5670 CPUs
  • 64-bit CentOS
  • 36GB RAM
  • 2 x 64GB SSD - RAID1 (Journal Mount)
  • 4 x 300GB 15K SAS - RAID10 (Data Mount)
  • 1Gb Network - Bonded

Virtual Provider Instance
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 30GB RAM
  • 2 x 64GB Network Storage - RAID1 (Journal Mount)
  • 4 x 300GB Network Storage - RAID10 (Data Mount)
  • 1Gb Network

Tests Performed

  • Small Data Set (32GB of 0.5MB documents)
  • 200 iterations of 6:1 query-to-update operations
  • Concurrent client connections exponentially increased from 1 to 128
  • Test duration spanned 48 hours

[Charts: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Servers (SSD Data Mount Comparison)

Medium (MD) MongoDB Engineered Server
  • Dual 6-core Intel 5670 CPUs
  • 64-bit CentOS
  • 36GB RAM
  • 2 x 64GB SSD - RAID1 (Journal Mount)
  • 4 x 400GB SSD - RAID10 (Data Mount)
  • 1Gb Network - Bonded

Virtual Provider Instance
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 30GB RAM
  • 2 x 64GB Network Storage - RAID1 (Journal Mount)
  • 4 x 300GB Network Storage - RAID10 (Data Mount)
  • 1Gb Network

Tests Performed

  • Small Data Set (32GB of 0.5MB documents)
  • 200 iterations of 6:1 query-to-update operations
  • Concurrent client connections exponentially increased from 1 to 128
  • Test duration spanned 48 hours

[Charts: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Test Case 3: Large MongoDB Engineered Servers vs Shared Virtual Instance

Servers (15K SAS Data Mount Comparison)

Large (LG) MongoDB Engineered Server
  • Dual 8-core Intel E5-2620 CPUs
  • 64-bit CentOS
  • 128GB RAM
  • 2 x 64GB SSD - RAID1 (Journal Mount)
  • 6 x 600GB 15K SAS - RAID10 (Data Mount)
  • 1Gb Network - Bonded

Virtual Provider Instance
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 64GB RAM (maximum available from this provider)
  • 2 x 64GB Network Storage - RAID1 (Journal Mount)
  • 6 x 600GB Network Storage - RAID10 (Data Mount)
  • 1Gb Network

Tests Performed

  • Small Data Set (64GB of 0.5MB documents)
  • 200 iterations of 6:1 query-to-update operations
  • Concurrent client connections exponentially increased from 1 to 128
  • Test duration spanned 48 hours

[Charts: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Servers (SSD Data Mount Comparison)

Large (LG) MongoDB Engineered Server
  • Dual 8-core Intel E5-2620 CPUs
  • 64-bit CentOS
  • 128GB RAM
  • 2 x 64GB SSD - RAID1 (Journal Mount)
  • 6 x 400GB SSD - RAID10 (Data Mount)
  • 1Gb Network - Bonded

Virtual Provider Instance
  • 26 Virtual Compute Units
  • 64-bit CentOS
  • 64GB RAM (maximum available from this provider)
  • 2 x 64GB Network Storage - RAID1 (Journal Mount)
  • 6 x 600GB Network Storage - RAID10 (Data Mount)
  • 1Gb Network

Tests Performed

  • Small Data Set (64GB of 0.5MB documents)
  • 200 iterations of 6:1 query-to-update operations
  • Concurrent client connections exponentially increased from 1 to 128
  • Test duration spanned 48 hours

[Charts: Average and Peak Read Operations per Second by Concurrent Client; Average and Peak Write Operations per Second by Concurrent Client]

Impressions from Performance Testing

The results speak for themselves. Running a MongoDB big data solution in a shared virtual environment has significant drawbacks compared to running MongoDB on a single-tenant bare metal offering. Disk I/O is by far the most limiting resource for MongoDB, and relying on shared network-attached storage (with much lower disk I/O) makes this limitation very apparent. Beyond the average and peak statistics above, performance varied much more widely in the virtual instance environment, so it's not as consistent and predictable as bare metal.

Highlights:

  • When a working data set is smaller than available memory, query performance increases.
  • The number of clients performing queries has an impact on query performance because more data is being actively cached at a rapid rate.
  • The addition of a separate Journal Mount volume significantly improves performance. Because the Small (SM) engineered server does not include a secondary mount for journals, whenever MongoDB began to journal, the disk I/O associated with journaling disrupted the query and update operations performed on the Data Mount.
  • The best deployments in terms of operations per second, stability and control were the configurations with a RAID10 SSD Data Mount and a RAID1 SSD Journal Mount. These configurations are available in both our Medium and Large offerings, and I'd highly recommend them.

-Harold

December 19, 2012

SoftLayer API: Streamline. Simplify.

Building an API is a bit of a balancing act. You want your API to be simple and easy to use, and you want it to be feature-rich and completely customizable. Because those two desires live on opposite ends of the spectrum, every API strikes a different balance between complexity and customizability. The SoftLayer API was designed to provide customers with granular control of every action associated with any product or service on our platform; anything you can do in our customer portal can be done via our API. That depth of functionality can be intimidating to developers looking to dive in quickly and incorporate the SoftLayer platform into their applications, so our development team has been working to streamline and simplify some of the most common API services to make them even more accessible.

SoftLayer API

To get an idea of what their efforts look like in practice, Phil posted an SLDN blog with a perfect example of how they simplified cloud computing instance (CCI) creation via the API. The traditional CCI ordering process required developers to define nineteen data points:

Hostname
Domain name
complexType
Package Id
Location Id
Quantity to order
Number of cores
Amount of RAM
Remote management options
Port speeds
Public bandwidth allotment
Primary subnet size
Disk size
Operating system
Monitoring
Notification
Response
VPN Management - Private Network
Vulnerability Assessments & Management

While each of those data points is straightforward, you still have to define nineteen of them. You have all of those options when you check out through our shopping cart, so it makes sense that you'd have them in the API, but when it comes to ordering through the API, you don't necessarily need all of those options. Our development team observed our customers' API usage patterns, and they created the slimmed-down and efficient SoftLayer_Virtual_Guest::createObject — a method that only requires seven data points:

Hostname
Domain name
Number of cores
Amount of RAM
Hourly/monthly billing
Local vs SAN disk
Operating System

Without showing you a single line of code, you can see the improvement. Default values were established for options like Port speeds and Monitoring based on customer usage patterns, and as a result, developers only have to provide a fraction of the data to place a new CCI order. Because each data point might require multiple lines of code, the volume of API code required to place an order is slimmed down even more. The best part is that if you find yourself needing to modify one of the now-default options like Port speeds or Monitoring, you still can!
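
For the curious, here's roughly what those seven data points look like as a createObject template. This is a hedged sketch: the field names follow SLDN's documented conventions for SoftLayer_Virtual_Guest, but treat the specific values, credentials and endpoint shown here as placeholder assumptions rather than a copy-paste recipe.

// Hedged sketch: a createObject template with only the seven required data points.
// All values are placeholders; consult SLDN for the authoritative field reference.
var template = {
    hostname: "test-cci",
    domain: "example.com",
    startCpus: 2,                               // number of cores
    maxMemory: 2048,                            // amount of RAM in MB
    hourlyBillingFlag: true,                    // hourly vs. monthly billing
    localDiskFlag: true,                        // local vs. SAN disk
    operatingSystemReferenceCode: "CENTOS_6_64" // operating system
};

// POSTing {"parameters": [template]} to
// https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest.json
// (or passing the template through any SLAPI client library) places the order.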

As the development team finds other API services and methods that can be streamlined and simplified like this one, they'll ninja new solutions to make the API even more accessible. Have you tried coding to the SoftLayer API yet? If not, what's the biggest roadblock for you? If you're already a SLAPI coder, what other methods do you use often that could be streamlined?

-@khazard

December 18, 2012

2012 at SoftLayer: A Year-End Review

It's already December 18, so you've probably read a few dozen "Best of 2012" and "Looking Back on 2012" articles around the web by now. I hope that you indulge me as I add one more tally to that list ... I can't suppress the urge to take a nostalgic look back on all of SoftLayer's successes this year.

As Director of Communications, the easiest milestones for me to use as I look back are our product announcements and press releases, so I'll use those as landmarks to help tell the story of SoftLayer in 2012. Instead of listing those points chronologically, it might make a little more sense to categorize them topically so you can see the bigger picture of what's happening at the company when it comes to product innovation, growth, the startup community and industry recognition.

Driving Product Innovation

When your company motto is "Innovate or Die," there's a lot of pressure to stay on the bleeding edge of technology. In this calendar year alone, we launched some pretty amazing products and capabilities that have the potential to reshape the competitive landscape:

  • Flex Images – In February, we announced Flex Images — an amazing tool that blurs the line between "cloud" and "dedicated." Users can easily replicate servers and move them between physical and virtual platforms to quickly meet their changing requirements. None of our competitors can match that level of flexibility.
  • High Performance Computing – In April, we launched high-performance computing (HPC) options powered by NVIDIA Tesla GPUs to provide an on-demand, consumption-based platform for users with the most compute-intensive environments.
  • SoftLayer Private Clouds – In June, we unveiled Private Clouds based on CloudStack and Citrix CloudPlatform. A Private Cloud is an optimized environment that enables quick provisioning of cloud instances on dedicated infrastructure, and because we've automated the provisioning and expansion of the Private Cloud architecture, customers can order and configure full private cloud deployments on demand.
  • Big Data: MongoDB – Our most recent product release, an optimized MongoDB environment, was the amazing result of a strategic partnership with the team at 10gen. This flexible pay-as-you-go solution simplifies the big data buying process and enables organizations to swiftly deploy highly scalable and available production-grade systems. Big data developers don't have to settle for lower-performance virtualized platforms, and they don't have to hassle with building, configuring and tweaking their own dedicated environments (since we did all the work for them).

Expanding in Key Vertical Markets

Beyond the pure "product innovation" milestones we've hit this year, we've also seen a few key vertical markets do their own innovating on our platform. With a paintbrush and a little creativity, Pablo Picasso popularized Cubism; in the same way, when our creative customers are provided with a truly scalable platform that delivers unparalleled performance and control across both physical and virtual devices, they do their own world-changing. Several top online gaming providers and cutting-edge tech companies chose SoftLayer to do their "painting" this year, and their stories have been pretty amazing:

  • Broken Bulb Studios - This social gaming developer uses SoftLayer's public and private cloud infrastructure with RightScale cloud management to easily deploy, automate and manage its rapidly expanding computing workloads across the globe.
  • KIXEYE, Storm8, and East Side Games - These online gaming companies rely on SoftLayer to provide a platform of dedicated, virtualized and managed servers from which they can develop, test, launch and run their latest games.
  • AppFirst, Cloudant and Struq - These hot tech companies moved to SoftLayer to achieve the scalability, performance and the time-to-market they need to continue meeting global market demand for their services.
  • Huge Wins in Europe, Middle East and Africa - Companies like Binweevils, Boxed Ice, Crazymist, Exit Games, Ganymede, Hotwire Financial, Mangrove, Multiplay, Peak Games and Zamzar are just some of the organizations that chose SoftLayer to deliver the cloud infrastructure for their killer applications and games.

Supporting the Startup Community

2012 was the first full year of activity for the Catalyst Startup Program. Catalyst is geared toward furthering innovation by investing time and hosting resources in helping entrepreneurs build their businesses, and as an extension of that program, we also supported several high-profile incubators, accelerators and startup-related events this year.

Earning Industry Recognition

All of this innovation and effort didn't go unnoticed in 2012. SoftLayer's growth and accomplishments throughout the year resulted in some high-profile recognition:

  • SoftLayer won the Red Herring "Top 100 North America Tech Award," a mark of distinction for identifying promising new companies and entrepreneurs. With this award, we join the ranks of past recipients like Facebook, Twitter and Google.
  • SoftLayer was listed in the Top 10 on Business Insider's Digital 100 list of 2012's Most Valuable Private Tech Companies in the world, alongside Twitter, Square and Dropbox.

Beyond that "official" recognition of what we're doing to shake up the market, the best barometer for our success is our customer base. According to an amazing hosting infographic from HostCabi.net, we're the most popular hosting provider among the 100,000 most visited websites in the world. We easily beat out all other service providers — almost doubling the number of sites hosted by the second-place competitor — and we're not slowing down. We're using the momentum we've continued building in 2012 to propel us into 2013, and we encourage you to watch this space for even more activity next year.

-Andre

December 17, 2012

Big Data at SoftLayer: The Importance of IOPS

The jet flow gates in the Hoover Dam can release up to 73,000 cubic feet — the equivalent of 546,040 gallons — of water per second at 120 miles per hour. Imagine replacing those jet flow gates with a single garden hose that pushes 25 gallons per minute (or 0.42 gallons per second). Things would get ugly pretty quickly. In the same way, a massive "big data" infrastructure can be crippled by insufficient IOPS.

IOPS — Input/Output Operations Per Second — measure computer storage in terms of the number of read and write operations it can perform in a second. IOPS are a primary concern for database environments where content is being written and queried constantly, and when we take those database environments to the extreme (big data), the importance of IOPS can't be overstated: If you aren't able to perform database reads and writes quickly in a big data environment, it doesn't matter how many gigabytes, terabytes or petabytes you have in your database ... You won't be able to efficiently access, add to or modify your data set.

As we worked with 10gen to create, test and tweak SoftLayer's MongoDB engineered servers, our primary focus centered on performance. Since the performance of massively scalable databases is dictated by the read and write operations to that database's data set, we invested significant resources into maximizing the IOPS for each engineered server ... And that involved a lot more than just swapping hard drives out of servers until we found a configuration that worked best. Yes, "Disk I/O" — the number of input/output operations a given disk can perform — plays a significant role in big data IOPS, but many other factors limit big data performance. How is performance impacted by network-attached storage? At what point will a given CPU become a bottleneck? How much RAM should be included in a base configuration to accommodate the load we expect our users to put on each tier of server? Are there operating system changes that can optimize the performance of a platform like MongoDB?

The resulting engineered servers are a testament to the blood, sweat and tears that were shed in the name of creating a reliable, high-performance big data environment. And I can prove it.

Most shared virtual instances — the scalable infrastructure many users employ for big data — use network-attached storage for their platform's storage. When data has to be queried over a network connection (rather than from a local disk), you introduce latency and more "moving parts" that have to work together. Disk I/O might be amazing on the enterprise SAN where your data lives, but because that data is not stored on-server with your processor or memory resources, performance can sporadically go from "Amazing" to "I Hate My Life" depending on network traffic. When I tested the IOPS for network-attached storage from a large competitor's virtual instances, I saw an average of around 400 IOPS per mount. It's difficult to say whether that's "not good enough" because every application will have different needs in terms of concurrent reads and writes, but it certainly could be better. We performed some internal testing of the IOPS for the hard drive configurations in our Medium and Large MongoDB engineered servers to give you an apples-to-apples comparison.

Before we get into the tests, here are the specs for the servers we're using:

Medium (MD) MongoDB Engineered Server
  • Dual 6-core Intel 5670 CPUs
  • CentOS 6 64-bit
  • 36GB RAM
  • 1Gb Network - Bonded

Large (LG) MongoDB Engineered Server
  • Dual 8-core Intel E5-2620 CPUs
  • CentOS 6 64-bit
  • 128GB RAM
  • 1Gb Network - Bonded

The numbers shown in the table below reflect the average number of IOPS we recorded with a 100% random read/write workload on each of these engineered servers. To measure these IOPS, we used a tool called fio with an 8k block size and iodepth at 128. Remembering that the virtual instance using network-attached storage was able to get 400 IOPS per mount, let's look at how our "base" configurations perform:

Medium - 2 x 64GB SSD RAID1 (Journal) - 4 x 300GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 2937
Random Write IOPS - /var/lib/mongo/logs 1306
Random Read IOPS - /var/lib/mongo/data 1720
Random Write IOPS - /var/lib/mongo/data 772
Random Read IOPS - /var/lib/mongo/data/journal 19659
Random Write IOPS - /var/lib/mongo/data/journal 8869
   
Medium - 2 x 64GB SSD RAID1 (Journal) - 4 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 30269
Random Write IOPS - /var/lib/mongo/logs 13124
Random Read IOPS - /var/lib/mongo/data 33757
Random Write IOPS - /var/lib/mongo/data 14168
Random Read IOPS - /var/lib/mongo/data/journal 19644
Random Write IOPS - /var/lib/mongo/data/journal 8882
   
Large - 2 x 64GB SSD RAID1 (Journal) - 6 x 600GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 4820
Random Write IOPS - /var/lib/mongo/logs 2080
Random Read IOPS - /var/lib/mongo/data 2461
Random Write IOPS - /var/lib/mongo/data 1099
Random Read IOPS - /var/lib/mongo/data/journal 19639
Random Write IOPS - /var/lib/mongo/data/journal 8772
 
Large - 2 x 64GB SSD RAID1 (Journal) - 6 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 32403
Random Write IOPS - /var/lib/mongo/logs 13928
Random Read IOPS - /var/lib/mongo/data 34536
Random Write IOPS - /var/lib/mongo/data 15412
Random Read IOPS - /var/lib/mongo/data/journal 19578
Random Write IOPS - /var/lib/mongo/data/journal 8835

Clearly, the 400 IOPS per mount you'd see from SAN-based storage can't hold a candle to the performance of a physical disk, regardless of whether it's SAS or SSD. As you'd expect, the "Journal" reads and writes have roughly the same IOPS across all of the configurations because all four configurations use 2 x 64GB SSD drives in RAID1. In both server tiers, the SSD drives provide better Data Mount read/write performance than the 15K SAS drives, and the results suggest that having more physical drives in a Data Mount will provide higher average IOPS. To put that observation to the test, I maxed out the number of hard drives in both configurations (10 in the 2U MD server and 34 in the 4U LG server) and recorded the results:

Medium - 2 x 64GB SSD RAID1 (Journal) - 10 x 300GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 7175
Random Write IOPS - /var/lib/mongo/logs 3481
Random Read IOPS - /var/lib/mongo/data 6468
Random Write IOPS - /var/lib/mongo/data 1763
Random Read IOPS - /var/lib/mongo/data/journal 18383
Random Write IOPS - /var/lib/mongo/data/journal 8765
   
Medium - 2 x 64GB SSD RAID1 (Journal) - 10 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 32160
Random Write IOPS - /var/lib/mongo/logs 12181
Random Read IOPS - /var/lib/mongo/data 34642
Random Write IOPS - /var/lib/mongo/data 14545
Random Read IOPS - /var/lib/mongo/data/journal 19699
Random Write IOPS - /var/lib/mongo/data/journal 8764
   
Large - 2 x 64GB SSD RAID1 (Journal) - 34 x 600GB 15k SAS RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 17566
Random Write IOPS - /var/lib/mongo/logs 11918
Random Read IOPS - /var/lib/mongo/data 9978
Random Write IOPS - /var/lib/mongo/data 6526
Random Read IOPS - /var/lib/mongo/data/journal 18522
Random Write IOPS - /var/lib/mongo/data/journal 8722
 
Large - 2 x 64GB SSD RAID1 (Journal) - 34 x 400GB SSD RAID10 (Data)
Random Read IOPS - /var/lib/mongo/logs 34220
Random Write IOPS - /var/lib/mongo/logs 15388
Random Read IOPS - /var/lib/mongo/data 35998
Random Write IOPS - /var/lib/mongo/data 17120
Random Read IOPS - /var/lib/mongo/data/journal 17998
Random Write IOPS - /var/lib/mongo/data/journal 8822

It should come as no surprise that by adding more drives into the configuration, we get better IOPS, but you might be wondering why the results aren't "betterer" when it comes to the IOPS in the SSD drive configurations. While the IOPS numbers improve going from four to ten drives in the medium engineered server and from six to thirty-four drives in the large engineered server, they don't increase as significantly as the IOPS differences in the SAS drives. This is what I meant when I explained that several factors contribute to and potentially limit IOPS performance. In this case, the limiting factor throttling the (ridiculously high) IOPS is the RAID card we are using in the servers. We've been working with our RAID card vendor to test a new card that would open a little more headroom for SSD IOPS, but so far that replacement card doesn't provide the consistency and reliability we need for these servers (which are just as important as speed).

There are probably a dozen other observations I could point out about how each result compares with the others (and why), but I'll stop here and open the floor for you. Do you notice anything interesting in the results? Does anything surprise you? What kind of IOPS performance have you seen from your server/cloud instance when running a tool like fio?

-Kelly

December 10, 2012

Startup Series: GiveToBenefit

People often ask me why I enjoy working at SoftLayer, and that's a tough question to answer fully. I ALWAYS say that great people and great products (in that order) are some of the biggest reasons, and I explain how refreshing it is to work for a company that prioritizes "solving problems" over "selling." I share the SoftLayer "Innovate or Die" motto and talk about how radically the world of hosting is changing, and I get to brag about meeting some of the world's most interesting up-and-coming entrepreneurs and how I have the unique opportunity to help amazing startups grow into Internet powerhouses.

I'm the West Coast community development manager for Catalyst, so I get to tell the SoftLayer story to hundreds of entrepreneurs and startups every month at various meetups, demo days, incubator office hours and conferences. In turn, I get to hear the way those entrepreneurs and startups are changing the world. It's a pretty amazing gig. When I was chatting with a few of my colleagues recently, I realized that I'm in a pretty unique position ... Not everyone gets to hear these stories. I've decided that I owe it to my coworkers, our Catalyst participants and anyone else who will listen to write a semi-regular blog series about some of the cool businesses SoftLayer is helping.

Picking one Catalyst participant to feature for this first blog was a pretty challenging task. With the holidays upon us, one company I'm working closely with jumped out as the perfect candidate to feature in this "season of giving": GiveToBenefit.

GiveToBenefit

GiveToBenefit (or G2B) is a social enterprise based in Philadelphia dedicated to helping non-profits receive high-quality goods from select suppliers through crowd-funding. G2B is unique among the startups in the Catalyst program in that it is a "double bottom line" company: It is designed to generate profit for its business while at the same time creating positive social impact.

Crowd-funding — raising money from the public via online donations — is a relatively new activity, but it has already become a HUGE market. In 2010, more than 38 million people gave $4.5 billion to causes online ... $4.5 BILLION was donated online to fuel ideas and businesses. Chances are, you've heard of companies like Kickstarter and DonorsChoose, so instead of taking time to talk about the crowd-funding process, I can share how GiveToBenefit differs from those other platforms:

Serves Non-Profits Exclusively - GiveToBenefit works exclusively with non-profit companies. They look for non-profits who don't have the financial or human resources to do their own fundraising and who can benefit from the high-quality goods their suppliers provide.

Marketing and Strategy Assistance - GiveToBenefit actively helps the organization market the campaign. The G2B team is ready, willing and able to offer suggestions, answer questions and provide feedback throughout the process, and given the fact that many non-profits lack technology resources, they usually get very involved with each cause.

No Additional Donor Fees - An extremely important point is that GiveToBenefit does not charge donors a fee for their contribution beyond the mandatory fee charged by the credit card processor. More of every donated dollar goes to its intended cause: Your entire donation goes to the non-profit for the very specific purpose you chose, so there's no question about whether your donation will go to what you hope for.

Building Connections with High-Quality Suppliers - GiveToBenefit found a way to elevate the role of the supplier of the goods that non-profits receive and use. G2B features brands whose products promise to perform better and last longer than the items the charities have access to. GiveToBenefit derives revenue from its relationships with these suppliers, and G2B uses part of the fee it charges the supplier to fund the marketing of the non-profit's online campaign.

The idea is to go beyond "doing good" to "doing better." I could go on and on about the innovative ways they're "discovering better ways to do good," but the best way to show off their platform is to send you to the three campaigns they recently launched:

GiveToBenefit Campaigns

Whether you want to contribute to purchasing a Watermark water purification system for the Margaret E. Moul Home for people with neuromuscular disabilities or you want to fill the People's Light & Theatre and Plays & Players Theater with the beautiful sounds of Hailun Pianos, you can contribute and know that your donation is making a difference for some very worthy non-profits.

If you'd like to learn more about GiveToBenefit or if you think one of your favorite non-profits could benefit from a G2B campaign, let me know (jkrammes@softlayer.com), and I'll introduce you to G2B founder and visionary Dan Sossaman.

-@JoshuaKrammes

December 6, 2012

MongoDB: Architectural Best Practices

With the launch of our MongoDB solutions, developers can provision powerful, optimized, horizontally scaling NoSQL database clusters in real-time on bare metal infrastructure in SoftLayer data centers around the world. We worked tirelessly with our friends at 10gen — the creators of MongoDB — to build and tweak hardware and software configurations that enable peak MongoDB performance, and the resulting platform is pretty amazing. As Duke mentioned in his blog post, those efforts followed 10gen's MongoDB best practices, but what he didn't mention was that we created some architectural best practices of our own for MongoDB deployments on our platform.

The MongoDB engineered servers that you order from SoftLayer already implement several of the recommendations you'll see below, and I'll note which have been incorporated as we go through them. Given the scope of the topic, it's probably easiest to break down this guide into a few sections to make it a little more digestible. Let's take a look at the architectural best practices of running MongoDB through the phases of the roll-out process: Selecting a deployment strategy to prepare for your MongoDB installation, the installation itself, and the operational considerations of running it in production.

Deployment Strategy

When planning your MongoDB deployment, you should follow Sun Tzu's (modified) advice: "If you know the [friend] and know yourself, you need not fear the result of a hundred battles." We substituted "friend" for "enemy" in that advice because the other party here is MongoDB, and it's not your adversary. If you aren't familiar with MongoDB, the top of your to-do list should be to read MongoDB's official documentation. That information will give you the background you'll need as you build and use your database. When you feel comfortable with what MongoDB is all about, it's time to "know yourself."

Your most important consideration will be the current and anticipated sizes of your data set. Understanding the volume of data you'll need to accommodate will be the primary driver for your choice of individual physical nodes as well as your sharding plans. Once you've established an expected size of your data set, you need to consider the importance of your data and how tolerant you are of the possibility of lost or lagging data (especially in replicated scenarios). With this information in hand, you can plan and start testing your deployment strategy.

It sounds a little strange to hear that you should test a deployment strategy, but when it comes to big data, you want to make sure your databases start with a strong foundation. You should perform load testing scenarios on a potential deployment strategy to confirm that a given architecture will meet your needs, and there are a few specific areas that you should consider:

Memory Sizing
MongoDB (like many data-oriented applications) works best when the data set can reside in memory. Nothing performs better than a MongoDB instance that does not require disk I/O. Whenever possible, select a platform that has more available RAM than your working data set size. If your data set exceeds the available RAM for a single node, then consider using sharding to increase the amount of available RAM in a cluster to accommodate the larger data set. This will maximize the overall performance of your deployment. If you notice page faults when you put your database under production load, they may indicate that you are exceeding the available RAM in your deployment.
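
As a quick sanity check, the mongo shell can report resident memory and page fault counters from serverStatus. This is a hedged illustration rather than a formal monitoring strategy; MMS (discussed later) gives you the same data over time.

// Hedged spot check from the mongo shell: sustained growth in page faults under
// production load suggests the working set no longer fits in RAM.
var status = db.serverStatus();
print("resident memory (MB): " + status.mem.resident);    // RAM in use by mongod
print("page faults: " + status.extra_info.page_faults);   // cumulative fault count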

Disk Type
If speed is not your primary concern or if you have a data set that is far larger than any in-memory strategy can support, selecting the proper disk type for your deployment is important. IOPS will be key in selecting your disk type; the higher the IOPS, the better MongoDB will perform. Local disks should be used whenever possible (network storage can cause high latency and poor performance for your deployment). It's also advised that you use RAID 10 when creating disk arrays.

To give you an idea of what kind of IOPS to expect from a given type of drive, these are the approximate ranges of IOPS per drive in SoftLayer MongoDB engineered servers:

SATA II – 100-200 IOPS
15K SAS – 300-400 IOPS
SSD – 7,000-8,000 IOPS (read) 19,000-20,000 IOPS (write)

CPU
Clock speed and the number of available processors become a consideration if you anticipate using MapReduce. It has also been noted that when running a MongoDB instance with the majority of the data in memory, clock speed can have a major impact on overall performance. If you are planning to use MapReduce or you're able to operate with a majority of your data in memory, consider a deployment strategy that includes a CPU with a high clock/bus speed to maximize your operations per second.

Replication
Replication provides high availability of your data if a node fails in your cluster. It should be standard to replicate with at least three nodes in any MongoDB deployment. The most common configuration for replication with three nodes is a 2x1 deployment — two nodes in a primary data center with a backup node in a secondary data center:

MongoDB Replication
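
For reference, a 2x1 replica set like the one in the diagram can be initiated from the mongo shell roughly as follows. The hostnames are hypothetical, and giving the remote backup member priority 0 is one optional way to keep it from being elected primary.

// Hedged sketch of a three-node, 2x1 replica set: two nodes in the primary data
// center, one backup node in a secondary data center. Hostnames are placeholders.
rs.initiate({
    _id: "rs0",
    members: [
        { _id: 0, host: "mongo1.dc1.example.com:27017" },
        { _id: 1, host: "mongo2.dc1.example.com:27017" },
        { _id: 2, host: "mongo3.dc2.example.com:27017", priority: 0 }  // backup DC
    ]
});
rs.status();   // confirm one primary and two healthy secondaries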

Sharding
If you anticipate a large, active data set, you should deploy a sharded MongoDB deployment. Sharding allows you to partition a single data set across multiple nodes. You can allow MongoDB to automatically distribute the data across nodes in the cluster or you may elect to define a shard key and create range-based sharding for that key.

Sharding may also help write performance, so you can elect to shard even if your data set is small but requires a high number of updates or inserts. It's important to note that when you deploy a sharded set, MongoDB will require three (and only three) config server instances, which are specialized Mongo runtimes that track the current shard configuration. Loss of one of these nodes will cause the cluster to go into a read-only mode (for the configuration only) and will require that all config servers be brought back online before any configuration changes can be made.
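
Here's a hedged mongo shell sketch of what enabling range-based sharding looks like. The database, collection and shard key are purely illustrative; choosing a real shard key deserves careful thought about your query patterns.

// Hedged example run against a mongos router; names and the key are placeholders.
sh.enableSharding("appdata");                               // shard the database
sh.shardCollection("appdata.events", { customerId: 1 });    // range-based shard key
sh.status();                                                // view chunk distribution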

Write Safety Mode
There are several write safety modes that govern how MongoDB will handle the persistence of the data to disk. It is important to consider which mode best fits your needs for both data integrity and performance. The following write safety modes are available:

None – This mode provides a deferred writing strategy that is non-blocking. This allows for high performance, but there is a small chance that data can be lost if a node fails. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency. The 'None' strategy also provides no protection in the case of network failures. That lack of protection makes this mode highly unreliable, so it should only be used when performance is a priority and data integrity is not a concern.

Normal – This is the default for MongoDB if you do not select any other mode. It provides a deferred writing strategy that is non-blocking. This will allow for high performance, however there is a small opportunity in the case of a node failing that data can be lost. There is also the possibility that data written to one node in a cluster will not be immediately available on all nodes in that cluster for read consistency.

Safe – This mode will block until MongoDB has acknowledged that it has received the write request but will not block until the write is actually performed. This provides a better level of data integrity and will ensure that read consistency is achieved within a cluster.

Journal Safe – Journals provide a recovery option for MongoDB. Using this mode will ensure that the data has been acknowledged and a Journal update has been performed before returning.

Fsync - This mode provides the highest level of data integrity and blocks until a physical write of the data has occurred. This comes with a degradation in performance and should be used only if data integrity is the primary concern for your application.
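
In the 2012-era mongo shell, these modes map roughly to getLastError options issued after a write. The collection and document below are hypothetical, and most drivers expose the same options as write concern parameters.

// Hedged sketch: the write is issued first, then getLastError controls how long
// the client blocks and what durability it demands. Names are placeholders.
db.orders.insert({ item: "widget", qty: 5 });
// "None"/"Normal": simply skip getLastError and don't wait for acknowledgement.

db.runCommand({ getLastError: 1, w: 1 });             // "Safe": wait for acknowledgement
db.runCommand({ getLastError: 1, w: 1, j: true });    // "Journal Safe": wait for journal
db.runCommand({ getLastError: 1, fsync: true });      // "Fsync": wait for physical write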

Testing the Deployment
Once you've determined your deployment strategy, test it with a data set similar to your production data. 10gen has several tools to help you with load testing your deployment, and the mongo shell includes a tool named 'benchRun' which can execute operations from within a JavaScript test harness. These tools will return operation information as well as latency numbers for each of those operations. If you require more detailed information about the MongoDB instance, consider using the mongostat command or MongoDB Monitoring Service (MMS) to monitor your deployment during the testing.

Installation

When performing the installation of MongoDB, a few considerations can help create both a stable and performance-oriented solution. 10gen recommends using 64-bit CentOS as the base operating system if at all possible. If you try installing MongoDB on a 32-bit operating system, you might run into file size limits that cause issues, and if you feel the urge to install it on Windows, you'll see performance issues if virtual memory begins to be utilized by the OS to make up for a lack of RAM in your deployment. As a result, 32-bit operating systems and Windows should be avoided on MongoDB servers. SoftLayer provisions CentOS 6.X 64-bit operating systems by default on all of our MongoDB engineered server deployments.

When you've got CentOS 64-bit installed, you should also make the following changes to maximize your performance (all of which are included by default on all SoftLayer engineered servers):

Set SSD Read Ahead Defaults to 16 Blocks - SSD drives have excellent seek times allowing for shrinking the Read Ahead to 16 blocks. Spinning disks might require slight buffering so these have been set to 32 blocks.

noatime - Adding the noatime option eliminates the need for the system to make writes to the file system for files which are simply being read — or in other words: Faster file access and less disk wear.

Turn NUMA Off in BIOS - Linux, NUMA and MongoDB tend not to work well together. If you are running MongoDB on NUMA hardware, we recommend turning NUMA off (running with an interleave memory policy). If you don't, problems will manifest in strange ways like massive slowdowns for periods of time or high system CPU time.

Set ulimit - We have set the ulimit to 64000 for open files and 32000 for user processes to prevent failures due to a loss of available file handles or user processes.

Use ext4 - We have selected ext4 over ext3. We found ext3 to be very slow in allocating files (or removing them). Additionally, access within large files is poor with ext3.

One last tip on installation: Make the Journal and Data volumes distinct physical volumes. If the Journal and Data directories reside on a single physical volume, flushes to the Journal will interrupt data access and cause spikes of high latency within your MongoDB deployment.

Operations

Once a MongoDB deployment has been promoted to production, there are a few recommendations for monitoring and optimizing performance. You should always have the MMS agent running on all MongoDB instances to help monitor the health and performance of your deployment. This tool is also very useful if you have 10gen MongoDB Cloud Subscriptions because it provides useful debugging data for the 10gen team during support interactions. In addition to MMS, you can use the mongostat command (mentioned in the deployment section) to see runtime information about the performance of a MongoDB node. If either of these tools flags performance issues, sharding or indexing are first-line options to resolve them:

Indexes - Indexes should be created for a MongoDB deployment if monitoring tools indicate that field-based queries are performing poorly. Always use indexes when you are querying data based on distinct fields to help boost performance.
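
For example (a hedged sketch with hypothetical collection and field names), creating an index on a frequently queried field and confirming it's used takes one line each in the 2012-era shell:

// Hedged example: index a field that monitoring shows is queried often.
db.users.ensureIndex({ email: 1 });                         // single-field index
db.users.find({ email: "test@example.com" }).explain();     // verify the index is used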

Sharding - Sharding can be leveraged when the overall performance of the node is suffering because of a large operating data set. Be sure to shard before you get in the red: The system only splits chunks for sharding on insert or update, so if you wait too long to shard, you may see uneven distribution for a period of time (or indefinitely, depending on your data set and shard key strategy).

I know it seems like we've covered a lot over the course of this blog post, but this list of best practices is far from exhaustive. If you want to learn more, the MongoDB forums are a great resource to connect with the rest of the MongoDB community and learn from their experiences, and the documentation on MongoDB's site is another phenomenal resource. The best people to talk to when it comes to questions about MongoDB are the folks at 10gen, so I also highly recommend taking advantage of MongoDB Cloud Subscriptions to get their direct support for your one-off questions and issues.

-Harold

December 5, 2012

Breaking Down 'Big Data' - Database Models

Forrester defines big data as "techniques and technologies that make capturing value from data at an extreme scale economical." Gartner says, "Big data is the term adopted by the market to describe extreme information management and processing issues which exceed the capability of traditional information technology along one or multiple dimensions to support the use of the information assets." Big data demands extreme horizontal scale that traditional IT management can't handle, and it's not a challenge exclusive to the Facebooks, Twitters and Tumblrs of the world ... Just look at the Google search volume for "big data" over the past eight years:

Big Data Search Interest

Developers are collectively facing information overload. As storage has become more and more affordable, it's easier to justify collecting and saving more data. Users are more comfortable with creating and sharing content, and we're able to track, log and index metrics and activity that previously would have been deleted in consideration of space constraints or cost. As the information age progresses, we are collecting more and more data at an ever-accelerating pace, and we're sharing that data at an incredible rate.

To understand the different facets of this increased usage and demand, Gartner came up with the three V's of big data that vary significantly from traditional data requirements: Volume, Velocity and Variety. Larger, more abundant pieces of data ("Volume") are coming at a much faster speed ("Velocity") in formats like media and walls of text that don't easily fit into a column-and-row database structure ("Variety"). Given those equally important factors, many of the biggest players in the IT world have been hard at work to create solutions that provide the scale and speed developers need when they build social, analytics, gaming, financial or medical apps with large data sets.

When we talk about scaling databases here, we're talking about scaling horizontally across multiple servers rather than scaling vertically by upgrading a single server — adding more RAM, increasing HDD capacity, etc. It's important to make that distinction because it leads to a unique challenge shared by all distributed computer systems: The CAP Theorem. According to the CAP theorem, a distributed storage system must choose to sacrifice either consistency (that everyone sees the same data) or availability (that you can always read/write) while maintaining partition tolerance (the system continues to operate despite arbitrary message loss or failure of part of the system).

Let's take a look at a few of the most common database models, what their strengths are, and how they handle the CAP theorem compromise of consistency v. availability:

Relational Databases

What They Do: Stores data in rows/columns. Parent-child records can be joined remotely on the server. Provides speed over scale. Some capacity for vertical scaling, poor capacity for horizontal scaling. This type of database is where most people start.
Horizontal Scaling: In a relational database system, horizontal scaling is possible via replication — sharing data between redundant nodes to ensure consistency — and some people have success with sharding — horizontal partitioning of data — but those techniques add a lot of complexity.
CAP Balance: Prefer consistency over availability.
When to use: When you have highly structured data, and you know what you'll be storing. Great when production queries will be predictable.
Example Products: Oracle, SQLite, PostgreSQL, MySQL

Document-Oriented Databases

What They Do: Stores data in documents. Parent-child records can be stored in the same document and returned in a single fetch operation with no join (see the sketch after this list). The server is aware of the fields stored within a document, can query on them, and return their properties selectively.
Horizontal Scaling: Horizontal scaling is provided via replication, or replication + sharding. Document-oriented databases also usually support relatively low-performance MapReduce for ad-hoc querying.
CAP Balance: Generally prefer consistency over availability
When to Use: When your concept of a "record" has relatively bounded growth, and can store all of its related properties in a single doc.
Example Products: MongoDB, CouchDB, BigCouch, Cloudant
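
To make the "no join" point concrete, here is a hedged mongo shell sketch with hypothetical names: the parent record and its child items live in one document and come back in a single fetch.

// Hedged illustration: an order and its line items stored as one document.
db.orders.insert({
    _id: 1001,
    customer: "Acme Co",
    items: [                             // child records embedded in the parent
        { sku: "A-100", qty: 2 },
        { sku: "B-250", qty: 1 }
    ]
});
db.orders.findOne({ _id: 1001 });        // one round trip returns parent and children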

Key-Value Stores

What They Do: Stores an arbitrary value at a key. Most can perform simple operations on a single value. Typically, each property of a record must be fetched in multiple trips, with Redis being an exception. Very simple, and very fast.
Horizontal Scaling: Horizontal scale is provided via sharding.
CAP Balance: Generally prefer consistency over availability.
When to Use: Very simple schemas, caching of upstream query results, or extreme speed scenarios (like real-time counters)
Example Products: CouchBase, Redis, PostgreSQL HStore, LevelDB

BigTable-Inspired Databases

What They Do: Data is put into column-oriented stores inspired by Google's BigTable paper. These databases have tunable CAP parameters and can be adjusted to prefer either consistency or availability. Both of the major implementations are somewhat operationally intensive.
Horizontal Scaling: Good speed and very wide horizontal scale capabilities.
CAP Balance: Prefer consistency over availability
When to Use: When you need consistency and write performance that scales past the capabilities of a single machine. Hbase in particular has been used with around 1,000 nodes in production.
Example Products: Hbase, Cassandra (inspired by both BigTable and Dynamo)

Dynamo-Inspired Databases

What They Do: Distributed key/value stores inspired by Amazon's Dynamo paper. A key written to a dynamo ring is persisted in several nodes at once before a successful write is reported. Riak also provides a native MapReduce implementation.
Horizontal Scaling: Dynamo-inspired databases usually provide for the best scale and extremely strong data durability.
CAP Balance: Prefer availability over consistency.
When to Use: When the system must always be available for writes and effectively cannot lose data.
Example Products: Cassandra, Riak, BigCouch

Each of the database models has strengths and weaknesses, and there are huge communities that support each of the open source examples I gave in each model. If your database is a bottleneck or you're not getting the flexibility and scalability you need to handle your application's volume, velocity and variety of data, start looking at some of these "big data" solutions.

Tried any of the above models and have feedback that differs from ours? Leave a comment below and tell us about it!

-@marcalanjones

December 4, 2012

Big Data at SoftLayer: MongoDB

In one day, Facebook's databases ingest more than 500 terabytes of data, Twitter processes 500 million Tweets and Tumblr users publish more than 75 million posts. With such an unprecedented volume of information, developers face significant challenges when it comes to building an application's architecture and choosing its infrastructure. As a result, demand has exploded for "big data" solutions — resources that make it possible to process, store, analyze, search and deliver data from large, complex data sets. In light of that demand, SoftLayer has been working in strategic partnership with 10gen — the creators of MongoDB — to develop a high-performance, on-demand, big data solution. Today, we're excited to announce the launch of specialized MongoDB servers at SoftLayer.

If you've configured an infrastructure to accommodate big data, you know how much of a pain it can be: You choose your hardware, you configure it to run NoSQL, you install an open source NoSQL project that you think will meet your needs, and you keep tweaking your environment to optimize its performance. Assuming you have the resources (and patience) to get everything running efficiently, you'll wind up with the horizontally scalable database infrastructure you need to handle the volume of content you and your users create and consume. SoftLayer and 10gen are making that process a whole lot easier.

Our new MongoDB solutions take the time and guesswork out of configuring a big data environment. We give you an easy-to-use system for designing and ordering everything you need. You can start with a single server or roll out multiple servers in a single replica set across multiple data centers, and in under two hours, an optimized MongoDB environment is provisioned and ready to be used. I stress that it's an "optimized" environment because that's been our key focus. We collaborated with 10gen engineers on hardware and software configurations that provide the most robust performance for MongoDB, and we incorporated many of their MongoDB best practices. The resulting "engineered servers" are big data powerhouses:

MongoDB Configs

From each engineered server base configuration, you can customize your MongoDB server to meet your application's needs, and as you choose your upgrades from the base configuration, you'll see the thresholds at which you should consider upgrading other components. As your data set's size and the number of indexes in your database increase, you'll need additional RAM, CPU, and storage resources, but you won't need them in the same proportions — certain components become bottlenecks before others. Sure, you could upgrade all of the components in a given database server at the same rate, but if, say, you update everything when you only need to upgrade RAM, you'd be adding (and paying for) unnecessary CPU and storage capacity.

Using our new Solution Designer, it's very easy to graphically design a complex multi-site replica set. Once you finalize your locations and server configurations, you'll click "Order," and our automated provisioning system will kick into high gear. It deploys your server hardware, installs CentOS (with OS optimizations to provide MongoDB performance enhancements), installs MongoDB, installs MMS (MongoDB Monitoring Service) and configures the network connection on each server to cluster it with the other servers in your environment. A process that may have taken days of work and months of tweaking is completed in less than four hours. And because everything is standardized and automated, you run much less risk of human error.

MongoDB Configs

One of the other massive benefits of working so closely with 10gen is that we've been able to integrate 10gen's MongoDB Cloud Subscriptions into our offering. Customers who opt for a MongoDB Cloud Subscription get additional MongoDB features (like SSL and SNMP support) and support direct from the MongoDB authority. As an added bonus, since the 10gen team has an intimate understanding of the SoftLayer environment, they'll be able to provide even better support to SoftLayer customers!

You shouldn't have to sacrifice agility for performance, and you shouldn't have to sacrifice performance for agility. Most of the "big data" offerings in the market today are built on virtual servers that can be provisioned quickly but offer meager performance levels relative to running the same database on bare metal infrastructure. To get the performance benefits of dedicated hardware, many users have chosen to build, roll out and tweak their own configurations. With our MongoDB offering, you get the on-demand availability and flexibility of a cloud infrastructure with the raw power and full control of dedicated hardware.

If you've been toying with the idea of rolling out your own big data infrastructure, life just got a lot better for you.

-Duke

November 27, 2012

Tips and Tricks - Building a jQuery Plugin (Part 1)

I've written several blogs detailing the use of different jQuery plugins (like Select2, LazyLoad and equalHeights), and in the process, I've noticed an increasing frustration among the development community when it comes to building jQuery plugins. The resources and documentation I've found online aren't as clear and easy to follow as they could be, so in my next few posts, I'll break down the process to make jQuery plugin creation simple and straightforward. In this post, we'll cover the basic structure of a plugin and where to insert your own functionality, and in Part 2, we'll pick a simple task and add on to our already-made structure.

Before I go any further, it's probably important to address a question you might be asking yourself: "Why would I want to make my own plugin?" The best reason that comes to my mind is portability. If you've ever created a large-scale project, take a look back into your source code and note how many of the hundreds of lines of jQuery code you could put into a plugin to reuse on a different project. You probably invested a lot of time and energy into that code, so it doesn't make sense to reinvent the wheel if you ever need that functionality again. If that's not enough of a reason for you, I can also tell you that if you develop your own jQuery plugin, you'll level-up in cool points, and the jQuery community will love you.

For this post, let's create a jQuery plugin that simply returns, "This is our awesome plugin!" Our first step involves putting together the basic skeleton used by every plugin:

(function($) {
    $.fn.slPlugin = function() {
        // Awesome plugin stuff goes here
    };
})(jQuery);

This is your template — your starting point. Practice it. Remember it. Love it. The "slPlugin" piece is what I chose to name this plugin. It's best to name your plugin something unique ... I always run a quick Google search to ensure I don't duplicate the name of a plugin I (or someone else) might need to use in a project alongside my plugin. In this case, we're calling the example plugin slPlugin because SoftLayer is awesome, and I like naming my plugins after awesome things. I'll save this code in a file called jquery.slPlugin.js.
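
Even with nothing but the skeleton in place, the plugin is already callable on any jQuery selection; it just doesn't do anything yet. As a quick sanity check (the selector below is only a placeholder, and it assumes jQuery and jquery.slPlugin.js are both loaded on the page), you could run:

$(document).ready(function() {
    $("p").slPlugin(); // a valid call against any selection; the skeleton does nothing yet
});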

Now that we have our plugin's skeleton, let's add some default values for variables:

(function($) {
    $.fn.slPlugin = function(options) {
        var defaults = {
            myVar: "default", // this will be the default value of this var
            anotherVar: 0,
            coolVar: "this is cool"
        };
        var options = $.extend(defaults, options);
    };
})(jQuery);

Let's look at the changes we made between the first example and this one. You'll notice that in the second line we added "options" so that it reads $.fn.slPlugin = function(options) {. We do this because our function now accepts arguments, and we need to let the function know that. The next difference you come across is the var defaults block, where we provide default values for our variables. The var options = $.extend(defaults, options); line then merges anything the caller passes in with those defaults, so if you don't define a value for a given variable when you call the plugin, its default value will be used.
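
To illustrate how that merge behaves, here's a quick sketch of two hypothetical calls against the plugin as it stands right now (the selector and the override value are just examples):

// Uses the defaults defined above: myVar is "default", anotherVar is 0, coolVar is "this is cool"
$("p").slPlugin();

// Overrides only myVar; anotherVar and coolVar keep their defaults
// because $.extend() merges the caller's object with the defaults object
$("p").slPlugin({
    myVar: "custom"
});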

Now let's have our plugin return the message we want to send:

(function($) {
    $.fn.slPlugin = function(options) {
        var defaults = {
            myVar: "This is", // this will be the default value of this var
            anotherVar: "our awesome",
            coolVar: "plugin!"
        };
        var options = $.extend(defaults, options);
        var ourString; // declared here so we don't create an accidental global
        this.each(function() {
            ourString = options.myVar + " " + options.anotherVar + " " + options.coolVar;
        });
        return ourString;
    };
})(jQuery);

We've defined default values for our variables, concatenated the merged option values into a single string inside this.each(), and returned that string at the end of the function. If our jQuery plugin is included in a project and no values are provided when it's called, slPlugin will return, "This is our awesome plugin!"
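
As a quick illustration of the finished example (the selector is hypothetical, and the selection needs to match at least one element for this.each() to run):

// With no options, the defaults are used:
var message = $("p").slPlugin();
console.log(message); // "This is our awesome plugin!"

// Passing options overrides the corresponding defaults:
var custom = $("p").slPlugin({ anotherVar: "your awesome" });
console.log(custom); // "This is your awesome plugin!"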

It seems rather rudimentary at this point, but we have to crawl before we walk. This introductory post lays the groundwork of coding a jQuery plugin, and we'll continue building on this example in the next installment of this series. As you've seen with the LazyLoad, equalHeights and Select2 plugins, there are much more complicated things we can do with our plugin, and we'll get there. Sneak Preview: In the next installment, we'll be creating and implementing a truncation function for our plugin ... Get excited!

-Cassandra

November 21, 2012

Risk Management: The Importance of Redundant Backups

You (should) know the importance of having regular backups of your important data, but to what extent does data need to be backed up to be safe? With a crowbar and a shove, thieves broke into my apartment and stole the backups I used for hundreds of gigabytes of home videos, photo files and archives of past computers. A Drobo RAID enclosure and an external drive used by Apple Time Machine were both stolen, and if I didn't have the originals on my laptop or a redundant offsite backup, I would have lost all of my data. My experience is not uncommon, and it's a perfect example of an often understated principle that everyone should understand: You need redundant backups.

It's pretty simple: You need to back up your data regularly. When you've set up that backup schedule, you should figure out a way to back up your data again. After you've got a couple of current backups of your files, you should consider backing up your backups off-site. It seems silly to think of backing up backups, but if anything happens — failed drives, theft, fire, flood, etc. — those backups could be lost forever, and if you've ever lost a significant amount of data to a hard drive failure or an experience like mine, you know that good backups are worth their weight in gold.

Admittedly, there is a point of diminishing returns when it comes to how much redundancy is needed — it's not worth the time/effort/cost to back up your backups ad infinitum — so here are the best practices I've come up with over the course of my career in the information technology industry:

  • Plan and schedule regular backups to keep your archives current. If your laptop's hard drive dies, having backups from last June probably won't help you as much as backups from last night.
  • Make sure your data exists on at least three different media. It might seem unnecessary, but if you're already being intentional about backing up your information, take it one step further and replicate those backups at least one more time.
  • Something might happen to your easy onsite backups, so it's important to consider off-site backups as well. There are plenty of companies offering secure online backups for home users, and those are generally easy to use (even if they can be a little slow).
  • Check your backups regularly. A backup is useless if it isn't capturing the correct data or running on the correct schedule.
  • RAID is not a backup solution. Yes, RAID can duplicate data across hard drives, but that doesn't mean the data is "backed up" ... If the RAID array fails, all of the hard drives (and all of the data) in the array fail with it.

It's important to note here that "off-site" is a pretty relative term when it comes to backups. Many SoftLayer customers back up a primary drive on their server to a secondary drive on the same server (duplicating the data away from the original drive), and while that's better than nothing, it's also a little risky because it's possible that the server could fail and corrupt both drives. Every backup product SoftLayer offers for customers is off-site relative to the server itself (though it might be in the same facility), so we also make it easy to have your backup in another city or on a different continent.

As I've mentioned already, once you set up your backups, you're not done. You need to check your backups regularly for failures and test them to confirm that you can recover your data quickly in the event of a disaster. Don't just view a file listing. Try extracting files or restoring the whole backup archive. If you're able to run a full restore without the pressure of an actual emergency, it'll prove that you're ready for the unexpected ... Like a fire drill for your backups.

Setting up a backup plan doesn't have to be scary or costly. If you don't feel like you could recover quickly after losing your data, spend a little time evaluating ways to make a recovery like that easy. It's crazy, but a big part of "risk management," "disaster recovery" and "business continuity" is simply making sure your data is securely backed up regularly and available to you when you need it.

Plan, prepare, back up.

-Lyndell
