3-bars-3-questions

January 15, 2013

Startup Series: Moqups

Every member of the Catalyst team is given one simple goal: Find the most innovative and creative startups on the planet and get them on the SoftLayer network. We meet entrepreneurs at conferences and events around the world, we team up with the most influential startup accelerators and incubators, and we hunt for businesses that are making waves online. With the momentum Catalyst built in 2012, our message has started spreading much faster than the community development team could spread it on our own, and now it seems like we've earned a few evangelists in the startup community. We have those evangelists to thank for bringing Moqups to our door.

In a Hacker News thread, a user posted about needing hosting for a server/startup, and a recommendation for the Catalyst program was one of the top-rated results. The founders of Moqups saw that recommendation, researched SoftLayer's hosting platform and submitted an application to become a Catalyst partner. As soon as we saw the unbelievable HTML5 app the Moqups team created to streamline and simplify the process of creating wireframes and mockups for website and application design, we knew they were a perfect fit to join the program.

If you've ever had to create a site prototype or UI mockup, you know how unwieldy the process can be. You want to sketch a layout and present it clearly and cleanly, but there aren't many viable resources between "marker on a whiteboard" and "rendering in Photoshop" to accomplish that goal. That's the problem the Moqups team set out to solve ... Can a web app provide the functionality and flexibility you'd need to fill that gap?

We put their answer to that question to the test. I told Kevin about Moqups and asked him to spend a few minutes wireframing the SoftLayer Blog ... About ten minutes later, he sent me this (Click for the full Moqups version):

SoftLayer Blog Moqup

Obviously, wireframing an existing design is easier than creating a new design from scratch, but Kevin said he was floored by how intuitive the Moqups platform made the process. In fact, the "instructions" for how to use Moqups are actually provided in an example "Quick Introduction to Moqups" project on the home page. That example project allows you to tweak, add and adjust content to understand how the platform works, and because it's all done in HTML5, the user experience is seamless.

Moqups

Put it to the test for yourself: How long will it take you to create a wireframe of your existing website (similar to what Kevin did with the SoftLayer Blog)? You get down-to-the-pixel precision, you can group objects together, and Moqups helps you line up or center all of the different pieces of your site. Their extensive library of stencils supplements any custom images you upload, so you can go through the whole process of creating a site mockup without "drawing" anything by hand!

I'm actually surprised that the Moqups team heard about SoftLayer before our community development team heard about them ... In November, I was in Bucharest, Romania, for HowtoWeb, so I was right in their back yard! Central and Eastern European startups are blowing up right now, and Moqups is a perfect example of what we're seeing from that region in EMEA.

Oh, and if you know of a crazy cool startup like Moqups that could use a little hosting help from SoftLayer, tell them about Catalyst!

-@EmilyBlitz

January 10, 2013

Web Development - JavaScript Packaging

If you think of JavaScript as the ugly duckling of programming languages, think again! It got a bad rap in the earlier days of the web because developers knew enough just to get by but didn't really respect it like they did Java, PHP or .Net. Like other well-known and heavily used languages, JavaScript contains various data types (String, Boolean, Number, etc.), objects and functions, and it is even capable of inheritance. Unfortunately, that functionality is often overlooked, and many developers seem to implement it as an afterthought: "Oh, we need to add some neat jQuery effects over there? I'll just throw some inline JavaScript here." That kind of implementation perpetuates a stereotype that JavaScript code is unorganized and difficult to maintain, but it doesn't have to be! I'm going to show you how easy it is to maintain and organize your code base by packaging your JavaScript classes into a single file to be included with your website.

There are a few things to cover before we jump into code:

  1. JavaScript Framework - Mootools is my framework of choice, but you can use whatever JavaScript framework you'd like.
  2. Classes - Because I see JavaScript as another programming language that I respect (and is capable of object-oriented-like design), I write classes for EVERYTHING. Don't think of your JavaScript code as something you use once and throw away. Write your code to be generic enough to be reused wherever it's placed. Object-oriented design is great for this! Mootools makes object-oriented design easy to do, so this point reinforces the point above.
  3. Class Files - Just like you'd organize your PHP to contain one class per file, I do the exact same thing with JavaScript. Note: Each of the class files in the example below is named after its class, with .js appended.
  4. Namespacing - I will be organizing my classes in a way that will only add a single property — PT — to the global namespace. I won't get into the details of namespacing in this blog because I'm sure you're already thinking, "The code! The code! Get on with it!" You can namespace whatever is right for your situation.

For this example, our classes will be food-themed because ... well ... I enjoy food. Let's get started by creating our base object:

/*
---
name: PT
description: The base class for all the custom classes
authors: [Philip Thompson]
provides: [PT]
...
*/
var PT = {};

We now have an empty object from which we'll build all of our classes. I'll go into more detail about the comment section later, but for now let's build our first class: PT.Ham.

/*
---
name: PT.Ham
description: The ham class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Ham]
...
*/
 
(function() {
    PT.Ham = new Class({
        // Custom code here...
    });
}());

As I mentioned in point three (above), PT.Ham should be saved in the file named PT.Ham.js. When we create our second class, PT.Pineapple, we'll store it in PT.Pineapple.js:

/*
---
name: PT.Pineapple
description: The pineapple class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Pineapple]
...
*/
 
(function() {
    PT.Pineapple = new Class({
        // Custom code here...
    });
}());

Our final class for this example will be PT.Pizza (I'll let you guess the name of the file where PT.Pizza lives). Our PT.Pizza class will require that PT, PT.Ham and PT.Pineapple be present.

/*
---
name: PT.Pizza
description: The pizza class
authors: [Philip Thompson]
requires: [/PT, /PT.Ham, /PT.Pineapple]
provides: [PT.Pizza]
...
*/
 
(function() {
    PT.Pizza = new Class({
        // Custom code here that uses PT.Ham and PT.Pineapple...
    });
}());
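
To give a sense of what that "custom code" placeholder might hold, here is a purely illustrative sketch of PT.Pizza using the other two classes; the initialize and describe methods and the toppings property are hypothetical, not part of the original example:

(function() {
    PT.Pizza = new Class({
        // Assemble the toppings when a pizza is created
        initialize: function() {
            this.toppings = [new PT.Ham(), new PT.Pineapple()];
        },
 
        // Return a simple description of this pizza
        describe: function() {
            return 'A pizza with ' + this.toppings.length + ' toppings';
        }
    });
}());
 
// Hypothetical usage once the classes are loaded:
window.addEvent('domready', function() {
    var pizza = new PT.Pizza();
    console.log(pizza.describe()); // "A pizza with 2 toppings"
});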

Before we go any further, let's check out the comments we include above each of the classes. The comments are formatted as YAML — YAML Ain't Markup Language (you gotta love recursive acronyms). These comments allow our parser to determine how our classes are related, and they help resolve dependencies. YAML is pretty easy to learn, and you only need to know a few basic features to use it. The YAML comments in this example are essential for our JavaScript package manager — Packager. I won't go into all the details about Packager here; I'll simply mention the few commands that we'll need to build our single JavaScript file.

In addition to the YAML comments in each of the class files, we also need to create a YAML file that will organize our code. This file — package.yml for this example — is used to load our separate JavaScript classes:

name: "PT"
description: "Provides our fancy PT classes"
authors: "[Philip Thompson]"
version: "1.0.0"
sources:
    - js/PT.js
    - js/PT.Ham.js
    - js/PT.Pineapple.js
    - js/PT.Pizza.js

package.yml shows that all of our PT* files are located in the js directory, one directory up from the package.yml file. Some of the properties in the YAML file are optional, and you can add much more detail if you'd like, but this will get the job done for our purposes.

Now we're ready to turn back to Packager to build our packaged file. Packager includes an option to use PHP, but we're just going to use the command line. First, we need to register the new package (package.yml) we created for PT. If our JavaScript files are located in /path/to/web/directory/js, the package.yml file is in /path/to/web/directory:

./packager register /path/to/web/directory

This finds our package.yml file and registers our PT package. Now that we have our package registered, we can build it:

./packager build * > /path/to/web/directory/js/PT.all.js

Packager sees that our PT package is registered, so it reads each of the individual class files to build a single large file. From the YAML comments in each class file, it determines whether there are any dependencies and warns you if any are not found.

It might seem like a lot of work when it's written out like this, but I can assure you that when you go through the process, it takes no time at all. The huge benefit of packaging our JavaScript becomes evident as soon as you start incorporating those JavaScript classes into your website ... Because we have built all of our class files into a single file, we don't need to include each of the individual JavaScript files in our website (much less the inline JavaScript declarations that make you cringe). To streamline your implementation even further if you're using your JavaScript package in a production deployment, I recommend that you "minify" your code as well.
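
For example, a command-line minifier like UglifyJS can handle that step; this assumes Node.js and UglifyJS are installed, and the paths mirror the package.yml above:

uglifyjs js/PT.all.js > js/PT.all.min.js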

See ... Organized code is no longer just for server-side languages. Treat your JavaScript kindly, and it will be your friend!

Happy coding!

-Philip

January 8, 2013

Startup Series: Bright Funds

Did you ever see The Beach with Leonardo DiCaprio? You know ... The one about a community of world-shunners living in a paradisaical settlement on a beautiful white-sand beach. The people in that community were purists — altruistic types who believed in the possibilities of living a simple life based on the community's support of the individual and the individual's reciprocal support of and dedication to the community. Recently, I walked into Hattery — a co-working space in SF — and found a similarly tight-knit community that immediately reminded me of that movie. Hattery is "off the radar" to a certain extent, and that's largely because the collaborative environment and culture are what drive the incredible group of entrepreneurs who work there. To be allowed into the co-working space, it seems like the prerequisites are endless passion and an ambitious vision, so I shouldn't be surprised that Bright Funds calls it home.

Bright Funds is a business created to let users easily invest in complete solutions for the causes they care about. After signing on as a Catalyst partner, Bright Funds co-founders Ty Walrod and Rutul Davé invited me to lunch at the Hattery office, and I immediately accepted so I could learn more about what they were up to. Having been involved in the tech startup world for a while now, I knew that I'd be meeting two very special entrepreneurs with big hearts and even BIGGER tech startup street cred.

Rutul and Ty were not content with their user experience (UX) when it came to giving to charities and helping solve some of the world's biggest problems. They noticed that little effort had been invested in providing donors with tools to make the act of giving both enjoyable and highly effective, so they took action. Bright Funds was created to redefine and refocus the experience of "giving to charity" ... Giving shouldn't just involve going through the motions of transferring funds from our bank accounts. They built a new giving platform to be more intuitive, rewarding and enlightening, and they did an unbelievable job.

Think of the last time you had a great user experience: An interaction that was as enjoyable as it was effective. Aesthetics play a big role, and when those aesthetics make doing what you want to do easier and more satisfying, you've got an awesome UX. The best user experiences involve empowering users to make informed and intelligent choices by providing them what they need and getting out of the way. Often, UX is used for site design or application metrics, but Bright Funds took the concept and used it to create an elegant and simple business model:

Bright Funds was designed with an intuitive giving flow in mind. Instead of just writing checks or handing over cash to a charity, the experience of giving through Bright Funds is interactive and educational. You manage your giving like you would a mutual fund portfolio — you decide what percentage of your giving should go to which types of vetted and validated causes, and you get regular performance updates from each charity. I want to help save the environment. I want to give clean water to all. I want to empower the underserved. I want to educate the world. You choose which causes you want to prioritize, and Bright Funds channels your giving to the most effective organizations serving the greatest needs in the world today.

Bright Funds

Instead of focusing on individual nonprofits, you support the causes and issues that matter most to you. In that sense, Bright Funds is a unique approach to charitable giving, and it's a powerful force in making a difference. Visit Bright Funds for more information, and get started by building your own 'Impact Portfolio.' If you're curious about what mine looks like, check it out:

Bright Funds Impact Portfolio

What does yours look like?

-@JoshuaKrammes

This is a startup series post about Bright Funds, a SoftLayer Catalyst Program participant.
About Bright Funds:
Bright Funds is a better way to give. Individuals and employees at companies with gift matching programs create personalized giving portfolios and contribute to thoroughly researched funds of highly effective nonprofits, all working to address the greatest challenges of our time. In one platform, Bright Funds brings together the power of research, the reliability of a trusted financial service, and the convenience of a secure, cloud-based platform with centralized contributions, integrated matching, and simple tax reporting.
December 31, 2012

FatCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Ian Miller, CEO of FatCloud. FatCloud is a cloud-enabled application platform that allows enterprises to build, deploy and manage next-generation .NET applications.

'The Cloud' and Agility

As the CEO of a cloud-enabled application platform for the .NET community, I get the same basic question all the time: "What is the cloud?" I'm a consumer of cloud services and a supplier of software that helps customers take advantage of the cloud, so my answer to that question has evolved over the years, and I've come to realize that the cloud is fundamentally about agility. The growth, evolution and adoption of cloud technology have been fueled by businesses that don't want to worry about infrastructure and need to pivot or scale quickly as their needs change.

Because FatCloud is a consumer of cloud infrastructure from SoftLayer, we are much more nimble than we'd be if we had to worry about building data centers, provisioning hardware, patching software and doing all the other time-consuming tasks that are involved in managing a server farm. My team can focus on building innovative software with confidence that the infrastructure will be ready for us on-demand when we need it. That peace of mind also happens to be one of the biggest reasons developers turn to FatCloud ... They don't want to worry about configuring the fundamental components of the platform under their applications.

Fat Cloud

Our customers trust FatCloud's software platform to help them build and scale their .NET applications more efficiently. To do this, we provide a Core Foundation of .NET WCF services that effectively provides the "plumbing" for .NET cloud computing, and we offer premium features like a distributed NoSQL database, work queue, file storage/management system, content caching and an easy-to-use administration tool that simplifies managing the cloud for our customers. FatCloud makes developing for hundreds of servers as easy as developing for one, and to prove it, we offer a free 3-node developer edition so that potential customers can see for themselves.

FatCloud Offering

The agility of the cloud has the clearest value for a company like ours. In one heavy-duty testing month, we needed 75 additional servers online, and after that testing was over, we needed the elasticity to scale that infrastructure back down. We're able to adjust our server footprint as we balance our computing needs and work within budget constraints. Ten years ago, that would have been overwhelmingly expensive (if not impossible). Today, we're able to do it economically and in real-time. SoftLayer is helping keep FatCloud agile, and FatCloud passes that agility on to our customers.

Companies developing custom software for the cloud, mobile or web using .NET want a reliable foundation to build from, and they want to be able to bring their applications to market faster. With FatCloud, those developers can complete their projects in about half the time it would take them if they were to develop conventionally, and that speed can be a huge competitive differentiator.

The expensive "scale up" approach of buying and upgrading powerful machines for something like SQL Server is out-of-date now. The new kid in town is the "scale out" approach of using low-cost servers to expand infrastructure horizontally. You'll never run into those "scale up" hardware limitations, and you can build a dynamic, scalable and elastic application much more economically. You can be agile.

If you have questions about how FatCloud and SoftLayer make cloud-enabled .NET development easier, send us an email: sales@fatcloud.com. Our team is always happy to share the easy (and free) steps you can take to start taking advantage of the agility the cloud provides.

-Ian Miller, CEO of FatCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New partners will be added to the Marketplace each month, so stay tuned for many more to come.
December 30, 2012

Risk Management: Event Logging to Protect Your Systems

The calls start rolling in at 2am on Sunday morning. Alerts start firing off. Your livelihood is in grave danger. It doesn't come with the fanfare of a blockbuster Hollywood thriller, but if a server hosting your critical business infrastructure is attacked, becomes compromised or fails, it might feel like the end of the world. In our Risk Management series, we've covered the basics of securing your servers, so the next consideration we need to make is what happens when our security is circumvented.

It seems silly to prepare for a failure in a security plan we spend time and effort creating, but if we stick our heads in the sand and tell ourselves that we're secure, we won't be prepared in the unlikely event of something happening. Every attempt to mitigate risks and stop threats in their tracks will be circumvented by the one failure, threat or disaster you didn't cover in your risk management plan. When that happens, accurate event logging will help you record what happened, respond to the event (if it's still in progress) and have the information available to properly safeguard against or prevent similar threats in the future.

Like any other facet of security, "event logging" can seem overwhelming and unforgiving if you're looking at hundreds of types of events to log, each with dozens of variations and options. Like we did when we looked at securing servers, let's focus our attention on a few key areas and build out what we need:

Which events should you log?
Look at your risk assessment and determine which systems are of the highest value or could cause the most trouble if interrupted. Those systems are likely to be what you prioritized when securing your servers, and they should also take precedence when it comes to event logging. You probably don't have unlimited compute and storage resources, so you have to determine which types of events are most valuable for you and how long you should keep records of them — it's critical to have your event logs on-hand when you need them, so logs should be retained online for a period of time and then backed up offline to be available for another period of time.

Your goal is to understand what's happening on your servers and why it's happening so you know how to respond. The most commonly audited events include successful and unsuccessful account log-on events, account management events, object access, policy changes, privileged functions, process tracking and system events. The most conservative approach involves logging more information/events and keeping those logs longer than you think you need. From there, you can evaluate your logs periodically to determine whether the level of auditing/logging needs to be adjusted.
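
On a Linux server, the audit subsystem is one straightforward way to capture several of these event types. This is only a sketch: the watched files and the key name are examples, not a complete audit policy:

# Record writes and attribute changes to account files (account management events)
auditctl -w /etc/passwd -p wa -k identity
auditctl -w /etc/shadow -p wa -k identity
 
# Later, review everything recorded under that key
ausearch -k identity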

Where do you store the event logs?
Your event logs won't do you any good if they are stored in a space that is insufficient for the amount of data you need to collect. I recommend centralizing your logs in a secure environment that is both readily available and scalable. In addition to keeping the logs accessible even when the server(s) they cover are inaccessible, aggregating and organizing your logs in a central location gives you a powerful tool for building reports and analyzing trends. With that information, you'll be able to more clearly see deviations from normal activity and catch attacks (or attempted attacks) in progress.
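
As a simple starting point, most Linux distributions can forward syslog traffic to a central collector with a single rsyslog rule. The host name, port and file name below are placeholders for whatever your central environment looks like:

# Forward all facilities and priorities to a central log host over TCP ("@@" = TCP, "@" = UDP)
echo '*.* @@logs.example.com:514' > /etc/rsyslog.d/50-central.conf
service rsyslog restart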

How do you protect your event logs?
Attacks can come from both inside and out. To avoid intentional malicious activity by insiders, separation of duties should be enforced when planning logging. Learn from The X Files and "Trust no one." Someone who has been granted the 'keys to your castle' shouldn't also be able to disable the castle's security system or mess with the castle's logs. Your network engineer shouldn't have exclusive access to your router logs, and your sysadmin shouldn't be the only one looking at your web server logs.

Keep consistent time.
Make sure all of your servers are using the same accurate time source. That way, all logs generated on those servers will share consistent timestamps. Diagnosing an attack or incident is considerably more difficult if your web server's clock isn't synced with your database server's clock or if they're set to different time zones. You're putting a lot of time and effort into logging events, so you're shooting yourself in the foot if events across all of your servers don't line up cleanly.
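
On a typical Linux server, that can be as simple as pointing every machine at the same NTP source. The pool server below is just an example; use whatever source your environment standardizes on:

# One-time sync against a public pool, then keep ntpd running across reboots
ntpdate -u 0.pool.ntp.org
chkconfig ntpd on
service ntpd start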

Read your logs!
Logs won't do you any good if you're not looking at them. Know the red flags to look for in each of your logs, and set aside time to look for those flags regularly. Several SoftLayer customers — like Tech Partner Papertrail — have come up with innovative and effective log management platforms that streamline the process of aggregating, searching and analyzing log files.

It's important to reiterate that logging — like any other security endeavor — is not a 'one size fits all' model, but that shouldn't discourage you from getting started. If you aren't logging or you aren't actively monitoring your logs, any step you take is a step forward, and each step is worth the effort.

Thanks for reading, and stay secure, my friends!

-Matthew

December 27, 2012

Using SoftLayer Object Storage to Back Up Your Server

Before I came to my senses and moved my personal servers to SoftLayer, I was one of many victims of a SolusVM exploit that resulted in the wide-scale attack of many nodes in my previous host's Chicago data center. While I'm a firm believer in backing up my data, I could not have foreseen the situation I was faced with: Not only was my server in one data center compromised with all of its data deleted, but my backup server in one of the host's other data centers was also attacked ... This left me with old, stale backups on my local computer and not much else. I quickly relocated my data and decided that I should use SoftLayer Object Storage to supplement and improve upon my backup and disaster recovery plans.

With the SoftLayer Object Storage Python client set up and the SoftLayer Object Storage backup script — slbackup.py — in hand, I had the tools I needed to build a solid backup infrastructure easily. On Linux.org, I contributed an article about how to perform MySQL backups with those resources, so the database piece is handled, but I also needed to back up my web files, so I whipped up another quick bash script to run:

#!/bin/bash
 
# The path the backups will be dumped to
DUMP_DIR="/home/backups/"
 
# Path to the web files to be backed up
BACKUP_PATH="/var/www/sites/"
 
# Backup folder name (mmddyyyy)
BACKUP_DIR="`date +%m%d%Y`"
 
# Backup File Name
DUMP_FILE="`date +%m_%d_%Y_%H_%M_%S`_site_files"
 
# SL container name
CONTAINER="site_backups"
 
# Create backup dir if doesn't exist
if [ ! -d $DUMP_DIR$BACKUP_DIR ]; then
        mkdir -p $DUMP_DIR$BACKUP_DIR
fi
 
tar -zcvpf $DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz $BACKUP_PATH
 
# Make sure the archive exists
if [ -f $DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz ]; then
        /root/slbackup.py -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30
 
        # Remove the backup stored locally
        rm -rf $DUMP_DIR$BACKUP_DIR
 
        # Success
        exit 0
else
        echo "$DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz does not exist."
        exit 1
fi

It's not the prettiest bash script, but it gets the job done. By tweaking a few variables, you can easily generate backups for any important directory of files and push them to your SoftLayer Object Storage account. If you want the retention time of your backups to be longer or shorter, change the 30 after the -r flag in the line below to the number of days you want to keep each backup:

/root/slbackup.py -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30

I created a script for each website on my server, and I set a cron entry (crontab -e) to run each one on Sundays, staggered by 5 minutes:

5 1 * * 0  /root/bin/cron/CRON-site1.com_web_files > /dev/null
10 1 * * 0  /root/bin/cron/CRON-site2.com_web_files > /dev/null
15 1 * * 0  /root/bin/cron/CRON-site3.com_web_files > /dev/null 
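
The MySQL side mentioned earlier follows the same pattern. As a rough sketch (the database name, paths and container are placeholders, it assumes your MySQL credentials live in ~/.my.cnf, and the full walkthrough is in the Linux.org article), a database dump pushed to Object Storage might look like this:

#!/bin/bash
 
# Placeholders -- substitute your own values
DB_NAME="my_database"
DUMP_DIR="/home/backups/"
BACKUP_DIR="`date +%m%d%Y`"
DUMP_FILE="`date +%m_%d_%Y_%H_%M_%S`_${DB_NAME}.sql.gz"
CONTAINER="db_backups"
 
# Create backup dir if it doesn't exist
mkdir -p $DUMP_DIR$BACKUP_DIR
 
# Dump and compress the database (assumes credentials in ~/.my.cnf)
mysqldump $DB_NAME | gzip > $DUMP_DIR$BACKUP_DIR/$DUMP_FILE
 
# Push the dump to Object Storage with 30-day retention, then remove the local copy
/root/slbackup.py -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30
rm -rf $DUMP_DIR$BACKUP_DIR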

If you're looking for an easy way to automate and solidify your backups, this little bit of code could make life easier on you. Had I taken the few minutes to put this script together prior to the attack I experienced at my previous host, I wouldn't have lost any of my data. It's easy to get lulled into "backup apathy" when you don't need your backups, but just because nothing *has* happened to your data doesn't mean nothing *can* happen to your data.

Take it from me ... Be over-prepared and save yourself a lot of trouble.

-Ronald

December 24, 2012

Giving From (and For) the Heart

This time of year is often referred to as "The Season of Giving," and we thought we'd share two SLayers' stories about their involvement in the American Heart Association Heart Walk. Like last year, we split up into fundraising teams for the AHA with a goal of raising $100,000. In addition to those fundraising efforts, SoftLayer also encouraged employees to get active and get involved in the annual Heart Walks in Houston and Dallas. Here's our on-location coverage from two team captains who attended those events this year:

Dallas

My name is Fabrienne Curtis, and I work in the Accounting Department at SoftLayer. I joined a team with thirty other people (from several different departments) to raise money for the American Heart Association, and because I love to help and work on community projects, I volunteered to be a team captain. Our team had a ton of great ideas for fundraisers, so we set an ambitious goal of raising $12,400 ($400 per person). When the dust settled, I'm proud to report that we met that goal with a total team tally of $12,488 (which SoftLayer then matched).

Beyond the fundraising, participating in the Dallas Heart Walk at Victory Park was a highlight this year. No one on my team knew that this walk had a personal meaning for me ... I lost my dad to congestive heart failure and wanted to walk on his behalf. When I got to the Heart Walk, I was touched. There was a "Survivor Wall," and there were several signs where you could share who you were walking on behalf of. If not for SoftLayer, I probably wouldn't have participated in the Heart Walk, so as I wrote on the wall and created a sign for my dad, I thought about how good it felt to work for a company that truly cares about the well-being of its employees.

SoftLayer Photo Booth

SoftLayer added a little flair to the event by setting up a photo booth for people to take pictures and take home, and with the help of Don Hunter, Hao Ho and my husband Jerry, 679 photos were taken!

SoftLayer Photo Booth

Here are some pictures I snapped from the 2012 Dallas Heart Walk:

SoftLayer Heart Walk
The Start!
SoftLayer Heart Walk
The SoftLayer "Uniform"
SoftLayer Heart Walk
The Crowd
SoftLayer Heart Walk
Victorious!

Thank you, SoftLayer, for having a heart! If you want more coverage of this year's event, check out this Dallas Heart Walk 2012 video and click through to our Dallas Heart Walk Flickr album.

-Fabrienne

Houston

Dallas didn't get to have all of the fun when it comes to the AHA Heart Walk, and I made sure to document the Houston goings-on to share with our avid SoftLayer Blog readers. From bake sales to ice cream socials, the Houston office was diligent about donating money and raising heart-health awareness for months prior to the 2012 walk, and those months were extremely eventful. Like Fabrienne, I jumped at the opportunity to be one of 18 team captains at SoftLayer, and considering the fact that cardiovascular disease is the number one killer of Americans, I was inspired to get everyone involved.

I'll be the first to admit that I am not in the best of shape, so a five-kilometer walk through a course at Reliant Stadium would be pretty challenging. My team had been tirelessly preparing for the 5k "mini-marathon" walk, and as November approached, you could sense the excitement and enthusiasm brewing. Walking only one mile can add up to two hours to your lifespan, so in the process of preparing for the walk, we added quite a few hours to our collective lives. When the big day finally arrived, we were ready:

SoftLayer Heart Walk
The Houston Heart Walk SLayers

Given that our day started at an unbelievable 7:00am on a Saturday, most of our participants were tired-eyed and ready to chow down on the free burritos and fruit provided by SoftLayer, but by the time we fired up the photo booth and broke out the goofy props, everyone was wide awake. It's like they say, "Give a man a fish and he'll eat for a day ... Give a man fun props and a camera, and he'll have a blast (and pictures that can be used against him)." Actually, I don't know if "they" say that, but it's true:

SoftLayer Heart Walk

Before we knew it, a gunshot of glitter and colorful confetti got the crowd moving down the 3.1-ish mile track, and we were hooting and cheering, pumped to represent our company! By mile two, my legs were a little wobbly and the sun was scorching, and I could see that our dog, Rikku (who had been carried the entire way), was confused about why I was putting her through the exhausting task of riding comfortably in my arms as we herded through the crowd like cattle.

SoftLayer Heart Walk

AHA water stations and mile markers reminded us that we were doing it for the best cause ever: the people we love and the people we've lost to heart disease. It's a safe bet that if you don't know someone directly affected by heart disease, you will eventually. The American Heart Association organizes these fundraisers and walks every year across the world to gather donations and raise awareness so that one day we may be able to conquer this silent killer. With those donations, the AHA funds research into preventative treatment, provides education that helps children avoid obesity and supports medical research that could one day produce a breakthrough and save lives.

All in all, it was a wonderful experience, and one that I'll definitely be a part of again next year.

-Cassandra

SoftLayer Heart Walk

December 20, 2012

MongoDB Performance Analysis: Bare Metal v. Virtual

Developers can be cynical. When "the next great thing in technology" is announced, I usually wait to see how it performs before I get too excited about it ... Show me how that "next great thing" compares apples-to-apples with the competition, and you'll get my attention. With the launch of MongoDB at SoftLayer, I'd guess a lot of developers outside of SoftLayer and 10gen have the same "wait and see" attitude about the new platform, so I put our new MongoDB engineered servers to the test.

When I shared MongoDB architectural best practices, I referenced a few of the significant optimizations our team worked with 10gen to incorporate into our engineered servers (cheat sheet). To illustrate the impact of these changes in MongoDB performance, we ran 10gen's recommended benchmarking harness (freely available for download and testing of your own environment) on our three tiers of engineered servers alongside equivalent shared virtual environments commonly deployed by the MongoDB community. We've made a pretty big deal about the performance impact of running MongoDB on optimized bare metal infrastructure, so it's time to put our money where our mouth is.

The Testing Environment

For each of the available SoftLayer MongoDB engineered servers, data sets of 512kb documents were preloaded onto single MongoDB instances. The data sets were created with varying size compared to available memory to allow for data sets that were both larger (2X) and smaller than available memory. Each test also ensured that the data set was altered during the test run frequently enough to prevent the queries from caching all of the data into memory.

Once the data sets were created, JMeter server instances with 4 cores and 16GB of RAM were used to drive 'benchrun' from the 10gen benchmarking harness. This diagram illustrates how we set up the testing environment (click for a better look):

MongoDB Performance Analysis Setup

These JMeter servers function as the clients generating traffic against the MongoDB instances. Each client generated random query and update requests with a ratio of six queries per update (the update requests ensured that the data could not be fully cached in memory, which would have prevented the test from ever exercising reads from disk). These tests were designed to create an extreme load on the servers from an exponentially increasing number of clients until the system resources became saturated, and we recorded the resulting performance of the MongoDB application.
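
For context, the harness drives the mongo shell's built-in benchRun() helper. A stripped-down sketch of a 6:1 read-to-update workload looks something like this; the collection name, key range and host are placeholders, and the real harness repeats the query op to reach the exact mix and varies the client count per iteration:

// Run from the mongo shell against the instance under test
var res = benchRun({
    host: "10.0.0.10:27017",   // MongoDB instance being benchmarked (placeholder)
    parallel: 32,              // concurrent client threads for this iteration
    seconds: 60,               // duration of the iteration
    ops: [
        // Query op (listed multiple times in the real harness for the 6:1 ratio)
        { ns: "test.docs", op: "findOne",
          query: { _id: { "#RAND_INT": [0, 1000000] } } },
        // Update op
        { ns: "test.docs", op: "update",
          query:  { _id: { "#RAND_INT": [0, 1000000] } },
          update: { $inc: { counter: 1 } } }
    ]
});
printjson(res);   // per-op throughput and latency summary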

At the Medium (MD) and Large (LG) engineered server tiers, performance metrics were run separately for servers using 15K SAS hard drive data mounts and servers using SSD hard drive data mounts. If you missed the post comparing the IOPS statistics between different engineered server hard drive configurations, be sure to check it out. For a better view of the results in a given graph, click the image included in the results below to see a larger version.

Test Case 1: Small MongoDB Engineered Servers vs Shared Virtual Instance

Servers

Small (SM) MongoDB Engineered Server
Single 4-core Intel 1270 CPU
64-bit CentOS
8GB RAM
2 x 500GB SATAII - RAID1
1Gb Network
Virtual Provider Instance
4 Virtual Compute Units
64-bit CentOS
7.5GB RAM
2 x 500GB Network Storage - RAID1
1Gb Network
 

Tests Performed

Small Data Set (8GB of .5mb documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 32
Test duration spanned 48 hours
Average Read Operations per Second by Concurrent Client
Peak Read Operations per Second by Concurrent Client
Average Write Operations per Second by Concurrent Client
Peak Write Operations per Second by Concurrent Client

Test Case 2: Medium MongoDB Engineered Servers vs Shared Virtual Instance

Servers (15K SAS Data Mount Comparison)

Medium (MD) MongoDB Engineered Server
Dual 6-core Intel 5670 CPUs
64-bit CentOS
36GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
4 x 300GB 15K SAS - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
30GB RAM
2 x 64GB Network Storage - RAID1 (Journal Mount)
4 x 300GB Network Storage - RAID10 (Data Mount)
1Gb Network
 

Tests Performed

Small Data Set (32GB of .5mb documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
Average Read Operations per Second by Concurrent Client
Peak Read Operations per Second by Concurrent Client
Average Write Operations per Second by Concurrent Client
Peak Write Operations per Second by Concurrent Client

Servers (SSD Data Mount Comparison)

Medium (MD) MongoDB Engineered Server
Dual 6-core Intel 5670 CPUs
64-bit CentOS
36GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
4 x 400GB SSD - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
30GB RAM
2 x 64GB Network Storage - RAID1 (Journal Mount)
4 x 300GB Network Storage - RAID10 (Data Mount)
1Gb Network
 

Tests Performed

Small Data Set (32GB of .5mb documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
Average Read Operations per Second by Concurrent Client
Peak Read Operations per Second by Concurrent Client
Average Write Operations per Second by Concurrent Client
Peak Write Operations per Second by Concurrent Client

Test Case 3: Large MongoDB Engineered Servers vs Shared Virtual Instance

Servers (15K SAS Data Mount Comparison)

Large (LG) MongoDB Engineered Server
Dual 8-core Intel E5-2620 CPUs
64-bit CentOS
128GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
6 x 600GB 15K SAS - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
64GB RAM (Maximum available on this provider)
2 x 64GB Network Storage - RAID1 (Journal Mount)
6 x 600GB Network Storage - RAID10 (Data Mount)
1Gb Network
 

Tests Performed

Small Data Set (64GB of .5mb documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned 48 hours
Average Read Operations per Second by Concurrent Client
Peak Read Operations per Second by Concurrent Client
Average Write Operations per Second by Concurrent Client
Peak Write Operations per Second by Concurrent Client

Servers (SSD Data Mount Comparison)

Large (LG) MongoDB Engineered Server
Dual 8-core Intel E5-2620 CPUs
64-bit CentOS
128GB RAM
2 x 64GB SSD - RAID1 (Journal Mount)
6 x 400GB SSD - RAID10 (Data Mount)
1Gb Network - Bonded
Virtual Provider Instance
26 Virtual Compute Units
64-bit CentOS
64GB RAM (Maximum available on this provider)
2 x 64GB Network Storage - RAID1 (Journal Mount)
6 x 600GB Network Storage - RAID10 (Data Mount)
1Gb Network
 

Tests Performed

Small Data Set (64GB of .5mb documents)
200 iterations of 6:1 query-to-update operations
Concurrent client connections exponentially increased from 1 to 128
Test duration spanned over 48 hours
Average Read Operations per Second by Concurrent Client
Peak Read Operations per Second by Concurrent Client
Average Write Operations per Second by Concurrent Client
Peak Write Operations per Second by Concurrent Client

Impressions from Performance Testing

The results speak for themselves. Running a MongoDB big data solution on a shared virtual environment has significant drawbacks when compared to running MongoDB on a single-tenant bare metal offering. Disk I/O is by far the most limiting resource for MongoDB, and relying on shared network-attached storage (with much lower disk I/O) makes this limitation very apparent. Beyond the average and peak statistics above, performance varied much more significantly in the virtual instance environment, so it's not as consistent and predictable as bare metal.

Highlights:

  • When a working data set is smaller than available memory, query performance increases.
  • The number of clients performing queries has an impact on query performance because more data is being actively cached at a rapid rate.
  • The addition of a separate Journal Mount volume significantly improves performance. Because the Small (SM) engineered server does not include a secondary mount for journals, whenever MongoDB began to journal, the disk I/O associated with journaling was disruptive to the query and update operations performed on the Data Mount.
  • The best deployments in terms of operations per second, stability and control were the configurations with a RAID10 SSD Data Mount and a RAID1 SSD Journal Mount. These configurations are available in both our Medium and Large offerings, and I'd highly recommend them.

-Harold

December 19, 2012

SoftLayer API: Streamline. Simplify.

Building an API is a bit of a balancing act. You want your API to be simple and easy to use, and you want it to be feature-rich and completely customizable. Because those two desires happen to live on opposite ends of the spectrum, every API strikes a different balance between simplicity and customizability. The SoftLayer API was designed to provide customers with granular control of every action associated with any product or service on our platform; anything you can do in our customer portal can be done via our API. That depth of functionality might be intimidating to developers looking to dive in quickly and incorporate the SoftLayer platform into their applications, so our development team has been working to streamline and simplify some of the most common API services to make them even more accessible.

SoftLayer API

To get an idea of what their efforts look like in practice, Phil posted an SLDN blog with a perfect example of how they simplified cloud computing instance (CCI) creation via the API. The traditional CCI ordering process required developers to define nineteen data points:

Hostname
Domain name
complexType
Package Id
Location Id
Quantity to order
Number of cores
Amount of RAM
Remote management options
Port speeds
Public bandwidth allotment
Primary subnet size
Disk size
Operating system
Monitoring
Notification
Response
VPN Management - Private Network
Vulnerability Assessments & Management

While each of those data points is straightforward, you still have to define nineteen of them. You have all of those options when you check out through our shopping cart, so it makes sense that you'd have them in the API, but when it comes to ordering through the API, you don't necessarily need all of those options. Our development team observed our customers' API usage patterns, and they created the slimmed-down and efficient SoftLayer_Virtual_Guest::createObject — a method that only requires seven data points:

Hostname
Domain name
Number of cores
Amount of RAM
Hourly/monthly billing
Local vs SAN disk
Operating System

Even without seeing a single line of code, you can see the improvement. Default values were established for options like Port speeds and Monitoring based on customer usage patterns, and as a result, developers only have to provide half the data to place a new CCI order. Because each data point might require multiple lines of code, the volume of API code required to place an order is slimmed down even more. The best part is that if you find yourself needing to modify one of the now-default options like Port speeds or Monitoring, you still can!
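
If you're curious what that looks like in practice, here's a rough sketch of a createObject call against the REST endpoint. The property names follow the SLDN documentation for the createObject template object, but treat the specific values (hostname, sizes and the operating system reference code) as illustrative:

# Create a CCI with only the seven required data points (values are examples)
curl -u $SL_USERNAME:$SL_API_KEY \
     -X POST \
     -d '{"parameters":[{
           "hostname": "test01",
           "domain": "example.com",
           "startCpus": 2,
           "maxMemory": 2048,
           "hourlyBillingFlag": true,
           "localDiskFlag": true,
           "operatingSystemReferenceCode": "CENTOS_6_64"
         }]}' \
     https://api.softlayer.com/rest/v3/SoftLayer_Virtual_Guest/createObject.json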

As the development team finds other API services and methods that can be streamlined and simplified like this one, they'll ninja new solutions to make the API even more accessible. Have you tried coding to the SoftLayer API yet? If not, what's the biggest roadblock for you? If you're already a SLAPI coder, what other methods do you use often that could be streamlined?

-@khazard

December 18, 2012

2012 at SoftLayer: A Year-End Review

It's already December 18, so you've probably read a few dozen "Best of 2012" and "Looking Back on 2012" articles around the web by now. I hope that you indulge me as I add one more tally to that list ... I can't suppress the urge to take a nostalgic look back on all of SoftLayer's successes this year.

As Director of Communications, the easiest milestones for me to use as I look back are our product announcements and press releases, so I'll use those as landmarks to help tell the story of SoftLayer in 2012. Instead of listing those points chronologically, it might make a little more sense to categorize them topically so you can see the bigger picture of what's happening at the company when it comes to product innovation, growth, the startup community and industry recognition.

Driving Product Innovation

When your company motto is "Innovate or Die," there's a lot of pressure to stay on the bleeding edge of technology. In this calendar year alone, we launched some pretty amazing products and capabilities that have the potential of reshaping the competitive landscape:

  • Flex Images – In February, we announced Flex Images — an amazing tool that blurs the line between "cloud" and "dedicated." Users can easily replicate servers and move them between physical and virtual platforms to quickly meet their changing requirements. None of our competitors can match that level of flexibility.
  • High Performance Computing – In April, we launched high-performance computing (HPC) options powered by NVIDIA Tesla GPUS to provide an on-demand, consumption-based platform for users with the most compute-intensive environments.
  • SoftLayer Private Clouds – In June, we unveiled Private Clouds based on CloudStack and Citrix CloudPlatform. A Private Cloud is an optimized environment that enables quick provisioning of cloud instances on dedicated infrastructure, and because we've automated the provisioning and expansion of the Private Cloud architecture, customers can order and configure full private cloud deployments on demand.
  • Big Data: MongoDB – Our most recent product release, an optimized MongoDB environment, was the amazing result of a strategic partnership with the team at 10gen. This flexible pay-as-you-go solution simplifies the big data buying process and enables organizations to swiftly deploy highly scalable and available production-grade systems. Big data developers don't have to settle for lower-performance virtualized platforms, and they don't have to hassle with building, configuring and tweaking their own dedicated environments (since we did all the work for them).

Expanding in Key Vertical Markets

Beyond the pure "product innovation" milestones we've hit this year, we've also seen a few key vertical markets do their own innovating on our platform. With a paintbrush and a little creativity, Pablo Picasso popularized Cubism, so when our creative customers are provided with a truly scalable platform that delivers unparalleled performance and control across both physical and virtual devices, they do their own world changing. Several top online gaming providers and cutting-edge tech companies chose SoftLayer to do their "painting" this year, and their stories have been pretty amazing:

  • Broken Bulb Studios - This social gaming developer uses SoftLayer's public and private cloud infrastructure with RightScale cloud management to easily deploy, automate and manage its rapidly expanding computing workloads across the globe.
  • KIXEYE, Storm8, and East Side Games - These online gaming companies rely on SoftLayer to provide a platform of dedicated, virtualized and managed servers from which they can develop, test, launch and run their latest games.
  • AppFirst, Cloudant and Struq - These hot tech companies moved to SoftLayer to achieve the scalability, performance and the time-to-market they need to continue meeting global market demand for their services.
  • Huge Wins in Europe, the Middle East and Africa - Companies like Binweevils, Boxed Ice, Crazymist, Exit Games, Ganymede, Hotwire Financial, Mangrove, Multiplay, Peak Games and Zamzar are just some of the organizations that chose SoftLayer to deliver the cloud infrastructure for their killer applications and games.

Supporting the Startup Community

2012 was the first full year of activity for the Catalyst Startup Program. Catalyst is geared toward furthering innovation by investing time and hosting resources to help entrepreneurs build their businesses, and as an extension of that program, we also supported several high-profile incubators, accelerators and startup-related events this year.

Earning Industry Recognition

All of this innovation and effort didn't go unnoticed in 2012. SoftLayer's growth and accomplishments throughout the year resulted in some high-profile recognition:

  • SoftLayer won the Red Herring "Top 100 North America Tech Award," a mark of distinction for identifying promising new companies and entrepreneurs. With this award, we join the ranks of past recipients like Facebook, Twitter and Google.
  • SoftLayer was listed in the Top 10 on Business Insider's Digital 100 list of 2012's Most Valuable Private Tech Companies in the world, alongside Twitter, Square and Dropbox.

Beyond that "official" recognition of what we're doing to shake up the market, the best barometer for our success is our customer base. According to an amazing hosting infographic from HostCabi.net, we're the most popular hosting provider among the 100,000 most visited websites in the world. We easily beat out all other service providers — almost doubling the number of sites hosted by the second-place competitor — and we're not slowing down. We're using the momentum we've continued building in 2012 to propel us into 2013, and we encourage you to watch this space for even more activity next year.

-Andre
