January 31, 2013

ActiveCampaign: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Peter Evans from ActiveCampaign. ActiveCampaign is a complete email marketing and marketing automation platform designed to help small businesses grow.

The Challenge of Sending Email Simply

You need to send email. Usually, that's a pretty simple task, so it's not uncommon to find users who think that sending a monthly newsletter is more or less the same task as sending a quick note to a friend about going to see a movie. In fact, those two email use-cases are completely different animals. With all of the nuances inherent in sending and managing large volumes of email, a plethora of email marketing services are positioned to help users better navigate the email marketing waters. It's tough to differentiate which features you might need and which features are just there to be a "Check" in a comparison checklist. ActiveCampaign set out to make the decision-making process simpler ... We knew that we needed the standard features like auto-responder campaigns, metrics reports and email templates, but we also knew we had to differentiate our service in a meaningful way. So we focused on automation.

Too often, the "automation" provided by a platform can be very cumbersome to set up (if it's available at all), and when it's actually working, there's little confirmation that actions are being performed as expected. In response, we were intentional about ActiveCampaign's automation features being easy to set up and manage ... If automation saves time and money, it shouldn't be intimidatingly difficult to incorporate into your campaigns. Here is a screenshot of what it takes to incorporate automation in your email campaigns with ActiveCampaign:

ActiveCampaign Screenshot

No complicated logic. No unnecessary options. With only a few clicks, you can select an action to spark a meaningful response in your system. If a subscriber in your Newsletter list clicks on a link, you might want to move that subscriber to a different list. Because you might want to send a different campaign to that user as well, we provide the ability to add multiple automated actions for each subscriber action, and it's all very clear.

One of the subscriber actions that might stand out to you if you've used other email service providers (or ESPs) is the "When subscriber replies to a campaign" bullet. ActiveCampaign is the first ESP (that we're aware of) to provide users the option to send a series of follow-up campaigns (or to restrict the sending of future campaigns) to subscribers who reply to a campaign email. Replies are tracked in your campaign reports, and you have deep visibility into how many people replied, who replied, and how many times they replied. With that information, you can segment those subscribers and create automated actions for them, and the end result is that you're connecting with your subscriber base much more effectively because you're able to target them better ... And you don't have to break your back to do it.

SoftLayer customers know how valuable automation can be in terms of infrastructure, so it should be no surprise that email marketing campaigns can benefit so much from automation as well. Lots of ESPs provide stats, and it's up to you to figure out meaningful ways to use that information. ActiveCampaign goes a step beyond those other providers by helping you very simply engage your subscribers with relevant and intentional actions. If you're interested in learning more, check us out at

-Peter Evans, ActiveCampaign

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.
January 29, 2013

iptables Tips and Tricks: APF (Advanced Policy Firewall) Configuration

Let's talk about APF. APF — Advanced Policy Firewall — is a policy-based iptables firewall system that provides simple, powerful control over your day-to-day server security. It might seem intimidating to be faced with all of the features and configuration tools in APF, but this blog should put your fears to rest.

APF is an iptables wrapper that works alongside iptables and extends its functionality. I personally don't use iptables wrappers, but I have a lot of experience with them, and I've seen that they do offer some additional features that streamline policy management. For example, by employing APF, you'll get several simple on/off toggles (set via configuration files) that make some complex iptables configurations available without extensive coding requirements. The flip-side of a wrapper's simplicity is that you aren't directly in control of the iptables commands, so if something breaks it might take longer to diagnose and repair. Before you add a wrapper like APF, be sure that you know what you are getting into. Here are a few points to consider:

  • Make sure that what you're looking to use adds a feature you need but cannot easily incorporate with iptables on its own.
  • You need to know how to effectively enable and disable the iptables wrapper (the correct way ... read the manual!), and you should always have a trusted failsafe iptables ruleset handy in the unfortunate event that something goes horribly wrong and you need to disable the wrapper.
  • Learn about the basic configurations and rule changes you can apply via the command line. You'll need to understand the way your wrapper takes rules because it may differ from the way iptables handles rules.
  • You can't manually configure your iptables rules once you have your wrapper in place (or at least you shouldn't).
  • Be sure to know how to access your server via the IPMI management console so that if you completely lock yourself out beyond repair, you can get back in. You might even go so far as to have a script or set of instructions ready for tech support to run, in the event that you can't get in via the management console.
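
For example, the trusted failsafe ruleset mentioned above can be as simple as a snapshot of a known-good configuration, saved while everything is working (the path here is just a placeholder — keep yours wherever you like):

```shell
# While the firewall is in a known-good state, save a snapshot:
iptables-save > /root/iptables.failsafe

# If the wrapper ever leaves you locked out or broken, stop it,
# then restore the snapshot:
apf --stop
iptables-restore < /root/iptables.failsafe
```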

TL;DR: Have a Band-Aid ready!

APF Configuration

Now that you have been sufficiently advised about the potential challenges of using a wrapper (and you've got your Band-Aid ready), we can check out some of the useful APF rules that make iptables administration a lot easier. Most of the configuration for APF is in conf.apf. This file handles the default behavior, but not necessarily the specific blocking rules, and when we make any changes to the configuration, we'll need to restart the APF service for the changes to take effect.

Let's jump into conf.apf and break down what we see. The first code snippet is fairly self-explanatory. It's another way to make sure you don't lock yourself out of your server as you are making configuration changes and testing them:

# !!! Do not leave set to (1) !!!
# When set to enabled; 5 minute cronjob is set to stop the firewall. Set
# this off (0) when firewall is determined to be operating as desired.
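
In conf.apf, that cron-based failsafe is a single toggle. A minimal sketch (leave it enabled while you're testing new rules, and set it back to 0 once you've confirmed you can still get in):

```shell
# Cron will stop the firewall every 5 minutes while this is set to 1
DEVEL_MODE="1"
```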

The next configuration options we'll look at are where you can make quick high-level changes if you find that legitimate traffic is being blocked and you want to make APF a little more lenient:

# This controls the amount of violation hits an address must have before it
# is blocked. It is a good idea to keep this very low to prevent evasive
# measures. The default is 0 or 1, meaning instant block on first violation.

# This is the amount of time (in seconds) that an address gets blocked for if
# a violation is triggered, the default is 300s (5 minutes).

# This allows RAB to 'trip' the block timer back to 0 seconds if an address
# attempts ANY subsequent communication while still on the initial block period.

# This controls if the firewall should log all violation hits from an address.
# The use of LOG_DROP variable set to 1 will override this to force logging.

# This controls if the firewall should log all subsequent traffic from an address
# that is already blocked for a violation hit; this can generate a lot of logs.
# The use of LOG_DROP variable set to 1 will override this to force logging.
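
Those comments sit above the Reactive Address Blocking (RAB) variables. The names below are as they appear in a typical conf.apf and the values are illustrative, not recommendations — check the file on your own server, since names can shift between APF versions:

```shell
RAB="1"           # enable Reactive Address Blocking
RAB_HITS="5"      # violation hits before an address is blocked
RAB_TIMER="300"   # how long (seconds) a violating address stays blocked
RAB_TRIP="1"      # reset the block timer if the address keeps trying
RAB_LOG_HIT="1"   # log violation hits
RAB_LOG_TRIP="0"  # don't log follow-up traffic from already-blocked hosts
```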

Next, we have an option to adjust ICMP flood protection. This protection should be useful against some forms of DoS attacks, and the associated rules show up in your INPUT chain:

# Set a reasonable packet/time ratio for ICMP packets, exceeding this flow
# will result in dropped ICMP packets. Supported values are in the form of:
# pkt/s (packets/seconds), pkt/m (packets/minutes)
# Set value to 0 for unlimited, anything above is enabled.
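
As a sketch, the corresponding setting in a typical conf.apf looks like this (the variable name may differ slightly between versions):

```shell
# Drop ICMP from any source that exceeds 30 packets per second
ICMP_LIM="30/s"
```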

If you want to add more ports to block for p2p traffic (which will show up in the P2P chain), you'll update this code:

# A common set of known Peer-To-Peer (p2p) protocol ports that are often
# considered undesirable traffic on public Internet servers. These ports
# are also often abused on web hosting servers where clients upload p2p
# client agents for the purpose of distributing or downloading pirated media.
# Format is comma separated for single ports and an underscore separator for
# ranges (4660_4678).
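
Here's a sample of what that format looks like in practice — these ports are illustrative, and the stock conf.apf ships with its own list:

```shell
# Single ports comma-separated, ranges with underscores
BLK_P2P_PORTS="1214,2323,4660_4678,6346,6881_6889"
```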

The next few lines let you designate the ports that you want to have closed at all times. They will be blocked for INPUT and OUTPUT chains:

# These are common Internet service ports that are understood in the wild
# services you would not want logged under normal circumstances. All ports
# that are defined here will be implicitly dropped with no logging for
# TCP/UDP traffic inbound or outbound. Format is comma separated for single
# ports and an underscore separator for ranges (135_139).
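
Again as an illustrative sketch rather than a recommendation:

```shell
# Silently drop TCP/UDP traffic on these ports, inbound and outbound
BLK_PORTS="135_139,111,513,520,445,1433,1434"
```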

The next important section to look at deals with conntrack. If you get "conntrack full" errors, this is where you'd increase the allowed connections. It's not uncommon to need more connections than the default, so if you need to adjust that value, you'd do it here:

# This is the maximum number of "sessions" (connection tracking entries) that
# can be handled simultaneously by the firewall in kernel memory. Increasing
# this value too high will simply waste memory - setting it too low may result
# in some or all connections being refused, in particular during denial of
# service attacks.
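
If you're hitting "conntrack full" errors, the adjustment is a single value (variable name as I've seen it in conf.apf — verify against your copy):

```shell
# Maximum simultaneous connection tracking entries
SYSCTL_CONNTRACK="65536"
```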

We've talked about the ports we want closed at all times, so it only makes sense that we'd specify which ports we want open for all interfaces:

# Common inbound (ingress) TCP ports
# Common inbound (ingress) UDP ports
# Common outbound (egress) TCP ports
# Common outbound (egress) UDP ports
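
These map to four variables. The port lists below are only examples of the format, and in the versions I've used, the egress lists only take effect when egress filtering is switched on:

```shell
IG_TCP_CPORTS="22,80,443"      # inbound TCP
IG_UDP_CPORTS="53"             # inbound UDP
EG_TCP_CPORTS="21,25,80,443"   # outbound TCP
EG_UDP_CPORTS="20,21,53"       # outbound UDP
```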

And when we want a special port allowance for specific users, we can declare it easily. For example, if we want port 22 open for user ID 0, we'd use this code:

# Allow outbound access to destination port 22 for uid 0
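
The declaration itself uses a uid:port pairing, as in this sketch from a typical conf.apf:

```shell
# Allow uid 0 (root) to reach destination port 22 outbound
EG_TCP_UID="0:22"
```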

The next few sections on Remote Rule Imports and Global Trust are a little more specialized, and I encourage you to read a little more about them (since there's so much to them and not enough space to cover them here on the blog). An important feature of APF is that it imports block lists from outside sources to keep you safe from some attackers, so the Remote Rule Imports can prove to be very useful. The Global Trust section is incredibly useful for multi-server deployments of APF. Here, you can set up your global allow/block lists and have them all pull from a central location so that you can make a single update to the source and have the update propagated to all servers in your configuration. These changes are synced to the glob_allow/deny.rules files, and they will be downloaded (and overwritten) on a regular basis from your specified source, so don't make any manual edits in glob_allow/deny.rules.

As you can see, conf.apf is no joke. It has a lot of stuff going on, but it's very straightforward and documented well. Once we've set up conf.apf with the configurations we need, it's time to look at the more focused allow_hosts.rules and deny_hosts.rules files. These .rules files are where you put your typical firewall rules in place. If there's one piece of advice I can give you about these configurations, it would be to check if your traffic is already allowed or blocked. Having multiple rules that do the same thing (possibly in different places) is confusing and potentially dangerous.

The deny_hosts.rules configuration will look just like allow_hosts.rules, but it performs the opposite function. As an example, an allow_hosts.rules entry that lets the Nimsoft monitoring service function would follow the trust rule format. That format is fairly simple, and the file gives a little more context in its comments:

# The trust rules can be made in advanced format with 4 options
# (proto:flow:port:ip);
# 1) protocol: [packet protocol tcp/udp]
# 2) flow in/out: [packet direction, inbound or outbound]
# 3) s/d=port: [packet source or destination port]
# 4) s/d=ip(/xx) [packet source or destination address, masking supported]
# Syntax:
# proto:flow:[s/d]=port:[s/d]=ip(/mask)
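
Putting that syntax to work, here are a couple of illustrative entries — the addresses are placeholders, so substitute your own:

```shell
# Allow inbound TCP to destination port 22 from a management subnet
tcp:in:d=22:s=10.10.10.0/24

# Allow outbound TCP to destination port 3306 on a specific database host
tcp:out:d=3306:d=10.10.10.25
```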

APF also uses ds_hosts.rules to load the blocklist, and I assume the ecnshame_hosts.rules does something similar (can't find much information about it), so you won't need to edit these files manually. Additionally, you probably don't need to make any changes to log.rules, unless you want to make changes to what exactly you log. As it stands, it logs certain dropped connections, which should be enough. Also, it might be worth noting that this file is a script, not a configuration file.

The last two configuration files are the preroute.rules and postroute.rules that (unsurprisingly) are used to make routing changes. If you have been following my articles, this corresponds to the iptables chains for PREROUTING and POSTROUTING where you would do things like port forwarding and other advanced configuration that you probably don't want to do in most cases.

APF Command Line Management

As I mentioned in the "points to consider" at the top of this post, it's important to learn the changes you can perform from the command line, and APF has some very useful command line tools:

[root@server]# apf --help
APF version 9.7 <>
Copyright (C) 2002-2011, R-fx Networks <>
Copyright (C) 2011, Ryan MacDonald <>
This program may be freely redistributed under the terms of the GNU GPL
usage /usr/local/sbin/apf [OPTION]
-s|--start ......................... load all firewall rules
-r|--restart ....................... stop (flush) & reload firewall rules
-f|--stop .......................... stop (flush) all firewall rules
-l|--list .......................... list all firewall rules
-t|--status ........................ output firewall status log
-e|--refresh ....................... refresh & resolve dns names in trust rules
-a HOST CMT|--allow HOST COMMENT ... add host (IP/FQDN) to allow_hosts.rules and
                                     immediately load new rule into firewall
-d HOST CMT|--deny HOST COMMENT .... add host (IP/FQDN) to deny_hosts.rules and
                                     immediately load new rule into firewall
-u|--remove HOST ................... remove host from [glob]*_hosts.rules
                                     and immediately remove rule from firewall
-o|--ovars ......................... output all configuration options

You can use these command line tools to turn your firewall on and off, add allowed or blocked hosts and display troubleshooting information. These commands are very easy to use, but if you want more fine-tuned control, you'll need to edit the configuration files directly (as we looked at above).
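
To make that concrete, here's roughly what day-to-day use looks like. The addresses are placeholders from the documentation range, and comment handling can vary by version, so single-word comments are safest:

```shell
apf -a 10.10.10.5 trusted_office    # allow a host and load the rule immediately
apf -d 192.0.2.50 ssh_bruteforce    # deny an abusive host
apf -u 192.0.2.50                   # remove the rule later
```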

I know it seems like a lot of information, but to a large extent, that's all you need to know to get started with APF. Take each section slowly and understand what each configuration file is doing, and you'll master APF in no time at all.


January 28, 2013

Catalyst: In the Startup Sauna

Slush was a victim of its own success. In November 2012, the website home of Startup Sauna's early-stage startup conference was crippled by an unexpected flood of site traffic, and they had to take immediate action. Should they get a private MySQL instance from their current host to try to accommodate the traffic, or should they move their site to the SoftLayer cloud? Spoiler: You're reading this post on the SoftLayer Blog.

Let me back up for a second and tell you a little about Startup Sauna and Slush. Startup Sauna hosts (among other things) a Helsinki-based seed accelerator program for early-stage startup companies from Northern Europe and Russia. They run two five-week programs every year, with more than one hundred graduated companies to date. In addition to the accelerator program, Startup Sauna also puts on Slush, the biggest annual startup conference in Northern Europe. Slush was founded in 2008 with the intent to bring the local startup scene together at least once every year. Now — five years later — Slush brings more international investors and media to the region than any other event out there. This year alone, 3,500 entrepreneurs, investors and partners converged on Slush to make connections and see the region's most creative and innovative businesses, products and services.

Slush Conference

In October of last year, we met the founders of Startup Sauna, and it was clear that they would be a perfect fit to join Catalyst. We offer their portfolio companies free credits for cloud and dedicated hosting, and we really try to get to know the teams and alumni. Because Startup Sauna signed on just before Slush 2012 in November, they didn't want to rock the boat by moving their site to SoftLayer before the conference. Little did we know that they'd end up needing to make the transition during the conference.

When the event started, the Slush website was inundated with traffic. Attendees were checking the agenda and learning about some of the featured startups, and the live stream of the presentation brought record numbers of unique visitors and views. That's all great news ... Until those "record numbers" pushed the site's infrastructure to its limit. Startup Sauna CTO Lari Haataja described what happened:

The number of participants had definitely most impact on our operations. The Slush website was hosted on a standard webhotel (not by SoftLayer), and due to the tremendous traffic we faced some major problems. Everyone was busy during the first morning, and it took until noon before we had time to respond to the messages about our website not responding. Our Google Analytics were on fire, especially when Jolla took the stage to announce their big launch. We were streaming the whole program live, and anyone who wasn't able to attend the conference wanted to be the first to know about what was happening.

The Slush website was hosted on a shared MySQL instance with a limited number of open connections, so when those connections were maxed out (quickly) by site visitors from 134 different countries, database errors abounded. The Startup Sauna team knew that a drastic change was needed to get the site back online and accessible, so they provisioned a SoftLayer cloud server and moved their site to its new home. In less than two hours (much of the time being spent waiting for files to be downloaded and for DNS changes to be recognized), the site was back online and able to accommodate the record volume of traffic.

You've seen a few of these cautionary tales before on the SoftLayer Blog, and that's because these kinds of experiences are all too common. You dream about getting hundreds of thousands of visitors, but when those visitors come, you have to be ready for them. If you have an awesome startup and you want to learn more about the Startup Sauna, swing by Helsinki this week. SoftLayer Chief Strategy Officer George Karidis will be in town, and we plan on taking the Sauna family (and anyone else interested) out for drinks on January 31! Drop me a line in a comment here or over on Twitter, and I'll make sure you get details.


January 24, 2013


Research from the Aberdeen Group shows the average website is losing 9% of its business because the speed of the site frustrates visitors into leaving. 9% of your traffic might be leaving your site because they feel like it's too slow. That thought is staggering, and any site owner would be foolish not to fix the problem. SPEEDILICIOUS — one of our new Catalyst partners — has an innovative solution that optimizes website performance and helps businesses deliver content to their end users faster.


I recently had the chance to chat with SPEEDILICIOUS founders Seymour Segnit and Chip Krauskopf, and Seymour rephrased that "9%" statistic in a pretty alarming way: "Losing 9% of your business is the equivalent of simply allowing your website to go offline, down, dark, dead, 404 for over a MONTH each year!" There is ample data to back this up from high-profile sites like Amazon, Microsoft and, but intuitively, you know it already ... A slow site (even a slightly slow site) is annoying.

The challenge many website owners have when it comes to their loading speeds is that problems might not be noticeable from their own workstations. Thanks to caching and the Internet connections most of us have, when we visit our own sites, we don't have any trouble accessing our content quickly. Unfortunately, many of our customers don't share that experience when they visit our sites on mobile, hotel, airport and (worst of all) conference connections. The most common approach to speeding up load times is to throw bigger servers or a CDN (content delivery network) at the problem, but while those improvements make a difference, they only address part of the problem ... Even with the most powerful servers in SoftLayer's fleet, your page can load at a crawl if your code can't be rendered quickly by a browser.

That makes life as a website developer difficult. The process of optimizing code and tweaking settings to speed up load times can be time-consuming and frustrating. Or as Chip explained to me, "Speeding up your site is essential, and it shouldn't be slow and complicated. We fix that problem." Take a look:

The idea that your site performance can be sped up significantly overnight seems a little crazy, but if it works (which it clearly does), wouldn't it be crazier not to try it? SPEEDILICIOUS offers a $1 trial for you to see the results on your own site, and they regularly host a free webinar called "How to Grow Your Business 5-15% Overnight" which covers the critical techniques for speeding up any website.

As technology continues to improve and behavioral patterns of purchasing migrate away from the mall and onto our computers and smart phones, SPEEDILICIOUS has a tremendous opportunity to capture a ripe market. So they're clearly a great fit for Catalyst. If you're interested in learning more or would like to speak to Seymour, Chip or anyone on their team, please let me know and I'll make the direct introduction any time.


January 15, 2013

Startup Series: Moqups

Every member on the Catalyst team is given one simple goal: Find the most innovative and creative startups on the planet and get them on the SoftLayer network. We meet entrepreneurs at conferences and events around the world, we team up with the most influential startup accelerators and incubators, and we hunt for businesses who are making waves online. With the momentum Catalyst built in 2012, our message has started spreading exponentially faster than the community development team could spread it on our own, and now it seems like we've earned a few evangelists in the startup community. We have those evangelists to thank for bringing Moqups to our door.

In a Hacker News thread, a user posted about needing hosting for a server/startup, and a recommendation for the Catalyst program was one of the top-rated results. The founders of Moqups saw that recommendation, researched SoftLayer's hosting platform and submitted an application to become a Catalyst partner. As soon as we saw the unbelievable HTML5 app the Moqups team created to streamline and simplify the process of creating wireframes and mockups for website and application design, we knew they were a perfect fit to join the program.

If you've ever had to create a site prototype or UI mockup, you know how unwieldy the process can be. You want to sketch a layout and present it clearly and cleanly, but there aren't many viable resources between "marker on a whiteboard" and "rendering in Photoshop" to accomplish that goal. That's the problem the Moqups team set out to solve ... Can a web app provide the functionality and flexibility you'd need to fill that gap?

We put their answer to that question to the test. I told Kevin about Moqups and asked him to spend a few minutes wireframing the SoftLayer Blog ... About ten minutes later, he sent me this (Click for the full Moqups version):

SoftLayer Blog Moqup

Obviously, wireframing an existing design is easier than creating a new design from scratch, but Kevin said he was floored by how intuitive the Moqups platform made the process. In fact, the "instructions" for how to use Moqups are actually provided in an example "Quick Introduction to Moqups" project on the home page. That example project allows you to tweak, add and adjust content to understand how the platform works, and because it's all done in HTML5, the user experience is seamless.


Put it to the test for yourself: How long will it take you to create a wireframe of your existing website (similar to what Kevin did with the SoftLayer Blog)? You have down-to-the-pixel precision, you can group objects together, and Moqups helps you line up or center all of the different pieces of your site. Their extensive library of stencils supplements any custom images you upload, so you can go through the whole process of creating a site mockup without "drawing" anything by hand!

I'm actually surprised that the Moqups team heard about SoftLayer before our community development team heard about them ... In November, I was in Bucharest, Romania, for HowtoWeb, so I was right in their back yard! Central and Eastern European startups are blowing up right now, and Moqups is a perfect example of what we're seeing from that region in EMEA.

Oh, and if you know of a crazy cool startup like Moqups that could use a little hosting help from SoftLayer, tell them about Catalyst!


January 10, 2013

Web Development - JavaScript Packaging

If you think of JavaScript as the ugly duckling of programming languages, think again! It got a bad rap in the earlier days of the web because developers knew enough just to get by but didn't really respect it like they did Java, PHP or .Net. Like other well-known and heavily used languages, JavaScript contains various data types (String, Boolean, Number, etc.), objects and functions, and it is even capable of inheritance. Unfortunately, that functionality is often overlooked, and many developers seem to implement it as an afterthought: "Oh, we need to add some neat jQuery effects over there? I'll just throw some inline JavaScript here." That kind of implementation perpetuates a stereotype that JavaScript code is unorganized and difficult to maintain, but it doesn't have to be! I'm going to show you how easy it is to maintain and organize your code base by packaging your JavaScript classes into a single file to be included with your website.

There are a few things to cover before we jump into code:

  1. JavaScript Framework - Mootools is my framework of choice, but you can use whatever JavaScript framework you'd like.
  2. Classes - Because I see JavaScript as another programming language that I respect (and is capable of object-oriented-like design), I write classes for EVERYTHING. Don't think of your JavaScript code as something you use once and throw away. Write your code to be generic enough to be reused wherever it's placed. Object-oriented design is great for this! Mootools makes object-oriented design easy to do, so this point reinforces the point above.
  3. Class Files - Just like you'd organize your PHP to contain one class per file, I do the exact same thing with JavaScript. Note: Each of the class files in the example below uses the class name appended with .js.
  4. Namespacing - I will be organizing my classes in a way that will only add a single property — PT — to the global namespace. I won't get into the details of namespacing in this blog because I'm sure you're already thinking, "The code! The code! Get on with it!" You can namespace whatever is right for your situation.

For this example, our classes will be food-themed because ... well ... I enjoy food. Let's get started by creating our base object:

/*
---
name: PT
description: The base class for all the custom classes
authors: [Philip Thompson]
provides: [PT]
...
*/
var PT = {};

We now have an empty object from which we'll build all of our classes. I'll go into more detail about the comment section later, but for now, let's build our first class: PT.Ham.

/*
---
name: PT.Ham
description: The ham class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Ham]
...
*/
(function() {
    PT.Ham = new Class({
        // Custom code here...
    });
})();
As I mentioned in point three (above), PT.Ham should be saved in the file named PT.Ham.js. When we create our second class, PT.Pineapple, we'll store it in PT.Pineapple.js:

/*
---
name: PT.Pineapple
description: The pineapple class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Pineapple]
...
*/
(function() {
    PT.Pineapple = new Class({
        // Custom code here...
    });
})();
Our final class for this example will be PT.Pizza (I'll let you guess the name of the file where PT.Pizza lives). Our PT.Pizza class will require that PT, PT.Ham and PT.Pineapple be present.

/*
---
name: PT.Pizza
description: The pizza class
authors: [Philip Thompson]
requires: [/PT, /PT.Ham, /PT.Pineapple]
provides: [PT.Pizza]
...
*/
(function() {
    PT.Pizza = new Class({
        // Custom code here that uses PT.Ham and PT.Pineapple...
    });
})();
Before we go any further, let's check out the comments we include above each of the classes. The comments are formatted for YAML — YAML Ain't Markup Language (you gotta love recursive acronyms). These comments allow our parser to determine how our classes are related, and they help resolve dependencies. YAML's pretty easy to learn, and you only need to know a few basic features to use it. The YAML comments in this example are essential for our JavaScript package-manager — Packager. I won't go into all the details about Packager, but I'll simply mention a few commands that we'll need to build our single JavaScript file.

In addition to the YAML comments in each of the class files, we also need to create a YAML file that will organize our code. This file — package.yml for this example — is used to load our separate JavaScript classes:

name: "PT"
description: "Provides our fancy PT classes"
authors: "[Philip Thompson]"
version: "1.0.0"
sources:
    - js/PT.js
    - js/PT.Ham.js
    - js/PT.Pineapple.js
    - js/PT.Pizza.js

package.yml shows that all of our PT* files are located in the js directory, one directory up from the package.yml file. Some of the properties in the YAML file are optional, and you can add much more detail if you'd like, but this will get the job done for our purposes.

Now we're ready to turn back to Packager to build our packaged file. Packager includes an option to use PHP, but we're just going to do it from the command line. First, we need to register the new package (package.yml) we created for PT. If our JavaScript files are located in /path/to/web/directory/js, the package.yml file is in /path/to/web/directory:

./packager register /path/to/web/directory

This finds our package.yml file and registers our PT package. Now that we have our package registered, we can build it:

./packager build * > /path/to/web/directory/js/PT.all.js

The Packager sees that our PT package is registered, so it looks at each of the individual class files to build a single large file. In the comments of each of the class files, it determines if there are dependencies and warns you if any are not found.

It might seem like a lot of work when it's written out like this, but I can assure you that when you go through the process, it takes no time at all. The huge benefit of packaging our JavaScript is evident as soon as you start incorporating those JavaScript classes into your website ... Because we have built all of our class files into a single file, we don't need to include each of the individual JavaScript files into our website (much less include the inline JavaScript declarations that make you cringe). To streamline your implementation even further if you're using your JavaScript package in a production deployment, I recommend that you "minify" your code as well.

See ... Organized code is no longer just for server-side languages. Treat your JavaScript kindly, and it will be your friend!

Happy coding!


January 8, 2013

Startup Series: Bright Funds

Did you ever see The Beach with Leonardo DiCaprio? You know ... The one about a community of world-shunners who live in a paradisiacal enclave on a beautiful white-sand beach. The people in that community were purists — altruistic types who believed in the possibilities of living a simple life based on community support of the individual and the individual's reciprocal support and dedication to the community. Recently, I walked into Hattery — a co-working space in SF — and found a similarly tight-knit community that immediately reminded me of that movie. Hattery is "off the radar" to a certain extent, and that's largely because the collaborative environment and culture are what drive the incredible group of entrepreneurs who work there. The prerequisites for a desk there seem to be endless passion and an ambitious vision, so I shouldn't be surprised that Bright Funds calls it home.

Bright Funds is a business that was created to provide users the ability to easily invest in complete solutions for the causes they care about. After signing on as a Catalyst partner, Bright Funds co-founders Ty Walrod and Rutul Davé invited me to lunch at the Hattery office, and I immediately accepted so I could learn more about what they are up to. Having been involved in the tech startup world for a while now, I knew that I'd be meeting two very special entrepreneurs with big hearts and even BIGGER tech startup street cred.

Rutul and Ty were not content with their user experience (UX) when it came to giving to charities and helping solve some of the world's biggest problems. They noticed that little effort had been invested in providing donors with tools to make the act of giving both enjoyable and highly effective, so they took action. Bright Funds was created to redefine and refocus the experience of "giving to charity" ... Giving shouldn't just involve going through the motions of transferring funds from our bank accounts. They built a new giving platform to be more intuitive, rewarding and enlightening, and they did an unbelievable job.

Think of the last time you had a great user experience: An interaction that was as enjoyable as it was effective. Aesthetics play a big role, and when those aesthetics make doing what you want to do easier and more satisfying, you've got an awesome UX. The best user experiences involve empowering users to make informed and intelligent choices by providing them what they need and getting out of the way. Often, UX is used for site design or application metrics, but Bright Funds took the concept and used it to create an elegant and simple business model:

Bright Funds was designed to create a giving experience with an intuitive flow in mind. Instead of just writing checks or handing over cash to a charity, the experience of giving through Bright Funds is interactive and didactic. You manage your giving like you would a mutual fund portfolio — you decide what percentage of your giving should go to which types of vetted and validated causes, and you get regular performance updates from the charities you support. I want to help save the environment. I want to give clean water to all. I want to empower the underserved. I want to educate the world. You choose which causes you want to prioritize, and Bright Funds channels your giving to the most effective organizations serving the greatest needs in the world today.

Bright Funds

Instead of focusing on individual nonprofits, you support causes and issues that matter most to you. In that sense, Bright Funds takes a unique approach to charitable giving, and it's a powerful force in making a difference. Visit Bright Funds for more information, and get started by building your own 'Impact Portfolio.' If you're curious about what mine looks like, check it out:

Bright Funds Impact Portfolio

What does yours look like?


This is a startup series post about Bright Funds, a SoftLayer Catalyst Program participant.
About Bright Funds:
Bright Funds is a better way to give. Individuals and employees at companies with gift matching programs create personalized giving portfolios and contribute to thoroughly researched funds of highly effective nonprofits, all working to address the greatest challenges of our time. In one platform, Bright Funds brings together the power of research, the reliability of a trusted financial service, and the convenience of a secure, cloud-based platform with centralized contributions, integrated matching, and simple tax reporting.
December 31, 2012

FatCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Ian Miller, CEO of FatCloud. FatCloud is a cloud-enabled application platform that allows enterprises to build, deploy and manage next-generation .NET applications.

'The Cloud' and Agility

As the CEO of a cloud-enabled application platform for the .NET community, I get the same basic question all the time: "What is the cloud?" I'm a consumer of cloud services and a supplier of software that helps customers take advantage of the cloud, so my answer to that question has evolved over the years, and I've come to realize that the cloud is fundamentally about agility. The growth, evolution and adoption of cloud technology have been fueled by businesses that don't want to worry about infrastructure and need to pivot or scale quickly as their needs change.

Because FatCloud is a consumer of cloud infrastructure from SoftLayer, we are much more nimble than we'd be if we had to worry about building data centers, provisioning hardware, patching software and doing all the other time-consuming tasks that are involved in managing a server farm. My team can focus on building innovative software with confidence that the infrastructure will be ready for us on-demand when we need it. That peace of mind also happens to be one of the biggest reasons developers turn to FatCloud ... They don't want to worry about configuring the fundamental components of the platform under their applications.

Fat Cloud

Our customers trust FatCloud's software platform to help them build and scale their .NET applications more efficiently. To do this, we provide a Core Foundation of .NET WCF services that effectively provides the "plumbing" for .NET cloud computing, and we offer premium features like a distributed NoSQL database, work queue, file storage/management system, content caching and an easy-to-use administration tool that simplifies managing the cloud for our customers. FatCloud makes developing for hundreds of servers as easy as developing for one, and to prove it, we offer a free 3-node developer edition so that potential customers can see for themselves.

FatCloud Offering

The agility of the cloud has the clearest value for a company like ours. In one heavy-duty testing month, we needed 75 additional servers online, and after that testing was over, we needed the elasticity to scale that infrastructure back down. We're able to adjust our server footprint as we balance our computing needs and work within budget constraints. Ten years ago, that would have been overwhelmingly expensive (if not impossible). Today, we're able to do it economically and in real-time. SoftLayer is helping keep FatCloud agile, and FatCloud passes that agility on to our customers.

Companies developing custom software for the cloud, mobile or web using .NET want a reliable foundation to build from, and they want to be able to bring their applications to market faster. With FatCloud, those developers can complete their projects in about half the time it would take them if they were to develop conventionally, and that speed can be a huge competitive differentiator.

The expensive "scale up" approach of buying and upgrading powerful machines for something like SQL Server is out-of-date now. The new kid in town is the "scale out" approach of using low-cost servers to expand infrastructure horizontally. You'll never run into those "scale up" hardware limitations, and you can build a dynamic, scalable and elastic application much more economically. You can be agile.

If you have questions about how FatCloud and SoftLayer make cloud-enabled .NET development easier, send us an email: Our team is always happy to share the easy (and free) steps you can take to start taking advantage of the agility the cloud provides.

-Ian Miller, CEO of FatCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New partners will be added to the Marketplace each month, so stay tuned for many more to come.
December 30, 2012

Risk Management: Event Logging to Protect Your Systems

The calls start rolling in at 2am on Sunday morning. Alerts start firing off. Your livelihood is in grave danger. It doesn't come with the fanfare of a blockbuster Hollywood thriller, but if a server hosting your critical business infrastructure is attacked, becomes compromised or fails, it might feel like the end of the world. In our Risk Management series, we've covered the basics of securing your servers, so the next consideration we need to make is what happens when that security is circumvented.

It seems silly to prepare for a failure in a security plan we spend time and effort creating, but if we stick our heads in the sand and tell ourselves that we're secure, we won't be prepared in the unlikely event of something happening. Every attempt to mitigate risks and stop threats in their tracks will be circumvented by the one failure, threat or disaster you didn't cover in your risk management plan. When that happens, accurate event logging will help you record what happened, respond to the event (if it's still in progress) and have the information available to properly safeguard against or prevent similar threats in the future.

Like any other facet of security, "event logging" can seem overwhelming and unforgiving if you're looking at hundreds of types of events to log, each with dozens of variations and options. Like we did when we looked at securing servers, let's focus our attention on a few key areas and build out what we need:

Which events should you log?
Look at your risk assessment and determine which systems are of the highest value or could cause the most trouble if interrupted. Those systems are likely to be what you prioritized when securing your servers, and they should also take precedence when it comes to event logging. You probably don't have unlimited compute and storage resources, so you have to determine which types of events are most valuable for you and how long you should keep records of them — it's critical to have your event logs on-hand when you need them, so logs should be retained online for a period of time and then backed up offline to be available for another period of time.

Your goal is to understand what's happening on your servers and why it's happening so you know how to respond. The most common auditable events include successful and unsuccessful account log-on events, account management events, object access, policy change, privilege functions, process tracking and system events. The most conservative approach involves logging more information/events and keeping those logs for longer than you think you need. From there, you can evaluate your logs periodically to determine if the level of auditing/logging needs to be adjusted.

Where do you store the event logs?
Your event logs won't do you any good if they are stored in a space that is insufficient for the amount of data you need to collect. I recommend centralizing your logs in a secure environment that is both readily available and scalable. In addition to the logs being accessible when the server(s) they are logging are inaccessible, aggregating and organizing your logs in a central location can be a powerful tool to build reports and analyze trends. With that information, you'll be able to more clearly see deviations from normal activity to catch attacks (or attempted attacks) in progress.

How do you protect your event logs?
Attacks can come from both inside and out. To avoid intentional malicious activity by insiders, separation of duties should be enforced when planning logging. Learn from The X Files and "Trust no one." Someone who has been granted the 'keys to your castle' shouldn't also be able to disable the castle's security system or mess with the castle's logs. Your network engineer shouldn't have exclusive access to your router logs, and your sysadmin shouldn't be the only one looking at your web server logs.

Keep consistent time.
Make sure all of your servers are using the same accurate time source. That way, all logs generated from those servers will share consistent time-stamps. Trying to diagnose an attack or incident is dramatically more difficult if your web server's clock isn't synced with your database server's clock or if they're set to different time zones. You're putting a lot of time and effort into logging events, so you're shooting yourself in the foot if events across all of your servers don't line up cleanly.
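As a quick illustration of why consistent, UTC-anchored timestamps matter, consider the same event logged by two servers in different time zones (the timestamps below are made up). The entries only line up once both are normalized to a common reference:

```javascript
// Two log entries for the "same" event: one server stamps in local
// time with a UTC offset, the other stamps in UTC.
var webEvent = Date.parse('2012-12-30T01:00:00-06:00'); // app server, UTC-6
var dbEvent  = Date.parse('2012-12-30T07:00:00Z');      // database server, UTC

// Normalized to epoch milliseconds, they describe the same instant
console.log(webEvent === dbEvent); // true
```

If either server's clock had drifted by even a few seconds, correlating these two entries during an incident would turn into guesswork, which is why syncing every machine to the same time source is worth the minor setup effort.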

Read your logs!
Logs won't do you any good if you're not looking at them. Know the red flags to look for in each of your logs, and set aside time to look for those flags regularly. Several SoftLayer customers — like Tech Partner Papertrail — have come up with innovative and effective log management platforms that streamline the process of aggregating, searching and analyzing log files.

It's important to reiterate that logging — like any other security endeavor — is not a 'one size fits all' model, but that shouldn't discourage you from getting started. If you aren't logging or you aren't actively monitoring your logs, any step you take is a step forward, and each step is worth the effort.

Thanks for reading, and stay secure, my friends!


December 27, 2012

Using SoftLayer Object Storage to Back Up Your Server

Before I came to my senses and moved my personal servers to SoftLayer, I was one of many victims of a SolusVM exploit that resulted in the wide-scale attack of many nodes in my previous host's Chicago data center. While I'm a firm believer in backing up my data, I could not have foreseen the situation I was faced with: Not only was my server in one data center compromised with all of its data deleted, but my backup server in one of the host's other data centers was also attacked ... This left me with old, stale backups on my local computer and not much else. I quickly relocated my data and decided that I should use SoftLayer Object Storage to supplement and improve upon my backup and disaster recovery plans.

With the SoftLayer Object Storage Python client set up and the SoftLayer Object Storage backup script in hand, I had the tools I needed to build a solid backup infrastructure easily. I contributed an article about how to perform MySQL backups with those resources, so the database piece is handled, but I also need to back up my web files, so I whipped up another quick bash script to run:

#!/bin/bash
# The path the backups will be dumped to (example value; adjust for your setup)
DUMP_DIR="/home/backups/"
# Path to the web files to be backed up
BACKUP_PATH="/var/www/sites/"
# Back up folder name (mmddyyyy)
BACKUP_DIR="`date +%m%d%Y`"
# Backup File Name
DUMP_FILE="`date +%m_%d_%Y_%H_%M_%S`_site_files"
# SL container name (example value; use your own container)
CONTAINER="site_backups"

# Create backup dir if it doesn't exist
if [ ! -d $DUMP_DIR$BACKUP_DIR ]; then
        mkdir -p $DUMP_DIR$BACKUP_DIR
fi

# Archive the web files
tar -czf $DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz $BACKUP_PATH

# Make sure the archive exists
if [ -f $DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz ]; then
        # Push the archive to Object Storage, retaining each backup for 30 days
        /root/ -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30
        # Remove the backup stored locally
        rm -rf $DUMP_DIR$BACKUP_DIR
        # Success
        exit 0
else
        echo "$DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz does not exist."
        exit 1
fi

It's not the prettiest bash script, but it gets the job done. By tweaking a few variables, you can easily generate backups for any important directory of files and push them to your SoftLayer Object Storage account. If you want to change the retention time of your backups to be longer or shorter, you can change the 30 after the -r in the line below to the number of days you want to keep each backup:

/root/ -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30

I created a script for each website on my server, and I set a CRON (crontab -e) entry to run each one on Sundays staggered by 5 minutes:

5 1 * * 0  /root/bin/cron/CRON-site1.com_web_files > /dev/null
10 1 * * 0  /root/bin/cron/CRON-site2.com_web_files > /dev/null
15 1 * * 0  /root/bin/cron/CRON-site3.com_web_files > /dev/null 

If you're looking for an easy way to automate and solidify your backups, this little bit of code could make life easier on you. Had I taken the few minutes to put this script together prior to the attack I experienced at my previous host, I wouldn't have lost any of my data. It's easy to get lulled into "backup apathy" when you don't need your backups, but just because nothing *has* happened to your data doesn't mean nothing *can* happen to your data.

Take it from me ... Be over-prepared and save yourself a lot of trouble.


