culture

January 29, 2013

iptables Tips and Tricks: APF (Advanced Policy Firewall) Configuration

Let's talk about APF. APF — Advanced Policy Firewall — is a policy-based iptables firewall system that provides simple, powerful control over your day-to-day server security. It might seem intimidating to be faced with all of the features and configuration tools in APF, but this blog should put your fears to rest.

APF is an iptables wrapper that works alongside iptables and extends its functionality. I personally don't use iptables wrappers, but I have a lot of experience with them, and I've seen that they do offer some additional features that streamline policy management. For example, by employing APF, you'll get several simple on/off toggles (set via configuration files) that make some complex iptables configurations available without extensive coding requirements. The flip-side of a wrapper's simplicity is that you aren't directly in control of the iptables commands, so if something breaks it might take longer to diagnose and repair. Before you add a wrapper like APF, be sure that you know what you are getting into. Here are a few points to consider:

  • Make sure that what you're looking to use adds a feature you need but cannot easily incorporate with iptables on its own.
  • You need to know how to effectively enable and disable the iptables wrapper (the correct way ... read the manual!), and you should always have a trusted failsafe iptables ruleset handy in the unfortunate event that something goes horribly wrong and you need to disable the wrapper.
  • Learn about the basic configurations and rule changes you can apply via the command line. You'll need to understand the way your wrapper takes rules because it may differ from the way iptables handles rules.
  • You can't manually configure your iptables rules once you have your wrapper in place (or at least you shouldn't).
  • Be sure to know how to access your server via the IPMI management console so that if you completely lock yourself out beyond repair, you can get back in. You might even go so far as to have a script or set of instructions ready for tech support to run, in the event that you can't get in via the management console.

TL;DR: Have a Band-Aid ready!
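
To make that Band-Aid concrete, here's a minimal sketch of what a trusted failsafe ruleset might look like. It's only an illustration that assumes SSH on port 22; adjust the ports and default policies for your own environment before you rely on it:

#!/bin/sh
# Hypothetical failsafe: flush whatever the wrapper left behind and fall back
# to a minimal policy that keeps SSH reachable.
iptables -F
iptables -X
iptables -P OUTPUT ACCEPT
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# Set the restrictive default policies last so we don't cut off our own session.
iptables -P INPUT DROP
iptables -P FORWARD DROP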

APF Configuration

Now that you have been sufficiently advised about the potential challenges of using a wrapper (and you've got your Band-Aid ready), we can check out some of the useful APF rules that make iptables administration a lot easier. Most of the configuration for APF is in conf.apf. This file handles the default behavior, but not necessarily the specific blocking rules, and when we make any changes to the configuration, we'll need to restart the APF service for the changes to take effect.

Let's jump into conf.apf and break down what we see. The first code snippet is fairly self-explanatory. It's another way to make sure you don't lock yourself out of your server as you are making configuration changes and testing them:

# !!! Do not leave set to (1) !!!
# When set to enabled; 5 minute cronjob is set to stop the firewall. Set
# this off (0) when firewall is determined to be operating as desired.
DEVEL_MODE="1"

The next configuration options we'll look at are where you can make quick high-level changes if you find that legitimate traffic is being blocked and you want to make APF a little more lenient:

# This controls the amount of violation hits an address must have before it
# is blocked. It is a good idea to keep this very low to prevent evasive
# measures. The default is 0 or 1, meaning instant block on first violation.
RAB_HITCOUNT="1"
 
# This is the amount of time (in seconds) that an address gets blocked for if
# a violation is triggered, the default is 300s (5 minutes).
RAB_TIMER="300"
# This allows RAB to 'trip' the block timer back to 0 seconds if an address
# attempts ANY subsequent communication while still in the initial block period.
RAB_TRIP="1"
 
# This controls if the firewall should log all violation hits from an address.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_HIT="1"
 
# This controls if the firewall should log all subsequent traffic from an address
# that is already blocked for a violation hit; this can generate a lot of logs.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_TRIP="0"

Next, we have an option to adjust ICMP flood protection. This protection should be useful against some forms of DoS attacks, and the associated rules show up in your INPUT chain:

# Set a reasonable packet/time ratio for ICMP packets, exceeding this flow
# will result in dropped ICMP packets. Supported values are in the form of:
# pkt/s (packets/seconds), pkt/m (packets/minutes)
# Set value to 0 for unlimited, anything above is enabled.
ICMP_LIM="30/s"
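
For context, that setting is conceptually similar to a plain iptables rate limit. The snippet below is only an illustration of the idea, not necessarily the exact rule APF generates:

# Illustrative only: roughly what a 30 packets/second ICMP limit looks like
# in raw iptables; APF's generated rules may differ in detail.
iptables -A INPUT -p icmp -m limit --limit 30/s --limit-burst 30 -j ACCEPT
iptables -A INPUT -p icmp -j DROP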

If you want to add more ports to block for p2p traffic (which will show up in the P2P chain), you'll update this code:

# A common set of known Peer-To-Peer (p2p) protocol ports that are often
# considered undesirable traffic on public Internet servers. These ports
# are also often abused on web hosting servers where clients upload p2p
# client agents for the purpose of distributing or downloading pirated media.
# Format is comma separated for single ports and an underscore separator for
# ranges (4660_4678).
BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778"

The next few lines let you designate the ports that you want to have closed at all times. They will be blocked for INPUT and OUTPUT chains:

# These are common Internet service ports that are understood in the wild
# services you would not want logged under normal circumstances. All ports
# that are defined here will be implicitly dropped with no logging for
# TCP/UDP traffic inbound or outbound. Format is comma separated for single
# ports and an underscore separator for ranges (135_139).
BLK_PORTS="135_139,111,513,520,445,1433,1434,1234,1524,3127"

The next important section to look at deals with conntrack. If you get "conntrack full" errors, this is where you'd increase the allowed connections. It's not uncommon to need more connections than the default, so if you need to adjust that value, you'd do it here:

# This is the maximum number of "sessions" (connection tracking entries) that
# can be handled simultaneously by the firewall in kernel memory. Increasing
# this value too high will simply waste memory - setting it too low may result
# in some or all connections being refused, in particular during denial of
# service attacks.
SYSCTL_CONNTRACK="65536"

We've talked about the ports we want closed at all times, so it only makes sense that we'd specify which ports we want open for all interfaces:

# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22"
# Common inbound (ingress) UDP ports
IG_UDP_CPORTS=""
# Common outbound (egress) TCP ports
EG_TCP_CPORTS="21,25,80,443,43"
# Common outbound (egress) UDP ports
EG_UDP_CPORTS="20,21,53"
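
For example, if this server were also running a web server, a hypothetical tweak would simply add the relevant inbound ports to the list:

# Common inbound (ingress) TCP ports (hypothetical example: SSH plus HTTP/HTTPS)
IG_TCP_CPORTS="22,80,443"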

And when we want a special port allowance for specific users, we can declare it easily. For example, if we want to allow outbound access to destination port 22 for user ID 0 (root), we'd use this code:

# Allow outbound access to destination port 22 for uid 0
EG_TCP_UID="0:22"

The next few sections on Remote Rule Imports and Global Trust are a little more specialized, and I encourage you to read more about them (there's too much to them to cover here on the blog). An important feature of APF is that it imports block lists from outside sources to keep you safe from known attackers, so the Remote Rule Imports can prove to be very useful. The Global Trust section is incredibly useful for multi-server deployments of APF. Here, you can set up your global allow/block lists and have them all pull from a central location so that you can make a single update to the source and have that update propagated to all servers in your configuration. These changes are synced to the glob_allow.rules and glob_deny.rules files, which are downloaded (and overwritten) on a regular basis from your specified source, so don't make any manual edits in them.

As you can see, conf.apf is no joke. It has a lot going on, but it's very straightforward and well documented. Once we've set up conf.apf with the configurations we need, it's time to look at the more focused allow_hosts.rules and deny_hosts.rules files. These .rules files are where you put your typical firewall rules in place. If there's one piece of advice I can give you about these configurations, it's to check whether your traffic is already allowed or blocked before you add a rule. Having multiple rules that do the same thing (possibly in different places) is confusing and potentially dangerous.

The deny_hosts.rules configuration will look just like allow_hosts.rules, but it's performing the opposite function. Let's check out an allow_hosts.rules configuration that will allow the Nimsoft service to function:

tcp:in:d=48000_48020:s=10.0.0.0/8
tcp:out:d=48000_48020:d=10.0.0.0/8

The format is fairly simple, but the file gives a little more context in the comments:

# The trust rules can be made in advanced format with 4 options
# (proto:flow:port:ip);
# 1) protocol: [packet protocol tcp/udp]
# 2) flow in/out: [packet direction, inbound or outbound]
# 3) s/d=port: [packet source or destination port]
# 4) s/d=ip(/xx) [packet source or destination address, masking supported]
# Syntax:
# proto:flow:[s/d]=port:[s/d]=ip(/mask)
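
Following that syntax, a couple of hypothetical deny_hosts.rules entries (using documentation-only addresses) might look like this:

# Drop inbound SSH attempts from one subnet
tcp:in:d=22:s=192.0.2.0/24
# Drop inbound DNS queries from a single host
udp:in:d=53:s=198.51.100.25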

APF also uses ds_hosts.rules to load the DShield.org blocklist, and I assume the ecnshame_hosts.rules does something similar (can't find much information about it), so you won't need to edit these files manually. Additionally, you probably don't need to make any changes to log.rules, unless you want to make changes to what exactly you log. As it stands, it logs certain dropped connections, which should be enough. Also, it might be worth noting that this file is a script, not a configuration file.

The last two configuration files are preroute.rules and postroute.rules, which (unsurprisingly) are used to make routing changes. If you've been following my articles, you'll recognize that these correspond to the iptables PREROUTING and POSTROUTING chains, where you'd do things like port forwarding and other advanced configuration that you probably won't need in most cases.

APF Command Line Management

As I mentioned in the "points to consider" at the top of this post, it's important to learn the changes you can perform from the command line, and APF has some very useful command line tools:

[root@server]# apf --help
APF version 9.7 <apf@r-fx.org>
Copyright (C) 2002-2011, R-fx Networks <proj@r-fx.org>
Copyright (C) 2011, Ryan MacDonald <ryan@r-fx.org>
This program may be freely redistributed under the terms of the GNU GPL
 
usage /usr/local/sbin/apf [OPTION]
-s|--start ......................... load all firewall rules
-r|--restart ....................... stop (flush) & reload firewall rules
-f|--stop .......................... stop (flush) all firewall rules
-l|--list .......................... list all firewall rules
-t|--status ........................ output firewall status log
-e|--refresh ....................... refresh & resolve dns names in trust rules
-a HOST CMT|--allow HOST COMMENT ... add host (IP/FQDN) to allow_hosts.rules and
                                     immediately load new rule into firewall
-d HOST CMT|--deny HOST COMMENT .... add host (IP/FQDN) to deny_hosts.rules and
                                     immediately load new rule into firewall
-u|--remove HOST ................... remove host from [glob]*_hosts.rules
                                     and immediately remove rule from firewall
-o|--ovars ......................... output all configuration options

You can use these command line tools to turn your firewall on and off, add allowed or blocked hosts and display troubleshooting information. These commands are very easy to use, but if you want more fine-tuned control, you'll need to edit the configuration files directly (as we looked at above).
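
As a quick sketch using the flags from the help output above (the addresses and comments here are made up), day-to-day use looks something like this:

# Block a problem IP and record why
apf -d 192.0.2.15 bruteforce
# Trust your office address
apf -a 198.51.100.7 office
# Remove that entry (and its firewall rule) later
apf -u 192.0.2.15
# Check what the firewall has been doing
apf -t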

I know it seems like a lot of information, but to a large extent, that's all you need to know to get started with APF. Take each section slowly and understand what each configuration file is doing, and you'll master APF in no time at all.

-Mark

January 28, 2013

Catalyst: In the Startup Sauna and Slush

Slush.fi was a victim of its own success. In November 2012, the website home of Startup Sauna's early-stage startup conference was crippled by an unexpected flood of site traffic, and they had to take immediate action. Should they get a private MySQL instance from their current host to try to accommodate the traffic, or should they move their site to the SoftLayer cloud? Spoiler (highlight for clue): You're reading this post on the SoftLayer Blog.

Let me back up for a second and tell you a little about Startup Sauna and Slush. Startup Sauna hosts (among other things) a Helsinki-based seed accelerator program for early-stage startup companies from Northern Europe and Russia. They run two five-week programs every year, with more than one hundred graduated companies to date. In addition to the accelerator program, Startup Sauna also puts on Slush, the biggest annual startup conference in Northern Europe. Slush was founded in 2008 with the intent to bring the local startup scene together at least once a year. Now — five years later — Slush brings more international investors and media to the region than any other event out there. This year alone, 3,500 entrepreneurs, investors and partners converged on Slush to make connections and see the region's most creative and innovative businesses, products and services.

Slush Conference

In October of last year, we met the founders of Startup Sauna, and it was clear that they would be a perfect fit to join Catalyst. We offer their portfolio companies free credits for cloud and dedicated hosting, and we really try to get to know the teams and alumni. Because Startup Sauna signed on just before Slush 2012 in November, they didn't want to rock the boat by moving their site to SoftLayer before the conference. Little did we know that they'd end up needing to make the transition during the conference.

When the event started, the Slush website was inundated with traffic. Attendees were checking the agenda and learning about some of the featured startups, and the live stream of the presentation brought record numbers of unique visitors and views. That's all great news ... Until those "record numbers" pushed the site's infrastructure to its limit. Startup Sauna CTO Lari Haataja described what happened:

The number of participants had definitely most impact on our operations. The Slush website was hosted on a standard webhotel (not by SoftLayer), and due to the tremendous traffic we faced some major problems. Everyone was busy during the first morning, and it took until noon before we had time to respond to the messages about our website not responding. Our Google Analytics were on fire, especially when Jolla took the stage to announce their big launch. We were streaming the whole program live, and anyone who wasn't able to attend the conference wanted to be the first to know about what was happening.

The Slush website was hosted on a shared MySQL instance with a limited number of open connections, so when those connections were maxed out (quickly) by site visitors from 134 different countries, database errors abounded. The Startup Sauna team knew that a drastic change was needed to get the site back online and accessible, so they provisioned a SoftLayer cloud server and moved their site to its new home. In less than two hours (much of the time being spent waiting for files to be downloaded and for DNS changes to be recognized), the site was back online and able to accommodate the record volume of traffic.

You've seen a few of these cautionary tales before on the SoftLayer Blog, and that's because these kinds of experiences are all too common. You dream about getting hundreds of thousands of visitors, but when those visitors come, you have to be ready for them. If you have an awesome startup and you want to learn more about the Startup Sauna, swing by Helsinki this week. SoftLayer Chief Strategy Officer George Karidis will be in town, and we plan on taking the Sauna family (and anyone else interested) out for drinks on January 31! Drop me a line in a comment here or over on Twitter, and I'll make sure you get details.

-@EmilyBlitz

January 24, 2013

Startup Series: SPEEDILICIOUS

Research from the Aberdeen Group shows the average website is losing 9% of its business because the speed of the site frustrates visitors into leaving. 9% of your traffic might be leaving your site because they feel like it's too slow. That thought is staggering, and any site owner would be foolish not to fix the problem. SPEEDILICIOUS — one of our new Catalyst partners — has an innovative solution that optimizes website performance and helps businesses deliver content to their end users faster.

SPEEDILICIOUS

I recently had the chance to chat with SPEEDILICIOUS founders Seymour Segnit and Chip Krauskopf, and Seymour rephrased that "9%" statistic in a pretty alarming way: "Losing 9% of your business is the equivalent of simply allowing your website to go offline, down, dark, dead, 404 for over a MONTH each year!" There is ample data to back this up from high-profile sites like Amazon, Microsoft and Walmart.com, but intuitively, you know it already ... A slow site (even a slightly slow site) is annoying.

The challenge many website owners have when it comes to their loading speeds is that problems might not be noticeable from their own workstations. Thanks to caching and the Internet connections most of us have, when we visit our own sites, we don't have any trouble accessing our content quickly. Unfortunately, many of our customers don't share that experience when they visit our sites over mobile, hotel, airport and (worst of all) conference connections. The most common approach to speeding up load times is to throw bigger servers or a CDN (content delivery network) at the problem, but while those improvements make a difference, they only address part of the problem ... Even with the most powerful servers in SoftLayer's fleet, your page can load at a crawl if your code can't be rendered quickly by a browser.

That makes life as a website developer difficult. The process of optimizing code and tweaking settings to speed up load times can be time-consuming and frustrating. Or as Chip explained to me, "Speeding up your site is essential, it shouldn't be slow and complicated. We fix that problem." Take a look:

The idea that your site performance can be sped up significantly overnight seems a little crazy, but if it works (which it clearly does), wouldn't it be crazier not to try it? SPEEDILICIOUS offers a $1 trial for you to see the results on your own site, and they regularly host a free webinar called "How to Grow Your Business 5-15% Overnight" which covers the critical techniques for speeding up any website.

As technology continues to improve and behavioral patterns of purchasing migrate away from the mall and onto our computers and smart phones, SPEEDILICIOUS has a tremendous opportunity to capture a ripe market. So they're clearly a great fit for Catalyst. If you're interested in learning more or would like to speak to Seymour, Chip or anyone on their team, please let me know and I'll make the direct introduction any time.

-@JoshuaKrammes

January 15, 2013

Startup Series: Moqups

Every member on the Catalyst team is given one simple goal: Find the most innovative and creative startups on the planet and get them on the SoftLayer network. We meet entrepreneurs at conferences and events around the world, we team up with the most influential startup accelerators and incubators, and we hunt for businesses who are making waves online. With the momentum Catalyst built in 2012, our message has started spreading exponentially faster than the community development team could spread it on our own, and now it seems like we've earned a few evangelists in the startup community. We have those evangelists to thank for bringing Moqups to our door.

In a Hacker News thread, a user posted about needing hosting for a server/startup, and a recommendation for the Catalyst program was one of the top-rated results. The founders of Moqups saw that recommendation, researched SoftLayer's hosting platform and submitted an application to become a Catalyst partner. As soon as we saw the unbelievable HTML5 app the Moqups team created to streamline and simplify the process of creating wireframes and mockups for website and application design, we knew they were a perfect fit to join the program.

If you've ever had to create a site prototype or UI mockup, you know how unwieldy the process can be. You want to sketch a layout and present it clearly and cleanly, but there aren't many viable resources between "marker on a whiteboard" and "rendering in Photoshop" to accomplish that goal. That's the problem the Moqups team set out to solve ... Can a web app provide the functionality and flexibility you'd need to fill that gap?

We put their answer to that question to the test. I told Kevin about Moqups and asked him to spend a few minutes wireframing the SoftLayer Blog ... About ten minutes later, he sent me this (Click for the full Moqups version):

SoftLayer Blog Moqup

Obviously, wireframing an existing design is easier than creating a new design from scratch, but Kevin said he was floored by how intuitive the Moqups platform made the process. In fact, the "instructions" for how to use Moqups are actually provided in an example "Quick Introduction to Moqups" project on the home page. That example project allows you to tweak, add and adjust content to understand how the platform works, and because it's all done in HTML5, the user experience is seamless.

Moqups

Put it to the test for yourself: How long will it take you to create a wireframe of your existing website (similar to what Kevin did with the SoftLayer Blog)? You get down-to-the-pixel precision, you can group objects together, and Moqups helps you line up or center all of the different pieces of your site. Their extensive library of stencils supplements any custom images you upload, so you can go through the whole process of creating a site mockup without "drawing" anything by hand!

I'm actually surprised that the Moqups team heard about SoftLayer before our community development team heard about them ... In November, I was in Bucharest, Romania, for HowtoWeb, so I was right in their back yard! Central and Eastern European startups are blowing up right now, and Moqups is a perfect example of what we're seeing from that region in EMEA.

Oh, and if you know of a crazy cool startup like Moqups that could use a little hosting help from SoftLayer, tell them about Catalyst!

-@EmilyBlitz

January 10, 2013

Web Development - JavaScript Packaging

If you think of JavaScript as the ugly duckling of programming languages, think again! It got a bad rap in the earlier days of the web because developers knew enough just to get by but didn't really respect it like they did Java, PHP or .Net. Like other well-known and heavily used languages, JavaScript contains various data types (String, Boolean, Number, etc.), objects and functions, and it is even capable of inheritance. Unfortunately, that functionality is often overlooked, and many developers seem to implement it as an afterthought: "Oh, we need to add some neat jQuery effects over there? I'll just throw some inline JavaScript here." That kind of implementation perpetuates a stereotype that JavaScript code is unorganized and difficult to maintain, but it doesn't have to be! I'm going to show you how easy it is to maintain and organize your code base by packaging your JavaScript classes into a single file to be included with your website.

There are a few things to cover before we jump into code:

  1. JavaScript Framework - Mootools is my framework of choice, but you can use whatever JavaScript framework you'd like.
  2. Classes - Because I see JavaScript as another programming language that I respect (and that is capable of object-oriented-like design), I write classes for EVERYTHING. Don't think of your JavaScript code as something you use once and throw away. Write your code to be generic enough to be reused wherever it's placed. Object-oriented design is great for this! Mootools makes object-oriented design easy to do, so this point reinforces the one above.
  3. Class Files - Just like you'd organize your PHP to contain one class per file, I do the exact same thing with JavaScript. Note: Each of the class files in the example below is named after the class it contains, with .js appended.
  4. Namespacing - I will be organizing my classes in a way that will only add a single property — PT — to the global namespace. I won't get into the details of namespacing in this blog because I'm sure you're already thinking, "The code! The code! Get on with it!" You can namespace whatever is right for your situation.

For this example, our classes will be food-themed because ... well ... I enjoy food. Let's get started by creating our base object:

/*
---
name: PT
description: The base class for all the custom classes
authors: [Philip Thompson]
provides: [PT]
...
*/
var PT = {};

We now have an empty object from which we'll build all of our classes. I'll go into more detail about the comment section later, but for now, let's build our first class: PT.Ham.

/*
---
name: PT.Ham
description: The ham class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Ham]
...
*/
 
(function() {
    PT.Ham = new Class({
        // Custom code here...
    });
}());

As I mentioned in point three (above), PT.Ham should be saved in the file named PT.Ham.js. When we create our second class, PT.Pineapple, we'll store it in PT.Pineapple.js:

/*
---
name: PT.Pineapple
description: The pineapple class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Pineapple]
...
*/
 
(function() {
    PT.Pineapple = new Class({
        // Custom code here...
    });
}());

Our final class for this example will be PT.Pizza (I'll let you guess the name of the file where PT.Pizza lives). Our PT.Pizza class will require that PT, PT.Ham and PT.Pineapple be present.

/*
---
name: PT.Pizza
description: The pizza class
authors: [Philip Thompson]
requires: [/PT, /PT.Ham, /PT.Pineapple]
provides: [PT.Pizza]
...
*/
 
(function() {
    PT.Pizza = new Class({
        // Custom code here that uses PT.Ham and PT.Pineapple...
    });
}());
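
The class bodies above are intentionally left empty. Purely as a hypothetical illustration of how the pieces might fit together once you add real code, PT.Pizza could compose the other two classes like this:

(function() {
    PT.Pizza = new Class({
        // Hypothetical example code: build a pizza from the other PT classes.
        initialize: function() {
            this.toppings = [new PT.Ham(), new PT.Pineapple()];
        },

        countToppings: function() {
            return this.toppings.length;
        }
    });
}());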

Before we go any further, let's check out the comments we include above each of the classes. The comments are formatted for YAML — YAML Ain't Markup Language (you gotta love recursive acronyms). These comments allow our parser to determine how our classes are related, and they help resolve dependencies. YAML is pretty easy to learn, and you only need to know a few basic features to use it. The YAML comments in this example are essential for our JavaScript package manager — Packager. I won't go into all the details about Packager here; I'll simply mention a few commands that we'll need to build our single JavaScript file.

In addition to the YAML comments in each of the class files, we also need to create a YAML file that will organize our code. This file — package.yml for this example — is used to load our separate JavaScript classes:

name: "PT"
description: "Provides our fancy PT classes"
authors: "[Philip Thompson]"
version: "1.0.0"
sources:
    - js/PT.js
    - js/PT.Ham.js
    - js/PT.Pineapple.js
    - js/PT.Pizza.js

package.yml shows that all of our PT* files are located in the js directory, one level below the package.yml file itself. Some of the properties in the YAML file are optional, and you can add much more detail if you'd like, but this will get the job done for our purposes.

Now we're ready to turn back to Packager to build our packaged file. Packager includes an option to use PHP, but we're just going to do it from the command line. First, we need to register the new package (package.yml) we created for PT. If our JavaScript files are located in /path/to/web/directory/js, the package.yml file is in /path/to/web/directory:

./packager register /path/to/web/directory

This finds our package.yml file and registers our PT package. Now that we have our package registered, we can build it:

./packager build * > /path/to/web/directory/js/PT.all.js

Packager sees that our PT package is registered, so it looks at each of the individual class files to build a single large file. From the YAML comments in each class file, it determines whether there are dependencies and warns you if any are not found.

It might seem like a lot of work when it's written out like this, but I can assure you that when you go through the process, it takes no time at all. The huge benefit of packaging our JavaScript is evident as soon as you start incorporating those JavaScript classes into your website ... Because we have built all of our class files into a single file, we don't need to include each of the individual JavaScript files into our website (much less include the inline JavaScript declarations that make you cringe). To streamline your implementation even further if you're using your JavaScript package in a production deployment, I recommend that you "minify" your code as well.
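
For the "minify" step, any of the common JavaScript minifiers will do the trick. Assuming you have the UglifyJS tool installed, for example, one option looks something like this:

uglifyjs /path/to/web/directory/js/PT.all.js > /path/to/web/directory/js/PT.all.min.js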

See ... Organized code is no longer just for server-side languages. Treat your JavaScript kindly, and it will be your friend!

Happy coding!

-Philip

January 8, 2013

Startup Series: Bright Funds

Did you ever see The Beach with Leonardo DiCaprio? You know ... The one where a community of world-shunners lives in a paradisiacal enclave on a beautiful white-sand beach. The people in that community were purists — altruistic types who believed in the possibilities of living a simple life based on community support of the individual and the individual's reciprocal support and dedication to the community. Recently, I walked into Hattery — a co-working space in SF — and found a similarly tight-knit community that immediately reminded me of that movie. Hattery is "off the radar" to a certain extent, and that's largely because the collaborative environment and culture are what drive the incredible group of entrepreneurs who work there. To be allowed into the co-working space, it seems like the prerequisites are endless passion and an ambitious vision, so I shouldn't be surprised that Bright Funds calls it home.

Bright Funds is a business that was created to provide users the ability to easily invest in complete solutions for the causes they care about. After signing on as a Catalyst partner, Bright Fund co-founders Ty Walrod and Rutul Davé invited me to lunch at the Hattery office, and I immediately accepted so I could learn more about what they are up to. Having been involved in the tech startup world for a while now, I knew that I'd be meeting two very special entrepreneurs with big hearts and even BIGGER tech startup street cred.

Rutul and Ty were not content with their user experience (UX) when it came to giving to charities and helping solve some of the world's biggest problems. They noticed that little effort had been invested in providing donors with tools to make the act of giving both enjoyable and highly effective, so they took action. Bright Funds was created to redefine and refocus the experience of "giving to charity" ... Giving shouldn't just involve going through the motions of transferring funds from our bank accounts. They built a new giving platform to be more intuitive, rewarding and enlightening, and they did an unbelievable job.

Think of the last time you had a great user experience: An interaction that was as enjoyable as it was effective. Aesthetics play a big role, and when those aesthetics make doing what you want to do easier and more satisfying, you've got an awesome UX. The best user experiences involve empowering users to make informed and intelligent choices by providing them what they need and getting out of the way. Often, UX is used for site design or application metrics, but Bright Funds took the concept and used it to create an elegant and simple business model:

Bright Funds was designed with an intuitive giving flow in mind. Instead of just writing checks or handing over cash to a charity, the experience of giving through Bright Funds is interactive and didactic. You manage your giving like you would a mutual fund portfolio — you decide what percentage of your giving should go to which types of vetted and validated causes, and you get regular performance updates from the charities. I want to help save the environment. I want to give clean water to all. I want to empower the underserved. I want to educate the world. You choose which causes you want to prioritize, and Bright Funds channels your giving to the most effective organizations serving the greatest needs in the world today.

Bright Funds

Instead of focusing on individual nonprofits, you support the causes and issues that matter most to you. In that sense, Bright Funds takes a unique approach to charitable giving, and it's a powerful force in making a difference. Visit Bright Funds for more information, and get started by building your own 'Impact Portfolio.' If you're curious about what mine looks like, check it out:

Bright Funds Impact Portfolio

What does yours look like?

-@JoshuaKrammes

This is a startup series post about Bright Funds, a SoftLayer Catalyst Program participant.
About Bright Funds:
Bright Funds is a better way to give. Individuals and employees at companies with gift matching programs create personalized giving portfolios and contribute to thoroughly researched funds of highly effective nonprofits, all working to address the greatest challenges of our time. In one platform, Bright Funds brings together the power of research, the reliability of a trusted financial service, and the convenience of a secure, cloud-based platform with centralized contributions, integrated matching, and simple tax reporting.
December 31, 2012

FatCloud: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Ian Miller, CEO of FatCloud. FatCloud is a cloud-enabled application platform that allows enterprises to build, deploy and manage next-generation .NET applications.

'The Cloud' and Agility

As the CEO of a cloud-enabled application platform for the .NET community, I get the same basic question all the time: "What is the cloud?" I'm a consumer of cloud services and a supplier of software that helps customers take advantage of the cloud, so my answer to that question has evolved over the years, and I've come to realize that the cloud is fundamentally about agility. The growth, evolution and adoption of cloud technology have been fueled by businesses that don't want to worry about infrastructure and need to pivot or scale quickly as their needs change.

Because FatCloud is a consumer of cloud infrastructure from SoftLayer, we are much more nimble than we'd be if we had to worry about building data centers, provisioning hardware, patching software and doing all the other time-consuming tasks that are involved in managing a server farm. My team can focus on building innovative software with confidence that the infrastructure will be ready for us on-demand when we need it. That peace of mind also happens to be one of the biggest reasons developers turn to FatCloud ... They don't want to worry about configuring the fundamental components of the platform under their applications.

Fat Cloud

Our customers trust FatCloud's software platform to help them build and scale their .NET applications more efficiently. To do this, we provide a Core Foundation of .NET WCF services that effectively provides the "plumbing" for .NET cloud computing, and we offer premium features like a distributed NoSQL database, work queue, file storage/management system, content caching and an easy-to-use administration tool that simplifies managing the cloud for our customers. FatCloud makes developing for hundreds of servers as easy as developing for one, and to prove it, we offer a free 3-node developer edition so that potential customers can see for themselves.

FatCloud Offering

The agility of the cloud has the clearest value for a company like ours. In one heavy-duty testing month, we needed 75 additional servers online, and after that testing was over, we needed the elasticity to scale that infrastructure back down. We're able to adjust our server footprint as we balance our computing needs and work within budget constraints. Ten years ago, that would have been overwhelmingly expensive (if not impossible). Today, we're able to do it economically and in real-time. SoftLayer is helping keep FatCloud agile, and FatCloud passes that agility on to our customers.

Companies developing custom software for the cloud, mobile or web using .NET want a reliable foundation to build from, and they want to be able to bring their applications to market faster. With FatCloud, those developers can complete their projects in about half the time it would take them if they were to develop conventionally, and that speed can be a huge competitive differentiator.

The expensive "scale up" approach of buying and upgrading powerful machines for something like SQL Server is out-of-date now. The new kid in town is the "scale out" approach of using low-cost servers to expand infrastructure horizontally. You'll never run into those "scale up" hardware limitations, and you can build a dynamic, scalable and elastic application much more economically. You can be agile.

If you have questions about how FatCloud and SoftLayer make cloud-enabled .NET development easier, send us an email: sales@fatcloud.com. Our team is always happy to share the easy (and free) steps you can take to start taking advantage of the agility the cloud provides.

-Ian Miller, CEO of FatCloud

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New partners will be added to the Marketplace each month, so stay tuned for many more to come.

December 30, 2012

Risk Management: Event Logging to Protect Your Systems

The calls start rolling in at 2am on Sunday morning. Alerts start firing off. Your livelihood is in grave danger. It doesn't come with the fanfare of a blockbuster Hollywood thriller, but if a server hosting your critical business infrastructure is attacked, becomes compromised or fails, it might feel like the end of the world. In our Risk Management series, we've covered the basics of securing your servers, so the next consideration we need to make is for when our security is circumvented.

It seems silly to prepare for a failure in a security plan we spend time and effort creating, but if we stick our heads in the sand and tell ourselves that we're secure, we won't be prepared in the unlikely event of something happening. Every attempt to mitigate risks and stop threats in their tracks will be circumvented by the one failure, threat or disaster you didn't cover in your risk management plan. When that happens, accurate event logging will help you record what happened, respond to the event (if it's still in progress) and have the information available to properly safeguard against or prevent similar threats in the future.

Like any other facet of security, "event logging" can seem overwhelming and unforgiving if you're looking at hundreds of types of events to log, each with dozens of variations and options. Like we did when we looked at securing servers, let's focus our attention on a few key areas and build out what we need:

Which events should you log?
Look at your risk assessment and determine which systems are of the highest value or could cause the most trouble if interrupted. Those systems are likely to be what you prioritized when securing your servers, and they should also take precedence when it comes to event logging. You probably don't have unlimited compute and storage resources, so you have to determine which types of events are most valuable for you and how long you should keep records of them — it's critical to have your event logs on-hand when you need them, so logs should be retained online for a period of time and then backed up offline to be available for another period of time.

Your goal is to understand what's happening on your servers and why it's happening so you know how to respond. The most common auditable events include successful and unsuccessful account log-on events, account management events, object access, policy changes, privileged functions, process tracking and system events. The most conservative approach involves logging more information/events and keeping those logs for longer than you think you need. From there, you can evaluate your logs periodically to determine whether the level of auditing/logging needs to be adjusted.
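
On the retention side, log rotation is an easy place to start. Here's a hypothetical logrotate entry (the path and counts are examples only) that keeps twelve compressed weekly archives online before they age out to your offline copies:

# Hypothetical /etc/logrotate.d/auth entry
/var/log/secure {
    weekly
    rotate 12
    compress
    delaycompress
    missingok
    notifempty
}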

Where do you store the event logs?
Your event logs won't do you any good if they are stored in a space that is insufficient for the amount of data you need to collect. I recommend centralizing your logs in a secure environment that is both readily available and scalable. In addition to the logs being accessible when the server(s) they are logging are inaccessible, aggregating and organizing your logs in a central location is a powerful tool for building reports and analyzing trends. With that information, you'll be able to more clearly see deviations from normal activity and catch attacks (or attempted attacks) in progress.
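
If your servers use syslog, centralizing can be as simple as forwarding everything to a dedicated log host. As a sketch (the hostname is hypothetical), an rsyslog client rule might look like this:

# In each client's /etc/rsyslog.conf: forward all logs to the central log
# server over TCP (a single @ would forward over UDP instead)
*.* @@loghost.example.com:514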

How do you protect your event logs?
Attacks can come from both inside and out. To avoid intentional malicious activity by insiders, separation of duties should be enforced when planning logging. Learn from The X Files and "Trust no one." Someone who has been granted the 'keys to your castle' shouldn't also be able to disable the castle's security system or mess with the castle's logs. Your network engineer shouldn't have exclusive access to your router logs, and your sysadmin shouldn't be the only one looking at your web server logs.

Keep consistent time.
Make sure all of your servers are using the same accurate time source. That way, all logs generated from those servers will share consistent timestamps. Trying to diagnose an attack or incident is considerably more difficult if your web server's clock isn't synced with your database server's clock or if they're set to different time zones. You're putting a lot of time and effort into logging events, so you're shooting yourself in the foot if events across all of your servers don't line up cleanly.
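
As an example of what that looks like in practice on a RHEL/CentOS-style system of this era (package and service names vary by distribution), syncing against an NTP pool is usually enough:

# Illustration only: install NTP, set the clock once, then keep it disciplined
yum install -y ntp
ntpdate pool.ntp.org
chkconfig ntpd on
service ntpd start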

Read your logs!
Logs won't do you any good if you're not looking at them. Know the red flags to look for in each of your logs, and set aside time to look for those flags regularly. Several SoftLayer customers — like Tech Partner Papertrail — have come up with innovative and effective log management platforms that streamline the process of aggregating, searching and analyzing log files.

It's important to reiterate that logging — like any other security endeavor — is not a 'one size fits all' model, but that shouldn't discourage you from getting started. If you aren't logging or you aren't actively monitoring your logs, any step you take is a step forward, and each step is worth the effort.

Thanks for reading, and stay secure, my friends!

-Matthew

December 27, 2012

Using SoftLayer Object Storage to Back Up Your Server

Before I came to my senses and moved my personal servers to SoftLayer, I was one of many victims of a SolusVM exploit that resulted in the wide-scale attack of many nodes in my previous host's Chicago data center. While I'm a firm believer in backing up my data, I could not have foreseen the situation I was faced with: Not only was my server in one data center compromised with all of its data deleted, but my backup server in one of the host's other data centers was also attacked ... This left me with old, stale backups on my local computer and not much else. I quickly relocated my data and decided that I should use SoftLayer Object Storage to supplement and improve upon my backup and disaster recovery plans.

With the SoftLayer Object Storage Python client set up and the SoftLayer Object Storage backup script — slbackup.py — in hand, I had the tools I needed to easily build a solid backup infrastructure. On Linux.org, I contributed an article about how to perform MySQL backups with those resources, so the database piece is handled, but I also needed to back up my web files, so I whipped up another quick bash script to run:

#!/bin/bash
 
# The path the backups will be dumped to
DUMP_DIR="/home/backups/"
 
# Path to the web files to be backed up
BACKUP_PATH="/var/www/sites/"
 
# Back up folder name (mmddyyyy)
BACKUP_DIR="`date +%m%d%Y`"
 
# Backup File Name
DUMP_FILE="`date +%m_%d_%Y_%H_%M_%S`_site_files"
 
# SL container name
CONTAINER="site_backups"
 
# Create backup dir if doesn't exist
if [ ! -d $DUMP_DIR$BACKUP_DIR ]; then
        mkdir -p $DUMP_DIR$BACKUP_DIR
fi
 
tar -zcvpf $DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz $BACKUP_PATH
 
# Make sure the archive exists
if [ -f $DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz ]; then
        /root/slbackup.py -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30
 
        # Remove the backup stored locally
        rm -rf $DUMP_DIR$BACKUP_DIR
 
        # Success
        exit 0
else
        echo "$DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz does not exist."
        exit 1
fi

It's not the prettiest bash script, but it gets the job done. By tweaking a few variables, you can easily generate backups for any important directory of files and push them to your SoftLayer Object Storage account. If you want to change the retention time of your backups to be longer or shorter, you can change the 30 after the -r in the line below to the number of days you want to keep each backup:

/root/slbackup.py -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30

I created a script for each website on my server, and I set a CRON (crontab -e) entry to run each one on Sundays, staggered by 5 minutes:

5 1 * * 0  /root/bin/cron/CRON-site1.com_web_files > /dev/null
10 1 * * 0  /root/bin/cron/CRON-site2.com_web_files > /dev/null
15 1 * * 0  /root/bin/cron/CRON-site3.com_web_files > /dev/null 
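
If you set up something similar, make sure the wrapper scripts are executable and run one by hand before you trust cron with them (the path here matches the crontab entries above):

chmod +x /root/bin/cron/CRON-site1.com_web_files
/root/bin/cron/CRON-site1.com_web_files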

If you're looking for an easy way to automate and solidify your backups, this little bit of code could make life easier on you. Had I taken the few minutes to put this script together prior to the attack I experienced at my previous host, I wouldn't have lost any of my data. It's easy to get lulled into "backup apathy" when you don't need your backups, but just because nothing *has* happened to your data doesn't mean nothing *can* happen to your data.

Take it from me ... Be over-prepared and save yourself a lot of trouble.

-Ronald

December 24, 2012

Giving From (and For) the Heart

This time of year is often referred to as "The Season of Giving," and we thought we'd share two SLayers' stories about their involvement in the American Heart Association Heart Walk. Like last year, we split up into fundraising teams for the AHA with a goal of raising $100,000. In addition to those fundraising efforts, SoftLayer also encouraged employees to get active and get involved in the annual Heart Walks in Houston and Dallas. Here's our on-location coverage from two team captains who attended those events this year:

Dallas

My name is Fabrienne Curtis, and I work in the Accounting Department at SoftLayer. I joined a team with thirty other people (from several different departments) to raise money for the American Heart Association, and because I love to help and work on community projects, I volunteered to be a team captain. Our team had a ton of great ideas for fundraisers, so we set an ambitious goal of raising $12,400 ($400 per person). When the dust settled, I'm proud to report that we met that goal with a total team tally of $12,488 (which SoftLayer then matched).

Beyond the fundraising, participating in the Dallas Heart Walk at Victory Park was a highlight this year. No one on my team knew that this walk had a personal meaning to me ... I lost my dad to congestive heart failure and wanted to walk on his behalf. When I got to the Heart Walk, I was touched. There was a "Survivor Wall," and there were several signs where you could share who you were walking on behalf of. If not for SoftLayer, I probably wouldn't have participated in the Heart Walk, so as I wrote on the wall and created a sign for my dad, I thought about how good it felt to work for a company that truly cares about the well-being of its employees.

SoftLayer Photo Booth

SoftLayer added a little flair to the event by setting up a photo booth for people to take pictures to bring home, and with the help of Don Hunter, Hao Ho and my husband Jerry, 679 photos were taken!

SoftLayer Photo Booth

Here are some pictures I snapped from the 2012 Dallas Heart Walk:

SoftLayer Heart Walk
The Start!
SoftLayer Heart Walk
The SoftLayer "Uniform"
SoftLayer Heart Walk
The Crowd
SoftLayer Heart Walk
Victorious!

Thank you, SoftLayer, for having a heart! If you want more coverage of this year's event, check out this Dallas Heart Walk 2012 video and click through to our Dallas Heart Walk Flickr album.

-Fabrienne

Houston

Dallas didn't get to have all of the fun when it comes to the AHA Heart Walk, and I made sure to document the Houston goings-on to share with our avid SoftLayer Blog readers. From bake sales to ice cream socials, the Houston office was diligent about donating money and raising heart-health awareness for months prior to the 2012 walk, and those months were extremely eventful. Like Fabrienne, I jumped at the opportunity to be one of 18 team captains at SoftLayer, and considering the fact that cardiovascular disease is the number one killer of Americans, I was inspired to get everyone involved.

I'll be the first to admit that I am not in the best of shape, so a five-kilometer walk through a course at Reliant Stadium would be pretty challenging. My team had been tirelessly preparing for the 5k "mini-marathon" walk, and as November approached, you could sense the excitement and enthusiasm brewing. Walking only one mile can add up to two hours to your lifespan, so in the process of preparing for the walk, we added quite a few hours to our collective lives. When the big day finally arrived, we were ready:

SoftLayer Heart Walk
The Houston Heart Walk SLayers

Given that our day started at an unbelievable 7:00am on a Saturday, most of our participants were tired-eyed and ready to chow down on the free burritos and fruit provided by SoftLayer, and by the time we fired up the photo booth and broke out the goofy props, everyone was wide awake. It's like they say, "Give a man a fish and he'll eat for a day ... Give a man fun props and a camera, and he'll have a blast (and pictures that can be used against him)." Actually, I don't know if "they" say that, but it's true:

SoftLayer Heart Walk

Before we knew it, a gunshot of glitter and colorful confetti got the crowd moving down the 3.1-ish mile track, and we were hooting and cheering, pumped to represent our company! By mile two, my legs were a little wobbly and the sun was scorching. I could see that our dog, Rikku (who had been carried the entire way), looked confused about why I was putting her through the exhausting task of riding comfortably in my arms as we herded through the people like cattle.

SoftLayer Heart Walk

AHA water stations and mile markers reminded us that we were doing it for the best cause ever: the people we love and the people we've lost to heart disease. It's a safe bet that if you don't know someone directly affected by heart disease, you will eventually. The American Heart Association organizes these fundraisers and walks every year across the world to gather donations and raise awareness so that one day, we may be able to conquer this silent killer. With those donations, the AHA funds research into preventive treatments, provides education to help children avoid obesity and supports medical research that could one day lead to a breakthrough and save lives.

All in all, it was a wonderful experience, and one that I'll definitely be a part of again next year.

-Cassandra

SoftLayer Heart Walk
