
February 27, 2013

The Three Most Common Hosting-Related Phobias

As a member of the illustrious SoftLayer sales (SLales) team, I have the daily pleasure of talking with prospective, new and current customers, and in many of those conversations, I've picked up on a fairly common theme: FEAR. Now we're not talking about lachanophobia (fear of vegetables) or nomophobia (fear of losing cell phone contact) here ... We're talking about fear that paralyzes users and holds them captive — effectively preventing their growth and limiting their business's potential. Fear is a disease.

I've created my own little naming convention for the top three most common phobias I hear from users as they consider making changes to their hosting environments:

1. Pessimisobia
This phobia is best summarized by the saying, "Better the devil you know than the devil you don't." Users with this phobia could suffer from frequent downtime, a lack of responsive support and long-term contracts, but their service is a known quantity. What if a different provider is even worse? If you don't suffer from pessimisobia, this phobia probably seems silly, but it's very evident in many of the conversations I have.

2. Whizkiditus
This affliction is particularly prevalent in established companies. Symptoms of this phobia include recurring discomfort associated with the thought of learning a new management system or deviating from a platform where users have become experts. There's an efficiency to being comfortable with how a particular platform works, but the ceiling to that efficiency is the platform itself. Users with whizkiditus might not admit it, but the biggest reason they shy away from change is that they are afraid of losing the familiarity they've built with their old systems over the years ... even if that means staying on a platform that prohibits scale and growth.

3. Everythingluenza
To illustrate this phobia, the fear of being unable to compartmentalize projects and phase in changes, let's look at a little scenario:

I host all of my applications at Company 1. I want to move Application A to the more-qualified Company 2, but if I do that, I'll have to move Applications B through Z to Company 2 also. All of that work would be too time-consuming and cumbersome, so I won't change anything.

It's easy to get overwhelmed when considering a change of cloud hosting for any piece of your business, and it's even more intimidating when you feel like it has to be an "all or nothing" decision.

Unless you are afflicted with euphobia (the fear of hearing good news), you'll be happy to hear that these common fears, once properly diagnosed, are quickly and easily curable on the SoftLayer platform. There are no known side effects from treatment, and patients experience immediate symptom relief with a full recovery within one to three months.

This might be a lighthearted look at some quirky fears, but I don't want to downplay how significant these phobias are to the developers and entrepreneurs who suffer from them. If any of these fears strike a chord with you, reach out to the SLales team (by phone, chat or email), and we'll help you create a treatment plan. Once you address and conquer these fears, you can devote all of your energy back to getting over your selenophobia (fear of the moon).

-Arielle

February 20, 2013

Global Game Jam: Build a Video Game in 48 Hours

You're a conflicted zombie that yearns to be human again. Now you've got to dodge grandma and babies in an 8-bit side-scroller. Now you're Vimberly Koll, and you have to stop Poseidon from raining down on the Global Game Jam. At the end of Global Game Jam Vancouver, teams of developers, 3D artists, level designers and sound engineers conceptualized and created these games (along with a number of others) in less than 48 hours. Building a game in a weekend is no small task, so only the best and brightest game developers in the world converge on over 300 sites in 63 countries to show off their skills.

For the fifth annual Global Game Jam, more than 16,000 participants committed a weekend to learning from and collaborating with their peers in a worldwide game development hackathon. I was lucky enough to get to sit in on the action in Vancouver, and I thought I'd give you a glimpse into how participants make game development magic happen in such a short period of time.

Vancouver Global Game Jam

Day 1 (Friday Night): The Brainstorm
More than 260 participants poured into an open study area of the Life Sciences building at the University of British Columbia to build the next best distraction ... er, video game. The event kicked off with a keynote from Brian Provinciano, a game development prodigy, who shared his history and offered sage advice for those interested in the industry. Following a comical 20-second idea pitch session, the caffeine began to flow and the brainstorm commenced.

Inspiration could come from anywhere, and a perfect example is the "Poseidon" game I mentioned above: GGJVancouver organizer Kimberly Voll had sprinklers rain on her office a few days prior to the event, so someone decided to make a game out of that situation. This year, the Global Game Jam introduced an interesting twist that they called "diversifiers." Diversifiers are side-challenges for extra credit, and two of my favorites were "Atari Age" — the game has to be smaller than 4kb — and "May the (Web) Force be With You" — the game has to run in a browser.

Fast-forward two hours, and as you look around, you see storyboards and scripts being written, characters being born, and a few intrepid developers starting to experiment with APIs, game engines and external controllers to find some additional flair for their final products. You wouldn't expect a game made in 48 hours to incorporate an iOS Eye Tracking API or the Leap Motion gesture controller, but these developers are ambitious!

As the concepts are finalized, team members rotate in and out for sleep, and some even go home to get some rest — a good idea on the first night since everyone usually pulls an all-nighter on Saturday.

Vancouver Global Game Jam

Day 2 (Saturday): Laying the Foundation
It was cool to walk the aisles and peer over people's shoulders as musical scores, wrangled code and character models were coming together. However, the scene wasn't all smiles and hugs; a few groups were wrestling with quirky bugs and integration issues, and in some cases, they ended up having to completely reboot their approach. Day 2 set the course for all of the teams. A few teams disbanded due to disagreements or unfixable bugs, and some developers peeled off from their teams to follow an untamed passion. In the Global Game Jam, there are no rules ... only games.

Vancouver Global Game Jam

Day 3 (Sunday): Sleep, What's That?
By Day 3, the building started feeling like a college dorm during finals week, when everyone stays up all night to study or finish comp-sci assignments (I know it wasn't just me...). Running on various vehicles of caffeine, teams worked heads-down all day to meet their 3 p.m. deadline. Sighs of relief and high fives were exchanged when the games were submitted, and the event concluded with a pizza party and demo session where everyone could see and share the fruits of their labor.

Vancouver Global Game Jam

Before I headed out, teams were given the opportunity to showcase their games on the big screen to a chorus of laughter and applause. It was an awesome experience, and I'm glad SoftLayer sponsored it so that I could attend, take it all in and meet a ton of outstanding up-and-coming game developers. If you're into making games (or you've thought about it), check out the Global Game Jam one of these years.

Just don't forget to bring deodorant ... for your neighbor's sake.

-@andy_mui

Photo Credit Shout-Outs: Alex Larente, Ligia Brosch, Naz Madani. Great shots!

February 18, 2013

What Happen[ed] in Vegas - Parallels Summit 2013

The Las Vegas Convention and Visitors Authority says, "What happens in Vegas, stays in Vegas," but we absconded from Caesars Palace with far too many pictures and videos from Parallels Summit to adhere to their suggestion. Over the course of three days, attendees stayed busy with presentations, networking sessions, parties, cocktails and (of course) the Server Challenge II. And thanks to Alan's astute questions in The Hangover, we didn't have to ask if the hotel was pager-friendly, whether a payphone bank was available or if Caesar actually lived at the hotel ... We could focus on the business at hand.

This year, Parallels structured the conference around three distinct tracks — Business, Technical and Developer — to focus all of the presentations for their most relevant audiences, and as a result, Parallels Summit engaged a broader, more diverse crowd than ever before. Many of the presentations were specifically geared toward the future of the cloud and how businesses can innovate to leverage the cloud's potential. With all of that buzz around the cloud and innovation, SoftLayer felt right at home. We were also right at home when it came to partying.

SoftLayer was a proud sponsor of the massive Parallels Summit party at PURE Nightclub in Caesars Palace on the second night of the conference. With respect to the "What Happens in Vegas" tagline, we actually powered down our recording devices to let the crowd enjoy the jugglers, acrobats, drinks and music without fear of incriminating pictures winding up on Facebook. Don't worry, though ... We made up for that radio silence by getting a little extra coverage of the epic Server Challenge II competition.

More than one hundred attendees stepped up to reassemble our rack of Supermicro servers, and the competition was fierce. The top two times were fifty-nine hundredths of a second apart from each other, and it took a blazingly fast time of 1:25.00 to even make the leader board. As the challenge heated up, we were able to capture video of the top three competitors (to be used as study materials for all competitors at future events):

It's pretty amazing to see the cult following that the Server Challenge is starting to attract, but it's not very surprising. Given how intense some of these contests have been, people are scouting our events page for their next opportunity to step up to the server rack, and I wouldn't be surprised to learn that people are mocking up their own Server Challenge racks at home to hone their strategy. A few of our friends on Twitter hinted that they're in training to dominate the next time they compete, so we're preparing for the crowds to get bigger and for the times to keep dropping.

If you weren't able to attend the show, Parallels posted video from two of the keynote presentations, and shared several of the presentation slide decks on the Parallels Summit Agenda. You might not get the full experience of networking, partying or competing in the Server Challenge, but you can still learn a lot.

Viva Las Vegas! Viva Parallels! Viva SoftLayer!

-Kevin

February 15, 2013

Cedexis: SoftLayer "Master Model Builder"

Think of the many components of our cloud infrastructure as analogous to LEGO bricks. If our overarching vision is to help customers "Build the Future," then our products are "building blocks" that can be purposed and repurposed to create scalable, high-performance architecture. Like LEGO bricks, each of our components is compatible with every other component in our catalog, so our customers are essentially showing off their Master Model Builder skills as they incorporate unique combinations of infrastructure and API functionality into their own product offerings. Cedexis has proven to be one of those SoftLayer "Master Model Builders."

As you might remember from their Technology Partner Marketplace feature, Cedexis offers a content and application delivery system that helps users balance traffic based on availability, performance and cost. They recently posted a blog entry about how they integrated the SoftLayer API into their system to detect an unresponsive server (a disabled network interface), divert traffic at the DNS routing level and return it as soon as the server became available again (the network interface was re-enabled) ... all through the automation of their Openmix service:

They've taken the building blocks of SoftLayer infrastructure and API connectivity to create a feature-rich platform that improves the uptime and performance for sites and applications using Openmix. Beyond the traffic shaping around unreachable servers, Cedexis also incorporated the ability to move traffic between servers based on the amount of bandwidth you have remaining in a given month or based on the response times it sees between servers in different data centers. You can even make load balancing decisions based on SoftLayer's server management data with Fusion — one of their newest products.

The tools and access Cedexis uses to power these Openmix features are available to all of our customers via the SoftLayer API, and if you've ever wondered how to combine our blocks into your environment in unique, dynamic and useful ways, Cedexis gives a perfect example. In the Product Development group, we love to see these kinds of implementations, so if you're using SoftLayer in an innovative way, don't keep it a secret!

-Bryce

February 14, 2013

Tips and Tricks – Building a jQuery Plugin (Part 2)

jQuery plugins don't have to be complicated to create. If you've stumbled upon this blog in pursuit of a guide to show you how to make a jQuery plugin, you might not believe me ... It seems like there's a chasm between the "haves" of jQuery plugin developers and the "have nots" of future jQuery developers, and there aren't very many bridges to get from one side to the other. In Part 1 of our "Building a jQuery Plugin" series, we broke down how to build the basic structure of a plugin, and in this installment, we'll be adding some usable functionality to our plugin.

Let's start with the jQuery code block we created in Part 1:

(function($) {
    $.fn.slPlugin = function(options) {
        var defaults = {
            myVar: "This is", // this will be the default value of this var
            anotherVar: "our awesome",
            coolVar: "plugin!" // note: no trailing comma after the last property
        };
        var options = $.extend(defaults, options);
        var ourString; // declare it here so we don't leak a global
        this.each(function() {
            // the defaults live on the merged options object, so qualify them
            ourString = options.myVar + " " + options.anotherVar + " " + options.coolVar;
        });
        return ourString;
    };
})(jQuery);

We want our plugin to do a little more than return, "This is our awesome plugin!" so let's come up with some functionality to build. For this exercise, let's create a simple plugin that truncates a blob of text to a specified length while giving the user an option to show/hide the rest of the text. Since the most common character length limitation on the Internet these days is Twitter's 140 characters, we'll use that mark in our example.

Taking what we know about the basic jQuery plugin structure, let's create the foundation for our new plugin — slPlugin2:

(function($) {
    $.fn.slPlugin2 = function(options) {
 
        var defaults = {
            length: 140,
            moreLink: "read more",
            lessLink: "collapse",
            trailingText: "..."
        };
 
        var options = $.extend(defaults, options);
    };
})(jQuery);

As you can see, we've established four default variables:

  • length: The length of the paragraph we want before we truncate the rest.
  • moreLink: What we append to the paragraph when it is truncated. This will be the link the user clicks to expand the rest of the text.
  • lessLink: What we append to the paragraph when it is expanded. This will be the link the user clicks to collapse the rest of the text.
  • trailingText: The typical ellipsis to append at the truncation point.

In our jQuery plugin example from Part 1, we started our function with this.each(function() {, and for this example, we're going to add a return in front of it to maintain chainability. By doing so, we're able to keep manipulating the selection with other jQuery methods. For example, if we started our function with this.each(function() {, we'd call it with this line:

$('#ourParagraph').slPlugin2();

If we start the function with return this.each(function() {, we have the freedom to add further manipulation:

$('#ourParagraph').slPlugin2().bind('click', function() { /* respond to clicks */ });

With such a simple change, we're able to add method calls to make one massive dynamic function.
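To make that freedom concrete, here's a quick illustration (the selector, option value and styles are placeholders of my own): once the plugin returns this.each(...), any standard jQuery method can ride along behind it.

// slPlugin2() hands back the jQuery object, so .css() can chain right after it
$('#ourParagraph').slPlugin2({ length: 100 }).css('border', '1px solid #ccc');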

Let's flesh out the actual function a little more. We'll add a substantial bit of code in this step, but you should be able to follow along with the changes via the comments:

(function($) {
    $.fn.slPlugin2 = function(options) {
 
        var defaults = {
            length: 140, 
            moreLink: "read more",
            lessLink: "collapse",
            trailingText: "..."
        };
 
        var options = $.extend(defaults, options);
 
        // return this keyword for chainability
        return this.each(function() {
            var ourText = $(this);  // the element we want to manipulate
            var ourHtml = ourText.html(); // get the contents of ourText!
            // let's check if the contents are longer than we want
            if (ourHtml.length > options.length) {
                // the location of the first space after our length limit
                // (so we don't truncate mid-word) where we will end our truncation
                var truncSpot = ourHtml.indexOf(' ', options.length);

                // indexOf returns -1 when no space follows the limit,
                // so only truncate when there's actually a spot to cut at
                if (truncSpot != -1) {
                    // the part of the text that will not be truncated, starting from the beginning
                    var firstText = ourHtml.substring(0, truncSpot);

                    // the rest of the text, which will be truncated
                    var secondText = ourHtml.substring(truncSpot);
                }
            }
        })
    };
})(jQuery);

Are you still with us? I know it seems like a lot to take in, but each piece is very straightforward. The firstText is the chunk of text that will be shown: The first 140 characters (or whatever length you define). The secondText is what will be truncated. We have two blobs of text, and now we need to make them work together:

(function($) {
    $.fn.slPlugin2 = function(options) {
 
        var defaults = {
            length: 140, 
            moreLink: "read more",
            lessLink: "read less",
            trailingText: "..."
        };
 
        var options = $.extend(defaults, options);
 
        // return this keyword for chainability
        return this.each(function() {
            var ourText = $(this);  // the element we want to manipulate
            var ourHtml = ourText.html(); // get the contents of ourText!
            // let's check if the contents are longer than we want
            if (ourHtml.length > options.length) {
                // the location of the first space after our length limit
                // (so we don't truncate mid-word) where we will end our truncation
                var truncSpot = ourHtml.indexOf(' ', options.length);

                // indexOf returns -1 when no space follows the limit,
                // so only truncate when there's actually a spot to cut at
                if (truncSpot != -1) {
                    // the part of the text that will not be truncated, starting from the beginning
                    var firstText = ourHtml.substring(0, truncSpot);

                    // the rest of the text, which will be truncated
                    var secondText = ourHtml.substring(truncSpot);

                    // perform our truncation on our container ourText, which is technically
                    // more of a "rewrite" of our paragraph so we can modify it how we please.
                    // It's basically saying: display the first blob, add our trailing text,
                    // then add our truncated part wrapped in span tags (to further modify)
                    ourText.html(firstText + options.trailingText + '<span class="slPlugin2">' + secondText + '</span>');

                    // but wait! The secondText isn't supposed to show until the user clicks
                    // "read more", right? Right! Hide it using the span tags we wrapped it in above.
                    ourText.find('.slPlugin2').css("display", "none");
                }
            }
        })
    };
})(jQuery);

Our function now truncates text to the specified length, and we can call it from our page simply:

<script src="jquery.min.js"></script>
<script src="jquery.slPlugin2.js"></script>
<script type="text/javascript">
$(document).ready(function() {  
    $('#slText').slPlugin2();  
});
</script>

Out of all the ways to truncate text via jQuery, this has to be my favorite. It's feature-rich while still being fairly easy to understand. As you might have noticed, we haven't touched on the "read more" and "read less" links or the expanding/collapsing animations yet, but we'll be covering those in Part 3 of this series. Between now and when Part 3 is published, I challenge you to think up how you'd add those features to this plugin as homework.
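If you'd like a head start on that homework, here's one rough sketch of how the toggle could work. To be clear, this is my own illustration (the link markup and class names are placeholders, not necessarily what Part 3 will use), and it assumes it sits inside the if (truncSpot != -1) block where ourText and options are in scope:

// append a toggle link after the hidden span (illustrative only)
ourText.append(' <a href="#" class="slPlugin2-toggle">' + options.moreLink + '</a>');

// flip the hidden text and the link label on each click
ourText.find('.slPlugin2-toggle').click(function(e) {
    e.preventDefault();
    var hidden = ourText.find('.slPlugin2');
    hidden.toggle();
    $(this).text(hidden.is(':visible') ? options.lessLink : options.moreLink);
});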

-Cassandra

February 12, 2013

From the Startup Trenches to the Catalyst War Room

Before joining SoftLayer, I was locked in a dark, cold room for two years. Sustained by a diet of sugar and caffeine and basking in the glow of a 27" iMac, I was tasked with making servers dance to the tune of Ruby. The first few months were the toughest. The hours were long, and we worked through holidays. And I loved it.

If that work environment seems like torture, you probably haven't been on the front lines of a development team. I was a member of a band of brothers at war with poorly documented vendor APIs, trying to emerge victorious from the Battle of Version 1.0. We operated (and suffered) like a startup in its early stages, so I've had firsthand experience with the ups and downs of creating and innovating in technology. Little did I know that those long hours and challenges were actually preparing me to help hundreds of other developers facing similar circumstances ... I was training to be a Catalyst SLayer:

Catalyst Team

You probably know a lot about Catalyst by now, but one of the perks of the program that often gets overshadowed by "free hosting" is the mentorship and feedback the SoftLayer team provides every Catalyst participant. Entrepreneurs bounce ideas off of guys like Paul Ford and George Karidis to benefit from our years of experience and success, and the more technical folks can enlist our help in figuring out more efficient ways to tie their platforms to their infrastructure.

When I was forging through the startup waters, I was fortunate to have been supported by financially reinforced walls and the skilled engineers of a well-established hosting company in Tokyo. Unfortunately, that kind of support is relatively uncommon. That's where Catalyst swoops in. SoftLayer's roots were planted in the founders' living rooms and garages, so we're particularly fond of other companies who are bootstrapping, learning from failure and doing whatever it takes to succeed. In my role with Catalyst, I've effectively become a resource for hundreds of startups around the world ... and that feels good.

Five days before my official start date, I received a call from Josh telling me that we'd be spending my first official week on the job in Seattle with Surf Incubator and in Portland with the Portland Incubator Experiment (PIE). While the trip did not involve carving waves or stuffing our faces with baked goods (bummer), we did get to hear passionate people explain what keeps them up at night. We got to share a little bit about SoftLayer and how we can help them sleep better (or fuel them with more energy when they're up at night ... depending on which they preferred), and as I headed back to Los Angeles, I knew I made the right choice to become a SLayer. I'm surrounded by energy, creativity, passion, innovation and collaboration on a daily basis. It's intoxicating.

TL;DR: I love my job.

-@andy_mui

February 11, 2013

Startup Series: Planwise

Every startup dreams about entering an unowned, wide-open market ... and subsequently dominating it. About a year ago, I met a couple of Aussies — Vincent and Niall — who saw a gaping hole in the world of personal finance and seized the opportunity to meet the unspoken needs of a huge demographic: People who want to be in control of their money but hate the complexity of planning and budgeting. They built Planwise — a forward-looking financial decision-making tool that shows you your future financial goals in the context of each other and your daily financial commitments.

Planwise

If you look at the way people engage with their finances on a daily basis, you might think that we don't really care about our money. Unless we're about to run out of it, we want to do something with it, or it constrains us from doing something we want to do, we don't spend much time managing our finances. Most of the online tools that dominate the finance space are enterprise-centric solutions that require sign-ups and API calls to categorize your historical spend. Those tools confirm that you spend too much each month on coffee and beer (in case you didn't already know), but Planwise takes a different approach — one that focuses on the future.

Planwise is a tool that answers potentially complex financial questions quickly and clearly. "If I make one additional principal payment on my mortgage every year, what will my outstanding balance be in five years?" "How would my long-term savings be affected if I moved to a nicer (and more expensive) apartment?" "How much money should I set aside every month if I want to travel to Europe next summer?" You shouldn't have to dig up your old accounting textbooks or call a CPA to get a grasp on your financial future:

One of the most significant differentiators for Planwise is that you can use the tool without signing up and without any identifiable information. You just launch Planwise, add relevant numbers, and immediately see the financial impact of scenarios like paying off debt, losing your job, or changing your expenses significantly. If you find Planwise useful and you want to keep your information in the system (so you don't have to enter it again), you can create an account to save your data by just providing your email address.

Planwise has been a SoftLayer customer since around August of last year, and I've gotten to work with them quite a bit via the Catalyst program. They built a remarkable hybrid infrastructure on SoftLayer's platform where they leverage dedicated hardware, cloud instances and cutting-edge DB deployments to scale their environment up and down as their usage demands. I'd also be remiss if I didn't give them a shout-out for evangelizing Catalyst to bring some other outstanding startups onboard. You've met one of those referred companies already (Bright Funds), and you'll probably hear about a few more soon.

Go make some plans with Planwise.

-@JoshuaKrammes

February 8, 2013

Data Center Power-Up: Installing a 2-Megawatt Generator

When I was a kid, my living room often served as a "job site" where I managed a fleet of construction vehicles. Scaled-down versions of cranes, dump trucks, bulldozers and tractor-trailers littered the floor, and I oversaw the construction (and subsequent destruction) of some pretty monumental projects. Fast-forward a few years (or decades), and not much has changed except that the "heavy machinery" has gotten a lot heavier, and I'm a lot less inclined to "destruct." As SoftLayer's vice president of facilities, part of my job is to coordinate the early logistics of our data center expansions, and as it turns out, that responsibility often involves overseeing some of the big rigs that my parents tripped over in my youth.

The video below documents the installation of a new Cummins two-megawatt diesel generator for a pod in our DAL05 data center. You see the crane prepare for the work by installing counter-balance weights, and work starts with the team placing a utility transformer on its pad outside our generator yard. A truck pulls up with the generator base in tow, and you watch the base get positioned and lowered into place. The base looks so large because it also serves as the generator's 4,000 gallon "belly" fuel tank. After the base is installed, the generator is trucked in, and it is delicately picked up, moved, lined up and lowered onto its base. The last step you see is the generator housing being installed over the generator to protect it from the elements. At this point, the actual "installation" is far from over — we need to hook everything up and test it — but those steps don't involve the nostalgia-inducing heavy machinery you probably came to this post to see:

When we talk about the "megawatt" capacity of a generator, we're talking about the amount of power available for use when the generator is operating at full capacity. One megawatt is one million watts, so a two-megawatt generator could power 20,000 100-watt light bulbs at the same time. This power can be sustained for as long as the generator has fuel, and we have service level agreements to keep us at the front of the line to get more fuel when we need it. Here are a few other interesting use-cases that could be powered by a two-megawatt generator:

  • 1,000 Average Homes During Mild Weather
  • 400 Homes During Extreme Weather
  • 20 Fast Food Restaurants
  • 3 Large Retail Stores
  • 2.5 Grocery Stores
  • A SoftLayer Data Center Pod Full of Servers (Most Important Example!)

Every SoftLayer facility has an n+1 power architecture. If we need three generators to provide power for three data center pods in one location, we'll install four. This additional capacity allows us to balance the load on generators when they're in use, and we can take individual generators offline for maintenance without jeopardizing our ability to support the power load for all of the facility's data center pods.

Those of you who fondly remember Tonka trucks and CAT crane toys are the true target audience for this post, but even if you weren't big into construction toys when you were growing up, you'll probably still appreciate the work we put into safeguarding our facilities from a power perspective. You don't often see the "outside the data center" work that goes into putting a new SoftLayer data center pod online, so I thought I'd give you a glimpse. Are there any topics from an operations or facilities perspective that you'd like to see covered next?

-Robert

January 31, 2013

ActiveCampaign: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Peter Evans from ActiveCampaign. ActiveCampaign is a complete email marketing and marketing automation platform designed to help small businesses grow.

The Challenge of Sending Email Simply

You need to send email. Usually, that's a pretty simple task, so it's not uncommon to find users who think that sending a monthly newsletter is more or less the same task as sending a quick note to a friend about going to see a movie. In fact, those two email use-cases are completely different animals. With all of the nuances inherent in sending and managing large volumes of email, a plethora of email marketing services are positioned to help users better navigate the email marketing waters. It's tough to differentiate which features you might need and which features are just there to be a "Check" in a comparison checklist. ActiveCampaign set out to make the decision-making process simpler ... We knew that we needed the standard features like auto-responder campaigns, metrics reports and email templates, but we also knew we had to differentiate our service in a meaningful way. So we focused on automation.

Too often, the "automation" provided by a platform can be very cumbersome to set up (if it's available at all), and when it's actually working, there's little confirmation that actions are being performed as expected. In response, we were intentional about ActiveCampaign's automation features being easy to set up and manage ... If automation saves time and money, it shouldn't be intimidatingly difficult to incorporate into your campaigns. Here is a screenshot of what it takes to incorporate automation in your email campaigns with ActiveCampaign:

ActiveCampaign Screenshot

No complicated logic. No unnecessary options. With only a few clicks, you can select an action to spark a meaningful response in your system. If a subscriber in your Newsletter list clicks on a link, you might want to move that subscriber to a different list. Because you might want to send a different campaign to that user as well, we provide the ability to add multiple automated actions for each subscriber action, and it's all very clear.

One of the subscriber actions that might stand out to you if you've used other email service providers (or ESPs) is the "When subscriber replies to a campaign" bullet. ActiveCampaign is the first ESP (that we're aware of) to provide users the option to send a series of follow-up campaigns (or to restrict the sending of future campaigns) to subscribers who reply to a campaign email. Replies are tracked in your campaign reports, and you have deep visibility into how many people replied, who replied, and how many times they replied. With that information, you can segment those subscribers and create automated actions for them, and the end result is that you're connecting with your subscriber base much more effectively because you're able to target them better ... And you don't have to break your back to do it.

SoftLayer customers know how valuable automation can be in terms of infrastructure, so it should be no surprise that email marketing campaigns can benefit so much from automation as well. Lots of ESPs provide stats, and it's up to you to figure out meaningful ways to use that information. ActiveCampaign goes a step beyond those other providers by helping you very simply engage your subscribers with relevant and intentional actions. If you're interested in learning more, check us out at http://www.activecampaign.com.

-Peter Evans, ActiveCampaign

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

January 29, 2013

iptables Tips and Tricks: APF (Advanced Policy Firewall) Configuration

Let's talk about APF. APF — Advanced Policy Firewall — is a policy-based iptables firewall system that provides simple, powerful control over your day-to-day server security. It might seem intimidating to be faced with all of the features and configuration tools in APF, but this blog should put your fears to rest.

APF is an iptables wrapper that works alongside iptables and extends its functionality. I personally don't use iptables wrappers, but I have a lot of experience with them, and I've seen that they do offer some additional features that streamline policy management. For example, by employing APF, you'll get several simple on/off toggles (set via configuration files) that make some complex iptables configurations available without extensive coding requirements. The flip-side of a wrapper's simplicity is that you aren't directly in control of the iptables commands, so if something breaks it might take longer to diagnose and repair. Before you add a wrapper like APF, be sure that you know what you are getting into. Here are a few points to consider:

  • Make sure that what you're looking to use adds a feature you need but cannot easily incorporate with iptables on its own.
  • You need to know how to effectively enable and disable the iptables wrapper (the correct way ... read the manual!), and you should always have a trusted failsafe iptables ruleset handy in the unfortunate event that something goes horribly wrong and you need to disable the wrapper.
  • Learn about the basic configurations and rule changes you can apply via the command line. You'll need to understand the way your wrapper takes rules because it may differ from the way iptables handles rules.
  • You can't manually configure your iptables rules once you have your wrapper in place (or at least you shouldn't).
  • Be sure to know how to access your server via the IPMI management console so that if you completely lock yourself out beyond repair, you can get back in. You might even go so far as to have a script or set of instructions ready for tech support to run, in the event that you can't get in via the management console.

TL;DR: Have a Band-Aid ready!
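To make that Band-Aid concrete, here's a minimal failsafe sketch (the file path is just an example): save a known-good ruleset before you enable the wrapper, so you can flush APF and restore it if things go horribly wrong.

# save a known-good ruleset before enabling the wrapper
iptables-save > /root/failsafe.rules

# if APF misbehaves: stop it (which flushes its rules), then restore
apf -f
iptables-restore < /root/failsafe.rules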

APF Configuration

Now that you have been sufficiently advised about the potential challenges of using a wrapper (and you've got your Band-Aid ready), we can check out some of the useful APF rules that make iptables administration a lot easier. Most of the configuration for APF is in conf.apf. This file handles the default behavior, but not necessarily the specific blocking rules, and when we make any changes to the configuration, we'll need to restart the APF service for the changes to take effect.
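In practice, that edit-and-reload cycle looks something like this (assuming APF's usual install location of /etc/apf):

# edit the main configuration file
vi /etc/apf/conf.apf
# flush and reload the firewall rules so the changes take effect
apf -r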

Let's jump into conf.apf and break down what we see. The first code snippet is fairly self-explanatory. It's another way to make sure you don't lock yourself out of your server as you are making configuration changes and testing them:

# !!! Do not leave set to (1) !!!
# When set to enabled; 5 minute cronjob is set to stop the firewall. Set
# this off (0) when firewall is determined to be operating as desired.
DEVEL_MODE="1"

The next configuration options we'll look at are where you can make quick high-level changes if you find that legitimate traffic is being blocked and you want to make APF a little more lenient:

# This controls the amount of violation hits an address must have before it
# is blocked. It is a good idea to keep this very low to prevent evasive
# measures. The default is 0 or 1, meaning instant block on first violation.
RAB_HITCOUNT="1"
 
# This is the amount of time (in seconds) that an address gets blocked for if
# a violation is triggered, the default is 300s (5 minutes).
RAB_TIMER="300"
# This allows RAB to 'trip' the block timer back to 0 seconds if an address
# attempts ANY subsiquent communication while still on the inital block period.
RAB_TRIP="1"
 
# This controls if the firewall should log all violation hits from an address.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_HIT="1"
 
# This controls if the firewall should log all subsiqent traffic from an address
# that is already blocked for a violation hit, this can generate allot of logs.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_TRIP="0"

Next, we have an option to adjust ICMP flood protection. This protection should be useful against some forms of DoS attacks, and the associated rules show up in your INPUT chain:

# Set a reasonable packet/time ratio for ICMP packets, exceeding this flow
# will result in dropped ICMP packets. Supported values are in the form of:
# pkt/s (packets/seconds), pkt/m (packets/minutes)
# Set value to 0 for unlimited, anything above is enabled.
ICMP_LIM="30/s"

If you want to add more ports to block for p2p traffic (which will show up in the P2P chain), you'll update this code:

# A common set of known Peer-To-Peer (p2p) protocol ports that are often
# considered undesirable traffic on public Internet servers. These ports
# are also often abused on web hosting servers where clients upload p2p
# client agents for the purpose of distributing or downloading pirated media.
# Format is comma separated for single ports and an underscore separator for
# ranges (4660_4678).
BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778"

The next few lines let you designate the ports that you want to have closed at all times. They will be blocked for INPUT and OUTPUT chains:

# These are common Internet service ports that are understood in the wild
# services you would not want logged under normal circumstances. All ports
# that are defined here will be implicitly dropped with no logging for
# TCP/UDP traffic inbound or outbound. Format is comma separated for single
# ports and an underscore separator for ranges (135_139).
BLK_PORTS="135_139,111,513,520,445,1433,1434,1234,1524,3127"

The next important section to look at deals with conntrack. If you get "conntrack full" errors, this is where you'd increase the allowed connections. It's not uncommon to need more connections than the default, so if you need to adjust that value, you'd do it here:

# This is the maximum number of "sessions" (connection tracking entries) that
# can be handled simultaneously by the firewall in kernel memory. Increasing
# this value too high will simply waste memory - setting it too low may result
# in some or all connections being refused, in particular during denial of
# service attacks.
SYSCTL_CONNTRACK="65536"

We've talked about the ports we want closed at all times, so it only makes sense that we'd specify which ports we want open for all interfaces:

# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22"
# Common inbound (ingress) UDP ports
IG_UDP_CPORTS=""
# Common outbound (egress) TCP ports
EG_TCP_CPORTS="21,25,80,443,43"
# Common outbound (egress) UDP ports
EG_UDP_CPORTS="20,21,53"

And when we want a special port allowance for specific users, we can declare it easily. For example, if we want port 22 open for user ID 0, we'd use this code:

# Allow outbound access to destination port 22 for uid 0
EG_TCP_UID="0:22"

The next few sections on Remote Rule Imports and Global Trust are a little more specialized, and I encourage you to read a little more about them (since there's so much to them and not enough space to cover them here on the blog). An important feature of APF is that it imports block lists from outside sources to keep you safe from some attackers, so the Remote Rule Imports can prove to be very useful. The Global Trust section is incredibly useful for multi-server deployments of APF. Here, you can set up your global allow/block lists and have them all pull from a central location so that you can make a single update to the source and have the update propagated to all servers in your configuration. These changes are synced to the glob_allow/deny.rules files, and they will be downloaded (and overwritten) on a regular basis from your specified source, so don't make any manual edits in glob_allow/deny.rules.

As you can see, conf.apf is no joke. It has a lot going on, but it's very straightforward and documented well. Once we've set up conf.apf with the configurations we need, it's time to look at the more focused allow_hosts.rules and deny_hosts.rules files. These .rules files are where you put your typical firewall rules in place. If there's one piece of advice I can give you about these configurations, it would be to check whether your traffic is already allowed or blocked before adding a rule. Having multiple rules that do the same thing (possibly in different places) is confusing and potentially dangerous.
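A quick way to perform that check (again assuming the usual /etc/apf location) is to grep both the rules files and the live ruleset before adding anything new:

# is the host already in one of the rules files?
grep -r "10.0.0.0" /etc/apf/*_hosts.rules
# is it already in the live firewall?
apf -l | grep "10.0.0.0"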

The deny_hosts.rules configuration will look just like allow_hosts.rules, but it's performing the opposite function. Let's check out an allow_hosts.rules configuration that will allow the Nimsoft service to function:

tcp:in:d=48000_48020:s=10.0.0.0/8
tcp:out:d=48000_48020:d=10.0.0.0/8

The format is somewhat simplistic, but the file gives a little more context in the comments:

# The trust rules can be made in advanced format with 4 options
# (proto:flow:port:ip);
# 1) protocol: [packet protocol tcp/udp]
# 2) flow in/out: [packet direction, inbound or outbound]
# 3) s/d=port: [packet source or destination port]
# 4) s/d=ip(/xx) [packet source or destination address, masking supported]
# Syntax:
# proto:flow:[s/d]=port:[s/d]=ip(/mask)
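
Reading the Nimsoft rules above against that syntax: we allow inbound TCP traffic to destination ports 48000-48020 from source 10.0.0.0/8, and outbound TCP traffic to those destination ports headed for destination 10.0.0.0/8. As one more illustration (the subnet here is a documentation example, not a recommendation), a rule allowing inbound SSH from a single trusted subnet would look like this:

tcp:in:d=22:s=203.0.113.0/24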

APF also uses ds_hosts.rules to load the DShield.org blocklist, and I assume the ecnshame_hosts.rules does something similar (can't find much information about it), so you won't need to edit these files manually. Additionally, you probably don't need to make any changes to log.rules, unless you want to make changes to what exactly you log. As it stands, it logs certain dropped connections, which should be enough. Also, it might be worth noting that this file is a script, not a configuration file.

The last two configuration files are preroute.rules and postroute.rules, which (unsurprisingly) are used to make routing changes. If you have been following my articles, these correspond to the iptables PREROUTING and POSTROUTING chains, where you would do things like port forwarding and other advanced configuration that you probably don't want to touch in most cases.

APF Command Line Management

As I mentioned in the "points to consider" at the top of this post, it's important to learn the changes you can perform from the command line, and APF has some very useful command line tools:

[root@server]# apf --help
APF version 9.7 <apf@r-fx.org>
Copyright (C) 2002-2011, R-fx Networks <proj@r-fx.org>
Copyright (C) 2011, Ryan MacDonald <ryan@r-fx.org>
This program may be freely redistributed under the terms of the GNU GPL
 
usage /usr/local/sbin/apf [OPTION]
-s|--start ......................... load all firewall rules
-r|--restart ....................... stop (flush) & reload firewall rules
-f|--stop .......................... stop (flush) all firewall rules
-l|--list .......................... list all firewall rules
-t|--status ........................ output firewall status log
-e|--refresh ....................... refresh & resolve dns names in trust rules
-a HOST CMT|--allow HOST COMMENT ... add host (IP/FQDN) to allow_hosts.rules and
                                     immediately load new rule into firewall
-d HOST CMT|--deny HOST COMMENT .... add host (IP/FQDN) to deny_hosts.rules and
                                     immediately load new rule into firewall
-u|--remove HOST ................... remove host from [glob]*_hosts.rules
                                     and immediately remove rule from firewall
-o|--ovars ......................... output all configuration options

You can use these command line tools to turn your firewall on and off, add allowed or blocked hosts and display troubleshooting information. These commands are very easy to use, but if you want more fine-tuned control, you'll need to edit the configuration files directly (as we looked at above).
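For example, allowing a trusted host, blocking an abusive one, removing that block and checking the result would look something like this (the addresses are documentation placeholders):

apf -a 192.0.2.10 "trusted office IP"
apf -d 203.0.113.50 "ssh brute force"
apf -u 203.0.113.50
apf -t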

I know it seems like a lot of information, but to a large extent, that's all you need to know to get started with APF. Take each section slowly and understand what each configuration file is doing, and you'll master APF in no time at all.

-Mark
