Tips And Tricks Posts

May 15, 2013

Secure Quorum: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we’re happy to welcome Gerard Ibarra from Secure Quorum. Secure Quorum is an easy-to-use emergency notification system and crisis management system that resides in the cloud.

Are You Prepared for an Emergency?

Every company's management team faces the challenge of having too many things going on with not enough time in the day. It's difficult to get everything done, so when push comes to shove, particular projects and issues need to be prioritized. What do we have to do today that can't be put off to tomorrow? Often, businesses fall into a reactionary rut where they are constantly "putting out the fires" first, and while it's vital for a business to put out those fires (literal or metaphorical), that approach makes it difficult to proactively prepare for those kinds of issues and streamline the process of resolving them. Secure Quorum was created to provide a simple, secure medium to deal with emergencies and incidents.

What we noticed was that businesses didn't often consider planning for emergencies as part of their operations. The emergencies I'm talking about thankfully don't happen often, but fires, accidents, power outages, workplace violence and denial of service attacks can severely impact the bottom line if they aren't addressed quickly ... They can make or break you. Are you prepared?

Every second that we fail to make informed and logical decisions during an emergency is time lost in taking action. Take these facts for a little perspective:

  • "Property destruction and business disruption due to disasters now rival warfare in terms of loss." (University Corporation for Atmospheric Research)
  • More than 10,000 severe thunderstorms, 2,500 floods, 1,000 tornadoes and 10 hurricanes affect the United States each year. On average, 500 people die yearly because of severe weather and floods. (National Weather News 2005)
  • The cost of natural disasters is rising. During the past two decades, natural disaster damage costs have exceeded the $500 billion mark. Only 17 percent of that figure was covered by insurance. (Dennis S. Mileti, Disasters by Design)
  • Losses as a result of global disasters continue to increase on average every year, with an estimated $360 billion USD lost in 2011. (Centre for Research in the Epidemiology of Disasters)
  • Natural disasters, power outages, IT failures and human error are common causes of disruptions to internal and external communications. They "can cause downtime and have a significant negative impact on employee productivity, customer retention, and the confidence of vendors, partners, and customers." (Debra Chin, Palmer Research, May 2011)

These kinds of "emergencies" are not going away, but because specific emergencies are difficult (if not impossible) to predict, it's not obvious how to deal with them. How do we reduce risk for our employees, vendors, customers and our business? The two best answers to that question are to have a business continuity plan (BCP) and to have a way to communicate and collaborate in the midst of an emergency.

Start with a BCP. A BCP is a strategic plan to help identify and mitigate risk. Investopedia gives a great explanation:

The creation of a strategy through the recognition of threats and risks facing a company, with an eye to ensure that personnel and assets are protected and able to function in the event of a disaster. Business continuity planning (BCP) involves defining potential risks, determining how those risks will affect operations, implementing safeguards and procedures designed to mitigate those risks, testing those procedures to ensure that they work, and periodically reviewing the process to make sure that it is up to date.

Make sure you understand the basics of a BCP, and look for cues from organizations like FEMA for examples of how to approach emergency situations: http://www.ready.gov/business-continuity-planning-suite.

Once you have a basic BCP in place, it's important to be able to execute it when necessary ... That's where an emergency communication and collaboration solution comes into play. You need to streamline how you communicate when an emergency occurs, and if you're relying on a manual process like a phone tree to spread the word and contact key stakeholders in the midst of an incident, you're wasting time that could better be spent focusing on the issue at hand. An emergency communication solution automates that process quickly and logically.

When you create a BCP, you consider which people in your organization are key to responding to specific types of emergencies, and if anything ever happens, you want to get all of those people together. An emergency communication system will collect the relevant information, send it to the relevant people in your organization and seamlessly bridge them into a secured conference call. What would take minutes to complete now takes seconds, and when it comes to responding to these kinds of issues, seconds count. With everyone on a secure call, decisions can be made quickly and recorded to inform employees and stakeholders of what occurred and what the next steps are.

Plan for emergencies and hope that you never have to use that plan. Think about preparing for emergencies strategically, and it could make all the difference in the world. Secure Quorum is a platform that makes it easy to communicate and collaborate quickly, reliably and securely in those high-stress situations, so if you're interested in getting help when it comes to responding to emergencies and incidents, visit our site at SecureQuorum.com and check out the whitepaper we just published with one of our customers: Ease of Use: Make it Part of Your Software Decision.

-Gerard Ibarra, CEO of Secure Quorum

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

May 10, 2013

Understanding and Implementing Coding Standards

Coding standards provide a consistent framework for development within a project and across projects in an organization. A dozen programmers can complete a simple project in a dozen different ways by using unique coding methodologies and styles, so I like to think of coding standards as the "rules of the road" for developers.

When you're driving in a car, traffic is controlled by "standards" such as lanes, stoplights, yield signs and laws that set expectations around how you should drive. When you take a road trip to a different state, the stoplights might be hung horizontally instead of vertically or you'll see subtle variations in signage, but because you're familiar with the rules of the road, you're comfortable with the mechanics of driving in this new place. Coding standards help control development traffic and provide the consistency programmers need to work comfortably with a team across projects. The problem with allowing developers to apply their own unique coding styles to a project is the same as allowing drivers to drive as they wish ... Confusion about lane usage, safe passage through intersections and speed would result in collisions and bottlenecks.

Coding standards often seem restrictive or laborious when a development team starts considering their adoption, but they don't have to be ... They can be implemented methodically to improve the team's efficiency and consistency over time, and they can be as simple as establishing that all instantiations of an object must be referenced with a variable name that begins with a capital letter:

$User = new User();

While that example may seem overly simplistic, it actually makes life a lot easier for all of the developers on a given project. Regardless of who created that variable, every other developer can see the difference between a variable that holds data and one that instantiates an object. Think about the shapes of signs you encounter while driving ... You know what a stop sign looks like without reading the word "STOP" on it, so when you see a red octagon (in the United States, at least), you know what to do when you approach it in your car. In the same way, seeing a capitalized variable name immediately tells us about its function.

The example I gave of capitalizing instantiated objects is just an example. When it comes to coding standards, the most effective rules your team can incorporate are the ones that make the most sense to you. While there are a few best practices in terms of formatting and commenting in code, the most important characteristics of coding standards for a given team are consistency and clarity.
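To make "commenting in code" concrete, here's one common convention — PHPDoc-style docblocks — offered as an example rather than a prescription. The function and its fields are made up for illustration (and note it follows the capitalized-object convention from earlier):

/**
 * Return a user's full name.
 *
 * @param  User   $User  The user object to read from
 * @return string        The concatenated first and last name
 */
function getFullName($User) {
    return $User->firstName . ' ' . $User->lastName;
}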

So how do you go about creating a coding standard? Most developers dislike doing unnecessary work, so the easiest way to create a coding standard is to use an already-existing one. Take a look at any libraries or frameworks you are using in your current project. Do they use any coding standards? Are those coding standards something you can live with or use as a starting point? You are free to make any changes you wish in order to best facilitate your team's needs, and you can even set how strictly specific coding standards must be adhered to. Take for example left-hand comparisons:

if ( $a == 12 ) {} // right-hand comparison
if ( 12 == $a ) {} // left-hand comparison

Both of these statements are valid but one may be preferred over the other. Consider the following statements:

if ( $a = 12 ) {} // supposed to be a right-hand comparison but is now an assignment
if ( 12 = $a ) {} // supposed to be a left-hand comparison but is now an assignment

The first statement will now evaluate to true due to $a being assigned the value of 12, which will then cause the code within the if-statement to execute (which is not the desired result). The second statement will cause an error, making it obvious that a mistake in the code has occurred. Because our team couldn't come to a consensus, we decided to allow both of these standards ... Either of these two formats is acceptable and will pass code review, but they are the only two acceptable variants. Code that deviates from those two formats would fail code review and would not be allowed in the code base.
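Enforcement doesn't have to rely on human reviewers alone, either. As a sketch (assuming a PHP code base with the PHP_CodeSniffer package installed), a command like this can flag violations automatically before code even reaches review:

## Check a source tree against a published standard (PSR2 is just an example) ##
$ phpcs --standard=PSR2 src/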

Coding standards play an important role in efficient development of a project when you have several programmers working on the same code. By adopting coding standards and following them, you'll avoid a free-for-all in your code base, and you'll be able to look at every line of code and know more about what that line is telling you than what the literal code is telling you ... just like seeing a red octagon posted on the side of the road at an intersection.

-@SoftLayerDevs

April 29, 2013

Web Development - Installing mod_security with OWASP

You want to secure your web application, but you don't know where to start. A number of open-source resources and modules exist, but that variety is more intimidating than it is liberating. If you're going to take the time to implement application security, you don't want to put your eggs in the wrong basket, so you wind up suffering from analysis paralysis as you compare all of the options. You want a powerful, flexible security solution that isn't overly complex, so to save you the headache of making the decision, I'll make it for you: Start with mod_security and OWASP.

ModSecurity (mod_security) is an open-source Apache module that acts as a web application firewall. It is used to help protect your server (and websites) from several methods of attack, the most common being brute force. You can think of mod_security as an invisible layer that separates users and the content on your server, quietly monitoring HTTP traffic and other interactions. It's easy to understand and simple to implement.

The challenge is that without some advanced configuration, mod_security isn't very functional, and that advanced configuration can get complex pretty quickly. You need to determine and set additional rules so that mod_security knows how to respond when approached with a potential threat. That's where Open Web Application Security Project (OWASP) comes in. You can think of the OWASP as an enhanced core ruleset that the mod_security module will follow to prevent attacks on your server.
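To give a sense of what those rules look like, here's a simplified rule in ModSecurity's SecRule syntax. The ID, pattern and message are invented for illustration — the actual OWASP rules are considerably more sophisticated:

## Illustrative only -- not part of the OWASP core ruleset ##
## Deny any request whose arguments contain "<script" ##
SecRule ARGS "@contains <script" "id:900001,phase:2,deny,status:403,msg:'Possible XSS attempt'"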

The process of getting started with mod_security and OWASP might seem like a lot of work, but it's actually quite simple. Let's look at the installation and configuration process in a CentOS environment. First, we want to install the dependencies that mod_security needs:

## Install the GCC compiler and mod_security dependencies ##
$ sudo yum install gcc make
$ sudo yum install libxml2 libxml2-devel httpd-devel pcre-devel curl-devel

Now that we have the dependencies in place, let's install mod_security. Unfortunately, mod_security isn't available as a maintained yum package, so you'll have to install it directly from the source:

## Get mod_security from its source ##
$ cd /usr/src
$ git clone https://github.com/SpiderLabs/ModSecurity.git

Now that we have mod_security on our server, we'll install it:

## Install mod_security ##
$ cd ModSecurity
$ ./configure
$ make install

And we'll copy over the default mod_security configuration file into the necessary Apache directory:

## Copy configuration file ##
$ cp modsecurity.conf-recommended /etc/httpd/conf.d/modsecurity.conf

We've got mod_security installed now, so we need to tell Apache about it ... It's no use having mod_security installed if our server doesn't know it's supposed to be using it:

## Apache configuration for mod_security ##
$ vi /etc/httpd/conf/httpd.conf

In that Apache config file, we'll need to load our dependencies (BEFORE the mod_security module) and then the mod_security module itself:

## Load dependencies ##
LoadFile /usr/lib/libxml2.so
LoadFile /usr/lib/liblua5.1.so
## Load mod_security ##
LoadModule security2_module modules/mod_security2.so

(Note: On 64-bit systems, these libraries may live in /usr/lib64 rather than /usr/lib, so adjust the LoadFile paths accordingly.) We'll save our configuration changes and restart Apache:

## Restart Apache! ##
$ sudo /etc/init.d/httpd restart

As I mentioned at the top of this post, our installation of mod_security is good, but we want to enhance our ruleset with the help of OWASP. If you've made it this far, you won't have a problem following a similar process to install OWASP:

## OWASP ##
$ cd /etc/httpd/
$ git clone https://github.com/SpiderLabs/owasp-modsecurity-crs.git
$ mv owasp-modsecurity-crs modsecurity-crs

Just like with mod_security, we'll set up our configuration file:

## OWASP configuration file ##
$ cd modsecurity-crs
$ cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_config.conf

Now we have mod_security and the OWASP core ruleset ready to go! The last step we need to take is to update the Apache config file to set up our basic ruleset:

## Apache configuration ##
$ vi /etc/httpd/conf/httpd.conf

We'll add an IfModule and point it to our new OWASP rule set at the end of the file:

<IfModule security2_module>
    Include modsecurity-crs/modsecurity_crs_10_config.conf
    Include modsecurity-crs/base_rules/*.conf
</IfModule>
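Incidentally, if you later find that a specific OWASP rule is too aggressive for your application, ModSecurity's SecRuleRemoveById directive lets you disable that one rule without touching the rule files — just add it after the Include lines. A hypothetical sketch (the rule ID is made up):

<IfModule security2_module>
    Include modsecurity-crs/modsecurity_crs_10_config.conf
    Include modsecurity-crs/base_rules/*.conf
    # Disable a single rule that produces false positives (hypothetical ID)
    SecRuleRemoveById 900123
</IfModule>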

And to complete the installation, we save the config file and restart Apache:

## Restart Apache! ##
$ sudo /etc/init.d/httpd restart

And we've got mod_security installed with the OWASP core ruleset! With this default installation, we're leveraging the rules the OWASP open source community has come up with, and we have the flexibility to tweak and enhance those rules as our needs dictate. If you have any questions about this installation or you have any other technical blog topics you'd like to hear from us about, please let us know!

-Cassandra

April 16, 2013

iptables Tips and Tricks - Track Bandwidth with iptables

As I mentioned in my last post about CSF configuration in iptables, I'm working on a follow-up post about integrating CSF into cPanel, but I thought I'd inject a simple iptables use-case for bandwidth tracking. You probably think about iptables in terms of firewalls and security, but it also includes a great diagnostic tool for counting bandwidth for individual rules or sets of rules. If you can block it, you can track it!

The best part about using iptables to track bandwidth is that the tracking is enabled by default. To see this feature in action, add the "-v" flag to the command:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 2495 packets, 104K bytes)

The output includes counters for both the policies and the rules. To track the rules, you can create a new chain for tracking bandwidth:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -vnL
...
Chain tracking (0 references)
 pkts bytes target prot opt in out source           destination

Then you need to set up new rules to match the traffic that you wish to track. In this scenario, let's look at inbound http traffic on port 80:

[root@server ~]$ iptables -I INPUT -p tcp --dport 80 -j tracking
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35111 packets, 1490K bytes)
 pkts bytes target prot opt in out source           destination
    0   0 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

Now let's generate some traffic and check it again:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35216 packets, 1500K bytes)
 pkts bytes target prot opt in out source           destination
  101  9013 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

You can see the packet and byte transfer amounts to track the INPUT — traffic to a destination port on your server. If you want to track the amount of data that the server is generating, you'd look for OUTPUT from the source port on your server:

[root@server ~]$ iptables -I OUTPUT -p tcp --sport 80 -j tracking
[root@server ~]$ iptables -vnL
...
Chain OUTPUT (policy ACCEPT 26149 packets, 174M bytes)
 pkts bytes target prot opt in out source           destination
  488 3367K tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80

Now that we know how the tracking chain works, we can add in a few different layers to get even more information. That way you can keep your INPUT and OUTPUT chains looking clean.

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -N tracking2
[root@server ~]$ iptables -I INPUT -j tracking
[root@server ~]$ iptables -I OUTPUT -j tracking
[root@server ~]$ iptables -A tracking -p tcp --dport 80 -j tracking2
[root@server ~]$ iptables -A tracking -p tcp --sport 80 -j tracking2
[root@server ~]$ iptables -vnL
 
Chain INPUT (policy ACCEPT 96265 packets, 4131K bytes)
 pkts bytes target prot opt in out source           destination
 4002  184K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source           destination
 
Chain OUTPUT (policy ACCEPT 33751 packets, 231M bytes)
 pkts bytes target prot opt in out source           destination
 1399 9068K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain tracking (2 references)
 pkts bytes target prot opt in out source           destination
 1208 59626 tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80
  224 1643K tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80
 
Chain tracking2 (2 references)
 pkts bytes target prot opt in out source           destination

Keep in mind that every time a packet passes through one of your rules, it will eat CPU cycles. Diverting all your traffic through 100 rules that track bandwidth may not be the best idea, so it's important to have an efficient ruleset. If your server has eight processor cores and tons of overhead available, that concern might be inconsequential, but if you're running lean, you could conceivably run into issues.

The easiest way to think about making efficient rulesets is to think about eating the largest slice of pie first. Understand iptables rule processing and put the rules that get more traffic higher in your list. Conversely, save the tiniest pieces of your pie for last. If you run all of your traffic by a rule that only applies to a tiny segment before you screen out larger segments, you're wasting processing power.
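The counters we've been looking at are exactly the data you need for that kind of prioritization. Listing a chain with line numbers shows which rules see the most traffic, and you can then delete a busy rule by number and re-insert it closer to the top (the rule number and rule below are hypothetical):

[root@server ~]$ iptables -vnL INPUT --line-numbers
[root@server ~]$ iptables -D INPUT 5
[root@server ~]$ iptables -I INPUT 1 -p tcp --dport 80 -j tracking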

Another thing to keep in mind is that you do not need to specify a target (in our examples above, we established tracking and tracking2 as our targets). If you're used to each rule having a specific purpose of either blocking, allowing, or diverting traffic, this simple tidbit might seem revolutionary. For example, we could use this rule:

[root@server ~]$ iptables -A INPUT

If that seems a little bare to you, don't worry ... It is! The output will show that it is a rule that tracks all traffic in the chain at that point. We're appending the data to the end of the chain in this example ("-A") but we could also insert it ("-I") at the top of the chain instead. This command could be helpful if you are using a number of different chains and you want to see the exact volume of packets that are filtered at any given point. Additionally, this strategy could show how much traffic a potential rule would filter before you run it on your production system. Because having several of these kinds of commands can get a little messy, it's also helpful to add comments to help sort things out:

[root@server ~]$ iptables -A INPUT -m comment --comment "track all data"
 
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 11M packets, 5280M bytes)
 pkts bytes target prot opt in out source           destination
   98  9352        all  --  *  *   0.0.0.0/0        0.0.0.0/0       /* track all data */
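One housekeeping note: these counters accumulate from the moment a rule is created, so if you want to measure a specific window of time, zero them out first. iptables has a flag for exactly that:

[root@server ~]$ iptables -Z tracking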

Nothing terribly complicated about using iptables to count bandwidth, right? If you have iptables rulesets and you want to get a glimpse at how your traffic is being affected, this little trick could be useful. You can rely on the information iptables gives you about your bandwidth usage, and you won't be the only one ... cPanel actually uses iptables to track bandwidth.

-Mark

March 22, 2013

Social Media for Brands: Monitor Twitter Search via Email

If you're responsible for monitoring Twitter for conversations about your brand, you're faced with a challenge: You need to know what people are saying about your brand at all times AND you don't want to live your entire life in front of Twitter Search.

Over the years, a number of social media applications have been released specifically for brand managers and social media teams, but most of those applications (especially the free/inexpensive ones) differentiate themselves only by the quality of their analytics and how real-time their data is reported. If that's what you need, you have plenty of fantastic options. Those differentiators don't really help you if you want to take a more passive role in monitoring Twitter search ... You still have to log into the application to see your fancy dashboards with all of the information. Why can't the data come to you?

About three weeks ago, Hazzy stopped by my desk and asked if I'd help build a tool that uses the Twitter Search API to collect brand keyword mentions and send an email alert with those mentions in digest form every 30 minutes. The social media team had been using Twilert for these types of alerts since February 2012, but over the last few months, messages have been delayed due to issues connecting to Twitter search ... It seems that the service is so popular that it hits Twitter's limits on API calls. An email digest scheduled to be sent every thirty minutes ends up going out ten hours late, and ten hours is an eternity in social media time. We needed something a little more timely and reliable, so I got to work on a simple "Twitter Monitor" script to find all mentions of our keyword(s) on Twitter, email those results in a simple digest format, and repeat the process every 30 minutes when new mentions are found.

With Bear's Python-Twitter library on GitHub, connecting to the Twitter API is a breeze. Why did we use Bear's library in particular? Just look at his profile picture. Yeah ... 'nuff said. So with that Python wrapper to the Twitter API in place, I just had to figure out how to use the tools Twitter provided to get the job done. For the most part, the process was very clear, and Twitter actually made querying the search service much easier than we expected. The Search API finds all mentions of whatever string of characters you designate, so instead of creating an elaborate Boolean search for "SoftLayer OR #SoftLayer OR @SoftLayer ..." or any number of combinations of arbitrary strings, we could simply search for "SoftLayer" and have all of those results included. If you want to see only @ replies or hashtags, you can limit your search to those alone, but because "SoftLayer" isn't a word that gets thrown around much without referencing us, we wanted to see every instance. This is the code we ended up working with for the search functionality:

def status_by_search(search):
    statuses = api.GetSearch(term=search)
    results = filter(lambda x: x.id > get_log_value(), statuses)
    returns = []
    if len(results) > 0:
        for result in results:
            returns.append(format_status(result))
 
        new_tweets(results)
        return returns, len(returns)
    else:
        exit()

If you walk through the script, you'll notice that we want to return only unseen Tweets to our email recipients. Shortly after we got the Twitter Monitor up and running, we noticed how easy it would be to get spammed with the same messages every time the script ran, so we had to filter our results accordingly. Twitter's API allows you to request tweets with a Tweet ID greater than one that you specify; however, when I tried designating that "oldest" Tweet ID, we had mixed results ... Whether due to my ignorance or a fault in the implementation, we were getting fewer results than we should. Tweet IDs are unique and numerically sequential, so they can be relied upon as much as a datetime (and they're far easier to work with, to boot), so I decided to use the highest Tweet ID from each batch of processed messages to filter the next set of results. The script stores that Tweet ID and uses a little bit of logic to determine which Tweets are newer than the last Tweet reported.

def new_tweets(results):
    if get_log_value() < max(result.id for result in results):
        set_log_value(max(result.id for result in results))
        return True
 
 
def get_log_value():
    with open('tweet.id', 'r') as f:
        return int(f.read())
 
 
def set_log_value(messageId):
    with open('tweet.id', 'w+') as f:
        f.write(str(messageId))

Once we culled out our new Tweets, we needed our script to email those results to our social media team. Luckily, we didn't have to reinvent the wheel here, and we added a few lines that enabled us to send an HTML-formatted email over any SMTP server. One of the downsides of the script is that login credentials for your SMTP server are stored in plaintext, so if you can come up with an alternative that adds a layer of security to those credentials (or lets you send with different kinds of credentials), we'd love for you to share it.
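The full email code is in the script itself, but as a rough sketch of the approach, sending an HTML-formatted digest over SMTP in Python looks something like this (the function name, server, credentials and addresses are placeholders):

import smtplib
from email.mime.text import MIMEText

def send_digest(html_body, tweet_count):
    # build an HTML-formatted message
    msg = MIMEText(html_body, 'html')
    msg['Subject'] = '%d new Twitter mentions' % tweet_count
    msg['From'] = 'monitor@example.com'
    msg['To'] = 'social-team@example.com'

    # the plaintext credentials mentioned above live here
    server = smtplib.SMTP('smtp.example.com', 587)
    server.starttls()
    server.login('monitor@example.com', 'password')
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()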

From that point, we could run the script manually from the server (or a laptop for that matter), and an email digest would be sent with new Tweets. Because we wanted to automate that process, I added a cron job that would run the script at the desired interval. As a bonus, if the script doesn't find any new Tweets since the last time it was run, it doesn't send an email, so you won't get spammed by "0 Results" messages overnight.

The script has been in action for a couple of weeks now, and it has gotten our social media team's seal of approval. We've added a few features here and there (like adding the number of Tweets in an email to the email's subject line), and I've enlisted the help of Kevin Landreth to clean up the code a little. Now, we're ready to share the SoftLayer Twitter Monitor script with the world via GitHub!

SoftLayer Twitter Monitor on GitHub

The script should work well right out of the box in any Python environment with the required libraries after a few simple configuration changes:

  • Get your Twitter Customer Secret, Access Token and Access Secret from https://dev.twitter.com/
  • Copy/paste that information where noted in the script.
  • Update your search term(s).
  • Enter your mailserver address and port.
  • Enter your email account credentials if you aren't working with an open relay.
  • Set the self.from_ and self.to values to your preference.
  • Ensure all of the Python requirements are met.
  • Configure a cron job to run the script at your desired interval. For example, if you want to send emails every 10 minutes: */10 * * * * <path to python> <path to script> > /dev/null 2>&1

As soon as you add your information, you should be in business. You'll have an in-house Twitter Monitor that delivers a simple email digest of your new Twitter mentions at whatever interval you specify!

Like any good open source project, we want the community's feedback on how it can be improved or other features we could incorporate. This script uses the Search API, but we're also starting to play around with the Stream API and SoftLayer Message Queue to make some even cooler tools to automate brand monitoring on Twitter.

If you end up using the script and liking it, send SoftLayer a shout-out via Twitter and share it with your friends!

-@SoftLayerDevs

March 19, 2013

iptables Tips and Tricks: CSF Configuration

In our last "iptables Tips and Tricks" installment, we talked about Advanced Policy Firewall (APF) configuration, so it should come as no surprise that in this installment, we're turning our attention to ConfigServer Security & Firewall (CSF). Before we get started, you should probably run through the list of warnings I include at the top of the APF blog post and make sure you have your Band-Aid ready in case you need it.

To get the ball rolling, we need to download CSF and install it on our server. In this post, we're working with a CentOS 6.0 32-bit server, so our (root) terminal commands would look like this to download and install CSF:

$ wget http://www.configserver.com/free/csf.tgz #Download CSF using wget.
$ tar zxvf csf.tgz #Unpack it.
$ yum install perl-libwww-perl #Make sure perl modules are installed ...
$ yum install perl-Time-HiRes  #Otherwise it will generate an error.
$ cd csf
$ ./install.sh #Install CSF.
 
#MAKE SURE YOU HAVE YOUR BAND-AID READY
 
$ /etc/init.d/csf start #Start CSF. (Note: You can also use '$ service csf start')

Once you start CSF, you can see a list of the default rules that load at startup. CSF defaults to a DROP policy:

$ iptables -nL | grep policy
Chain INPUT (policy DROP)
Chain FORWARD (policy DROP)
Chain OUTPUT (policy DROP)

Don't ever run "iptables -F" unless you want to lock yourself out. In fact, you might want to add "This server is running CSF - do not run 'iptables -F'" to your /etc/motd, just as a reminder/warning to others.
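Adding that reminder takes a single command:

$ echo "This server is running CSF - do not run 'iptables -F'" >> /etc/motd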

CSF loads on startup by default. This means that if you get locked out, a simple reboot probably won't fix the problem. Runlevels 2, 3, 4, and 5 are all on:

$ chkconfig --list | grep csf
csf             0:off   1:off   2:on    3:on    4:on    5:on    6:off

Some features of CSF will not work unless you have certain iptables modules installed. I believe they are installed by default in CentOS, but if you custom-built your iptables, they might not all be installed. Run this script to see if all modules are installed:

$ /etc/csf/csftest.pl
Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...OK
Testing ipt_limit/xt_limit...OK
Testing ipt_recent...OK
Testing xt_connlimit...OK
Testing ipt_owner/xt_owner...OK
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...OK
 
RESULT: csf should function on this server

As I mentioned, this is the default iptables installation on a minimal CentOS 6.0 image, so chances are good that these modules are already installed on your system. It never hurts to check, though.

The CSF Configuration File

The primary CSF configuration is stored in the well-documented /etc/csf/csf.conf file. CSF is extremely configurable, so there are a lot of options to read over. Let's take a look at some of the more important features:

Testing

TESTING = "1"
TESTING_INTERVAL = "5"

While TESTING is set to "1", a cron job clears your firewall rules at the interval specified by TESTING_INTERVAL (every five minutes here) so you don't lock yourself out while you're testing. When you are satisfied with your rules (and confident that you won't lock yourself out), you can set TESTING to "0".

Globally Allowed Ports

# Allow incoming TCP ports
TCP_IN = "20,21,22,25,53,80,110,143,443,465,587,993,995"
 
# Allow outgoing TCP ports
TCP_OUT = "20,21,22,25,53,80,110,113,443"
 
# Allow incoming UDP ports
UDP_IN = "20,21,53"
 
# Allow outgoing UDP ports
# To allow outgoing traceroute add 33434:33523 to this list
UDP_OUT = "20,21,53,113,123"

Incoming Ping Requests

# Allow incoming PING
ICMP_IN = "1"

Allowing ping is usually a good option for diagnostic purposes, so I don't recommend turning it off. Disallowing ping is an example of "security through obscurity," and it will not typically dissuade your attackers.

Ethernet Device

ETH_DEVICE = ""
ETH6_DEVICE = ""

Here, you can configure iptables to ONLY use one Ethernet adapter. You might want to only guard your public network adapter in some situations.

IP Limit in Permanent "Deny" File

DENY_IP_LIMIT = "200"

A higher number here lets more blocked IP addresses be stored in csf.deny, but higher numbers may also cause slowdowns.

IP Limit in Temporary "Deny" File

DENY_TEMP_IP_LIMIT = "100"

Similar to DENY_IP_LIMIT, the DENY_TEMP_IP_LIMIT represents the maximum number of IPs that can be stored in the temporary ban list.

SMTP Blocking

SMTP_BLOCK = "0"

When set to "1", SMTP_BLOCK does not completely block outbound SMTP, but it does block it for most users. This will prevent malicious scripts and compromised users from making outbound connections from unauthorized mail clients on the server. SMTP_BLOCK doesn't stop those scripts from running, but it does stop them from functioning. Mail sent through the proper channels will still be delivered normally.

Allowing SMTP on localhost

SMTP_ALLOWLOCAL = "1"

Custom Mail Port Designation

SMTP_PORTS = "25,465,587"

Allowing SMTP Access to Users/Groups

SMTP_ALLOWUSER = ""
SMTP_ALLOWGROUP = "mail,mailman"

SYN Flood Protection

SYNFLOOD = "0"
SYNFLOOD_RATE = "100/s"
SYNFLOOD_BURST = "150"

Per the documentation, you should only enable SYN flood protection (SYNFLOOD = "1") if you are currently under a SYN flood attack.

Concurrent Connections Limit

CONNLIMIT = "22;5,80;20"
PORTFLOOD = "22;tcp;5;300,80;tcp;20;5"

These options allow you to add customized DoS protection. CONNLIMIT handles the number of concurrent connections, and in this example, we're limiting port 22 to 5 connections and port 80 to 20 connections.

PORTFLOOD watches the number of connections per a given number of seconds. In this example, we're limiting the TCP connection on port 22 to 5 connections/second with a quiet period of 300 seconds before the connection is unblocked. Additionally, we're limiting the TCP connection on port 80 to 20 connections/second with a quiet period of 5 seconds before the connection is unblocked.

Check the readme.txt file for more information about the syntax.

Logging to Syslog

SYSLOG = "0"

When enabled, this option logs lfd (Login Failure Daemon) messages to syslog as well as to /var/log/lfd.log.

Dropping v. Rejecting Packets

DROP = "DROP"

This configuration allows you to either DROP or REJECT packets. REJECT tells the sender that the packet has been blocked by the firewall. DROP just drops the packet and does not send a response. I like DROP better for regular use, but REJECT might be more helpful if you need to diagnose a connectivity issue.

Logging Dropped Connections

DROP_LOGGING = "1"

This option logs dropped connections to syslog. I don't see any reason to turn this off unless your hard drive is getting full.

Port Exceptions When Logging Dropped Connections

DROP_NOLOG = "67,68,111,113,135:139,445,500,513,520"

These ports are specifically blocked from being logged either to conserve hard drive space or make the log file easier to read.

"Watch Mode"

WATCH_MODE = "0"

If you are ever stuck trying to troubleshoot a large ruleset, you might consider turning this option on. You can use it to track the actions applied to watched IP addresses and see where they are getting blocked or accepted.

Login Failure Daemon Alert

LF_ALERT_TO = ""
LF_ALERT_FROM = ""
LF_ALERT_SMTP = ""

You can specify an email address to report errors from the Login Failure Daemon, which tracks and automatically blocks brute force login attempts.

Permanent Blocks and NetBlocks

LF_PERMBLOCK = "1"
LF_PERMBLOCK_INTERVAL = "86400"
LF_PERMBLOCK_COUNT = "4"
LF_PERMBLOCK_ALERT = "1"
LF_NETBLOCK = "0"
LF_NETBLOCK_INTERVAL = "86400"
LF_NETBLOCK_COUNT = "4"
LF_NETBLOCK_CLASS = "C"
LF_NETBLOCK_ALERT = "1"

These settings control permanent blocks and netblock blocking. You probably don't need to touch these settings, but you might want more or less security depending on your company's needs. If something gets permablocked, it will require your intervention to clear it, which might create downtime for your clients. Likewise, if a legitimate IP address happens to be part of a netblock that has an attacking IP address on it, it will get blocked if you have that feature turned on. A class C network encompasses 256 IP addresses. You can set this to class B or A, but that could block thousands or millions of IP addresses, respectively. Unless you find yourself under constant attack, I would advise you to leave LF_NETBLOCK off.

Additional Protection During Updates

# Safe Chain Update. If enabled, all dynamic update chains (GALLOW*, GDENY*,
# SPAMHAUS, DSHIELD, BOGON, CC_ALLOW, CC_DENY, ALLOWDYN*) will create a new
# chain when updating, and insert it into the relevant LOCALINPUT/LOCALOUTPUT
# chain, then flush and delete the old dynamic chain and rename the new chain.
#
# This prevents a small window of opportunity opening when an update occurs and
# the dynamic chain is flushed for the new rules.
SAFECHAINUPDATE = "0"

Activating this option will increase your system resource usage and will require more rules to be running at one time, but it provides an additional layer of protection during updates. Without this option turned on, your rules will be flushed for a short amount of time, leaving your server vulnerable.

Multi-Server Deployment Options

LF_GLOBAL = "0"
GLOBAL_ALLOW = ""
GLOBAL_DENY = ""
GLOBAL_IGNORE = ""

Like APF, you can configure global lists for multiple server deployments. You'll need to specify a URL of the text file with the IP addresses for the global lists.

SPAMHAUS Blocklist

LF_SPAMHAUS = "0"

This option enables the SPAMHAUS blocklist. To turn it on, set the value to the number of seconds between refreshes; the recommended setting is 86400 (1 day).

Blocking TOR Exit IP Addresses

LF_TOR = "0"

Enabling this option will block TOR exit IP addresses. If you are not familiar with TOR, it is a completely anonymous proxy network. This could block some legitimate users who are trying to protect their anonymity, so I would recommend only turning this on if you are already under attack from a TOR exit address.

Blocking Bogon Addresses

LF_BOGON = "0"
LF_BOGON_URL = "http://www.cymru.com/Documents/bogon-bn-agg.txt"
LF_BOGON_SKIP = ""

Blocking bogon addresses (addresses that should not be possible) is usually a good decision. To enable, set the number of seconds between refreshes. I recommend enabling this option and setting the refresh at 86400 (1 day). If you do so, be sure to add your private network adapters to the skip list.

Country-Specific Access to Your Server

CC_DENY = ""
CC_ALLOW = ""

With these options, you can block or allow entire countries from accessing your server. To do so, enter the country codes in a comma separated list. Even though this generates a lot of additional rules, it's valuable to some sysadmins.

CC_ALLOW_FILTER = ""

Alternatively, you can set your server to exclusively accept traffic from a list of country codes. All other countries not listed will have their traffic dropped. There are many other settings related to these options that I don't have time to cover in this blog.

Blocking Login Failures

LF_TRIGGER = "0"

This enables blocking of login failures (per service). There are a lot of great customization options in this section.

Scanning Directories for Malicious Files

LF_DIRWATCH = "300"

This feature scans /tmp and /dev/shm for potentially malicious files and alerts you to their presence based on the interval you designate. You can also have CSF automatically quarantine malicious files with this option:

LF_DIRWATCH_DISABLE = "0"

Distributed Attack Protection

LF_DISTATTACK = "0"

By enabling this option, you activate additional protection against distributed attacks.

Blocking Based on Abusive Email Usage

LT_POP3D = "0"
LT_IMAPD = "0"

If a user checks email too many times per hour (more than the non-zero value specified), the user's IP address is blocked.

Email Alert Following Block

LT_EMAIL_ALERT = "1"

This will send you email when something is blocked. I'd recommend leaving it on.

Blocking IP Addresses Based on Number of Connections

CT_LIMIT = "0"

This feature tracks connections and blocks the IP if the number of connections is too high. Use caution because if you enable this option and set this value too low, it will block legitimate traffic.

Application-Level Protection

PT_LIMIT = "60"

This feature provides application level protection against malicious scripts that take a long time to execute.

Blocking Port Scanners

PS_INTERVAL = "300"
PS_LIMIT = "10"

In broad terms, an IP address that connects to more than PS_LIMIT blocked ports within PS_INTERVAL seconds is treated as a port scanner and blocked (see csf.conf's own comments for the exact semantics).

Enabling HTML User Interface for CSF

UI = "0"

CSF has a built-in HTML user interface. You can enable this by setting UI = "1". There is a list of prerequisites for this option in the readme.txt.

Notifying Blocked IP Addresses

MESSENGER = "0"

This option will notify blocked IP addresses when they have been blocked by the firewall.

Port Knocking

PORTKNOCKING = ""

CSF supports port knocking, which is a technique that provides an additional layer of security. See http://www.portknocking.org/ for details.

Allow and Deny Lists

As we walked through the CSF configuration file, you saw that I referenced the csf.deny file, so it should come as no surprise that CSF also includes csf.allow to customize "allow" rules as well. If you are familiar with APF, these files have a very similar syntax ... Each entry is made up of the same four components: protocol|flow|port|IP. The only real difference is that APF uses the colon as a delimiter while CSF uses the pipe:

#APF Version
tcp:in:d=48000_48020:s=10.0.0.0/8
 
#CSF Version
tcp|in|d=48000_48020|s=10.0.0.0/8

Fortunately, replacing your colons with pipes is a minimally invasive procedure that can be automated with a tool like vi or sed.
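After pasting your APF rules into /etc/csf/csf.allow, for instance, a sed one-liner swaps every colon for a pipe in place (back the file up first — and note this assumes IPv4 rules with no other colons in the file):

$ sed -i 's/:/|/g' /etc/csf/csf.allow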

CSF Command Line Tool

The command line tool for CSF is much more robust than the one for APF:

$ csf --help
csf: v5.79 (cPanel)
 
ConfigServer Security & Firewall
(c)2006-2013, Way to the Web Limited (http://www.configserver.com)
 
Usage: /usr/sbin/csf [option] [value]
 
Option              Meaning
-h, --help          Show this message
-l, --status        List/Show iptables configuration
-l6, --status6      List/Show ip6tables configuration
-s, --start         Start firewall rules
-f, --stop          Flush/Stop firewall rules (Note: lfd may restart csf)
-r, --restart       Restart firewall rules
-q, --startq        Quick restart (csf restarted by lfd)
-sf, --startf       Force CLI restart regardless of LF_QUICKSTART setting
-a, --add ip        Allow an IP and add to /etc/csf.allow
-ar, --addrm ip     Remove an IP from /etc/csf.allow and delete rule
-d, --deny ip       Deny an IP and add to /etc/csf.deny
-dr, --denyrm ip    Unblock an IP and remove from /etc/csf.deny
-df, --denyf        Remove and unblock all entries in /etc/csf.deny
-g, --grep ip       Search the iptables rules for an IP match (incl. CIDR)
-t, --temp          Displays the current list of temp IP entries and their TTL
-tr, --temprm ip    Remove an IP from the temp IP ban and allow list
-td, --tempdeny ip ttl [-p port] [-d direction]
                    Add an IP to the temp IP ban list. ttl is how long to
                    block for (default:seconds, can use one suffix of h/m/d).
                    Optional port. Optional direction of block can be one of:
                    in, out or inout (default:in)
-ta, --tempallow ip ttl [-p port] [-d direction]
                    Add an IP to the temp IP allow list (default:inout)
-tf, --tempf        Flush all IPs from the temp IP entries
-cp, --cping        PING all members in an lfd Cluster
-cd, --cdeny ip     Deny an IP in a Cluster and add to /etc/csf.deny
-ca, --callow ip    Allow an IP in a Cluster and add to /etc/csf.allow
-cr, --crm ip       Unblock an IP in a Cluster and remove from /etc/csf.deny
-cc, --cconfig [name] [value]
                    Change configuration option [name] to [value] in a Cluster
-cf, --cfile [file] Send [file] in a Cluster to /etc/csf/
-crs, --crestart    Cluster restart csf and lfd
-w, --watch ip      Log SYN packets for an IP across iptables chains
-m, --mail [addr]   Display Server Check in HTML or email to [addr] if present
-lr, --logrun       Initiate Log Scanner report via lfd
-c, --check         Check for updates to csf but do not upgrade
-u, --update        Check for updates to csf and upgrade if available
-uf                 Force an update of csf
-x, --disable       Disable csf and lfd
-e, --enable        Enable csf and lfd if previously disabled
-v, --version       Show csf version

The command line tool will also tell you if the testing mode is enabled (which is a very useful feature). If TESTING were enabled, we'd see this line at the bottom of the output:

*WARNING* TESTING mode is enabled - do not forget to disable it in the configuration

Did you make it all the way through?! Great! I know it's a lot to take in, but it's not terribly complicated when we break it down and understand how each piece works. Next time, I'll be back with some tips on integrating CSF into cPanel.

-Mark

March 7, 2013

Script Clip: HTML5 Audio Player with jQuery Controls

HTML5 and jQuery provide mind-blowing functionality. Projects that would have taken hours of development and hundreds of lines of code a few years ago can now be completed in about the time it'll take you to read this paragraph. If you wanted to add your own audio player on a web page in the past, what would it have involved? Complicated <object> or <embed> elements? Flash (*shudders*)? It was so complicated that most developers just linked to the audio file, and the user downloaded the file to play it locally. With HTML5, an embedded, cross-browser audio player can be added to a page with five lines of code, and if you want to get really fancy, you can easily use jQuery to add some custom controls.

If you've read any of my previous blogs, you know that I love when I find little code snippets that make life as a web developer easier. My go-to tools in that pursuit are HTML5 and jQuery, so when I came across this audio player, I knew I had to share. There are some great jQuery plugins to play music files on a web page, but they can be major overkill for a simple application if you have to include comprehensive controls and themes. Sometimes you just want something simple without all of that overhead:

Oooh... Ahhh...

That song — Pop Bounce by SoftLayer's very own Chris Interrante — is embedded with five simple lines of HTML5 code:

<audio style="width:550px; margin: 0 auto; display:block;" controls>
  <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.ogg" type="audio/ogg">
  <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.mp3" type="audio/mpeg">
Your browser does not support the audio element.
</audio>

If IE 9+, Chrome 6+, Firefox 3.6+, Safari 5+ and Opera 10+ would all agree on supported file formats for the <audio> tag, the code snippet would be even smaller. I completely geek out over it every time I look at it and remember the days of yore. As you can see, the HTML5 player has some simple default controls: Play, Pause, Scan to Time, etc. As a developer, I couldn't help but look for a way to spice it up a little ... What if we want to fire an event when the user plays, pauses, stops or takes any other action with the audio file? jQuery!

Make sure your jQuery include is in the <head> of your page:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>

Now let's use jQuery to script separate "Play" and "Pause" links ... And let's have those links fire off an alert when they are pressed:

$(document).ready(function(){
  $("#play-button").click(function(){
   $("#audioplayer")[0].play();
   alert('You have played the audio file!');
  })    
 
  $("#pause-button").click(function(){
   $("#audioplayer")[0].pause();
   alert('You have paused the audio file!');
  })    
})

With that script in the <head> as well, the HTML on our page will look like this:

<div class="audioplayer">
  <audio id="audioplayer" name="audioplayer" controls loop>
    <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.ogg" type="audio/ogg">
    <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.mp3" type="audio/mpeg">
  Your browser does not support the audio element.
  </audio>
 
  <a id="play-button" href="#">Play!</a>
  <a id="pause-button" href="#">Pause!</a>
</div>

Want proof that it works that simply? Boom.

You can theme it any way you like; you can add icons instead of the text ... The world is your oyster. The bonus is that you're using one of the lightest media players on the Internet! If you decide to get brave (or just more awesome), you can explore additional features. You're using jQuery, so your possibilities are nearly limitless. If you want to implement a "Stop" feature (which returns the audio back to the beginning when "Stop" is pressed), you can get creative:

$("#stop-button").click(function(){
    $("#audioplayer")[0].currentTime = 0; // return the audio file back to the beginning
}

If you want to include some volume controls, those can be added in a snap as well:

$("#volumeUp").click(function(){
    $("#audioplayer")[0].volume +=0.1;
}
 
$("#volumeDown").click(function(){
    $("#audioplayer")[0].volume -=0.1;
}

Try it out and let me know what you think. Your homework is to come up with some unique audio player functionality and share it here!

-Cassandra

February 14, 2013

Tips and Tricks – Building a jQuery Plugin (Part 2)

jQuery plugins don't have to be complicated to create. If you've stumbled upon this blog in pursuit of a guide to show you how to make a jQuery plugin, you might not believe me ... It seems like there's a chasm between the "haves" of jQuery plugin developers and the "have nots" of future jQuery developers, and there aren't very many bridges to get from one side to the other. In Part 1 of our "Building a jQuery Plugin" series, we broke down how to build the basic structure of a plugin, and in this installment, we'll be adding some usable functionality to our plugin.

Let's start with the jQuery code block we created in Part 1:

(function($) {
    $.fn.slPlugin = function(options) {
            var defaults = {
                myVar: "This is", // this will be the default value of this var
                anotherVar: "our awesome",
                coolVar: "plugin!"
            };
            var options = $.extend(defaults, options);
            var ourString; // declare here so it isn't an implicit global
            this.each(function() {
                ourString = options.myVar + " " + options.anotherVar + " " + options.coolVar;
            });
            return ourString;
    };
}) (jQuery);

We want our plugin to do a little more than return, "This is our awesome plugin!" so let's come up with some functionality to build. For this exercise, let's create a simple plugin that truncates a blob of text to a specified length while providing the user an option to show/hide the rest of the text. Since the most common character length limitation on the Internet these days is Twitter's 140 characters, we'll use that mark in our example.

Taking what we know about the basic jQuery plugin structure, let's create the foundation for our new plugin — slPlugin2:

(function($) {
    $.fn.slPlugin2 = function(options) {
 
        var defaults = {
            length: 140,
            moreLink: "read more",
            lessLink: "collapse",
            trailingText: "..."
        };
 
        var options = $.extend(defaults, options);
    };
})(jQuery);

As you can see, we've established four default variables:

  • length: The length of the paragraph we want before we truncate the rest.
  • moreLink: What we append to the paragraph when it is truncated. This will be the link the user clicks to expand the rest of the text.
  • lessLink: What we append to the paragraph when it is expanded. This will be the link the user clicks to collapse the rest of the text.
  • trailingText: The typical ellipses to append to the truncation.

In our jQuery plugin example from Part 1, we started our function with this.each(function() {, and for this example, we're going to add a return in front of it to maintain chainability. By doing so, we're able to keep manipulating the matched elements with further method calls. For example, if we started our function with this.each(function() {, we'd call it with this line:

$('#ourParagraph').slPlugin2();

If we start the function with return this.each(function() {, we have the freedom to add further manipulation:

$('#ourParagraph').slPlugin2().bind();

With such a simple change, we're able to add method calls to make one massive dynamic function.

Let's flesh out the actual function a little more. We'll add a substantial bit of code in this step, but you should be able to follow along with the changes via the comments:

(function($) {
    $.fn.slPlugin2 = function(options) {
 
        var defaults = {
            length: 140, 
            moreLink: "read more",
            lessLink: "collapse",
            trailingText: "..."
        };
 
        var options = $.extend(defaults, options);
 
        // return this keyword for chainability
        return this.each(function() {
            var ourText = $(this);  // the element we want to manipulate
            var ourHtml = ourText.html(); //get the contents of ourText!
            // let's check if the contents are longer than we want
            if (ourHtml.length > options.length) {
                var truncSpot = ourHtml.indexOf(' ', options.length); // the location of the first space (so we don't truncate mid-word) where we will end our truncation.
 
   // make sure a space was actually found so we don't truncate mid-word
   if (truncSpot != -1){
       // the part of the text that will not be truncated, starting from the beginning
       var firstText = ourHtml.substring(0, truncSpot);
 
       // the part of the text that will be truncated, minus the trailing space
       var secondText = ourHtml.substring(truncSpot, ourHtml.length - 1);
                }
            }
        })
    };
})(jQuery);

Are you still with us? I know it seems like a lot to take in, but each piece is very straightforward. The firstText is the chunk of text that will be shown: The first 140 characters (or whatever length you define). The secondText is what will be truncated. We have two blobs of text, and now we need to make them work together:

(function($) {
    $.fn.slPlugin2 = function(options) {
 
        var defaults = {
            length: 140, 
            moreLink: "read more",
            lessLink: "read less",
            trailingText: "..."
        };
 
        var options = $.extend(defaults, options);
 
        // return this keyword for chainability
        return this.each(function() {
            var ourText = $(this);  // the element we want to manipulate
            var ourHtml = ourText.html(); //get the contents of ourText!
            // let's check if the contents are longer than we want
            if (ourHtml.length > options.length) {
                var truncSpot = ourHtml.indexOf(' ', options.length); // the location of the first space (so we don't truncate mid-word) where we will end our truncation.
 
   // make sure a space was actually found so we don't truncate mid-word
   if (truncSpot != -1){
       // the part of the text that will not be truncated, starting from the beginning
       var firstText = ourHtml.substring(0, truncSpot);
 
       // the part of the text that will be truncated, minus the trailing space
       var secondText = ourHtml.substring(truncSpot, ourHtml.length - 1);
 
       // perform our truncation on our container ourText, which is technically more of a "rewrite" of our paragraph, to our liking so we can modify how we please. It's basically saying: display the first blob then add our trailing text, then add our truncated part wrapped in span tags (to further modify)
       ourText.html(firstText + options.trailingText + '<span class="slPlugin2">' + secondText + '</span>');
 
       // but wait! The secondText isn't supposed to show until the user clicks "read more", right? Right! Hide it using the span tags we wrapped it in above.
       ourText.find('.slPlugin2').css("display", "none");
                }
            }
        })
    };
})(jQuery);

Our function now truncates text to the specified length, and we can call it from our page simply:

<script src="jquery.min.js"></script>
<script src="jquery.slPlugin2.js"></script>
<script type="text/javascript">
$(document).ready(function() {  
    $('#slText').slPlugin2();  
});
</script>
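
And since everything we declared in defaults can be overridden, passing your own settings is just as easy. For example, with arbitrary values:

$(document).ready(function() {
    // truncate after ~80 characters and use a custom expand link
    $('#slText').slPlugin2({
        length: 80,
        moreLink: "show more",
        trailingText: "..."
    });
});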

Out of all the ways to truncate text via jQuery, this has to be my favorite. It's feature-rich while still being fairly easy to understand. As you might have noticed, we haven't touched on the "read more" and "read less" links or the expanding/collapsing animations yet; we'll cover those in Part 3 of this series. Between now and then, your homework is to think about how you'd add those features to this plugin.

-Cassandra

January 31, 2013

ActiveCampaign: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Peter Evans from ActiveCampaign. ActiveCampaign is a complete email marketing and marketing automation platform designed to help small businesses grow.

The Challenge of Sending Email Simply

You need to send email. Usually, that's a pretty simple task, so it's not uncommon to find users who think that sending a monthly newsletter is more or less the same task as sending a quick note to a friend about going to see a movie. In fact, those two email use-cases are completely different animals. With all of the nuances inherent in sending and managing large volumes of email, a plethora of email marketing services are positioned to help users better navigate the email marketing waters. It's tough to differentiate which features you might need and which features are just there to be a "Check" in a comparison checklist. ActiveCampaign set out to make the decision-making process simpler ... We knew that we needed the standard features like auto-responder campaigns, metrics reports and email templates, but we also knew we had to differentiate our service in a meaningful way. So we focused on automation.

Too often, the "automation" provided by a platform can be very cumbersome to set up (if it's available at all), and when it's actually working, there's little confirmation that actions are being performed as expected. In response, we were intentional about ActiveCampaign's automation features being easy to set up and manage ... If automation saves time and money, it shouldn't be intimidatingly difficult to incorporate into your campaigns. Here is a screenshot of what it takes to incorporate automation in your email campaigns with ActiveCampaign:

[Screenshot: ActiveCampaign automation setup]

No complicated logic. No unnecessary options. With only a few clicks, you can select an action to spark a meaningful response in your system. If a subscriber in your Newsletter list clicks on a link, you might want to move that subscriber to a different list. Because you might want to send a different campaign to that user as well, we provide the ability to add multiple automated actions for each subscriber action, and it's all very clear.

One of the subscriber actions that might stand out to you if you've used other email service providers (or ESPs) is the "When subscriber replies to a campaign" bullet. ActiveCampaign is the first ESP (that we're aware of) to provide users the option to send a series of follow-up campaigns (or to restrict the sending of future campaigns) to subscribers who reply to a campaign email. Replies are tracked in your campaign reports, and you have deep visibility into how many people replied, who replied, and how many times they replied. With that information, you can segment those subscribers and create automated actions for them, and the end result is that you're connecting with your subscriber base much more effectively because you're able to target them better ... And you don't have to break your back to do it.

SoftLayer customers know how valuable automation can be in terms of infrastructure, so it should be no surprise that email marketing campaigns can benefit so much from automation as well. Lots of ESPs provide stats, and it's up to you to figure out meaningful ways to use that information. ActiveCampaign goes a step beyond those other providers by helping you very simply engage your subscribers with relevant and intentional actions. If you're interested in learning more, check us out at http://www.activecampaign.com.

-Peter Evans, ActiveCampaign

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

January 29, 2013

iptables Tips and Tricks: APF (Advanced Policy Firewall) Configuration

Let's talk about APF. APF — Advanced Policy Firewall — is a policy-based iptables firewall system that provides simple, powerful control over your day-to-day server security. It might seem intimidating to be faced with all of the features and configuration tools in APF, but this blog should put your fears to rest.

APF is an iptables wrapper that works alongside iptables and extends its functionality. I personally don't use iptables wrappers, but I have a lot of experience with them, and I've seen that they do offer some additional features that streamline policy management. For example, by employing APF, you'll get several simple on/off toggles (set via configuration files) that make some complex iptables configurations available without extensive coding requirements. The flip-side of a wrapper's simplicity is that you aren't directly in control of the iptables commands, so if something breaks it might take longer to diagnose and repair. Before you add a wrapper like APF, be sure that you know what you are getting into. Here are a few points to consider:

  • Make sure that what you're looking to use adds a feature you need but cannot easily incorporate with iptables on its own.
  • You need to know how to effectively enable and disable the iptables wrapper (the correct way ... read the manual!), and you should always have a trusted failsafe iptables ruleset handy in the unfortunate event that something goes horribly wrong and you need to disable the wrapper (see the sketch after this list).
  • Learn about the basic configurations and rule changes you can apply via the command line. You'll need to understand the way your wrapper takes rules because it may differ from the way iptables handles rules.
  • You can't manually configure your iptables rules once you have your wrapper in place (or at least you shouldn't).
  • Be sure to know how to access your server via the IPMI management console so that if you completely lock yourself out beyond repair, you can get back in. You might even go so far as to have a script or set of instructions ready for tech support to run, in the event that you can't get in via the management console.

TL;DR: Have a Band-Aid ready!
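
As a minimal sketch of that failsafe (assuming a stock iptables install and root access), you can snapshot a known-good ruleset before enabling the wrapper and fall back to it in a pinch:

# before enabling APF: snapshot a working ruleset
iptables-save > /root/failsafe.rules

# if APF ever locks things up: flush its rules and restore the snapshot
apf -f
iptables-restore < /root/failsafe.rules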

APF Configuration

Now that you have been sufficiently advised about the potential challenges of using a wrapper (and you've got your Band-Aid ready), we can check out some of the useful APF rules that make iptables administration a lot easier. Most of the configuration for APF is in conf.apf. This file handles the default behavior, but not necessarily the specific blocking rules, and when we make any changes to the configuration, we'll need to restart the APF service for the changes to take effect.
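
For example, after saving your edits, a restart is one command (using the restart flag we'll see in the command line section below):

apf --restart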

Let's jump into conf.apf and break down what we see. The first code snippet is fairly self-explanatory. It's another way to make sure you don't lock yourself out of your server as you are making configuration changes and testing them:

# !!! Do not leave set to (1) !!!
# When set to enabled, a 5 minute cronjob is set to stop the firewall. Set
# this off (0) when the firewall is determined to be operating as desired.
DEVEL_MODE="1"

The next configuration options we'll look at are where you can make quick high-level changes if you find that legitimate traffic is being blocked and you want to make APF a little more lenient:

# This controls the amount of violation hits an address must have before it
# is blocked. It is a good idea to keep this very low to prevent evasive
# measures. The default is 0 or 1, meaning instant block on first violation.
RAB_HITCOUNT="1"
 
# This is the amount of time (in seconds) that an address gets blocked for if
# a violation is triggered, the default is 300s (5 minutes).
RAB_TIMER="300"
# This allows RAB to 'trip' the block timer back to 0 seconds if an address
# attempts ANY subsequent communication while still on the initial block period.
RAB_TRIP="1"
 
# This controls if the firewall should log all violation hits from an address.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_HIT="1"
 
# This controls if the firewall should log all subsequent traffic from an address
# that is already blocked for a violation hit; this can generate a lot of logs.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_TRIP="0"

Next, we have an option to adjust ICMP flood protection. This protection should be useful against some forms of DoS attacks, and the associated rules show up in your INPUT chain:

# Set a reasonable packet/time ratio for ICMP packets, exceeding this flow
# will result in dropped ICMP packets. Supported values are in the form of:
# pkt/s (packets/seconds), pkt/m (packets/minutes)
# Set value to 0 for unlimited, anything above is enabled.
ICMP_LIM="30/s"

If you want to add more ports to block for p2p traffic (which will show up in the P2P chain), update this code:

# A common set of known Peer-To-Peer (p2p) protocol ports that are often
# considered undesirable traffic on public Internet servers. These ports
# are also often abused on web hosting servers where clients upload p2p
# client agents for the purpose of distributing or downloading pirated media.
# Format is comma separated for single ports and an underscore separator for
# ranges (4660_4678).
BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778"

The next few lines let you designate the ports that you want to have closed at all times. They will be blocked for INPUT and OUTPUT chains:

# These are common Internet service ports that are understood in the wild
# services you would not want logged under normal circumstances. All ports
# that are defined here will be implicitly dropped with no logging for
# TCP/UDP traffic inbound or outbound. Format is comma separated for single
# ports and an underscore separator for ranges (135_139).
BLK_PORTS="135_139,111,513,520,445,1433,1434,1234,1524,3127"

The next important section to look at deals with conntrack. If you get "conntrack full" errors, this is where you'd increase the allowed connections. It's not uncommon to need more connections than the default, so if you need to adjust that value, you'd do it here:

# This is the maximum number of "sessions" (connection tracking entries) that
# can be handled simultaneously by the firewall in kernel memory. Increasing
# this value too high will simply waste memory - setting it too low may result
# in some or all connections being refused, in particular during denial of
# service attacks.
SYSCTL_CONNTRACK="65536"
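
As a side note, if you suspect you're hitting that ceiling, you can compare the live connection count against the maximum before raising SYSCTL_CONNTRACK (the exact /proc path varies by kernel version):

# on newer kernels (older kernels expose these under /proc/sys/net/ipv4/netfilter/ as ip_conntrack_*)
cat /proc/sys/net/netfilter/nf_conntrack_count
cat /proc/sys/net/netfilter/nf_conntrack_max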

We've talked about the ports we want closed at all times, so it only makes sense that we'd specify which ports we want open for all interfaces:

# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22"
# Common inbound (ingress) UDP ports
IG_UDP_CPORTS=""
# Common outbound (egress) TCP ports
EG_TCP_CPORTS="21,25,80,443,43"
# Common outbound (egress) UDP ports
EG_UDP_CPORTS="20,21,53"

And when we want a special port allowance for specific users, we can declare it easily. For example, if we want port 22 open for user ID 0, we'd use this code:

# Allow outbound access to destination port 22 for uid 0
EG_TCP_UID="0:22"

The next few sections on Remote Rule Imports and Global Trust are a little more specialized, and I encourage you to read more about them (there's too much to them to cover here on the blog). An important feature of APF is that it imports block lists from outside sources to keep you safe from known attackers, so the Remote Rule Imports can prove to be very useful. The Global Trust section is incredibly useful for multi-server deployments of APF: You can set up your global allow/block lists to pull from a central location, so a single update to the source is propagated to every server in your configuration. These changes are synced to the glob_allow.rules and glob_deny.rules files, which are downloaded (and overwritten) on a regular basis from your specified source, so don't make any manual edits in those files.

As you can see, conf.apf is no joke. It has a lot going on, but it's very straightforward and well documented. Once we've set up conf.apf with the configurations we need, it's time to look at the more focused allow_hosts.rules and deny_hosts.rules files. These .rules files are where you put your typical firewall rules in place. If there's one piece of advice I can give you about these configurations, it's to check whether your traffic is already allowed or blocked before adding a new rule. Having multiple rules that do the same thing (possibly in different places) is confusing and potentially dangerous.

The deny_hosts.rules configuration will look just like allow_hosts.rules, but it's performing the opposite function. Let's check out an allow_hosts.rules configuration that will allow the Nimsoft service to function:

tcp:in:d=48000_48020:s=10.0.0.0/8
tcp:out:d=48000_48020:d=10.0.0.0/8

The format is somewhat simplistic, but the file gives a little more context in the comments:

# The trust rules can be made in advanced format with 4 options
# (proto:flow:port:ip);
# 1) protocol: [packet protocol tcp/udp]
# 2) flow in/out: [packet direction, inbound or outbound]
# 3) s/d=port: [packet source or destination port]
# 4) s/d=ip(/xx) [packet source or destination address, masking supported]
# Syntax:
# proto:flow:[s/d]=port:[s/d]=ip(/mask)
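
To put that syntax to work, a rule allowing inbound SSH (destination port 22) from a hypothetical trusted subnet would look like this:

tcp:in:d=22:s=203.0.113.0/24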

APF also uses ds_hosts.rules to load the DShield.org blocklist, and I assume ecnshame_hosts.rules does something similar (I can't find much information about it), so you won't need to edit these files manually. Additionally, you probably don't need to make any changes to log.rules unless you want to change what exactly you log. As it stands, it logs certain dropped connections, which should be enough. It might also be worth noting that this file is a script, not a configuration file.

The last two configuration files are the preroute.rules and postroute.rules that (unsurprisingly) are used to make routing changes. If you have been following my articles, this corresponds to the iptables chains for PREROUTING and POSTROUTING where you would do things like port forwarding and other advanced configuration that you probably don't want to do in most cases.

APF Command Line Management

As I mentioned in the "points to consider" at the top of this post, it's important to learn the changes you can perform from the command line, and APF has some very useful command line tools:

[root@server]# apf --help
APF version 9.7 <apf@r-fx.org>
Copyright (C) 2002-2011, R-fx Networks <proj@r-fx.org>
Copyright (C) 2011, Ryan MacDonald <ryan@r-fx.org>
This program may be freely redistributed under the terms of the GNU GPL
 
usage /usr/local/sbin/apf [OPTION]
-s|--start ......................... load all firewall rules
-r|--restart ....................... stop (flush) & reload firewall rules
-f|--stop .......................... stop (flush) all firewall rules
-l|--list .......................... list all firewall rules
-t|--status ........................ output firewall status log
-e|--refresh ....................... refresh & resolve dns names in trust rules
-a HOST CMT|--allow HOST COMMENT ... add host (IP/FQDN) to allow_hosts.rules and
                                     immediately load new rule into firewall
-d HOST CMT|--deny HOST COMMENT .... add host (IP/FQDN) to deny_hosts.rules and
                                     immediately load new rule into firewall
-u|--remove HOST ................... remove host from [glob]*_hosts.rules
                                     and immediately remove rule from firewall
-o|--ovars ......................... output all configuration options

You can use these command line tools to turn your firewall on and off, add allowed or blocked hosts and display troubleshooting information. These commands are very easy to use, but if you want more fine-tuned control, you'll need to edit the configuration files directly (as we looked at above).
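
For instance, based on the usage output above (with placeholder addresses and simple one-word comments):

apf -a 192.0.2.10 trusted_office_ip     # allow a host and load the rule immediately
apf -d 198.51.100.23 ssh_brute_force    # deny a host and load the rule immediately
apf -u 198.51.100.23                    # remove that host's rule again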

I know it seems like a lot of information, but to a large extent, that's all you need to know to get started with APF. Take each section slowly and understand what each configuration file is doing, and you'll master APF in no time at all.

-Mark

Subscribe to tips-and-tricks