Posts Tagged 'Security'

January 29, 2013

iptables Tips and Tricks: APF (Advanced Policy Firewall) Configuration

Let's talk about APF. APF — Advanced Policy Firewall — is a policy-based iptables firewall system that provides simple, powerful control over your day-to-day server security. It might seem intimidating to be faced with all of the features and configuration tools in APF, but this blog should put your fears to rest.

APF is an iptables wrapper that works alongside iptables and extends its functionality. I personally don't use iptables wrappers, but I have a lot of experience with them, and I've seen that they do offer some additional features that streamline policy management. For example, by employing APF, you'll get several simple on/off toggles (set via configuration files) that make some complex iptables configurations available without extensive coding requirements. The flip-side of a wrapper's simplicity is that you aren't directly in control of the iptables commands, so if something breaks it might take longer to diagnose and repair. Before you add a wrapper like APF, be sure that you know what you are getting into. Here are a few points to consider:

  • Make sure that what you're looking to use adds a feature you need but cannot easily incorporate with iptables on its own.
  • You need to know how to effectively enable and disable the iptables wrapper (the correct way ... read the manual!), and you should always have a trusted failsafe iptables ruleset handy in the unfortunate event that something goes horribly wrong and you need to disable the wrapper (see the sketch after the TL;DR below).
  • Learn about the basic configurations and rule changes you can apply via the command line. You'll need to understand the way your wrapper takes rules because it may differ from the way iptables handles rules.
  • You can't manually configure your iptables rules once you have your wrapper in place (or at least you shouldn't).
  • Be sure to know how to access your server via the IPMI management console so that if you completely lock yourself out beyond repair, you can get back in. You might even go so far as to have a script or set of instructions ready for tech support to run, in the event that you can't get in via the management console.

TL;DR: Have a Band-Aid ready!
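In that spirit, here's a minimal sketch of what a "Band-Aid" script might look like (illustrative only; adjust it to your environment, and tighten the policies back down once you're back in control):

#!/bin/sh
# failsafe.sh - flush everything the wrapper loaded and fall back to a
# permissive state so you can get in and diagnose the problem
iptables -F
iptables -X
iptables -t nat -F
iptables -t mangle -F
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT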

APF Configuration

Now that you have been sufficiently advised about the potential challenges of using a wrapper (and you've got your Band-Aid ready), we can check out some of the useful APF rules that make iptables administration a lot easier. Most of the configuration for APF is in conf.apf. This file handles the default behavior, but not necessarily the specific blocking rules, and when we make any changes to the configuration, we'll need to restart the APF service for the changes to take effect.
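The restart itself is a single command (the full list of flags is covered at the end of this post):

[root@server]# apf -r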

Let's jump into conf.apf and break down what we see. The first code snippet is fairly self-explanatory. It's another way to make sure you don't lock yourself out of your server as you are making configuration changes and testing them:

# !!! Do not leave set to (1) !!!
# When set to enabled; 5 minute cronjob is set to stop the firewall. Set
# this off (0) when firewall is determined to be operating as desired.
DEVEL_MODE="1"

The next configuration options we'll look at are where you can make quick high-level changes if you find that legitimate traffic is being blocked and you want to make APF a little more lenient:

# This controls the amount of violation hits an address must have before it
# is blocked. It is a good idea to keep this very low to prevent evasive
# measures. The default is 0 or 1, meaning instant block on first violation.
RAB_HITCOUNT="1"
 
# This is the amount of time (in seconds) that an address gets blocked for if
# a violation is triggered, the default is 300s (5 minutes).
RAB_TIMER="300"
# This allows RAB to 'trip' the block timer back to 0 seconds if an address
# attempts ANY subsequent communication while still on the initial block period.
RAB_TRIP="1"
 
# This controls if the firewall should log all violation hits from an address.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_HIT="1"
 
# This controls if the firewall should log all subsequent traffic from an address
# that is already blocked for a violation hit; this can generate a lot of logs.
# The use of LOG_DROP variable set to 1 will override this to force logging.
RAB_LOG_TRIP="0"

Next, we have an option to adjust ICMP flood protection. This protection should be useful against some forms of DoS attacks, and the associated rules show up in your INPUT chain:

# Set a reasonable packet/time ratio for ICMP packets, exceeding this flow
# will result in dropped ICMP packets. Supported values are in the form of:
# pkt/s (packets/seconds), pkt/m (packets/minutes)
# Set value to 0 for unlimited, anything above is enabled.
ICMP_LIM="30/s"
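Under the hood, a limit like this is typically enforced with iptables' limit match. APF generates the rules for you, but the effect is roughly equivalent to something like this (a sketch, not APF's literal output):

iptables -A INPUT -p icmp -m limit --limit 30/s -j ACCEPT
iptables -A INPUT -p icmp -j DROP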

If you wanted to add more ports to block for p2p traffic (which will show up in the P2P chain), you'll update this code:

# A common set of known Peer-To-Peer (p2p) protocol ports that are often
# considered undesirable traffic on public Internet servers. These ports
# are also often abused on web hosting servers where clients upload p2p
# client agents for the purpose of distributing or downloading pirated media.
# Format is comma separated for single ports and an underscore separator for
# ranges (4660_4678).
BLK_P2P_PORTS="1214,2323,4660_4678,6257,6699,6346,6347,6881_6889,6346,7778"

The next few lines let you designate the ports that you want to have closed at all times. They will be blocked for INPUT and OUTPUT chains:

# These are common Internet service ports that are understood in the wild
# services you would not want logged under normal circumstances. All ports
# that are defined here will be implicitly dropped with no logging for
# TCP/UDP traffic inbound or outbound. Format is comma separated for single
# ports and an underscore separator for ranges (135_139).
BLK_PORTS="135_139,111,513,520,445,1433,1434,1234,1524,3127"

The next important section to look at deals with conntrack. If you get "conntrack full" errors, this is where you'd increase the allowed connections. It's not uncommon to need more connections than the default, so if you need to adjust that value, you'd do it here:

# This is the maximum number of "sessions" (connection tracking entries) that
# can be handled simultaneously by the firewall in kernel memory. Increasing
# this value too high will simply waste memory - setting it too low may result
# in some or all connections being refused, in particular during denial of
# service attacks.
SYSCTL_CONNTRACK="65536"

We've talked about the ports we want closed at all times, so it only makes sense that we'd specify which ports we want open for all interfaces:

# Common inbound (ingress) TCP ports
IG_TCP_CPORTS="22"
# Common inbound (ingress) UDP ports
IG_UDP_CPORTS=""
# Common outbound (egress) TCP ports
EG_TCP_CPORTS="21,25,80,443,43"
# Common outbound (egress) UDP ports
EG_UDP_CPORTS="20,21,53"

And when we want a special port allowance for specific users, we can declare it easily. For example, if we want port 22 open for user ID 0, we'd use this code:

# Allow outbound access to destination port 22 for uid 0
EG_TCP_UID="0:22"
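For reference, iptables handles per-user egress control like this with its owner match, so the generated rule would look something like the following (illustrative only):

iptables -A OUTPUT -p tcp --dport 22 -m owner --uid-owner 0 -j ACCEPT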

The next few sections on Remote Rule Imports and Global Trust are a little more specialized, and I encourage you to read a little more about them (since there's so much to them and not enough space to cover them here on the blog). An important feature of APF is that it imports block lists from outside sources to keep you safe from some attackers, so the Remote Rule Imports can prove to be very useful. The Global Trust section is incredibly useful for multi-server deployments of APF. Here, you can set up your global allow/block lists and have them all pull from a central location so that you can make a single update to the source and have the update propagated to all servers in your configuration. These changes are synced to the glob_allow/deny.rules files, and they will be downloaded (and overwritten) on a regular basis from your specified source, so don't make any manual edits in glob_allow/deny.rules.

As you can see, conf.apf is no joke. It has a lot of stuff going on, but it's very straightforward and documented well. Once we've set up conf.apf with the configurations we need, it's time to look at the more focused allow_hosts.rules and deny_hosts.rules files. These .rules files are where you put your typical firewall rules in place. If there's one piece of advice I can give you about these configurations, it would be to check whether your traffic is already allowed or blocked before adding a new rule. Having multiple rules that do the same thing (possibly in different places) is confusing and potentially dangerous.

The deny_hosts.rules configuration will look just like allow_hosts.rules, but it's performing the opposite function. Let's check out an allow_hosts.rules configuration that will allow the Nimsoft service to function:

tcp:in:d=48000_48020:s=10.0.0.0/8
tcp:out:d=48000_48020:d=10.0.0.0/8

The format is fairly simple, and the file gives a little more context in the comments:

# The trust rules can be made in advanced format with 4 options
# (proto:flow:port:ip);
# 1) protocol: [packet protocol tcp/udp]
# 2) flow in/out: [packet direction, inbound or outbound]
# 3) s/d=port: [packet source or destination port]
# 4) s/d=ip(/xx) [packet source or destination address, masking supported]
# Syntax:
# proto:flow:[s/d]=port:[s/d]=ip(/mask)

APF also uses ds_hosts.rules to load the DShield.org blocklist, and I assume the ecnshame_hosts.rules does something similar (can't find much information about it), so you won't need to edit these files manually. Additionally, you probably don't need to make any changes to log.rules, unless you want to make changes to what exactly you log. As it stands, it logs certain dropped connections, which should be enough. Also, it might be worth noting that this file is a script, not a configuration file.

The last two configuration files are the preroute.rules and postroute.rules that (unsurprisingly) are used to make routing changes. If you have been following my articles, this corresponds to the iptables chains for PREROUTING and POSTROUTING where you would do things like port forwarding and other advanced configuration that you probably don't want to do in most cases.

APF Command Line Management

As I mentioned in the "points to consider" at the top of this post, it's important to learn the changes you can perform from the command line, and APF has some very useful command line tools:

[root@server]# apf --help
APF version 9.7 <apf@r-fx.org>
Copyright (C) 2002-2011, R-fx Networks <proj@r-fx.org>
Copyright (C) 2011, Ryan MacDonald <ryan@r-fx.org>
This program may be freely redistributed under the terms of the GNU GPL
 
usage /usr/local/sbin/apf [OPTION]
-s|--start ......................... load all firewall rules
-r|--restart ....................... stop (flush) & reload firewall rules
-f|--stop .......................... stop (flush) all firewall rules
-l|--list .......................... list all firewall rules
-t|--status ........................ output firewall status log
-e|--refresh ....................... refresh & resolve dns names in trust rules
-a HOST CMT|--allow HOST COMMENT ... add host (IP/FQDN) to allow_hosts.rules and
                                     immediately load new rule into firewall
-d HOST CMT|--deny HOST COMMENT .... add host (IP/FQDN) to deny_hosts.rules and
                                     immediately load new rule into firewall
-u|--remove HOST ................... remove host from [glob]*_hosts.rules
                                     and immediately remove rule from firewall
-o|--ovars ......................... output all configuration options

You can use these command line tools to turn your firewall on and off, add allowed or blocked hosts and display troubleshooting information. These commands are very easy to use, but if you want more fine-tuned control, you'll need to edit the configuration files directly (as we looked at above).
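For example, allowing a trusted host and confirming that the rule loaded takes two commands (the IP and comment here are placeholders):

[root@server]# apf -a 10.0.1.5 trusted_office_ip
[root@server]# apf -l | grep 10.0.1.5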

I know it seems like a lot of information, but to a large extent, that's all you need to know to get started with APF. Take each section slowly and understand what each configuration file is doing, and you'll master APF in no time at all.

-Mark

December 30, 2012

Risk Management: Event Logging to Protect Your Systems

The calls start rolling in at 2am on Sunday morning. Alerts start firing off. Your livelihood is in grave danger. It doesn't come with the fanfare of a blockbuster Hollywood thriller, but if a server hosting your critical business infrastructure is attacked, becomes compromised or fails, it might feel like the end of the world. In our Risk Management series, we've covered the basics of securing your servers, so the next consideration we need to make is for when our security is circumvented.

It seems silly to prepare for a failure in a security plan we spend time and effort creating, but if we stick our heads in the sand and tell ourselves that we're secure, we won't be prepared in the unlikely event of something happening. Every attempt to mitigate risks and stop threats in their tracks will be circumvented by the one failure, threat or disaster you didn't cover in your risk management plan. When that happens, accurate event logging will help you record what happened, respond to the event (if it's still in progress) and have the information available to properly safeguard against or prevent similar threats in the future.

Like any other facet of security, "event logging" can seem overwhelming and unforgiving if you're looking at hundreds of types of events to log, each with dozens of variations and options. Like we did when we looked at securing servers, let's focus our attention on a few key areas and build out what we need:

Which events should you log?
Look at your risk assessment and determine which systems are of the highest value or could cause the most trouble if interrupted. Those systems are likely to be what you prioritized when securing your servers, and they should also take precedence when it comes to event logging. You probably don't have unlimited compute and storage resources, so you have to determine which types of events are most valuable for you and how long you should keep records of them — it's critical to have your event logs on-hand when you need them, so logs should be retained online for a period of time and then backed up offline to be available for another period of time.

Your goal is to understand what's happening on your servers and why it's happening so you know how to respond. The most common auditable events include successful and unsuccessful account log-on events, account management events, object access, policy change, privilege functions, process tracking and system events. The most conservative approach actually involves logging more information/events and keeping those logs for longer than you think you need. From there, you can evaluate your logs periodically to determine if the level of auditing/logging needs to be adjusted.

Where do you store the event logs?
Your event logs won't do you any good if they are stored in a space that is insufficient for the amount of data you need to collect. I recommend centralizing your logs in a secure environment that is both readily available and scalable. In addition to the logs being accessible when the server(s) they are logging are inaccessible, aggregating and organizing your logs in a central location can be a powerful tool to build reports and analyze trends. With that information, you'll be able to more clearly see deviations from normal activity to catch attacks (or attempted attacks) in progress.
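As a concrete starting point, most Linux distributions ship rsyslog, which can forward everything to a central collector with one line (assuming a reachable collector called loghost.example.com; the @@ prefix means TCP, a single @ means UDP):

# /etc/rsyslog.d/forward.conf
*.* @@loghost.example.com:514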

How do you protect your event logs?
Attacks can come from both inside and out. To avoid intentional malicious activity by insiders, separation of duties should be enforced when planning logging. Learn from The X Files and "Trust no one." Someone who has been granted the 'keys to your castle' shouldn't also be able to disable the castle's security system or mess with the castle's logs. Your network engineer shouldn't have exclusive access to your router logs, and your sysadmin shouldn't be the only one looking at your web server logs.

Keep consistent time.
Make sure all of your servers are using the same accurate time source. That way, all logs generated from those servers will share consistent time-stamps. Trying to diagnose an attack or incident is considerably more difficult if your web server's clock isn't synced with your database server's clock or if they're set to different time zones. You're putting a lot of time and effort into logging events, so you're shooting yourself in the foot if events across all of your servers don't line up cleanly.
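On a typical Linux server, that's as simple as running an NTP daemon pointed at a common source. Here's a minimal ntpd sketch, with placeholder pool servers you'd swap for your own:

# /etc/ntp.conf (excerpt)
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst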

Read your logs!
Logs won't do you any good if you're not looking at them. Know the red flags to look for in each of your logs, and set aside time to look for those flags regularly. Several SoftLayer customers — like Tech Partner Papertrail — have come up with innovative and effective log management platforms that streamline the process of aggregating, searching and analyzing log files.

It's important to reiterate that logging — like any other security endeavor — is not a 'one size fits all' model, but that shouldn't discourage you from getting started. If you aren't logging or you aren't actively monitoring your logs, any step you take is a step forward, and each step is worth the effort.

Thanks for reading, and stay secure, my friends!

-Matthew

November 14, 2012

Risk Management: Securing Your Servers

How do you secure your home when you leave? If you're like most people, you make sure to lock the door you leave from, and you head off to your destination. If Phil is right about "locks keeping honest people honest," simply locking your front door may not be enough. When my family moved into a new house recently, we evaluated its physical security and tried to determine possible avenues of attack (garage, doors, windows, etc.), tools that could be used (a stolen key, a brick, a crowbar, etc.) and ways to mitigate the risk of each kind of attack ... We were effectively creating a risk management plan.

Every risk has different probabilities of occurrence, potential damages, and prevention costs, and the risk management process helps us balance the costs and benefits of various security methods. When it comes to securing a home, the most effective protection comes from using layers of different methods ... To prevent a home invasion, you might lock your door, train your dog to make intruders into chew toys and have an alarm system installed. Even if an attacker can get a key to the house and bring some leftover steaks to appease the dog, the motion detectors for the alarm are going to have the police on their way quickly. (Or you could violate every HOA regulation known to man by digging a moat around the house, filling it with sharks with laser beams attached to their heads, and building a medieval drawbridge over the moat.)

I use the example of securing a house because it's usually a little more accessible than talking about "server security." Server security doesn't have to be overly complex or difficult to implement, but its stigma of complexity usually prevents systems administrators from incorporating even the simplest of security measures. Let's take a look at the easiest steps to begin securing your servers in the context of their home security parallels, and you'll see what I'm talking about.

Keep "Bad People" Out: Have secure password requirements.

Passwords are your keys and your locks — the controls you put into place that ensure that only the people who should have access get it. There's no "catch all" method of keeping the bad people out of your systems, but employing a variety of authentication and identification measures can greatly enhance the security of your systems. A first line of defense for server security would be to set password complexity and minimum/maximum password age requirements.
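On a Linux server, the password-aging side of that first line of defense lives in /etc/login.defs (the values below are illustrative, and complexity rules themselves are usually enforced through your PAM configuration):

# /etc/login.defs (excerpt)
PASS_MAX_DAYS   90
PASS_MIN_DAYS   1
PASS_WARN_AGE   7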

If you want to add an additional layer of security at the authentication level, you can incorporate "Strong" or "Two-Factor" authentication. From there, you can learn about a dizzying array of authentication protocols (like TACACS+ and RADIUS) to centralize access control or you can use active directory groups to simplify the process of granting and/or restricting access to your systems. Each layer of authentication security has benefits and drawbacks, and most often, you'll want to weigh the security risk against your need for ease-of-use and availability as you plan your implementation.

Stay Current on your "Good People": When authorized users leave, make sure their access to your system leaves with them.

If you gave your neighbor a key to your tool shed while he was finishing his renovation and he never returns the tools he borrows, you need to take that key back when you tell him he can't borrow any more. If you don't, nothing is stopping him from walking over to the shed when you're not looking and taking more (all?) of your tools. I know it seems like a silly example, but that kind of thing is a big oversight when it comes to server security.

Employees are granted access to perform their duties (the principle of least privilege), and when they no longer require access, the "keys to the castle" should be revoked. Auditing who has access to what (whether it be for your systems or for your applications) should be continual.

You might have processes in place to grant and remove access, but it's also important to audit those privileges regularly to catch any breakdowns or oversights. The last thing you want is to have a disgruntled former employee wreak all sorts of havoc on your key systems, sell proprietary information or otherwise cost you revenue, fines, recovery efforts or lost reputation.
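An audit doesn't have to be elaborate to be useful. On a Linux box, even two quick commands will surface accounts that should have been removed (a sketch; adapt it to your own user management):

# Accounts that still have a usable login shell
awk -F: '$7 !~ /(nologin|false)$/ {print $1}' /etc/passwd

# When each account last logged in
lastlog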

Catch Attackers: Monitor your systems closely and set up alerts if an intrusion is detected.

There is always a chance that bad people are going to keep looking for a way to get into your house. Maybe they'll walk around the house to try and open the doors and windows you don't use very often. Maybe they'll ring the doorbell and if no lights turn on, they'll break a window and get in that way.

You can never completely eliminate all risk. Security is a continual process, and eventually some determined, over-caffeinated hacker is going to find a way in. Thinking your security is impenetrable makes you vulnerable if, by some stretch of the imagination, an attacker breaches your security (see: Trojan Horse). Continuous monitoring strategies can alert administrators if someone does things they shouldn't be doing. Think of it as a motion detector in your house ... "If someone gets in, I want to know where they are." When you implement monitoring, logging and alerting, you will also be able to recover more quickly from security breaches because every file accessed will be documented.

Minimize the Damage: Lock down your system if it is breached.

A burglar smashes through your living room window, runs directly to your DVD collection, and takes your limited edition "Saved by the Bell" series box set. What can you do to prevent them from running back into the house to get the autographed poster of Alf off of your wall?

When you're monitoring your servers and you get alerted to malicious activity, you're already late to the game ... The damage has already started, and you need to minimize it. In a home security environment, that might involve an ear-piercing alarm or filling the moat around your house even higher so the sharks get a better angle to aim their laser beams. File integrity monitors and IDS software can mitigate damage in a security breach by reverting files when checksums don't match or stopping malicious behavior in its tracks.

These recommendations are only a few of the first-line layers of defense when it comes to server security. Even if you're only able to incorporate one or two of these tips into your environment, you should. When you look at server security in terms of a journey rather than a destination, you can celebrate the progress you make and look forward to the next steps down the road.

Now if you'll excuse me, I have to go to a meeting where I'm proposing moats, drawbridges, and sharks with laser beams on their heads to SamF for data center security ... Wish me luck!

-Matthew

November 2, 2012

The Trouble with Open DNS Resolvers

In the last couple of days, there's been a bit of buzz about "open DNS resolvers" and DNS amplification DDoS attacks, and SoftLayer's name has been brought up a few times. In a blog post on October 30, CloudFlare explained DNS Amplification DDoS attacks and reported the geographic and network sources of open DNS resolvers that were contributing to a 20Gbps attack on their network. SoftLayer's AS numbers (SOFTLAYER and the legacy THEPLANET-AS number) show up on the top ten "worst offenders" list, and Dan Goodin contacted us to get a comment for a follow-up piece on Ars Technica — Meet the network operators helping to fuel the spike in big DDoS attacks.

While the content of that article is less sensationalized than the title, there are still a few gaps to fill when it comes to how SoftLayer is actually involved in the big picture (*SPOILER ALERT* We aren't "helping to fuel the spike in big DDoS attacks"). The CloudFlare blog and the Ars Technica post presuppose that the presence of open recursive DNS resolvers is a sign of negligence on the part of the network provider at best and maliciousness at worst, and that's not the case.

The majority of SoftLayer's infrastructure is made up of self-managed dedicated and cloud servers. Customers who rent those servers on a monthly basis have unrestricted access to operate their servers in any way they'd like as long as that activity meets our acceptable use policy. Some of our largest customers are hosting resellers who provide that control to their customers who can then provide that control to their own customers. And if 23 million hostnames reside on the SoftLayer network, you can bet that we've got a lot of users hosting their DNS on SoftLayer infrastructure. Unfortunately, it's easier for those customers and customers-of-customers and customers-of-customers-of-customers to use "defaults" instead of looking for, learning and implementing "best practices."

It's all too common to find those DNS resolvers open and ultimately vulnerable to DNS amplification attacks, and whenever our team is alerted to that vulnerability on our network, we make our customers aware of it. In turn, they may pass the word down the customer-of-customer chain to get to the DNS owner. It's usually not a philosophical question about whether DNS resolvers should be open for the greater good of the Internet ... It's a question of whether the DNS owner has any idea that their "configuration" is vulnerable to be abused in this way.

SoftLayer's network operations, abuse and support teams have tools that flag irregular and potentially abusive traffic coming from any server on our network, and we take immediate action when we find a problem or are alerted to one by someone who sends details to abuse@softlayer.com. The challenge we run into is that flagging obvious abusive behavior from an active DNS server is a bit of a cat-and-mouse game ... Attackers cloak their activity in normal traffic. Instead of sending a huge amount of traffic from a single domain, they send a marginal amount of traffic from a large number of machines, and the "abusive" traffic is nearly impossible for even the DNS owner to differentiate from "regular" traffic.

CloudFlare effectively became a honeypot, and they caught a distributed DNS amplification DoS attack. The results they gathered are extremely valuable to teams like mine at SoftLayer, so if they go the next step to actively contact the abuse channel for each of the network providers in their list, I hope that each of the other providers will jump on that information as I know my team will.

If you have a DNS server on the SoftLayer network, and you're not sure whether it's configured to prevent it from being used for these types of attacks, our support team is happy to help you out. For those of you interested in doing a little DNS homework to learn more, Google's Developer Network has an awesome overview of DNS security threats and mitigations which gives an overview of potential attacks and preventative measures you can take. If you're just looking for an easy way to close an open recursor, scroll to the bottom of CloudFlare's post, and follow their quick guide.
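For the common case of a BIND server, closing an open recursor usually comes down to restricting recursion to the networks you actually serve. Here's a sketch for named.conf, with an address block you'd replace with your own:

acl "trusted" { 10.0.0.0/8; };
options {
    recursion yes;
    allow-recursion { "trusted"; };
};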

If, on the other hand, you have your own DNS server and you don't want to worry about all of this configuration or administration, SoftLayer operates private DNS resolvers that are limited to our announced IP space. Feel free to use ours instead!

-Ryan

October 23, 2012

Tips from the Abuse Department: Know Spam. Stop Spam.

As an abuse administrator, I'm surrounded by spam on a daily basis. When someone sends an abuse-related complaint to our abuse@softlayer.com contact address, it gets added to our ticket queue, and our Abuse SLayers take time to investigate and follow up with the customers whose servers violate our acceptable use policy. The majority of those abuse-related submissions are reporting spam coming from our network, and in my interaction with customers, I've noticed that spam (and the source of spam) is widely misunderstood.

Most spam tickets we create on customer accounts pinpoint spam sent from a compromised or exploited server. Our direct customer didn't send the phishing email, malware distribution, pharmacy advertisement or pornographic spam, but that activity came from their account. While they're accountable for the abusive behavior coming from their server, in many cases, they don't know that there's a problem until we post an abuse ticket on their account. These servers are targeted and compromised by common techniques and exploits that could have been easily avoided, but they aren't very well known outside the world of abuse.

To protect yourself from a spammer, you need to think like a spammer. You need to understand how someone might try to exploit your environment so that you can prevent them from doing so. As you're looking at ways to secure your server proactively, make sure you target these five exploits in particular:

1. User Auth Login

This is by far the most common exploit used to send spam. This method involves a person or script using the credentials of a user to send spam through a domain's mail server. The majority of these incidents are caused by malware on a client PC that obtains the login and password for a domain user and uses that information to log on and send mail from the client PC through the server. Often, these spam messages are sent through a botnet command structure.

When an account is compromised, simply changing the password for the compromised user on the server usually won't stop the abuse. We see quite a few accounts that continue to send spam after an initial abuse ticket results in a password change. Most servers that are sending spam with this method are found to only be sending a small amount of spam at any given time to avoid detection. The low volume of spam that is being sent per server is made up for by the fact that there are thousands of servers being used for the same spamming campaigns.

In order to stop the User Auth Login exploit, a customer needs to clean all of the malicious software (malware) from their environments. To prevent future User Auth Login compromises, users should be made aware of the potential dangers of untrusted software, and if they believe their machines are infected, they need to know what to do.

2. Tell-a-friend Exploitation

The User Auth Login technique is the most common method employed by spammers, but the "tell-a-friend" script exploitation isn't far behind when it comes to volume of affected servers. This spamming method finds websites that use scripts to invite users to refer friends to a page or product. Spammers will use the 'Your Message' field in one of these scripts to input their own content and links, and they'll push the actual page referral link to the bottom of the message. When these site scripts aren't secure, the spammer will use them to send hundreds or thousands of messages.

To avoid having your website fall victim to this type of spam, be very wary of any widget or script you add. If you need to add Facebook, Twitter and email "share" functionality to your site, make sure you incorporate a tell-a-friend script that does not allow for customizable messages or does not accept input of more than one email address. Also, users won't need the "cc" or "bcc" fields, so you can be sure those are axed as well. If you can't find a good "share" script that you're comfortable with from a security perspective, it might be a good idea to remove that functionality to avoid exploitation.

3. Uploaded Mailers

Spam sent via an uploaded third-party mailer can sometimes prove difficult for admins to locate. An uploaded third-party mailer could be capable of creating its own outbound SMTP connection, and that would allow a program to bypass the existing MTA on the server and render any legitimate mail logs useless for investigation. Another challenge is that a PHP mailer can be uploaded to a location within a user's web content, and that mailer is run by the user 'nobody' (the default Apache user).

We strongly suggest configuring your server to have the mail headers show the script's user (not the Apache default user) and the location the script is running from on the server. Many times, these kinds of mailers are maliciously uploaded after a user's FTP password has been compromised, so be sure your FTP login information is secure.
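If the mail is going out through PHP, one common way to get that visibility is PHP's mail.add_x_header setting (available in PHP 5.3 and later), which stamps every message with the UID and filename of the sending script:

; php.ini
mail.add_x_header = On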

4. Software Exploits

The "software exploits" category casts a huge shadow. Every piece of software on a server — from mail servers, content management systems and control panels to the operating system itself — can be targeted by hackers. They probe servers to find security vulnerabilities and weak coding, and when they find a vulnerability, they take control.

The hacker who found the software vulnerability might not actually take advantage of the exploit immediately. That user may sell access to other entities for their use, and that use often ends up being spam. In addition to having strong firewall rules and access restrictions, you should update and maintain the current stable versions of all software on your servers.

5. WordPress Exploits

WordPress exploits would technically fall under the "Software Exploits" category, but I'm breaking it out into its own category simply due to the volume of spam issues that are the result of exploiting this particular piece of software. The first step to protecting against spam being sent through this source is to make sure you have the latest version of WordPress installed. With that done, be sure to research the latest security plugins for that version and install any that are applicable to your environment.

These five techniques are not the only ones used by spammers to take advantage of your environment, but they are some of the most common. To protect yourself from becoming a source of spam, make your servers a more difficult target to exploit. To stop spam, you need to know spam. Now that you know spam, it's time to stop it. Ask questions, test your environment regularly and watch your logs for any unexplained usage.

-Andrew

October 16, 2012

An Introduction to Risk Management

Whether you're managing a SaaS solution for thousands of large clients around the world or you're running a small mail server for a few mom-and-pop businesses in your neighborhood, you're providing IT service for a fee — and your customers expect you to deliver. It's easy to get caught up in focusing your attention and energy on day-to-day operations, and in doing so, you might neglect some of the looming risks that threaten the continuity of your business. You need to prioritize risk assessment and management.

Just reading that you need to invest in "Risk Management" probably makes you shudder. Admittedly, when a business owner has to start quantifying and qualifying potential areas of business risk, the process can seem daunting and full of questions ... "What kinds of risks should I be concerned with?" "Once I find a potential risk, should I mitigate it? Avoid it? Accept it?" "How much do I need to spend on risk management?"

When it comes to risk management in hosting, the biggest topics are information security, backups and disaster recovery. While those general topics are common, each business's needs will differ greatly in each area. Because risk management isn't a very "cookie-cutter" process, it's intimidating. It's important to understand that protecting your business from risks isn't a destination ... it's a journey, and whatever you do, you'll be better off than you were before you did it.

Because there's not a "100% Complete" moment in the process of risk management, some people think it's futile — a gross waste of time and resources. History would suggest that risk management can save companies millions of dollars, and that's just when you look at failures. You don't see headlines when businesses effectively protect themselves from attempted hacks or when sites automatically fail over to a new server after a hardware failure.

It's unfortunate how often confidential customer data is unintentionally released by employees or breached by malicious attackers, especially because those instances are often so easily preventable. When you understand the potential risks of your business's confidential data landing in the hands of the wrong people (whether malicious attackers or careless employees), you'll usually take action to avoid quantifiable losses like monetary fines and unquantifiable ones like the loss of your reputation.

More and more, regulations are being put in place to hold companies accountable for protecting their sensitive information. In the healthcare industry, businesses have to meet the strict Health Insurance Portability and Accountability Act (HIPAA) regulations. Sites that accept credit card payments online are required to operate in Payment Card Industry (PCI) compliance. Data centers will spend hours (and hours and hours) achieving and maintaining their SSAE 16 certification. These rules and requirements are not arbitrarily designed to be restrictive (though they can feel that way sometimes) ... They are based on best practices to ultimately protect businesses in those industries from risks that are common throughout the respective industry.

Over the coming months, I'll discuss ways that you as a SoftLayer customer can mitigate and manage your risk. We'll talk about security and backup plans that will incrementally protect your business and your customers. While we won't get to the destination of 100% risk-mitigated operations, we'll get you walking down the path of continuous risk assessment, identification and mitigation.

Stay tuned!

-Matthew

October 5, 2012

Spark::red: Tech Partner Spotlight

This guest blog comes to us from Spark::red, a featured member of the SoftLayer Technology Partners Marketplace. Spark::red is a global PCI Level 1 compliant hosting provider specializing in Oracle ATG Commerce. With full redundancy at every layer, powerful servers, and knowledgeable architects, Spark::red delivers exceptional environments in weeks instead of months. In this video we talk to Spark::red co-founder Devon Hillard about what Spark::red does, how they help companies that are outgrowing current solutions, and why they chose SoftLayer.

The Three Most Common PCI Compliance Myths

As a hosting provider that specializes in Oracle ATG Commerce, Spark::red has extensive experience and expertise when it comes to the Payment Card Industry Data Security Standards (PCI DSS). If you're not familiar with PCI DSS, they are standards imposed on companies that process payment data, and they are designed to protect the company and its customers.

We've been helping online businesses maintain PCI Compliance for several years now, and in that time, we've encountered a great deal of confusion and misinformation when it comes to compliance. Despite numerous documents and articles available on this topic, we've found that three myths seem to persist when it comes to PCI DSS compliance. Consider us the PCI DSS compliance mythbusters.

Myth 1: Only large enterprise-level businesses are required to be PCI Compliant.

According to PCI DSS, every company involved in payment card processing online or offline should be PCI Compliant. The list of those companies includes e-commerce businesses of all sizes, banks and web hosting providers. It's important to note that I said, "should be PCI Compliant" here. There is no federal law that makes PCI compliance a legal requirement. However, a business IS technically required to be PCI compliant in order to take and process Visa or MasterCard payments, and failure to operate in compliance could mean huge fees if you're found in violation after a breach.

Payment card data security is the most significant concern for cardholders, and it should be a priority for your business, whether you have two hundred customers or two million customers. If you're processing ANY credit card payments, you should make sure you are PCI-compliant.

There are four levels of PCI compliance based on the number of credit card transactions your business processes a year, so the PCI compliance process is going to look different for small, medium-sized and large businesses. Visit the PCI Security Standards Council website to check which level of PCI compliance your business needs.

Myth 1: Busted.

Myth 2: A business that uses a PCI-compliant managed hosting provider automatically becomes PCI-compliant.

Multiple parties are involved in processing payment data, and each of them needs to meet certain standards to guarantee cardholders' data security. From a managed hosting provider perspective, we're responsible for things like proper firewall installation and maintenance, updating anti-virus programs on our servers, providing a unique ID for each person with computer access to restrict access to the most sensitive data, and regularly scanning our systems for vulnerabilities. Our customer — an online retailer, for example — would need to develop its software applications in accordance with PCI DSS, keep cardholder data storage to a minimum, and perform application-layer penetration tests that are out of their hosting provider's control.

If you're pursuing PCI compliance, you have a significant advantage if you start with a PCI-compliant managed hosting provider. Many security questions are already answered by your PCI-compliant host, so there is a shorter list of things for you to worry about. You save money, time and effort in the process of completing PCI certification.

Myth 2: Busted.

Myth 3: A business that uses SSL certificates is PCI compliant.

Secure Sockets Layer (SSL) certificates allow secure data transmission to and from the server through data encryption that significantly decreases the network's vulnerability to IP spoofing, IP source routing, DNS spoofing, man-in-the-middle attacks and other threats from hackers. However, SSL cannot protect cardholder data from attacks using cross-site scripting or SQL injection, and it doesn't provide secure audit trails or event monitoring. SSL certificates are an important part of secure transactions, but they're only part of PCI DSS compliance.

Myth 3: Busted.

If you have questions about PCI compliance or you're interested in Oracle ATG Hosting, visit Spark::red, give us a call or send us an email, and we'll do what we can to help. When PCI compliance doesn't seem like a scary monster in your closet, it's easier to start the process and get it done quickly.

-Elena Rybalchenko, Spark::red

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

September 10, 2012

Creating a Usable, Memorable and Secure Password

When I was young, I vividly remember a wise man sharing a proverb with me: "Locks are for honest people." The memory is so vivid because it completely confused me ... "If everyone was honest, there would be no need for locks," I thought, naively. As it turns out, everyone isn't honest, and if "locks keep honest people honest," they don't do anything to/for dishonest people. That paradox lingered in the back of my mind, and a few years later, I found myself using some sideways logic to justify learning the mechanics of lock picking.

I ordered my first set of lock picks (with instruction booklet) for around $10 online. When the package arrived, I scrambled to unwrap it like Ralphie unwrapped the "Red Ryder" BB gun in "A Christmas Story," and I set out to find my first lock to pick. After a few unsuccessful attempts, I turned to the previously discarded instruction booklet, and I sat down to actually learn what I was supposed to be doing. That bit of study wound up being useful; with that knowledge, I managed to pick my first lock.

I tend to collect hobbies. I also tend to shift every spare thought towards my newest obsession until whatever goal I set is accomplished. To this end, I put together a mobile lock-picking training device — the cylinder/tumbler from a dead bolt, my torque wrench wrapped with electrical tape to prevent the recurrence of blisters, and my favorite snake rake. I took this device with me everywhere, unconsciously unlocking and resetting the lock as I went about my shopping, sat in a doctor's office or walked around the block. In my mind, I was honing my skills on a mechanical challenge, but as one of my friends let me know, people who saw me playing with the lock in public would stare at me like I was a budding burglar audaciously flaunting his trade.

I spent less money on a lock picking set than I would have on a lock, and I felt like I had a key to open any door. The only thing between me and the other side of a locked door in front of me was my honesty. What about the dishonest people in the world, though? They have the same access to cheap tools, and while they probably don't practice their burgling in public, they can spend just as much time sharpening their skills in private. From then on, I was much more aware of the kinds of locks I bought and used to secure my valuables.

When I started getting involved in technology, I immediately noticed the similarities between physical security and digital security. When I was growing up, NBC public service announcements taught me, "Knowledge is Power," and that's even truer now than it was then. We trust technology with our information, and if someone else gets access to that information, the results can be catastrophic.

Online, the most common "hacks" and security exploits are usually easily avoidable. They're the IRL equivalent of leaving valuables on a table by an unlocked window with the thought, "The window is closed ... My stuff is secure." Some of those windows may be hard to reach, but some of them are street-level in high-traffic pedestrian areas. The most vulnerable and visible of access points: Passwords.

You've heard people tell you not to do silly things like making "1 2 3 4 5" your combination lock, and your IT team has probably gotten onto you about using "password" to log onto your company's domain, but our tendency to create simpler passwords is a response to the inherent problem that a secure password is, by its nature, hard to remember. The average Internet user probably isn't going to use pwgen or a password lockbox ... If you had a list of passwords from a given site, my guess is that you'd wind up seeing a lot more pets' names and birth years than passwords like S0L@Y#Rpr!Vcl0udN)#mblyR#Q. What people need to understand is that the "secure" password can be just as easy to remember as "Fluffy1982."

Making a *Usable* Secure Password

The process of creating a unique, usable and secure password is pretty straightforward:

  1. Start with a series of words or phrases which have a meaning to you: A quote in a movie, song lyric, title of your favorite book series, etc. For our example, let's use "SoftLayer Private Clouds, no assembly required."
  2. l33t up your phrase. To do this, you'd remove punctuation and spaces, and you'd replace letters in the phrase with special characters. Predetermine these conversions to create a template of alterations you can apply to any string with minimal thought. In the simplest of cyphers, a letter becomes a number or character that resembles it: An "o" becomes a "0," an "e" becomes a "3," an "a" becomes an "@," etc. In more complicated structures, a character can be different based on where it lies in the string or what less-common substitutions you choose to use. Our example at this point would look like this: "S0ftL@y3rPr1v@t3Cl0udsn0@ss3mblyr3qu1r3d"
  3. Right now, we have a password that would make any brute-forcing script-kiddie yearn for the Schwartz, but we're not done yet. If someone were to find our cypher and personal phrase, they may be able to figure out our password. Also, this password is too long for use on many sites with password restrictions that cap you at 16 characters. Our goal is to create a password between 15 and 25 characters, so be prepared to make cuts when necessary.
  4. A good practice is to cut out the beginning or ending of a word. In our example (taking out the l33t substitutions for simplicity here), our phrase might look like this: "so-layer-priv-cloud-no-embly-req"
  5. When we combine the shortened password with l33t substitutions, the last trick we want to incorporate is using our Shift key. An "e" might be a "3" in a simple l33t cypher, but if we use the Shift key, the "e" becomes a "#" (Shift+"3"): "S0L@Y#Rpr!Vcl0udN)#mblyR#Q"

The main idea is that when you're "locking" your accounts with a password, you don't need the most complicated lock ever created ... You just need one that can't be picked easily. Establish a pattern of uncommon substitutions that you can use consistently across all of your sites, and you'll be able to use seemingly common phrases like "Fluffy is my dog's name" or "Neil Armstrong was an astronaut" without worrying about anyone being able to "open your window."
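If you want to sanity-check a simple cypher like this, the substitutions are easy enough to script. Here's a toy sketch using tr (your real substitution pattern should be more personal than this one):

$ echo "SoftLayerPrivateClouds" | tr 'oiea' '01#@'
S0ftL@y#rPr1v@t#Cl0uds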

-Phil (@SoftLayerDevs)

July 27, 2012

SoftLayer 'Cribs' ≡ DAL05 Data Center Tour

The highlight of any customer visit to a SoftLayer office is always the data center tour. The infrastructure in our data centers is the hardware platform on which many of our customers build and run their entire businesses, so it's not surprising that they'd want a first-hand look at what's happening inside the DC. Without exception, visitors walk out of a SoftLayer data center pod impressed ... even if they've been in dozens of similar facilities in the past.

What about the customers who aren't able to visit us, though? We can post pictures, share stats, describe our architecture and show you diagrams of our facilities, but those mediums can't replace the experience of an actual data center tour. In the interest of bridging the "data center tour" gap for customers who might not be able to visit SoftLayer in person (or who want to show off their infrastructure), we decided to record a video data center tour.

If you've seen "professional" video data center tours in the past, you're probably positioning a pillow on top of your keyboard right now to protect your face if you fall asleep from boredom when you hear another baritone narrator voiceover and see CAD mock-ups of another "enterprise class" facility. Don't worry ... That's not how we roll:

Josh Daley — whose role as site manager of DAL05 made him the ideal tour guide — did a fantastic job, and I'm looking forward to feedback from our customers about whether this data center tour style is helpful and/or entertaining.

If you want to see more videos like this one, "Like" it, leave comments with ideas and questions, and share it wherever you share things (Facebook, Twitter, your refrigerator, etc.).

-@khazard

July 25, 2012

ServerDensity: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome David Mytton, Founder of Server Density. Server Density is a hosted server and website monitoring service that alerts you when your website is slow, down or back up.

5 Ways to Minimize Downtime During Summer Vacation

It's a fact of life that everything runs smoothly until you're out of contact, away from the Internet or on holiday. However, you can't be available 24/7 on the chance that something breaks; instead, there are several things you can do to ensure that when things go wrong, the problem can be managed and resolved quickly. To help you set up your own "get back up" plan, we've come up with a checklist of the top five things you can do to prepare for an ill-timed issue.

1. Monitoring

How will you know when things break? Using a tool like Server Density — which combines availability monitoring from locations around the world with internal server metrics like disk usage, Apache and MySQL — means that you can be alerted if your site goes down, and have the data to find out why.

Surprisingly, the most common problems we see are some that are the easiest to fix. One problem that happens all too often is when a customer simply runs out of disk space in a volume! If you've ever had it happen to you, you know that running out of space will break things in strange ways — whether it prevents the database from accepting writes or fails to store web sessions on disk. By doing something as simple as setting an alert to monitor used disk space for all important volumes (not just root) at around 75%, you'll have proactive visibility into your server to avoid hitting volume capacity.
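Even without a monitoring service, a tiny cron job makes a decent safety net. Here's a sketch, with a threshold and recipient you'd adjust to taste:

#!/bin/sh
# Warn when any volume passes 75% used
USAGE=$(df -P | awk '0+$5 >= 75 {print $6 " at " $5}')
[ -n "$USAGE" ] && echo "$USAGE" | mail -s "Disk warning on $(hostname)" admin@example.com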

Additionally, you should define triggers for unusual values that will set off a red flag for you. For example, if your Apache requests per second suddenly drop significantly, that change could indicate a problem somewhere else in your infrastructure, and if you're not monitoring those indirect triggers, you may not learn about those other problems as quickly as you'd like. Find measurable direct and indirect relationships that can give you this kind of early warning, and find a way to measure them and alert yourself when something changes.

2. Dealing with Alerts

It's no good having alerts sent to someone who isn't responding (or who can't respond at a given time). Using a service like PagerDuty allows you to define on-call rotations for different types of alerts. Nobody wants to be on-call every hour of every day, so differentiating and channeling alerts in an automated way could save you a lot of hassle. Another huge benefit of a platform like PagerDuty is that it also handles escalations: If the first contact in the path doesn't wake up or is out of service, someone else gets notified quickly.

3. Tracking Incidents

Whether you're the only person responsible or you have a team of engineers, you'll want to track the status of alerts/issues, particularly if they require escalation to different vendors. If an incident lasts a long time, you'll want to be able to hand it off to another person in your organization with all of the information they need. By tracking incidents with detailed notes, you can avoid fatigue and prevent unnecessary repetition of troubleshooting steps.

We use JIRA for this because it allows you to define workflows an issue can progress along as you work on it. It also includes easy access to custom fields (e.g. specifying a vendor ticket ID) and can be assigned to different people.

4. Understanding What Happened

After you have received an alert, acknowledged it and started tracking the incident, it's time to start investigating. Often, this involves looking at logs, and if you only have one or two servers, it's relatively easy, but as soon as you add more, the process can get exponentially more difficult.

We recommend piping them all into a log search tool like (fellow Tech Partners Marketplace participant) Papertrail or Loggly. Those platforms afford you access to all of your logs from a single interface with the ability to see incoming lines in real-time or the functionality to search back to when the incident began (since you've clearly monitored and tracked all of that information in the first three steps).

5. Getting Access to Your Servers

If you're traveling internationally, access to the Internet via a free hotspot like the ones you find in Starbucks isn't always possible. It's always a great idea to order a portable 3G hotspot in advance of a trip. You can usually pick one up from the airport to get basic Internet access without paying ridiculous roaming charges. Once you have your connection, the next step is to make sure you can access your servers.

Both iPhone and Android have SSH and remote desktop apps available which allow you to quickly log into your servers to fix easy problems. Having those tools often saves a lot of time if you don't have access to your laptop, but they also introduce a security concern: If you open server logins to the world so you can login from the dynamic IPs that change when you use mobile connectivity, then it's worth considering a multi-factor authentication layer. We use Duo Security for several reasons, with one major differentiator being the modules they have available for all major server operating systems to lock down our logins even further.

You're never going to escape the reality of system administration: If your server has a problem, you need to fix it. What you can get away from is the uncertainty of not having a clearly defined process for responding to issues when they arise.

-David Mytton, ServerDensity

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.