April 1, 2013

SoftLayer Mobile: Now a Universal iOS Application

Last month, we put SoftLayer Mobile HD out to pasture. That iPad-specific application performed amazingly, and we got a lot of great feedback from our customers, so we doubled down on our efforts to support iPad users by merging SoftLayer Mobile HD functionality with our standard SoftLayer Mobile app to provide a singular, universal application for all iOS devices.

By merging our two iOS applications into a single, universal app, we can provide better feature parity, maintain a coherent architecture and increase code reuse and maintainability because we're only working with a single feature-rich binary that provides a consistent user experience on the iPhone and the iPad at the same time. Obviously, this meant we had to retool much of the legacy iPhone-specific SoftLayer Mobile app in order to provide the same device-specific functionality we had for the iPad in SoftLayer Mobile HD, but I was surprised at how straightforward that process ended up being. I thought I'd share a few of the resources iOS includes that simplify the process of creating a universal iOS application.

iOS supports development of universal applications via device-specific resource loading and device-specific runtime checks, and we leveraged those tools based on particular situations in our code base.

Device-specific resource loading allows iOS to choose the appropriate resource for the device being used. For example, if we have two different versions of an image called SoftLayerOnBlack.png to fit either an iPhone or an iPad, we simply call one SoftLayerOnBlack~iphone.png and call the other one SoftLayerOnBlack~ipad.png. With those two images in our application bundle, we let the system choose which image to use with a simple line of code:

UIImage* image = [UIImage imageNamed: @"SoftLayerOnBlack.png"];

In addition to device-specific resource loading, iOS also includes device-specific runtime checks. With these runtime checks, we're able to create conditional code paths depending on the underlying device type:

if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
    // The device is an iPad running iOS 3.2 or later.
} else {
    // The device is an iPhone or iPod touch.
}

These building blocks allow for a great deal of flexibility when it comes to creating a universal iOS application. Both techniques enable simple support based on what device is running the application, but they're used in subtly different ways. With those device-specific tools, developers are able to approach their universal applications in a couple of distinct ways:

Device-Dependent View Controller:
If we want users on the iPhone and iPad applications to have the same functionality but have the presentation tailored to their specific devices, we would create separate iPhone and iPad view controllers. For example, let's look at how our Object Storage browser appears on the iPhone and the iPad in SoftLayer Mobile:

Object Storage - iPhone | Object Storage - iPad

We want to take advantage of the additional real estate the iPad provides, so at runtime, the appropriate view controller is selected based on the device's UI context. The technique would look a little like this:

@implementation SLMenuController
...
 
- (void) navigateToStorageModule: (id) sender {
    UIViewController<SLApplicationModule> *storageModule = nil;
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        storageModule = [SLStorageModule_iPad storageModule];
    } else {
        storageModule = [SLStorageModule storageModule];
    }
    [self navigateToModule: storageModule];
}
...
@end

"Universal" View Controller
In other situations, we didn't need the viewing experience to differ between the iPhone and the iPad, so we used a single view controller for all devices. We don't compromise the user experience or the presentation of data because the view controller either re-scales or reconfigures the layout at runtime based on screen size. Take a look at the "About" module on the iPhone and iPad:

About Module - iPhone | About Module - iPad

The code for the universal view controller of the "About" module looks something like this:

@implementation SLAboutModuleNavigationViewController
…
 
- (id) init {
    self = [super init];
    if (self) {
      _navigationHidden = YES;
      _navigationWidth = [[UIScreen mainScreen] bounds].size.width * 0.5;
    }
    return self;
}
 
@end

There are plenty of other iOS features and tricks in the universal SoftLayer Mobile app. If you've got a SoftLayer account and an iOS device, download the app to try it out and let us know what you think. If you were a SoftLayer Mobile HD user, do you notice any significant changes in the new app from the legacy app?

-Pawel

P.S. If you're not on iOS but you still want some SoftLayer love on your mobile device, check out the other SoftLayer Mobile Apps on Android and Windows Phone.

March 26, 2013

Should My Startup Join an Accelerator/Incubator Program?

As part of my role at SoftLayer, I have the opportunity and privilege to mentor numerous entrepreneurs and startup teams when they partner with us through our Catalyst program. One question I hear often is, "Should I join an accelerator?" My answer: "That all depends." Let's look at the five lessons entrepreneurs should learn before they decide to join a startup accelerator or incubator program.

Lesson 1: The founders must be committed to the success of their venture.
Joining an accelerator or incubator comes with some strings attached — startups give up between 6 and 10 percent of their equity in exchange for some cash and a structured program that usually lasts around three months. Obviously, this kind of commitment should not be taken lightly.

Too often, startups join accelerator programs before they are ready or mature enough as a team. Sometimes, a company's idea isn't fully baked, so they end up spending as much time "creating" their business as they do "accelerating" it. As a result, that company isn't able to leverage an accelerator's resources efficiently throughout the entire program ... The founders need to establish a vision for the business, begin laying the groundwork for the company's products and services, and be 100% committed to the accelerator program before joining. If you can't say with confidence that your startup meets all three of those requirements, don't do it. Take care of those three points and proceed to the next lesson.

Lesson 2: Be prepared to leverage what you are given.
Many startups join accelerator and incubator programs with unrealistic expectations. Participation in these programs — even the most exclusive and well-known ones — by no means guarantees that you'll raise additional money or have a successful exit. These programs provide startups with office space, free cloud services, and access to mentors, investors, recruiters and media ... Those outstanding services provide participating startups with a distinct competitive advantage, but they don't serve up success on a silver platter. If you aren't ready to work tirelessly to leverage the benefits of a startup program, don't bother.

Lesson 3: Take advice and criticism well; mentors are trying to help.
"Mentorship" is very tough to qualify, and criticism is difficult to take ... Especially if you're 100% committed to your business and you don't want to be told that you've done something wrong. Mentors in these startup programs have "been there and done that," and they wouldn't be in a mentorship position if they weren't looking out for your best interest and the ultimate success of your company.

Look for programs that take mentorship seriously and can provide a broad range of expertise, from strategy to marketing and business development to software architecture to building and scaling IT infrastructure. Then be intentional about listening to the people around you.

Lesson 4: Do your research and make an informed decision.
With the proliferation of startups globally, we're also seeing an evolution in the accelerator ecosystem. There are a number of accelerators positioned to help support founders with ideas on a global, regional and local basis, but it's important to evaluate a program's vision against its execution of that vision. Not all startup programs are created equal, and some might not offer the right set of resources and opportunities for your team. When you're giving up equity in your company, you should have complete confidence that the accelerator or incubator you join will deliver on its side of the deal.

Lesson 5: Leverage the network and community you will meet.
When you've done your homework, applied and been accepted to the perfect startup program, meet everyone you can and learn from them. One of the most tangible benefits of joining an accelerator is the way you can fast track a business idea while boosting network contacts. Much in the way someone chooses a prestigious college or joins a fraternity, some of the most valuable resources you'll come across in these programs are the people you meet. In this way, accelerators and incubators are becoming a proxy for undergrad and graduate school ... The appeal for promising entrepreneurs is simple: Why wait to make a dent in the universe? Today, more people are going to college and fewer are landing well-paying jobs after graduation, so some of the world's best and brightest are turning to these communities and foregoing the more structured "higher education" process.

Even if your startup is plugging along smoothly, a startup accelerator or incubator program might be worth a look. Venture capitalists often trust programs like TechStars and 500 Startups to filter or vet early stage companies. If your business has the stamp of approval from one of these organizations, it's decidedly less risky than a business idea pitched by a random entrepreneur.

If you understand each of these lessons and you take advantage of the resources and opportunities provided by startup accelerators and incubators, the sky is the limit for your business. Now get to work.

Class dismissed.

-@gkdog

March 22, 2013

Social Media for Brands: Monitor Twitter Search via Email

If you're responsible for monitoring Twitter for conversations about your brand, you're faced with a challenge: You need to know what people are saying about your brand at all times AND you don't want to live your entire life in front of Twitter Search.

Over the years, a number of social media applications have been released specifically for brand managers and social media teams, but most of those applications (especially the free/inexpensive ones) differentiate themselves only by the quality of their analytics and how real-time their data is reported. If that's what you need, you have plenty of fantastic options. Those differentiators don't really help you if you want to take a more passive role in monitoring Twitter search ... You still have to log into the application to see your fancy dashboards with all of the information. Why can't the data come to you?

About three weeks ago, Hazzy stopped by my desk and asked if I'd help build a tool that uses the Twitter Search API to collect brand keyword mentions and send an email alert with those mentions in digest form every 30 minutes. The social media team had been using Twilert for these types of alerts since February 2012, but over the last few months, messages have been delayed due to issues connecting to Twitter search ... It seems that the service is so popular that it hits Twitter's limits on API calls. An email digest scheduled to be sent every thirty minutes ends up going out ten hours late, and ten hours is an eternity in social media time. We needed something a little more timely and reliable, so I got to work on a simple "Twitter Monitor" script to find all mentions of our keyword(s) on Twitter, email those results in a simple digest format, and repeat the process every 30 minutes whenever new mentions are found.

With Bear's Python-Twitter library on GitHub, connecting to the Twitter API is a breeze. Why did we use Bear's library in particular? Just look at his profile picture. Yeah ... 'nuff said. So with that Python wrapper to the Twitter API in place, I just had to figure out how to use the tools Twitter provided to get the job done. For the most part, the process was very clear, and Twitter actually made querying the search service much easier than we expected. The Search API finds all mentions of whatever string of characters you designate, so instead of creating an elaborate Boolean search for "SoftLayer OR #SoftLayer OR @SoftLayer ..." or any number of combinations of arbitrary strings, we could simply search for "SoftLayer" and have all of those results included. If you want to see only @ replies or hashtags, you can limit your search to those alone, but because "SoftLayer" isn't a word that gets thrown around much without referencing us, we wanted to see every instance. This is the code we ended up working with for the search functionality:

def status_by_search(search):
    # Query the Twitter Search API for the given term.
    statuses = api.GetSearch(term=search)
    # Keep only Tweets newer than the last Tweet ID we've already reported.
    results = filter(lambda x: x.id > get_log_value(), statuses)
    returns = []
    if len(results) > 0:
        for result in results:
            returns.append(format_status(result))
 
        # Record the newest Tweet ID so the next run skips these results.
        new_tweets(results)
        return returns, len(returns)
    else:
        # No new mentions, so stop here (no email gets sent).
        exit()
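
For context, the api object used in that function is an authenticated python-twitter connection. A minimal sketch of that setup, assuming the library's standard constructor (the credential strings below are placeholders, not real keys), looks something like this:

import twitter  # Bear's python-twitter library
 
# Placeholder credentials -- substitute your own keys from dev.twitter.com
api = twitter.Api(consumer_key='YOUR_CONSUMER_KEY',
                  consumer_secret='YOUR_CONSUMER_SECRET',
                  access_token_key='YOUR_ACCESS_TOKEN',
                  access_token_secret='YOUR_ACCESS_TOKEN_SECRET')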

If you walk through the script, you'll notice that we want to return only unseen Tweets to our email recipients. Shortly after we got the Twitter Monitor up and running, we noticed how easy it would be to get spammed with the same messages every time the script ran, so we had to filter our results accordingly. Twitter's API allows you to request tweets with a Tweet ID greater than one that you specify; however, when I tried designating that "oldest" Tweet ID, we had mixed results ... Whether due to my ignorance or a fault in the implementation, we were getting fewer results than we should. Tweet IDs are unique and numerically sequential, so they can be relied upon as much as a datetime (and they're far easier to work with, to boot), so I decided to use the highest Tweet ID from each batch of processed messages to filter the next set of results. The script stores that Tweet ID and uses a little bit of logic to determine which Tweets are newer than the last Tweet reported.

def new_tweets(results):
    # Update the stored Tweet ID if this batch contains anything newer.
    if get_log_value() < max(result.id for result in results):
        set_log_value(max(result.id for result in results))
        return True
 
 
def get_log_value():
    # Read the highest Tweet ID we've already reported.
    with open('tweet.id', 'r') as f:
        return int(f.read())
 
 
def set_log_value(messageId):
    # Store the highest Tweet ID from the current batch.
    with open('tweet.id', 'w+') as f:
        f.write(str(messageId))
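
One small assumption worth calling out: get_log_value() expects the tweet.id file to exist already, so seed it once before the first run. Something as simple as this (a starting value of 0 just means "treat everything as new") does the trick:

# One-time setup: seed the Tweet ID log so the first run has a baseline.
with open('tweet.id', 'w') as f:
    f.write('0')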

Once we culled out our new Tweets, we needed our script to email those results to our social media team. Luckily, we didn't have to reinvent the wheel here, and we added a few lines that enabled us to send an HTML-formatted email over any SMTP server. One of the downsides of the script is that the login credentials for your SMTP server are stored in plaintext, so if you can come up with an alternative that adds a layer of security to those credentials (or lets you send with different kinds of credentials), we'd love for you to share it.
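
The emailing piece isn't reproduced in this post, but the general shape of it is standard-library Python: build a MIME message and hand it to an SMTP server. Here's a rough sketch of that idea — the mailserver, credentials and addresses are placeholders, not the values our script actually uses:

import smtplib
from email.mime.text import MIMEText
 
def send_digest(html_body, subject):
    # Build an HTML-formatted message.
    msg = MIMEText(html_body, 'html')
    msg['Subject'] = subject
    msg['From'] = 'twitter-monitor@example.com'
    msg['To'] = 'social-team@example.com'
 
    # Placeholder mailserver and credentials (stored in plaintext, as noted above).
    server = smtplib.SMTP('mail.example.com', 587)
    server.starttls()
    server.login('username', 'password')
    server.sendmail(msg['From'], [msg['To']], msg.as_string())
    server.quit()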

From that point, we could run the script manually from the server (or a laptop for that matter), and an email digest would be sent with new Tweets. Because we wanted to automate that process, I added a cron job that would run the script at the desired interval. As a bonus, if the script doesn't find any new Tweets since the last time it was run, it doesn't send an email, so you won't get spammed by "0 Results" messages overnight.

The script has been in action for a couple of weeks now, and it has gotten our social media team's seal of approval. We've added a few features here and there (like adding the number of Tweets in an email to the email's subject line), and I've enlisted the help of Kevin Landreth to clean up the code a little. Now, we're ready to share the SoftLayer Twitter Monitor script with the world via GitHub!

SoftLayer Twitter Monitor on GitHub

The script should work well right out of the box in any Python environment with the required libraries after a few simple configuration changes:

  • Get your Twitter Consumer Key, Consumer Secret, Access Token and Access Token Secret from https://dev.twitter.com/
  • Copy/paste that information where noted in the script.
  • Update your search term(s).
  • Enter your mailserver address and port.
  • Enter your email account credentials if you aren't working with an open relay.
  • Set the self.from_ and self.to values to your preference.
  • Ensure all of the Python requirements are met.
  • Configure a cron job to run the script at your desired interval. For example, if you want to send emails every 10 minutes: */10 * * * * <path to python> <path to script> > /dev/null 2>&1

As soon as you add your information, you should be in business. You'll have an in-house Twitter Monitor that delivers a simple email digest of your new Twitter mentions at whatever interval you specify!

Like any good open source project, we want the community's feedback on how it can be improved or other features we could incorporate. This script uses the Search API, but we're also starting to play around with the Stream API and SoftLayer Message Queue to make some even cooler tools to automate brand monitoring on Twitter.

If you end up using the script and liking it, send SoftLayer a shout-out via Twitter and share it with your friends!

-@SoftLayerDevs

March 20, 2013

Learntrail: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome Daniel Hamilton, CTO of Learntrail. Learntrail is a learning management system for creating, assigning, and tracking e-learning programs. It helps you train your employees and develop a more effective workforce.

The Power of Great People

In 1995, Peter Drucker, one of the founding fathers of modern-day management, shared a profoundly simple idea: "People are our greatest asset." Today, almost two decades later, that quote is reiterated in one form or another by the top executives at the largest companies in the world. You can have the best product, a stellar marketing plan and the perfect vision, but without a great team of people to execute with those tools, your company isn't going anywhere.

In an online world now driven by innovation, it's easy to want to substitute "technology" for "people" as a business's greatest asset, but I'd argue that Peter Drucker's quote is as true now as it was in 1995. Think about it in terms of keeping your website online. Your server's hardware — a powerful CPU, ample storage space, tons of RAM and a fast network connection — might dictate how your website runs when everything is going smoothly, but when your traffic spikes over the holidays or an article on your blog goes viral, your ability to respond quickly to keep your website operational will be dictated by the quality of your server admins and support staff.

While good companies focus on improving their products, great companies focus on improving their people. In 2010, Google approached the challenge of improving its people by creating GoogleEDU — a program designed to formalize the process of educating employees in new skills, strategies and perspectives. Beyond building a stronger team of smarter individuals, Google is clearly investing in its employees, and that investment goes a long way to engender loyalty and job satisfaction.

What if your business doesn't happen to have Google's resources or a $269 billion market cap? That's the problem Learntrail set out to solve. Our platform was designed to make it easy for businesses to create stunning, full-featured multimedia courses that can be monitored and tracked in detail with a few clicks.

Learntrail Chalkboard

You can bring your new-hire orientation program online, centralize training documents for new products, or create simple lessons about company-specific procedures through a sleek, easy-to-use portal. You’ll also get real-time reports about your team’s progress, so you'll know exactly how your training is being used by your employees. To prove how confident we are that Learntrail will meet your needs, we have a risk-free, no credit card required 14-day trial that lets you kick the tires and get a feel for how Learntrail can work for your business.

Your people are your greatest asset.

-Daniel Hamilton, Learntrail

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace. These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

March 19, 2013

iptables Tips and Tricks: CSF Configuration

In our last "iptables Tips and Tricks" installment, we talked about Advanced Policy Firewall (APF) configuration, so it should come as no surprise that in this installment, we're turning our attention to ConfigServer Security & Firewall (CSF). Before we get started, you should probably run through the list of warnings I include at the top of the APF blog post and make sure you have your Band-Aid ready in case you need it.

To get the ball rolling, we need to download CSF and install it on our server. In this post, we're working with a CentOS 6.0 32-bit server, so our (root) terminal commands would look like this to download and install CSF:

$ wget http://www.configserver.com/free/csf.tgz #Download CSF using wget.
$ tar zxvf csf.tgz #Unpack it.
$ yum install perl-libwww-perl #Make sure perl modules are installed ...
$ yum install perl-Time-HiRes  #Otherwise it will generate an error.
$ cd csf
$ ./install.sh #Install CSF.
 
#MAKE SURE YOU HAVE YOUR BAND-AID READY
 
$ /etc/init.d/csf start #Start CSF. (Note: You can also use '$ service csf start')

Once you start CSF, you can see a list of the default rules that load at startup. CSF defaults to a DROP policy:

$ iptables -nL | grep policy
Chain INPUT (policy DROP)
Chain FORWARD (policy DROP)
Chain OUTPUT (policy DROP)

Don't ever run "iptables -F" unless you want to lock yourself out. In fact, you might want to add "This server is running CSF - do not run 'iptables -F'" to your /etc/motd, just as a reminder/warning to others.
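
If you want that reminder in place, appending it to the message of the day is a one-liner (run as root):

$ echo "This server is running CSF - do not run 'iptables -F'" >> /etc/motd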

CSF loads on startup by default. This means that if you get locked out, a simple reboot probably won't fix the problem. Runlevels 2, 3, 4, and 5 are all on:

$ chkconfig --list | grep csf
csf             0:off   1:off   2:on    3:on    4:on    5:on    6:off

Some features of CSF will not work unless you have certain iptables modules installed. I believe they are installed by default in CentOS, but if you custom-built your iptables, they might not all be installed. Run this script to see if all modules are installed:

$ /etc/csf/csftest.pl
Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...OK
Testing ipt_limit/xt_limit...OK
Testing ipt_recent...OK
Testing xt_connlimit...OK
Testing ipt_owner/xt_owner...OK
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...OK
 
RESULT: csf should function on this server

As I mentioned, this is the default iptables installation on a minimal CentOS 6.0 image, so chances are good that these modules are already installed on your system. It never hurts to check, though.

The CSF Configuration File

The primary CSF configuration is stored in the well-documented /etc/csf/csf.conf file. CSF is extremely configurable, so there are a lot of options to read over. Let's look at some of the more important features:

Testing

TESTING = "1"
TESTING_INTERVAL = "5"

With TESTING enabled, a cron job clears the firewall rules every "5" minutes (the TESTING_INTERVAL) so you don't lock yourself out while you're testing your rules. When you are satisfied with your rules (and confident that you won't lock yourself out), you can set TESTING to "0".

Globally Allowed Ports

# Allow incoming TCP ports
TCP_IN = "20,21,22,25,53,80,110,143,443,465,587,993,995"
 
# Allow outgoing TCP ports
TCP_OUT = "20,21,22,25,53,80,110,113,443"
 
# Allow incoming UDP ports
UDP_IN = "20,21,53"
 
# Allow outgoing UDP ports
# To allow outgoing traceroute add 33434:33523 to this list
UDP_OUT = "20,21,53,113,123"

Incoming Ping Requests

# Allow incoming PING
ICMP_IN = "1"

Allowing ping is usually a good option for diagnostic purposes, so I don't recommend turning it off. Disallowing ping is an example of "security through obscurity," and it will not typically dissuade your attackers.

Ethernet Device

ETH_DEVICE = ""
ETH6_DEVICE = ""

Here, you can configure iptables to ONLY use one Ethernet adapter. You might want to only guard your public network adapter in some situations.

IP Limit in Permanent "Deny" File

DENY_IP_LIMIT = "200"

A higher number here will obviously screen out more IP addresses in csf.deny, but higher numbers also may cause slowdowns.

IP Limit in Temporary "Deny" File

DENY_TEMP_IP_LIMIT = "100"

Similar to DENY_IP_LIMIT, the DENY_TEMP_IP_LIMIT represents the maximum number of IPs that can be stored in the temporary ban list.

SMTP Blocking

SMTP_BLOCK = "0"

When set to "1", SMTP_BLOCK does not completely block outbound SMTP, but it does block it for most users. This will prevent malicious scripts and compromised users from making outbound connections from unauthorized mail clients on the server. SMTP_BLOCK doesn't stop those scripts from running, but it does stop them from functioning. Mail sent through the proper channels will still be delivered normally.

Allowing SMTP on localhost

SMTP_ALLOWLOCAL = "1"

Custom Mail Port Designation

SMTP_PORTS = "25,465,587"

Allowing SMTP Access to Users/Groups

SMTP_ALLOWUSER = ""
SMTP_ALLOWGROUP = "mail,mailman"

SYN Flood Protection

SYNFLOOD = "0"
SYNFLOOD_RATE = "100/s"
SYNFLOOD_BURST = "150"

Per the documentation, you should only enable SYN flood protection (SYNFLOOD= "1") if you are currently under a SYN flood attack.

Concurrent Connections Limit

CONNLIMIT = "22;5,80;20"
PORTFLOOD = "22;tcp;5;300,80;tcp;20;5"

These options allow you to add customized DoS protection. CONNLIMIT handles the number of concurrent connections, and in this example, we're limiting port 22 to 5 connections and port 80 to 20 connections.

PORTFLOOD watches the number of connections per a given number of seconds. In this example, we're limiting the TCP connection on port 22 to 5 connections/second with a quiet period of 300 seconds before the connection is unblocked. Additionally, we're limiting the TCP connection on port 80 to 20 connections/second with a quiet period of 5 seconds before the connection is unblocked.

Check the readme.txt file for more information about the syntax.

Logging to Syslog

SYSLOG = "0"

When enabled, this option logs lfd (Login Failure Daemon) messages to syslog as well as to /var/log/lfd.log.

Dropping v. Rejecting Packets

DROP = "DROP"

This configuration allows you to either DROP or REJECT packets. REJECT tells the sender that the packet has been blocked by the firewall. DROP just drops the packet and does not send a response. I like DROP better for regular use, but REJECT might be more helpful if you need to diagnose a connectivity issue.

Logging Dropped Connections

DROP_LOGGING = "1"

This option logs dropped connections to syslog. I don't see any reason to turn this off unless your hard drive is getting full.

Port Exceptions When Logging Dropped Connections

DROP_NOLOG = "67,68,111,113,135:139,445,500,513,520"

These ports are specifically blocked from being logged either to conserve hard drive space or make the log file easier to read.

"Watch Mode"

WATCH_MODE = "0"

If you are ever stuck trying to troubleshoot a large ruleset, you might consider turning this option on. You can use it to track the actions applied to watched IP addresses and see where they are getting blocked or accepted.

Login Failure Daemon Alert

LF_ALERT_TO = ""
LF_ALERT_FROM = ""
LF_ALERT_SMTP = ""

You can specify an email address to report errors from the Login Failure Daemon, which tracks and automatically blocks brute force login attempts.

Permanent Blocks and NetBlocks

LF_PERMBLOCK = "1"
LF_PERMBLOCK_INTERVAL = "86400"
LF_PERMBLOCK_COUNT = "4"
LF_PERMBLOCK_ALERT = "1"
LF_NETBLOCK = "0"
LF_NETBLOCK_INTERVAL = "86400"
LF_NETBLOCK_COUNT = "4"
LF_NETBLOCK_CLASS = "C"
LF_NETBLOCK_ALERT = "1"

These settings control the permanent block and netblock blocking. You probably don't need to touch these settings, but you might want some additional security or less security depending on your company's needs. If something gets permablocked, it will require your intervention to clear it, which might create downtime for your clients. Likewise, if a legitimate IP address happens to be part of a netblock that has an attacking IP address on it, it will get blocked if you have that feature turned on. A class C network encompasses 256 IP addresses. You can set this to class B or A, but that could block thousands or millions of IP addresses, respectively. Unless you find yourself under constant attack, I would advise you to leave LF_NETBLOCK off.

Additional Protection During Updates

# Safe Chain Update. If enabled, all dynamic update chains (GALLOW*, GDENY*,
# SPAMHAUS, DSHIELD, BOGON, CC_ALLOW, CC_DENY, ALLOWDYN*) will create a new
# chain when updating, and insert it into the relevant LOCALINPUT/LOCALOUTPUT
# chain, then flush and delete the old dynamic chain and rename the new chain.
#
# This prevents a small window of opportunity opening when an update occurs and
# the dynamic chain is flushed for the new rules.
SAFECHAINUPDATE = "0"

Activating this option will increase your system resource usage and will require more rules to be running at one time, but it provides an additional layer of protection during updates. Without this option turned on, your rules will be flushed for a short amount of time, leaving your server vulnerable.

Multi-Server Deployment Options

LF_GLOBAL = "0"
GLOBAL_ALLOW = ""
GLOBAL_DENY = ""
GLOBAL_IGNORE = ""

Like APF, you can configure global lists for multiple server deployments. You'll need to specify a URL of the text file with the IP addresses for the global lists.

SPAMHAUS Blocklist

LF_SPAMHAUS = "0"

This option enables the SPAMHAUS blocklist; to turn it on, set it to the number of seconds between refreshes. The recommended setting is 86400 (1 day).

Blocking TOR Exit IP Addresses

LF_TOR = "0"

Enabling this option will block TOR exit IP addresses. If you are not familiar with TOR, it is a completely anonymous proxy network. This could block some legitimate users who are trying to protect their anonymity, so I would recommend only turning this on if you are already under attack from a TOR exit address.

Blocking Bogon Addresses

LF_BOGON = "0"
LF_BOGON_URL = "http://www.cymru.com/Documents/bogon-bn-agg.txt"
LF_BOGON_SKIP = ""

Blocking bogon addresses (addresses that should not be possible) is usually a good decision. To enable, set the number of seconds between refreshes. I recommend enabling this option and setting the refresh at 86400 (1 day). If you do so, be sure to add your private network adapters to the skip list.

Country-Specific Access to Your Server

CC_DENY = ""
CC_ALLOW = ""

With these options, you can block or allow entire countries from accessing your server. To do so, enter the country codes in a comma separated list. Even though this generates a lot of additional rules, it's valuable to some sysadmins.
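
As an example of the syntax — the codes below are placeholders for the two-letter ISO country codes you'd actually use — a deny/allow pair might look like this:

CC_DENY = "XX,YY"
CC_ALLOW = "ZZ"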

CC_ALLOW_FILTER = ""

Alternatively, you can set your server to exclusively accept traffic from a list of country codes. All other countries not listed will have their traffic dropped. There are many other settings related to these options that I don't have time to cover in this blog.

Blocking Login Failures

LF_TRIGGER = "0"

This enables blocking of login failures (per service). There are a lot of great customization options in this section.

Scanning Directories for Malicious Files

LF_DIRWATCH = "300"

This feature scans /tmp and /dev/shm for potentially malicious files and alerts you to their presence based on the interval you designate. You can also have CSF automatically quarantine malicious files with this option:

LF_DIRWATCH_DISABLE = "0"

Distributed Attack Protection

LF_DISTATTACK = "0"

By enabling this option, you activate additional protection against distributed attacks.

Blocking Based on Abusive Email Usage

LT_POP3D = "0"
LT_IMAPD = "0"

If a user checks email too many times per hour (more than the non-zero value specified), the user's IP address is blocked.

Email Alert Following Block

LT_EMAIL_ALERT = "1"

This will send you an email when something is blocked. I'd recommend leaving it on.

Blocking IP Addresses Based on Number of Connections

CT_LIMIT = "0"

This feature tracks connections and blocks the IP if the number of connections is too high. Use caution because if you enable this option and set this value too low, it will block legitimate traffic.

Application-Level Protection

PT_LIMIT = "60"

This feature provides application level protection against malicious scripts that take a long time to execute.

Blocking Port Scanners

PS_INTERVAL = "300"
PS_LIMIT = "10"

Enabling HTML User Interface for CSF

UI = "0"

CSF has a built-in HTML user interface. You can enable this by setting UI = "1". There is a list of prerequisites for this option in the readme.txt.

Notifying Blocked IP Addresses

MESSENGER = "0"

This option will notify blocked IP addresses when they have been blocked by the firewall.

Port Knocking

PORTKNOCKING = ""

CSF supports port knocking, which is a technique that provides an additional layer of security. See http://www.portknocking.org/ for details.

Allow and Deny Lists

As we walked through the CSF configuration file, you saw that I referenced the csf.deny file, so it should come as no surprise that CSF also includes csf.allow to customize "allow" rules as well. If you are familiar with APF, these files have a very similar syntax ... Each entry is made up of the same four components: protocol|flow|port|IP. The only real difference is that APF uses the colon as a delimiter while CSF uses the pipe:

#APF Version
tcp:in:d=48000_48020:s=10.0.0.0/8
 
#CSF Version
tcp|in|d=48000_48020|s=10.0.0.0/8

Fortunately, replacing your colon with a pipe is a minimally invasive procedure that can be automated with a tool like vi.
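
For example, a copied APF rules file can be converted in a single pass with a global substitution — the ex command below works inside vi, and sed does the same thing from the shell (the filename is just an example):

:%s/:/|/g                      #Within vi, replace every colon with a pipe.
 
$ sed -i 's/:/|/g' csf.allow   #The same substitution from the command line.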

CSF Command Line Tool

The command line tool for CSF is much more robust than the one for APF:

$ csf --help
csf: v5.79 (cPanel)
 
ConfigServer Security & Firewall
(c)2006-2013, Way to the Web Limited (http://www.configserver.com)
 
Usage: /usr/sbin/csf [option] [value]
 
Option              Meaning
-h, --help          Show this message
-l, --status        List/Show iptables configuration
-l6, --status6      List/Show ip6tables configuration
-s, --start         Start firewall rules
-f, --stop          Flush/Stop firewall rules (Note: lfd may restart csf)
-r, --restart       Restart firewall rules
-q, --startq        Quick restart (csf restarted by lfd)
-sf, --startf       Force CLI restart regardless of LF_QUICKSTART setting
-a, --add ip        Allow an IP and add to /etc/csf.allow
-ar, --addrm ip     Remove an IP from /etc/csf.allow and delete rule
-d, --deny ip       Deny an IP and add to /etc/csf.deny
-dr, --denyrm ip    Unblock an IP and remove from /etc/csf.deny
-df, --denyf        Remove and unblock all entries in /etc/csf.deny
-g, --grep ip       Search the iptables rules for an IP match (incl. CIDR)
-t, --temp          Displays the current list of temp IP entries and their TTL
-tr, --temprm ip    Remove an IP from the temp IP ban and allow list
-td, --tempdeny ip ttl [-p port] [-d direction]
                    Add an IP to the temp IP ban list. ttl is how long to
                    block for (default:seconds, can use one suffix of h/m/d).
                    Optional port. Optional direction of block can be one of:
                    in, out or inout (default:in)
-ta, --tempallow ip ttl [-p port] [-d direction]
                    Add an IP to the temp IP allow list (default:inout)
-tf, --tempf        Flush all IPs from the temp IP entries
-cp, --cping        PING all members in an lfd Cluster
-cd, --cdeny ip     Deny an IP in a Cluster and add to /etc/csf.deny
-ca, --callow ip    Allow an IP in a Cluster and add to /etc/csf.allow
-cr, --crm ip       Unblock an IP in a Cluster and remove from /etc/csf.deny
-cc, --cconfig [name] [value]
                    Change configuration option [name] to [value] in a Cluster
-cf, --cfile [file] Send [file] in a Cluster to /etc/csf/
-crs, --crestart    Cluster restart csf and lfd
-w, --watch ip      Log SYN packets for an IP across iptables chains
-m, --mail [addr]   Display Server Check in HTML or email to [addr] if present
-lr, --logrun       Initiate Log Scanner report via lfd
-c, --check         Check for updates to csf but do not upgrade
-u, --update        Check for updates to csf and upgrade if available
-uf                 Force an update of csf
-x, --disable       Disable csf and lfd
-e, --enable        Enable csf and lfd if previously disabled
-v, --version       Show csf version

The command line tool will also tell you if the testing mode is enabled (which is a very useful feature). If TESTING were enabled, we'd see this line at the bottom of the output:

*WARNING* TESTING mode is enabled - do not forget to disable it in the configuration
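
To put a few of those options in context, here's what some routine operations look like from the shell (the IP address is just a placeholder):

$ csf -a 10.0.0.1   #Allow an IP and add it to /etc/csf.allow.
$ csf -d 10.0.0.1   #Deny an IP and add it to /etc/csf.deny.
$ csf -g 10.0.0.1   #Search the iptables rules for that IP.
$ csf -r            #Restart the firewall rules after editing csf.conf.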

Did you make it all the way through?! Great! I know it's a lot to take in, but it's not terribly complicated when we break it down and understand how each piece works. Next time, I'll be back with some tips on integrating CSF into cPanel.

-Mark

March 8, 2013

India: Using Global Technology to Go Hyper-Local

Bill Gates once told a journalist that everyone should care about developments in India because the world's largest democracy (of 1.2 billion people) and tenth-largest economy is quickly catching up with us. I recently had the opportunity to see those developments first-hand, and I wholeheartedly agree with Bill's sentiment. Innovation and technology breakthroughs are not owned by or limited to the United States, and as international markets mature, we're going to see more and more entrepreneurship and startup activity overseas. Now I don't mean to imply that the demise of Silicon Valley is imminent, but its influence will be greatly diminished in the future, and that's not necessarily a bad thing.

I just returned from a round-the-world trip that included nearly two weeks in India as part of a 500 Startups-sponsored market exploration tour called Geeks on a Plane. The tour stopped through Bangalore, Mumbai and New Delhi, with meetups for local entrepreneurs, startups, investors and some of the most influential companies in India's technology ecosystem. While in India, I had the chance to meet several SoftLayer customers — including Zoomin, PowerWeave, and Vidya Mantra — and their insight into the growing technology culture in the region was eye-opening.

India

One of the most interesting characteristics shared by many of the entrepreneurs I spoke with was that they were building businesses with a "hyper-local" focus: Unique business models that are specifically geared toward serving local communities while leveraging the latest technologies in mobility and e-commerce. This distinction is particularly noteworthy because they didn't assume that they'd need to succeed in the US market or compete with companies in the US to build their businesses ... And they're absolutely right. The opportunities that exist for hyper-local entrepreneurs in these emerging markets are staggering.

FlipKart is known as "The Amazon of India." It's very similar to the online shopping giant most of us know and use regularly, but with some unique regional twists. For example, because credit card and electronic payments in India are not as prevalent or reliable as they are in much of the world, orders are taken via both an online ordering system and through FlipKart call centers. Once processed, a highly developed network of "scooters" delivers about 50 percent of FlipKart's orders, and the payment is provided at the customer's door — IN CASH. While that might seem simplistic, each courier has a smartphone that allows them to become a geo-located, connected, data sharing entity. Hundreds of millions of dollars in FlipKart orders are delivered each year with very few issues, despite the fact that most of us can't even imagine how the company could operate that way in the US.

Another great example of how innovators are using technology to redefine businesses is redBus, India's largest bus ticketing company. A huge percentage of travel in India is done very inexpensively by bus, and before redBus came on the scene, travelers took their chances by buying tickets through middlemen and ticket brokers, often getting ripped off or becoming victims of double-booking. By centralizing the ticketing process, redBus is able to provide a reliable way to book a seat on any of India's vast system of buses via phone, online or in person. redBus offers the largest selection of bus seats in the country with over 350 bus operators and a flexible network of boarding points, timing and bus types. It's an incredibly simple service that meets a clear need for a hyper-local audience by leveraging the technologies being built and improved around the world.

If my two weeks in India taught me one thing, it was that startups there don't need to conquer international markets ... They can strive to serve their local communities and interests, and they'll be just as successful (if not more so). Our Catalyst program has just begun its international expansion into India, and the future certainly looks bright. In fact, I'm proud to announce that we've already signed up our first Catalyst program member in India with many more to come!

As we continue working with startup communities around the globe, I learn more and more about how the world is changing, and I get a stronger appreciation for the cultural and economic ties that bind us all together.

Stay tuned!

-@gkdog

March 8, 2013

Server Challenge II: Strata Conference 2013

If you want to find the Server Challenge II on an exhibit hall floor, just look for a crowd in one of the aisles and listen for cheers. When SoftLayer partnered with Supermicro to build a retro upgrade for our original Server Challenge, we knew the results would be phenomenal, and we haven't been disappointed. Other booths are chatting with one or two attendees while we've got the attention of 20+ as we explain what the Server Challenge II is all about and how it relates to what we do.

Strata Conference

About a dozen Strata Conference attendees asked where the Server Challenge II would show up next, and upon hearing that we'd have it at SXSW next week, one (semi-jokingly) begged us to let him rent the unit so he could practice beforehand. It almost seems like the competition is getting a cult following. And we love it.

Beyond the simple fact that the Server Challenge II gives us a chance to talk about SoftLayer's differentiators as a cloud infrastructure provider, the competition actually brings flocks of attendees to our booth at the *end* of a show, when other booths are already starting to pack up to go home. At Strata, the top four times were set in the last two hours of the show, and the very last attempt (which started right when the lights were flashing to signal the end of the show) fell less than five seconds short of taking the top spot.

In the end, Jonathan Heyne Galli bested the competition to take home bragging rights and a MacBook Air with a speedy time of 1:04.45. To showcase the winning attempt in a unique way, I grabbed my phone and fired up Vine:

If you have twelve more seconds to watch two other attempts, the Second Place and Third Place attempts were also captured with Vine.

In the midst of all of this competition, I've been blown away at the sportsmanship between competitors. I know how cheesy that sounds given the fact that we're talking about a game with a server rack in an expo hall, but it's true. Carson, the third place finisher, actually beat Jonathan's 1:04.45 toward the end of the show, but one of the drive tray arms wasn't clipped closed when he stopped the timer. We explained that we couldn't give him the top spot but that we could wipe that score and give him one more chance to replicate the result (with no errors), and he was quick to agree. He wouldn't want someone else to win with an "incomplete" build if he were in first place, so he didn't want to win that way.

Here was the final leader board from Strata 2013:

Strata Leader Board

Given the floods of traffic to our booth wherever the Server Challenge II turns up, it's only a matter of time until someone makes a documentary on the Server Challenge like The King of Kong: A Fistful of Quarters. I can see it now ... The Server Sultan: Get in Line to Bring Servers Online.

-@khazard

March 7, 2013

Script Clip: HTML5 Audio Player with jQuery Controls

HTML5 and jQuery provide mind-blowing functionality. Projects that would have taken hours of development and hundreds of lines of code a few years ago can now be completed in about the time it'll take you to read this paragraph. If you wanted to add your own audio player on a web page in the past, what would it have involved? Complicated <object> and <embed> elements? Flash (*shudders*)? It was so complicated that most developers just linked to the audio file, and the user downloaded the file to play it locally. With HTML5, an embedded, cross-browser audio player can be added to a page with five lines of code, and if you want to get really fancy, you can easily use jQuery to add some custom controls.

If you've read any of my previous blogs, you know that I love when I find little code snippets that make life as a web developer easier. My go-to tools in that pursuit are HTML5 and jQuery, so when I came across this audio player, I knew I had to share. There are some great jQuery plugins to play music files on a web page, but they can be major overkill for a simple application if you have to include comprehensive controls and themes. Sometimes you just want something simple without all of that overhead:

Oooh... Ahhh...

That song — Pop Bounce by SoftLayer's very own Chris Interrante — is written in five simple lines of HTML5 code:

<audio style="width:550px; margin: 0 auto; display:block;" controls>
  <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.ogg" type="audio/ogg">
  <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.mp3" type="audio/mpeg">
Your browser does not support the audio element.
</audio>

If IE 9+, Chrome 6+, Firefox 3.6+, Safari 5+ and Opera 10+ would all agree on supported file formats for the <audio> tag, the code snippet would be even smaller. I completely geek out over it every time I look at it and remember the days of yore. As you can see, the HTML5 player has some simple default controls: Play, Pause, Scan to Time, etc. As a developer, I couldn't help but look for a way to spice it up a little ... What if we want to fire an event when the user plays, pauses, stops or takes any other action with the audio file? jQuery!

Make sure your jQuery include is in the <head> of your page:

<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.2/jquery.min.js"></script>

Now let's use jQuery to script separate "Play" and "Pause" links ... And let's have those links fire off an alert when they are pressed:

$(document).ready(function(){
  $("#play-button").click(function(){
   $("#audioplayer")[0].play();
   alert('You have played the audio file!');
  })    
 
  $("#pause-button").click(function(){
   $("#audioplayer")[0].pause();
   alert('You have paused the audio file!');
  })    
})

With that script in the <head> as well, the HTML on our page will look like this:

<div class="audioplayer">
  <audio id="audioplayer" name="audioplayer" controls loop>
    <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.ogg" type="audio/ogg">
    <source src="http://cdn.softlayer.com/innerlayer/Interrante-PopBounce.mp3" type="audio/mpeg">
  Your browser does not support the audio element.
  </audio>
 
  <a id="play-button" href="#">Play!</a>
  <a id="pause-button" href="#">Pause!</a>
</div>

Want proof that it works that simply? Boom.

You can theme it any way you like; you can add icons instead of the text ... The world is your oyster. The bonus is that you're using one of the lightest media players on the Internet! If you decide to get brave (or just more awesome), you can explore additional features. You're using jQuery, so your possibilities are nearly limitless. If you want to implement a "Stop" feature (which returns the audio back to the beginning when "Stop" is pressed), you can get creative:

$("#stop-button").click(function(){
    $("#audioplayer")[0].currentTime = 0; // return the audio file back to the beginning
}

If you want to include some volume controls, those can be added in a snap as well:

$("#volumeUp").click(function(){
    $("#audioplayer")[0].volume +=0.1;
}
 
$("#volumeDown").click(function(){
    $("#audioplayer")[0].volume -=0.1;
}

Try it out and let me know what you think. Your homework is to come up with some unique audio player functionality and share it here!

-Cassandra

March 5, 2013

Startup Series: Kickback Tickets

The very first client I recruited to Catalyst when I joined the CommDev team about a year ago happens to be one of Catalyst's most interesting customer success stories ... and I'm not just saying that because it was the first partner I signed on. Kickback Tickets — an online ticketing platform that utilizes crowdfunding — has simplified the process of creating and funding amazing events, and as a result, they've made life a lot easier for the startup, developer and networking organizations that fuel Catalyst.

Anyone who's organized events knows that it often involves a financial risk because it's hard to know whether attendance will be high enough to cover the costs of putting on the event. With Kickback Tickets, an event is listed and funded ahead of time, and when it reaches its "Tipping Point" goal of tickets ordered, it's completely funded, the early supporters are charged, and ticket sales continue.

The process is simple:

Kickback Tickets

Event updates, guest registrations and QR-coded tickets are provided to attendees to make check-in seamless, so the hosts of each event don't have to hassle with those details. Kickback's revenue comes from a small fee on each ticket for each successfully funded event, and they've got a ton of momentum. After signing on with Catalyst in March 2012, Kickback went live with an open beta in November 2012, and they launched their out-of-beta site in February 2013. They've successfully funded more than 20 events, and new events are added daily.

Kickback Tickets

When I met the Kickback founders Jonathan Perkins and Julian Balderas, I was attending SF Beta (my first official event as a SLayer). At the time, Jonathan and Julian were a couple of bankers with an innovative idea to help organizations alleviate the financial risk of planning and putting on events by enlisting community support. I told them about my experience as the COO of a small non-profit startup called Slavery Footprint (also a Catalyst partner), and I guess they could relate to the challenges SoftLayer helped us overcome because they were excited to join.

In their own words, Jonathan and Julian explain that their partnership with Softlayer and the Catalyst program has been extremely valuable:

SoftLayer provides a rock-solid technical foundation and allows us to focus more resources on business development. On the technical side, what SoftLayer offers is impressive — super fast speeds and an intricate level of control over the hardware. On the personal side, the mentorship and networking benefits of the program have been very helpful. We've always found the Catalyst team to be available to chat about any questions we had, ranging from development to biz dev to fundraising.

As they continue to expand their platform, it's going to be exciting to watch Kickback become a true force in the events space. Organize your next event with Kickback and make sure it's a success.

Oh, and if you want to speak to Jonathan and Julian, just reach out to me and I'll happily make the introduction.

-@JoshuaKrammes

February 27, 2013

The Three Most Common Hosting-Related Phobias

As a member of the illustrious SoftLayer sales (SLales) team, I have the daily pleasure of talking with any number of potential, prospective, new and current customers, and in many of those conversations, I've picked up on a fairly common theme: FEAR. Now we're not talking about lachanophobia (fear of vegetables) or nomophobia (fear of losing cell phone contact) here ... We're talking about fear that paralyzes users and holds them captive — effectively preventing their growth and limiting their business's potential. Fear is a disease.

I've created my own little naming convention for the top three most common phobias I hear from users as they consider making changes to their hosting environments:

1. Pessimisobia
This phobia is best summarized by the saying, "Better the devil you know than the devil you don't." Users with this phobia could suffer from frequent downtime, a lack of responsive support and long-term commitment contracts, but their service is a known quantity. What if a different provider is even worse? If you don't suffer from pessimisobia, this phobia probably seems silly, but it's very evident in many of the conversations I have.

2. Whizkiditus
This affliction is particularly prevalent in established companies. Symptoms of this phobia include recurring discomfort associated with the thought of learning a new management system or deviating from a platform where users have become experts. There's an efficiency to being comfortable with how a particular platform works, but the ceiling to that efficiency is the platform itself. Users with whizkiditus might not admit it, but the biggest reason they shy away from change is that they are afraid of losing the familiarity they've built with their old systems over the years ... even if that means staying on a platform that prohibits scale and growth.

3. Everythingluenza
In order to illustrate this phobia — the inability to compartmentalize projects and phase in changes — let's look at a little scenario:

I host all of my applications at Company 1. I want to move Application A to the more-qualified Company 2, but if I do that, I'll have to move Applications B through Z to Company 2 also. All of that work would be too time-consuming and cumbersome, so I won't change anything.

It's easy to get overwhelmed when considering a change of cloud hosting for any piece of your business, and it's even more intimidating when you feel like it has to be an "all or nothing" decision.

Unless you are afflicted with euphobia (the fear of hearing good news), you'll be happy to hear that these common fears, once properly diagnosed, are quickly and easily curable on the SoftLayer platform. There are no known side effects from treatment, and patients experience immediate symptom relief with a full recovery within one to three months.

This might be a lighthearted look at some quirky fears, but I don't want to downplay how significant these phobias are to the developers and entrepreneurs that suffer from them. If any of these fears strike a chord with you, reach out to the SLales team (by phone, chat or email), and we'll help you create a treatment plan. Once you address and conquer these fears, you can devote all of your energy back to getting over your selenophobia (fear of the moon).

-Arielle
