Posts Tagged 'Tips And Tricks 2'

November 14, 2013

Enhancing Usability by Building User Confidence

Consider your experiences with web applications, and see if this scenario seems familiar: Your electricity bill has some incorrect charges on it. Fearing that you will have to spend 40 minutes on hold if you call in, you find that the electric company website has a support center where you can submit billing issues and questions; you are saved! You carefully fill out the form with your sixteen-digit account number and detailed description of the incorrect charges. You read it over and click the submit button. Your page goes blank for a couple of seconds, the form comes back with a note saying you typed in your phone number incorrectly, and the detailed description you spent eleven minutes meticulously writing is gone.

Web applications have gotten much better at preventing these kinds of user experiences over the past few years, and I'm sure that none of your applications have this problem (if they do, fix it right now!), but "usability" is more than just handling errors gracefully. Having a seamless process is only half the battle when it comes to giving your users a great experience with your application. The other half of the battle is much more subjective: Your users need to feel confident in their success every step of the way. By keeping a few general guidelines in mind, you can instill confidence in your users so that they feel positive about your application from start to finish, whatever they are trying to accomplish.

1. Keep the user in a familiar context.

As the user in our electric company support example, let's assume the process works and does not lose any of my information. I still have to have faith that the application is going to do what I expect it to do when the page refreshes. Faith and unfamiliar technology do not exactly go hand in hand. Instead of having the form submit with a page refresh, the site's developers could introduce a progress wheel or another kind of indicator that shows the data is being submitted while the content is still visible. If the detailed content never goes away during the submission process, I'm confident that I still have access to my information.

Another example of the same principle is the use of modal windows. Modal windows are presented on top of a previous page, so users have a clear way of going back if they get confused or decide they navigated to the wrong place. By providing this new content on top of a familiar page, users are much less likely to feel disoriented if they get stuck or lost, and they will feel more confident when they're using the application.

2. Reassure the user with immediate feedback.

When you communicate frequently and clearly, users are reassured, and they are much less likely to become anxious. Users want to see their actions get a response from your application. In our electric company support example, imagine how much better the experience would be if a small blurb were displayed in red next to the phone number text box the moment I typed in my phone number in the wrong format. The immediate feedback would pinpoint the problem when it is easy to correct, and it would make me confident that when the phone number is updated, the application will continue to work as expected.

3. Provide warnings or extra information for dangerous or complicated operations.

When users are new to an application, they are not always sure which actions will have negative consequences. This is another great opportunity for communication. Providing notices or alerts for important or risky operations can offer a good dose of hesitation to new users who aren't prepared. Effective warnings or notices tell users why they might want to perform an action and what its negative consequences could be, so they can make an informed decision. Users are confident when they make informed decisions because a lack of information causes anxiety.

I learned how to implement this tip when I designed a wizard system for a previous employer that standardized how the company's application would walk users through any step-by-step process. My team decided early on to standardize on a review step at the end of every wizard. This was an extra step that every user had to go through for every wizard in the application, but it made all of the related processes much more usable and communicative. This extra information gave users a chance to see the totality of the operation they were performing, and it gave them a chance to correct any mistakes. Implementing this tip resulted in users who were fully informed and confident throughout even very complicated operations.

4. Do not assume your users know your terminology, and don't expect them to learn it.

Every organization has its own language. I have never encountered an exception to this rule. It cannot be helped! Inside your organization, you come up with a defined vocabulary for referencing the topics you have to work with every day, but your users won't necessarily understand the terminology you use internally. Some of your ardent users pick up on your language through osmosis, but the vast majority of users just get confused when they encounter terms they are not familiar with.

When interacting with users, refrain from using any of your internal language, and strictly adhere to a universally-accepted vocabulary. In many cases, you need shorthand to describe complex concepts that users will already understand. In this situation, always use universal or industry-wide vocabulary if it is available.

This practice can be challenging and will often require extra work. Let's say you have a page in your application dealing with "display devices," which could either be TVs or monitors. All of your employees talk about display devices because to your organization, they are essentially the same thing. The technology of your application handles all display devices in exactly the same way, so as good software designers you have this abstracted (or condensed for non-technical people) so that you have the least amount of code possible. The easiest route is to just have a page that talks about display devices. The challenge with that approach is that your users understand what monitors and TVs are, but they don't necessarily think of those as display devices.

If that's the case, you should use the words "monitors" and "TVs" when you're talking about display devices externally. This can be difficult, and it requires a lot of discipline, but when you provide familiar terminology, users won't be disoriented by basic terms. To make users more comfortable, speak to them in their language. Don't expect them to learn yours, because most of them won't.

When you look at usability through the subjective lens of user confidence, you'll find opportunities to enhance your user experience ... even when you aren't necessarily fixing anything that's broken. While it's difficult to quantify, confidence is at the heart of what makes people like or dislike any product or tool. Pay careful attention to the level of confidence your users have throughout your application, and your application can reach new heights.

-Tony

November 11, 2013

Sysadmin Tips and Tricks - Using the ‘for’ Loop in Bash

Ever have a bunch of files to rename or a large set of files to move to different directories? Ever find yourself copy/pasting nearly identical commands a few hundred times to get a job done? A system administrator's life is full of tedious tasks that can be eliminated or simplified with the proper tools. That's right ... Those tedious tasks don't have to be executed manually! I'd like to introduce you to one of the simplest tools to automate time-consuming repetitive processes in Bash — the for loop.

Whether you have been programming for a few weeks or a few decades, you should be able to quickly pick up on how the for loop works and what it can do for you. To get started, let's take a look at a few simple examples of what the for loop looks like. For these exercises, it's always best to use a temporary directory while you're learning and practicing for loops. The command is very powerful, and we wouldn't want you to damage your system while you're still learning.

Here is our temporary directory:

rasto@lmlatham:~/temp$ ls -la
total 8
drwxr-xr-x 2 rasto rasto 4096 Oct 23 15:54 .
drwxr-xr-x 34 rasto rasto 4096 Oct 23 16:00 ..
rasto@lmlatham:~/temp$

We want to fill the directory with files, so let's use the for loop:

rasto@lmlatham:~/temp$ for cats_are_cool in {a..z}; do touch $cats_are_cool; done;
rasto@lmlatham:~/temp$

Note: This should be typed all in one line.
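
If you put the loop in a shell script instead of typing it at the prompt, you can also spread it across multiple lines for readability. This sketch is equivalent to the one-liner above:

for cats_are_cool in {a..z}
do
    touch "$cats_are_cool"   # create an empty file named after the current letter
done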

Here's the result:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z
rasto@lmlatham:~/temp$

How did that simple command populate the directory with all of the letters in the alphabet? Let's break it down.

for cats_are_cool in {a..z}

The for is the command we are running, which is built into the Bash shell. cats_are_cool is a variable we are declaring. The specific name of the variable can be whatever you want it to be. Traditionally, people often use f, but the variable we're using is a little more fun. Hereafter, our variable will be referred to as $cats_are_cool (or $f if you used the more boring "f" variable). Aside: You may be familiar with this pattern from environment variables, which are declared without the $ sign and then invoked with it.

When our command is executed, the variable we declared will assume each of the values in {a..z}, from a to z. Next, we use the semicolon to indicate that we are done with the first phase of our for loop. The next part starts with do, which says: for each of a–z, do <some thing>. In this case, we are creating files by touching them via touch $cats_are_cool. The first time through the loop, the command creates a, the second time through b, and so forth. We complete that command with a semicolon, and then we declare that we are finished with the loop with done.
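
Stripped of the specifics, every for loop follows the same three-part pattern: a variable, a list of values for it to take, and a command (or commands) to run for each value. Here's a minimal, runnable skeleton with placeholder names:

for item in first second third; do echo "processing $item"; done

That prints "processing first", "processing second" and "processing third", one per pass through the loop.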

This might be a great time to experiment with the command above, making small changes, if you wish. Let's do a little more. I just realized that I made a mistake. I meant to give the files a .txt extension. This is how we'd make that happen:

for dogs_are_ok_too in {a..z}; do mv $dogs_are_ok_too $dogs_are_ok_too.txt; done;
Note: It would be perfectly okay to re-use $cats_are_cool here. The variables are not persistent between executions.

As you can see, I updated the command so that a would be renamed a.txt, b would be renamed b.txt and so forth. Why would I want to do that manually, 26 times? If we check our directory, we see that everything was completed in that single command:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z.txt
rasto@lmlatham:~/temp$

Now we have files, but we don't want them to be empty. Let's put some text in them:

for f in `ls`; do cat /etc/passwd > $f; done

Note the backticks around ls. In Bash, backticks mean, "execute this and return the results," so it's like you executed ls and fed the results to the for loop! Next, the output of cat /etc/passwd is redirected into each $f: a.txt, b.txt, etc. Still with me?
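
As an aside, modern shells also support $( ) for command substitution, and for a simple case like this you can skip ls entirely and let the shell expand a glob. Either of these sketches does the same job (the quoted glob version is the safer habit if filenames ever contain spaces):

for f in $(ls); do cat /etc/passwd > $f; done    # $( ) instead of backticks
for f in *.txt; do cat /etc/passwd > "$f"; done  # glob version; no ls needed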

So now I've got a bunch of files with copies of /etc/passwd in them. What if I never wanted files for a, g, or h? First, I'd get a list of just the files I want to get rid of:

rasto@lmlatham:~/temp$ ls | egrep 'a|g|h'
a.txt
g.txt
h.txt

Then I could plug that command into the for loop (using backticks again) and do the removal of those files:

for f in `ls | egrep 'a|g|h'`; do rm $f; done

I know these examples don't seem very complex, but they give you a great first look at the kind of functionality made possible by the for loop in Bash. Give it a whirl. Once you start smartly incorporating it in your day-to-day operations, you'll save yourself massive amounts of time ... Especially when you come across thousands or tens of thousands of very similar tasks.
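
To give you a taste of a day-to-day use, here's a sketch of a routine cleanup task; the application name and log directory are hypothetical:

for f in /var/log/myapp/*.log; do gzip "$f"; done   # compress every log file in one pass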

Don't do work a computer should do!

-Lee

October 16, 2013

Tips and Tricks: Troubleshooting Email Issues

Working in support, one of the most common issues we troubleshoot is a customer's ability to receive email. Depending on the email server, this can be a headache and a half to figure out, but more often than not, we're able to fix the problem with one of only a few simple solutions. Because the SoftLayer Blog audience loves technical tips and tricks, I thought I'd share a few easy steps that make pinpointing the root cause of email issues much easier.

Before you gear up to go into battle, check that the server is not out of disk space on /var and that it is not in a read-only state. That precursory step may seem silly, but Occam's razor often holds true in technical troubleshooting. Once you verify that those two common problems aren't causing your email problems, the next step is to determine whether the email issues are server-wide or isolated to one mail account/domain. To do that, the first thing you need to do is make sure that the IMAP and POP services are responding.

Check IMAP and POP Services

The universal approach to checking IMAP and POP services is to use telnet:

telnet <serverip> 110
telnet <serverip> 143

If either of those commands fails, you're able to pinpoint which service to check on your server.

For most variants of Linux, you can check both services with a single command: netstat -plan|egrep -i "110|143". The resulting output will show if the services are listening and which process is doing the listening. In Windows, you can run a similar command from a command prompt: netstat -anb|find "LISTEN"| findstr "110 143".
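
On newer Linux systems, the ss utility can stand in for netstat. Assuming the same POP3 and IMAP ports, a roughly equivalent check would be:

ss -plnt | egrep ':110|:143'   # list listening TCP sockets on ports 110 and 143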

If the ports are listening, and you're able to connect to them over telnet, your next stop should be your server's error logs.

Check Error Logs

You want to look for any mail errors that might clue you into the root cause of your email issues. In Linux, you can check /var/log/maillog, and in Windows, you can filter eventvwr.msc for mail only. If there are errors, a simple search will highlight them quickly.
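
For example, on a Linux server a quick search like this surfaces the most recent error lines (adjust the log path if your mail server logs elsewhere):

egrep -i 'error|fail' /var/log/maillog | tail -n 20   # show the last 20 matching lines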

If there are no errors, it's time to dig into the mail queue directly.

Check the Mail Queue

Depending on the mail server you use, the commands here are going to vary. Here are a few examples of how we'd investigate the most common mail servers we encounter:

QMail

Display the mail queue: /var/qmail/bin/qmail-qread
Display the number of messages in the queue: /var/qmail/bin/qmail-qstat
Reference article: Gaining Control Over the QMail Queue

Sendmail

Display the mail queue: sendmail -bp or mailq
Display the number of messages in the queue: mailq -OmaxQueueRunSize=1
Reference article: Quick Sendmail Cheatsheet

Exim

Display the mail queue: exim -bp
Display the number of messages in the queue: exim -bpc
Reference article: Exim cheatsheet

MailEnable

MailEnable users can check to see that messages are moving by opening the mail directory:
Program Files\MailEnable\Queues\SMTP\Inbound\Messages
Reference article: How to diagnose inbound message delivery delays

With these commands, you can filter through the email queues to see whether any of them are for the users or domains you're having problems with. If nothing obvious presents itself at that point, it's time for some active testing.
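
For example, on an Exim server you could combine the queue listing with grep to check for a specific address or domain (the address below is a placeholder):

exim -bp | grep -i 'user@example.com'   # list queue entries involving one address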

Active Testing

Send an email to your mailserver from an external mailserver (anything will do as long as it's not on the same server). Watch for logging of the email as it's delivered:
tail -f /var/log/maillog
On busy mailservers you might add |grep youremailid or simply look for a new message in the directory where the email will be stored.
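
Putting that together on a busy Linux mail server, the whole active test might look something like this, where the address is a placeholder for your test mailbox:

tail -f /var/log/maillog | grep -i 'testuser@example.com'   # watch only the log lines for your test message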

Your primary goal in troubleshooting your email issues in this way is to isolate the root cause of your problem so that you can fix it more quickly. SoftLayer customers have direct access to our support team to help you through this process, but it's always nice to keep a quick reference like this in your back pocket to be able to pinpoint the problem yourself.

-Bill

September 20, 2013

Building a Mobile App with jQuery Mobile: The Foundation

Based on conversations I've had in the past, at least half of the web developers I've met have admitted to cracking open an Objective-C book at some point in their careers with high hopes of learning mobile development ... After all, who wouldn't want to create "the next big thing" for a market growing so phenomenally every year? I count myself among that majority: I've been steadily learning Objective-C over the past year, dedicating a bit of time every day, and I feel like I still lack the skill set required to create an original, complex application. Wouldn't it be great if we web developers could finally get our shot in the App Store without having to unlearn and relearn the particulars of coding a mobile application?

Luckily for us, we can!

The rock stars over at jQuery have created a framework called jQuery Mobile that allows developers to create cross-platform, responsive applications on an HTML5-based jQuery foundation. The framework supports both touch and mouse events, so you're able to publish across multiple platforms, including iOS, Android, Blackberry, Kindle, Nook and on and on and on. If you're able to create web applications with jQuery, you can now create an awesome cross-platform app. All you have to do is create an app as if it were a dynamic HTML5 web page, and jQuery takes care of the rest.

Let's go through a real-world example to show this functionality in action. The first thing we need to do is fill in the <head> content with all of our necessary jQuery libraries:

<!DOCTYPE html>
<html>
<head>
    <title>SoftLayer Hello World!</title>
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.css" />
    <script src="http://code.jquery.com/jquery-1.9.1.min.js"></script>
    <script src="http://code.jquery.com/mobile/1.3.2/jquery.mobile-1.3.2.min.js"></script>
</head>

Now let's create a framework for our simplistic app in the <body> section of our page:

<body>
    <div data-role="page">
        <div data-role="header">
            <h1>My App!</h1>
        </div>
 
        <div data-role="content">
            <p>This is my application! Pretty cool, huh?</p>
        </div>
 
        <div data-role="footer">
            <h1>Bottom Footer</h1>
        </div>
 
    </div>
</body>
</html>

Even novice web developers should recognize the structure above. You have a header, content and a footer just as you would in a regular web page, but we're letting jQuery apply some "native-like" styling to those sections with the data-role attributes. This is what our simple app looks like so far:

[Screenshot: jQuery Mobile App Screenshot #1]

While it's not very fancy (yet), you see that the style is well suited to the iPhone I'm using to show it off. Let's spice it up a bit and add a navigation bar. Since we want the navigation to be a part of the header section of our app, let's add an unordered list there:

<div data-role="header">
    <h1>My App!</h1>
        <div data-role="navbar">
            <ul>
                <li><a href="#home" class="ui-btn-active" data-icon="home" data-theme="b">Home</a></li>
                <li><a href="#softlayer_cool_news" data-icon="grid" data-theme="b">SL Cool News!</a></li>
                <li><a href="#softlayer_cool_stuff" data-icon="star" data-theme="b">SL Cool Stuff!</a></li>
            </ul>
        </div>
    </div>

You'll notice again that it's not much different from regular HTML. We've created a navbar div with an unordered list of menu items we'd like to add to the header: Home, SL Cool News and SL Cool Stuff. Notice in the anchor tag of each that there's an attribute called data-icon which defines which graphical icon we want to represent the navigation item. Let's have a peek at what it looks like now:

[Screenshot: jQuery Mobile App Screenshot #2]

Our app isn't doing a whole lot yet, but you can see from our screenshot that the pieces are starting to come together nicely. Because we're developing our mobile app as an HTML5 app first, we're able to make quick changes and see those changes in real time from our phone's browser. Once we get the functionality we want into our app, we can use a tool such as PhoneGap or Cordova to package it into a ready-to-use standalone iPhone app (provided you're enrolled in the Apple Development Program, of course), or we can leave the app as-is for a very nifty mobile browser application.
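
As a rough sketch of that packaging step, assuming you have Node.js and the Cordova command-line tool available (the project and bundle names here are placeholders), the workflow looks something like this:

npm install -g cordova                      # install the Cordova CLI
cordova create MyApp com.example.myapp MyApp
cd MyApp                                    # drop your HTML/CSS/JS into the www/ directory
cordova platform add ios
cordova build ios                           # produces an Xcode project you can sign and submit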

In my next few blogs, I plan to expand on this topic by showing you some of the amazingly easy (and impressive) functionality available in jQuery Mobile. In the meantime, go grab a copy of jQuery Mobile and start playing around with it!

-Cassandra

May 15, 2013

Secure Quorum: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we’re happy to welcome Gerard Ibarra from Secure Quorum. Secure Quorum is an easy-to-use emergency notification system and crisis management system that resides in the cloud.

Are You Prepared for an Emergency?

Every company's management team faces the challenge of having too many things going on and not enough time in the day. It's difficult to get everything done, so when push comes to shove, particular projects and issues need to be prioritized. What do we have to do today that can't be put off to tomorrow? Often, businesses fall into a reactionary rut where they are constantly "putting out the fires" first, and while it's vital for a business to put out those fires (literal or metaphorical), that approach makes it difficult to proactively prepare for those kinds of issues and streamline the process of resolving them. Secure Quorum was created to provide a simple, secure medium for dealing with emergencies and incidents.

What we noticed was that businesses didn't often consider planning for emergencies as part of their operations. The emergencies I'm talking about thankfully don't happen often, but fires, accidents, power outages, workplace violence and denial of service attacks can severely impact the bottom line if they aren't addressed quickly ... They can make or break you. Are you prepared?

Every second that we fail to make informed and logical decisions during an emergency is time lost in taking action. Take these facts for a little perspective:

  • "Property destruction and business disruption due to disasters now rival warfare in terms of loss." (University Corporation for Atmospheric Research)
  • More than 10,000 severe thunderstorms, 2,500 floods, 1,000 tornadoes and 10 hurricanes affect the United States each year. On average, 500 people die yearly because of severe weather and floods. (National Weather News 2005)
  • The cost of natural disasters is rising. During the past two decades, natural disaster damage costs have exceeded the $500 billion mark. Only 17 percent of that figure was covered by insurance. (Dennis S. Mileti, Disasters by Design)
  • Losses as a result of global disasters continue to increase on average every year, with an estimated $360 billion USD lost in 2011. (Centre for Research in the Epidemiology of Disasters)
  • Natural disasters, power outages, IT failures and human error are common causes of disruptions to internal and external communications. They "can cause downtime and have a significant negative impact on employee productivity, customer retention, and the confidence of vendors, partners, and customers." (Debra Chin, Palmer Research, May 2011)

These kinds of "emergencies" are not going away, but because specific emergencies are difficult (if not impossible) to predict, it's not obvious how to deal with them. How do we reduce risk for our employees, vendors, customers and our business? The two best answers to that question are to have a business continuity plan (BCP) and to have a way to communicate and collaborate in the midst of an emergency.

Start with a BCP. A BCP is a strategic plan to help identify and mitigate risk. Investopedia gives a great explanation:

The creation of a strategy through the recognition of threats and risks facing a company, with an eye to ensure that personnel and assets are protected and able to function in the event of a disaster. Business continuity planning (BCP) involves defining potential risks, determining how those risks will affect operations, implementing safeguards and procedures designed to mitigate those risks, testing those procedures to ensure that they work, and periodically reviewing the process to make sure that it is up to date.

Make sure you understand the basics of a BCP, and look for cues from organizations like FEMA for examples of how to approach emergency situations: http://www.ready.gov/business-continuity-planning-suite.

Once you have a basic BCP in place, it's important to be able to execute it when necessary ... That's where an emergency communication and collaboration solution comes into play. You need to streamline how you communicate when an emergency occurs, and if you're relying on a manual process like a phone tree to spread the word and contact key stakeholders in the midst of an incident, you're wasting time that could better be spent focusing on the issue at hand. An emergency communication solution automates that process quickly and logically.

When you create a BCP, you consider which people in your organization are key to responding to specific types of emergencies, and if anything ever happens, you want to get all of those people together. An emergency communication system will collect the relevant information, send it to the relevant people in your organization and seamlessly bridge them into a secured conference call. What would take minutes to complete now takes seconds, and when it comes to responding to these kinds of issues, seconds count. With everyone on a secure call, decisions can be made quickly and recorded to inform employees and stakeholders of what occurred and what the next steps are.

Plan for emergencies and hope that you never have to use that plan. Think about preparing for emergencies strategically, and it could make all the difference in the world. Secure Quorum is a platform that makes it easy to communicate and collaborate quickly, reliably and securely in those high-stress situations, so if you're interested in getting help when it comes to responding to emergencies and incidents, visit our site at SecureQuorum.com and check out the whitepaper we just published with one of our customers: Ease of Use: Make it Part of Your Software Decision.

-Gerard Ibarra, CEO of Secure Quorum

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

May 7, 2013

Tips from the Abuse Department: DMCA Takedown Notices

If you are in the web hosting business or you provide users with access to store content on your servers, chances are that you're familiar with the Digital Millennium Copyright Act (DMCA). If you aren't familiar with it, you certainly should be. All it takes is one client plagiarizing an article or using a filesharing program unscrupulously, and you could find yourself the recipient of a scary DMCA notice from a copyright holder. We've talked before about how to file a DMCA complaint with SoftLayer, but we haven't talked in detail about SoftLayer's role in processing DMCA complaints or what you should do if you find yourself on the receiving end of a copyright infringement notification.

The most important thing to understand when it comes to the way the abuse team handles DMCA complaints is that our procedures aren't just SoftLayer policy — they are the law. Our role in processing copyright complaints is essentially that of a middleman. In order to protect our Safe Harbor status under the Online Copyright Infringement Liability Limitation Act (OCILLA), we must enforce any complaint that meets the legal requirements of a takedown notice. That DMCA complaint must contain specific elements and be properly formatted in order to be considered valid.

Responding to a DMCA Complaint

When we receive a complaint that meets the legal requirements of a DMCA takedown notice, we must relay the complaint to our direct customer and enforce a deadline for removal of the violating material. We are obligated to remove access to infringing content when we are notified about it, and we aren't able to make a determination about the validity of a claim beyond confirming that all DMCA requirements are met.

The law states that SoftLayer must act expeditiously, so if you receive notification of a DMCA complaint, it's important that you acknowledge the ticket that the abuse department opened on your account and let us know your intended course of action. Sometimes that action is as simple as removing an infringing URL. Sometimes you may need to contact your client and instruct them to take the material down. Whatever the case may be, it's important to be responsive and to expressly confirm when you have complied and removed the material. Failure to acknowledge an abuse ticket can result in disconnection of service, and in the case of copyright infringement, SoftLayer has a legal obligation to remove access to the material or we face serious liability.

DMCA Counter Notifications

Most DMCA complaints are resolved without issue, but what happens if you disagree with the complaint? What if you own the material and a disgruntled former business partner is trying to get revenge? What if you wrote the content and the complaining party is copying your website? Thankfully there are penalties for filing a false DMCA complaint, but you also have recourse in the form of a counter notification. Keep in mind that while it may be tempting to plead your case to the abuse department, our role is not to play judge or jury but to allow the process to work as it was designed.

In some cases, you may be able to work out a resolution with the complaining party directly (misunderstandings happen, licenses lapse, etc.) and have them send a retraction, but most of the time your best course of action is to submit a counter notification.

Just as a takedown notice must be crafted in a specific way, counter notifications have their own set of requirements. Once you have disabled the material identified in the original complaint, we can provide your valid, properly formatted counter notification to the complaining party. Unless we receive a court order from the complaining party within the legally mandated time frame, the material can be re-enabled, and the case is closed for the time being.

While it might sound complicated, it's actually pretty straightforward, but we urge you to do your research and make sure you know what to do in the event a client of yours is hit with a DMCA takedown notice. Just as we are unable to make judgment calls when it comes to takedown notices or counter notifications, we are also unable to offer any legal advice for you if you need help. Hopefully this post cleared up a few questions and misconceptions about how the abuse department handles copyright complaints. In short:

  • Do take DMCA notifications seriously. You are at risk for service interruption and possible legal liability.
  • Do respond to the abuse department, letting them know the material has been disabled and, if applicable, whether you plan to file a counter notification.
  • Don't refuse to disable the material. Even if you believe the claim is false and you wish to file a counter notification, the material must be disabled within the time period allotted by the abuse department, or we have to block access to it.
  • Don't expect the abuse department to take sides.

As with any abuse issue, communication and responsiveness are important. Disconnecting your server is a last resort, but we have ethical and legal obligations to uphold. The DMCA process certainly has its weaknesses and leaves a bit to be desired, but at the end of the day, it's the law, and we have to operate inside of our legal obligation to it.

-Jennifer

April 16, 2013

iptables Tips and Tricks - Track Bandwidth with iptables

As I mentioned in my last post about CSF configuration in iptables, I'm working on a follow-up post about integrating CSF into cPanel, but I thought I'd inject a simple iptables use case for bandwidth tracking. You probably think about iptables in terms of firewalls and security, but it also includes a great diagnostic tool for counting bandwidth for individual rules or sets of rules. If you can block it, you can track it!

The best part about using iptables to track bandwidth is that the tracking is enabled by default. To see this feature in action, add the "-v" flag to the command:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 2495 packets, 104K bytes)

The output includes counters for both the policies and the rules. To track the rules, you can create a new chain for tracking bandwidth:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -vnL
...
Chain tracking (0 references)
 pkts bytes target prot opt in out source           destination

Then you need to set up new rules to match the traffic that you wish to track. In this scenario, let's look at inbound http traffic on port 80:

[root@server ~]$ iptables -I INPUT -p tcp --dport 80 -j tracking
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35111 packets, 1490K bytes)
 pkts bytes target prot opt in out source           destination
    0   0 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

Now let's generate some traffic and check it again:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35216 packets, 1500K bytes)
 pkts bytes target prot opt in out source           destination
  101  9013 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

You can see the packet and byte transfer amounts for the tracked INPUT traffic — traffic to a destination port on your server. If you want to track the amount of data that the server is generating, you'd look for OUTPUT from the source port on your server:

[root@server ~]$ iptables -I OUTPUT -p tcp --sport 80 -j tracking
[root@server ~]$ iptables -vnL
...
Chain OUTPUT (policy ACCEPT 26149 packets, 174M bytes)
 pkts bytes target prot opt in out source           destination
  488 3367K tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80

Now that we know how the tracking chain works, we can add in a few different layers to get even more information. That way you can keep your INPUT and OUTPUT chains looking clean.

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -N tracking2
[root@server ~]$ iptables -I INPUT -j tracking
[root@server ~]$ iptables -I OUTPUT -j tracking
[root@server ~]$ iptables -A tracking -p tcp --dport 80 -j tracking2
[root@server ~]$ iptables -A tracking -p tcp --sport 80 -j tracking2
[root@server ~]$ iptables -vnL
 
Chain INPUT (policy ACCEPT 96265 packets, 4131K bytes)
 pkts bytes target prot opt in out source           destination
 4002  184K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source           destination
 
Chain OUTPUT (policy ACCEPT 33751 packets, 231M bytes)
 pkts bytes target prot opt in out source           destination
 1399 9068K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain tracking (2 references)
 pkts bytes target prot opt in out source           destination
 1208 59626 tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80
  224 1643K tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80
 
Chain tracking2 (2 references)
 pkts bytes target prot opt in out source           destination

Keep in mind that every time a packet passes through one of your rules, it will eat CPU cycles. Diverting all your traffic through 100 rules that track bandwidth may not be the best idea, so it's important to have an efficient ruleset. If your server has eight processor cores and tons of overhead available, that concern might be inconsequential, but if you're running lean, you could conceivably run into issues.

The easiest way to think about making efficient rulesets is to think about eating the largest slice of pie first. Understand iptables rule processing and put the rules that get more traffic higher in your list. Conversely, save the tiniest pieces of your pie for last. If you run all of your traffic by a rule that only applies to a tiny segment before you screen out larger segments, you're wasting processing power.

Another thing to keep in mind is that you do not need to specify a target (in our examples above, we established tracking and tracking2 as our targets). If you're used to each rule having a specific purpose of either blocking, allowing, or diverting traffic, this simple tidbit might seem revolutionary. For example, we could use this rule:

[root@server ~]$ iptables -A INPUT

If that seems a little bare to you, don't worry ... It is! The output will show that it is a rule that tracks all traffic in the chain at that point. We're appending the rule to the end of the chain in this example ("-A"), but we could also insert it ("-I") at the top of the chain instead. This command could be helpful if you are using a number of different chains and you want to see the exact volume of packets that are filtered at any given point. Additionally, this strategy could show how much traffic a potential rule would filter before you run it on your production system. Because having several of these kinds of commands can get a little messy, it's also helpful to add comments to help sort things out:

[root@server ~]$ iptables -A INPUT -m comment --comment "track all data"
 
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 11M packets, 5280M bytes)
 pkts bytes target prot opt in out source           destination
   98  9352        all  --  *  *   0.0.0.0/0        0.0.0.0/0       /* track all data */

Nothing terribly complicated about using iptables to count bandwidth, right? If you have iptables rulesets and you want to get a glimpse at how your traffic is being affected, this little trick could be useful. You can rely on the information iptables gives you about your bandwidth usage, and you won't be the only one ... cPanel actually uses iptables to track bandwidth.
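
Two related flags are worth keeping in your back pocket: "-x" prints exact packet and byte counts instead of rounded figures like "104K", and "-Z" zeroes the counters so you can measure traffic over a specific window:

[root@server ~]$ iptables -vnxL tracking   # exact (non-rounded) counters for the tracking chain
[root@server ~]$ iptables -Z tracking      # reset the counters for a fresh measurement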

-Mark

March 19, 2013

iptables Tips and Tricks: CSF Configuration

In our last "iptables Tips and Tricks" installment, we talked about Advanced Policy Firewall (APF) configuration, so it should come as no surprise that in this installment, we're turning our attention to ConfigServer Security & Firewall (CSF). Before we get started, you should probably run through the list of warnings I include at the top of the APF blog post and make sure you have your Band-Aid ready in case you need it.

To get the ball rolling, we need to download CSF and install it on our server. In this post, we're working with a CentOS 6.0 32-bit server, so our (root) terminal commands would look like this to download and install CSF:

$ wget http://www.configserver.com/free/csf.tgz #Download CSF using wget.
$ tar zxvf csf.tgz #Unpack it.
$ yum install perl-libwww-perl #Make sure perl modules are installed ...
$ yum install perl-Time-HiRes  #Otherwise it will generate an error.
$ cd csf
$ ./install.sh #Install CSF.
 
#MAKE SURE YOU HAVE YOUR BAND-AID READY
 
$ /etc/init.d/csf start #Start CSF. (Note: You can also use '$ service csf start')

Once you start CSF, you can see a list of the default rules that load at startup. CSF defaults to a DROP policy:

$ iptables -nL | grep policy
Chain INPUT (policy DROP)
Chain FORWARD (policy DROP)
Chain OUTPUT (policy DROP)

Don't ever run "iptables -F" unless you want to lock yourself out. In fact, you might want to add "This server is running CSF - do not run 'iptables -F'" to your /etc/motd, just as a reminder/warning to others.

CSF loads on startup by default. This means that if you get locked out, a simple reboot probably won't fix the problem. Runlevels 2, 3, 4, and 5 are all on:

$ chkconfig --list | grep csf
csf             0:off   1:off   2:on    3:on    4:on    5:on    6:off

Some features of CSF will not work unless you have certain iptables modules installed. I believe they are installed by default in CentOS, but if you custom-built your iptables, they might not all be installed. Run this script to see if all modules are installed:

$ /etc/csf/csftest.pl
Testing ip_tables/iptable_filter...OK
Testing ipt_LOG...OK
Testing ipt_multiport/xt_multiport...OK
Testing ipt_REJECT...OK
Testing ipt_state/xt_state...OK
Testing ipt_limit/xt_limit...OK
Testing ipt_recent...OK
Testing xt_connlimit...OK
Testing ipt_owner/xt_owner...OK
Testing iptable_nat/ipt_REDIRECT...OK
Testing iptable_nat/ipt_DNAT...OK
 
RESULT: csf should function on this server

As I mentioned, this is the default iptables installation on a minimal CentOS 6.0 image, so chances are good that these modules are already installed on your system. It never hurts to check, though.

The CSF Configuration File

The primary CSF configuration is stored in the well-documented /etc/csf/csf.conf file. CSF is extremely configurable, so there are a lot of options to read over. Let's take a look at some of the more important features:

Testing

TESTING = "1"
TESTING_INTERVAL = "5"

With TESTING enabled, a cron job flushes your firewall rules every "5" minutes (the TESTING_INTERVAL) so you don't lock yourself out while you're testing your rules. When you are satisfied with your rules (and confident that you won't lock yourself out), you can set TESTING to "0".

Globally Allowed Ports

# Allow incoming TCP ports
TCP_IN = "20,21,22,25,53,80,110,143,443,465,587,993,995"
 
# Allow outgoing TCP ports
TCP_OUT = "20,21,22,25,53,80,110,113,443"
 
# Allow incoming UDP ports
UDP_IN = "20,21,53"
 
# Allow outgoing UDP ports
# To allow outgoing traceroute add 33434:33523 to this list
UDP_OUT = "20,21,53,113,123"
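
One note before we go further: changes to /etc/csf/csf.conf don't take effect until you reload the rules, which you can do with the command line tool covered later in this post:

$ csf -r   # restart the firewall rules and pick up configuration changes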

Incoming Ping Requests

# Allow incoming PING
ICMP_IN = "1"

Allowing ping is usually a good option for diagnostic purposes, so I don't recommend turning it off. Disallowing ping is an example of "security through obscurity," and it will not typically dissuade your attackers.

Ethernet Device

ETH_DEVICE = ""
ETH6_DEVICE = ""

Here, you can configure iptables to ONLY use one Ethernet adapter. You might want to only guard your public network adapter in some situations.

IP Limit in Permanent "Deny" File

DENY_IP_LIMIT = "200"

A higher number here will obviously screen out more IP addresses in csf.deny, but higher numbers also may cause slowdowns.

IP Limit in Temporary "Deny" File

DENY_TEMP_IP_LIMIT = "100"

Similar to DENY_IP_LIMIT, the DENY_TEMP_IP_LIMIT represents the maximum number of IPs that can be stored in the temporary ban list.

SMTP Blocking

SMTP_BLOCK = "0"

When set to "1", SMTP_BLOCK does not completely block outbound SMTP, but it does block it for most users. This will prevent malicious scripts and compromised users from making outbound connections from unauthorized mail clients on the server. SMTP_BLOCK doesn't stop those scripts from running, but it does stop them from functioning. Mail sent through the proper channels will still be delivered normally.

Allowing SMTP on localhost

SMTP_ALLOWLOCAL = "1"

Custom Mail Port Designation

SMTP_PORTS = "25,465,587"

Allowing SMTP Access to Users/Groups

SMTP_ALLOWUSER = ""
SMTP_ALLOWGROUP = "mail,mailman"

SYN Flood Protection

SYNFLOOD = "0"
SYNFLOOD_RATE = "100/s"
SYNFLOOD_BURST = "150"

Per the documentation, you should only enable SYN flood protection (SYNFLOOD= "1") if you are currently under a SYN flood attack.

Concurrent Connections Limit

CONNLIMIT = "22;5,80;20"
PORTFLOOD = "22;tcp;5;300,80;tcp;20;5"

These options allow you to add customized DoS protection. CONNLIMIT handles the number of concurrent connections, and in this example, we're limiting port 22 to 5 connections and port 80 to 20 connections.

PORTFLOOD watches the number of connections over a given number of seconds. In this example, if more than 5 TCP connections hit port 22 within 300 seconds, the offending IP is blocked until 300 seconds pass with no further connection attempts. Additionally, if more than 20 TCP connections hit port 80 within 5 seconds, the IP is blocked until a quiet period of 5 seconds has passed.

Check the readme.txt file for more information about the syntax.

Logging to Syslog

SYSLOG = "0"

When enabled, this option logs lfd (Login Failure Daemon) messages to syslog as well as to /var/log/lfd.log.

Dropping v. Rejecting Packets

DROP = "DROP"

This configuration allows you to either DROP or REJECT packets. REJECT tells the sender that the packet has been blocked by the firewall. DROP just drops the packet and does not send a response. I like DROP better for regular use, but REJECT might be more helpful if you need to diagnose a connectivity issue.

Logging Dropped Connections

DROP_LOGGING = "1"

This option logs dropped connections to syslog. I don't see any reason to turn this off unless your hard drive is getting full.

Port Exceptions When Logging Dropped Connections

DROP_NOLOG = "67,68,111,113,135:139,445,500,513,520"

These ports are specifically blocked from being logged either to conserve hard drive space or make the log file easier to read.

"Watch Mode"

WATCH_MODE = "0"

If you are ever stuck trying to troubleshoot a large ruleset, you might consider turning this option on. You can use it to follow the iptables actions applied to watched IP addresses and see where they are getting blocked or accepted.

Login Failure Daemon Alert

LF_ALERT_TO = ""
LF_ALERT_FROM = ""
LF_ALERT_SMTP = ""

You can specify an email address to report errors from the Login Failure Daemon, which tracks and automatically blocks brute force login attempts.

Permanent Blocks and NetBlocks

LF_PERMBLOCK = "1"
LF_PERMBLOCK_INTERVAL = "86400"
LF_PERMBLOCK_COUNT = "4"
LF_PERMBLOCK_ALERT = "1"
LF_NETBLOCK = "0"
LF_NETBLOCK_INTERVAL = "86400"
LF_NETBLOCK_COUNT = "4"
LF_NETBLOCK_CLASS = "C"
LF_NETBLOCK_ALERT = "1"

These settings control permanent blocks and netblock blocking. You probably don't need to touch these settings, but you might want more or less security depending on your company's needs. If something gets permablocked, it will require your intervention to clear it, which might create downtime for your clients. Likewise, if a legitimate IP address happens to be part of a netblock that has an attacking IP address on it, it will get blocked if you have that feature turned on. A class C network encompasses 256 IP addresses. You can set this to class B or A, but that could block thousands or millions of IP addresses, respectively. Unless you find yourself under constant attack, I would advise you to leave LF_NETBLOCK off.

Additional Protection During Updates

# Safe Chain Update. If enabled, all dynamic update chains (GALLOW*, GDENY*,
# SPAMHAUS, DSHIELD, BOGON, CC_ALLOW, CC_DENY, ALLOWDYN*) will create a new
# chain when updating, and insert it into the relevant LOCALINPUT/LOCALOUTPUT
# chain, then flush and delete the old dynamic chain and rename the new chain.
#
# This prevents a small window of opportunity opening when an update occurs and
# the dynamic chain is flushed for the new rules.
SAFECHAINUPDATE = "0"

Activating this option will increase your system resource usage and will require more rules to be running at one time, but it provides an additional layer of protection during updates. Without this option turned on, your rules will be flushed for a short amount of time, leaving your server vulnerable.

Multi-Server Deployment Options

LF_GLOBAL = "0"
GLOBAL_ALLOW = ""
GLOBAL_DENY = ""
GLOBAL_IGNORE = ""

Like APF, you can configure global lists for multiple server deployments. You'll need to specify a URL of the text file with the IP addresses for the global lists.

SPAMHAUS Blocklist

LF_SPAMHAUS = "0"

This option enables the SPAMHAUS blocklist. To enable it, set the value to the number of seconds between refreshes; the recommended setting is 86400 (1 day).

Blocking TOR Exit IP Addresses

LF_TOR = "0"

Enabling this option will block TOR exit IP addresses. If you are not familiar with TOR, it is a completely anonymous proxy network. This could block some legitimate users who are trying to protect their anonymity, so I would recommend only turning this on if you are already under attack from a TOR exit address.

Blocking Bogon Addresses

LF_BOGON = "0"
LF_BOGON_URL = "http://www.cymru.com/Documents/bogon-bn-agg.txt"
LF_BOGON_SKIP = ""

Blocking bogon addresses (addresses that should not be possible) is usually a good decision. To enable, set the number of seconds between refreshes. I recommend enabling this option and setting the refresh at 86400 (1 day). If you do so, be sure to add your private network adapters to the skip list.

Country-Specific Access to Your Server

CC_DENY = ""
CC_ALLOW = ""

With these options, you can block or allow entire countries from accessing your server. To do so, enter the country codes in a comma separated list. Even though this generates a lot of additional rules, it's valuable to some sysadmins.

CC_ALLOW_FILTER = ""

Alternatively, you can set your server to exclusively accept traffic from a list of country codes. All other countries not listed will have their traffic dropped. There are many other settings related to these options that I don't have time to cover in this blog.

Blocking Login Failures

LF_TRIGGER = "0"

This enables blocking of login failures (per service). There are a lot of great customization options in this section.

Scanning Directories for Malicious Files

LF_DIRWATCH = "300"

This feature scans /tmp and /dev/shm for potentially malicious files and alerts you to their presence based on the interval you designate. You can also have CSF automatically quarantine malicious files with this option:

LF_DIRWATCH_DISABLE = "0"

Distributed Attack Protection

LF_DISTATTACK = "0"

By enabling this option, you activate additional protection against distributed attacks.

Blocking Based on Abusive Email Usage

LT_POP3D = "0"
LT_IMAPD = "0"

If a user checks email too many times per hour (more than the non-zero value specified), the user's IP address is blocked.

Email Alert Following Block

LT_EMAIL_ALERT = "1"

This will send you email when something is blocked. I'd recommend leaving it on.

Blocking IP Addresses Based on Number of Connections

CT_LIMIT = "0"

This feature tracks connections and blocks the IP if the number of connections is too high. Use caution because if you enable this option and set this value too low, it will block legitimate traffic.

Application-Level Protection

PT_LIMIT = "60"

This feature provides application level protection against malicious scripts that take a long time to execute.

Blocking Port Scanners

PS_INTERVAL = "300"
PS_LIMIT = "10"

Enabling HTML User Interface for CSF

UI = "0"

CSF has a built-in HTML user interface. You can enable it by setting UI = "1". There is a list of prerequisites for this option in the readme.txt.

Notifying Blocked IP Addresses

MESSENGER = "0"

This option will notify blocked IP addresses when they have been blocked by the firewall.

Port Knocking

PORTKNOCKING = ""

CSF supports port knocking, which is a technique that provides an additional layer of security. See http://www.portknocking.org/ for details.

Allow and Deny Lists

As we walked through the CSF configuration file, you saw that I referenced the csf.deny file, so it should come as no surprise that CSF also includes csf.allow for customizing "allow" rules as well. If you are familiar with APF, these files have a very similar syntax ... Each entry is made up of the same four components: protocol|flow|port|IP. The only real difference is that APF uses the colon as a delimiter while CSF uses the pipe:

#APF Version
tcp:in:d=48000_48020:s=10.0.0.0/8
 
#CSF Version
tcp|in|d=48000_48020|s=10.0.0.0/8

Fortunately, replacing your colons with pipes is a minimally invasive procedure that can be automated with a tool like vi or sed.
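
For instance, assuming your APF rules live in their default location, a sed one-liner handles the conversion for simple IPv4 entries like the ones above (watch out for IPv6 entries, where colons are part of the address itself):

$ sed 's/:/|/g' /etc/apf/allow_hosts.rules >> /etc/csf/csf.allow   # append converted rules to CSF's allow list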

CSF Command Line Tool

The command line tool for CSF is much more robust than the one for APF:

$ csf --help
csf: v5.79 (cPanel)
 
ConfigServer Security & Firewall
(c)2006-2013, Way to the Web Limited (http://www.configserver.com)
 
Usage: /usr/sbin/csf [option] [value]
 
Option              Meaning
-h, --help          Show this message
-l, --status        List/Show iptables configuration
-l6, --status6      List/Show ip6tables configuration
-s, --start         Start firewall rules
-f, --stop          Flush/Stop firewall rules (Note: lfd may restart csf)
-r, --restart       Restart firewall rules
-q, --startq        Quick restart (csf restarted by lfd)
-sf, --startf       Force CLI restart regardless of LF_QUICKSTART setting
-a, --add ip        Allow an IP and add to /etc/csf.allow
-ar, --addrm ip     Remove an IP from /etc/csf.allow and delete rule
-d, --deny ip       Deny an IP and add to /etc/csf.deny
-dr, --denyrm ip    Unblock an IP and remove from /etc/csf.deny
-df, --denyf        Remove and unblock all entries in /etc/csf.deny
-g, --grep ip       Search the iptables rules for an IP match (incl. CIDR)
-t, --temp          Displays the current list of temp IP entries and their TTL
-tr, --temprm ip    Remove an IP from the temp IP ban and allow list
-td, --tempdeny ip ttl [-p port] [-d direction]
                    Add an IP to the temp IP ban list. ttl is how long to
                    block for (default:seconds, can use one suffix of h/m/d).
                    Optional port. Optional direction of block can be one of:
                    in, out or inout (default:in)
-ta, --tempallow ip ttl [-p port] [-d direction]
                    Add an IP to the temp IP allow list (default:inout)
-tf, --tempf        Flush all IPs from the temp IP entries
-cp, --cping        PING all members in an lfd Cluster
-cd, --cdeny ip     Deny an IP in a Cluster and add to /etc/csf.deny
-ca, --callow ip    Allow an IP in a Cluster and add to /etc/csf.allow
-cr, --crm ip       Unblock an IP in a Cluster and remove from /etc/csf.deny
-cc, --cconfig [name] [value]
                    Change configuration option [name] to [value] in a Cluster
-cf, --cfile [file] Send [file] in a Cluster to /etc/csf/
-crs, --crestart    Cluster restart csf and lfd
-w, --watch ip      Log SYN packets for an IP across iptables chains
-m, --mail [addr]   Display Server Check in HTML or email to [addr] if present
-lr, --logrun       Initiate Log Scanner report via lfd
-c, --check         Check for updates to csf but do not upgrade
-u, --update        Check for updates to csf and upgrade if available
-uf                 Force an update of csf
-x, --disable       Disable csf and lfd
-e, --enable        Enable csf and lfd if previously disabled
-v, --version       Show csf version

The command line tool will also tell you whether testing mode is enabled (a very useful feature). If TESTING were enabled, we'd see this line at the bottom of the output:

*WARNING* TESTING mode is enabled - do not forget to disable it in the configuration
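To see a few of those options in action, a quick session might look like this (192.0.2.10 is a documentation address used purely for illustration):

# Permanently allow an IP and record it in /etc/csf.allow
csf -a 192.0.2.10

# Temporarily block an IP for one hour, inbound only
csf -td 192.0.2.10 1h -d in

# Review the temporary entries and their TTLs
csf -t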

Did you make it all the way through?! Great! I know it's a lot to take in, but it's not terribly complicated when we break it down and understand how each piece works. Next time, I'll be back with some tips on integrating CSF into cPanel.

-Mark

January 24, 2013

Startup Series: SPEEDILICIOUS

Research from the Aberdeen Group shows the average website is losing 9% of its business because the speed of the site frustrates visitors into leaving. Let that sink in: nearly a tenth of your traffic may be abandoning your site simply because it feels too slow. That thought is staggering, and any site owner would be foolish not to fix the problem. SPEEDILICIOUS — one of our new Catalyst partners — has an innovative solution that optimizes website performance and helps businesses deliver content to their end users faster.


I recently had the chance to chat with SPEEDILICIOUS founders Seymour Segnit and Chip Krauskopf, and Seymour rephrased that "9%" statistic in a pretty alarming way: "Losing 9% of your business is the equivalent of simply allowing your website to go offline, down, dark, dead, 404 for over a MONTH each year!" There is ample data to back this up from high-profile sites like Amazon, Microsoft and Walmart.com, but intuitively, you know it already ... A slow site (even a slightly slow site) is annoying.

The challenge many website owners have when it comes to their loading speeds is that problems might not be noticeable from their own workstations. Thanks to caching and the Internet connections most of us have, when we visit our own sites, we don't have any trouble accessing our content quickly. Unfortunately, many of our customers don't share that experience when they visit our sites over mobile, hotel, airport and (worst of all) conference connections. The most common approach to speeding up load times is to throw bigger servers or a CDN (content delivery network) at the problem, but while those improvements make a difference, they only address part of the problem ... Even with the most powerful servers in SoftLayer's fleet, your page can load at a crawl if your code can't be rendered quickly by a browser.

That makes life as a website developer difficult. The process of optimizing code and tweaking settings to speed up load times can be time-consuming and frustrating. Or as Chip explained to me, "Speeding up your site is essential; it shouldn't be slow and complicated. We fix that problem."

The idea that your site performance can be sped up significantly overnight seems a little crazy, but if it works (which it clearly does), wouldn't it be crazier not to try it? SPEEDILICIOUS offers a $1 trial for you to see the results on your own site, and they regularly host a free webinar called "How to Grow Your Business 5-15% Overnight" which covers the critical techniques for speeding up any website.

As technology continues to improve and purchasing behavior migrates away from the mall and onto our computers and smartphones, SPEEDILICIOUS has a tremendous opportunity to capture a ripe market, so they're clearly a great fit for Catalyst. If you're interested in learning more or would like to speak to Seymour, Chip or anyone on their team, please let me know and I'll make the direct introduction any time.

-@JoshuaKrammes

January 10, 2013

Web Development - JavaScript Packaging

If you think of JavaScript as the ugly duckling of programming languages, think again! It got a bad rap in the early days of the web because many developers knew just enough to get by but didn't really respect it the way they did Java, PHP or .Net. Like other well-known and heavily used languages, JavaScript contains various data types (String, Boolean, Number, etc.), objects and functions, and it is even capable of inheritance. Unfortunately, that functionality is often overlooked, and many developers seem to implement it as an afterthought: "Oh, we need to add some neat jQuery effects over there? I'll just throw some inline JavaScript here." That kind of implementation perpetuates a stereotype that JavaScript code is unorganized and difficult to maintain, but it doesn't have to be! I'm going to show you how easy it is to maintain and organize your code base by packaging your JavaScript classes into a single file to be included with your website.

There are a few things to cover before we jump into code:

  1. JavaScript Framework - Mootools is my framework of choice, but you can use whatever JavaScript framework you'd like.
  2. Classes - Because I treat JavaScript as a language worthy of respect (and one capable of object-oriented-like design), I write classes for EVERYTHING. Don't think of your JavaScript code as something you use once and throw away; write it to be generic enough to be reused wherever it's placed. Object-oriented design is great for this, and Mootools makes it easy to do, which reinforces the point above.
  3. Class Files - Just like you'd organize your PHP with one class per file, I do the exact same thing with JavaScript. Note: Each class file in the example below is named after its class, with .js appended.
  4. Namespacing - I will be organizing my classes in a way that will only add a single property — PT — to the global namespace. I won't get into the details of namespacing in this blog because I'm sure you're already thinking, "The code! The code! Get on with it!" You can namespace however it makes sense for your situation.

For this example, our classes will be food-themed because ... well ... I enjoy food. Let's get started by creating our base object:

/*
---
name: PT
description: The base class for all the custom classes
authors: [Philip Thompson]
provides: [PT]
...
*/
var PT = {};

We now have an empty object from which we'll build all of our classes. I'll go into more detail about the comment section later, but for now, let's build our first class: PT.Ham.

/*
---
name: PT.Ham
description: The ham class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Ham]
...
*/
 
(function() {
    PT.Ham = new Class({
        // Custom code here...
    });
}());

As I mentioned in point three (above), PT.Ham should be saved in a file named PT.Ham.js. When we create our second class, PT.Pineapple, we'll store it in PT.Pineapple.js:

/*
---
name: PT.Pineapple
description: The pineapple class
authors: [Philip Thompson]
requires: [/PT]
provides: [PT.Pineapple]
...
*/
 
(function() {
    PT.Pineapple = new Class({
        // Custom code here...
    });
}());

Our final class for this example will be PT.Pizza (I'll let you guess the name of the file where PT.Pizza lives). Our PT.Pizza class will require that PT, PT.Ham and PT.Pineapple be present.

/*
---
name: PT.Pizza
description: The pizza class
authors: [Philip Thompson]
requires: [/PT, /PT.Ham, /PT.Pineapple]
provides: [PT.Pizza]
...
*/
 
(function() {
    PT.Pizza = new Class({
        // Custom code here that uses PT.Ham and PT.Pineapple...
    });
}());
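The class bodies above are placeholders, so here's a purely hypothetical sketch of what PT.Pizza might contain once it's fleshed out. The initialize constructor is the standard Mootools pattern; the toppings property and describe method are invented for illustration:

(function() {
    PT.Pizza = new Class({
        // Mootools calls initialize when the class is instantiated
        initialize: function() {
            // Compose the other classes rather than duplicating their logic
            this.toppings = [new PT.Ham(), new PT.Pineapple()];
        },

        // Return a simple summary of this pizza
        describe: function() {
            return 'Pizza with ' + this.toppings.length + ' toppings';
        }
    });
}());

// Usage: var pizza = new PT.Pizza(); alert(pizza.describe());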

Before we go any further, let's check out the comments we include above each of the classes. The comments are formatted as YAML — YAML Ain't Markup Language (you gotta love recursive acronyms). These comments allow our parser to determine how our classes are related, and they help resolve dependencies. YAML is pretty easy to learn, and you only need to know a few basic features to use it. The YAML comments in this example are essential for our JavaScript package manager — Packager. I won't go into all the details about Packager here; I'll simply mention the few commands we need to build our single JavaScript file.

In addition to the YAML comments in each of the class files, we also need to create a YAML file that will organize our code. This file — package.yml for this example — is used to load our separate JavaScript classes:

name: "PT"
description: "Provides our fancy PT classes"
authors: "[Philip Thompson]"
version: "1.0.0"
sources:
    - js/PT.js
    - js/PT.Ham.js
    - js/PT.Pineapple.js
    - js/PT.Pizza.js

package.yml shows that all of our PT* files are located in the js directory, one level below the directory that contains the package.yml file. Some of the properties in the YAML file are optional, and you can add much more detail if you'd like, but this will get the job done for our purposes.

Now we're ready to turn back to Packager to build our packaged file. Packager includes an option to use PHP, but we're just going to use the command line. First, we need to register the new package (package.yml) we created for PT. If our JavaScript files are located in /path/to/web/directory/js, the package.yml file is in /path/to/web/directory:

./packager register /path/to/web/directory

This finds our package.yml file and registers our PT package. Now that we have our package registered, we can build it:

./packager build PT/* > /path/to/web/directory/js/PT.all.js

Packager sees that our PT package is registered, so it reads each of the individual class files and concatenates them into a single large file. From the YAML comments in each class file, it determines the dependencies and warns you if any are not found.

It might seem like a lot of work when it's written out like this, but I can assure you that when you go through the process, it takes no time at all. The huge benefit of packaging our JavaScript is evident as soon as you start incorporating those JavaScript classes into your website ... Because we have built all of our class files into a single file, we don't need to include each of the individual JavaScript files in our website (much less the inline JavaScript declarations that make you cringe). If you're using your JavaScript package in a production deployment, I recommend that you "minify" the packaged file as well to streamline your implementation even further.
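To make that concrete, the page goes from one script tag per class file to a single include (the paths match the example above):

<!-- Before: one include per class file -->
<script src="js/PT.js"></script>
<script src="js/PT.Ham.js"></script>
<script src="js/PT.Pineapple.js"></script>
<script src="js/PT.Pizza.js"></script>

<!-- After: the single packaged file -->
<script src="js/PT.all.js"></script>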

See ... Organized code is no longer just for server-side languages. Treat your JavaScript kindly, and it will be your friend!

Happy coding!

-Philip
