Posts Tagged 'Linux'

November 11, 2013

Sysadmin Tips and Tricks - Using the ‘for’ Loop in Bash

Ever have a bunch of files to rename or a large set of files to move to different directories? Ever find yourself copy/pasting nearly identical commands a few hundred times to get a job done? A system administrator's life is full of tedious tasks that can be eliminated or simplified with the proper tools. That's right ... Those tedious tasks don't have to be executed manually! I'd like to introduce you to one of the simplest tools to automate time-consuming repetitive processes in Bash — the for loop.

Whether you have been programming for a few weeks or a few decades, you should be able to quickly pick up on how the for loop works and what it can do for you. To get started, let's take a look at a few simple examples of what the for loop looks like. For these exercises, it's always best to use a temporary directory while you're learning and practicing for loops. The command is very powerful, and we wouldn't want you to damage your system while you're still learning.

Here is our temporary directory:

rasto@lmlatham:~/temp$ ls -la
total 8
drwxr-xr-x 2 rasto rasto 4096 Oct 23 15:54 .
drwxr-xr-x 34 rasto rasto 4096 Oct 23 16:00 ..
rasto@lmlatham:~/temp$

We want to fill the directory with files, so let's use the for loop:

rasto@lmlatham:~/temp$ for cats_are_cool in {a..z}; do touch $cats_are_cool; done;
rasto@lmlatham:~/temp$

Note: This should be typed all in one line.

Here's the result:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z
rasto@lmlatham:~/temp$

How did that simple command populate the directory with all of the letters in the alphabet? Let's break it down.

for cats_are_cool in {a..z}

The for is the command we are running, which is built into the Bash shell. cats_are_cool is a variable we are declaring. The specific name of the variable can be whatever you want it to be. Traditionally people often use f, but the variable we're using is a little more fun. Hereafter, our variable will be referred to as $cats_are_cool (or $f if you used the more boring "f" variable). Aside: You may recognize this pattern from environment variables — you declare the variable without the $ sign, then use the $ sign when you invoke it.

When our command is executed, the variable we declared will take on each of the values in {a..z}, from a to z. Next, we use the semicolon to indicate we are done with the first phase of our for loop. The next part starts with do, which says: for each of a–z, do <some thing>. In this case, we are creating files by touching them via touch $cats_are_cool. The first time through the loop, the command creates a, the second time through b, and so forth. We complete that command with a semicolon, then we declare we are finished with the loop with "done".
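
By the way, the list after in doesn't have to be letters — it can be numbers, filenames, anything. Here's a purely illustrative variation you could try in your temp directory (the echo text is just an example):

for number in {1..5}; do echo "This is pass number $number"; done

Each time through the loop, $number takes on the next value from 1 to 5.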

This might be a great time to experiment with the command above, making small changes, if you wish. Let's do a little more. I just realized that I made a mistake. I meant to give the files a .txt extension. This is how we'd make that happen:

for dogs_are_ok_too in {a..z}; do mv $dogs_are_ok_too $dogs_are_ok_too.txt; done;
Note: It would be perfectly okay to re-use $cats_are_cool here. The variables are not persistent between executions.

As you can see, I updated the command so that a would be renamed a.txt, b would be renamed b.txt and so forth. Why would I want to do that manually, 26 times? If we check our directory, we see that everything was completed in that single command:

rasto@lmlatham:~/temp$ ls -l
total 0
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 a.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 b.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 c.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 d.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 e.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 f.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 g.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 h.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 i.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 j.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 k.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 l.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 m.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 n.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 o.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 p.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 q.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 r.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 s.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 t.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 u.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 v.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 w.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 x.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 y.txt
-rw-rw-r-- 1 rasto rasto 0 Oct 23 16:13 z.txt
rasto@lmlatham:~/temp$

Now we have files, but we don't want them to be empty. Let's put some text in them:

for f in `ls`; do cat /etc/passwd > $f; done

Note the backticks around ls. In Bash, backticks mean, "execute this and return the results," so it's like you executed ls and fed the results to the for loop! Next, the output of cat /etc/passwd is redirected into $f — that is, into a.txt, b.txt and so on. Still with me?
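
As an aside, looping over the output of ls can trip you up if a filename ever contains a space. A slightly safer sketch that does the same job with a shell glob (assuming you only care about the .txt files we just created):

for f in *.txt; do cat /etc/passwd > "$f"; done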

So now I've got a bunch of files with copies of /etc/passwd in them. What if I never wanted files for a, g, or h? First, I'd get a list of just the files I want to get rid of:

rasto@lmlatham:~/temp$ ls | egrep 'a|g|h'
a.txt
g.txt
h.txt

Then I could plug that command into the for loop (using backticks again) and do the removal of those files:

for f in `ls | egrep 'a|g|h'`; do rm $f; done

I know these examples don't seem very complex, but they give you a great first-look at the kind of functionality made possible by the for loop in Bash. Give it a whirl. Once you start smartly incorporating it in your day-to-day operations, you'll save yourself massive amounts of time ... Especially when you come across thousands or tens of thousands of very similar tasks.

Don't do work a computer should do!

-Lee

October 16, 2013

Tips and Tricks: Troubleshooting Email Issues

Working in support, one of the most common issues we troubleshoot is a customer's ability to receive email. Depending on email server, this can be a headache and a half to figure out, but more often than not, we're able to fix the problem with one of only a few simple solutions. Because the SoftLayer Blog audience loves technical tips and tricks, I thought I'd share a few easy steps that make pinpointing the root cause of email issues much easier.

Before you gear up to go into battle, check that the server is not out of disk space on /var and that it is not in a read-only state. That precursory step may seem silly, but Occam's Razor often holds true in technical troubleshooting. Once you verify that those two common problems aren't causing your email problems, the next step is to determine whether the email issues are server-wide or isolated to one mail account/domain. To do that, the first thing you need to do is make sure that the IMAP and POP services are responding.

Check IMAP and POP Services

The universal approach to checking IMAP and POP services is to use telnet:

telnet <serverip> 110
telnet <serverip> 143

If either of those commands fails, you're able to pinpoint which service to check on your server.
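
For reference, a healthy POP3 service will typically answer the telnet connection with a greeting banner. The exact wording depends on your mail server — this sketch shows a Dovecot-style greeting, and the hostname and IP are placeholders:

telnet mail.example.com 110
Trying 203.0.113.25...
Connected to mail.example.com.
Escape character is '^]'.
+OK Dovecot ready.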

For most variants of Linux, you can check both services with a single command: netstat -plan|egrep -i "110|143". The resulting output will show if the services are listening and which process is doing the listening. In Windows, you can run a similar command from a command prompt: netstat -anb|find "LISTEN"| findstr "110 143".

If the ports are listening, and you're able to connect to them over telnet, your next stop should be your server's error logs.

Check Error Logs

You want to look for any mail errors that might clue you into the root cause of your email issues. In Linux, you can check /var/log/maillog, and in Windows, you can filter eventvwr.msc for mail only. If there are errors, a simple search will highlight them quickly.
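
If the log is long, a quick filter will surface recent problems. A minimal sketch, assuming the standard Linux log location:

grep -iE 'error|warn|reject' /var/log/maillog | tail -n 50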

If there are no errors, it's time to dig into the mail queue directly.

Check the Mail Queue

Depending on the mail server you use, the commands here are going to vary. Here are a few examples of how we'd investigate the most common mail servers we encounter:

QMail

Display the mail queue: /var/qmail/bin/qmail-qread
Display the number of messages in the queue: /var/qmail/bin/qmail-qstat
Reference article: Gaining Control Over the QMail Queue

Sendmail

Display the mail queue: sendmail -bp or mailq
Display the number of messages in the queue: mailq -OmaxQueueRunSize=1
Reference article: Quick Sendmail Cheatsheet

Exim

Display the mail queue: exim -bp
Display the number of messages in the queue: exim -bpc
Reference article: Exim cheatsheet

MailEnable

MailEnable users can check to see that messages are moving by opening the mail directory:
Program Files\MailEnable\Queues\SMTP\Inbound\Messages
Reference article: How to diagnose inbound message delivery delays

With these commands, you can filter through the email queues to see whether any of them are for the users or domains you're having problems with. If nothing obvious presents itself at that point, it's time for some active testing.

Active Testing

Send an email to your mailserver from an external mailserver (anything will do as long as it's not on the same server). Watch for logging of the email as it's delivered:
tail -f maillog
On busy mailservers you might add |grep youremailid or simply look for a new message in the directory where the email will be stored.
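
Putting those two together, a hedged example (the address and log path are placeholders for your own):

tail -f /var/log/maillog | grep -i 'testuser@example.com'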

Your primary goal in troubleshooting your email issues in this way is to isolate the root cause of your problem so that you can fix it more quickly. SoftLayer customers have direct access to our support team to help you through this process, but it's always nice to keep a quick reference like this in your back pocket to be able to pinpoint the problem yourself.

-Bill

September 16, 2013

Sysadmin Tips and Tricks - Using strace to Monitor System Calls

Linux admins often encounter rogue processes that die without explanation, go haywire without any meaningful log data or fail in other interesting ways without providing useful information that can help troubleshoot the problem. Have you ever wished you could just see what the program is trying to do behind the scenes? Well, you can — strace (system trace) is very often the answer. It is included with most distros' package managers, and the syntax should be pretty much identical on any Linux platform.

First, let's get rid of a misconception: strace is not a "debugger," and it isn't a programmer's tool. It's a system administrator's tool for monitoring system calls and signals. It doesn't involve any sophisticated configurations, and you don't have to learn any new commands ... In fact, the most common system calls you'll see in strace output have names you already know from the basic commands you learned early on:

  • read
  • write
  • open
  • close
  • stat
  • fork
  • execute (execve)
  • chmod
  • chown

 

You simply "attach" strace to the process, and it will display all the system calls and signals resulting from that process. Instead of executing the command's built-in logic, strace just makes the process's normal calls to the system and returns the results of the command with any errors it encountered. And that's where the magic lies.

Let's look at an example to show that behavior in action. First, become root — you'll need to be root for strace to function properly. Second, make a simple text file called 'test.txt' with these two lines in it:

# cat test.txt
Hi I'm a text file
there are only these two lines in me.

Now, let's execute the cat again via strace:

$ strace cat test.txt 
execve("/bin/cat", ["cat", "test.txt"], [/* 22 vars */]) = 0
brk(0)  = 0x9b7b000
uname({sys="Linux", node="ip-208-109-127-49.ip.secureserver.net", ...}) = 0
access("/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=30671, ...}) = 0
mmap2(NULL, 30671, PROT_READ, MAP_PRIVATE, 3, 0) = 0xb7f35000
close(3) = 0
open("/lib/libc.so.6", O_RDONLY) = 3
read(3, "\177ELF\1\1\1\0\0\0\0\0\0\0\0\0\3\0\3\0\1\0\0\0000_\1\0004\0\0\0"..., 512) = 512
fstat64(3, {st_mode=S_IFREG|0755, st_size=1594552, ...}) = 0
mmap2(NULL, 1320356, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xb7df2000
mmap2(0xb7f2f000, 12288, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x13c) = 0xb7f2f000
mmap2(0xb7f32000, 9636, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xb7f32000
close(3) = 0
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7df1000
mmap2(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb7df0000
set_thread_area({entry_number:-1 -> 6, base_addr:0xb7df1b80, limit:1048575, seg_32bit:1, contents:0, read_exec_only:0, limit_in_pages:1, seg_not_present:0, useable:1}) = 0
mprotect(0xb7f2f000, 8192, PROT_READ) = 0
mprotect(0xb7f57000, 4096, PROT_READ) = 0
munmap(0xb7f35000, 30671) = 0
brk(0)  = 0x9b7b000
brk(0x9b9c000) = 0x9b9c000
fstat64(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 0), ...}) = 0
open("test.txt", O_RDONLY|O_LARGEFILE) = 3
fstat64(3, {st_mode=S_IFREG|0644, st_size=57, ...}) = 0
read(3, "Hi I'm a text file\nthere are onl"..., 4096) = 57
write(1, "Hi I'm a text file\nthere are onl"..., 57Hi I’m a text file
there are only these two lines in me.
) = 57
read(3, "", 4096) = 0
close(3) = 0
close(1) = 0
exit_group(0) = ?

Now, that output may look really arcane, but if you study it a little bit, you'll see that it includes lots of information that even an ordinary admin can easily understand. The first line returned includes the execve system call where we execute /bin/cat with the parameter of test.txt. After that, you'll see the cat binary attempt to open some system libraries, and the brk and mmap2 calls to allocate memory. That stuff isn't usually particularly useful in the context we're working in here, but it's important to understand what's going on. What we're most interested in are often the open calls:

open("test.txt", O_RDONLY|O_LARGEFILE) = 3

It looks like when we run cat test.txt, it will be opening "test.txt", doesn't it? In this situation, that information is not very surprising, but imagine if you are in a situation where you don't know what files a given program is trying to open ... strace immediately makes life easier. In this particular example, you'll see that "= 3" at the end, which is a temporary sort of "handle" for this particular file within the strace output. If you see a "read" call with '3' as the first parameter after this, you know it's reading from that file:

read(3, "Hi I'm a text file\nthere are onl"..., 4096) = 57

Pretty interesting, huh? strace defaults to just showing the first 32 or so characters in a read, but it also lets us know that there are 57 characters (including special characters) in the file! After the text is read into memory, we see cat write it to the screen, delivering the actual output of the text file. Now that's a relatively simplified example, but it helps us understand what's going on behind the scenes.

Real World Example: Finding Log Files

Let's look at a real world example where we'll use strace for a specific purpose: You can't figure out where your Apache logs are being written, and you're too lazy to read the config file (or perhaps you can't find it). Wouldn't it be nice to follow everything Apache is doing when it starts up, including opening all its log files? Well you can:

strace -Ff -o output.txt -e open /etc/init.d/httpd restart

We are executing strace and telling it to follow all forks (-Ff), but this time we'll output to a file (-o output.txt) and only look for 'open' system calls to keep some of the chaff out of the output (-e open), and execute '/etc/init.d/httpd restart'. This will create a file called "output.txt" which we can use to find references to our log files:

#cat output.txt | grep log
[pid 13595] open("/etc/httpd/modules/mod_log_config.so", O_RDONLY) = 4
[pid 13595] open("/etc/httpd/modules/mod_logio.so", O_RDONLY) = 4
[pid 13595] open("/etc/httpd/logs/error_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 10
[pid 13595] open("/etc/httpd/logs/ssl_error_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 11
[pid 13595] open("/etc/httpd/logs/access_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 12
[pid 13595] open("/etc/httpd/logs/cm4msaa7.com", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 13
[pid 13595] open("/etc/httpd/logs/ssl_access_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 14
[pid 13595] open("/etc/httpd/logs/ssl_request_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 15
[pid 13595] open("/etc/httpd/modules/mod_log_config.so", O_RDONLY) = 9
[pid 13595] open("/etc/httpd/modules/mod_logio.so", O_RDONLY) = 9
[pid 13596] open("/etc/httpd/logs/error_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 10
[pid 13596] open("/etc/httpd/logs/ssl_error_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 11
open("/etc/httpd/logs/access_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 12
open("/etc/httpd/logs/cm4msaa7.com", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 13
open("/etc/httpd/logs/ssl_access_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 14
open("/etc/httpd/logs/ssl_request_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = 15

The log files jump out at you, don't they? Because we know that Apache will want to open its log files when it starts, all we have to do is follow all the system calls it makes when it starts, and we'll find all of those files. Easy, right?

Real World Example: Locating Errors and Failures

Another valuable use of strace involves looking for errors. If a program fails when it makes a system call, you'll want to be able to pinpoint any errors that might have caused that failure as you troubleshoot. In all cases where a system call fails, strace will return a line with "= -1" in the output, followed by an explanation. Note: The space before -1 is very important, and you'll see why in a moment.

For this example, let's say Apache isn't starting for some reason, and the logs aren't telling us anything about why. Let's run strace:

strace -Ff -o output.txt -e open /etc/init.d/httpd start

Apache will attempt to start, and when it fails, we can grep our output.txt for '= -1' to see any system calls that failed:

$ cat output.txt | grep '= -1'
[pid 13748] open("/etc/selinux/config", O_RDONLY|O_LARGEFILE) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/tls/i686/sse2/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/tls/i686/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/tls/sse2/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/tls/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/i686/sse2/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/i686/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/sse2/libperl.so", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/libnsl.so.1", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/usr/lib/perl5/5.8.8/i386-linux-thread-multi/CORE/libutil.so.1", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/etc/gai.conf", O_RDONLY) = -1 ENOENT (No such file or directory)
[pid 13748] open("/etc/httpd/logs/error_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)

With experience, you'll come to understand which errors matter and which ones don't. Most often, the last error is the most significant. The first few lines show the program trying different libraries to see if they are available, so they don't really matter to us in our pursuit of what's going wrong with our Apache restart. We scan down and find the last line:

[pid 13748] open("/etc/httpd/logs/error_log", O_WRONLY|O_CREAT|O_APPEND|O_LARGEFILE, 0666) = -1 EACCES (Permission denied)

After a little investigation on that file, I see that some maniac has set the immutable attribute:

lsattr /etc/httpd/logs/error_log
----i-------- /etc/httpd/logs/error_log

Our error couldn't be found in the log file because Apache couldn't open it! You can imagine how long it might take to figure out this particular problem without strace, but with this useful tool, the cause can be found in minutes.
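
For completeness, clearing that immutable bit (as root) and starting Apache again would look something like this, assuming the same SysV-style init script used above:

chattr -i /etc/httpd/logs/error_log
/etc/init.d/httpd start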

Go and Try It!

All major Linux distros have strace available — just type strace at the command line for the basic usage. If the command is not found, install it via your distribution's package manager. Get in there and try it yourself!

For a fun first exercise, bring up a text editor in one terminal, then strace the editor process in another with the -p flag (strace -p <process_id>) since we want to look at an already-running process. When you go back and type in the text editor, the system calls will be shown in strace as you type ... You see what's happening in real time!
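
A hedged one-liner for that exercise, assuming the editor is vim and that pgrep is available (otherwise just grab the PID from ps):

strace -p $(pgrep -n vim)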

-Lee

March 5, 2012

iptables Tips and Tricks - Not Locking Yourself Out

The iptables tool is one of the simplest, most powerful tools you can use to protect your server. We've covered port redirection, rule processing and troubleshooting in previous installments to this "Tips and Tricks" series, but what happens when iptables turns against you and locks you out of your own system?

Getting locked out of a production server can cost both time and money, so it's worth your time to avoid this. If you follow the correct procedures, you can safeguard yourself from being firewalled off of your server. Here are seven helpful tips to help you keep your sanity and prevent you from locking yourself out.

Tip 1: Keep a safe ruleset handy.

If you are starting with a working ruleset, or even if you are trying to troubleshoot an existing ruleset, take a backup of your iptables configuration before you ever start working on it.

iptables-save > /root/iptables-safe

Then if you do something that prevents your website from working, you can quickly restore it.

iptables-restore < /root/iptables-safe

Tip 2: Create a cron script that will reload to your safe ruleset every minute during testing.

This was pointed out to me by a friend who swears by this method. Just write a quick bash script and set a cron entry that will reload it back to the safe set every minute, as sketched below. You'll have to test quickly, but it will keep you from getting locked out.
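
A minimal sketch of that safety net, assuming you saved the known-good ruleset to /root/iptables-safe as in Tip 1 — add it with crontab -e and remove it as soon as you're done testing:

* * * * * /sbin/iptables-restore < /root/iptables-safe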

Tip 3: Have the IPMI KVM ready.

SoftLayer-pod servers* are equipped with some sort of remote access device. Most of them have a KVM console. You will want to have your VPN connection set up, connected and the KVM window up. You can't paste to and from the KVM, so SSH is typically easier to work with, but it will definitely cut down on the downtime if something does go wrong.

*This may not apply to servers that were originally provisioned under another company name.

Tip 4: Try to avoid generic rules.

The more criteria you specify in the rule, the less chance you will have of locking yourself out. I would liken this to a pie. A specific rule is a very thin slice of the pie.

iptables -A INPUT -p tcp --dport 22 -s 10.0.0.0/8 -d 123.123.123.123 -j DROP

But if you block port 22 from any to any, it's a very large slice.

iptables -A INPUT -p tcp --dport 22 -j DROP

There are plenty of ways that you can be more specific. For example, using "-i eth0" will limit the processing to a single NIC in your server. This way, it will not apply the rule to eth1.
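
For example, a narrower version of the DROP rule above that only applies to the first NIC might look like this (the interface name and addresses are illustrative):

iptables -A INPUT -i eth0 -p tcp --dport 22 -s 10.0.0.0/8 -d 123.123.123.123 -j DROP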

Tip 5: Whitelist your IP address at the top of your ruleset.

This may make testing more difficult unless you have a secondary offsite test server, but this is a very effective method of not getting locked out.

iptables -I INPUT -s <your IP> -j ACCEPT

You need to put this as the FIRST rule in order for it to work properly ("-I" inserts it as the first rule, whereas "-A" appends it to the end of the list).

Tip 6: Know and understand all of the rules in your current configuration.

Not making the mistake in the first place is half the battle. If you understand the inner workings behind your iptables ruleset, it will make your life easier. Draw a flow chart if you must.

Tip 7: Understand the way that iptables processes rules.

Remember, the rules start at the top of the chain and go down, unless specified otherwise. Crack open the iptables man page and learn about the options you are using.

-Mark

January 9, 2012

iptables Tips and Tricks – Troubleshooting Rulesets

One of the most time consuming tasks with iptables is troubleshooting a problematic ruleset. That will not change no matter how much experience you have with it. However, with the right mindset, this task becomes considerably easier.

If you missed my last installment about iptables rule processing, here's a crash course:

  1. The rules start at the top, and proceed down, one by one unless otherwise directed.
  2. A rule must match exactly.
  3. Once iptables has accepted, rejected, or dropped a packet, it will not process any further rules on it.

There are essentially two things that you will be troubleshooting with iptables ... Either it's not accepting traffic and it should be OR it's accepting traffic and it shouldn't be. If the server is intermittently blocking or accepting traffic, that may take some additional troubleshooting, and it may not even be related to iptables.

Keep in mind what you are looking for, and don't jump to any conclusions. Troubleshooting iptables takes patience and time, and there shouldn't be any guesswork involved. If you have a configuration of 800 rules, you should expect to need to look through every single rule until you find the rule that is causing your problems.

Before you begin troubleshooting, you first need to know some information about the traffic:

  1. What is the source IP address or range that is having difficulty connecting?
  2. What is the destination IP address or website IP?
  3. What is the port or port range affected, or what type of traffic is it (TCP, ICMP, etc.)?
  4. Is it supposed to be accepted or blocked?

Those bits of information should be all you need to begin troubleshooting a buggy ruleset, except in some rare cases that are outside the scope of this article.

Here are some things to keep in mind (especially if you did not program every rule by hand):

  • iptables has three built-in chains. These are for INPUT – the traffic coming in to the server, OUTPUT – the traffic coming out of the server, and FORWARD – traffic that is not destined to or coming from the server (usually only used when iptables is acting as a firewall for other servers). You will start your troubleshooting at the top of one of these three chains, depending on the type of traffic.
  • The "target" is the action that is taken when the rule matches. This may be another custom chain, so if you see a rule with another chain as the target that matches exactly, be sure to step through every rule in that chain as well. In the following example, you will see the BLACKLIST2 sub-chain that applies to traffic on port 80. If traffic comes through on port 80, it will be diverted to this other chain.
  • The RETURN target indicates that you should return to the parent chain. If you see a rule that matches with a RETURN target, stop all your troubleshooting on the current chain, and return to the rule directly after the one that referenced the custom chain.
  • If there are no matching rules, the chain policy is applied.
  • There may be rules in the "nat," "mangle" or "raw" tables that are blocking or diverting your traffic. Typically, all the rules will be in the "filter" table, but you might run into situations where this is not the case. Try running this to check: iptables -t mangle -nL ; iptables -t nat -nL ; iptables -t raw -nL
  • Be cognisant of the policy. If the policy is ACCEPT, all traffic that does not match a rule will be accepted. Conversely, if the policy is DROP or REJECT, all traffic that does not match a rule will be blocked.
  • My goal with this article is to introduce you to the algorithm by which you can troubleshoot a more complex ruleset. It is intentionally left simple, but you should still follow through even when the answer may be obvious.

Here is an example ruleset that I will be using for an example:

Chain INPUT (policy DROP)
target prot opt source destination
BLACKLIST2 tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:50
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1010

Chain BLACKLIST2 (1 references)
target prot opt source destination
REJECT * -- 123.123.123.123 0.0.0.0/0
REJECT * -- 45.34.234.234 0.0.0.0/0
ACCEPT * -- 0.0.0.0/0 0.0.0.0/0

Here is the problem: Your server is accepting SSH traffic to anyone, and you wish to only allow SSH to your IP – 111.111.111.111. We know that this is inbound traffic, so this will affect the INPUT chain.

We are looking for:

source IP: any
destination IP: any
protocol: tcp
port: 22

Step 1: The first rule denotes any source IP and any destination IP on destination port 80. Since this is regarding port 22, this rule does not match, so we'll continue to the next rule. If the traffic here was on port 80, it would invoke the BLACKLIST2 sub-chain.
Step 2: The second rule denotes any source IP and any destination IP on destination port 50. Since this is regarding port 22, this rule does not match, so let's continue on.
Step 3: The third rule denotes any source IP and any destination IP on destination port 53. Since this is regarding port 22, this rule does not match, so let's continue on.
Step 4: The fourth rule denotes any source IP and any destination IP on destination port 22. Since this is regarding port 22, this rule matches exactly. The target ACCEPT is applied to the traffic. We found the problem, and now we need to construct a solution. I will be showing you the Redhat method of doing this.

Do this to save the running ruleset as a file:

iptables-save > current-iptables-rules

Then edit the current-iptables-rules file in your favorite editor, and find the rule that looks like this:

-A INPUT -p tcp --dport 22 -j ACCEPT

Then you can modify this to only apply to your IP address (the source, or "-s", IP address).

-A INPUT -p tcp -s 111.111.111.111 --dport 22 -j ACCEPT

Once you have this line, you will need to load the iptables configuration from this file for testing.

iptables-restore < current-iptables-rules

Don't directly edit the /etc/sysconfig/iptables file as this might lock you out of your server. It is good practice to test a configuration before saving to the system configuration files. This way, if you do get locked out, you can reboot your server and it will be working. The ruleset should look like this now:

Chain INPUT (policy DROP)
target prot opt source destination
BLACKLIST2 tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:50
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:53
ACCEPT tcp -- 111.111.111.111 0.0.0.0/0 tcp dpt:22
ACCEPT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:1010

Chain BLACKLIST2 (1 references)
target prot opt source destination
REJECT * -- 123.123.123.123 0.0.0.0/0
REJECT * -- 45.34.234.234 0.0.0.0/0
ACCEPT * -- 0.0.0.0/0 0.0.0.0/0

The policy of "DROP" will now block any other connection on port 22. Remember, the rule must match exactly, so the rule on port 22 now *ONLY* applies if the IP address is 111.111.111.111.

Once you have confirmed that the rule is behaving properly (be sure to test from another IP address to confirm that you are not able to connect), you can write the system configuration:

service iptables save

If this troubleshooting sounds boring and repetitive, you are right. However, this is the secret to solid iptables troubleshooting. As I said earlier, there is no guesswork involved. Just take it step by step, make sure the rule matches exactly, and follow it through till you find the rule that is causing the problem. This method may not be fast, but it's reliable. You'll look like an expert in no time.

-Mark

January 5, 2012

iptables Tips and Tricks - Rule Processing

As I mentioned in "iptables Tips and Tricks - Port Redirection," iptables is probably a complete mystery to a lot of users, and one of the biggest hurdles is understanding the method by which it filters traffic ... Once you understand this, you'll be able to tame the beast.

When I think of iptables, the best analogy that comes to mind is a gravity coin sorting bank with four rules and one policy. If you're not familiar with a gravity coin sorting bank, each coin starts at the same place and slides down an inclined plane until it falls into its appropriate tube:

iptables Rule Sorter

As you can see, once a coin starts down the path, there are four rules – each one "filtering traffic" based on the width of the coin in millimeters (Quarter = 25mm, Nickel = 22mm, Penny = 20mm, Dime = 18mm). Due to possible inconsistencies in the coins, the tube widths are slightly larger than the official sizes of each coin to prevent jamming. At the end of the line, if a coin didn't fit in any of the tubes, it's dropped out of the sorter.

As we use this visualization to apply to iptables, there are three important things to remember:

  1. The rules start at the top, and proceed down, one by one unless otherwise directed.
  2. A rule must match exactly.
  3. Once iptables has accepted, rejected, or dropped a packet, it will not process any further rules on it.

Let's jump back to the coin sorter. What would happen if you introduced a 23mm coin (slightly larger than a nickel)? What would happen if you introduced a 17mm coin (smaller than a dime)? What would happen if you dropped in a $1 coin @ 26.5mm?

In the first scenario, the coin would enter into the rule processing by being dropped in at the top. It would first pass by the dime slot, which requires a diameter of less than 18mm. It passes by the pennies slot as well, which requires less than 20mm. It continues past the nickels slot, which requires 22mm or less. It will then be "accepted" into the quarters slot, and there will be no further "processing" on the coin.

The iptables rules might look something like this:

Chain INPUT (policy DROP)
target prot opt source destination
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 width < 18mm
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 width < 20mm
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 width <= 22mm
ACCEPT all -- 0.0.0.0/0 0.0.0.0/0 width <= 26mm

It's important to remember that once iptables has accepted, rejected, or dropped a packet, it will not process any further rules on it. In the second scenario (17mm coin), the coin would only be processed through the first rule; the other 3 rules would not be used even though the coin would meet their criteria as well. Just because a port or an IP address is allowed somewhere in a chain, if a matching rule has dropped the packet, no further rules will be processed.

The final scenario (26.5mm coin) outlines a situation where none of the rules match, and this indicates that the policy will be used. In the coin bank example, it would be physically dropped off the side of the bank. iptables keeps a tally of the number of packets dropped and the corresponding size of the data. You can view this data by using the "iptables -vnL" command.

Chain OUTPUT (policy ACCEPT 3418K packets, 380M bytes)

cPanel even uses this tally functionality to track bandwidth usage (You may have seen the "acctboth" chain - this is used for tracking usage per IP).

So there you have it: iptables is just like a gravity coin sorting bank!

-Mark

December 26, 2011

iptables Tips and Tricks - Port Redirection

One of the most challenging and rewarding aspects of Linux administration is the iptables firewall. To the unenlightened, this can be a confusing black box that breaks your web server and blocks your favorite visitors from viewing your content at the most inconvenient times. This blog is the first in a series aimed at clarifying this otherwise mysterious force at work in your server.

Nothing compares with the frustration of trying to make a program answer on a different port – like if you wanted your mail server to accept client connections on port 2525. Many times, configuring a program the hard way (some would say the "correct" way) using configuration files may not be worth your time and effort ... Especially if the server is running on a control panel that does not natively support this functionality.

Fortunately, iptables offers an elegant solution:

iptables -t nat -A PREROUTING -p tcp --dport 2525 -j REDIRECT --to-ports 25

What this does:

  1. This specifies -t nat to indicate the nat table. Typically rules are added to the "filter" table (if you do not specify another table), and this is where the majority of the traffic is handled. In this case, however, we require the use of the nat table.
  2. This rule is appended (-A), which means it is added at the bottom of the list.
  3. This rule is added to the PREROUTING chain.
  4. For the tcp protocol (-p tcp)
  5. The destination port (--dport) is 2525 - this is the port that the client is trying to access on your server.
  6. The traffic is jumped (-j) to the REDIRECT action. This is the action that is taken when the rule matches.
  7. The port is redirected to port 25 on the server.

As you can see, by changing the protocol to either tcp or udp or by adjusting the dport number and the to-ports number, you can redirect any incoming port to any listening port on the server. Just remember that the dport is the port the client machine is trying to connect to (the port they configure in the mail client, for example).
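
A quick way to confirm your redirect took effect is to list the nat table — the --line-numbers flag is optional, but handy if you need to delete or adjust the rule later:

iptables -t nat -nL PREROUTING --line-numbers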

But check this out: Say for example you have a website (shocking, I know). You don't have a load balancer or a firewall set up, but you want to split off your email traffic to a second server to reduce strain on your web server. Essentially, you want to take incoming port 25 and redirect it ... to ANOTHER SERVER. With iptables, you can make this work:

iptables -t nat -A PREROUTING -p tcp -d 123.123.123.123 --dport 25 -j DNAT --to-destination 10.10.10.10:25

What this does:

  1. It specifies a destination (-d) IP address. This is not needed, but if you want to limit the email redirection to a single address, this is how you can do it.
  2. It is jumped to DNAT, which stands for destination nat.
  3. The destination address and port are specified as arguments to --to-destination.

As you can see, this forwards all traffic on port 25 to an internal IP address.

Now, say you want to redirect from a different incoming port to a port on another server:

iptables -t nat -A PREROUTING -p tcp --dport 5001 -j DNAT --to-destination 10.10.10.10:25
iptables -t nat -A POSTROUTING -p tcp --dport 25 -j MASQUERADE

In this example, the incoming port is different, so we add a second rule: the MASQUERADE rule in the POSTROUTING chain rewrites the source address of the forwarded traffic so that the second server's replies flow back through the primary server, which translates them back for the original client.

If you would like further reading on this topic, I recommend this great tutorial:
http://www.karlrupp.net/en/computer/nat_tutorial

Remember, when you are modifying your running configuration of iptables, you will still need to save your changes in order for it to persist on reboot. Be sure to test your configuration before saving it with "service iptables save" so that you don't lock yourself out.

-Mark

December 1, 2011

UNIX Sysadmin Boot Camp: Permissions

I hope you brought your sweat band ... Today's Boot Camp workout is going to be pretty intense. We're focusing on our permissions muscles. Permissions in a UNIX environment cause a lot of customer issues ... While everyone understands the value of secure systems and limited access, any time an "access denied" message pops up, the most common knee-jerk reaction is to enable full access to one's files (chmod 777, as I'll explain later). This is a BAD IDEA. Open permissions are a hacker's dream come true. An open permission setting might have been a temporary measure, but more often than not, the permissions are left in place, and the files remain vulnerable.

To better understand how to use permissions, let's take a step back and get a quick refresher on key components.

You'll need to remember the three permission types:

r w x: r = read; w = write; x = execute

And the three types of access they can be applied to:

u g o: u = user; g = group; o = other

Permissions are usually displayed in one of two ways – either with letters (rwxrwxrwx) or numbers (777). When the permissions are declared with letters, you should look at it as three sets of three characters. The first set applies to the user, the second applies to the group, and the third applies to other (everyone else). If a file is readable only by the user and cannot be written to or executed by anyone, its permission level would be r--------. If it could be read by anyone but could only be writeable by the user and the group, its permission level would be rw-rw-r--.

The numeric form of chmod uses numeric values to represent permission levels. Read access is represented by the value 4, write by 2, and execute by 1. When you want a file to have read and write access, you just add the permission values: 4 + 2 = 6. When you want a file to have read, write and execute access, you'll have 4 + 2 + 1, or 7. You'd then apply that numerical permission to a file in the same order as above: user, group, other. If we used the example from the last sentence in the previous paragraph, a file that could be read by anyone, but could only be writeable by the user and the group, would have a numeric permission level of 664 (user: 6, group: 6, other: 4).
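
As a quick worked example (the filename is just a placeholder), giving the user read and write (4 + 2 = 6), the group read only (4), and everyone else nothing (0) would look like this:

chmod 640 report.txt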

Now the "chmod 777" I referenced above should make a little more sense: All users are given all permissions (4 + 2 + 1 = 7).

Applying Permissions

Understanding these components, applying permissions is pretty straightforward with the use of the chmod command. If you want a user (u) to write and execute a file (wx) but not read it (r), you'd use something like this:

chmod Output
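
In plain text, the command being described looks something like this — the filename is a placeholder, and the exact wording of chmod's verbose output varies a bit between coreutils versions:

chmod -v u-r,u+wx example.txt
mode of 'example.txt' changed to 0300 (-wx------)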

In the terminal output above, I added the -v parameter to make it "verbose," so it displays the related output or results of the command. The permissions set by the command are shown by the number 0300 and the series (-wx------). Nobody but the user can write or execute this file, and as of now, the user can't even read the file. If you were curious about the leading 0 in "0300," it simply means that you're viewing an octal output, so for our purposes, it can be ignored entirely.

In that command, we're removing the read permission from the user (hence the minus sign between u and r), and we're giving the user write and execute permissions with the plus sign between u and wx. Want to alter the group or other permissions as well? It works exactly the same way: g+,g-,o+,o- ... Getting the idea? chmod permissions can be set with the letter-based commands (u+r,u-w) or with their numeric equivalents (eg. 400 or 644), whichever floats your boat.

A Quick Numeric chmod Reference

chmod 777 | Gives specified file read, write and execute permissions (rwx) to ALL users
chmod 666 | Allows for read and write privileges (rw) to ALL users
chmod 555 | Gives read and execute permissions (rx) to ALL users
chmod 444 | Gives read permissions (r) to ALL users
chmod 333 | Gives write and execute permissions (wx) to ALL users
chmod 222 | Gives write privileges (w) to ALL users
chmod 111 | Gives execute privileges (x) to ALL users
chmod 000 | Last but not least, gives permissions to NO ONE (Careful!)

Get a List of File Permissions

To see what your current file permissions are in a given directory, execute the ls -l command. This returns a list of the current directory including the permissions, the group it's in, the size and the last date the file was modified. The output of ls -l looks like this:

ls -l Output
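
For reference, a typical listing looks something like this (the filenames, owner and dates here are made up):

drwxr-xr-x 2 ryan ryan 4096 Nov 28 09:12 backups
-rw-r--r-- 1 ryan ryan  220 Nov 28 09:12 notes.txt
-rwxr-xr-x 1 ryan ryan  751 Nov 28 09:12 cleanup.sh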

On the left side of that listing, you'll see the permissions in the rwx format. When the permission begins with the "d" character, it means that object is a directory. When the permission starts with a dash (-), it is a file.

Practice Deciphering Permissions

Let's look at a few examples and work backward to apply what we've learned:

  • Example 1: -rw-------
  • Example 2: drwxr-x---
  • Example 3: -rwxr-xr-x

In Example 1, the file is not a directory, the user that owns this particular object has read and write permissions, and when the group and other fields are filled with dashes, we know that their permissions are set to 0, so they have no access. In this case, only the user who owns this object can do anything with it. We'll cover "ownership" in a future blog, but if you're antsy to learn right now, you can turn to the all-knowing Google.

In Example 2, the permissions are set on a directory. The user has read, write and execute permissions, the group has read and execute permissions, and anything/anyone besides user or group is restricted from access.

For Example 3, put yourself to the test. What access is represented by "-rwxr-xr-x"? The answer is included at the bottom of this post.

Wrapping It Up

How was that for a crash course in Unix environment permissions? Of course there's more to it, but this will at least make you think about what kind of access you're granting to your files. Armed with this knowledge, you can create the most secure server environment.

Here are a few useful links you may want to peruse at your own convenience to learn more:

Linuxforums.org
Zzee.com
Comptechdoc.org
Permissions Calculator

Did I miss anything? Did I make a blatantly ridiculous mistake? Did I use "their" when I should have used "they're"??!!... Let me know about it. Leave a comment if you've got anything to add, suggest, subtract, quantize, theorize, ponderize, etc. Think your useful links are better than my useful links? Throw those at me too, and we'll toss 'em up here.

Are you still feeling the burn from your Sysadmin Boot Camp workout? Don't forget to keep getting reps in bash, logs, SSH, passwords and user management!

- Ryan

Example 3 Answer: -rwxr-xr-x describes a regular file (not a directory) that its user can read, write and execute, and that the group and everyone else can read and execute.

November 15, 2011

UNIX Sysadmin Boot Camp: User Management

Now that you're an expert when it comes to bash, logs, SSH, and passwords, you're probably foaming at the mouth to learn some new skills. While I can't equip you with the "nunchuck skills" or "bowhunting skills" Napoleon Dynamite reveres, I can help you learn some more important — though admittedly less exotic — user management skills in UNIX.

Root User

The root user — also known as the "super user" — has absolute control over everything on the server. Nothing is held back, nothing is restricted, and anything can be done. Only the server administrator should have this kind of access to the server, and you can see why. The root user is effectively the server's master, and the server accordingly will acquiesce to its commands.

Broad root access should be avoided for the sake of security. If a program or service needs extensive abilities that are generally reserved for the root user, it's best to grant those abilities on a narrow, as-needed basis.

Creating New Users

Because the Sysadmin Boot Camp series is geared toward server administration from a command-line point of view, that's where we'll be playing today. Tasks like user creation can be performed fairly easily in a control panel environment, but it's always a good idea to know the down-and-dirty methods as a backup.

The useradd command is used for adding users from shell. Let's start with an example and dissect the pieces:

useradd -c "admin" -d /home/username -g users\ -G admin,helpdesk -s\ /bin/bash userid

-c "admin" – This command adds a comment to the user we're creating. The comment in this case is "admin," which may be used to differentiate the user a little more clearly for better user organization.
-d /home/username – This block sets the user's home directory. The most common approach is to replace username with the username designated at the end of the command.
-g users – Here, we're setting the primary group for the user we're creating, which will be users.
-G admin,helpdesk – This block specifies other user groups the new user may be a part of.
-s /bin/bash userid – This block is in two parts. It says that the new user will use /bin/bash for its shell and that userid will be the new user's username.
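
To confirm the account and its group memberships after creation, you can run a quick check with the id command (the numeric IDs in this sample output are only illustrative):

id userid
uid=1001(userid) gid=100(users) groups=100(users),501(admin),502(helpdesk)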

Changing Passwords

Root is the only user that can change other users' passwords. The command to do this is:

passwd userid

If you are a user and want to change your own password, you would simply issue the passwd command by itself. When you execute the command, you will be prompted for a new entry. This command can also be executed by the root user to change the root password.

Deleting Users

The command for removing users is userdel, and if we were to execute the command, it might look like this:

userdel -r username

The -r designation is your choice. If you choose to include it, the command will remove the home directory of the specified user.

Where User Information is Stored

The /etc/passwd file contains all user information. If you want to look through the file one page at a time — the way you'd use /p in Windows — you can use the more command:

more /etc/passwd

Keep in mind that most of your important configuration files are going to be located in the /etc folder, commonly spoken with an "et-see" pronunciation for short. Each line in the passwd file has information on a single user. Arguments are segmented with colons, as seen in the example below:

username:password:12345:12345::/home/username:/bin/bash

Argument 1 – username – the user's username
Argument 2 – password – the user's password
Argument 3 – 12345 – the user's numeric ID
Argument 4 – 12345 – the user group's numeric ID
Argument 5 – "" – where either a comment or the user's full name would go
Argument 6 - /home/username – the user's home directory
Argument 7 - /bin/bash – the user's default console shell
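
Because the fields are colon-separated, standard text tools can pull them apart. For example, to list each username along with its numeric ID and home directory:

awk -F: '{print $1, $3, $6}' /etc/passwd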

Now that you've gotten a crash course on user management, we'll start going deeper into group management, more detailed permissions management and the way the shadow file relates to the passwd usage discussed above.

-Ryan

November 14, 2011

My Road to LPIC-1 Certification

I've been a Linux user for many years, but for various reasons I never bothered to get a certification even though it's a fantastic validation of Linux skills. When I moved up in the world by joining SoftLayer, my attitude quickly changed.

As a new Systems Administrator at SoftLayer, one of the first challenges I was presented with was to try for my LPIC-1 certification. True to SoftLayer's motto of "Challenging, but not Overwhelming," I was given 3 months, a practice environment and reimbursement for my fees if I passed the tests. With an offer like that, it was impossible to refuse.

The LPIC-1 tests are not easy, and it took a lot of work to pass them, but if you're interested all you need to succeed is a solid background in Linux and the time to dedicate to preparation. Here are some of the things I learned along the way:

  1. Don't attempt the LPIC-1 exam unless you have at least a couple of years' worth of hands-on Linux experience. Seriously, it's not for newbies.
  2. Acquire at least two test-prep books, and read one of them every day. I used O'Reilly's LPIC-1 Certification in a Nutshell and LPIC-1 In Depth by Michael Jang. Both are easy to read, have good explanations of concepts you need to understand, and provide valuable tips in addition to practice exams.
  3. Set up a practice environment. It's essential for reviewing commands you may not be familiar with.
  4. When you think you are ready for the first exam, take a few free practice tests online. There are a number of them available.
  5. I didn't buy any test-prep software, but I did download a couple of trial versions as they offered some free practice questions.
  6. Take all of the practice exams available to you several times each. You'll get more comfortable with the format of the test questions and will also learn which areas you need to revisit before the actual test.

After earning the LPIC-1 certification I received a nice surprise in my mailbox along with my certificate. Apparently Novell and the Linux Professional Institute have a partnership: By earning the LPIC-1 I had also satisfied the requirements for Novell's Certified Linux Administrator (CLA) certification, so now I can enjoy the benefits of having two IT certifications for the price of one and I have SoftLayer to thank for it!

-Todd
