Posts Tagged 'Command'

April 16, 2013

iptables Tips and Tricks - Track Bandwidth with iptables

As I mentioned in my last post about CSF configuration in iptables, I'm working on a follow-up post about integrating CSF into cPanel, but I thought I'd inject a simple iptables use case for bandwidth tracking. You probably think about iptables in terms of firewalls and security, but it also includes a great diagnostic tool for counting bandwidth for individual rules or sets of rules. If you can block it, you can track it!

The best part about using iptables to track bandwidth is that the tracking is enabled by default. To see this feature in action, add the "-v" flag to the command:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 2495 packets, 104K bytes)

The output includes counters for both the policies and the rules. To track the rules, you can create a new chain for tracking bandwidth:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -vnL
...
Chain tracking (0 references)
 pkts bytes target prot opt in out source           destination

Then you need to set up new rules to match the traffic that you wish to track. In this scenario, let's look at inbound http traffic on port 80:

[root@server ~]$ iptables -I INPUT -p tcp --dport 80 -j tracking
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35111 packets, 1490K bytes)
 pkts bytes target prot opt in out source           destination
    0   0 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

Now let's generate some traffic and check it again:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35216 packets, 1500K bytes)
 pkts bytes target prot opt in out source           destination
  101  9013 tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80

You can see the packet and byte counts used to track the INPUT — traffic to a destination port on your server. If you want to track the amount of data that the server is generating, you'd look for OUTPUT from the source port on your server:

[root@server ~]$ iptables -I OUTPUT -p tcp --sport 80 -j tracking
[root@server ~]$ iptables -vnL
...
Chain OUTPUT (policy ACCEPT 26149 packets, 174M bytes)
 pkts bytes target prot opt in out source           destination
  488 3367K tracking    tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80

Now that we know how the tracking chain works, we can add a few more layers to collect even more information while keeping your INPUT and OUTPUT chains looking clean:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -N tracking2
[root@server ~]$ iptables -I INPUT -j tracking
[root@server ~]$ iptables -I OUTPUT -j tracking
[root@server ~]$ iptables -A tracking -p tcp --dport 80 -j tracking2
[root@server ~]$ iptables -A tracking -p tcp --sport 80 -j tracking2
[root@server ~]$ iptables -vnL
 
Chain INPUT (policy ACCEPT 96265 packets, 4131K bytes)
 pkts bytes target prot opt in out source           destination
 4002  184K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source           destination
 
Chain OUTPUT (policy ACCEPT 33751 packets, 231M bytes)
 pkts bytes target prot opt in out source           destination
 1399 9068K tracking    all  --  *  *   0.0.0.0/0        0.0.0.0/0
 
Chain tracking (2 references)
 pkts bytes target prot opt in out source           destination
 1208 59626 tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp dpt:80
  224 1643K tracking2   tcp  --  *  *   0.0.0.0/0        0.0.0.0/0       tcp spt:80
 
Chain tracking2 (2 references)
 pkts bytes target prot opt in out source           destination

Keep in mind that every time a packet passes through one of your rules, it will eat CPU cycles. Diverting all your traffic through 100 rules that track bandwidth may not be the best idea, so it's important to have an efficient ruleset. If your server has eight processor cores and tons of overhead available, that concern might be inconsequential, but if you're running lean, you could conceivably run into issues.

The easiest way to think about building efficient rulesets is to eat the largest slice of pie first. Understand iptables rule processing and put the rules that get more traffic higher in your list. Conversely, save the tiniest pieces of your pie for last. If you run all of your traffic by a rule that only applies to a tiny segment before you screen out larger segments, you're wasting processing power.
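
As a rough sketch of that idea (these rules are generic placeholders, not part of the tracking setup above): on most servers, the biggest slice of pie is traffic belonging to connections that are already established, so a rule accepting it sits at the very top, while a narrow rule that only matters for a trickle of packets goes below it:

[root@server ~]$ iptables -I INPUT 1 -m state --state ESTABLISHED,RELATED -j ACCEPT
[root@server ~]$ iptables -A INPUT -p tcp --dport 2222 -j DROP

Because that first rule is a terminating ACCEPT, the bulk of your packets never have to be checked against anything below it.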

Another thing to keep in mind is that you do not need to specify a target (in our examples above, we established tracking and tracking2 as our targets). If you're used to each rule having a specific purpose of either blocking, allowing, or diverting traffic, this simple tidbit might seem revolutionary. For example, we could use this rule:

[root@server ~]$ iptables -A INPUT

If that seems a little bare to you, don't worry ... It is! The output will show that it is a rule that simply counts all traffic reaching that point in the chain. We're appending the rule to the end of the chain in this example ("-A"), but we could also insert it ("-I") at the top of the chain instead. This can be helpful if you are using a number of different chains and you want to see the exact volume of packets that reach any given point. It's also a handy way to see how much traffic a potential rule would match before you enforce it on your production system. Because having several of these target-less rules can get a little messy, it's also helpful to add comments to help sort things out:

[root@server ~]$ iptables -A INPUT -m comment --comment "track all data"
 
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 11M packets, 5280M bytes)
 pkts bytes target prot opt in out source           destination
   98  9352        all  --  *  *   0.0.0.0/0        0.0.0.0/0       /* track all data */
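
Two standard iptables flags are worth keeping in your back pocket here (they aren't specific to this setup): "-x" displays exact byte counts instead of the rounded K/M figures, and "-Z" zeroes the counters, which is handy when you want to measure a specific window of time:

[root@server ~]$ iptables -vnxL INPUT
[root@server ~]$ iptables -Z INPUT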

Nothing terribly complicated about using iptables to count bandwidth, right? If you have iptables rulesets and you want to get a glimpse at how your traffic is being affected, this little trick could be useful. You can rely on the information iptables gives you about your bandwidth usage, and you won't be the only one ... cPanel actually uses iptables to track bandwidth.

-Mark

December 8, 2011

UNIX Sysadmin Boot Camp: bash - Keyboard Shortcuts

On the support team, we're jumping in and out of shells constantly. At any time during my work day, I'll see at least four instances of PuTTY in my task bar, so one thing I learned quickly was that efficiency and accuracy at the command line ultimately make life easier for our customers and for us as well. Spending too much time retyping paths and commands, fumbling through vi and cycling through history can really bring you to a crawl. So now that you have had some time to study bash and practice a little, I thought I'd share some of the keyboard shortcuts that help us work as effectively and as expediently as we do. I won't be able to cover all of the shortcuts, but these are the ones I use most:

Tab

[Tab] is one of the first keyboard shortcuts that most people learn, and it's ever-so-convenient. Let's say you just downloaded pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz, but a quick listing of the directory shows you ALSO downloaded 5.1.11, 4.8.6 and 1.2.3 at some point in the past. What was that file name again? Fret not. You know you downloaded 5.2.something, so you just start with, say, pckg, and hit [Tab]. This autocompletes everything that it can match to a unique file name, so if there are no other files that start with "pckg," it will populate the whole file name (and this can occur at any point in a command).

In this case, we've got four different files that are similar:
pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-5.1.11-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-4.8.6-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-1.2.3-v54-2-x86-686-Debian.tar.gz

So typing "pckg" and hitting [Tab] brings up:
pckg54andahalf-

NOW, what you could do, knowing what files are there already, is type "5.2" and hit [Tab] again to fill out the rest. However, if you didn't know what the potential matches were, you could double-tap [Tab]. This displays all matching file names with that string.
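
Here's roughly what that looks like with the example files above (your shell may lay the columns out differently):

pckg54andahalf-[Tab][Tab]
pckg54andahalf-1.2.3-v54-2-x86-686-Debian.tar.gz   pckg54andahalf-5.1.11-v54-2-x86-686-Debian.tar.gz
pckg54andahalf-4.8.6-v54-2-x86-686-Debian.tar.gz   pckg54andahalf-5.2.17-v54-2-x86-686-Debian.tar.gz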

Another fun fact: This trick also works in Windows. ;)

CTRL+R

[CTRL+R] is a very underrated shortcut in my humble opinion. When you've been working in the shell for untold hours parsing logs, moving files and editing configs, your bash history can get pretty immense. Often you'll come across a situation where you want to reproduce a command or series of commands that were run regarding a specific file or circumstance. You could type "history" and pore through the commands line by line, but I propose something more efficient: a reverse search.

Example: I've just hopped on my system and discovered that my SVN server isn't doing what it's supposed to. I want to take a look at any SVN related commands that were executed from bash, so I can make sure there were no errors. I'd simply hit [CTRL+R], which would pull up the following prompt:

(reverse-i-search)`':

Typing "s" at this point would immediately return the first command with the letter "s" in it in the history ... Keep in mind that's not just starting with s, it's containing an s. Finishing that out to "svn" brings up any command executed with those letters in that order. Pressing [CTRL+R] again at this point will cycle through the commands one by one.

In the search, I find the command that was run incorrectly ... There was a typo in it. I can edit the command within the search prompt before hitting enter and committing it to the command prompt. Pretty handy, right? This can quickly become one of your most used shortcuts.

CTRL+W & CTRL+Y

This pair of shortcuts is the one I find myself using the most. [CTRL+W] will basically take the word before your cursor and "cut" it, just like you would with [CTRL+X] in Windows if you highlighted a word. A "word" doesn't really describe what it cuts in bash, though ... It uses whitespace as a delimiter, so if you have an ultra long file path that you'll probably be using multiple times down the road, you can [CTRL+W] that sucker and keep it stowed away.

Example: I'm typing nano /etc/httpd/conf/httpd.conf (Related: The redundancy of this path always irked me just a little).
Before hitting [ENTER] I tap [CTRL+W], which chops that path right back out and stores it to memory. Because I want to run that command right now as well, I hit [CTRL+Y] to paste it back into the line. When I'm done with that and I'm out referencing other logs or doing work on other files and need to come back to it, I can simply type "nano " and hit [CTRL+Y] to go right back into that file.

CTRL+C

For the sake of covering most of my bases, I want to make sure that [CTRL+C] is covered. Not only is it useful, but it's absolutely essential for standard shell usage. This little shortcut performs the most invaluable act of killing whatever process you were running at that point. This can go for most anything, aside from the programs that have their own interfaces and kill commands (vi, nano, etc). If you start something, there's a pretty good chance you're going to want to stop it eventually.

I should be clear that this will terminate a process unless that process is otherwise instructed to trap [CTRL+C] and perform a different function. If you're compiling something or running a database command, generally you won't want to use this shortcut unless you know what you're doing. But, when it comes to everyday usage such as running a "top" and then quitting, it's essential.

Repeating a Command

There are four simple ways you can easily repeat a command with a keyboard shortcut, so I thought I'd run through them here before wrapping up:

  1. The [UP] arrow will display the previously executed command.
  2. [CTRL+P] will do the exact same thing as the [UP] arrow.
  3. Typing "!!" and hitting [Enter] will execute the previous command. Note that this actually runs it. The previous two options only display the command, giving you the option to hit [ENTER].
  4. Typing "!-1" will do the same thing as "!!", though I want to point out how it does this: when you type "history", you see a numbered list of the commands you've executed, and "!-1" tells the shell to execute (!) the command one step back in that history (-1). The same syntax works for any offset ("!-3" re-runs the command from three steps back), and a plain "!" followed by a history number runs that specific entry ... This can be useful for scripting. There's a quick example below.
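
Here's a quick illustration of the history-offset idea (the commands in the history are just placeholders):

history | tail -3
  501  nano /etc/httpd/conf/httpd.conf
  502  service httpd configtest
  503  history | tail -3

At this point, "!-2" re-runs "service httpd configtest" (two commands back from the one you're typing), and "!502" runs that same entry by its absolute history number.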

Start Practicing

What it really comes down to is finding what works for you and what suits your work style. There are a number of other shortcuts that are definitely worthwhile to take a look at. There are plenty of cheat sheets on the internet available to print out while you're learning, and I'd highly recommend checking them out. Trust me on this: You'll never regret honing your mastery of bash shortcuts, particularly once you've seen the lightning speed at which you start flying through the command line. The tedium goes away, and the shell becomes a much more friendly, dare I say inviting, place to be.

Quick reference for these shortcuts:

  • [TAB] - Autocomplete to the furthest point in a unique matching file name or path
  • [CTRL+R] - Reverse search through your bash history
  • [CTRL+W] - Cut the "word" before the cursor, back to the previous whitespace
  • [CTRL+Y] - Paste a previously cut string
  • [CTRL+P] - Display the previously run command
  • [UP] - Display the previously run command

-Ryan

December 1, 2011

UNIX Sysadmin Boot Camp: Permissions

I hope you brought your sweat band ... Today's Boot Camp workout is going to be pretty intense. We're focusing on our permissions muscles. Permissions in a UNIX environment cause a lot of customer issues ... While everyone understands the value of secure systems and limited access, any time an "access denied" message pops up, the most common knee-jerk reaction is to enable full access to one's files (chmod 777, as I'll explain later). This is a BAD IDEA. Open permissions are a hacker's dream come true. An open permission setting might have been a temporary measure, but more often than not, the permissions are left in place, and the files remain vulnerable.

To better understand how to use permissions, let's take a step back and get a quick refresher on key components.

You'll need to remember the three permission types:

r w x: r = read; w = write; x = execute

And the three types of access they can be applied to:

u g o: u = user; g = group; o = other

Permissions are usually displayed in one of two ways – either with letters (rwxrwxrwx) or numbers (777). When the permissions are declared with letters, you should look at it as three sets of three characters. The first set applies to the user, the second applies to the group, and the third applies to other (everyone else). If a file is readable only by the user and cannot be written to or executed by anyone, its permission level would be r--------. If it could be read by anyone but could only be writeable by the user and the group, its permission level would be rw-rw-r--.

The numeric form of chmod uses bits to represent permission levels. Read access is represented by the value 4, write by 2, and execute by 1. When you want a file to have read and write access, you just add the permission bits: 4 + 2 = 6. When you want a file to have read, write and execute access, you'll have 4 + 2 + 1, or 7. You'd then apply that numerical permission to a file in the same order as above: user, group, other. If we used the example from the last sentence in the previous paragraph, a file that could be read by anyone, but could only be writeable by the user and the group, would have a numeric permission level of 664 (user: 6, group: 6, other: 4).
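
As a quick sketch (the file name is just a placeholder), applying that 664 permission is as simple as:

chmod 664 myfile

After that, an ls -l on the file (more on that command below) will show its permissions as -rw-rw-r--.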

Now the "chmod 777" I referenced above should make a little more sense: All users are given all permissions (4 + 2 + 1 = 7).

Applying Permissions

Once you understand these components, applying permissions is pretty straightforward with the chmod command. If you want a user (u) to write and execute a file (wx) but not read it (r), you'd use something like this:

[Screenshot: chmod output]

In the above terminal output, I added the -v parameter to make chmod "verbose," so it displays the results of the command. The permissions set by the command are shown by the number 0300 and the series (-wx------). Nobody but the user can write or execute this file, and as of now, the user can't even read the file. If you were curious about the leading 0 in "0300," that first digit represents the special setuid/setgid/sticky bits, none of which are set here, so for our purposes it can be ignored entirely.
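
For reference, the command behind that screenshot would look something like this (the file name is a placeholder):

chmod -v u-r,u+wx somefile

Assuming the file started with no group or other permissions (0600, for instance), chmod's verbose output will report the new mode as 0300 (-wx------), just as described above.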

In that command, we're removing the read permission from the user (hence the minus sign between u and r), and we're giving the user write and execute permissions with the plus sign between u and wx. Want to alter the group or other permissions as well? It works exactly the same way: g+, g-, o+, o- ... Getting the idea? chmod permissions can be set with the letter-based arguments (u+r, u-w) or with their numeric equivalents (e.g. 400 or 644), whichever floats your boat.

A Quick Numeric chmod Reference

chmod 777 | Gives specified file read, write and execute permissions (rwx) to ALL users
chmod 666 | Allows for read and write privileges (rw) to ALL users
chmod 555 | Gives read and execute permissions (rx) to ALL users
chmod 444 | Gives read permissions (r) to ALL users
chmod 333 | Gives write and execute permissions (wx) to ALL users
chmod 222 | Gives write privileges (w) to ALL users
chmod 111 | Gives execute privileges (x) to ALL users
chmod 000 | Last but not least, gives permissions to NO ONE (Careful!)

Get a List of File Permissions

To see what your current file permissions are in a given directory, execute the ls -l command. This returns a listing of the current directory that includes each item's permissions, owner, group, size and the date it was last modified. The output of ls -l looks like this:

[Screenshot: ls -l output]

On the left side of that image, you'll see the permissions in the rwx format. When the permission begins with the "d" character, it means that object is a directory. When the permission starts with a dash (-), it is a file.
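
If you don't have a shell handy, a couple of lines of that output look roughly like this (the names, owners and sizes are made up):

drwxr-xr-x 2 user users 4096 Dec  1 12:00 logs
-rwxr-xr-x 1 user users 8192 Dec  1 12:00 backup.sh
-rw-r--r-- 1 user users  220 Dec  1 12:00 notes.txt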

Practice Deciphering Permissions

Let's look at a few examples and work backward to apply what we've learned:

  • Example 1: -rw-------
  • Example 2: drwxr-x---
  • Example 3: -rwxr-xr-x

In Example 1, the object is a file rather than a directory, and the user that owns it has read and write permissions. The group and other fields are filled with dashes, so we know their permissions are set to 0 and they have no access. In this case, only the user who owns this object can do anything with it. We'll cover "ownership" in a future blog, but if you're antsy to learn right now, you can turn to the all-knowing Google.

In Example 2, the permissions are set on a directory. The user has read, write and execute permissions, the group has read and execute permissions, and anything/anyone besides user or group is restricted from access.

For Example 3, put yourself to the test. What access is represented by "-rwxr-xr-x"? The answer is included at the bottom of this post.

Wrapping It Up

How was that for a crash course in UNIX permissions? Of course there's more to it, but this will at least make you think about what kind of access you're granting to your files. Armed with this knowledge, you're much better equipped to build a secure server environment.

Here are a few useful links you may want to peruse at your own convenience to learn more:

Linuxforums.org
Zzee.com
Comptechdoc.org
Permissions Calculator

Did I miss anything? Did I make a blatantly ridiculous mistake? Did I use "their" when I should have used "they're"??!!... Let me know about it. Leave a comment if you've got anything to add, suggest, subtract, quantize, theorize, ponderize, etc. Think your useful links are better than my useful links? Throw those at me too, and we'll toss 'em up here.

Are you still feeling the burn from your Sysadmin Boot Camp workout? Don't forget to keep getting reps in bash, logs, SSH, passwords and user management!

- Ryan

Example 3 Answer: "-rwxr-xr-x" describes a file (the leading dash tells us it isn't a directory) that its user can read, write and execute, while the group and everyone else can read and execute it. In numeric terms, that's 755.

November 15, 2011

UNIX Sysadmin Boot Camp: User Management

Now that you're an expert when it comes to bash, logs, SSH, and passwords, you're probably foaming at the mouth to learn some new skills. While I can't equip you with the "nunchuck skills" or "bowhunting skills" Napoleon Dynamite reveres, I can help you learn some more important — though admittedly less exotic — user management skills in UNIX.

Root User

The root user — also known as the "super user" — has absolute control over everything on the server. Nothing is held back, nothing is restricted, and anything can be done. Only the server administrator should have this kind of access to the server, and you can see why. The root user is effectively the server's master, and the server accordingly will acquiesce to its commands.

Broad root access should be avoided for the sake of security. If a program or service needs extensive abilities that are generally reserved for the root user, it's best to grant those abilities on a narrow, as-needed basis.

Creating New Users

Because the Sysadmin Boot Camp series is geared toward server administration from a command-line point of view, that's where we'll be playing today. Tasks like user creation can be performed fairly easily in a control panel environment, but it's always a good idea to know the down-and-dirty methods as a backup.

The useradd command is used for adding users from shell. Let's start with an example and dissect the pieces:

useradd -c "admin" -d /home/username -g users -G admin,helpdesk -s /bin/bash userid

-c "admin" – This command adds a comment to the user we're creating. The comment in this case is "admin," which may be used to differentiate the user a little more clearly for better user organization.
-d /home/username – This block sets the user's home directory. The most common approach is to replace username with the username designated at the end of the command.
-g users\ – Here, we're setting the primary group for the user we're creating, which will be users.
-G admin,helpdesk – This block specifies other user groups the new user may be a part of.
-s\ /bin/bash userid – This command is in two parts. It says that the new user will use /bin/bash for its shell and that userid will be the new user's username.
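
Once the account exists, a quick way to sanity-check it (using the same hypothetical userid) is the id command; the numeric IDs in its output will differ on your system:

id userid
uid=1001(userid) gid=100(users) groups=100(users),501(admin),502(helpdesk)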

Changing Passwords

Root is the only user that can change other users' passwords. The command to do this is:

passwd userid

If you are a user and want to change your own password, you would simply issue the passwd command by itself. When you execute the command, you will be prompted for a new entry. This command can also be executed by the root user to change the root password.

Deleting Users

The command for removing users is userdel, and if we were to execute the command, it might look like this:

userdel -r username

The -r flag is optional. If you choose to include it, the command will also remove the home directory of the specified user.

Where User Information is Stored

The /etc/passwd file contains all user information. If you want to look through the file one page at a time — the way you'd use /p in Windows — you can use the more command:

more /etc/passwd

Keep in mind that most of your important configuration files are going to be located in the /etc folder, commonly pronounced "et-see." Each line in the passwd file holds the information for a single user, and its arguments are separated by colons, as seen in the example below:

username:password:12345:12345::/home/username:/bin/bash

Argument 1 – username – the user's username
Argument 2 – password – the user's password (on most modern systems this field is just an "x," with the actual password hash stored in the shadow file)
Argument 3 – 12345 – the user's numeric ID
Argument 4 – 12345 – the numeric ID of the user's primary group
Argument 5 – "" – where either a comment or the user's full name would go
Argument 6 – /home/username – the user's home directory
Argument 7 – /bin/bash – the user's default console shell
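
If you just want one user's entry, or a specific field out of it, something like this does the trick (using the example line above; awk's -F: option sets the colon as the field separator):

grep "^username:" /etc/passwd
awk -F: '$1 == "username" {print $6, $7}' /etc/passwd

For the example entry, that awk command would print "/home/username /bin/bash".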

Now that you've gotten a crash course on user management, we'll start going deeper into group management, more detailed permissions management and the way the shadow file relates to the passwd file discussed above.

-Ryan

July 27, 2009

Cool Tool: find

Have you ever gotten an e-mail from your server that a particular partition is filling up? Unfortunately, the e-mails don't usually tell you where the big files are hiding.

You can determine this and many other handy things by using the Unix utility 'find'. I use the 'find' command all the time in both my work at SoftLayer and also for running some sites that I manage outside of work. Being able to find the files owned by a particular user can be handy.

The 'find' command takes as arguments various tests to run on the files and directories that it scans. Just running 'find' with no arguments is going to list out the files and directories under your current location. Real power comes from using the different switches in various combinations.

find /some/path -name "myfile*" -perm 700

This format of the command will search for items within /some/path that have names starting with the string 'myfile' and also have the permission value of 700 (rwx------).
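
Along the same lines, tracking down the files owned by a particular user (mentioned above) is just another test; the username here is a placeholder:

find /some/path -user someuser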

find /some/path -type f -size +50M

Find files that are larger than 50MB. The '-type f' argument tells find to only look for files.

find /some/path -type f -size +50M -ctime -7

Find files that are larger than 50MB and whose status has changed in the last seven days. (Strictly speaking, '-ctime' checks the inode change time rather than a true creation time, but it's a common way to catch recently added files.)

find /some/path -type f -size +50M -ctime -7 -exec ls -l {} \;

The -exec tells find to run some command against each match that it finds. In this case, it is going to run an 'ls -l'. Moves, removes and even custom full scripts are doable as well.
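
Tying it back to the full-partition scenario, you can also hand the matches to other tools to see exactly where the space went. One way to sketch it (the '+' form of -exec batches file names together, and 'du -k' reports sizes in kilobytes):

find /some/path -type f -size +50M -exec du -k {} + | sort -rn | head -20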

There are many, many more arguments that are possible for 'find'. Refer to the man pages for find on your particular flavor of Unix server to see all the different options for the command. As with all shell commands, know what you are running. Given the chance, 'find' will wipe out anything it can (via '-exec rm {} \;', for example).
