Posts Tagged 'Root'

April 23, 2014

Sysadmin Tips and Tricks - Stop Using Root!

A common mistake newer Linux system administrators make is the overuse of root. It seems so easy! Everything is so much simpler! But in the end it's not, and it's only a matter of time before you wish you had not been so free and easy with your super-user access. Let me try to convince you.

Let’s start with a little history. The antecedents of Linux go all the way back to the early 1970s, when computers cost tens of thousands of dollars (at least). With that kind of expense, you as a user would hardly have a computer sitting on your desk (not to mention they were at least refrigerator-sized), and you would also not have the use of it dedicated to your needs. What was obviously needed was an operating system that would allow multiple users to use the machine at once, via terminals, in order to make the most use of the computing resources available.

If you think about it, it’s clear that the operating system had to be very good at keeping users from being able to stomp on each other’s files and processes. So the early UNIX™ variants were multi-user systems from the get-go. In the ensuing forty years, these systems have only gotten better at keeping the various users and processes from harming each other. And this is the technology that you’re paying for when you use Linux or other modern variants.

Now, you may think, “That doesn’t apply to me—I’m the only user on my server!” But are you, really?

You probably run Apache, which is generally run as the user httpd or apache. Why not root? Because if you run Apache as root, then anyone on the outside who manages to get Apache to execute arbitrary code would have that code running as root! Next thing you know, they can execute "rm -rf /", or worse, invade your system altogether and steal proprietary information. By running as a non-root user, even if an attacker gets total access to that user, they are limited to what that user can touch. Thus, the user httpd is compromised, but not the entire server.
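You can see this on your own server. A quick check, assuming a typical Apache install:

ps -eo user,comm | grep -E 'httpd|apache'    # workers run as httpd or apache; only the parent that binds port 80 runs as root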

The same thing is true for mail servers, FTP servers, and so on. They all rely on the Linux permissions system in order to give the programs access to as little as possible—ideally, only exactly what they need to do their jobs.

So, think of yourself as another process on the system. When you log in as your regular user, you are limited in what you can do. But this is not intended to harm or irritate you; indeed, the system is designed to keep you from accidentally doing damage to your server.

For example, consider if you wanted to completely remove a directory called ‘home’ within your home directory. Note the ever so slight difference between the first command:

rm -R home

And the second command:

rm -R /home

The first command removes a directory called ‘home’ from wherever you happen to be sitting on the file system. The second removes all users’ home directories from the system. One little slash makes all the difference in the world. This is probably why it has been said that Linux gives you enough rope to hang yourself with. Executing the second command as root looks like this:

server:# rm -R /home
server:#

And it’s just gone! Whereas if you accidentally put that slash in there while logged in as your user, you would get:

server:$ rm -R /home
rm: cannot remove `/home': Permission denied

This will annoy you, until you realize that if you'd done it as root you would have wiped out all of your customers' home directories.

In short, just like the processes that run on your machine, you would be well served to use only the permissions you need. This is why many Linux distributions today encourage the use of sudo—you don’t even become root, but just execute things as root when needed. It’s a good policy, and makes the best use of four decades of expertise that have gone into the system you are using.
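For example, here is a minimal sketch of the sudo habit (the command and log file are just placeholders, and on some distributions your user must first be added to the sudoers file or the wheel group):

server:$ sudo tail /var/log/messages    # run a single command as root; you are prompted for YOUR password
server:$ sudo -i                        # or open a root shell for a longer task...
server:# exit                           # ...and give the privileges back as soon as you are done
server:$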

- Lee

P.S. This is also why you pretty much never want to chmod 777 anything!
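If something seems to "need" 777, it almost always needs something narrower. A rough sketch, with placeholder paths and a placeholder owner:

chmod 755 /home/user/public_html                      # owner can write; everyone else can only read and traverse
chmod 644 /home/user/public_html/index.html           # owner can write; everyone else can only read
chown apache:apache /home/user/public_html/uploads    # sometimes the real fix is ownership, not wide-open permissions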

April 3, 2012

Tips and Tricks - How to Use SFTP

Too often, new customers can get overwhelmed by a small administrative task on a Linux server. One of the more common issues I see in technical support is a drive partition running out of space. The website appears offline, and one of my coworkers advises you to just free up some space. "Just?! Where can I find files that are deletable without affecting my website?"

Don't worry ... it's really quite simple. If you can use FTP (File Transfer Protocol), you can handle this bit of server management. Depending on the exact problem, we might instruct you to free up space by removing files in one of the following directories (a quick way to confirm which one is using the space follows the list):

  • /var/log
  • /usr/local/cpanel
  • /usr/local/apache/logs
  • /usr/local/apache/domlogs
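If you also have shell (SSH) access, a couple of standard commands will confirm which partition is full and which of the directories above is actually using the space. A quick sketch, using the directory names from the list:

df -h                                                                                            # which partition is out of space?
du -sk /var/log /usr/local/cpanel /usr/local/apache/logs /usr/local/apache/domlogs | sort -n    # size of each, in kilobytes, smallest first
du -sk /var/log/* | sort -n                                                                      # drill into the biggest offender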

The reason these directories are usually overlooked is that they are not accessible to normal FTP users, the users who only upload website content. When you upload website content to the server via FTP, the FTP user is limited to the directory structure for that website. Directories starting with "/var" and "/usr" cannot be accessed by these non-root users (the "root" user can access anything). And while root is a powerful user, for the sake of security it is not normally allowed to log in over FTP, because FTP is not secure ... That's where SFTP (Secure File Transfer Protocol) comes in.
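As an aside, if you're comfortable in a terminal, you can make the same secure connection with the command-line sftp client that ships with OpenSSH (the hostname and file name below are just placeholders):

sftp root@server.example.com
sftp> cd /var/log
sftp> ls -l
sftp> rm example-old-logfile.gz
sftp> exit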

Most FTP clients support SFTP, so you don't have to learn a new environment to securely access any file on the server. Every FTP client is different, but I'll illustrate with FileZilla because it's free and available on Mac, Windows and Linux. If you don't already have an FTP client, I highly recommend FileZilla. Because there are a few ways to use FileZilla to get an SFTP connection, I can share different options for you to try:

Quick Connect

The Quick Connect bar is the quickest way to connect to your server. Start FileZilla and look immediately under the toolbar for the Quick Connect bar:

[Screenshot: the Quick Connect bar, just under the FileZilla toolbar]

Enter the hostname (IP address or domain name), "root" in the Username field, the root password in the Password field, and "22" in the Port field. Remember, port 22 is for SFTP, the same as SSH. Click the Quickconnect button to connect.

Using the Site Manager

The Site Manager lets you save your login details. Start FileZilla and you'll see the following:

[Screenshot: the main FileZilla window]

To open the Site Manager, click the left-most icon in the toolbar or go to File >> Site Manager in the menu.

[Screenshot: the Site Manager dialog]

Enter an IP address or domain name for your server in the Host field, and select "SFTP" as your protocol. Enter the root user's login information, and you're ready to connect by clicking the "Connect" button, or click the "OK" button to save your settings and close the dialog box.

If you just saved your settings and the Site Manager is not open, click the Site Manager icon again. From there, you can select the site under the "Select Entry" box, and you just have to click "Connect" to initiate the SFTP connection with your saved settings.

If you see a pop-up that warns of an "Unknown host key," clicking the "Always trust this host, add this key to the cache" option will prevent this interruption from showing in the future. Once you click "OK" to complete the connection, your FileZilla screen should look like this:

[Screenshot: FileZilla connected to the server over SFTP]

Notice the "Remote site" section on the middle right of the FileZilla screen:

[Screenshot: the "Remote site" section of the FileZilla window]

This area in FileZilla is the directory and file listing of the server. Navigate the server's file structure here, and click "/" to access the top of the folder structure. You should see the "/usr" and "/var" directories, and from there you can explore the filesystem and delete the files technical support recommended removing to free up space!

Message Log

If you have a problem connecting to your server by FTP or SFTP, the open area below the Quickconnect bar is the Message Log. If you can copy and paste this text into a ticket, you'll help technical support troubleshoot your connection problems. Below is an example log of a successful FTP session:

Status: Connecting to server.example.com...
Response:   fzSftp started
Command:    open "root@server.example.com" 22
Command:    Trust new Hostkey: Once
Command:    Pass: **********
Status: Connected to server.example.com
Status: Retrieving directory listing...
Command:    pwd
Response:   Current directory is: "/root"
Command:    ls
Status: Listing directory /root
Status: Calculating timezone offset of server...
Command:    mtime ".lesshst"
Response:   1326387703
Status: Timezone offsets: Server: -21600 seconds. Local: -21600 seconds. Difference: 0 seconds.
Status: Directory listing successful

And here's an example of a failed connection:

Status: Resolving address of example.com
Status: Connecting to 192.0.43.10:21...
Error:  Connection timed out
Error:  Could not connect to server
Status: Waiting to retry...
Status: Resolving address of example.com
Status: Connecting to 192.0.43.10:21...
Error:  Connection attempt interrupted by user

If you have any questions, leave them in a comment below. Enjoy your new-found SFTP powers!

-Lyndell

January 11, 2011

Jurassic Park, Uptime, And You!

Some of you may remember the scene in the movie Jurassic Park where the park founder's granddaughter Lex, played by Ariana Richards, sits down at a computer terminal, gasps, and says "This is Unix. I know this!" That particular film moment has always resonated with me as a victory for the realistic depiction of computer systems in an industry rife with horrific exaggerations (Swordfish, anyone?) - the interface used in the movie is called fsn, and it was an actual Unix file manager. I'm sure there's an unwritten story as to how she (or her brother, if you follow the book) gained her skills with a computer system that in 1993 was almost exclusively relegated to universities. However, I digress.

Shortly before that scene is another moment, and a catchphrase, that should resound with familiarity for system administrators around the world. In the face of marauding dinosaurs and computer sabotage, the character Ray Arnold (John Arnold in the novel), played by Samuel L. Jackson, must sacrifice what I'm sure was an absurd amount of uptime by killing the power and rebooting the mainframe. Would the system come back up? Would everything load as needed to get the park's systems back online? Arnold's mantra was simple: "Hold on to your butts!"

Every day as a Systems Administrator I'm faced with a comparable (though far less exhilarating) situation. Linux is an extremely stable operating system, and I have logged into systems that have been online for quite literally years. Eventually, though, kernel updates or stray mounts necessitate a reboot. Will the server's filesystems need a check on reboot? Will the server even come back up? When a server's been online for that long, the only way to know is to "throw the switch" and cross your fingers.

One way to have a better idea of how your system will behave during reboots in a production environment is to take the time to update your kernel once a month or so and perform a reboot to make sure the update sticks. This allows routine file system checks to take place as necessary and keeps your system abreast of the latest kernel updates. It also familiarizes you with how long the process takes and what sort of caveats you may run into, and it reduces the attack surface your server presents to outside attackers.
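As a minimal sketch of that monthly routine on a Red Hat-style system (Debian-style systems would use apt-get instead, and it's wise to have console access handy in case the box doesn't come back):

uptime                # how long since the last reboot?
uname -r              # the kernel you are running right now
yum update kernel     # pull in the latest packaged kernel
reboot
uname -r              # once it's back up: confirm the new kernel actually booted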

In the last year, I have seen at least two exploits that can give an attacker root access to a server running an outdated kernel, using off-the-shelf toolkits that attack widely deployed Content Management Systems with trivial effort. Compromising an unprivileged user account gives an attacker even more leverage against unpatched systems. Google CVE-2009-2695 and CVE-2010-3081 if you don't believe me.

If you run a production system or even a backend system that is exposed to the big, bad Internet, it is absolutely essential to make sure that your kernel, software, and security measures are up to date. Today's Slashdot article is tomorrow's exploit.

What lesson can we learn from the unfortunate folks at Jurassic Park? Don't assume your server is safe and don't wait until there are velociraptors roaming your halls looking for a snack to perform proper maintenance on your system.

-Adam
