Posts Tagged 'Apache'

April 23, 2014

Sysadmin Tips and Tricks - Stop Using Root!

A common mistake newer Linux system administrators make is overusing root. It seems so easy! Everything is so much simpler! But in the end, it's not, and it's only a matter of time before you wish you had not been so free and easy with your superuser access. Let me try to convince you.

Let's start with a little history. The antecedents of Linux go all the way back to the early 1970s, when computers cost tens of thousands of dollars (at least). At that price, you as a user would hardly have a computer sitting on your desk (not to mention they were at least refrigerator-sized), and you certainly wouldn't have its use dedicated to your needs alone. What was obviously needed was an operating system that would allow multiple users to share the machine at once, via terminals, in order to make the most of the computing resources available.

If you think about it, it’s clear that the operating system had to be very good at keeping users from being able to stomp on each other’s files and processes. So the early UNIX™ variants were multi-user systems from the get-go. In the ensuing forty years, these systems have only gotten better at keeping the various users and processes from harming each other. And this is the technology that you’re paying for when you use Linux or other modern variants.

Now, you may think, “That doesn’t apply to me—I’m the only user on my server!” But are you, really?

You probably run Apache, which is generally run as the user httpd or apache. Why not root? Because if you run Apache as root, anyone on the outside who manages to get Apache to execute arbitrary code would then have that code running as root! Next thing you know, they can run "rm -rf /" or, worse, invade your system altogether and steal proprietary information. By running as a non-root user, even if an attacker gets total access to that user, they are limited to what that user can touch. Thus, user httpd is compromised, but not the entire server.
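You can see this for yourself. On a typical Red Hat-style system (paths and user names vary by distribution), the Apache configuration names an unprivileged user, and the worker processes run as that user:

server:$ grep -E '^(User|Group)' /etc/httpd/conf/httpd.conf
User apache
Group apache
server:$ ps -eo user,comm | grep httpd
root      httpd
apache    httpd
apache    httpd

Only the parent process runs as root (it has to bind to port 80); the workers that actually handle requests run as apache.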

The same thing is true for mail servers, FTP servers, and so on. They all rely on the Linux permissions system in order to give the programs access to as little as possible—ideally, only exactly what they need to do their jobs.

So, think of yourself as another process on the system. When you log in as your regular user, you are limited in what you can do. But this is not intended to harm or irritate you; the system is designed to keep you from accidentally doing damage to your server.

For example, consider if you wanted to completely remove a directory called ‘home’ within your home directory. Note the ever so slight difference between the first command:

rm -R home

And the second command:

rm -R /home

The first command removes a directory called ‘home’ from wherever you happen to be sitting on the file system. The second removes all users’ home directories from the system. One little slash makes all the difference in the world. This is probably why it has been said that Linux gives you enough rope to hang yourself with. Executing the second command as root looks like this:

server:# rm -R /home
server:#

And it’s just gone! Whereas if you accidentally put that slash in there while logged in as your user, you would get:

server:$ rm -R /home
rm: cannot remove `/home': Permission denied

This will annoy you until you realize that if you'd done it as root, you would have wiped out all your customers' home directories.
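Root or not, a little paranoia around rm pays off. Checking where you are first, or asking rm to confirm, costs a second and can save a server (the path below is just an example):

server:$ pwd                  # make sure you are where you think you are
/home/lee
server:$ rm -Ri home          # -i asks for confirmation before each removal
rm: descend into directory `home'? y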

In short, just like the processes that run on your machine, you would be well served to use only the permissions you need. This is why many Linux distributions today encourage the use of sudo—you don’t even become root, but just execute things as root when needed. It’s a good policy, and makes the best use of four decades of expertise that have gone into the system you are using.
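For example (assuming your user has been granted sudo rights, such as membership in the wheel group on CentOS or the sudo group on Debian/Ubuntu), the day-to-day workflow looks like this:

server:$ tail /var/log/messages
tail: cannot open `/var/log/messages' for reading: Permission denied
server:$ sudo tail /var/log/messages
[sudo] password for youruser:
(the log output you wanted, without ever opening a full root shell)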

- Lee

P.S. This is also why you pretty much never want to chmod 777 anything!
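If a web application genuinely does need write access somewhere, a narrower grant does the job without opening the directory to every user on the system (the path and user below are just examples, assuming Apache runs as the apache user):

server:$ sudo chown apache:apache /var/www/html/uploads
server:$ sudo chmod 750 /var/www/html/uploads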

April 29, 2013

Web Development - Installing mod_security with OWASP

You want to secure your web application, but you don't know where to start. A number of open-source resources and modules exist, but that variety is more intimidating than it is liberating. If you're going to take the time to implement application security, you don't want to put your eggs in the wrong basket, so you wind up suffering from analysis paralysis as you compare all of the options. You want a powerful, flexible security solution that isn't overly complex, so to save you the headache of making the decision, I'll make it for you: Start with mod_security and OWASP.

ModSecurity (mod_security) is an open-source Apache module that acts as a web application firewall. It helps protect your server (and websites) from several methods of attack, the most common being brute force. You can think of mod_security as an invisible layer that sits between users and the content on your server, quietly monitoring HTTP traffic and other interactions. It's easy to understand and simple to implement.

The challenge is that without some advanced configuration, mod_security isn't very functional, and that advanced configuration can get complex pretty quickly. You need to determine and set additional rules so that mod_security knows how to respond when approached with a potential threat. That's where the Open Web Application Security Project (OWASP) core rule set comes in. You can think of the OWASP rules as an enhanced core ruleset that the mod_security module will follow to prevent attacks on your server.
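To give you a feel for what those rules look like, here's a simplified standalone example (not part of the OWASP set, just an illustration of the SecRule syntax) that blocks any request whose URI contains /etc/passwd:

# Deny and log any request that references /etc/passwd in the URI
SecRule REQUEST_URI "@contains /etc/passwd" \
    "id:900001,phase:1,deny,status:403,log,msg:'Possible file disclosure attempt'"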

The process of getting started with mod_security and OWASP might seem like a lot of work, but it's actually quite simple. Let's look at the installation and configuration process in a CentOS environment. First, we want to install the dependencies that mod_security needs:

## Install the GCC compiler and mod_security dependencies ##
$ sudo yum install gcc make
$ sudo yum install libxml2 libxml2-devel httpd-devel pcre-devel curl-devel

Now that we have the dependencies in place, let's install mod_security. Unfortunately, mod_security isn't available as a package in the default yum repositories, so you'll have to install it directly from the source:

## Get mod_security from its source ##
$ cd /usr/src
$ git clone https://github.com/SpiderLabs/ModSecurity.git

Now that we have mod_security on our server, we'll install it:

## Install mod_security ##
$ cd ModSecurity
## A fresh git checkout may need ./autogen.sh first to generate the configure script ##
$ ./configure
$ make
$ make install

And we'll copy over the default mod_security configuration file into the necessary Apache directory:

## Copy configuration file ##
$ cp modsecurity.conf-recommended /etc/httpd/conf.d/modsecurity.conf
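It's worth knowing that the recommended configuration file starts mod_security in detection-only mode, which logs suspicious requests but doesn't block them. Once you've confirmed the rules don't interfere with legitimate traffic, you can switch it over in /etc/httpd/conf.d/modsecurity.conf:

## Default in modsecurity.conf-recommended: log but don't block ##
SecRuleEngine DetectionOnly

## Change to this once you're ready to actively block attacks ##
SecRuleEngine On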

We've got mod_security installed now, so we need to tell Apache about it. It's no use having mod_security installed if our server doesn't know it's supposed to be using it:

## Apache configuration for mod_security ##
$ vi /etc/httpd/conf/httpd.conf

We'll need to edit our Apache config file to load our dependencies (BEFORE the mod_security module) and then the mod_security module itself:

## Load dependencies (on 64-bit systems these libraries may live in /usr/lib64 instead) ##
LoadFile /usr/lib/libxml2.so
LoadFile /usr/lib/liblua5.1.so
## Load mod_security ##
LoadModule security2_module modules/mod_security2.so

We'll save our configuration changes and restart Apache:

## Restart Apache! ##
$ sudo /etc/init.d/httpd restart

As I mentioned at the top of this post, our installation of mod_security is good, but we want to enhance our ruleset with the OWASP core rule set. If you've made it this far, you won't have a problem following a similar process to install it:

## OWASP ##
$ cd /etc/httpd/
$ git clone https://github.com/SpiderLabs/owasp-modsecurity-crs.git
$ mv owasp-modsecurity-crs modsecurity-crs

Just like with mod_security, we'll set up our configuration file:

## OWASP configuration file ##
$ cd modsecurity-crs
$ cp modsecurity_crs_10_setup.conf.example modsecurity_crs_10_config.conf

Now we have mod_security and the OWASP core ruleset ready to go! The last step we need to take is to update the Apache config file to set up our basic ruleset:

## Apache configuration ##
$ vi /etc/httpd/conf/httpd.conf

We'll add an IfModule and point it to our new OWASP rule set at the end of the file:

<IfModule security2_module>
    Include modsecurity-crs/modsecurity_crs_10_config.conf
    Include modsecurity-crs/base_rules/*.conf
</IfModule>

And to complete the installation, we save the config file and restart Apache:

## Restart Apache! ##
$ sudo /etc/init.d/httpd restart
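Before moving on, it's worth a quick sanity check that Apache actually loaded the module and that rule hits are being logged (the audit log path comes from the SecAuditLog directive in your modsecurity.conf; /var/log/modsec_audit.log is the default in the recommended file):

## Confirm the module is loaded ##
$ sudo apachectl -M | grep security2
 security2_module (shared)

## Watch the audit log for rule hits ##
$ sudo tail -f /var/log/modsec_audit.log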

And we've got mod_security installed with the OWASP core ruleset! With this default installation, we're leveraging the rules the OWASP open source community has come up with, and we have the flexibility to tweak and enhance those rules as our needs dictate. If you have any questions about this installation or you have any other technical blog topics you'd like to hear from us about, please let us know!

-Cassandra

July 25, 2012

ServerDensity: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome David Mytton, Founder of Server Density. Server Density is a hosted server and website monitoring service that alerts you when your website is slow, down or back up.

5 Ways to Minimize Downtime During Summer Vacation

It's a fact of life that everything runs smoothly until you're out of contact, away from the Internet or on holiday. However, you can't be available 24/7 on the chance that something breaks; instead, there are several things you can do to ensure that when things go wrong, the problem can be managed and resolved quickly. To help you set up your own "get back up" plan, we've come up with a checklist of the top five things you can do to prepare for an ill-timed issue.

1. Monitoring

How will you know when things break? Using a tool like Server Density, which combines availability monitoring from locations around the world with internal server metrics like disk usage and Apache and MySQL statistics, means that you can be alerted if your site goes down and have the data to find out why.

Surprisingly, the most common problems we see are some that are the easiest to fix. One problem that happens all too often is when a customer simply runs out of disk space in a volume! If you've ever had it happen to you, you know that running out of space will break things in strange ways — whether it prevents the database from accepting writes or fails to store web sessions on disk. By doing something as simple as setting an alert to monitor used disk space for all important volumes (not just root) at around 75%, you'll have proactive visibility into your server to avoid hitting volume capacity.
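Even a tiny home-grown check can act as a backstop here, in addition to a monitoring service. The sketch below (a cron-able shell script; adjust the threshold and the alert command to taste, and note it assumes a working mail setup and an example recipient address) emails a warning when any filesystem crosses 75% used:

#!/bin/sh
# Warn when any mounted filesystem passes the usage threshold
THRESHOLD=75

df -P | awk 'NR > 1 {print $5, $6}' | while read pct mount; do
    usage=${pct%\%}                       # strip the trailing "%"
    if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "WARNING: $mount is at $pct used" \
            | mail -s "Disk space alert: $mount" admin@example.com
    fi
done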

Additionally, you should define triggers for unusual values that will set off a red flag for you. For example, if your Apache requests per second suddenly drop significantly, that change could indicate a problem somewhere else in your infrastructure, and if you're not monitoring those indirect triggers, you may not learn about those other problems as quickly as you'd like. Find measurable direct and indirect relationships that can give you this kind of early warning, and find a way to measure them and alert yourself when something changes.

2. Dealing with Alerts

It's no good having alerts sent to someone who isn't responding (or who can't respond at a given time). Using a service like PagerDuty allows you to define on-call rotations for different types of alerts. Nobody wants to be on-call every hour of every day, so differentiating and channeling alerts in an automated way could save you a lot of hassle. Another huge benefit of a platform like PagerDuty is that it also handles escalations: If the first contact in the path doesn't wake up or is out of service, someone else gets notified quickly.

3. Tracking Incidents

Whether you're the only person responsible or you have a team of engineers, you'll want to track the status of alerts/issues, particularly if they require escalation to different vendors. If an incident lasts a long time, you'll want to be able to hand it off to another person in your organization with all of the information they need. By tracking incidents with detailed notes, you can avoid fatigue and prevent unnecessary repetition of troubleshooting steps.

We use JIRA for this because it allows you to define workflows that an issue can progress through as you work on it. It also includes easy access to custom fields (e.g. specifying a vendor ticket ID), and issues can be assigned to different people.

4. Understanding What Happened

After you have received an alert, acknowledged it and started tracking the incident, it's time to start investigating. Often, this involves looking at logs. If you only have one or two servers, that's relatively easy, but as soon as you add more, the process gets significantly more difficult.

We recommend piping them all into a log search tool like (fellow Tech Partners Marketplace participant) Papertrail or Loggly. Those platforms give you access to all of your logs from a single interface, with the ability to watch incoming lines in real time or to search back to when the incident began (which you'll know, since you monitored and tracked all of that information in the first three steps).

5. Getting Access to Your Servers

If you're traveling internationally, access to the Internet via a free hotspot like the ones you find in Starbucks isn't always possible. It's always a great idea to order a portable 3G hotspot in advance of a trip. You can usually pick one up from the airport to get basic Internet access without paying ridiculous roaming charges. Once you have your connection, the next step is to make sure you can access your servers.

Both iPhone and Android have SSH and remote desktop apps available that let you quickly log in to your servers and fix easy problems. Having those tools often saves a lot of time if you don't have access to your laptop, but they also introduce a security concern: If you open server logins to the world so you can log in from the dynamic IPs that change when you use mobile connectivity, it's worth adding a multi-factor authentication layer. We use Duo Security for several reasons, one major differentiator being the modules they have available for all major server operating systems to lock down our logins even further.

You're never going to escape the reality of system administration: If your server has a problem, you need to fix it. What you can get away from is the uncertainty of not having a clearly defined process for responding to issues when they arise.

-David Mytton, ServerDensity

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.