Posts Tagged 'cPanel'

July 14, 2015

Preventative Maintenance and Backups

Has your cPanel server ever gone down only to not come back online because the disk failed?

At SoftLayer, data migration is in the hands of our customers, which means you must save your data and move it to a new server yourself. Well, thanks to a lot of slow weekends, I’ve had time to write a bash script that automates the process for you. It’s been tested in a dev environment of my own, where I worked with the data center to simulate the dreaded DRS (data retention service) that kicks in when a drive fails, and in a live environment to see what new curveballs could come up. In this three-part series, we’ll discuss how to do preventative server maintenance to head off a total disaster, how to restore your backed-up data (if you have backups), and finally the script itself, which fully automates backing up, moving, and restoring all of your cPanel data safely (if the prior two aren’t options for you).

Let’s start off with some preventative maintenance first and work on setting up backups in WHM itself.

First thing you’ll need to do is log into your WHM, and then go to Home >> Backup >> Backup Configuration. You will probably have an information box at the top that says “The legacy backups system is currently disabled;” that’s fine, let it stay disabled. The legacy backup system is going away soon anyway, and the newer system allows for more customization. If you haven’t clicked “Enable” under the Global Settings, now would be the time to do so, so that the rest of the page becomes visible. Now, you should be able to modify the rest of the backup configuration, so let’s start with the type.

In my personal opinion, compressed is the only way to go. Yes, it takes longer, but it uses less disk space in the end; uncompressed is faster but eats up too much space. Incremental is also not a good choice, as it only keeps a single backup and doesn’t allow users to include additional destinations.

The next section is scheduling and retention, and, personally, I like my backups done daily with a five-day retention plan. Yes, it does use up a bit more space, but it’s also the safest, because you’ll have a backup from literally the day prior in case something happens.

The next section, Files, is where you pick the users you want to back up along with the type of data to include. I prefer to leave the default settings in this section and just choose the users I want to back up. It’s your server, though, so you’re free to enable or disable the various options as you see fit. Just leave the options for backing up system files checked; keeping those is highly recommended.

The next section deals with databases, and again, this one’s up to you. Per Account is your bare minimum option and is still safe regardless. Entire MySQL directory will just blanket backup the entire MySQL directory instead. The last option encompasses the two prior options, which to me is a bit overkill as the Per Account Only option works well enough on its own.

Now let’s start the actual configuration of the backup service. From here, we’ll choose the backup directory as well as a few other options regarding retention and additional destinations. The best practice here is to have a drive specifically for backups: not just another partition or a folder, but a completely separate drive. Type the path where you want the backups to reside; I usually have a secondary drive mounted as /backup, so the pre-filled option works fine for me. Enable the option for mounting the drive as needed if you have a separate mount point that is not always mounted. As for the additional destination part, that’s up to you: it lets you keep copies of your backups offsite somewhere else, just in case your server decides to divide by zero or hits some other random issue that takes everything down unrecoverably. Clicking “Create New Destination” will bring up a new section to fill in the details for the destination type you chose.
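If you do go with a dedicated backup drive, an /etc/fstab entry keeps it mounted at /backup across reboots. A minimal sketch; the device name /dev/sdb1 and the ext4 filesystem are assumptions here, so check what you actually have (lsblk will list your drives) before copying this:

```shell
# /etc/fstab entry for a hypothetical second drive at /dev/sdb1
# (device name and filesystem type are assumptions; adjust to match
# your hardware) mounted as the dedicated backup location:
/dev/sdb1   /backup   ext4   defaults,noatime   0   2
```

With an entry like this in place, the “mount as needed” option in WHM becomes a belt-and-suspenders safeguard rather than a requirement.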

Once you’ve done all of this, simply click “Save Configuration.” Now you’re done!

But let’s say you’re ready to make a full backup right now instead of waiting for the schedule to run. For this, we’ll need to log in to the server via SSH. Using whatever SSH tool you prefer (PuTTY for me), connect to your server with the same root username and password you used to log into WHM. From there, one simple command backs up everything: “/usr/local/cpanel/bin/backup --force”. This forces a full backup of every user you selected earlier when you configured the backup in WHM.
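If you want a record of those manual runs, a tiny wrapper can start the forced backup and log whether it kicked off cleanly. This is a sketch of my own, not a cPanel tool; the binary path is the one above, and the log location is an arbitrary choice:

```shell
#!/bin/sh
# Force a full cPanel backup and record the outcome.
# BACKUP_BIN is the cPanel backup binary discussed above;
# LOG is an arbitrary location chosen for this sketch.
BACKUP_BIN="${BACKUP_BIN:-/usr/local/cpanel/bin/backup}"
LOG="${LOG:-/tmp/manual-backup.log}"

if "$BACKUP_BIN" --force >>"$LOG" 2>&1; then
    echo "forced backup completed" >>"$LOG"
else
    rc=$?
    echo "forced backup FAILED (exit $rc)" >>"$LOG"
fi
```

Kick it off after hours and check the log the next morning; the else branch also catches the case where the binary path differs on your build.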

That’s pretty much it as far as preventative maintenance and backups go. Next time, we’ll go into how to restore all this content to a new drive in case something happens like someone accidentally deleting a database or a file that they really need back.


October 2, 2014

SoftLayer Rocks the 2014 cPanel Conference

For the past two days, SoftLayer set up shop at the 2014 cPanel® Conference held in Houston, TX. We mingled. We administered the Server Challenge II (more on that later) . . . And, we talked to Aaron Phillips, chief business officer at cPanel.

Holy cup of coffee; this guy has so much energy! Clad in shorts, a t-shirt, and Adidas Gazelles, this CBO was not what I expected, but neither is cPanel, for that matter. Reading Phillips’ bio offers a glimpse into the cPanel culture; he pokes fun at the fact that he never thought he would be working for a “company started by a 14-year-old genius.” (Maybe that’s why he can get away with the shorts.)

Regardless, you can’t dismiss cPanel’s expertise when it comes to specializing in control panel software. The cPanel software package automates server tasks by providing an accessible interface to help website owners manage their sites.

So Aaron, can you give us a brief overview of what the cPanel conference is all about?

The cPanel Conference is in its ninth year, and we really put this together to network, talk about web hosting, and give our partners a sneak peek at what we’re up to. I attended the event even before I came onboard at cPanel, and each year just gets bigger and better. It’s the conference I look forward to each year.

Oh yeah? Any big announcements this week?

Yep. We have a new update to our system. Our user interface is available in 29 languages. It’s really going to help our global customers and help our partners that have global customers like SoftLayer.

How so?

The quality of the translations has improved dramatically. The older system, which we called LANG, often created partial sentences, and that caused a lot of problems with translations. Our ‘newer model,’ Maketext, is more flexible and feature-rich. We’ve also edited the content in the interface, making it easier to translate. This also eases translation into languages read from right to left.

When do you anticipate a go-live date?

We’re in the beta stage but will be complete soon. Like, any day now.

Speaking of SoftLayer, what does cPanel think of us?

You guys were one of our first customers, and you’re one of our biggest customers. We go way back . . . like EV1 days. We love you guys over at SoftLayer. Enjoy the conference! Gotta run.

[Maybe that’s why he wears the Gazelles].

Speaking the Language – 29 Languages

Arabic, Chinese, Czech, Danish, Dutch, English, Filipino, Finnish, French, German, Greek, Hebrew, Hungarian, Iberian Spanish, Indonesian, Italian, Japanese, Korean, Latin American Spanish, Malay, Norwegian, Polish, Portuguese, Romanian, Spanish, Swedish, Thai, Traditional Chinese, Turkish, Ukrainian, and Vietnamese

The Server Challenge II Continues to Kick aaS and Take Names
We don’t like to brag, but we have the best booth setup of all time. Why? Because of the Server Challenge II. We would like to congratulate Mike Levine, product manager at OpenSRS, who set the high score of 1:00.05 and beat out the hundreds of contenders who participated at the 2014 cPanel Conference.


April 16, 2013

iptables Tips and Tricks - Track Bandwidth with iptables

As I mentioned in my last post about CSF configuration in iptables, I'm working on a follow-up post about integrating CSF into cPanel, but I thought I'd inject a simple iptables use-case for bandwidth tracking. You probably think about iptables in terms of firewalls and security, but it also includes a great diagnostic tool for counting bandwidth for individual rules or sets of rules. If you can block it, you can track it!

The best part about using iptables to track bandwidth is that the tracking is enabled by default. To see this feature in action, add the "-v" flag to the command:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 2495 packets, 104K bytes)

The output includes counters for both the policies and the rules. To track the rules, you can create a new chain for tracking bandwidth:

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -vnL
Chain tracking (0 references)
 pkts bytes target prot opt in out source           destination

Then you need to set up new rules to match the traffic that you wish to track. In this scenario, let's look at inbound http traffic on port 80:

[root@server ~]$ iptables -I INPUT -p tcp --dport 80 -j tracking
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35111 packets, 1490K bytes)
 pkts bytes target prot opt in out source           destination
    0   0 tracking    tcp  --  *  *       tcp dpt:80

Now let's generate some traffic and check it again:

[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 35216 packets, 1500K bytes)
 pkts bytes target prot opt in out source           destination
  101  9013 tracking    tcp  --  *  *       tcp dpt:80

You can see the packet and byte transfer amounts for tracking INPUT (traffic to a destination port on your server). If you want to track the amount of data that the server is generating, you'd look for OUTPUT from the source port on your server:

[root@server ~]$ iptables -I OUTPUT -p tcp --sport 80 -j tracking
[root@server ~]$ iptables -vnL
Chain OUTPUT (policy ACCEPT 26149 packets, 174M bytes)
 pkts bytes target prot opt in out source           destination
  488 3367K tracking    tcp  --  *  *       tcp spt:80

Now that we know how the tracking chain works, we can add in a few different layers to get even more information. That way you can keep your INPUT and OUTPUT chains looking clean.

[root@server ~]$ iptables -N tracking
[root@server ~]$ iptables -N tracking2
[root@server ~]$ iptables -I INPUT -j tracking
[root@server ~]$ iptables -I OUTPUT -j tracking
[root@server ~]$ iptables -A tracking -p tcp --dport 80 -j tracking2
[root@server ~]$ iptables -A tracking -p tcp --sport 80 -j tracking2
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 96265 packets, 4131K bytes)
 pkts bytes target prot opt in out source           destination
 4002  184K tracking    all  --  *  *
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target prot opt in out source           destination
Chain OUTPUT (policy ACCEPT 33751 packets, 231M bytes)
 pkts bytes target prot opt in out source           destination
 1399 9068K tracking    all  --  *  *
Chain tracking (2 references)
 pkts bytes target prot opt in out source           destination
 1208 59626 tracking2   tcp  --  *  *       tcp dpt:80
  224 1643K tracking2   tcp  --  *  *       tcp spt:80
Chain tracking2 (2 references)
 pkts bytes target prot opt in out source           destination

Keep in mind that every time a packet passes through one of your rules, it will eat CPU cycles. Diverting all your traffic through 100 rules that track bandwidth may not be the best idea, so it's important to have an efficient ruleset. If your server has eight processor cores and tons of overhead available, that concern might be inconsequential, but if you're running lean, you could conceivably run into issues.

The easiest way to think about making efficient rulesets is to think about eating the largest slice of pie first. Understand iptables rule processing and put the rules that get more traffic higher in your list. Conversely, save the tiniest pieces of your pie for last. If you run all of your traffic by a rule that only applies to a tiny segment before you screen out larger segments, you're wasting processing power.

Another thing to keep in mind is that you do not need to specify a target (in our examples above, we established tracking and tracking2 as our targets). If you're used to each rule having a specific purpose of either blocking, allowing, or diverting traffic, this simple tidbit might seem revolutionary. For example, we could use this rule:

[root@server ~]$ iptables -A INPUT

If that seems a little bare to you, don't worry ... It is! The output will show that it is a rule that tracks all traffic in the chain at that point. We're appending the data to the end of the chain in this example ("-A") but we could also insert it ("-I") at the top of the chain instead. This command could be helpful if you are using a number of different chains and you want to see the exact volume of packets that are filtered at any given point. Additionally, this strategy could show how much traffic a potential rule would filter before you run it on your production system. Because having several of these kinds of commands can get a little messy, it's also helpful to add comments to help sort things out:

[root@server ~]$ iptables -A INPUT -m comment --comment "track all data"
[root@server ~]$ iptables -vnL
Chain INPUT (policy ACCEPT 11M packets, 5280M bytes)
 pkts bytes target prot opt in out source           destination
   98  9352        all  --  *  *       /* track all data */

Nothing terribly complicated about using iptables to count bandwidth, right? If you have iptables rulesets and you want to get a glimpse at how your traffic is being affected, this little trick could be useful. You can rely on the information iptables gives you about your bandwidth usage, and you won't be the only one ... cPanel actually uses iptables to track bandwidth.
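One last convenience: since the counters are just text, they're easy to consume from a script. Below is a small helper of my own (not part of iptables) that sums the bytes column of a chain listing; the "-x" flag matters here because it prints exact counts instead of abbreviations like 1500K:

```shell
# Sum the byte counters from an "iptables -nvx -L <chain>" listing.
# With -x, the bytes column is an exact number in field 2; the first
# two lines are the chain header and the column header, so skip them.
sum_bytes() {
    awk 'NR > 2 { total += $2 } END { print total + 0 }'
}

# On a live server you would pipe the listing in, e.g.:
#   iptables -nvx -L tracking | sum_bytes
```

Drop that in a cron job and you have a poor man's bandwidth logger for any chain you care about.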


September 15, 2011

PHIL’s DC: HostingCon

HostingCon 2011 in San Diego may have been a huge success for SoftLayer, but I walked away with a different experience following my intense pursuit of building the PHIL's DC brand. Apparently, the hosting industry wants to see my data center succeed before they believe it, and I think it's really just fear rearing its ugly head. People are afraid of what they don't understand, so the uninitiated would probably be terrified as they try to learn what I'm doing.

In an effort to help some of the bigger names in the hosting industry get in on the ground floor of PHIL's DC, I took a stroll down the HostingCon aisles. Vendors like Parallels and cPanel were obvious choices to discuss business partnerships, and I was sure TheWHIR wanted the scoop on the next big thing in hosting, so I made sure to give them all a chance to speak with me. The documentary film team I hired (the guy I met outside the San Diego Convention Center who said he'd follow me with a camera for $3.50/hour) recorded our interactions for posterity's sake:

I'd like to send shout-outs to Candice Rodriguez from TheWHIR, Aaron Phillips from cPanel, and John McCarrick from Parallels, thanking them for agreeing to let us film our organic interactions. They've further inspired me to build a data center that will make these apparent "snubs" and "rejections" a thing of the past. To Summer and Natalie at the SoftLayer booth: Please stop making fun of my Server Challenge attempt every time you see me at the office ... I think I had something in my eye when I was competing, so it wasn't a fair measure of my skillz.

Oh, and if you didn't get a chance to attend our "Geeks Gone Wild" party at HostingCon, you'd probably be interested in seeing video from The Dan Band's performance of "Total Eclipse of the Heart." cPanel posted it here: (NSFW language; The Dan Band takes artistic license with profanity)


September 30, 2009

See You in Houston!

Next week a crowd of SoftLayer peeps are making the H-Town connection at cPanel Conference 2009. Representatives from the support, operations, sales, development, and management teams will be out in full force meeting, greeting, and learning. The conference runs from Monday, Oct. 5, to Wednesday, Oct. 7, at the Hilton Americas Houston Hotel. Stop by our booth if you'd like to chat. We're throwing a reception for our awesome customers and partners at the lobby bar on Monday at 9 p.m. If that's not enough, yours truly will be giving a talk on Tuesday about how to extend cPanel and WHM through a third-party API. Y'all get three guesses as to whose API we're showing off. :) Bring your ripest fruits and vegetables and ready your air horns. It's been a while since I've had a good, old-fashioned heckling.

Come on out if you can make it. We love getting to know the folks who pay our salaries. ;) See you there!
