Posts Tagged 'MySQL'

December 27, 2012

Using SoftLayer Object Storage to Back Up Your Server

Before I came to my senses and moved my personal servers to SoftLayer, I was one of many victims of a SolusVM exploit that resulted in the wide-scale attack of many nodes in my previous host's Chicago data center. While I'm a firm believer in backing up my data, I could not have foreseen the situation I was faced with: Not only was my server in one data center compromised with all of its data deleted, but my backup server in one of the host's other data centers was also attacked ... This left me with old, stale backups on my local computer and not much else. I quickly relocated my data and decided that I should use SoftLayer Object Storage to supplement and improve upon my backup and disaster recovery plans.

With the SoftLayer Object Storage Python client set up and the SoftLayer Object Storage backup script — slbackup.py — in hand, I had the tools I needed to build a solid backup infrastructure easily. On Linux.org, I contributed an article about how to perform MySQL backups with those resources, so the database piece was handled, but I also needed to back up my web files, so I whipped up another quick bash script to run:

#!/bin/bash
 
# The path the backups will be dumped to
DUMP_DIR="/home/backups/"
 
# Path to the web files to be backed up
BACKUP_PATH="/var/www/sites/"
 
# Backup folder name (mmddyyyy)
BACKUP_DIR="`date +%m%d%Y`"
 
# Backup File Name
DUMP_FILE="`date +%m_%d_%Y_%H_%M_%S`_site_files"
 
# SL container name
CONTAINER="site_backups"
 
# Create the backup dir if it doesn't exist
if [ ! -d "$DUMP_DIR$BACKUP_DIR" ]; then
        mkdir -p "$DUMP_DIR$BACKUP_DIR"
fi
 
tar -zcvpf "$DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz" "$BACKUP_PATH"
 
# Make sure the archive exists
if [ -f "$DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz" ]; then
        /root/slbackup.py -s "$DUMP_DIR$BACKUP_DIR/" -o "$CONTAINER" -r 30
 
        # Remove the backup stored locally
        rm -rf "$DUMP_DIR$BACKUP_DIR"
 
        # Success
        exit 0
else
        echo "$DUMP_DIR$BACKUP_DIR/$DUMP_FILE.tar.gz does not exist."
        exit 1
fi

It's not the prettiest bash script, but it gets the job done. By tweaking a few variables, you can easily generate backups for any important directory of files and push them to your SoftLayer Object Storage account. If you want the retention time of your backups to be longer or shorter, change the 30 after the -r flag in the line below to the number of days you want to keep each backup:

/root/slbackup.py -s $DUMP_DIR$BACKUP_DIR/ -o "$CONTAINER" -r 30

I created a script for each website on my server, and I set a cron entry (crontab -e) to run each one on Sundays, staggered by five minutes:

5 1 * * 0  /root/bin/cron/CRON-site1.com_web_files > /dev/null
10 1 * * 0  /root/bin/cron/CRON-site2.com_web_files > /dev/null
15 1 * * 0  /root/bin/cron/CRON-site3.com_web_files > /dev/null 

If you're looking for an easy way to automate and solidify your backups, this little bit of code could make life easier on you. Had I taken the few minutes to put this script together prior to the attack I experienced at my previous host, I wouldn't have lost any of my data. It's easy to get lulled into "backup apathy" when you don't need your backups, but just because nothing *has* happened to your data doesn't mean nothing *can* happen to your data.

Take it from me ... Be over-prepared and save yourself a lot of trouble.

-Ronald

July 25, 2012

Server Density: Tech Partner Spotlight

We invite each of our featured SoftLayer Tech Marketplace Partners to contribute a guest post to the SoftLayer Blog, and this week, we're happy to welcome David Mytton, Founder of Server Density. Server Density is a hosted server and website monitoring service that alerts you when your website is slow, down or back up.

5 Ways to Minimize Downtime During Summer Vacation

It's a fact of life that everything runs smoothly until you're out of contact, away from the Internet or on holiday. However, you can't be available 24/7 on the off chance that something breaks; instead, there are several things you can do to ensure that when things go wrong, the problem can be managed and resolved quickly. To help you set up your own "get back up" plan, we've come up with a checklist of the top five things you can do to prepare for an ill-timed issue.

1. Monitoring

How will you know when things break? Using a tool like Server Density — which combines availability monitoring from locations around the world with internal server metrics like disk usage and Apache and MySQL statistics — means that you can be alerted if your site goes down and have the data to find out why.

Surprisingly, the most common problems we see are among the easiest to fix. One problem that happens all too often is a customer simply running out of disk space in a volume! If it has ever happened to you, you know that running out of space will break things in strange ways — whether it prevents the database from accepting writes or fails to store web sessions on disk. By doing something as simple as setting an alert on used disk space for all important volumes (not just root) at around 75%, you'll have the proactive visibility you need to avoid hitting volume capacity.
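To give a concrete flavor of that kind of check, here is a minimal bash sketch you could drop into cron. The threshold, alert address and use of the mail command are illustrative placeholders, not Server Density specifics:

#!/bin/bash
 
# Warn when any mounted volume crosses the usage threshold.
# Assumes a working "mail" command; adjust ALERT_EMAIL for your setup.
THRESHOLD=75
ALERT_EMAIL="admin@example.com"
 
# -P forces POSIX output so each filesystem stays on one line
df -P | tail -n +2 | while read -r fs blocks used avail pct mount; do
        usage=${pct%\%}
        if [ "$usage" -ge "$THRESHOLD" ]; then
                echo "$mount is at ${usage}% (threshold: ${THRESHOLD}%)" | mail -s "Disk space alert on $(hostname)" "$ALERT_EMAIL"
        fi
done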

Additionally, you should define triggers for unusual values that will set off a red flag for you. For example, if your Apache requests per second suddenly drop significantly, that change could indicate a problem somewhere else in your infrastructure, and if you're not monitoring those indirect triggers, you may not learn about those other problems as quickly as you'd like. Find measurable direct and indirect relationships that can give you this kind of early warning, and find a way to measure them and alert yourself when something changes.
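As one example of such a trigger, Apache's mod_status exposes a ReqPerSec figure when extended status is on, so a small sketch like this could flag a sudden drop. The baseline here is made up for illustration, and it assumes mod_status is enabled with "ExtendedStatus On" and a reachable /server-status?auto endpoint:

#!/bin/bash
 
# Flag an unusually low Apache request rate via mod_status.
STATUS_URL="http://localhost/server-status?auto"
MIN_RPS=5
 
rps=$(curl -s "$STATUS_URL" | awk -F': ' '/^ReqPerSec/ {print $2}')
 
# ReqPerSec is a float, so compare with awk instead of bash arithmetic
if awk -v r="$rps" -v m="$MIN_RPS" 'BEGIN {exit !(r < m)}'; then
        echo "Apache requests/sec ($rps) is below baseline ($MIN_RPS)"
fi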

2. Dealing with Alerts

It's no good having alerts sent to someone who isn't responding (or who can't at a given time). Using a service like Pagerduty allows you to define on-call rotations for different types of alerts. Nobody wants to be on-call every hour of every day, so differentiating and channeling alerts in an automated way could save you a lot of hassle. Another huge benefit of a platform like Pagerduty is that it also handles escalations: If the first contact in the path doesn't wake up or is out of service, someone else gets notified quickly.

3. Tracking Incidents

Whether you're the only person responsible or you have a team of engineers, you'll want to track the status of alerts and issues, particularly if they require escalation to different vendors. If an incident lasts a long time, you'll want to be able to hand it off to another person in your organization with all of the information they need. By tracking incidents with detailed notes, you can avoid fatigue and prevent unnecessary repetition of troubleshooting steps.

We use JIRA for this because it allows you to define workflows that an issue can progress through as you work on it. It also includes easy access to custom fields (e.g. specifying a vendor ticket ID), and issues can be assigned to different people.

4. Understanding What Happened

After you have received an alert, acknowledged it and started tracking the incident, it's time to start investigating. Often, this involves looking at logs, and if you only have one or two servers, it's relatively easy, but as soon as you add more, the process can get exponentially more difficult.

We recommend piping them all into a log search tool like (fellow Tech Partners Marketplace participant) Papertrail or Loggly. Those platforms afford you access to all of your logs from a single interface with the ability to see incoming lines in real-time or the functionality to search back to when the incident began (since you've clearly monitored and tracked all of that information in the first three steps).
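With rsyslog, shipping everything to a hosted log service is typically a one-line drop-in. For example, Papertrail accepts plain remote syslog, so a config like this starts forwarding immediately (the host and port are placeholders you'd replace with the values from your own account):

# /etc/rsyslog.d/90-papertrail.conf
# Forward all facilities and severities over UDP; host:port come from your account
*.* @logs.papertrailapp.com:12345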

5. Getting Access to Your Servers

If you're traveling internationally, access to the Internet via a free hotspot like the ones you find in Starbucks isn't always possible. It's always a great idea to order a portable 3G hotspot in advance of a trip. You can usually pick one up from the airport to get basic Internet access without paying ridiculous roaming charges. Once you have your connection, the next step is to make sure you can access your servers.

Both iPhone and Android have SSH and remote desktop apps available which allow you to quickly log into your servers to fix easy problems. Having those tools often saves a lot of time if you don't have access to your laptop, but they also introduce a security concern: If you open server logins to the world so you can log in from the dynamic IPs that change when you use mobile connectivity, then it's worth considering a multi-factor authentication layer. We use Duo Security for several reasons, with one major differentiator being the modules they have available for all major server operating systems to lock down our logins even further.
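To give a rough idea of what that looks like on a Linux server, Duo's login_duo tool hooks into SSH as a forced command. This is only a sketch of the sshd_config side; it assumes you've installed Duo Unix and populated its config file with the keys from your Duo account, and the install path can vary:

# /etc/ssh/sshd_config (excerpt)
# Require every SSH login to pass through Duo's second factor
ForceCommand /usr/sbin/login_duo
PermitTunnel no
AllowTcpForwarding no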

You're never going to escape the reality of system administration: If your server has a problem, you need to fix it. What you can get away from is the uncertainty of not having a clearly defined process for responding to issues when they arise.

-David Mytton, Server Density

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

June 8, 2011

MySQL Slow? Check for Fragmentation.

Let's say you have a website and you notice that pages that call your MySQL database are taking longer and longer to render. If you don't have a Database Administrator (DBA), this can be pretty frustrating. SoftLayer's Managed Hosting line of business employs some of the best DBAs in the country and is one of the only managed hosting providers that offers MySQL and MSSQL DBA services, and I don't just say that because I'm one of them ... We've got the certifications to prove it. :-)

Given my area of expertise, I wanted to share a few simple tips to help you tweak variables and improve the performance of your MySQL server. Since every application is different, this isn't necessarily a one-size-fits-all solution, but it'll at least give you a starting point for troubleshooting.

First: Get mysqltuner.pl. This is a fine script by Major Hayden that will give you some valuable information regarding the performance of your MySQL server.
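If you don't already have it, fetching and running the script takes just a couple of commands. The URL below assumes the script's current GitHub home, so double-check it before you download:

# Fetch and run MySQLTuner; it will prompt for MySQL login credentials
wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
perl mysqltuner.pl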

Second: Look for fragmented tables. What are fragmented tables? If there are random insertions into or deletions from the indexes of a table, the indexes may become fragmented. Fragmentation means that the physical ordering of the index pages on the disk is not close to the index ordering of the records on the pages, or that there are many unused pages in the 64-page blocks that were allocated to the index. The symptoms of a fragmented table are that it can take more disk space than needed and that queries can return more slowly, with more disk I/O than necessary. InnoDB users need to check for fragmentation often because when InnoDB marks data as deleted, it never overwrites the blocks with new data ... It just marks them as unusable. As a result, the data size is artificially inflated and data retrieval is slowed.

Fortunately, there is a way to see your table fragmentation: run a query against the information_schema to show every fragmented table and how much reclaimable space each one holds:

SELECT TABLE_SCHEMA, TABLE_NAME,
       CONCAT(ROUND(data_length / (1024 * 1024), 2), 'MB') DATA,
       CONCAT(ROUND(data_free / (1024 * 1024), 2), 'MB') FREE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA NOT IN ('information_schema', 'mysql')
  AND Data_free > 0;

Fixing the fragmentation is easy, but there is a caveat: defragmenting a table locks it, so make sure you can afford the lock. To fix a fragmented table, you can simply run OPTIMIZE TABLE <table name>; to rebuild the table and all of its indexes, or you can rebuild it by re-declaring the storage engine with ALTER TABLE <table name> ENGINE = InnoDB;

I have written a simple bash script to go through, defragment and optimize your tables:

#!/bin/bash
 
# MySQL credentials passed to every mysql call below
MYSQL_LOGIN='-u<user name> --password=<password>'
 
# Loop through every database except the information_schema
for db in $(echo "SHOW DATABASES;" | mysql $MYSQL_LOGIN | grep -v -e "Database" -e "information_schema")
do
        TABLES=$(echo "USE $db; SHOW TABLES;" | mysql $MYSQL_LOGIN | grep -v Tables_in_)
        echo "Switching to database $db"
        # Rebuild each table and its indexes
        for table in $TABLES
        do
                echo -n " * Optimizing table $table ... "
                echo "USE $db; OPTIMIZE TABLE $table" | mysql $MYSQL_LOGIN >/dev/null
                echo "done."
        done
done

You'd be surprised how much of an impact table fragmentation has on MySQL performance, and this is an easy way to quickly troubleshoot your database that "isn't as fast as it used to be." If you follow the above steps and still can't make sense of what's causing your database to lag, our Managed Hosting team is always here to work with you to get your servers back in shape ... And with the flexibility of month-to-month contract terms and the ability to add managed capabilities to specific pieces of your infrastructure, we have to earn your business every month with spectacular service.

-Lee
