Posts Tagged 'Disaster Recovery'

November 19, 2013

Protect Your Data: Configure EVault for Server Backups

In "The Tenth Anniversary" episode of "Everybody Loves Raymond," Raymond accidentally records the Super Bowl over his wedding video. He hilariously tries to compensate for his gaffe by renewing his wedding vows so he can make a new tape for his wife Debra. If life imitates art, it's worth considering what would happen if that tape held your business data. It would be a disaster!

While it's unlikely that one of your sysadmins will accidentally record the Super Bowl over the data in your database server cluster, data loss can occur in a number of ways. If your business data is not protected and backed up, it's unlikely that you'll have a neat and tidy sitcom episode resolution. Luckily, SoftLayer provides simple, inexpensive backup capabilities with software such as EVault, so you shouldn't ever be worried about anyone pulling a Raymond on your data.

The following quick, four-step process walks you through how to protect and back up your data by subscribing to SoftLayer's EVault Backup client. This software enables you to design and set your backup schedule, protecting your business from unexpected costs due to accidental deletions, viruses, and other disasters. To follow along on your own servers, your computing instances or bare metal servers need to be provisioned, and you need root- or administrator-level access to them. For the sake of brevity, I'll be using a Linux operating system in this guide, but if you're running Windows, the general process is the same.

Step 1 - Order EVault Backup for the server or computing instance

  1. Log into the SoftLayer Customer Portal and select the server(s) that needs storage services from the device list.
  2. Scroll down to the Storage section. Select the Add (or Modify) link in the right-hand corner of the EVault record to place an order for an EVault Backup client subscription.
  3. On the EVault ordering screen, select either Local or Remote Data Center and the desired amount of storage. Agree to the terms and conditions and click the Order EVault button to place your EVault storage order.
  4. The order is typically provisioned in 5 minutes or less and the system creates a user and password for the new instance of EVault.
  5. Click Services→Storage→EVault and expand the EVAULT link to make note of the user credentials, which will be used in Step 3.

Step 2 - Download the EVault agent on the server or computing instance

  1. SSH into the server or computing instance and run the following command:
    # wget -N http://downloads.service.softlayer.com/evault/evault_manual.sh

Step 3 - Register the server or computing instance with EVault in order to run back up and restore jobs

  1. From the command prompt on the server or compute instance run the following command to register it with EVault:
    ~]# sh ./evault_manual.sh
  2. In the ensuing prompts, enter the credentials that were noted in Step 1.5 and use ev-webcc01.service.softlayer.com for the web-based agent console address.

    Note: In the event the agent fails to register with EVault, you can quickly register the agent manually by running <Installation directory>/register.

Once you've made it to this point, you're ready to run backup and restore jobs.

Step 4 - Log in to the EVault console with WebCCLogin

  1. From the SoftLayer Customer Portal, click Services→Storage→EVault.
  2. Expand the server or compute instance to which EVault Backup is attached. In the right-hand corner of the server entry you will find a link to WebCCLogin.
  3. Click the WebCCLogin link for the EVault Web CentralControl screen. Type in the credentials from Step 1.5 and you’ll be taken to the EVault Backup and Restore interface.
  4. You are now ready to run your backup and restore jobs!

Check your backups often to confirm that they're being created when, where, and how you want them to be created. To prepare for any possible disaster recovery scenario, schedule periodic tests of your backups: Restore the most recent backup of your production server to an internal server. That way, if someone pulls a Raymond on your server(s), you'll be able to get all of your data back online quickly. If you're interested in learning more, visit the EVault Backup page on KnowledgeLayer.
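A periodic restore test can be as simple as restoring a backup to a scratch location and comparing checksums against the live data. Here is a minimal sketch in shell; the function is generic, and the throwaway directories and demo file are illustrative stand-ins, not part of the EVault tooling:

```shell
#!/bin/sh
# Minimal sketch of a restore test: after restoring a backup to a scratch
# location, confirm it matches the live data by checksum. The demo
# directories and data below are illustrative, not EVault specifics.

verify_restore() {
    # Returns 0 when both trees contain identical files.
    src_sums=$( cd "$1" && find . -type f -exec cksum {} + | sort )
    dst_sums=$( cd "$2" && find . -type f -exec cksum {} + | sort )
    [ "$src_sums" = "$dst_sums" ]
}

# Demo with throwaway directories standing in for live data and a restore.
live=$(mktemp -d)
restored=$(mktemp -d)
echo "orders" > "$live/db.txt"
echo "orders" > "$restored/db.txt"

if verify_restore "$live" "$restored"; then
    echo "restore OK"
else
    echo "restore MISMATCH"
fi
rm -rf "$live" "$restored"
```

Run against matching trees this prints "restore OK"; any differing, missing, or extra file makes the checksum lists diverge and flags a mismatch.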

-Vinayak Harnoor

Vinayak Harnoor is a Technical Architect with the IBM Global Technology Services (GTS) Global Cloud Ecosystem team.

January 3, 2012

Hosting Resolutions for the New Year

It's a new year, and though the only real change on January 1 is the last digit in the year, that change presents a blank canvas. In the past, I haven't really made New Year's resolutions, but because some old Mayan calendar says this is my last chance, I thought I'd take advantage of it. In reality, being inspired to do anything that promotes positive change is great, so in the spirit of New Year's improvements, I thought I'd take a look at the resolutions hosting customers might want to make for 2012.

What in your work/hosting life would you like to change? It's easy to ignore or look past small goals and improvements we can make on a daily basis, so let's take advantage of the "clean slate" 2012 provides us to be intentional about making life easier. A few small changes can mean the difference between a great day in the office and a frantic overnight coffee binge (which we all know is so great for your health). Because these changes are relatively insignificant, you might not recognize anything in particular that needs to change right off the bat. You might start with a daunting question like, "What should I do to improve my workflow or reduce work-related stress?" Luckily, any large goal like that can be broken down into smaller pieces that are much easier to manage.

Enough with the theoretical ... let's talk practical. In 2012, your hosting-related New Year's resolutions should revolve around innovation, conservation, security and redundancy.

Innovation

When it comes to hosting, a customer's experience and satisfaction are the most important focus of a successful business. There's an old cliche that says, "If you always do what you've always done, you'll always get what you've always gotten," and that's absolutely correct when it comes to building your business in the new year. What can you change or automate to make your business better? Are you intentionally "thinking outside the box?"

Conservation

The idea of "conservation" and "green hosting" has been written off as a marketing gimmick in the world of hosting, but there's something to be said for looking at your utilization from that perspective. We could talk about the environmental impact of hosting, and finding a host that is intentional about finding greener ways to do business, but if you're renting a server, you might feel a little disconnected from that process. When you're looking at your infrastructure in the New Year, determine whether your infrastructure is being used efficiently by your workload. Are there tools you can take advantage of to track your infrastructure's performance? Are you able to make changes quickly if/when you find inefficiencies?

Security

Another huge IT-related resolution you should make would be around security. Keeping your system tight and locked up can get forgotten when you're pushing development changes or optimizing your networking, so the beginning of the year is a great time to address any possible flaws in your security. Try to start with simple changes in your normal security practices:

  1. Make sure your operating systems and software packages are regularly patched.
  2. Keep a strict password policy that requires regular password updates.
  3. Run system log checks regularly.
  4. Reevaluate your system firewall or ACL lists.
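As one concrete example of a routine log check, the short script below summarizes failed SSH logins per source address. It's a sketch under a couple of assumptions: the line format matches typical sshd entries found in /var/log/secure (RHEL-family) or /var/log/auth.log (Debian-family), and the sample log here is fabricated purely for the demo.

```shell
#!/bin/sh
# Hypothetical sketch of one routine log check: count "Failed password"
# entries per source IP, worst offenders first. The sample log stands in
# for /var/log/secure or /var/log/auth.log.

failed_logins() {
    grep "Failed password" "$1" \
      | awk '{ for (i = 1; i <= NF; i++) if ($i == "from") print $(i+1) }' \
      | sort | uniq -c | sort -rn
}

# Demo against a throwaway sample log.
log=$(mktemp)
cat > "$log" <<'EOF'
Jan  3 01:02:03 host sshd[100]: Failed password for root from 203.0.113.9 port 4242 ssh2
Jan  3 01:02:05 host sshd[101]: Failed password for root from 203.0.113.9 port 4243 ssh2
Jan  3 01:02:09 host sshd[102]: Accepted password for admin from 198.51.100.7 port 5000 ssh2
EOF
failed_logins "$log"
rm -f "$log"
```

Against the sample log this prints a single line showing two failures from 203.0.113.9; a sudden spike in that output on a real server is your cue to tighten the firewall or fail2ban rules.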

All of these safety nets may be set up, but they may not be functioning at their best. Even precautions as simple as locking your client or workstation when not in use can help stop attacks from local risks and prying eyes ... And this practice is very important if you keep system backups on the same workstations that you use. Imagine if someone local to your workstation or client was able to retrieve your backup file and restore it ... Your security measures would effectively be completely nullified.

Redundancy

Speaking of backups, when was your most recent backup? When is your next backup? How long would it take you to restore your site and/or data if your current server(s) were to disappear from the face of the Earth? These questions are easy to shrug off when you don't need to answer them, but by the time you do need to answer them, it's already too late. Create a backup and disaster recovery plan. Today. And automate it so you won't have the ability to forget to execute on it.
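To make "automate it" concrete, here is a hedged sketch of a dated snapshot with a seven-copy retention window, built from standard tools only. The directories are temporary stand-ins for real data and storage, and the cron line in the final comment is the piece you would actually install; none of it reflects any particular vendor's backup product.

```shell
#!/bin/sh
# Illustrative only: a dated tar snapshot with simple retention, standing
# in for whatever backup plan you adopt. The directories here are
# temporary stand-ins for real data and off-site storage.

data_dir=$(mktemp -d)     # stand-in for /var/www or your database dumps
backup_dir=$(mktemp -d)   # stand-in for off-site or second-disk storage
echo "site data" > "$data_dir/index.html"

take_backup() {
    # Snapshot the data directory, then prune to the newest seven copies.
    stamp=$(date +%Y%m%d%H%M%S)
    tar -czf "$backup_dir/backup-$stamp.tar.gz" -C "$data_dir" .
    ls -1t "$backup_dir"/backup-*.tar.gz | tail -n +8 | xargs -r rm -f
}

take_backup
ls "$backup_dir"

# In production, a crontab entry would run the real script nightly, e.g.:
#   0 2 * * * /usr/local/bin/take_backup.sh
```

The point of the prune step is exactly the "automate it so you won't have the ability to forget" advice: retention happens on every run, not when someone remembers to clean up.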

Make your objectives clear, and set calendar reminders throughout the year to confirm that you're executing on your goals. If some of these tasks are very daunting or difficult to implement in your current setup, don't get discouraged ... Set small goals and chip away at the bigger objective. Progress over time will speak for itself. Doing nothing won't get you anywhere.

Happy New Year!

-Jonathan

September 14, 2011

FaxLogic: Tech Partner Spotlight

This is a guest blog from FaxLogic CEO Eric Lenington. The unique FaxLogic service combines the best of analog fax, Internet fax, and fax servers to create a highly reliable, secure and scalable collaborative environment.

Why the (Right) Cloud is the Best Place for Your Documents

Every business produces and consumes documents — this includes both paper and digital, both those created internally and those received from customers and business partners — all needing to be sorted and organized and most needing to be safely stored and easily retrieved (and ultimately, securely disposed of when they are no longer needed). The vast majority of companies find themselves trying to do this today in highly fragmented ways and usually with radically different approaches for paper documents than with digital ones. Often different departments, or even different groups within a department, develop their own way to deal with "their" documents, a way that "works for them."

Digital documents are usually stored on in-house servers, on "shares" with folder structures that may only make sense to the person who originally built them — not to the person trying to find something in them. And few companies can say that they don't have reams of paper files stored in file rooms or in "personal" file cabinets. FaxLogic helps our customers solve this problem by seamlessly integrating their paper and digital worlds.

We do this by supporting their existing network of fax machines, scanners and multi-function printers (the "gateways" to the digital world for paper documents) and by incorporating key features of current technologies that we are all familiar with – like email and search engines – into the realm of organizing, archiving, retrieving and sharing documents. FaxLogic is a cloud-based service, running on a cloud-based infrastructure, and it uses "the cloud" to safely and securely store our customer's documents (whether paper or digital). This was no accident, and that is what I will focus on in this article, trying to "demystify" the cloud a bit, and discuss why it's the best place for your documents.

What is "the Cloud" and What Value does it Bring?
Wikipedia, one of my favorite sources for good information, defines "cloud computing" like this:

Cloud computing is the delivery of computing as a service rather than a product, whereby shared resources, software and information are provided to computers and other devices as a utility (like the electricity grid) over a network (typically the Internet).

As I said earlier, FaxLogic is a cloud-based service; we are the "application" (document management) that is delivered to the "client" (our customer's web browser). And we run on a cloud-based infrastructure, using providers like SoftLayer to manage the hardware layer that our application runs on and the networking layer that we use to deliver our service to our customers. Leveraging that "best of breed" infrastructure is a huge win for us, letting us focus where we add value – our application – while leaving the "plumbing" to others. Of course, a choice like that isn't made lightly.

From Wikipedia's definition, the term "shared resources" is the key. By leveraging cloud-based infrastructure and platform resources, we are able to use a small portion of a much larger and more robust environment than we could economically build ourselves. But the big kicker is that even though we are using only a small portion of that environment, we get to take advantage of the whole architecture and all its capabilities, just as if we were the only application running on it.

The "80% Rule"
An anecdotal number that's been thrown around a lot, the "80% rule" says that 80% of all businesses fail within some short time period after a major catastrophe, like a fire, flood or earthquake. But this isn't just an anecdote; failure rates in the 60-90% range are real and well-documented. A study conducted by the insurance giant Chubb in 2008 put the likelihood of business failure after a fire at 70%. According to FEMA, 80% of the businesses affected by Hurricane Andrew in 1992 that had no disaster recovery plan in place were out of business within three years. I won't bore you with a long list of depressing statistics; a quick Google search will turn up many more. The point is that data loss, whether caused by natural disaster, human error, or malicious activity, is, more often than not, very difficult to overcome.

Paper files stored in file cabinets, and even digital files stored on backed-up in-house servers, are vulnerable. Ask yourself what you would do tomorrow if even half of the documents critical to your business were destroyed tonight.

The FaxLogic Cloud Solution
Now, I don't want to suggest for one second that FaxLogic is the single solution for surviving such an event or that our platform should be thought of as a comprehensive disaster recovery solution. It is neither. But it is a critical part of the solution. When it comes to disaster recovery, Ben Franklin's "ounce of prevention" couldn't be more relevant. And as it applies to your business documents, that ounce is to get those documents out of harm's way in the first place. This is where the cloud comes in.

Companies like SoftLayer provide cloud storage as a service – a highly scalable, secure environment to safely store files of virtually any kind. The architecture that such services are built on and the layers of redundancy they incorporate are beyond the reach of most small and many medium-sized companies, but through the magic of cloud computing, we only need a small portion of that shared resource while still getting to take advantage of the whole thing. The bottom line is that a well-designed cloud storage service will be hundreds or even thousands of times more reliable and durable than anything most businesses could economically build themselves, not to mention more secure.

FaxLogic takes our small portion of that shared resource and, through our application, makes it even more reliable and durable by doing things like ensuring broad geographic distribution of multiple copies of each file, so there is no single point of failure even in the face of a major regional disaster.

Beyond the Worst Case Scenario
Secure, reliable cloud-based storage is just the basic building block that our application makes useful. Just the fact that your business documents are safer in the cloud isn't the whole story, nor is it the whole value proposition of the cloud. Beyond the worst case scenario, storing your documents in the cloud brings real and tangible benefits to your day-to-day activities. We make it easy to capture both paper and digital documents and store them in the cloud, organize and easily find your documents when you need them, collaborate and share documents while controlling who has access to confidential information, and manage everything from a simple browser-based interface.

Think about how much easier day-to-day activity would be with capabilities like being able to access a shared document library from any Internet-enabled device, instantly find a faxed copy of a purchase order from six months ago by knowing only the name of the sender, or easily pull up a client's latest work order revision without having to figure out whose desk the client's folder is on. We use the cloud to make this possible. By getting your documents out of their hiding places (stacks of paper on people's desks, file cabinets down the hall, or even "shares" on local servers), you make that information more freely accessible to those who need it.

Take Action
Businesses of all sizes can benefit, and are benefiting today, from a wide range of cloud-based services, most of which weren't even available five years ago. The underlying value proposition they all have in common is that they give each customer access to a small piece of a large "shared resource," one that generally wouldn't be economically feasible to build and support in-house. And each customer can take advantage of the scale and capabilities of the whole resource. When it comes to capturing, storing, organizing, retrieving and sharing documents, the cloud's value proposition offers a clear advantage over any on-site approach.

FaxLogic has built a best-in-class cloud-based application on top of best-in-class cloud-based infrastructure and platform services, giving our customers a multiple of that value proposition. By letting our customers leverage their existing equipment and without requiring radical changes to their existing business processes, we make it easy to start taking advantage of the benefits of cloud storage for all of their paper and digital documents.

-Eric Lenington, FaxLogic

This guest blog series highlights companies in SoftLayer's Technology Partners Marketplace.
These Partners have built their businesses on the SoftLayer Platform, and we're excited for them to tell their stories. New Partners will be added to the Marketplace each month, so stay tuned for many more to come.

September 5, 2011

How Scalable Are You?

The Northeastern part of the United States saw two natural disasters within five days of each other. The first was in the Washington, D.C. area: a magnitude 5.8 earthquake on August 23, 2011. On August 28, Hurricane Irene made her way up the east coast, leaving nearly 5.5 million people without power. We do everything we can to prepare our facilities for natural disasters (generator power backup, staffing, redundant bandwidth links and providers, etc.), and given the recent events, now might be a good time to start thinking about how your servers respond when something out of the ordinary happens ... Let's look at two relatively easy ways you can set your business up to scale and recover.

The first option you may consider would be to set up a multi-tiered environment by deploying multiple servers in various geographical locations. Your servers in each location could be accessed via load balancing or round robin DNS. In this kind of high-availability environment, your servers could handle the incoming requests more quickly with the load being split amongst the multiple data centers. The failover would be just a few seconds should you lose connectivity to one of the locations.
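Round robin DNS in that first option amounts to publishing multiple A records for one hostname and letting resolvers rotate through them. A hypothetical BIND-style zone fragment (the hostname, TTL, and addresses are placeholders, not SoftLayer specifics) would look something like this:

```
; One hostname answered from three data centers.
www.example.com.  300  IN  A  192.0.2.10     ; data center A
www.example.com.  300  IN  A  198.51.100.10  ; data center B
www.example.com.  300  IN  A  203.0.113.10   ; data center C
```

One caveat worth noting: with plain round robin DNS, how quickly clients stop hitting a failed location depends on the record's TTL (300 seconds in this sketch), so the "few seconds" failover figure applies more directly to a load balancer that health-checks its pool.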

The second option to consider would be the private image repository for our CloudLayer Computing. This option allows you to save a private image template in different data centers, each ready for quick deployment without having to install and configure the same operating system and applications. Should you need additional resources or lose connectivity to your instance in one facility, you can deploy the saved image in another facility. The failover time would be only the provisioning time of the Computing Instance ... which doesn't take too long.

Scalability makes sense no matter what situation you may be facing – from natural disaster to hitting the front page of Reddit. If you have any questions about these scalability options, "Click to Chat" on our site or give us a call and a sales rep can help you get prepared. Your infrastructure may have come through these recent events unscathed, but don't let that lull you into a false sense of security. The "It's better to be safe than sorry" cliche is a cliche for a reason: It's worth saying often.

-Greg

June 18, 2008

Planning for Data Center Disasters Doesn’t Have to Cost a Lot of $$

One of the hot topics over the past couple of weeks in our growing industry has been how to minimize downtime should your (or your host’s) data center experience catastrophic failure leading to outages that could span multiple days.

Some will think that it is the host’s responsibility to essentially maintain a spare data center into which they can migrate customers in case of catastrophe. The reason we don’t do this is simple economics. To maintain this type of redundancy, we’d need to charge you at least double our current rates. Because costs begin jumping exponentially instead of linearly as extensive redundancy is added, we’d likely need to charge you more than double our current rates. You know what? Nobody would buy at that point. It would be above the “reservation price” of the market. Go check your old Econ 101 notes for more details.

Given this economic reality, we at SoftLayer provide the infrastructure and tools for you to recover quickly from a catastrophe with minimal cost and downtime. But, every customer must determine which tools to use and build a plan that suits the needs of the business.

One way to do this is to maintain a hot-synched copy of your server at a second of our three geographically diverse locations. Should catastrophe happen to the location of your server, you will stay up and have no downtime. Many of you do this already, even keeping servers at multiple hosts. According to our customer surveys, 61% of our customers use multiple providers for exactly that reason – to minimize business risk.

Now I know what you’re thinking – “why should I maintain double redundancy and double my costs if you won’t do it?” Believe me, I understand this - I realize that your profit margins may not be able to handle a doubling of your costs. That is why SoftLayer provides the infrastructure and tools to provide an affordable alternative to running double infrastructure in multiple locations in case of catastrophe.

SoftLayer’s eVault offering can be a great cost effective alternative to the cost of placing servers in multiple locations. Justin Scott has already blogged about the rich backup features of eVault and how his backup data is in Seattle while his server is in Dallas, so I won’t continue to restate what he has already said. I will add that eVault is available in each of our data centers, so no matter where your server is at SoftLayer, you can work with your sales rep to have your eVault backups in a different location. Thus, for prices that are WAY lower than an extra server (eVault starts at $20/month), you can keep near real-time backups of your server data off site. And because the data transfer between locations happens on SoftLayer’s private network, your data is secure and the transfer doesn’t count toward your bandwidth allotment.

So let’s say your server is in our new Washington DC data center and your eVault backups are kept in one of our Dallas data centers. A terrorist group decides to bomb data centers in the Washington DC area in an attempt to cripple US government infrastructure and our facility is affected and won’t be back up for several days. At this point, you can order a server in Dallas, and once it is provisioned in an hour or so, you restore the eVault backup of your choice, wait on DNS to propagate based on TTL, and you’re rolling again.

Granted, you do experience some downtime with this recovery strategy. But the tradeoff is that you are back up and running smoothly after only a brief outage, for a contingency cost that starts at just $20 per month. And when you factor in your SLA credit on the destroyed server, which offsets the cost of ordering a new server, the cost of your eVault is the only cost of this recovery plan.

This is much less than doubling your costs with offsite servers to almost guarantee no downtime. The reason that I throw in the word “almost” is that if an asteroid storm takes out all of our locations and your other providers’ locations, you will experience downtime. Significant downtime.

-Gary

June 4, 2008

Wait … Back up. I Missed Something!

I’ve been around computers all my life (OK, since 1977 but that’s almost all my life) and was lucky to get my first computer in 1983.

Over the summer of 1984, I was deeply embroiled in the largest programming project of my life (up to that point), coding Z80 ASM on my trusty CP/M computer, when I encountered the most dreaded of all BDOS errors: “BDOS ERROR ON B: BAD SECTOR.”

In its most mild form, this cryptic message simply means “copy this data to another disk before this one fails.” However, in this specific instance, it represented the most severe case… “this disk is toast, kaputt, finito, your data is GONE!!!”

Via the School of Hard Knocks, I learned the value of keeping proper backups that day.

If you’ve been in this game for longer than about 10 milliseconds, it’s probable that you’ve experienced data loss in one form or another. Over the years, I’ve seen just about every kind of data loss imaginable, from the 1980s accountant who tacked her data floppy to the filing cabinet with a magnet so she wouldn’t misplace it, all the way to enterprise/mainframe-class SAN equipment that pulverizes terabytes of critical data in less than a heartbeat due to operator error on the part of a contractor.

I’ve consulted with thousands of individuals and companies about their backup implementations and strategies, and am no longer surprised by administrators who believe they have a foolproof backup utilizing a secondary hard disk in their systems. I have witnessed disk controller failures which corrupt the contents of all attached disk drives, operator error and/or forgetfulness that leave gaping holes in so-called backup strategies and other random disasters. On the other side of the coin, I have personally experienced tragic media failure from “traditional backups” utilizing removable media such as tapes and/or CD/DVD/etc.

Your data is your life. I’ve waited up until this point to mention this, because it should be painfully obvious to every administrator, but in my experience the mentality is along the lines of “My data exists, therefore it is safe.” What happens when your data ceases to exist, and you become aware of the flaws in your backup plan? I’ll tell you – you go bankrupt, you go out of business, you get sued, you lose your job, you go homeless, and so-on. Sure, maybe those things won’t happen to you, but is your livelihood worth the gamble?

“But Justin… my data is safe because it’s stored on a RAID mirror!” I disagree. Your data is AVAILABLE, your data is FAULT TOLERANT, but it is not SAFE. RAID controllers fail. Disaster happens. Disgruntled or improperly trained personnel type ‘rm -rf /’ or accidentally select the wrong physical device when working with the Disk Manager in Windows. Mistakes happen. The unforeseeable, unavoidable, unthinkable happens.

Safe data is geographically diverse data. Safe data is up-to-date data. Safe data is readily retrievable data. Safe data is more than a single point-in-time instance.

Unsafe data is “all your eggs in one basket.” Unsafe data is “I’ll get around to doing that backup tomorrow.” Unsafe data is “I stored the backups at my house which is also underwater now.” Unsafe data is “I only have yesterday’s backup and last week’s backup, and this data disappeared two days ago.”

SoftLayer’s customers are privileged to have the option to build a truly safe data backup strategy by employing the Evault option on StorageLayer. This solution provides instantaneous off-site backups and efficiently utilizes tight compression and block-level delta technologies, is fully automated, has an extremely flexible retention policy system permitting multiple tiers of recovery points-in-time, is always online via our very sophisticated private network for speedy recovery, and most importantly—is incredibly economical for the value it provides. To really pour on the industry-speak acronym soup, it gives the customer the tools for their BCP to provide a DR scenario with the fastest RTO with the best RPO that any CAB would approve because of its obvious TCR (Total Cost of Recovery). Ok, so I made that last one up… but if you don’t recover from data loss, what does it cost you?

On my personal server, I utilize this offering to protect more than 22 GB of data. It backs up my entire server daily, keeping no less than seven daily copies representing at least one week of data. It backs up my databases hourly, keeping no less than 72 hourly copies representing at least three days of data. It does all this seamlessly, in the background, and emails me when it is successful or if there is an issue.

Most importantly, it keeps my data safe in Seattle, while my server is located in Dallas. Alternatively, if my server were located in Seattle, I could choose for my data to be stored in Dallas or our new Washington DC facility. Here’s the kicker, though. It provides me the ability to have this level of protection, with all the bells and whistles mentioned above, without overstepping the boundary of my 10 GB service. That’s right, I have 72 copies of my database and 7 copies of my server, of which the original data totals in excess of 22 GB, stored within 10 GB on the backup server.
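That arithmetic only works because the service stores compact deltas rather than full copies of each recovery point. Here is a toy illustration of the principle, using plain diff as a stand-in delta engine; nothing below reflects EVault's actual on-disk format.

```shell
#!/bin/sh
# Toy illustration of delta-based retention: keep one full copy plus a
# small diff per revision, instead of many full copies. Uses diff as a
# stand-in delta engine; this is not EVault's actual format.

store=$(mktemp -d)
data=$(mktemp)

# "Day 1": take a full backup of a 1,000-line file.
seq 1 1000 > "$data"
cp "$data" "$store/full"

# "Day 2": one line changes; store only the delta.
sed 's/^500$/500-changed/' "$data" > "$data.new" && mv "$data.new" "$data"
diff "$store/full" "$data" > "$store/delta-day2" || true

full_size=$(wc -c < "$store/full")
delta_size=$(wc -c < "$store/delta-day2")
echo "full: $full_size bytes, delta: $delta_size bytes"

rm -rf "$store" "$data"
```

The delta is a few dozen bytes against a full copy of several kilobytes, which is the same reason dozens of recovery points of a mostly-static dataset can fit in less space than one full copy.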

That’s more than sufficient for my needs, but I could retain weekly data or monthly data without significant increase in storage requirements, due to the nature of my dataset.

This service costs a mere $20/mo, or $240/yr. How much would you expect to pay to be able to sleep at night, knowing your data is safe?

Are you missing something? Wait … Backup!

-Justin
