Posts Tagged 'NAS'

August 1, 2013

The "Unified Field Theory" of Storage

This guest blog was contributed by William Rocca of OS NEXUS. OS NEXUS makes the QuantaStor Software Defined Storage platform, designed to tackle the storage challenges facing cloud computing, Big Data and high-performance applications.

Over the last decade, the creation and popularization of SAN/NAS systems simplified the management of storage into a single appliance so businesses could efficiently share, secure and manage data centrally. Fast forward about 10 years in storage innovation, and we're now rapidly changing from a world of proprietary hardware sold by big-iron vendors to open-source, scale-out storage technologies from software-only vendors that make use of commodity off-the-shelf hardware. Some of the new technologies are derivatives of traditional SAN/NAS with better scalability while others are completely new. Object storage technologies such as OpenStack SWIFT have created a foundation for whole new types of applications, and big data technologies like MongoDB, Riak and Hadoop go even further to blur the lines between storage and compute. These innovations provide a means for developing next-generation applications that can collect and analyze mountains of data. This is the exciting frontier of open storage today.

This frontier looks a lot like the "Wild West." With ad-hoc solutions that have great utility but are complex to set up and maintain, many users are effectively solving one-off problems, but these solutions are often narrowly defined and specifically designed for a particular application. The question everyone starts asking is, "Can't we just evolve to having one protocol ... one technology that unites them all?"

If each of these data-storing technologies has unique advantages for specific use cases or applications, the answer isn't to eliminate protocols. To borrow a well-known concept from physics, the solution lies in a "Unified Field Theory of Storage": weaving them together into a cohesive software platform that makes them simple to deploy, maintain and operate.

When you look at the latest generation of storage technologies, you'll notice a common thread: they're all highly available, scale-out and open source, and they serve as platforms for next-generation applications. While SAN/NAS storage is still the bread-and-butter enterprise storage platform today (and will be for some time to come), these older protocols often don't measure up to the needs of applications being developed today. They run into problems storing, processing and gleaning value out of the mountains of data we're all producing.

Thinking about these challenges, how do we make these next-generation open storage technologies easy to manage and turn-key to deploy? What kind of platform could bring them all together? In short, "What does the 'Unified Field Theory of Storage' look like?"

These are the questions we've been trying to answer for the last few years at OS NEXUS, and the result of our efforts is the QuantaStor Software Defined Storage platform. In its first versions, we focused on building a flexible foundation supporting the traditional SAN/NAS protocols, but with the launch of QuantaStor v3 this year, we introduced the first scale-out version of QuantaStor and integrated the first next-gen open storage technology, Gluster, into the platform. In June, we launched support for ZFS on Linux (ZoL) and enhanced the platform with a number of advanced enterprise features, such as snapshots, compression, deduplication and end-to-end checksums.

This is just the start, though. In our quest to solve the "Unified Field Theory of Storage," we're turning our eyes to integrating platforms like OpenStack SWIFT and Hadoop in QuantaStor v4 later this year. As these high-power technologies are streamlined under a single platform, end users will be able to select the type(s) of storage that best fit a given application without having to learn (or unlearn) specific technologies.

The "Unified Field Theory of Storage" is emerging, and we hope to make it downloadable. Visit OSNEXUS.com to keep an eye on our progress. If you want to incorporate QuantaStor into your environment, check out SoftLayer's preconfigured QuantaStor Mass Storage Server solution.

-William Rocca, OS NEXUS

October 18, 2011

Adding 'Moore' Storage Solutions

In 1965, Intel co-founder Gordon Moore observed an interesting trend: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase."

Moore was initially noting the number of transistors that can be placed on an integrated circuit at a relatively constant minimal cost. Because that measure has proven so representative of the progress of our technological manufacturing abilities, "Moore's Law" has become a cornerstone in discussions of pricing, capacity and speed of almost anything in the computer realm. You've probably heard the law used generically to refer to the constant improvements in technology: In two years, you can purchase twice as much capacity, speed, bandwidth or any other easily measurable and relevant technology metric for the price you would pay today and for the current levels of production.
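To put a rough number on that kind of doubling, here's a minimal back-of-the-envelope sketch in Python. The 32GB starting point and the two-year doubling period are illustrative assumptions for the sake of the arithmetic, not figures from Moore's original observation:

```python
# Back-of-the-envelope projection of "twice as much every two years."
# The starting capacity (32 GB) and the two-year doubling period are
# illustrative assumptions, not measured figures.

def projected_capacity_gb(start_gb: float, years: float, doubling_years: float = 2.0) -> float:
    """Capacity you could buy for the same price after `years` of steady doubling."""
    return start_gb * 2 ** (years / doubling_years)

if __name__ == "__main__":
    today_gb = 32.0  # e.g., a current flash drive
    for years in (2, 4, 10):
        print(f"In {years:2d} years: ~{projected_capacity_gb(today_gb, years):,.0f} GB "
              f"for the same price")
```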

Think back to your first computer. How much storage capacity did it have? You were excited to be counting in bytes and kilobytes ... "Look at all this space!" A few years later, you heard about people at NASA using "gigabytes" of space, and you were dumbfounded. Fast-forward a few more years, and you wonder how long your 32GB flash drive will last before you need to upgrade the capacity.

[Image: 32GB thumb drive]

As manufacturers have found ways to build bigger and faster drives, users have found ways to fill them up. As a result of this behavior, we generally go from "being able to use" a certain capacity to "needing to use" that capacity. From a hosting provider perspective, we've seen the same trend from our customers ... We'll introduce new high-capacity hard drives, and within weeks, we're getting calls about when we can double it. That's why we're always on the lookout for opportunities to incorporate product offerings that meet and (at least temporarily) exceed our customers' needs.

Today, we announced QuantaStor Storage Servers, dedicated mass storage appliances with exceptional cost-effectiveness, control and scalability. Built on SoftLayer Mass Storage dedicated servers with the OS NEXUS QuantaStor Storage Appliance OS, the solution supports up to 48TB of data with the perfect combination of performance economics, scalability and manageability. To give you a frame of reference, this is 48TB worth of hard drives:

[Image: 48TB worth of hard drives]

If you've been looking for a fantastic, high-capacity storage solution, you should give our QuantaStor offering a spin. The SAN (iSCSI) + NAS (NFS) storage system delivers advanced storage features including thin provisioning and remote replication. These capabilities make it ideally suited for a broad set of applications, including VM deployments, virtual desktops, and web and application servers. From what I've seen, it's at the top of the game right now, and it looks like it's a perfect option for long-term reliability and scalability.

-@nday91

May 15, 2009

Disaster Recovery Plan

A few days ago, I was reading a news story about a man who had just lost everything to a fire. One of the comments he made was that he had never thought to plan for something like this; it was the type of thing that happened to other people, never to him. I started thinking about how true that statement was. Many people just never think it will happen to them.

This type of situation happens every day in the IT field. Some sort of disaster causes a server to crash or simply stop working altogether, the drives on the server are completely corrupted, and the data is just gone. The question is: when this happens to you, will you be prepared? Thankfully, there are steps each person can take to limit the pain and downtime a situation like this can cause. Like any other disaster recovery plan, the more you are willing to put into it, the more protection you will have when disaster strikes.

This is where SoftLayer comes in. Here at SoftLayer, we understand the importance of providing our customers the means to create a good disaster recovery plan that meets their needs. We understand that a detailed disaster recovery plan will include things such as backups and replication. Our services such as NAS and EVault are perfect solutions for performing and managing the backups for your server. When looking into replication, we offer services such as iSCSI replication, RAID, and local and global load balancing, which provide our customers with the tools to replicate not only their data across multiple locations but their servers as well. Above all, we provide our private network to securely transfer this data between locations without impacting the traffic on your public network.

We can only hope that on the day disaster strikes, everyone has some plan in place to deal with it. There is nothing more frustrating in this industry than the loss of crucial data that in many instances cannot be recovered.

October 31, 2007

Backups

"ah - I don't need backups."
"Too busy to do backups - I'll get to that later."
"Backups? It costs too much."
"I don't need backups - MTBF of a Raptor is 1.2 Million hours."
"Oops - I forgot about doing backups."

Backups are one of the most commonly forgotten tasks of a system administrator. In some cases, they are never implemented. In other cases, they are implemented but not maintained. In still other cases, they are implemented with a great backup and recovery plan, but the system usage or requirements change and the backups are not altered to compensate.

A hard drive really is a fairly reliable piece of IT equipment. The WD 150GB Raptor has a rating of 1.2 Million hours MTBF. With that kind of mean time between failures, you would think that you would never have to worry about a hard drive failing. How willing are you to take that chance? What if you double your odds by setting up two drives in a RAID 1 configuration? Now can you afford to take that chance? How willing are you to gamble with your data?
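To make the "double your odds" question a little more concrete, here's a rough sketch in Python. It assumes drive failures are independent and uses a made-up 3% annual failure probability, and it deliberately ignores rebuild windows, controller faults and human error:

```python
# Rough, illustrative comparison of a single drive vs. a RAID 1 mirror.
# Assumes independent drive failures and a made-up 3% annual failure
# probability; ignores rebuild windows, controller faults and human error.

annual_failure_prob = 0.03  # hypothetical chance that one drive fails this year

single_drive_loss = annual_failure_prob   # lose the drive, lose the data
raid1_loss = annual_failure_prob ** 2     # both mirrored drives would have to fail

print(f"Single drive data-loss risk this year: {single_drive_loss:.2%}")
print(f"RAID 1 mirror data-loss risk this year: {raid1_loss:.4%}")
print("...and neither number helps if the file is deleted by mistake.")
```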

What if one of your system administrators accidentally deletes the wrong file? Maybe it's your apache config file. Maybe it's a piece of code you have been working on all day. Or, maybe your server gets compromised and you now have unknown trojans and back doors on your server. Now what do you do?

Working in a datacenter with thousands of servers, there are thousands and thousands of hard drives. When you see that many hard drives in production, you are naturally going to see some of them fail. I have seen small drives fail, large drives fail, and I have even seen RAID 1 mirrors completely fail beyond recovery. Is it bad hardware? Nope. Is it Murphy's Law? Nope. It's the laws of physics. Moving parts create heat and friction. Heat and friction cause failures. No piece of IT equipment is immune to failure.

That 1.2 million hours MTBF looks pretty impressive. For a round number, let's say there are 15,000 drives in the SL datacenter. 1,200,000 hours / 15,000 drives = 80 hours. That means that every 80 hours, one hard drive in the SL datacenter could potentially fail. Now how impressive is that number?
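That back-of-the-envelope division, written out as a quick sketch (the 1.2 million hour MTBF is the quoted drive spec; the 15,000-drive fleet is the round number above):

```python
# The fleet-level arithmetic from the paragraph above: with many drives
# running at once, the expected time between failures across the whole
# fleet is roughly the per-drive MTBF divided by the number of drives.

drive_mtbf_hours = 1_200_000  # quoted MTBF of a single WD Raptor
fleet_size = 15_000           # round-number estimate of drives in the datacenter

hours_between_failures = drive_mtbf_hours / fleet_size
print(f"Roughly one drive failure every {hours_between_failures:.0f} hours "
      f"(about every {hours_between_failures / 24:.1f} days)")
```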

Ultimately, regardless of the levels of redundancy you implement, there is always a chance of a failure - hardware or human - that results in data loss. The question is - how important is that data to you? In the event of a catastrophic failure, are you willing to just perform an OS reload and start from scratch? Or, if a file is deleted and unrecoverable, are you willing to start over on your project? And lastly, how much downtime can you afford to endure?

Regardless of how much redundancy you can build into your infrastructure with the likes of load balancers, RAID arrays, active/passive servers, hot spares, etc., you should always have a good plan for doing backups as well as checking and maintaining those backups.

Have you checked your backups lately?

-SamF
