Posts Tagged 'NAS'

November 11, 2014

Which storage solution is best for your project?

Before building applications around our network storage, here’s a refresher on what network storage is, how it is used, the different types available, and the best uses for each.

What is network storage? Why would you use it?

Appropriately named, network storage is storage attached to a server over our network, not to be confused with direct attached storage (DAS), which is a hard drive located inside the server (or connected to it with a cable such as SCSI or USB). Although DAS transfers data to a server faster than network storage, which is subject to network latency (offset somewhat by system caching), there is still a strong place for network storage.

Many different servers can access network storage, and with some network storage solutions, more than one server can get data from the same shared storage volume simultaneously. This comes in handy if one server dies, because another can attach the same storage volume and pick up where the first left off.

With DAS, planned downtime for server upgrades, the potential for data loss, and the work of provisioning larger or additional servers can all slow down productivity. Network storage is not bound by the physical constraints of internal drives or the costs associated with those servers.

Because SoftLayer manages the disk space of our network storage products, there's no need to worry about rebuilding a redundant array of inexpensive disks (RAID) or dealing with failed disks. If a disk fails, SoftLayer automatically replaces it and rebuilds the RAID; in most cases you wouldn't even know the change occurred.

Select network storage solutions are available with tools for your important data. Schedule snapshots of your data, promote snapshots to full volumes, or reset your data to the snapshot point.

And with network storage, downtime is minimal. Disaster recovery tools available on select storage solutions let you send a command to quickly fail over to a different data center, so you can still access your data if our network is ever down in one data center.

Types of Network Storage and How They Differ

Storage Area Network (SAN) or Block Storage

Block storage works like DAS, just remotely: only a single server can access a block storage volume at a time. SoftLayer's block storage uses the Internet Small Computer System Interface (iSCSI) protocol over a secure Transmission Control Protocol/Internet Protocol (TCP/IP) connection and has excellent features for backup and disaster recovery; adding snapshot schedules and failover redundancy makes it a powerful enterprise solution.
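
To make that concrete, here's a minimal sketch (not an official SoftLayer script) of attaching a block storage volume from a Linux server using the open-iscsi iscsiadm tool. The portal address, target IQN, and CHAP credentials are placeholders for the values your storage portal provides:

```python
# Attach an iSCSI block storage volume on Linux via open-iscsi's iscsiadm.
# PORTAL, TARGET, and the CHAP values below are hypothetical placeholders.
import subprocess

PORTAL = "10.0.0.10:3260"                    # hypothetical target portal
TARGET = "iqn.2014-11.com.example:volume01"  # hypothetical target IQN

def run(cmd):
    """Echo a command, run it, and raise if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets exposed by the portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Set CHAP credentials on the discovered node record (placeholder values).
for key, value in [
    ("node.session.auth.authmethod", "CHAP"),
    ("node.session.auth.username", "chap-user"),
    ("node.session.auth.password", "chap-secret"),
]:
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL,
         "-o", "update", "-n", key, "-v", value])

# Log in; the volume then shows up as a local block device (e.g., /dev/sdb)
# that you can partition, format, and mount just like DAS.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```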

Network Attached Storage (NAS) or File Storage

File storage acts like a remote file system. It runs a slim operating system that lets servers treat it like a remote directory structure, and multiple servers can share files on the same storage simultaneously. Our new consistent performance storage lets you share files quickly and easily over the Network File System (NFS) protocol, with your choice of performance level and secure connections.
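
As an illustration, here's a minimal sketch of mounting an NFS file storage volume on Linux; the export host and path are placeholders for the values shown in your storage portal:

```python
# Mount an NFS export on Linux. EXPORT is a hypothetical placeholder.
import os
import subprocess

EXPORT = "nfs.example.com:/export/shared01"  # hypothetical NFS export
MOUNTPOINT = "/mnt/shared"

os.makedirs(MOUNTPOINT, exist_ok=True)

# Mount the export; every server that mounts the same export sees the
# same directory tree, which is what enables simultaneous sharing.
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=3,hard", EXPORT, MOUNTPOINT],
    check=True,
)
print("mounted", EXPORT, "at", MOUNTPOINT)
```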

We also offer a Common Internet File System (CIFS) option for Windows, which requires a credential that grants access to any server on our private network. File storage can only be accessed by SoftLayer servers.

Object Storage

Object storage is a standalone storage entity with its own representational state transfer (REST) API that grants applications (not operating systems) access to the files stored there. Because it sits on a public network, servers in any of our data centers can directly access those files. Object storage also differs in the way files are stored: there is no directory structure; instead, metadata tags are used to categorize and search for files. In conjunction with a content delivery network (CDN), you can quickly serve files to users or mobile devices in close proximity.
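
To illustrate, here's a minimal sketch of storing a file through a Swift-style object storage REST API. The endpoint, container name, and token are placeholders; a real client would authenticate first to obtain the storage URL and token:

```python
# Store an object with metadata tags via a Swift-style REST API.
# STORAGE_URL and TOKEN are hypothetical placeholders.
import requests

STORAGE_URL = "https://objectstore.example.com/v1/AUTH_account"  # hypothetical
TOKEN = "replace-with-auth-token"                                # hypothetical

# Container "photos", object name "cat.jpg": a flat namespace, not a
# directory tree.
url = f"{STORAGE_URL}/photos/cat.jpg"
headers = {
    "X-Auth-Token": TOKEN,
    "Content-Type": "image/jpeg",
    # Metadata tags ride along as headers and categorize the object.
    "X-Object-Meta-Album": "pets",
    "X-Object-Meta-Year": "2014",
}
with open("cat.jpg", "rb") as f:
    response = requests.put(url, data=f, headers=headers)
response.raise_for_status()
print("stored object, HTTP", response.status_code)
```

Fetching the object back is a plain GET against the same URL, which is why pairing object storage with a CDN is so natural.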

With pay-as-you-go pricing, you don’t have to worry about running out of space. We only charge based on the greatest usage in any given day. That means you can get started right now for free!

Which storage solution is best for your project?

If you are still unsure which network storage option you should build your applications around, take this eight-question quiz to find out whether object, file, or block storage will work best for you.

-Kevin

August 1, 2013

The "Unified Field Theory" of Storage

This guest blog was contributed by William Rocca of OS NEXUS. OS NEXUS makes the QuantaStor Software Defined Storage platform, designed to tackle the storage challenges facing cloud computing, Big Data, and high-performance applications.

Over the last decade, the creation and popularization of SAN/NAS systems simplified the management of storage into a single appliance so businesses could efficiently share, secure and manage data centrally. Fast forward about 10 years in storage innovation, and we're now rapidly changing from a world of proprietary hardware sold by big-iron vendors to open-source, scale-out storage technologies from software-only vendors that make use of commodity off-the-shelf hardware. Some of the new technologies are derivatives of traditional SAN/NAS with better scalability while others are completely new. Object storage technologies such as OpenStack SWIFT have created a foundation for whole new types of applications, and big data technologies like MongoDB, Riak and Hadoop go even further to blur the lines between storage and compute. These innovations provide a means for developing next-generation applications that can collect and analyze mountains of data. This is the exciting frontier of open storage today.

This frontier looks a lot like the "Wild West." With ad-hoc solutions that have great utility but are complex to setup and maintain, many users are effectively solving one-off problems, but these solutions are often narrowly defined and specifically designed for a particular application. The question everyone starts asking is, "Can't we just evolve to having one protocol ... one technology that unites them all?"

If each of these data-storage technologies has unique advantages for specific use cases or applications, the answer isn't to eliminate protocols. To borrow a well-known concept from physics, the solution lies in a "Unified Field Theory of Storage": weaving them together into a cohesive software platform that makes them simple to deploy, maintain, and operate.

When you look at the latest generation of storage technologies, you'll notice a common thread: They're all highly available, scale-out, and open source, and they serve as a platform for next-generation applications. While SAN/NAS storage is still the bread-and-butter enterprise storage platform today (and will be for some time to come), these older protocols often don't measure up to the needs of applications being developed today. They run into problems storing, processing, and gleaning value out of the mountains of data we're all producing.

Thinking about these challenges, how do we make these next-generation open storage technologies easy to manage and turn-key to deploy? What kind of platform could bring them all together? In short, "What does the 'Unified Field Theory of Storage' look like?"

These are the questions we've been trying to answer for the last few years at OS NEXUS, and the result of our efforts is the QuantaStor Software Defined Storage platform. In its first versions, we focused on building a flexible foundation supporting the traditional SAN/NAS protocols, but with the launch of QuantaStor v3 this year, we introduced the first scale-out version of QuantaStor and integrated the first next-gen open storage technology, Gluster, into the platform. In June, we launched support for ZFS on Linux (ZoL) and enhanced the platform with a number of advanced enterprise features, such as snapshots, compression, deduplication, and end-to-end checksums.

This is just the start, though. In our quest to solve the "Unified Field Theory of Storage," we're turning our eyes to integrating platforms like OpenStack SWIFT and Hadoop in QuantaStor v4 later this year, and as these high-power technologies are streamlined under a single platform, end users will have the ability to select the type(s) of storage that best fit a given application without having to learn (or unlearn) specific technologies.

The "Unified Field Theory of Storage" is emerging, and we hope to make it downloadable. Visit OSNEXUS.com to keep an eye on our progress. If you want to incorporate QuantaStor into your environment, check out SoftLayer's preconfigured QuantaStor Mass Storage Server solution.

-William Rocca, OS NEXUS

October 18, 2011

Adding 'Moore' Storage Solutions

In 1965, Intel co-founder Gordon Moore observed an interesting trend: "The complexity for minimum component costs has increased at a rate of roughly a factor of two per year ... Certainly over the short term this rate can be expected to continue, if not to increase."

Moore was initially noting the number of transistors that could be placed on an integrated circuit at a relatively constant minimal cost. Because that measure has proven so representative of the progress of our technological manufacturing abilities, "Moore's Law" has become a cornerstone in discussions of pricing, capacity, and speed of almost anything in the computer realm. You've probably heard the law used generically to refer to the constant improvement in technology: In two years, you can purchase twice as much capacity, speed, bandwidth, or any other easily measurable and relevant technology metric for the price you would pay today at current levels of production.

Think back to your first computer. How much storage capacity did it have? You were excited to be counting in bytes and kilobytes ... "Look at all this space!" A few years later, you heard about people at NASA using "gigabytes" of space, and you were dumbfounded. Fast-forward a few more years, and you wonder how long your 32GB flash drive will last before you need to upgrade the capacity.

[Image: 32GB thumb drive]

As manufacturers have found ways to build bigger and faster drives, users have found ways to fill them up. As a result of this behavior, we generally go from "being able to use" a certain capacity to "needing to use" that capacity. From a hosting provider perspective, we've seen the same trend from our customers ... We'll introduce new high-capacity hard drives, and within weeks, we're getting calls about when we can double it. That's why we're always on the lookout for opportunities to incorporate product offerings that meet and (at least temporarily) exceed our customers' needs.

Today, we announced QuantaStor Storage Servers, dedicated mass storage appliances with exceptional cost-effectiveness, control, and scalability. Built on SoftLayer Mass Storage dedicated servers with the OS NEXUS QuantaStor Storage Appliance OS, the solution supports up to 48TB of data with the perfect combination of performance, economics, scalability, and manageability. To give you a frame of reference, this is 48TB worth of hard drives:

[Image: 48TB worth of hard drives]

If you've been looking for a fantastic, high-capacity storage solution, you should give our QuantaStor offering a spin. The SAN (iSCSI) + NAS (NFS) storage system delivers advanced storage features including thin provisioning and remote replication. These capabilities make it ideally suited for a broad set of applications, including VM deployments, virtual desktops, and web and application servers. From what I've seen, it's at the top of the game right now, and it looks like a perfect option for long-term reliability and scalability.

-@nday91

May 15, 2009

Disaster Recovery Plan

A few days ago, I was reading a news story about a man who had just lost everything in a fire. One of the comments he made was that he had never thought to plan for something like this; it was the type of thing that happened to other people, never to him. I started thinking about how true that statement is. Many people just never think it will happen to them.

This type of situation happens every day in the IT field. Some sort of disaster causes a server to crash or simply stop working altogether; the drives on the server are completely corrupted and the data is just gone. The question is: when this happens to you, will you be prepared? Thankfully, there are steps each person can take to limit the pain and downtime a situation like this can cause. Like any other disaster recovery plan, the more you are willing to put into it, the more protection you will have when disaster strikes.

This is where SoftLayer comes in. Here at SoftLayer, we understand the importance of providing our customers the means to create a good disaster recovery plan that meets their needs. We understand that a detailed disaster recovery plan will include things such as backups and replication. Services such as NAS and EVault are perfect solutions for performing and managing the backups for your server. For replication, we offer services such as iSCSI replication, RAID, and local and global load balancing, which give our customers the tools to replicate not only their data across multiple locations but their servers as well. Above all, we provide our private network to securely transfer this data to those locations without impacting the traffic on your public network.

We can only hope that on the day disaster strikes, everyone has some plan in place to deal with it. There is nothing more frustrating in this industry than the loss of crucial data that, in many instances, cannot be recovered.

October 31, 2007

Backups

"ah - I don't need backups."
"Too busy to do backups - I'll get to that later."
"Backups? It costs too much."
"I don't need backups - MTBF of a Raptor is 1.2 Million hours."
"Oops - I forgot about doing backups."

Backups are one of the most commonly forgotten tasks of a system administrator. In some cases, they are never implemented. In others, they are implemented but not maintained. And in still others, they are implemented with a great backup and recovery plan, but the system usage or requirements change and the backups are never altered to compensate.

A hard drive really is a fairly reliable piece of IT equipment. The WD 150GB Raptor has a rating of 1.2 million hours MTBF. With that kind of mean time between failures, you would think that you would never have to worry about a hard drive failing. How willing are you to take that chance? What if you double your odds by setting up two drives in a RAID 1 configuration? Now can you afford to take that chance? How willing are you to gamble with your data?
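
Here's a back-of-the-envelope model (my own illustration, with simplifying assumptions) of what mirroring buys you, treating drive failures as independent with a constant failure rate:

```python
# Rough 1-year failure odds for one drive vs. a RAID 1 mirror, assuming
# independent failures and a constant failure rate (exponential model).
import math

MTBF_HOURS = 1.2e6      # the Raptor's rated MTBF
HOURS_PER_YEAR = 8760.0

# Probability a single drive fails within one year.
p_single = 1 - math.exp(-HOURS_PER_YEAR / MTBF_HOURS)

# Naive RAID 1 data loss: both drives fail in the same year. This ignores
# rebuild windows and correlated failures, so it flatters the mirror.
p_mirror = p_single ** 2

print(f"single drive, 1-year failure probability: {p_single:.4%}")   # ~0.73%
print(f"RAID 1 mirror, naive 1-year loss probability: {p_mirror:.6%}")
```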

What if one of your system administrators accidentally deletes the wrong file? Maybe it's your apache config file. Maybe it's a piece of code you have been working on all day. Or, maybe your server gets compromised and you now have unknown trojans and back doors on your server. Now what do you do?

Working in a datacenter with thousands of servers, you see thousands and thousands of hard drives. When you see that many hard drives in production, you are naturally going to see some of them fail. I have seen small drives fail, large drives fail, and I have even seen RAID 1 mirrors completely fail beyond recovery. Is it bad hardware? Nope. Is it Murphy's Law? Nope. It's the laws of physics. Moving parts create heat and friction. Heat and friction cause failures. No piece of IT equipment is immune to failure.

That 1.2 million hours MTBF looks pretty impressive. For a round number, let's say there are 15,000 drives in the SL datacenter. 1,200,000 hours / 15,000 drives = 80 hours. That means that, on average, one hard drive in the SL datacenter could fail every 80 hours. Now how impressive is that number?
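
As a quick sanity check on that arithmetic (my own restatement, assuming failures are independent so the fleet's failure rate scales with drive count):

```python
# Fleet-level failure interval: per-drive MTBF divided by drive count.
MTBF_HOURS = 1.2e6   # per-drive MTBF
DRIVES = 15_000      # round number for the SL datacenter

hours_per_failure = MTBF_HOURS / DRIVES
print(f"{hours_per_failure:.0f} hours between failures")            # 80 hours
print(f"about {7 * 24 / hours_per_failure:.1f} failures per week")  # ~2.1
```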

Ultimately, regardless of the levels of redundancy you implement, there is always a chance of a failure - hardware or human - that results in data loss. The question is - how important is that data to you? In the event of a catastrophic failure, are you willing to just perform an OS reload and start from scratch? Or, if a file is deleted and unrecoverable, are you willing to start over on your project? And lastly, how much downtime can you afford to endure?

Regardless of how much redundancy you can build into your infrastructure with the likes of load balancers, RAID arrays, active/passive servers, hot spares, etc., you should always have a good plan for doing backups as well as for checking and maintaining those backups.

Have you checked your backups lately?

-SamF
